\section{Introduction}
In a generic quantum many-body system, a pure state is thermalized via long-time evolution, i.e., its expectation values of local observables are very close to those given by a statistical (micro-) canonical ensemble~\cite{rigol_thermalization_2008,nandkishore_many_2015,abanin_colloquium_2019}.
The entanglement entropy of such a thermal pure state obeys volume-law scaling, corresponding to the fact that the entropy of a thermal density matrix is extensive~\cite{eisert_textitcolloquium_2010,nakagawa_universality_2018,garrison_does_2018}.
Recent advances in understanding and controlling coherent quantum many-body dynamics have revealed a few exceptional systems which do not show such thermalization.
First, in integrable systems, such as the Lieb-Liniger model and the one-dimensional (1D) Ising model with a transverse field, many integrals of motion prevent a pure state from relaxation towards a thermal state~\cite{kinoshita_quantum_2006,rigol_relaxation_2007,rigol_thermalization_2008,vidmar_generalized_2016}.
Second, in many-body localized (MBL) systems, disordered potentials forbid ballistic propagation of quantum information such that the entanglement entropy grows only logarithmically with time~\cite{znidaric_many-body_2008,bardarson_unbounded_2012,nandkishore_many_2015,abanin_colloquium_2019}.
Recent theoretical studies of quantum circuit models have proposed another class of exceptional systems~\cite{li_quantum_2018,chan_unitary-projective_2019,skinner_measurement-induced_2019,szyniszewski_entanglement_2019,jian_measurement-induced_2020,gullans_dynamical_2019,li_measurement-driven_2019,choi_quantum_2020,bao_theory_2020,tang_measurement-induced_2020}.
In these studies, random unitary dynamics with probabilistic measurements have been investigated.
It has been shown that when the probability of measurements increases, the scaling law of the entanglement entropy exhibits a transition from a volume law to an area law at a certain critical point.
Since volume-law scaling is a necessary condition for a pure state to be thermal, the emergence of area-law scaling means that frequent measurements prevent a state from thermalizing even after long-time evolution.
Despite the intensive interest in this measurement-induced transition (MIT), its experimental observation is still lacking.
In order to observe the MIT, one needs an experimental system with long coherence time and high controllability of measurements.
Ultracold gases have served as an ideal platform for analyzing long-time coherent dynamics of many-body systems thanks to their long thermalization time and isolation from the environment.
Indeed, coherent quantum dynamics of integrable systems~\cite{kinoshita_quantum_2006} and MBL systems~\cite{schreiber_observation_2015} has been observed in this platform for the first time.
Recent experiments have successfully introduced controllable dissipation to ultracold-gas systems to create and manipulate quantum many-body states~\cite{barontini_controlling_2013,patil_measurement-induced_2015,luschen_signatures_2017,tomita_observation_2017,bouganne_anomalous_2020}.
Since the introduced dissipation corresponds to a continuous quantum measurement, which can be interpreted as probabilistic measurements in the quantum trajectory representation of open quantum systems, we expect that it may be utilized for causing the MIT.
In this paper, we propose a specific protocol to realize the MIT with use of ultracold gases in optical lattices.
By means of the quantum trajectory method implemented with matrix product states (MPS)~\cite{dum_monte_1992,daley_quantum_2014,schollwock_density-matrix_2011}, we analyze the 1D Bose-Hubbard model with two-body losses, which can be widely controlled in experiment by the strength of a photoassociation (PA) laser~\cite{tomita_observation_2017}.
We find that this system exhibits a MIT from volume-law scaling to area-law scaling with a logarithmic correction (ALSLC) when the strength of the two-body losses increases in a weakly dissipative regime.
Moreover, we find another MIT in a strongly dissipative regime.
The latter transition can be attributed to a continuous quantum Zeno effect (QZE) and has not been reported in previous literature studying quantum circuit models.
We show that the experimentally accessible momentum distribution reflects the changes of the scaling laws.
We also analyze dynamics after release of the particles to an empty space in order to show that the states with ALSLC can be distinguished from the volume-law states by observing the strong suppression of particle transport, which can be recognized as the tendency towards the ergodicity breaking.
The rest of the paper is organized as follows.
In Sec.~\ref{sec:method}, we define the master equation describing ultracold bosons with a PA laser in a 1D optical lattice and introduce the quantum trajectory method for analyzing the master equation. We also define the ``entanglement entropy'' used in this study. In Sec.~\ref{sec:transition}, we show that there exist two MITs in this system and that they form a reentrant structure. In Sec.~\ref{sec:exp}, we discuss how to detect the MITs in ultracold-gas experiments. In Sec.~\ref{sec:summary}, we summarize the results.
\section{Model and methods\label{sec:method}}
We consider ultracold bosons confined in an optical lattice.
We assume that the lattice potential in the transverse (yz) directions is so deep that the hopping in these directions is forbidden, i.e., the system is 1D.
We also assume that the lattice potential in the longitudinal (x) direction is deep enough for the tight-binding approximation to be valid.
The two-body losses can be introduced by exposing the system to a PA laser~\cite{tomita_observation_2017}, which couples a local two-atom state to a molecular state with a very short lifetime.
In this system, the time-evolution of a density matrix \(\hat{\rho}(t)\) can be effectively described by the master equation in Lindblad form~\cite{tomita_observation_2017,gorini_completely_1976,lindblad_generators_1976}
\begin{align}
\frac{\mathrm{d}}{\mathrm{d} t} \hat{\rho}(t) &= -\frac{\mathrm{i}}{\hbar}[\hat{H}, \hat{\rho}(t)] + \hat{L}[\hat{\rho}(t)]
\label{eq:master_eq}
\end{align}
with the 1D Bose-Hubbard Hamiltonian
\begin{align}
\label{eq:BoseHubbard}
\hat{H} = -J\sum^{M-1}_{i=1} (\hat{b}^\dagger_i \hat{b}_{i+1} + \mathrm{H.c.}) + \frac{U}{2} \sum^M_{i=1} \hat{n}_i(\hat{n}_i - 1),
\end{align}
and the Lindblad superoperator for two-body atom losses
\begin{align}
\label{eq:Lindblad}
\hat{L}[\hat{\rho}] = - \frac{\gamma}{2} \sum_i (\hat{b}^\dagger_i \hat{b}^\dagger_i \hat{b}_i \hat{b}_i \hat{\rho} + \hat{\rho} \hat{b}^\dagger_i \hat{b}^\dagger_i \hat{b}_i \hat{b}_i - 2\hat{b}_i\hat{b}_i\hat{\rho}\hat{b}^\dagger_i\hat{b}^\dagger_i).
\end{align}
Here, \(J\) is the hopping amplitude, \(M\) is the number of lattice sites, \(\hat{b}^\dagger_i\) (\(\hat{b}_i\)) creates (annihilates) a boson at site \(i\), \(U\) is the on-site Hubbard interaction, \(\hat{n}_i = \hat{b}^\dagger_i \hat{b}_i\), and \(\gamma \) is the strength of the two-body inelastic collision which can be controlled by the intensity of the PA laser.
We denote the number of remaining particles in the system as \(N\), i.e., \(N = \sum_i \braket{\hat{n}_i}\).
At initial time \(t=0\), we assume that the system is a Mott insulating state at unit filling, i.e., \(\ket{\psi_0} = \prod_i \hat{b}^\dagger_i\ket{0}\) and thus \(\hat{\rho}(0) = \ket{\psi_0}\bra{\psi_0}\), where \(\ket{0}\) denotes the vacuum state.
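To make the model concrete, the following minimal sketch (Python/NumPy, for a hypothetical two-site chain with local occupations truncated at \(n_{\max}=2\); all parameter values are illustrative) constructs the Hamiltonian~\eqref{eq:BoseHubbard} and the right-hand side of the master equation~\eqref{eq:master_eq} with the dissipator~\eqref{eq:Lindblad}. One can check that the generator preserves the trace and hermiticity of \(\hat{\rho}\).

```python
import numpy as np
from functools import reduce

n_max, M, J, U, gamma, hbar = 2, 2, 1.0, 5.0, 0.5, 1.0

# truncated single-site annihilation operator and identity
b1 = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)
eye = np.eye(n_max + 1)

def site_op(op, i):
    """Embed a single-site operator at site i via Kronecker products."""
    ops = [eye] * M
    ops[i] = op
    return reduce(np.kron, ops)

b = [site_op(b1, i) for i in range(M)]
n = [bi.conj().T @ bi for bi in b]

# Bose-Hubbard Hamiltonian: nearest-neighbor hopping + on-site interaction
H = sum(-J * (b[i].conj().T @ b[i + 1] + b[i + 1].conj().T @ b[i])
        for i in range(M - 1))
H = H + sum(U / 2 * (ni @ ni - ni) for ni in n)

def lindblad_rhs(rho):
    """Right-hand side of the master equation with two-body losses."""
    drho = -1j / hbar * (H @ rho - rho @ H)
    for bi in b:
        c = bi @ bi                    # jump operator b_i b_i
        cdc = c.conj().T @ c
        drho = drho - gamma / 2 * (cdc @ rho + rho @ cdc
                                   - 2 * c @ rho @ c.conj().T)
    return drho
```

Applying `lindblad_rhs` to the Mott-insulator density matrix verifies that \(\mathrm{Tr}\,\mathrm{d}\hat{\rho}/\mathrm{d}t = 0\), as it must for any Lindblad-form generator.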
Solving the master equation~\eqref{eq:master_eq} requires a very high numerical cost in general because the number of coefficients in the density matrix is the square of the dimension of the Hilbert space.
To circumvent this difficulty, we use quantum trajectory techniques which treat pure states in the density matrix~\cite{daley_quantum_2014,dum_monte_1992} instead of treating the density matrix directly.
Following the quantum trajectory techniques, we calculate the time-evolved state
\begin{align}
\ket{\psi(t)} = \mathrm{e}^{-\mathrm{i}\frac{t}{\hbar}\hat{H}_\mathrm{eff}}\ket{\psi_0}
\end{align}
with the effective non-hermitian Hamiltonian
\begin{align}
\hat{H}_\mathrm{eff} = \hat{H} - \mathrm{i}\frac{\hbar\gamma}{2} \sum_i \hat{b}^\dagger_i \hat{b}^\dagger_i \hat{b}_i \hat{b}_i.
\end{align}
As time \(t\) increases, the norm of the time-evolved state \(\ket{\psi(t)}\) decreases because of the non-hermitian part of the effective Hamiltonian, \(\hat{H}_\mathrm{nh} = -\mathrm{i}\frac{\hbar\gamma}{2} \sum_i \hat{b}^\dagger_i \hat{b}^\dagger_i \hat{b}_i \hat{b}_i\).
When the squared norm of the time-evolved state becomes lower than a random number generated from the uniform distribution \((0, 1)\), we calculate the probabilities \(p_i \propto \braket{\psi(t)|\hat{b}^\dagger_i\hat{b}^\dagger_i\hat{b}_i\hat{b}_i|\psi(t)}\) and choose one site index \(j\) according to this probability distribution.
Then, the jump operator \(\hat{b}_j \hat{b}_j\) is applied to \(\ket{\psi(t)}\) and the state is normalized.
This stochastic process emulates the open dynamics described by the master equation in Lindblad form, and the expectation values are obtained by the sample average
\begin{align}
\begin{aligned}
\braket{\hat{O}(t)} &= \mathrm{Tr}[\hat{O}\hat{\rho}(t)] \\
&\simeq \frac{1}{K}\sum^K_{l=1}\frac{\braket{\psi_l(t)|\hat{O}|\psi_l(t)}}{\braket{\psi_l(t)|\psi_l(t)}},
\end{aligned}
\end{align}
where \(\ket{\psi_l(t)}\) is the \(l\)-th sample of the stochastic process and \(K\) is the number of samples.
Notice that the application of the jump operator and the subsequent normalization correspond to a quantum measurement.
In the sense that a series of the measurement events stemming from the dissipation occur probabilistically according to the random number and the probability distribution \(p_i\), the dissipation can be interpreted as probabilistic measurements.
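A minimal self-contained sketch of this stochastic unfolding (Python/NumPy/SciPy; exact time evolution of a hypothetical two-site chain is used here in place of MPS, and all parameter values are illustrative) may look as follows.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

rng = np.random.default_rng(0)
n_max, M, J, U, gamma, hbar = 2, 2, 1.0, 5.0, 5.0, 1.0

# truncated bosonic operators on a two-site chain
b1 = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)
eye = np.eye(n_max + 1)
def site_op(op, i):
    ops = [eye] * M
    ops[i] = op
    return reduce(np.kron, ops)

b = [site_op(b1, i) for i in range(M)]
n = [bi.conj().T @ bi for bi in b]
H = -J * (b[0].conj().T @ b[1] + b[1].conj().T @ b[0])
H = H + sum(U / 2 * (ni @ ni - ni) for ni in n)
# effective non-hermitian Hamiltonian with the two-body-loss term
H_eff = H - 1j * hbar * gamma / 2 * sum(
    bi.conj().T @ bi.conj().T @ bi @ bi for bi in b)

# initial Mott state |1,1> and the short-time propagator
psi = np.zeros((n_max + 1) ** M, complex)
psi[(n_max + 1) + 1] = 1.0
dt = 0.05
U_dt = expm(-1j * dt / hbar * H_eff)

threshold = rng.uniform()              # uniform random number in (0,1)
for _ in range(400):
    psi = U_dt @ psi                   # non-unitary evolution shrinks the norm
    if np.vdot(psi, psi).real < threshold:
        # a quantum jump occurs: pick site j with probability p_j
        p = np.array([np.vdot(psi, bi.conj().T @ bi.conj().T @ bi @ bi @ psi).real
                      for bi in b])
        j = rng.choice(M, p=p / p.sum())
        psi = b[j] @ b[j] @ psi        # apply the jump operator b_j b_j
        psi = psi / np.linalg.norm(psi)
        threshold = rng.uniform()      # redraw the threshold for the next jump
```

Averaging observables over many such trajectories (different random seeds) reproduces the Lindblad dynamics, as in the sample average given below.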
For numerically efficient calculations in 1D, we represent a state \(\ket{\psi(t)}\) with MPS and perform the time evolution by means of the time-evolving block decimation algorithm~\cite{vidal_efficient_2003,vidal_efficient_2004,white_real-time_2004,daley_time-dependent_2004} using the optimized Forest-Ruth-like decomposition~\cite{omelyan_optimized_2002}.
The truncation error is set to be less than \(10^{-8}\), and the time step \(\Delta t\) is adaptively changed after each jump operation as \(\Delta t = \min \{-\log(0.9)\hbar/\braket{\psi(t)|\mathrm{i}\hat{H}_\mathrm{nh}|\psi(t)}, \Delta t_{\max}\} \) in order to avoid a rapid decrease in the norm of wavefunction.
Here, \(\Delta t_{\max}\) is the upper bound of the time step that we set to be \(0.05 \hbar/J\) (\(0.02 \hbar/J\)) for small to intermediate \(\hbar \gamma /J\) (large \(\hbar \gamma / J \geq 100 \)).
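The adaptive step described above could be implemented along the following lines (a sketch; the function name and interface are hypothetical), assuming \(\braket{\mathrm{i}\hat{H}_\mathrm{nh}} = (\hbar\gamma/2)\sum_i\braket{\hat{n}_i(\hat{n}_i-1)}\) has already been evaluated on the current normalized state.

```python
import numpy as np

def adaptive_dt(double_occ, gamma, hbar=1.0, dt_max=0.05):
    """Adaptive step dt = min{-log(0.9) * hbar / <i H_nh>, dt_max}, where
    <i H_nh> = (hbar * gamma / 2) * sum_i <n_i (n_i - 1)> is the expectation
    value `double_occ` scaled by the loss rate."""
    rate = hbar * gamma / 2 * double_occ
    if rate <= 0.0:
        return dt_max        # no double occupancy: norm does not decay
    return min(-np.log(0.9) * hbar / rate, dt_max)
```

This choice caps the norm loss per step at roughly ten percent, which is the purpose of the formula in the text.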
We should be careful about what we call ``entanglement entropy'' in this study, because the ordinary entanglement entropy is defined only for a pure state \(\ket{\phi}\) on a system biparted into subsystems \(A\) and \(B\) as
\begin{align}
S_A(t) = -\mathrm{Tr}\hat{\rho}_A(t) \ln \hat{\rho}_A(t),
\end{align}
where \(\hat{\rho}_A\) is a reduced density matrix defined as
\begin{align}
\hat{\rho}_A(t) = \mathrm{Tr}_B\ket{\phi(t)}\bra{\phi(t)}.
\end{align}
Here, \(\mathrm{Tr}_B\) means a partial trace over the subsystem \(B\).
In this study, as well as other studies investigating the MIT, the statistical average of the entanglement entropy of \(\ket{\psi(t)}/\sqrt{\braket{\psi(t)|\psi(t)}}\) is called ``entanglement entropy'' and the size dependence of the ``entanglement entropy'' is discussed.
In other words, what we discuss is typical behaviors of the entanglement entropy of relevant states in a density matrix \(\hat{\rho}(t)\).
An equal bipartition does not always give the maximal entanglement entropy in the presence of the two-body loss.
Therefore, we define the average of the maximal bipartite entanglement entropy
\begin{align}
S_{\max}(t) = \Braket{\max_A S_A(t)},
\end{align}
where \(\max_A\) denotes the maximization over all bipartitions, i.e., the subsystem \(A\) is chosen such that the entanglement entropy is maximal.
In this study, we discuss the scaling law of the ``entanglement entropy'' based on \(S_{\max}(t)\).
Hereafter, we refer to \(S_{\max}(t)\) simply as the entanglement entropy.
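For a state vector on a small chain, the quantity \(\max_A S_A\) over all contiguous cuts can be evaluated directly via a singular value decomposition; the following sketch (Python/NumPy, with hypothetical helper names) illustrates the definition.

```python
import numpy as np

def entanglement_entropy(psi, d, M, cut):
    """Von Neumann entropy S_A of the leftmost `cut` sites of a normalized
    pure state on M sites with local Hilbert-space dimension d."""
    A = psi.reshape(d ** cut, d ** (M - cut))
    s = np.linalg.svd(A, compute_uv=False)
    p = s ** 2                            # Schmidt weights
    p = p[p > 1e-12]                      # drop numerically zero weights
    return float(-(p * np.log(p)).sum())

def max_bipartite_entropy(psi, d, M):
    """max_A S_A over all contiguous left/right bipartitions."""
    return max(entanglement_entropy(psi, d, M, cut) for cut in range(1, M))
```

As a sanity check, a product state gives \(S_{\max} = 0\), while a two-site Bell-like state gives \(\ln 2\).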
It is worth noting that the MIT in the Bose-Hubbard model~\eqref{eq:BoseHubbard} with local projective measurements has been studied in Ref.~\cite{tang_measurement-induced_2020}.
In contrast to the previous study, here we incorporate the specific form of controllable dissipation that has been experimentally realized and show an observable suited for characterizing the transitions.
\section{Measurement-induced transitions in the dissipative Bose-Hubbard models\label{sec:transition}}
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{fig1.pdf}
\caption{Time evolution of the entanglement entropy for (a) \(\hbar \gamma / J =0.5\) and (b) \(\hbar \gamma / J = 5.0\) for several system sizes at \(U/J = 5.0\). Error bars indicate \(1\sigma \) uncertainty.~\label{fig:time-evol}}
\end{figure}
Figure~\ref{fig:time-evol} shows the time-evolution of the entanglement entropy for different values of \(\hbar \gamma / J\) and \(M\) at \(U/J = 5.0\).
By comparing the case of \((\hbar \gamma / J, M) = (0.5, 24)\) with that of \((\hbar \gamma / J, M) = (5.0, 24)\), we see that the dissipation suppresses the growth of the entanglement entropy.
Thanks to this suppression, when \(\hbar \gamma / J = 5.0\), we can compute long-time dynamics of a relatively large system, say \(M=256\).
The general tendency of the entanglement entropy in the presence of the two-body losses is that it rapidly grows in a short time regime and gradually decreases due to the two-body losses after taking a maximal value.
We show below that the maximal entanglement entropy during the time evolution at \(\hbar \gamma / J = 5.0\) obeys ALSLC.
In Fig.~\ref{fig:time-evol}(b), we see that a steady-value region, where \(S_{\max}(t)\) takes almost the same value as the maximal value, develops when the system size increases (see, e.g., the region \(15 \lesssim tJ/\hbar \lesssim 30\) in the case of \((\hbar \gamma / J, M) = (5.0, 128)\)).
The presence of the steady-value region allows us to identify the states with ALSLC analyzed in the present work as those in the realm of the MIT~\cite{li_quantum_2018,chan_unitary-projective_2019,skinner_measurement-induced_2019}.
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{fig2.pdf}
\caption{System size \(M\) dependence of the maximal entanglement entropy \(\max_t S_{\max}(t)\) (a) in the linear scale for \(M\) and (b) in the logarithmic scale for \(M\) at \(U/J = 5.0\). The blue solid, orange dashed, green dashed-dotted, and red dotted lines correspond to \(\hbar \gamma / J = 0.5\), \(5.0\), \(50.0\), and \(500.0\), respectively. Error bars indicate \(1\sigma \) uncertainty.~\label{fig:Size_dep}}
\end{figure}
Figure~\ref{fig:Size_dep} shows the maximal values of the entanglement entropy, \(\max_t S_{\max}(t)\), during the time evolution as a function of the system size \(M\) for \(\hbar \gamma / J = 0.5\), \(5.0\), \(50.0\), and \(500.0\) at \(U/J = 5.0\).
When the dissipation is as small as \(\hbar \gamma / J = 0.5\) or is as large as \(\hbar \gamma / J = 500.0\), the entanglement entropy grows linearly with \(M\) within the system size that we can numerically compute (\(M \simeq 24\)), i.e., it follows the volume-law scaling.
On the contrary, in an intermediate dissipation regime, including \(\hbar \gamma / J = 5.0\) and \(50.0\), the entanglement entropy grows logarithmically with \(M\), i.e., it follows ALSLC.
We refer to this scaling with the logarithmic correction as an area law in the sense that the correction grows more slowly with the system size than any algebraic function, i.e., it is not extensive.
This observation means that as the strength of the dissipation increases, the system exhibits a transition from a volume-law state to an ALSLC state at a relatively small value of the dissipation and another transition back to a volume-law state at a relatively large value.
In short, the MIT in the present system has a reentrant structure.
The presence of the volume-law state at the small dissipation, \(\hbar \gamma / J =0.5\), implies that there is a finite critical value \({(\hbar \gamma/J)}_\mathrm{c}\) for the MIT, as in the case of random unitary circuits.
On the other hand, that at the large dissipation, \(\hbar \gamma / J = 500.0\), can be interpreted as a consequence of the continuous QZE.\@
More specifically, the strong two-body losses suppress double occupation at each site such that the particles in the system behave as hardcore bosons~\cite{garcia-ripoll_dissipation-induced_2009}.
Hence, after several loss events in the early-time regime, which create a considerable number of holes, the measurement events rarely happen, so that the holes spread ballistically and lead to the volume-law entanglement.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig3.pdf}
\caption{Time evolutions of the number of remaining particles per site for several system sizes with \(\hbar \gamma / J = 0.5\) and \(\hbar \gamma / J = 5.0\). The Hubbard interaction \(U/J\) is set to 5.0. Linear fits are obtained from the data for \(t > 10 \hbar / J\) (\(t > 5.0 \hbar / J\)) in the largest \(M\) for \(\hbar \gamma / J = 0.5\) (\(\hbar \gamma / J = 5.0\)) case. Error bars indicate 1\(\sigma \) uncertainty.~\label{fig:Nt}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig4.pdf}
\caption{The system size \(M\) dependencies of the number of remaining particles \(N\) when the entanglement entropy takes the maximal value for (a) \(\hbar \gamma / J = 0.5\) and (b) \(\hbar \gamma / J = 5.0\). Error bars indicate 1\(\sigma \) uncertainty.~\label{fig:NvsM}}
\end{figure}
Since the number of remaining particles \(N\) continues to decrease in the Lindblad dynamics, one might suspect that the transitions of the scaling law are results of the decrease of particles.
Figure~\ref{fig:Nt} represents the time evolution of the average density \(N/M\) in the dynamics shown in Fig.~\ref{fig:time-evol}.
For both \(\hbar \gamma / J = 0.5\) and \(5.0\), the density decreases algebraically in the long-time dynamics, and its dependence on the system size is almost absent.
The exponents of the algebraic decreases estimated from the linear fits are \(-0.65\) for \(\hbar \gamma / J = 0.5\) and \(-0.66\) for \(\hbar \gamma / J = 5.0\).
These exponents are almost the same before and after the scaling law transition.
Furthermore, as shown in Fig.~\ref{fig:NvsM}, the number of remaining particles when the entanglement entropy takes the maximal value increases almost linearly as the system size increases for both \(\hbar \gamma / J = 0.5\) and \(5.0\) cases.
Therefore, the scaling of \(N\) is not so different before and after the transitions of the scaling laws, and thus the transitions cannot be understood as the result of the decrease of particles.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig5.pdf}
\caption{System size \(M\) dependencies of the maximal entanglement entropy \(\max_t S_{\max}(t)\) (a) with \(U/J = 1.0\) and (b) with \(U/J = 10.0\) for several strengths of the dissipation \(\gamma \). Error bars indicate 1\(\sigma \) uncertainty.\label{fig:Udep}}
\end{figure}
The reentrant structure we found is present in a broad range of \(U/J\).
Figure~\ref{fig:Udep} represents the system size dependencies of the maximal value of the entanglement entropy for \(U/J = 1.0\) and \(U/J = 10.0\) cases.
One can see the reentrant structure for both cases.
For the \(U/J = 1.0\) case, even with a small dissipation \(\hbar \gamma / J = 0.5\), the scaling law of the entanglement entropy is ALSLC in contrast to the \(U/J = 5.0\) case.
This can be attributed to the fact that the double occupancy rate increases compared to the \(U/J = 5.0\) case and thus the probability of measurement is effectively increased.
\section{How to experimentally detect measurement-induced transitions\label{sec:exp}}
In closed systems, a kind of entanglement entropy, namely the second-order R\'{e}nyi entropy, has been observed in experiments with ultracold gases in optical lattices by preparing a copy of the target system and measuring the interference between the target and the copy~\cite{islam_measuring_2015,kaufman_quantum_2016}.
However, in open systems with dissipation, it is hard to use the same protocol because the copy cannot perfectly mimic measurement events which happen in a stochastic manner.
Hence, it is imperative to point out alternative experimental observables that can distinguish the ALSLC states from the volume-law states.
\subsection{Momentum distribution}
In this subsection, we show that the momentum distribution
\begin{align}
\braket{\hat{n}_k} = \frac{1}{M}\sum_{ij}\braket{\hat{b}^\dagger_i \hat{b}_j}\mathrm{e}^{\mathrm{i} k(i-j)},
\end{align}
which is a standard observable in ultracold-gas experiments, reflects the scaling law of the entanglement entropy.
Here, we set the lattice spacing to unity.
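Given the single-particle density matrix \(\braket{\hat{b}^\dagger_i \hat{b}_j}\), the momentum distribution above can be evaluated as in the following sketch (Python/NumPy; the function name is hypothetical, and periodic momenta \(k_m = 2\pi m/M\) are assumed).

```python
import numpy as np

def momentum_distribution(spdm):
    """n_k = (1/M) sum_{ij} <b_i^dag b_j> e^{ik(i-j)} with unit lattice
    spacing; spdm[i, j] = <b_i^dag b_j> and k_m = 2*pi*m/M."""
    M = spdm.shape[0]
    ks = 2.0 * np.pi * np.arange(M) / M
    sites = np.arange(M)
    phase = sites[:, None] - sites[None, :]      # i - j
    nk = np.array([np.sum(spdm * np.exp(1j * k * phase)).real
                   for k in ks]) / M
    return ks, nk
```

Note that \(\sum_k \braket{\hat{n}_k} = N\), so a completely uncorrelated unit-filling state (diagonal \(\braket{\hat{b}^\dagger_i \hat{b}_j}\)) yields a flat distribution.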
We set \(U/J\) to \(5.0\) in this subsection.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{fig6.pdf}
\caption{Normalized momentum distributions for several values of the dissipation strength \(\gamma \) at the time that gives \(\max_t S_{\max}(t)\). The blue solid, orange dashed, green dashed-dotted, and red dotted lines correspond to \(\hbar \gamma / J = 0.5\), \(5.0\), \(50.0\), and \(500.0\), respectively. The system size \(M\) is set to 20 in order to investigate a vast range of \(\gamma \). Error bars indicate \(1\sigma \) uncertainty.~\label{fig:Momentum}}
\end{figure}
Figure~\ref{fig:Momentum} shows the normalized momentum distributions for \(\hbar \gamma / J = 0.5\), \(5.0\), \(50.0\), and \(500.0\) at the time that gives \(\max_t S_{\max}(t)\) (see Sec.~\ref{sec:method} for the definition).
The system size is set to \(M = 20\) in order to compute states with the volume-law entanglement.
In each of the three different regions of the dissipation strength, \(\braket{\hat{n}_k}/N\) exhibits a distinct signal.
Here, \(N\) is the total number of remaining particles in the system.
In the case of the small dissipation, \(\hbar \gamma / J = 0.5\), there exists a single peak at \(k=0\).
In the intermediate region, including \(\hbar \gamma / J = 5.0\) and \(50.0\), dips develop at \(|k| = \pi/2\).
In the case of the strong dissipation, \(\hbar \gamma / J = 500.0\), the distribution is almost flat.
In order to characterize the signals more quantitatively, we show in Fig.~\ref{fig:Visibility} the visibility \(\braket{\hat{n}_\pi}/\braket{\hat{n}_{\pi/2}}\) as a function of \(\hbar \gamma / J\).
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{fig7.pdf}
\caption{Visibility \(\braket{\hat{n}_\pi} / \braket{\hat{n}_{\pi/2}}\) as a function of the dissipation strength \(\gamma \). Although it is in practice impossible to precisely determine the critical points with our matrix product states method, we have checked that the states in the region \(2 \leq \hbar \gamma / J \leq 50\) safely obey ALSLC. Error bars indicate \(1\sigma \) uncertainty.~\label{fig:Visibility}}
\end{figure}
Since the visibility becomes considerably large in the intermediate region, where the states with ALSLC emerge, it can be used for distinguishing the states with ALSLC from the volume-law states.
Notice that the visibility at \(M=20\) shown in Fig.~\ref{fig:Visibility} does not exhibit any singular behaviors across the transition points because the system size is too small.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig8.pdf}
\caption{(a) A possible loss process in the early time dynamics. Atoms with finite group velocity tend to be lost. (b) Momentum distributions before (blue solid line) and after (orange dashed line) a loss event in the early time dynamics. (c) Difference of momentum distributions before and after the loss event. To obtain the momentum distributions, we set \(M=64\).~\label{fig:Zeno}}
\end{figure}
The emergence of the dip structure in the intermediate region can be understood as a quantum Zeno effect in the momentum space.
At \(t=0\), there is no doubly-occupied site as depicted in Fig.~\ref{fig:Zeno} (a).
This means that in order for the loss events to happen, particles have to move with finite group velocity.
In other words, the loss event is more probable for faster particles.
Since the group velocity is largest at \(|k|=\pi/2\) in the single-particle band of the one-dimensional Bose-Hubbard model, which is \(-2J\cos k\), the particles with \(|k|=\pi/2\) are the most likely to be lost.
In Figs.~\ref{fig:Zeno} (b) and (c), we compare the momentum distribution right before and after a loss event during early-time dynamics and see that the momentum distribution of the lost two particles is indeed peaked at \(|k| = \pi/2\).
As a consequence of a series of such loss events, \(\braket{\hat{n}_k}/N\) forms the dips at \(|k| = \pi / 2\).
After the formation of the dip structure, the stronger dissipation for faster particles suppresses the redistribution of the particles towards states around \(|k| = \pi/2\).
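The kinematics behind this argument can be checked in a few lines (Python/NumPy; a sketch with \(J\) set to unity): the group velocity \(v(k) = \mathrm{d}\epsilon/\mathrm{d}k = 2J\sin k\) of the band \(\epsilon(k) = -2J\cos k\) is maximal in magnitude at \(|k| = \pi/2\).

```python
import numpy as np

J = 1.0
k = np.linspace(-np.pi, np.pi, 2001)
energy = -2.0 * J * np.cos(k)        # single-particle tight-binding band
velocity = 2.0 * J * np.sin(k)       # group velocity d(energy)/dk
k_fastest = k[np.argmax(np.abs(velocity))]
```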
\subsection{Strong suppression of particle transport}
The momentum distribution reflects the transition of the scaling law of the entanglement entropy, and is relatively easily accessible in ultracold-gas experiments.
However, the relation between the momentum distribution and the entanglement entropy seems unclear.
To resolve this difficulty, we propose a more direct signature of the entanglement transitions.
For this purpose, we borrow an idea from the experimental confirmations of the MBL states~\cite{schreiber_observation_2015,choi_exploring_2016}, which have utilized the breaking of ergodicity as an indicator of the area-law states.
In an area-law state, a part of a system does not possess an extensive entanglement entropy and thus cannot act as a thermal bath for the rest of the system~\cite{iyer_many-body_2013}.
The absence of the thermal bath results in the ergodicity breaking that is manifested, e.g., by the spatial imbalance of particles~\cite{schreiber_observation_2015,choi_exploring_2016}.
However, the true equilibrium state of this system is the vacuum state \(\ket{0}\) regardless of the scaling laws because the number of remaining particles continues to decrease.
Therefore, what one can observe is only dynamics towards a transient spatially imbalanced state.
We expect that particle transport reflects a tendency toward the spatially imbalanced non-ergodic state.
In order to confirm this expectation, we simulate the following dynamics: In a 2\(M\)-site system, we prepare an initial state \(\ket{\psi_0} = \prod^M_{i=1} \hat{b}^\dagger_i\ket{0}\) in the left half of the system and set a high barrier potential \(100J\sum^{2M}_{i=M+1} \hat{n}_i\) which prevents the particles from coming into the right half.
We set \(U/J\) to \(5.0\).
After \(t=t_\mathrm{rel}\), we turn off the barrier potential and release the particles to the right half of the system.
The release time \(t_\mathrm{rel}\) is chosen to be within the time region where \(S_{\max}(t)\) takes an almost steady value.
We characterize particle transport by comparing the number of particles in the right half, i.e., \(N_\mathrm{r} = \sum^{2M}_{i=M+1}\braket{\hat{n}_i}\) with the half of the number of remaining particles \(N/2\).
Both \(N_\mathrm{r}\) and \(N/2\) can be measured in experiments~\cite{choi_exploring_2016}.
\begin{figure}[htp]
\centering
\includegraphics[width=\linewidth]{fig9.pdf}
\caption{Time evolution of the number of particles in the right half of the system \(N_\mathrm{r}\) (blue solid line) and the half of the remaining number of particles \(N/2\) (orange dashed line) for (a) \((\hbar \gamma / J, 2M, t_\mathrm{rel} J / \hbar) = (5.0, 128, 10)\) and (b) \((\hbar \gamma / J, 2M, t_\mathrm{rel} J / \hbar) = (0.5, 40, 5)\).\label{fig:barrier_release}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig10.pdf}
\caption{Spatial distribution of particles at \(t = t_\mathrm{rel}+\hbar M / J\) in the dynamics of (a) Fig.~\ref{fig:barrier_release}(a) and (b) Fig.~\ref{fig:barrier_release}(b).\label{fig:spatial_dis}}
\end{figure}
Figure~\ref{fig:barrier_release}(a) represents the time evolution of \(N_\mathrm{r}\) and \(N/2\) for \((\hbar \gamma / J, 2M, t_\mathrm{rel}J/\hbar) = (5.0, 128, 10)\), where the state at \(t=t_\mathrm{rel}\) is an ALSLC state.
For the broad region \(60 \lesssim t J / \hbar \lesssim 100\), there is a visible difference between \(N/2\) and a converged \(N_\mathrm{r}\).
The difference means that the delocalization of the particles is suppressed due to the dissipation, thus signaling the tendency toward a state without ergodicity.
By contrast, for \((\hbar \gamma / J, 2M, t_\mathrm{rel}J/\hbar) = (0.5, 40, 5)\), where the state at \(t=t_\mathrm{rel}\) is a volume-law state, \(N_\mathrm{r}\) exceeds \(N/2\) before the convergence, as seen in Fig.~\ref{fig:barrier_release}(b).
This overshoot behavior implies ballistic transport of the particles, which is consistent with the fact that in a volume-law state quantum information ballistically propagates.
The difference of tendencies is also visible as the inhomogeneity of the spatial distribution of particles, left-leaning or right-leaning, as shown in Fig.~\ref{fig:spatial_dis}.
In another volume-law region, say \(\hbar \gamma / J = 500.0\), the spatial distribution of particles and the time evolution of \(N_\mathrm{r}\) and \(N/2\) shown in Fig.~\ref{fig:gamma_500} are similar to those of \(\hbar \gamma / J = 0.5\) in terms of the ballistic propagation and the right-leaning spatial distribution of particles.
Thus, the strong suppression of particle transport clearly distinguishes the scaling law of the entanglement entropy as expected.
For a fair comparison, we also present the dynamical behaviors of \(\hbar \gamma / J = 5.0\) case with system size \(2M=40\) in Fig.~\ref{fig:gamma_5}.
Even in the small system, the absence of the particle excess and the left-leaning spatial distribution of particles, which characterize the localization, are also visible.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig11.pdf}
\caption{(a) Time evolution of the number of particles in the right half of the system \(N_\mathrm{r}\) (blue solid line) and the half of the remaining number of particles \(N/2\) (orange dashed line) for \((\hbar \gamma / J, 2M, t_\mathrm{rel}J/\hbar) = (500.0, 40, 30.0)\). (b) Spatial distribution of particles at \(t = t_\mathrm{rel} + \hbar M / J\).\label{fig:gamma_500}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig12.pdf}
\caption{(a) Time evolution of the number of particles in the right half of the system \(N_\mathrm{r}\) (blue solid line) and the half of the remaining number of particles \(N/2\) (orange dashed line) for \((\hbar \gamma / J, 2M, t_\mathrm{rel}J/\hbar) = (5.0, 40, 5.0)\). (b) Spatial distribution of particles at \(t = t_\mathrm{rel} + \hbar M / J\).\label{fig:gamma_5}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{fig13.pdf}
\caption{The imbalance \(P\) versus the dissipation strength \(\hbar \gamma / J\) at \(t_\mathrm{obs} = t_\mathrm{rel} + \hbar M / J\).
For the simulation with \(2M = 40\), we use \(t_\mathrm{rel}J/\hbar = 5.0\) for \(\hbar \gamma / J \leq 5.0\) and set \(t_\mathrm{rel}J/\hbar \) to 6.0, 10.0, 15.0, 20.0, and 30.0 for \(\hbar \gamma / J = 10.0, 20.0, 50.0, 100.0\), and 500.0, respectively. For \(2M=64\) (128) case, we use \(t_\mathrm{rel}J/\hbar = 10.0\) for \(\hbar \gamma / J \leq 20.0\) (10.0) and \(t_\mathrm{rel}J/\hbar = 20.0\) for \(\hbar \gamma / J = 50.0\) (20.0).~\label{fig:imbalance}}
\end{figure}
In order to quantify how much particle transport is suppressed, we calculate the imbalance between \(N_\mathrm{r}\) and \(N/2\) defined as
\begin{align}
P = \frac{N/2 - N_\mathrm{r}}{N/2 + N_\mathrm{r}}
\end{align}
at \(t_\mathrm{obs} = t_\mathrm{rel} + \hbar M / J\).
\(t_\mathrm{obs} - t_\mathrm{rel}\) corresponds to a rough estimate of the time scale in which the particles released at \(t=t_\mathrm{rel}\) reach the right edge of the system.
If \(P\) significantly exceeds zero, the state at \(t=t_\mathrm{rel}\) is an ALSLC state.
If \(P \leq 0\), the state is a volume-law state.
Otherwise, the state lies in an intermediate regime.
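As a minimal numerical sketch of this classification (the function names and the threshold for what counts as significantly exceeding zero are our illustrative assumptions, not fixed by the text):

```python
def imbalance(N, N_r):
    """Imbalance P = (N/2 - N_r) / (N/2 + N_r) between half of the
    remaining particle number and the number in the right half."""
    return (N / 2 - N_r) / (N / 2 + N_r)

def classify(P, threshold=0.1):
    """Rough classification of the state at t = t_rel; the numerical
    threshold is an illustrative choice."""
    if P > threshold:
        return "ALSLC"
    if P <= 0:
        return "volume-law"
    return "intermediate"
```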
Figure~\ref{fig:imbalance} represents \(P\) versus \(\hbar \gamma / J\) for \(2M = 40\), 64, and 128.
The distinction between the volume-law and ALSLC states made from \(P\) is consistent with the scaling law shown in Fig.~\ref{fig:Size_dep}.
We also see that the imbalance becomes more visible as the system size increases.
Although it is quite difficult to access the volume-law region for the larger systems (\(2M=64\) and 128) in numerical simulations, it is expected that \(P \leq 0\), with ballistic particle transport, regardless of the system size.
In short, the imbalance \(P\) serves as an indicator of whether the initial state of the release dynamics is the ALSLC or the volume-law state, which can be observed in experiments.
\section{Summary\label{sec:summary}}
We proposed that the measurement-induced transitions (MITs), which have been theoretically found in recent studies of quantum circuit models~\cite{li_quantum_2018,chan_unitary-projective_2019,skinner_measurement-induced_2019,szyniszewski_entanglement_2019,jian_measurement-induced_2020,gullans_dynamical_2019,li_measurement-driven_2019,choi_quantum_2020,bao_theory_2020,tang_measurement-induced_2020}, can be experimentally observed by using ultracold bosons in optical lattices with controllable dissipation.
We employed a quasi-exact numerical method to investigate effects of dissipation on quench dynamics of the one-dimensional Bose-Hubbard model with a two-body loss term.
By computing the maximal entanglement entropy of the system during the time evolution, we found two MITs.
Specifically, when the strength of the dissipation increases, the scaling of the entanglement changes from a volume law to an area law with a logarithmic correction, and again to the volume law.
We showed that the momentum distribution, a standard observable in ultracold-gas experiments, reflects the change of the scaling laws.
We also suggested that the strong suppression of particle transport in the dynamics after releasing the particles into an empty space is a more direct observable signature in experiments for distinguishing the area-law states from the volume-law states.
We could not locate precisely the critical points for the two MITs because it was impossible to efficiently describe the volume-law states of the dissipative Bose-Hubbard model with currently available numerical techniques.
Since in experiments with ultracold gases the tractable system size is not limited by the volume-law entanglement, the determination of the critical points will be a meaningful target of quantum simulations.
\begin{acknowledgments}
We thank S. Nakajima, Y. Takahashi, Y. Takasu, and T. Tomita for fruitful discussions.
The MPS calculations in this work are performed with the ITensor library, http://itensor.org.
This work was financially supported by KAKENHI from Japan Society for the Promotion of Science: Grant No. 18K03492 and No. 18H05228, by CREST, JST No. JPMJCR1673, and by MEXT Q-LEAP Grant No.\@ JPMXS0118069021.
\end{acknowledgments}
\section{INTRODUCTION}
In this paper, we consider a class of driftless control systems of the form
\begin{equation}
\dot x = \sum_{i=1}^m u_i f_i (x),\quad x\in \mathbb R^n,\; u\in \mathbb R^m,\;m<n, \; f_i \in C^2(\mathbb R^n),
\label{Sigma}
\end{equation}
where $x=(x_1,\dots,x_n)^\top$ is the state and $u=(u_1,\dots,u_m)^\top$ is the control.
The stabilization of such systems has been the subject of numerous studies over the last few decades, and many important results have been obtained in this area.
In particular, it follows from the famous result of R.W.~Brockett~\cite{Bro81} that the trivial equilibrium of~\eqref{Sigma} is not stabilizable by a regular time-invariant feedback law if the vectors $f_1(0)$, $f_2(0)$, ..., $f_m(0)$ are linearly independent.
Despite the significant progress in the development of control algorithms to stabilize the solution $x=0$ of system~\eqref{Sigma} (see, e.g.,~\cite{Ast94,Bloch92,Cor92,Kol95,Pan11,Pom92,Sar17,Tian02,Zu16,ZG16}, and references therein), the stabilization of nonholonomic systems to a given curve remains a challenging problem.
This issue can be formulated as the trajectory tracking problem. In many papers, this problem has been addressed under the assumption that the trajectory is admissible, i.e., it satisfies the system equations with some control inputs~\cite{Ai05,Ali16,Dandr95,Dong99,Fl95,Mag17,Walsh92,Wang15,Yu15}.
Since the number of controls $m$ may be significantly smaller than the dimension of the state space $n$, not every path in the state space is admissible for system~\eqref{Sigma}.
However, in many applied problems, it is important to stabilize system~\eqref{Sigma} along an \emph{arbitrary} curve, which is not necessarily admissible. As mentioned in~\cite{MS08b}, although it is not possible to asymptotically stabilize nonholonomic systems to non-admissible curves because of the non-vanishing tracking error, practical stabilization can be achieved. It has to be noted that such a problem has been addressed only for particular classes of systems, e.g., for unicycle and car-like systems~\cite{MS08b,GMZME18,Rav18}.
This paper deals with a rather general formulation of the stabilization problem with non-admissible reference curves.
The main contribution of our paper is twofold. First, we introduce a class of control functions for the first degree nonholonomic systems, which allows stabilizing the system in a prescribed neighborhood of an arbitrary (not necessarily admissible) curve. We also show how the obtained results can be extended to higher degree nonholonomic systems. The proposed feedback design scheme is based on the approach introduced in~\cite{Zu16,ZG17,GZ18} for the stabilization and motion planning of nonholonomic systems. However, it has to be noted that the results of these papers cannot be directly applied for the stabilization of non-admissible curves. Second, we characterize stability properties of system~\eqref{Sigma} with the proposed controls in terms of families of sets. Note that the concept of stability of families of sets was used previously in~\cite{La02,GDEZ17,GMZME18} for non-autonomous system admitting a Lyapunov function. In the present paper, we do not assume the existence of a control Lyapunov function and define solutions of the closed-loop system in the sense of sampling.
The rest of the paper is organized as follows. In the remainder of this section, we introduce some basic notations, recall the notion of stability of sets, and give a precise problem statement.
The main result will be proved in Section~II and illustrated with some examples in Section~III.
\subsection{Notations and definitions}
To generate attractive control strategies for system~\eqref{Sigma} in a neighborhood of a given curve $\Gamma=\{\gamma(t)\}_{t\ge 0}\subset \mathbb R^n$, we will follow the idea of~\cite{Zu16} and define solutions of the corresponding closed-loop system in the sense of sampling.
With a slight abuse of notation, we will also identify the curve $\Gamma=\{\gamma(t)\}_{t\ge 0}$ with the map $\gamma:\mathbb R^+\to \mathbb R^n$, $\mathbb R^+= [0,+\infty)$.
For a given $\varepsilon>0$, we consider the partition $\pi_\varepsilon$ of $\mathbb R^+$ into intervals
$
I_j=[t_j,t_{j+1}),\;t_j=\varepsilon j, \quad j=0,1,2,\dots \; .
$
\begin{definition}
Assume given a curve $\gamma:\mathbb R^+ \to \mathbb R^n$, a feedback law $h: \mathbb R^+ \times \mathbb R^n \times \mathbb R^n \to \mathbb R^m$, and an~$\varepsilon>0$.
A $\pi_\varepsilon$-solution of~\eqref{Sigma} corresponding to $x^0\in \mathbb R^n$ and $u=h(t,x,\gamma)$ is an absolutely continuous function $x(t)\in \mathbb R^n$, defined for $t\in[0,+\infty)$, such that $x(0)=x^0$ and, for each $j=0, 1, 2, \dots$,
$$
\dot x(t)=\sum_{i=1}^m h_i(t,x(t_j),\gamma(t_j))\,f_i(x(t)), \quad t\in I_j=[t_j,t_{j+1}).
$$
\end{definition}
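The sampling scheme of Definition~1 can be sketched numerically as follows (a hedged illustration; the helper name and the use of Euler substeps within each interval $I_j$ are our choices):

```python
import numpy as np

def pi_eps_solution(f_list, h, gamma, x0, eps, T, substeps=50):
    """Approximate a pi_eps-solution: the feedback h is evaluated only at
    the sampling instants t_j = j*eps and held constant on I_j = [t_j, t_{j+1}),
    while the state flows along sum_i u_i f_i(x)."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    dt = eps / substeps
    for j in range(int(round(T / eps))):
        t_j = j * eps
        u = h(t_j, x.copy(), gamma(t_j))      # control frozen on I_j
        for _ in range(substeps):             # Euler substeps for the flow
            x = x + dt * sum(u_i * f_i(x) for u_i, f_i in zip(u, f_list))
        traj.append(x.copy())
    return np.array(traj)
```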
For $f,g:\mathbb R^n\to\mathbb R^n $, $x\in\mathbb R^n$, we denote the Lie derivative as
{$ L_gf(x)=\lim\limits_{s\to0}\tfrac{f(x+sg(x))-f(x)}{s}$}, and $[f,g](x)= L_fg(x)- L_gf(x)$ is the Lie bracket.
Throughout this paper, $\|a\|$ stands for the Euclidean norm of a vector $a\in\mathbb R^n$, and the norm of an $n\times n$-matrix $\cal F$ is defined as $\|{\cal F}\|=\sup_{\|y\|=1}\|{\cal F}y\|$.
\subsection{Stability of a family of sets}
To characterize the asymptotic behavior of trajectories of system~\eqref{Sigma},
we will extend the concept of stability of a family of sets to the case of $\pi_\varepsilon$-solutions.
This concept has been developed, e.g., in~\cite{La02} for non-autonomous differential equations and applied to control problems under the classical definition of solutions in~\cite{GDEZ17,GMZME18}.
Let
$\{\mathcal S_t\}_{t\ge 0}$ be a one-parameter family of non-empty subsets of $\mathbb R^n$.
For a $\delta>0$, we denote the $\delta$-neighborhood of the set
$\mathcal S_{t}$ at time $t$ as
$
B_{\delta}(\mathcal S_{t}){=}\bigcup_{y{\in}\mathcal S_{t}}\{x{\in}\mathbb R^n:\|x{-}y\|{<}\delta\}.
$
The distance from a point $x\in \mathbb R^n$ to a set $\mathcal S_{t}\subset \mathbb R^n$ is denoted as ${\rm dist}(x,\mathcal S_{t})=\inf_{y\in \mathcal S_{t}}\|x-y\|$.
Assume given a curve $\gamma:\mathbb R^+ \to \mathbb R^n$, a time-varying feedback law $h: \mathbb R^+ \times \mathbb R^n \times \mathbb R^n \to \mathbb R^m$, and a sampling parameter~$\varepsilon>0$. The basic stability definition that we exploit in this paper is as follows.
\begin{definition}
A one-parametric family of sets $\{\mathcal S_{t}\}_{t\ge0}$ is said to be \emph{exponentially stable} for the closed-loop system~\eqref{Sigma} with $u=h(t,x,\gamma)$ \emph{in the sense of} $\pi_\varepsilon$-\emph{solutions} if
there exist $\hat \delta,\lambda>0$ such that, for any $x^0{\in} B_{\hat\delta}(\mathcal S_{0})$, the corresponding
$\pi_\varepsilon$-solution of~\eqref{Sigma} satisfies ${\rm dist}(x(t),\mathcal S_t)\le C e^{-\lambda t}$ for all $t \ge 0$ with some $C=C(x^0)$.%
If the above exponential decay property holds for every $\hat\delta{>}0$, then the family of sets $\{\mathcal S_{t}\}_{t\ge0}$ is called \emph{globally exponentially stable}
in the sense of $\pi_\varepsilon$-solutions.
\end{definition}
\subsection{Problem statement}~\label{sec_problem}
Using the notion of stability of a family of sets, it is convenient to formulate the control design problem under consideration as follows:
\begin{problem}
Given a curve $\gamma:\mathbb R^+\to\mathbb R^n$ and a constant $\rho>0$, the goal is to find a time-varying feedback law $h: \mathbb R^+ \times \mathbb R^n \times \mathbb R^n\to \mathbb R^m$ such that the family of sets
\begin{align}\label{set}
\{\mathcal S_t^\rho\}_{t\ge0}=\left\{\mathcal S_t^\rho=B_\rho(\gamma(t))\right\}_{t\ge0}
\end{align}
is exponentially stable for the closed-loop system~\eqref{Sigma} with $u=h(t,x,\gamma)$ in the sense of Definition~2.
\end{problem}
We will propose a solution to the above problem with a $C^1$-curve $\gamma:\mathbb R^+\to\mathbb R^n$ for the nonholonomic systems of degree one, i.e.,
we assume that there is an $r>\rho$ such that the following rank condition holds in $D=\bigcup_{t\ge0}B_r(\gamma(t))$:
\begin{equation}\label{rank}
{\rm span}\big\{f_{i}(x), [f_{j_1},f_{j_2}](x):\,i{\in}S_1,(j_1,j_2){\in} S_2\big\}=\mathbb{R}^n
\end{equation}
for all $x\in D$, with some sets of indices $S_1\subseteq \{1,2,...,m\}$, $S_2\subseteq \{1,2,...,m\}^2$ such that $|S_1|+|S_2|=n$.
\section{MAIN RESULTS}
\subsection{Control design}
To solve Problem~1, we extend the control design approach proposed in~\cite{GZ18}. Namely, we use a family of trigonometric polynomials with state-dependent coefficients chosen in such a way that the trajectories of system~\eqref{Sigma} approximate the gradient flow of a time-invariant Lyapunov function.
In this paper, the corresponding Lyapunov function is time-varying, so we allow the above mentioned coefficients to depend on time.
We define the control functions in the following way:
\begin{align}
&u_i^\varepsilon(t,x,\gamma)=\sum_{j\in S_1}\delta_{i j} a_{j}(x,\gamma)\nonumber\\
&+\sqrt{\frac{4\pi}{\varepsilon}}\sum_{(j_1,j_2)\in S_2}{\sqrt{\kappa_{j_1j_2}|a_{j_1,j_2}(x,\gamma)|}}\Big(\delta_{ij_1}\cos{\frac{2\pi \kappa_{j_1j_2}}{\varepsilon}}t\nonumber\\
&\quad{+}\delta_{ij_2}{\rm sign}(a_{j_1,j_2}(x,\gamma))\sin{\frac{2\pi \kappa_{j_1j_2}}{\varepsilon}}t\Big),\; i=1,2,...,m.\label{cont}
\end{align}
Here $\delta_{ij}$ is the Kronecker delta, $\kappa_{j_1j_2}{\in}\mathbb N$ are pairwise distinct, and
$$\Big((a_{j}(x,\gamma))_{j \in S_1}\ ( a_{j_1j_2}(x,\gamma))_{(j_1,j_2)\in S_2}\Big)^\top=a(x,\gamma),$$
where
\begin{equation}\label{a}
a(x,\gamma)=- \alpha \mathcal F^{-1}(x) (x-\gamma),
\end{equation}
with
$
\mathcal F(x){= }\Big(\big(f_{j}(x)\big)_{j\in S_1}\ \ \big([f_{j_1},f_{j_2}](x)\big)_{(j_1,j_2)\in S_2}\Big)
$
and $\alpha{>}0$.
Note that~\eqref{rank} implies that $\mathcal F(x)$ is nonsingular in $ D$.
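Numerically, the matrix $\mathcal F(x)$ and the coefficient vector $a(x,\gamma)=-\alpha\,\mathcal F^{-1}(x)(x-\gamma)$ can be assembled directly from the vector fields. The following sketch (helper names are ours) approximates the Lie brackets by central finite differences, using the convention $[f,g]=L_fg-L_gf$ from above:

```python
import numpy as np

def lie_bracket(f, g, x, h=1e-6):
    """[f,g](x) = L_f g(x) - L_g f(x) = Dg(x) f(x) - Df(x) g(x);
    Jacobians approximated by central differences."""
    n = len(x)
    def jac(F):
        J = np.empty((n, n))
        for k in range(n):
            e = np.zeros(n); e[k] = h
            J[:, k] = (F(x + e) - F(x - e)) / (2 * h)
        return J
    return jac(g) @ f(x) - jac(f) @ g(x)

def coefficients(x, gamma, cols, alpha):
    """a(x, gamma) = -alpha * F(x)^{-1} (x - gamma); `cols` returns the
    columns of F(x) in the order fixed by the index sets S_1, S_2."""
    F = np.column_stack(cols(x))
    return -alpha * np.linalg.solve(F, x - gamma)
```

For the unicycle fields of Section~III.A, `lie_bracket` reproduces $[f_1,f_2](x)=(\sin x_3,-\cos x_3,0)^\top$ to finite-difference accuracy.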
\subsection{Stability analysis}
The main result of this paper is as follows.
\begin{theorem}\label{main}
\emph{
Let $\gamma\in C^1(\mathbb R^+;\mathbb R^n)$,
$r> 0$, $\mu>0$, and $\nu\ge 0$ be such that the matrix $\mathcal F(x)$ is nonsingular in
$\displaystyle D=\bigcup_{t\ge0}B_r(\gamma(t))$,
$
f_i(x)$, $L_{f_j} f_i(x)$, $L_{f_k} L_{f_j} f_i(x)$ are bounded in~$D$ ($i,j,k=\overline{1,m}$), $\|\mathcal F^{-1}(x)\|\le \mu$ for all $x\in D$,
and $\|\dot \gamma(t)\|\le\nu$ for all $t\ge 0$.
Then, for any $\rho\in(0,r)$, there exists an $\hat \varepsilon>0$ such that the family of sets~\eqref{set}
is exponentially stable for system~\eqref{Sigma} with the controls $u_i=u_i^\varepsilon$ defined by~\eqref{cont}--\eqref{a} with any $\varepsilon\in (0,\hat \varepsilon)$ and $\alpha>\frac{\nu}{\rho}$ in the sense of Definition~2.}
\end{theorem}
The proof of this theorem is given in the Appendix.
The next corollary follows from the proof of Theorem~1.
\begin{corollary}
\emph{Let the conditions of Theorem~\ref{main} be satisfied, and let $\|\dot\gamma(t)\|\to 0$ as $t\to+\infty$. Then there is a $\hat \delta>0$ such that $\|x(t)-\gamma(t)\|\to 0$ as $t\to +\infty$, provided that $\|x(0)-\gamma(0)\|<\hat \delta$ and the solutions of the closed-loop system~\eqref{Sigma},~\eqref{cont}--\eqref{a} are defined in the sense of Definition~1.
}\end{corollary}
%
Let us emphasize that, in contrast to many other results on stability of non-autonomous systems (e.g.,~\cite{Khalil}), we do not require the boundedness of $\gamma(t)$ in general.
\section{EXAMPLES}
In this section, we consider some examples illustrating Theorem~\ref{main} and discuss the possibility of extending the above results to systems with a higher degree of nonholonomy.
\begin{figure*}[h!]
\begin{minipage}{0.33\linewidth}
\includegraphics[width=1\linewidth]{uni_bound_a.eps}
\includegraphics[width=1\linewidth]{uni_bound_b.eps}
\end{minipage}\hfill
\begin{minipage}{0.33\linewidth}
\includegraphics[width=1\linewidth]{uni_0_a.eps}
\includegraphics[width=1\linewidth]{uni_0_b.eps}
\end{minipage}\hfill
\begin{minipage}{0.33\linewidth}
\includegraphics[width=0.99\linewidth]{uni_feas_a.eps}
\includegraphics[width=1\linewidth]{uni_feas_b.eps}\label{uni_feas}
\end{minipage}
\caption{Trajectories of system~\eqref{ex_uni} with controls~\eqref{cont_uni} ($\alpha=15$, $\varepsilon=0.1$) and the curves $\gamma^{(1)}$ (left), $\gamma^{(2)}$ (middle), $\gamma^{(3)}$ (right). }
\end{figure*}
\subsection{Unicycle}
As the first example, consider the equations of motion of the unicycle:
\vskip-2ex
\begin{equation}\label{ex_uni}
\dot x_1=u_1\cos x_3,\ \dot x_2=u_1\sin x_3,\ \dot x_3=u_2,
\end{equation}
where $(x_1,x_2)$ are the coordinates of the contact point of the unicycle, $x_3$ is the angle between the wheel and the $x_1$-axis, $u_1$ and $u_2$ control the forward and the angular velocity, respectively. Denote $f_1(x)=\big(\cos (x_3),\sin (x_3), 0\big)^\top$, $f_2(x)=\big(0,0,1\big)^\top$. Then the rank condition~\eqref{rank} is satisfied for all $x\in \mathbb R^3$ with $S_1=\{1,2\}$, $S_2=\{(1,2)\}$, $[f_1,f_2](x)=\big(\sin (x_3),-\cos (x_3), 0\big)^\top$.
Thus, the conditions of Theorem~\ref{main} hold with $r=+\infty$, $\mu =1$. For stabilizing system~\eqref{ex_uni} to
a given curve $\gamma(t)\in\mathbb R^3$, we take controls~\eqref{cont} with $k_{12}=1$:
\begin{align}
& u_1(t,x,\gamma)=a_1(x,\gamma)+\sqrt{\frac{4\pi|a_{12}(x,\gamma)|}{\varepsilon}}\cos\frac{2\pi t}{\varepsilon},\label{cont_uni}\\
& u_2(t,x,\gamma)=a_2(x,\gamma)+{\rm sign}(a_{12}(x))\sqrt{\frac{4\pi|a_{12}(x,\gamma)|}{\varepsilon}}\sin\frac{2\pi t}{\varepsilon},\nonumber
\end{align}
$$
\left(
\begin{array}{c}
a_1(x,\gamma) \\
a_2(x,\gamma) \\
a_{12}(x,\gamma) \\
\end{array}
\right)
=-\alpha\left(\begin{array}{c}
(x_1-\gamma_1)\cos x_3+(x_2-\gamma_2)\sin x_3 \\
x_3-\gamma_3 \\
(x_1-\gamma_1)\sin x_3-(x_2-\gamma_2)\cos x_3 \\
\end{array}
\right).
$$
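In code, controls~\eqref{cont_uni} with $a(x,\gamma)=-\alpha\,\mathcal F^{-1}(x)(x-\gamma)$ written out explicitly read as follows (a sketch; the function name and default parameters are our choices):

```python
import numpy as np

def unicycle_controls(t, x, gamma, alpha=15.0, eps=0.1):
    """Controls (u_1, u_2) for the unicycle with k_12 = 1 and
    a(x, gamma) = -alpha * F(x)^{-1} (x - gamma)."""
    e = x - gamma
    c, s = np.cos(x[2]), np.sin(x[2])
    a1 = -alpha * (e[0] * c + e[1] * s)
    a2 = -alpha * e[2]
    a12 = -alpha * (e[0] * s - e[1] * c)
    amp = np.sqrt(4.0 * np.pi * abs(a12) / eps)
    u1 = a1 + amp * np.cos(2.0 * np.pi * t / eps)
    u2 = a2 + np.sign(a12) * amp * np.sin(2.0 * np.pi * t / eps)
    return np.array([u1, u2])
```

Fed into a sampled-data integrator in the sense of Definition~1, this should qualitatively reproduce simulations like those in Fig.~1.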
Fig.~1 (left) shows the trajectory plots of system~\eqref{ex_uni} with the curve
$
\gamma^{(1)}(t)=\big(2\cos\tfrac{t}{2}\cos t,2\cos\tfrac{t}{2}\sin t,\cos\tfrac{t}{10} \big)^\top.
$
To illustrate Corollary~1, consider the curve
$
\gamma^{(2)}(t)=\big(3-e^{1-t},\,e^{-t^2},0 \big)^\top,
$
for which $\|\dot \gamma^{(2)}(t)\|\to 0 $ as $t\to\infty$. Consequently, $\|x(t)-\gamma^{(2)}(t)\|\to 0$ as $t\to\infty$, see Fig.~1 (middle).
\begin{remark}
The above $\gamma^{(1)}$ and $\gamma^{(2)}$ are non-admissible for system~\eqref{ex_uni}, which yields an oscillatory behavior.
Note that the asymptotic stability can be achieved for admissible curves. To illustrate this, consider the trajectory $\gamma^{(3)}(t)$ governed by
$
\dot\gamma_1^{(3)}=\dot\gamma_1^{(1)}$, $\dot\gamma_2^{(3)}=\dot\gamma_2^{(1)}$, $\dot\gamma_3^{(3)}=\frac{\dot\gamma_1^{(1)}\ddot\gamma_2^{(1)}-\dot\gamma_2^{(1)}\ddot\gamma_1^{(1)}}{{\dot\gamma_1^{(1)}}^2+{\dot\gamma_2^{(1)}}^2}.
$ The corresponding plot is shown in Fig.~1 (right).
\end{remark}
\subsection{Underwater vehicle}
The next example is given by the equations of motion of an autonomous 3D
underwater vehicle (see, e.g.,~\cite{Bara}):
\begin{equation}\label{ex_under}
\dot x=\sum_{i=1}^4f_i(x)u_i,\quad x\in \mathbb R^6,\;u \in \mathbb R^4,
\end{equation}
where $(x_1, x_2, x_3)$ are the coordinates of the center of mass, $(x_4$, $x_5$, $x_6)$ describe the vehicle orientation (Euler angles), $u_1$ is the translational velocity along the $Ox_1$ axis, and $(u_2,u_3,u_4)$ are the angular velocity components,
$$
\begin{aligned}
&f_1(x)=(\cos x_5\cos x_6, \cos x_5\sin x_6,{-}\sin x_5,0,0,0)^\top, \\
&f_2(x) =(0,0,0,1,0,0)^\top,
\end{aligned}
$$
$$
\begin{aligned}
& f_3(x){=}(0,0,0,\sin x_4\tan x_5,\cos x_4,\sin x_4\sec x_5)^\top,\\
&f_4(x){=}(0,0,0,\cos x_4\tan x_5,{-}\sin x_4,\cos x_4\sec x_5)^\top.
\end{aligned}
$$
The rank condition~\eqref{rank} is satisfied in $D =\{x \in \mathbb R^6:-\tfrac{\pi}{2}<x_5<\tfrac{\pi}{2}\}$ with $S_1=\{1,2,3,4\}$ and $S_2=\{(1,3),(1,4)\}$. Therefore, the matrix
$$
\mathcal F(x)=\left(
f_1(x),\ f_2(x),\ f_3(x),\ f_4(x),\ [f_1,f_3](x),\ [f_1,f_4](x)
\right)
$$
is nonsingular in $D$.
Thus, controls~\eqref{cont} take the form
\begin{align}
u_1(t,x,\gamma)=&a_1(x,\gamma)+\sqrt{\frac{4\pi|a_{13}(x,\gamma)|}{\varepsilon}}\cos\frac{2\pi k_{13}t}{\varepsilon}\nonumber\\
&+\sqrt{\frac{4\pi|a_{14}(x,\gamma)|}{\varepsilon}}\cos\frac{2\pi k_{14}t}{\varepsilon},\nonumber\\
u_2(t,x,\gamma)=&a_2(x,\gamma),\label{cont_under}\\
u_3(t,x,\gamma)=&a_3(x,\gamma)+{\rm sign}(a_{13}(x))\sqrt{\frac{4\pi|a_{13}(x,\gamma)|}{\varepsilon}}\sin\frac{2\pi k_{13} t}{\varepsilon},\nonumber\\
u_4(t,x,\gamma)=&a_4(x,\gamma)+{\rm sign}(a_{14}(x))\sqrt{\frac{4\pi|a_{14}(x,\gamma)|}{\varepsilon}}\sin\frac{2\pi k_{14} t}{\varepsilon},\nonumber
\end{align}
with
$
a(x,\gamma)=-\alpha \mathcal F^{-1}(x)(x-\gamma).
$
For the illustration, take
$
\gamma^{(4)}(t)=\left(\cos\tfrac{t}{4},\,\tfrac{t}{4},\,\sin\tfrac{t}{4},\,0,\,0,\,0\right)^\top.
$
The results of numerical simulations are shown in Fig.~2. Note that the curve $\gamma^{(4)}(t)$ is non-admissible for system~\eqref{ex_under}, which results in an oscillatory behavior of the trajectories.
\begin{figure*}[h!]
\begin{minipage}{0.5\linewidth}
\includegraphics[width=1\linewidth,height=0.6\linewidth]{under_a.png}
\end{minipage}\hfill
\begin{minipage}{0.5\linewidth}
\includegraphics[width=1\linewidth]{underB}
\includegraphics[width=1\linewidth]{underC}
\includegraphics[width=1\linewidth]{underD}
\end{minipage}
\caption{Trajectories of system~\eqref{ex_under} with controls~\eqref{cont_under}; $\alpha=15$, $\varepsilon=0.1$, $k_{13}=1$, $k_{14}=2$, $x^0=(0,0,-1,\tfrac{\pi}{4},\tfrac{\pi}{4},\tfrac{\pi}{4})^\top$.}
\end{figure*}
\subsection{Rear-wheel driving car}
The proposed approach can also be extended to nonholonomic systems of higher degrees. For systems of degree two, it is possible to use a control design scheme similar to that introduced in~\cite{GZ18,ZG17}.
For example, consider a kinematic model of a rear-wheel driving car proposed in~\cite{Lu98}:
\begin{equation}\label{ex_car}
\dot x= f_1(x)u_1+f_2(x)u_2,\quad x\in\mathbb R^4,\,u\in\mathbb R^2,
\end{equation}
where $(x_1,x_2)$ are the
Cartesian coordinates of the rear wheel, $x_3$ is the steering angle, $x_4$ specifies the orientation of the car
body with respect to the $x_1$ axis, $u_1$ and $u_2$ are the driving and the steering velocity input, respectively,
$$
\begin{aligned}
&f_1(x)=(\cos x_4,\,\sin x_4,\,0,\tan x_3)^\top, \ f_2(x) =(0,0,1,0)^\top.\\
\end{aligned}
$$
In this case,
$
{\rm span}\{f_1(x),\,f_2(x),\,[f_1,f_2](x),\,[[f_1,f_2],f_1](x)\}=\mathbb R^4
$
for all $x\in D =\{x \in \mathbb R^4:-\tfrac{\pi}{2}<x_3<\tfrac{\pi}{2}\}$. Following the control design scheme from~\cite{GZ18}, we take
\begin{align}
u_1&(t,x,\gamma)=a_1(x,\gamma)+\sqrt{\frac{4\pi|a_{12}(x,\gamma)|}{\varepsilon}}\cos\frac{2\pi k_{12}t}{\varepsilon}\nonumber\\
&+\sqrt[3]{\frac{16\pi^2(k_2^2-k_1^2)a_{121}(x,\gamma)}{\varepsilon^2}}\cos\frac{2\pi k_{1}t}{\varepsilon}\Big(1+\sin\frac{2\pi k_{2}t}{\varepsilon}\Big),\nonumber\\
u_2&(t,x,\gamma)=a_2(x,\gamma)+{\rm sign}(a_{12}(x))\sqrt{\frac{4\pi|a_{12}(x,\gamma)|}{\varepsilon}}\sin\frac{2\pi k_{12} t}{\varepsilon} \nonumber \\ &+\sqrt[3]{\frac{16\pi^2(k_2^2-k_1^2)a_{121}(x,\gamma)}{\varepsilon^2}}\sin\frac{2\pi k_{2}t}{\varepsilon},\label{cont_car}
\end{align}
with the vector of coefficients
$
a(x,\gamma)=-\alpha \mathcal F^{-1}(x)(x-\gamma)
$ and
$
\mathcal F(x)=\left(
f_1(x)\ f_2(x)\ [f_1,f_2](x),\ \big[[f_1,f_2],f_1\big](x)
\right).
$
Fig.~3 presents the trajectory plots of system~\eqref{ex_car}--\eqref{cont_car} for a non-admissible curve
$
\gamma^{(5)}(t)=\left(5\sin\tfrac{t}{4}, 5\sin\tfrac{t}{4} \cos\tfrac{t}{4},\,0,\,0\right)^\top.
$
\begin{figure}[hb]
\begin{minipage}{1\linewidth}
\includegraphics[width=1\linewidth]{car_a}
\includegraphics[width=1\linewidth]{car_b}
\includegraphics[width=1\linewidth]{car_c}
\caption{Trajectories of system~\eqref{ex_car} with controls~\eqref{cont_car}; $\alpha=5$, $\varepsilon=0.5$, $x^0=(8,0,0,0)^\top$.}
\end{minipage}\hfill
\end{figure}
\section{CONCLUSIONS AND FUTURE WORK}
The above numerical simulations confirm that the proposed controller~\eqref{cont} can be used for approximate tracking of reference curves under an appropriate choice of parameters $\alpha$ and $\varepsilon$.
By comparing the left and right plots in Fig.~1, we note that the amplitude of oscillations near the non-admissible curve (Fig.~1, left) significantly exceeds the deviation from the admissible curve (Fig.~1, right).
This feature underlines the assertion of Corollary~1 and illustrates the essence of our approach for considering the stability of a family of sets.
The example in Section~III.C shows that our approach can also be extended to nonholonomic systems of higher degrees.
We do not study here the stabilization problem under general controllability conditions, leaving this issue for future work.
\section{Introduction}
Recently, much progress has been achieved in the understanding of the infrared (IR) singularities of massless scattering amplitudes in non-abelian gauge theories. While factorization proofs guarantee the absence of IR divergences in inclusive observables \cite{Collins:1989gx}, in many cases large Sudakov logarithms remain after this cancellation. A detailed control over the structure of IR poles in the virtual corrections to scattering amplitudes is a prerequisite for the resummation of these logarithms beyond the leading order \cite{Sterman:1986aj,Catani:1989ne,Contopanagos:1996nh,Kidonakis:1997gm}. Catani was the first to predict the singularities of two-loop scattering amplitudes apart from the $1/\epsilon$ pole term \cite{Catani:1998bh}, whose general form was only understood much later in \cite{Sterman:2002qn,MertAybat:2006wq,MertAybat:2006mz}. In recent work \cite{Becher:2009cu}, it was shown that the IR singularities of on-shell amplitudes in massless QCD can be derived from the ultraviolet (UV) poles of operator matrix elements in soft-collinear effective theory (SCET). They can be subtracted by means of a multiplicative renormalization factor, whose structure is constrained by the renormalization group. It was proposed in this paper that the simplicity of the corresponding anomalous-dimension matrix holds not only at one- and two-loop order, but may in fact be an exact result of perturbation theory. This possibility was raised independently in \cite{Gardi:2009qi}. Detailed theoretical arguments supporting this conjecture were presented in \cite{Becher:2009qa}, where constraints derived from soft-collinear factorization, the non-abelian exponentiation theorem, and the behavior of scattering amplitudes in two-parton collinear limits were studied.
It is relevant for many physical applications to generalize these results to the case of massive partons. The IR singularities of one-loop amplitudes containing massive partons were obtained some time ago in \cite{Catani:2000ef}, but until very recently little was known about higher-loop results. In the limit where the parton masses are small compared with the typical momentum transfer among the partons, mass logarithms can be predicted based on collinear factorization theorems \cite{Mitov:2006xs,Becher:2007cu}. This allows one to obtain massive amplitudes from massless ones with a minimal amount of calculational effort. A major step toward solving the problem of finding the IR divergences of generic two-loop scattering processes with both massive and massless partons has been taken in \cite{Mitov:2009sv,Becher:2009kw}. One finds that the simplicity of the anomalous-dimension matrix observed in the massless case no longer persists in the presence of massive partons. Important constraints from soft-collinear factorization and two-parton collinear limits are lost, and only the non-abelian exponentiation theorem restricts the allowed color structures in the anomalous-dimension matrix. At two-loop order, two different types of three-parton color and momentum correlations appear, whose effects can be parameterized in terms of two universal, process-independent functions $F_1$ and $f_2$ \cite{Becher:2009kw}. Apart from some symmetry properties, the precise form of these functions was left unspecified. In this Letter we calculate these functions at two-loop order and study their properties in some detail.
\section{Anomalous-dimension matrix}
We denote by $|{\cal M}_n(\epsilon,\{\underline{p}\},\{\underline{m}\})\rangle$ a UV-renormalized, on-shell $n$-parton scattering amplitude with IR singularities regularized in $d=4-2\epsilon$ dimensions. Here $\{\underline{p}\}\equiv\{p_1,\dots,p_n\}$ and $\{\underline{m}\}\equiv\{m_1,\dots,m_n\}$ denote the momenta and masses of the external partons. The amplitude is a function of the Lorentz invariants $s_{ij}\equiv 2\sigma_{ij}\,p_i\cdot p_j+i0$ and $p_i^2=m_i^2$, where the sign factor $\sigma_{ij}=+1$ if the momenta $p_i$ and $p_j$ are both incoming or outgoing, and $\sigma_{ij}=-1$ otherwise. For massive partons we define 4-velocities $v_i=p_i/m_i$ with $v_i^2=1$ and $v_i^0\ge 1$. We further define the recoil variables $w_{ij}\equiv-\sigma_{ij}\,v_i\cdot v_j-i0$. We use the color-space formalism \cite{Catani:1996jh}, in which $n$-particle amplitudes are treated as $n$-dimensional vectors in color space. $\bm{T}_i$ is the color generator associated with the $i$-th parton and acts as a matrix on its color index. The product $\bm{T}_i\cdot\bm{T}_j\equiv T_i^a\,T_j^a$ is summed over $a$. Generators associated with different particles commute, and $\bm{T}_i^2=C_i$ is given in terms of the eigenvalue of the quadratic Casimir operator of the corresponding color representation, i.e., $C_q=C_{\bar q}=C_F$ for quarks and $C_g=C_A$ for gluons. Below, we will label massive partons with capital indices ($I,J,\dots$) and massless ones with lower-case indices ($i,j,\dots$).
It was shown in \cite{Becher:2009cu,Becher:2009qa,Becher:2009kw} that the IR poles of such amplitudes can be removed by a multiplicative renormalization factor $\bm{Z}^{-1}(\epsilon,\{\underline{p}\},\{\underline{m}\},\mu)$, which acts as a matrix on the color indices of the partons. More precisely, the product $\bm{Z}^{-1}|{\cal M}_n\rangle$ is finite for $\epsilon\to 0$ after the coupling constant $\alpha_s^{\rm QCD}$ used in the calculation of the scattering amplitude is properly matched onto the coupling $\alpha_s$ in the effective theory, in which the heavy partons are integrated out \cite{Becher:2009kw}. The relation
\begin{equation}\label{RGE}
\bm{Z}^{-1}\,\frac{d}{d\ln\mu}\,
\bm{Z}(\epsilon,\{\underline{p}\},\{\underline{m}\},\mu)
= - \bm{\Gamma}(\{\underline{p}\},\{\underline{m}\},\mu)
\end{equation}
links the renormalization factor to a universal anomalous-dimension matrix $\bm{\Gamma}$, which governs the scale dependence of effective-theory operators built out of collinear SCET fields for the massless partons and soft heavy-quark effective theory fields for the massive ones. For the case of massless partons, the anomalous dimension has been calculated at two-loop order in \cite{MertAybat:2006wq,MertAybat:2006mz} and was found to contain only two-parton color-dipole correlations. It has recently been conjectured that this result may hold to all orders of perturbation theory \cite{Becher:2009cu,Gardi:2009qi,Becher:2009qa}. On the other hand, when massive partons are involved in the scattering process, then starting at two-loop order correlations involving more than two partons appear \cite{Mitov:2009sv}. At two-loop order, the general structure of the anomalous-dimension matrix is \cite{Becher:2009kw}
\begin{eqnarray}\label{resu1}
\bm{\Gamma}
&=& \sum_{(i,j)}\,\frac{\bm{T}_i\cdot\bm{T}_j}{2}\,
\gamma_{\rm cusp}(\alpha_s)\,\ln\frac{\mu^2}{-s_{ij}}
+ \sum_i\,\gamma^i(\alpha_s) \nonumber\\
&&\mbox{}- \sum_{(I,J)}\,\frac{\bm{T}_I\cdot\bm{T}_J}{2}\,
\gamma_{\rm cusp}(\beta_{IJ},\alpha_s)
+ \sum_I\,\gamma^I(\alpha_s) \nonumber\\
&&\mbox{}+ \sum_{I,j}\,\bm{T}_I\cdot\bm{T}_j\,
\gamma_{\rm cusp}(\alpha_s)\,\ln\frac{m_I\mu}{-s_{Ij}} \\
&&\mbox{}+ \sum_{(I,J,K)} if^{abc}\,
\bm{T}_I^a\,\bm{T}_J^b\,\bm{T}_K^c\,
F_1(\beta_{IJ},\beta_{JK},\beta_{KI}) \nonumber\\
&&\mbox{}+ \sum_{(I,J)} \sum_k\,if^{abc}\,
\bm{T}_I^a\,\bm{T}_J^b\,\bm{T}_k^c\,
f_2\Big(\beta_{IJ},
\ln\frac{-\sigma_{Jk}\,v_J\cdot p_k}{-\sigma_{Ik}\,v_I\cdot p_k}
\Big) \,. \nonumber
\end{eqnarray}
The one- and two-parton terms depicted in the first three lines start at one-loop order, while the three-parton terms in the last two lines appear at ${\cal O}(\alpha_s^2)$. The notation $(i,j,\dots)$ etc.\ means that the corresponding sum extends over tuples of distinct parton indices. The cusp angles $\beta_{IJ}$ are defined via
\begin{equation}\label{wbrela}
\cosh\beta_{IJ} = \frac{-s_{IJ}}{2m_I m_J} = w_{IJ} \,.
\end{equation}
They are associated with the hyperbolic angles formed by the time-like Wilson lines of two heavy partons. The physically allowed values are $w_{IJ}\ge 1$ (one parton incoming and one outgoing), corresponding to $\beta_{IJ}\ge 0$, or $w_{IJ}\le -1$ (both partons incoming or outgoing), corresponding to $\beta_{IJ}=i\pi-b$ with real $b\ge 0$. These possibilities correspond to space-like and time-like kinematics, respectively. Since in a three-parton configuration there is always at least one pair of partons either incoming or outgoing, at least one of the $w_{IJ}$ or $v_I\cdot p_k$ variables must be time-like, and hence the functions $F_1$ and $f_2$ have non-zero imaginary parts.
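The branch assignment just described is easy to get wrong in numerical work. The following Python sketch (illustrative only, not part of the original analysis) maps a given $w_{IJ}$ to the cusp angle on the stated branch conventions and checks that $\cosh\beta_{IJ}=w_{IJ}$ is recovered on both branches:

```python
import cmath
import math

def cusp_angle(w):
    """Cusp angle beta_IJ for w_IJ = -s_IJ/(2 m_I m_J), on the branch
    conventions of the text: real beta >= 0 for w >= 1 (space-like),
    beta = i*pi - b with real b >= 0 for w <= -1 (time-like)."""
    if w >= 1.0:
        return complex(math.acosh(w))
    if w <= -1.0:
        return 1j * math.pi - math.acosh(-w)
    raise ValueError("physical kinematics require |w_IJ| >= 1")

# cosh(beta) reproduces w on both branches
for w in (1.0, 7.5, -1.2, -40.0):
    assert abs(cmath.cosh(cusp_angle(w)) - w) < 1e-12
```

Time-like kinematics ($w_{IJ}\le-1$) land at $\beta=i\pi-b$, whose imaginary part is the source of the absorptive parts mentioned above.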
The anomalous-dimension coefficients $\gamma_{\rm cusp}(\alpha_s)$ and $\gamma^i(\alpha_s)$ (for $i=q,g$) in (\ref{resu1}) have been determined to three-loop order in \cite{Becher:2009qa} by considering the case of the massless quark and gluon form factors. Similarly, the coefficients $\gamma^I(\alpha_s)$ for massive quarks and color-octet partons such as gluinos have been extracted at two-loop order in \cite{Becher:2009kw} by analyzing the anomalous dimension of heavy-light currents in SCET. In addition, the velocity-dependent function $\gamma_{\rm cusp}(\beta,\alpha_s)$ has been derived from the known two-loop anomalous dimension of a current composed of two heavy quarks moving at different velocities \cite{Korchemsky:1987wg,Kidonakis:2009ev}.
Here we complete the calculation of the two-loop anomalous-dimension matrix by deriving closed analytic expressions for the universal functions $F_1$ and $f_2$, which parameterize the three-parton correlations in (\ref{resu1}).
\section{Calculation of $\bm{F_1}$ and $\bm{f_2}$}
\label{sec:F1f2}
To calculate the function $F_1$ we compute the two-loop vacuum matrix element of the operator $\bm{O}_s=\bm{S}_{v_1}\,\bm{S}_{v_2}\,\bm{S}_{v_3}$, which consists of three soft Wilson lines along the directions of the velocities of three massive partons, without imposing color conservation. The anomalous dimension of this operator contains a three-parton term given by $6if^{abc}\,\bm{T}_1^a\,\bm{T}_2^b\,\bm{T}_3^c\,F_1(\beta_{12},\beta_{23},\beta_{31})$. The function $F_1$ follows from the coefficient of the $1/\epsilon$ pole in the bare matrix element of $\bm{O}_s$. We will then obtain $f_2$ from a limiting procedure.
\begin{figure}
\begin{center}
\includegraphics[width=0.32\columnwidth]{f1.eps}
\includegraphics[width=0.63\columnwidth]{f2.eps}
\includegraphics[width=0.63\columnwidth]{f3.eps}
\caption{\label{fig:dia}
Two-loop Feynman graphs (top row) and one-loop counterterm diagrams (bottom row) contributing to the two-loop coefficient of the renormalization factor $\bm{Z}_s$.}
\end{center}
\vspace{-4mm}
\end{figure}
The operator $\bm{O}_s$ is renormalized multiplicatively, so that $\bm{O}_s\bm{Z}_s$ is UV finite, where $\bm{Z}_s$ is linked to the anomalous dimension in the same way as shown in (\ref{RGE}). In order to calculate the two-loop $\bm{Z}_s$ factor, we have evaluated the two-loop non-planar and planar graphs shown in the first row of Figure~\ref{fig:dia}, as well as the one-loop counterterm diagrams depicted in the second row. Contrary to a statement made in \cite{Mitov:2009sv}, we find that $F_1$ receives contributions from all five diagrams, not just from the non-planar graph. The most challenging technical aspect of the analysis is the calculation of the diagram involving the triple-gluon vertex. We have computed this diagram using a Mellin-Barnes representation and checked the answer numerically using sector decomposition \cite{Smirnov:2008py}. We have also checked that for Euclidean velocities our result for the triple-gluon diagram agrees numerically with a position-space based integral representation derived in \cite{Mitov:2009sv}. Combining all contributions, we find
\begin{equation}\label{eq:F1}
F_1(\beta_{12},\beta_{23},\beta_{31})
= \frac{\alpha_s^2}{12\pi^2}
\sum_{i,j,k} \epsilon_{ijk}\,g(\beta_{ij})\,r(\beta_{ki}) \,,
\end{equation}
where we have introduced the functions
\begin{eqnarray}
r(\beta)
&=& \beta\,\coth\beta \,, \nonumber\\
g(\beta)
&=& \coth\beta \left[ \beta^2
+ 2\beta\,\ln(1-e^{-2\beta}) - \mbox{Li}_2(e^{-2\beta})
+ \frac{\pi^2}{6} \right] \nonumber\\
&&\mbox{}- \beta^2 - \frac{\pi^2}{6} \,.
\end{eqnarray}
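As a numerical cross-check of these expressions, the sketch below (illustrative only; the dilogarithm is summed by its defining series, which suffices for $|e^{-2\beta}|<1$, i.e.\ real $\beta>0$) evaluates $r(\beta)$ and $g(\beta)$ and confirms the large-$\beta$ suppression of $g$ discussed later in the text:

```python
import cmath
import math

def dilog(z, terms=2000):
    """Li_2(z) = sum_{k>=1} z**k / k**2, adequate for |z| well below 1."""
    return sum(z ** k / k ** 2 for k in range(1, terms + 1))

def r_func(beta):
    return beta / cmath.tanh(beta)

def g_func(beta):
    cothb = 1.0 / cmath.tanh(beta)
    z = cmath.exp(-2.0 * beta)
    bracket = (beta ** 2 + 2.0 * beta * cmath.log(1.0 - z)
               - dilog(z) + math.pi ** 2 / 6.0)
    return cothb * bracket - beta ** 2 - math.pi ** 2 / 6.0

# r(beta) = beta * coth(beta)
assert abs(r_func(1.0) - 1.0 / math.tanh(1.0)) < 1e-12
# for large real beta, g agrees with the large-recoil asymptotics
# quoted later in the text, with w = cosh(beta)
w = math.cosh(6.0)
expected = (math.log(2 * w) ** 2 - math.log(2 * w)
            + math.pi ** 2 / 6 - 0.5) / (2 * w ** 2)
assert abs(g_func(6.0).real - expected) / expected < 0.01
```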
The function $f_2$ can be obtained from the above result by writing $w_{23}=-\sigma_{23}\,v_2\cdot p_3/m_3$, $w_{31}=-\sigma_{31}\,v_1\cdot p_3/m_3$ and taking the limit $m_3\to 0$ at fixed $v_I\cdot p_3$. In that way, we obtain
\begin{equation}
f_2\Big( \beta_{12},
\ln\frac{-\sigma_{23}\,v_2\cdot p_3}%
{-\sigma_{13}\,v_1\cdot p_3} \Big)
= - \frac{\alpha_s^2}{4\pi^2}\,g(\beta_{12})\,
\ln\frac{-\sigma_{23}\,v_2\cdot p_3}%
{-\sigma_{13}\,v_1\cdot p_3} \,.
\end{equation}
Whether a factorization of the three-parton terms into two functions depending on only a single cusp angle persists at higher orders in $\alpha_s$ is an open question.
It is interesting to expand the two functions $r(\beta)$ and $g(\beta)$ about the threshold point $\beta=i\pi-b$ with $b\to 0^+$. We find
\begin{equation}\label{rgthreshold}
\begin{aligned}
r(\beta)
&= - \frac{i\pi}{b} + 1 + {\cal O}(b) \,, \\
g(\beta)
&= - \frac{\pi^2 + 2i\pi\ln(2b)}{b}
+ \left( 2 + \frac{5\pi^2}{6} \right)
+ {\cal O}(b) \,.
\end{aligned}
\end{equation}
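The Coulomb pole in (\ref{rgthreshold}) can be checked numerically for $r(\beta)$, which involves no dilogarithm and hence no branch ambiguity; verifying the expansion of $g$ the same way would require continuing $\mbox{Li}_2$ past $|z|=1$, which the naive series does not provide. A minimal check (illustrative only):

```python
import cmath
import math

def r_func(beta):
    return beta / cmath.tanh(beta)

# approach the threshold point beta = i*pi - b from b -> 0+ and
# compare with the leading terms -i*pi/b + 1 of the expansion
b = 1e-4
exact = r_func(1j * math.pi - b)
approx = -1j * math.pi / b + 1.0
assert abs(exact - approx) / abs(exact) < 1e-6   # Coulomb pole dominates
```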
Based on the symmetry properties of $F_1$ and $f_2$, it was concluded in \cite{Mitov:2009sv,Becher:2009kw} that these functions vanish whenever two of the velocities of the massive partons coincide. Indeed, this seems to be an obvious consequence of the fact that $F_1$ is totally anti-symmetric in its arguments, while $f_2$ is odd in its second argument. This reasoning implicitly assumes that the limit of equal velocities is non-singular, but is invalidated by the presence of Coulomb singularities in $r(\beta)$ and $g(\beta)$ near threshold. Consider the limit where two massive partons 1 and 2 are produced near threshold, with a small relative 3-velocity $\vec{v}_{12}\equiv\vec{v}_1-\vec{v}_2$ defined in their rest frame. It is then straightforward to derive that
\begin{equation}
\lim_{v_2\to v_1} f_2
= \frac{\alpha_s^2}{4\pi^2}
\left[ \pi^2 + 2i\pi\ln(2|\vec{v}_{12}|) \right] \cos\theta \,,
\end{equation}
where $\theta$ is the scattering angle formed by the 3-momenta of particles 1 and 3. A similar formula holds for $F_1$. This result is anti-symmetric in the parton indices 1 and 2 as required, but it does not vanish at threshold.
Another interesting limit is that of large recoil, where all the scalar products $w_{IJ}$ become large in magnitude. In that case, both $F_1$ and $f_2$ are suppressed like ${\cal O}(1/w^2)$, because for large $\beta$
\begin{equation}
g(\beta)
= \frac{1}{2w^2} \left[ \ln^2(2w) - \ln(2w) + \frac{\pi^2}{6}
- \frac12 \right] + {\cal O}\Big( \frac{1}{w^3} \Big) \,.
\end{equation}
Note that the non-planar contribution from the first graph in Figure~\ref{fig:dia}, which was studied in the Euclidean region in \cite{Mitov:2009sv}, contains the leading-power term
\begin{equation}
F_1^{\rm non-planar}
= - \frac{\alpha_s^2}{12\pi^2}\,\ln\frac{w_{12}}{w_{23}}\,
\ln\frac{w_{23}}{w_{31}}\,\ln\frac{w_{31}}{w_{12}}
+ {\cal O}\Big(\frac{1}{w^2}\Big)
\end{equation}
and is unsuppressed in this limit. However, this contribution cancels against a leading-power term in the planar and counterterm contributions.
Using that $w_{IJ}=-s_{IJ}/(2m_I m_J)$, we see that the large-recoil limit corresponds to $m_I m_J\to 0$ at fixed $s_{IJ}$. It follows that the three-parton correlation terms described by $F_1$ and $f_2$ vanish like $(m_I m_J/s_{IJ})^2$ in the small-mass limit. This observation is in accordance with a factorization theorem proposed in \cite{Mitov:2006xs,Becher:2007cu}, which states that massive amplitudes in the small-mass limit can be obtained from massless ones by a simple rescaling prescription for the massive external legs.
\section{Anomalous dimension for $\bm{q\bar q\to t\bar t}$ near threshold}
\label{sec:tt}
As a sample application, we apply our formalism to the calculation of the two-loop anomalous-dimension matrix for top-quark pair production near threshold in the $q\bar q\to t\bar t$ channel. This matrix (along with the corresponding one in the $gg\to t\bar t$ channel) forms the basis for soft-gluon resummation at the next-to-next-to-leading logarithmic (NNLL) order. We adopt the $s$-channel singlet-octet basis, in which the $t\bar t$ pair is either in a color-singlet or color-octet state. For the quark-antiquark annihilation process $q_l(p_1)+\bar q_k(p_2)\to t_i(p_3)+\bar t_j(p_4)$, we thus choose the independent color structures as $c_1 = \delta_{ij}\,\delta_{kl}$ and $c_2 = (t^a)_{ij}\,(t^a)_{kl}$. In the threshold limit $s=2p_1\cdot p_2\to 4m_t^2$ it is convenient to define the quantity $\beta_t=\sqrt{1-4m_t^2/s}$, which is related to the relative 3-velocity $\vec{v}_{t\bar t}$ of the top-quark pair in the center-of-mass frame by $|\vec{v}_{t\bar t}|=2\beta_t$. We find that in the limit $\beta_t\to 0$ the two-loop anomalous-dimension matrix reduces to
\begin{equation}\label{Gqqthresh}
\begin{split}
\bm{\Gamma}_{q\bar q}
&= \bigg[ C_F\,\gamma_{\rm cusp}(\alpha_s)
\left( \ln\frac{s}{\mu^2} - \frac{i\pi}{2\beta_t} - i\pi + 1
\right) \\
&\quad\mbox{}+ C_F\,\gamma_{\rm cusp}^{(2)}(\beta_t)
+ 2\gamma^q(\alpha_s) + 2\gamma^Q(\alpha_s) \bigg]
\begin{pmatrix}
1~ & ~0 \\ 0~ & ~1
\end{pmatrix}
\nonumber
\end{split}
\end{equation}
\begin{equation}
\begin{split}
&\mbox{}+ \frac{N}{2} \left[ \gamma_{\rm cusp}(\alpha_s)
\bigg( \frac{i\pi}{2\beta_t} + i\pi - 1 \bigg)
- \gamma_{\rm cusp}^{(2)}(\beta_t) \right]\!
\begin{pmatrix}
0~ & ~0 \\ 0~ & ~1
\end{pmatrix} \\
&\mbox{}+ \frac{\alpha_s^2}{2\pi^2}
\left[ \pi^2 + 2i\pi\ln(4\beta_t) \right] \cos\theta
\begin{pmatrix}
0 & \frac{C_F}{2} \\ -N & 0
\end{pmatrix}
+ {\cal O}(\beta_t) \,,
\end{split}
\end{equation}
where the two-loop expressions for the anomalous dimensions $\gamma_{\rm cusp}$, $\gamma^q$, and $\gamma^Q$ can be found in \cite{Becher:2009kw}, and
\begin{equation}
\gamma_{\rm cusp}^{(2)}(\beta_t)
= \frac{N\alpha_s^2}{2\pi^2}
\left[ \frac{i\pi}{2\beta_t} \left( 2 - \frac{\pi^2}{6} \right)
- 1 + \zeta_3 \right]
\end{equation}
arises from the threshold expansion of the two-loop coefficient of the velocity-dependent cusp anomalous dimension $\gamma_{\rm cusp}(\beta,\alpha_s)$.
We stress that, as a consequence of the Coulomb singularities, the three-parton correlation term does not vanish near threshold. Instead, it gives rise to a scattering-angle-dependent, off-diagonal contribution in (\ref{Gqqthresh}). The off-diagonal terms were omitted in two recent papers \cite{Beneke:2009rj,Czakon:2009zw}, where threshold resummation for top-quark pair production was studied at NNLL order. We leave it to future work to explore if and how the results obtained by these authors need to be modified in light of our findings.
\section{Conclusions}
\label{sec:concl}
The IR divergences of scattering amplitudes in non-abelian gauge theories can be absorbed into a multiplicative renormalization factor, whose form is determined by an anomalous-dimension matrix in color space. At two-loop order this anomalous-dimension matrix contains pieces related to color and momentum correlations between three partons, as long as at least two of them are massive. This information is encoded in two universal functions: $F_1$, describing correlations between three massive partons, and $f_2$, describing correlations between two massive and one massless parton. In this Letter we have calculated these functions at two-loop order. Using the exact analytic expressions, we studied the properties of the three-parton correlations in the small-mass and threshold limits. We found that the functions $F_1$ and $f_2$ vanish as $(m_I m_J/s_{IJ})^2$ in the small-mass limit, in accordance with existing factorization theorems for massive scattering amplitudes \cite{Mitov:2006xs,Becher:2007cu}. On the other hand, and contrary to naive expectations, the two functions do not vanish in the threshold limit, because Coulomb singularities compensate a zero resulting from the anti-symmetry under exchange of two velocity vectors. This fact has been overlooked in the recent papers \cite{Beneke:2009rj,Czakon:2009zw}, where the three-parton correlations were neglected near threshold.
Our results allow for the calculation of the IR poles in an arbitrary on-shell, $n$-particle scattering amplitude to two-loop order, where any number of the $n$ partons can be massive. As an application, we have derived the anomalous-dimension matrix for top-quark pair production in the $q\bar q\to t\bar t$ channel. We will explore in future work to what extent the new off-diagonal entries, arising from three-parton correlation terms, affect the numerical results for the threshold-resummed $t\bar t$ production cross sections at the Tevatron and LHC.
This Letter completes the study of IR divergences of two-loop scattering amplitudes with an arbitrary number of massive and massless external particles, and in arbitrary non-abelian (or abelian) gauge theories with massless gauge bosons. Details of our calculations will be presented in a forthcoming article.
{\em Acknowledgments:\/}
We are grateful to Thomas Becher for collaboration during the early stages of this work.
\section{Introduction}
This is the 23$^{rd}$ in a series of papers from the U.S. Naval
Observatory's speckle interferometry program, presenting results of
observations obtained at the USNO 26-inch telescope in Washington,
DC (see, most recently, Mason \& Hartkopf 2017a).
From 4 January through 13 September 2017, the 26-inch telescope
was used on 60 of 184 (33\%) scheduled nights. While most nights
were lost due to weather conditions, time was also lost due to
testing and upgrades of instrumentation and software, other
mechanical or software issues, and to a lack of observing personnel.
Instrumentation and the observing technique were as described in
Mason \& Hartkopf (2017a). Observing was suspended in mid-September
when upgrades to the motors and encoders began. After initial success
in automation seen in this and recent previous entries of this
series, a more ambitious automation project was initiated in
September. This will be described in greater detail in the next
entry in this series.
Individual nightly totals varied substantially, from 7 to 146
observations per night (mean 66.5). The results yielded 3989
observations (pointings of the telescope) and 3862 resolutions.
After removing marginal observations, calibration data, tests, and
``questionable measures'' a total of 3333 measurements remained.
These ``questionable measures'' are not all of inferior quality but
may represent significant differences from the last measure, often
made many decades ago. Before these measures are published they will
need to be confirmed in a new observing season to account for any
possible pointing or other identification problems. The tabulated
list of these is retained internally and forms a ``high priority
observing list'' for subsequent observing seasons. These 3333
measures were grouped into 1911 mean relative positions.
\section{Results}
Our 2017 observing list remained the same as the previous one, discussed
in Mason \& Hartkopf (2017a). On a given night a pair may be
observed multiple times in different data collection modes and with
different magnification as it is not always obvious which will
produce the best result. Further, as object acquisition is the
most time-consuming portion of the duty cycle, adding additional
observations is less consequential. For those intranightly
observations ($n~=~832$) the rms values are quite low:
$d\theta~=~0\fdg10$ and $\frac{d\rho}{\rho}~=~0.0020$. A smaller
number ($n~=~262$) comprise those objects which appear to be slow
moving\footnote{We assume $\Delta\theta~=~\Delta\rho~=~0$ for
these.} and were observed on multiple nights. For those internightly
observations the rms values are twice the intranightly values:
$d\theta~=~0\fdg14$ and $\frac{d\rho}{\rho}~=~0.0044$. We take these
values as representative of the true error.
\subsection{New Pair}
Table 1 presents coordinates and magnitude information from
CDS\footnote{magnitude information is from one of the catalogs
queried in the {\it Aladin Sky Atlas}, operated at CDS, Strasbourg,
France. See {\tt http://aladin.u-strasbg.fr/aladin.gml}.} for a
pair which is presented here for the first time. It is a closer
component to a known system. Column one gives the coordinates of
the primary of the pair. Column two is the WDS identifier while
Column three is the discoverer designation associated with the known
pair which is used here for the new component as well. Columns four
and five give the visual magnitudes of the primary and secondary,
and Column six notes the circumstance of the discovery. The mean
double star position (T, $\theta$, and $\rho$) from our 26$''$
measures of this system is given in Table 3.
As this pair is quite wide we are able to provide two additional
measures of relative astrometry from other catalogs using the same
methodology as described in Wycoff et al.\ (2006) and Hartkopf et
al.\ (2013). In Table 2 the first two columns identify the system
by providing its epoch-2000 coordinates and discovery designation
(as given in Table 1). Columns three through five give the epoch
of observation (expressed as a fractional Julian year), the
position angle (in degrees), and the separation (in seconds of
arc). Note that in all tables the position angle, measured from
North through East, has not been corrected for precession, and is
thus based on the equinox for the epoch of observation\footnote{This
has been the standard for double star relative astrometry for several
hundred years and is in accordance with IAU Resolutions from the
Commissions governing double stars, most recently, Mason et al.\
(2016). See \S 4.1.1.}. Columns six and seven provide the source of
the measure and either a reference or note to the source.
\subsection{Measures of Known Pairs}
Tables 3 and 4 present the relative measurements of double stars
made with the 26$''$ telescope. Table 3 presents those with no
calculation for motion, either orbital or linear. As in Table 1, the
first two columns identify the system by providing its epoch-2000
coordinates and discovery designation. Columns three and four give
the epoch of observation (expressed as a fractional Julian year) and
the position angle (in degrees). Column five gives the position
angle error. This is the internightly rms value if one is available
or the mean value of 0\fdg1 if it is not. Columns six and seven
provide the separation (in seconds of arc) and its error. As above,
the error is its internightly value or the mean error
($d\rho~=~0.0044\rho$). Column eight is the number of nights in the mean
position. When this is $``$1" the errors in Columns five and seven
are the mean results as described above. Finally, Column nine is
reserved for notes. One of the pairs listed in Table 3 is the pair
listed in Tables 1 and 2 which has not been measured before. Five
pairs, designated with a $``$C" code, are confirmed here for the
first time. Eight very wide pairings, designated with a $``$V" code,
in mulitple systems have their positions determined from vector
addition. Two pairs have not been measured in over fifty years.
Those are WDS10536$-$0742 = J\phm{88888}90BC, last measured in 1954
(Harshaw 2013) and WDS17128$+$2433 = POU3264, last measured in 1892
(Pourteau 1933).
The 1726 measures presented in Table 3 have a mean separation of
13\farcs874 and a median value of 9\farcs156. The mean number of
years since the pair was last observed is 7.05.
Table 4 presents measurements of doubles where some prediction of
position (orbital or linear) is available. The first eight columns
are the same as Table 3 above. Columns nine and ten provide the
O$-$C residual to the determination referenced in Column eleven.
The final column, like that of Table 3, provides notes. In some
cases a measure has residuals to more than one calculation. In some
of those cases the second calculation refers to a new orbit (Table
5) or linear solution (Table 6) which is described below.
Not surprisingly, the objects in Table 4 are both closer and more
frequently observed than those of Table 3. The 185 measures
presented in Table 4 have a mean separation of 11\farcs507 and a
median value of 4\farcs158. The mean number of years since the pair
was last observed is 2.88.
\subsection{Improved Orbits}
Four systems with sufficient data to improve their orbits are
presented in Table 5 and Figure 1. All of the individual measures
were weighted by the procedures of Hartkopf et al.\ (2001) and
calculated with the venerable $``$grid-search" method of Hartkopf et
al.\ (1989).
Table 5 is broken into two groups. The orbits in the first group we
characterize as ``improved but still provisional'' and they are given
without errors. They fit the data better than the earlier orbits and
should give reasonable ephemerides over the next several decades, but
the elements will all require correction over the course of a complete
orbit before they can be considered even approximately correct. As
in earlier tables, the first two columns identify the system by
providing its epoch-2000 coordinates and discovery designation.
Columns three through nine provide the seven Campbell elements:
the period (P in years), the semimajor axis (a$''$ in arcseconds),
the inclination (i) and longitude of the node ($\Omega$), both in
degrees, the epoch of the most recent periastron passage (T$_o$ in
years), the eccentricity (e) and the longitude of periastron
($\omega$ in degrees). Column ten gives the reference to the
previous $``$best" orbit and Column eleven the orbital $``$grade"
following the procedures of Hartkopf et al.\ (2001).
In the second part of Table 5 are the two orbits we characterize
as ``reliable'', both with shorter periods than those in the
first group. All eleven columns are the same as in the first part of
the table; however, here under each element is its formal error. The
precision of the element is defined by the precision of its error.
Relative visual orbits of all four systems are plotted in Figure 1,
with the x and y axes indicating the scale in arcseconds. Each solid
curve represents the newly determined orbital elements presented in
Table 5 and the dashed curve is the earlier orbit referenced in
Column ten.
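The ephemerides discussed below (Table 7) follow from the Campbell elements by the standard solution of Kepler's equation and projection onto the plane of the sky. The Python sketch below illustrates this conversion with placeholder elements (the values are hypothetical, not those of Table 5):

```python
import math

def ephemeris(t, P, a, i_deg, Omega_deg, T0, e, omega_deg):
    """Apparent position (theta in degrees, rho in arcsec) at epoch t
    from the seven Campbell elements: P and T0 in years, a in arcsec,
    angles in degrees."""
    i = math.radians(i_deg)
    Omega = math.radians(Omega_deg)
    omega = math.radians(omega_deg)
    M = 2.0 * math.pi * (t - T0) / P          # mean anomaly
    E = M                                     # solve Kepler's equation
    for _ in range(50):                       # by Newton iteration
        E -= (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
    nu = 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                          math.sqrt(1.0 - e) * math.cos(E / 2.0))
    rad = a * (1.0 - e * math.cos(E))         # radius vector in arcsec
    # project onto the plane of the sky
    theta = Omega + math.atan2(math.sin(nu + omega) * math.cos(i),
                               math.cos(nu + omega))
    rho = rad * math.hypot(math.cos(nu + omega),
                           math.sin(nu + omega) * math.cos(i))
    return math.degrees(theta) % 360.0, rho

# sanity check: a face-on circular orbit keeps rho = a, and a quarter
# period after periastron the position angle has advanced by 90 degrees
theta, rho = ephemeris(2025.0, 100.0, 1.0, 0.0, 0.0, 2000.0, 0.0, 0.0)
assert abs(rho - 1.0) < 1e-12 and abs(theta - 90.0) < 1e-9
```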
\begin{figure}[!ht]
\begin{center}
{\epsfxsize 3.2in \epsffile{wds09210+3811i.eps} \epsfxsize 3.2in \epsffile{wds11035+5432i.eps}}
{\epsfxsize 3.2in \epsffile{wds18359+1659i.eps} \epsfxsize 3.2in \epsffile{wds20524+2008i.eps}}
\end{center}
\caption{\small Figure 1 illustrates the new orbital solutions,
plotted together with all published data in the WDS database as well
as the new data in Table 4. In each of these figures, micrometric
observations are indicated by plus signs, interferometric measures
by filled circles, conventional CCD by pink triangles, space-based
measures are indicated by the letter `H', new measures from Table 4
are plotted as filled stars. ``$O-C$'' lines connect each measure to
its predicted position along the new orbit (shown as a thick solid
line). Dashed ``$O-C$'' lines indicate measures given zero weight in
the final solution. A dot-dash line indicates the line of nodes, and
a curved arrow in the lower right corner of each figure indicates the
direction of orbital motion. The earlier orbit referenced in Table 5
is shown as a dashed ellipse.}
\end{figure}
\subsection{New Linear Solutions}
Inspection of all observed pairs with either a 30$^{\circ}$ change
in their relative position angles or a 30\% change in separations
since the first observation cataloged in the WDS revealed six pairs
whose motion seemed linear. These apparent linear relative motions
suggest that these pairs are either composed of physically
unrelated stars or have very long orbital periods. Linear elements
to these doubles are given in Table 6, where Columns one and two
give the WDS and discoverer designations and Columns three to nine
list the seven linear elements: x$_{0}$ (zero point in x, in
arcseconds), a$_{x}$ (slope in x, in $''$/yr), y$_{0}$ (zero point
in y, in arcseconds), a$_{y}$ (slope in y, in $''$/yr), T$_{0}$
(time of closest apparent separation, in years), $\rho_{0}$ (closest
apparent separation, in arcseconds), and $\theta_{0}$ (position
angle at T$_{0}$, in degrees). See Hartkopf \& Mason (2017) for a
description of all terms.
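The derived quantities $(T_0,\rho_0,\theta_0)$ follow from the fitted line by minimizing the apparent separation. The sketch below is illustrative only: it assumes the conventional orientation ($x$ toward North, $y$ toward East, $\theta$ measured from North through East) and whatever epoch zero point the elements were fitted in; see Hartkopf \& Mason (2017) for the exact conventions:

```python
import math

def closest_approach(x0, ax, y0, ay):
    """Derive (T0, rho0, theta0) from linear elements, assuming
    x(t) = x0 + ax*t (toward North) and y(t) = y0 + ay*t (toward
    East), with theta measured from North through East."""
    v2 = ax * ax + ay * ay
    T0 = -(x0 * ax + y0 * ay) / v2            # minimizes x**2 + y**2
    x, y = x0 + ax * T0, y0 + ay * T0
    rho0 = math.hypot(x, y)
    theta0 = math.degrees(math.atan2(y, x)) % 360.0
    return T0, rho0, theta0

# example: motion along (-1, 0) "/yr starting from (1", 1") passes
# closest to the primary at t = 1, at separation 1" due East
T0, rho0, theta0 = closest_approach(1.0, -1.0, 1.0, 0.0)
assert abs(T0 - 1.0) < 1e-12
assert abs(rho0 - 1.0) < 1e-12
assert abs(theta0 - 90.0) < 1e-9
```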
Figure 2 illustrates these new linear solutions, plotted together
with all published data in the WDS database, as well as the
previously unpublished data from Table 4. Symbols are the same as in
Figure 1. In the case of linear plots, the dashed line indicates the
time of closest apparent separation. As in Figure 1, the direction
of motion is indicated at lower right of each figure. As the plots
and solutions are all relative, the proper motion ($\mu$) difference
is assumed to be zero. In some cases, cataloged proper motion
differences between the components are plotted as a red line.
\begin{figure}[p]
~\vskip -1.8in
\begin{center}
{\epsfxsize 2.8in \epsffile{wds03313-0542Q.eps} \epsfxsize 2.8in \epsffile{wds05313-1834Q.eps}}
\vskip 0.05in
{\epsfxsize 2.8in \epsffile{wds07377+1330Q.eps} \epsfxsize 2.8in \epsffile{wds08211+4725Q.eps}}
\vskip 0.05in
{\epsfxsize 2.8in \epsffile{wds15320-1123Q.eps} \epsfxsize 2.8in \epsffile{wds18130+4251Q.eps}}
\end{center}
\vskip -0.3in
\caption{\small New linear fits for the systems listed in Table 6
and all data in the WDS database and Table 4. Symbols are the same
as Figure 1. ``$O-C$'' lines connect each measure to its predicted
position along the linear solution (shown as a thick solid line). An
arrow in the lower right corner of each figure indicates the
direction of motion. The scale, in arcseconds, is indicated on the
left and bottom of each plot. When determined, cataloged proper motion
differences between these components are plotted as a red line.}
\end{figure}
\vskip 0.1in
Table 7 gives ephemerides for each orbit or linear solution over the
years 2018 through 2026, in two-year increments. Columns (1) and (2)
are the same identifiers as in the previous tables, while columns
(3+4), (5+6), ... (11+12) give predicted values of $\theta$ and
$\rho$, respectively, for the years 2018.0, 2020.0, etc., through
2026.0.
Notes to individual systems follow:
{\bf 09210$+$3811 = STF1338AB} : Also known as HD 80441. Based on
the period and semi-major axis of Table 5 and the parallax
(23.44$\pm$1.08 mas; van Leeuwen 2007) the mass sum of this system
is 1.66$\pm$0.33 \msun. This is lower than expected for a pair of F3
dwarfs. Using the more recent parallax from Gaia's DR2 (14.90$\pm$0.59
mas; Gaia Collaboration et al.\ 2016, 2018) an even lower solution of
is 1.18$\pm$0.26 \msun is determined. If the spectral classification
is approximately correct, an orbital solution of the same period with
a semi-major axis about $\frac{1}{3}$ larger would produce an expected
mass sum. While it has been 188 years since the first resolution
(Struve 1837), only continued observation, over a long timebase, can
make the orbital solution more definitive. The wider C component is
optical.
{\bf 11035$+$5432 = A\phn\phn1590} : Also known as HD 95690. Based
on the period and semi-major axis of Table 5 and the parallax
(23.06$\pm$1.48 mas; van Leeuwen 2007) the mass sum of this system
is 1.73$\pm$0.70 \msun. Using the more recent parallax from Gaia's
DR2 (23.49$\pm$0.05 mas; Gaia Collaboration et al.\ 2016, 2018) a
more precise result of 1.64$\pm$0.36 \msun is determined. Both seem
reasonable for a K2V and its companion.
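The mass sums quoted in these notes follow from Kepler's third law, $\mathcal{M}_{\rm tot}=(a''/\pi'')^3/P^2$ in solar masses, with the semimajor axis and parallax in the same angular units and the period in years. The sketch below illustrates the computation with simple placeholder values (the actual $P$ and $a''$ are in Table 5); the quadrature error propagation assumes uncorrelated uncertainties, whereas $P$ and $a''$ from an orbit fit are in practice strongly correlated:

```python
import math

def mass_sum(P, a, plx, sig_P=0.0, sig_a=0.0, sig_plx=0.0):
    """Dynamical mass sum in solar masses from Kepler's third law,
    M = (a/plx)**3 / P**2, with a and plx in the same angular units
    and P in years.  Uncertainties are propagated in quadrature
    assuming uncorrelated errors."""
    M = (a / plx) ** 3 / P ** 2
    rel = math.sqrt((3.0 * sig_a / a) ** 2
                    + (3.0 * sig_plx / plx) ** 2
                    + (2.0 * sig_P / P) ** 2)
    return M, M * rel

# benchmark: a 1 AU orbit seen from 1 pc (a = 1", plx = 1") with
# P = 1 yr corresponds to one solar mass
M, sig = mass_sum(1.0, 1.0, 1.0)
assert abs(M - 1.0) < 1e-12 and sig == 0.0
```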
\subsection{Double Stars Not Found}
Table 8 presents two systems which were observed but not detected.
Possible reasons for nondetection include orbital or differential
proper motion making the binary too close or too wide to resolve at
the epoch of observation, a larger than expected $\Delta$m,
incorrect pointing of the telescope, and misprints and/or errors in
the original reporting paper. It is hoped that reporting these will
encourage other double star astronomers to either provide
corrections to the USNO observations or to verify the lack of
detection.
\acknowledgements
This research has also made use of the SIMBAD database, operated at
CDS, Strasbourg, France, NASA's Astrophysics Data System and made
use of data from the European Space Agency (ESA) mission {\it Gaia}
({\tt https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC,
{\tt https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding
for the DPAC has been provided by national institutions, in
particular the institutions participating in the {\it Gaia}
Multilateral Agreement.
The continued instrument maintenance by the USNO instrument shop,
Gary Wieder, Chris Kilian and Phillip Eakens, makes the operation of
a telescope of this vintage a true delight.
\section{Introduction}
Including noise in lattice Boltzmann simulations has been an active field of research in the last few years.
It was pioneered by Ladd\cite{ladd-1993} who suggested to introduce noise on the non-conserved hydrodynamic modes, i.e. the stress modes. This approach works reasonably well in the hydrodynamic limit but for short length scales the fluctuations are underrepresented due to interaction with the non-hydrodynamic degrees of freedom which are typically called the 'ghost'-modes. Adhikari {\em et al.} \cite{adhikari-2005} recognized the necessity to include noise on all non-conserved degrees of freedom, including the non-physical 'ghost'-modes and D\"{u}nweg {\em et al.} \cite{duenweg-2007} reformulated this approach to follow a detailed-balance condition description. All of these publications describe a fluctuating isothermal ideal gas. Just recently there was significant progress in extending this concept to non-ideal equations of state \cite{gross-2010, gross-2011, ollila-2011}.
The Adhikari implementation employs a multi-relaxation time (MRT) method similar to the one originally introduced by d'Humieres \cite{dhumieres-1992} except that the modes are orthogonal with respect to the Hermite norm. This allows for independent relaxation of the physically relevant moments. In particular it simplifies the construction of a noise term that does not violate conservation laws while allowing for non-correlated noise on all other degrees of freedom. The derivation of the fluctuation-dissipation theorem in both Adhikari's and D\"{u}nweg's approaches requires the MRT transforms to be orthogonal with respect to a certain norm. In the case of a fluctuating ideal gas this norm depends on the equilibrium distribution. However, in all previous publications the equilibrium distribution in this norm is taken only to zeroth order, i.e. only the weight factors in the equilibrium distribution are used. The result is that the MRT orthogonality condition employed is identical to what is typically known as the Hermite norm \cite{benzi-1992}. This approximation, as we first discussed in \cite{kaehler-2011} and show later in this paper, formally introduces non-Galilean invariant terms. We investigate here the effects of using this zeroth order approximation with respect to fluctuations in the context of non-zero flow speeds. The observed Galilean invariance violations suggest that this approximation may be inappropriate in some cases. To avoid this approximation we developed a novel kind of lattice Boltzmann method which includes the full second order expression and which we expected to significantly reduce the observed Galilean invariance violations. Such a method necessarily has a local collision matrix that depends on the velocity at the respective lattice site.
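The Hermite-norm orthogonality referred to here is orthogonality of the moment vectors with respect to the lattice-weight inner product $\langle u,v\rangle=\sum_i w_i u_i v_i$. The sketch below (illustrative; the normalization and ordering differ from any specific published basis) constructs such a basis for D2Q9 by Gram-Schmidt orthogonalization of velocity monomials and verifies the orthogonality exactly:

```python
from fractions import Fraction

# D2Q9 velocities and the lattice weights that define the Hermite norm
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
     (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [Fraction(4, 9)] + [Fraction(1, 9)] * 4 + [Fraction(1, 36)] * 4

def inner(u, v):
    """Hermite-norm inner product <u, v> = sum_i w_i u_i v_i."""
    return sum(w * a * b for w, a, b in zip(W, u, v))

# start from the monomials 1, cx, cy, cx^2, ... evaluated on the nine
# velocities and Gram-Schmidt them with the weighted inner product
monomials = [(0, 0), (1, 0), (0, 1), (2, 0), (0, 2),
             (1, 1), (2, 1), (1, 2), (2, 2)]
basis = []
for px, py in monomials:
    v = [Fraction(cx ** px * cy ** py) for cx, cy in C]
    for b in basis:
        coeff = inner(v, b) / inner(b, b)
        v = [vi - coeff * bi for vi, bi in zip(v, b)]
    basis.append(v)

# the nine moment vectors are mutually orthogonal in the Hermite norm;
# the first three span the conserved density and momentum modes
for p in range(9):
    for q in range(p):
        assert inner(basis[p], basis[q]) == 0
```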
The paper is structured as follows: In section two we present a more detailed derivation based on Adhikari's noise implementation to show where the non-Galilean invariant terms originate. We elaborate on the source of the orthogonality condition and the consequences of the zeroth order approximation and illustrate the impact on the MRT transforms. In section three we test the current literature standard for the example of a D2Q9 simulation. We measure the validity of two core assumptions of the derivation in the context of large flow speeds and find that Galilean invariance is indeed violated. Section four then discusses approaches to remedy the Galilean invariance violations. In particular we move away from the zeroth order orthogonality condition and attempt to introduce first and second order velocity terms of the equilibrium distribution. As a consequence we derive a lattice Boltzmann method for which the MRT transforms become locally velocity dependent. However, a simplistic implementation of this method is numerically inefficient. This inefficiency can be overcome by introducing look-up tables. The resulting LB scheme's computational cost is only slightly larger than that of the Hermite norm implementation and Galilean invariance violations are significantly reduced.
\section{Lattice Boltzmann simulation of a fluctuating ideal gas}
In order to illustrate the origin of Galilean invariance violations in fluctuating lattice Boltzmann implementations we present a short derivation of the fluctuating ideal gas in the lattice Boltzmann context. The derivation presented is based on the work of Adhikari {\em et al.} \cite{adhikari-2005}, who first recognized the necessity of including noise on all non-conserved degrees of freedom. The derivation given in their original paper is not very detailed and we clarify some of the omitted steps in this section. We put emphasis on a clear notation that separates the velocity space distribution functions $f_i$ from the moment space moments, which we call $M^a$.
The fluctuating lattice-Boltzmann equation is given by
\begin{align}
\label{eqn:LBE1}
&f_i(\vecc{x}+v_i, t+1) = \\\nonumber
&f_i(\vecc{x}, t) + \sum_j \Lambda_{ij}\left\lbrack f_j(\vecc{x}, t) - f_j^0(\vecc{x},t)\right\rbrack + \xi_i(\vecc{x}, t),
\end{align}
where the $f_i$ are densities associated with the velocities $v_{i}$. The local equilibrium distribution depends on position and time through the local density $\rho = \sum_i f_i$ and velocity $\mathbf{u} = \sum_i f_i \mathbf{v}_i / \rho$. The structure of the collision matrix $\Lambda_{ij}$ is discussed later in this section. This is the standard BGK lattice-Boltzmann equation with an added noise term $\xi_i(\vecc{x}, t)$. These noise terms must be chosen such that conserved quantities $\rho$, $\vecc{j}$, where $\mathbf{j} = \sum_i f_i \mathbf{v}_i$, are not changed and a proper fluctuation dissipation theorem (FDT) is obeyed. How we obtain the latter while ensuring the former is outlined below.
Throughout this paper we use Qian's second order expansion \cite{qian-1992} of the continuous Maxwell-Boltzmann distribution as the expression for the equilibrium distribution
\beq{eqn:f0}
f_i^0(\rho, \vecc{u}, \theta) = \rho w_i \left\lbrack 1 + \frac{1}{\theta} \vecc{u}.v_i + \frac{1}{2\theta^2}\left(\vecc{u}.v_i\right)^2 - \frac{1}{2\theta}\vecc{u}.\vecc{u}\right\rbrack.
\end{equation}
This form is typically used for simulations of isothermal hydrodynamics. The extension to thermal hydrodynamics is conceptually straightforward.
All references below to zeroth, first or second order terms in velocity of the equilibrium distribution are to be understood in terms of powers of $\vecc{u}$ in this expression.
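As an illustration, this equilibrium distribution and its conserved moments can be sketched in a few lines of Python. The D2Q9 velocity ordering, weights, and $\theta = 1/3$ below are assumptions of this sketch; the paper's own ordering is fixed in Fig.~\ref{fig:d2q9} and the appendix.

```python
import numpy as np

# D2Q9 velocities and weights; this ordering is an assumption of the sketch
# and need not match the paper's Fig. 1.
V = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
THETA = 1/3   # lattice temperature theta (the speed of sound squared) for D2Q9

def f_eq(rho, u):
    """Qian's second order equilibrium distribution f_i^0(rho, u, theta)."""
    uv = V @ u          # u . v_i for every lattice velocity
    uu = u @ u          # u . u
    return rho * W * (1 + uv/THETA + uv**2/(2*THETA**2) - uu/(2*THETA))
```

By construction the zeroth and first velocity moments of this expansion reproduce $\rho$ and $\rho\vecc{u}$ exactly, which the assertions below verify.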
In order to gain independent access to conserved and non-conserved moments it is useful to shift from Boltzmann type particle distributions $f_i$ to what is called generalized lattice-Boltzmann, moment space representation, or multi relaxation time (MRT) representation \cite{dhumieres-1992, lallemand-2000}. One thus gains direct access to the hydrodynamically relevant moments. For this purpose a forward transform from velocity space with its density functions $f_i$ to moment space with its so-called moments $M^a$,
\begin{equation}
\label{eqn:transform1}
M^a(\vecc{x}, t) = \sum_i m_i^a f_i(\vecc{x}, t),
\end{equation}
and the corresponding back transform
\begin{equation}
\label{eqn:transform2}
f_i(\vecc{x}, t) = \sum_a n_i^a M^a(\vecc{x}, t)
\end{equation}
must be chosen. While the original matrix elements $m_i^a$ and $n_i^a$ in \cite{dhumieres-1992} were identical, this is not necessary; they need only satisfy the orthogonality conditions
\begin{equation}
\label{eqn:symmetry}
\sum_i m_i^a n_i^b = \delta^{ab} \text{ and } \sum_a m_i^a n_j^a = \delta_{ij}.
\end{equation}
The particular choice of these transforms aims to generate a simple form for the fluctuation dissipation theorem and is of key importance to the validity of the noise derivation and to Galilean invariance or the lack thereof. As such they differ from those in the publications introducing the MRT formalism \cite{dhumieres-1992, lallemand-2000}. At least in the case of the ideal gas implementation it is convenient to choose the moments $M^a$ such that the representation of the collision matrix $\Lambda$ in moment space is diagonal, $\Lambda^{ab} = -\frac{1}{\tau^a} \delta^{ab}$.
For practical purposes it is then useful to perform the collision in moment space. The fluctuating LBE \eref{eqn:LBE1} is then written as
\begin{align}
\label{eqn:LBEMRT}
&f_i(\vecc{x} + v_i, t + 1) - f_i(\vecc{x}, t) = \\
\nonumber
&\sum_a n_i^a \left \lbrace\sum_b \Lambda^{ab} \left \lbrack M^b(\vecc{x}, t) - M^{b,0}(\vecc{x},t)\right \rbrack + \xi^a N \right \rbrace
\end{align}
where $\xi^a$ is the noise amplitude associated with moment $M^a$ and $N$ is a random number drawn from a Gaussian distribution with unit variance. The primary advantage here is that we gain independent access to the hydrodynamically relevant physical moments and can choose the noise amplitudes $\xi^a$ such that conservation laws are not violated, i.e. $\xi^{a} = 0$ for the conserved moments.
Now we separate the $f_i$ in \eref{eqn:LBE1} into their global mean values and a local fluctuating term
\beq{eqn:dfdef}
f_i = \langle f_i \rangle + \delta f_i
\end{equation}
and we obtain
\begin{align}
\label{eqn:LBE2}
&\langle f_i \rangle + \delta f_i(\vecc{x}+v_i, t+1) = \langle f_i \rangle + \delta f_i(\vecc{x}, t) \\ \nonumber
&+ \sum_j \Lambda_{ij}\left\lbrack \langle f_j \rangle + \delta f_j(\vecc{x}, t) - \langle f_j^0 \rangle - \delta f_j^0(\vecc{x},t)\right\rbrack \\ \nonumber
&+ \xi_i(\vecc{x}, t).
\end{align}
Subtracting the $\langle f_i \rangle$ and assuming
\beq{eqn:efif0}
\langle f_i \rangle = f_i^0(\rho_0, \mathbf{u}_0),
\end{equation}
where $\rho_0$ and $\mathbf{u}_0$ are the equilibrium values of the density and the velocity, yields a LBE for the fluctuation part of the distribution
\begin{align}
\label{eqn:LBE3}
&\delta f_i(\vecc{x}+v_i, t+1) = \\ \nonumber
&\delta f_i(\vecc{x}, t) + \sum_j \Lambda_{ij}\left\lbrack \delta f_j(\vecc{x}, t) - \delta f_j^0(\vecc{x},t)\right\rbrack + \xi_i(\vecc{x}, t).
\end{align}
We can now Fourier transform in space and apply the moment space transform $\sum_i m_i^a$ to obtain the moment space evolution equation in $k$-space
\begin{align}
\label{eqn:LBEft5}
&\delta M^a(k, t+1) = \sum_i \sum_b m_i^a e^{-ikv_i} n_i^b \Big\lbrace \delta M^b(k, t) + \\\nonumber
&\sum_j \sum_c \sum_d \Lambda^{bc} m_j^c n_j^d \left\lbrack \delta M^d(k,t) - \delta M^{0,d}(k,t)\right\rbrack +\\\nonumber
&\xi^b(k,t)\Big\rbrace,
\end{align}
where we also used $\Lambda_{ij} = \sum_a\sum_b n_i^a \Lambda^{ab} m_j^b$.
We now assume that we can choose the moments such that the multi relaxation time collision operator is diagonal in moment space, i.e. $\Lambda^{ab} = -\delta^{ab} \frac{1}{\tau^a}$. Using $\Gamma^{ab}(k) = \sum_i m_i^a n_i^b e^{-ikv_i}$ and $\delta M^0 = 0$ we thus get the evolution equation of the fluctuations in spatial Fourier representation of moment space
\begin{align}
\label{eqn:LBEft8}
&\delta M^a(k, t+1) = \\\nonumber
&\sum_b \Gamma^{ab}(k) \left\lbrace \left(1 - \frac{1}{\tau^b} \right) \delta M^b(k,t) +\xi^b(k,t)\right\rbrace.
\end{align}
Taking the outer product of $\delta M^a$ with itself, performing an ensemble average and substituting $r^a = 1-1/\tau^a$ we obtain
\begin{align}
\label{eqn:outerproduct}
&\left\langle \delta M^a(k, t+1) \delta M^c(k, t+1) \right\rangle = \\\nonumber
&\Big\langle \sum_b \sum_d \Gamma^{ab} \big\lbrack r^b \delta M^b(k, t) + \xi^b \big\rbrack \Gamma^{cd}\\\nonumber
&\left\lbrack r^d \delta M^d(k, t) + \xi^d \right\rbrack \Big\rangle.
\end{align}
For an ideal gas we know the results to be $\vecc{k}$-independent. Therefore Adhikari {\em et al.} consider only the case $k = 0$, at which $\Gamma^{ab} = \delta^{ab}$. They also invoke stationarity of equal time correlators, $\langle \delta M^a (t+1) \delta M^b (t+1) \rangle = \langle \delta M^a(t) \delta M^b(t)\rangle$, and get
\begin{align}
\label{eqn:outerproduct2}
&\left\langle \delta M^a(t+1) \delta M^c(t+1) \right\rangle = r^c r^a \left\langle \delta M^a(t) \delta M^c(t) \right\rangle + \\ \nonumber
&r^c \left\langle \delta M^c(t) \xi^a(t) \right\rangle + r^a \left\langle \delta M^a(t) \xi^c(t) \right\rangle + \left\langle \xi^a \xi^c \right\rangle.
\end{align}
Now, using the fact that the current system state is independent of the noise contribution, i.e. $\langle \delta M^a(t) \xi^c(t) \rangle = 0$, they obtain
\begin{eqnarray}
\label{eqn:FDT}
\left\langle \xi^a \xi^c \right\rangle & = & (1 - r^a r^c) \left\langle \delta M^a \delta M^c \right\rangle \nonumber \\
& = & \frac{\tau^a + \tau^c - 1}{\tau^a \tau^c} \left\langle \delta M^a \delta M^c \right\rangle ,
\end{eqnarray}
which acts as the fluctuation dissipation theorem (FDT). It relates the noise to the moment fluctuations. What is left is finding a prediction for $\langle \delta M^a \delta M^b \rangle$.
For the case of the ideal gas~\cite{lifshitz-1981} they use the fact that the distribution functions $f_i$ follow Poisson statistics with a mean value and variance of $\langle f_i\rangle$. Thus with \eref{eqn:efif0} they get
\beq{eqn:dfdf}\langle \delta f_i \delta f_j \rangle = f_i^0 \delta_{ij}.
\end{equation}
The forward transform to moment space can now be applied to this correlator to obtain
\begin{align}
\label{eqn:fluctf2}
\langle \delta M^a \delta M^b \rangle = \sum_i \sum_j m_i^a m_j^b \langle \delta f_i \delta f_j\rangle = \\ \nonumber \sum_i \sum_j m_i^a m_j^b f_i^0 \delta_{ij}.
\end{align}
This implies that the moment fluctuations and by \eref{eqn:FDT} the noise terms are generally correlated. However, we can decouple these terms by choosing $n_i^a = m_i^a f_i^0/\rho$ because then according to \eref{eqn:symmetry}
\beq{eqn:fluctf3}
\sum_i m_i^a m_i^b f_i^0/\rho = \delta^{ab}
\end{equation}
and thus
\beq{eqn:fluctf4}
\langle \delta M^a \delta M^b \rangle = \rho \delta^{ab}.
\end{equation}
Of course one also has to show that this is consistent with identifying the $M^a$ with the hydrodynamic moments. For a discussion of this see \cite{kaehler-2011}.
Now that it has been established that the moment fluctuations can be decoupled according to \eref{eqn:fluctf4} we can solve \eref{eqn:FDT} for the noise amplitude
\begin{equation}
\label{eqn:noiseamp}
\xi^a = \frac{1}{\tau^a} \sqrt{\rho \left(2 \tau^a - 1\right)} .
\end{equation}
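The consistency of this amplitude with the FDT of \eref{eqn:FDT} is easy to check numerically. The following sketch (plain Python; $\tau$ and $\rho$ are free parameters of the sketch) verifies that the amplitude reproduces the diagonal FDT variance and, equivalently, that the linear update $\delta M(t+1) = (1 - 1/\tau)\,\delta M(t) + \xi N$ has stationary variance $\rho$, as required by \eref{eqn:fluctf4}.

```python
import math

def noise_amplitude(rho, tau):
    """Moment-space noise amplitude xi^a of the equation above; tau > 1/2."""
    return math.sqrt(rho * (2.0 * tau - 1.0)) / tau

def fdt_variance(rho, tau):
    """Diagonal case a = c of the FDT: <xi xi> = (2 tau - 1)/tau^2 <dM dM>."""
    return (2.0 * tau - 1.0) / tau**2 * rho
```

With $r = 1 - 1/\tau$ the stationary variance of the update is $\xi^2/(1 - r^2) = \rho$, so the noise amplitude above is exactly the one that keeps the moment fluctuations at their equilibrium value.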
The actual implementation performs the collision in moment space according to \eref{eqn:LBEMRT}, where the moments $M^b$ are constructed at each time step by the standard forward transform. The streaming, however, still has to happen in velocity space, and consequently each update involves two matrix transforms.
Of course, the problem here is that such an orthogonality condition \eref{eqn:fluctf3} is difficult to fulfill at all times and it is not entirely clear which values for $\rho$ and $\vecc{u}$ we have to choose for use in the equilibrium distribution. Both Adhikari\cite{adhikari-2005} and D\"unweg\cite{duenweg-2007} implicitly assume very low flow speeds or the zeroth order expression
\beq{eqn:u=0}
\lim_{\vecc{u} \to 0} f_i^0\left(\rho, \vecc{u}\right) = \rho w_i ,
\end{equation}
thereby avoiding the aforementioned problem and simplifying the orthogonality condition to
\beq{eqn:fluctf5}
\sum_i m_i^a m_i^b w_i = \delta^{ab}.
\end{equation}
This implies $n_i^a = m_i^a w_i$ and is identical to what is frequently called the Hermite norm, originally introduced by Benzi \cite{benzi-1992}. The orthogonality condition \eref{eqn:fluctf5} thus specifies the requirements on the transforms in addition to the necessity that they preserve hydrodynamics. An extensive study of the second condition has been published in \cite{kaehler-2011}. There we found that the Hermite norm of \eref{eqn:fluctf5} does indeed preserve hydrodynamics and that, in fact, we are free to add multiples of the conserved quantity moments to the hydrodynamic modes without impacting the validity of the hydrodynamic equations.
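As a sanity check of the Hermite norm condition, the sketch below constructs an orthonormal D2Q9 basis by Gram-Schmidt orthogonalization under the weight $w_i$ and verifies both conditions of \eref{eqn:symmetry}, as well as the fact that noise restricted to the non-conserved modes leaves $\rho$ and $\vecc{j}$ untouched. The candidate moment list and velocity ordering are assumptions of this sketch, not the appendix tables.

```python
import numpy as np

V = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
vx, vy = V[:, 0], V[:, 1]

# Candidate moments: rho, j_x, j_y, stresses, then higher ("ghost") modes.
raw = np.array([np.ones(9), vx, vy, vx*vx - vy*vy, vx*vy, vx*vx + vy*vy,
                vx*(vx*vx + vy*vy), vy*(vx*vx + vy*vy), vx*vx*vy*vy])

def gram_schmidt(rows, weight):
    """Orthonormalize rows under the scalar product sum_i a_i weight_i b_i."""
    out = []
    for r in rows:
        r = r.copy()
        for q in out:
            r -= q * np.sum(q * weight * r)
        out.append(r / np.sqrt(np.sum(r * weight * r)))
    return np.array(out)

M = gram_schmidt(raw, W)     # forward transform m_i^a (rows are modes)
N = M * W                    # back transform n_i^a = m_i^a w_i
```

Because the density and momentum rows are built from $1_i$ and $v_{i\alpha}$, a noise vector that vanishes on the first three modes adds no net mass or momentum after the back transform.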
The choice of the zeroth order approximation in \eref{eqn:fluctf5} is, however, not well documented or motivated in the original literature and gives rise to the question of whether Galilean invariance violations of the fluctuations result as a consequence.
\section{Galilean Invariance Violations in the Hermite Norm Implementation}
\begin{figure}
\subfigure{
\begin{psfrags}
\includegraphics[width=0.2\textwidth, angle = 0]{d2q9scheme.eps}
\end{psfrags}
}
\caption{Basis vectors $v_i$ of the D2Q9 scheme used in all simulations in this manuscript.}
\label{fig:d2q9}
\end{figure}
First we want to evaluate what effect choosing the simplified norm of \eref{eqn:fluctf5} has on the Galilean invariance of a fluctuating lattice Boltzmann implementation. Here we show the numerical results for an isothermal D2Q9 fluctuating lattice Boltzmann method with periodic boundary conditions. Moment space transforms are generated with respect to the Hermite norm of \eref{eqn:fluctf5}. The basis vectors $v_i$ are shown in Fig.~\figref{fig:d2q9}. All $i$ indices in the following correspond to these basis vectors. The details of the D2Q9 Hermite norm transforms and the equilibrium moments are documented in appendix \sref{sec:apphermite}.
The results in the following were all obtained in a 2D lattice Boltzmann simulation of size $21\times21$. The odd side lengths are chosen to avoid the independent conservation of momentum components on odd and even lattice sites in either dimension. This occurs for even side lengths because collisions conserve momentum and the streaming of the densities that constitute momentum and could interact always advances two lattice sites at a time; consequently momenta on odd and even numbered lattice sites would never interact. We use a large average density of $\rho_0 = 10^6$ to avoid stability issues due to local negative density events. These can occur when the noise $\xi_i$ on the distribution functions $f_i$ exceeds the value of these distribution functions. This is more likely for small $\rho$ as the noise amplitude in moment space \eref{eqn:noiseamp} is proportional to $\sqrt{\rho}$.
All averages were taken over a simulation time of $10^6$ iterations after a thermalization phase of $10^5$ iterations to equilibrate the system.
The fundamental identity that allows us to decouple the moment fluctuations is given by \eref{eqn:dfdf}. We can verify its validity in the simulation directly by measuring $\langle \delta f_i \delta f_j \rangle$ as a function of $u_{x,0}$ and comparing it to the $f_i^0$ and $w_i$ of \eref{eqn:dfdf} and \eref{eqn:dfidfj2dmadmb}. If the ideal gas hypothesis were to hold we would expect \eref{eqn:dfdf} to be fulfilled independently of $\vecc{u}$. However, the use of the Hermite norm \eref{eqn:fluctf5} suggests that we might find \eref{eqn:dfdf} fulfilled only to zeroth order, i.e. with the weight factors $w_i$.
\begin{figure}
\begin{psfrags}
\includegraphics[width=0.3\textwidth, angle = 270]{wi_f0f0.eps}
\end{psfrags}
\caption{$\langle \left( \delta f_0 \right)^2 \rangle$ in a $21\times21$ D2Q9 fluctuating LB simulation employing the Hermite norm. We plot $w_i$ and $f_i^0$ for comparison.}
\label{fig:wdf0df0}
\end{figure}
\begin{figure}
\begin{psfrags}
\includegraphics[width=0.3\textwidth, angle = 270]{wi_f14f14.eps}
\end{psfrags}
\caption{$\langle \left( \delta f_i \right)^2 \rangle$ for $i=1...3$ in a $21\times21$ D2Q9 fluctuating LB simulation employing the Hermite norm. We plot $w_i$ and $f_i^0$ for comparison. $\langle \left( \delta f_4 \right)^2 \rangle$ is not shown as it is identical to $\langle \left( \delta f_2 \right)^2 \rangle$ for symmetry reasons.}
\label{fig:wdf14df14}
\end{figure}
\begin{figure}
\begin{psfrags}
\includegraphics[width=0.3\textwidth, angle = 270]{wi_f58f58.eps}
\end{psfrags}
\caption{$\langle \left( \delta f_i \right)^2 \rangle$ for $i=5...8$ in a $21\times21$ D2Q9 fluctuating LB simulation employing the Hermite norm. We plot $w_i$ and $f_i^0$ for comparison. $\langle \left( \delta f_8 \right)^2 \rangle$ and $\langle \left( \delta f_7 \right)^2 \rangle$ are not shown as they appear identical to $\langle \left( \delta f_5 \right)^2 \rangle$ and $\langle \left( \delta f_6 \right)^2 \rangle$, respectively, on the scale of this plot.}
\label{fig:wdf58df58}
\end{figure}
\begin{figure}
\begin{psfrags}
\includegraphics[width=0.3\textwidth, angle = 270]{wi_f0fi.eps}
\end{psfrags}
\caption{Off-diagonal correlators $\langle \delta f_0 \delta f_i \rangle$ for $i=1...8$ in a $21\times21$ D2Q9 fluctuating LB simulation employing the Hermite norm. $\langle \delta f_0 \delta f_4 \rangle$, $\langle \delta f_0 \delta f_7 \rangle$, and $\langle \delta f_0 \delta f_8 \rangle$ are omitted as they behave identically to $\langle \delta f_0 \delta f_2 \rangle$, $\langle \delta f_0 \delta f_6 \rangle$, and $\langle \delta f_0 \delta f_5 \rangle$, respectively.}
\label{fig:wdf0dfi}
\end{figure}
In Figs.~\ref{fig:wdf0df0},~\ref{fig:wdf14df14}, and~\ref{fig:wdf58df58} we show the simulation results for all unique $\langle \delta f_i \delta f_i \rangle$ correlators as functions of $u_{x,0}$. We find that with increasing velocity $u_{x,0}$ the correlators deviate strongly from both the weights $w_i$ and the equilibrium distributions $f_i^0$. In this implementation the correlators approach neither the $w_i$ nor the $f_i^0$, and in some cases not even an intermediate value. For correlators corresponding to base velocities without an $x$-component ($\langle \delta f_0^2\rangle$, $\langle \delta f_2^2\rangle$, $\langle \delta f_4^2\rangle$) the trend opposes that of the $f_i^0$. In these plots and all similar figures in this paper the statistical error bars are omitted when they are smaller than the symbol size.
\begin{figure}
\begin{psfrags}
\psfrag{XXXXXXXXXXXXXXXXXRR}[Bl][Br]{\scriptsize\hspace{-0.1\textwidth} $\langle \delta \rho \delta\rho \rangle$}
\psfrag{XXXJXJX}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta j_x \delta j_x \rangle$}
\psfrag{XXXJYJY}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta j_y \delta j_y \rangle$}
\psfrag{XXXPXXMPYY}[Bl][Br]{\scriptsize\hspace{-0.1\textwidth} $\langle \left( \delta \Pi_{xx-yy} \right) ^2 \rangle$}
\psfrag{XXXPXY}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \left( \delta \Pi_{xy} \right)^2 \rangle$}
\psfrag{XXXPXXPPYY}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \left( \delta \Pi_{xx+yy} \right)^2 \rangle$}
\psfrag{XXXG1G1}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta q_x \delta q_x \rangle$}
\psfrag{XXXG2G2}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta q_y \delta q_y \rangle$}
\psfrag{XXXG3G3}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta \epsilon \delta \epsilon \rangle$}
\includegraphics[width=0.3\textwidth, angle = 270]{wi_mama.eps}
\end{psfrags}
\caption{Diagonal moment correlators $\langle \delta M^a \delta M^a\rangle$, normalized to $\rho$ according to \eref{eqn:fluctf4}, in a $21\times21$ D2Q9 fluctuating LB simulation employing the Hermite norm.}
\label{fig:wdmadma}
\end{figure}
In previous publications \cite{adhikari-2005, gross-2010} the fluctuations were characterized by the fluctuations of the hydrodynamics and ghost moments. The corresponding moment correlators follow directly from the distribution function deviations according to
\beq{eqn:dfidfj2dmadmb}
\langle \delta M^a \delta M^b \rangle = \sum_{ij} m_i^a m_j^b \langle \delta f_i \delta f_j \rangle,
\end{equation}
and are arguably of more practical importance since they represent the fluctuations of the hydrodynamic fields.
These correlators were expected, in the theory of \cite{adhikari-2005, duenweg-2007, gross-2010, gross-2011, ollila-2011}, to obey $\langle \delta M^a \delta M^b \rangle = \rho \delta_{ab}$. However, for this to hold we would need $\langle \delta f_i \delta f_j \rangle = \rho w_i \delta_{ij}$ in \eref{eqn:dfidfj2dmadmb}, which is not the case for non-zero velocities, as we have shown above. We show the observed deviations for the diagonal correlators in Fig.~\figref{fig:wdmadma}. Here the correlator of the current in $x$-direction, $\langle \delta j_x \delta j_x \rangle$, exhibits the largest deviations.
Note that, while most $f_i$ are not symmetric with regard to the $u_{x,0} \rightarrow -u_{x,0}$ inversion, all the moments are constructed to be either symmetric or antisymmetric under $u_{x,0} \rightarrow -u_{x,0}$.
\begin{figure}
\subfigure[Linear coefficient $l$, Hermite norm]{
\begin{psfrags}
\includegraphics[width=0.3\textwidth, angle = 0]{wi_mm1.eps}
\end{psfrags}
}
\subfigure[Quadratic coefficient $q$, Hermite norm]{
\begin{psfrags}
\includegraphics[width=0.3\textwidth, angle = 0]{wi_mm2.eps}
\end{psfrags}
}
\caption{Linear and quadratic coefficients $l$ and $q$ of all $81$ ($45$ unique) correlators as a result of fitting $\langle \delta M^a \delta M^b\rangle(u_{x,0})-\delta^{ab}$ to $l u_{x,0} + q u_{x,0}^2$. Brighter color indicates larger coefficients. Moments were reordered to identify correlations visually. To accommodate the symbol size the stress moment labels were abbreviated: $\Pi_\times = \Pi_{xy}$, $\Pi_- = \Pi_{xx-yy}$, $\Pi_+ = \Pi_{xx+yy}$. The coefficient at position (0, 1) in image (a) corresponds to the linear portion of the $\langle \delta j_x \delta q_x \rangle$ correlator.
Coefficients were measured on a $21 \times 21$ D2Q9 simulation employing the Hermite norm. The fit range used was $-0.25 \le u_{x,0} \le 0.25$.
}
\label{fig:wdmadmbfit1}
\end{figure}
To obtain a quantitative measure of the velocity dependency of all 81 (45 unique) correlators in
\eref{eqn:dfidfj2dmadmb} we fit a second order polynomial $l u_{x,0} + q u_{x,0}^2$ to $\langle \delta M^a \delta M^b \rangle / \rho_0 - \delta^{ab}$. The resulting coefficients $l$ for odd combinations and $q$ for even combinations give a rough estimate of the deviation of the particular moment correlators and are depicted in Fig.~\figref{fig:wdmadmbfit1}. We notice in Fig.~\figref{fig:wdmadmbfit1}(b) that while the quadratic dependency of the correlations on the velocity is present in several correlators, it is particularly apparent for the square correlators. The linear dependency appears only in cross-correlators which are anti-symmetric under $u_{x,0} \rightarrow -u_{x,0}$, as seen in Fig.~\figref{fig:wdmadmbfit1}(a).
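The fit itself is a plain linear least-squares problem. The following sketch runs the procedure on synthetic data with hypothetical coefficients, not the measured ones.

```python
import numpy as np

def fit_lq(u, corr):
    """Least-squares fit of corr(u) ~ l*u + q*u**2 (no constant term)."""
    A = np.column_stack([u, u**2])       # design matrix with columns u, u^2
    (l, q), *_ = np.linalg.lstsq(A, corr, rcond=None)
    return l, q
```

Because the constant term is omitted, a purely antisymmetric correlator yields $q \approx 0$ and a purely symmetric one yields $l \approx 0$, matching the odd/even classification used above.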
The ensemble averages of the correlation functions shown so far do not resolve the length scale dependency of the deviations we observed. To gain some understanding here we measure the static structure factor
\begin{equation}
\label{eqn:sfactorr}
S_{\vecc{k}}(\rho) = \frac{1}{\rho_0} \left\langle \delta \rho(\vecc{k}) \delta \rho(\vecc{-k}) \right\rangle,
\end{equation}
the $j_x$ momentum correlator
\begin{equation}
\label{eqn:sfactoru}
S_{\vecc{k}}(j_x) = \frac{1}{\rho_0} \left\langle \delta j_x(\vecc{k}) \delta j_x(\vecc{-k}) \right\rangle,
\end{equation}
at chosen velocities and the momentum cross correlator
\begin{equation}
\label{eqn:rcorrelator}
R_{\vecc{k}}(j_x, j_y) = \frac{1}{\rho_0} \left\langle \delta j_x(\vecc{k}) \delta j_y(\vecc{-k}) \right\rangle
\end{equation}
at imposed average system velocities $u_{x,0} = 0.0$, $u_{x,0} = 0.1$, and $u_{x,0} = 0.2$. We chose $R_{\vecc{k}}(j_x, j_y)$ in reference to Donev {\it et al.}'s investigation of the accuracy of finite volume schemes \cite{donev-2009}.
Here $\delta \rho(\vecc{k}) = \sum_{\vecc{x}} \lbrack\rho(\vecc{x})-\rho_0\rbrack e^{-i \vecc{k} \cdot \vecc{x}}$ and $\delta j_x(\vecc{k}) = \sum_{\vecc{x}} \lbrack j_x(\vecc{x})-j_{x,0} \rbrack e^{-i \vecc{k} \cdot \vecc{x}}$ are the discrete spatial Fourier transforms and $\sum_{\vecc{x}}$ is understood to be the summation over all discrete lattice sites.
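A sketch of how such a static correlator can be accumulated from lattice snapshots is given below; the normalization follows \eref{eqn:sfactorr}-\eref{eqn:rcorrelator}, the function names are placeholders, and the averaging over decorrelated snapshots is left to the caller.

```python
import numpy as np

def structure_factor(a, a0, b, b0, rho0):
    """One-snapshot estimate of <delta a(k) delta b(-k)> / rho0 on a periodic
    lattice, using that delta b(-k) = conj(delta b(k)) for real fields."""
    da = np.fft.fft2(a - a0)             # delta a(k)
    db = np.fft.fft2(b - b0)             # delta b(k)
    return (da * np.conj(db)).real / rho0
```

Averaging the returned array over many snapshots gives $S_{\vecc{k}}(\rho)$ for $(a,b)=(\rho,\rho)$, $S_{\vecc{k}}(j_x)$ for $(j_x,j_x)$, and $R_{\vecc{k}}(j_x,j_y)$ for $(j_x,j_y)$.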
In Figs.~\ref{fig:wsrr}, \ref{fig:wsuxux}, and \ref{fig:wsuxuy} we observe that as we increase the velocity the correlators lose their relatively good agreement with the isotropy requirement of the ideal gas, i.e. the wave number independence.
The isotropy of the correlations is destroyed, and the errors are not limited to large $\mathbf{k}$ but impinge on the hydrodynamic (small $\mathbf{k}$) region. Different correlators violate isotropy at different length scales and directions, but we can generalize that the violations for certain length scales and spatial directions exceed those observed at the level of the ensemble averaged correlations discussed so far. As an example the density correlator $S_{\vecc{k}}(\rho)$ deviates by more than $20\%$ on all length scales in the $x$ direction at $u_{x,0} = 0.2$ in Fig.~\figref{fig:wsrr}(c), while the ensemble average finds a deviation of only about $6\%$ in Fig.~\figref{fig:wdmadma}. Comparing Figs.~\ref{fig:wsrr}, \ref{fig:wsuxux}, and \ref{fig:wsuxuy} at $u_{x,0} = 0.2$ with $u_{x,0} = 0.1$ we observe that the structure of the anisotropy is largely independent of the average system speed, although there are small deviations. Another observation is that although $\langle \delta j_x \delta j_y\rangle$ is small compared to other cross correlators in Fig.~\figref{fig:wdmadmbfit1}, this is mostly due to a fortuitous cancellation of errors for different values of $\vecc{k}$. The absolute deviations of $\langle \delta j_x (\vecc{k}) \delta j_y (\vecc{k}) \rangle$ are of similar magnitude to those of $\langle \delta j_x (\vecc{k}) \delta j_x (\vecc{k}) \rangle$.
\begin{figure}
\begin{psfrags}
\psfrag{X / }[Bl][Bc]{$k_x$}
\psfrag{Y / }[Bl][Bc]{$k_y$}
\psfrag{ 10^-3}[Bl][Bc]{$10^{-3}$}
\subfigure[$u_{x,0}=0.0$]{\includegraphics[width=.3\textwidth]{wi_rr_u00.eps}}
\subfigure[$u_{x,0}=0.1$]{\includegraphics[width=.3\textwidth]{wi_rr_u01.eps}}
\subfigure[$u_{x,0}=0.2$]{\includegraphics[width=.3\textwidth]{wi_rr_u02.eps}}
\end{psfrags}
\caption{Static structure factor $S_{\vecc{k}}(\rho)$ at different velocities measured for the Hermite norm.}
\label{fig:wsrr}
\end{figure}
\begin{figure}
\begin{psfrags}
\psfrag{X / }[Bl][Bc]{$k_x$}
\psfrag{Y / }[Bl][Bc]{$k_y$}
\psfrag{ 10^-3}[Bl][Bc]{$10^{-3}$}
\subfigure[$u_{x,0}=0.0$]{\includegraphics[width=.3\textwidth]{wi_uxux_u00.eps}}
\subfigure[$u_{x,0}=0.1$]{\includegraphics[width=.3\textwidth]{wi_uxux_u01.eps}}
\subfigure[$u_{x,0}=0.2$]{\includegraphics[width=.3\textwidth]{wi_uxux_u02.eps}}
\end{psfrags}
\caption{Static structure factor $S_{\vecc{k}}(j_x)$ at different velocities measured for the Hermite norm.}
\label{fig:wsuxux}
\end{figure}
\begin{figure}
\begin{psfrags}
\psfrag{X / }[Bl][Bc]{$k_x$}
\psfrag{Y / }[Bl][Bc]{$k_y$}
\psfrag{ 10^-4}[Bl][Bc]{$10^{-4}$}
\subfigure[$u_{x,0}=0.0$]{\includegraphics[width=.3\textwidth]{wi_uxuy_u00.eps}}
\subfigure[$u_{x,0}=0.1$]{\includegraphics[width=.3\textwidth]{wi_uxuy_u01.eps}}
\subfigure[$u_{x,0}=0.2$]{\includegraphics[width=.3\textwidth]{wi_uxuy_u02.eps}}
\end{psfrags}
\caption{Cross correlator $R_{\vecc{k}}(j_x, j_y)$ at different velocities measured for the Hermite norm.}
\label{fig:wsuxuy}
\end{figure}
In summary we can clearly see that as a function of the fluid velocity we observe strong deviations from the identities in \eref{eqn:fluctf4} and \eref{eqn:dfdf}, and the appearance of off-diagonal correlations which are not present in the case of $\vecc{u} = 0$. We conclude that Galilean invariance is indeed violated and that the fluctuation-dissipation theorem of \eref{eqn:FDT} is no longer diagonalized by the simple choice of $f_i^0 / \rho \approx w_i$ in \eref{eqn:fluctf3}.
\section{Local Velocity dependent Transforms}
The question now is whether we can alleviate the difficulties we have encountered by avoiding the approximation $f_i^0(\vecc{u} = 0) = \rho w_i$ in the normalization condition. The removal of the velocity dependence from the normalization condition is the likely source of the Galilean invariance violations observed. Instead of using \eref{eqn:fluctf5} we now include the velocity dependence of the equilibrium distribution in \eref{eqn:fluctf3}. The orthogonality condition then becomes
\beq{eqn:fnorm}
\sum_i \tilde{m}_i^a (\vecc{u}) \tilde{m}_i^b(\vecc{u}) w_i \left\lbrack 1 + \frac{1}{\theta} \vecc{u}.v_i + \frac{1}{2\theta^2}\left(\vecc{u}.v_i\right)^2 - \frac{1}{2\theta}\vecc{u}.\vecc{u}\right\rbrack = \delta^{ab}
\end{equation}
where the velocity $\vecc{u}(\vecc{r}, t)$ is understood to be local to the lattice site $\vecc{r}$. We obtain a new set of transformation matrices $\tilde{m}_i^a$ by starting with the physical moments $\rho$, $j_x$, $j_y$, $\Pi_{xx-yy}$, $\Pi_{xy}$, $\Pi_{xx+yy}$ and performing a Gram-Schmidt orthogonalization with respect to the new scalar product
\beq{eqn:fnormsp}
\sum_i a_i f_i^0 b_i.
\end{equation}
The iterative procedure then follows
\beq{eqn:gramschmidt}
\hat{m}_i^a = m_i^a - \sum_{b=0}^{a-1} \tilde{m}_i^b \sum_j \tilde{m}_j^b f_j^0 m_j^a
\end{equation}
with an intermediate normalization step
\beq{eqn:gsortho}
\tilde{m}_i^a = \frac{\hat{m}_i^a}{\sqrt{\sum_j \hat{m}_j^a f_j^0 \hat{m}_j^a }}.
\end{equation}
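The construction of \eref{eqn:gramschmidt} and \eref{eqn:gsortho} can be sketched directly in Python. The snippet below builds the velocity dependent transform for one local $\vecc{u}$ using the weight $f_i^0/\rho$ of \eref{eqn:fnorm} and verifies the orthogonality condition together with the equilibrium moment property \eref{eqn:fnormmeq}; the D2Q9 ordering, the ghost-mode candidates, and $\theta = 1/3$ are assumptions of this sketch.

```python
import numpy as np

V = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
THETA = 1/3
vx, vy = V[:, 0], V[:, 1]

def f_eq(rho, u):
    """Second order equilibrium distribution, Qian's expansion."""
    uv, uu = V @ u, u @ u
    return rho * W * (1 + uv/THETA + uv**2/(2*THETA**2) - uu/(2*THETA))

def f_norm_transform(u):
    """Gram-Schmidt orthonormalization of the physical moments under the
    weighted scalar product sum_i a_i (f_i^0 / rho) b_i (the f-norm)."""
    weight = f_eq(1.0, u)                  # f_i^0 / rho at the local velocity
    raw = [np.ones(9), vx, vy, vx*vx - vy*vy, vx*vy, vx*vx + vy*vy,
           vx*(vx*vx + vy*vy), vy*(vx*vx + vy*vy), vx*vx*vy*vy]
    out = []
    for r in raw:
        r = r.copy()
        for q in out:
            r -= q * np.sum(q * weight * r)
        out.append(r / np.sqrt(np.sum(r * weight * r)))
    return np.array(out)                   # rows are the m~_i^a(u)
```

Note that the requested $\vecc{u}$ must lie inside the positivity region of $f_i^0$ discussed below; otherwise the weighted norm is not positive definite and the orthogonalization fails.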
With these new matrix elements $\tilde{m}_i^a$ we can define the physically relevant moments
\beq{eqn:Mtilde}
\tilde{M}^a = \sum_i \tilde{m}_i^a f_i.
\end{equation}
One useful side effect of this transform is that the equilibrium values for all moments other than the density vanish such that
\beq{eqn:fnormmeq}
\tilde{M}^{a,0} =
\left\{
\begin{array}{cl}
\rho & \text{if } a=0\\
0 & \text{otherwise}
\end{array}
\right.
\end{equation}
This is a direct consequence of condition \eref{eqn:fnorm} if we recognize that $\tilde{M}^{a,0} = \sum_i \tilde{m}_i^a f_i^0 \tilde{m}_i^0 = \rho \delta^{a0}$ because the density mode is still the one vector $m_i^0 = \tilde{m}_i^0 = 1_i$.
This new process does not alter the hydrodynamic limit of the lattice Boltzmann method because we only admix into the moments multiples of $\vecc{u}(\vecc{r})$ times the conserved quantity eigenvectors, i.e. the density mode $1_i$ and the momentum modes $v_{i{\alpha}}$. If we interpret the local velocity $\vecc{u}(\vecc{r})$ as an arbitrary constant we do not alter the hydrodynamic equations at all, by virtue of our discussion in \cite{kaehler-2011}. We will refer to \eref{eqn:fnorm} simply as the ``$f$-norm'' in the following.
In order to maintain positive-definiteness of the scalar product \eref{eqn:fnormsp} we must be mindful of the fact that the weight needs to be positive at all times. The second order expansion of the equilibrium distribution \eref{eqn:f0} we use here, however, is not positive for all velocities. For large enough $|\vecc{u}|$ some $f_i^0(\rho, \vecc{u}, \theta) < 0$ and the orthogonalization has no solution.
\begin{figure}
\includegraphics[width=.3\textwidth]{uvalidity.eps}
\caption{The curves $f_i^0(u_{x,0}, u_y)=0$ for all $i$ in the case of the D2Q9 model. In the area inside the curves $f_i^0 > 0$ for all $i$. Outside at least one $f_i^0 < 0$ and consequently the orthogonalization does not find a solution.}
\label{fig:uvalidity}
\end{figure}
In Fig.~\figref{fig:uvalidity} we show the zero crossings of the second order expansion of the equilibrium distribution for the D2Q9 model as a function of $\vecc{u}$. This plot shows the accessible velocity range. As long as our velocities do not fall outside the central area of Fig.~\figref{fig:uvalidity} the scalar product is guaranteed to be positive definite and the Gram-Schmidt procedure will provide a solution.
The matrix elements $\tilde{m}_i^a(\vecc{u}(\vecc{r}))$ we obtain are now functions of the local velocity $\vecc{u}(\vecc{r})$ at lattice site $\vecc{r} = \left( x, y \right)^T$. In principle they have to be evaluated at every lattice site during every update cycle. We have implemented a fluctuating LB simulation with these matrices and the results are encouraging in that Galilean invariance violations are significantly smaller. Some results of these are shown in Figs.~\figref{fig:fdf0df0}, \figref{fig:fdf14df14}, and \figref{fig:fdf58df58}. However, even in the relatively simple D2Q9 model the matrix elements of higher order moments are polynomials of $O(\vecc{u}^{16})$ and therefore the local evaluation of these matrix elements becomes prohibitively costly. Our test implementation used between $95\%$ and $99\%$ of the computation time of an update cycle in the evaluation of the local transforms.
One might think that going to the full second order expansion of $f_i^0$ might not be necessary and that going only to first order in $\mathbf{u}$ would make the structure of the matrix elements significantly simpler. However, working with only the first order expansion introduces anisotropy effects between the different spatial axes. Removing these effectively makes the expressions for the $\tilde{m}_i^a$ even more complicated than the regular second order expressions, for which our Gram-Schmidt orthogonalization renders the moments isotropic.
It is, however, not strictly necessary to calculate the transforms to machine precision. Judging from our observations of the Hermite norm implementation it is sufficient to calculate tables of the matrix elements on a velocity grid with velocities $\vecc{u}_g(g_{\alpha})$, where $g_{\alpha}$ is the grid position, and to use these matrix elements from a look up table in the transforms. The benefit is practicality; the price is that we may not quite obtain the same amount of improvement we would find otherwise.
One caveat is that we lose the convenient form of the equilibrium moments in \eref{eqn:fnormmeq}. In fact the projection of the moments from the representation of the current local velocity to that of the nearest look up table velocity becomes algebraically as complex as the calculation of the matrix elements themselves. However, as we are concerned with a second order theory here we choose to use only terms up to $O\left(\vecc{u}_g^3\right)$. While we do not change the conserved quantities we do change the stress and ghost moments at orders $O\left(\vecc{u}^4\right)$ and higher and thus introduce small errors. An example of these equilibrium moments and the matrix transform elements for D2Q9 can be found in \cite{kaehler-2012-notebook}.
The velocity grid spacing for the look up table can be relatively coarse. It is helpful if the entire look up table of velocities fits into the second level cache of the CPU the simulation is run on. In our D2Q9 test case we typically use a $51 \times 51$ grid with $-0.5 \le u_{g,x} \le 0.5$, $-0.5 \le u_{g,y} \le 0.5$, and $\Delta u_g = 0.02$. Comparing this velocity range with Fig.~\figref{fig:uvalidity} we notice that the corners of this square in velocity space fall outside the valid $f_i^0(\mathbf{u}) > 0$ range. The matrix elements there are simply evaluated to ``not a number'' and the simulation fails once any one of these velocities is reached. In principle one could catch such outliers in the velocity and simply choose the matrix elements for a smaller velocity; the moment projection would still function. However, this would alter the algorithm and the results would no longer be reliable representations of the method discussed here. For applications, especially at high velocities and low densities, it will be necessary to include such an exception handling routine.
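A minimal sketch of the look up table logic follows. Here `build_transform` is a hypothetical stand-in for the costly local orthogonalization, and local velocities are snapped to the nearest grid point; the grid spacing and range follow the values quoted above, and entries outside the validity region are simply skipped.

```python
import numpy as np

def build_lut(build_transform, u_max=0.5, du=0.02):
    """Tabulate transform matrices on a regular velocity grid.
    `build_transform(u)` stands in for the expensive local
    orthogonalization; it may raise ValueError outside the
    f_i^0 > 0 validity region, and those entries are skipped."""
    grid = np.arange(-u_max, u_max + du/2, du)
    lut = {}
    for ux in grid:
        for uy in grid:
            key = (int(round(ux/du)), int(round(uy/du)))
            try:
                lut[key] = build_transform(np.array([ux, uy]))
            except ValueError:  # no orthogonalization exists here
                pass
    return lut, du

def lookup(lut, du, u):
    """Nearest-grid-point transform for the local velocity u."""
    key = (int(round(u[0]/du)), int(round(u[1]/du)))
    if key not in lut:
        raise ValueError("velocity outside the tabulated validity region")
    return lut[key]

# toy stand-in: the "transform" is just the grid velocity itself
lut, du = build_lut(lambda u: u.copy())
v = lookup(lut, du, np.array([0.013, -0.004]))  # snaps to (0.02, 0.0)
```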
One could argue that we might as well have calculated the matrix elements directly to a lower order, foregone the matrix element look up tables and used the original simple equilibrium moments. However, in that case we would violate the conservation laws, and the calculation of the $2q^2$ matrix element polynomials is still significantly more expensive than the evaluation of the $q-d-1$ non-conserved moments in a D$d$Q$q$ lattice Boltzmann configuration.
\begin{figure}
\begin{psfrags}
\psfrag{XXXXXXXXf0f0lt}[Bl][Br]{\scriptsize\hspace{-0.075\textwidth} $\langle \delta f_0 \delta f_0 \rangle$}
\psfrag{XXXf0f0w}[Bl][Br]{\scriptsize\hspace{-0.075\textwidth} $\langle \delta f_0 \delta f_0 \rangle_H$}
\psfrag{XXXf0f0full}[Bl][Br]{\scriptsize\hspace{-0.075\textwidth} $\langle \delta f_0 \delta f_0 \rangle_f$}
\includegraphics[width=0.3\textwidth, angle = 270]{fi_f0f0.eps}
\end{psfrags}
\caption{$\langle \left( \delta f_0 \right)^2 \rangle$ in a $21\times21$ D2Q9 fluctuating LB simulation employing the $f$-norm with look up tables. Equilibrium moments are calculated to third order. $\langle \delta f_0 \delta f_0 \rangle_f$ are data points taken from a fully local implementation that foregoes the look up table solution. We plot the equilibrium distribution $f_0^0$ and the Hermite norm correlator $\langle \delta f_0 \delta f_0 \rangle_H$ for comparison.}
\label{fig:fdf0df0}
\end{figure}
\begin{figure}
\begin{psfrags}
\psfrag{XXXf1f1}[Bl][Br]{\scriptsize\hspace{-0.075\textwidth} $\langle \delta f_1 \delta f_1 \rangle$}
\psfrag{XXXf1f1full}[Bl][Br]{\scriptsize\hspace{-0.075\textwidth} $\langle \delta f_1 \delta f_1 \rangle_f$}
\psfrag{XXXf2f2}[Bl][Br]{\scriptsize\hspace{-0.075\textwidth} $\langle \delta f_2 \delta f_2 \rangle$}
\psfrag{XXXf3f3full}[Bl][Br]{\scriptsize\hspace{-0.075\textwidth} $\langle \delta f_3 \delta f_3 \rangle_f$}
\psfrag{XXXf3f3}[Bl][Br]{\scriptsize\hspace{-0.075\textwidth} $\langle \delta f_3 \delta f_3 \rangle$}
\psfrag{XXXf2f2f4f4full}[Bl][Br]{\scriptsize\hspace{-0.075\textwidth} $\langle \delta f_2 \delta f_2 \rangle_f$}
\includegraphics[width=0.3\textwidth, angle = 270]{fi_f14f14.eps}
\end{psfrags}
\caption{$\langle \left( \delta f_i \right)^2 \rangle$ for $i=1...3$ in a $21\times21$ D2Q9 fluctuating LB simulation employing the $f$-norm. We plot $f_i^0$ for comparison. $\langle \left( \delta f_4 \right)^2 \rangle$ is not shown as it appears identical to $\langle \left( \delta f_2 \right)^2 \rangle$ within the scale of this plot.}
\label{fig:fdf14df14}
\end{figure}
\begin{figure}
\begin{psfrags}
\includegraphics[width=0.3\textwidth, angle = 270]{fi_f58f58.eps}
\end{psfrags}
\caption{$\langle \left( \delta f_i \right)^2 \rangle$ for $i=5...8$ in a $21\times21$ D2Q9 fluctuating LB simulation employing the $f$-norm. We plot $f_i^0$ for comparison. $\langle \left( \delta f_8 \right)^2 \rangle$ and $\langle \left( \delta f_7 \right)^2 \rangle$ are not shown as they appear identical to $\langle \left( \delta f_5 \right)^2 \rangle$ and $\langle \left( \delta f_6 \right)^2 \rangle$ respectively within the scale of this plot.}
\label{fig:fdf58df58}
\end{figure}
\begin{figure}
\begin{psfrags}
\includegraphics[width=0.3\textwidth, angle = 270]{fi_f0fi.eps}
\end{psfrags}
\caption{$\langle \delta f_0 \delta f_i \rangle$ for $i=1...8$ in a $21\times21$ D2Q9 fluctuating LB simulation employing the $f$-norm. }
\label{fig:fdf0dfi}
\end{figure}
To evaluate the implementation of the $f$-norm we perform the same measurements we did for the Hermite norm. We use a D$2$Q$9$ ideal gas simulation with periodic boundaries and a side length of $21$. In Fig.~\figref{fig:fdf0df0} we observe the same $\langle \delta f_0 \delta f_0 \rangle$ correlator we did in Fig.~\figref{fig:wdf0df0}. We find that with the $f$-norm the trend actually does follow the $f_0^0$ prediction: within $-0.2 \le u_{x,0} \le 0.2$ we are in good agreement with $f_0^0$, but at larger speeds we find deviations that are smaller than in the Hermite norm yet still noticeable. In Figs.~\ref{fig:fdf14df14} and~\ref{fig:fdf58df58} we find much better agreement for all other distribution function correlators with the $f$-norm than with the Hermite norm in Figs.~\ref{fig:wdf14df14} and~\ref{fig:wdf58df58}. Again we notice very good agreement for $|u_x| \le 0.2$.
The remaining deviations from the equilibrium distributions we find with the $f$-norm are not an artifact of either the look up table method or the third order expansion of the equilibrium moments. We performed the same measurement with the fully locally orthogonalized set of transforms, albeit with fewer data points due to the much higher computational effort involved. The $\langle \delta f_i \delta f_i \rangle_f$ in Figs.~\figref{fig:fdf0df0},~\figref{fig:fdf14df14}, and \figref{fig:fdf58df58} indicate that the deviations from the equilibrium distributions can indeed not be explained by either the look up table method or the cut-off on the equilibrium moments, as the results obtained from the look up table method with third order equilibrium moments are consistent with those of the fully locally orthogonalized $f$-norm.
\begin{figure}
\begin{psfrags}
\psfrag{XXXXXXXXXXXXXXXXXRR}[Bl][Br]{\scriptsize\hspace{-0.1\textwidth} $\langle \delta \tilde{\rho} \delta\tilde{\rho} \rangle$}
\psfrag{XXXJXJX}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta \tilde{j}_x \delta \tilde{j}_x \rangle$}
\psfrag{XXXJYJY}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta \tilde{j}_y \delta \tilde{j}_y \rangle$}
\psfrag{XXXPXXMPYY}[Bl][Br]{\scriptsize\hspace{-0.1\textwidth} $\langle \left( \delta \tilde{\Pi}_{xx-yy} \right) ^2 \rangle$}
\psfrag{XXXPXY}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \left( \delta \tilde{\Pi}_{xy} \right)^2 \rangle$}
\psfrag{XXXPXXPPYY}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \left( \delta \tilde{\Pi}_{xx+yy} \right)^2 \rangle$}
\psfrag{XXXG1G1}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta \tilde{q}_x \delta \tilde{q}_x \rangle$}
\psfrag{XXXG2G2}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta \tilde{q}_y \delta \tilde{q}_y \rangle$}
\psfrag{XXXG3G3}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta \tilde{\epsilon} \delta \tilde{\epsilon} \rangle$}
\includegraphics[width=0.3\textwidth, angle = 270]{fi_mama.eps}
\end{psfrags}
\caption{Correlators $\langle \delta \tilde{M}^a \delta \tilde{M}^a\rangle_f$ normalized to $\rho$ according to \eref{eqn:fluctf4} in a $21\times21$ D2Q9 fluctuating LB simulation employing the $f$-norm.}
\label{fig:fdmadma}
\end{figure}
\begin{figure}
\begin{psfrags}
\psfrag{XXXXXXXXXXXXXXXXXRR}[Bl][Br]{\scriptsize\hspace{-0.1\textwidth} $\langle \delta \tilde{\rho} \delta\tilde{\rho} \rangle$}
\psfrag{XXXJXJX}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta \tilde{j}_x \delta \tilde{j}_x \rangle$}
\psfrag{XXXJYJY}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta \tilde{j}_y \delta \tilde{j}_y \rangle$}
\psfrag{XXXPXXMPYY}[Bl][Br]{\scriptsize\hspace{-0.1\textwidth} $\langle \left( \delta \tilde{\Pi}_{xx-yy} \right) ^2 \rangle$}
\psfrag{XXXPXY}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \left( \delta \tilde{\Pi}_{xy} \right)^2 \rangle$}
\psfrag{XXXPXXPPYY}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \left( \delta \tilde{\Pi}_{xx+yy} \right)^2 \rangle$}
\psfrag{XXXG1G1}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta \tilde{q}_x \delta \tilde{q}_x \rangle$}
\psfrag{XXXG2G2}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta \tilde{q}_y \delta \tilde{q}_y \rangle$}
\psfrag{XXXG3G3}[Bl][Br]{\scriptsize\hspace{-0.10\textwidth} $\langle \delta \tilde{\epsilon} \delta \tilde{\epsilon} \rangle$}
\includegraphics[width=0.3\textwidth, angle = 270]{wi_mamawf.eps}
\end{psfrags}
\caption{Correlators $\langle \delta \tilde{M}^a \delta \tilde{M}^a\rangle$ normalized to $\rho$ according to \eref{eqn:fluctf4} measured in a $21\times21$ D2Q9 fluctuating LB simulation employing the Hermite norm.}
\label{fig:wfdmadma}
\end{figure}
Measuring the moment space correlators in the $f$-norm poses an interesting question: do we measure with respect to the Hermite norm or the $f$-norm, and in the case of the latter, with respect to which velocity? To answer this question we conduct a thought experiment. $\delta M^a$ should be Galilean invariant for any $a$, in particular for the momentum components. In the Hermite norm we have
\beq{eqn:widjx}
\delta j_{x} = \sum_i m_i^a f_i - \sum_i m_i^a f_i^0 = \sqrt{3} \left( \rho u_x - \rho_0 u_{x,0}\right)
\end{equation}
and for the $f$-norm
\beq{eqn:fidjx}
\delta \tilde{j}_{x} = \sum_i \tilde{m}_i^a f_i - \sum_i \tilde{m}_i^a f_i^0 = \sqrt{3} \rho \left( u_x - u_{x,0}\right).
\end{equation}
Again $\mathbf{u}_0$ is the mean velocity in the system and $\mathbf{u}$ the local velocity at a given lattice site. If we set $\mathbf{u}_0 = 0$ we have $\delta j_{x} = \delta \tilde{j}_{x} = \sqrt{3}\rho u_x$. Introducing a constant velocity offset $-\mathbf{u}_O$, i.e. $\mathbf{u} \rightarrow \mathbf{u} - \mathbf{u}_O$, should leave $\delta j_x$ invariant. If we now interpret $\mathbf{u}_0$ as such an offset, the Hermite norm is clearly not Galilean invariant under velocity offsets as it introduces an extra $ u_{x,0} \left( \rho_0 - \rho \right)$ in \eref{eqn:widjx}, whereas the $f$-norm in \eref{eqn:fidjx} behaves as required. Consequently we use the $f$-norm as it provides the correct measurements that leave the $\delta \tilde{M}^a$ invariant under Galilean transformations. Furthermore we measure with respect to the average system velocity $\mathbf{u}_0$ and average density $\rho_0$. Measuring with respect to the local velocity $\mathbf{u}$ and density $\rho$ is nonsensical as $\delta \rho = 0$ and $\delta \mathbf{j} = 0$ in this case. We thus use the $f$-norm such that $\tilde{m}_i^a \tilde{m}_i^b \langle f_i \rangle = \delta^{ab}$, where we make the approximation of \eref{eqn:efif0}, $\langle f_i \rangle = f_i^0(\rho_0, \mathbf{u}_0)$.
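This behaviour is easy to verify numerically. The sketch below implements the two expressions for $\delta j_x$ and applies a constant offset $-\mathbf{u}_O$ to all velocities; the particular numbers are arbitrary example values, not taken from our simulations.

```python
import numpy as np

SQRT3 = np.sqrt(3.0)

def dj_hermite(rho, ux, rho0, ux0):
    """delta j_x in the Hermite norm: sqrt(3) (rho u_x - rho_0 u_{x,0})."""
    return SQRT3 * (rho * ux - rho0 * ux0)

def dj_fnorm(rho, ux, rho0, ux0):
    """delta j_x in the f-norm: sqrt(3) rho (u_x - u_{x,0})."""
    return SQRT3 * rho * (ux - ux0)

# arbitrary example values for a single lattice site
rho, ux, rho0, ux0, uO = 1.03, 0.12, 1.0, 0.10, 0.05

# apply the constant velocity offset -u_O to all velocities (a Galilean boost)
h_shift = dj_hermite(rho, ux - uO, rho0, ux0 - uO) - dj_hermite(rho, ux, rho0, ux0)
f_shift = dj_fnorm(rho, ux - uO, rho0, ux0 - uO) - dj_fnorm(rho, ux, rho0, ux0)

print(h_shift)  # spurious term sqrt(3) u_O (rho_0 - rho), nonzero
print(f_shift)  # zero up to round-off: the f-norm expression is invariant
```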
Much like the distribution function correlators the moment correlators $\langle (\delta M^a)^2 \rangle$ shown in Fig.~\figref{fig:fdmadma} exhibit significant improvement compared to those of the Hermite norm in Fig.~\figref{fig:wdmadma}. This improvement is smaller than the general trend of the distribution function correlators would imply for some modes. In particular the $\langle (\delta \tilde{\rho})^2 \rangle $, $\langle (\delta \tilde{\Pi}_{xx-yy})^2 \rangle $, and $\langle (\delta \tilde{j}_y)^2 \rangle $ correlators deviate significantly for larger $u_x$. Their overall decrease is about $1/3$ compared to the Hermite norm. To make a valid comparison between moment correlators computed in the $f$-norm and the Hermite norm one needs to ensure that for both measurements the moments are obtained in the same way. We therefore measure the moments obtained in a Hermite norm simulation with the $f$-norm evaluated at $\mathbf{u}_0$ in Fig.~\figref{fig:wfdmadma}. We observe that for all moments but $\langle \delta \tilde{\rho} \delta \tilde{\rho} \rangle$ and $\langle \delta \tilde{j}_y \delta \tilde{j}_y \rangle$ the deviations are larger than those measured in the Hermite norm.
\begin{figure}
\subfigure[Linear coefficient $l$, $f$-norm]{
\begin{psfrags}
\psfrag{X / }[Br][Bc]{}
\psfrag{Y / }[Br][Bc]{}
\psfrag{jx}[Bc][Bc]{\scriptsize$\tilde{j}_x$\normalsize}
\psfrag{qx}[Bc][Bc]{\scriptsize$\tilde{q}_x$\normalsize}
\psfrag{pxy}[Bc][Bc]{\scriptsize$\tilde{\Pi}_{\times}$\normalsize}
\psfrag{rho}[Bc][Bc]{\scriptsize$\tilde{\rho}$\normalsize}
\psfrag{ppl}[Bc][Bc]{\scriptsize$\tilde{\Pi}_{+}$\normalsize}
\psfrag{pmn}[Bc][Bc]{\scriptsize$\tilde{\Pi}_{-}$\normalsize}
\psfrag{eps}[Bc][Bc]{\scriptsize$\tilde{\epsilon}$\normalsize}
\psfrag{jy}[Bc][Bc]{\scriptsize$\tilde{j}_y$\normalsize}
\psfrag{qy}[Bc][Bc]{\scriptsize$\tilde{q}_y$\normalsize}
\includegraphics[width=0.3\textwidth, angle = 0]{fi_mm1.eps}
\end{psfrags}
}
\subfigure[Quadratic coefficient $q$, $f$-norm]{
\begin{psfrags}
\psfrag{X / }[Br][Bc]{}
\psfrag{Y / }[Br][Bc]{}
\psfrag{jx}[Bc][Bc]{\scriptsize$\tilde{j}_x$\normalsize}
\psfrag{qx}[Bc][Bc]{\scriptsize$\tilde{q}_x$\normalsize}
\psfrag{pxy}[Bc][Bc]{\scriptsize$\tilde{\Pi}_{\times}$\normalsize}
\psfrag{rho}[Bc][Bc]{\scriptsize$\tilde{\rho}$\normalsize}
\psfrag{ppl}[Bc][Bc]{\scriptsize$\tilde{\Pi}_{+}$\normalsize}
\psfrag{pmn}[Bc][Bc]{\scriptsize$\tilde{\Pi}_{-}$\normalsize}
\psfrag{eps}[Bc][Bc]{\scriptsize$\tilde{\epsilon}$\normalsize}
\psfrag{jy}[Bc][Bc]{\scriptsize$\tilde{j}_y$\normalsize}
\psfrag{qy}[Bc][Bc]{\scriptsize$\tilde{q}_y$\normalsize}
\includegraphics[width=0.3\textwidth, angle = 0]{fi_mm2.eps}
\end{psfrags}
}
\caption{Linear and quadratic coefficients $l$ and $q$ of all $81$ ($45$ unique) correlators as a result of fitting $\langle \delta \tilde{M}^a \delta \tilde{M}^b\rangle(u_{x,0})-\delta^{ab}$ to $l u_{x,0} + q u_{x,0}^2$. Brighter color indicates larger coefficients. Moments were reordered to allow correlations to be identified visually. To accommodate the symbol size the stress moment labels were abbreviated ($\tilde{\Pi}_\times = \tilde{\Pi}_{xy}$, $\tilde{\Pi}_- = \tilde{\Pi}_{xx-yy}$, $\tilde{\Pi}_+ = \tilde{\Pi}_{xx+yy}$). The coefficient at position (0, 1) in image (a) corresponds to the linear portion of the $\langle \delta \tilde{j}_x \delta \tilde{q}_x \rangle$ correlator.
Coefficients were measured on a $21 \times 21$ D2Q9 simulation employing the $f$-norm with look up tables, $\Delta u_g = 0.02$. The fit range used was $-0.25 \le u_x \le 0.25$.}
\label{fig:fdmadmbfit1}
\end{figure}
\begin{figure}[ht]
\begin{psfrags}
\psfrag{X / }[Bl][Bc]{$k_x$}
\psfrag{Y / }[Bl][Bc]{$k_y$}
\subfigure[$u_x=0.1$]{\includegraphics[width=.3\textwidth]{fi_rr_u01.eps}}
\subfigure[$u_x=0.2$]{\includegraphics[width=.3\textwidth]{fi_rr_u02.eps}}
\end{psfrags}
\caption{Static structure factor $S_{\vecc{k}}(\tilde{\rho})$ at different velocities measured for the $f$-norm with the look up table and $\Delta u_g = 0.02$.}
\label{fig:fsrr}
\end{figure}
\begin{figure}[ht]
\begin{psfrags}
\psfrag{X / }[Bl][Bc]{$k_x$}
\psfrag{Y / }[Bl][Bc]{$k_y$}
\subfigure[$u_x=0.1$]{\includegraphics[width=.3\textwidth]{fi_uxux_u01.eps}}
\subfigure[$u_x=0.2$]{\includegraphics[width=.3\textwidth]{fi_uxux_u02.eps}}
\end{psfrags}
\caption{Static structure factor $S_{\vecc{k}}(\tilde{j}_x)$ at different velocities measured for the $f$-norm with the look up table and $\Delta u_g = 0.02$.}
\label{fig:fsuxux}
\end{figure}
\begin{figure}[ht]
\begin{psfrags}
\psfrag{X / }[Bl][Bc]{$k_x$}
\psfrag{Y / }[Bl][Bc]{$k_y$}
\subfigure[$u_x=0.1$]{\includegraphics[width=.3\textwidth]{fi_uxuy_u01.eps}}
\subfigure[$u_x=0.2$]{\includegraphics[width=.3\textwidth]{fi_uxuy_u02.eps}}
\end{psfrags}
\caption{Cross correlator $R_{\vecc{k}}(\tilde{j}_x, \tilde{j}_y)$ at different velocities measured for the $f$-norm with the look up table and $\Delta u_g = 0.02$.}
\label{fig:fsuxuy}
\end{figure}
Linear and quadratic fit coefficients for all moment correlators $\langle \delta \tilde{M}^a \delta \tilde{M}^b \rangle$ in Fig.~\figref{fig:fdmadmbfit1} show significant improvement as well. We notice that in particular the coefficients $l$ of those off-diagonal correlators that have a linear dependence on $u_x$ are at least a factor of~$13$ smaller than those measured in the Hermite norm case shown in Fig.~\figref{fig:wdmadmbfit1}~(a). We also observe a decrease of the quadratic term $q$, but in line with the observations of Fig.~\figref{fig:fdmadma} the coefficients corresponding to some correlators decrease less than the others compared to the ones observed in the Hermite norm in Fig.~\figref{fig:wdmadmbfit1}~(b): $\langle (\delta \tilde{\rho})^2\rangle $ from $1.9$ to $0.47$, $\langle (\delta \tilde{\Pi}_{xx-yy})^2 \rangle$ from $1.6$ to $0.54$, and $\langle (\delta \tilde{j}_y)^2 \rangle$ from $1.14$ to $0.75$.
These findings are confirmed by the structure factor plots for the $f$-norm in Figs.~\figref{fig:fsrr}, \figref{fig:fsuxux}, and \figref{fig:fsuxuy}, which for non-vanishing fixed velocity $u_{x,0}$ are significantly smaller than those measured for the Hermite norm at the same velocity in Figs.~\figref{fig:wsrr},~\figref{fig:wsuxux}, and \figref{fig:wsuxuy}.
We can conclude that employing the $f$-norm significantly reduces the Galilean invariance violations observed in the Hermite norm implementation. The look up tables provide a practically feasible approach to implementing the $f$-norm at a performance loss of about $20\%$. All the measurements here were performed on a single CPU.
\section{Conclusion and Outlook}
The current standard implementation of thermal fluctuations in an isothermal ideal gas was tested for Galilean invariance violations. We found that with non-zero average velocity the moment space covariance matrix of \eref{eqn:fluctf4} is neither diagonal nor are the diagonal elements unity, as predicted and required by the derivation of the FDT in both \cite{adhikari-2005} and \cite{duenweg-2007}. We identified an approximation in the orthogonality condition that defines the moment space transforms \eref{eqn:fluctf3} as the likely source of the Galilean invariance violations, as it directly removes an otherwise necessary velocity dependence from the moment space transforms. The approximation allows for the use of the Hermite norm to define the moment space transforms. However, recovering Galilean invariance at least to some degree requires the matrix transforms to be locally velocity dependent, i.e. unique to every lattice site, and the Hermite norm is then no longer applicable. This led us to introduce a novel variant of the lattice Boltzmann method. We find that using the local fully velocity dependent $f$-norm to machine precision in a straightforward manner is computationally impractical: evaluating the individual matrix elements locally leads to an overhead in computational cost of more than $2000\%$. However, as the Galilean invariance violations scale quadratically for most moments, it is feasible to generate look up tables for the matrix elements on a velocity grid. This requires a projection of the equilibrium moments onto the look up table reference velocity. The look up table approach provides benefits comparable to the locally orthogonalized transforms at only a $20\%$ loss of computation time. All the simulations presented here were performed in an example D$2$Q$9$ implementation. However, all calculations and considerations discussed can easily be generalized to other models.
We provide a Mathematica notebook \cite{kaehler-2012-notebook} that contains the necessary calculations done for the D$2$Q$9$ model used here.
This new method is potentially important for non-equilibrium situations in which locally varying flow fields exist, which is the standard realm of lattice Boltzmann simulations.
\begin{acknowledgements}
The authors would like to thank Markus Gross and Eric Foard for helpful and insightful discussion.
This work has been funded, in part, by the ND EPSCoR SEED grant.
\end{acknowledgements}
\section{Introduction}
\vskip 12pt
A change of variable in the 1-dim Schr\"odinger equation
(1-DSE) is one of the basic techniques used to solve 1-dim problems
(see \cite{12} for example). In the context of the semiclassical (JWKB)
approximation the procedure is in fact a main ingredient of
the Fr\"oman and Fr\"oman (F-F) approach to the 1-DSE \cite{3,4},
with the aim of obtaining improved JWKB quantization formulae \cite{4,5,6}.
Sometimes a suitable change of variable provides us with
JWKB-like formulae solving the problem of energy spectra even
exactly \cite{4}. There is no doubt, however, that the latter
possibility depends entirely on the potential considered and
that the change of variable plays only an auxiliary role in such cases
\cite{11}.
A change of variables is
also an essential ingredient of a more general approach to the
semiclassical approximations formulated by Maslov and his
collaborators [13]. In the context of the latter approach the
change-of-variable procedure is an inherent part of the
continuation procedure of semiclassical series defined
originally in some domain of the configuration space to another
domain of the space. The relevant variable transformations used
in the Maslov method are the canonical ones (in the sense of
classical mechanics). The relation of the Maslov method to the one
applied in this paper is discussed in the Appendix. It is argued
there that using fundamental solutions as we do in this
paper is equivalent to the method of Maslov \underline{et al} in the
semiclassical regime of the considered 1-dim problems, but has
many advantages over the Maslov procedure in the remainder of
our investigations. In particular the problem of Borel
resummation, central for our paper, cannot be posed and considered
properly while ignoring the existence of the fundamental solutions and
their properties. After all, the method of Maslov \underline{et al} is purely
asymptotic from the very beginning and no problem of resummation
of the semiclassical series used in the method has been
considered as yet.
The improvement of the standard JWKB formulae achieved by the
change-of-variable procedure typically appears as corrections in the
form of additional $\hbar$-dependent terms in the emerging
effective potentials \cite{1,4,5,6}. Since in all these cases of
variable changing the standard JWKB formulae can easily be restored
simply by $\hbar$-expansions of the improved ones, the latter
seem to be a kind of hidden resummation of a part (in the
case of mere improvements) or of the full (when exact formulae
emerge) standard semiclassical expansion corresponding to the
considered cases.
It is the aim of this paper to show that this hidden resummation
really takes place and that a class of changes of
variable applied to the 1-DSE results from the Borel resummation of suitably
chosen standard
semiclassical solutions to the SE, multiplied by appropriately chosen
$\hbar$-dependent constants which can always be
attached to any such semiclassical solution.
As has been shown by Milczarski and Giller \cite{7} (see also \cite{8}),
such specific Borel summable solutions to the SE are provided
for meromorphic potentials by the F-F construction
\cite{3} in the form of so-called fundamental solutions (FS) \cite{8,9}. These
are the only solutions with the Borel summability property among all the
F-F-like solutions \cite{7}. Despite their rareness, the FS's, when collected
into a full set, allow us to solve any 1-dim problem \cite{8,9}
(see also the discussion below).
The paper is organized as follows.
In the next section the fundamental solutions and their use are
recalled.
In Sec.3 the standard semiclassical expansions and
their properties are reconsidered.
In Sec.4 the Borel
resummation aspects of a change-of-variable operation are
discussed.
In Sec.5 the impossibility of achieving the exact
JWKB formulae by a change-of-variable operation only is
discussed.
We conclude with Sec.6.
\section{ Fundamental solutions }
A standard way to introduce FS's is the construction of
a Stokes graph (SG) \cite{7,8,9} for a given (meromorphic)
potential $V(x)$.
SG consists of Stokes lines (SL) emerging from roots (turning
points) of the equation:
\begin{eqnarray}
V(x) + \hbar^2 \delta(x) = E
\label{1}
\end{eqnarray}
with $E$ the energy, as well as from simple poles of the
considered potential $V(x)$.
The presence and role of the $\delta$-term in (\ref{1}) are explained below.
It contributes to (\ref{1}) only when $V(x)$ contains simple and second order
poles. The $\delta$-term is constructed totally from these poles.
The points of SL's satisfy one of the following equations:
\begin{eqnarray}
\Re \int_{x_{i}}^{x} \sqrt{V(y) + {\hbar^2}\delta(y)- E}dy = 0
\label{2}
\end{eqnarray}
with $x_{i}$ being a root of (\ref{1}) or a simple pole of $V(x)$.
SL's which are not closed end at those points of the $x$-plane
(i.e. have the latter as boundaries) at which the action
integral in (\ref{2}) becomes infinite. Of course such points
are singular for the potential $V(x)$: they can be finite poles
of order higher than one, or poles of $V(x)$ lying at infinity.
Each such singularity $x_{0}$ of $V(x)$ defines a domain called a sector.
This is the connected domain of the $x$-plane bounded by SL's and by $x_0$
itself, with the latter point being either a boundary point of the SL's or
an isolated boundary point of the sector (as in the case of a second
order pole).
In each sector the LHS of (\ref{2}) is either positive or negative.
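As a numerical illustration of condition (2) (with the $\delta$-term dropped), the sketch below evaluates $\Re W(x)$ for the toy potential $V(y)=y^2$ at $E=1$: between the turning points the real axis is itself a Stokes line ($\Re W = 0$), while beyond the turning point $\Re W$ takes a definite sign. The straight-line integration path and the trapezoid quadrature are assumptions adequate for this example only.

```python
import numpy as np

def re_action(x, E=1.0, x_t=1.0, n=4001):
    """Re W(x): real part of the integral of sqrt(V(y) - E) from x_t to x
    along a straight path, for the toy potential V(y) = y^2 with the
    turning point x_t = sqrt(E)."""
    y = x_t + (complex(x) - x_t) * np.linspace(0.0, 1.0, n)
    f = np.sqrt(y**2 - E + 0j)
    # composite trapezoid rule along the (possibly complex) path
    return float(np.sum((f[:-1] + f[1:]) / 2 * np.diff(y)).real)

# beyond the turning point (classically forbidden region): Re W > 0
print(re_action(2.0))
# between the turning points the integrand is purely imaginary, so the
# real axis there satisfies Re W = 0 and is itself a Stokes line
print(re_action(0.5))
```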
Consider now the Schr\"odinger equation:
\begin{eqnarray}
\Psi^{\prime\prime}(x) - \hbar^{-2} q(x) \Psi(x) = 0
\label{7}
\end{eqnarray}
where $q(x)=V(x)-E$ (we have set the mass $m$ in (\ref{7}) equal
to $1/2$).
Following Fr\"oman and Fr\"oman one can define in each sector $k$ having
$x_0$ at its boundary a solution of the form:
\begin{eqnarray}
\Psi_{k}(x) = \tilde{q}^{-\frac{1}{4}}(x){\cdot}
e^{\frac{\sigma}{\hbar} W(x)}{\cdot}{\chi_{k}(x)} &
& k = 1,2,\ldots
\label{3}
\end{eqnarray}
where:
\begin{eqnarray}
\chi_{k}(x) = 1 + \sum_{n{\geq}1}
\left( -\frac{\sigma \hbar}{2} \right)^{n} \int_{x_{0}}^{x}d{\xi_{1}}
\int_{x_{0}}^{\xi_{1}}d{\xi_{2}} \ldots
\int_{x_{0}}^{\xi_{n-1}}d{\xi_{n}}
\omega(\xi_{1})\omega(\xi_{2}) \ldots \omega(\xi_{n})
\label{4}
\end{eqnarray}
\begin{eqnarray*}
{\times} \left( 1 -
e^{-\frac{2\sigma}{\hbar}{(W(x)-W(\xi_{1}))}} \right)
\left(1 - e^{-\frac{2\sigma}{\hbar}{(W(\xi_{1})-W(\xi_{2}))}} \right)
\cdots
\left(1 - e^{-\frac{2\sigma}{\hbar}{(W(\xi_{n-1})-W(\xi_{n}))}} \right)
\end{eqnarray*}
with
\begin{eqnarray}
\omega(x) = \frac{\delta(x)}{\tilde{q}^{\frac{1}{2}}(x)} -
{\frac{1}{4}}{\frac{\tilde{q}^{\prime\prime}}{\tilde{q}^{\frac{3}{2}}(x)}} +
{\frac{5}{16}}{\frac{\tilde{q}^{\prime 2}}{\tilde{q}^{\frac{5}{2}}(x)}}
\label{5}
\end{eqnarray}
and
\begin{eqnarray}
W(x,E) = \int_{x_{i}}^{x} \sqrt{\tilde{q}(\xi,E)}d\xi
\label{6}
\end{eqnarray}
\begin{eqnarray*}
\tilde{q}(x,E) = V(x) +\hbar^2 \delta(x) - E
\end{eqnarray*}
In (\ref{3}) and (\ref{4}) the sign $\sigma (= \pm 1)$ and the
integration path are chosen in such a way as to have:
\begin{eqnarray}
\sigma \Re \left(W(\xi_{j}) - W(\xi_{j+1}) \right) \leq 0
\label{8}
\end{eqnarray}
for any ordered pair of integration variables (with $\xi_{0} =
x$). Such a path of integration is then called canonical.
The term $\delta(x)$ appearing in (\ref{5}) and in (\ref{6})
is necessary to ensure that all the integrals in (\ref{4})
converge when $x_0$ is a first or a second order pole of $V(x)$,
or when the solutions (\ref{3}) are to be continued to such poles.
Each such pole $x_0$ demands a contribution to $\delta(x)$
of the form $(2(x-x_0))^{-2}$, so that $\delta(x)$ collects all
of them and its final form depends on the corresponding
singular structure of $V(x)$.
Note that the effect of introducing the $\delta$-term is completely
equivalent to making some change of variable in the SE,
a possibility which in this context shall, however, not be
discussed in the paper.
In a domain $D_{k}$ of the $x$-plane where the condition
(\ref{8}) is satisfied (so called canonical domain) the series
in (\ref{4}) defining $\chi_{k}$ is uniformly convergent.
$\chi_{k}$ itself satisfies the following initial conditions:
\begin{eqnarray}
\chi_{k}(x_{0}) = 1 & \mbox{and} & \chi_{k}^{\prime}(x_{0}) = 0
\label{9}
\end{eqnarray}
corresponding to the equation:
\begin{eqnarray}
\chi_{k}(x) = 1 -
\frac{\sigma{\hbar}}{2}\int_{x_{0}}^{x}dy{\omega(y)}\chi_{k} -
\frac{\sigma{\hbar}}{2}\tilde{q}^{-\frac{1}{2}}(x)\chi_{k}^{\prime}(x)
\label{10}
\end{eqnarray}
which this function has to obey as a consequence of the SE (\ref{7}) and
the initial conditions (\ref{9}).
In the canonical domain $D_{k}$ and the sector $S_{k} (\subset
D_{k})$ where the solution (\ref{3}) is defined, the latter has
the two following basic properties:
$1^{0}$ It can be expanded in $D_{k}$ into a standard
semiclassical series obtained by iterating Eq.(\ref{10}) and
taking into account the initial conditions (\ref{9});
$2^{0}$ The emerging semiclassical series is Borel summable in
$S_{k}$ to the solution itself.
The solutions (\ref{3}) defined in the above way are known as
the fundamental ones \cite{8,9}. They are pairwise independent
and, collected into a full set, they allow us to solve
\underline{any} one-dimensional problem. They are distinguished by the
property $2^{0}$ above, i.e. they are the unique solutions to the SE
with this property \cite{7}.
\section{Standard semiclassical expansions}
By a standard semiclassical expansion for $\chi$ we
mean the following series:
\begin{eqnarray*}
\chi(x) \sim C(\hbar)\sum_{n\geq{0}}
\left(-\frac{\sigma{\hbar}}{2} \right)^{n}
\chi_{n}(x)
\end{eqnarray*}
\begin{eqnarray*}
\chi_{0}(x) = 1
\end{eqnarray*}
\begin{eqnarray}
\chi_{n}(x) = \int_{x_{0}}^{x}d\xi_{n}\tilde{D}(\xi_{n})
\times
\label{11}
\end{eqnarray}
\begin{eqnarray*}
\times \int_{x_{0}}^{\xi_{n}}d\xi_{n-1}\tilde{D}(\xi_{n-1})
\ldots \int_{x_0}^{\xi_3} d\xi_2 \tilde{D}(\xi_2)
\int_{x_{0}}^{\xi_{2}}d\xi_{1}\left( \tilde{q}^{-\frac{1}{4}}(\xi_{1})
\left( \tilde{q}^{-\frac{1}{4}}(\xi_{1}) \right)^{\prime{\prime}} +
\tilde{q}^{-\frac{1}{2}}(\xi_{1})\delta(\xi_1)\right)
\end{eqnarray*}
\begin{eqnarray*}
n = 1,2,\ldots
\end{eqnarray*}
\begin{eqnarray*}
\tilde{D}(x) = \tilde{q}^{-\frac{1}{4}}(x)
\frac{d^2}{dx^2} \tilde{q}^{-\frac{1}{4}}(x) +
\tilde{q}^{-\frac{1}{2}}(x) \delta(x)
\end{eqnarray*}
\begin{eqnarray*}
C(\hbar) = \sum_{n\geq{0}} C_{n} \left(-\frac{\sigma{\hbar}}{2}
\right)^{n}
\end{eqnarray*}
where the choice of a point $x_{0}$ and of the constants
$C_{k}$, $k=1,2,\ldots$, is arbitrary. However, for a particular
$\chi_{k}$ (as defined by (\ref{4}), for example) this choice is of course
definite (if $x_{0}$ is given by the lower limit of the
integrations in the expansion (\ref{4}) then $C(\hbar)\equiv 1$).
Nevertheless, even in such cases the choice of $x_0$ can be
arbitrary; only the constants $C_k$ accompanying this choice
are definite, depending on the choice made \cite{7}.
The representation (\ref{11}) is standard in the sense that any
other one can be brought to (\ref{11}) by a redefinition of the
constants $C_{k}$. Therefore, any semiclassical expansion can
be uniquely given by fixing $x_{0}$ and the constants $C_{k}$.
Conversely, multiplying a given semiclassical expansion by an
asymptotic series as defined by the last series in (\ref{11}),
with other constants $C_{k}$, $k=1,2,\ldots$, one can obtain any
other semiclassical expansion.
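Since the integrand of the innermost integral in (\ref{11}) is again $\tilde{D}$, the terms $\chi_n$ are $n$-fold iterated integrals of $\tilde{D}$, generated by the recursion $\chi_{n}(x)=\int_{x_{0}}^{x}\tilde{D}(\xi)\,\chi_{n-1}(\xi)\,d\xi$ with $\chi_0 \equiv 1$; for a constant $\tilde{D}\equiv c$ they reduce to $c^{n}(x-x_{0})^{n}/n!$, which provides a convenient consistency check. The following small numerical sketch (our own illustration, not part of the paper) implements this recursion on a grid:

```python
# chi_n of Eq. (11) built layer by layer on a grid:
# chi_0 = 1,  chi_n(x) = int_{x0}^{x} D(xi) chi_{n-1}(xi) d xi,
# using the cumulative trapezoid rule for each layer.
def chi_terms(D_vals, xs, nmax):
    chis = [[1.0] * len(xs)]          # chi_0(x) = 1
    for n in range(1, nmax + 1):
        integrand = [D_vals[i] * chis[-1][i] for i in range(len(xs))]
        acc = [0.0]
        for i in range(1, len(xs)):
            acc.append(acc[-1] + 0.5 * (xs[i] - xs[i - 1])
                       * (integrand[i] + integrand[i - 1]))
        chis.append(acc)
    return chis
```

For $\tilde{D}\equiv 0.7$ on $[0,1]$ the routine reproduces $\chi_3(1)=0.7^{3}/3!$ to the accuracy of the quadrature.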
We have mentioned above that the semiclassical series for $\chi_k$
is Borel summable for $x$ staying in the sector $S_{k}$ where $\chi_k$
is defined. In fact it is Borel summable at least inside the circle
$\Re(\hbar^{-1}) > (2R)^{-1}$ of the $\hbar$-plane, where the
sufficient conditions of the Watson-Sokal-Nevanlinna (WSN) theorem \cite{10} are satisfied.
Construct now a new semiclassical series by multiplying the one
for $\chi_k$ by a $\hbar$-dependent constant $C(\hbar)$ with an
{\it analytic} behaviour at $\hbar = 0$. Expand $C(\hbar)$ into
a power series in $\hbar$, the latter being simultaneously an
{\it asymptotic} expansion for the constant. Multiply the
corresponding semiclassical expansion for $\chi_k$ by this
power series.
The resulting semiclassical series can now be Borel resummed,
leading us again to another solution to the SE. However, this new
solution now has two representations: the first being the
solution (\ref{3}) multiplied by $C(\hbar)$, and the second
being the solution provided by the performed Borel resummation,
i.e. there is a priori no necessity for these two
representations to coincide.
This is exactly what is observed when a change of variable in SE
is performed.
\section{Change of variable as Borel resummations }
Consider therefore a change of variable in (\ref{7}) putting
$y=y(x)$ and assuming $y^{\prime}(x)$ to be meromorphic.
Such a change of variable preserves the SE (\ref{7}) if
simultaneously we make the substitution $\Phi(y(x)) \equiv
y^{\prime \frac{1}{2}}(x)\Psi(x)$ and $Q(y)$ corresponding to
$\Phi(y)$ in its Schr\"odinger-like equation is given by:
\begin{eqnarray}
{y^{\prime}}^{2}(x)Q(y(x)) = q(x) - \hbar^{2}
\left( \frac{3}{4}\frac{{y^{\prime{\prime}}}^{2}(x)}{{y^{\prime}}^{2}(x)} -
\frac{1}{2}\frac{y^{\prime{\prime}\prime}(x)}{y^{\prime}(x)} \right)
\label{12}
\end{eqnarray}
Therefore, the above change of variable provides us with a new potential
differing from the old one by a term which depends totally on $y(x)$.
It follows from the form of this term that, since $y^{\prime}(x)$ is assumed
to be meromorphic, this dependence can introduce into the new potential at most
second order poles, not cancelling those of the original potential
$V(x)$ if the latter poles do not depend on $\hbar$. It then follows
further that the new second order poles can introduce into the corresponding
SG additional sectors and SL's, not cancelling the old ones built around
the old infinite points of the actions. The old sectors of course change
their boundaries and environments (having possibly some new sectors
as their neighbours).
Consider now therefore the old sector $S_k$ and its new modified form
$\tilde{S}_k$. Both sectors have a common part containing $x_0$
on its boundary. Using $\Phi(y)$ and $Q(y)$ we can construct
in $\tilde{S}_k$ a solution $\tilde{\Psi}_{k}(x)$ to the SE (\ref{7}).
Namely, we have:
\begin{eqnarray}
\tilde{\Psi}_{k} =
\left({y^{\prime}}^{2}\tilde{Q}(y(x)) \right)^{-\frac{1}{4}}
e^{\textstyle {\frac{\sigma}{\hbar}
\int_{x_{i}}^{x}\sqrt{y^{\prime{2}}(\xi)\tilde{Q}(y(\xi))}d\xi}}
\tilde{\chi}_{k}(y(x))
\label{13}
\end{eqnarray}
\begin{eqnarray*}
k=1,2,\ldots
\end{eqnarray*}
where $\tilde{\chi}_{k}(y)$ is constructed according to
(\ref{4}) - (\ref{6}) by making there substitutions: \\
$x{\rightarrow}y(=y(x))$, $\delta(x)\rightarrow\tilde{\delta}(y)$,
$\tilde{q}(x)\rightarrow\tilde{Q}(y)$, $\omega(x)\rightarrow
\tilde{\omega}(y)$,
$W(x)\rightarrow\tilde{W}(y)$ and $x_{0}{\rightarrow}y_{0}(=y(x_0))$.
Note that the new second order poles introduced into (\ref{13})
by $y^{\prime}(x)$, being absent from the original potential
$V(x)$, are not real singularities of $\tilde{\Psi}_{k}(x)$;
they are only singularities of the representation (\ref{13}).
To the solution (\ref{13}) there corresponds a domain $\tilde{D}_{k}$
(an obvious analogue of $D_{k}$ given by the inequality (\ref{8}))
in which the solution has the same properties $1^{0}$, $2^{0}$ above as the
previous ones defined by (\ref{3})-(\ref{6}). In particular
the solution (\ref{13}) is Borel summable to itself in
$\tilde{S}_{k}$.
Let us note further that, because the sectors $S_{k}$ and $\tilde{S}_{k}$
have a common part with $x_{0}$ on its boundary, the solutions
(\ref{3}) and (\ref{13}) defined in the corresponding sectors
have to coincide with each other up to a multiplicative constant
$C_{k}$, i.e.
\begin{eqnarray}
\tilde{\Psi}_{k}(x) = C_{k}(\hbar)\Psi_{k}(x) & k = 1,2,\ldots
\label{14}
\end{eqnarray}
with $C_{k}(\hbar)$ given by
\begin{eqnarray}
C_{k}(\hbar) = \exp \left[ \sigma{\hbar}\int_{x_{i}}^{x_{0}}
\frac{\tilde{\delta}(x)-f(x)}{\sqrt{\tilde{q}(x)} + \sqrt{q(x) +
\hbar^2 \tilde{\delta}(x)
- \hbar^{2}f(x)}}dx \right]
\label{15}
\end{eqnarray}
where
\begin{eqnarray}
f(x) =
\frac{3}{4}\frac{{y^{\prime{\prime}}}^{2}(x)}{{y^{\prime}}^{2}(x)} -
\frac{1}{2}\frac{y^{\prime{\prime}\prime}(x)}{y^{\prime}(x)}
\label{16}
\end{eqnarray}
The coefficient $C_{k}$ was calculated by taking the limit $x
\rightarrow x_{0}$ on both sides of (\ref{14}).
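The combination (\ref{16}) is minus one half of the Schwarzian derivative of $y$ and can equivalently be written as $f=\sqrt{y^{\prime}}\,\left(y^{\prime\,-1/2}\right)^{\prime\prime}$, which makes its origin in the prefactor $y^{\prime\,1/2}$ of the substitution transparent. A small numerical sketch of this identity (our own illustration, for $y(x)=x^{3}$, where $f(x)=2/x^{2}$):

```python
# f(x) of Eq. (16), written directly in terms of y', y'', y'''
def f_direct(yp, ypp, yppp, x):
    return 0.75 * (ypp(x) / yp(x)) ** 2 - 0.5 * yppp(x) / yp(x)

# the same quantity via the identity f = sqrt(y') * (y'^(-1/2))'',
# the second derivative taken by central finite differences
def f_identity(yp, x, h=1e-4):
    g = lambda t: yp(t) ** -0.5
    gpp = (g(x - h) - 2.0 * g(x) + g(x + h)) / h ** 2
    return yp(x) ** 0.5 * gpp
```

Both expressions agree to the accuracy of the finite-difference step.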
From (\ref{14}) and (\ref{15}) we get the following relation
between $\tilde{\chi}_{k}$ and $\chi_{k}$:
\begin{eqnarray}
\tilde{\chi}_k(x)=
\left(1+\hbar^{2}\frac{\tilde{\delta}(x)-f(x)}{\tilde{q}(x)}
\right)^{\frac{1}{4}}
\exp\left[-\sigma{\hbar}\int_{x_0}^{x}
\frac{\tilde{\delta}(\xi)-f(\xi)}{\sqrt{\tilde{q}(\xi)}+
\sqrt{\tilde{q}(\xi)+\hbar^2\tilde{\delta}(\xi)
-\hbar^{2}f(\xi)}}d\xi \right]\chi_{k}(x)
\label{17}
\end{eqnarray}
Note that the two factors in (\ref{17}) standing in front of ${\chi}_{k}$
are holomorphic with respect to $\hbar$ at $\hbar=0$.
We shall now show that the solution (\ref{13}) as well as its
$\tilde{\chi}_{k}$-function are just the Borel sums of the corresponding
semiclassically expanded right-hand sides of (\ref{14}) and
(\ref{17}), respectively.
This is an immediate consequence of the holomorphicity of the
coefficient $C_{k}(\hbar)$ and of the two factors in (\ref{17}) at
$\hbar=0$ due to which their semiclassical expansions coincide
with their convergent power series expansion in $\hbar$.
Therefore, due to our earlier discussion, the WSN conditions
for Borel summability of the semiclassical series emerging from
the RHS of (\ref{14}) and (\ref{17}) are satisfied, and
$\tilde{\Psi}_k(x)$ and $\tilde{\chi}_k(x)$ are obtained by
taking these Borel sums.
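The mechanism at work here can be illustrated on the classical Euler example $\sum_{n\ge 0}(-1)^{n} n!\, x^{n}$ (a standard textbook case, not taken from the paper): its Borel transform is $1/(1+t)$, so the Borel sum is $\int_{0}^{\infty} e^{-t}/(1+xt)\,dt$, which the truncated asymptotic series approximates to within the first omitted term. A minimal numerical sketch:

```python
# Borel summation of the Euler series  sum_{n>=0} (-1)^n n! x^n :
# its Borel transform is 1/(1 + t), so the Borel sum is
#   I(x) = int_0^infty exp(-t) / (1 + x t) dt ,
# evaluated here with a simple midpoint quadrature.
import math

def borel_sum(x, tmax=40.0, steps=40000):
    h = tmax / steps
    return sum(math.exp(-(i + 0.5) * h) / (1.0 + x * (i + 0.5) * h) * h
               for i in range(steps))

def partial_sum(x, nmax):
    # truncation of the (divergent) asymptotic series
    return sum((-1) ** n * math.factorial(n) * x ** n
               for n in range(nmax + 1))
```

For $x=0.1$ the Borel sum is $\approx 0.9156$, while the series truncated at $n=5$ gives $0.9152$; the difference is indeed smaller than the first omitted term $6!\,x^{6}$.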
\section{Change of variable and exactness of JWKB quantization
formulae}
The last result can be made even more appealing by
using the following exponential representations for $\tilde{\chi}_k(x)$ and
${\chi}_k(x)$:
\begin{eqnarray}\label{18}
\tilde{\chi}_k(x) = \exp\left(\int_{x_0}^x \tilde{\rho}_k (\xi)d\xi \right) &,
& {\chi}_k(x) = \exp\left(\int_{x_0}^x {\rho}_k (\xi)d\xi \right)
\end{eqnarray}
so that
\begin{eqnarray}\label{19}
\tilde{\rho}_k(x) = \frac{\tilde{\chi}_k^{\prime}}{\tilde{\chi}_k}, &
{\rho}_k(x) = \frac{\chi_k^{\prime}}{\chi_k}
\end{eqnarray}
and the relation (\ref{17}) takes the form:
\begin{eqnarray}
\tilde{\rho}_k(x)=\rho_k(x)-\sigma{\hbar}
\frac{\tilde{\delta}(x)-f(x)}{\sqrt{\tilde{q}(x)}+\sqrt{q(x)+
\hbar^2 \tilde{\delta}(x)-\hbar^{2}f(x)}}+ \nonumber \\
+\frac{\hbar^2}{4} \frac{\tilde{q}(x)}{q(x)+
\hbar^2\tilde{\delta}(x)-\hbar^{2}f(x)}\left(
\frac{\tilde{\delta}(x)-f(x)}{\tilde{q}(x)}\right)^{\prime}
\label{20}
\end{eqnarray}
It follows from (\ref{19}) that both $\tilde{\rho}_k(x,\hbar)$ and
$\rho_k(x,\hbar)$ are Borel summable and from (\ref{20}) that their
Borel transforms differ by a function holomorphic on the whole
Borel plane if both the functions $f(x)$ and $\tilde{\delta}(x)$ are
$\hbar$-independent. In the latter case it is clear that one cannot
find an $f(x)$ (the form of $\tilde{\delta}(x)$
has to follow from that of $f(x)$) causing $\tilde{\rho}_k(x,\hbar)$ to
vanish, i.e. one cannot be left in $\tilde{\Psi}_k(x)$ with its first
two JWKB factors only.
This is because $\tilde{\rho}_k(x,\hbar)$ is singular at $\hbar=0$.
However, making $f(x)$ also $\hbar$-dependent but choosing it
holomorphic at $\hbar=0$, we can achieve a situation in which the first $n$ terms of
the semiclassical expansion of $\tilde{\rho}_k(x,\hbar)$ vanish. The latter
is possible globally (i.e. independently of $k$) since the semiclassical
expansions of $\tilde{\rho}_k(x,\hbar)$ are $k$-independent (i.e. they do not
contain any integration in the $x$-plane; see for example \cite{6}).
One of our earlier papers provides a good illustration of this possibility
\cite{6} (see also the comment below). However, to achieve the goal of a
vanishing $\tilde{\rho}_k(x,\hbar)$ we have to use an $f(x,\hbar)$ which is
singular at $\hbar=0$ and which is therefore
expected to satisfy all the necessary conditions of the
Watson-Sokal-Nevanlinna theorem in order to be Borel summable. In such a
case $f(x,\hbar)$ becomes, similarly to $\tilde{\rho}_k(x,\hbar)$, sector
dependent, i.e. within the class of Borel summable functions there is
no possibility to define a \underline{global} $y(x,\hbar)$ which, when used
as a variable transformation defining $f(x,\hbar)$, provides us
with $\tilde{\Psi}_k(x,\hbar)$ deprived of its $\tilde{\chi}_k$-factor for
all $k$ simultaneously. In a more obvious way one
can conclude this from (\ref{20}) by putting there $\tilde{\rho}_k(x,\hbar)$
equal to zero and then
treating the equation obtained in this way as a differential
equation for $f(x,\hbar)$ with $\rho_k(x,\hbar)$ given. However, for
any two different $k$'s there are two different $\rho_k(x,\hbar)$'s
and in consequence two different solutions for $f(x,\hbar)$ have to emerge.
Summarizing the above discussion, we can conclude that the effect
of changing the variable, leading us to the solutions (\ref{13}), can
also be obtained as a result of Borel resummations of the standard
semiclassical expansions for the solutions (\ref{4}) multiplied by
suitably chosen $\hbar$-dependent constants.
The constants in (\ref{14}) can even be chosen in such a way as to produce
simultaneously fundamental solutions for which the series in (\ref{5})
start with an arbitrarily high power of $\hbar$ \cite{6}. Such a choice
corresponds to the total effect of repeated changes of variable
in which, for each subsequent Schr\"odinger-like equation, the new
independent variable is the action, i.e. ${y^{\prime}}^2 (x)=
\tilde{q}(x,\hbar)$. The 'lacking'
powers of $\hbar$ are then collected in $({y^{\prime}}^2 (x,\hbar)
\tilde{Q}(x,\hbar))^{\frac{1}{4}}$ and in the corresponding
exponential factors of the solutions (\ref{4}). These two factors are
then the sources of new JWKB approximations generalizing the
conventional ones \cite{6}.
Nevertheless, as follows from the
above discussion, there is no such choice of the constants $C_k$
which could cause all the corresponding $\tilde{\chi}_k$'s to be reduced to
unity if all the constants, as given by (\ref{15}), are to be defined
by only one global $f(x,\hbar)$, given in turn by some $y(x,\hbar)$
realizing the underlying change of the $x$-variable.
A basic consequence of the latter statement for the possibility of
obtaining an exact JWKB formula for energy level quantization is the
following.
Consider the quantization of $1$-dim quantum systems
with the help of the fundamental solutions (as
described in many of our earlier papers \cite{1,13,14,16}). Let us
limit the problem to the case when after a change of the
$x$-variable there are only two real turning points $x_1$, $x_2$
of ${y^{\prime}}^2 (x,\hbar)\tilde{Q}(x,\hbar)$
whilst the rest of them are complex and pairwise conjugate
(we assume ${y^{\prime}}^2 (x,\hbar)\tilde{Q}(x,\hbar)$ and $E$ to be real).
We assume also that the problem has
been limited to a segment $z_1 \leq x \leq z_2$ at the ends of which
${y^{\prime}}^2 (x,\hbar)\tilde{Q}(x,\hbar)$ has poles.
In particular we can push any of $z_{1,2}$ (or both of them) to
$\mp \infty$ respectively.
To write the corresponding quantization condition
for the energy $E$ and to handle simultaneously the cases of second
and higher order poles, we assume $z_1$ to be a second order pole
and $z_2$ a higher order one.
It is also necessary to fix to
some extent the closest environment of the real axis of the
$x$-plane in order to draw a piece of the SG sufficient to write the
quantization condition. To this end we assume $x_3$ and $\bar{x}_3$ as well as
$x_4$ and $\bar{x}_4$
to be four further turning points and $z_3$ and $\bar{z}_3$ two further
second order poles of ${y^{\prime}}^2 (x,\hbar)\tilde{Q}(x,\hbar)$ closest
to the real axis. Then a possible piece of the SG can look as in Fig.1.
There is no unique way of
writing the quantization condition corresponding to the figure.
Three possible forms of this condition can be written as
\cite{8}:
\begin{eqnarray}\label{21}
\exp\left[\frac{\sigma}{\hbar}\oint\limits_K
({y^{\prime}}^2 \tilde{Q}(x,\hbar))^\frac{1}{2} dx \right]=
-\frac{\chi_{1\to 3}(\hbar)\chi_{2\to
\bar{3}}(\hbar)}{\chi_{1\to
\bar{3}}(\hbar)\chi_{2\to 3}(\hbar)}
=-\frac{\chi_{1\to 4}(\hbar)\chi_{2\to
\bar{3}}(\hbar)}{\chi_{1\to\bar{3}}(\hbar)\chi_{2\to 4}(\hbar)}
\end{eqnarray}
and $\chi_{k\to j}(\hbar)$, $k,j=1,2,3,4$, are calculated for $x\to z_j$.
The closed integration path $K$ is shown in Fig.1.
In the figure the paths $\gamma_{1\to 3},\; \gamma_{2\to 3}$,
etc., are the integration paths in the formula (\ref{4})
whilst the wavy lines designate corresponding cuts of the
$x$--Riemann surface on which all the FS are defined.
\vskip 12pt
\begin{tabular}{c}
\psfig{figure=f1.eps, width=12cm} \\
Fig.1 The SG corresponding to the general quantization rule \mref{21}
\end{tabular}
\vskip 12pt
The condition (\ref{21}) is \emph{exact}. Its LHS has just the JWKB form. If we
substitute each $\chi_{k\to j}(\hbar)$ in (\ref{21}) by unity
(which these coefficients
approach when $\hbar \to 0$) we obtain the well-known JWKB quantization
rule, which in general is only an approximation to (\ref{21}).
Now, since there is no $x$-variable transformation $y(x,\hbar)$ by which
all $\chi_{k\to j}(\hbar)$ in (\ref{21}) could become simultaneously equal
to unity, the RHS of (\ref{21}) cannot be reduced to unity by any such
$y(x,\hbar)$, i.e. the JWKB
formula provided in this way by (\ref{21}) is always only an
approximation. Some additional symmetry conditions have to be
satisfied by the initial $q(x)$ to provide us with an exact
JWKB formula \cite{11}.
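As an elementary illustration of the last remark (ours, not the paper's), written in conventional variables: for the harmonic oscillator $V(x)=x^{2}/2$ the action integral is $\oint p\,dx = 2\pi E$, so the JWKB condition $\oint p\,dx = 2\pi\hbar(n+\tfrac12)$ reproduces the exact spectrum $E_n=\hbar(n+\tfrac12)$; this is precisely the kind of symmetric potential for which the JWKB formula happens to be exact. A quick numerical sketch:

```python
import math

# action integral  oint p dx = 2 int_{-a}^{a} sqrt(2E - x^2) dx
# for V(x) = x^2 / 2 (units m = omega = 1); the substitution
# x = a sin(t) removes the square-root singularities at the endpoints
def action(E, steps=4000):
    a = math.sqrt(2.0 * E)
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        t = -0.5 * math.pi + (i + 0.5) * h
        x = a * math.sin(t)
        total += math.sqrt(max(2.0 * E - x * x, 0.0)) * a * math.cos(t) * h
    return 2.0 * total

# solve  action(E) = 2 pi hbar (n + 1/2)  for E by bisection
def jwkb_level(n, hbar=1.0):
    target = 2.0 * math.pi * hbar * (n + 0.5)
    lo, hi = 1e-9, 10.0 * (n + 1.0)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if action(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since $\oint p\,dx = 2\pi E$ here, `jwkb_level(n)` returns $E_n = n + \tfrac12$ (in units $\hbar=1$), coinciding with the exact spectrum.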
\section{Conclusions}
To conclude, we have shown that the Borel summable fundamental solutions
to the SE can be modified by appropriate Borel resummations of the
latter multiplied by properly chosen $\hbar$-dependent constants.
Sometimes the effects of such resummations can be recognized as a
proper change of variable in the SE, and the latter can always be
considered as an effect of such resummations. This certainly justifies
all the improvements, and sometimes exact results,
provided by the change-of-variable procedure applied in JWKB
calculations. The latter possibility (i.e. the exact results),
however, can be realized only due to particular properties of the
considered potentials reflected in the global structures of their
respective Stokes graphs \cite{11}.
\section*{Appendix}
The Maslov method is formulated for an arbitrary linear
partial differential equation (LPDE) having as its semiclassical
partner a dynamical system with a finite number of degrees of
freedom \cite{13}. Maslov's semiclassical theory of solutions to the
corresponding LPDE is developed on the 'classical' objects known
in the classical mechanics as Lagrangian manifolds \cite{13,14}.
Limited to the one-degree-of-freedom case and to the $1$-DSE the
Lagrangian manifolds are nothing but the $1$-dim classical
trajectories in the corresponding $2$-dim phase space. Exact
solutions to the stationary Schroedinger equation having
particular Dirac forms (\ref{4}) can be naturally redefined to live on
the Lagrangian manifold (LM) corresponding to a given energy.
However, to cover by such a description the whole coordinate
domain on which the corresponding wave functions are defined, the
imaginary time evolution of the classical equations of motion
has also to be switched on, to take into account the so-called
'classically forbidden regions'. The emerging LM contains then
branches corresponding to the real time motions (performed in
classically allowed regions) as well as to the imaginary ones
with the imaginary part of the momentum in the latter case
playing the role of the classical momentum. Of course, the
semiclassical conditions for the considered global wave function
are the following: it has to vanish exponentially when $\hbar \to 0$ in the
classically forbidden regions and to oscillate in the
classically allowed ones.
Unfortunately, the Dirac
representation of these solutions, considered as functions of the
coordinate, cannot be defined globally on the above LM, being
singular at the points where the manifold branches; this makes
impossible a matching procedure of the solutions defined on different
branches. These singular points are called in general the
caustic ones but in the $1$-dim case they are known as turning
points. Maslov and Fedoriuk's remedy to solve this arising
'connection problem' is to change the coordinate variable around
such points into the corresponding momentum i.e. to change the
coordinate representation of the wave function into the momentum
one preserving the Dirac form of the solution. Assuming the wave
function to be normalized, its latter representation can be
given formally by the Fourier transformation of the former. In
the new representation the wave function is then regular at the
coordinate turning points of LM (being on the other hand
singular at the emerging momentum turning points). The inverse
Fourier transformation considered close to a coordinate turning
point provides us again with the solution in the coordinate
representation given on both the sides of the chosen coordinate
turning point. As we have mentioned above the semiclassical
limit condition for the latter solution is of course to vanish
exponentially (when $\hbar \to 0$) on one side of the turning point and to
oscillate on the other. This condition determines the way the
local solutions determined on both the sides of each turning
point and having the Dirac form are to be matched.
The above idea of matching the solutions on different branches of the
Lagrangian manifold does not seem to be effective for the exact
solutions to SE but it becomes as such when the solutions in
their Dirac forms are substituted by their corresponding
semiclassical series. This is in fact the subject of the
original approach of Maslov and collaborators. Namely, in such a
case the classically forbidden parts of the solutions
disappear completely (being exponentially small) and the
remaining ones are then given uniquely on the classically
allowed branches of LM. The matching procedure connects then
only two oscillating solutions separated by the corresponding
turning point. The underlying Fourier transformation becomes
then effectively a point transformation determining the
connection. As is well known \cite{13}, such a semiclassical wave
function continued through a turning point on LM changes its
phase by $\pm 1$. (These changes are controlled in general by
so-called Maslov indices.) Synthetically the whole operation is
performed with the help of the Maslov canonical operator \cite{13}.
It is easy to note, however, that the necessity to use the Fourier
transformation disappears if there are possibilities to avoid
somehow the turning (caustic) points along the way on which the wave
function is continued. This can be achieved for example by enlarging the
number of dimensions in which the problem is formulated. The
complexification of the problem is one such way \cite{15}.
In the $1$-dim case this can be done effectively and
without appealing directly to the semiclassical series
expansions by defining the problem on the complex coordinate
plane and utilizing the notions of Stokes graphs and fundamental
solutions. In comparison with Maslov's approach, the complex
coordinate plane (in fact the latter is rather a Riemann
surface) corresponds to the complex Lagrangian manifold endowed
with the coordinate charts made up of all the canonical domains
defined by the corresponding Stokes graph. To each canonical
domain a fundamental solution is attached, having the
corresponding domain as the maximal one in which its semiclassical
expansion as given by (\ref{11}) is valid. There is no necessity to
construct and to use the Maslov canonical operator to continue
(analytically) the fundamental solutions and to match them in
any domain of the plane. The Maslov indices gained by the
fundamental solutions in the course of their analytical
continuations are provided by the crossed cuts of the corresponding
Riemann surface. Therefore, using the fundamental solution method
in $1$-dim problems is completely equivalent to the
corresponding Maslov one in the semiclassical regime of the
problem, but it has many obvious advantages over the latter,
with their use as exact solutions to the SE being the first one.
Other important properties of the method have been mentioned and
used in the main body of this as well as other papers \cite{6,7,8,9,11}.
\vspace{10mm}
\section*{Abstract} {\bf Infinite projected entangled pair states
(iPEPS) provide a convenient variational description of infinite,
translationally-invariant two-dimensional quantum states. However,
the simulation of local excitations is not directly possible due to
the translationally-invariant ansatz. Furthermore, as iPEPS are
either identical or orthogonal, expectation values between different
states as required during the evaluation of non-equal-time
correlators are ill-defined. \\
Here, we show that by introducing auxiliary states on each site, it
becomes possible to simulate both local excitations and evaluate
non-equal-time correlators in an iPEPS setting under real-time
evolution. We showcase the method by simulating the $t-J$ model
after a single hole has been placed in the half-filled
antiferromagnetic background and evaluating both return
probabilities and spin correlation functions, as accessible in
quantum gas microscopes.}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{\label{sec:intro}Introduction}
While tensor network methods in the form of matrix-product states have
become the method of choice for the simulation of one-dimensional
quantum systems and provide both excellent ground-state
data\cite{schollwoeck11} and good accuracy for time-dependent
quantities\cite{paeckel19:_time}, the study of two-dimensional systems
remains more difficult. The limited system size of methods such as
exact diagonalisation or matrix-product states on a
cylinder\cite{stoudenmire12:_study_two_dimen_system_densit} becomes
particularly relevant when studying time-dependent correlators after
local excitations, as the system must be able to accommodate the
spread of those correlations over time and avoid their interaction
with any boundaries. Infinite projected entangled pair
states\cite{verstraete04:_renor, verstraete08:_matrix,
jordan08:_class_simul_infin_size_quant} (iPEPS) on the other hand
allow for the simulation of ground-state properties of \emph{infinite}
two-dimensional systems with high accuracy by repeating a finite unit
cell of tensors infinitely in both directions. iPEPS were also
recently shown to allow for the simulation of global
quenches\cite{czarnik19:_time, hubig19:_time, kshetrimayum19:_time} at
least for short times. This simulation of a real-time evolution
following a global quantum quench is relatively straightforward:
evolution methods exist\cite{lubasch14:_algor, phien15:_fast,
phien15:_infin}, the quench can be enacted by a change of the
Hamiltonian governing this evolution and translational invariance is
retained. Equal-time correlators can also be
evaluated as usual for each of the computed time-evolved post-quench
states.
However, when attempting to simulate a local quench and evaluate
non-equal-time correlators, one encounters two problems: First, it is
not possible to simply apply an operator (such as $\hat c^\dagger$) to
a single site of the quantum state to create the local excitation: To
follow this route, one would have to apply this operator to a specific
site, repeated on each unit cell. While making the unit cell itself
relatively large is feasible, in this case one merely recovers the
case of a finite PEPS calculation and loses the inherent infinity of
the iPEPS ansatz. The handling of fermionic commutation rules further
complicates this approach.
Second, when pursuing this avenue to simulate the evolution of many
excitations -- one per unit cell -- over time, it is then still not
possible to evaluate non-equal-time correlators: These correlators are
calculated as expectation values between two different quantum
states. However, evaluating the norms of those states will yield
either 0 or 1 in the thermodynamic limit and the scale of the
correlator is hence not known. In comparison, equal-time correlators
are evaluated as
$\braket{\hat O(t)} = \frac{\bra{ \psi(t)} \hat O
\ket{\psi(t)}}{\braket{\psi(t) | \psi(t)}}$,
but the denominator is clearly ill-defined for a correlator
$\braket{\hat O(t^\prime, t)}$ between two different infinite quantum
states $\Ket{\psi(t^\prime)}$ and $\Ket{\psi(t)}$.
Here, we avoid both problems by adding one auxiliary site to each of
the physical sites of our system while preserving translational
invariance. We demonstrate the method by evaluating the return
probability and diagonal-spin-correlators of a single hole in the
two-dimensional antiferromagnetic background of the $t-J$ model\cite{dagotto90:_stron, poilblanc92:_singl, poilblanc93:_singl, poilblanc93:_dynam, beran96:_eviden_jbb, bohrdt19:_dynam, zhang91:_exact_j_hubbar_hamil, mierzejewski11:_noneq_quant_dynam_charg_carrier, lenarifmmode14:_optic, goleifmmode14:_mechan, eckstein14:_ultraf_separ_photod_carrier_mott_antif}.
\section{\label{sec:exc}Local excitations and non-equal-time
correlators}
Consider a system composed of physical local state spaces
$\mathcal{H}^p_i$ repeated on each site $i$ of an infinite lattice. We
will later focus on the case of a square two-dimensional lattice, but
the method likewise applies to other lattice geometries. The total
Hilbert space is the tensor product of the local spaces,
\begin{equation}
\mathcal{H}^p = \bigotimes_i \mathcal{H}^p_i \;.
\end{equation}
We can represent a translationally invariant quantum state
$\Ket{\psi^p} \in \mathcal{H}^p$ using a tensor network ansatz if it
has low entanglement, which is typically true for ground states of
local Hamiltonians. If $\Ket{\psi^p}$ is only invariant under
translation by multiple sites (such as e.g. an antiferromagnetic state
under translation by two instead of one site), we can also capture
this by using a sufficiently large unit cell of tensors in the ansatz.
To simulate a local excitation without breaking translational
invariance, we now create a translationally invariant superposition of
excitations on top of our initial state, simulate the time evolution
of this superposition under some Hamiltonian $\hat H$ and then select
the part of the superposition which contains an excitation at a
specific local site\cite{paredes05:_exploit_quant_paral_simul_quant,
knap13:_probin_real_space_time_resol}.
To create the superposition of local excitations, one could apply
e.g. $\left( \hat 1 + \epsilon \hat x^p_i \right)$ with some creation or
annihilation operator $\hat x^p_i$ and a small prefactor $\epsilon$
governing the density of excitations on each site as
\begin{equation}
\hat Y = \prod_i \left( \hat 1 + \epsilon \hat x^p_i \right) \;.
\end{equation}
If we let this operator act on our quantum state, we obtain a superposition
\begin{equation}
\hat Y \Ket{\psi^p} = \Ket{\psi^p} + \sum_i \epsilon \hat x^p_i \Ket{\psi^p} + \mathcal{O}(\epsilon^2).
\end{equation}
By including a suitable operator (e.g. the particle number operator)
in expectation values later, we can select one of the states with an
excitation (e.g. a hole at a particular site), which is most likely
one of the summands in the second term if $\epsilon$ is
small. Crucially, we can also do so after a real-time evolution of
$\hat Y \Ket{\psi^p}$, in this way post-selecting the evolution of a
single excitation out of the translationally invariant background.
This approach using $\hat Y$ has two downsides: First, the operator
$\hat x^p_i$ alone typically breaks some symmetry of the system such
as spin projection, particle conservation or fermionic parity. While
the former two merely lead to a less efficient simulation (as those
symmetries then cannot be used in the tensor network ansatz), the
breaking of fermionic parity is a serious problem which makes the
simulation of fermionic systems impossible. Furthermore, while it is
possible to post-select a quantum state with an excitation present at
a particular site \emph{after} the time evolution, we cannot
post-select for a state where the excitation was \emph{created at a
particular site initially}.
To circumvent both problems, we add an auxiliary state space
$\mathcal{H}^a_i$ of the same dimension as $\mathcal{H}^p_i$ to each
site of our lattice. The total Hilbert space $\mathcal{H}$ is then
defined as the tensor product of the auxiliary and physical tensor
product spaces on each lattice site
\begin{equation}
\mathcal{H} = \bigotimes_i \left( \mathcal{H}^p_i \otimes \mathcal{H}^a_i \right) \;.
\end{equation}
The initial quantum state $\Ket{\psi^p}$ is extended by a
suitably-chosen empty quantum state $\Ket{0^a}$ to form a state in the
full Hilbert space $\Ket{\psi} = \Ket{\psi^p} \otimes \Ket{0^a}$. In
the case of the $t-J$ model, for example, $\Ket{0^a}$ is the state
with zero particles on each site in the auxiliary system. The
Hamiltonian $\hat H$ used for the time evolution still only acts on
the physical system.
We then replace the excitation operator $\hat Y$ by a form which
conserves all symmetries of the system, namely
\begin{equation}
\hat X = \prod_i \left( \hat 1 + \epsilon \hat x^p_i \left(\hat x^a_i\right)^\dagger + \mathrm{h.c.} \right) \;,
\end{equation}
where for convenience with existing implementations, we then instead
use the local exponential form
\begin{equation}
\hat X = \prod_i \mathrm{exp} \left\{ \epsilon \hat x^p_i \left(\hat x^a_i\right)^\dagger + \mathrm{h.c.} \right\} \;.
\end{equation}
Instead of creating excitations from nothing as $\hat Y$ did, $\hat X$
now moves (e.g.) particles from the physical to the auxiliary system
and thereby creates an excitation in the physical sector. The density
of particles moved and hence the density of local excitations is given
by $\epsilon$; ideally, we want to consider the limit $\epsilon \to 0$.
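To make the action of $\hat X$ concrete, the following minimal numpy sketch (not the implementation used in this work) realises the single-site building block on a toy space with hard-core occupations $n^p, n^a \in \{0,1\}$ instead of the full fermionic structure. It checks that the exponential conserves the total particle number $\hat n^p + \hat n^a$ and produces an $\mathcal{O}(\epsilon)$ hole amplitude; note that $\hat X$ is the exponential of a Hermitian operator and hence not unitary, so the amplitudes come out as $\cosh\epsilon$ and $\sinh\epsilon$.

```python
import numpy as np

def expm_taylor(M, order=30):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, order + 1):
        term = term @ M / k
        out = out + term
    return out

# single-site toy: physical and auxiliary occupations in {0, 1}
a = np.array([[0., 1.], [0., 0.]])       # annihilation operator, |1> -> |0>
num = a.T @ a                            # number operator

# epsilon * (x^p (x^a)^dagger + h.c.) on H^p (x) H^a
eps = 0.05
hop = np.kron(a, a.T) + np.kron(a.T, a)
X = expm_taylor(eps * hop)

# X commutes with the total particle number, so no symmetry is broken
N_tot = np.kron(num, np.eye(2)) + np.kron(np.eye(2), num)
assert np.allclose(X @ N_tot, N_tot @ X)

# acting on |n_p = 1, n_a = 0> creates an O(eps) hole amplitude
psi = np.kron([0., 1.], [1., 0.])
phi = X @ psi
```

In a simulation one would re-normalise the resulting state, since $\hat X$ does not preserve the norm.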
No symmetry is broken during this process if we account for auxiliary
particles in the same way as we account for physical particles and
$\hat X$ hence leaves the fermionic parity of the state well-defined.
Additionally, it is now possible to not only post-select based on the
physical state of some particular site (to select an excitation
present there after the evolution), but also to post-select based on
the auxiliary state of some particular site. Because there are no
dynamics in the auxiliary layer, the auxiliary state at time $t$ is
equal to the auxiliary state at time $0$ and hence allows for the
selection of an excitation which was created at a particular site
initially.
\section{Application to the $t-J$ model}
Specifically, we consider the two-dimensional $t-J$ model on the
square lattice with a local physical three-dimensional state space
$\mathcal{H}^p_i = \mathrm{span}\left\{ \Ket{0^p_i},
\Ket{\uparrow^p_i}, \Ket{\downarrow^p_i} \right\}.$
Taking a second such space $\mathcal{H}^a_i$ increases the local
physical dimension of the iPEPS tensor from three to nine, but iPEPS
methods scale favourably in this dimension, so this is not a
concern. Let $\hat c^{p(\dagger)}_{i\sigma}$ annihilate (create) a
physical fermion on site $i$ with spin $\sigma$, let
$\hat s^{p[+,-,z]}_i$ be the physical spin-$[+,-,z]$ operator on site
$i$ (0 if the site is empty) where $\hat s^z$ has eigenvalues
$\pm \nicefrac{1}{2}$ and let $\hat c^{a(\dagger)}_{i\sigma}$
annihilate (create) an auxiliary fermion on site $i$ with spin
$\sigma$. Finally, let $\hat n^p_i$ ($\hat n^a_i$) denote the particle
number operator ($0$ or $1$) on the physical (auxiliary) site $i$.
The Hamiltonian
\begin{equation}
\hat H = -t\sum_{\langle i, j\rangle, \sigma} \left( \hat c^{p\dagger}_{i\sigma} \hat c^p_{j\sigma} + \hat c^{p\dagger}_{j\sigma} \hat c^p_{i\sigma} \right) + J \sum_{\langle i,j \rangle} \left[ \frac{1}{2} \left( \hat s^{p+}_i \hat s^{p-}_j + \hat s^{p+}_j \hat s^{p-}_i \right) + \hat s^{pz}_i \hat s^{pz}_j - \frac{1}{4} \hat n^p_i \hat n^p_j \right]
\end{equation}
acts on the physical sector only and is the standard $t-J$ Hamiltonian
linking all nearest-neighbour sites $\langle i, j \rangle$. Here, we
fix $t=1$ and $J=\nicefrac{1}{3}$.
Now take $\ket{\mathrm{GS}}$ to be an approximation of the infinite ground
state of $\hat H$ at a given iPEPS bond dimension $D$ and half-filling
(one fermion per site) in the physical sector, with the auxiliary
sector being entirely empty:
\begin{align}
\ket{\mathrm{GS}} = \ket{\mathrm{GS}^p} \otimes \ket{0^a} \;.
\end{align}
The physical ground state $\ket{\mathrm{GS}^p}$ is simply the
ground-state of the Heisenberg Hamiltonian, which can be reasonably
well approximated by a $D=4$ or $D=5$ iPEPS (other states may of
course require a larger bond dimension). This state breaks
translational invariance, so we use a $2 \times 2$ unit cell. It
preserves both $\mathrm{U}(1)_N$ particle number and
$\mathrm{U}(1)_{S^z}$ spin-projection symmetry and we make use of
both\cite{hubig18:_abelian}. Fermionic commutation relations are
ensured using the fermionic tensor network
ansatz\cite{barthel09:_contr, bultinck17:_fermion} as implemented in
\textsc{SyTen}'s \texttt{STensor}
class\cite{hubig17:_symmet_protec_tensor_networ, hubig:_syten_toolk}.
Given $\Ket{\mathrm{GS}}$ as described above, we create the initial
excitation with the operator
\begin{equation}
\hat X = \prod_i \mathrm{exp}\left\{ \epsilon \sum_\sigma \left( \hat c^{p\dagger}_{i\sigma} \hat c^a_{i\sigma} + \hat c^{a\dagger}_{i\sigma} \hat c^p_{i\sigma} \right) \right\} \;.
\end{equation}
This operator will move particles from the occupied physical sector to
the empty auxiliary sector and results in a new state $\Ket{\psi(0)}$
with a finite hole density on each physical site. Evolving this state
under the physical Hamiltonian $\hat H$ is straightforward and for a
given time $t$ results in a state
\begin{equation}
\Ket{\psi(t)} = e^{-\mathrm{i}t\hat H} \Ket{\psi(0)}\;.
\end{equation}
In the following, we are particularly interested in (a) the return
probability $p^R(t)$ of a hole to its creation site and (b) the
diagonal spin-spin correlator $z^{\mathrm{diag}}(t)$ at time $t$ with
a hole present at time $t$ between the two spins.
The return probability $p^R(t)$ is given by
\begin{equation}
p^R(t) = \frac{\Bra{\psi(t)} \left( \hat 1 - \hat n^p_i \right) \hat n^a_i \Ket{\psi(t)}}{\Braket{\psi(t) | \hat n^a_i | \psi(t)}} \;,
\end{equation}
where the numerator evaluates the joint probability of a hole created
at site $i$ (via the density on the auxiliary site, $\hat n^a_i$)
being present there at a later time (via the density on the physical
site, $\hat n^p_i$), with the denominator conditioning on the initial
creation of a hole at this site. As the hole density is low, we
neglect the case of the hole created at site $i$ moving away and
another hole created at some neighbouring site $j$ taking its place.
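The conditioning structure of this estimator can be illustrated on a single physical--auxiliary site pair; the sketch below uses an arbitrary normalised toy state (it is not the actual iPEPS contraction) to show how the joint expectation value is normalised by the hole-creation probability.

```python
import numpy as np

# basis |n_p, n_a> with occupations in {0, 1}; an arbitrary normalised state
rng = np.random.default_rng(0)
psi = rng.normal(size=4)
psi /= np.linalg.norm(psi)

num = np.diag([0., 1.])
n_p = np.kron(num, np.eye(2))            # physical density
n_a = np.kron(np.eye(2), num)           # auxiliary density

joint = psi @ ((np.eye(4) - n_p) @ n_a @ psi)   # hole here AND created here
created = psi @ (n_a @ psi)                      # hole created here
p_R = joint / created                            # conditional probability
```

Since $(\hat 1 - \hat n^p)\hat n^a$ and $\hat n^a$ are commuting projectors here, the ratio is automatically a probability in $[0, 1]$.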
For the diagonal spin-spin correlator around a hole, let us first
define site indices $00$, $10$ and $11$ of the $2 \times 2$ unit
cell. The correlator is then
\begin{align}
z^\mathrm{diag}(t) & = \frac{\Bra{\psi} \hat s^{pz}_{00}(t) \left( \hat 1 - \hat n^p_{10}(t) \right) \hat s^{pz}_{11}(t)\Ket{\psi}}{\Bra{\psi} \left( \hat 1 - \hat n^p_{10}(t) \right) \Ket{\psi}} \\
& = \frac{\Bra{\psi(t)} \hat s^{pz}_{00} \left( \hat 1 - \hat n^p_{10} \right) \hat s^{pz}_{11}\Ket{\psi(t)}}{\Bra{\psi(t)} \left( \hat 1 - \hat n^p_{10} \right) \Ket{\psi(t)}} \;.
\end{align}
These correlators are sketched in \Cref{fig:observables-tn}. Note
that, if desired and with larger computational effort, it would be
conceivable to repeat the same calculation at different values of
$\epsilon$ and subsequently extrapolate $\epsilon \to 0$.
\begin{figure}
\centering
\tikzsetnextfilename{observables-tn}
\begin{tikzpicture}
\node[] (pr) at (0.875,3.5) {$p^R(t)$};
\node[draw,circle,dotted,draw=black, fill=white, thick, minimum width=1em] (aux00) at (0.25,0) {};
\node[draw,circle,draw=black, fill=black,thick, minimum width=1em] (phys00) at (0,0) {};
\node[draw,circle,dotted,draw=black, fill=white, thick, minimum width=1em] (aux01) at (0.25,1.5) {};
\node[draw,circle,draw=black, fill=black,thick, minimum width=1em] (phys01) at (0,1.5) {};
\node[draw,circle,dotted,draw=black, fill=white, thick, minimum width=1em] (aux10) at (1.75,0) {};
\node[draw,circle,draw=black, fill=black,thick, minimum width=1em] (phys10) at (1.5,0) {};
\node[draw,circle,dotted,draw=black, fill=white, thick, minimum width=1em] (aux11) at (1.75,1.5) {};
\node[draw,circle,draw=black, fill=black,thick, minimum width=1em] (phys11) at (1.5,1.5) {};
\draw[dashed,thick] (0.125,0) -- (0.125,1.5);
\draw[dashed,thick] (1.625,0) -- (1.625,1.5);
\draw[dashed,thick] (aux00) -- (phys10);
\draw[dashed,thick] (aux01) -- (phys11);
\draw[dashed,thick] (phys00) -- +(-0.75,0);
\draw[dashed,thick] (phys01) -- +(-0.75,0);
\draw[dashed,thick] (aux10) -- +(+0.75,0);
\draw[dashed,thick] (aux11) -- +(+0.75,0);
\draw[dashed,thick] (0.125,0) -- +(0,-0.75);
\draw[dashed,thick] (0.125,1.5) -- +(0,+0.75);
\draw[dashed,thick] (1.625,1.5) -- +(0,+0.75);
\draw[dashed,thick] (1.625,0) -- +(0,-0.75);
\draw[thick,<-] (phys00) -- +(-0.75,-0.75) node[below]{$1 - \hat n^p_{00}$};
\draw[thick,<-] (aux00) -- +(0.75,-0.75) node[below]{$\hat n^a_{00}$};
\node[] (zt) at (5.875,3.5) {$z^{\mathrm{diag}}(t)$};
\node[draw,circle,dotted,draw=black, fill=white, thick, minimum width=1em] (aux00) at (5.25,0) {};
\node[draw,circle,draw=black, fill=black,thick, minimum width=1em] (phys00) at (5,0) {};
\node[draw,circle,dotted,draw=black, fill=white, thick, minimum width=1em] (aux01) at (5.25,1.5) {};
\node[draw,circle,draw=black, fill=black,thick, minimum width=1em] (phys01) at (5,1.5) {};
\node[draw,circle,dotted,draw=black, fill=white, thick, minimum width=1em] (aux10) at (6.75,0) {};
\node[draw,circle,draw=black, fill=black,thick, minimum width=1em] (phys10) at (6.5,0) {};
\node[draw,circle,dotted,draw=black, fill=white, thick, minimum width=1em] (aux11) at (6.75,1.5) {};
\node[draw,circle,draw=black, fill=black,thick, minimum width=1em] (phys11) at (6.5,1.5) {};
\draw[dashed,thick] (5.125,0) -- (5.125,1.5);
\draw[dashed,thick] (6.625,0) -- (6.625,1.5);
\draw[dashed,thick] (aux00) -- (phys10);
\draw[dashed,thick] (aux01) -- (phys11);
\draw[dashed,thick] (phys00) -- +(-0.75,0);
\draw[dashed,thick] (phys01) -- +(-0.75,0);
\draw[dashed,thick] (aux10) -- +(+0.75,0);
\draw[dashed,thick] (aux11) -- +(+0.75,0);
\draw[dashed,thick] (5.125,0) -- +(0,-0.75);
\draw[dashed,thick] (5.125,1.5) -- +(0,+0.75);
\draw[dashed,thick] (6.625,1.5) -- +(0,+0.75);
\draw[dashed,thick] (6.625,0) -- +(0,-0.75);
\draw[thick,<-] (phys00) -- +(-0.75,-0.75) node[below]{$\hat s^{pz}_{00}$};
\draw[thick,<-] (phys11) -- +(-0.75,+0.75) node[above]{$\hat s^{pz}_{11}$};
\draw[thick,<-] (phys10) -- +(-0.75,-0.75) node[below]{$1-\hat n^p_{10}$};
\end{tikzpicture}
\caption{\label{fig:observables-tn}Top view of a single iPEPS unit
cell, representing a state $\Ket{\psi(t)}$. Each site is the
product space of a physical (black) and auxiliary (white/dotted)
site. Sites are connected via iPEPS virtual bonds (dashed). Left:
The return probability $p^R(t)$ is evaluated by measuring
$1 - \hat n^p_{i}$ and $\hat n^a_i$ at the same iPEPS site. Right:
The equal-time correlator $z^\mathrm{diag}(t)$ around a hole
at time $t$ is evaluated by measuring $\hat s^{pz}_{00}$,
$\hat s^{pz}_{11}$ and $1 - \hat n^p_{10}$.}
\end{figure}
\section{Results}
In the following, we apply the method described above to evaluate the
return probability and diagonal-nearest-neighbour spin correlators in
the $t-J$ model after the effective introduction of a single hole. We
also simulate this system using time-dependent matrix-product
states\cite{paeckel19:_time} on cylinders of width 4 and 6 to obtain
comparison data for short times.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{retprob_mps_not_all.pdf}
\caption{\label{fig:retprob-tdmps}Return probability as calculated
using MPS-TDVP or the MPO $W^{\mathrm{II}}$ methods at
$J=\nicefrac{1}{3}$. Both methods used a step size
$\delta t = 0.05$. On $W=4$ cylinders, results are well-converged
at $m=1000$ already. On $W=6$ cylinders, we only achieve
qualitative convergence as the required MPS bond dimension would
exceed computational resources.}
\end{figure}
\paragraph{Time-dependent matrix-product states} on cylindrical
geometries are used to provide comparison data, assumed to be valid at
least for short times when the finite circumference of the cylinders
is not yet relevant. We compute the ground-states of the $t-J$ model
at half-filling and apply an excitation
$\hat c_{0,\uparrow} + \hat c_{0,\downarrow}$ in the centre of the
system. The resulting excited state is then time-evolved with either
the 2TDVP\cite{haegeman16:_unify} or the MPO $W^{\mathrm{II}}$
method\cite{kjaell13:_phase_xxz, zaletel15:_time,
gohlke17:_dynam_kitaev_heisen_model} using the
\textsc{SyTen}\cite{hubig17:_symmet_protec_tensor_networ,
hubig:_syten_toolk} and \textsc{TeNPy}
toolkits\cite{hauschild18:_effic_tensor_networ} respectively. The
return probability is given simply as $\Braket{1 - \hat n_0(t)}$. On
cylinders of width $W=4$, convergence is easy to achieve at a modest
bond dimension of $m=1000$; increasing the bond dimension further (up
to $m=5000$) does not lead to different results. As the MPS bond
dimension scales exponentially with the circumference of the cylinder,
convergence is more difficult on $W=6$ cylinders. Running the time
evolution at the same fixed bond dimension as the initial ground state
does not converge well. Preparing the initial ground state at a
smaller bond dimension $200$ and then running the time evolution at
bond dimension $m=1000$ leads to results, at least at short times,
very similar to those on the $W=4$ cylinder (cf.~\cref{fig:retprob-tdmps}), which is
expected as the short-time dynamics are independent of the spin
background and hence governed by the hole motion only. Departing from
the short-time regime, however, the results become
uncontrolled. Increasing the bond dimension further or evolving with
the same bond dimension as the initial state does not lead to good
convergence. Additionally, while the hole spreads isotropically along
the $x$- and $y$-direction on the $W=4$ cylinder, this is not the case
on the $W=6$ cylinder (not shown). Overall, we only obtain reliable
data for the return probability on cylinders of width $W=4$ and qualitative data for cylinders of width $W=6$.
\begin{figure}[p]
\centering
\includegraphics[width=0.8\textwidth]{retprob_D4_e1e-1_short.pdf}
\caption{\label{fig:retprob_short}Return probability $p^R(t)$
calculated using iPEPS with the simple update and td-MPS on short
times from an initial $D^\prime=4$ state excited with a global
hole density of 0.01 and $J=\nicefrac{1}{3}$ with various iPEPS
bond dimensions $D$. We observe good convergence of the initial
decay once $D \geq 8$. Data is evaluated every $\delta t = 0.05$,
with symbols shown only for identification.}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=0.8\textwidth]{ztptzt_D4_e1e-1_short.pdf}
\caption{\label{fig:ztptzt_short}Equal-time diagonal spin correlator
$z^\mathrm{diag}(t)$ when a hole is present in the lower right
side of the two spins calculated using iPEPS with the simple
update. The expected zero crossing is observed when increasing the
iPEPS bond dimension around time $t \approx 0.6$. Data is
evaluated every $\delta t = 0.05$, with symbols shown only for
identification.}
\end{figure}
\paragraph{In the iPEPS simulation,} we use the fast full update (FFU,
\cite{phien15:_fast, phien15:_infin}) to obtain the initial ground
state and perform the subsequent evolution with the simple update
(SU). While the (fast) full update would be able to make better use of
the bond dimension of our state, we have encountered some stability
issues\cite{hubig19:_time} resulting from this update method which
lead to very limited time scales. The simple update may not make
perfect use of the iPEPS bond dimension but, given a sufficiently
large bond dimension, still provides good results without any of the
stability issues observed with the FFU.
We prepare the initial (ground) state at an initial bond dimension
$D^\prime = 4$ and create an excitation density of $10^{-2}$. During
the subsequent real-time evolution, we allow a range of bond
dimensions $D=4, \ldots, 16$. We focus on even bond dimensions $D$, as
odd bond dimensions show slightly worse convergence behaviour due to
truncation within spin multiplets. Future computational and
algorithmic advances may make bond dimensions $D > 17$ possible. We
use a time step size $\delta t = 0.01$ together with a second-order
Trotter decomposition of the time-evolution operator.
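The local accuracy of such a symmetric splitting can be checked on any pair of non-commuting generators; the toy comparison below (two arbitrary $2\times 2$ matrices, unrelated to the actual $t-J$ gates) confirms the $\mathcal{O}(\delta t^3)$ error of a single second-order step.

```python
import numpy as np

def expm_taylor(M, order=40):
    """Matrix exponential via truncated Taylor series (fine for small norms)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = out.copy()
    for k in range(1, order + 1):
        term = term @ M / k
        out = out + term
    return out

# H = A + B with two non-commuting toy terms
A = np.array([[0., 1.], [1., 0.]], dtype=complex)
B = np.array([[1., 0.], [0., -1.]], dtype=complex)
dt = 0.01

exact = expm_taylor(-1j * dt * (A + B))
# second-order (symmetric) Trotter step: e^{-i dt A/2} e^{-i dt B} e^{-i dt A/2}
step = (expm_taylor(-1j * dt / 2 * A)
        @ expm_taylor(-1j * dt * B)
        @ expm_taylor(-1j * dt / 2 * A))
err = np.linalg.norm(step - exact)   # local error is O(dt^3)
```

With $\delta t = 0.01$ the deviation per step is of order $10^{-6}$, well below the truncation error of the tensor network itself.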
Exploratory calculations at $D^\prime = 5$ and/or hole density
$\approx 10^{-4}$ result in decreased hole mobility at a given
evolution bond dimension $D$, as the competition between spin and hole
entanglement during the iPEPS state truncation favours the spin sector
disproportionately when it is initially more strongly entangled
($D^\prime = 5$) or when there are fewer holes. Hole mobility still
increases when increasing the evolution bond dimension $D$, but
convergence is much slower than when starting with $D^\prime = 4$.
Expectation values are calculated using the corner transfer matrix at
increasing bond dimensions $\chi$ until the difference between results
of two successive dimensions $\chi$ and $2 \chi$ is sufficiently
small; error bars are smaller than symbol sizes in all cases.
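This check follows a generic doubling pattern, sketched below with a hypothetical \texttt{evaluate} callback standing in for the full corner-transfer-matrix contraction.

```python
def converge_in_chi(evaluate, chi0=16, tol=1e-8, chi_max=1024):
    """Increase the environment dimension chi until two successive
    results (at chi and 2 * chi) agree to within tol."""
    chi = chi0
    prev = evaluate(chi)
    while 2 * chi <= chi_max:
        cur = evaluate(2 * chi)
        if abs(cur - prev) < tol:
            return cur, 2 * chi
        chi, prev = 2 * chi, cur
    raise RuntimeError("observable not converged up to chi_max")

# usage with a toy observable that converges exponentially in chi
value, chi = converge_in_chi(lambda chi: 1.0 + 2.0 ** (-chi))
```

The returned $\chi$ is the first environment dimension at which the doubling test passes.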
\Cref{fig:retprob_short} and \Cref{fig:ztptzt_short} show the
short-time dynamics of the return probability $p^R(t)$ and diagonal
spin-spin correlator $z^\mathrm{diag}(t)$ calculated with iPEPS. We
observe good convergence in the bond dimension starting from
$D \geq 8$ for short times. There, the td-MPS results are
reproduced. In particular, the motion of the hole away from its
initial site on times of the order of the nearest-neighbour hopping is
captured well. At the same time, $z^\mathrm{diag}(t)$ becomes negative
because the moving hole distorts the original antiferromagnetic
background. Hence, spin correlators between both originally
nearest-neighbour and originally next-nearest-neighbour fermions
contribute to $z^\mathrm{diag}(t)$. The stronger nearest-neighbour
correlators then dominate the sum and cause the observed sign
change. Because the $\mathrm{SU}(2)$-spin symmetry is spontaneously
broken along the preferred $z$-axis in the iPEPS calculation but still
present in the finite td-MPS calculations, a comparison of numerical
values is not meaningful in this case.
For longer times, convergence is very difficult, as our ansatz is
inherently limited in entanglement and -- due to the simple update --
does not make optimal use of the available bond dimension.\footnote{A
further check on convergence may lie in a deeper analysis of the
singular value spectrum obtained after each simple update. While not
exact due to missing normalisation of the environment, one might
still expect a flattening of the spectrum as entanglement grows over
time. We would like to thank Referee 3 for this suggestion.}
However, the first revival of the return probability observed in the
td-MPS data is still reproduced well by the iPEPS results around
$t \approx 1.5$, cf.~\Cref{fig:retprob}. The iPEPS data also contains
a second, much larger revival at later times $t \approx 3.5$ which is
not observed in the td-MPS data and not physically expected either
(instead we expect the hole to move away from its creation point with
frustrated spins left behind healed by spin
flips\cite{bohrdt19:_dynam}). At the moment, it is unclear whether
this revival is due to limited entanglement in the iPEPS ansatz which
hinders healing of frustrated spins through spin-exchange interactions
and hence increases the cost of moving the hole further from its
origin or a side-effect of the typically overestimated magnetisation
in the iPEPS ground state which may lead to more Ising-like physics.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{retprob_D4_e1e-1.pdf}
\caption{\label{fig:retprob}Same as \Cref{fig:retprob_short} for
longer times $t \geq 1$. The return probability shows qualitative
features common to all calculations at large bond dimensions, but
quantitative convergence is difficult. The revival around
$t \approx 3.5$ is not expected and likely due to limited
entanglement in our ansatz.}
\end{figure}
\section{Conclusion}
We have shown that both the simulation of local excitations and the
evaluation of time-dependent correlators are possible within the iPEPS
formalism. Our predictions, such as the sign-change of diagonal
correlators around the hole in \Cref{fig:ztptzt_short}, can already be
tested in state-of-the-art quantum-gas
microscopes\cite{mazurenko17:_fermi, chiu19:_strin_hubbar,
koepsell19:_imagin_fermi,
vijayan19:_time_resol_obser_spin_charg}. Future work using an
environment-based truncation scheme such as the FFU together with a
stabilised environment (e.g. as introduced in
Ref.~\cite{vanderstraeten16:_gradien}) will be in a position to make
much better use of the available bond dimension than the simple update
employed here and hence will be able to analyse the physics of the
system for longer times, in particular the interactions between holons
and spinons. This would also open an alternative
avenue\cite{vanderstraeten19:_simul} to obtaining spectral functions
of two-dimensional systems.
\section*{Acknowledgements}
The authors would like to thank I. Bloch, E. Demler, D. Golez,
M. Greiner, I. P. McCulloch, F. Pollmann, and U. Schollwöck for useful
discussions.
\paragraph{Funding information}
C. H. and J. I. C. acknowledge funding through ERC Grant QUENOCOBA,
ERC-2016-ADG (Grant no. 742102) and by the DFG under Germany's Excellence
Strategy -- EXC-2111 -- 390814868. A.B., F.G., and M.K. acknowledge
support from the Technical University of Munich -- Institute for
Advanced Study, funded by the German Excellence Initiative, the
European Union FP7 under grant agreement 291763, the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) under
Germany's Excellence Strategy -- EXC-2111 -- 390814868, DFG grant
No. KN1254/1-1, DFG TRR80 (Project F8), and from the European Research
Council (ERC) under the European Union's Horizon 2020 research and
innovation programme (grant agreement No. 851161).
\section{Introduction}
\label{sec_intro}
Let $W$ be any finite Weyl group.
In \cite{Lam}, Lam defined an interesting Markov chain $\Theta'_W$ on $W$ whose stationary distribution is connected to the study of reduced expressions in the corresponding affine group.
It turns out \cite{AL} that for $W$ of type A, $\Theta'_W$ is the well-known multi-type TASEP on a ring, studied for some time by probabilists.
In particular its stationary distribution has a very elegant description in terms of the \emph{multi-line queues} of Ferrari and Martin \cite{FM}.
One could hope for a similar description of the stationary distribution of $\Theta'_W$ for general $W$.
This appears to be difficult.
Lam also defined a weighted variant $\Theta_W$ whose stationary distribution seems more amenable to analysis (though its connection to reduced words is unclear).
\begin{itemize}
\item For each $W$, we identify certain projections of $\Theta_W$, which generalize the familiar operation of merging different particle classes in the TASEP. We state this in its most general form in Section 5.
\item For $W$ of type C, we compute the stationary distribution of one of these projections. We also show how to `invert' some of the projections. That is, we give a way to simulate one projection $\Theta_1$ in terms of a further projection $\Theta_2$ of it, together with independent randomness. One of these inversions can be seen as an analogue of the multi-line queues of Ferrari and Martin (see Theorem \ref{th_queue}).
\item We give an equivalent description of the classical (type A) TASEP which seemingly can be formulated purely in terms of geometrical notions (i.e. independently of the permutation representation of the Weyl group of type A). Though it has some interest in itself, we do not manage to generalize it to other Weyl groups.
\item In the final section we make some concluding remarks and pose several questions.
\end{itemize}
The paper is structured as follows. In Section \ref{sec_C}, we describe Lam's chain in the special case we are most interested in, the type C TASEP, and explain the notion of projections of Markov chains in this context. In Sections \ref{sec_fi} and \ref{sec_se} we prove two theorems partially describing the stationary distribution of $\Theta_W$ for $W$ of type $C$. In Section \ref{sec_general} we generalize some of the results in the preceding sections to arbitrary Weyl groups. In Section \ref{sec_ktasep} we prove a theorem on parallel update rules for the classical TASEP on a ring, which coincides with $\Theta_W$ for $W$ of type A. Finally, in Section \ref{sec_que} we pose some questions to which we have no satisfactory answer.
\section{Type C}
\label{sec_C}
Fix $n \geq 2$.
Throughout this section we fix $W$ to be the Weyl group of type C and rank $n$.
Rather than work through Lam's definition of $\Theta = \Theta_W$ (given in Section \ref{sec_general}), we give another definition which is easily checked to be equivalent (using the well-known permutation representation of $W$, see \cite{BB}).
A \emph{state} in $\Theta$ is an assignment of labeled circles to the sites of a cycle of length $2n$ drawn as in Figure \ref{fi_1} (in the case $n = 10$).
\begin{figure}
\begin{tikzpicture}
\node[draw,circle] at (1,0){$1$};
\node at (1,1.5){$\cdot$};
\node[draw,circle] at (2,1.5){$1$};
\node at (2,0){$\cdot$};
\node[draw,circle] at (3,1.5){$2$};
\node at (3,0){$\cdot$};
\node[draw,circle] at (4,1.5){$4$};
\node at (4,0){$\cdot$};
\node[draw,circle] at (5,0){$2$};
\node at (5,1.5){$\cdot$};
\node[draw,circle] at (6,0){$3$};
\node at (6,1.5){$\cdot$};
\node[draw,circle] at (7,1.5){$3$};
\node at (7,0){$\cdot$};
\node at (8,0){$\cdot$};
\node at (8,1.5){$\cdot$};
\node[draw,circle] at (9,0){$5$};
\node at (9,1.5){$\cdot$};
\node[draw,circle] at(10,1.5){$1$};
\node at (10,0){$\cdot$};
\draw(1.5,1.0)--(9.5,1.0);
\draw(1.5,0.5)--(9.5,0.5);
\draw(1.5,1.0)--(1.5,0.5);
\draw(9.5,1.0)--(9.5,0.5);
\end{tikzpicture}
\caption{A state.}
\label{fi_1}
\end{figure}
We refer to the circles in the diagram as \emph{particles}, and the numbers in them as their corresponding \emph{classes}. No two particles are allowed to occupy the same column, though columns are allowed to be empty (such as column number 8 from the left in the example).
There is an exponential bell with rate $1$ at each site. When a bell is activated, the particle at that position (if there is one) either (i) jumps to the next site $s$ counter-clockwise if that site is empty, (ii) trades places with the particle at $s$ if that particle has a higher class, or (iii) does nothing if the particle at $s$ has a lower class. Moreover, the bell in the upper line in column $i$ and the bell in the lower line in column $i+1$ trigger each other, for each $1 \leq i < n$. This ensures that no two particles occupy the same column at any time. We may thus think of the pair of bells as a single bell with rate $2$. There are then $n-1$ bells with rate $2$ in the middle and two bells with rate $1$ in the leftmost and rightmost column.
We denote the action of a bell at site $i$ in the lower line and site $i+1$ in the upper line by $\sigma_i$, for $1 \leq i < n$. The bell at the site furthest down to the right similarly defines $\sigma_n$ and the bell furthest up to the left defines $\sigma_0$.
This defines our Markov chain $\Theta$. Clearly the number $m_i$ of particles of each class $i$ is conserved by the dynamics. We refer to the vector $\mathbf{m} = (m_1, m_2, \dots)$ as the {\it type} of the state.
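The elementary jump rule shared by all bells can be sketched as follows; the pairing of sites into the bells $\sigma_0, \dots, \sigma_n$ and the cyclic geometry are kept abstract here, and the site labels are hypothetical.

```python
def bell(occ, s, t):
    """Apply the jump rule to an ordered pair of sites (s, t), where t is
    the next site counter-clockwise from s.  occ maps sites to particle
    classes (absent key = empty site).  The particle at s moves to t if t
    is empty, swaps with t if t holds a larger class, else nothing happens."""
    if s not in occ:
        return occ                       # no particle to move
    if t in occ and occ[t] <= occ[s]:
        return occ                       # blocked by a lower (or equal) class
    cs = occ.pop(s)
    ct = occ.pop(t, None)
    occ[t] = cs
    if ct is not None:
        occ[s] = ct                      # swap with the higher class
    return occ
```

For example, `bell({0: 2, 1: 3}, 0, 1)` swaps the two particles, while `bell({0: 3, 1: 2}, 0, 1)` leaves the state unchanged.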
When there is one particle of each class, and each column is occupied by one particle, states are in one-to-one correspondence with signed permutations $w$ of $[n]$ -- there is a particle of class $i$ in the upper row in column $j$ if $w(j) = +i$ and in the lower row if $w(j) = -i$. In the general case, states correspond to left cosets of the group of signed permutations -- we will explain this in more detail later.
For $J \subseteq [n]$ we define a type $\mathbf{m}_J$ as follows. Start with a state of type $(1,1,\dots,1)$. If $j \in J - \{n\}$, identify the particle classes $j$ and $j+1$. If $n \in J$, remove all particles of class $n$. Now renumber the remaining particle classes with integers $1, 2, \dots$. The resulting state has a type which we define to be $\mathbf{m}_J$. For example, the states in Figures \ref{fi_1} and \ref{fi_2} have type $\mathbf{m}_{\{1, 2,4,6,10\}}$ and $\mathbf{m}_{\{1,2,4,6\}}$ respectively. Denote the set of all states of type $\mathbf{m}_J$ by $\Omega_J$.
Clearly the $\mathbf{m}_J$ enumerate all the interesting variations of types of states. Using the identifications above we similarly obtain a projection map $\varphi_i$ from states of type $\mathbf{m}_J$ to states of type $\mathbf{m}_{J\cup\{i\}}$ for each $J$ such that $i\notin J$.
We denote the restriction of $\Theta$ to states of type $\mathbf{m}_J$ by $\Theta_J$. It is easy to see that $\Theta_J$ is aperiodic and irreducible. Thus it has a unique stationary distribution $\pi_J$.
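For concreteness, the identification step underlying $\varphi_i$ can be sketched for the base case of a state of type $(1,1,\dots,1)$ (for a general type $\mathbf{m}_J$ the class indices would first have to be renumbered as described above); the column encoding used below is an assumption of the sketch.

```python
def phi(state, i, n):
    """Project a state of type (1, ..., 1) by identifying classes i and
    i + 1 (or, for i == n, removing the particles of class n) and
    renumbering.  A state is a list of n columns; each column is None
    (empty) or a pair (row, c) with row in {'up', 'down'} and class c."""
    out = []
    for col in state:
        if col is None:
            out.append(None)
            continue
        row, c = col
        if i == n and c == n:
            out.append(None)            # remove the largest class
        elif i < n and c > i:
            out.append((row, c - 1))    # identify i, i + 1 and renumber
        else:
            out.append((row, c))
    return out
```

For instance, merging classes $2$ and $3$ in a three-column state relabels every class-$3$ particle as class $2$, while $i = n$ empties the columns holding the largest class.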
Suppose $(i, J, J')$ satisfies $i \notin J\subseteq [n]$, $J' = J \cup \{i\}$. We should think of $(i, J, J')$ as a covering relation in the boolean poset of all subsets of $[n]$, and call such a triple a {\it link}. Of course any two of $i,J$ and $J'$ determine the third, but the terminology of links turns out to be convenient.
\begin{prop}
\label{proj1}
Suppose $(X_1, X_2, \dots)$ is a random walk in $\Theta_J$, and $(i,J,J')$ is a link. Then $(\varphi_i(X_1), \varphi_i(X_2), \dots)$ is a random walk in $\Theta_{J'}$.
\end{prop}
\begin{defi}
For a Markov chain $\Theta$ with state space $\Omega$, we define its $\Omega\times\Omega$ {\it transition matrix} $M_\Theta$ by letting
\[
M_\Theta(u, v) = \sum_{\gamma : u \to v} \textrm{rate}(\gamma),
\]
the sum of the rates of all transitions $\gamma$ from state $u$ to state $v$.
\end{defi}
Write $M_J$ for $M_{\Theta_J}$. Thus the stationary distribution $\pi_J$ is an eigenvector of $M_J$, with eigenvalue $2n$.
Suppose $(i, J, J')$ is a link, and define a $\Omega_{J'}\times \Omega_J$ matrix $D = D_{i,J}$ by letting $D(v, u)$ be $1$ if $v = \varphi_i(u)$, and $0$ otherwise. Proposition \ref{proj1} can be strengthened as follows.
\begin{prop}
\label{proj2}
Suppose $(i,J,J')$ is a link. Then
\[
D_{i,J}M_J = M_{J'}D_{i,J}.
\]
\end{prop}
\begin{proof}
Choose $(v, u) \in \Omega_{J'} \times \Omega_J$.
The $(v,u)$ entry of the left hand side counts the number of $j\in [0, n]$ (where $0$ and $n$ are counted with weight $1$ and the others with weight $2$) such that $v = \varphi_i(\sigma_j u)$.
Similarly, the $(v,u)$ entry of the right hand side counts the number of $j'$ such that $v = \sigma_{j'} (\varphi_i(u))$.
These two counts can be matched to each other simply by taking $j = j'$.
\end{proof}
Proposition \ref{proj2} is stronger than Proposition \ref{proj1} in the sense that for \emph{any} eigenvector $v$ of $M_J$ with eigenvalue $\lambda$, we have $\lambda D_{i,J}v = D_{i,J}M_Jv = M_{J'} (D_{i,J}v)$, so that $D_{i,J}v$ is an eigenvector of $M_{J'}$ if it is non-zero.
Since $\pi_J$ spans a one-dimensional eigenspace of $M_J$ for each $J$, $D_{i,J}\pi_J$ must be a scalar multiple of $\pi_{J'}$. Though we are only interested in the stationary distribution of the $\Theta_J$, considering \emph{all} the eigenvectors will help us to compute the stationary distribution $\pi_J$. \footnote{In \cite{AAMP}, the authors investigate the eigenvalues and eigenvectors for the TASEP on a ring, ie. our chain when $W$ is of type A.}
Of course, Proposition \ref{proj2} appears to have no practical use -- the chain $\Theta_J$ is more complicated than the chain $\Theta_{J'}$. Therefore, the following fact from linear algebra is quite a revelation.
\begin{prop}
\label{linalg}
Suppose $A, B$ are matrices. If there is a matrix $D$ of full rank such that
\[
DA = BD,
\]
then there is a matrix $U$ of full rank such that
\[
AU = UB.
\]
\end{prop}
So the mere existence of the {\it projection matrix} $D_{i,J}$ implies the existence of some {\it conjugation matrix} $U_{i,J}$ such that
\[
M_J U_{i,J} = U_{i,J}M_{J'},
\]
which would allow us to compute $\pi_J$ from $\pi_{J'}$ by $\pi_{J} = U_{i,J}\pi_{J'}$! Of course there is no guarantee that there will be a ``simple'' matrix $U_{i,J}$ (e.g.\ with small positive integer entries). However, in the coming sections we will identify such $U_{i,J}$ for some links $(i,J,J')$. It would be very interesting if these $U_{i,J}$ could be defined at the generality of Proposition \ref{linalg} -- i.e.\ as a function of $(M_J,M_{J'},D_{i,J})$. The two cases we consider will correspond to adding/removing the particles of largest class, and the case of only two classes of particles.
\section{Adding particles of largest class}
\label{sec_fi}
In this section we construct a conjugation matrix $U = U_{i,J}$ for links $(i, J, J')$ with $i = n$ and $J\subseteq[n-1]$ arbitrary. To describe $U$, take a state $u$ of type $\mathbf{m}_{J'}$, and let $\vartheta \in \{+, -\}^n$. We will produce a new state $v = \tau_\vartheta u$ of type $\mathbf{m}_J$. Column $j$ in $v$ will be occupied in the upper line if $\vartheta_j = +$ and in the lower line otherwise (in particular, there will be no empty columns in $v$). Initially we refer to these sites as Not Yet Occupied (NYO).
Go through the particles in $u$ in any order such that particles of smaller class come before particles of larger class; the order among particles of the same class does not matter. When considering a particle $p$ at a site $s$ in $u$, find the first NYO site in $v$, going \emph{clockwise} from $s$, and put $p$ at this site; that site is now occupied. When all particles in $u$ have been processed, fill the remaining NYO sites in $v$ with particles of a new largest class.
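The queueing procedure above is easy to prototype. The following Python sketch uses our own encoding of a state (a dictionary mapping occupied sites $(\text{line}, \text{column})$ to classes), takes the clockwise order to run left-to-right along the upper line and right-to-left along the lower line, as in the figures, and treats the starting site $s$ itself as a candidate NYO site; these conventions are our reading of the construction.

```python
def cw_next(site, n):
    """Clockwise successor: upper line left-to-right, then lower line right-to-left."""
    line, j = site
    if line == 'U':
        return ('U', j + 1) if j < n else ('L', n)
    return ('L', j - 1) if j > 1 else ('U', 1)

def tau(u, theta):
    """Apply tau_theta to a state u.

    u: dict mapping occupied sites (line, column) -> class, columns 1..n,
       at most one occupied line per column (empty columns simply absent).
    theta: string over '+'/'-'; '+' means column j of v is occupied in the upper line.
    Returns v as a dict with no empty columns."""
    n = len(theta)
    # the sites of v prescribed by theta; initially all Not Yet Occupied
    nyo = {('U' if s == '+' else 'L', j + 1) for j, s in enumerate(theta)}
    v = {}
    # process the particles of u by weakly increasing class (ties in any order)
    for site, cls in sorted(u.items(), key=lambda item: item[1]):
        # walk clockwise from the particle's own site (inclusive) to the first NYO site
        while site not in nyo:
            site = cw_next(site, n)
        nyo.remove(site)
        v[site] = cls
    # fill the leftover NYO sites with particles of a new largest class
    new_cls = max(u.values(), default=0) + 1
    for site in nyo:
        v[site] = new_cls
    return v
```

For instance, with $n = 3$, a class-$1$ particle upstairs in column $1$, a class-$2$ particle downstairs in column $3$, and $\vartheta = (+,-,+)$, the class-$2$ particle moves clockwise to the lower site of column $2$, and the upper site of column $3$ receives a new class-$3$ particle.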
In Figure \ref{fi_2}, we have applied $\tau_\vartheta$, where $\vartheta=(+,-,-,+,+,-,-,-,-,-)$, to the state in Figure \ref{fi_1}.
We define $U$ by letting its $(u,v)$ entry be the number of $\vartheta\in\{+,-\}^n$ such that $v = \tau_\vartheta u$.
\begin{figure}
\begin{tikzpicture}
\node[draw,circle] at (1,1.5){$1$};
\node at (1,0){$\cdot$};
\node[draw,circle] at (2,0){$6$};
\node at (2,1.5){$\cdot$};
\node[draw,circle] at (3,0){$2$};
\node at (3,1.5){$\cdot$};
\node[draw,circle] at (4,1.5){$1$};
\node at (4,0){$\cdot$};
\node[draw,circle] at (5,1.5){$2$};
\node at (5,0){$\cdot$};
\node[draw,circle] at (6,0){$3$};
\node at (6,1.5){$\cdot$};
\node[draw,circle] at (7,0){$5$};
\node at (7,1.5){$\cdot$};
\node[draw,circle] at (8,0){$4$};
\node at (8,1.5){$\cdot$};
\node[draw,circle] at (9,0){$3$};
\node at (9,1.5){$\cdot$};
\node[draw,circle] at(10,0){$1$};
\node at (10,1.5){$\cdot$};
\draw(1.5,1.0)--(9.5,1.0);
\draw(1.5,0.5)--(9.5,0.5);
\draw(1.5,1.0)--(1.5,0.5);
\draw(9.5,1.0)--(9.5,0.5);
\end{tikzpicture}
\caption{The result of applying $\tau_{(+,-,-,+,+,-,-,-,-,-)}$ to the state in Figure \ref{fi_1}.}
\label{fi_2}
\end{figure}
\begin{theo}
\label{th_queue}
In the notation above, we have
\[
U M_{J'} = M_J U.
\]
\end{theo}
\begin{proof}
Fix $u \in \Omega_{J'}$. We need to show that the number (counted with the same weights as before) of $(j, \vartheta)$ such that $v = \sigma_j \tau_\vartheta u$ equals the number of $(j', \vartheta')$ such that $v = \tau_{\vartheta'} \sigma_{j'} u$, for each $v \in \Omega_J$.
Some pairs $(j, \vartheta)$ have a natural corresponding pair $(j',\vartheta')$ with $j' = j$, with the same weight and such that $\sigma_j \tau_\vartheta u = \tau_{\vartheta'} \sigma_{j'} u$, as follows.
\begin{itemize}
\item if $0 < j < n$ and $\vartheta_j = \vartheta_{j+1}$, let $\vartheta' = \vartheta$.
\item if $0 < j < n$ and $(\vartheta_{j},\vartheta_{j+1}) = (-, +)$, let $\vartheta' = (\vartheta_1, \dots, \vartheta_{j-1}, \vartheta_{j+1}, \vartheta_{j}, \vartheta_{j+2}, \dots, \vartheta_n)$.
\item if $j = 0$ and $\vartheta_1 = -$, let $\vartheta'= (+,\vartheta_2,\dots,\vartheta_n)$.
\item if $j = n$ and $\vartheta_n = +$, let $\vartheta'=(\vartheta_1,\dots,\vartheta_{n-1},-)$.
\end{itemize}
We illustrate the first of these in Figure \ref{fi_3}.
\begin{figure}
\begin{tikzpicture}
\node[draw,circle] at (1,0){$?$};
\node[draw,circle] at (2,0){$?$};
\node[draw,circle] at (3,1.5){$1$};
\node[draw,circle] at (4,1.5){$3$};
\node at (0,0){$\dots$};
\node at (0,1.5){$\dots$};
\node at (5,0){$\dots$};
\node at (5,1.5){$\dots$};
\node at (2.5,2){$\leftarrow$};
\draw(0.5,1.0)--(4.5,1.0);
\draw(0.5,0.5)--(4.5,0.5);
\node at (6.5,1){$\sigma_{j}$};
\node at (6.5,0.5){$\longrightarrow$};
\node[draw,circle] at (9,0){$?$};
\node[draw,circle] at (10,1.5){$1$};
\node[draw,circle] at (11,0){$?$};
\node[draw,circle] at (12,1.5){$3$};
\node at (8,0){$\dots$};
\node at (8,1.5){$\dots$};
\node at (13,0){$\dots$};
\node at (13,1.5){$\dots$};
\draw(8.5,1.0)--(12.5,1.0);
\draw(8.5,0.5)--(12.5,0.5);
\node at (2.7,3.5){$\tau_{\vartheta}$};
\node at (3,3.5){$\downarrow$};
\node[draw,circle] at (1,6.5){$1$};
\node[draw,circle] at (2,6.5){$5$};
\node[draw,circle] at (3,6.5){$3$};
\node[draw,circle] at (4,5){$2$};
\node at (0,5){$\dots$};
\node at (0,6.5){$\dots$};
\node at (5,5){$\dots$};
\node at (5,6.5){$\dots$};
\draw(0.5,6.0)--(4.5,6.0);
\draw(0.5,5.5)--(4.5,5.5);
\node at (1,5.75){$\downarrow$};
\node at (2,5.75){$\downarrow$};
\node at (3,5.75){$\uparrow$};
\node at (4,5.75){$\uparrow$};
\node at (2.5,7){$\leftarrow$};
\node at (6.5,6){$\sigma_{j'}$};
\node at (6.5,5.5){$\longrightarrow$};
\node[draw,circle] at (9,6.5){$1$};
\node[draw,circle] at (10,6.5){$3$};
\node[draw,circle] at (11,6.5){$5$};
\node[draw,circle] at (12,5){$2$};
\node at (8,5){$\dots$};
\node at (8,6.5){$\dots$};
\node at (13,5){$\dots$};
\node at (13,6.5){$\dots$};
\draw(8.5,6.0)--(12.5,6.0);
\draw(8.5,5.5)--(12.5,5.5);
\node at (9,5.75){$\downarrow$};
\node at (10,5.75){$\uparrow$};
\node at (11,5.75){$\downarrow$};
\node at (12,5.75){$\uparrow$};
\node at (10.7,3.5){$\tau_{\vartheta'}$};
\node at (11,3.5){$\downarrow$};
\end{tikzpicture}
\caption{In the top left corner we have drawn a state $u$, and indicated the actions of $\sigma_{j'}$ and $\tau_{\vartheta}$.
To the right and below are $\sigma_{j'}u$ and $\tau_{\vartheta}u$, with $\tau_{\vartheta'}$ and $\sigma_j$ indicated respectively.
The claim is that the state in the bottom right is both $\sigma_j \tau_{\vartheta} u$ and $\tau_{\vartheta'}\sigma_{j'}u$.
We have only indicated one possible result $\tau_{\vartheta}u$; it could be that the position labeled $3$ is replaced by a particle with smaller class.
However, since there are no particles of class $< 1$ we are certain that the position labeled $1$ is labeled so in any circumstance.
It is easy to argue for each particle that it is going to end up in the same place in both $\tau_{\vartheta'}\sigma_{j'}u$ and $\sigma_j\tau_{\vartheta}u$.}
\label{fi_3}
\end{figure}
Thus we need to show that the set $S$ of remaining pairs $(j, \vartheta)$, i.e.\ those satisfying
\begin{itemize}
\item $0 < j < n$, $(\vartheta_{j}, \vartheta_{j+1})=(-,+)$,
\item $j = 0$, $\vartheta_1 = +$, or
\item $j = n$, $\vartheta_n = -$,
\end{itemize}
has the same effect as the set $S'$ of remaining pairs $(j', \vartheta')$, i.e.\ those satisfying
\begin{itemize}
\item $0 < j' < n$, $(\vartheta'_{j'}, \vartheta'_{j'+1}) = (+,-)$,
\item $j' = 0$, $\vartheta'_1 = -$, or
\item $j' = n$, $\vartheta'_n = +$.
\end{itemize}
It is easy to see that for all $(j, \vartheta) \in S$, the state $\sigma_j \tau_\vartheta u$ is the same, and equal to $\tau_{\vartheta'}\sigma_{j'} u$ for all $(j', \vartheta')\in S'$.
Thus it suffices to show that their (weighted) counts are the same! This is easily done by considering separately the cases $(\vartheta_1, \vartheta_n) = (+,+), (+,-), (-,+), (-,-)$ and similarly for $(\vartheta'_1, \vartheta'_n)$.
\end{proof}
Thus, if $u$ is distributed according to $\pi_{J'}$ and $\vartheta \in \{+,-\}^n$ is chosen independently and uniformly at random, then $\tau_\vartheta u$ will be distributed according to $\pi_J$.
\begin{coro}
The Markov chain on $\Omega_J$ obtained by at each time step applying a random $\tau_\vartheta$ has the same stationary distribution as $\Theta_J$.
\end{coro}
\begin{proof}
In the proof of Theorem \ref{th_queue}, we never used the fact that there was some empty column in the state $u$. Thus the same proof shows that $M_J U = U M_J$, where $U(u, v) = 1$ if $v = \tau_\vartheta u$ for some $\vartheta$ and $0$ otherwise, for $(u,v)\in\Omega_J\times\Omega_J$. Thus $U$ maps $\pi_J$ onto a constant multiple of itself.
\end{proof}
\section{Two particle classes}
\label{sec_se}
In this section we construct $U = U_{t, J}$ for the case $J = [n]-\{t\}$, $t \neq n$ (note that the case $t = n$ is trivial). In this case $U$ will only have one column, so this is equivalent to describing the stationary state of $\Theta_J$. We will not phrase the result in terms of $U$.
The chain $\Theta_J$ has strong similarities with the 3-type TASEP on a ring, and with the chain studied in \cite{DEHP}. Indeed we will show that the stationary distribution satisfies similar recursion relations to these two chains (only the initial data will be different), and the proof is very similar to that of \cite{DEHP}.
When $J = [n]- \{t\}$, we are considering states with $t$ particles of class $1$. It will be more convenient to write states as words, as follows. A state $u$ is described by a word $w_1 \dots w_n \in \{-1,0,1\}^n$, where $w_i = 1$ if there is a particle (necessarily of class $1$) in the upper line in column $i$, $w_i = -1$ if there is one in the lower line, and $w_i = 0$ if both lines are empty in this column. We sometimes write $-x = \bar{x}$ for $x = 1, 0, \bar{1}$. It would be more natural to write $1,2,3$ for $1,0,\bar{1}$, but we will not do this, in order to be consistent with previous sections.
In this notation, $\Theta_J$ becomes a chain on words, where any subword $01,\bar{1}1,\bar{1}0$ turns into $10,1\bar{1},0\bar{1}$ respectively at rate $2$, a $\bar{1}$ on the right end turns into $1$ at rate $1$, and a $1$ at the left end turns into a $\bar{1}$ at rate $1$.
Our analysis becomes more transparent if we temporarily generalize $\Theta_J$ so that a subword $01$ turns into $10$ at rate $a$, $\bar{1}1$ into $1\bar{1}$ at rate $b$, $\bar{1}0$ into $0\bar{1}$ at rate $c$, $\bar{1}$ on the right end into $1$ at rate $d$, and $1$ at the left end into $\bar{1}$ at rate $e$, for indeterminates $a,b,c,d,e$.
Furthermore, we will no longer consider the length $n$ of the words to be fixed; we want to describe the value of the stationary distribution at \emph{any} word over $\{-1,0,1\}$ of length at least $2$.
Thus our case of interest is $(a,b,c,d,e)$ proportional to $(2,2,2,1,1)$. To get conventions right, the most convenient choice will be $(a,b,c,d,e) = (1,1,1,\frac{1}{2},\frac{1}{2})$.
\begin{defi}
For each word $u$ (of any length $\geq 2$) we define a Laurent polynomial $[u]$ in $a,b,c,d,e$ as follows. Let $v,w$ be any words (possibly empty). Then
\begin{equation}\label{recrel1}
[v01w] = [v0w]/a,
\end{equation}
\begin{equation}\label{recrel2}
[v\bar{1}1w] = ([v\bar{1}w] + [v1w])/b,
\end{equation}
\begin{equation}\label{recrel3}
[v\bar{1}0w] = [v0w]/c,
\end{equation}
\begin{equation}\label{recrel4}
[v\bar{1}] = [v] / d,
\end{equation}
\begin{equation}\label{recrel5}
[1v] = [v] / e.
\end{equation}
Moreover, if $u$ consists of $0$s only, then $[u] = 1$.
\end{defi}
To prove that this defines $[u]$ in a unique way, we need to show that when expanding according to the recursions above we always arrive at the same result $t \cdot [0^s]$, where $t$ is some Laurent polynomial and $s$ is the number of $0$s in $u$ (which is clearly conserved by the recursions -- in fact, we could have made any set of choices for $[\,], [0], [00], [000], \dots$ rather than setting them all equal to $1$). This is easy by induction, using the following induction hypothesis:
\emph{For words of length $\leq n$, taking any definition of $[u]$ for each $u$, all equations between brackets of words of length $\leq n$ are satisfied.}
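Since each rule shortens the word, the recursion can also be evaluated mechanically. The following Python sketch (our illustration) computes $[u]$ with exact rational arithmetic, at numeric rates rather than as a Laurent polynomial, defaulting to the case of interest $(a,b,c,d,e) = (1,1,1,\frac12,\frac12)$:

```python
from fractions import Fraction as Fr

def bracket(u, a=Fr(1), b=Fr(1), c=Fr(1), d=Fr(1, 2), e=Fr(1, 2)):
    """Evaluate [u] for a tuple u over {-1, 0, 1} at numeric rates.

    Defaults are the case of interest (a, b, c, d, e) = (1, 1, 1, 1/2, 1/2).
    Any expansion order gives the same value, so we scan for the first rule."""
    u = tuple(u)
    if all(x == 0 for x in u):
        return Fr(1)                                  # all-zero words are set to 1
    if u[0] == 1:                                     # [1v] = [v]/e
        return bracket(u[1:], a, b, c, d, e) / e
    if u[-1] == -1:                                   # [v -1] = [v]/d
        return bracket(u[:-1], a, b, c, d, e) / d
    for i in range(len(u) - 1):
        sub = lambda mid: bracket(u[:i] + mid + u[i + 2:], a, b, c, d, e)
        if u[i:i + 2] == (0, 1):                      # [v 0 1 w] = [v 0 w]/a
            return sub((0,)) / a
        if u[i:i + 2] == (-1, 1):                     # [v -1 1 w] = ([v -1 w] + [v 1 w])/b
            return (sub((-1,)) + sub((1,))) / b
        if u[i:i + 2] == (-1, 0):                     # [v -1 0 w] = [v 0 w]/c
            return sub((0,)) / c
    raise ValueError("unreachable for words over {-1, 0, 1}")
```

At the default rates, for example, $[01] = 1$, $[10] = [0\bar{1}] = 2$ and $[\bar{1}1] = 4$, and one can observe the factorization of the proposition below on small words such as $[\bar{1}01] = [\bar{1}0]\,[01]$.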
Here is an interesting consequence of the definition.
\begin{prop}
If $u = v_1 0 v_2 0 \dots 0 v_r$ where $v_1,\dots,v_r$ are any words in $\{-1,0,1\}$, then $[u] = [v_1 0][0 v_2 0]\dots [0 v_r]$.
\end{prop}
\begin{theo}
For any $n$ and $t$, the stationary distribution of $\Theta_J$, where $J = [n] - \{t\}$, evaluated at the state $u$ is proportional to $[u]$.
\end{theo}
\begin{proof}
It suffices to prove that the numbers $[u]$ satisfy the equilibrium equation, which reads
\begin{equation}
\label{eqeq}
0 = T_0 + \sum_{0 < i < n} T_i + T_n,
\end{equation}
where
\[
T_0 = \wt(u_1\to \bar{u_1})[u_1\dots u_n] - \wt(\bar{u_1}\to u_1)[\bar{u_1} u_2 \dots u_n],
\]
\[
T_i = \wt(u_iu_{i+1} \to u_{i+1} u_i)[u_1\dots u_n] - \wt(u_{i+1}u_i\to u_iu_{i+1}) [u_1\dots u_{i-1}u_{i+1}u_iu_{i+2}\dots u_n],
\]
for $0 < i <n$ and
\[
T_n = \wt(u_n \to \bar{u_n})[u_1\dots u_n] - \wt(\bar{u_n} \to u_n)[u_1\dots u_{n-1} \bar{u_n}].
\]
Here $\wt(01 \to 10) = a$, $\wt(1\bar{1} \to \bar{1}1) = 0$, $\wt(1 \to \bar{1}) = e$ at the left end, etc.
A case-by-case analysis (using (\ref{recrel1})--(\ref{recrel5})) shows that
\[
T_0 = a_{u_1} [u_2 \dots u_n],
\]
\[
T_i = a_{u_i} [u_1\dots u_{i-1} u_{i+1} \dots u_n] - a_{u_{i+1}}[u_1\dots u_i u_{i+2} \dots u_n]
\]
for $0 < i < n$ and
\[
T_n = -a_{u_n} [u_1\dots u_{n-1}]
\]
where $(a_1, a_0, a_{\bar{1}}) = (1, 0, -1)$.
This turns the right-hand side of (\ref{eqeq}) into a telescoping sum with value $0$ (each term $a_{u_i}[\dots \hat{u_i} \dots]$ occurs twice -- once with each sign), so the equilibrium equation is satisfied.
\end{proof}
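The theorem can also be confirmed by brute force on small instances. The following self-contained Python sketch (our verification, not part of the proof) enumerates the states for given $n, t$, lists the outgoing transitions at the rates $(a,b,c,d,e) = (1,1,1,\frac12,\frac12)$ of interest, and checks global balance for the measure $u \mapsto [u]$:

```python
from fractions import Fraction as Fr
from itertools import combinations, product

def bracket(u):
    """[u] at (a, b, c, d, e) = (1, 1, 1, 1/2, 1/2), for a tuple u over {-1, 0, 1}."""
    if all(x == 0 for x in u):
        return Fr(1)
    if u[0] == 1:
        return 2 * bracket(u[1:])               # dividing by e = 1/2
    if u[-1] == -1:
        return 2 * bracket(u[:-1])              # dividing by d = 1/2
    for i in range(len(u) - 1):
        if u[i:i + 2] in ((0, 1), (-1, 0)):     # rules (recrel1), (recrel3) with a = c = 1
            return bracket(u[:i] + (0,) + u[i + 2:])
        if u[i:i + 2] == (-1, 1):               # rule (recrel2) with b = 1
            return bracket(u[:i] + (-1,) + u[i + 2:]) + bracket(u[:i] + (1,) + u[i + 2:])

def transitions(w):
    """Outgoing transitions (w', rate) of the word chain at rates (1, 1, 1, 1/2, 1/2)."""
    for i in range(len(w) - 1):
        if w[i:i + 2] in ((0, 1), (-1, 1), (-1, 0)):     # swaps at rate 1
            yield w[:i] + (w[i + 1], w[i]) + w[i + 2:], Fr(1)
    if w[-1] == -1:                                      # right end: -1 -> 1 at rate 1/2
        yield w[:-1] + (1,), Fr(1, 2)
    if w[0] == 1:                                        # left end: 1 -> -1 at rate 1/2
        yield (-1,) + w[1:], Fr(1, 2)

def states(n, t):
    """All words of length n with exactly t letters in {-1, 1}."""
    for pos in combinations(range(n), t):
        for signs in product((1, -1), repeat=t):
            w = [0] * n
            for p, s in zip(pos, signs):
                w[p] = s
            yield tuple(w)

def check_stationary(n, t):
    """Verify global balance of u -> [u] for the chain with t class-1 particles."""
    pi = {w: bracket(w) for w in states(n, t)}
    for u in pi:
        outflow = pi[u] * sum(r for _, r in transitions(u))
        inflow = sum(r * pi[v] for v in pi for w, r in transitions(v) if w == u)
        assert inflow == outflow, u
```

Running `check_stationary` for small $(n, t)$ confirms that the brackets balance every state exactly.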
For the case we are interested in, $(a,b,c,d,e)=(1,1,1,\frac{1}{2},\frac{1}{2})$, we obtain
\begin{coro}
For each $n, t$, choose $\alpha_{n,t} > 0$ such that the minimum of the numbers $n_u = \alpha_{n,t} \pi_J(u)$ for $u \in \Omega_{[n]-\{t\}}$ is $1$. Then
\begin{itemize}
\item Each $n_u$ is a positive integer.
\item $n_u = 1$ if and only if $u = \bar{1}^i0^j1^k$ for some $i,j,k$ (satisfying $i+k=t$, $i+j+k = n$).
\item $n_u \leq 2^t$ for all $u$, with equality if and only if $u = 1^i 0^j \bar{1}^k$ for some $i,j,k$.
\item For any words $u, v, w$ in $\{-1,0,1\}$, $n_{u0v0w} = n_{u0} \cdot n_{0v0} \cdot n_{0w}$.
\end{itemize}
\end{coro}
\section{The general case}
\label{sec_general}
In this section, we explain how much of the analysis carries over to general Weyl groups. The short answer is that the general setup can be formulated for a general Weyl group, but the lack of a useful permutation representation makes it hard to describe the conjugation matrices explicitly.
Fix a root system $\Phi$ of rank $n$ in $\mathbb{R}^N$ (for some $N$) with a simple system $\alpha_1, \dots, \alpha_n$. Let $\alpha_0$ be the highest root with respect to this choice of simple system, and write $a_0 \alpha_0 = \sum_{i=1}^n a_i \alpha_i$ where the $a_i$ are nonnegative integers with $a_0 = 1$. For $i \in [0, n]$, we let $t_i$ be the reflection in the hyperplane orthogonal to $\alpha_i$. Then $t_1,\dots,t_n$ generate the Weyl group $W$ of $\Phi$. We denote its length function by $\ell(\cdot)$. For an element $w \in W$ and $i \neq 0$, we define $\sigma_i(w)$ to be equal to $w$ if $\ell(wt_i) > \ell(w)$ and equal to $wt_i$ otherwise. For $i = 0$, we let $\sigma_i(w) = w$ if $\ell(wt_0) < \ell(w)$ and $wt_i$ otherwise.
The following is Lam's original definition of $\Theta$.
\begin{defi}
The Markov chain $\Theta$ has $W$ as state space, and the outgoing transitions from any state $w \in W$ are all $w \to \sigma_i(w)$, $i \in [0, n]$. Transitions corresponding to $\sigma_i$ have rate $a_i$.
\end{defi}
For type A, we have $(a_0, \dots, a_n) = (1, \dots, 1)$ and the chain $\Theta'$ in Section \ref{sec_intro} coincides with $\Theta$ in this case. For type C we have $(a_0, a_1, \dots, a_{n-1}, a_n) = (1,2, \dots, 2, 1)$. The chain for type C is equivalent to the chain defined in Section \ref{sec_intro}.
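For type A these operators are easy to make concrete. In the sketch below (our illustration) $W = S_{n+1}$ acts by permuting positions, $\ell$ is the inversion number, $t_i$ ($1 \le i \le n$) swaps positions $i, i+1$, and $t_0$, the reflection in the highest root $e_1 - e_{n+1}$, swaps positions $1$ and $n+1$:

```python
def inversions(w):
    """Coxeter length of a permutation of 1..n+1, written as a tuple."""
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

def sigma_typeA(w, i):
    """The operator sigma_i on W = S_{n+1} (type A_n), w a tuple of 1..n+1.

    Right multiplication by t_i (1 <= i <= n) swaps positions i, i+1;
    t_0 swaps positions 1 and n+1.  For i != 0 we keep w if l(w t_i) > l(w)
    and apply t_i otherwise; for i = 0 the condition is reversed."""
    p, q = (0, len(w) - 1) if i == 0 else (i - 1, i)
    wt = list(w)
    wt[p], wt[q] = wt[q], wt[p]
    wt = tuple(wt)
    longer = inversions(wt) > inversions(w)
    if i != 0:
        return w if longer else wt
    return wt if longer else w
```

One checks that for $i \neq 0$ this sorts the entries in positions $i, i+1$ into increasing order, while $\sigma_0$ forces the entry in position $1$ to exceed that in position $n+1$ -- the TASEP jump across the seam of the ring.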
It appears that the above choice of rates $(a_0, \dots, a_n)$ is essentially the only one that gives a nice stationary distribution for $\Theta$ (e.g.\ such that the probability of any state divided by the probability of a least likely state is an integer, and these integers are not ``too large''). I have no good explanation for this experimental fact; however, see Remark 5 in \cite{Lam}.
We now show how to generalize Propositions \ref{proj1} and \ref{proj2} to the general setting. For $J \subseteq [n]$, let $W_J$ be the subgroup of $W$ generated by $\{t_j : j \in J\}$. Note that we do not allow $0 \in J$.
\begin{prop}
Suppose $w, w'\in W$ satisfy $W_J w = W_J w'$. Then, for any $i\in[0,n]$, we have $W_J (\sigma_i(w)) = W_J(\sigma_i(w'))$.
\end{prop}
\begin{proof}
This is clear if $\ell(wt_i) > \ell(w)$ and $\ell(w't_i) > \ell(w')$, or if $\ell(wt_i) < \ell(w)$ and $\ell(w't_i) < \ell(w')$.
We can thus focus on the case when $\ell(wt_i) > \ell(w)$ and $\ell(w't_i) < \ell(w')$, the fourth case then follows by symmetry in $w, w'$.
It suffices to prove that $wt_iw^{-1} \in W_J$ since this will prove that $W_J \sigma_iw' = W_J w't_i = W_J w t_i = W_J w = W_J \sigma_i w$ if $i \neq 0$ and $W_J \sigma_iw' = W_J w' = W_J w = W_J wt_i = W_J \sigma_i w$ if $i = 0$.
Since each coset of $W_J$ has a minimal element (Corollary 2.4.5 in \cite{BB}), we can reduce to the case when $w < w'$ in Bruhat order and $w$ is this minimal element (the case $w' < w$ is similar).
Then (see the mirrored version of Proposition 2.4.4 in \cite{BB}), $w$ has a reduced expression
\[
s_1\dots s_r,
\]
and, since $w$ is minimal in its coset, $w'$ has a reduced expression
\[
s'_1 \dots s'_l s_1 \dots s_r
\]
where $s'_i \in W_J$.
The conditions imply (Corollary 1.4.4 in \cite{BB}) that $t_i=s_r\dots s_1 s'_l \dots s'_k \dots s'_l s_1 \dots s_r$ for some $k$; the deleted letter must be one of the $s'_j$, since $\ell(wt_i) > \ell(w)$. Thus $wt_iw^{-1} = s'_l\dots s'_k\dots s'_l \in W_J$.
\end{proof}
Thus all the subgroup inclusions $W_{J'} \leq W_J$ (with $J' \subseteq J$) induce projections $D_{J,J'}:\Theta_J \to \Theta_{J'}$ in the same way as in Section \ref{sec_C}.
Explicitly, let $D_{J, J'}(W_Ju, W_{J'}v) = 1$ if $W_{J'}u = W_{J'}v$ and $0$ otherwise. Then $D_{J,J'}M_J = M_{J'}D_{J,J'}$.
We may thus expect there to be corresponding conjugation matrices $U_{J,J'}$. We have not succeeded in finding a general description of these (indeed, not even for groups of type C), though there appears to be a ``surprisingly simple'' such $U$ in several cases.
\section{The $k$-TASEP}
\label{sec_ktasep}
In this section, we prove a theorem about the TASEP on a ring (i.e.\ the type A case of the chains considered above) which can be formulated independently of the permutation representation. However, the most obvious generalizations of the theorem do not appear to hold for a general Weyl group.
We now define the classical multi-type TASEP on a ring. We will include indeterminates $x_1, x_2, \dots$ in the definition, as in \cite{LW}. This is more general than the chain $\Theta_W$ considered earlier, but reduces to it on letting $x_1 = x_2 = \dots = 1$.
The state space is the set of words of length $n$ in the alphabet $\{1, 2, \dots\}$. We consider the words to be cyclic, so that the letter to the left of $w_1$ is $w_0 = w_n$ (all indices are taken modulo $n$). For $i\in[n]$ we define $\sigma_i(w)$ as the result of sorting the two letters $w_i$ and $w_{i-1}$. Thus if the letters already satisfy $w_{i-1} \leq w_i$, nothing happens, and otherwise they swap positions.
The outgoing transitions from a general state (word) $u$ are all $u \to \sigma_i u$, $i\in[n]$, where transitions corresponding to $\sigma_i$ have rate $x_{u_i}$ (note that this definition is \emph{not} symmetric in $(u_{i-1},u_i)$).
For a proper subset $S \subsetneq [n]$ we would like to define $\sigma_S$ as the composition of all $\sigma_j$, $j\in S$. To do this we need to specify in which order non-commuting pairs of $\sigma_j$'s should be taken. Note that $\sigma_j$ and $\sigma_k$ commute whenever $|k-j|>1$ modulo $n$. We use the convention that \emph{$\sigma_{j-1}$ is taken before $\sigma_j$}.
Thus, for example, if $n = 7$, $\sigma_{ \{1,2,4,5,7\}}$ equals $\sigma_2\sigma_1\sigma_7\sigma_5\sigma_4$.
The transition $u \to \sigma_S u$ is given rate $\prod_{i\in S} x_{u_i}$.
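A direct Python sketch of $\sigma_i$ and $\sigma_S$ (our encoding: words as tuples, $1$-indexed columns). The ordering convention is realized by scanning cyclically starting just after some column not in $S$; such a column exists because $S$ is a proper subset, and any choice gives the same composition since non-adjacent $\sigma_j$ commute.

```python
def sigma(w, i):
    """Sort the cyclic neighbours w_{i-1}, w_i of the word w (1-indexed, w_0 = w_n)."""
    w = list(w)
    n = len(w)
    a, b = (i - 2) % n, (i - 1) % n        # 0-indexed positions of w_{i-1}, w_i
    if w[a] > w[b]:
        w[a], w[b] = w[b], w[a]
    return tuple(w)

def sigma_set(w, S):
    """sigma_S for a proper subset S of {1, ..., n}: sigma_{j-1} before sigma_j.

    We scan cyclically starting just after some column g not in S; since the
    sigma_j for non-adjacent j commute, the choice of g does not matter."""
    n = len(w)
    g = next(j for j in range(1, n + 1) if j not in S)
    for k in range(1, n + 1):
        j = (g + k - 1) % n + 1
        if j in S:
            w = sigma(w, j)
    return w
```

For $n = 7$ and $S = \{1,2,4,5,7\}$ this applies $\sigma_4, \sigma_5, \sigma_7, \sigma_1, \sigma_2$ in that order, matching the composition $\sigma_2\sigma_1\sigma_7\sigma_5\sigma_4$ of the example above.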
\begin{theo}
\label{ktasep}
For $k \in (0,n)$, let $\Theta_k$ be the Markov chain on words whose outgoing transitions from a word $u$ are all $u \to \sigma_Su$, $S\subseteq[n]$, $|S|=k$. Then the stationary distribution of $\Theta_k$ is the same for all $k$.
\end{theo}
In particular $\Theta_k$ has the same stationary distribution as $\Theta_1$ for each $k$, where $\Theta_1$ is the inhomogeneous TASEP on a ring introduced in \cite{LW}.
A weaker theorem has been proved by Martin and Schmidt \cite{MS}, where the underlying graph is the infinite discrete line, all $x_i = 1$, and subsets $S$ of size $k$ are instead chosen at rate $p^k$, where $p \in (0,1)$ is a parameter. Since our proof will be purely local, the theorem above implies their result.
The theorem can be reduced to proving that the transition matrices of all the $\Theta_k$ commute among themselves. To see this, note that the stationary distribution corresponds to the largest eigenvalue $n$ and that the corresponding eigenspace is one-dimensional. In the next section we prove this commutation property.
Before that, let us note that in the homogeneous case $x_1 = x_2 = \dots = 1$, the chain $\Theta_k$ in Theorem \ref{ktasep} has a purely geometric definition: randomly compose a subset $S$ of $k$ simple generators (together with the reflection in the highest root), ordering them according to some orientation of the affine Dynkin diagram -- the graph with one node for each generator and one node for the highest root, where two nodes are joined by an edge if the product of the corresponding reflections has order $>2$ (i.e.\ if they do not commute). We have tried all possible orientations of the affine Dynkin diagram (on $4$ nodes) of the group $B_3$, but none is consistent with a result analogous to Theorem \ref{ktasep}.
Finally, we remark that the operators $\sigma_S$ are closely related to the multi-line queues of \cite{FM}. Indeed, a special case of the case $k = n-1$ of the theorem follows directly from the theory of multi-line queues: for readers familiar with these, it is given by comparing the $(n-1)$-TASEP with multi-line queues which have an extra last row to which no new particles are added. Thus, Theorem \ref{ktasep} interpolates between the definition of the chain ($k = 1$) and its highly non-trivial description in terms of multi-line queues ($k=n-1$).
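Before giving the combinatorial proof, we note that the commutation statement underlying the theorem can be checked by brute force for small $n$. The sketch below (our verification; the rational values chosen for the $x_m$ are arbitrary) builds the weighted transition matrices over all words with distinct letters and tests $A_kA_l = A_lA_k$:

```python
from fractions import Fraction as Fr
from itertools import combinations, permutations

X = {1: Fr(2), 2: Fr(3), 3: Fr(5), 4: Fr(7)}   # arbitrary rates x_m

def sigma(w, i):
    """Sort the cyclic neighbours w_{i-1}, w_i (1-indexed, w_0 = w_n)."""
    w = list(w)
    n = len(w)
    a, b = (i - 2) % n, (i - 1) % n
    if w[a] > w[b]:
        w[a], w[b] = w[b], w[a]
    return tuple(w)

def sigma_set(w, S):
    """sigma_S, taking sigma_{j-1} before sigma_j (scan from a gap of S)."""
    n = len(w)
    g = next(j for j in range(1, n + 1) if j not in S)
    for k in range(1, n + 1):
        j = (g + k - 1) % n + 1
        if j in S:
            w = sigma(w, j)
    return w

def A(n, k):
    """Transition matrix of Theta_k over words with distinct letters, as nested dicts."""
    words = list(permutations(range(1, n + 1)))
    M = {u: {} for u in words}
    for u in words:
        for S in combinations(range(1, n + 1), k):
            rate = Fr(1)
            for i in S:
                rate *= X[u[i - 1]]                # rate prod_{i in S} x_{u_i}
            v = sigma_set(u, set(S))
            M[u][v] = M[u].get(v, Fr(0)) + rate
    return M

def matmul(P, Q):
    R = {u: {} for u in P}
    for u in P:
        for m, r1 in P[u].items():
            for v, r2 in Q[m].items():
                R[u][v] = R[u].get(v, Fr(0)) + r1 * r2
    return R
```

For $n = 3$ and $n = 4$ one finds that the matrices $A(n, k)$ commute pairwise, as the proof below shows in general.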
\subsection{Proof of Theorem \ref{ktasep}}
As noted earlier, it suffices to show that for any $k, l \in (0, n)$, the transition matrices $A_k$ and $A_l$ of $\Theta_k$ and $\Theta_l$ commute.
For words $u, v$, the $(u,v)$ entry of $A_kA_l$ is a weighted count of pairs $(S, T)$ such that $v = \sigma_T \sigma_S u$, $|S| = l$ and $|T| = k$ (weighted by the product of the rates of the two transitions $u \to \sigma_S u \to \sigma_T \sigma_S u$). Similarly, the entry $(A_lA_k)(u,v)$ counts pairs $(S', T')$ such that $ v = \sigma_{T'}\sigma_{S'} u$, $|S'| = k$, $|T'|=l$. The idea of the proof is to find a weight-preserving involution from pairs $(S,T)$ to $(S', T')$. This turns out to be tricky. Without loss of generality, we will only consider words with distinct letters -- the statement for general words follows by merging particle classes.
Given a triple $(u, S, T)$, define a $2\times n$ array $D$ (cyclic in the horizontal direction) as follows. If $i \in S$ and $u_i < u_{i-1}$, color the site $(1, i)$ in the array black ($\bullet$). Otherwise, if $u_i > u_{i-1}$, color it white ($\circ$). Similarly, if $w = \sigma_S u$ and $i \in T$, color site $(2, i)$ black if $w_i < w_{i-1}$ and white if $w_i > w_{i-1}$. We refer to a site which is not colored as {\it empty}. Any $2\times n$ array which arises in this way will be called a {\it diagram}. Let $C(D)$ denote the set of words $u$ which (together with some sets $S,T$) give $D$ as above.

An example of a diagram is given in Figure \ref{fi_di1}, where its associated {\it particle trajectories} are also given. Note that these lines may be added or omitted as we please -- they are determined by the coloring of the $2\times n$ array. Formally, a trajectory is given by the three positions $p_1, p_2, p_3$ of the particle in $u$, $\sigma_S u$ and $\sigma_T \sigma_S u$ respectively. We then say that the particle has {\it visited} position $p_1$ in the upper row and position $p_2$ in the lower row (we will use no such notation for $p_3$).

Instead of defining an involution on triples $(u, S, T)$, we will define an involution on diagrams. Though diagrams are considered cyclic, we will most often deal with segments of diagrams.
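The passage from a triple $(u, S, T)$ to its diagram is mechanical; here is a Python sketch (our encoding: rows as lists indexed by column, entries `'B'`, `'W'` or `None`), with $\sigma_S$ implemented as in the chain definition above:

```python
def sigma(w, i):
    """Sort the cyclic neighbours w_{i-1}, w_i (1-indexed, w_0 = w_n)."""
    w = list(w)
    n = len(w)
    a, b = (i - 2) % n, (i - 1) % n
    if w[a] > w[b]:
        w[a], w[b] = w[b], w[a]
    return tuple(w)

def sigma_set(w, S):
    """sigma_S with the convention that sigma_{j-1} is taken before sigma_j."""
    n = len(w)
    g = next(j for j in range(1, n + 1) if j not in S)
    for k in range(1, n + 1):
        j = (g + k - 1) % n + 1
        if j in S:
            w = sigma(w, j)
    return w

def diagram(u, S, T):
    """Colour the 2 x n array of the triple (u, S, T); u must have distinct letters.

    Returns the two rows, with entries 'B' (black), 'W' (white) or None (empty)."""
    n = len(u)
    def row(word, cols):
        r = [None] * n
        for i in cols:
            prev, cur = word[(i - 2) % n], word[(i - 1) % n]   # w_{i-1}, w_i
            r[i - 1] = 'B' if cur < prev else 'W'              # black iff w_i < w_{i-1}
        return r
    w = sigma_set(u, S)
    return row(u, S), row(w, T)
```

For instance, $u = (2,1,3)$ with $S = \{2\}$, $T = \{1\}$ gives a black site at $(1,2)$ (the letters $2,1$ are out of order) and, since $\sigma_S u = (1,2,3)$, a black site at $(2,1)$ (cyclically, $1 < 3$).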
\begin{figure}
\begin{tikzpicture}
\node[draw,shape=circle] at (2,2){};
\node[circle,fill=black] at (3,2){};
\node[circle,fill=black] at (4,2){};
\node[circle,fill=black] at (5,2){};
\node[circle,fill=black] at (3,1){};
\node[draw,shape=circle] at (4,1){};
\node[circle,fill=black] at (6,1){};
\draw(1,2)--(1,0);
\draw(2,2)--(5,1)--(6,0);
\draw(3,2)--(2,1)--(3,0);
\draw(4,2)--(3,1)--(2,0);
\draw(5,2)--(4,1)--(4,0);
\draw(6,2)--(6,1)--(5,0);
\end{tikzpicture}
\caption{A diagram $D$. The set $C(D)$ is given by all words $u_1\dots u_6$ such that $u_2$ is larger than all other letters $u_1,u_3,u_4,u_5,u_6$, and $u_4 < u_3 < u_5$. For example, the crossing down to the left implies that $u_3 > u_4$.}
\label{fi_di1}
\end{figure}
\begin{figure}
\begin{tikzpicture}
\node[draw,shape=circle] at (2,2){};
\node[circle,fill=black] at (4,2){};
\node[draw,shape=circle] at (5,2){};
\node[circle,fill=black] at (3,1){};
\node[circle,fill=black] at (4,1){};
\node[circle,fill=black] at (5,1){};
\node[circle,fill=black] at (6,1){};
\draw(1,2)--(1,0);
\draw(2,2)--(2,1)--(6,0);
\draw(3,2)--(4,1)--(3,0);
\draw(4,2)--(3,1)--(2,0);
\draw(5,2)--(5,1)--(4,0);
\draw(6,2)--(6,1)--(5,0);
\end{tikzpicture}
\caption{Another diagram.}
\label{fi_di2}
\end{figure}
Thus we can think of a diagram as describing the trajectories of the particles $u_1, \dots, u_n$.
Say that two diagrams $D$, $D'$ of the same length $n$ are {\it compatible} if the following conditions hold.
\begin{enumerate}
\item The order relations implied by $D$ and $D'$ are equivalent. That is, $C(D) = C(D')$.
\item The number of colored sites in the top row of $D$ equals the number of colored sites in the bottom row of $D'$ and conversely.
\item The number of black sites visited by particle $i$ is the same in $D$ as in $D'$, for each $i$, and the same holds for white sites.
\end{enumerate}
As an example, the diagrams in Figures \ref{fi_di1} and \ref{fi_di2} are compatible. Not all assignments of $\{$white, black, empty$\}$ to the sites of a $2 \times n$ array give a diagram -- for example, the $2 \times n$ array with only two colored sites, a white one atop a black one, is not a diagram (if a particle failed to jump the first time and the particle in front of it did not move, it will fail again).
To prove the theorem, it suffices to construct an involution on diagrams pairing up compatible diagrams -- condition (3) guarantees that two compatible diagrams have the same weight (for any input word $u \in C(D) = C(D')$).
We will construct the involution $\alpha$ by successively restricting the set of all diagrams to smaller and smaller classes of diagrams, and show how to ``lift'' any definition of an involution on a smaller class to a bigger one. These restrictions all involve finding some subdiagram and replacing it by a smaller one. They are defined as follows.
\begin{enumerate}
\item
For a diagram $D$ with the following sub-$2\times 2$-diagram in column $c$ and $c+1$,
\[
\begin{tikzpicture}
\node[fill,shape=circle] at (1,1){};
\node[fill,shape=circle] at (0,0){};
\node at (0,1) {A};
\node at (1,0) {B};
\end{tikzpicture}
\]
let $\varphi_{\begin{tinymatrix} \cdot & \bullet \\ \bullet & \cdot \\ \end{tinymatrix}}^{(c)}(D)$ denote the diagram where the sub-$2\times 2$-diagram is replaced by the following $2\times 1$-diagram (in a single new column replacing $c$, $c+1$).
\[
\begin{tikzpicture}
\node at (0,1) {A};
\node at (0,0) {B};
\end{tikzpicture}
\]
Here, $A,B$ are placeholders for any of $\bullet$, $\circ$, or an empty space.
\item
\begin{enumerate}
\item If $D$ contains the $2\times 2$ subdiagram
\[
\begin{tikzpicture}
\node[fill,shape=circle] at (1,1){};
\node[fill,shape=circle] at (2,1){};
\node[draw,shape=circle] at (1,0){};
\node at (2,0){A};
\end{tikzpicture}
\]
in column $c$ and $c+1$, let $\varphi_{\begin{tinymatrix} \bullet & \bullet \\ \circ & \cdot \\ \end{tinymatrix}}^{(c)}(D)$ denote the result of replacing the $2\times 2$ diagram by the $2\times 1$ diagram
\[
\begin{tikzpicture}
\node[fill,shape=circle] at (1,1){};
\node at (1,0){A};
\end{tikzpicture}
\]
\item
If $D$ is a diagram containing the $2\times 2$ subdiagram
\[
\begin{tikzpicture}
\node at (1,1){A};
\node[fill,shape=circle] at (1,0){};
\node[fill,shape=circle] at (2,0){};
\node[draw,shape=circle] at (2,1){};
\end{tikzpicture}
\]
in column $c$ and $c+1$, let $\varphi_{\begin{tinymatrix} \cdot & \circ \\ \bullet & \bullet \\ \end{tinymatrix}}^{(c)}(D)$ denote the result of replacing the $2\times 2$ diagram by the $2\times 1$ diagram
\[
\begin{tikzpicture}
\node[fill,shape=circle] at (1,0){};
\node at (1,1){A};
\end{tikzpicture}
\]
\end{enumerate}
\item
If $D$ contains the sub-$2\times 2$-diagram
\[
\begin{tikzpicture}
\node[fill,shape=circle] at (1,1){};
\node[draw,shape=circle] at (0,0){};
\node at (0,1) {A};
\node at (1,0) {B};
\end{tikzpicture}
\]
in column $c$ and $c+1$, let $\reiii^{(c)}(D)$ denote the diagram with the subdiagram replaced by the following.
\[
\begin{tikzpicture}
\node at (0,1) {A};
\node at (0,0) {B};
\end{tikzpicture}
\]
\end{enumerate}
To find $\alpha(D)$ for a diagram $D$, we first perform three kinds of reductions. After these reductions we make a small change to the diagram, and then invert the reductions. The whole process is illustrated in Figure \ref{fi_4}. The proofs that the reductions ``work'' are all similar -- we do all the details for one case in Lemma \ref{le_red2} and leave the remainder to the reader.
\newcommand{\bb}[2]{\node[circle,fill=black] at (#1,#2){};}
\newcommand{\wb}[2]{\node[draw,shape=circle] at (#1,#2){};}
\newcommand{\nb}[2]{}
\begin{figure}
\begin{tikzpicture}
\wb{2}{2};\wb{3}{2};\bb{4}{2};\bb{5}{2};\bb{6}{2};\bb{7}{2};\bb{8}{2};\wb{10}{2};\bb{12}{2};\wb{13}{2};\bb{14}{2};\wb{15}{2};
\bb{1}{1};\bb{2}{1};\bb{3}{1};\wb{6}{1};\wb{7}{1};\wb{8}{1};\bb{9}{1};\bb{10}{1};\bb{11}{1};\wb{12}{1};\wb{13}{1};
\draw(0,2)--(0,1)--(3,0);
\draw(1,2)--(1,1)--(0,0);
\draw(2,2)--(2,1)--(1,0);
\draw(3,2)--(8,1)--(11,0);
\draw(4,2)--(3,1)--(2,0);
\draw(5,2)--(4,1)--(4,0);
\draw(6,2)--(5,1)--(5,0);
\draw(7,2)--(6,1)--(6,0);
\draw(8,2)--(7,1)--(7,0);
\draw(9,2)--(9,1)--(8,0);
\draw(10,2)--(10,1)--(9,0);
\draw(11,2)--(12,1)--(12,0);
\draw(12,2)--(11,1)--(10,0);
\draw(13,2)--(14,1)--(14,0);
\draw(14,2)--(13,1)--(13,0);
\draw(15,2)--(15,1)--(15,0);
\end{tikzpicture}
\vspace{1.5cm}
\begin{tikzpicture}
\wb{2}{2};\wb{3}{2};\bb{4}{2};\bb{5}{2};\bb{6}{2};\bb{7}{2};\wb{9}{2};\wb{11}{2};\bb{12}{2};\wb{13}{2};
\bb{1}{1};\bb{2}{1};\wb{5}{1};\wb{6}{1};\wb{7}{1};\bb{8}{1};\bb{9}{1};\wb{10}{1};\wb{11}{1};
\end{tikzpicture}
\vspace{1cm}
\begin{tikzpicture}
\wb{2}{2};\bb{3}{2};\bb{4}{2};\wb{7}{2};\bb{8}{2};\wb{9}{2};
\bb{1}{1};\wb{4}{1};\bb{5}{1};\wb{6}{1};\wb{7}{1};
\end{tikzpicture}
\vspace{1cm}
\begin{tikzpicture}
\wb{2}{2};\bb{3}{2};\bb{4}{2};\wb{7}{2};\wb{8}{2};
\bb{1}{1};\wb{4}{1};\bb{5}{1};\wb{6}{1};
\node at (1,3){L};\node at (2,3){T};\node at (3,3){U};\node at (4,3){U};\node at (5,3){L};\node at (6,3){L};\node at (7,3){U};\node at (8,3){U};
\end{tikzpicture}
\vspace{1cm}
\begin{tikzpicture}
\bb{1}{2};\wb{2}{2};\bb{3}{2};\wb{7}{2};
\wb{3}{1};\bb{4}{1};\bb{5}{1};\wb{6}{1};\wb{8}{1};
\node at (1,3){U};\node at (2,3){T};\node at (3,3){U};\node at(4,3){L};\node at(5,3){L};\node at(6,3){L};\node at (7,3){U};\node at (8,3){L};
\end{tikzpicture}
\vspace{1cm}
\begin{tikzpicture}
\bb{2}{2};\wb{3}{2};\bb{4}{2};\wb{8}{2};\bb{9}{2};
\wb{4}{1};\bb{5}{1};\bb{6}{1};\wb{7}{1};\wb{8}{1};\wb{10}{1};
\end{tikzpicture}
\vspace{1cm}
\begin{tikzpicture}
\bb{1}{2};\bb{2}{2};\wb{3}{2};\bb{4}{2};\nb{5}{2};\wb{6}{2};\wb{7}{2};\nb{8}{2};\wb{9}{2};\nb{10}{2};\wb{11}{2};\bb{12}{2};
\wb{1}{1};\nb{2}{1};\nb{3}{1};\wb{4}{1};\bb{5}{1};\bb{6}{1};\bb{7}{1};\bb{8}{1};\bb{9}{1};\wb{10}{1};\wb{11}{1};\nb{12}{1};\wb{13}{1};
\end{tikzpicture}
\vspace{1cm}
\begin{tikzpicture}
\bb{1}{2};\bb{2}{2};\wb{3}{2};\bb{4}{2};\bb{5}{2};\wb{7}{2};\wb{8}{2};\nb{9}{2};\wb{10}{2};\nb{11}{2};\bb{12}{2};\wb{13}{2};\bb{14}{2};
\wb{1}{1};\nb{2}{1};\bb{3}{1};\wb{5}{1};\bb{6}{1};\bb{7}{1};\bb{8}{1};\bb{9}{1};\bb{10}{1};\bb{11}{1};\wb{12}{1};\wb{13}{1};\wb{15}{1};
\end{tikzpicture}
\caption{Example of computing $\alpha(D)$ for the diagram $D$ at the top. The three diagrams after the first one correspond to I-, II- and III-reductions. The next step consists of computing $\alpha(D')$ for the III-reduced diagram $D'$. The remaining steps consist of inverting the III-, II- and I-reductions. So the last diagram is the image of the first under the involution. To reduce clutter we have drawn the trajectories only for the first diagram.}
\label{fi_4}
\end{figure}
\begin{lemma}
Let $D, D'$ be two diagrams and $c$ a column for which $\varphi_{\begin{tinymatrix} \cdot & \bullet \\ \bullet & \cdot \\ \end{tinymatrix}}^{(c)}$ is applicable.
If $\varphi_{\begin{tinymatrix} \cdot & \bullet \\ \bullet & \cdot \\ \end{tinymatrix}}^{(c)}(D)$ and $\varphi_{\begin{tinymatrix} \cdot & \bullet \\ \bullet & \cdot \\ \end{tinymatrix}}^{(c)}(D')$ are compatible, then $D$ and $D'$ are compatible.
\end{lemma}
Call a diagram $D$ for which there is no $c$ such that $\varphi_{\begin{tinymatrix} \cdot & \bullet \\ \bullet & \cdot \\ \end{tinymatrix}}^{(c)}$ can be applied a {\it I-reduced diagram}. The lemma shows that to construct our required involution it suffices to construct it on the set of I-reduced diagrams. An example of our involution $\alpha$ to be defined on this restricted set is given by removing the forbidden subdiagram from Figures \ref{fi_di1} and \ref{fi_di2}.
It is easy to see that for a I-reduced diagram, two sites in the same column cannot both be black, and if there is a column colored black in the lower row and white in the upper row, then the site in the lower row to the left of the column is black.
\begin{lemma}
\label{le_red2}
Suppose $D$ and $D'$ are I-reduced diagrams, $c$ a column. Then
\begin{itemize}
\item $D$ and $D'$ are compatible whenever $\varphi_{\begin{tinymatrix} \bullet & \bullet \\ \circ & \cdot \\ \end{tinymatrix}}^{(c)}(D)$ and $\varphi_{\begin{tinymatrix} \bullet & \bullet \\ \circ & \cdot \\ \end{tinymatrix}}^{(c)}(D')$ are both defined and compatible.
\item $D$ and $D'$ are compatible whenever $\varphi_{\begin{tinymatrix} \bullet & \bullet \\ \circ & \cdot \\ \end{tinymatrix}}^{(c)}(D)$ and $\varphi_{\begin{tinymatrix} \cdot & \circ \\ \bullet & \bullet \\ \end{tinymatrix}}^{(c)}(D')$ are both defined and compatible.
\item $D$ and $D'$ are compatible whenever $\varphi_{\begin{tinymatrix} \cdot & \circ \\ \bullet & \bullet \\ \end{tinymatrix}}^{(c)}(D)$ and $\varphi_{\begin{tinymatrix} \cdot & \circ \\ \bullet & \bullet \\ \end{tinymatrix}}^{(c)}(D')$ are both defined and compatible.
\end{itemize}
\end{lemma}
\begin{proof}
We consider the second case. Denote the placeholders in $D$ and $D'$ by $A$ and $A'$. Note that since the diagrams are I-reduced, $A$ (and $A'$) is not colored black.
First we need to show that the outputs of $D$ and $D'$ are the same (for any input word $u \in C(D) = C(D')$), that is, we need to show that each particle ends up in the same place in both $D$ and $D'$. We know this is true for $\varphi_{\begin{tinymatrix} \bullet & \bullet \\ \circ & \cdot \\ \end{tinymatrix}}^{(c)}(D)$ and $\varphi_{\begin{tinymatrix} \cdot & \circ \\ \bullet & \bullet \\ \end{tinymatrix}}^{(c)}(D')$, so we only need to consider particles that pass through column $c$ in $D$. This is easy to check -- if the particle passes column $c$ in $\varphi_{\begin{tinymatrix} \bullet & \bullet \\ \circ & \cdot \\ \end{tinymatrix}}^{(c)}(D)$ it must do so in the upper row, and then it must pass column $c$ in the lower row in $\varphi_{\begin{tinymatrix} \cdot & \circ \\ \bullet & \bullet \\ \end{tinymatrix}}^{(c)}(D')$. The column added when constructing $D$ (respectively $D'$) preserves this property. And if the particle does not pass column $c$ in $\varphi_{\begin{tinymatrix} \bullet & \bullet \\ \circ & \cdot \\ \end{tinymatrix}}^{(c)}(D)$ and $\varphi_{\begin{tinymatrix} \cdot & \circ \\ \bullet & \bullet \\ \end{tinymatrix}}^{(c)}(D')$, the same holds in $D$ and $D'$. Finally, it is easy to check that the particles starting in the ``new'' column in $D$ and $D'$ end up in the same place (indeed, it is the only place left).
Next, we need to show that $C(D) = C(D')$. This amounts to checking what new relations on the input word $u$ the colors of the new sites in $D$ and $D'$ give us. This can be checked directly and individually for the new white site in $D$ and $D'$ and the new black site in $D$ and $D'$.
\end{proof}
Call a I-reduced diagram for which neither $\varphi_{\begin{tinymatrix} \cdot & \circ \\ \bullet & \bullet \\ \end{tinymatrix}}^{(c)}$ nor $\varphi_{\begin{tinymatrix} \bullet & \bullet \\ \circ & \cdot \\ \end{tinymatrix}}^{(c)}$ is defined a {\it II-reduced diagram}. We now make the final restriction.
\begin{lemma}
Suppose $D, D'$ are II-reduced diagrams, and $c$ is a column for which $\reiii^{(c)}(D)$ and $\reiii^{(c)}(D')$ are both defined. If the latter two are compatible, then so are $D$ and $D'$.
\end{lemma}
If $D$ is a II-reduced diagram for which no $\reiii^{(c)}$ is defined, we call $D$ a {\it III-reduced diagram}. The variety of III-reduced diagrams is sufficiently narrow to be analyzed directly. Note that in any III-reduced diagram, if a particle passes two colored sites, then those are necessarily white. Suppose $D$ is a III-reduced diagram. Above the starting column of each particle (see Figure \ref{fi_4}), write either $U, L$ or $T$, depending on whether the particle passes (T)wo sites colored white or a unique colored site in the (U)pper row or a unique colored site in the (L)ower row. To define $\alpha(D)$, for each maximal word $U^r L^s$ strictly between two words of the type $T$ or $LU$, change the behavior of the particles corresponding to $U^rL^s$ so that it reads $U^s L^r$ instead (see the example in Figure \ref{fi_4}). We define the resulting diagram to be $\alpha(D)$.
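The word-level step of this construction can be sketched in code. The sketch below implements only the exponent swap $U^rL^s \mapsto U^sL^r$ on maximal blocks separated by $T$'s; the refinement that occurrences of $LU$ also act as separators is omitted, so this illustrates why the map squares to the identity rather than giving a full implementation of $\alpha$:

```python
def swap_block(block):
    """Exponent swap on one maximal block: U^r L^s  ->  U^s L^r."""
    r = len(block) - len(block.lstrip('U'))
    assert block == 'U' * r + 'L' * (len(block) - r), "block must be U^r L^s"
    return 'U' * (len(block) - r) + 'L' * r

def alpha_word(word):
    """Simplified word-level involution: swap exponents in every maximal
    U^r L^s block between T separators.  (The rule in the text also treats
    occurrences of LU as separators; that refinement is omitted here.)"""
    return 'T'.join(swap_block(b) for b in word.split('T'))

w = 'UULTULLL'
print(alpha_word(w), alpha_word(alpha_word(w)) == w)  # ULLTUUUL True
```

Applying the map twice restores the original word, the block-level analogue of $\alpha$ being an involution.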
It is readily checked that the map is an involution, and that it pairs up compatible diagrams. Thus this is true of the extended involution on all diagrams, too. This finishes the proof.
\section{Questions, remarks}
\label{sec_que}
\subsection{}
Is there an easily described conjugation matrix $U_{i,J}$ for general links $(i, J)$ in type C (or B or D)?
As an example, in the notation of Section \ref{sec_C}, consider the link $(i, J, J') = (2, \{3,4\}, \{2,3,4\})$ when $n=4$. Convert states in $\Theta_J$ (and $\Theta_{J'}$) to words $w_1w_2w_3w_4$. If the $i$th column is empty, let $w_i = 3$ ($2$); otherwise let $w_i = +i$ if there is a particle of class $i$ in the upper line, and $w_i = -i$ if there is a particle of class $i$ in the lower line. We order the $48$ states of $\Theta_J$ lexicographically: $\bar{2}\bar{1}33 < \bar{2}133 < \bar{2}3\bar{1}3 < \dots < 3321$. Similarly, the $8$ states of $\Theta_{J'}$ are ordered (from left to right) as $\bar{1}222 < 1222 < 2\bar{1}22 < 2122 < 22\bar{1}2 < 2212 < 222\bar{1} < 2221$. Indexing rows and columns this way, the transpose $U$ of the following matrix satisfies $M_J U = U M_{J'}$.
$
\left(
\begin{array}{c}
1 0 1 0 2 1 0 3 0 1 0 0 0 4 0 2 0 0 2 0 2 0 4 2 1 0 2 1 0 2 0 0 0 1 0 0 1 0 2 1 4 2 0 1 0 0 2 1 \\
1 0 1 0 1 0 0 3 0 1 1 1 0 4 0 2 2 2 2 0 2 0 2 0 1 0 1 0 0 2 1 1 0 1 1 1 1 0 1 0 3 1 1 2 1 1 1 0 \\
1 0 1 1 2 1 0 3 0 0 0 0 0 4 0 0 0 0 2 0 2 2 4 2 1 1 2 1 0 1 0 0 0 0 0 0 1 1 2 1 4 3 0 0 2 1 0 0 \\
1 0 1 0 0 0 0 3 0 1 2 1 0 4 0 2 4 2 2 0 2 0 0 0 1 0 0 0 0 2 2 1 0 1 2 1 1 0 0 0 2 2 2 1 2 1 0 0 \\
1 1 1 1 2 1 0 2 0 0 0 0 0 2 0 0 0 0 2 2 2 2 4 2 1 2 2 1 0 0 0 0 1 1 2 1 0 0 0 0 4 3 0 0 2 1 0 0 \\
1 0 0 0 0 0 0 3 1 1 2 1 0 4 2 2 4 2 2 0 0 0 0 0 0 1 0 0 1 1 2 1 1 1 2 1 0 0 0 0 4 3 0 0 2 1 0 0 \\
1 2 1 1 2 1 0 1 0 0 0 0 1 3 1 1 2 1 1 1 1 1 2 1 1 2 2 1 0 0 0 0 1 1 2 1 0 0 0 0 4 3 0 0 2 1 0 0 \\
0 1 0 0 0 0 1 2 1 1 2 1 2 4 2 2 4 2 0 0 0 0 0 0 1 2 2 1 0 0 0 0 1 1 2 1 0 0 0 0 4 3 0 0 2 1 0 0 \\
\end{array}
\right)
$
(This is an $8\times 48$ matrix all of whose entries are in $\{0,1,2,3,4\}$.)
Is there a combinatorial rule (along the lines of the queueing process of Section \ref{sec_fi}) which produces the column corresponding to any given state in $\Theta_{J'}$?
\subsection{} Is it possible to carry the explicit description of the stationary distribution $\Theta_J$ further than is done in Section \ref{sec_se}, say for $|J| = n - 2$?
\subsection{} Can the matrix $U$ in Section \ref{sec_fi} be defined without reference to the permutation representation of the group, i.e.\ using only the realization of the group as a reflection group?
\subsection{} Small examples indicate that a similar queueing process exists for groups of type B and D. Is it easier to extend the analysis in Sections \ref{sec_fi} and \ref{sec_se} for these groups?
\subsection{} Can the $k$-TASEP be extended to general Weyl groups?
\section{Introduction}
The application of perturbative QCD~\cite{gross} is not straightforward
even for reactions with large momentum transfer $Q$.
In higher order corrections soft and collinear momentum
regions may lead to large contributions of
order
$(\alpha_s \log(Q/m_q))^n$ where $m_q$ denotes a light quark mass.
These terms give ${\cal O}( 1)$ corrections which destroy the validity
of the perturbative treatment.
The applications
of perturbative QCD are limited to phenomena where such
terms are either cancelled or can be controlled with improved treatment
(resummation).
Fortunately, in perturbative QCD the soft and collinear structure
is relatively
well understood~\cite{colsoprevs}.
The main features are summarized by
fundamental cancellation and factorization theorems
valid in all orders in perturbation theory.
This lecture will show explicitly how the cancellation and factorization
theorems ``work'' in next-to-leading order applications.
I describe how to combine virtual and
real next-to-leading order infrared singular amplitudes into
finite physical cross sections.
The method of calculating
amplitudes in leading and next-to-leading order
have been described
in great detail by Lance Dixon~\cite{lancetasi}.
He also discussed
the asymptotic properties of amplitudes in the soft and collinear regions
as well as the soft and collinear singular terms appearing in one-loop amplitudes.
I describe the singularities as they appear in the various
cross section contributions (loop and bremsstrahlung contributions).
I consider their universal features and show explicitly that they cancel for
infrared safe quantities. The examples will always be chosen
from the physics of jet production.
In order to build a numerical program for efficiently calculating
hard scattering cross sections in next-to-leading order (NLO)
accuracy one should use a well defined algorithm
for the analytic cancellation of the soft and collinear singularities.
Several methods are used in the literature. I shall focus on the
subtraction method.
\section{Cancellation and factorization theorems}
\subsection{ KLN cancellation theorem}
In simple inclusive reactions,
such as the total cross section of $\epem$ annihilation into
quarks and gluons,
the soft and collinear contributions
cancel~\cite{kinoshitalee}. That is a consequence of the KLN theorem.
In the simplest examples, only one high momentum transfer scale
is relevant, the effective coupling becomes
small and
the cross section can reliably be calculated in power series
of the effective coupling up to small power
corrections
\begin{equation}
R = \frac {\sigma(e^+e^- \rightarrow {\rm hadrons})}{\sigma (e^+e^- \rightarrow \mu^+
\mu^-)} = \overbrace{(1 + \frac {\alpha_s}{\pi}+...)}^{\bar R}\, 3\sum_q
e_{q}^{2}
\end{equation}
where
\begin{equation}
\bar R = 1 + \frac{\alpha_s(\mu)}{\pi} + \biggl(\frac
{\alpha_s(\mu)}{\pi} \biggr)^2 \bigl[ \pi b_0 \ln \frac{\mu^2}{s} + B_2
\bigr] + ...
\end{equation}
${\ifmmode \alpha_S \else $\alpha_S$ \fi}$ is the running coupling constant,
$\mu$ is the renormalization scale, $B_2$ is a known constant
given by the NNLO calculation~\cite{nnloepem} and $b_0$ is the first coefficient
in the beta-function
\begin{equation}
b_0=\frac {11 - \frac {2}{3} n_f}{4 \pi}
\end{equation}
where $n_f $ denotes the number of quark flavours.
The truncated series is $\mu$-dependent but the $\mu$ dependence is
${\cal O}({\ifmmode \alpha_S \else $\alpha_S$ \fi}^{(n+1)})$ if the cross section is calculated to
${\cal O}( {\ifmmode \alpha_S \else $\alpha_S$ \fi}^{n})$.
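This compensation between the running of the coupling and the explicit logarithm can be checked numerically. The following sketch uses illustrative inputs (a one-loop running coupling normalized to $\alpha_s=0.118$ at $\mu_0=91.2$\,GeV, $n_f=5$, and the commonly quoted five-flavour value $B_2\simeq 1.409$) and compares the scale variation of the truncated series with and without the compensating term:

```python
import math

NF, PI = 5, math.pi
B0 = (33 - 2 * NF) / (12 * PI)          # one-loop beta-function coefficient

def alpha_s(mu, alpha0=0.118, mu0=91.2):
    """One-loop running: d(alpha_s)/d(ln mu^2) = -B0 * alpha_s^2."""
    return alpha0 / (1 + B0 * alpha0 * math.log(mu**2 / mu0**2))

def rbar(mu, sqrt_s=91.2, B2=1.409):
    """Truncated series for Rbar at renormalization scale mu.
    B2 = 1.409 is the commonly quoted five-flavour MS-bar value."""
    a = alpha_s(mu) / PI
    return 1 + a + a**2 * (PI * B0 * math.log(mu**2 / sqrt_s**2) + B2)

scales = [45.6, 91.2, 182.4]
nlo = [1 + alpha_s(mu) / PI for mu in scales]   # no compensating log term
nnlo = [rbar(mu) for mu in scales]
print(max(nlo) - min(nlo), max(nnlo) - min(nnlo))
```

The spread of the order-$\alpha_s^2$ result over the scale range is an order of magnitude smaller than that of the lowest-order term alone, in line with the ${\cal O}({\ifmmode \alpha_S \else $\alpha_S$ \fi}^{n+1})$ statement above.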
The KLN theorem remains
valid also when integrating over final states in a limited phase-space
region, as in the case of jet production.
The Sterman-Weinberg two-jet cross section~\cite{stermanweinberg}
is defined by requiring
that all the final-state partons lie within a back-to-back pair of cones
of size $\delta$,
unless their energy is less than $\varepsilon \sqrt{s}$.
At NLO
\begin{equation}\eqalign{
\sigma_{\rm 2jet} &= \sigma_{\rm SW} (s,\varepsilon,\delta)
\cr
& = \sigma_{\rm tot} - \sigma_{q \bar q
g}^{(1)} ({\rm all}\ E > \varepsilon \sqrt{s},
{\rm all}\ \theta_{ij} > \delta)
\cr &=
\sigma_0 \biggl[ 1 - \frac {4\alpha_s}{3\pi}\bigl(4 \ln (2 \varepsilon)
\ln \delta + 3 \ln \delta - 5/2 + \pi^2/3\bigr)\biggr]
}
\end{equation}
where $\sigma_0=4\pi\alpha^2/3s$.
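For orientation, the Sterman-Weinberg formula is easy to evaluate numerically. In this sketch the values of $\alpha_s$, $\varepsilon$ and $\delta$ are illustrative choices, and the bracket is transcribed directly from the equation above:

```python
import math

def two_jet_fraction(alpha_s, eps, delta):
    """NLO Sterman-Weinberg two-jet fraction sigma_2jet / sigma_0.
    eps:   energy cut (partons outside the cones carry E < eps*sqrt(s))
    delta: cone size; the formula is only meaningful for small eps, delta."""
    bracket = (4 * math.log(2 * eps) * math.log(delta)
               + 3 * math.log(delta) - 2.5 + math.pi**2 / 3)
    return 1 - (4 * alpha_s / (3 * math.pi)) * bracket

print(two_jet_fraction(0.118, 0.1, 0.3))  # ~ 0.75
```

Shrinking the cone size $\delta$ lowers the two-jet fraction, since more events are resolved as three-jet-like.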
The cancellation theorem here is hidden in the calculation
of the total annihilation cross section.
\subsection{Factorization theorem}
The initial state collinear singularities,
which in general do not cancel,
are universal and process-independent in all
orders in perturbation theory~\cite{colsoprevs}.
Therefore they can be cancelled
by universal collinear counter terms generated
by the \lq renormalization\rq of the incoming parton densities. The rule
for defining the finite part of this counter term is fixed
by the factorization scheme. As in the case of ultraviolet
renormalization~\cite{collinstasi},
the physics is unchanged under a change of the
factorization scheme, provided the parton densities are also changed
suitably. This feature is expressed by the Altarelli-Parisi
evolution equation of parton densities. The collinear
subtraction terms
define the kernels of the evolution equations.
The differential
cross section for hadron collisions can be written
as
\begin{equation}
d\sigma_{AB}(p_A,p_B)=\sum_{ab}\int dx_A dx_B f_{a/A}(x_A)
f_{b/B}(x_B)d\hat{\sigma}_{ab}(x_A p_A,x_B p_B)\,,
\label{factth}
\end{equation}
where $A$ and $B$ are the incoming hadrons, $p_A$ and $p_B$
their momentum, and the sum runs over all the parton flavours
which give a non-trivial contribution. The quantities
$d\hat{\sigma}_{ab}$ are the {\it subtracted} partonic cross sections,
in which the singularities due to collinear emission
of massless partons from the incoming partons have been cancelled
by some suitable counter terms.
According to the factorization theorem,
the subtracted cross section is obtained by adding the collinear
counter terms to
the unsubtracted cross section. The latter quantity
can be directly calculated in perturbative
QCD.
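A toy numerical version of eq.~(\ref{factth}) may help to fix the structure: two convolution integrals over momentum fractions, with the parton densities and the hard cross section supplied as functions. The flat ``densities'' and the bilinear $\hat\sigma$ below are purely illustrative (not physical), chosen so that the answer is known exactly:

```python
def hadronic_xsec(f_a, f_b, sigma_hat, n=400):
    """Toy version of the factorization formula for one flavour pair:
    sigma_AB = int dx_A dx_B f_a(x_A) f_b(x_B) sigma_hat(x_A, x_B),
    evaluated with a two-dimensional midpoint rule."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        xa = (i + 0.5) * h
        for j in range(n):
            xb = (j + 0.5) * h
            total += f_a(xa) * f_b(xb) * sigma_hat(xa, xb) * h * h
    return total

# flat toy "densities" and sigma_hat = x_A * x_B integrate to exactly 1/4
print(hadronic_xsec(lambda x: 1.0, lambda x: 1.0, lambda xa, xb: xa * xb))
```

Realistic applications replace the toy inputs by fitted parton densities and the subtracted partonic cross sections, and sum over flavour pairs.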
Due to universality, eq.~(\ref{factth}) applies also when
the incoming hadrons are formally substituted for partons.
In this case, we are also able to evaluate the partonic
densities, which at NLO read
\begin{equation}
f_{a/d}(x)=\delta_{ad}\delta(1-x)-\frac{{\ifmmode \alpha_S \else $\alpha_S$ \fi}}{2\pi}
\left(\frac{1}{\epb}P_{a/d}(x,0)-K_{a/d}(x)\right)
+{\cal O}\left({\ifmmode \alpha_S \else $\alpha_S$ \fi}^2\right),
\end{equation}
where $P_{a/d}(x,0)$ are the Altarelli-Parisi kernels in four
dimensions (since we will usually work in $4-2\epsilon$ dimensions, the $0$
in the argument of $P_{a/d}$ stands for $\epsilon=0$) and the functions
$K_{a/d}$ depend upon the subtraction scheme in which the calculation
is carried out. For ${\rm \overline{MS}}$, $K_{a/d}\equiv 0$. Writing
the perturbative expansion of the unsubtracted and subtracted partonic
cross sections at next-to-leading order as
\begin{equation}
d\sigma_{ab}=d\sigma_{ab}^{(0)}+d\sigma_{ab}^{(1)}\,,\;\;\;\;
d\hat{\sigma}_{ab}=d\hat{\sigma}_{ab}^{(0)}+d\hat{\sigma}_{ab}^{(1)}\,,
\label{decomposition}
\end{equation}
where the superscript 0 (1) denotes the leading (next-to-leading)
order contribution, we have
\begin{eqnarray}
d\hat{\sigma}_{ab}^{(0)}(p_1,p_2)&=&d\sigma_{ab}^{(0)}(p_1,p_2)
\\*
d\hat{\sigma}_{ab}^{(1)}(p_1,p_2)&=
&d\sigma_{ab}^{(1)}(p_1,p_2)
+ d\sigma_{ab}^{\rm count}(p_1,p_2)
\label{counterterms}
\end{eqnarray}
where
\begin{eqnarray}
d\sigma_{ab}^{\rm count}(p_1,p_2)&=&
\frac{{\ifmmode \alpha_S \else $\alpha_S$ \fi}}{2\pi}\sum_d\int dx\left(\frac{1}{\epb}P_{d/a}(x,0)
-K_{d/a}(x)\right)d\sigma_{db}^{(0)}(xp_1,p_2)
\nonumber \\*&&
+\frac{{\ifmmode \alpha_S \else $\alpha_S$ \fi}}{2\pi}\sum_d\int dx\left(\frac{1}{\epb}P_{d/b}(x,0)
-K_{d/b}(x)\right)d\sigma_{ad}^{(0)}(p_1,xp_2)\,. \nonumber
\\*
\label{counterterms2}
\end{eqnarray}
Eq.~(\ref{counterterms2}) defines the
collinear counter terms for any finite hard scattering cross section
for processes with quarks and/or gluons in the initial state.
Notice that in this
equation the Born terms $d\sigma^{(0)}$ are evaluated in
$4-2\epsilon$ dimensions.
The KLN theorem and the factorization theorem
constitute the theoretical basis of
the description of scattering processes of hadrons in
perturbative QCD.
Those physical quantities for which these theorems remain valid
are called infrared safe~\footnote{In other words,
an observable is infrared safe if
it is insensitive to collinear splittings of partons
and/or to the emission of soft gluons.}.
These theorems constitute the necessary
consistency condition for the validity of the fundamental assumption of
the QCD improved parton model. This assumption is that for
the case of infrared safe quantities
the perturbative QCD predictions given in terms of partons are a good
approximation to the same quantities measured in terms of
hadrons (up to power corrections which are small at high momentum
scales).
Provided the higher order corrections at a given order
are larger than the power corrections,
one can systematically
improve the accuracy of the predictions by calculating
terms of higher and higher order.
Indeed the analysis of the experimental
results required the inclusion of higher order radiative corrections
for a large number of measured quantities.
\section{Jet cross sections at next-to-leading order}
We consider the cross sections for three jet
production in $\epem$ annihilation and two-jet production
in hadron-hadron collisions. These cross sections are
proportional
to at least one power of ${\ifmmode \alpha_S \else $\alpha_S$ \fi}$ and are studied
experimentally with high precision.
\subsection{Three-jet production in ${\epem}$ annihilation}
\label{sec:infraredsafe}
Let us consider the process
\begin{equation}
e^-(k_-) + e^+(k_+) \rightarrow a_1(p_1) + ...\ + a_n(p_n)
\end{equation}
where
$a_i$ denote quarks or gluons and $n=3,4$.
The amplitudes ${\cal A}^{(n,i)}$ of these processes are known in the tree
approximation $(i=0)$ for $n=3,4,5$ partons
and in the one-loop approximation
$(i=1)$ for $(n=3)$ partons. Lance Dixon explained how to calculate
these amplitudes quickly with modern techniques.
It is convenient to consider the squared amplitude divided
by the flux and the spin averaging factor $8 s$ ($s=2k_+k_-$) :
\begin{equation}
\psi^{(n,0)}_{\epem}(\{a_l\}_{1,n};\{p_l\}_{1,n})=\frac{1}{8 s}\
\sum_{\stackrel{\rm colour}{\rm spin}}\abs{{\cal A}^{(n,0)}_{\epem}}^2\,
\label{bornampdef}
\end{equation}
and
\begin{equation}
\psi^{(3,1)}_{\epem}(\{a_l\}_{1,3};\{p_l\}_{1,3})=\frac{1}{8 s}\
\sum_{\stackrel{\rm colour}{\rm spin}}\left(
{\cal A}^{(3,0)}_{\epem}\,{\cal A}^{(3,1)*}_{\epem} +
{\cal A}^{(3,0)*}_{\epem}\,{\cal A}^{(3,1)}_{\epem}\right)
\label{virtual}
\end{equation}
where
$\{v_l\}_{m,m+n}$ is a short-hand notation for the list
of variables $v_m, v_{m+1}, \dots, v_{m+n}$ and we
indicated the flavour and momentum dependence
only for the $\psi$ functions.
The one-loop corrections
to the production of three
partons
are given by
$\psi^{(3,1)}$. They were
calculated first by R.K.~Ellis, Ross
and Terrano (ERT)~{\cite{ert}}. Recently, a new
derivation using the helicity method,
where the orientation with
respect to the beam direction is not averaged,
was given by Giele and Glover~\cite{gieleglover}.
The physical cross sections
are obtained by integrating
the product of the $\psi$ functions
and some ``measurement functions'' $S_{X}$
over the corresponding phase space volume
\begin{equation}
d\sigma^{\rm nlo}=
d\sigma^{\rm Born}
+
d\sigma^{\rm virt}
+
d\sigma^{\rm real}\,,
\label{bornvirtreal}
\end{equation}
where
\begin{eqnarray}
d\sigma^{\rm Born}(s,{X})
&=&\sum_{\{a_l\}_{1,3}}
\psi^{(3,0)}_{\epem}
{\cal S}_{X,3}(\{p_l\}_{1,3};{X}) d\phi_3(\{p_l\}_{1,3})\\
d\sigma^{\rm virt}(s,{X})
&=&\sum_{\{a_l\}_{1,3}}
\psi^{(3,1)}_{\epem}
{\cal S}_{X,3}(\{p_l\}_{1,3};{X}) d\phi_3(\{p_l\}_{1,3})\\
d\sigma^{\rm real}(s,{X})
&=&\sum_{\{a_l\}_{1,4}}
\psi^{(4,0)}_{\epem}
{\cal S}_{X,4}(\{p_l\}_{1,4};{X}) d\phi_4(\{p_l\}_{1,4})
\end{eqnarray}
where $X$ stands for the measured physical quantity and
\begin{equation}
d\phi_n = \frac{1}{n!} \prod_{i=1}^n\frac{d^{d-1}p_i}{(2\pi)^{d-1}\,2E_i}\,
(2\pi)^{d}\delta^{(d)}(k_+ + k_- -\sum_{i=1}^n p_i)
\end{equation}
with $d=4-2\epsilon$. As a result of the complete flavour sum
the final particles behave as though they were identical;
this explains the identical particle factor of
$n!$. All quantities are calculated
in $d=4-2\epsilon$ dimensions. The singular
terms appear as single or double poles in $\epsilon$.
The singularities, however, cancel in the sum~(\ref{bornvirtreal}).
{\it Infrared safe measurement function.}
The cancellation of the soft and collinear singularities
of the virtual corrections against the singular part of the
real contribution
is independent of the
form of the measurement functions
provided they
are insensitive to collinear splitting and soft emission.
This means that one obtains the same measured result
whether or not a parton splits into two collinear partons and whether or
not one parton emits another parton that carries infinitesimal
momentum. A physical quantity that is designed to look at
short distance physics should have this property, otherwise it will be
sensitive to the details of parton shower development and
hadronization. The mathematical requirements for ${\cal S}_{X,3}$ and
${\cal S}_{X,4}$ are that ${\cal S}_{X,4}$ should reduce to
${\cal S}_{X,3}$ when two of the outgoing partons become collinear:
\begin{equation}
\label{safeS}
{\cal S}_{X,4}(p_1^\mu,p_2^\mu,(1-\lambda)p_3^\mu,\lambda p_3^\mu)
={\cal S}_{X,3}(p_1^\mu,p_2^\mu,p_3^\mu)
\end{equation}
for $0\le \lambda \le 1$
plus similar conditions where the $\lambda$ and $1-\lambda$
factors are inserted to all possible pairs of the momenta in
$S_{X,4}(p_1^\mu,p_2^\mu,p_3^\mu,p_4^\mu)$.
{\it Example of an infrared safe observable.}
For the sake of illustration I give the definition of
the measurement function for the shape
variable
thrust $T$
\begin{equation}
{\cal S}_{T,n} = \delta \bigl(T - \tau_n (p_1^\mu, p_2^\mu, ..., p_n^\mu)\bigr)
\,,\hspace{.5cm} {\rm where}\hspace{.5cm}
\tau_n = \displaystyle\mathop{\max}_{\abs{\vec u}=1}
{{\sum\limits_{i=1}^{n}\abs{\vec p_i \cdot \vec u}}\over
{\sum\limits_{i=1}^n \abs{\vec p_i}}}\,.
\end{equation}
Thrust is well defined for an arbitrary number of
final-state particles. It is easy to check that it satisfies
the conditions of infrared safety formulated with
the help of eq.~(\ref{safeS}).
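The infrared-safety condition~(\ref{safeS}) can be tested numerically for thrust. The sketch below uses the identity $\max_{\vec u}\sum_i\abs{\vec p_i \cdot \vec u}=\max_{\epsilon_i=\pm1}\abs{\sum_i\epsilon_i\vec p_i}$, which gives the exact thrust for a small number of particles; the momenta are arbitrary illustrative values satisfying momentum conservation:

```python
import itertools, math

def thrust(momenta):
    """Exact thrust of a list of 3-momenta: the optimal axis is parallel
    to sum_i eps_i p_i for the best choice of signs eps_i = +-1, so
    T = max_eps |sum_i eps_i p_i| / sum_i |p_i|."""
    norm = sum(math.sqrt(px*px + py*py + pz*pz) for px, py, pz in momenta)
    best = 0.0
    for signs in itertools.product((1, -1), repeat=len(momenta)):
        sx = sum(s * p[0] for s, p in zip(signs, momenta))
        sy = sum(s * p[1] for s, p in zip(signs, momenta))
        sz = sum(s * p[2] for s, p in zip(signs, momenta))
        best = max(best, math.sqrt(sx*sx + sy*sy + sz*sz))
    return best / norm

# a momentum-conserving three-parton event (illustrative values)
event3 = [(1.0, 0.0, 0.0), (-0.4, 0.3, 0.0), (-0.6, -0.3, 0.0)]
lam = 0.37
split = event3[:2] + [tuple(lam * c for c in event3[2]),
                      tuple((1 - lam) * c for c in event3[2])]
print(thrust(event3), thrust(split))   # identical values
```

Replacing $p_3$ by the collinear pair $\lambda p_3$, $(1-\lambda)p_3$ changes neither numerator nor denominator, so the two events yield identical thrust, as eq.~(\ref{safeS}) requires.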
\begin{table}[htbp]
\begin{center}
{
\begin{tabular}{|lcl|l|}
\hline
Two & particles & $T = 1$ & Discontinuous\\
Three & particles & $1 > T > 2/3$ & in particle\\
Four & particles & $1 > T > 1/\sqrt{3}$ & multiplicity\\
$\infty$ & particles & $1 > T > 1/2$ & \\
\hline
\end{tabular}
}
\vskip 0.1cm
\caption{ Range of thrust for various numbers of partons}
\label{thrusttable}
\end{center}
\end{table}
\noindent
Thrust measures the sum of the lengths of the longitudinal momenta
of the final particles relative to the thrust axis $\vec{u}$,
chosen to maximize the sum. For two-particle final states its value
is 1. Its allowed range changes with the particle number;
therefore, its differential distribution is only well defined after
some smearing\,\footnote{Smearing is required also by hadronization
effects. The typical width of a Gaussian smearing is
$\Delta T = m_h/\sqrt{s}\approx 0.02\,.$}.
Carrying out the phase-space integral for the Born term one gets
\begin{equation}
\frac{1}{\sigma_0}
\; \frac{d \sigma^{\rm Born}}{dT} \; = \; \frac{\alpha_s}{2
\pi} \; \frac{4}{3} \; \biggl[ \frac{2 (3T^2 - 3T + 2)}
{ (1-T)T} \; \ln
\; \left(\frac{2T - 1}{1 - T} \right) - \frac{
3 (3T - 2) (2 - T)}{ (1-T)} \biggr]
\end{equation}
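As a quick check, the coefficient in this expression (transcribed directly from the equation above) vanishes at the lower endpoint $T=2/3$ of the three-parton phase space and is positive above it, growing large as $T\to1$:

```python
import math

def A_T(T):
    """LO thrust coefficient: (1/sigma_0) dsigma^Born/dT = (alpha_s/2pi) A_T(T),
    valid for 2/3 < T < 1."""
    cf = 4.0 / 3.0
    return cf * (2 * (3*T*T - 3*T + 2) / ((1 - T) * T)
                 * math.log((2*T - 1) / (1 - T))
                 - 3 * (3*T - 2) * (2 - T) / (1 - T))

print(A_T(2/3 + 1e-9))   # ~ 0: the coefficient vanishes at the endpoint
print(A_T(0.85))         # positive inside the three-parton region
```

The logarithmic growth as $T\to1$ signals the soft/collinear region, where the fixed-order result eventually requires resummation.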
In the classic paper of ERT
the singular pieces have been evaluated analytically, and it was
demonstrated that indeed for infrared safe quantities they cancel
as is required by the KLN theorem.
The remaining finite next-to-leading order
cross section is suitable for
numerical evaluation. ERT calculated the distribution
for the shape variable $C$.
But their analytic result can be used to calculate
any infrared safe three-jet-like quantity in next-to-leading order.
It is convenient to give the one-dimensional
distributions of various three jet
measures, generically denoted as
$X=T,C,\dots,E_J$, in a form that
satisfies the renormalization group equation
\begin{equation}
{1\over \sigma_0} {{d\sigma}\over{ dX }}
={{{\ifmmode \alpha_S \else $\alpha_S$ \fi} (\mu)}\over {2\pi}} A_X(X)
+\left({{\ifmmode \alpha_S \else $\alpha_S$ \fi} (\mu )\over 2\pi}\right)^2 \Bigl[A_X(X)2\pi b_0 \log
(\mu^2/s)+B_X(X)\Bigr]
\label{xxsec}
\end{equation}
$A_X$ and $B_X$ are scale-independent functions.
Their values are tabulated for many quantities
in ref.~{\cite{KunsztNason}}.
The next-to-leading order expression is scale independent
up to ${\cal O}({\ifmmode \alpha_S \else $\alpha_S$ \fi}^3)$
\begin{equation}
{d\over d\mu^2} \left({d\sigma\over dX}\right) = {\cal O}({\ifmmode \alpha_S \else $\alpha_S$ \fi}^3)\ .
\label{xrengr}
\end{equation}
The size and sign of the corrections are rather different
for the various jet measures.
In many cases the corrections are substantial $(\approx 30\%)$
even at LEP energy.
Thanks to the technical development described by Lance Dixon,
the NLO calculation will soon be available also for
four-jet production ($\psi^{(4,1)}_{\epem}$).
\subsection{ Jet cross section in hadron-hadron collisions }
At the Tevatron, multijet cross sections are observed up to
six jets~\cite{huston}.
The analysis of the data requires the evaluation
of the amplitudes of the parton processes
\begin{equation} \label{hhnproc}
a_1(p_1) + a_2(p_2)\rightarrow a_3(p_3) + ... + a_n(p_n)
\end{equation}
where the $a_i$ denote parton flavour labels, the $p_i$ their four-momenta and
$n$ is the number of the participating partons.
The data are rather precise for two- and three-jet like
quantities; therefore, the comparison with the theory
has to be done at next-to-leading order.
For this purpose
the tree amplitudes have to be known for $n=4,5,6$, while
in the case of $n=4,5$ we also have
to know the one-loop amplitudes.
It is convenient again to introduce $\psi$ functions
giving the squared amplitude divided by the flux and spin averaging
factors
\begin{eqnarray}
\psi^{(n,0)}(\{a_l\}_{1,n};\{p_l\}_{1,n})
&=&\frac{1}{2 s\,\omega(a_1)\omega(a_2) }\,
\sum_{\stackrel{\rm colour}{\rm spin}}\abs{{\cal A}^{(n,0)}}^2\,
\label{bornampdefhh}\\
\psi^{(n,1)}(\{a_l\}_{1,n};\{p_l\}_{1,n})
&=&\frac{1}{2 s\,\omega(a_1)\omega(a_2) }\,
\times\nonumber\\ &&
\sum_{\stackrel{\rm colour}{\rm spin}}
\left({\cal A}^{(n,0)}\,{\cal A}^{(n,1)*} +
{\cal A}^{(n,0)*}\,{\cal A}^{(n,1)}\right)
\label{virtualhh}
\end{eqnarray}
where $s=2p_1p_2$, ${\cal A}^{(n,i)}$ denote the tree- ($i=0$) and
one-loop ($i=1$) amplitudes of the
process~(\ref{hhnproc}) (helicity, flavour
and momentum labels are all suppressed), and
$\omega(a)$ is the number of colour and spin degrees
of freedom for the flavour $a$, in $4-2\epsilon$
dimensions
\begin{equation}
\omega(q)=2N_c\,,\;\;\;\;
\omega(g)=2(1-\epsilon)(N_c^2-1)\,.
\end{equation}
The hard-scattering cross section is decomposed
into four contributions
\begin{equation}
d\hat{\sigma}^{\rm nlo}=
d\sigma^{\rm Born}
+
d\sigma^{\rm virt}
+
d\sigma^{\rm real}
+
d\sigma^{\rm count}\,,
\label{bornvirtrealcount}
\end{equation}
where
\begin{eqnarray}
d\sigma^{\rm Born}_{a_1a_2}(p_1,p_2;{X})
&=&\sum_{\{a_l\}_{3,n}}
\psi^{(n,0)}
{\cal S}_{X,n-2}(\{p_l\}_{3,n};{X})\times\nonumber\\ &&
\hspace*{0.5cm}
d\phi_{n-2}(p_1,p_2;\{p_l\}_{3,n})
\label{hhborn} \\
d\sigma^{\rm virt}_{a_1a_2}(p_1,p_2;{X})
&=&\sum_{\{a_l\}_{3,n}}
\psi^{(n,1)}
{\cal S}_{X,n-2}(\{p_l\}_{3,n};{X})
\times\nonumber\\ &&
\hspace*{0.5cm} d\phi_{n-2}(p_1,p_2;\{p_l\}_{3,n})
\label{hhvirt}\\
d\sigma^{\rm real}_{a_1a_2}(p_1,p_2;{X})
&=&\sum_{\{a_l\}_{1,n+1}}
\psi^{(n+1,0)}
{\cal S}_{X,n-1}(\{p_l\}_{3,n+1};{X})
\times\nonumber\\ &&
\hspace*{0.5cm}
d\phi_{n-1}(p_1,p_2;\{p_l\}_{3,n+1})
\label{hhreal}
\end{eqnarray}
where
$d\phi_{n-2}(p_1,p_2;\{p_l\}_{3,n})$ is the phase-space volume
for $n-2$ final particles with total energy defined by
the incoming four-momenta $p_1$ and $p_2$,
$X$ is a generic notation of the measured physical
quantity, ${\cal S}_{X,n}$ denotes the measurement functions of $X$
defined in terms of $n$ partons
and the counter-term cross sections
$d\sigma_{ab}^{\rm count}(p_1,p_2;{X})$ are given
by eq.~(\ref{counterterms2}).
The loop corrections $\psi^{(4,1)}$ have been obtained
for the spin-independent case by Ellis and Sexton~\cite{ES};
the spin-dependent one-loop corrections have been obtained
by the helicity method~\cite{kusitr} and very
recently the one-loop
corrections to the five-parton processes $\psi^{(5,1)}$
have also been calculated (see the lecture
of Lance Dixon).
All four terms contributing to the hard scattering
cross section~(\ref{bornvirtrealcount})
have to be calculated in $4-2\epsilon$ dimensions. The individual
terms have singular $1/\epsilon$ and $1/\epsilon^2$ contributions.
The singularities cancel in the sum, provided the
measurement functions satisfy the conditions of infrared
safety formulated in
subsection~\ref{sec:infraredsafe}\,\footnote{
In the case of hadron-hadron collisions the measurement
functions should also fulfil the condition
that
${{\cal S}}_{X,n-1}$
should reduce to ${{\cal S}}_{X,n-2}$
when one of the partons becomes parallel to one of the beam
momenta.}.
The hard scattering cross section given by
eq.~(\ref{bornvirtrealcount}) is finite
and, provided the cancellation of the singular terms is achieved
analytically, it is suitable for the numerical evaluation
of physical cross sections
with the use of eq.~(\ref{factth}).
\section{Methods of analytic cancellation of the singularities}
Although eqs.~(\ref{bornvirtreal}),\,(\ref{bornvirtrealcount})
define finite cross-sections, they cannot be used directly
for numerical evaluation since the singular terms in the
real contributions are obtained by integrating
over the soft and collinear
kinematical range. Since the phase space is large
and its boundary, due to the presence of arbitrary
measurement functions, is complicated,
an analytic evaluation is impossible.
Fortunately, in the soft and collinear
regions the cross-sections and the measurement
functions have a simple universal behaviour
such that the integration relevant for the calculation
of the singular terms becomes feasible
analytically. This feature can be implemented
in two basically equivalent methods.
In the first case (the {\it phase space slicing method})
one excludes (slices) from the numerical integration
domain the singular regions, such that the numerical
integration becomes well defined. In the excluded regions
the $\psi$ functions, the ${\cal S}$ functions and the relevant
phase-space factors can be replaced
with their limiting values, and the integrals over these regions
are performed
analytically. The boundaries of the excluded regions
are defined by some small parameters (an invariant-mass parameter,
or small angle and energy-fraction parameters
as in the case of the Sterman-Weinberg jet cross-section).
Let us consider as illustration a one-dimensional problem
with integration domain $0\le x \le 1$ and an integrand which
has a simple pole at $x=0$.
One slices the integration region into two pieces,
$0<x<\delta$ and $\delta<x<1$. We choose $\delta \ll 1$, thus allowing
us to use the simple approximation $F(x) \rightarrow F(0)$ for $0 < x < \delta$.
This gives
\begin{eqnarray}
I &\sim& \lim_{\epsilon \rightarrow 0}
\left\{
F(0)\ \int_0^\delta { dx \over x}\ x^\epsilon
\ +\
\int_\delta^1 { dx \over x}\ x^\epsilon\ F(x)
\ - \ {1 \over \epsilon}\ F(0)
\right\}
\nonumber\\
&=&
F(0)\ \ln (\delta)
\ +\
\int_\delta^1 { dx \over x}\ F(x) \,.
\label{cutting}
\end{eqnarray}
Now the second integral can be performed by normal Monte Carlo
integration. As long as $\delta$ is small, the sum of the
two terms will be
independent of $\delta$. About the first use of
this method, see refs.~\cite{Owens,FSKS,Greco}.
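To make the slicing procedure concrete, here is a small numerical sketch (my own illustration, not part of the original discussion) of eq.~(\ref{cutting}) in Python, with the test function $F(x)=e^x$: the explicit $\ln\delta$ of the analytic piece cancels against the $\delta$-dependence of the numerical integral, so the sum approaches the $\delta\to 0$ limit $\int_0^1 dx\,(e^x-1)/x$.

```python
import math
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids version-specific numpy helpers)."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

def sliced_finite_part(delta, n=200_000):
    """Finite part of eq. (cutting) for the test function F(x) = exp(x):
    F(0) * ln(delta) plus a numerical integral of F(x)/x over [delta, 1]."""
    t = np.linspace(np.log(delta), 0.0, n)     # substitute x = e^t
    return math.log(delta) + trapezoid(np.exp(np.exp(t)), t)

# exact delta -> 0 limit: integral of (e^x - 1)/x over [0, 1] = sum_k 1/(k k!)
limit = sum(1.0 / (k * math.factorial(k)) for k in range(1, 20))

for delta in (1e-2, 1e-4, 1e-6):
    print(delta, sliced_finite_part(delta))    # approaches `limit` as delta -> 0
```

The residual $\delta$-dependence is of order $\delta$, which illustrates the compromise discussed below: a smaller $\delta$ is more accurate analytically but makes the numerical integrand steeper near the boundary.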
The method was
further developed using systematically the
universal features of the soft and collinear limits
by Giele, Glover and Kosower~\cite{gieleglover,GiGlKo}.
The actual implementation is not completely straightforward.
The
numerical integration over the real contribution
has to be done with a certain accuracy. Since the integrand
increases steeply near the boundaries of the singular regions, the
parameters defining the boundaries must not be very small.
Furthermore, the result of the analytic integration
is approximate since
the integrand is replaced with its
limiting value. The result is more and more accurate
as the parameters defining the boundary of the singular
regions become smaller and smaller. Fortunately, the measured
cross sections have finite
accuracy, which
sets the required precision for the theoretical evaluation.
In practical applications one compromises over these
conflicting requirements to achieve the best efficiency.
The other method is called the {\it subtraction method}.
This method takes account of the fact that the
singular behaviour of the real contribution
is just some simple pole
with well known simple residue.
After subtraction the
numerical integration over the subtracted integrand becomes
convergent. The quantity subtracted should be added back on.
The
subtraction terms, however, are simple, so the dimensionally
regulated singular
integrations can be carried out analytically.
This can be illustrated again with a simple one-dimensional integral.
We write
\begin{eqnarray}
I &=& \lim_{\epsilon \rightarrow 0}
\left\{
\int_0^1 { dx \over x}\ x^\epsilon\ [ F(x) - F(0) ]
\ +\ F(0) \int_0^1 { dx \over x}\ x^\epsilon
\right\}
\nonumber\\
&=&
\int_0^1 { dx \over x}\ [ F(x) - F(0) ] \ + \ {1 \over \epsilon}\ F(0).
\label{subtraction}
\end{eqnarray}
The integral can now be performed by Monte Carlo integration.
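The same test integral can be evaluated with the subtraction method. In the sketch below (again an illustration with the assumed test function $F(x)=e^x$), the subtracted integrand $[F(x)-F(0)]/x$ is bounded on $[0,1]$, so plain Monte Carlo sampling converges without any slicing parameter:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def F(x):
    return np.exp(x)

# finite part of eq. (subtraction): the subtracted integrand (F(x) - F(0))/x
# is bounded on [0, 1], so plain Monte Carlo converges
x = rng.random(400_000)
mc_estimate = np.mean((F(x) - F(0.0)) / x)

# exact value: integral of (e^x - 1)/x over [0, 1] = sum_k 1/(k k!)
exact = sum(1.0 / (k * math.factorial(k)) for k in range(1, 20))
print(mc_estimate, exact)
```

In a real NLO calculation the role of $F$ is played by the $(n{+}1)$-parton matrix element times the measurement function, and $F(0)$ by the universal soft/collinear limit.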
This method was
first used for QCD
calculations by R.K.~Ellis, Ross, and Terrano~\cite{ert} and it was
further developed with systematic use of the simplicity
of the soft and collinear limit in ref.\cite{KS}.
More applications can be found in
refs.~\cite{KunsztNason,EKS,nrm,FiKuSi}.
The disadvantage of this method is that outside the singular region
the
values of the physical parameters at a given phase-space
point will change if we take the limit corresponding to
the soft or collinear configurations of the subtraction terms.
In the case of Monte Carlo numerical evaluation this can lead
to relatively large fluctuations in the binned distributions.
This problem can be avoided by using
Gaussian smearing over the bin size~\cite{KS}.
\section{Virtual contributions}
In the following I shall consider only the squared matrix
elements of process~(\ref{hhnproc}).
The virtual contributions to the cross sections
are given by eq.~(\ref{hhvirt}).
We are interested in the form of the singular
terms of the functions $\psi^{(n,1)}$. It turns out that they
have a very simple general structure:
\begin{eqnarray}
\psi^{(n,1)}(\{a_l\}_{1,n},\{p_l\}_{1,n})&=&
\frac{{\ifmmode \alpha_S \else $\alpha_S$ \fi}}{2\pi}\left(\frac{4\pi\mu^2}{Q^2}\right)^{\epsilon} c_{\Gamma}\nonumber\\
&&\hspace{-1cm}
\left\{-\frac{1}{\epsilon^2}\sum_{l=1}^n C(a_l)
-\frac{1}{\epsilon}\sum_{l=1}^n \gamma(a_l)\right\}
\psi^{(n,0)}(\{a_l\}_{1,n},\{p_l\}_{1,n})\nonumber\\
&&
\hspace{-1cm} + \
\frac{1}{2\epsilon}
\sum_{\stackrel{i,j=1}{i\neq j}}^{n}
\ln \left(\frac{2 \ p_i\cdot p_j}{Q^2}\right)
\psi_{ij}^{(n,0)}(\{a_l\}_{1,n},\{p_l\}_{1,n})\nonumber\\
&&
\hspace{-1cm} + \
\psi^{(n,1)}_{{\rm NS}}(\{a_l\}_{1,n},\{p_l\}_{1,n})
\label{psivirtDR}
\end{eqnarray}
where we introduced the short-hand notation
\begin{equation}
c_\Gamma=\frac{\Gamma^2(1-\epsilon)\Gamma(1+\epsilon)}
{\Gamma(1-2\epsilon)}\,,
\label{cgdefcol}
\end{equation}
$Q^2$ is an auxiliary variable which cancels in the full
expression, $\mu$ is the scale introduced by dimensional regularization,
$\psi_{ij}^{(n,0)}$ denotes
the colour-connected Born squared matrix elements,
$C(a_l)$ is the colour charge of parton $a_l$
and the constant $\gamma (a_l)$ gives the size of the virtual
contributions
to the diagonal Altarelli-Parisi kernel $P_{a_l/a_l}(\xi)$
\begin{eqnarray} \label{Candgamma}
C(g) &=& N_c ; \qquad \quad
\gamma(g) = {11N_c - 2 N_{\rm f} \over 6} \; \; \; {\rm for\ \ gluons}\\
C(q) &=& { N_c^2-1 \over 2 N_c}; \quad
\gamma(q) = {3 (N_c^2-1) \over 4 N_c} \; \; \; \; {\rm for\ \ quarks}\,,
\end{eqnarray}
finally $\psi^{(n,1)}_{{\rm NS}}$ represents the remaining
finite terms.
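For orientation, the constants in eq.~(\ref{Candgamma}) can be evaluated exactly; the snippet below (a numerical aside with $N_c=3$ and an illustrative choice of $N_{\rm f}=5$ active flavours) reproduces the familiar values $C(g)=C_A=3$, $C(q)=C_F=4/3$, $\gamma(g)=23/6$ and $\gamma(q)=3C_F/2=2$.

```python
from fractions import Fraction

Nc = Fraction(3)
Nf = Fraction(5)   # number of active flavours (illustrative choice)

C_g = Nc                          # colour charge of the gluon, C_A
C_q = (Nc**2 - 1) / (2 * Nc)      # colour charge of the quark, C_F
gamma_g = (11 * Nc - 2 * Nf) / 6
gamma_q = 3 * (Nc**2 - 1) / (4 * Nc)

print(C_g, C_q, gamma_g, gamma_q)   # 3  4/3  23/6  2
```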
The derivation of this result is rather
simple~\cite{gieleglover,kusitr-sing}.
First we observe that in
axial gauge the collinear singularities come from the self
energy corrections to the external lines \cite{colsoprevs}.
For each helicity and colour sub-amplitude
they are
proportional to the Born term
since the
Altarelli-Parisi functions, $P_{a/a}(z)$,
for diagonal splitting preserve helicity in the $z\rightarrow 1$ limit.
Therefore, the collinear
singularities of the one-loop amplitudes have the form
\begin{equation}
\label{Colsing}
{\cal A}^{\rm loop}_{\rm col}=
-\left({g\over 4\pi}\right)^2
\sum_{l=1}^n {\gamma (a_l)\over \epsilon}\,
{\cal A}^{\rm tree}.
\end{equation}
There is a contribution for every external leg and
the full contribution to $\psi^{(n,1)}$ is easily obtained
using eq.~(\ref{hhvirt}).
The structure of multiple soft
emission from hard processes in QED was investigated by Grammer and Yennie
\cite{Gam73}. They have shown that the energetic electrons
participating in a hard process receive an eikonal phase
factor. In quantum chromodynamics the situation is very similar
except that the eikonal factor is a matrix equal
to the path ordered product of the matrix-valued gluon field
\cite{ColSop81,Cia81,March88}.
For one soft gluon, the main result is very simple:
it states that the singular contributions
come from configurations where the soft gluon is attached
to the external legs of the graphs. Therefore,
the soft contribution can easily be calculated in terms
of the Born amplitude.
The insertion of a soft gluon that connects the external legs $i$ and
$j$ has a twofold effect.
\noindent
First, after carrying out the loop integral and dropping singular
terms corresponding to collinear configurations, we pick up
the same eikonal factor as in QED~\footnote{Using unitarity this form of the soft factor can be confirmed
by integrating the gluon momenta over the bremsstrahlung
eikonal factor, as we shall see in the next section.}
\begin{equation}
E_{ij}=
- \left( {g\over 4\pi}\right)^2 c_{\Gamma}
\frac{1}{\epsilon^2}\left(-{ \mu^2\over s_{ij} }\right)^{\epsilon}.
\label{softsin}
\end{equation}
Secondly, the remaining part of the amplitude
is the same as the Born amplitude except that it gets rotated in the
colour space by
the insertion
of the colour matrices appearing in the two vertices of the
soft line connecting
the hard lines $i$ and $j$, therefore we have the replacement
\begin{equation}
{\cal A}^{(n,0)}_{c_1 c_2..c_n} \rightarrow
E_{ij}\sum_{a, c_i', c_j'}
t^{a}_{ c_i c_i'}
t^{a}_{ c_j c_j'}
{\cal A}^{(n,0)}_{c_1... c_i'...c_j'...c_n}
\end{equation}
where $c_i$'s denote colour indices for the external partons
and
$t^{a}_{c_i c_i'}$ is the SU(3) generator matrix for the colour
representation of line $i$, that is $t^{a}_{ij}$ is
$(1/2) \lambda_{ij}^a$ for an outgoing quark,
$-(1/2) \lambda_{ij}^{*\, a}$ for an outgoing antiquark, and
$i f_{aij}$ for an outgoing gluon.
For an incoming parton, the same formula can be used as long
as we use the conjugate colour representations,
$-(1/2) \lambda_{ij}^{*\, a}$ for a quark,
$(1/2) \lambda_{ij}^a$ for an antiquark, and
$i f_{aij}$ for a gluon.
Finally, using the definition of $\psi^{(n,1)}$
we
obtain
\begin{eqnarray}
\psi^{(n,0)}_{ij}(\{a_l\}_{1,n},\{p_l\}_{1,n})&=&
\frac{1}{4 p_i\cdot p_j \omega (a_1)\omega (a_2)}\nonumber \\
&&\hspace{-3cm}
2\,{\rm Re}
\left(\sum_{spin}\sum_{\{c_l\}_{1,n}}\sum_{a, c_i', c_j'}
t^{a}_{ c_i c_i'}
t^{a}_{ c_j c_j'}
{\cal A}^{(n,0)}_{c_1... c_i'...c_j'...c_n}
{\cal A}^{(n,0)*}_{c_1... c_i...c_j...c_n}\right)
\label{psimn}
\end{eqnarray}
The result of eq.~(\ref{psivirtDR})
is scheme-dependent~\cite{kusitr}.
In conventional dimensional regularization
$\psi^{(n,0)}$ and $\psi_{ij}^{(n,0)}$ denote the Born and colour-connected
Born cross section in $4-2\epsilon$ dimensions and $\psi_{NS}^{(n,1)}$ denotes
the remaining finite terms in this scheme.
Finally, we note that to obtain the form of eq.~(\ref{psivirtDR})
one should expand the eikonal factor in $\epsilon$ and
apply the soft-colour identity~\cite{KS}
\begin{equation}
\sum_{\stackrel{j=1}{i\neq j}}^{n}\psi^{(n,0)}_{ij}(\{a_l\}_{1,n},\{p_l\}_{1,n}) =
2\,C(a_i)\,\psi^{(n,0)}(\{a_l\}_{1,n},\{p_l\}_{1,n})\,.
\label{mmnident}
\end{equation}
\section{Real contributions}
In this section I consider the limiting
behaviour of the real contribution of the process~(\ref{hhnproc})
in the soft and collinear limit.
I also give the local subtraction terms which
render the real contributions integrable over the whole
phase-space region.
The discussion will be detailed for the soft contributions.
In the case of the collinear limits
I only summarise their most salient features.
\subsection{Kinematics}
The four-momenta of the reaction~(\ref{hhnproc})
can be parameterised, for example,
in terms of transverse momenta,
rapidities and azimuthal angles
\begin{eqnarray}\nonumber
&&p_1^\mu \,=\, \frac{\sqrt{s}}{2} (x_1,0,0,x_1)\,, \quad
p_2^\mu \,=\, \frac{\sqrt{s}}{2} (x_2,0,0,-x_2)\, \\ \label{fourmom}
&&p_{i}^\mu \,=\,
p_{\perp,i} \ ({\rm ch}\ y_i,\cos \phi_i, \sin \phi_i, {\rm sh}\ y_i),
\; \; i \in \{ 3,...,n \} \,.
\end{eqnarray}
From energy and longitudinal momentum conservation
we obtain for the momentum fractions
\begin{equation}
x_1 \,=\, \frac{1}{\sqrt{s}} \sum_{i=3}^{n} p_{\perp,i}\, e^{y_i}\,, \quad
x_2 \,=\, \frac{1}{\sqrt{s}} \sum_{i=3}^{n} p_{\perp,i}\, e^{-y_i}\,.
\end{equation}
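As a quick numerical cross-check of the parameterisation (\ref{fourmom}) (an illustration; the numerical values are arbitrary), one can build massless four-momenta from random $(p_{\perp,i},y_i)$ and verify that the momentum fractions above reproduce the total energy and longitudinal momentum of the final state. Transverse-momentum conservation would impose an additional constraint on the azimuthal angles, which is not needed here:

```python
import numpy as np

rng = np.random.default_rng(1)
sqrt_s = 100.0
n_final = 4                                # final-state partons i = 3, ..., n

pT = rng.uniform(5.0, 30.0, n_final)       # transverse momenta p_{perp,i}
y  = rng.uniform(-2.0, 2.0, n_final)       # rapidities y_i

# from eq. (fourmom): E_i = pT cosh(y_i), p_{z,i} = pT sinh(y_i)
E  = pT * np.cosh(y)
pz = pT * np.sinh(y)

x1 = np.sum(pT * np.exp(+y)) / sqrt_s
x2 = np.sum(pT * np.exp(-y)) / sqrt_s

# energy and longitudinal-momentum conservation against p_1 + p_2
assert np.isclose(0.5 * sqrt_s * (x1 + x2), E.sum())
assert np.isclose(0.5 * sqrt_s * (x1 - x2), pz.sum())
```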
Considering singular limits we shall always assume
the use of some suitable set of independent variables
appropriate for the phase-space integration after imposition
of four-momentum conservation.
For the definition of measurable quantities, rapidity and transverse
momentum variables are particularly convenient since they
are boost-invariant~\cite{KS,EKS}.
In the evaluation of
the soft and collinear limit it appears more convenient, however,
to use energy-angle variables
\begin{equation}
p_i^{\mu}=E_i\,(1,
\sin \phi_i\sin \theta_i, \cos \phi_i\sin \theta_i,\cos \theta_i)\,.
\end{equation}
\subsection{Soft subtraction terms and soft contributions}
The cross section of the real contribution (see eq.~(\ref{hhreal}))
is constructed from the product of three factors:
the function $\psi^{(n+1,0)}$, the measurement function ${\cal S}_{X,n-1}$ and
the phase-space factor $d\phi_{n-1}$.
Considering their soft limits
let us assume that parton $a_k$ is soft
($3\le k \le n+1$).
Its energy is denoted by $E_k$ and its angular
correlations are controlled by the four-vector $n^{\mu}_k$
defined by the relation
\begin{equation}
k^{\mu}=E_k n^{\mu}_k=E_k (1,\vec{n}_k)\,.
\end{equation}
The method that has been used in the previous section to calculate
the soft limit of the virtual contribution $\psi^{(n,1)}$
also applies for the real emission,
and we obtain
\begin{eqnarray}
\lim_{E_k\rightarrow 0}\psi^{(n+1,0)}(\{a_l\}_{1,n+1};\{p_l\}_{1,n+1})
&=& \delta_{ga_k}
\frac{4\pi{\ifmmode \alpha_S \else $\alpha_S$ \fi}}{E_k^2}
\sum_{i,j;\,i<j}^{[k]}e(p_i,p_j,n_k)\times\nonumber\\ &&
\psi^{(n,0)}_{ij}(\{a_l\}^{[k]}_{1,n+1};\{p_l\}^{[k]}_{1,n+1})
\end{eqnarray}
where the list $\{v_l\}^{[k]}_{m,m+n}$ denotes the same list
as $\{v_l\}_{m,m+n}$ but $v_k$ ($m\le k\le m+n$) is left out,
$\psi^{(n,0)}_{ij}$ is the colour-correlated Born contribution
defined by eq.~(\ref{psimn}) in the previous section and $e(p_i,p_j,n_k)$
is the eikonal factor for real emissions:
\begin{equation}
e(p_i,p_j,n_k) =
\frac{p_i\cdot p_j}{p_i\cdot n_k~p_j\cdot n_k}.
\label{eij}
\end{equation}
We note that $e(p_i,p_j,n_k)$ is independent of $E_k$ but is
dependent on the
angular variables of the soft line. The colour-correlated Born terms
$\psi_{ij}$, however, are
completely independent of the soft momenta.
The soft limit of the
measurement function is given by the requirement of infrared safety
\begin{equation}
\lim_{E_k\rightarrow 0}{\cal S}_{X,n-1}(\{p_l\}_{3,n+1 };X)=
{\cal S}_{X,n-2}(\{p_l\}^{[k]}_{3,n+1};X)
\end{equation}
and the phase-space factor behaves as
\begin{equation}
\lim_{E_k\rightarrow 0}
d\phi_{n-1}(p_1,p_2\rightarrow \{p_l\}_{3,n+1})
= \frac{1}{n-1}d\phi_1[k]d\phi_{n-2}\left(p_1,p_2\rightarrow \{p_l\}^{[k]}_{3,n+1}\right)
\end{equation}
where
\begin{equation}
d\phi_1[k] = \mu^{2\epsilon}
\int \frac{d^{d-1}p_k}{(2\pi)^{d-1}\,2p^0_k} = \mu^{2\epsilon}
\int \frac{E^{1-2\epsilon}_k}{2\,(2\pi)^{(3-2\epsilon)}} dE_k \int{\rm d}\Omega_{3-2\epsilon}
\end{equation}
is the phase-space integral over parton $a_k$
and the decomposition into energy and angular integrals is also
shown\,\footnote{
We note that in the soft limit the condition imposed by the
delta function of momentum conservation is different from the original one;
therefore our notation is not completely precise. In the
soft limit some of the components of the momenta $\{p_l\}^{[k]}_{3,n+1}$
will be different from the original ones and
instead of $\{p_l\}^{[k]}_{3,n+1}$ we should write
$\lim_{E_k\rightarrow 0}\{p_l\}^{[k]}_{3,n+1}$. We tacitly assume that
in the soft limit $p_l$ denotes its limiting value.
It
is uniquely defined after choosing the independent set of variables
in the phase space-integral $d\phi_{n-1}$.}.
{\it The local soft subtraction term} for subtracting the soft singular
behaviour in the $E_k\rightarrow 0$ limit is
given by the integrand of the expression below
\begin{eqnarray}
d\sigma^{(\rm soft,sub)}_{a_1a_2,k}(p_1,p_2;{X})
&=&-\frac{\delta_{a_kg}}{n-1}\,\frac{{\ifmmode \alpha_S \else $\alpha_S$ \fi}}{ 2\pi}
\sum_{i,j;\,i<j}^{[k]}\nonumber\\
&&\hspace{-1.5cm}
\sum_{\{a_l\}^{[k]}_{3,n+1}}
\left[8\pi^2\, e(p_i,p_j,n_k)\,
\frac{1}{E^2_k}\Theta(E_c\ge E_k)d\phi_1[k]\right]
\nonumber \\
&&\psi^{(n,0)}_{ij}(\{a_l\}^{[k]}_{1,n+1};\{p_l\}^{[k]}_{1,n+1})
{\cal S}_{X,n-2}(\{p_l\}^{[k]}_{3,n+1};{X})
\times\nonumber\\ &&
\hspace*{0.5cm}
\frac{1}{(n-2)!}d\phi_{n-2}(p_1,p_2\rightarrow \{p_l\}^{[k]}_{3,n+1})
\label{hhsoftsub}
\end{eqnarray}
In eqs.~(\ref{hhreal}) and
(\ref{hhsoftsub}) we can use the
same independent integration variables;
therefore, by adding eq.~(\ref{hhsoftsub})
to the integrand of (\ref{hhreal}) we subtract its singular
behaviour in the region defined by the $E_k\rightarrow 0$ limit.
Again, to ensure that
the subtraction does not change
the value of the
original expression, what is subtracted has to be added back,
but in an integrated form where the
singular terms are calculated analytically.
This is achieved by carrying out the
phase-space integral $d\phi_1[k]$. The expression
in the square brackets in eq.~(\ref{hhsoftsub})
defines the soft integral
\begin{equation}
{\cal I}^{\rm soft}_{ij}(z_{ij},\mu/E_c,\epsilon)
=8\pi^2\mu^{2\epsilon}\int\frac{d^{d-1}p_k}{2p^0_k(2\pi)^{d-1}}\,
\frac{p_i\cdot p_j}{p_i\cdot p_k\,p_j\cdot p_k}\,
\Theta(E_c\ge E_k)
\end{equation}
where $E_c$ is a cut-off value on the energy of the particle which
is allowed to be soft.
We also indicated the remaining dependences of the integral;
the angular variable
$z_{ab}$ is defined as
$$z_{ab}=\cos\theta_{ab}=\frac{\vec{p}_a\cdot\vec{p}_b}
{\abs{\vec{p}_a}\abs{\vec{p}_b}}\,.$$
\begin{table}[!htbp]
\begin{center}
{
\begin{tabular}{|c|c|}
\hline
\rule[-1.2ex]{0mm}{6ex}
{\it Definition } & {\it Answer } \\
\hline
\rule[-2ex]{0mm}{8ex}
$ J(\epsilon) =
\frac{\Omega_{1-2\varepsilon}}{\Omega_{2-2\varepsilon}}
\int_{-1}^1d\cos\theta
\int _0^{\pi}d\phi\,
\frac{(\sin\theta\sin\phi)^{-2\varepsilon}}{1-\cos \theta} $
& $ - \frac{1}{\varepsilon} +2\ln 2 $\\
\rule[-2ex]{0mm}{6ex}
$\quad\quad =
\int_{-1}^1 dz\, \frac{(1-z^2)^{-\varepsilon}}{1-z} $
&
$ +\varepsilon({\rm Li_2}(1)-2\ln^2 2)$
\\
\hline
\rule[-2ex]{0mm}{6ex}
$ I_{\phi}(z_{ij},z_{ik}) = \int_0^{\pi}d\phi_k
\frac{1}{1- z_{ij}z_{ik}
-\sin\theta_{ij}\sin\theta_{ik}\cos\phi_k }
$ &
$\frac{\pi}{\abs{z_{ik}-z_{ij}}}
$ \\
\hline
\rule[-2ex]{0mm}{6ex}
$J^{(0)}
(z_{ij}) = \frac{1}{\pi}\int_{-1}^1dz_{ik}\int_0^{\pi}
d\phi_k\, l^{R}_{ij}$ &
$ \ln\frac{1-z_{ij}}{2} $
\\
\hline
\rule[-2ex]{0mm}{6ex}
$J^{(\phi)}
(z_{ij}) = \frac{1}{\pi}\int_{-1}^1dz_{ik}\int_0^{\pi}
d\phi_k\,\ln\sin^2\phi_k\ l^{R}_{ij}$ &
$\ - \ln 2(1+z_{ij})\ln\frac{1-z_{ij}}{2} $
\\
\hline
\rule[-2ex]{0mm}{6ex}
$ J^{(z)}
(z_{ij}) = \frac{1}{\pi}\int_{-1}^1dz_{ik}
\ln\sin^2\theta_{ik}
\int_0^{\pi}
d\phi_k\, l^{R}_{ij}$ &
$\frac{1}{2}\ln^2 2(1-z_{ij})-2\ln^22$
\\
\rule[-2ex]{0mm}{6ex}
{} &
$ +{\rm Li_2}(1)-
{\rm Li_2}(\frac{ 1-z_{ij} }{2})
$ \\
\hline
\end{tabular}
}
\vskip 0.1cm
\caption{ List of angular integrals}
\label{softintab}
\end{center}
\end{table}
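The closed forms in the table are straightforward to check numerically. As an example (my own check), the azimuthal integral $I_{\phi}$ follows from the textbook formula $\int_0^{\pi}d\phi/(a+b\cos\phi)=\pi/\sqrt{a^2-b^2}$ with $a=1-z_{ij}z_{ik}$, $b=-\sin\theta_{ij}\sin\theta_{ik}$, since $a^2-b^2=(z_{ik}-z_{ij})^2$:

```python
import numpy as np

def I_phi(z_ij, z_ik, n=20001):
    """Numerical version of the azimuthal integral I_phi from the table."""
    phi = np.linspace(0.0, np.pi, n)
    s_ij = np.sqrt(1.0 - z_ij**2)   # sin(theta_ij)
    s_ik = np.sqrt(1.0 - z_ik**2)   # sin(theta_ik)
    f = 1.0 / (1.0 - z_ij * z_ik - s_ij * s_ik * np.cos(phi))
    return 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(phi))  # trapezoidal rule

z_ij, z_ik = 0.3, -0.5
closed_form = np.pi / abs(z_ik - z_ij)
print(I_phi(z_ij, z_ik), closed_form)
```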
It is convenient to decompose the eikonal factor
into a term which has one collinear singularity and
terms which are finite
\begin{equation}
\frac{p_i\cdot p_j}{(p_i\cdot p_k)(p_j\cdot p_k)}=\frac{1}{E_k^2}\left[
l(z_{ij},z_{ik},z_{jk})+ (i\leftrightarrow j)\right]
\label{lijdef}
\end{equation}
where
\begin{equation}
l(z_{ij},z_{ik},z_{jk}) =\frac{1}{1-z_{ik}} + l^R(z_{ij},z_{ik},z_{jk})
\label{lsplit}
\end{equation}
and
\begin{equation}
l^R(z_{ij},z_{ik},z_{jk})=
\frac{1}{2}\left[\frac{1-z_{ij}}{(1-z_{ik})(1-z_{jk})}
-\frac{1}{1-z_{ik}}-\frac{1}{1-z_{jk}}\right].
\label{langord}
\end{equation}
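The decomposition (\ref{lijdef})-(\ref{langord}) is easy to verify numerically. For massless momenta the energies drop out of the eikonal factor, leaving the purely angular combination $(1-z_{ij})/[(1-z_{ik})(1-z_{jk})]$; the sketch below (an illustrative check) confirms that it equals $l+(i\leftrightarrow j)$ for random directions:

```python
import numpy as np

def l_R(z_ij, z_ik, z_jk):
    # eq. (langord); symmetric under z_ik <-> z_jk
    return 0.5 * ((1 - z_ij) / ((1 - z_ik) * (1 - z_jk))
                  - 1.0 / (1 - z_ik) - 1.0 / (1 - z_jk))

def l(z_ij, z_ik, z_jk):
    # eq. (lsplit): collinear pole in the i-k direction plus a finite remainder
    return 1.0 / (1 - z_ik) + l_R(z_ij, z_ik, z_jk)

rng = np.random.default_rng(2)

def random_direction():
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

n_i, n_j, n_k = (random_direction() for _ in range(3))
z_ij, z_ik, z_jk = n_i @ n_j, n_i @ n_k, n_j @ n_k

full_eikonal = (1 - z_ij) / ((1 - z_ik) * (1 - z_jk))  # E_k^2 times eq. (eij)
assert np.isclose(full_eikonal, l(z_ij, z_ik, z_jk) + l(z_ij, z_jk, z_ik))
```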
The integral over the energy is trivial and
the angular integrals have to be calculated only up to order
$\epsilon$, so one obtains
\begin{eqnarray}\nonumber
{\cal I}_{ij}^{\rm \, soft}&=&c_{\Gamma}\left(\frac{4\pi\mu^2}{E_c^2}\right)^{\varepsilon}
\left(-\frac{1}{\varepsilon}\right)\\ \label{isoftn}
& &\left[J(\varepsilon)+J^{(0)}(z_{ij})(1-\varepsilon\ln 4)-\varepsilon
J^{(z)}(z_{ij})+J^{(\phi)}(z_{ij})\right]
\label{jjjint}
\end{eqnarray}
where $c_\Gamma$ was defined in eq.~(\ref{cgdefcol}).
The integrals
$J^{(0)}$, $J^{(\phi)}$ and $J^{(z)}$ are
listed in Table~\ref{softintab}.
Inserting their values into eq.~(\ref{jjjint}) and expanding in
$\epsilon$ we get both the singular
and finite parts
\begin{eqnarray}\label{softspliti}
{\cal I}_{ij}^{\rm\, soft}&=&
{\cal I}_{ij}^{\rm\, soft,sing}+
{\cal I}_{ij}^{\rm\, soft,fin}\\
\label{softsingi}
{\cal I}_{ij}^{\rm\, soft,sing}&=&c_{\Gamma}
\left({4\pi\mu^2\over Q^2}\right)^\varepsilon
\left[\frac{1}{\varepsilon^2}-
\frac{1}{\varepsilon}\ln \frac{2 p_i\cdot p_j}{Q^2}+
\frac{1}{\varepsilon}\ln \frac{E_iE_j}{E_c^2}
\right]\,.
\label{isofres}
\end{eqnarray}
The explicit form of the finite part is not interesting
for us here; it can be found in ref.~\cite{FiKuSi}.
We can use completely covariant
notation by replacing
the energies with the
covariant expressions
\begin{equation}
E_i=({\cal P}p_i)/\sqrt{s}\,, \ \ {\rm where}\ \
{\cal P}^{\mu}=p_A^{\mu} + p_B^{\mu}=(\sqrt{s},0,0,0)
\label{covEi}
\end{equation}
where $p_A$ and $p_B$ are the four momenta of the incoming hadrons.
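A one-line check of eq.~(\ref{covEi}) in the hadron centre-of-mass frame (an illustration; the numerical values are arbitrary):

```python
import numpy as np

sqrt_s = 100.0
P = np.array([sqrt_s, 0.0, 0.0, 0.0])        # p_A + p_B in the CM frame
g = np.diag([1.0, -1.0, -1.0, -1.0])         # metric (+,-,-,-)

pT, y, phi = 20.0, 1.2, 0.7
p_i = pT * np.array([np.cosh(y), np.cos(phi), np.sin(phi), np.sinh(y)])

E_i = (P @ g @ p_i) / sqrt_s                 # eq. (covEi)
assert np.isclose(E_i, pT * np.cosh(y))      # equals the explicit energy
```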
If the sum over flavours is carried out,
the result becomes independent of the label of the soft line. Therefore
every leg in the final state
gives the same contribution and the factor $1/(n-1)$ in
eq.~(\ref{hhsoftsub})
gets cancelled.
As a result,
{\it the soft contribution}
of the real leading order process with $n+1$ partons
can be written in the form of the virtual contribution of
the corresponding $n$-parton process. It
can be obtained from the right-hand side of
eq.~(\ref{hhsoftsub}) by multiplying it
by $n-1$, changing its sign and inserting the integrated value
of ${\cal I}^{\rm soft}_{ij}$, eq.~(\ref{isofres}):
\begin{eqnarray}
d\sigma^{(n+1,{\rm soft})}_{a_1a_2}(p_1,p_2;{X})
&=&\sum_{\{a_l\}_{3,n}}
\psi^{(n+1,{\rm soft})}
{\cal S}_{X,n-2}(\{p_l\}_{3,n};{X})
\times\nonumber\\ &&
\hspace*{0.5cm} d\phi_{n-2}(p_1,p_2;\{p_l\}_{3,n})
\label{hhsoftcont}
\end{eqnarray}
where
\begin{eqnarray}
\psi^{(n+1,{\rm soft})}(\{a_l\}_{1,n},\{p_l\}_{1,n})&=&
\frac{{\ifmmode \alpha_S \else $\alpha_S$ \fi}}{2\pi}\left(\frac{4\pi\mu^2}{Q^2}\right)^{\epsilon}
c_{\Gamma}\nonumber\\
&&\hspace{-2cm}
\left\{\sum_{l=1}^n \left[\frac{C(a_l)}{\epsilon^2} +
\frac{2\,C(a_l)}{\epsilon} \ln\frac{E_l}{E_c}\right]
\right\}
\psi^{(n,0)}(\{a_l\}_{1,n},\{p_l\}_{1,n})\nonumber\\
&&
\hspace{-3cm} + \
\frac{1}{2\epsilon}
\sum_{\stackrel{i,j=1}{i\neq j}}^{n}
\ln \left(\frac{2 \ p_i\cdot p_j}{Q^2}\right)
\psi_{ij}^{(n,0)}(\{a_l\}_{1,n},\{p_l\}_{1,n})\nonumber\\
&&
\hspace{-1cm} + \
\psi^{(n+1,{\rm soft})}_{{\rm NS}}(\{a_l\}_{1,n},\{p_l\}_{1,n})
\label{psisoftDR}
\end{eqnarray}
The first term is the soft-collinear singularity
and it cancels the soft-singular terms appearing in the
virtual corrections. The third term is the soft contribution
proportional to the colour-correlated Born term and again it cancels
the corresponding terms in the virtual contributions.
The second term comes from the collinear singularities of the
eikonal factors and is cancelled by the direct singular
collinear contributions. The last term is finite and can be evaluated in
four dimensions.
\subsection{Collinear subtraction terms and collinear contributions}
\newcommand\LC{\stackrel{\sss i\parallel j}{\longrightarrow}}
\newcommand\sss{\scriptscriptstyle}
\newcommand\CA{C_{\sss A}}
\newcommand\DA{D_{\sss A}}
\newcommand\CF{C_{\sss F}}
\newcommand\TF{T_{\sss F}}
In the previous section we constructed the local subtraction term for the
soft singular region. We also demonstrated
that the soft singularities of the
virtual corrections are cancelled
by the soft contributions of the real corrections
after the integrals
over the energy and angular variables of the soft line are carried out.
The
subtraction and addition procedure can also be applied to
the singular collinear regions.
Similarly to the soft case, one finds again
simple limiting behaviours for the
$\psi$-functions, the measurement functions and the phase space.
We shall discuss only the collinear limit of the
$\psi$ function. The reader can find further details in
refs.~\cite{GiGlKo,KS,FiKuSi}.
From the simple behaviour
of the helicity amplitudes in the collinear limit
we obtain for the
$\psi$ function the limiting behaviour~\cite{KS,FiKuSi}
\begin{eqnarray}
&&\psi^{(n+1,0)}\left(p_1,p_2;\,..,p_i,..,p_j,..\right)\LC
\nonumber \\*&&\phantom{aaaaaaa+}
\frac{4\pi{\ifmmode \alpha_S \else $\alpha_S$ \fi}}{p_i\cdot p_j}\,P_{a_i S(a_i,a_j)}^<(z)
\psi^{(n,0)}\left(p_1,p_2;\,..,p_{\sss P},..\right)
\nonumber \\*&&\phantom{aaaaaaa}
+\frac{4\pi{\ifmmode \alpha_S \else $\alpha_S$ \fi}}{p_i\cdot p_j}\,Q_{a_i S(a_i,a_j)^\star}(z)
\tilde{\psi}^{(n,0)}\left(p_1,p_2;\,..,p_i,..,p_j,..\right),
\label{collimit}
\end{eqnarray}
where $z$ denotes the momentum fraction defined through the equations
\begin{equation}
p_i=zp_{\sss P}\,,\;\;\;\;p_j=(1-z)p_{\sss P}\,,
\label{zdef}
\end{equation}
$P_{ab}(z)$ denotes the standard Altarelli-Parisi splitting functions
and $Q_{ab^\star}(z)$'s are some new universal functions which
control the azimuthal-angle behaviour of the collinear limit.
Assuming that the collinear particles belong to the final state, we have
\begin{eqnarray}
Q_{gg^{\star}}(z)&=&-4\CA\,z(1-z)\,,
\label{Q1}
\\
Q_{qg^{\star}}(z)&=&4\TF\,z(1-z)\,,
\\
Q_{gq^{\star}}(z)&=&0\,,
\\
Q_{qq^\star}(z)&=&0\,.
\label{Q4}
\end{eqnarray}
In these equations, the $^\star$ symbol over the flavour of the
particle that eventually splits reminds us that this particle
is off-shell. In principle, this notation should be extended
also to the Altarelli-Parisi splitting kernels, but at leading
order $P_{ab^\star}=P_{a^\star b}$, and therefore there is no
need to keep track of the off-shell particle. The $\tilde{ \psi}$
function is constructed from the helicity amplitudes of
$n$-parton processes but with some linear dependence on the
azimuthal angle of the quasi-collinear configuration.
The $\tilde{\psi}$ functions therefore are as simple as the
Born-terms. They are important in constructing a correct
local subtraction term. However, upon integrating over
the azimuthal angle of the collinear momenta
their contributions vanish and they do not appear
in the integrated {\it collinear contributions.}
It is an interesting simplifying
feature of the five-parton processes that the
$\tilde{\psi}$ functions vanish identically.
Using the crossing properties of the splitting
functions and of the $\psi$ functions, similar relations remain
valid also for initial-state collinear singularities.
The remaining procedure, in principle, is the same as the one used in the
soft case,
although it is somewhat more tedious.
The
properties of the ${\cal S}$-functions and the phase space again ensure
that we can easily construct the local collinear counter terms
and in the collinear contributions we can obtain
the singular terms analytically.
For further details the reader
should consult the original literature \cite{GiGlKo,KS}.
\noindent {\bf Exercise:} Calculate the inclusive one-jet cross section
$d\sigma/dE_Jd\cos \Theta_J$ for $\epem$ annihilation following the
general algorithm explained above.
\section{Conclusion}
In this lecture I described
the methods and techniques to get
differential cross-section formulae
from NLO singular scattering amplitudes.
These are
free from singular terms,
well defined in all integration regions
and therefore suitable for numerical evaluation.
The ingredients of the methods are
the collinear counter
terms, the
local subtraction terms,
the virtual, the soft and the collinear contributions.
Their sum defines the
{\it finite hard scattering cross-section}
\begin{eqnarray}
d\hat{\sigma}^{\rm hard}_{a_1a_2}&=&d\sigma^{\rm born}_{a_1a_2} +
d\sigma^{\rm virt}_{a_1a_2} +
d\sigma^{\rm soft}_{a_1a_2} +
d\sigma^{\rm( coll, initial)}_{a_1a_2} +
d\sigma^{\rm (coll, final)}_{a_1a_2}\nonumber\\ &+&
d\sigma^{\rm (coll, counter)}_{a_1a_2} +
d\sigma^{\rm (real, subtracted)}_{a_1a_2}\,
\label{hardxsection}
\end{eqnarray}
where the last term has the kinematics of an $n+1$ parton
process while all the other terms have the kinematics of an $n$
parton process. In addition,
the evaluation of some physical quantity
requires the explicit construction
of the corresponding measurement functions. We note that
in the case of jet-production
this
is a non-trivial exercise (see
refs.\cite{bkss,es,cdss}).
We have seen that although the methods are conceptually simple,
their implementation is non-trivial.
Recently, several papers have attempted to give comprehensive
documentation \cite{GiGlKo,KS,FiKuSi,cataniseymour}; they are
recommended for further reading.
\section*{Acknowledgements}
I would like to thank Dave Soper for his significant contribution
to my understanding of the subject of this talk and Keith Ellis
for reading the manuscript.
I also thank Dave Soper and K.~T.~Mahanthappa for their hospitality during my
stay and for the organization of a great
summer-school.
\section*{References}
\section{Introduction}
\IEEEPARstart{I}{mage} registration can be considered as finding the optimal transformation $T$ between the reference image $I_R$ and the floating image $I_F$ to maximize a defined similarity measure such as mutual information (MI). Since 1995 \cite{1}\cite{2}, MI has been proved to be very effective in image registration. The MI between $I_R $ and $I_F $ (with intensity bins $r$ and $f$) is defined as:
\begin{equation}
\label{eq1}
\mbox{MI}=H\left({I_R }\right)+H\left({I_F}\right)-H\left({I_R,I_F}\right)
\end{equation}
where $H\left({I}\right)=-\sum_{i}p\left({i}\right)\log p\left({i}\right)$ and $H\left({I_R,I_F}\right)=-\sum_{r,f}p\left({r,f}\right)\log p\left({r,f}\right)$ are the entropy of the intensities of image $I$ and the entropy of the joint intensities of two
images, $p\left({i}\right)$ is the intensity probabilities with $p\left(r\right)=\sum\nolimits_f{p\left({r,f}\right)}$ and $p\left(f\right)=\sum\nolimits_r {p\left({r,f}\right)}$, $p\left( {r,f}\right)$ is the joint intensity probabilities estimated by the joint histogram $h\left({r,f}\right)$.
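To make these definitions concrete, the following sketch (my own illustration; image sizes and bin counts are arbitrary) estimates MI from the joint intensity histogram of two images, using the algebraically equivalent form $\mbox{MI}=\sum_{r,f}p\left({r,f}\right)\log\left[p\left({r,f}\right)/\left(p\left(r\right)p\left(f\right)\right)\right]$:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in MI estimate (in nats) from the joint intensity histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_rf = h / h.sum()                 # joint intensity probabilities
    p_r = p_rf.sum(axis=1)             # marginal of the reference image
    p_f = p_rf.sum(axis=0)             # marginal of the floating image
    nz = p_rf > 0
    return np.sum(p_rf[nz] * np.log(p_rf[nz] / np.outer(p_r, p_f)[nz]))

rng = np.random.default_rng(3)
img = rng.random((64, 64))
noise = rng.random((64, 64))
print(mutual_information(img, img), mutual_information(img, noise))
```

Identical images give MI equal to the marginal entropy, while independent images give a value near zero, up to the finite-sample bias of this plug-in estimator.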
MI-based registration methods take advantage of the fact that properly registered images usually correspond to compactly-clustered joint histograms \cite{3}. They measure the joint histogram dispersion by computing the entropy of the joint intensity probabilities. When the images become misregistered, the compact clusters become disperse sets of points in the joint histogram and the entropy of the joint intensity probabilities increases. Making no assumptions about the form of the intensity mapping between the two images, MI is sensitive to the unmatchable outliers, e.g. the tumor resection in the intra- and pre-operative brain images (see Figs. 1a-b). To reject the outliers, some approaches are proposed including consistency test \cite{4}, intensity transformation \cite{5}, gradient-based asymmetric multifeature MI \cite{6} and graph-based multifeature MI \cite{7}. However, all these methods do not emphasize the corresponding salient structures in the two
images to suppress the outliers. Furthermore, MI likely suffers
from local and biased maxima \cite{8} which are caused by the ambiguities in defining
structure correspondence.
\begin{figure}[!t]
\centerline{\includegraphics[width=3.45in,height=0.80in]{fig1}}
\caption{(a)-(b) Intra-operative and pre-operative MR image with a large tumor
resection. (c) Joint histogram dispersion with two clotted clusters (dark
red in pseudo color). (d) Joint saliency map for (a) and (b).}
\label{fig1}
\end{figure}
Spatial information, i.e. the dependence of the intensities of neighboring
pixels, has been included in MI \cite{9}-\cite{12} to improve
registration. Nevertheless, almost all MI-based methods equally
treat each overlapping pixel pair as a separate point in the overlap area to calculate the joint histogram. This could raise three
issues: 1) when we equally consider the outlier pixel pairs,
the noncorresponding structures overlap and the histogram will show certain
clusters for the grey values of the outliers. These clusters easily
introduce the histogram dispersion (see Fig. 1c) with increasing misregistration; 2) while registration can be achieved by maximizing the compactness of the histogram, the undesired clotted clusters (see Fig. 1c) related to many noisy pixel pairs in the structureless
regions, such as background and white matter in the brain image, increase the MI ambiguities and the local maxima \cite{8} (Fig. 5c shows that the normalized MI \cite{1}\cite{20} is in a biased global maximum when the whole background areas in the two endoscopic images
are exactly aligned); 3) when we group the intensity pairs as separate
points into the histogram, the independence of the neighboring bins could increase the MI ambiguities and the
local maxima. To solve this problem, joint histogram smoothing (or blurring) \cite{6}\cite{8} has been used to increase the dependence of the neighboring histogram bins. We address these issues above as follows.
In fact, image registration aims to match the corresponding salient structures in both images. To suppress the outliers and the homogeneous pixel pairs, the corresponding pixel pairs in the corresponding salient structures should contribute more to the joint histogram. For example, the corresponding salient pixel pairs in the normal brain tissues should be given more weight in the histogram
than the homogeneous and the tumor resection pixel pairs. To weight each overlapping pixel pair when computing the joint histogram, we propose a novel joint saliency map (JSM) to assign a joint saliency value between 0 and 1 to the pixel pair. The idea of JSM is demonstrated schematically in Fig. 1d, where the high joint saliency values are assigned to the corresponding salient pixel pairs rather than the outlier and the homogeneous pixel pairs.
The JSM is determined by correlating each overlapping pixel pair's respective regional saliency vectors
(RSVs). The RSV characterizes the regional salient structure around each underlying pixel after a principal axis analysis (PAA) of the pixel's regional saliency distribution. In the JSM-weighted joint histogram (WJH), the contributions of the corresponding salient structures are distributed over neighboring histogram bins. This leads to the smoothing of the compact clusters for the grey values of the corresponding salient structures, which can solve both the outlier and the local maxima problems.
The proposed JSM-MI has been applied to the rigid registration of 2D images.
Experimental results show that, compared with other MI-based registration methods, the JSM-MI method achieves better robustness and higher accuracy in registering challenging image pairs with outliers. The letter is organized as follows. We first introduce the JSM and the JSM-weighted joint histogram for MI. Next, we report experimental results assessing registration accuracy and robustness. Finally, conclusions close the letter.
\section{Methods}
\subsection{Regional Saliency Vector}
We use a visual saliency operator to enhance the regional salient structures of interest. Many techniques have been developed to define image saliency, e.g., using edge gradients, local phase \cite{12}, salient regions \cite{13}, and corners and keypoints \cite{14}. Gradient maps have been incorporated into MI-based registration methods \cite{9}-\cite{11}. However, the gradient is a local feature and is sensitive to noise. Local phase \cite{12} and salient regions \cite{15} suffer from high computational complexity. Corners and keypoints cannot be defined for every image pixel. Inspired by the center-surround mechanism \cite{16}\cite{17}, which defines an intensity-contrast-based visual saliency map, we define a two-step scale- and rotation-invariant saliency operator based on intensity contrast as follows:
\begin{equation}
\label{eq1}
S_l (v)=\sum\nolimits_{u\in N_v } {\left( {I_l \left( v \right)-I_l \left( u
\right)} \right)^2}
\end{equation}
where $N_v $ is the 1-pixel-radius circular neighborhood of the pixel position $v=\left( {x,y} \right)$ at scale $l$, $S_l (v)$ is the local saliency computed for the intensity $I_l (v)$ in the Gaussian image pyramid \cite{18} at scale $l$, and $I_l (u)$ is the intensity of a pixel in the neighborhood of $v$. The multiscale local saliency map $S(x,y)$ at the finest scale is reconstructed by summing up the saliency maps from all coarser scales.
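As an illustration, the first step of the operator can be sketched in Python/NumPy as follows; the 4-connected neighbourhood, the 2x2 block-average downsampling standing in for the Gaussian pyramid of \cite{18}, and the pixel-replication upsampling are simplifying assumptions of this sketch rather than the letter's exact implementation.

```python
import numpy as np

def local_saliency(img):
    """Single-scale saliency of Eq. (1): sum of squared intensity
    contrasts over the 1-pixel-radius (4-connected) neighbourhood."""
    p = np.pad(img.astype(float), 1, mode='edge')
    centre = p[1:-1, 1:-1]
    s = np.zeros_like(centre)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        s += (centre - p[1 + dy:p.shape[0] - 1 + dy,
                         1 + dx:p.shape[1] - 1 + dx]) ** 2
    return s

def multiscale_saliency(img, levels=3):
    """Sum the per-level saliency maps, upsampled back to the finest
    scale (image sides assumed divisible by 2**levels)."""
    total = np.zeros(img.shape)
    level = img.astype(float)
    for l in range(levels):
        # upsample to the finest resolution by pixel replication
        up = np.kron(local_saliency(level), np.ones((2 ** l, 2 ** l)))
        total += up[:img.shape[0], :img.shape[1]]
        # 2x2 block averaging stands in for one Gaussian-pyramid step
        h, w = level.shape
        level = level.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return total
```

A uniform image yields zero saliency everywhere, while an isolated bright pixel produces a positive response that accumulates across scales.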
In the second step, a PAA of the saliency distribution in a certain region assigns \emph{regional saliency} to each pixel based on the inertia matrix:
\begin{equation}
\label{eq2}
\boldsymbol{M}=\left[ {\begin{array}{l}
\mu _{20} \;\;\;\;\mu _{11} \\
\mu _{11} \;\;\;\;\mu _{02} \\
\end{array}} \right]
\end{equation}
where $\mu_{jk} =\sum\nolimits {(x-g_x )^j(y-g_y )^k} S(x,y)$, $(g_x ,g_y )=\left( {m_{10} }/{m_{00} },\; {m_{01} }/{m_{00} } \right)$ and $m_{jk} =\sum\nolimits {x^jy^kS(x,y)}$ are the central $(j,k)\mbox{-}$moment, the centroid and the $(j,k)\mbox{-}$moment of the saliency distribution $S(x,y)$ in the 5.5-pixel-radius circular neighborhood around each pixel. This regional saliency distribution describes a 2D regional salient structure. The two eigenvectors of the matrix $\boldsymbol{M}$ represent the orthogonal coordinate system within the regional salient structure, while the corresponding eigenvalues give information about the lengths of the respective axes. Because the regional information about the orientation of the salient structure is mostly stored along the first eigenvector, corresponding to the largest eigenvalue, this eigenvector, referred to as the RSV, suffices to represent the regional salient structure around a pixel (see Figs. 2a-b).
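The PAA step can be sketched as follows; the circular window construction and NumPy's symmetric eigensolver are assumptions of this sketch, not necessarily the letter's implementation.

```python
import numpy as np

def regional_saliency_vector(S, cx, cy, radius=5.5):
    """RSV: first eigenvector of the inertia matrix M of the saliency
    distribution S inside the circular window centred at (cx, cy)."""
    ys, xs = np.mgrid[0:S.shape[0], 0:S.shape[1]]
    mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    s = S * mask
    m00 = s.sum()
    if m00 == 0:                      # structureless region: no RSV
        return np.zeros(2)
    gx = (xs * s).sum() / m00         # centroid (g_x, g_y) of the mass
    gy = (ys * s).sum() / m00
    mu20 = (((xs - gx) ** 2) * s).sum()
    mu02 = (((ys - gy) ** 2) * s).sum()
    mu11 = ((xs - gx) * (ys - gy) * s).sum()
    M = np.array([[mu20, mu11], [mu11, mu02]])
    vals, vecs = np.linalg.eigh(M)    # symmetric matrix: eigh, ascending
    return vecs[:, np.argmax(vals)]   # eigenvector of the largest eigenvalue
```

For a horizontal ridge of saliency, the returned RSV points along the $x$-axis, as expected of the dominant orientation.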
\begin{figure}[!t]
\centerline{\includegraphics[width=3.45in,height=0.80in]{fig2}}
\caption{(a)-(b) RSVs for the sub-blocks in the reference and the floating images (size: $400\times300$ pixels).}
\label{fig2}
\end{figure}
\subsection{Joint Saliency Map}
Given the two RSVs of each overlapping pixel pair, the JSM describes the degree of matching between them. The inner product of two RSVs measures their co-linearity and can naturally be used as their similarity measure. The essential idea of the JSM rests on an assumption that, in our experience of image registration, holds in practice: for two precisely aligned multi-modal (or multi-temporal) images, the majority of corresponding pixel locations are very likely to produce RSVs with similar orientations (see Figs. 2a-b), because the two images under registration fundamentally depict the same image structures. As a result, the RSVs of corresponding pixel locations in the two images generally present relatively coincident orientations.
Therefore, the angle $\theta$ between the two RSVs (${\rm {\bf x}}_R $, ${\rm {\bf x}}_F)$ is
simply calculated, making $\cos\theta$ the scalar measure of the joint
saliency value $w\left(v \right)$:
\begin{equation}
\label{eq3}
w\left( v \right)=\cos \theta \left( {\rm {\bf x}}_R ,{\rm {\bf x}}_F \right)=\frac{\left\langle {\rm {\bf x}}_R ,{\rm {\bf x}}_F \right\rangle }{\left\| {\rm {\bf x}}_R \right\| \cdot \left\| {\rm {\bf x}}_F \right\|}
\end{equation}
A JSM value near one suggests that the underlying pixel pair originates from corresponding salient structures. Conversely, a JSM value near zero indicates that the underlying pixel pair comes from either the outliers or a homogeneous region. To speed up the registration without reducing accuracy, a pixel whose saliency value falls below a threshold (10 percent of the maximum saliency value) is directly assigned a zero JSM value. If a high threshold were chosen, the JSM would respond primarily to high-gradient edge pixels. However, the JSM does not simply emphasize the image gradients common to the two images. Figs. 3d-f present the image gradient and the JSM profiles of the same line (marked as dashed lines across the tumor areas) in the two registered images (see Figs. 3a-b). As the figures show, the image gradient features in Figs. 3d-e are very noisy and do not agree with each other at each overlapping location, whereas the JSM in Fig. 3f accurately preserves the corresponding salient structures over a larger capture range with smaller variability than the image gradients.
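A sketch of the JSM computed from two per-pixel RSV fields follows; taking the magnitude of the cosine (since the eigenvector sign is arbitrary) and the way the 10% gating is applied are our assumptions for this sketch.

```python
import numpy as np

def joint_saliency(rsv_r, rsv_f, sal_r, sal_f, frac=0.10):
    """Per-pixel joint saliency w(v): |cos theta| between the RSV fields
    of the reference and floating images, gated to zero wherever either
    saliency map falls below `frac` of its maximum (the 10% threshold).
    Arrays rsv_* have shape (H, W, 2); sal_* have shape (H, W)."""
    dot = (rsv_r * rsv_f).sum(axis=-1)
    norms = np.linalg.norm(rsv_r, axis=-1) * np.linalg.norm(rsv_f, axis=-1)
    # eigenvector sign is arbitrary, so use the cosine magnitude
    w = np.abs(np.divide(dot, norms, out=np.zeros_like(dot),
                         where=norms > 0))
    low = (sal_r < frac * sal_r.max()) | (sal_f < frac * sal_f.max())
    return np.where(low, 0.0, w)
```

Parallel (or anti-parallel) RSVs yield a joint saliency of one, orthogonal RSVs yield zero, and low-saliency pixels are suppressed outright.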
\begin{figure}[!t]
\centerline{\includegraphics[width=2.44in,height=1.60in]{fig3}}
\caption{(a)-(b) The reference and the floating images for the gradient magnitude
and the JSM magnitude. (c) Compact JSM-WJH smoothing for (a)-(b). (d)-(e)
Gradient value profiles of the lines in (a)-(b), which are marked as dashed
lines. (f) JSM value profiles of the lines in (a)-(b).}
\label{fig3}
\end{figure}
\subsection{JSM-Weighted Joint Histogram}
The contribution of the interpolated floating intensity $f(v_f)$ to the joint histogram is weighted by the JSM value $w(v)$ (the pixel positions ($v_r$,$v_f$) overlap at position $v$). For 2D image registration with nearest-neighbor or bilinear interpolation, the value $w(v)$ is added to the histogram entry $h(r,f)$. In bilinear partial volume distribution (PV) interpolation, the contribution of $f(v_f)$ to the histogram, distributed over the intensity values of all nearest neighbors of the reference pixel position $v_r$ on the grid of $I_R$, is weighted by $w(v)$. The JSM can likewise be incorporated into other interpolation schemes and into Parzen-based joint histograms.
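Under the nearest-neighbour scheme, the weighting can be sketched as follows; intensities normalised to $[0,1)$ and the bin count are assumptions of the sketch.

```python
import numpy as np

def weighted_joint_histogram(ref, flo, jsm, bins=32):
    """JSM-weighted joint histogram: each overlapping pixel pair
    contributes its joint-saliency weight w(v) to the bin of its
    intensity pair, instead of a flat count of one."""
    h = np.zeros((bins, bins))
    r_idx = np.clip((ref * bins).astype(int), 0, bins - 1)
    f_idx = np.clip((flo * bins).astype(int), 0, bins - 1)
    # unbuffered accumulation handles repeated (r, f) bins correctly
    np.add.at(h, (r_idx.ravel(), f_idx.ravel()), jsm.ravel())
    return h
```

Pairs with zero joint saliency (outliers, homogeneous regions) then contribute nothing to the histogram.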
In the JSM-WJH, the outliers and homogeneous regions have little impact on the histogram distribution.
Furthermore, each histogram entry for the corresponding salient structures is the sum of smoothly varying
fractions of one, so the histogram changes smoothly across the neighboring bins related to those structures. As a result, compact histogram smoothing (see Fig. 3c) is achieved by highlighting the grey values of the corresponding salient structures. The MI computed from this compact and smooth histogram is then maximized to achieve robust and accurate rigid registration.
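The MI maximised over the WJH can be computed in the standard way; this sketch assumes the (weighted) histogram has already been accumulated and normalises it to a joint distribution.

```python
import numpy as np

def mutual_information(h):
    """Mutual information (in nats) of a joint histogram h."""
    p = h / h.sum()                      # joint distribution p(r, f)
    pr = p.sum(axis=1, keepdims=True)    # marginal over reference bins
    pf = p.sum(axis=0, keepdims=True)    # marginal over floating bins
    nz = p > 0                           # skip empty bins (0 log 0 = 0)
    return float((p[nz] * np.log(p[nz] / (pr @ pf)[nz])).sum())
```

A perfectly diagonal histogram gives maximal MI, while a uniform one (statistically independent intensities) gives zero.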
\subsection{Computational Complexity}
The JSM should be recalculated whenever the transformation changes the overlap area, i.e., at each registration iteration. The RSV orientation for a JSM calculation can easily be re-oriented, as is done in diffusion tensor image registration \cite{19}. Nevertheless, to ensure numerical stability and to speed up computation, the JSM at each iteration can simply be updated from the JSM of the previous iteration through the PV interpolation, and fully re-calculated every $n$ iterations ($n{=}10\sim15$) to reflect the updated correspondence between the salient structures in the two images.
\section{Experimental results}
We evaluated our JSM-MI-based (JMI) algorithm on 11 challenging image pairs, including CT-PET tumor images, MR brain tumor resection images, and optical images with background/foreground clutter. We implemented the JMI algorithm using simplex optimization in a multiresolution scheme \cite{18}. The algorithm stops if the current step length is smaller than ${10^{-5}}$ or if it has reached the limit of 200 evaluations. The challenging image pairs include complex outliers that the normalized MI-based method and four MI-based adaptations incorporating spatial information fail to deal with. Due to space restrictions, we show only some typical experimental results in this letter.
\begin{figure}[!t]
\centerline{\includegraphics[width=3.44in,height=2.5in]{fig4}}
\caption{Registration results for the two images in Figs. 2a-b. (a) JMI. The yellow contour overlap of the book validates the registration accuracy owing
to the additive mixing of red and green. (b) NMI. (c) RMI. (d)
HMI. (e) GMI. (f) PMI. (g)-(h) NMI and JMI similarity surfaces plotted as a function of $x$ and $y$ translation (within a range of $\pm10$ pixels around the matching position)}
\label{fig4}
\end{figure}
Fig. 4 shows the various registration results for
the two images in Figs. 2a-b, which contain a foreground book and large changes in background appearance. To facilitate visual assessment of the registration accuracy,
the green floating contours and the red reference contours obtained by the Canny-Deriche edge detector have been overlaid
on each other. The sub-pixel registration accuracy (see Table I, case 1)
of our JMI algorithm is validated by the book's yellow contour overlap, which results from the additive color mixing of
the green and the red contours (see Fig. 4a).
Using particle swarm optimization (PSO) to deal with the local maxima,
the other methods based on normalized MI (NMI)
\cite{1}\cite{20}, regional MI (RMI)
\cite{21}, high-dimensional MI (HMI)
\cite{22}, MI with gradient information (GMI)
\cite{10}, and phase MI (PMI)
\cite{12} show different misregistration results in Figs. 4b-f. The PSO is conducted with 20 particles and allowed up to 2000 iterations. The algorithm stops if it has reached the limit of 200 evaluations or if the minimum-error (${10^{-5}}$) condition is satisfied. The computation times needed for the different algorithms are listed in Table II.
Figs. 4g-h plot the NMI and JMI similarity surfaces as functions of the $x$ and $y$ translations. In this case, the JSM removes all local maxima and attains the global maximum at the registration position, while the NMI suffers from a biased maximum at a mismatching position.
Figs. 5a-b show the reference and floating endoscopic images ($720\times572$ pixels), which include a surgical instrument under different illuminations. With the two images fused in a mosaic pattern, Figs. 5c-d show the NMI-based and PMI-based misregistration results, and Fig. 5e shows our accurate JMI-based registration result (see Table I, case 2).
\begin{figure}[!t]
\centerline{\includegraphics[width=2.2in,height=1.5in]{fig5}}
\caption{(a)-(b) Reference and floating endoscopic images (size: $720\times572$ pixels) with a
surgical tool and illumination changes. The two images are fused using a mosaic pattern. (c) NMI. (d)
PMI. (e) JMI.}
\label{fig5}
\end{figure}
\begin{center}
\begin{threeparttable}
\caption{Registration results for Fig. 4 and Fig. 5 (The translations $X$ and $Y$ are in pixels in the $x$ and $y$ directions,
the rotation $\beta$ is in degrees around the center of the images.).}
\begin{tabular*}{0.48\textwidth}{@{\extracolsep{\fill}} rlcc }
\hline
&Cases&Correct($X$,$Y$,$\beta$)&Computed($X$,$Y$,$\beta$)\\
\hline
&1&$-23.11$, $45.59$, $11.43^\circ$&$-22.34$, $45.30$, $11.03^\circ$\\
&2&$37.91$, $-36.78$, $4.43^\circ$&$37.46$, $-38.18$, $4.68^\circ$\\
\hline
\end{tabular*}
\end{threeparttable}
\end{center}
\begin{center}
\begin{threeparttable}
\caption{Computation iterations and runtime in seconds for Fig. 4. (Matlab 6.5, single core Intel Celeron 2.8GHz, RAM 2GB)}
\begin{tabular*}{0.48\textwidth}{@{\extracolsep{\fill}}rlcccccc}
\hline
&&JMI&NMI&RMI&HMI&GMI&PMI\\
\hline
&Iter.&64&41&45&46&50&29\\
&Time&157.4&296.7&297.1&1060.1&329.1&3049.3\\
\hline
\end{tabular*}
\end{threeparttable}
\end{center}
\section{Conclusion}
We propose an effective JSM to address the problems of outliers and local maxima in MI-based
image registration. Representing the corresponding salient structures in the two images to be registered, the JSM is easily integrated into other intensity-based similarity measures for 3D nonrigid registration. Independent of this work, but subsequent to our preliminary
conference papers \cite{23}\cite{24}, which this letter elaborates on and extends, Ou et al. \cite{25} developed a similar mutual saliency map for
outlier rejection in 3D nonrigid image registration.
Additionally, our method is intensity-based and therefore sensitive to the initial conditions. In principle, the initial conditions should be set close to a correct alignment, which can be achieved by coarse alignment techniques such as a principal-axes-based method. Nevertheless, all instances of correct registration in this letter were obtained directly by our method without any coarse alignment.
\section*{Acknowledgment}
The authors thank Simon K. Warfield, Michal Irani, Robert Barnett and Edward Vrscay for allowing the use of image, Rehan Ali for the phase recovery source code, Shanbao Tong and all reviewers for their useful comments, and Wendy Wang for her help to our algorithm.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
PALOMA (Process Algebra for Located Markovian Agents)~\cite{FengH14, FengHG16} is a novel stochastic process algebra which captures the behaviour of agents
who are distributed in space and whose interactions are affected by their relative positions.
This formalism captures many modern systems where, for example, the range of communication may be limited for devices using wireless communication
technologies, or some areas may be known ``dead zones'' from which no communication is possible.
In this paper we consider what it means for two agents to be equivalent, taking into consideration both their behaviour and their location\red{, and develop the
formal underpinnings to allow such equivalence to be rigorously studied.}
The notion of Markovian bisimulation has become standard for stochastic process algebras, but as we will discuss, applied naively this
approach to equivalence checking is too strong, leaving little opportunity for a notion of equivalence that is not isomorphism. Instead here
we consider equivalence of a component within the context of a given system. This supports the idea of being able to substitute one component,
perhaps with a more efficient implementation, for another within a given system even though they may not exhibit exactly the same behaviour
in arbitrary contexts. Similarly, when we come to consider the spatial aspects of behaviour our notion of equivalence aims to capture
the relative positions of components, rather than their absolute locations.
In this brief paper we aim to give the intuition and ideas behind our bisimulation, without giving all the definitions. The rest of the paper
is structured as follows. In Section~\ref{sec:paloma} we give a brief introduction to the PALOMA modelling language, while the semantics
of the language is outlined in Section~\ref{sec:semantics}. In Section~\ref{sec:equivalence} we discuss a notion of equivalence based on equivalent
relative positions and behaviours. We present our conclusions and discuss further work in Section~\ref{sec:conc}.
\section{PALOMA language}
\label{sec:paloma}
In this section we give a brief introduction to PALOMA; the interested reader is referred to \cite{FengH14, FengHG16} for more details.
The spatial distribution of agents is a key feature of PALOMA models and we assume that there exists a finite set of locations, $Loc$ and
all agent expressions in PALOMA are parameterised by a location $\ell \in Loc$, indicating the current location of the agent.
The grammar of the language is as follows:
\begin{small}
\begin{IEEEeqnarray*}{rCl}
\pi &::=& !!(\alpha,r)@\Ir\{\vv{\ell}\} \enspace\mid\enspace ?? (\alpha, p)@\mathbf{Wt}\{w\} \enspace\mid\enspace !(\alpha, r)@\Ir\{\vv{\ell}\} \enspace\mid\enspace ?(\alpha, p)@\Prob\{q\}
\enspace\mid\enspace (\alpha, r) \\
S(\ell) &::=& \pi.S'(\ell') \enspace\mid\enspace S_1(\ell) + S_2(\ell) \enspace\mid\enspace C \\
P &::=& S(\ell) \enspace\mid\enspace P \mathbin{\|} P
\end{IEEEeqnarray*}
\end{small}%
\noindent
The two-level grammar defines individual agents $S(\ell)$, whose behaviours are specified by the
actions they can undertake, with possible alternatives, and model components $P$, which are
comprised of parallel compositions of agents. The behaviour of individual agents is given by actions of five distinct types:
\begin{description}
\item[Unicast output $!!(\alpha,r)@\Ir\{\vv{\ell}\}$:]
Unicast is for point-to-point communication between a pair of agents \red{and is included in the language to model contention for resources in systems}.
\red{Each unicast output message has a label, $\alpha$, and a rate $r$, that determines the rate at which the output is performed.
The message is sent to locations specified by the set $\vv{\ell} \in 2^{Loc}$ interpreted as the \emph{influence range}.}
Any agent located within that range, which enables the corresponding $\alpha$-labelled unicast input action, is eligible to receive the action --- \red{that is, the label $\alpha$ is
used to identify agents that can communicate with each other.}
Unicast actions are \emph{blocking} meaning that the sending agent can only proceed when there is a eligible receiver.
\item[Unicast input $??(\alpha, p)@\mathbf{Wt}\{w\} $:] Each eligible receiver of a \red{unicast message $\alpha$} must be located within the
specified influence range, and each will have an associated \emph{weight} $w$.
The weights are used to define a probability distribution
over the eligible receivers, i.e.\ if there are $i$ potential receivers, each with weight $w_i$ and $W = \sum_i w_i$ then the probability
that the $j$th agent receives the message is $w_j/W$. Once the message is received the receiving agent may or may not act on the
message (reflecting message failure, corruption etc.) with the specified probability $p$ i.e.\ with probability $1-p$ the agent will not
act on the message received. If this occurs the message is lost --- it is not the case that it is subsequently assigned to one of the other
eligible receivers.
\item[Broadcast output $!(\alpha, r)@\Ir\{\vv{\ell}\}$:] As its name suggests, a broadcast action allows its sender to influence multiple
other agents. As with the unicast output action, a broadcast output \red{message labelled $\alpha$} is
sent with a specified influence range \red{$\vv{\ell}$} and at a specified rate $r$. \emph{All}
agents \red{with a broadcast input prefix on label $\alpha$} located within that range may receive the message. Moreover, the output proceeds regardless of whether there are any eligible receivers,
so broadcast output is non-blocking for the sender.
\item[Broadcast input $?(\alpha, p)@\Prob\{q\}$:] Each eligible receiver of a broadcast \red{message $\alpha$} must be located within the specified
input range. Each such agent has a likelihood of receiving the message, recorded in the probability $q$. For example, agents
closer to the sender may be more likely to receive the message. Each agent independently decides whether the broadcast
is received or not (Bernoulli trials). As with unicast input, the receiving agent may or may not act on the
message with the specified probability $p$ i.e.\ with probability $1-p$ the agent will not
act on the message received.
\item[Spontaneous action $(\alpha, r)$:] These actions do not represent a communication but rather an
individual action by the agent which may change the state of the agent, for example, its location.
These can also be thought of as broadcast output actions whose influence range is the empty set.
\end{description}
All rates are assumed to be parameters of exponential distributions, meaning that the underlying stochastic model of a PALOMA
model is a continuous time Markov chain (CTMC).
\medskip
\begin{example}
Consider agents $Transmitter$ and $Receiver$ such that
\begin{IEEEeqnarray*}{lCl}
Receiver(\ell) &:=& ??(message, p)@\Wt\{v\}.Receiver(\ell) \\
Transmitter(\ell) &:=& !!(message, r)@\Ir\{\vv{\ell}\}.Transmitter(\ell)
\end{IEEEeqnarray*}
\noindent where $\ell$ denotes the current location of the agent and $\vv{\ell}$ denotes a
set of locations in the range of the unicast message emitted by action
$message$.
In a system where no agent sends a $message$, the agent $Receiver$ performs no action.
On the other hand, if there is a component, say $Transmitter$, that
outputs a $message$, and the location of $Receiver$ is in the
influence range of the message, then $Receiver$ performs $message$ at a rate
dependent on the rate at which $Transmitter$ unicasts $message$ and the probability that $Receiver$ receives it.
Conversely, if the component $Transmitter$ has no recipient for the $message$, it remains blocked and never
performs an action.
\end{example}
\subsection{Conditional exit rates and probabilities}
Notions of equivalence in process algebras, such as bisimulation \cite{Milner89}, are typically based on the idea
of a pair of agents each being able to match the behaviour of the other. In the case of stochastic process algebras
such as PEPA, not only the type of action but also the rates at which they occur must in some sense be matched \cite{Hillston96}.
In order to make similar definitions for PALOMA we need to define some auxiliary functions which, given a syntactic
expression, extract information about the rates and probabilities which may be exhibited by the term. Space limitations
do not allow us to present all of them here, but we present those for unicast, which is the most involved case, to give the
reader an impression of how we proceed.
Denote the set of all sequential components of PALOMA parametrised by their location by $\mathcal{C}_{S}$ and the set of
model components by $\mathcal{C}$.
Let the set of action labels be defined as $Lab$ and the set of action types as $Type = \{ !!, ??, !, ?, \cdot \}$,
where the interpretation of the symbols is clear, corresponding to the action types discussed above.
Let $Act = Type \times Lab$ denote the set of all actions, each defined by its type and label.
Let $\mathcal{A}$ be the set of all syntactically defined actions.
Define the function $\Pi_{Act}: \mathcal{A} \to Act$ as a projection returning the label of the action \red{with its type},
e.g.\ $\Pi_{Act}(??(\alpha, p)@\Wt\{v\}) = \mathbin{??}\alpha$.
Similarly define the projection $\Pi_{Lab}: Act \to Lab$ returning just the label of the action and the function
$\Pi_{Type}:Act \to Type$ returning the type of an action.
Denote by $\Pi_{Loc}$ the function returning the set of locations spanned by a
model component.
\begin{IEEEeqnarray*}{c}
\Pi_{Loc}(S_1(\ell_1) \parc \cdots \parc S_n(\ell_n)) = \bigcup_{i=1}^{n} \{\ell_i\}
\end{IEEEeqnarray*}
Note that in the case of sequential components $\Pi_{Loc}$ will result in a singleton set
--- the location of the sequential component.
Suppose $Sys = S_1(\ell_1) \parc \cdots \parc S_n(\ell_n) \in \mathcal{C}$ for $n \in \mathbb{N}^{+}$.
Let the function $\mathrm{seq}$ return the set of all sequential components of $Sys$ in a set of locations $L$:
\begin{IEEEeqnarray*}{c}
\mathrm{seq}(Sys, L) = \{ S_i(\ell_i) \mid \Pi_{Loc}(S_i(\ell_i)) \in L\}
\end{IEEEeqnarray*}
When the location argument is omitted, $\mathrm{seq}(Sys)$ abbreviates $\mathrm{seq}(Sys, Loc)$, the set of all sequential components of $Sys$.
\subsubsection{Context-unaware definitions}
When we consider a PALOMA component in isolation we can use the syntax
to find the potential rate, weight or probability associated with this component and a given action.
Similar functions are defined for each form of prefix.
From the point of view of the originator of a unicast action, the important measure is the rate at which the
action is performed.
\begin{definition}
For all $\alpha \in Lab$, $a \in \mathcal{A}$, $\vv{\ell} \in 2^{Loc}$, and $S \in \mathcal{C}_{S}$
define the function $s_{\alpha}^{!!}$ returning the rate of a unicast output action labelled $\alpha$ as follows.
\begin{IEEEeqnarray*}{lCl}
s_{\alpha}^{!!}\left(!!(\beta,r)@\Ir\{\vv{\ell}\}.S(\ell)\right) & = &
\begin{cases}
r & \mbox{for $\alpha = \beta$}\\
0 & \mbox{otherwise}
\end{cases} \\*
\IEEEstrut
s_{\alpha}^{!!}\left(a.S(\ell)\right) & = & \hspace{0.8em} 0 \quad \mbox{if $\Pi_{Type}(a) \neq \> !!$} \\
s_{\alpha}^{!!}\left(S_1(\ell) + S_2(\ell)\right) & = & \hspace{0.8em} s_{\alpha}^{!!}(S_1(\ell)) + s_{\alpha}^{!!}(S_2(\ell))
\end{IEEEeqnarray*}
\end{definition}
\begin{example}
Consider the following components
\begin{IEEEeqnarray*}{lCl}
Tester(\ell_0) &:=& (message, r).Tester(\ell_0) \\
Transmitter(\ell_0) &:=& !!(message, r)@\Ir\{\vv{\ell}\}.Transmitter(\ell_0) \\
Receiver(\ell_1) &:=& ??(message, p)@\Wt\{v\}.Receiver(\ell_1)
\end{IEEEeqnarray*}
Based on these definitions we can find:
\begin{align*}
s_{message}^{!!}(Tester(\ell_0) + Transmitter(\ell_0)) = 0 + r = r && s_{message}^{!!}(Receiver(\ell_1)) = 0
\end{align*}
\end{example}
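The recursion behind $s_{\alpha}^{!!}$ can be sketched in Python over a hypothetical flat encoding of prefix-guarded summands; the tuple representation below is ours, introduced only for illustration.

```python
# Hypothetical encoding (ours): a sequential component is a list of
# summands, each a (type, label, rate) prefix tuple -- '!!' for unicast
# output, '.' for a spontaneous action, and so on.
def unicast_rate(component, alpha):
    """s_alpha^!!: total rate of unicast outputs labelled alpha,
    summing over the summands of a choice; other prefix types
    contribute zero."""
    return sum(r for (typ, lab, r) in component
               if typ == '!!' and lab == alpha)

tester = [('.', 'message', 2.0)]        # Tester: spontaneous (message, r)
transmitter = [('!!', 'message', 3.0)]  # Transmitter: unicast output
choice = tester + transmitter           # Tester + Transmitter
```

As in the example above, the spontaneous summand contributes zero, so the rate of the choice equals the transmitter's unicast rate.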
The rest of the context-unaware definitions are given in a similar vein and simply extract the necessary syntactic information from the
component definitions.
Specifically we define the following functions:
\begin{description}
\item[Unicast influence range $\Pi_{UniIR}(S, \alpha)$:] If $S$ has a unicast output prefix with label $\alpha$, the function returns the influence range of the unicast message $\alpha$ defined in that prefix;
otherwise it returns the empty set $\emptyset$.
\item[Weight function $w_{\alpha}(S)$:] For a sequential component $S$ the function $w_{\alpha}(S)$ is defined similarly to $s_{\alpha}^{!!}$ with
base case $w_{\alpha}\left(??(\alpha, p)@\Wt\{w\}.S\right) = w$.
In addition we define the weight function over parallel compositions and sets of sequential components by summing over the weights for each sequential component in
the parallel composition or set.
\item[Probability function $p_{\alpha}^{??}(S)$:] This is again similar to $s_{\alpha}^{!!}$ with base case
$p_{\alpha}^{??}\left(??(\alpha, p)@\Wt\{w\}.S\right) = p$.
\end{description}
\begin{example}
Consider the following sequential components.
\begin{IEEEeqnarray*}{lCl}
Transmitter(\ell_0) &:=& \,!!(message, r)@\Ir\{\vv{\ell}\}.Transmitter(\ell_0) \\
Receiver1(\ell_1) &:=& \,??(message, p)@\Wt\{w_{r1}\}.Receiver1(\ell_1) \\
Receiver2(\ell_2) &:=& \,??(message, q)@\Wt\{w_{r2}\}.Receiver2(\ell_2)
\end{IEEEeqnarray*}
For the system given by $Sys = Transmitter(\ell_0) \parc Receiver1(\ell_1) \parc Receiver2(\ell_2)$
the weight for receiving a unicast message $message$ is calculated as
\begin{IEEEeqnarray*}{l}
w_{message}(Sys) = w_{message}(Transmitter(\ell_0) \parc Receiver1(\ell_1) \parc Receiver2(\ell_2))
= w_{r1} + w_{r2}
\end{IEEEeqnarray*}
\end{example}
\subsection{Context-aware conditional exit rates}
Unfortunately the syntactic information alone is not sufficient to determine the rate at which an action will be witnessed in a PALOMA system.
The spatial aspect, as captured by the influence range, plays an important role in determining both which actions are possible and potentially their rates and probabilities.
Thus we also define some context-dependent functions.
\begin{definition}
Let $\alpha$ be an action label in $Lab$.
Define the rate at which the component $S(\ell) \in \mathcal{C}_{S}$ is capable of unicasting a message labelled $\alpha$ to a location $\ell'$ as
follows:
\begin{IEEEeqnarray*}{lCl}
u_{\alpha}(\ell', !!(\beta, r)@\Ir\{\vv{\ell}\}.S(\ell)) &=&
\begin{cases}
s_{\alpha}^{!!}(!!(\beta, r)@\Ir\{\vv{\ell}\}.S(\ell)) & \mbox{if $\ell' \in \Pi_{UniIR}(S(\ell), \alpha)$ and $\alpha = \beta$} \\
0 & \mbox{otherwise}
\end{cases}\\
u_{\alpha}(\ell', S_1(\ell) + S_2(\ell)) &=& \hspace{0.8em} u_{\alpha}(\ell', S_1(\ell)) + u_{\alpha}(\ell', S_2(\ell))
\end{IEEEeqnarray*}
\end{definition}
\begin{definition}
Suppose $P = S_1(\ell_1) \parc \dots \parc S_n(\ell_n) \in \mathcal{C}$ for $n \in \mathbb{N}^{+}$ is a model component with $S_i(\ell_i) \in
\mathcal{C}_{S}$ for all $1 \leq i \leq n$.
Let $Sys$ be any other system serving as context.
Let $u_\alpha(\ell, Sys, P)$ be the rate at which a model component $P$ unicasts a message labelled $\alpha$ to location $\ell$ in the context of $Sys$, defined as
\begin{IEEEeqnarray*}{c}
u_{\alpha}(\ell, Sys, P) = \sum_{S \in \mathrm{seq}(P)} u_{\alpha}(\ell, S)\times\mathbbm{1}_{>0}\{w_{\alpha}\left(\mathrm{seq}(Sys \parc P, \> \Pi_{UniIR}(S, \alpha))\right)\} \\
\mbox{where } \mathbbm{1}_{>0}(x) =
\begin{cases}
1 & \mbox{$x > 0$} \\
0 & \mbox{otherwise}
\end{cases}
\end{IEEEeqnarray*}
\red{For each sequential component $S$ of $P$ we calculate the total weight over the components in the influence range of $S$. The indicator
function $\mathbbm{1}_{>0}$ is set to $1$ if this weight is greater than $0$ --- meaning there are eligible receivers in the influence range.
The rate at which $P$ unicasts a message $\alpha$ to location $\ell$ is then defined as the sum of rates at which each sequential component $S$ of $P$ is \emph{capable} of
said unicast multiplied by the indicator function ensuring that the blocking nature of unicast is taken into account.}
\end{definition}
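A Python sketch of this definition over a hypothetical dictionary encoding of agents (the encoding is ours, not part of PALOMA); the indicator is realised by skipping senders whose influence range contains no positive input weight.

```python
# Hypothetical flat encoding of PALOMA agents (ours, for illustration):
# each agent records its location, its unicast outputs as
# label -> (rate, influence range), and its unicast input weights.
def unicast_system_rate(alpha, loc, context, component):
    """u_alpha(loc, Sys, P): total rate at which `component` unicasts
    alpha to `loc` in the context `context`.  A sender contributes its
    rate only if the total input weight inside its influence range is
    positive (the indicator 1_{>0}), capturing the blocking semantics."""
    everyone = context + component
    total = 0.0
    for agent in component:
        rate, ir = agent['uni_out'].get(alpha, (0.0, frozenset()))
        if loc not in ir:
            continue
        weight = sum(a['uni_in'].get(alpha, 0.0)
                     for a in everyone if a['loc'] in ir)
        if weight > 0:
            total += rate
    return total
```

With no eligible receiver in range, the sender's contribution is zero, mirroring the blocking behaviour of unicast output.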
The next definition deals with determining the probability of a sequential
component receiving the unicast message.
\begin{definition}
Let $S_1(\ell)$ and $S_2(\ell')$ be sequential components and $Sys \in \mathcal{C}$ any model component.
Suppose $!!(\alpha, r)@\Ir\{\vv{\ell}\}.S_2'(\ell'')$ is a prefix guarded term in the expression of $S_2(\ell')$.
Then we define the probability of $S_1(\ell)$ receiving a unicast message with label
$\alpha$ from $S_2(\ell')$, when composed in parallel with $Sys$ and $S_2(\ell')$, to be:
\begin{IEEEeqnarray*}{l}
p_{\alpha}(S_1(\ell), Sys, S_2(\ell')) =
\begin{cases}
\frac{w_{\alpha}(S_1(\ell))}{w_{\alpha}(\mathrm{seq}(Sys \parc S_1(\ell), \vv{\ell}))} & \mbox{if $\ell \in \vv{\ell}$} \\
0 & \mbox{otherwise}
\end{cases}
\end{IEEEeqnarray*}
\end{definition}
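A sketch of this reception probability over a hypothetical dictionary encoding of receivers (location plus unicast input weights; the encoding is ours, for illustration only):

```python
# Hypothetical encoding (ours): a receiver records its location and its
# unicast input weights as label -> w.
def unicast_receive_prob(receiver, others, alpha, ir):
    """p_alpha: probability that `receiver` is the one to receive a
    unicast alpha whose influence range is `ir`, i.e. its weight
    divided by the total weight of all eligible receivers (itself
    included); zero if it lies outside the influence range."""
    if receiver['loc'] not in ir:
        return 0.0
    total = sum(a['uni_in'].get(alpha, 0.0)
                for a in others + [receiver] if a['loc'] in ir)
    return receiver['uni_in'].get(alpha, 0.0) / total if total > 0 else 0.0
```

Two in-range receivers with weights 1 and 3 thus receive the message with probabilities 1/4 and 3/4, as the weight-based distribution prescribes.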
Once similar definitions are in place for broadcast and spontaneous actions, we are in a position to define the context-aware exit rate.
\begin{definition}
Let $Sys \in \mathcal{C}$ be a system in which the component $S \in
\mathcal{C}_{S}$ appears.
Let $a \in \mathcal{A}$ be any action with label $\alpha$.
Define the \emph{context-aware exit rate} $R$ for agents by the following:
\begin{IEEEeqnarray*}{rCl}
R_{a}(Sys, S(\ell_0)) =
\begin{cases}
s_{\alpha}(S(\ell_0)) & \text{if $\Pi_{Type}(a) = \mathbin{\cdot}$} \\
b_{\alpha}(S(\ell_0)) & \mbox{if $\Pi_{Type}(a) = \> !$} \\
b_{\alpha}(\ell_0, Sys) p_{\alpha}^{?}(S(\ell_0))& \mbox{if $\Pi_{Type}(a) = \>?$} \\
\max\limits_{\ell \in \Pi_{Loc}(Sys)}\{u_{\alpha}(\ell, Sys,S(\ell_0))\} & \text{if $\Pi_{Type}(a) = \> !!$} \\
\sum\limits_{T \in Seq(Sys)}u_{\alpha}(\ell_0, Sys, T) p_{\alpha}(S(\ell_0), Sys, T) p_{\alpha}^{??}(S(\ell_0)) & \text{if $\Pi_{Type}(a) = \> ??$}
\end{cases}
\end{IEEEeqnarray*}
\noindent Now consider a model component $P = S_1(\ell_1) \parc \dots \parc S_n(\ell_n)$ with $S_i(\ell_i) \in \mathcal{C}_{S}$ for all $1 \leq i \leq n$ ($n \in \mathbb{N}$)
and suppose it is a part of the system $Sys$.
Then define
\begin{IEEEeqnarray*}{rCl}
R_{a}(Sys, P) = \sum_{i=1}^{n} R_{a}(Sys \parc (P \setminus S_i(\ell_i)), S_i(\ell_i))
\end{IEEEeqnarray*}
\noindent where $P \setminus S_i(\ell_i)$ denotes the model component $P$ with $S_i(\ell_i)$ removed.
\end{definition}
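The second part of the definition, where each $S_i(\ell_i)$ is rated in the context of $Sys$ composed with the remaining components of $P$, can be sketched as follows. This is a hypothetical Python rendering; `rate_of` stands in for the context-aware exit rate $R_a$ and components are treated as opaque values.

```python
def exit_rate_parallel(components, rate_of, context):
    # R_a(Sys, P) for P = S_1 || ... || S_n: sum over i of the rate of S_i
    # evaluated in the context Sys || (P \ S_i).
    total = 0.0
    for i, s in enumerate(components):
        rest = components[:i] + components[i + 1:]
        total += rate_of(context + rest, s)
    return total
```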
Finally we define the rate at which action $a \in Act$ is performed over a set of locations.
\begin{definition}
Consider a model component $P = S_1(\ell_1) \parc \dots \parc
S_n(\ell_n)$ with $S_i(\ell_i) \in \mathcal{C}_{S}$ for all $1 \leq i \leq n$ ($n \in \mathbb{N}$) and suppose it is a part of the system $Sys$.
Let $L$ be a set of locations of interest.
We define $R_{a}(L, Sys, P)$, the rate at which action $a$ is performed by $P$ in locations $L$, within the context of system $Sys$ to be:
\begin{IEEEeqnarray*}{rCl}
R_{a}(L, Sys, P) = \sum_{S \> \in \> \mathrm{seq}(P, L)} R_{a}(Sys \parc (P \setminus S), S)
\end{IEEEeqnarray*}
\end{definition}
\section{Semantics}
\label{sec:semantics}
The definition of the semantics of PALOMA will proceed in the FuTS (State to Function Labelled Transition Systems) framework as presented in~\cite{NicolaLLM13}.
In general, the transition rules in FuTSs are given as triplets $s \xratailb{\lambda} f$
where $s$ denotes a source state, $\lambda$ the label of the transition and $f$ the continuation function associating a value of suitable type
to each state $s'$.
The shorthand $\left[s_1 \mapsto v_1, \cdots, s_n \mapsto v_n \right]$ is used to denote a function $f$ such that $f(s_i) = v_i$ for $i = 1, \cdots, n$, and $f(s) = 0$ for every other state $s$.
This kind of functional treatment of transition rules is going to allow us to give more concise definitions of semantic rules as many possible branches of
model evolutions can be captured within a single rule.
In the case of PALOMA semantics we are going to define the set of states as the set of all model components $\mathcal{C}$.
For convenience, the treatment of semantic rules is split into two steps where the following types of transition relations are considered separately:
\begin{description}
\item[Capability relation] Denoted by $s \xratailb{\lambda}_{c} f$ where $f: \mathcal{C} \to [0,1]$.
The aim is to describe actions that a defined model component is capable of and introduce probabilities for all possible states resulting from the said action firing.
For example, a component including a prefix for unicast input will be capable of the unicast input action firing with some probability dependent on the context.
The function $f$ will assign a probability for possible continuation states.
\item[Stochastic relation] Denoted by $s \xratailb{\lambda}_{s} f$ where $f: \mathcal{C} \to \mathbb{R}^{\geq 0}$.
These rules are used to generate the CTMC and thus need to assign rates to each available transition.
\end{description}
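For intuition, a continuation function written with the shorthand $[s_1 \mapsto v_1, \cdots, s_n \mapsto v_n]$ behaves like a dictionary lookup that defaults to zero. The Python sketch below is illustrative only and not part of the formal semantics:

```python
def continuation(mapping):
    # FuTS shorthand [s1 -> v1, ..., sn -> vn]: a total function on states
    # that returns zero for every state outside the listed ones.
    return lambda state: mapping.get(state, 0.0)

f = continuation({"S": 2.5})  # e.g. a stochastic continuation [S -> 2.5]
```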
As mentioned in Section~\ref{sec:paloma}, the calculation of rates of actions for each component depends on the system in which it appears (a PALOMA model
component), and thus we use $Sys$ as a placeholder for any such PALOMA model component serving as context.
In the following, we use $P_1 \equiv P_2$ to denote that model components $P_1$ and $P_2$ are syntactically equivalent.
\subsection{Capability relations}
The only capability relations of interest here are ones for broadcast and unicast input actions as these are the only ones that can either succeed or fail depending
on the rest of the context system $Sys$.
The labels $\lambda_c$, of the FuTSs rules are given by the following grammar where $\alpha \in \mathsf{Lab}$ denotes the action labels:
\begin{IEEEeqnarray*}{rCllll}
\lambda_c ::= \>
& & (?\alpha, &\vv{\ell}, \> &Sys) \quad &\mbox{Broadcast input} \\
&\mid & (??\alpha, &\vv{\ell}, \> &Sys) \quad &\mbox{Unicast input}
\end{IEEEeqnarray*}
The semantic rules given in Figure~\ref{fig:caprules} use the definitions from Section~\ref{sec:paloma} to extract necessary
information from the syntactic definitions of components.
The rules \textsf{BrIn} and \textsf{UniIn} are the primitive rules describing the capability of sequential components to perform a broadcast or unicast input action,
respectively, given the set of locations $\vv{\ell}$ denoting the influence range of the message and a context system $Sys$.
In both cases the function $f$, which is defined over all states, gives the probability of a transition to each state given the action has fired.
For \textsf{BrIn} the calculation only depends on the parameters $p$ and $q$ given explicitly in the syntactic definition of the component.
For \textsf{UniIn} the likelihood of the component receiving the message, $\frac{w}{w_{\alpha}(\mathrm{seq}(Sys, \vv{\ell}))}$, is calculated on the basis that there may be many eligible receivers of the
given message in $Sys$.
The rule \textsf{BrSystem} is used to deal with parallel compositions of model components that can act as broadcast message receivers.
\blue{
Note that the outcomes of all the broadcast input actions in a system are independent of each other.
Thus the probability of $P_1 \parc P_2$ transitioning to $P_1' \parc P_2'$ due to a broadcast input action is the product of the
probabilities of $P_1$ and $P_2$ respectively making the corresponding transitions.}
For unicast input actions, the rule \textsf{ParallelUniIn} simply states that no two components can perform the unicast input on the same label simultaneously.
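The independence argument behind \textsf{BrSystem} can be mimicked directly: the combined continuation multiplies the component probabilities. A hypothetical sketch, with continuations modelled as plain Python functions:

```python
def broadcast_product(f1, f2):
    # BrSystem: outcomes of broadcast inputs are independent, so the
    # probability of P1 || P2 moving to P1' || P2' is f1(P1') * f2(P2').
    return lambda p1, p2: f1(p1) * f2(p2)

f1 = lambda s: {"S1": 0.3, "P1": 0.7}.get(s, 0.0)  # receive vs. stay put
f2 = lambda s: {"S2": 0.4, "P2": 0.6}.get(s, 0.0)
g = broadcast_product(f1, f2)
```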
\begin{figure}[t!]%
\scriptsize
\begin{minipage}[c]{0.35\linewidth}
\makebox[3em][l]{\textsf{BrIn}} \quad
$?(\alpha, p)@\Prob\{q\}.S \xratail{\left(?\alpha, \vv{\ell}, \> Sys\right)}_{c} f$
\end{minipage}
\begin{minipage}[c]{0.3\linewidth}
if $\Pi_{Loc}\left(?(\alpha, p)@\Prob\{q\}.S\right) \in \vv{\ell}$
\end{minipage}
\begin{minipage}[c]{0.4\linewidth}
$f(s) = \begin{cases}
pq & \mbox{if $s \equiv S$} \\
1 - pq & \mbox{if $s \equiv \> ?(\alpha, p)@\Prob\{q\}.S$} \\
0 & \mbox{otherwise}
\end{cases}$
\end{minipage}
\bigskip
\begin{minipage}[c]{0.35\linewidth}
\makebox[3em][l]{\textsf{UniIn}} \quad
$??(\alpha, p)@\Wt\{w\}.S \xratail{(??\alpha, \vv{\ell}, \> Sys)}_{c} f$
\end{minipage}
\begin{minipage}[c]{0.3\linewidth}
if $\Pi_{Loc}\left(??(\alpha, p)@\Wt\{w\}.S\right) \in \vv{\ell}$
\end{minipage}
\begin{minipage}[c]{0.4\linewidth}
$f(s) =
\begin{cases}
\dfrac{wp}{w_{\alpha}(Seq)} & \mbox{if $s \equiv S$} \\[1em]
\dfrac{w(1-p)}{w_{\alpha}(Seq)} & \mbox{if $s \equiv \> ??(\alpha, p)@\Wt\{w\}.S$} \\[1em]
0 & \mbox{otherwise}
\end{cases}$\\
\medskip
where $Seq = \mathrm{seq}(Sys, \vv{\ell})$
\end{minipage}
\bigskip
\begin{minipage}[c]{0.45\linewidth}
\AxiomC{$P_1 \xratail{\left( ?\alpha, \vv{\ell}, \> Sys\right)}_{c} f_1 \quad P_2 \xratail{\left( ?\alpha, \vv{\ell}, \> Sys\right)}_{c} f_2$}
\LeftLabel{\makebox[4em][l]{\textsf{BrSystem}} \quad}
\UnaryInfC{$P_1 \parc P_2 \xratail{\left(?\alpha, \vv{\ell}, \> Sys\right)}_c g$}
\DisplayProof
\end{minipage}
\begin{minipage}[c]{0.5\linewidth}
\makebox[3cm]{}
$g(s) =
\begin{cases}
f_{1}(P_1') f_{2}(P_2') & \mbox{if $s \equiv P_1' \parc P_2'$} \\
0 & \mbox{otherwise}
\end{cases}$
\end{minipage}
\bigskip
\begin{minipage}[c]{0.45\linewidth}
\AxiomC{$S_1 \xratail{\left(??\alpha, \vv{\ell}, \> Sys\right)}_c f_1 \quad S_2 \xratail{\left(??\alpha, \vv{\ell}, \> Sys\right)}_c f_2$}
\LeftLabel{\makebox[4em][l]{\textsf{ParallelUniIn}} \quad}
\UnaryInfC{$S_1 \parc S_2 \xratail{\left(??\alpha, \vv{\ell}, \> Sys\right)}_c g$}
\DisplayProof
\end{minipage}
\begin{minipage}[c]{0.5\linewidth}
\makebox[3cm]{}
$g(s) =
\begin{cases}
f_{1}(S_1') & \mbox{if $s \equiv S_1' \parc S_2$} \\
f_{2}(S_2') & \mbox{if $s \equiv S_1 \parc S_2'$} \\
0 & \mbox{otherwise}
\end{cases}$
\end{minipage}
\bigskip
\begin{minipage}[c]{0.25\linewidth}
\AxiomC{$P_1 \xratail{\lambda_c}_{c} f$}
\LeftLabel{\makebox[4em][l]{\textsf{Choice}} \quad}
\UnaryInfC{$P_1 + P_2 \xratail{\lambda_c}_{c} f $}
\DisplayProof
\end{minipage}
\begin{minipage}[c]{0.25\linewidth}
\AxiomC{$P_2 \xratail{\lambda_c}_{c} f$}
\UnaryInfC{$P_1 + P_2 \xratail{\lambda_c}_{c} f$}
\DisplayProof
\end{minipage}
\red{
\begin{minipage}[c]{0.5\textwidth}
\makebox[2cm]{}
\AxiomC{$P \xratail{\lambda_c}_{c} f \qquad X := P$}
\LeftLabel{\makebox[1.5cm][l]{\textsf{Constant}} \quad}
\UnaryInfC{$X \xratail{\lambda_c}_{c} f$}
\DisplayProof
\end{minipage}%
}
\caption{Capability rules for communication}
\label{fig:caprules}
\end{figure}
\subsection{Stochastic relations}
Firstly we need to define a set of labels for stochastic relations.
It will be necessary to carry around the set of locations $\vv{\ell}$ in the labels to distinguish between actions having the same label and type but affecting a different set of
components due to their influence range.
In addition, including the system $Sys$ in the labels ensures that the communication rules are only applied to components in the same system.
The set of labels for stochastic relations is thus defined as follows:
\begin{IEEEeqnarray*}{rCllll}
\lambda_s ::= \>
& & (\alpha, &\emptyset, \> &Sys) \quad &\mbox{Spontaneous action}\\
&\mid & (!\alpha, &\vv{\ell}, \> &Sys) \quad &\mbox{Broadcast communication}\\
&\mid & (!!\alpha, &\vv{\ell}, \> &Sys) \quad &\mbox{Unicast communication}
\end{IEEEeqnarray*}
\red{The stochastic rules are summarised in Figure~\ref{fig:stochrules}.}
Firstly, we have the rules \textsf{Br}, \textsf{Uni} and \textsf{SpAct}, which define the primitive transitions for broadcast output, unicast output and spontaneous actions, and give the
rates at which the defined transitions can happen.
For the rule \textsf{Uni} the side-condition is needed to ensure that there are eligible receivers available in the system.
The rules \textsf{BrCombo} and \textsf{UniPair} are to combine the capability rules with stochastic rules to give rates of system state transitions that are induced
by broadcast or unicast message passing.
\textsf{BrCombo} takes as premise the existence of components $S$ and $P$ such that $S$ can perform the broadcast communication action defined by stochastic relations and
$P$ is capable of broadcast input.
The rate at which the parallel composition $S \parc P$ reaches the next state $S' \parc P'$ is given by the function $f \otimes g$, which is defined as the product of $f$ applied to
$S'$ and $g$ applied to $P'$.
The unicast case is treated similarly.
\begin{figure}[hb!]
\scriptsize
\begin{subfigure}{\textwidth}
\begin{minipage}{0.58\textwidth}
\makebox[1.5cm][l]{\textsf{Br}} \quad
$!(\alpha, r)@\Ir\{\vv{\ell}\}.S \xratail{\left(!\alpha, \vv{\ell}, \> Sys\right)}_{s} \left[S \mapsto r\right]$
\end{minipage}
\bigskip
\begin{minipage}{\textwidth}
\makebox[1.5cm][l]{\textsf{Uni}} \quad
$!!(\alpha, r)@\Ir\{\vv{\ell}\}.S \xratail{\left(!!\alpha, \vv{\ell}, \> Sys\right)}_{s} \left[S \mapsto r\right]$
\qquad if there exists $T$ such that $T \xratail{\left(??\alpha, \vv{\ell}, \> Sys\right)}_c f$
\end{minipage}
\bigskip
\begin{minipage}{0.58\textwidth}
\makebox[1.5cm][l]{\textsf{SpAct}} \quad
$(\alpha, r).S \xratail{(\alpha, \emptyset, \> Sys)}_{s} \left[S \mapsto r\right]$
\end{minipage}
\caption{Primitive rules}
\end{subfigure}
\bigskip
\begin{subfigure}{\textwidth}
\begin{minipage}{0.58\textwidth}
\AxiomC{$S \xratail{\left(!\alpha, \vv{\ell}, \> Sys\right)}_s f \quad P \xratail{(?\alpha, \vv{\ell}, \> Sys)}_{c} g$}
\LeftLabel{\makebox[1.5cm][l]{\textsf{BrCombo}} \quad}
\UnaryInfC{$S \parc P \xratail{\left(!\alpha, \vv{\ell}, \> Sys \right)}_s f \otimes g$}
\DisplayProof
\end{minipage}%
\begin{minipage}{0.5\textwidth}
$(f \otimes g)(s)=
\begin{cases}
f(S')g(P') & \mbox{if $s \equiv S' \parc P'$}\\
0 & \mbox{otherwise}
\end{cases}$
\end{minipage}
\bigskip
\begin{minipage}{0.58\textwidth}
\AxiomC{$S \xratail{\left(!!\alpha, \vv{\ell}, \> Sys\right)}_{s} f \quad P \xratail{(??\alpha, \vv{\ell}, \> Sys)}_{c} g$}
\LeftLabel{\makebox[1.5cm][l]{\textsf{UniPair}} \quad}
\UnaryInfC{$S \parc P \xratail{\left(!!\alpha, \vv{\ell}, \> Sys\right)}_s f \otimes g $}
\DisplayProof
\end{minipage}
\begin{minipage}{0.5\textwidth}
$(f \otimes g)(s)=
\begin{cases}
f(S')g(P') & \mbox{if $s \equiv S' \parc P'$}\\
0 & \mbox{otherwise}
\end{cases}$
\end{minipage}
\caption{Combining with capabilities}
\end{subfigure}
\bigskip
\begin{subfigure}{\textwidth}
\begin{minipage}{0.58\textwidth}
\AxiomC{$P_1 \xratail{\lambda_s}_{s} f$}
\LeftLabel{\makebox[1.5cm][l]{\textsf{Parallel}} \quad}
\UnaryInfC{$P_1 \parc P_2 \xratail{\lambda_s}_{s} f \otimes Id$}
\DisplayProof
\end{minipage}%
\begin{minipage}{0.5\textwidth}
$(f \otimes Id)(s) =
\begin{cases}
f(P_1') & \mbox{if $s \equiv P_1' \parc P_2$} \\
0 & \mbox{otherwise}
\end{cases}$
\end{minipage}
\bigskip
\begin{minipage}{0.58\textwidth}
\AxiomC{$P_2 \xratail{\lambda_s}_{s} f$}
\LeftLabel{\makebox[1.5cm][l]{} \quad}
\UnaryInfC{$P_1 \parc P_2 \xratail{\lambda_s}_{s} Id \otimes f$}
\DisplayProof
\end{minipage}%
\begin{minipage}{0.5\textwidth}
$(Id \otimes f)(s) =
\begin{cases}
f(P_2') & \mbox{if $s \equiv P_1 \parc P_2'$} \\
0 & \mbox{otherwise}
\end{cases}$
\end{minipage}
\bigskip
\begin{minipage}{0.15\textwidth}
\AxiomC{$P_1 \xratail{\lambda_s}_{s} f$}
\LeftLabel{\makebox[1.5cm][l]{\textsf{Choice}} \quad}
\UnaryInfC{$P_1 + P_2 \xratail{\lambda_s}_{s} f$}
\DisplayProof
\end{minipage}%
\begin{minipage}{0.43\textwidth}
\AxiomC{$P_2 \xratail{\lambda_s}_{s} f$}
\LeftLabel{\makebox[1.5cm][l]{} \quad}
\UnaryInfC{$P_1 + P_2 \xratail{\lambda_s}_{s} f$}
\DisplayProof
\end{minipage}%
\red{
\begin{minipage}{0.15\textwidth}
\AxiomC{$P \xratail{\lambda_s}_{s} f \qquad X := P$}
\LeftLabel{\makebox[1.5cm][l]{\textsf{Constant}} \quad}
\UnaryInfC{$X \xratail{\lambda_s}_{s} f$}
\DisplayProof
\end{minipage}%
}
\caption{Rules for composition}
\end{subfigure}
\caption{Stochastic rules for rates}
\label{fig:stochrules}
\end{figure}
\blue{Suppose we want to derive a CTMC for the evolution of the model component $Sys$.
For that we need to consider all enabled stochastic transition rules from $Sys$.
The CTMC has a transition from the state $Sys$ to $Sys'$ if there is a transition $Sys \xratailb{\lambda_s}_s f$ such that
$f(Sys') \neq 0$. The next step is to consider all transitions from $Sys'$, and so on recursively, until no new states are discovered and the full CTMC is generated.}
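The recursive generation described above is a standard worklist exploration. The following Python sketch is our own illustration (continuation functions are modelled as `{successor: rate}` dictionaries; none of these names come from the paper):

```python
from collections import deque

def build_ctmc(initial, enabled):
    # Worklist generation of the CTMC: enabled(state) yields the available
    # continuation functions as {successor: rate} dictionaries; transitions
    # with nonzero rate are recorded and newly discovered states explored.
    rates = {}
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        s = frontier.popleft()
        for f in enabled(s):
            for t, r in f.items():
                if r == 0:
                    continue  # zero-rate entries induce no CTMC transition
                rates[(s, t)] = rates.get((s, t), 0.0) + r
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return seen, rates

# Toy two-state system: A fires to B at rate 1.0, B back to A at rate 0.5.
toy = {"A": [{"B": 1.0}], "B": [{"A": 0.5, "B": 0.0}]}
states, trans = build_ctmc("A", lambda s: toy.get(s, []))
```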
\section{Equivalence relations}
\label{sec:equivalence}
Firstly we will briefly cover a naive attempt to define a bisimulation on sequential components of PALOMA to demonstrate why it is not entirely trivial
to deal with spatial properties of PALOMA models.
The approach that allows us to relax the conditions on spatial properties of defined models will be described in more detail.
In terms of semantic rules introduced in Section~\ref{sec:semantics} we are going to say that $S \xrightarrow{a} S'$ holds if there is a stochastic transition
$Sys \xratailb{\lambda_s}_s f$ and a system $Sys'$ such that $S' \in \mathrm{seq}(Sys')$ and $f(Sys') \neq 0$.
In addition the label $\lambda_s$ is required to be such that
\begin{IEEEeqnarray*}{c}
\lambda_s =
\begin{cases}
(\alpha, \emptyset, \> Sys) & \mbox{if $a = \alpha$} \\
(!\alpha, \vv{\ell}, \> Sys) & \mbox{if $a = ?\alpha \lor !\alpha$} \\
(!!\alpha, \vv{\ell}, \> Sys) & \mbox{if $a = ??\alpha \lor !!\alpha$}
\end{cases}
\end{IEEEeqnarray*}
As the behaviour of the PALOMA sequential component is parametrised by its location the natural interpretation would be to consider
locations as an inherent part of a component's state.
This would lead to the following definition, making use of the syntax-derived rate function defined in Section~\ref{sec:paloma}.
\begin{definition}
\label{def:naive bisimulation}
Let $Sys \in \mathcal{C}$ be any model component serving as a context.
A binary relation $\mathcal{R}_{Sys}$ is a bisimulation over sequential components if, and only if, $(S(\ell_1), T(\ell_2)) \in \mathcal{R}_{Sys}$ implies, for all $a \in Act$
\begin{enumerate}
\item $R_{a}(Sys, S(\ell_1)) = R_{a}(Sys, T(\ell_2))$.
\item $\ell_1 = \ell_2$.
\item $S(\ell_1) \xrightarrow{a} S'(\ell_1')$ implies for some $T'(\ell_2')$, $T(\ell_2) \xrightarrow{a} T'(\ell_2')$ and $(S'(\ell_1'), T'(\ell_2')) \in \mathcal{R}_{Sys}$.
\item $T(\ell_2) \xrightarrow{a} T'(\ell_2')$ implies for some $S'(\ell_1')$, $S(\ell_1) \xrightarrow{a} S'(\ell_1')$ and $(S'(\ell_1'), T'(\ell_2')) \in \mathcal{R}_{Sys}$.
\end{enumerate}
\end{definition}
This definition would give rise to an equivalence relation on PALOMA components with respect to the underlying context system.
However, Definition~\ref{def:naive bisimulation} has some limitations due to the restrictive way in which location is treated, and we will not pursue it further.
Specifically, two sequential components which have identical behaviour in different locations will be considered non-equivalent in this setting.
This would lead to a very strict equivalence being defined on the model components of PALOMA\@.
A more interesting idea is to shift to considering relative locations between the sequential components.
This will be explored in the following subsection.
\subsection{Relative locations}
\label{sec:isometry}
In order to consider relative locations between sequential components we need a notion of distance between the components. Thus we consider the case where $Loc$ denotes a metric space. Specifically we will consider the Euclidean plane
$\mathbb{R}^{2}$ \red{(extensions to different metric spaces are immediate)}.
The notion we make use of in the following discussion is that of isometries -- that is, maps between metric spaces that preserve the distances between points.
In particular we are interested in the set of Euclidean plane isometries of which we have four types: translations, rotations, reflections and glide reflections.
Denote the set of Euclidean plane isometries by $E(2)$.
The first definition we are going to give mimics Definition~\ref{def:naive bisimulation} but allows the locations of the sequential components under consideration
to differ by an element of $E(2)$.
\begin{definition}
Let $\phi \in E(2)$ and $Sys \in \mathcal{C}$ a system component serving as context.
A binary relation $\mathcal{R}_{\phi, Sys}$ is a bisimulation with respect to $\phi$ over components if, and only if, $(S(\ell_1), T(\ell_2)) \in \mathcal{R}_{\phi, Sys}$ implies, for all $a \in Act$, that
\begin{enumerate}
\item $R_{a}(Sys, S(\ell_1)) = R_{a}(Sys, T(\ell_2))$.
\item $\phi(\ell_1) = \ell_2$.
\item $S(\ell_1) \xrightarrow{a} S'(\ell_1')$ implies for some $T'(\ell_2')$, $T(\ell_2) \xrightarrow{a} T'(\ell_2')$ and $(S'(\ell_1'), T'(\ell_2')) \in \mathcal{R}_{\phi, Sys}$.
\item $T(\ell_2) \xrightarrow{a} T'(\ell_2')$ implies for some $S'(\ell_1')$, $S(\ell_1) \xrightarrow{a} S'(\ell_1')$ and $(S'(\ell_1'), T'(\ell_2')) \in \mathcal{R}_{\phi, Sys}$.
\end{enumerate}
\end{definition}
In this definition for sequential components the location plays little role.
The situation becomes more interesting when we attempt to extend the definition to model components $\mathcal{C}$ of PALOMA.
\begin{definition}\label{def:bisimulation wrt phi}
Let $\phi \in E(2)$ and $Sys \in \mathcal{C}$ be a model component serving as context.
A binary relation $\mathcal{R}_{\phi, Sys}$ is a bisimulation with respect to $\phi$ over model components if, and only if, $(P, Q) \in \mathcal{R}_{\phi, Sys}$ implies,
for all $a \in Act$ and all sets of locations $L$
\begin{enumerate}
\item $R_{a}(L, Sys, P) = R_{a}(\phi(L), Sys, Q)$.
\item $P \xrightarrow{a} P'$ implies for some $Q'$, $Q \xrightarrow{a} Q'$ and $(P', Q') \in \mathcal{R}_{\phi, Sys}$.
\item $Q \xrightarrow{a} Q'$ implies for some $P'$, $P \xrightarrow{a} P'$ and $(P', Q') \in \mathcal{R}_{\phi, Sys}$.
\end{enumerate}
\end{definition}
From the definition we can easily see that any component is bisimilar to itself, that the conditions are symmetric -- meaning we have
$(P, Q) \in \mathcal{R}_{\phi, Sys} \implies (Q, P) \in \mathcal{R}_{\phi, Sys}$ -- and that transitivity holds.
To define bisimilarity as the largest bisimulation over the components, we would need to verify that a union of bisimulations is again a bisimulation.
\begin{definition}\label{def:bisimilarity relation}
Two model components $P_1, P_2$, defined over $\mathbb{R}^2$ are considered bisimilar with respect to context system $Sys$, denoted $P_1 \sim_{Sys} P_2$
if there exists an isometry $\phi \in E(2)$ and a corresponding bisimulation $\rel_{\phi, Sys}$ such that $(P_1, P_2) \in \rel_{\phi, Sys}$.
\end{definition}
The simplest case we can consider is bisimilarity with respect to the empty context system, denoted by $\emptyset$.
We illustrate this in the following example.
\begin{example}
\begin{IEEEeqnarray*}{llCl}
&Transmitter(\ell_0) &:=& !!(message\_move, r)@\Ir\{all\}.Transmitter(\ell_1) \\
&Transmitter(\ell_1) &:=& !!(message\_move, r)@\Ir\{all\}.Transmitter(\ell_0) \\
&Receiver(\ell_1) &:=& ??(message\_move, p)@\Wt\{v\}.Receiver(\ell_0) \\
&Receiver(\ell_0) &:=& ??(message\_move, q)@\Wt\{v\}.Receiver(\ell_1)
\end{IEEEeqnarray*}
For this example take $\ell_0 = (-1, 0)$ and $\ell_1 = (1, 0)$.
The two systems we are going to analyse are
\begin{IEEEeqnarray*}{l}
Scenario_1 := Transmitter(\ell_0) \parc Receiver(\ell_1) \\
Scenario_2 := Transmitter(\ell_1) \parc Receiver(\ell_0)
\end{IEEEeqnarray*}
It is clear that the systems are symmetric in the sense that if the locations in $Scenario_1$ are reflected along the $y$-axis we get $Scenario_2$.
Denote the reflection along the $y$-axis as $\phi$.
This gives $\phi(\ell_0) = \ell_1$ and $\phi(\ell_1) = \ell_0$.
It is intuitively clear that the two systems behave in the same way up to the starting locations of the $Transmitter$ and $Receiver$ in both systems.
Thus it makes sense to abstract away the absolute locations and consider the given systems observationally equivalent up to spatial transformation $\phi$.
In the following we verify that applying Definition~\ref{def:bisimulation wrt phi} to these examples indeed agrees with the intuition.
The two systems are considered on their own with no additional context -- that is, the $Sys$ in Definition~\ref{def:bisimulation wrt phi} becomes~$\emptyset$.
\begin{IEEEeqnarray*}{l}
R_{!!message\_move}(\ell_0 , \emptyset, Scenario_1) = r \qquad \quad
R_{??message\_move}(\ell_1 , \emptyset, Scenario_1) = rp
\end{IEEEeqnarray*}
and
\begin{IEEEeqnarray*}{l}
R_{!!message\_move}(\phi(\ell_0), \emptyset, Scenario_2) = R_{!!message\_move}(\ell_1, \emptyset, Scenario_2) = r \\
R_{??message\_move}(\phi(\ell_1), \emptyset, Scenario_2) = R_{??message\_move}(\ell_0, \emptyset, Scenario_2) = rp
\end{IEEEeqnarray*}
Since the rest of the rates are $0$, the first condition in Definition~\ref{def:bisimulation wrt phi} holds.
Verifying the second and third conditions requires checking that the rates also match for the derivatives of the systems $Scenario_1$ and $Scenario_2$.
We do not carry this out here, but one can easily see that the same symmetries hold throughout the evolution of the systems,
and thus Definition~\ref{def:bisimilarity relation} gives that
\begin{IEEEeqnarray*}{rCl}
Scenario_1 \sim_{\emptyset} Scenario_2
\end{IEEEeqnarray*}
\end{example}
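To make the isometry in this example concrete, reflection in the $y$-axis and its distance-preserving property can be checked numerically. This is an illustrative Python fragment of our own, not part of the paper:

```python
import math

def reflect_y(point):
    # The isometry phi used in the example: reflection in the y-axis.
    x, y = point
    return (-x, y)

def dist(p, q):
    # Euclidean distance in the plane R^2.
    return math.hypot(p[0] - q[0], p[1] - q[1])

l0, l1 = (-1.0, 0.0), (1.0, 0.0)
# phi swaps the two locations and preserves their distance.
```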
In the example we gave no additional context to the systems under study, but Definition~\ref{def:bisimulation wrt phi} allows for reasoning about the equivalence
of the two components in the context of any given system.
The following example demonstrates that components being equivalent with respect to one context system does not imply equivalence with respect to other contexts.
\begin{example}
\begin{IEEEeqnarray*}{llCl}
&Transmitter(\ell_0) &:=& !!(message, r)@\Ir\{all\}.Transmitter(\ell_0) \\
&Receiver(\ell_0) &:=& ??(message, p)@\Wt\{w\}.Receiver(\ell_0) \\
&Sys &:=& Transmitter(\ell_0)
\end{IEEEeqnarray*}
It can be verified that according to Definition~\ref{def:bisimulation wrt phi} we have
\[
Transmitter \sim_{\emptyset} Receiver
\]
as neither component can perform an action due to the blocking nature of unicast communication.
On the other hand we have
\[
Transmitter \not\sim_{Sys} Receiver
\]
as the system $Transmitter(\ell_0) \parc Transmitter(\ell_0)$ performs no action, while in $Transmitter(\ell_0) \parc Receiver(\ell_0)$
unicast communication does take place.
\end{example}
\section{Conclusions and Future Work}
\label{sec:conc}
\blue{
The paper introducing the PALOMA language in its current form~\cite{FengHG16}
concentrated on the fluid analysis of CTMCs defined on population counts and gave semantic rules for generating a model in the Multi-message Multi-class Markovian
Agents Model framework~\cite{CerottiGBCM10}.
}In order to have a rigorous foundation for bisimulation definitions we have introduced the new agent level semantics in the FuTSs framework~\cite{NicolaLLM13}.
\blue{Several other process algebras that capture the relative locations of interacting entities have been developed.
In relation to systems biology there is, for example, SpacePi~\cite{johnEU08} where locations are defined in real coordinate spaces and for
wireless networks there is, for example, CWS~\cite{MezzettiS06} which makes no restrictions on the notion of location that can be used.
However, there is very little work exploring notions of equivalence for spatially distributed systems.}
We presented an idea for a bisimulation of PALOMA models which allows us to abstract away explicitly defined locations of PALOMA components
and use relative locations of sequential components as the basis of the model comparison.
This idea relies on working over the Euclidean plane and being able to apply isometries to the model components of PALOMA leaving the relative spatial structure of
the model components intact.
As the behaviour of PALOMA components is dependent on the context in which they appear, the definitions of equivalences are given in terms of the context system.
The bisimulation ideas presented are intended to serve as a grounding for further development of model comparison and analysis methods for systems with explicitly defined
spatial location.
From the modelling and simulation perspective the aim of equivalence relations is to provide formal ways of reducing the state space of the underlying CTMC by allowing
us to swap out components in the model for ones generating a smaller state space while leaving the behaviour of the model the same up to some equivalence relation.
In particular, it is useful to consider such equivalence relations that induce a lumpable partition at the CTMC level.
\subsection*{Acknowledgements}
This work was supported by grant EP/L01503X/1 for the University of Edinburgh School of Informatics Centre for Doctoral Training in Pervasive Parallelism
(http://pervasiveparallelism.inf.ed.ac.uk/) from the EPSRC, and by the EU-funded project, QUANTICOL 600708.
\bibliographystyle{eptcs}
\section{Introduction}\label{sec:introduction}
Modern high-capacity deep neural networks (DNNs) have shown astounding performance in many automated computer vision tasks ranging from complex scene understanding for autonomous driving \cite{Wang_2021_CVPR,Choi_2021_CVPR,Chen_2021_CVPR,Prakash_2021_CVPR,Luo_2021_CVPR,Li_2021_ICCV}, to accurate DeepFake media detection \cite{juefei2021countering,dolhansky2020deepfake}; from challenging medical imagery grading and diagnosis \cite{yim2020predicting,fan2020inf,cheng2020adversarial,icme21_xray}, to billion-scale consumer applications such as face authentication for mobile payment, \emph{etc}. Many of the tasks are safety- and mission-critical and the reliability of the deployed DNNs are of utmost importance. However, over the years, we have come to realize that the existence of unintentional (natural degradation corruptions) and intentional (adversarial perturbations) examples such as \cite{tmm21_pasadena,iccv21_advmot,arxiv21_ara,ijcai21_ava,arxiv21_advhaze,neurips20_abba,eccv20_spark,zhai2020s_rain,acmmm20_amora,iccv21_flat,icme21_xray,gao2020making,arxiv21_advbokeh,cheng2020adversarial} is a stark reminder that DNNs are vulnerable.
To tackle the DNN's vulnerability issues, many researchers have resorted to DNN repairing that aims at fixing the faulty DNN weights with the guidance of some specific repairing optimization criteria. An analogy to this is the traditional software repairing in the software engineering literature \cite{gazzola2017automatic}. However, general-purpose DNN repairing may not always be feasible in practice, due to (1) the difficulty of generalizing DNNs to any arbitrary unseen scenarios, and (2) the difficulty of generalizing DNNs to seen scenarios but with unpredictable, volatile, and ever-changing deployment environments.
For these reasons, a more practical DNN repairing strategy is to work under some assumptions of practical contexts and to perform task-specific and environment-aware DNN repairing where the model gap is closed up for a certain scenario/environment, or a set of scenarios/environments.
Compared to existing DNN repair work (\emph{e.g}., \cite{ma2018mode,zhang2019apricot,sohn2019arachne,gao2020sensei,ren2020fewshot,yu2021deeprepair}), this work takes the DNN repairing to a whole new level, quite literally, where we are performing \textbf{block-level} architecture-oriented repairing as opposed to network-level, layer-level, and neuron-level repairing.
As we will show in the following sections, block-level repairing, being a midpoint sweet spot in terms of network module granularity, offers a good trade-off between network accuracy and time consumption: repairing only some specific weights in a layer neglects the relationships between different layers, while repairing the whole network's weights leads to high cost. In addition, block-level repairing allows us to locally adjust not only the weights but also the network architecture within the block very effectively and efficiently.
To this end, as the first attempt, we repair DNNs by jointly optimizing the architecture and weights at the block level in this work.
The modern block structure stems from the design philosophy of VGG nets \cite{Simonyan2015ICLR} and has been generalized into a common design strategy in state-of-the-art architectures \cite{he2016resnet} (\emph{e.g}., ResNet) and optimization methods \cite{liu2018darts}.
To validate the importance of block-level repairing, we first study the drawbacks of network-level and layer-level repairing, which motivates us to explore a novel repairing granularity and direction.
Eventually, we identified that block-level architecture-oriented DNN repair is a promising direction. In order to achieve this, we need to address two challenges, \emph{i.e}., \textit{block localization} and \textit{joint architecture and weight repairing}.
For the first challenge, we propose the \textit{adversarial-aware spectrum analysis for vulnerable block localization} that considers the neuron suspiciousness and weights' gradients in blocks during the forward and backward processes when evaluating a series of examples. This method enables more precise block localization even under few-shot examples.
In terms of the second challenge, we propose the \textit{architecture-oriented search-based repairing} that relaxes the targeted block to a continuous search space. The space consists of several nodes and edges where the node represents deep features and the edge is an operation to connect two nodes.
By jointly optimizing the architecture and weights in that space, our method is able to find a much better block architecture for a specific repairing target.
We conduct extensive experiments to validate the proposed repairing method and find that our method can not only enhance the accuracy but also the robustness across various corruptions.
The various DNN models repaired with our technique perform better than the originals on both clean and corrupted data, with an average improvement of 3.939\% on clean data and 7.79\% on corrupted data, demonstrating strong general repairing capability across most DNN architectures.
Overall, the contribution of this paper is summarized as follows:
\begin{itemize}
\item We propose block-level architecture-oriented repairing for DNN repair. The block-structure design of modern DNNs \cite{he2016resnet} provides a suitable granularity for DNN repair. In addition, we show that jointly optimizing architecture and weights brings a further advantage over repairing a DNN by only updating its weights, as demonstrated by our comparative evaluation in the experimental section.
\item In terms of \textit{novelty and potential impact}, existing DNN repair methods \cite{ma2018mode,zhang2019apricot,sohn2019arachne,gao2020sensei,eniser2019deepfault,ren2020fewshot} mostly repair a DNN by updating its weights while ignoring the inherent DNN architecture design (\emph{e.g.}, the block structure and the relationships between layers), which also shapes the DNN's behavior and cannot be addressed by weight updates alone. Compared with existing work, this paper therefore initiates a new and broad direction for DNN repair that takes the architecture design, as well as layers and weights, into consideration.
\item Technically, we propose adversarial-aware spectrum-analysis-based block localization and architecture-oriented search-based repairing, both of which are novel for DNN repair. The former localizes a vulnerable block accurately even with only a few examples; the latter formulates repairing as the joint optimization of architecture and weights at the block level.
\item We implement our repairing techniques in the tool {\emph{ArchRepair}}{} and perform an extensive evaluation against 6 state-of-the-art DNN repair techniques on 4 DNNs with different architectures and two datasets. The results demonstrate the advantage of {\emph{ArchRepair}}{}, which achieves state-of-the-art repairing performance in terms of both accuracy and robustness.
\end{itemize}
To the best of our knowledge, this is the first attempt to address the DNN repairing problem at the block level by jointly repairing network weights and architecture. Our results expose the limitation of repairing DNNs by only updating weights, and show that other important development elements, such as the architecture that encodes higher-level relationships among neurons and layers, should also be considered when designing DNN repair techniques.
\section{DNN Repairing and Motivation}\label{sec:motive}
In this section, we review existing DNN repair methods and motivate our approach. In \secref{subsec:motive-solution}, we analyze previous DNN repair techniques from the viewpoint of their repairing targets, \emph{e.g.}, the parameters (\emph{i.e.}, weights) of the whole network, of individual layers, or of individual neurons.
We formulate their core mechanism and compare their strengths and weaknesses, which motivates us to develop the block-level repairing method.
To validate our motivation, we perform a preliminary study in \secref{subsec:motive-empirical}.
\subsection{DNN repair Solutions}\label{subsec:motive-solution}
In the standard training process, given a training dataset, we can train a DNN denoted as $\phi_{(\mathcal{W},\mathcal{A})}$, where $\mathcal{A}$ represents the architecture-related parameters determining which operations (\emph{e.g.}, convolution layers, pooling layers, \emph{etc.}) are used, and $\mathcal{W}$ denotes the corresponding weights (\emph{i.e.}, the parameters of those operations). Generally, the architecture $\mathcal{A}$ is pre-defined and fixed during training and testing, and $\mathcal{W}$ consists of the weights of the different layers.
Although existing DNNs (\emph{e.g.}, ResNet \cite{he2016resnet}) achieve high accuracy on popular datasets, incorrect behaviors are often found in these models when they are deployed in the real world or tested on challenging datasets.
A series of works study how to repair such DNNs so that they generalize to misclassified examples, challenging corruptions, or bias errors \cite{sohn2019arachne,ren2020fewshot,yu2021deeprepair,tian2020repairing}.
In general, we can formulate the existing repairing methods as
\begin{align}
\mathcal{W}^*
&=
\text{Locator}(\phi_{(\mathcal{W,}\mathcal{A})},\mathcal{D}^{\text{repair}})
\label{eq:unified_repair-1}
\\
\hat{\mathcal{W}}^{*}
&=
\operatornamewithlimits{arg\,min}_{\mathcal{W}^*} \text{J}(\phi_{(\mathcal{W}^*,\mathcal{A})}, \mathcal{D}^{\text{repair}})
\label{eq:unified_repair-2}
\end{align}
where $\mathcal{W}^*$ is a subset of $\mathcal{W}$ and $\hat{\mathcal{W}}^*$ is the fixed counterpart of $\mathcal{W}^*$.
The dataset $\mathcal{D}^\text{repair}$ contains the examples for repairing guidance. Different works may set different $\mathcal{D}^\text{repair}$ according to the repairing scenarios.
For example, Yu \emph{et al.}~\cite{yu2021deeprepair} set $\mathcal{D}^\text{repair}$ to an augmented version of the training dataset.
We will show that our method can address different repairing scenarios.
Intuitively, \reqref{eq:unified_repair-1} is to find the weights we need to fix in the DNN, and \reqref{eq:unified_repair-2} with a task-related objective function $\text{J}(\cdot)$ is to fix the selected weights ${\mathcal{W}}^*$ and produce a new one $\hat{\mathcal{W}}^*$.
The above formulation can represent a series of existing repairing methods.
For example, when we fix all the weights of a DNN (\emph{i.e.}, $\mathcal{W}^*=\mathcal{W}$), set the objective $\text{J}(\cdot)$ to the task-related loss (\emph{e.g.}, cross-entropy for image classification), and retrain the weights with different data augmentation techniques applied to collected failure cases as $\mathcal{D}^\text{repair}$, we obtain the methods proposed by \cite{ren2020fewshot} and \cite{yu2021deeprepair}.
Likewise, when we localize the targeted weights via the gradient of the loss and the forward impact, and fix the localized weights with a fitness function, the formulation recovers the method of \cite{sohn2019arachne}.
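The generic locate-then-fix scheme of \reqref{eq:unified_repair-1} and \reqref{eq:unified_repair-2} can be sketched in a few lines. The following is a minimal, illustrative Python sketch, not any method's actual implementation: weights are toy scalars, the localization policy is an Arachne-style "largest gradient magnitude" rule, and all names are hypothetical.

```python
# Generic "locate, then fix" repair scheme: a Locator selects a
# subset W* of the weights; the repair step updates only W* to
# reduce a task loss J on the repair set D_repair.

def locate(weights, grads, top_k=2):
    """Select the top-k weights with the largest gradient magnitude
    on the repair set (a gradient-based localization policy)."""
    ranked = sorted(weights, key=lambda name: abs(grads[name]), reverse=True)
    return ranked[:top_k]

def repair(weights, grads, selected, lr=0.1, steps=10):
    """Fix only the selected weights by plain gradient descent on J;
    all other weights are left untouched."""
    fixed = dict(weights)
    for _ in range(steps):
        for name in selected:
            fixed[name] -= lr * grads[name]  # gradient of J w.r.t. the weight
    return fixed

weights = {"w1": 0.5, "w2": -1.2, "w3": 0.05}
grads = {"w1": 0.8, "w2": -0.1, "w3": 0.02}   # toy dJ/dw values on D_repair
selected = locate(weights, grads, top_k=1)
repaired = repair(weights, grads, selected)
```

Note how `repaired` differs from `weights` only on the localized subset, which is exactly what distinguishes neuron-level and layer-level methods from whole-network retraining.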
Nevertheless, with the general formulation in \reqref{eq:unified_repair-1} and \reqref{eq:unified_repair-2}, we can see that existing methods have the following limitations:
\begin{itemize}
\item Existing works fix the targeted DNN either at the network level (\emph{i.e.}, fixing all weights of the DNN) or at the neuron level (\emph{i.e.}, fixing only part of the weights), and ignore the effects of the architecture $\mathcal{A}$.
\item Repairing only specific weights in a layer easily neglects the relationships between different layers, while repairing the weights of the whole network leads to high cost.
\end{itemize}
Note that state-of-the-art DNNs (\emph{e.g.}, ResNet \cite{he2016resnet}) are made up of several blocks, where each block is built from stacked convolutional and activation layers. Such block-based architecture is mainly inspired by the philosophy of VGG nets \cite{Simonyan2015ICLR}, and its effectiveness has been demonstrated in a wide range of applications.
Therefore, in this work, we focus on DNN repair at the block level. In particular, we consider repairing both the architecture and the weights of a specific block.
\subsection{Empirical Study and Motivation} \label{subsec:motive-empirical}
First, we perform a preliminary experiment to assess the effectiveness of repairing at different levels. We choose 3 variants of ResNet \cite{he2016resnet} (specifically, ResNet-18, ResNet-50, and ResNet-101) as the targeted DNNs $\phi$, and use the CIFAR-10 and Tiny-ImageNet datasets. We repair the DNNs at four levels, \emph{i.e.}, neuron level (fixing the weights of one neuron), layer level (fixing the weights of one layer), block level (fixing the weights of one block), and network level (fixing all weights of the DNN).
Inspired by recent work \cite{sohn2019arachne}, we choose the neuron (or layer/block) with the greatest gradient (mean gradient for a layer or block) as the target to fix. As previous works have shown that repairing a DNN with only a few failure cases is meaningful and important~\cite{ren2020fewshot,yu2021deeprepair}, we randomly select only 100 failure cases from the testing dataset to calculate the gradients and select the neuron (or layer/block).
Then, we adjust the weights of the chosen neuron/layer/block by gradient descent w.r.t. the loss function (\emph{e.g.}, cross-entropy loss for image classification).
To compare their effectiveness, we apply all methods to the same training datasets of CIFAR-10 and Tiny-ImageNet and measure accuracy on the respective testing datasets. We also record the execution time of the full repairing phase (100 epochs) as an indicator of time cost. The results are shown in \tableref{tab:motivate-empirical}.
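The selection rule used in this preliminary study, picking the unit with the greatest mean gradient magnitude computed over a few failure cases, can be sketched as below. This is an illustrative sketch only: a "unit" (neuron, layer, or block) is just a named group of per-weight gradients, and the numbers are toy values, not measured gradients.

```python
# Preliminary-study selection rule: among candidate units, pick the
# one with the greatest mean absolute gradient on the failure cases.

def mean_abs_grad(grad_group):
    """Mean magnitude of the per-weight gradients in one unit."""
    return sum(abs(g) for g in grad_group) / len(grad_group)

def select_unit(grad_groups):
    """grad_groups: {unit_name: [per-weight gradients on failure cases]}"""
    return max(grad_groups, key=lambda u: mean_abs_grad(grad_groups[u]))

# Example of a block-level grouping with toy gradient values.
blocks = {
    "blk1": [0.02, 0.01, 0.03],
    "blk2": [0.20, 0.15, 0.10],
    "blk3": [0.05, 0.04, 0.06],
}
target = select_unit(blocks)  # the unit whose weights then get tuned
```

The same rule covers all four granularities: at the neuron level each group holds one neuron's weights, and at the network level there is a single group, so selection is trivial and the whole network is retrained.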
According to \tableref{tab:motivate-empirical}, network-level repairing achieves the highest accuracy on ResNet-18 and ResNet-101 for CIFAR-10 and on all 3 ResNet variants for Tiny-ImageNet, but also incurs the highest time cost under every configuration.
Among the other three levels, block-level repairing achieves the highest accuracy improvement without a drastic increase in time cost (\emph{i.e.}, the runtime increase over neuron-level and layer-level repairing is less than 500 seconds per 100 epochs across all 3 ResNets) on both CIFAR-10 and Tiny-ImageNet.
\begin{table*}[t]
\centering
\small
\caption{Accuracy (\%) and execution time (s/100 epochs) of applying repairing method at different levels on 3 different DNNs trained and tested on CIFAR-10 and Tiny-ImageNet datasets. }
{\resizebox{\linewidth}{!}{
\begin{tabular}{c|l|rr|rr|rr}
\toprule
\multicolumn{2}{c|}{\multirow{2}{*}{\bf Scale}} & \multicolumn{2}{c|}{\bf ResNet-18} & \multicolumn{2}{c|}{\bf ResNet-50} & \multicolumn{2}{c}{\bf ResNet-101}\\
\multicolumn{2}{c|}{} & \bf Accuracy (\%) & \bf Execution Time & \bf Accuracy (\%) & \bf Execution Time & \bf Accuracy (\%) & \bf Execution Time \\
\midrule
\multirow{5}{*}{\rotatebox{90}{CIFAR-10}}
& \bf Original & 85.00 & - & 85.17 & - & 85.31 & - \\
& \bf Neuron-level & 85.18 & 650.49 & 85.23 & 4054.29 & 85.39 & 6853.47 \\
& \bf Layer-level & 85.16 & 590.47 & 85.24 & 4159.93 & 85.41 & 4956.81 \\
& \bf Block-level & 85.19 & 760.94 & 85.24 & 3976.39 & 85.47 & 7118.03 \\
& \bf Network-level & 85.73 & 1456.92 & 84.80 & 5735.61 & 87.43 & 9889.35 \\
\midrule
\multirow{5}{*}{\rotatebox{90}{Tiny-ImageNet}}
& \bf Original & 45.15 & - & 46.26 & - & 46.14 & - \\
& \bf Neuron-level & 45.23 & 1847.59 & 46.17 & 13074.85 & 46.14 & 20395.79 \\
& \bf Layer-level & 45.23 & 1854.37 & 46.24 & 12796.91 & 46.15 & 18497.53 \\
& \bf Block-level & 45.30 & 2011.84 & 46.27 & 13452.17 & 46.22 & 24774.15 \\
& \bf Network-level & 45.52 & 2574.81 & 46.41 & 17495.88 & 46.55 & 32908.43 \\
\bottomrule
\end{tabular}
}
}
\label{tab:motivate-empirical}
\end{table*}
Overall, network-level repairing is effective for accuracy improvement but incurs a high time cost.
In contrast, block-level repairing achieves an impressive accuracy enhancement with much less execution time than the network-level method (\emph{e.g.}, about $2\times$ less on ResNet-18), making it a good trade-off between effectiveness and efficiency.
This observation motivates us to further investigate block-level repairing.
\section{Block-level Architecture and Weights Repairing}\label{sec:blrepair}
In this section, we first provide an overview of our method in \secref{subsec:blr-overview} by presenting the intuitive idea and the main pipeline, which contains two key modules, \emph{i.e.}, \textit{vulnerable block localization} and \textit{architecture-oriented search-based repairing}. We then detail the first module in \secref{subsec:blr-locating} and the second in \secref{subsec:blr-searching}. The first module locates the vulnerable block in a deployed DNN, while the second repairs the architecture and weights of the localized block by formulating the task as an architecture search problem.
\subsection{Overview}\label{subsec:blr-overview}
Given a deployed DNN $\phi_{(\mathcal{W},\mathcal{A})}$, its weights and architecture usually consist of several blocks, each built by stacking basic operations, \emph{e.g.}, convolutional layers.
We therefore partition the weights and architecture into $B$ blocks, \emph{i.e.}, $\mathcal{W} = \{\mathcal{W}_{\text{b}}^i\}_{i=1}^{B}$ and $\mathcal{A} = \{\mathcal{A}_{\text{b}}^i\}_{i=1}^{B}$, where the weights or architecture of each block are made up of one or more layers.
For example, ResNet-18 \cite{he2016resnet} has six blocks (see \tableref{tab:resnet-block}): the first block contains a single convolution layer with kernel size $7\times 7 \times 64$ and stride 2, the second to the fifth blocks stack $3\times 3$ convolutional layers, and the last block contains a fully connected layer and a softmax layer.
Then, we can reformulate \reqref{eq:unified_repair-1} and \reqref{eq:unified_repair-2} for the proposed block-level repairing by
\begin{align}
(\mathcal{W}_\text{b}^*,\mathcal{A}_\text{b}^*) &= \text{Locator}(\phi_{(\{\mathcal{W}_{\text{b}}^i\}_{i=1}^{B},\{\mathcal{A}_{\text{b}}^i\}_{i=1}^{B})},\mathcal{D}^{\text{repair}}) \label{eq:block_repair-1} \\
(\hat{\mathcal{W}}^{*}_\text{b}, \hat{\mathcal{A}}^{*}_\text{b}) &= \operatornamewithlimits{arg\,min}_{(\mathcal{W}_\text{b}^*,\mathcal{A}_\text{b}^*)} \text{J}(\phi_{(\mathcal{W}_\text{b}^*,\mathcal{A}_\text{b}^*)}, \mathcal{D}^{\text{repair}}) \label{eq:block_repair-2}
\end{align}
where \reqref{eq:block_repair-1} locates the block (\emph{i.e.}, $(\mathcal{W}_\text{b}^*,\mathcal{A}_\text{b}^*)$) that should be fixed through the proposed adversarial-aware block localization, and \reqref{eq:block_repair-2} repairs the localized block by formulating the task as a network architecture search problem. Compared with the general repairing formulation (\emph{i.e.}, \reqref{eq:unified_repair-1} and \reqref{eq:unified_repair-2}), the proposed method fixes the weights and architecture at the block level.
We detail the \textit{vulnerable block localization} in \secref{subsec:blr-locating} and \textit{architecture search-based repairing} in \secref{subsec:blr-searching}.
\begin{table}[t]
\setlength{\tabcolsep}{3pt}
\centering
\caption{ResNet architectures and their respective blocks. More details could be found in \cite{he2016resnet}.}
\begin{tabular}{ll|ccc}
\toprule
Block & Layer & 18-layer & 50-layer & 101-layer \\
\midrule
Blk1 & conv1 & \multicolumn{3}{c}{$7\times 7$, 64, stride 2}\\
%
\hline
\multirow{2}{*}{Blk2} & \multirow{2}{*}{conv2\_x} & \multicolumn{3}{c}{$3\times 3$ max pool, stride 2}\\
\cline{3-5}
& & $\begin{bmatrix} 3\times 3,64\\ 3\times 3,64 \end{bmatrix}\times2$ & $\begin{bmatrix} 1\times 1,64\\ 3\times 3,64\\ 1\times 1,256\\ \end{bmatrix}\times3$ & $\begin{bmatrix} 1\times 1,64\\ 3\times 3,64\\ 1\times 1,256\\ \end{bmatrix}\times3$ \\
%
\hline
%
Blk3 & conv3\_x & $\begin{bmatrix} 3\times 3,128\\ 3\times 3,128 \end{bmatrix}\times2$ & $\begin{bmatrix} 1\times 1,128\\ 3\times 3,128\\ 1\times 1,512\\ \end{bmatrix}\times4$ & $\begin{bmatrix} 1\times 1,128\\ 3\times 3,128\\ 1\times 1,512\\ \end{bmatrix}\times4$ \\
%
\hline
%
Blk4 & conv4\_x & $\begin{bmatrix} 3\times 3,256\\ 3\times 3,256 \end{bmatrix}\times2$ & $\begin{bmatrix} 1\times 1,256\\ 3\times 3,256\\ 1\times 1,1024\\ \end{bmatrix}\times6$ & $\begin{bmatrix} 1\times 1,256\\ 3\times 3,256\\ 1\times 1,1024\\ \end{bmatrix}\times23$ \\
%
\hline
%
Blk5 & conv5\_x & $\begin{bmatrix} 3\times 3,512\\ 3\times 3,512 \end{bmatrix}\times2$ & $\begin{bmatrix} 1\times 1,512\\ 3\times 3,512\\ 1\times 1,2048\\ \end{bmatrix}\times3$ & $\begin{bmatrix} 1\times 1,512\\ 3\times 3,512\\ 1\times 1,2048\\ \end{bmatrix}\times3$ \\
%
\hline
%
Blk6 & \multicolumn{4}{c}{average pool, 1,000-d fully-connection, softmax}\\
%
\bottomrule
\end{tabular}
\label{tab:resnet-block}
\end{table}
There are two main solutions for vulnerable neuron localization \cite{sohn2019arachne,eniser2019deepfault}. The first employs neuron spectrum analysis during the forward pass of the DNN on a testing dataset. It calculates the spectrum of all neurons, \emph{e.g.}, how often each neuron is activated/non-activated on correctly classified examples and on misclassified examples.
These attributes are used to measure the suspiciousness of each neuron. The general principle is that a neuron is more suspicious when it is activated more often on misclassified examples than on correctly classified ones \cite{eniser2019deepfault}.
This solution localizes vulnerable neurons accurately but requires a large testing dataset, which does not suit scenarios where only a few examples are available for repairing.
The second solution actively localizes vulnerable neurons by backpropagating on the misclassified examples and calculating the gradients of neurons w.r.t. the loss function; neurons with large gradients are held responsible for the misclassification \cite{sohn2019arachne}.
This solution localizes vulnerable neurons with fewer examples but ignores the effects of correctly classified examples. As shown in \figref{fig:gr_bb}, under different sets of failure examples, the gradients of the different convolutional blocks in ResNet-18 take similar values, which shows that gradient-based localization is not sensitive to the number of available failure examples.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{image/gradient_b_bar.pdf}
\caption{
Average gradients of different blocks in ResNet-18 for different $\mathcal{D}^{\text{repair}}_{\text{fail}}$ sizes.
}
\label{fig:gr_bb}
\end{figure}
Overall, existing methods focus on localizing vulnerable neurons while ignoring the blocks of a DNN, and each solution has its own drawbacks.
In this work, we propose a novel localization method that finds the most vulnerable block of a deployed DNN, \emph{i.e.}, the block most responsible for its buggy behavior.
To combine the respective advantages of existing solutions while avoiding their drawbacks, we propose adversarial-aware spectrum analysis to localize the vulnerable block.
\subsection{Adversarial-aware Spectrum Analysis for Vulnerable Block Localization}\label{subsec:blr-locating}
\subsubsection{Neuron spectrum analysis}
Given a repair dataset $\mathcal{D}^\text{repair}$ and the targeted DNN $\phi_{(\mathcal{W},\mathcal{A})}$, we compute the spectrum attributes of the $j$th neuron in $\mathcal{W}$ by counting how often it is activated and non-activated on the correctly classified examples, denoted $N^j_{\text{ac}}$ and $N^j_{\text{nc}}$, respectively. Similarly, we count how often it is activated and non-activated on the misclassified examples, denoted $N^j_{\text{am}}$ and $N^j_{\text{nm}}$, respectively.
We then calculate a suspiciousness score for each neuron via the Tarantula measure \cite{jones2005tarantula},
\begin{align}\label{eq:tarantula}
s_j = \frac{N^j_{\text{am}}/(N^j_{\text{am}}+N^j_{\text{nm}})}{N^j_{\text{am}}/(N^j_{\text{am}}+N^j_{\text{nm}})+N^j_{\text{ac}}/(N^j_{\text{ac}}+N^j_{\text{nc}})}
\end{align}
where $s_j$ quantifies the suspiciousness of the $j$th neuron; a higher $s_j$ means the neuron is more vulnerable.
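The Tarantula measure of \reqref{eq:tarantula} can be sketched directly from the four spectrum counts. The sketch below is illustrative (toy counts, and it assumes every neuron has at least one activation record in each class of examples, so no denominator is zero):

```python
# Tarantula suspiciousness of one neuron from its activation spectrum:
# N_am / N_nm = activated / non-activated counts on misclassified
# examples; N_ac / N_nc = the same counts on correct examples.

def tarantula(n_am, n_nm, n_ac, n_nc):
    fail_rate = n_am / (n_am + n_nm)   # activation rate on failures
    pass_rate = n_ac / (n_ac + n_nc)   # activation rate on passes
    return fail_rate / (fail_rate + pass_rate)

# A neuron firing mostly on failures is highly suspicious...
s_bad = tarantula(9, 1, 1, 9)       # active on 90% of failures, 10% of passes
# ...while a neuron firing equally often on both is neutral.
s_neutral = tarantula(5, 5, 5, 5)
```

The score lies in $[0,1]$: it approaches 1 for neurons that fire almost exclusively on misclassified examples and 0.5 for neurons whose activation carries no class signal.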
\subsubsection{Adversarial-aware block spectrum analysis}
With the above neuron spectrum analysis, we obtain the suspiciousness scores of all neurons, forming the set $\mathcal{S}=\{s_j\}$.
However, these scores rest purely on statistical analysis and are not directly tied to the training objective, which makes the localization less effective.
To alleviate this issue, we refine the suspiciousness scores with adversarial information under the guidance of the loss function (\emph{e.g.}, the cross-entropy function for classification).
Specifically, we select the failure examples in $\mathcal{D}^\text{repair}$ to form a subset $\mathcal{D}^\text{repair}_\text{fail}$.
For each example in $\mathcal{D}^\text{repair}_\text{fail}$, we calculate the gradients of all neurons w.r.t. the loss function. We then average each neuron's gradient over all examples, obtaining the set $\mathcal{G} = \{g_j\}$, where $g_j$ is the average gradient of the $j$th neuron over $\mathcal{D}^\text{repair}_\text{fail}$.
Intuitively, a larger gradient means the corresponding neuron contributes more to the misclassification and should be tuned to minimize the loss.
For the $i$th block, we define its gradient as the average over all neurons in the block, \emph{i.e.}, $G_i = \frac{1}{|\mathcal{W}_\text{b}^i|}\sum_{\mathbf{w}_j\in\mathcal{W}_\text{b}^i}g_j$.
We also compute the average gradient across all blocks, \emph{i.e.}, $\overline{G}=\frac{1}{B}\sum_{i=1}^{B}G_i$.
Then, we use these gradients to reweight the suspiciousness scores of all neurons.
\begin{align}\label{eq:reweight}
\hat{s}_j = \frac{|g_j-\overline{G}|}{\max(\{|g_j-\overline{G}|\})} s_j.
\end{align}
The principle behind this strategy is that the suspiciousness score of the $j$th neuron is decreased when its relative gradient is small.
As a result, we update the suspiciousness set $\mathcal{S}$ into $\hat{\mathcal{S}}=\{\hat{s}_j\}$.
A block of the DNN consists of a set of neurons, so we collect the updated suspiciousness scores of the neurons in the $i$th block into the set $\hat{\mathcal{S}}_i$.
There are $B$ such sets, and $\hat{\mathcal{S}}$ can be partitioned as $\{\hat{\mathcal{S}}_i\}_{i=1}^B$.
We then use a threshold $\epsilon$ to select the vulnerable neurons: any neuron with $\hat{s}_j>\epsilon$ is identified as vulnerable.
Finally, we count the number of vulnerable neurons in each $\hat{\mathcal{S}}_i$, and the block containing the largest number of vulnerable neurons is identified as the target block to repair.
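The whole localization step, reweighting via \reqref{eq:reweight}, thresholding, and picking the block with the most vulnerable neurons, can be sketched as follows. This is an illustrative sketch with toy scores and gradients; `blocks` maps a block id to its neuron ids, and the threshold value is arbitrary.

```python
# Sketch of the vulnerable block localization (Algorithm 1):
# s: {neuron_id: Tarantula suspiciousness s_j}
# g: {neuron_id: average gradient g_j on the failure examples}

def localize_block(s, g, blocks, eps):
    # Per-block average gradient G_i, and the mean over blocks G_bar.
    G = {b: sum(g[j] for j in ns) / len(ns) for b, ns in blocks.items()}
    g_bar = sum(G.values()) / len(G)
    # Eq. (reweight): rescale each neuron's suspiciousness by its
    # relative gradient deviation.
    dev = {j: abs(g[j] - g_bar) for j in g}
    m = max(dev.values())
    s_hat = {j: dev[j] / m * s[j] for j in s}
    # Count vulnerable neurons (s_hat > eps) per block; the block
    # with the largest count is the repair target.
    counts = {b: sum(1 for j in ns if s_hat[j] > eps) for b, ns in blocks.items()}
    return max(counts, key=counts.get), s_hat

blocks = {"blk1": [0, 1], "blk2": [2, 3]}
s = {0: 0.3, 1: 0.2, 2: 0.9, 3: 0.8}
g = {0: 0.1, 1: 0.1, 2: 0.9, 3: 0.7}
target, s_hat = localize_block(s, g, blocks, eps=0.4)
```

In this toy setting the neurons of `blk2` combine high suspiciousness with gradients far from the block average, so the reweighted scores concentrate the vulnerable neurons there.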
\begin{algorithm}[t]
\caption{Vulnerable block localization}
\label{alg:suspiciousness_ranking}
\KwIn{ A DNN $\phi_{(\mathcal{W},\mathcal{A})}$ and datasets $\mathcal{D}^\text{repair}$ and $\mathcal{D}^\text{repair}_\text{fail}$}
\KwOut{$\mathcal{W}^*_\text{b}$,$\mathcal{A}^*_\text{b}$}
Calculate suspiciousness scores $\mathcal{S}$ of all neurons via \reqref{eq:tarantula}\;
Calculate the gradients of all neurons on $\mathcal{D}^\text{repair}_\text{fail}$ and get $\mathcal{G}$\;
Update the suspiciousness scores $\mathcal{S}$ and get $\hat{\mathcal{S}}$\;
Identify the vulnerable neurons via a threshold $\epsilon$\;
Localize the vulnerable block with maximum number of vulnerable neurons\;
\end{algorithm}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{image/suspicious_b_bar.pdf}
\caption{
Numbers of suspicious neurons per block of ResNet-18 when the threshold $\epsilon$ is set to the value that selects the top-$50$ neurons of the suspiciousness ranking, computed with $\mathcal{S}$ (left) and $\hat{\mathcal{S}}$ (right), respectively.
}
\label{fig:sr_comparsion}
\end{figure}
We summarize the whole process of the block localization in Algorithm~\ref{alg:suspiciousness_ranking}.
To validate its advantages, we conduct an experiment comparing the effectiveness and stability of the blocks localized from ${\mathcal{S}}$ and from $\hat{\mathcal{S}}$.
To assess stability, we vary the size of the dataset $\mathcal{D}^{\text{repair}}_\text{fail}$.
We observe that as the dataset size changes, the per-block counts of suspicious neurons obtained from ${\mathcal{S}}$ vary significantly, while those obtained from $\hat{\mathcal{S}}$ are much more stable and lead to a unanimous conclusion.
As shown in \figref{fig:sr_comparsion}, on ResNet-18, judging by the number of suspicious neurons per block, ${\mathcal{S}}$ and $\hat{\mathcal{S}}$ identify `block 1' and `block 4' as the most vulnerable, respectively. We observe similar results when the threshold $\epsilon$ is set to other values (\emph{e.g.}, $\epsilon_{10}$, $\epsilon_{20}$, $\epsilon_{30}$, $\epsilon_{40}$, $\epsilon_{100}$).
We also conduct a detailed quantitative analysis and discussion in \secref{subsec:exp-rq3}, showing that repairing the most vulnerable block, \emph{i.e.}, `block 4', achieves a much higher improvement.
\subsection{Architecture-oriented Search-based Repairing}\label{subsec:blr-searching}
After localizing the targeted block, the next challenge is how to break the bottleneck of its old architecture and repair it so that it becomes competent for the task.
To this end, we formulate the first block-level architecture and weight repairing as a network architecture search task.
Given a deployed DNN with pre-trained weights and a fixed architecture (\emph{i.e.}, $\phi_{(\mathcal{W},\mathcal{A})}$), we first relax the targeted block (\emph{i.e.}, $\phi_{(\mathcal{W}^*_\text{b},\mathcal{A}^*_\text{b})}$) into a directed acyclic graph, like the cell structure in differentiable architecture search (DARTS) \cite{liu2018darts}, composed of an ordered sequence of nodes connected by edges. Intuitively, a node corresponds to a deep feature while an edge denotes an operation layer, such as a convolutional layer.
Our goal is to optimize the edges, \emph{i.e.}, to determine which pairs of nodes should be connected and which operation should be selected for each connection.
To this end, the key issues are to define the architecture search space and optimization strategy.
\subsubsection{Architecture search space for the targeted block}
To better illustrate the architecture search process, we take ResNet as an example. Given a ResNet block containing $K$ operation layers, we reformulate it as a directed acyclic graph with $K+1$ nodes $\{\mathbf{X}^k\}_{k=1}^{K+1}$, allowing each node to accept the outputs of all previous nodes instead of following the original sequential order.
As shown in \figref{fig:workflow}, we present an example of the graph representation of the targeted block via nodes and edges.
Specifically, we denote the edge for connecting the $i$th and $j$th nodes as $\text{e}_{(i,j)}$ and the node $\mathbf{X}^j$ can be calculated by
\begin{align} \label{eq:cal_node}
\mathbf{X}^j=\sum_{i=1}^{j-1}\text{e}_{(i,j)}(\mathbf{X}^{i}),
\end{align}
where $\text{e}_{(i,j)}(\mathbf{X}^{i})$ is an edge taking the node $\mathbf{X}^{i}$ as the input.
Then, we define an operation set $\mathcal{O}$ containing six candidate operations as presented in \tableref{tab:op}, each of which can be set as the edge.
For example, when we select `None' for $\text{e}_{(i,j)}$, the two nodes $\mathbf{X}^{i}$ and $\mathbf{X}^{j}$ should not be connected.
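The DAG forward pass of \reqref{eq:cal_node} can be sketched with toy scalar "features" standing in for tensors. This is an illustrative sketch only: the operation set here (`none`, `skip`, and a stand-in `double` for a learnable layer) is hypothetical and much smaller than $\mathcal{O}$ in \tableref{tab:op}.

```python
# Block-as-DAG forward pass: node j sums every earlier node i
# transformed by the operation assigned to edge e_(i,j).

OPS = {
    "none": lambda x: 0.0,        # 'None' edge: nodes not connected
    "skip": lambda x: x,          # identity edge
    "double": lambda x: 2.0 * x,  # toy stand-in for a learnable layer
}

def forward_block(x1, edges, num_nodes):
    """edges: {(i, j): op_name}; nodes are indexed 1..num_nodes and
    node 1 holds the block input. Missing edges default to 'none'."""
    nodes = {1: x1}
    for j in range(2, num_nodes + 1):
        nodes[j] = sum(OPS[edges.get((i, j), "none")](nodes[i])
                       for i in range(1, j))
    return nodes

edges = {(1, 2): "double", (1, 3): "skip", (2, 3): "skip"}
nodes = forward_block(1.0, edges, num_nodes=3)
```

The original sequential block corresponds to the special case where only the edges $(j-1, j)$ carry a non-`none` operation, which is why the raw block is contained in this search space.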
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{image/workflow.pdf}\\
\caption{The overall workflow of {\emph{ArchRepair}}{}. Given a deployed DNN model, we first apply the Vulnerable Block Localization to identify the most vulnerable block. Then, we continue to formulate the block repairing as a DNN architecture search problem, and the block's architecture and parameters are optimized jointly through Architecture-oriented Search-based Repairing.}
\label{fig:workflow}
\end{figure*}
Note that the original sequentially ordered ResNet block is a special case within the defined search space, so we can naturally inherit the original weights and architecture as the initialization for the subsequent optimization.
\subsubsection{Architecture and weights optimization}
The optimization goal is to select a suitable operation for each edge from the operation set.
To this end, we relax the selection as a continuous process by regarding the edge connecting the node $i$ and $j$ as a weighted combination of the outputs of all candidate operations
\begin{align} \label{eq:relax_node}
\text{e}_{(i,j)}(\mathbf{X}^{i})=\sum_{\text{o}\in\mathcal{O}}\frac{\exp{(\alpha^\text{o}_{(i,j)})}}{\sum_{\text{o}'\in\mathcal{O}} \exp{(\alpha^{\text{o}'}_{(i,j)})}}\text{o}(\mathbf{X}^{i})
\end{align}
where the parameter $\alpha^\text{o}_{(i,j)}$ determines the combination weight of using the operation $\text{o}$ for connecting the $i$th and $j$th nodes.
As a result, we define the architecture parameters of the edge $\text{e}_{(i,j)}$ as a vector $\mathbf{a}_{(i,j)}=[\alpha_{(i,j)}^\text{o}|\text{o}\in\mathcal{O}]$ assigning each operation in $\mathcal{O}$ a combination weight.
For the whole block, we denote its architecture as $\mathcal{A}_{\text{b}}^*=\{\mathbf{a}_{(i,j)}\}$ and the corresponding parameters of all candidate operations as $\mathcal{W}_{\text{b}}^*=\{\mathbf{w}_{(i,j)}\}$.
We can then instantiate the repairing process of \reqref{eq:block_repair-2} by alternately optimizing the weights (\emph{i.e.}, $\mathcal{W}_\text{b}^*$) on the training dataset and the architecture parameters (\emph{i.e.}, $\mathcal{A}_\text{b}^*$) on the validation dataset, that is, we have
\begin{align}
\hat{\mathcal{W}}^{*}_\text{b} &= \operatornamewithlimits{arg\,min}_{\mathcal{W}_\text{b}^*} \text{J}(\phi_{(\mathcal{W}_\text{b}^*,\mathcal{A}_\text{b}^*)}, \mathcal{D}^{\text{repair}}_{\text{train}}) \label{eq:nas_repair-1}, \\
\hat{\mathcal{A}}^{*}_\text{b} &= \operatornamewithlimits{arg\,min}_{\mathcal{A}_\text{b}^*} \text{J}(\phi_{(\hat{\mathcal{W}}_\text{b}^*,\mathcal{A}_\text{b}^*)}, \mathcal{D}^{\text{repair}}_{\text{val}}) \label{eq:nas_repair-2}
\end{align}
where $\text{J}(\cdot)$ is specified as the cross-entropy loss function for the image classification task.
During the training process, we initialize the block architecture $\mathcal{A}^{*}_\text{b}$ with the raw block architecture of the targeted DNN, and update the architecture and weights alternately.
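The alternating scheme of \reqref{eq:nas_repair-1} and \reqref{eq:nas_repair-2} can be sketched as follows. The gradient functions are placeholders for one training-set step on $\mathcal{W}_\text{b}^*$ and one validation-set step on $\mathcal{A}_\text{b}^*$; the quadratic losses are made up purely to show the alternation converging.

```python
def alternate_optimize(w, a, grad_w, grad_a, lr=0.1, steps=50):
    """Alternately update operation weights W on the training loss and
    architecture parameters A on the validation loss (one step each)."""
    for _ in range(steps):
        w = w - lr * grad_w(w, a)  # Eq. (nas_repair-1): fit W on D_train
        a = a - lr * grad_a(w, a)  # Eq. (nas_repair-2): fit A on D_val
    return w, a

# Toy convex losses with known minima (w* = 2 on train, a* = -1 on val).
grad_w = lambda w, a: 2 * (w - 2.0)
grad_a = lambda w, a: 2 * (a + 1.0)
w_opt, a_opt = alternate_optimize(w=0.0, a=0.0, grad_w=grad_w, grad_a=grad_a)
```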
We will detail the repairing process in \secref{subsec:blr-workflow}.
After obtaining the optimized architecture (i.e., $\hat{\mathcal{A}}_\text{b}^*$) in the continuous search space, we set each edge to the operation with the maximum combination weight, i.e., $\text{e}_{(i,j)} =\operatornamewithlimits{arg\,max}_{\text{o}\in\mathcal{O}}\alpha_{(i,j)}^\text{o}$. Then, we retrain the weights $\hat{\mathcal{W}}^*_\text{b}$ with the block architecture fixed.
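The discretization step can be sketched as below: for every edge we keep only the operation whose architecture parameter is largest. The edge names and alpha values are hypothetical.

```python
import numpy as np

def discretize(arch_params, op_names):
    """Collapse the continuous architecture: for each edge, keep the
    operation with the maximum combination weight (the argmax over
    that edge's alpha vector) and drop all others."""
    return {
        edge: op_names[int(np.argmax(alphas))]
        for edge, alphas in arch_params.items()
    }

op_names = ["None", "Skip", "AvgPool", "MaxPool", "SepConv", "DilConv"]
arch_params = {  # hypothetical optimized alphas for two edges
    (0, 1): np.array([0.1, 0.2, 0.1, 0.1, 1.5, 0.3]),
    (1, 2): np.array([0.0, 0.9, 0.2, 0.1, 0.3, 0.2]),
}
final_arch = discretize(arch_params, op_names)
```

After this discretization, the weights are retrained with the now-fixed block architecture.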
\begin{table}[t]
\centering
\caption{All operators in the operation set $\mathcal{O}$.}
{
\begin{tabular}{l|l}
\toprule
Operators & Operations \\
\midrule
None & Add a Zero CNN layer whose weights are all zero. \\
Skip & Add an Identity CNN layer whose weights are all one. \\
AvgPool & Add an Average Pooling layer and an Identity CNN layer. \\
MaxPool & Add a Max Pooling layer and an Identity CNN layer. \\
SepConv & Add depthwise separable CNN layers. \\
DilConv & Add a CNN layer with dilation kernel and an Identity CNN layer. \\
\bottomrule
\end{tabular}
}
\label{tab:op}
\end{table}
\subsection{Our Repairing Algorithm}
\label{subsec:blr-workflow}
\figref{fig:workflow} displays the whole workflow of {\emph{ArchRepair}}{}.
Given a deployed DNN, we first employ the proposed vulnerable block localization to determine the block we aim to fix.
Specifically, we use the $\mathcal{D}^\text{repair}$ dataset and the neuron spectrum analysis to obtain the suspiciousness scores of all neurons, i.e., $\mathcal{S}=\{s_j\}$.
Meanwhile, we use the failure examples in $\mathcal{D}^\text{repair}$ (i.e., $\mathcal{D}^\text{repair}_\text{fail}$) to compute the gradients of all neurons w.r.t. the loss function (i.e., $\mathcal{G}=\{g_j\}$).
Then, we use Eq.~\eqref{eq:reweight} and $\mathcal{G}=\{g_j\}$ to reweight $\mathcal{S}=\{s_j\}$, obtaining $\hat{\mathcal{S}}=\{\hat{s}_j\}$.
After that, we count the vulnerable neurons through a threshold $\epsilon$: when the suspiciousness score of a neuron is larger than $\epsilon$, the neuron is identified as vulnerable.
Finally, the block with the largest number of vulnerable cases is selected as the targeted block we want to repair.
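The localization procedure can be sketched as follows. The multiplicative reweighting is an assumption for illustration (the actual form is given by Eq.~\eqref{eq:reweight}), and all scores, gradients, and block assignments below are made up.

```python
def locate_vulnerable_block(scores, grads, block_of, epsilon):
    """Reweight each neuron's spectrum-based suspiciousness s_j by its
    gradient g_j (assumed here: multiply by |g_j|), count per block how
    many reweighted scores exceed the threshold epsilon, and return the
    block containing the most vulnerable neurons."""
    counts = {}
    for j, (s, g) in enumerate(zip(scores, grads)):
        s_hat = s * abs(g)  # assumed reweighting; see Eq. (reweight)
        if s_hat > epsilon:
            block = block_of[j]
            counts[block] = counts.get(block, 0) + 1
    return max(counts, key=counts.get) if counts else None

# Toy example: 6 neurons spread over 2 blocks.
scores = [0.9, 0.2, 0.8, 0.7, 0.1, 0.95]
grads = [1.0, 0.1, 1.2, 0.9, 0.2, 1.1]
block_of = [0, 0, 1, 1, 0, 1]
target_block = locate_vulnerable_block(scores, grads, block_of, epsilon=0.5)
```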
During the architecture search-based repairing, we reformulate the targeted block as a directed acyclic graph where the deep features are nodes and operations are edges. Then, we relax each edge into a combination of six operations (i.e., \reqref{eq:relax_node}) whose combination weights correspond to the architecture parameters $\mathcal{A}_\text{b}^*=\{\mathbf{a}_{(i,j)}\}$. We use the dataset $\mathcal{D}^\text{repair}$ to conduct the architecture and weights optimization via \reqref{eq:nas_repair-1} and \reqref{eq:nas_repair-2}, where the original architecture and weights are inherited and serve as the optimization initialization.
Given the optimized block architecture in the continuous space (i.e., $\hat{\mathcal{A}}^*_\text{b}$), we discretize it to the final architecture by preserving, on each edge, the operation with the maximum combination weight and removing the other operations.
Finally, we use $\mathcal{D}^\text{repair}$ to fine-tune the weights with the optimized architecture fixed, yielding the repaired DNN.
\section{Experimental Design and Settings}\label{sec:exp}
In this section, we conduct extensive experiments to validate the proposed method and compare it with state-of-the-art DNN repair techniques, investigating the following research questions:
\begin{itemize}
\item \textbf{RQ1.} Does {\emph{ArchRepair}}{} outperform the state-of-the-art (SOTA) DNN repair techniques with better repairing effects?
\item \textbf{RQ2.} Could {\emph{ArchRepair}}{} repair DNNs on certain failure patterns without sacrificing robustness on clean data and other failure patterns?
\item \textbf{RQ3.} Is our proposed localization method effective in identifying vulnerable neuron blocks?
\item \textbf{RQ4.} How do different components of our proposed method impact the overall repairing performance?
\end{itemize}
\textbf{RQ1} intends to evaluate the overall repairing capability of {\emph{ArchRepair}}{} and to compare it to SOTA DNN repair techniques as baselines. \textbf{RQ2} aims at exploring the potential of our method in repairing DNNs on corrupted data, which reflects common robustness issues during a DNN's practical usage in operational environments. \textbf{RQ3} intends to examine whether the proposed localization method can precisely locate vulnerable blocks. \textbf{RQ4} explores the contribution that each of {\emph{ArchRepair}}{}'s key components makes to the overall performance of DNN repair.
\subsection{Experimental Setups} \label{subsec:exp-setup}
To answer the research questions above, we design our evaluation from multiple perspectives listed in the following.
{\bf Subject Datasets and Repairing Scenarios.}
Given a deployed DNN trained on a training dataset $\mathcal{D}^\text{t}$, we can evaluate it on a testing dataset $\mathcal{D}^\text{v}$.
In the real world, there are many scenarios that cannot be covered by $\mathcal{D}^\text{v}$, and the DNN's performance may decrease significantly after the DNN is deployed in its operational environment.
For example, there are common corruptions (i.e., noise patterns) in the real world that can affect the DNN significantly~\cite{hendrycks2019robustness}: Gaussian noise (GN), shot noise (SN), impulse noise (IN), defocus blur (DB), Gaussian blur (GB), motion blur (MB), zoom blur (ZB), snow (SNW), frost (FRO), fog (FOG), brightness (BR), contrast (CTR), elastic transform (ET), pixelate (PIX), and JPEG compression (JPEG).
Based on the above situations, we consider two repairing scenarios that commonly occur in practice:
\begin{itemize}
\item {\bf Repairing the accuracy drift on the testing dataset.} When we evaluate the DNN on the testing dataset $\mathcal{D}^\text{v}$, we can collect a few failure examples (i.e., 1,000 examples) denoted as $\mathcal{D}^\text{v}_{\text{fail}}$. Then, we set $\mathcal{D}^{\text{repair}}=\mathcal{D}^\text{v}_{\text{fail}}\cup\mathcal{D}^\text{t}$ and use the proposed or baseline repairing methods to enhance the deployed DNNs. We evaluate the accuracy on the testing dataset with $\mathcal{D}^\text{v}_{\text{fail}}$ excluded (i.e., $\mathcal{D}^\text{v}\setminus\mathcal{D}^\text{v}_{\text{fail}}$).
%
Note that repairing a DNN with only a few testing data is a meaningful and important setting, adopted by recent works~\cite{ren2020fewshot,yu2021deeprepair}.
In addition, there are many practical scenarios in which collecting failure examples is difficult or costly, so only a few can be obtained. Hence, we follow the common choice in recent works~\cite{ren2020fewshot,yu2021deeprepair} and select only 1,000 failure examples from the testing data.
%
\item {\bf Repairing the robustness on corrupted datasets.} When we evaluate the DNN on a corrupted testing dataset $\mathcal{D}^\text{c}$, we can also collect a few failure examples (i.e., 1,000 examples) denoted as $\mathcal{D}^\text{c}_{\text{fail}}$ and set $\mathcal{D}^{\text{repair}}=\mathcal{D}^\text{c}_{\text{fail}}\cup\mathcal{D}^\text{t}$. The repairing goal is to enhance the accuracy on $\mathcal{D}^\text{c}\setminus\mathcal{D}^\text{c}_{\text{fail}}$ and other corrupted datasets while maintaining the accuracy on the clean testing dataset (i.e., $\mathcal{D}^\text{v}\setminus\mathcal{D}^\text{v}_{\text{fail}}$).
\end{itemize}
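Both scenarios construct $\mathcal{D}^{\text{repair}}$ in the same way: collect up to 1,000 failure examples from an evaluation set and union them with the training set. A minimal sketch, where the toy model and data are hypothetical:

```python
def build_repair_set(train_set, eval_set, predict, n_fail=1000):
    """Collect up to n_fail misclassified examples from an evaluation
    set (clean D^v or a corrupted D^c) and union them with the
    training set D^t to form D^repair."""
    failures = [(x, y) for x, y in eval_set if predict(x) != y][:n_fail]
    return failures + list(train_set), failures

# Toy example: a "model" that always predicts class 0.
predict = lambda x: 0
eval_set = [(1, 0), (2, 1), (3, 1), (4, 0)]   # two failures: (2,1), (3,1)
train_set = [(5, 0), (6, 1)]
d_repair, d_fail = build_repair_set(train_set, eval_set, predict, n_fail=1000)
```

Evaluation is then performed on the evaluation set with the collected failures excluded.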
We choose CIFAR-10~\cite{krizhevsky2009cifar} and Tiny-ImageNet~\cite{le2015tinyimg} as the evaluation datasets. They are commonly used in recent DNN repair studies, which enables relatively fair comparative studies.
Each dataset contains their respective training dataset $\mathcal{D}^\text{t}$ and testing dataset $\mathcal{D}^\text{v}$.
CIFAR-10 contains a total of 60,000 images in 10 categories, in which 50,000 images are for $\mathcal{D}^\text{t}$ and the other 10,000 are for $\mathcal{D}^\text{v}$.
Tiny-ImageNet has a training dataset $\mathcal{D}^\text{t}$ with the size of 100,000 images, and a testing dataset $\mathcal{D}^\text{v}$ with the size of 10,000 images.
Correspondingly, we construct corrupted testing datasets $\{\mathcal{D}^\text{c}_i\}$, where $i=1,2,\dots, 15$ corresponds to the fifteen corruptions above~\cite{hendrycks2019robustness}.
{\bf DNN architectures.}
We select four different DNN architectures, i.e., ResNet-18, ResNet-50, ResNet-101~\cite{he2016resnet}, and DenseNet-121~\cite{huang2017densenet}. Given that {\emph{ArchRepair}}{} is a block-based repairing method, the block-like architecture, ResNet, is a natural research subject. For a broader comparison, we also choose a non-block-like architecture, DenseNet-121, to examine the repairing capability of {\emph{ArchRepair}}{}\footnote{For DenseNet-121, we manually group two consecutive convolution blocks as one block when repairing.}. For each architecture, we first pre-train it on the original training dataset $\mathcal{D}^\text{t}$ (from CIFAR-10 or Tiny-ImageNet), and the model with the highest accuracy on the testing dataset $\mathcal{D}^\text{v}$ (from CIFAR-10 or Tiny-ImageNet) is saved as the pre-trained model $\phi_\theta$. As the original ResNet and DenseNet are not designed for the CIFAR-10 and Tiny-ImageNet datasets, we use the unofficial architecture code offered by a popular GitHub project\footnote{Train CIFAR10 with PyTorch: https://github.com/kuangliu/pytorch-cifar}, which has more than 4.1K stars.
{\bf Hyper-parameters.}
In terms of the training setup, we employ stochastic gradient descent (SGD) as the optimizer, setting the batch size to 128, the initial learning rate to 0.1, and the weight decay to 0.0005. We use cross-entropy as the loss function. The maximum number of epochs is 500, and an early-stop function terminates the training phase when the validation loss has not decreased for 10 epochs.
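The early-stopping rule described above can be sketched as a simple training skeleton, where `step` and `validate` are placeholders for one epoch of SGD updates and a validation pass:

```python
def train_with_early_stop(step, validate, max_epochs=500, patience=10):
    """Run up to max_epochs; stop once the validation loss has not
    improved for `patience` consecutive epochs."""
    best_loss, wait = float("inf"), 0
    for epoch in range(max_epochs):
        step(epoch)                 # one epoch of SGD updates
        val_loss = validate(epoch)
        if val_loss < best_loss:
            best_loss, wait = val_loss, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_loss, epoch

# Toy validation curve: improves for 3 epochs, then plateaus.
losses = [5.0, 4.0, 3.0] + [3.0] * 97
best, stopped_at = train_with_early_stop(
    step=lambda e: None, validate=lambda e: losses[e],
    max_epochs=100, patience=10)
```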
{\bf Baselines.}
To demonstrate the repairing capability of the proposed {\emph{ArchRepair}}{}, we select six SOTA DNN repair methods from two categories as baselines: neuron-level and network-level repairing methods. Neuron-level repairing methods fix a DNN by adjusting the weights of certain neurons; representative methods in this category are MODE~\cite{ma2018mode}, Apricot~\cite{zhang2019apricot}, and Arachne~\cite{sohn2019arachne}. Network-level repairing methods mainly repair DNNs by fine-tuning the whole network with augmented datasets, among which SENSEI~\cite{gao2020sensei}, Few-Shot~\cite{ren2020fewshot}, and DeepRepair~\cite{yu2021deeprepair} are the most popular.
For a fair comparison, we employ the same settings for all six baseline methods and {\emph{ArchRepair}}{}. To fully evaluate the effectiveness of the proposed method, we apply all methods (six baselines and {\emph{ArchRepair}}{}) to fix four different DNN architectures on large-scale datasets, including the clean version and 15 corrupted versions of CIFAR-10 and Tiny-ImageNet, to assess the repairing capability.
{\bf Other configurations.}
We implement {\emph{ArchRepair}}{} in Python 3.9 based on the PyTorch framework. All the experiments were performed on the same server with a 12-core 3.60GHz Xeon E5-1650 CPU, 128GB RAM, and four NVIDIA GeForce RTX 3090 GPUs (24GB memory each). The operating system is Ubuntu 18.04.
In summary, for each baseline method and {\emph{ArchRepair}}{}, our evaluation consists of 64 configurations (4 DNN architectures $\times$ 16 versions of a dataset~\footnote{one clean dataset (repairing the accuracy drift on testing dataset) and fifteen corruption datasets (repairing the robustness on corrupted datasets)}) on both CIFAR-10 and Tiny-ImageNet. For CIFAR-10 dataset, an execution of training and repairing a model under one specific configuration costs about 12 hours on average (the maximum one is about 50 hours); while for Tiny-ImageNet dataset, an execution of training and repairing a model takes about 18 hours on average (the maximum one is about 64 hours). Overall, the total execution time of our experiments is more than 2 months.
\section{Experimental Results}
In this section, we summarize the high-level results and findings for answering our research questions.
\begin{table*}[t]
\centering
\caption{Accuracy (\%) of 4 different DNNs (i.e., ResNet-18, ResNet-50, ResNet-101, and DenseNet-121) repaired on 2 datasets (i.e., CIFAR-10 and Tiny-ImageNet) by different repairing methods. }
\resizebox{1\linewidth}{!}{
\begin{tabular}{l|cccc|cccc}
\toprule
\multirow{2}{*}{\bf Baseline} & \multicolumn{4}{c|}{\bf CIFAR-10} & \multicolumn{4}{c}{\bf Tiny-ImageNet} \\
& \bf ResNet-18 & \bf ResNet-50 & \bf ResNet-101 & \bf DenseNet-121 & \bf ResNet-18 & \bf ResNet-50 & \bf ResNet-101 & \bf DenseNet-121 \\
\midrule
\bf Original & 85.00 & 85.17 & 85.72 & 87.97 & 45.15 & 46.27 & 46.14 & \cellcolor{tab_red}48.73 \\
\midrule
\bf MODE~\cite{ma2018mode} & 85.13 & 85.26 & 86.19 & 88.28 & 45.75 & 45.93 & 45.87 & 47.69 \\
\bf Apricot~\cite{zhang2019apricot} & 86.80 & 88.95 & 89.74 & 89.93 & 46.30 & 46.85 & 45.90 & 45.27 \\
\bf Arachne~\cite{sohn2019arachne} & 85.38 & 87.95 & 89.37 & 91.25 & 46.73 & 47.37 & 46.75 & 46.95 \\
\midrule
\bf SENSEI~\cite{gao2020sensei} & 85.20 & 86.25 & 88.73 & 89.73 & 45.82 & 46.92 & 46.38 & 45.38 \\
\bf Few-Shot~\cite{ren2020fewshot} & 86.28 & 86.35 & 88.28 & 88.57 & 45.82 & 46.92 & 45.87 & 45.26 \\
\bf DeepRepair~\cite{yu2021deeprepair} & 87.20 & 87.46 & 88.94 & 90.56 & 46.78 & 47.69 & \cellcolor{tab_red}46.94 & 46.97 \\
\midrule
\bf {\emph{ArchRepair}}{} (ours) & \cellcolor{tab_red}88.29 & \cellcolor{tab_red}89.58 & \cellcolor{tab_red}90.38 & \cellcolor{tab_red}91.37 & \cellcolor{tab_red}47.35 & \cellcolor{tab_red}47.82 & 46.73 & 46.84 \\
\bottomrule
\end{tabular}
}
\label{tab:rq1}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{image/RQ1_C16_cifar.pdf}
\caption{Comparing the repairing methods on different DNNs (i.e., ResNet-18, ResNet-50, ResNet-101, and DenseNet-121) by contrasting the accuracy of repaired DNNs on CIFAR-10's testing dataset (i.e., $\mathcal{D}^\text{v}$) and corruption datasets (i.e., $\mathcal{D}^\text{c}$).}
\label{fig:rq1_cifar_bar}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{image/RQ1_C16_tiny_imagenet.pdf}
\caption{Comparing the repairing methods on different DNNs (i.e., ResNet-18, ResNet-50, ResNet-101, and DenseNet-121) by contrasting the accuracy of repaired DNNs on Tiny-ImageNet's testing dataset (i.e., $\mathcal{D}^\text{v}$) and corruption datasets (i.e., $\mathcal{D}^\text{c}$).}
\label{fig:rq1_tiny_imagenet_bar}
\end{figure*}
\subsection{RQ1: Does {\emph{ArchRepair}}{} outperform the state-of-the-art (SOTA) DNN repair techniques?}\label{subsec:exp-rq1}
To answer RQ1, we train 4 DNNs (i.e., ResNet-18, ResNet-50, ResNet-101, and DenseNet-121) on CIFAR-10's and Tiny-ImageNet's training datasets (i.e., $\mathcal{D}^\text{t}$) and evaluate them on the respective testing datasets (i.e., $\mathcal{D}^\text{v}$). To evaluate the performance of our method (i.e., {\emph{ArchRepair}}{}), we apply six different SOTA methods as well as {\emph{ArchRepair}}{} to repair these 4 DNNs. The evaluation results are summarized in \tableref{tab:rq1}. In general, {\emph{ArchRepair}}{} exhibits significant advantages over all baseline methods on the 4 DNNs, demonstrating the effectiveness and generalization ability of the proposed method.
In particular, compared with the state-of-the-art DNN repair methods (i.e., the neuron-level repairing method Arachne~\cite{sohn2019arachne} and the network-level repairing method DeepRepair~\cite{yu2021deeprepair}), {\emph{ArchRepair}}{} achieves much higher accuracy on all 4 DNNs on the CIFAR-10 dataset. On the more challenging dataset, Tiny-ImageNet, {\emph{ArchRepair}}{} still achieves much higher accuracy on 2 out of 4 DNNs. Note that on DenseNet-121, all the repairing methods fail to repair, i.e., they do not improve the performance compared to the original network. One possible explanation is that the original DenseNet-121's performance has almost reached the upper bound of the classification accuracy on Tiny-ImageNet (the highest accuracy among the 4 DNNs), hence there might not be much room for improvement in terms of accuracy.
Furthermore, to understand the influence of repairing on DNN robustness, we evaluate the repaired DNNs' performance on corruption datasets (i.e., CIFAR-10-C~\cite{hendrycks2019robustness} and Tiny-ImageNet-C~\cite{hendrycks2019robustness}). CIFAR-10-C and Tiny-ImageNet-C contain 15 types of natural corruption datasets; we show the results on CIFAR-10-C in \figref{fig:rq1_cifar_bar} and on Tiny-ImageNet-C in \figref{fig:rq1_tiny_imagenet_bar}. As shown in \figref{fig:rq1_cifar_bar}, {\emph{ArchRepair}}{} achieves the highest accuracy on a majority of corruption datasets across the three variants of ResNet (8/15, 9/15, and 7/15 on ResNet-18, ResNet-50, and ResNet-101, respectively), in addition to the best performance on the clean dataset.
Even on DenseNet-121, which is not a block-like DNN, {\emph{ArchRepair}}{} achieves promising performance compared with the SOTA method Apricot~\cite{zhang2019apricot}.
The performance of {\emph{ArchRepair}}{} is also significant on Tiny-ImageNet-C. As mentioned before, Tiny-ImageNet is far more challenging. Nevertheless, {\emph{ArchRepair}}{} still outperforms the baselines in terms of robustness on a majority of corruption datasets across the three variants of ResNet (9/15, 9/15, and 7/15 on ResNet-18, ResNet-50, and ResNet-101, respectively) as well as on the non-block-like DNN DenseNet-121 (8/15).
This confirms that {\emph{ArchRepair}}{} does not harm the DNN's robustness; on the contrary, it can even improve the DNN's generalization ability towards classifying corrupted data.
\begin{tcolorbox}[size=title]
{\textbf{Answer to RQ1:} According to the experimental results on the clean datasets, {\emph{ArchRepair}}{} outperforms the SOTA repairing methods on all 4 DNNs with different architectures (i.e., ResNet-18, ResNet-50, ResNet-101, and DenseNet-121). Moreover, the experimental results on the corruption datasets also support that {\emph{ArchRepair}}{} can repair a DNN without harming its robustness. }
\end{tcolorbox}
\begin{table*}
\centering
\caption{Accuracy (\%) of a deployed ResNet-18 repaired by different repairing methods on 15 different corruption patterns.}
\resizebox{\linewidth}{!}{
\begin{tabular}{c|l|cccccccccccccccc}
\toprule
\multicolumn{2}{c|}{\bf ResNet-18}
& \bf Clean & \bf GN & \bf SN & \bf IN & \bf DB & \bf GB & \bf MB & \bf ZB & \bf SNW & \bf FRO & \bf FOG & \bf BR & \bf CTR & \bf ET & \bf PIX & \bf JPEG \\
\midrule
\multirow{6}{*}{\rotatebox{90}{CIFAR-10-C}}
& \bf Original & 85.000 & 61.452 & 67.392 & 61.944 & 74.762 & 54.782 & 66.348 & 69.476 & 71.408 & 70.114 & 73.532 & 82.736 & 58.716 & 74.822 & 72.364 & 78.752 \\
& \bf Apricot~\cite{zhang2019apricot} & 86.644 & 76.930 & \cellcolor{tab_red}78.656 & 77.694 & 75.827 & 66.390 & \cellcolor{tab_red}76.810 & \cellcolor{tab_red}79.851 & 76.406 & 77.269 & 78.979 & \cellcolor{tab_red}89.254 & 74.390 & 75.112 & 75.350 & 75.810 \\
& \bf Arachne~\cite{sohn2019arachne} & 88.451 & 77.144 & 77.715 & \cellcolor{tab_red}78.976 & 76.546 & 65.815 & 75.963 & 77.712 & 77.862 & 77.224 & 79.200 & 86.913 & 75.792 & 73.876 & \cellcolor{tab_red}77.694 & 74.402 \\
& \bf SENSEI~\cite{gao2020sensei} & 86.525 & 68.762 & 70.471 & 73.345 & 76.842 & 60.244 & 71.229 & 73.297 & 73.732 & 73.814 & 76.975 & 83.006 & 64.861 & 72.814 & 75.833 & \cellcolor{tab_red}79.495 \\
& \bf DeepRepair~\cite{yu2021deeprepair} & 88.159 & 75.197 & 73.990 & 75.807 & 77.369 & 63.263 & 75.703 & 74.973 & 76.999 & 76.872 & 77.884 & 83.967 & 72.889 & 76.594 & 74.669 & 77.726 \\
& \bf {\emph{ArchRepair}}{} (ours) & \cellcolor{tab_red}90.177 & \cellcolor{tab_red}77.546 & 77.689 & 73.237 & \cellcolor{tab_red}80.679 & \cellcolor{tab_red}67.523 & 75.998 & 77.697 & \cellcolor{tab_red}77.867 & \cellcolor{tab_red}80.677 & \cellcolor{tab_red}79.854 & 85.146 & \cellcolor{tab_red}79.026 & \cellcolor{tab_red}78.053 & 77.448 & 77.967 \\
\midrule
\multirow{6}{*}{\rotatebox{90}{Tiny-ImageNet-C}}
& \bf Original & 45.150 & 15.912 & \cellcolor{tab_red}16.972 & 15.482 & 14.281 & 14.337 & 13.648 & 12.191 & 13.562 & 16.452 & 15.119 & 13.823 & 6.130 & 12.657 & 10.819 & 13.577 \\
& \bf Apricot~\cite{zhang2019apricot} & 46.732 & 16.703 & 15.270 & 15.339 & 14.266 & 14.762 & 13.047 & 11.959 & 13.319 & \cellcolor{tab_red}19.550 & 14.838 & 14.041 & 8.790 & 11.231 & 9.227 & \cellcolor{tab_red}14.825 \\
& \bf Arachne~\cite{sohn2019arachne} & 46.297 & 16.302 & 15.932 & 15.932 & \cellcolor{tab_red}14.938 & 15.152 & 14.119 & 11.695 & 13.805 & 18.986 & 15.106 & 14.123 & 8.253 & 11.831 & 10.145 & 13.918 \\
& \bf SENSEI~\cite{gao2020sensei} & 45.824 & 15.270 & 14.870 & 14.390 & 14.664 & 15.052 & 14.191 & 12.112 & \cellcolor{tab_red}13.917 & 17.250 & 14.943 & 13.602 & 9.117 & 12.902 & 11.277 & 14.772 \\
& \bf DeepRepair~\cite{yu2021deeprepair} & 46.780 & 17.032 & 15.673 & 15.277 & 14.669 & \cellcolor{tab_red}15.324 & 13.570 & 12.478 & 13.624 & 18.950 & 15.152 & 14.145 & 9.385 & 13.496 & 11.926 & 14.597 \\
& \bf {\emph{ArchRepair}}{} (ours) & \cellcolor{tab_red}47.350 & \cellcolor{tab_red}17.820 & 15.779 & \cellcolor{tab_red}16.376 & 14.769 & 15.224 & \cellcolor{tab_red}15.967 & \cellcolor{tab_red}12.670 & 12.923 & 19.295 & \cellcolor{tab_red}15.915 & \cellcolor{tab_red}15.112 & \cellcolor{tab_red}10.337 & \cellcolor{tab_red}13.765 & \cellcolor{tab_red}12.553 & 14.624 \\
\bottomrule
\end{tabular}
}
\label{tab:rq2}
\end{table*}
\subsection{RQ2: Can {\emph{ArchRepair}}{} fix DNN on a certain failure pattern without sacrificing robustness on clean data and other failure patterns?}\label{subsec:exp-rq2}
In \secref{subsec:exp-rq1}, we demonstrated that {\emph{ArchRepair}}{} does not affect a DNN's robustness when repairing on the clean dataset. In this section, we further validate whether our method harms the DNN's robustness when repairing a specific failure pattern.
We first verify the repairing capability of {\emph{ArchRepair}}{}. We repair a deployed DNN (i.e., ResNet-18) on each of the corruption datasets from CIFAR-10-C and Tiny-ImageNet-C, and compare the performance with the other repairing methods; the results are summarized in \tableref{tab:rq2}. Comparing the experimental results on the corruption datasets, we see that all repairing methods are able to repair the failure patterns, except shot noise (SN) on Tiny-ImageNet-C (all repairing methods fail on this corruption pattern). Among these repairing techniques, our method {\emph{ArchRepair}}{} achieves the highest accuracy on 8 out of the 15 corruption datasets on CIFAR-10-C and on 9 out of the 15 on Tiny-ImageNet-C, demonstrating its advantages in repairing failure patterns.
To validate whether our method harms the DNN's robustness, we also evaluate the performance of the repaired DNNs on the other corruption datasets. The evaluation results on CIFAR-10 and Tiny-ImageNet are shown in \figref{fig:rq2_cifar_bar} and \figref{fig:rq2_tiny_imagenet_bar}, respectively. Comparing the accuracy differences on CIFAR-10-C (see \figref{fig:rq2_cifar_bar}), we observe that the DNNs repaired by {\emph{ArchRepair}}{} (i.e., the red bar) have higher accuracies on both the clean and corruption datasets than the original DNN (i.e., the gray bar, which is lower than the others in most cases), indicating that the repairing method does not harm the DNN's robustness after fixing certain corruption patterns. This is also verified by the results on Tiny-ImageNet-C (see \figref{fig:rq2_tiny_imagenet_bar}), where repairing on a certain corruption pattern does not affect the DNN's robustness on the clean dataset and other corruption patterns; instead, it can even significantly enhance the robustness in some cases (e.g., when repairing on the Fog corruption, the performance on other corruptions is also improved).
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{image/RQ2_C16_cifar_p1.pdf}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{image/RQ2_C16_cifar_p2.pdf}
\caption{Comparing the effectiveness and robustness of repairing methods on ResNet-18 by repairing the DNNs on one of CIFAR-10's corruption datasets $\mathcal{D}^\text{c}_\text{i}$ (CIFAR-10-C) and evaluating on the other corruption datasets $\{\mathcal{D}^\text{c}_\text{k} | \mathcal{D}^\text{c}_\text{k} \in \mathcal{D}^\text{c}, \text{k} \neq \text{i}\}$.}
\label{fig:rq2_cifar_bar}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{image/RQ2_C16_tiny_imagenet_p1.pdf}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{image/RQ2_C16_tiny_imagenet_p2.pdf}
\caption{Comparing the effectiveness and robustness of repairing methods on ResNet-18 by repairing the DNNs on one of Tiny-ImageNet's corruption datasets $\mathcal{D}^\text{c}_\text{i}$ (Tiny-ImageNet-C) and evaluating on the other corruption datasets $\{\mathcal{D}^\text{c}_\text{k} | \mathcal{D}^\text{c}_\text{k} \in \mathcal{D}^\text{c}, \text{k} \neq \text{i}\}$.}
\label{fig:rq2_tiny_imagenet_bar}
\end{figure*}
\begin{tcolorbox}[size=title]
{\textbf{Answer to RQ2:} {\emph{ArchRepair}}{} can successfully fix a certain corruption pattern on a deployed DNN (i.e., ResNet-18), outperforming the 4 existing DNN repair methods. In addition, {\emph{ArchRepair}}{}'s repairing does not harm the DNN's robustness on the clean dataset and other failure patterns.}
\end{tcolorbox}
\begin{table*}[t]
\centering
\small
\caption{Block suspiciousness $\mathcal{S}_\text{B}$ under 8 different thresholds $\epsilon_i$ and the accuracy of 2 DNNs (i.e., ResNet-18 and ResNet-50) repaired on 4 different blocks. Repairing the block with the highest block suspiciousness yields the best performance.}
\begin{subtable}[t]{\linewidth}
\resizebox{\linewidth}{!}{
\begin{tabular}{l|c|rrrrrrrr|c|rrrrrrrr}
\toprule
\multirow{3}{*}{} & \multicolumn{9}{c|}{\bf CIFAR-10} & \multicolumn{9}{c}{\bf Tiny-ImageNet} \\
\cline{2-19}
& \multirow{2}{*}{\bf Acc. (\%) on $\mathcal{D}^\text{v}$} & \multicolumn{8}{c|}{\bf Block Suspiciousness $\mathcal{S}_\text{B}$} & \multirow{2}{*}{\bf Acc. (\%) on $\mathcal{D}^\text{v}$} & \multicolumn{8}{c}{\bf Block Suspiciousness $\mathcal{S}_\text{B}$} \\
& & $\bm{\epsilon_{10}}$ & $\bm{\epsilon_{20}}$ & $\bm{\epsilon_{30}}$ & $\bm{\epsilon_{40}}$ & $\bm{\epsilon_{50}}$ & $\bm{\epsilon_{75}}$ & $\bm{\epsilon_{100}}$ & $\bm{\epsilon_{150}}$ & & $\bm{\epsilon_{10}}$ & $\bm{\epsilon_{20}}$ & $\bm{\epsilon_{30}}$ & $\bm{\epsilon_{40}}$ & $\bm{\epsilon_{50}}$ & $\bm{\epsilon_{75}}$ & $\bm{\epsilon_{100}}$ & $\bm{\epsilon_{150}}$ \\
\midrule
\bf Block 1 & 85.374 & 0 & 3 & 6 & 8 & 8 & 18 & 22 & 40 & 46.11 & 1 & 1 & 4 & 4 & 4 & 12 & 23 & 41 \\
\bf Block 2 & 86.377 & 0 & 0 & 0 & 1 & 1 & 2 & 5 & 16 & 46.29 & 0 & 1 & 2 & 2 & 2 & 6 & 9 & 16 \\
\bf Block 3 & 85.090 & 0 & 1 & 3 & 9 & 17 & 19 & 26 & 47 & 47.13 & 0 & 0 & 0 & 1 & 4 & 5 & 9 & 16 \\
\bf Block 4 & \cellcolor{tab_red}88.294 & \cellcolor{tab_red}10 & \cellcolor{tab_red}20 & \cellcolor{tab_red}21 & \cellcolor{tab_red}22 & \cellcolor{tab_red}24 & \cellcolor{tab_red}48 & \cellcolor{tab_red}48 & \cellcolor{tab_red}50 & \cellcolor{tab_red}47.35 & \cellcolor{tab_red}9 & \cellcolor{tab_red}18 & \cellcolor{tab_red}24 & \cellcolor{tab_red}33 & \cellcolor{tab_red}40 & \cellcolor{tab_red}52 & \cellcolor{tab_red}60 & \cellcolor{tab_red}79 \\
\bottomrule
\end{tabular}
}
\vspace{0.5pt}
\caption{Block suspiciousness and repairing accuracy on ResNet-18}
\end{subtable}\\
\begin{subtable}[t]{\linewidth}
\resizebox{\linewidth}{!}{
\begin{tabular}{l|c|rrrrrrrr|c|rrrrrrrr}
\toprule
\multirow{3}{*}{} & \multicolumn{9}{c|}{\bf CIFAR-10} & \multicolumn{9}{c}{\bf Tiny-ImageNet} \\
\cline{2-19}
& \multirow{2}{*}{\bf Acc. (\%) on $\mathcal{D}^\text{v}$} & \multicolumn{8}{c|}{\bf Block Suspiciousness $\mathcal{S}_\text{B}$} & \multirow{2}{*}{\bf Acc. (\%) on $\mathcal{D}^\text{v}$} & \multicolumn{8}{c}{\bf Block Suspiciousness $\mathcal{S}_\text{B}$} \\
& & $\bm{\epsilon_{10}}$ & $\bm{\epsilon_{20}}$ & $\bm{\epsilon_{30}}$ & $\bm{\epsilon_{40}}$ & $\bm{\epsilon_{50}}$ & $\bm{\epsilon_{75}}$ & $\bm{\epsilon_{100}}$ & $\bm{\epsilon_{150}}$ & & $\bm{\epsilon_{10}}$ & $\bm{\epsilon_{20}}$ & $\bm{\epsilon_{30}}$ & $\bm{\epsilon_{40}}$ & $\bm{\epsilon_{50}}$ & $\bm{\epsilon_{75}}$ & $\bm{\epsilon_{100}}$ & $\bm{\epsilon_{150}}$ \\
\midrule
\bf Block 1 & 82.115 & 1 & 2 & 2 & 4 & 4 & 7 & 7 & 7 & 45.83 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\bf Block 2 & 84.313 & 1 & 1 & 6 & 8 & 8 & 10 & 10 & 15 & 46.55 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 \\
\bf Block 3 & \cellcolor{tab_red}89.576 & \cellcolor{tab_red}8 & \cellcolor{tab_red}18 & \cellcolor{tab_red}24 & \cellcolor{tab_red}32 & \cellcolor{tab_red}42 & \cellcolor{tab_red}58 & \cellcolor{tab_red}86 & \cellcolor{tab_red}139 & \cellcolor{tab_red}47.82 & \cellcolor{tab_red}10 & \cellcolor{tab_red}20 & \cellcolor{tab_red}30 & \cellcolor{tab_red}40 & \cellcolor{tab_red}48 & \cellcolor{tab_red}67 & \cellcolor{tab_red}84 & \cellcolor{tab_red}119 \\
\bf Block 4 & 87.254 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 46.27 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 3 \\
\bottomrule
\end{tabular}
}
\vspace{0.5pt}
\caption{Block suspiciousness and repairing accuracy on ResNet-50}
\end{subtable}
\label{tab:rq3}
\end{table*}
\subsection{RQ3: Is our proposed localization effective in identifying vulnerable block candidates?}\label{subsec:exp-rq3}
To verify the effectiveness of our localization method, we conduct an experiment by applying the repairing method on all 4 blocks of ResNet-18~\&~ResNet-50, and comparing the accuracy on the clean datasets $\mathcal{D}^\text{v}$ of both CIFAR-10 and Tiny-ImageNet with their block suspiciousness $\mathcal{S}_\text{B}$ (\emph{i.e.}, the number of suspicious neurons in the corresponding block). We calculate the block suspiciousness under 8 different thresholds
$\epsilon_i$~\footnote{$\epsilon_i$ indicates the top-$i$ neurons with the highest suspiciousness.} ($i\in\{10, 20, 30, 40, 50, 75, 100, 150\}$)
to evaluate how the threshold $\epsilon_i$ affects the block suspiciousness. The experimental results are summarized in \tableref{tab:rq3}.
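For concreteness, the block-suspiciousness computation described above can be sketched as follows (a simplified illustration with hypothetical neuron identifiers, not our actual implementation):

```python
from collections import Counter

def block_suspiciousness(neuron_scores, block_of, top_i):
    """Block suspiciousness S_B under threshold eps_i: the number of the
    top-i most suspicious neurons falling inside each block."""
    top = sorted(neuron_scores, key=neuron_scores.get, reverse=True)[:top_i]
    return Counter(block_of[n] for n in top)

def most_vulnerable_block(neuron_scores, block_of, top_i):
    """The candidate block for repair is the one with the highest S_B."""
    counts = block_suspiciousness(neuron_scores, block_of, top_i)
    return max(counts, key=counts.get)
```

In this sketch, `neuron_scores` maps each neuron to its suspiciousness and `block_of` maps each neuron to the block it belongs to.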
As shown in \tableref{tab:rq3}, the block suspiciousness $\mathcal{S}_\text{B}$ of Block 4 in ResNet-18 and of Block 3 in ResNet-50 is always the highest on both the CIFAR-10 and Tiny-ImageNet datasets, no matter what value the threshold $\epsilon_i$ takes. This matches the performance of the repaired DNNs, where the DNNs repaired on Block 4 of ResNet-18 and on Block 3 of ResNet-50 achieve the highest accuracy, respectively. This demonstrates that our localization method can correctly locate the most vulnerable block.
It is worth mentioning that for a simpler DNN architecture, \emph{i.e.}, ResNet-18, the vulnerable candidate block can be located more accurately when the threshold $\epsilon_i$ is small. As the threshold $\epsilon_i$ increases, the block suspiciousness $\mathcal{S}_\text{B}$ of the other blocks becomes larger, making it more difficult for the localization method to identify the vulnerable block. In contrast, for ResNet-50 (a relatively complex DNN), the localization result is consistently accurate (with a much higher suspiciousness $\mathcal{S}_\text{B}$ than the other blocks), no matter what value the threshold $\epsilon_i$ takes.
\begin{tcolorbox}[size=title]
{\textbf{Answer to RQ3:} {\emph{ArchRepair}}{} can always locate the most vulnerable block regardless of the setting of the threshold $\epsilon_i$ on different DNN architectures (\emph{e.g.}, ResNet-18 and ResNet-50).}
\end{tcolorbox}
\subsection{RQ4: How do different components of {\emph{ArchRepair}}{} impact its overall performance?}\label{subsec:exp-rq4}
To demonstrate the effectiveness of our {\emph{ArchRepair}}{} and investigate how each component contributes to its overall performance, we conduct an ablation study by repairing 4 pre-trained models (\emph{i.e.}, ResNet-18, ResNet-50, ResNet-101, and DenseNet-121) with two variants of our method on both CIFAR-10 and Tiny-ImageNet datasets. \tableref{tab:rq4} summarizes the evaluation results.
The first variant performs {\emph{ArchRepair}}{} on one single layer of the DNN; we denote it as `Layer-lv' in \tableref{tab:rq4}. The second one is our full (complete) version that applies {\emph{ArchRepair}}{} at the block level; we denote it as `Block-lv' in \tableref{tab:rq4}.
\begin{table*}[t]
\centering
\small
\caption{Comparing the two variants of our method on four DNNs by evaluating the accuracy of the repaired DNNs on the testing dataset $\mathcal{D}^\text{t}$.}
\resizebox{\linewidth}{!}{
\begin{tabular}{c|cccc|cccc}
\toprule
\multirow{2}{*}{} & \multicolumn{4}{c|}{\bf CIFAR-10} & \multicolumn{4}{c}{\bf Tiny-ImageNet} \\
& \bf ResNet-18 & \bf ResNet-50 & \bf ResNet-101 & \bf DenseNet-121 & \bf ResNet-18 & \bf ResNet-50 & \bf ResNet-101 & \bf DenseNet-121 \\
\midrule
\bf Original & 85.00 & 85.17 & 85.72 & 87.97 & 45.15 & 46.27 & 46.14 & \cellcolor{tab_red}48.73 \\
\bf Layer-lv & 85.02 & 85.26 & 85.29 & 89.86 & 45.35 & 45.11 & 45.84 & 46.17 \\
\bf Block-lv & \cellcolor{tab_red}88.29 & \cellcolor{tab_red}89.58 & \cellcolor{tab_red}90.38 & \cellcolor{tab_red}91.37 & \cellcolor{tab_red}47.35 & \cellcolor{tab_red}47.82 & \cellcolor{tab_red}46.73 & 46.84 \\
\bottomrule
\end{tabular}
}
\label{tab:rq4}
\end{table*}
Compared with the original DNNs, the performance of `Layer-lv' is acceptable on the CIFAR-10 dataset, as it slightly improves the accuracy of three DNNs (\emph{i.e.}, ResNet-18, ResNet-50, and DenseNet-121) and only decreases slightly on ResNet-101. `Block-lv' achieves better performance on all four DNNs on CIFAR-10, and these results indicate that {\emph{ArchRepair}}{}'s repairing capability is effective at both levels.
The performance of `Block-lv' is better than that of `Layer-lv' on all four DNNs on both datasets, especially on the more challenging Tiny-ImageNet dataset, where `Layer-lv' only shows a small improvement on ResNet-18 while `Block-lv' improves significantly on all three variants of ResNet. This demonstrates that repairing one specific layer cannot fully unleash {\emph{ArchRepair}}{}'s potential, while repairing a block takes advantage of all components of {\emph{ArchRepair}}{}. Note that even though both `Block-lv' and `Layer-lv' fail to repair DenseNet-121 on Tiny-ImageNet (as do all the SOTA baseline methods, see evaluation results in \tableref{tab:rq1}), `Block-lv' still performs better than `Layer-lv'.
\begin{tcolorbox}[size=title]
{\textbf{Answer to RQ4:} Block-level repairing is more effective than layer-level repairing in fully releasing {\emph{ArchRepair}}{}'s repairing capability. In addition, adjusting the network's architecture and weights simultaneously is more effective than adjusting the weights only, especially for block-level repairing, demonstrating that jointly repairing the block architecture and weights is a promising research direction for DNN repair.}
\end{tcolorbox}
\subsection{Threat to validity}
The threats to validity could come from the following aspects: 1) The selected datasets and model architectures could be a threat. To mitigate it, we selected popular datasets as well as diverse architectures to evaluate our method.
2) The selection of the corruption dataset could be biased, \emph{i.e.}, our method may not generalize well to other corruptions. We selected the 15 natural corruptions commonly used in the standard benchmarks of previous work~\cite{hendrycks2019robustness}.
3) A further threat comes from the implementation of our method as well as the usage of the existing baselines. To mitigate this threat, we carefully follow the configurations stated in the original papers or implementations, respectively. Moreover, our co-authors carefully test and review our code and the configuration of other tools.
Furthermore, to better understand the position of {\emph{ArchRepair}}{}, we perform a large-scale comparative study against 6 SOTA DNN repair techniques. The results confirm that DNN repair could be even more promising and that opportunities remain when going beyond repairing DNN weights only.
\section{Related Work}\label{sec:related}
\subsection{DNN Testing}
DNN testing is an important technique relevant to DNN repair, aiming to detect potential defects of a DNN.
Some recent work focuses on testing criteria design. For example, DeepXplore~\cite{pei2017deepxplore} proposes neuron coverage, based on the number of activated neurons on given testing data, where the neuron coverage represents the adequacy of the testing data.
Similarly, DeepGauge~\cite{ma2018deepgauge} proposes multi-granularity testing criteria based on neuron behaviors.
Different from previous work focusing on a single neuron's behavior, DeepCT~\cite{ma2019deepct} considers the interactions between different neurons, and Kim \emph{et al}\onedot~\cite{kim2019guiding} propose coverage criteria to measure the surprise of the inputs.
Some researchers~\cite{sekhon2019towards,ncmislead} also point out that neuron coverage might fail if most of the neurons are activated by a few test cases, and further research is still needed along this line.
These testing criteria lay the foundation for test generation techniques to detect defects in DNNs. DeepTest~\cite{tian2018deeptest} generates test cases under the guidance of neuron coverage. TensorFuzz~\cite{odena2018tensorfuzz} proposes a distance-based coverage-guided fuzzing technique to test DNNs. Similarly, DeepHunter~\cite{xie2019deephunter} proposes another coverage-guided testing technique by integrating the coverage criteria from DeepGauge; see also~\cite{ma2018deepmutation} for mutation-based testing. DeepStellar~\cite{du2019deepstellar} employs coverage criteria and fuzzing techniques for testing recurrent neural networks. More discussions on the progress of deep learning testing can be found in recent surveys~\cite{dltestsurvey,arxiv18_sdle}.
Different from these testing techniques, our work mainly focuses on repairing DNNs and enhancing their robustness and generalization ability, which can be considered a downstream task of DNN testing.
\subsection{Fault Localization on Deep Neural Networks}
Fault localization aims to locate the root cause of software failures. Such approaches have been widely studied for traditional software, focusing on fault identification methods such as spectrum-based~\cite{jones2005tarantula,abreu_practical_2009,landsberg_evaluation_2015,landsberg_optimising_2018,naish_model_2011,zhang_theoretical_2017,perez_test-suite_2017}, model-based~\cite{birch_fast_2019,s._alves_method_2017}, slice-based~\cite{alves_fault-localization_2011}, and semantic fault localization~\cite{christakis_semantic_2019}.
Several recent works introduce fault localization on DNNs to find vulnerable neurons and repair their weights. Representative techniques include sensitivity-based fault localization~\cite{sohn2019arachne} and spectrum-based fault localization~\cite{eniser2019deepfault}. Eniser \emph{et al}\onedot~\cite{eniser2019deepfault} try to identify suspicious neurons responsible for unsatisfactory DNN performance, an early attempt to introduce fault localization techniques to DNNs with promising results.
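As background, spectrum-based approaches rank entities (statements in traditional software, or neurons in the DNN setting) by correlating their coverage with failing executions; a minimal sketch of the classic Tarantula score~\cite{jones2005tarantula} (an illustration of the family, not the exact formulation of the cited DNN-level tools) reads:

```python
def tarantula(failed_cov, passed_cov, total_failed, total_passed):
    """Tarantula suspiciousness: entities covered mostly by failing runs
    score close to 1, those covered mostly by passing runs close to 0."""
    fail_ratio = failed_cov / total_failed if total_failed else 0.0
    pass_ratio = passed_cov / total_passed if total_passed else 0.0
    if fail_ratio + pass_ratio == 0.0:
        return 0.0  # entity never covered: no evidence either way
    return fail_ratio / (fail_ratio + pass_ratio)
```

Here `failed_cov` and `passed_cov` count the failing and passing runs covering the entity.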
However, these methods only consider a fixed DNN architecture and neuron-aware buggy behaviors, which is less flexible for real-world applications.
Our work repairs DNNs at a higher level (\emph{i.e.}, the block level) by localizing the vulnerable block and jointly repairing the block architecture and weights, which is novel and has not been investigated before.
\subsection{DNN Repair}
So far, there are several attempts for repairing DNN models.
Inspired by software debugging, Ma \emph{et al}\onedot~\cite{ma2018mode} propose a novel model debugging technique for neural network models, which is denoted as MODE. MODE first performs state differential analysis on hidden layers to identify the faulty neurons that are responsible for the misclassification. Then, an input selection algorithm is used to select new input samples to retrain the faulty neurons.
Zhang \emph{et al}\onedot~\cite{zhang2019apricot} propose a weight-adjustment approach called Apricot to fix DNNs. Apricot first generates a set of reduced DNNs from the original model and trains each of them with a random subset of the original training dataset. For each failure example, Apricot separates the reduced DNN models into two partitions, one that successfully predicts the label and one that does not, and takes the mean of the corresponding weight assignments of the two partitions. After that, Apricot automatically adjusts the weights with these mean values.
Further, Sohn \emph{et al}\onedot~\cite{sohn2019arachne} propose a search-based repair technique for DNNs, called Arachne. Unlike other techniques, Arachne directly manipulates the neuron weights without retraining. Arachne first uses positive and negative input data to retain correct behavior and to generate a patch, respectively. It then uses Particle Swarm Optimization (PSO) to search for and locate faulty neurons, updates the neurons' weights with the PSO candidates, and calculates fitness values based on the outcomes.
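For readers unfamiliar with PSO, a minimal, generic particle swarm loop minimizing an objective over $[-1, 1]^d$ (illustrative parameter choices; not Arachne's actual search) looks as follows:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO: each particle is pulled toward its personal best and
    the global best position, with inertia w and attraction weights c1, c2."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:          # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:         # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Arachne uses such a swarm over candidate weight assignments, with a fitness combining positive and negative inputs.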
Recently, Gao \emph{et al}\onedot~\cite{gao2020sensei} have proposed a new algorithm called SENSEI, which uses guided test generation techniques to address the data augmentation problem for robust generalization of DNNs under natural environmental variations. First, SENSEI uses a genetic search over the space of natural environmental variants of each training input to identify the worst variant for augmentation at each epoch. Besides, SENSEI uses a heuristic technique named selective augmentation, which allows skipping augmentation in certain epochs based on an analysis of the DNN's current robustness.
Another recent attempt for DNN repair is DeepRepair~\cite{yu2021deeprepair}, a method to repair the DNN on the image classification task. DeepRepair uses a style-guided data augmentation for DNN repairing to introduce the unknown failure patterns into the training data to retrain the model and applies clustering-based failure data generation to improve the effectiveness of data augmentation.
Our repairing method is orthogonal to data-augmentation-based methods such as SENSEI~\cite{gao2020sensei} and DeepRepair~\cite{yu2021deeprepair}, as we focus on repairing DNNs from the architecture and weight perspective. Our method also goes one step further beyond the weight level (\emph{e.g.}, MODE~\cite{ma2018mode}, Apricot~\cite{zhang2019apricot}, and Arachne~\cite{sohn2019arachne}), and operates at a higher granularity by jointly repairing architecture and weights at the block level, which is demonstrated to be a promising direction for DNN repair.
\subsection{Neural Architecture Search}
Neural architecture search (NAS) could be another relevant line of our work, aiming to automatically design an architecture instead of handcrafting one.
Typical NAS includes evolution-based~\cite{real_regularized_2019,xie_genetic_2017}, and reinforcement-learning-based~\cite{baker_designing_2017} methods.
However, the resources required by RL- or evolution-based methods are often very expensive and still unaffordable in practice.
More recently, DARTS~\cite{liu2018darts} relaxes the search space to make it continuous so that the search processes can be performed based on the gradient. %
Differentiable NAS approaches can significantly reduce the computational cost.
Our search method is based on PC-DARTS~\cite{xu_pc-darts:_2020}, a stability-improved variant of DARTS that introduces a partially connected mechanism.
The purposes of repairing and NAS are very different. The former intends to fix buggy behaviors that follow some patterns with generalization capability, while NAS aims to automatically design general architectures for better performance (\emph{e.g.}, energy efficiency).
In this paper, we formulate the block-level joint architecture and weight repairing as a NAS problem, which demonstrates the possibilities and chances for DNN repair along this direction.
\section{Conclusion}\label{sec:concl}
In this work, we have proposed {\emph{ArchRepair}}{}, an architecture-oriented DNN repair at the block level, which offers a good trade-off between repaired network accuracy and time consumption, compared to neuron-level, layer-level, and network-level (data augmentation) repairing. To achieve this, two key problems are identified and solved sequentially, \emph{i.e.}, \emph{block localization} and \emph{joint architecture and weights repairing}.
By jointly repairing both architecture and weights on the candidate block for repairing, \textbf{\emph{ArchRepair}} is able to achieve better repairing performance compared with 6 SOTA techniques.
Our extensive evaluation has also demonstrated that our method can enhance not only the accuracy but also the robustness across various corruption patterns while being cost-effective.
To the best of our knowledge, this is the first attempt at DNN repair that adjusts both the architecture and weights at the block level. Our research also initiates a promising direction for further DNN repair research, towards addressing the current urgent industrial demands for reliable and trustworthy DNN deployment in diverse real-world environments.
\bibliographystyle{splncs04}
\subsection{Related Work}
\label{sec_relatedWork}
Many approaches based on topological methods have been documented over the last
two decades. We refer the reader to the survey by Heine~et~al.~\cite{heine16}
for a comprehensive overview.
In the following, we focus on algorithms for
constructing topological data abstractions, which \julesReplace{is}{are} the most related to our
work.
While Morse theory has originally been developed in the smooth setting
\cite{milnor63}, many of its concepts can be translated to discretized data, in
particular in the form of piecewise-linear (PL) scalar fields defined on PL
manifolds.
Banchoff~\cite{banchoff70} introduced a formalism for the
combinatorial characterization of
the \emph{critical points} (\autoref{sec_criticalPoints}) of an input PL scalar
field. These points correspond to
locations where the sub-level sets of the
function change their topology. They correspond to notable events in the data.
\julesReplace{For instance,}{In practice,} extrema are often associated \julesEditsOut{in practice }with features of
interest. In presence of noise however, many critical points can be reported by
this characterization, which motivates the introduction of an importance
measure on critical points, to distinguish noise artifacts from salient
features.
Topological persistence \cite{edelsbrunner02, edelsbrunner09}
\julesReplaceMinor{established itself}{has been established} as a reference measure to assess the importance of critical
points. It can be directly read from the \emph{Persistence diagram}
(\autoref{sec_persistenceDiagram}) which plots \julesEditsOut{the }topological features of the
data according to their \emph{birth} and \emph{death}, both of which exactly
coincide with critical points. Thus, the critical points of the input data are
arranged in the diagram in pairs. The Persistence diagram can be computed
generically by matrix reduction operations \cite{edelsbrunner02,
edelsbrunner09}.
The pairs of critical points in the diagram
which involve extrema,\julesEditsOut{ which are} often associated to features of interest in
applications, can be
computed more efficiently, with a Union-Find data
structure \cite{edelsbrunner09, cormen}, or equivalently, they can be read
directly
from the merge tree (presented further down). For the special case of
point cloud data, the
topology of the underlying manifold sampled by the point cloud can be inferred
\cite{ChazalO08}
by considering the persistence diagram of the Vietoris-Rips filtration, for
which tailored algorithms have been developed \cite{ripser}.
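As an illustration of the Union-Find approach mentioned above, a minimal sketch computing minimum-saddle persistence pairs from vertex values and edges (assuming distinct values; a didactic sketch, not the implementation discussed later in this paper) could read:

```python
from collections import defaultdict

def persistence_pairs_0d(f, edges):
    """Minimum-saddle persistence pairs via Union-Find: sweep the vertices
    by increasing value; when an edge merges two components of the
    sub-level set, the component born at the higher minimum dies
    (Elder rule). f maps vertices to distinct scalar values."""
    adj = defaultdict(list)
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    parent, oldest, pairs = {}, {}, []

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for v in sorted(f, key=f.get):
        parent[v], oldest[v] = v, v
        for u in adj[v]:
            if u not in parent:            # u is above v, not swept yet
                continue
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            if f[oldest[ru]] > f[oldest[rv]]:
                ru, rv = rv, ru            # rv now holds the younger minimum
            if oldest[rv] != v:            # skip the zero-persistence merge at v
                pairs.append((oldest[rv], v))
            parent[rv] = ru
    return pairs                           # the global minimum never dies
```

Each returned pair is a (minimum, saddle) couple of vertices; the persistence of the pair is the difference of their scalar values.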
Although persistence diagrams are stable \cite{CohenSteinerEH05}
\julienReplace{(i.e. a small perturbation to the input data will only yield a
small perturbation of its persistence diagram), }{,
\julesRevision{meaning that similar scalar fields exhibit similar diagrams,}}
their
discriminative power may be insufficient for some applications.
This motivates the design of more discriminative topological
abstractions, such as \julesReplaceMinor{the }{}merge and contour trees, which \julesEditsOut{respectively }track the
connected components of sub-level sets and level sets.
\julienMajorRevision{Intuitively, these trees indicate how level sets
connect and disconnect
when passing critical points.}
The first algorithms for
computing these tree structures focused on the 2D \cite{boyell63,
de1997trekking} and 3D \cite{tarasov98} cases.
In their
seminal paper, Carr~et~al.~\cite{carr00} introduced an efficient algorithm,
with optimal time complexity, for computing the contour tree in all dimensions.
Recently, several algorithms have been documented to compute this
structure in parallel \cite{
PascucciC03,
MaadasamyDN12,
MorozovW14,
AcharyaN15,
CarrWSA16,
gueunet_ldav17,
smirnov17,
gueunet_tpds19}.
If the input domain is not simply connected
\julienMajorRevision{(intuitively, if it contains handles)}, the Reeb graph
\cite{reeb1946points}
needs to
be
considered instead of the contour tree to correctly track connected
components of
level sets, which involves more sophisticated methods
\julienMajorRevision{(as the Reeb graph may now contain loops)}.
The first Reeb graph computation algorithms
were based on a
slicing strategy \cite{ShinagawaKK91, BiasottiFS00, WoodHDS04},
later
improved by solely slicing along critical contours \cite{PataneSF08,
DoraiswamyN12}. Several techniques focused on practical \julesReplace{performances}{performance}
\cite{pascucci07, tierny_vis09}, while algorithms with optimal time complexity
have been introduced, first in 2D \cite{ColeMcLaughlinEHNP03}, later
in arbitrary
dimension \cite{Parsa12}, and then parallelized
\cite{GueunetFJT19}. Recently,
efficient algorithms have been investigated for the computation of the
generalization of the Reeb graph to
multivariate functions, called the Reeb space \cite{
EdelsbrunnerHP08,
CarrD14, tierny_vis16}.
The Morse-Smale complex is another typical topological abstraction for scalar
data \cite{Defl15}. It decomposes the input domain into cells which \julesReplace{admit}{have}
identical gradient integration extremities.
\julienMajorRevision{Intuitively, it
segments the data into regions, bounded by gradient flow separatrices, where
the gradient shows a homogeneous behaviour.}
While the initial algorithms for
its computation were developed in the PL setting
\cite{
EdelsbrunnerHZ01,
EdelsbrunnerHNP03
}, modern alternatives \cite{gyulassy_vis08, robins_pami11}
are based on Discrete Morse theory \cite{forman98} and parallel algorithms have
been documented \cite{ShivashankarN12, gyulassy_vis18}.
To our knowledge, no algorithm has been described so far for the
\emph{progressive} computation of the above structures. In this work, we
introduce the first progressive algorithms for the computation of topological
abstractions, namely critical points (\autoref{sec_progressiveCriticalPoints})
and persistence diagrams
(\julienRevision{for
extremum-saddle pairs, }\autoref{sec_progressivePersistenceDiagram}).
Our approach is based on a
hierarchical
representation of the data. Multiresolution hierarchies have been
considered
before, for the Reeb graph \cite{HilagaSKK01}, the contour tree
\cite{pascucci_mr04} and the Morse-Smale
complex\julienRevision{~\cite{BremerEHP03, gunther2012, IuricichF17}}, but the
hierarchical
aspect dealt with the \emph{output} data structure, while the
input was processed without multiresolution, with existing algorithms
\cite{BiasottiFS00, carr00, EdelsbrunnerHZ01}.
In contrast, in our work, the \emph{input} data is represented \julesReplaceMinor{in}{as} a
multiresolution hierarchy and the output is efficiently, progressively
updated \julienRevision{in a coarse-to-fine manner,} by iterating
through the hierarchy levels.
\julienRevision{Our progressive scheme relies on a hierarchical representation
of
the input data. In the visualization community, many types of hierarchies have
been defined to encode
and extract visual representations from volumetric data at different levels of
details \cite{GregorskiDLPJ02, weiss_vis09, weiss_sgp09, PascucciB00,
LewinerVLM04, GerstnerP00}.
\julienMajorRevision{For example, Gerstner and Pajarola \cite{GerstnerP00}
introduce a method for the
robust extraction of isosurfaces in
multiresolution volume representations. For this, their algorithm extracts
the critical points of the input scalar field, for each level of their
hierarchical scheme. However, they use for this the standard, non-progressive,
procedure
\cite{banchoff70}. In contrast, our approach extracts the critical points for
all of our hierarchy levels \emph{progressively}, i.e. without recomputing from
scratch
critical points at each new hierarchy level, but instead
by efficiently and minimally
updating the information already computed at the previous levels.}
\julienReplace{Generally, in}{In} our work, we focus on a
specific scheme based on the so-called
\emph{red} subdivision \cite{freudenthal42, bank83, loop87,
bey95, zhang95} applied to regular grids \cite{kuhn60, bey98}, in particular to
investigate progressive and efficient \emph{coarse-to-fine} computations,
in contrast to
the traditional fine-to-coarse hierarchical approaches found in the
visualization literature.}
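As a minimal illustration of one \emph{red} refinement step in 2D (a sketch on explicit coordinates, not our hierarchy implementation), a triangle is split into four congruent sub-triangles through its edge midpoints:

```python
def red_subdivide(a, b, c):
    """One 'red' refinement step: split triangle (a, b, c) into four
    sub-triangles through its edge midpoints (2D points as tuples)."""
    mid = lambda p, q: tuple((x + y) / 2.0 for x, y in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    # three corner triangles plus the central (inverted) one
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
```

Applying this step recursively to an initial triangulation of a regular grid yields the kind of nested hierarchy our progressive scheme traverses in a coarse-to-fine manner.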
The approaches which are the most related to our work are probably the
streaming algorithms
for computing the Reeb graph \cite{pascucci07}
and the merge tree~\cite{bremer_tvcg11}.
These algorithms are capable of computing
their output in a streaming fashion: the simplices of the input domain can be
processed in arbitrary order and these algorithms maintain and iteratively
complete their output data structure. However, while they can be interrupted,
these algorithms are not, strictly speaking, \emph{progressive}: upon
interruption, they do not provide \julesReplace{exploitable}{interpretable} but \emph{partial} results, which
are very far in practice from the final result.
For instance, the streaming Reeb graph \cite{pascucci07} can typically count at
a given time
a very large number of loops (which will be iteratively filled
as the algorithm progresses). In
contrast, our
coarse-to-fine algorithms provide \julesReplace{exploitable}{interpretable}
results upon interruption, which are visually similar to the exact, final
outputs and which empirically quickly converge towards them.
\subsection{Contributions}
\label{sec_contributions}
This paper makes the following new contributions:
\begin{enumerate}[leftmargin=1em]
\item \emph{A progressive \julien{data representation}
(\autoref{sec_progressiveData})} We present an approach for the progressive
topological analysis of scalar data, to generate \julesReplace{exploitable}{interpretable} outputs
upon interruption requests.
Our approach \julienRevision{relies} on a hierarchical
representation of the input \julienRevision{data (derived from established
triangulation subdivision schemes \cite{freudenthal42, kuhn60, bank83, loop87,
bey95, zhang95, bey98})} and the fast identification of \julienRevision{the new
notion of}
\emph{topologically invariant vertices},
for which we show that no computation is required
as they are introduced in the hierarchy.
\item \emph{A progressive algorithm for critical point extraction
(\autoref{sec_progressiveCriticalPoints})}
We introduce a progressive algorithm
for critical point extraction.
As it progresses down the data hierarchy, our algorithm
leverages efficient update mechanisms for ordinary vertices and avoids
computation for the topologically invariant ones. This enables a
progressive output refinement, which \julesReplace{even results in}{results in even} faster overall
computations
than non-progressive methods.
We
also
introduce
a fast heuristic to evaluate
the lifetime of critical points in the data hierarchy.
\item \emph{A progressive algorithm for persistence diagram computation
(\autoref{sec_progressivePersistenceDiagram})} We introduce a progressive
algorithm for the computation of persistence diagrams of
extremum-saddle pairs,
built on top of the above contributions. In practice, our algorithm
tends to capture the main features of the data first, and then progressively
improves its accuracy. This is confirmed quantitatively by the empirical
convergence of the Wasserstein distance to the final output, which is
monotonically decreasing (more computation time indeed yields more accuracy).
Our approach enables a
continuous visual feedback,
while being in practice even faster overall than non-progressive methods.
\item \emph{A reference implementation}
We provide a reference C++ implementation of our algorithms
\julienRevision{(publicly available at:
\href{https://github.com/julesvidal/progressive-scalar-topology}{
https://github.com/julesvidal/progressive-scalar-topology})} that can
be used to replicate our results\julienRevision{, and}
for future
benchmarks.
\end{enumerate}
\section{Preliminaries}
\label{sec_preliminaries}
This section briefly presents the technical background of our work. We refer
the reader to the \julesReplace{reference }{}textbook by Edelsbrunner and Harer
\cite{edelsbrunner09} for a detailed introduction to computational topology.
\subsection{Input Data}
\label{sec_plScalarField}
The input is modeled as a piecewise linear (PL) scalar
field $f : \mathcal{M}
\rightarrow \mathbb{R}$ defined on a PL $d$-manifold $\mathcal{M}$, with $d$ equals 2
or 3 in our applications.
The scalar values are given at the vertices of
$\mathcal{M}$ and are linearly interpolated
on the other
simplices \julienMajorRevision{(with barycentric coordinates)}.
$f$ is assumed to be injective on the vertices
of $\mathcal{M}$ \julienMajorRevision{(i.e. each vertex has a distinct $f$ value)}.
This is enforced in practice with a symbolic
perturbation inspired by Simulation of Simplicity \cite{edelsbrunner90}.
Specific requirements
on the structure of the triangulation $\mathcal{M}$ are discussed
in Secs. \ref{sec_multiRes} and \ref{sec_limitations}.
\subsection{Critical Points}
\label{sec_criticalPoints}
\julesReplace{The topological}{Topological} features of $f$ can be tracked with the notion of
\emph{sub-level} set, noted
$\sublevelset{f}(w) = \{ p \in \mathcal{M} ~ | ~ f(p) < w\}$.
\julienMajorRevision{It is simply the subset of the data below a certain
threshold $w$.}
In particular,
the topology of these sub-level sets (in 3D their connected components,
cycles and voids) can only change at specific locations, named the
\emph{critical points} of $f$ \cite{milnor63}. In the PL setting, Banchoff
\cite{banchoff70} introduced a local characterization of critical
points, defined as follows.
\julesReplaceMinor{}{A \textit{face} $\tau$ of a simplex $\sigma \in \mathcal{M}$ is a simplex of
$\mathcal{M}$ that is defined by a non-empty, strict subset of the
\julienReplaceMinor{points}{vertices} of
$\sigma$.
We call $\sigma$ a \textit{co-face} of $\tau$ and we note
$\tau<\sigma$.}
The \emph{star} of a vertex $v \in \mathcal{M}$, noted $St(v)$, is
the set of its co-faces:
$St(v) = \{ \sigma \in \mathcal{M} ~|~ v < \sigma \}$.
\julienMajorRevision{This can be viewed as a small, combinatorial, neighborhood
around $v$.}
The
\emph{link} of $v$,
noted $Lk(v)$, is the set of the faces $\tau$ of the simplices $\sigma$
of $St(v)$ with empty intersection with $v$:
$Lk(v) = \{ \tau \in \mathcal{M} ~ | ~ \tau < \sigma, ~
\sigma\in St(v), ~ \tau \cap v = \emptyset\}$.
\julienMajorRevision{This can be viewed as the \emph{boundary} of a small,
combinatorial, neighborhood around $v$.}
The \emph{lower link} of $v$, noted $\Link^{-}(v)$, is given by the
set of simplices of $Lk(v)$ which only contain vertices \emph{lower} than
$v$:
$\Link^{-}(v) = \{ \sigma \in Lk(v) ~ | ~ \forall v' \in \sigma, ~ f(v')
< f(v)\}$. The upper link is defined symmetrically: $\Link^{+}(v) = \{
\sigma \in Lk(v) ~ | ~
\forall v' \in \sigma, ~ f(v') > f(v)\}$.
A vertex $v$ is \emph{regular} if
both $\Link^{-}(v)$ and $\Link^{+}(v)$ are simply connected. For
such vertices, the sub-level sets
\julienReplace{enter the neighborhood of $v$, $St(v)$, through the
lower part of the neighborhood boundary, $\Link^{-}(v)$, and exit through its
upper part, $\Link^{+}(v)$, \emph{without} changing their topology.}{
do not change their topology as they span
$St(v)$.} Otherwise, $v$ is
a \emph{critical point}.
These can be classified with regard to their
\emph{index} $\mathcal{I}(v)$\julienMajorRevision{, which intuitively corresponds
to the number of independent directions of decreasing $f$ values around $v$}.
It is equal to $0$ for local minima
($\Link^{-}(v) = \emptyset$), to $d$ for local maxima
($\Link^{+}(v) = \emptyset$) and otherwise to $i$ for
$i$-saddles ($0 < i < d$).
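This combinatorial characterization translates directly into a procedure that counts the connected components of the lower and upper links; a sketch for the 2D case (where simple connectedness of a link subset reduces to having a single connected component; hypothetical data layout) could read:

```python
def classify_vertex(f, v, link_vertices, link_edges):
    """Classify vertex v of a 2D PL scalar field by counting connected
    components of its lower and upper links (Banchoff's criterion).
    link_vertices: neighbors of v; link_edges: edges among those neighbors."""
    lower = {u for u in link_vertices if f[u] < f[v]}
    upper = {u for u in link_vertices if f[u] > f[v]}

    def n_components(nodes, edges):
        # union-find restricted to the given link subset
        parent = {u: u for u in nodes}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for a, b in edges:
            if a in parent and b in parent:
                ra, rb = find(a), find(b)
                if ra != rb:
                    parent[ra] = rb
        return len({find(u) for u in nodes})

    nl = n_components(lower, link_edges)
    nu = n_components(upper, link_edges)
    if nl == 0:
        return "minimum"                  # index 0
    if nu == 0:
        return "maximum"                  # index d
    if nl == 1 and nu == 1:
        return "regular"
    return "saddle"
```

In 3D, the saddle case must additionally distinguish 1-saddles from 2-saddles via the lower and upper link component counts.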
Adjacency
relations between critical points can be captured with the notion of
\emph{integral line}. Given a vertex $v$,
its \emph{forward} integral
line, noted $\mathcal{L}^+(v)$, is a path along the edges of
$\mathcal{M}$,
initiated in $v$, such that each edge of $\mathcal{L}^+(v)$ connects a
vertex $v'$ to its highest neighbor $v''$.
Then, forward integral lines are
guaranteed to terminate in local maxima of $f$.
When encountering a saddle $s$, we
say that an integral line \emph{forks}: it yields one new integral line per
connected component of $\Link^{+}(s)$.
Note that several integral lines can \emph{merge} (and possibly fork later). A
\emph{backward} integral line, noted $\mathcal{L}^-(v)$ is defined
symmetrically (i.e. integrating downwards towards minima).
Critical points play a central role in TDA
as
they often correspond to features of interest in various applications: centers
of vortices in fluid dynamics \cite{kasten_tvcg11}, atoms in chemistry
\cite{harshChemistry, chemistry_vis14, Malgorzata19} or clusters of galaxies in
astrophysics \cite{sousbie11, shivashankar2016felix}.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{pdfigure}
\mycaption{Persistence diagrams of a clean (left) and noisy (right)
scalar
field
(light brown spheres: minima, cyan: maxima, others: saddles). The three main
hills
are clearly apparent in the diagrams (high persistence pairs), whereas
small pairs near the diagonal indicate noisy features.}
\label{fig_pd}
\end{figure}
\subsection{Persistence Diagrams}
\label{sec_persistenceDiagram}
Several importance
measures for critical points have been studied \cite{carr04},
including \emph{topological
persistence} \cite{edelsbrunner02}, which is tightly coupled to the notion of
persistence diagram \cite{edelsbrunner09}; we briefly summarize both here.
\julienMajorRevision{In practical applications, features of interest are
often characterized by the extrema of the field. Thus, in the following, we
will first focus our description on local minima, and then discuss
generalizations. The importance of a local minimum can be assessed with its
\emph{persistence}, which describes the lifetime of the topological feature
(i.e. the connected component) it created in $\sublevelset{f}(w)$.}
\julesReplace{In particular, a}{A}s $w$ increases, new connected components of
$\sublevelset{f}(w)$ are created at the minima of $f$. The Elder rule
\cite{edelsbrunner09} indicates that if two connected
components, created at the minima $m_0$ and $m_1$ with $f(m_0) < f(m_1)$, meet
at a given $1$-saddle $s$, the \emph{youngest} of the two components (the
one created at $m_1$) \emph{dies} in favor of the \emph{oldest} one (created at
$m_0$). In this case, a \emph{persistence pair} $(m_1, s)$ is created and
its
\emph{topological persistence} $p$ is given by $p(m_1, s) = f(s) - f(m_1)$.
All \julesReplaceMinor{the }{}local minima
can be
unambiguously
paired following this strategy, while the
global minimum is usually paired, by convention, with the global maximum.
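As an illustration, the Elder rule can be sketched with a union-find sweep. The fragment below (illustrative, restricted for brevity to a PL function on a 1D path graph; all names are ours) pairs each non-global minimum with the $1$-saddle at which its sub-level set component dies:

```python
def minimum_saddle_pairs(values):
    """Pair minima with 1-saddles by the Elder rule (1D path graph).

    Vertices are processed by increasing value; when two sub-level set
    components meet at a vertex, the component born at the youngest
    (highest) minimum dies there. Returns (birth, death) value pairs;
    the global minimum is left unpaired."""
    parent, birth, pairs = {}, {}, []

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for v in sorted(range(len(values)), key=values.__getitem__):
        roots = {find(u) for u in (v - 1, v + 1) if u in parent}
        parent[v] = v
        if not roots:
            birth[v] = values[v]      # v creates a component: a minimum
            continue
        roots = sorted(roots, key=birth.__getitem__)
        oldest = roots[0]
        for r in roots[1:]:           # Elder rule: younger components die
            pairs.append((birth[r], values[v]))
            parent[r] = oldest
        parent[v] = oldest
    return pairs
```

For the sequence $(0, 3, 1, 4, 2, 5)$, the sweep yields the pairs $(1, 3)$ and $(2, 4)$, both of persistence $2$, while the global minimum $0$ remains unpaired.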
\julesReplace{}{\julienMajorRevision{Symmetrically},
persistence assesses the importance of a
local maximum paired with a \julienReplace{$(d-1)$}{$2$}-saddle,
based on the
lifetime of the topological feature it destroyed in
$\sublevelset{f}(w)$.
}
\julesRevision{Generally, as one continuously increases an isovalue $w$,
topological structures in $\sublevelset{f}(w)$ (connected components, cycles,
voids) are created and destroyed at
critical points.
\julienReplace{Thus}{As such}, each topological feature is
characterized by a pair of critical points, \julienReplace{which
indicate its birth
and death, and whose difference in function values indicates its lifespan
in the data, its \emph{persistence}.}{The topological persistence
of these features denotes their lifespan
in terms of the $f$ values.}}
\julesReplace{The symmetric reasoning can be applied in 3D to characterize,
with $2$-saddle/maximum pairs, the life time of the voids of
$\sublevelset{f}(w)$, while the $1$-saddle/$2$-saddle pairs characterize its
independent cycles.}{As described above, the persistence of connected
components of $\sublevelset{f}(w)$ is encoded \julienReplace{with}{in}
minimum/$1$-saddle pairs. In
3D, $2$-saddle/maximum pairs characterize the life time of the voids of
$\sublevelset{f}(w)$, while \julesReplaceMinor{the }{}$1$-saddle/$2$-saddle pairs characterize its
independent cycles.}
\julienReplace{As mentioned above}{In practical applications}, features of
interest are often characterized by the
extrema of the field. Thus, in the following, when considering
persistence diagrams, we will focus on minimum/$1$-saddle pairs
and $(d-1)$-saddle/maximum pairs.
Persistence pairs are usually
visualized with the \emph{Persistence diagram} $\mathcal{D}(f)$
\cite{edelsbrunner09}, which embeds each pair $(c, c')$, with $f(c) < f(c')$,
as a point in the 2D plane, at location $\big(f(c), f(c')\big)$. There, the
persistence of a pair can be visualized as the height of the point above the
diagonal.
\julienMajorRevision{In other words, in the persistence diagram, each
topological feature
of $\sublevelset{f}(w)$ (connected component, cycle, void) can be
readily visualized as a bar (\autoref{fig_pd}), whose height to the diagonal
denotes its importance in the data.}
Features with a high persistence stand out, away from the diagonal,
while noisy features are typically located in its vicinity.
The conciseness, stability \cite{edelsbrunner02} and expressiveness of this
diagram made it a popular tool
for data summarization tasks.
As shown in \autoref{fig_pd},
it provides visual hints about the number, ranges and salience
of the features of interest.
\julesEditsOut{To evaluate quantitatively the relevance of the progressive outputs
continuously provided by our algorithms, we measure their distance
to the final, exact result with the \emph{Wasserstein distance}, an established
practical metric
inspired by optimal
transport \cite{Kantorovich, monge81}.
Given two diagrams $\mathcal{D}(f)$ and
$\mathcal{D}(g)$, a pointwise distance
$\pointMetric{q}$, inspired from the $L^p$ norm, can be introduced
in the 2D birth/death space
between
two points $a = (x_a, y_a) \in \mathcal{D}(f)$ and
$b = (x_b, y_b) \in \mathcal{D}(g)$, with $q > 0$, as:
\vspace{-1.5ex}
\begin{equation}
\pointMetric{q}(a,b)=\left(|x_b-x_a|^q + |y_b-y_a|^q\right)^{1/q} = \|a-b\|_q
\label{eq_pointWise_metric}
\end{equation}
\vspace{-3.5ex}
\noindent
By convention, $\pointMetric{q}(a, b)$ is set to zero
if both $a$ and $b$ exactly lie on the diagonal ($x_a = y_a$ and $x_b = y_b$).
The \jules{$L_q$}-Wasserstein distance, noted
$\wasserstein{q}$, between $\mathcal{D}(f)$ and
$\mathcal{D}(g)$ can then be introduced as:
\begin{equation}
\wasserstein{q}\big(\mathcal{D}(f), \mathcal{D}(g)\big) =
\min_{\phi
\in \Phi} \left(\sum_{a \in \mathcal{D}(f)}
\pointMetric{q}\big(a,\phi(a)\big)^q\right)^{1/q}
\label{eq_wasserstein}
\end{equation}
\noindent
where $\Phi$ is the set of all possible assignments $\phi$ mapping each
point
$a \in \mathcal{D}(f)$ to
a point
$b
\in \mathcal{D}(g)$
or to
its projection onto the diagonal.
$\wasserstein{q}$ can be computed
via
assignment optimization, for which
exact \cite{Munkres1957} and approximate \cite{Bertsekas81, Kerber2016}
implementations are publicly available \cite{ttk17}.
}
\section{Progressive Data Representation}
\label{sec_progressiveData}
This section details our hierarchical scheme for the progressive
representation of the input \julienRevision{data, which relies on
a hierarchy of triangulations $\mathcal{H}$ derived from established
subdivision schemes~\cite{freudenthal42, bank83, loop87,
bey95, zhang95}}.
\julienMajorRevision{In particular, our goal is to define a hierarchical scheme
that will enable efficient update mechanisms between hierarchy levels. This
will
avoid, at each new level of the hierarchy, the recomputation
from scratch of the topological data representations presented in
sections~\ref{sec_progressiveCriticalPoints} and
\ref{sec_progressivePersistenceDiagram}, and this will instead enable their
progressive update.}
After a generic description \julienRevision{of the employed triangulation
hierarchy (\autoref{sec_multiRes})},
we
\julesReplace{\julienRevision{detail for completeness}}{present for completeness}
an efficient implementation\julienRevision{~\cite{kuhn60, bey98}} for the
special case of triangulations
of
regular grids \julienRevision{(\autoref{sec_gridHierarchy})}, on which we focus
in this paper (\autoref{sec_limitations}
discusses generalizations).
Next, we resume our generic
description \julienRevision{(\autoref{sec_topologicalInvariants})} and
\julienReplace{show how to leverage the specific structure of the employed
triangulation hierarchy to accelerate the topological analysis of the data.}
{investigate how the specific structure of the
employed triangulation hierarchy can be leveraged to accelerate the topological
analysis of scalar data.}
For this, we introduce the \julienRevision{novel} notion
of \emph{Topologically Invariant Vertices}, \julienRevision{which is} central
to our \julienRevision{work}.
\subsection{Edge-Nested Triangulation Hierarchy}
\label{sec_multiRes}
\julien{Our progressive representation of the input data is based on a
multiresolution hierarchy of the input PL-manifold $\mathcal{M}$,
\julienRevision{which relies on established subdivision schemes
~\cite{freudenthal42, bank83, loop87,
bey95, zhang95}.}}
\julesRevision{\julienReplace{Intuitively, our}{The} goal is to
\julienReplace{define}{provide} a multiresolution hierachy that
\julienReplace{will enable the efficient update of the topological
information computed at the previous levels, in order to avoid
full re-computations (\autoref{sec_progressiveCriticalPoints}).}{allows
topological information
computed on previous levels to remain valid and avoid recomputations
(\autoref{sec_cpUpdates}).}
In order to construct such a hierarchical scheme, \julienMajorRevision{as
formalized
next,} we impose that, as one progresses
down the
hierarchy, new vertices are \julienMajorRevision{only} inserted along
\julienMajorRevision{pre-existing} edges
(exactly one new vertex per edge, typically \julesReplaceMinor{in}{at} their center),
and that the additional new edges only connect new vertices
(\autoref{fig_edge_nested}).
\julienReplace{This will have the beneficial effect of
preserving, from one hierarchy level to the next,
the \emph{structure} of the
local neighborhood around each pre-existing vertex (of its link, as discussed
in \autoref{sec_topologicalInvariants}),
which will in turn effectively enable fast updates of the pre-existing local
topological information (\autoref{sec_progressiveCriticalPoints}).}{This allows
to preserve the structure of the link of vertices from a level to another.}
We call such a hierarchy \emph{edge-nested} and we formalize
it in the following, to introduce the notations that will be used in
the
rest of the paper.}
\julien{Let $\mathcal{H} = \{\mathcal{M}^0, \mathcal{M}^1, \dots, \mathcal{M}^h\}$ be a
hierarchy of PL $d$-manifolds, which respects the following key
conditions.}
\begin{enumerate}[leftmargin=1.5em]
\item{\textbf{Old Vertex Condition:} Each vertex of $\mathcal{M}^i$ (the
triangulation
at level $i$)
also belongs to
the vertex set, noted $\mathcal{M}^{i+1}_0$, of $\mathcal{M}^{i+1}$:
\begin{eqnarray}
\mathcal{M}^i_0 \subset \mathcal{M}^{i+1}_0
\label{eq_oldVertices}
\end{eqnarray}
The vertices of $\mathcal{M}^{i+1}$
already present in $\mathcal{M}^{i}$
are called \emph{old vertices} \julienMajorRevision{(black spheres in
\autoref{fig_edge_nested})}.
}
\item{\textbf{New Vertex Condition:} Each vertex of $\mathcal{M}^{i+1}$
not present in
$\mathcal{M}^{i}$
has to be located
on an edge $(v_0, v_1)$ of $\mathcal{M}^i$ (typically \julienReplaceMinor{in}{at}
its center),
as
summarized below, where $\mathcal{M}^{i}_1$ stands for the edge set of
$\mathcal{M}^{i}$:
\begin{eqnarray}
\forall v \in \mathcal{M}^{i+1}_0, v \notin \mathcal{M}^{i}_0 : &
\exists (v_0, v_1) \in \mathcal{M}^{i}_1, ~ v \in (v_0, v_1)
\end{eqnarray}
The vertices of $\mathcal{M}^{i+1}$ not present in $\mathcal{M}^{i}$
are called \emph{new vertices} \julienMajorRevision{(white spheres in
\autoref{fig_edge_nested})}.
}
\item{\textbf{Old Edge Condition:} Each edge $(v_0, v_1)$ of $\mathcal{M}^{i}$
has to be subdivided at level $i+1$ \julesReplaceMinor{along}{at} exactly one new vertex $v$ of
$\mathcal{M}^{i+1}$:
\begin{eqnarray}
\begin{aligned}
\forall (v_0, v_1) \in \mathcal{M}^{i}_1: \quad \quad
&
|\{ v \in (v_0, v_1), ~
v \notin \mathcal{M}^{i}_0, ~
v \in \mathcal{M}^{i+1}_0 \}| = 1\\
&
(v_0, v) \in \mathcal{M}^{i+1}_1, \quad (v, v_1) \in \mathcal{M}^{i+1}_1 \\
& (v_0, v_1) \notin \mathcal{M}^{i+1}_1
\end{aligned}
\end{eqnarray}
The edges of $\mathcal{M}^{i+1}$
obtained by subdivision of an edge of $\mathcal{M}^{i}$ are called \emph{old
edges}\julesRevision{; they connect old vertices to new vertices}
\julienMajorRevision{(gray cylinders in \autoref{fig_edge_nested})}.
}
\item{\textbf{New Edge Condition:} Each edge of $\mathcal{M}^{i+1}$ which is
not an old edge has to connect two new vertices, and it is called a \emph{new
edge} \julienMajorRevision{(white cylinders in \autoref{fig_edge_nested})}.
}
\end{enumerate}
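The four conditions lend themselves to a direct combinatorial check. The sketch below is illustrative (not part of the described implementation): edges are modeled as frozensets of two vertex ids, and `on_edge` is a hypothetical geometric predicate returning the level-$i$ edge a new vertex lies on.

```python
def is_edge_nested(V0, E0, V1, E1, on_edge):
    """Check the four edge-nested conditions between consecutive levels.

    V0, E0  -- vertices and edges of M^i  (edges: frozensets of 2 vertices)
    V1, E1  -- vertices and edges of M^{i+1}
    on_edge -- hypothetical predicate: the edge of M^i a new vertex lies on"""
    new_v = V1 - V0
    # 1) old vertex condition and 2) new vertex condition
    if not V0 <= V1 or any(on_edge(v) not in E0 for v in new_v):
        return False
    halves = set()
    for e in E0:
        # 3) old edge condition: split at exactly one new vertex,
        #    both halves present, the old edge itself gone
        splitters = [v for v in new_v if on_edge(v) == e]
        if len(splitters) != 1 or e in E1:
            return False
        a, b = tuple(e)
        h1, h2 = frozenset({a, splitters[0]}), frozenset({splitters[0], b})
        if h1 not in E1 or h2 not in E1:
            return False
        halves |= {h1, h2}
    # 4) new edge condition: every remaining edge connects two new vertices
    return all(e <= new_v for e in E1 - halves)
```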
\julesEditsOut{Overall,
\julienRevision{this} hierarchical scheme imposes that, as one progresses
down the
hierarchy, new vertices are inserted along edges
(exactly one new vertex per edge, typically in their center),
and that the additional new edges only connect new vertices
(\autoref{fig_multires_grid}).}
\julesReplaceMinor{\julienMajorRevision{Figure~\ref{fig_edge_nested}}}{\autoref{fig_edge_nested}} presents a simple example of
2D edge-nested triangulation hierarchy. Note that the
\julienRevision{Loop subdivision \cite{loop87}
is compatible with the above formalization, which is more generally termed as
\emph{red} subdivision in the scientific computing literature, and which has
been extensively studied for domains of
two \cite{bank83}, three \cite{bey95, zhang95} and arbitrary dimensions
\cite{freudenthal42}.}
An input PL manifold $\mathcal{M}$ admits an edge-nested
triangulation hierarchy if there exists a hierarchy $\mathcal{H}$
for which $\mathcal{M}$ is the last element ($\mathcal{M} = \mathcal{M}^{h}$).
\begin{figure}
\center
\includegraphics*[width=0.9\linewidth]{fig3.1}
\mycaption{\julesRevision{Edge-nested triangulation hierarchy
\julienReplace{for a simple 2D example.}{generated from a 2D triangulation
constructed using two successive \textit{red} subdivisions of a triangle.}
Old vertices/edges are shown in black/gray. New vertices and
edges are \julienReplace{shown}{showed} in white.} }
\label{fig_edge_nested}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{fig3}
\mycaption{Translation invariant local triangulation pattern for the cells of
a
2D and 3D regular grid.
In 2D, quadrilaterals are subdivided into two triangles (a), always
along the same diagonal. In 3D, the generalization of this pattern
subdivides each hexahedron into six tetrahedra (b).
}
\label{fig_triangulatingGrid}
\end{figure}
\subsection{Edge-Nested Triangulations of Regular Grids}
\label{sec_gridHierarchy}
\julienRevision{While the construction of an edge-nested triangulation
hierarchy given an arbitrary
input manifold $\mathcal{M}$ is an open question which we leave for future work
(see \autoref{sec_limitations}), it can be shown that such a hierarchy
exists for regular grids, and that it can be
implemented
very
efficiently, as discussed by Bey~\cite{bey98}. We describe this implementation
in the following for the sake of completeness\julienReplace{, by detailing how
to efficiently retrieve an
arbitrarily coarse version of the fine triangulation $\mathcal{M}^{h}$ from an
input
regular grid $\mathcal{G}^{0}$.}{.}}
Let $\mathcal{G}^{0}$ be a $d$-dimensional regular grid, with $d$ \julesReplace{equals}{equal to} $2$ or $3$
in our applications, of dimensions $L_x^{0}$, $L_y^{0}$, $L_z^{0}$
$\big(\text{\julesRevision{\textit{i.e.}, with a number of vertices }}|\mathcal{G}^{0}_0| = (L_x^{0} + 1)\times (L_y^{0}+1) \times
(L_z^{0}+1)$, in 2D: $L_z^{0} = 0\big)$.
We will
first
assume that $L_x^{0}$, $L_y^{0}$ and $L_z^{0}$ are all powers of $2$.
Let $\phi_0$ be the
\emph{triangulation operator},
which transforms
$\mathcal{G}^{0}$ into a valid triangulation $\mathcal{M}^{h}$, i.e. $\mathcal{M}^{h} =
\phi_0(\mathcal{G}^{0})$,
by preserving vertex sets, i.e. $\mathcal{M}^{h}_0 = \mathcal{G}^{0}_0$, and
by inserting exactly one edge for each $i$-dimensional cell of $\mathcal{G}^{0}$
($1 < i \leq d$),
according to a unique pattern, which is \emph{invariant by
translation} along the cells of the grid\julienRevision{, known as Kuhn's
triangulation \cite{kuhn60}}.
In $2$D, each
quadrilateral is subdivided into two triangles by inserting one edge
always along the \emph{same diagonal}. In $3$D, each
hexahedron is subdivided into six tetrahedra by
always inserting the \emph{same diagonal} edges
\julien{(\autoref{fig_triangulatingGrid})}.
\begin{figure}
\begin{center}
\adjustbox{width=\linewidth,center}{
\begin{tikzcd}
\centering
\mathcal{M}^{0} \arrow[r]
& \mathcal{M}^{1} \arrow[r]
& \dots \arrow[r]
& \mathcal{M}^{h - 1} \arrow[r]
& \mathcal{M}^{h}\\
\mathcal{G}^{h} \arrow[u, "\phi_{h}"]
& \mathcal{G}^{h-1} \arrow[l, , "\Pi_{h}"] \arrow[u,
"\phi_{h - 1}"]
& \dots \arrow[l, "\Pi_{h - 1}"] \arrow[u]
& \mathcal{G}^{1} \arrow[l, "\Pi_2"] \arrow[u,
"\phi_{1}"]
& \mathcal{G}^{0} \arrow[l, "\Pi_1"] \arrow[u,
"\phi_{0}"]
\end{tikzcd}
}
\mycaption{Commutative diagram for the generation of an edge-nested
triangulation hierarchy $\mathcal{H} = \{\mathcal{M}^0, \mathcal{M}^1, \dots,
\mathcal{M}^h\}$ from a regular grid $\mathcal{G}^0$. The hierarchy can be obtained by a
sequence of decimation operators $\Pi_i$, accompanied with
triangulation
operators $\phi_i$.}
\label{fig_commutativeDiagram}
\end{center}
\end{figure}
Let $\Pi_1$ be the \emph{decimation operator}, which
transforms the regular grid $\mathcal{G}^{0}$ into a regular grid $\mathcal{G}^{1}$, i.e.
$\mathcal{G}^{1} = \Pi_1(\mathcal{G}^{0})$,
by selecting one vertex every
two vertices in each dimension.
Let $(i, j, k)$ be the grid coordinates of a vertex $v \in \mathcal{G}^{0}$.
Then the grid $\mathcal{G}^{1}$ is obtained by only selecting the vertices with even
grid coordinates $(i, j, k)$ in $\mathcal{G}^{0}$.
In $2$D, each
quadrilateral of $\mathcal{G}^{1}$ corresponds in the general case to four
quadrilaterals of $\mathcal{G}^{0}$
and in $3$D, each hexahedron of $\mathcal{G}^{1}$ corresponds to eight hexahedra of
$\mathcal{G}^{0}$. Note that the decimation operator $\Pi_1$ induces a
reciprocal
\emph{subdivision}
operator, which, given $\mathcal{G}^{1}$, yields $\mathcal{G}^{0}$ by
inserting a new
vertex in the center of each $i$-dimensional cell of $\mathcal{G}^{1}$ ($0 < i \leq
d$).
We now introduce by recurrence a sequence of decimation operators
$\Pi_i$
(\autoref{fig_commutativeDiagram}),
which decimate each grid $\mathcal{G}^{i-1}$ into a grid $\mathcal{G}^{i}$ by sub-sampling
its vertices with even grid coordinates as described above. It follows that for
a given level of decimation $i$, the dimensions of $\mathcal{G}^{i}$ are given by
$L_x^{i} = L_x^{0}/2^{i}$,
$L_y^{i} = L_y^{0}/2^{i}$, and
$L_z^{i} = L_z^{0}/2^{i}$.
Let us now consider the sequence of triangulation operators
$\phi_i$, which
triangulate each grid $\mathcal{G}^{i}$ into a triangulation $\mathcal{M}^{h - i}$, i.e.
$\mathcal{M}^{h - i} = \phi_i(\mathcal{G}^{i})$, as illustrated by the
commutative
diagram of \autoref{fig_commutativeDiagram}.
Then, it can be verified (\autoref{fig_multires_grid}) that each
condition of \autoref{sec_multiRes} is indeed satisfied by the sequence
$\mathcal{H} = \{\mathcal{M}^0, \mathcal{M}^1, \dots, \mathcal{M}^h\}$ and that $\mathcal{H}$
is a valid edge-nested triangulation hierarchy.
\julienRevision{In particular, as described by Bey~\cite{bey98}, any
triangulation $\mathcal{M}^{i}$ can be equivalently obtained either: \emph{(i)} by
applying the red subdivision scheme \cite{bank83, bey95, zhang95} $i$ times on
$\mathcal{M}^0$ or \emph{(ii)} by considering the Kuhn
triangulation~\cite{kuhn60}
of $\mathcal{G}^{h-i}$ (itself obtained by $i$ regular subdivisions of $\mathcal{G}^{h}$).
In other words, any triangulation $\mathcal{M}^{i}$ in the commutative diagram
of \autoref{fig_commutativeDiagram} can be obtained by starting either
\emph{(i)} from $\mathcal{M}^0$ or \emph{(ii)} from $\mathcal{G}^h$. In our work, we
exploit this equivalence property, but in \emph{reverse}: we
use it to efficiently retrieve an arbitrarily coarse version of
the fine triangulation $\mathcal{M}^h$ of the input grid $\mathcal{G}^0$.}
\julienRevision{In particular, the}
edge-nested triangulation hierarchy
\julienRevision{$\mathcal{H}$}
can be implemented very
efficiently, by encoding the entire hierarchy implicitly, and by only
maintaining the grid $\mathcal{G}^{0}$ in memory. At a given hierarchy level $i$,
adjacency relations in $\mathcal{M}^i$ between two vertices $v_0$ and $v_1$ can be
inferred based on their grid coordinates at level $i$, $(i_0, j_0, k_0)$ and
$(i_1, j_1, k_1)$, and given the triangulation pattern shown in
\autoref{fig_triangulatingGrid}. Then, the data values associated to the
vertices $v_0$ and $v_1$ can be retrieved by mapping these vertices back to
their original locations in $\mathcal{G}^{0}$, given by the grid coordinates
$(i_0\times2^{h-i}, j_0\times2^{h-i}, k_0\times 2^{h-i})$
and $(i_1\times2^{h-i}, j_1\times2^{h-i}, k_1\times 2^{h-i})$.
This approach is easily extended to support regular grids whose
dimensions,
$L_x^{0}$, $L_y^{0}$ or $L_z^{0}$
are not necessarily powers of 2.
In particular, when considering the decimation operator $\Pi_i$,
in case some of the dimensions
$L_x^{i-1}$, $L_y^{i-1}$ or $L_z^{i-1}$
are not even,
$\Pi_i$
systematically
adds the last vertex of $\mathcal{G}^{i-1}$ for each odd dimension.
\julien{In our progressive algorithms (Sec.~\ref{sec_progressiveCriticalPoints}
and
\ref{sec_progressivePersistenceDiagram}), these few extra vertices will require
full recomputations.}
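For illustration, this implicit encoding reduces to a coordinate scaling (sketched below in Python for the 2D, power-of-two case; the function name and arguments are ours): a vertex with grid coordinates $(x, y)$ at level $i$ is simply read from $\mathcal{G}^0$ at $(x\,2^{h-i}, y\,2^{h-i})$.

```python
def value_at_level(grid, h, level, x, y):
    """Read the value of vertex (x, y) of M^level directly from G^0.

    grid  -- 2D array storing f on the full-resolution grid G^0
    h     -- index of the finest level (M^h triangulates G^0)
    level -- current hierarchy level i (0 = coarsest)
    The whole hierarchy stays implicit: only G^0 is kept in memory."""
    step = 2 ** (h - level)   # spacing of level-i vertices in G^0
    return grid[y * step][x * step]
```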
Below,
we resume our generic description for arbitrary
edge-nested triangulation hierarchies, not necessarily obtained from regular
grids (\autoref{sec_limitations} discusses generalizations).
\begin{figure}
\center
\includegraphics*[width=0.9\linewidth]{fig5_2}
\mycaption{Edge-nested triangulation hierarchy generated from a regular grid.
Old vertices/edges are shown in black/gray in $\mathcal{M}^2$.
There is a one-to-one mapping (colors from $\mathcal{M}^0$ to $\mathcal{M}^1$)
between the edges of $\mathcal{M}^0$
and the new
vertices of $\mathcal{G}^{h-1}$, inserted
in
each $i$-dimensional cell
of $\mathcal{G}^{h}$ ($0 < i \leq d$).}
\label{fig_multires_grid}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figure_3.2}
\mycaption{\julienMajorRevision{Important properties of edge-nested
triangulations, enabling fast updates of local topological
information. (a) Left: From one hierarchy level ($i$) to the next ($i+1$),
edge-nested triangulations preserve the local structure of the link
$Lk(v)^i$ of an old vertex $v$ (red sphere). In particular, there exists a
one-to-one mapping $\Psi_i$ between the vertices and the edges (red arrows) of
$Lk(v)^i$ and $Lk(v)^{i+1}$. (b) Center: this link invariance enables
the fast identification of old vertices which do not change their
\julesReplaceMinor{critical type}{criticality}: these are old vertices (red
sphere) for which the \emph{polarity} (blue
signs) remains unchanged from one hierarchy level ($i$) to the next ($i+1$) and
for which, therefore, connected components of lower and upper links (green and
blue components, respectively) do not change (thus, requiring no
update). Such vertices are called
\emph{topologically invariant old vertices}. (c) Right: A new
vertex $v$ which is \emph{monotonic} (i.e. $f(v_0) < f(v) < f(v_1)$, with $v_0$
and $v_1$ being respectively the lowest and highest vertex of the edge $(v_0,
v_1)$ where $v$ is inserted) is guaranteed to be regular
if all its adjacent new neighbors (in the figure, $n_0$ and $n_1$) are also
\emph{monotonic} (see \autoref{sec_topologicalInvariants} for further
discussion).}}
\label{fig_topo_invariants}
\end{figure*}
\subsection{Topologically Invariant \julien{Vertices}}
\label{sec_topologicalInvariants}
The input edge-nested triangulation hierarchy $\mathcal{H}$
yields a hierarchy of PL scalar fields $\{f^{0}, f^{1},
\dots, f^{h}\}$,
such that each old vertex $v$ maintains by
construction its scalar value:
$f^{i}(v) = f^{j}(v)
= f(v), ~ \forall j ~ / ~ i \leq j \leq h$.
In the following, we show how the
specific structure of
edge-nested triangulation
hierarchies
\julesRevision{described in \autoref{sec_multiRes}}
can be leveraged
to
efficiently update topological information while progressing down the hierarchy.
First, we
show that edge-nested triangulations
preserve the topology of
the link
of vertices when
progressing from one hierarchy level to the next.
This enables the quick identification, discussed next, of
vertices which do not change their \julesReplaceMinor{critical type}{criticality} when progressing down the
hierarchy\julesReplaceMinor{, and which we call \emph{topologically invariant old vertices} and
for which no update will be needed during subsequent analyses }{. We call these
vertices \emph{topologically invariant old vertices}, as they will need
no update during subsequent analyses}
\julesReplaceMinor{(sections~\ref{sec_progressiveCriticalPoints} and
\ref{sec_progressivePersistenceDiagram})}{(\autoref{sec_progressiveCriticalPoints} and \autoref{sec_progressivePersistenceDiagram})}.
\julienMajorRevision{Last, we show how to efficiently identify new vertices
that are guaranteed by construction to be regular points of $f^i$, which we
call \emph{topologically invariant new vertices} and for which no computation
will be required in subsequent analyses.}
\noindent
\textbf{1) Link Topological Invariance:}
A first insight
is that
the link $Lk(v)$ of a vertex $v$ is topologically
invariant throughout the hierarchy.
\julienMajorRevision{This property is important because it will enable the
fast identification of vertices which do not change their
\julienReplaceMinor{critical type}{criticality}
(next
paragraph).}
Let $Lk(v)^{i}$ be the link of
$v$ at level $i$, then there exists a
one-to-one mapping
$\Psi_i$ (\julesReplace{right inset}{\autoref{fig_topo_invariants}(a)})
between the simplices of $Lk(v)^{i}$ and
$Lk(v)^{i+1}$ --
such that $Lk(v)^{i+1} = \Psi_i\big(Lk(v)^{i}\big)$ --
which preserves the simplicial structure of $Lk(v)^{i}$
(which preserves adjacencies).
Indeed,
\emph{(i)} new vertices are only inserted on old edges (this maps the
$k^{th}$ neighbor of $v$ at level $i$ to its $k^{th}$ new neighbor at level
$i+1$, \julienMajorRevision{top red arrow in \autoref{fig_topo_invariants}(a)})
and \emph{(ii)} new edges are only inserted between new
vertices
(this maps the $k^{th}$ edge of $Lk(v)^{i}$ to the
$k^{th}$ new edge of $Lk(v)^{i+1}$,
\julesReplace{inset}{\julienMajorRevision{right red
arrow in }\autoref{fig_topo_invariants}(a)}). This
mapping $\Psi_i$ can be
viewed as a
combinatorially
invariant \emph{zoom} in the neighborhood of $v$ as one
progresses down the hierarchy.
\noindent
\textbf{2) Topologically Invariant Old Vertices:}
A second insight deals with the evolution of the data values on the link
of an \emph{old} vertex, as one
progresses down the hierarchy and zooms with the above mapping $\Psi_i$.
We define the \emph{polarity} of $Lk(v)^{i}$, noted
$\delta : Lk(v)^{i} \rightarrow \{-1, 1\}$ as the field which assigns
to
each neighbor $n$ of $v$ at level $i$ the sign of its function difference with
$v$:
$\delta(n) = sgn\big(f(n) - f(v)\big)$.
The polarity is positive
in the upper link, negative
in
the lower link
\julesReplace{(blue signs, above inset)}{(\autoref{fig_topo_invariants}(b),
blue signs)}.
Let
$(v_0, v_1)$ be an
edge at level $i$, which gets subdivided at level
$i+1$
along a new vertex $v_n$. \julesReplaceMinor{W}{Assuming that $f(v_0)<f(v_1)$}, we say that $v_n$ is \emph{monotonic} if
\julesReplaceMinor{ $f(v_0) < f(v_n) < f(v_1)$ or $f(v_0) > f(v_n) > f(v_1)$}{
$f(v_n) \in \big(f(v_0),f(v_1)\big)$}.
Otherwise,
$v_n$ is \emph{non-monotonic}.
In that case, if $v_n$'s polarity in $Lk(v_0)^{i+1}$ is the opposite of
$v_1$'s polarity in $Lk(v_0)^{i}$,
we say that $v_0$ is \emph{impacted} by its
neighbor $v_n$.
Now, if an old vertex $v$ is not \emph{impacted} by any of its non-monotonic
neighbors,
its link polarity is maintained \julienReplace{(i.e. the blue signs in
\autoref{fig_topo_invariants}(b) remain unchanged when going from
hierarchy level $i$ to $i+1$). This implies that $v$ is guaranteed
to maintain its \emph{criticality}:}{
\julesReplace{(above inset)}{(\autoref{fig_topo_invariants}(b))} and
$v$ is guaranteed to maintain its \emph{criticality}:}
it maintains its critical index (i.e., $\mathcal{I}(v)^{i+1} = \mathcal{I}(v)^{i}$)
or it remains regular.
\julienRevision{Indeed, each
neighbor $n$ which does not impact $v$ maintains \julesReplaceMinor{it}{its}
classification as being upper or lower.
Then,
since there is a one-to-one
mapping $\Psi_{i}$ (see \julesReplace{the inset figure in the above
paragraph}{\autoref{fig_topo_invariants}(a)}) between
$Lk(v)^{i}$
and
$Lk(v)^{i+1}$
which preserves their simplicial structure,
it follows that the complexes
$\Link^{-}(v)^{i+1}$ and
$\Link^{+}(v)^{i+1}$ are
respectively identical to
$\Link^{-}(v)^{i}$ and
$\Link^{+}(v)^{i}$. Thus, the number of connected components of lower and
upper links are maintained, preserving the criticality of $v$.}
Old vertices which are not impacted by their
non-monotonic
neighbors
are called \emph{topologically
invariant old vertices}.
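This test only involves comparing scalar values along the link, as the following sketch illustrates (Python, illustrative names; `new_on_edge` is assumed to map each old neighbor $n$ of $v$ to the new vertex subdividing the edge $(v, n)$):

```python
def is_topologically_invariant_old(v, f, old_link, new_on_edge):
    """Does the old vertex v keep its criticality at the next level?

    v keeps its criticality iff its link polarity is preserved, i.e. each
    new neighbor m (replacing the old neighbor n under the mapping Psi)
    lies on the same side of f(v) as n did. Monotonic new vertices
    preserve polarity automatically; only non-monotonic ones can flip it."""
    for n in old_link:
        m = new_on_edge[n]
        if (f[m] > f[v]) != (f[n] > f[v]):
            return False   # v is impacted: a polarity sign flips
    return True
```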
\noindent
\textbf{3) Topologically Invariant New Vertices:}
A third insight deals with the link of \emph{new} vertices.
Given a new monotonic vertex $v$ \julienMajorRevision{(small red
sphere
in \autoref{fig_topo_invariants}(c))} subdividing an edge
\julienMajorRevision{($v_0, v_1$)}
at level $i$ \julienMajorRevision{(red cylinder in
\autoref{fig_topo_invariants}(c))},
if its new neighbors are all monotonic as well, $v$ is
then called an \emph{interpolating vertex} and it can be shown that $v$
must
be a regular vertex.
\julienRevision{First, since $v$ is monotonic, it cannot
be an
extremum, since by definition it is connected to one lower ($v_0$) and one
upper ($v_1$) old vertex
\julienMajorRevision{(large green and blue spheres
in \autoref{fig_topo_invariants}(c))}. Note that $v_0$ and $v_1$ are the only
old vertices
adjacent to $v$.
Second, to show that $v$ is regular, we argue
that
$\Link^{+}(v)^{i}$ is necessarily connected (and so is
$\Link^{-}(v)^{i}$, symmetrically). Let $(v_0 ,v_1, o)$ be a triangle at level
$i-1$ (\julesReplace{above
inset}{\julienMajorRevision{red triangle in
}\autoref{fig_topo_invariants}(c)}). At level $i$, the edges $(v_0, o)$
and $(v_1, o)$ are subdivided
along the new vertices $n_0$ and
$n_1$ and
the \emph{new} edges $(v, n_0)$, $(v, n_1)$, and $(n_0, n_1)$ are inserted to
connect the new vertices.
Let us assume
that $f(n_0) > f(v)$. $n_0$ is then an \emph{upper} neighbor of $v$ ($n_0
\in \Link^{+}(v)^{i}$). Since $n_0$ is monotonic, this means that the
\emph{outer} old vertex $o$ \julesReplaceMinor{}{(which is not in
$Lk(v)^i$)}
must also be
upper: $f(o) > f(n_0) > f(v)$.
Since $n_1$ is monotonic as well, it follows that $n_1$ is upper too.
Thus, there exists a path
$
\{v_1, n_1, n_0\}
\in Lk(v)^{i}$ \julienMajorRevision{(blue arrow in
\autoref{fig_topo_invariants}(c))}, which connects
$v_1$ to $n_0$
and which is only composed of \emph{upper} vertices. Thus
$n_0$ and
$v_1$ belong to the same connected component of $\Link^{+}(v)^{i}$. The same
reasoning holds for all the new \emph{upper} neighbors of $v$. It follows that
$\Link^{+}(v)^{i}$ and
$\Link^{-}(v)^{i}$ are both made of a single connected component,
containing exactly one old vertex each, $v_1$ and $v_0$ respectively.
Thus, $v$ is regular.
Note that this reasoning readily applies to 2D and 3D.
}
Since interpolating vertices, such as $v$, imply no topological event in the
sub-level sets,
we call them \emph{topologically invariant new
vertices}.
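Detecting these vertices also reduces to simple value comparisons, as sketched below (illustrative Python, names are ours; `edge_of` is assumed to return the old edge a new vertex subdivides, and `new_neighbors` its new neighbors at the current level):

```python
def is_monotonic(v, f, edge_of):
    """Is the value of new vertex v strictly between the values of the
    endpoints of the old edge it subdivides?"""
    a, b = edge_of[v]
    return min(f[a], f[b]) < f[v] < max(f[a], f[b])


def is_interpolating(v, f, edge_of, new_neighbors):
    """A monotonic new vertex whose new neighbors are all monotonic is an
    interpolating vertex: it is guaranteed to be a regular point, so no
    critical point computation is needed for it."""
    return is_monotonic(v, f, edge_of) and all(
        is_monotonic(n, f, edge_of) for n in new_neighbors[v])
```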
\begin{figure}[t]
\includegraphics[width=\linewidth]{fig6}
\mycaption{\emph{Topologically Invariant} (TI) vertices (numbers denote $f$
values).
When progressing down the hierarchy, two \emph{non-monotonic} vertices appear
(red labels). This yields new critical points (cyan: maxima, green: saddles,
brown: minimum).
\textit{Old} TIs (blue labels), whose link polarity is unchanged,
maintain their
criticality.
\textit{New} TIs are regular (green label).
\julienMajorRevision{For \emph{topologically invariant}
vertices (blue and green labels), no computation is required. As illustrated
in \autoref{tab_stats}, TI vertices represent the majority of the data
in real-life datasets.}}
\label{fig_monotony_change}
\end{figure}
The three key insights of edge-nested triangulations discussed above
(summarized in \autoref{fig_monotony_change}) form the
cornerstone of our progressive approach to topological analysis.
As detailed next,
checking if
vertices are
topologically invariant \julesReplaceMinor{reveals}{turns out} to be less computationally expensive in
practice than
computing their criticality from scratch.
Moreover, the set of topologically invariant vertices
tend\julesReplaceMinor{}{s} to represent \julesReplaceMinor{in practice }{}the majority of the hierarchy (see
\autoref{sec_results}).
This allows for the design of efficient
progressive algorithms, presented in the next sections.
\section{Progressive Critical Points}
\label{sec_progressiveCriticalPoints}
Our progressive algorithm for critical point extraction
\julesReplace{initializes}{starts} at the first level of the hierarchy,
$\mathcal{M}^0$, and
progresses
level by level
down the hierarchy $\mathcal{H}$
until reaching
its final level,
$\mathcal{M}^h$, or
until \julesReplaceMinor{user interruption}{interrupted by a user}.
\julienMajorRevision{At each level $i$, our approach delivers the
entire list of critical points of the data for the current resolution ($f^i :
\mathcal{M}^i \rightarrow \mathbb{R}$). For this, our strategy consists of avoiding
recomputation as much as possible and of efficiently and minimally
updating the information computed at the previous level $(i - 1)$.}
\subsection{Initialization and Updates}
\label{sec_cpUpdates}
This section focuses on
the vertices of $\mathcal{H}$ which are not \emph{topologically
invariant}.
The
case of topologically
invariant vertices is discussed in \autoref{sec_cpRegularity}.
\julienReplaceMinor{}{In short, our approach computes the
criticality of each vertex with the traditional method \cite{banchoff70} at
the first hierarchy level. However, for the following levels, instead of
re-starting this computation from scratch, our algorithm maintains the
criticality information computed at the previous levels and only
minimally updates this information, where needed, by using dynamic trees
\cite{sleator83}, a specialized data structure for dynamic connectivity
tracking.}
At the first hierarchy level,
$\mathcal{M}^0$
only
contains new vertices, for which the criticality needs to be initialized. From
the second level on, old and new vertices co-exist
in $\mathcal{M}^1$
and fast update mechanisms can be used to efficiently update the
criticality of the old vertices.
For this, we leverage the
topological invariance of the link of each old vertex
throughout the hierarchy (\autoref{sec_topologicalInvariants}).
This allows us to
store relevant topological information about the link and to quickly update it
when progressing down the hierarchy. In particular, we initialize for each
\emph{new} vertex
$v$
at \julesReplaceMinor{the }{}level $i$ the following information:
\begin{itemize}[leftmargin=.85em]
\item{\emph{Link 1-skeleton:} We store the list of
\emph{local} edges (and their adjacencies) of
$Lk(v)^{i}$\julesReplaceMinor{.}{, encoded with pairs of local indices for the
neighbors of $v$.}
This remains invariant through $\mathcal{H}$
(\autoref{sec_topologicalInvariants}).}
\item{\emph{Link polarity:} We store for each vertex of $Lk(v)^{i}$
its \emph{polarity} (\autoref{sec_topologicalInvariants}), i.e. its
classification as being upper or lower than $v$. This is encoded with one
bit per vertex of $Lk(v)^{i}$.}
\item{\emph{Link dynamic tree:} An efficient data structure \cite{sleator83}
for maintaining connected components in dynamic graphs, discussed
below.}
\end{itemize}
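For illustration, this per-vertex record could be sketched as follows in C++ (the naming and container choices are ours, not those of the actual TTK implementation; the dynamic tree itself is omitted):

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Per-vertex link information, initialized once per new vertex.
// In a 3D grid, a link has at most 14 vertices and 24 local edges,
// so these containers stay small.
struct VertexLinkData {
  // Link 1-skeleton: local edges of Lk(v), as pairs of local neighbor
  // indices. Invariant throughout the hierarchy.
  std::vector<std::pair<uint8_t, uint8_t>> localEdges;
  // Link polarity: one bit per link vertex (true: upper, false: lower).
  std::vector<bool> polarity;
  // The link dynamic tree (dynamic connectivity over the link vertices)
  // would complete this record; it is omitted in this sketch.
};
```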
For each new vertex $v$ which is not topologically invariant,
\julienReplaceMinor{these}{the following}
data
structures are initialized\julesReplaceMinor{ and t}{: a list of pairs of local neighbor indices denoting the local edges of $Lk(v)^i$ (up to 24 pairs in a 3D grid),
a list of bits denoting the polarity of each neighbor (up to 14 neighbors in a 3D grid), and the dynamic tree, detailed below. T}he criticality of $v$ is computed with the
traditional approach (\autoref{sec_criticalPoints}), by enumerating the
connected components of $\Link^{+}(v)^{i}$ and $\Link^{-}(v)^{i}$.
This is usually achieved with
breadth-first search traversals or with a Union-Find (UF) data structure
\cite{cormen}. However, in our setting, we would like to update
these connected components as the algorithm progresses down the hierarchy. In
particular, if a \emph{local} edge $e$ belongs to the upper link of $v$ at
\julesReplaceMinor{the }{}level $i$, but not anymore at \julesReplaceMinor{the }{}level $i+1$, the connected components of
$\Link^{+}(v)^{i+1}$ need to be updated accordingly, preferably without
recomputing them completely. For this, we use dynamic trees \cite{sleator83},
which, like the UF data structure, maintain connected components in a graph
upon edge insertion, but unlike the UF, also maintain them upon edge removal.
In particular, all the vertices of $Lk(v)^{i}$ are initially inserted in the
dynamic tree associated with $v$. Next,
we insert each local edge of $Lk(v)^{i}$
in the dynamic tree,
if both its \julesReplaceMinor{extremities}{ends} \julesReplace{admit}{have}
the same polarity. \julesReplaceMinor{Then, t}{T}he criticality of
$v$ is \julesReplaceMinor{}{then }deduced by enumerating the connected components with positive and
negative polarity, thanks to the dynamic
tree.
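The resulting classification can be sketched as follows; for brevity, this illustration uses a toy union-find (sufficient in the insertion-only setting shown here) as a stand-in for the dynamic tree, which additionally supports edge removal:

```cpp
#include <cassert>
#include <numeric>
#include <set>
#include <utility>
#include <vector>

enum class Criticality { Minimum, Regular, Saddle, Maximum };

// Classify a vertex v from its link: polarity[i] is true if link vertex i
// is upper (f(n_i) > f(v)), and edges lists the local link edges.
// Components are enumerated with a toy union-find; the paper uses dynamic
// trees instead, so that components can also be updated upon edge removal.
inline Criticality classifyVertex(
    const std::vector<bool> &polarity,
    const std::vector<std::pair<int, int>> &edges) {
  const int n = static_cast<int>(polarity.size());
  std::vector<int> parent(n);
  std::iota(parent.begin(), parent.end(), 0);
  auto find = [&parent](int x) {
    while(parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
    return x;
  };
  // Only edges whose two ends have the same polarity connect components
  // of the upper (resp. lower) link.
  for(const auto &e : edges)
    if(polarity[e.first] == polarity[e.second])
      parent[find(e.first)] = find(e.second);
  std::set<int> upper, lower;
  for(int i = 0; i < n; ++i)
    (polarity[i] ? upper : lower).insert(find(i));
  if(lower.empty()) // all neighbors are upper: v is a local minimum
    return Criticality::Minimum;
  if(upper.empty()) // all neighbors are lower: v is a local maximum
    return Criticality::Maximum;
  if(lower.size() == 1 && upper.size() == 1)
    return Criticality::Regular;
  return Criticality::Saddle;
}
```

For instance, on a 2D link (a cycle of six vertices), a single run of upper neighbors yields a regular vertex, while alternating polarities yield a saddle.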
For each old vertex $v$ which is not topologically invariant
(\autoref{fig_dynamic_link_update}), its link polarity
is quickly updated based on the non-monotonic new vertices
of $Lk(v)^{i}$.
Each local edge $e$ of $Lk(v)^{i}$ which is
\emph{impacted} by a polarity flip of
its vertices (\autoref{sec_topologicalInvariants})
is removed from the
dynamic tree associated with $v$ if it was present in it
(to account for the corresponding disconnection of lower/upper link component),
and
added to
it otherwise,
if both its \julesReplaceMinor{extremities}{ends} \julesReplace{admit}{have} the same polarity
(if they belong to the same lower/upper link component).
Then, the criticality of $v$ is quickly updated with the
fast enumeration of the connected components of positive and negative
polarity provided by the dynamic tree.
\julesReplace{}{\julienReplace{Note that such}{Such} an efficient update of
the criticality of $v$ would not
be feasible with a simple UF data structure, as the connected components of the link of $v$
would need to be recomputed from scratch upon edge removal.}
\subsection{Computation Shortcuts}
\label{sec_cpRegularity}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{fig8}
\mycaption{Updating the criticality of a non-topologically invariant old
vertex. From
left to right: initial state, identification of \textit{non-monotonic}
vertices (red circles),
update of the
\textit{link polarity} (red $+$/$-$ signs), and update of the connected
components
of $\Link^{+}(v)$ and $\Link^{-}(v)$.
At each step, edges present in the dynamic tree \cite{sleator83} are
highlighted in red.
Only the edges impacted by polarity flips need to be updated in the
dynamic tree: edges $(0, 1)$, $(3, 4)$ and $(4, 5)$ are
removed, and the edge
$(0, 5)$ is added.
}
\label{fig_dynamic_link_update}
\end{figure}
When moving
from the hierarchy level $i$ to $i+1$, topologically invariant old vertices
are guaranteed to maintain their criticality
(\autoref{sec_topologicalInvariants}).
For these, the dynamic trees
(\autoref{sec_cpUpdates}) do not need to be updated.
Moreover, when moving from the hierarchy level $i$ to $i+1$, each topologically
invariant new vertex $v$
is guaranteed
to be regular.
For these, the dynamic trees (\autoref{sec_cpUpdates})
are not even initialized (they will only be used when $v$ becomes no longer
topologically invariant).
Overall, our procedure to update vertex criticality
can be summarized as follows:
\noindent
\textbf{1) Monotonic vertices:} in this step, we loop over
all new vertices to check whether or not they are monotonic.
\noindent
\textbf{2) Link polarity:} in this step, we loop over all
vertices to initialize/update their link polarity. For old vertices, updates are
only needed for their non-monotonic neighbors.
If an old vertex $v$ is topologically invariant,
no more
computation is required for it at this hierarchy level.
\noindent
\textbf{3) Old vertices:} each old vertex $v$ which is not
topologically invariant efficiently updates
its criticality in $f^{i}$ as described in \autoref{sec_cpUpdates}.
\noindent
\textbf{4) New vertices:} if a new vertex $v$ is
topologically invariant, it
is classified as regular and no more
computation is required for it at this hierarchy level. Otherwise,
its criticality is updated (\autoref{sec_cpUpdates}).
\subsection{Parallelism}
\label{sec_cpParallel}
Critical point computation is an operation which is local to the link of each
vertex.
Thus, each of the four steps introduced above
can be
trivially parallelized over the vertices of
$\mathcal{M}^i$ with shared-memory parallelism. This requires no
synchronization, except for
the sequential transition between two consecutive steps.
\subsection{Extremum Lifetime}
\label{sec_cpLifeTime}
As our algorithm progresses down $\mathcal{H}$,
the
population of critical points evolves.
In practice,
this means that some features of interest may
be captured by the
progressive algorithm earlier than others,
indicating
their importance
in the data.
To evaluate this, we consider for each extremum $e$ the notion of
\emph{Lifetime}, \julesReplaceMinor{noted}{defined as} $l(e) = l_d(e) - l_a(e)$,
where $l_a(e)$ and $l_d(e)$ stand for the levels where $e$ appeared
and disappeared respectively.
The evaluation of this
measure
requires
a
correspondence between the extrema computed at the
levels $i$ and $i+1$,
which is
in general a challenging assignment optimization problem
\cite{Ji2006a,
KleinE07,
bremer_tvcg11,
ReininghausKWH12,
SaikiaW17,
soler_ldav18}.
For simplicity, we focus here on a simple yet time-efficient heuristic for
estimating these correspondences, which can be enabled optionally.
Given a vertex $v$, identified as maximum at hierarchy level $i-1$,
our
heuristic consists \julesReplaceMinor{in}{of} computing, for each neighbor $n$ of $v$, an
integral line $\mathcal{L}^+(n)^{i}$.
Each of these lines terminates on local
maxima of $f^{i}$, which we add to the set of \emph{candidates} for $v$.
At the end of this step, we
establish
the correspondence between $v$ and its
highest candidate in terms of $f^{i}$ values, denoted $m^{*}$, and we say that
$v$
\emph{maps} to $m^{*}$ from $i-1$ to $i$.
To focus the integration on a reasonable neighborhood,
we restrict the number of edges on each integral
line to a user
parameter $L_{max}$,
set to \jules{$10$}
in our experiments.
If the set of \emph{candidates} of $v$ is empty,
the maximum present in $v$ at level $i-1$ is considered to disappear at level
$i$ $\big(l_d(v) = i\big)$.
It is possible, given a maximum $m$ at
level $i$, that no maximum from the level $i-1$ maps to it. In this case, $m$
is said to appear at the level $i$ $\big(l_a(m) = i\big)$.
Finally,
if multiple
maxima at level $i-1$ map to the same
maximum at level $i$, they are all
considered to disappear at level $i$, except for the \emph{oldest}
maximum (the one minimizing $l_a$),
as
suggested by the
\emph{Elder} rule in the case of
persistence \cite{edelsbrunner09}.
This optional procedure is run at each hierarchy level and
enables the progressive estimation of the lifetime of the maxima.
Note that the lifetime of minima is estimated with the symmetric procedure.
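The correspondence resolution described above can be sketched as follows (hypothetical types and names; the computation of the candidate sets via integral lines, and the registration of newly appearing maxima, are omitted):

```cpp
#include <cassert>
#include <map>
#include <vector>

// A tracked extremum: the vertex it currently lives at, its appearance
// level l_a, and its disappearance level l_d (-1 while still alive).
struct Extremum { int vertex; int birthLevel; int deathLevel; };

// Resolve correspondences at hierarchy level `level`: mapsTo[v] is the
// maximum of f^i that the level-(i-1) maximum at vertex v maps to
// (-1 if its candidate set was empty). If several old maxima map to the
// same new one, only the oldest (smallest l_a) survives (Elder rule).
inline void resolveLevel(int level, std::vector<Extremum> &maxima,
                         const std::map<int, int> &mapsTo) {
  std::map<int, int> survivor; // target vertex -> index of oldest mapper
  for(int j = 0; j < (int)maxima.size(); ++j) {
    if(maxima[j].deathLevel != -1)
      continue; // already disappeared at an earlier level
    auto it = mapsTo.find(maxima[j].vertex);
    if(it == mapsTo.end() || it->second == -1) {
      maxima[j].deathLevel = level; // empty candidate set: disappears
      continue;
    }
    auto s = survivor.find(it->second);
    if(s == survivor.end()) { survivor[it->second] = j; continue; }
    int &k = s->second; // Elder rule: the oldest maximum survives
    if(maxima[j].birthLevel < maxima[k].birthLevel) {
      maxima[k].deathLevel = level;
      k = j;
    } else {
      maxima[j].deathLevel = level;
    }
  }
  // Surviving maxima now live at their target vertices.
  for(auto &p : survivor)
    maxima[p.second].vertex = p.first;
}
```

The lifetime of a dead extremum is then simply $l(e) = l_d(e) - l_a(e)$.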
\begin{figure}
\includegraphics[width=\linewidth]{fig9}
\mycaption{
Computing the minimum-saddle persistence diagram from critical points.
Downwards monotonic paths are initiated at saddles to extract a list of
critical point triplets (part A, left), which forms a reduced topological
representation of the data. This reduced representation is efficiently
processed to produce the persistence diagram (part B, right).}
\label{fig_pd_nonProgressiveStrategy}
\end{figure}
\section{Progressive Persistence Diagrams}
\label{sec_progressivePersistenceDiagram}
Our approach for progressive persistence diagrams
leverages and
combines the insights and algorithms introduced in the previous sections. It
\julesReplace{initializes}{starts} at the coarsest hierarchy level, $\mathcal{M}^{0}$, and then iterates
progressively
through the hierarchy levels,
producing
the exact
persistence diagram $\mathcal{D}(f^{i})$ for each level $i$, until $i
= h$.
We first introduce our approach
in the non-progressive case
(\autoref{sec_nonProgressive_diagram},
\autoref{fig_pd_nonProgressiveStrategy}),
and then present our progressive strategy (\autoref{sec_progressiveDiagrams}).
We focus on minimum-saddle persistence pairs, saddle-maximum pairs
being treated symmetrically.
\subsection{Persistence Diagram from Critical Points}
\label{sec_nonProgressive_diagram}
The diagram $\mathcal{D}(f)$
\julienRevision{of the extremum-saddle pairs}
of an input field $f : \mathcal{M} \rightarrow
\mathbb{R}$ is computed as follows.
\julienReplaceMinor{}{In short, critical points are
used as seeds for the computation of monotonic paths, specifically linking
saddles down to minima. This first step identifies merge events occurring at
saddle points (\emph{part A}). The merge events are processed in a second step
(\emph{part B}) to track the connected components of sub-level sets. Similarly
to previous topological techniques based on monotonic paths \cite{ChiangLLR05,
MaadasamyDN12, CarrWSA16, smirnov17}, our approach emulates the usage of a
Union-Find data structure with path compression \cite{cormen} (traditionally
used for connectivity
tracking) by propagating \emph{representants} between merged components.
However, our strategy is specialized for the production of persistence diagrams,
and only visits monotonic paths of minimal length (i.e., integral lines).}
\noindent
\emph{Part A:}
\noindent
\emph{From data to reduced topological information}
\noindent
\textbf{1) Critical points.} First, critical points are extracted
(\autoref{sec_criticalPoints}).
\noindent
\textbf{2) Saddle monotonic paths.}
The second step consists of initiating monotonic paths from each saddle $s$
downwards, to identify at least one minimum for each connected component of
sub-level set merging at $s$ (\autoref{fig_pd_nonProgressiveStrategy}).
For
this,
we initiate
backward integral lines (\autoref{sec_criticalPoints}), for each connected
component of lower link $\Link^{-}(s)$ of each saddle $s$. These integral
lines are guaranteed to terminate in local minima of $f$.
\julesReplaceMinor{In practice, o}{O}nce a backward integral line
$\mathcal{L}^-(s)$ terminates in a local minimum $m$, we back-propagate
the vertex identifier of $m$ and store it for each vertex $v \in
\mathcal{L}^-(s)$. Then, $m$ is called
a
\emph{representant} of $v$,
which is denoted $r(v) = \{m\}$. This strategy enables the early termination of
an integral line $\mathcal{L}^-(s_1)$ when it merges with another one,
$\mathcal{L}^-(s_0)$, computed previously. In that case, we
back-propagate the representants reported by the merge vertex on
$\mathcal{L}^-(s_1)$ back to $s_1$.
At the end of this step, each saddle $s$ is associated with the list of
representants collected by its backward integral lines. These
denote local minima
which
may
have initially created the sub-level set
components merging at $s$.
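This back-propagation strategy can be sketched as follows on a generic graph (a simplified stand-in: a single representant per vertex and plain steepest descent, whereas the actual algorithm traces one line per lower-link component and can report several representants at merge vertices):

```cpp
#include <cassert>
#include <vector>

// Backward integral line from seed vertex s: follow the steepest-descent
// neighbor until a local minimum is reached, or until hitting a vertex
// already visited by a previous line (early termination). The identifier
// of the terminal minimum is then back-propagated as the representant of
// every vertex on the path (representant[v] == -1 means "not visited").
inline int traceToMinimum(int s, const std::vector<double> &f,
                          const std::vector<std::vector<int>> &neighbors,
                          std::vector<int> &representant) {
  std::vector<int> path;
  int v = s;
  while(true) {
    if(representant[v] != -1)
      break; // merged with a previously computed line: reuse its result
    path.push_back(v);
    int next = v;
    for(int n : neighbors[v])
      if(f[n] < f[next])
        next = n; // steepest-descent step
    if(next == v) { representant[v] = v; break; } // local minimum reached
    v = next;
  }
  const int m = representant[v];
  for(int u : path)
    representant[u] = m; // back-propagation along the path
  return m;
}
```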
~
\noindent
\emph{Part B:}
\noindent
\emph{From reduced topological information to persistence diagrams}
\noindent
\textbf{3) Critical triplets.} For each saddle $s$, we create a list of
\emph{critical triplets}, in the form $(s, m_0, m_1)$\julesRevision{, where
$m_0$ and $m_1$ are representants of $s$ and thus are
local minima}.
These are obtained by
considering \julesEditsOut{all the possible (order-independent) }pairs\julesEditsOut{,} among the set of
representants of $s$ (computed previously).
Note that in
practice, for nearly all saddles, this list \julesReplaceMinor{counts}{consists of} only one
triplet, which describes the fact that $s$ separates two pits, $m_0$ and $m_1$.
\julesEditsOut{Only in case of forking or degenerate saddles,
multiple triplets can emerge at a given saddle.}
\julesRevision{\julienReplace{Note that}{However} in case of degenerate
saddles, multiple triplets emerge.
For a degenerate saddle associated with $d$ representants $(m_0,\ldots,m_{d-1})$
in ascending values of $f$,
we create the $d-1$ triplets $(s,m_0,m_i)$ with $0<i<d$.}
\noindent
\textbf{4) Critical point pairing.} This step iterates over the global list of
critical triplets (computed previously) in increasing order of saddle values.
The first triplet $(s_0, m_0, m_1)$ \julesReplaceMinor{models}{represents} the earliest merge
event between connected components of sub-level sets of $f$.
We introduce its \emph{simplified version}, $\big(s_0, r(m_0),
r(m_1)\big)$, which is initially equal to $(s_0, m_0, m_1)$ (initially, a local
minimum is itself its own representant).
The highest of the two minima, for instance $m_1$, is then selected to create
in $\mathcal{D}(f)$ the critical point pair $(s_0, m_1)$. Indeed, since $s_0$
is the earliest merge event, $m_1$ is guaranteed to be the \emph{youngest}
minimum, according to the Elder rule \cite{edelsbrunner09}, which created a
component of sub-level set merging with another one at $s_0$. To model the
\emph{death} of
$m_1$'s
component (its merge with the component containing
$m_0$), we
update its representant
as follows: $r(m_1) \leftarrow r(m_0)$. Thus,
all future merging events involving $m_1$ will re-direct to $m_0$, as the
component born at $m_1$ died by merging with that containing $m_0$ (following
the Elder rule \cite{edelsbrunner09}). This simplification process is iterated
over the (sorted) global list of critical triplets.
At each step, when constructing a simplified triplet $\big(s, r(m_0),
r(m_1)\big)$, we recursively retrieve the representants of $r(m_0)$ and
$r(m_1)$, until we reach minima only representing themselves.
This guarantees that for
each merge event of the sub-level set occurring at a saddle $s$, we can
efficiently retrieve the deepest minimum for each of the
components merging in $s$ and therefore pair it adequately in $\mathcal{D}(f)$.
\julesReplaceMinor{Note that this approach is equivalent to the use of
union-finds to keep track of merge events at saddles
points \julienMajorRevision{(in particular, the recursive update of
representants is equivalent to the so-called \emph{path compression} of UF data
structures \cite{cormen}). However, our approach only needs to process
integral
lines (and not the entire triangulation).}
Note that our strategy
bears global
similarities with previous topological techniques based on
monotonic paths
\cite{ChiangLLR05, MaadasamyDN12, CarrWSA16, smirnov17},
which in fact also use integral lines
as they provide the shortest monotonic paths.
However, our strategy is specialized for the production of persistence diagrams.}{Note that the recursive update of
representants is equivalent to the so-called \emph{path compression} of UF data
structures \cite{cormen}.}
Overall, iterating as described above over the list of triplets results in
populating $\mathcal{D}(f)$ with pairs from bottom to top (by increasing death
values).
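Steps 3 and 4 can be sketched together as follows (a simplified stand-in: the triplets are assumed to be already built, and the representant lookup is iterative, without the write-back of full path compression):

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <utility>
#include <vector>

// A merge event: saddle s (with value f(s)) merging the components
// created by the local minima m0 and m1.
struct Triplet { double saddleValue; int saddle, m0, m1; };

// Pair minima with saddles following the Elder rule: merge events are
// processed by increasing saddle value, and at each event the youngest
// (highest) surviving minimum dies.
inline std::vector<std::pair<int, int>> // (minimum, saddle) pairs
pairCriticalPoints(std::vector<Triplet> triplets,
                   const std::vector<double> &f) {
  std::sort(triplets.begin(), triplets.end(),
            [](const Triplet &a, const Triplet &b) {
              return a.saddleValue < b.saddleValue;
            });
  std::map<int, int> rep; // minimum -> its current representant
  auto find = [&rep](int m) {
    while(rep.count(m) && rep[m] != m)
      m = rep[m]; // walk to the minimum only representing itself
    return m;
  };
  std::vector<std::pair<int, int>> pairs;
  for(const auto &t : triplets) {
    int a = find(t.m0), b = find(t.m1);
    if(a == b)
      continue; // both components already merged earlier
    if(f[a] < f[b])
      std::swap(a, b); // a: the youngest (highest) minimum
    pairs.emplace_back(a, t.saddle); // a's component dies at this saddle
    rep[a] = b; // future merge events redirect to the deeper minimum
  }
  return pairs;
}
```

On a toy example with minima $0, 1, 2$ (values $0, 0.5, 0.2$) and saddles $3, 4$ (values $1, 2$), the pairs $(1, 3)$ and $(2, 4)$ are produced, the global minimum remaining unpaired.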
\subsection{Progressive Strategy}
\label{sec_progressiveDiagrams}
The above algorithm
is divided into two parts
(\emph{A} and
\emph{B}, \autoref{sec_nonProgressive_diagram}).
In particular,
only part \emph{A} can leverage our progressive representation of the
input data (\autoref{sec_progressiveData}), as
part \emph{B} processes reduced topological information which \julesReplace{have}{has} been
abstracted from it and which therefore becomes completely independent.
Thus, we focus our
progressive strategy on part \emph{A}.
This has a negligible impact on practical \julesReplace{performances}{performance}.
In \julesReplaceMinor{practice}{our experience},
part \emph{B} represents
less than \jules{5\%} of the computation on average.
Critical points (Step 1) can be extracted progressively as described in
\autoref{sec_progressiveCriticalPoints}. For Step 2, we investigated
multiple shortcut mechanisms (similar to \autoref{sec_cpRegularity}), to
maintain the monotonic paths which remain valid from level $i$ to $i+1$.
However, in our experience, the overhead induced by this global maintenance
is not compensated by the acceleration it provides at level $i+1$, as monotonic
paths are usually highly localized and thus
already inexpensive to compute (less than $10\%$ of the
non-progressive computation time on average).
Thus, our overall strategy for progressive persistence diagrams
simply consists,
at
each level $i$ of the triangulation hierarchy $\mathcal{H}$, of
progressively updating the critical points
(\autoref{sec_progressiveCriticalPoints})
and then triggering the fast, remaining steps of persistence diagram
computation (2,
3, 4) as described in \autoref{sec_nonProgressive_diagram}.
\subsection{Parallelism}
\label{sec_parallelPDalgortihm}
Our progressive algorithm for persistence diagram computation can be easily
parallelized. The initial critical point computation (Step 1,
\autoref{sec_nonProgressive_diagram}) is parallelized as
described in \autoref{sec_cpParallel}. Saddle integration (Step 2,
\autoref{sec_nonProgressive_diagram}) can be trivially parallelized over
saddles. However, locks need to be used during representant
back-propagation (to guarantee consistency upon concurrent accesses by distinct
monotonic paths).
Critical triplet generation (Step 3, \autoref{sec_nonProgressive_diagram}) is
also
parallelized over saddles. In Step 4 (critical point pairing
\autoref{sec_nonProgressive_diagram}), triplets are sorted in parallel using
\julesReplaceMinor{GNU's efficent}{the efficient GNU} implementation \cite{singler2008gnu}. The remainder of Step 4 is
intrinsically sequential (as representants need to be updated in order of
simplification), but in practice, this step represents less than \jules{1\%}
of the sequential execution, which does not impact parallel efficiency.
\section{Results}
\label{sec_results}
This section presents experimental results obtained on a
computer with two Xeon CPUs (3.0 GHz, 2x4 cores, 64GB of RAM), with a C++
implementation of our algorithms \julienRevision{(publicly available at:
\href{https://github.com/julesvidal/progressive-scalar-topology}{
https://github.com/julesvidal/progressive-scalar-topology})},
\julienRevision{written as}
modules for the
Topology
ToolKit (TTK) \cite{ttk17}.
The
datasets
\julienMajorRevision{are 3-dimensional (at the exception of
\emph{SeaSurfaceHeight}, which is 2-dimensional) and they}
have been downloaded from public repositories
\cite{openSciVisDataSets, ttkData}.
\subsection{Progressive Data Representation}
\label{sec_result_data}
In this section, we study
the practical relevance of our progressive data
representation (\autoref{sec_progressiveData}).
First,
we
evaluate its qualitative relevance. Our approach for persistence
diagram computation (\autoref{sec_progressivePersistenceDiagram}) progressively
refines an estimation of the output $\mathcal{D}(f)$, by efficiently
updating $\mathcal{D}(f^{i})$ at each new hierarchy level $i$.
\julesEditsOut{To evaluate the
quality of this estimation, for each level $i$, we measure
the $L_2$-Wasserstein distance
$\wasserstein{2}\big(\mathcal{D}(f), \mathcal{D}(f^{i})\big)$
(\autoref{sec_persistenceDiagram}).}
\julesRevision{To evaluate quantitatively the relevance of this estimation
\julienMajorRevision{$\mathcal{D}(f^{i})$}, we
measure its \julienReplace{similarity}{distance}
to the final, exact result \julienMajorRevision{$\mathcal{D}(f)$} with the
\emph{Wasserstein distance}, an established
practical metric
inspired by optimal
transport \cite{Kantorovich, monge81}.
\julesReplaceMinor{}{Intuitively, this distance optimizes a matching
between the features of the two diagrams under comparison and penalizes
\emph{mismatches} between these diagrams.}
Given two diagrams $\mathcal{D}(f)$ and
$\mathcal{D}(g)$, a pointwise distance
$\pointMetric{q}$, inspired from the $L^p$ norm, can be introduced
in the 2D birth/death space
between
two points $a = (x_a, y_a) \in \mathcal{D}(f)$ and
$b = (x_b, y_b) \in \mathcal{D}(g)$, with $q > 0$, as:
\vspace{-1.5ex}
\begin{equation}
\pointMetric{q}(a,b)=\left(|x_b-x_a|^q + |y_b-y_a|^q\right)^{1/q} = \|a-b\|_q
\label{eq_pointWise_metric}
\end{equation}
\vspace{-3.5ex}}
\noindent
\julesRevision{By convention, $\pointMetric{q}(a, b)$ is set to zero
if both $a$ and $b$ exactly lie on the diagonal ($x_a = y_a$ and $x_b = y_b$).
The $L_q$-Wasserstein distance, denoted
$\wasserstein{q}$, between $\mathcal{D}(f)$ and
$\mathcal{D}(g)$ can then be introduced as:
\begin{equation}
\wasserstein{q}\big(\mathcal{D}(f), \mathcal{D}(g)\big) =
\min_{\phi
\in \Phi} \left(\sum_{a \in \mathcal{D}(f)}
\pointMetric{q}\big(a,\phi(a)\big)^q\right)^{1/q}
\label{eq_wasserstein}
\end{equation}
}
\noindent
where $\Phi$ is the set of all possible assignments $\phi$ mapping each
point
$a \in \mathcal{D}(f)$ to
a point
$b
\in \mathcal{D}(g)$
or to
its projection onto the diagonal.
$\wasserstein{q}$ can be computed
via
assignment optimization, for which
exact \cite{Munkres1957} and approximate \cite{Bertsekas81, Kerber2016}
implementations are publicly available \cite{ttk17}.
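For concreteness, the pointwise metric of \autoref{eq_pointWise_metric} and the special case $\wasserstein{q}\big(\mathcal{D}(f), \emptyset\big)$, where every point is matched to its diagonal projection, can be transcribed as follows (a direct transcription of the formulas; variable names are ours):

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Pointwise distance delta_q between two diagram points (birth, death):
// the L^q norm of their difference in the 2D birth/death plane.
inline double pointMetric(double xa, double ya, double xb, double yb,
                          double q) {
  return std::pow(std::pow(std::abs(xb - xa), q)
                    + std::pow(std::abs(yb - ya), q),
                  1.0 / q);
}

// Distance from a point (x, y) to its projection ((x+y)/2, (x+y)/2)
// on the diagonal, i.e. the cost of destroying a feature of
// persistence y - x.
inline double toDiagonal(double x, double y, double q) {
  const double m = 0.5 * (x + y);
  return pointMetric(x, y, m, m, q);
}

// W_q between a diagram and the empty diagram: the optimal assignment
// sends every point to the diagonal. This quantity is used to normalize
// the distances reported along the hierarchy.
inline double wassersteinToEmpty(
    const std::vector<std::pair<double, double>> &diagram, double q) {
  double s = 0.0;
  for(const auto &p : diagram)
    s += std::pow(toDiagonal(p.first, p.second, q), q);
  return std::pow(s, 1.0 / q);
}
```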
\julesReplaceMinor{Intuitively, this distance aims at optimizing a matching
between the features of two diagrams to compare ($\mathcal{D}(f)$ and
$\mathcal{D}(g)$) and penalizes
\emph{mismatches} between these diagrams.}{}
\julesRevision{For each level $i$, we measure
the $L_2$-Wasserstein distance
$\wasserstein{2}\big(\mathcal{D}(f), \mathcal{D}(f^{i})\big)$.}
We normalize this distance by dividing it by
$\wasserstein{2}\big(\mathcal{D}(f), \emptyset\big)$.
Then, along the hierarchy $\mathcal{H}$,
this normalized distance decreases from $1$ to $0$
for all datasets.
Although this distance may increase in theory from one level to the next,
\julesReplaceMinor{Figure \ref{fig_convergence}}{\autoref{fig_convergence}} shows that
it is monotonically
decreasing
for
\julienReplace{our datasets (see the appendix for similar convergence
curves on an extended collection of datasets, including two examples
containing a minor oscillation at the beginning of the computation).}{all our
datasets.}
This shows that in practice, the accuracy of
our progressive outputs
indeed improves over time.
\julienRevision{This empirical convergence evaluation gives a global picture of
the quality of our progressive data representation. To further evaluate its
relevance, we report in \autoref{fig_ratio_topPairs}
the ratio of captured \emph{significant pairs} in the diagram $\mathcal{D}(f^{i})$
as a
function of the computation time.
To evaluate this ratio, we select the \emph{significant} pairs of
$\mathcal{D}(f)$, i.e. with a relative persistence greater than $0.1$. Let $n_p$
be the number of such significant pairs (reported for each dataset in the
legend of \autoref{fig_ratio_topPairs}, right, along with its percentage over
the total number of pairs in $\mathcal{D}(f)$, in parentheses).
Next, we select the $n_p$
most persistent pairs in $\mathcal{D}(f^{i})$ and divide the resulting number of
selected pairs, denoted $n_p^i \leq n_p$, by $n_p$.
In short,
this indicator helps appreciate the number of significant features captured by
the hierarchy \julesEditsOut{in its early levels}\julesRevision{early in the computation}. In particular,
\autoref{fig_ratio_topPairs} shows that, for most of the datasets,
the number of captured significant pairs matches the final estimation after
only $10\%$ of the computation time.
\julesReplaceMinor{Figure~\ref{fig_average_topPairs}}{\autoref{fig_average_topPairs}} reports the average persistence of the
$n_p^i$ significant
pairs in $\mathcal{D}(f^{i})$ as a
function of the computation time, relative to the
average persistence of the $n_p$ significant pairs in $\mathcal{D}(f)$.
This indicator helps appreciate how well the
significant
pairs are captured in the data hierarchy. In particular, this figure shows a
clear global trend across datasets: the persistence of the significant pairs
tends to
be underestimated \julesEditsOut{in the early hierarchy levels}\julesRevision{early in the computation} and this estimation improves
over time.
These quantitative observations (early capture of the significant pairs and
underestimation of persistence \julesEditsOut{at the early levels}\julesRevision{at the beginning of the computation}) can be visually observed in
\autoref{fig_ethaneDiol}, which shows that the significant pairs
are
captured early in the data hierarchy (red and yellow pairs) but that their
persistence is indeed underestimated: the corresponding points are
initially close to the diagonal in these diagrams and then
progressively move away from it.}
Next, we evaluate
the computational relevance of our
progressive data representation, by
reporting the number
of Topologically Invariant (TI) vertices (\autoref{sec_topologicalInvariants}),
for which no computation
is needed.
\autoref{tab_stats}
shows that for real-world datasets,
TI vertices represent
72\% of the data on average, which indicates that efficient update mechanisms
can indeed be derived from
our progressive data representation.
\julienRevision{This table also includes the memory overhead induced in
progressive mode by the data structures employed by our topological analysis
algorithms (\autoref{sec_criticalPoints} and
\autoref{sec_persistenceDiagram})\julienReplace{. In particular, this
overhead is estimated by measuring the memory footprint of all the
data-structures which are present in our progressive algorithms \julesReplaceMinor{(sections
\ref{sec_progressiveCriticalPoints} and
\ref{sec_progressivePersistenceDiagram})}{(\autoref{sec_progressiveCriticalPoints}
and \autoref{sec_progressivePersistenceDiagram})}
but \emph{not} present in
the TTK implementation of the state-of-the-art methods. Thus, this column
depicts the additional
memory needed by our approach in comparison to the standard procedures
available in TTK. In particular, this column
shows
a linear evolution of this memory overhead with the size of the data
hierarchy.}{,
which exhibits a linear evolution with the size of the data hierarchy. }
Note
that our implementation is not optimized for memory usage and that substantial
gains can be expected by re-engineering our data structures at a low
level.
}
\begin{table}
\centering
\rowcolors{3}{gray!10}{white}
\resizebox{0.95\columnwidth}{!}{
\input{table1_tvcg.tex}
}
\vspace{0.5em}
\caption{
\julienRevision{Statistics of our progressive data hierarchy. From left to
right: number of vertices, number of levels, memory
\julienReplace{overhead (over TTK)}{footprint}, and number}
of topologically invariant (TI) vertices
(\autoref{sec_topologicalInvariants}) in the data hierarchy.
For real-world datasets (Random and MinMax excluded), topologically invariant
vertices represent 72\% of the data on average.}
\label{tab_stats}
\end{table}
\begin{table}[t]
\vspace{-1ex}
\rowcolors{3}{gray!10}{white}
\centering
\resizebox{0.95\columnwidth}{!}{
\input{table2.tex}
}
\vspace{0.5em}
\caption{\julienRevision{Sequential computation times (in seconds)} of our
algorithms
for critical point extraction (left) and persistence diagram computation
(right). The columns \emph{TTK} report the run times of the default
implementations provided by the Topology ToolKit \cite{ttk17}. The columns
\emph{NP} and \emph{Prog} respectively report the timings for the
non-progressive (directly initialized at the final hierarchy level) and
progressive versions of our algorithms.}
\label{tab_sequentialTimings}
\end{table}
\begin{table}
\rowcolors{3}{gray!10}{white}
\centering
\resizebox{0.95\linewidth}{!}{
\input{table3_tvcg.tex}
}
\vspace{0.5em}
\caption{Parallel computation times (in seconds, 8 cores) of our algorithms
for critical point extraction (left) and persistence diagram computation
(right). The columns \emph{TTK} report the run times of the default
implementations provided by the Topology ToolKit \cite{ttk17}. The columns
\emph{NP} and \emph{Prog} respectively report the timings for the
non-progressive (directly initialized at the final hierarchy level) and
progressive versions of our algorithms.}
\label{tab_parallel}
\end{table}
\begin{figure*}
\includegraphics[width=\linewidth]{foot}
\mycaption{Progressive persistence diagrams (saddle-maximum pairs, from left
to
right) of the CT scan of a foot (leftmost: isosurface), at a few steps of the
computation.
A merge tree based segmentation (colored regions, computed
with TTK \cite{ttk17}) reveals the $5$ most persistent structures in the data.
Colored spheres show the
$5$ most persistent maxima reported by the current diagram estimation,
illustrating a correct capture of the main structures early in the
computation (as of $3\%$ of computation).}
\label{fig_foot}
\end{figure*}
\subsection{Time Performance}
\label{sec_resultsCriticalPoints}
The time complexity of our progressive
algorithm for critical point extraction is linear
in the number of input vertices,
which results in our hierarchical setup in
$\mathcal{O}(\sum_{i = 0}^{i= h} |\mathcal{M}_0^i|)$
steps.
For
persistence diagrams, in the worst possible
configuration (degenerate saddles
with systematic integral line forking), each saddle would generate monotonic
paths which would hit every minimum. This would yield
$n_s \times (n_m - 1)$
critical triplets, where $n_s$ and $n_m$ stand for the
number of saddles and minima of $f$.
This would in turn yield $n_s \times (n_m - 1)$
merge events for the critical pairing step, each with
an amortized complexity of $\mathcal{O}\big(\alpha(n_m)\big)$, where $\alpha$
is the inverse of the Ackermann function.
However, such configurations are extremely
rare in practice and most saddles only yield one triplet, resulting in an
overall practical time complexity of
$\mathcal{O}\big( \sum_{i = 0}^{i= h} (|\mathcal{M}_1^i| + n_s^i\log n_s^i + n_s^i\alpha(n_m^i))\big)$ steps,
also accounting for the sorting of triplets.
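As a side illustration (this is not the paper's TTK implementation), merge events of this kind are typically processed with a disjoint-set forest; union by rank with path compression is what yields the amortized $\mathcal{O}\big(\alpha(n_m)\big)$ cost per merge quoted above. A minimal Python sketch:

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression.

    Each union/find costs amortized O(alpha(n)), with alpha the inverse of
    the Ackermann function, matching the per-merge-event cost quoted above.
    """

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: every visited node is re-attached closer to the root.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        # Returns True if a merge event actually occurred.
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```

In this generic sketch, the elements would stand for extrema and each critical triplet would trigger one `union`; no assumption is made about the actual data layout used in the implementation.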
Table \ref{tab_sequentialTimings} reports computation times (sequential run)
for the default algorithms (\autoref{sec_criticalPoints}) \cite{banchoff70},
\cite{gueunet_tpds19} available in TTK \cite{ttk17} and for the
non-progressive and progressive versions of our algorithms.
Non-progressive methods (\emph{TTK} and \emph{NP} columns)
compute from scratch only the last hierarchy level $h$ directly.
We only report the run times of TTK as an indicative baseline, since
differences in triangulation implementations alone already induce important
run time variations (TTK implicitly emulates triangulations for regular grids
\emph{at query time}, while our implementation stores the explicit list of
link edges for each vertex, \autoref{sec_cpUpdates}).
Interestingly,
the \emph{Speedup} columns
show that, in addition to their ability to provide continuous
visual feedback, our progressive algorithms
are also
faster than
their non-progressive versions
(on
average, $1.8$ times faster for critical points, $1.6$ for persistence
diagrams).
These
speedups confirm that the overhead of processing an entire
hierarchy
($\sum_{i = 0}^{i = h} |\mathcal{M}_0^i|$ vertices in progressive
mode,
instead of $|\mathcal{M}_0^h|$ in
non-progressive mode) and of detecting TI vertices is
largely compensated by the gains these vertices provide.
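As a rough, back-of-the-envelope check (our own illustration, with hypothetical sizes): for a dyadic hierarchy on a $d$-dimensional regular grid, each coarser level holds about $1/2^d$ of the vertices of the next one, so the total number of processed vertices $\sum_{i=0}^{h} |\mathcal{M}_0^i|$ forms a geometric series bounded by $|\mathcal{M}_0^h| / (1 - 2^{-d})$, i.e. only about $14\%$ extra work in 3D:

```python
def hierarchy_overhead(n_finest, d, levels):
    """Total vertices processed over all levels of a dyadic grid hierarchy,
    relative to processing only the finest level (hypothetical sizes).

    Level k below the finest holds ~n_finest / 2**(d*k) vertices, so the
    ratio converges to the geometric-series bound 1 / (1 - 2**(-d)).
    """
    total = sum(n_finest / 2 ** (d * k) for k in range(levels))
    return total / n_finest

# In 3D, the full hierarchy costs ~8/7 of the finest level alone; in 2D, ~4/3.
```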
Note that the datasets
with
the most (resp. least)
TI
vertices (\autoref{tab_stats}) are also those for which the largest
(resp. smallest) speedups are obtained, confirming the importance of
TI
vertices in the computation.
Table \ref{tab_parallel} details the performance of the shared-memory
parallelization of our progressive algorithms, using OpenMP
\cite{dagum1998openmp}, again in comparison to the default
algorithms (\autoref{sec_criticalPoints}) \cite{banchoff70},
\cite{gueunet_tpds19} available in
TTK \cite{ttk17} and to the non-progressive version of our algorithms.
As mentioned in \autoref{sec_cpParallel}, critical point
extraction can be trivially parallelized over vertices, for each of the four
steps of our algorithm, resulting in an average parallel efficiency of 66\%. The
persistence diagram computation results in a more modest efficiency (45\%), as
monotonic path computations are subject to locks, in addition to being
possibly imbalanced.
\subsection{Stress Cases}
Our experiments include two synthetic datasets, whose purpose
is to illustrate the most and the least favorable configurations for our
approach, to better appreciate the dependence of our algorithms on their
inputs. The \emph{MinMax} dataset is an elevation field which only contains
one global minimum and one global maximum.
It therefore exhibits a lot of
regularity. In contrast, the \emph{Random} dataset assigns a random value to
each vertex. Thus, no local coherency can be expected between consecutive levels
in the data hierarchy (which is an important hypothesis in our framework).
Table~\ref{tab_stats} confirms the best/worst case aspect
of these datasets, as they respectively maximize and minimize the ratio of TI
vertices: \emph{MinMax} has nearly only TI vertices ($99.3\%$) while
\emph{Random} has nearly none ($0.9\%$).
Table~\ref{tab_sequentialTimings} confirms, as can be expected,
that these two datasets also maximize and minimize the speedup induced by our
progressive approach. In particular, our progressive algorithms report a
speedup greater than $4$ over their non-progressive versions
for the \emph{MinMax} dataset. This further confirms the
observation made in \autoref{sec_resultsCriticalPoints} that processing an
entire data hierarchy with the acceleration induced by TI vertices can indeed
be faster than computing criticality from scratch at the final hierarchy level
only (in particular, up to $4$ times). In contrast, this table also shows that
in the worst possible case (\emph{Random}, nearly no TI vertices), the
processing of the entire hierarchy can be up to $50\%$ slower (for critical
points, $30\%$ for persistence diagrams) than computing in
non-progressive mode at the final hierarchy level only. All the other datasets
exhibit speedups included within these lower (\emph{Random}) and upper
(\emph{MinMax}) bounds (on average $1.8$ for critical points, $1.6$ for
persistence diagrams).
In terms of quality, the best/worst case aspect of
\emph{MinMax} and \emph{Random} is also illustrated in
Figs.~\ref{fig_convergence},~\ref{fig_ratio_topPairs}
and~\ref{fig_average_topPairs}, where \emph{MinMax} converges
immediately, while \emph{Random} describes the worst case (slow
convergence, slow and inaccurate capture of the significant pairs).
In these curves, the other datasets cover the span of possible behaviors
between these two extreme cases.
\subsection{Progressive Topological Visualization and Analysis}
\label{sec_resultsUI}
This section discusses the progressive visualizations and analyses enabled by
our approach.
\autoref{fig:teaser} presents a typical example of progressive persistence
diagram computation
on the electron density of the adenine-thymine (AT) molecular system.
In this figure, the estimated diagrams
progressively capture the features in a meaningful way, as the heaviest atoms
are captured first and the lightest ones last. In particular, in the diagrams,
the introduced points
progressively stand out from the diagonal towards their
final locations.
As of $33\%$ of the computation, the diagram is complete and its
accuracy is further improved over time.
This illustrates the capacity of our approach to deliver
relevant previews of the topological features of a dataset and to improve them
progressively. \autoref{fig_ethaneDiol} further illustrates our estimation of
the lifetime of extrema and their trajectory in the data hierarchy.
There, as one progresses down the hierarchy, the
prominent maxima are progressively captured
and they quickly stabilize in
the vicinity of their final location.
\autoref{fig_foot} illustrates progressive persistence diagrams for an
acquired dataset. There, a merge tree based segmentation (computed with TTK
\cite{ttk17}) is shown in the background.
It represents the regions of the five most persistent leaf arcs of the merge
tree. The five most persistent maxima reported by the current diagram
estimation are depicted with spheres. As of $3\%$ of the computation, these
maxima are correctly assigned to the final structures (one per toe), while their
positional accuracy is further improved with time. Overall, the diagrams
(bottom) capture the main features early in the computation, while smaller
features and noise are progressively captured as the computation unfolds.
\begin{figure}[t]
\centering
\includegraphics[width=0.845\linewidth]{gallery_column}
\mycaption{Progressive persistence diagrams (saddle-maximum pairs)
for several data sets
(combustion, heptane, boat, hydrogen, aneurism),
at a few steps of the computation.
Persistent maxima are represented with spheres in the
domain (scaled by persistence).
The progressive diagrams capture well the overall shape
(number and salience of features)
of the final, exact output ($100\%$) early in the computation and
refine it over time.
}
\label{fig_gallery}
\end{figure}
\autoref{fig_gallery} presents a
gallery of progressive persistence diagrams
for several datasets.
The diagram
estimations capture well the overall shape of the final, exact output
(i.e. the number and salience of its main features)
and
are progressively refined over time. This gallery complements the quantitative
analysis reported in
Figs.~\ref{fig_convergence}, \ref{fig_ratio_topPairs},
\ref{fig_average_topPairs}
and confirms visually the
interest
of our progressive representations, which provide
relevant
previews
of
the topological features present in a dataset.
We used the TTK library \cite{ttk17} to integrate our implementation
within the popular visualization system ParaView \cite{paraviewBook}, as
shown in the companion video (supplemental material).
This video illustrates the progressive updates of our topological
previews within interactive times, and further demonstrates their interest for
interactive visualization environments.
\begin{figure}
\centering
\includegraphics[width=.95\columnwidth]{isabel_clustering}
\mycaption{
Topological clustering of the Isabel ensemble dataset.
Progressive persistence diagrams (top, interrupted at $5\%$ of
computation)
are used as an input to the clustering approach by Vidal et al.
\cite{vidal20} (constrained to $1$ second of computation). This
time-constrained ensemble clustering yields the same, correct classification
(one color per cluster, from left to right) as the one returned
with the
exact diagrams (bottom).}
\label{fig_clustering}
\end{figure}
\autoref{fig_clustering} illustrates the interest of
our progressive representations for the control of the run time of
batch-mode analysis pipelines.
We consider the Isabel ensemble \cite{scivisIsabel, ttkData} (12 members
illustrating 3
hurricane behaviors:
formation, drift and landfall, \autoref{fig_clustering}, left to right).
Our progressive algorithm is used to generate a persistence diagram for each
member, and is interrupted at a predefined threshold of computation time
(\autoref{fig_clustering}, top). Then,
the TTK implementation of the algorithm by Vidal et al. \cite{vidal20}
is used to cluster these diagrams (with a time constraint of
one second). Overall, this results in a topological clustering pipeline whose
time execution is
fully
controlled, from the feature extraction to their
clustering. For reasonable
computation thresholds
($5\%$ in
\autoref{fig_clustering}), this pipeline returns
the same, correct classification (one color per cluster) as
the one returned with the exact diagrams (bottom). This demonstrates that the
main trends of an ensemble in terms of features (the main clusters) can still
be estimated reliably, while
additionally controlling the execution time of the clustering pipeline.
\subsection{Limitations and Discussion}
\label{sec_limitations}
Our progressive persistence diagrams tend in practice to capture the
main features first. However, this
cannot be guaranteed theoretically.
For instance, sharp spikes in the data (e.g. high amplitude
\emph{and} high frequency noise)
can yield persistent maxima
only at the last levels of the hierarchy, as illustrated in
\autoref{fig_gallery}, where the global maximum of the \emph{Hydrogen} dataset
(fourth row)
belongs to a sharp spike in the center of the data (as also reported by the
quantitative plots in Figs.~\ref{fig_convergence}
and~\ref{fig_average_topPairs}).
This
behavior
prevents the definition of theoretical error bounds on our estimations.
However, the empirical monotonic decrease of the Wasserstein
distance
(\autoref{fig_convergence})
indicates that our progressive representations actually provide reliable
estimations, as confirmed by the indicators of
Figs.~\ref{fig_ratio_topPairs} and~\ref{fig_average_topPairs}, where the
real-world datasets cover the span of possible behaviors between the
two stress cases (\emph{MinMax}, \emph{Random}). This can be explained by
the fact that, in practice, persistent pairs often coincide with large features
in the domain, which get captured early in the data hierarchy.
Although we described our approach generically, we focused in this paper on an
efficient implementation of edge-nested triangulations for regular grids
(\autoref{sec_gridHierarchy}). The generalization of our approach to
generic domains requires investigating triangulation subdivision schemes.
Several of them seem compliant with the notion of
edge-nested triangulation (\autoref{sec_multiRes}), such as
the Loop subdivision \cite{loop87}
and the \emph{red} triangulation refinement
\cite{freudenthal42, bank83, bey95, zhang95}.
However, efficiently
transforming an arbitrary triangulation into a
triangulation which admits an edge-nested hierarchy
is an orthogonal question which we leave for future work. Similarly,
the reliable tracking of extrema through the hierarchy (for lifetime
estimation, \autoref{sec_cpLifeTime}) relates to another orthogonal problem, for
which computationally expensive optimizations
may
need to be considered.
Our algorithms complete a given hierarchical level before moving on to
the next one. This results in increasing update times as the
computation converges. In the future, finer update strategies, based on
adaptive, feature-centric, variable level-of-detail refinement methods, will
be considered. Finally, our algorithm for
persistence diagrams does not support
saddle-saddle pairs in 3D. However, from our experience, the
interpretation of these structures is not obvious in the applications.
Our progressive scheme
seems to be particularly efficient
for algorithms which visit \emph{all} the vertices of the domain
(e.g. critical point extraction), but less beneficial for inexpensive
operations which
only visit small portions of the data
(e.g.
integral line computation, \autoref{sec_progressiveDiagrams}).
This is a
lesson learned from our experiments which could serve
as guideline for future
extensions to other topological analysis algorithms.
Also, there is a trade-off between the benefits of the
progressive scheme and its cost in terms of memory usage.
Future work is needed
to improve the memory footprint of our approach by optimizing our data
structures at a low level.
For instance, for triangulations of regular grids and real-life
tetrahedral meshes, the maximum number of neighbors around a vertex is
typically small,
which enables the encoding of local neighbor identifiers with very few
bits, instead of full integers (as done in our current implementation).
Other variables (such as the polarity, currently
stored with a boolean for each
neighbor) could also benefit from a more compact bit representation.
\section{Conclusion}
\label{sec_conclusion}
This paper introduced an approach for the progressive topological analysis of
scalar data. Our work is based on a hierarchical representation of the input
data and the fast identification of \emph{topologically invariant vertices},
for which we showed that no computation was required as they were introduced in
the hierarchy. This enables the definition of efficient coarse-to-fine
topological algorithms, capable of providing exploitable outputs upon
interruption requests, and of progressively refining them otherwise until the
final, exact output. We instantiated our approach with two examples of
topological algorithms (critical point extraction and persistence
diagram computation),
which leverage efficient update mechanisms for ordinary vertices and avoid
computation for the topologically invariant ones.
For real-life datasets,
our algorithms tend to first capture the most important features of the
data and to progressively refine their estimations with time. This is
confirmed quantitatively with the empirical convergence of the Wasserstein
distance to the final, exact output,
which is monotonically decreasing. More computation time indeed results in more
accuracy. Our experiments also reveal that our progressive computations
even turn out to be faster overall than
non-progressive algorithms and
that they can be further accelerated with shared-memory parallelism.
We showed the interest of
our approach
for interactive data exploration, where our
algorithms
provide
progressive
previews, continuously refined over time, of the topological features
found in a dataset.
We also showed that in batch-mode, our approach
enables to
control the run time of a complete TDA pipeline
(topological clustering of ensemble data).
We believe our work sets
the foundations for several exciting research
avenues.
First, we identified several improvement directions regarding the management of
the input hierarchy, including the extensions to arbitrary triangulations
or
the addition of intermediate hierarchy levels.
Second, since our progressive algorithms can be given a
time-budget constraint and still be resumed afterwards if needed, we would like
to investigate in the future how such preemptable data analysis algorithms
can help for optimizing scheduling policies in high-performance
environments, where data analysis is often run at the same time as data
production and where the allocation of computation resources needs to be finely
optimized.
Third, we believe our approach can be generalized to
higher dimensions (with
tailored sampling methods) as well as to
other topological abstractions, in order to
re-visit the entire TDA arsenal (merge trees, Reeb graphs, Morse-Smale
complexes) in the light of progressivity. In that perspective, the
generalization of topologically invariant vertices
to
Discrete Morse Theory \cite{forman98} looks particularly promising.
\section*{Appendix}
\begin{figure}
\centering
\begin{minipage}{0.9\textwidth}
\centering
\includegraphics[width=.8\textwidth]{plot_wasserstein_all_data}
\caption{
Empirical convergence of the normalized $L_2$-Wasserstein distance for an
extensive list of datasets.
\textbf{Top}: each curve plots the distance between the currently estimated
diagram, $\mathcal{D}(f_i)$, and the final, exact diagram,
$\mathcal{D}(f)$, as a function of the percentage of
computation time (logarithmic scale).
The color map indicates the average distance, from light brown (small
distances, fast convergence) to light green (large distances, slow
convergence).
Extreme synthetic cases are reported in red (dash: \emph{MinMax}, solid:
\emph{Random}).
\textbf{Bottom}:
Average normalized $L_2$-Wasserstein distance (black curve) and standard
deviation (green hull) for all real-life datasets
(\textit{i.e. Random} and \textit{MinMax} excluded) as a function
of the percentage of
computation time. Per-dataset curves are shown in the
background (red: synthetic extreme cases, grey: other
datasets).
}
\label{fig_openscivis_convergence}
\end{minipage}
\end{figure}
\cleardoublepage
Figure~\ref{fig_openscivis_convergence} (top) presents a convergence plot
similar to the one which can be found in the main manuscript, but this time on
an extensive list of real-life datasets:
all the datasets from the
\emph{Open Scientific Visualization Dataset} repository,
\href{https://klacansky.com/open-scivis-datasets/}
{https://klacansky.com/open-scivis-datasets/},
which fit in the main memory of
our experimental setup. This represents a set of $36$ datasets (containing
acquired and simulated data).
As discussed in the paper, while the
monotonic decrease of the $L_2$-Wasserstein distance along the computation
cannot be guaranteed at a theoretical level, it can still be observed in
practice.
Note however that for two examples (\emph{Bunny} and
\emph{Engine}) a slight oscillation can be observed in the early stages of the
computation (between $1\%$ and $2\%$ of the computation time). However, past
this point, the distance keeps on decreasing monotonically. Also, note that the
convergence curves for all datasets are indeed located between the
curves of the two extreme synthetic examples (\emph{MinMax} and \emph{Random}).
The bottom part of Figure~\ref{fig_openscivis_convergence}, which reports the
average distance for
all datasets and its standard deviation, further
confirms the overall convergence tendency.
\end{document}
\section*{Acknowledgments}
We would like to thank the reviewers for their thoughtful remarks and
suggestions.
This work is partially supported by the European Commission grant
H2020-FETHPC-2017 \emph{``VESTEC''} (ref. 800904).
\label{sec:introduction}
In this paper we expand the study of connections between two seemingly different,
yet interrelated classes of scheduling problems. The first class of problems involves a mobile searcher
that must explore an unknown environment so as to locate a hidden target.
Objectives of this nature are often encountered in the domain of robotic search and exploration.
The second class of problem pertains to the design of a computational multi-problem solver, which
may be interrupted at any point in time, and may be queried for its currently best solution
to any of the given problems. This setting provides a very practical modeling of situations that
often arise in the realm of AI applications, such as the design of any-time and real-time intelligent
systems~\cite{Zaimag96}.
Searching for a hidden object in an unbounded domain is a fundamental computational problem, with a rich
history that dates back to early work~\cite{bellman}~\cite{beck:ls} in the context of searching on the infinite line (informally known
as the {\em cow-path problem}). In our work we focus on a generalization of linear search, known as
the {\em star search} or {\em ray search} problem. Here,
we are given a set of $m$ semi-infinite, concurrent rays which intersect at a common origin $O$, as well as
a mobile searcher which is initially placed at the origin.
There is also a target that is hidden at some distance $d$ from $O$, at a ray unknown to the searcher. The objective is
to design a search strategy that minimizes the {\em competitive ratio}, namely the worst-case ratio
of the distance traversed by the robot (up to target detection) over the distance $d$.
Problems related to ray searching have attracted significant interest from the
AI/OR communities.
Optimal competitive ratios were obtained
in~\cite{gal:minimax} and~\cite{yates:plane}.
The setting in which certain probabilistic information concerning the target placement is known was studied
in~\cite{jaillet:online},~\cite{informed.cows}. The effect of randomization on the expected
performance was addressed in~\cite{schuierer:randomized},~\cite{ray:2randomized}. In the case where
an upper bound on the distance from the target is known~\cite{ultimate} provides a near-optimal asymptotic analysis,
whereas in the case where the searcher incurs a fixed turn cost~\cite{demaine:turn} provides an optimal
search strategy. Other work includes the setting of multiple parallel searchers~\cite{alex:robots}, the
related problem of designing {\em hybrid algorithms}~\cite{hybrid},
and more recently, the study of new performance measures~\cite{hyperbolic},~\cite{oil}.
We refer the interested reader to Chapters 8 and 9 in~\cite{searchgames} for further results.
The second class of problems is related to bounded-resource reasoning in the context of
{\em anytime} algorithms~\cite{RZ.1991.composing}.
Such algorithms provide a useful trade-off between computation
time and the quality of the output, when there is uncertainty with respect to the allowed
execution time. More specifically, our goal is to be
able to simulate an {\em interruptible algorithm} by means of repeated executions of a {\em contract algorithm}.
These are both classes of anytime algorithms which, however, differ significantly in terms of their
handling of interruptions. On the one hand, an interruptible algorithm will always produce some meaningful result
(in accordance to its performance profile) whenever an interruption occurs during its execution.
On the other hand, a contract algorithm must be provided, as part of the input, with its pre-specified
computation time (i.e., contract time). If completed by the contract time, the algorithm will
always output the solution consistent with its performance profile, otherwise
it may fail to produce any useful result.
As observed in~\cite{BPZF.2002.scheduling}, contract algorithms tend to be simpler to implement and maintain,
however they lack in flexibility compared to interruptible algorithms. This observation raises the challenge
of simulating an interruptible algorithm using repeated executions of contract algorithms. The precise
framework is as follows: given $n$ instances of optimization problems, and a contract algorithm for each problem,
provide a strategy for scheduling repeated executions of a contract algorithm, in either
a single, or multiple processors. Upon an interruption, say at time $t$, the solution to any of the $n$ problems may
be requested. The system returns the solution that corresponds to the longest completed execution of a contract algorithm
for the problem in question. The standard performance measure of this scheduling
strategy is the {\em acceleration ratio}~\cite{RZ.1991.composing}, which informally
can be described as a resource-augmentation measure:
namely, it implies that an increase of the processor speed by a
factor equal to the acceleration ratio of the schedule yields a system which is as efficient as
one in which the interruption time is known in advance.
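As a concrete illustration (a sketch under our own modeling, not code from the cited works): for a single problem on a single processor, the acceleration ratio of a schedule of increasing contract lengths is driven by interruptions arriving just before a contract completes, since only the previous, shorter contract is then usable. Scheduling contracts of doubling lengths $1, 2, 4, \dots$ drives this ratio towards $4$, the value shown to be optimal for this setting in~\cite{RZ.1991.composing}.

```python
from itertools import accumulate


def acceleration_ratio(lengths):
    """Acceleration ratio of a single-problem, single-processor schedule.

    lengths must be strictly increasing (iterative deepening).  The worst
    interruption arrives just before the i-th contract completes, when the
    longest completed contract is the (i-1)-st one; interruptions before
    the first contract completes are ignored, as is standard.
    """
    finish = list(accumulate(lengths))
    return max(finish[i] / lengths[i - 1] for i in range(1, len(lengths)))


doubling = [2.0 ** i for i in range(40)]  # contract lengths 1, 2, 4, ...
ratio = acceleration_ratio(doubling)      # approaches the optimal value 4
```

Other geometric bases do worse: for example, tripling the lengths pushes the ratio towards $4.5$.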
Previous research has established the optimality of scheduling strategies based on iterative deepening methods
in the settings of single problem/single processor~\cite{RZ.1991.composing,ZCC.1999.realtime},
single problem/multiple processors~\cite{ZCC.1999.realtime} and
multiple problems/single processor~\cite{BPZF.2002.scheduling}. The most general setting of multiple problems and
processors was investigated in~\cite{steins}, which was also the first to demonstrate connections between ray
searching and contract scheduling problems. More specifically, \cite{steins} shows a reduction between specific
classes of search and scheduling strategies known as {\em cyclic} strategies (see Section~\ref{sec:preliminaries}).
Optimal schedules, without restrictions, were established in~\cite{aaai06:contracts}.
Issues related to soft deadlines were addressed in~\cite{soft-contracts}, and measures alternative to the acceleration
ratio have been introduced in~\cite{ALO:multiproblem}.
\smallskip
\noindent
{\bf Contribution of this paper} \
In this work we expand the study of connections between the search and scheduling problems that was initiated
in~\cite{steins}. Namely, we address several settings that provide well-motivated
extensions and generalizations of these two classes of problems. More precisely, we study the following
problems:
\noindent
{\em Uncertain target detection / Monte Carlo contract algorithms:} \ We investigate the setting in which the
searcher detects the target with probability $p$ during each visit, and the setting in which each contract
algorithm is a randomized Monte Carlo algorithm with probability of success equal to $p$.
\noindent
{\em Redundancy and fault tolerance:} \
We seek search strategies under the constraint that at least $r$ visits over the target are required in
order to locate it. In a similar vein, we seek scheduling strategies under the assumption that at least $r$ executions
of a contract algorithm are required so as to benefit from its output. This is related to search and
scheduling with uncertainty, when the probability of success is unknown.
\noindent
{\em Randomized scheduling strategies:} \ We show how access to random bits can improve the expected performance
of a scheduling strategy.
\noindent
{\em Trade-offs between performance and the number of searches and contracts:} \
We quantify the trade-offs between the performance ratios and the number of turns by the searcher or
the number of algorithm executions in the schedule.
For all problems, with the exception of randomized strategies,
we give the first results (to our knowledge) that
apply to both the multi-ray searching and multi-problem scheduling domains.
Concerning randomization, we show how to apply and extend, in a non-trivial manner,
ideas that stem from known randomized ray-searching algorithms.
In addition, we address an open question in~\cite{steins}, who asked ``whether
the contract scheduling and robot search problems have similarities beyond those that result from using
cyclic strategies''. In particular, in Section~\ref{sec:fault} we present non-cyclic strategies that improve
upon the best cyclic ones.
\section{Preliminaries}
\label{sec:preliminaries}
\noindent
{\em Ray searching.} We assume a single robot and $m$ rays, numbered $0 \ldots m-1$.
For a target placement $T$ at distance $d$ from the origin, we define the {\em competitive ratio}
of a strategy as
\begin{equation}
\alpha=\sup_T \frac{\mbox{cost for locating $T$}}{d}
\label{eq:compititive.ratio}
\end{equation}
A strategy is {\em round-robin} or {\em cyclic} if it is described by an infinite sequence
$\{x_i\}_{i=0}^\infty$ as follows: in the $i$-th iteration,
the searcher explores ray $(i \bmod m)$ by starting at the origin $O$, reaching
the point at distance $x_i$ from $O$, and then returning to $O$. A cyclic strategy is called {\em monotone}, if
the sequence $\{x_i\}_{i=0}^\infty$ is non-decreasing. A special class of monotone strategies is the class
of {\em exponential} strategies, namely strategies in which $x_i=b^i$, for some given $b>1$, which we call the {\em base}
of the strategy. Exponential strategies are often optimal among monotone strategies
(see~\cite{searchgames}), and in many cases they are also globally optimal. Indeed, for $m$-ray searching,
the exponential strategy with base $b=\frac{m}{m-1}$ attains the optimal competitive ratio~\cite{gal:general}
\begin{equation}
\alpha^*(m)=1+2\frac{b^m}{b-1}, \ b=\frac{m}{m-1}.
\label{eq:prelim.optimal.cr}
\end{equation}
Note that $\alpha^*(m)=O(m)$, and $\alpha^*(m) \rightarrow 1+2\mathrm e m$ as $m\rightarrow \infty$.
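As a quick numeric sanity check (our own snippet, not part of the original analysis), the closed form above can be evaluated directly; $m=2$ recovers the classical competitive ratio $9$ of searching on the line.

```python
import math

def optimal_cr(m):
    # alpha*(m) = 1 + 2*b^m/(b-1) with b = m/(m-1),
    # i.e., 1 + 2*m^m/(m-1)^(m-1)
    b = m / (m - 1)
    return 1 + 2 * b**m / (b - 1)

print(optimal_cr(2))             # classical linear search: ratio 9
print(optimal_cr(1000) / 1000)   # approaches 2e ~ 5.437 from below
```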
\noindent
{\em Contract scheduling:} We assume a single processor and $n$ problems, numbered $0 \ldots n-1$.
For interruption time $t$, let $\ell_{i,t}$ denote the length (duration) of the longest execution of a contract
algorithm for problem $i$ that has completed by time $t$. Then the acceleration ratio
of the schedule~\cite{RZ.1991.composing} is defined as
\begin{equation}
\beta= \sup _{t,\, i \in \{0,\ldots, n-1\}} \frac{t}{\ell_{i,t}}.
\label{eq:acceleration.ratio}
\end{equation}
Similar to ray searching, a round-robin or cyclic strategy is described by an infinite sequence
$\{x_i\}_{i=0}^\infty$ such that in iteration $i$, the strategy schedules an execution of
a contract for problem $(i \bmod n)$, and of length equal to $x_i$. The definitions of monotone and exponential strategies
are as in the context of ray searching, and we note that, once again, exponential strategies often lead
to optimal or near-optimal solutions (see, e.g.,~\cite{ZCC.1999.realtime,aaai06:contracts,soft-contracts}).
In particular, for $n$ problems, the exponential strategy with base $b=\frac{n+1}{n}$ attains the optimal
acceleration ratio~\cite{ZCC.1999.realtime}
\begin{equation}
\beta^*(n)=\frac{b^{n+1}}{b-1}, \ b=\frac{n+1}{n}.
\label{eq:prelim.optimal.ar}
\end{equation}
Note that $\beta^*(n)=O(n)$, and that $\beta^*(n) \rightarrow \mathrm e(n+1)$, for $n\rightarrow \infty$.
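The analogous check for contract scheduling (again our own snippet): $n=1$ recovers the well-known factor $4$ of the doubling strategy.

```python
import math

def optimal_ar(n):
    # beta*(n) = b^(n+1)/(b-1) with b = (n+1)/n, i.e., (n+1)^(n+1)/n^n
    b = (n + 1) / n
    return b**(n + 1) / (b - 1)

print(optimal_ar(1))          # doubling strategy for one problem: ratio 4
print(optimal_ar(100) / 101)  # approaches e ~ 2.718 from below
```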
Occasionally, we will make a further distinction between worst-case and asymptotic performance. Namely, the
asymptotic competitive ratio is defined as $\lim_{T:\,d \rightarrow \infty} \frac{\mbox{cost for locating $T$}}{d}$,
whereas the asymptotic acceleration ratio is defined as
$\lim _{t \rightarrow \infty} \sup_{i \in \{0,\ldots, n-1\}} \frac{t}{\ell_{i,t}}$ (assuming that the measures converge to a limit).
\section{Search with probabilistic detection and scheduling of randomized contracts}
\label{sec:probabilistic}
In this section we study the effect of uncertainty in search and scheduling.
In particular, we consider the setting in which the detection of a target is stochastic,
in that the target is revealed with probability $p$ every time the searcher passes over it.
Similarly, we address the problem of scheduling randomized contract algorithms; namely, each
execution of the (Monte Carlo) randomized algorithm succeeds with probability $p$. This variant
has been studied in~\cite{searchgames} only in the context of linear search (i.e., when $m=2$),
and the exact competitiveness of the problem is not known even in this much simpler case.
No results are known for general $m$.
In this setting, the search cost is defined as the expected time of the first successful target detection.
Moreover, for every problem $i$ and interruption $t$, we define $\mathbb E[\ell_{i,t}]$ as
the expected longest contract completed for problem $i$ by time $t$. The competitive and the
acceleration ratios are then defined naturally as extensions of~(\ref{eq:compititive.ratio})
and~(\ref{eq:acceleration.ratio}).
We begin with a lower bound on our measures.
\begin{lemma}
Every search strategy with probabilistic detection has competitive ratio at least $\frac{m}{2p}$,
and every scheduling strategy of randomized contract algorithms has acceleration ratio at least
$\frac{n}{p}$.
\label{lemma:probabilistic.lower}
\end{lemma}
\begin{proof}
Consider first the search variant. Let $S$ denote the set of all points at distance at most $d$
from the origin. Given a search strategy and a point $x \in S$, let
$t_x^k$ denote the time at which the searcher reaches $x$ for the $k$-th time. We will first show
that for every $k \geq 1$, there exists $x \in S$ such that the search cost at the time
of the $k$-th visit of $x$ is at least $kmd/2$.
To this end, we will need the assumption that the searcher cannot perform infinitely small oscillations around
a point. More precisely, we will assume that, for arbitrarily small but fixed $\epsilon>0$,
if the searcher visits a point that belongs in an interval of length
$\epsilon$ on some ray, then it must leave the interval before re-visiting this point in the future. This assumption
is required for technical reasons, but also applies naturally to robotic search. Consider the partition of
all points in $S$ into intervals of length $\epsilon$; for each such interval $I$, denote by $c_I$
its midpoint. For each such $c_I$, the searcher needs to enter $I$, visit $c_I$, and eventually leave the interval $I$
$k$ times, which incurs a cost of at least $\epsilon\frac{k}{2}$. Therefore, the overall cost for visiting each center $k$ times is
at least $kmd/2$, which further implies that there exists a point in $S$ with the desired property (namely, the center whose
$k$-th visit occurs last).
Given the above bound, we obtain that targets in $S$ are detected at expected cost at least
$\sum_{k=1}^\infty p(1-p)^{k-1} t_x^k \geq \sum_{k=1}^\infty p(1-p)^{k-1} kmd/2 =md/(2p)$.
The result follows directly from~\eqref{eq:compititive.ratio}.
Consider now the scheduling variant. For a given interruption time $t$ and a given problem
instance $i$, let $l_1^i, l_2^i, \ldots, l_{n_i}^i$
denote the lengths of the contracts for problem $i$ that have completed by time $t$, in non-increasing
order. Let the random variable $\ell_{i,t}$ denote the length of the
longest successful contract for problem $i$ by time $t$.
Then $\mathbb E[\ell_{i,t}]=\sum_{j=1}^{n_i} p(1-p)^{j-1} l_{j}^{i} \leq p \sum_{j=1}^{n_i} l_j^i$.
Since $\sum_{i=0}^{n-1} \sum_{j=1}^{n_i} l_j^{i} =t$, there exists a problem
$i$ for which $\mathbb E[\ell_{i,t}] \leq p \frac{t}{n}$. The claim follows from the definition
of acceleration ratio~(\ref{eq:acceleration.ratio}).
\end{proof}
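Both lower bounds rest on elementary facts about the geometric distribution, in particular the identity $\sum_{k\geq 1} k\, p(1-p)^{k-1} = 1/p$ used above. A minimal numeric check (ours):

```python
# Truncated evaluation of sum_{k>=1} k * p * (1-p)^(k-1), which equals 1/p:
# the expected number of visits until the first successful detection.
p = 0.3
expected_visits = sum(k * p * (1 - p)**(k - 1) for k in range(1, 10_000))
print(expected_visits)  # ~ 1/p = 3.333...
```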
\begin{theorem}
There exists an exponential strategy for searching with probabilistic detection that has
competitive ratio at most $1+8\frac{m}{p^2}$.
\label{thm:probabilistic.search.upper}
\end{theorem}
\begin{proof}
Let $\{x_i\}_{i=0}^\infty$ denote the searcher's exponential
strategy, where $x_i=b^i$, for some $b$ that will be chosen later in the proof.
Let $d$ denote the distance of the target from the origin; then there exists an index $l$
such that $x_l <d \leq x_{l+m}$. We denote by $P_k$ the probability that the target is found
during the $k$-th visit of the searcher, when all previous $k-1$ attempts were unsuccessful,
hence $P_k =(1-p)^{k-1}p$. We also define $q_j \stackrel\cdot{=}\sum_{k=j}^\infty P_k=(1-p)^{j-1}$.
In order to simplify the analysis, we will make the assumption that the searcher can locate
the target only while it is moving away from the
origin (and never while moving towards the origin); it turns out that this assumption
weakens the result only by a constant multiplicative factor.
We first derive an expression for the expected total cost $C$ incurred by the strategy.
Note that first time the searcher passes the target, it has traveled a total distance of at most
$2\sum_{j=0}^{l+m-1}x_j+d$; more generally, the total distance traversed by the searcher at its
$k$-th visit over the target is at most $2\sum_{j=0}^{l+km-1}x_j+d$. We obtain that the expected cost
is bounded by
\[
C=\sum_{k=1}^\infty P_k (2 \sum_{j=0}^{l+mk-1} x_j+d),
\]
from which we further derive (using the connection between $P_k$ and $q_j$) that the competitive ratio of the strategy is
\begin{eqnarray}
\alpha &\leq& \frac{C}{d} \leq 1+ \frac{2}{x_l}\sum_{k=1}^{\infty} P_k \sum_{j=0}^{l+mk-1} x_j \nonumber \\
&=& 1+\frac{2}{x_l} \sum_{j=0}^{l+m-1} x_{j} + \frac{2}{x_l} \sum_{j=2}^\infty q_j \nonumber
\sum_{i=1}^{m-1} x_{l+(j-1)m+i}.
\end{eqnarray}
By rearranging the terms in the summations we observe that
\begin{eqnarray}
&&\sum_{j=2}^\infty q_j \sum_{i=1}^{m-1} x_{l+(j-1)m+i} =
\sum_{i=1}^{m-1} \sum_{j=2}^\infty q_j x_{l+(j-1)m+i} \nonumber \\
&=&
\sum_{i=1}^{m-1} x_{l+i} \sum_{j=2}^\infty (b^m(1-p))^{j-1}.
\end{eqnarray}
By defining $\lambda \stackrel\cdot{=} b^m(1-p)$, and by
combining the above inequalities
we obtain that the competitive ratio is at most
$\alpha \leq 1+ 2\frac{b^m}{b-1} \sum_{j=0}^\infty \lambda^j$.
Note that unless $\lambda<1$, the competitive ratio is unbounded. Assuming that we can choose $b>1$ such that
$\lambda<1$, the competitive ratio is
\begin{equation}
\alpha \leq 1+ 2\frac{b^m}{b-1} \cdot \frac{1}{1-\lambda}.
\label{eq:prob.search.upper.4}
\end{equation}
We will show how to choose the appropriate $b>1$ so as to guarantee the desired competitive ratio.
To this end, we will first need the following technical lemma.
\begin{lemma}
The function $f:\mathbb R^+ \rightarrow \mathbb R$ with $f(x)= \mathrm e^x(1-p)+\mathrm e^x \frac{p^2}{4x}-1$
has a root $r$ such that $0<r\leq \frac{p}{2}$.
\label{lemma:geometric.technical}
\end{lemma}
\begin{proof}
The function $f$ is continuous in the interval $(0,+\infty)$, and for $x \rightarrow 0^+$, $f(x)>0$.
It suffices to show that there exists $y \leq \frac{p}{2}$ such that $f(y) \leq 0$; then the existence of the desired
root follows from Bolzano's theorem. For all $x<1$ we have
\begin{eqnarray}
f(x) &\leq& \frac{1}{1-x}\left(1-p+\frac{p^2}{4x}\right)-1, \
\mbox{ since} \ \mathrm e^x \leq \frac{1}{1-x} \nonumber \\
&\leq& \frac{\left(x-\frac{p}{2} \right)^2}{x(1-x)}.
\end{eqnarray}
Choosing $y=\frac{p}{2}$, we obtain $f(y) \leq 0$, and the lemma follows.
\end{proof}
Let $r$ denote the root of the function $f$, defined in the statement of Lemma~\ref{lemma:geometric.technical}.
We will show that choosing base $b=\frac{m}{m-r}$ yields the desired competitive ratio.
It is straightforward to verify that $b>1$ and that $\lambda=b^m(1-p)<1$.
Hence, the competitive ratio converges to the value given by the RHS of~\eqref{eq:prob.search.upper.4}.
From the choice of $b$, we have that $b-1=\frac{r}{m-r}$ and $b^m \leq e^{r}$.
We then obtain
$
\frac{b^m}{b-1} \cdot \frac{1}{1-\lambda} \leq \frac{\mathrm e^{r}(m-r)}{r\left(1-(1-p) \left(\frac{m}{m-r}\right)^m\right)}
\leq \frac{m\,\mathrm e^{r}}{r\left(1-(1-p) \mathrm e ^{r}\right)}.
$
Recall that from Lemma~\ref{lemma:geometric.technical}, $r$ is such that
$1-(1-p)\mathrm e^{r}=\mathrm e^{r} \frac{p^2}{4r}$. We thus obtain that
$\frac{b^m}{b-1}\frac{1}{1-\lambda} \leq \frac{4m}{p^2}$, and from~(\ref{eq:prob.search.upper.4})
it follows that the competitive ratio of the strategy is at most
$1+8 m/p^2$.
\end{proof}
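The choice of the base can also be explored numerically: the sketch below (ours; the grid resolution and sample parameters are arbitrary) minimizes the right-hand side of~\eqref{eq:prob.search.upper.4} over $b$, and stays below the guarantee $1+8m/p^2$ for the sampled values.

```python
import math

def alpha_bound(m, p, b):
    # RHS of the bound: 1 + 2*b^m/((b-1)(1-lambda)), valid only for lambda < 1
    lam = b**m * (1 - p)
    return math.inf if lam >= 1 else 1 + 2 * b**m / ((b - 1) * (1 - lam))

def best_alpha(m, p, steps=10_000):
    b_max = (1 / (1 - p))**(1 / m)  # lambda < 1 iff b < b_max
    return min(alpha_bound(m, p, 1 + (b_max - 1) * i / steps)
               for i in range(1, steps))

print(best_alpha(2, 0.5))  # well below the guarantee 1 + 8m/p^2 = 65
```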
\begin{theorem}
\label{thm:geometric.contracts}
There exists an exponential strategy for scheduling randomized contract algorithms that has acceleration
ratio at most $\mathrm e \frac{n}{p}+ \frac{\mathrm e}{p}$.
\end{theorem}
\begin{proof}
Let $b$ denote the base of the exponential strategy. It is easy to see that the acceleration ratio is maximized for
interruptions $t$ that are arbitrarily close to, but do not exceed, the finish time of a contract. Let $t$ denote
such an interruption time, in particular right before termination of contract $i+n$, for some $i>0$;
in other words, $t=\frac{b^{i+n+1}-1}{b-1}$.
Then every problem has completed a contract of expected length at least
$p b^i$ by time $t$. Therefore, the acceleration ratio of the schedule is at most
$
\beta \leq \sup_{i>0} \frac{b^{n+i+1}}{pb^i(b-1)},
$
and choosing $b=\frac{n+1}{n}$ we obtain that $\beta \leq
\mathrm e \frac{n}{p}+ \frac{\mathrm e}{p}$.
A more careful analysis of the same strategy yields a better asymptotic acceleration ratio. More specifically,
it is easy to see that for interruption $t$ defined as above and for every problem $j$,
the strategy has completed a contract for problem $j$ of expected length at least
$\sum_{l=0}^k p(1-p)^lb^{i-nl}$, where
$k$ is such that $i-kn=i \bmod n$. It follows that the acceleration ratio
is at most $\frac{b^{n+1}}{p(b-1)} \cdot \frac{1}{\sum_{l=0}^k \left( \frac{1-p}{b^n} \right)^l}$.
Choosing again $b=\frac{n+1}{n}$, and after some simple calculations, we have that the asymptotic acceleration ratio
(obtained for $k \rightarrow \infty$), is at most $(\mathrm e-1+p) \frac{n}{p}+O(\frac{1}{p})$.
\end{proof}
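The worst-case ratio of the schedule in Theorem~\ref{thm:geometric.contracts} can be evaluated exactly by iterating over interruptions placed just before each contract finishes, using the expected completed length from the proof. A sketch (ours; the truncation parameter is an arbitrary choice):

```python
import math

def acc_ratio(n, p, imax=200):
    # Exponential schedule with base b = (n+1)/n; interruption just
    # before contract i+n finishes; the queried problem has completed
    # contracts of lengths b^i, b^(i-n), ..., each succeeding w.p. p.
    b = (n + 1) / n
    worst = 0.0
    for i in range(1, imax):
        t = (b**(i + n + 1) - 1) / (b - 1)
        exp_len = sum(p * (1 - p)**l * b**(i - n * l)
                      for l in range(i // n + 1))
        worst = max(worst, t / exp_len)
    return worst

print(acc_ratio(3, 0.5), math.e * 3 / 0.5 + math.e / 0.5)
```

The computed ratio stays below the simple bound $\mathrm e n/p + \mathrm e/p$, as the refined analysis in the proof predicts.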
\section{Fault tolerance/redundancy in search and scheduling}
\label{sec:fault}
In Section~\ref{sec:probabilistic} we studied the searching and scheduling problems in a stochastic setting.
But what if the success probability is not known in advance? In the absence of such information, one could
opt for imposing a lower bound $r$ on the number of times the searcher has to visit the target and, likewise,
a lower bound $r$ on the number of times a contract algorithm must be executed before its response can be trusted.
Alternatively, this setting addresses the issues of fault tolerance and redundancy in the search and
scheduling domains. The search variant has been studied in~\cite{searchgames} only in the context of linear
search ($m=2$); as in the case of probabilistic detection, even when $m=2$ the exact optimal competitive strategies
are not known.
The following lemma follows using an approach similar to the proof
of Lemma~\ref{lemma:probabilistic.lower}.
\begin{lemma}
Every search strategy on $m$ rays with redundancy guarantee $r \in \mathbb N^+$ has competitive ratio at least $\frac{rm}{2}$.
\label{thm:fault.lower}
\end{lemma}
We first evaluate the best exponential strategy.
\begin{theorem}
The best exponential strategy has competitive ratio at most
$
2(\left \lceil\frac{r}{2}\right \rceil m-1)\ \left(\frac{\lceil\frac{r}{2}\rceil m}{\lceil\frac{r}{2}\rceil m-1}
\right)^{\lceil\frac{r}{2}\rceil m} + 1 \leq 2\mathrm e ( \lceil \frac{r}{2} \rceil m-1) + 1.
$
\label{thm:fault.geometric}
\end{theorem}
\begin{proof}
Let $\{x_i\}_{i=0}^\infty $ denote the exponential strategy, with $x_i=b^i$ for some $b$ to be fixed later.
Suppose that the target is at distance $d$ from the origin, and let $l \in \mathbb N$ be such that
$x_l <d \leq x_{l+m}$. We need to consider cases concerning the parity of $r$.
If $r$ is odd, i.e., $r=2k+1$ for $k \in \mathbb N$, then the cost of the strategy is upper bounded by
$
2\sum_{i=0}^l x_i +2\sum_{i=1}^{(k+1)m-1} x_{l+i}+d = 2\sum_{i=0}^{l+(k+1)m-1} x_{i}+d,
$
whereas if $r$ is even, i.e., $r=2k$, the cost is bounded by
$
2\sum_{i=0}^l x_i +2\sum_{i=1}^{km-1} x_{l+i}+(2x_{l+km}-d)= 2\sum_{i=0}^{l+km}x_i-d.
$
It follows that the competitive ratio of the exponential strategy is at most
$1+2\frac{b^\frac{(r+1)m}{2}-1}{b-1}$, if $r$ is odd, and at most $2\frac{b^\frac{rm}{2}-1}{b-1}-1$ if $r$ is even.
We observe that in both cases, the competitive ratio is essentially identical to the competitive ratio
of an exponential strategy with base $b$, when searching for a single target in $\lceil\frac{r}{2}\rceil m$ rays without
fault-tolerance considerations (with the exception of the negligible additive unit terms).
This motivates the choice of $b=\frac{\lceil\frac{r}{2}\rceil m}{\lceil\frac{r}{2}\rceil m-1}$ as the optimal base of the exponential strategy,
which yields a competitive ratio equal to $2(\lceil\frac{r}{2}\rceil m-1)\ \left(\frac{\lceil\frac{r}{2}\rceil m}
{\lceil\frac{r}{2}\rceil m-1}\right)^{\lceil\frac{r}{2}\rceil m} + 1 \leq
2\mathrm e (\lceil\frac{r}{2}\rceil m-1) + 1$, if $r$ is odd, and
$2(\lceil\frac{r}{2}\rceil m-1)\ \left(\frac{\lceil\frac{r}{2}\rceil m}
{\lceil\frac{r}{2}\rceil m-1}\right)^{\lceil\frac{r}{2}\rceil m} - 1 \leq
2\mathrm e (\lceil\frac{r}{2}\rceil m-1) - 1$, if $r$ is even.
\end{proof}
Interestingly, we can show that there exist non-monotone strategies, which, for $r>2$,
improve upon the (best) exponential strategy of Theorem~\ref{thm:fault.geometric}. For simplicity, let us assume
that $r$ is even, although the same approach applies when $r$ is odd, and leads to identical results up to an
additive constant. In particular, we will consider the following strategy: In iteration $i$, the searcher visits
ray $i \bmod m$ first up to the point at distance $x_{i-m}$, then performs $r$ traversals of the interval
$[x_{i-m},x_i]$ (thus visiting $r$ times each point of the said interval), then completes the iteration by
returning to the origin (here we define $x_j=0$ for all $j<0$). We call this strategy {\sc NM-search} (non-monotone search).
\begin{theorem}
Strategy {\sc NM-search} has competitive ratio at most
$r(m-1) \left( \frac{m}{m-1}\right)^m+2-r.$
\label{fault:nm}
\end{theorem}
\begin{proof}
Suppose that the target lies at a distance $d$ from the origin, and let $l \in \mathbb N$ denote an index such that
$x_l <d \leq x_{l+m}$. Then the cost of locating the target is at most
\[
\sum_{j=0}^{l+m} (r(x_j-x_{j-m})+2x_{j-m})=
r \cdot \sum_{j=0}^{m+l} x_{j}+(2-r)\sum_{j=0}^{l} x_j.
\]
Setting $x_i=b^i$ (which we will fix shortly), and given that $d>x_l$, we obtain that the competitive ratio is at most
\begin{equation}
\alpha \leq r \frac{b^{m+1}}{b-1} +(2-r)\frac{b^{l}-1}{b^l(b-1)}
\leq \frac{rb^{m+1}}{b-1}+(2-r),
\label{eq:fault.nm.1}
\end{equation}
where the last inequality follows from the fact that $b^l>1$. We now observe that~(\ref{eq:fault.nm.1}) is
minimized for $b=\frac{m}{m-1}$. Substituting in~(\ref{eq:fault.nm.1}) yields
\[
\alpha \leq
r(m-1) \left( \frac{m}{m-1}\right)^m+2-r.
\]
\end{proof}
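As a sanity check of the cost accounting in the proof of Theorem~\ref{fault:nm}, one can simulate the actual path of {\sc NM-search} (for even $r$ and a target at distance $d>1$, matching the analysis) and compare the incurred cost against the closed-form sum above. The helper names below are ours:

```python
import math

def nm_search_cost(m, r, b, ray, d):
    # distance travelled until the target (on the given ray, at
    # distance d > 1 from the origin) has been visited r times
    x = lambda j: b**j if j >= 0 else 0.0
    cost, i = 0.0, 0
    while True:
        inner, outer = x(i - m), x(i)
        if i % m == ray and inner < d <= outer:
            # the r-th visit happens during the last of the r traversals
            start = inner if (r - 1) % 2 == 0 else outer
            return cost + inner + (r - 1) * (outer - inner) + abs(d - start)
        # full iteration: out to inner, r traversals, back to the origin
        cost += inner + r * (outer - inner) + (outer if r % 2 else inner)
        i += 1

def proof_bound(m, r, b, d):
    l = math.floor(math.log(d, b))  # x_l < d <= x_{l+m} for non-power d
    return (r * sum(b**j for j in range(l + m + 1))
            + (2 - r) * sum(b**j for j in range(l + 1)))

print(nm_search_cost(2, 2, 2.0, 0, 1.5))  # 12.5, below proof_bound = 14.0
```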
It is very easy to show, by comparing the results of Theorems~\ref{thm:fault.geometric} and~\ref{fault:nm}, that
the non-monotone strategy is superior to the best exponential strategy for $r>2$.
Consider now contract scheduling with redundancy parameter $r$, in the sense that the interruptible system
may output only the solutions of contracts that have been executed at least $r$ times by time $t$. In this setting, the
best schedule is derived from a pseudo-exponential strategy, which is defined in phases as follows: in phase $i \geq 0$,
$r$ contracts for problem $i \bmod n$, and of length $b^i$, are executed, for a given base $b>1$.
It turns out that this strategy attains the optimal acceleration ratio. The proof of the following theorem uses
techniques from~\cite{aaai06:contracts}.
\begin{theorem}
The pseudo-exponential scheduling strategy with base $b=\frac{n+1}{n}$ has acceleration ratio at most
$rn \left(\frac{n+1}{n} \right)^{n+1}$. Furthermore, this acceleration ratio is optimal.
\label{thm:fault.contract}
\end{theorem}
\begin{proof}
The pseudo-exponential strategy with base $b$ can be analyzed using the standard approach (e.g. as in~\cite{ZCC.1999.realtime}),
and its acceleration ratio is equal to $r \frac{b^{n+1}}{b-1}$, which is minimized for $b=\frac{n+1}{n}$. On the other hand, the
lower bound follows based on ideas very similar to~\cite{aaai06:contracts}, which gives a tight lower bound on the acceleration
ratio of every schedule. In particular, the crucial observation is that there exists an optimal schedule with the property that
whenever a new contract is about to be scheduled, the problem with the smallest completed contract length (where completion
is now defined with multiplicity $r$) will be chosen. The remaining technical details follow precisely along the lines of the
proof of Theorem 1 in~\cite{aaai06:contracts}.
\end{proof}
A different setting stipulates that the schedule returns, upon interruption $t$ and for queried problem $p$, the $r$-th largest
contract for problem $p$ that has completed its execution by time $t$. In this setting, we can still apply the pseudo-exponential
strategy (which is clearly non-monotone). We can show, as in ray searching, that this strategy is better than the
best exponential strategy, albeit slightly so.
\begin{theorem}
The best exponential strategy for the $n$-problem contract scheduling problem with redundancy parameter $r$
has acceleration ratio at most
$
\left(rn+1\right)\left(1+\frac{1}{rn}\right)^{rn} \leq \mathrm e\, rn+\mathrm e.
$
Furthermore, there exists a non-monotone strategy which improves upon the best exponential strategy for all $n,r$.
\end{theorem}
\begin{proof}
Let $b$ denote the base of the exponential strategy. Consider a worst-case interruption at time $t$, right before the
end of the $(n+i)$-th contract, i.e., at time $t=\frac{b^{n+i+1}-1}{b-1}$. Then, for every problem $p$, the scheduler
has completed $r$ contracts for $p$ of lengths at least $b^{i-(r-1)n}$. After some simple calculations, we derive
that the acceleration ratio of the strategy is at most $\frac{b^{rn+1}}{b-1}$, which in turn
is minimized for $b=\frac{rn+1}{rn}$, and which proves the claimed bound on the best exponential strategy.
The non-monotone strategy is precisely the pseudo-exponential strategy presented in Theorem~\ref{thm:fault.contract}.
This strategy is strictly better than the best exponential strategy, since the function $f(x)=(1+\frac{1}{x})^x$
is increasing; however, the gap between the two strategies is small. In particular, for $n \rightarrow \infty$, both strategies
converge to the same acceleration ratio.
\end{proof}
The strategies described above establish connections beyond those that result from the
use of cyclic strategies. More precisely, we have shown that non-cyclic ray-searching
algorithms have counterparts in the domain of contract-scheduling; furthermore, the non-cyclic strategies
improve upon the best cyclic ones. We have thus addressed an open question from~\cite{steins},
who asked whether there exist connections between the two problems that transcend cyclic strategies.
\section{Randomized scheduling of contract algorithms}
\label{sec:randomized}
In this section we study the power of randomization for scheduling (deterministic) contract
algorithms. Our approach is motivated by the randomized strategy of~\cite{ray:2randomized}
for searching on $m$ rays. We emphasize, however, that our analysis differs in several key points, and most
notably on the definition of appropriate random events.
We will analyze the following randomized strategy: we choose a random permutation $\pi:\{0, \ldots, n-1 \}
\rightarrow \{0, \ldots n-1\}$ of the $n$ problems, as well as a random $\varepsilon$ uniformly distributed in $[0,1)$.
In every iteration $i \geq 0$, the algorithm executes a contract for problem
$\pi(i \bmod n)$ of length $b^{i+\varepsilon}$, for some base $b>1$.
\begin{theorem}
The acceleration ratio of the randomized strategy is
$\beta_r(n,b)= n \frac {b^{n+1} \ln b}{(b^n-1)(b-1)}$.
\label{thm:full.randomization.upper}
\end{theorem}
\begin{proof}
Let $t$ denote the interruption time. Observe that $t$ can be expressed as $t=\frac{b^k-1}{b-1} b^\delta$, for some unique $k \in \mathbb{N}$ and $\delta$ such that $1 \leq b^\delta <\frac{b^{k+1}-1}{b^k-1}$. For convenience,
we will call the contract execution of length $b^{i+\varepsilon}$ the {\em $i$-th contract} of the strategy, and $i$ the contract {\em index}
(with $i \geq 0$). Note that the start and finish times of the $i$-th contract are $\frac{b^{i}-1}{b-1} b^\varepsilon$ and
$\frac{b^{i+1}-1}{b-1} b^\varepsilon$, respectively.
First, we need to identify the index of the contract during the execution of which the interruption time $t$ occurs; denote this index by $l$. Note that it cannot be that
$l \geq k+1$, since $\frac{b^{k+1}-1}{b-1}b^\varepsilon \geq
\frac{b^{k+1}-1}{b-1} >t$. Similarly, it cannot be that $l \leq k-2$ because
$\frac{b^{k-1}-1}{b-1}b^\varepsilon \leq \frac{ {b^k}-b}{b-1}<\frac{b^k-1}{b-1}\leq t$.
We conclude that either $l=k$, or $l=k-1$. In particular, the random event $(l=k-1)$ occurs
only when $\frac{b^{k}-1}{b-1} b^\varepsilon \geq t= \frac{b^{k}-1}{b-1} b^\delta$,
which implies that $\varepsilon \geq \delta$.
Next, we need to evaluate the expected value of the random variable $D$ that corresponds
to the length of the longest contract for the problem that is requested at time
$t$, and which has completed at time $t$. This will allow us to bound the acceleration ratio $\alpha$
of the randomized strategy, as
\begin{equation}
\sup_t \frac{t} {\mathbb{E}[D]}, \ \textrm{with} \ t= \frac{b^k-1}{b-1} b^\delta \leq \frac{b^{k+1}-1}{b-1}.
\label{eq:randomization:acceleration}
\end{equation}
We consider two cases, depending on whether $\delta \geq1$.
\noindent
{\em Case 1: $\delta \geq 1$}. \ In this case, $\varepsilon < \delta$, which implies, from the above discussion, that $l=k$.
Therefore, the strategy will return one of the contracts with indices $k-1,k-2, \ldots, k-n$, namely the contract that
corresponds to the requested problem. Due to the random permutation of problems performed by the strategy, each of these
indices is equally probable to correspond to the requested problem. We thus obtain
$
\mathbb{E}[D] = \mathbb{E}[D \mid (k=l)] = \frac{1}{n} \sum_{i=1}^n \mathbb{E}[b^{k-i+\varepsilon}]
= \frac{1}{n} \sum_{i=1}^n b^{k-i}\frac{b-1}{\ln b} =
\frac{1}{n} \frac{b^k(b^n-1)}{b^n \ln b},
$
where we used the fact that $\varepsilon$ is uniformly distributed in $[0,1)$.
Combining with~(\ref{eq:randomization:acceleration}) we obtain
$\beta_r(n,b) \leq \frac{b^{k+1}-1}{b-1} \frac{1}{\mathbb E[D]} \leq n \frac {b^{n+1} \ln b}{(b^n-1)(b-1)}$.
\noindent
{\em Case 2: $0 \leq \delta < 1$}, \ in other words, $b^\delta < b$. Note that in this case, the events
$(l=k)$ and $(\varepsilon < \delta)$ are equivalent; similarly for the events $(l=k-1)$ and $(\varepsilon \geq \delta)$.
The following technical lemma establishes $\mathbb E[D]$ in this case.
\begin{lemma}
$\mathbb{E}[D] = \frac{1}{n} \frac{b^{k-1}(b^n-1)b^\delta}{b^n \ln b}$.
\label{lemma:randomization.second.expectation}
\end{lemma}
\begin{proof}
Denote by $F$ and $\overline{F}$ the events $(l=k-1)$ and $(l=k)$, respectively.
We have
\begin{equation}
\mathbb{E}[D] = \mathbb{E}[D \mid F] \ \textrm{Pr($F$)}+
\mathbb{E}[D \mid \overline{F}] \ \textrm{Pr($\overline{F}$)}
\label{eq:randomization.upper.1}
\end{equation}
Moreover,
\begin{eqnarray}
\mathbb{E}[D \mid \overline{F}] &=& \frac{1}{n} \sum_{i=1}^n \mathbb{E}[b^{k-i+\varepsilon} \mid \overline{F}] \nonumber \\
& =&\frac{1}{n} \frac{b^\delta-1} {\ln b} \frac{1} {\textrm{ Pr($\overline{F}$) }} \sum_{i=1}^n b^{k-i} \nonumber \\
&=& \frac{1}{n}\frac{b^\delta-1}{\ln b} \frac{b^{k-n}}{\textrm{ Pr($\overline{F}$) }} \frac{b^n-1}{b-1}.
\label{eq:randomization:expectation1}
\end{eqnarray}
Similarly, we have that
\begin{eqnarray}
\mathbb{E}[D \mid F] &=& \frac{1}{n} \sum_{i=1}^n \mathbb{E}[b^{k-1-i+\varepsilon} \mid F] \nonumber \\
&=&\frac{1}{n} \frac{b-b^\delta} {\ln b} \frac{1} {\textrm{ Pr(F) }} \sum_{i=1}^n b^{k-1-i} \nonumber \\
&=& \frac{1}{n} \frac{b-b^\delta}{\ln b} \frac{b^{k-1-n}}{\textrm{ Pr(F) }} \frac{b^n-1}{b-1}.
\label{eq:randomization:expectation2}
\end{eqnarray}
Here, we use the facts that
\[
\mathbb{E}[b^\varepsilon \mid F] =\frac{1}{\textrm{Pr($F$)}}\int_{\delta}^{1} b^x \,\mathrm dx = \frac{b-b^\delta}{\textrm{Pr($F$)}\, \ln b}, \ \textrm{and}
\]
\[
\mathbb{E}[b^\varepsilon \mid \overline{F}] =\frac{1}{\textrm{Pr($\overline{F}$)}}\int_{0}^{\delta} b^x \,\mathrm dx = \frac{b^\delta-1}{\textrm{Pr($\overline{F}$)}\, \ln b}.
\]
Combining~(\ref{eq:randomization.upper.1}),~(\ref{eq:randomization:expectation1}) and~(\ref{eq:randomization:expectation2})
we obtain, after some calculations, that
\begin{equation}
\mathbb{E}[D] = \frac{1}{n} \frac{b^{k-1}(b^n-1)b^\delta}{b^n \ln b},
\label{eq:randomization.second.expectation}
\end{equation}
which completes the proof.
\end{proof}
Combining Lemma~\ref{lemma:randomization.second.expectation} and~(\ref{eq:randomization:acceleration})
we obtain again that
$\beta_r(n,b) \leq \frac{(b^k-1)b^\delta}{b-1} \frac{1}{\mathbb E[D]} \leq n \frac {b^{n+1} \ln b}{(b^n-1)(b-1)}$.
\end{proof}
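Theorem~\ref{thm:full.randomization.upper} can be cross-checked numerically: averaging over the random permutation reduces $\mathbb E[D]$ to a uniform choice among the $n$ most recent contract indices, and the expectation over $\varepsilon$ can be discretized. A sketch (ours; the grid size and sampled interruption times are arbitrary choices):

```python
import math

def expected_D(n, b, t, grid=4000):
    # E over eps of the average length of the latest completed contract
    # per problem at interruption t (t large enough that l >= n)
    total = 0.0
    for s in range(grid):
        eps = (s + 0.5) / grid
        l = 0  # index of the contract running at time t
        while (b**(l + 1) - 1) / (b - 1) * b**eps <= t:
            l += 1
        total += sum(b**(l - i + eps) for i in range(1, n + 1)) / n
    return total / grid

def beta_formula(n, b):
    return n * b**(n + 1) * math.log(b) / ((b**n - 1) * (b - 1))

for t in (50.0, 80.0, 130.0):
    print(t / expected_D(2, 1.5, t), "<=", beta_formula(2, 1.5))
```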
\subsection{Evaluation of the randomized strategy}
\label{subsec:randomized.evaluation}
In order to evaluate the best randomized exponential strategy, we must find the $b$
that minimizes the function $\beta_r(n,b)$.
It is easy to see, using standard calculus, that $\beta_r(n,b)$ has a unique minimum, for given $n$.
However, unlike the deterministic case, there is no closed form for $\beta_r^*(n)=\min_{b>1} \beta_r(n,b)$.
Thus, we must resort to numerical methods.
Figure~\ref{fig:randomization} illustrates the performance of the randomized strategy $\beta_r^*(n)$ versus
the deterministic optimal strategy, denoted by $\beta^*(n)$. We observe that $\beta^*_r(n) \leq 0.6\beta^*(n)$,
for $n=1, \ldots, 80$.
In fact, we can show analytically that, as $n\rightarrow \infty$, $\beta_r^*(n)/(n+1)$ converges to a value that does not exceed
$\frac{\mathrm e}{\mathrm e-1}$ (recall that $\beta^*(n)/(n+1)$ converges to $\mathrm e$). More precisely, choosing
$b=\frac{n+1}{n}$ we obtain $\beta_r^*(n) \leq n(n+1)\frac{(1+1/n)^n \ln(1+1/n)}{(1+1/n)^n-1}$,
which converges to $(n+1)\frac{\mathrm e}{\mathrm e -1}$, a value extremely close to the computational results.
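As an illustration of this numerical step, the following Python sketch minimizes, by grid search, the upper bound $\beta_r(n,b)=n\,b^{n+1}\ln b/\left((b^n-1)(b-1)\right)$ of the lemma above, together with the deterministic ratio $b^{n+1}/(b-1)$; the grid range and step are arbitrary choices.

```python
import math

def beta_r(n, b):   # randomized upper bound from the lemma
    return n * b**(n + 1) * math.log(b) / ((b**n - 1) * (b - 1))

def beta_det(n, b):  # deterministic exponential strategy
    return b**(n + 1) / (b - 1)

def minimize(f, n, lo=1.0001, hi=6.0, steps=50000):
    # simple grid search over b in (lo, hi)
    return min(f(n, lo + i * (hi - lo) / steps) for i in range(1, steps))

for n in (1, 5, 20):
    r, d = minimize(beta_r, n), minimize(beta_det, n)
    print(n, round(r, 3), round(d, 3), r <= math.e / (math.e - 1) * (n + 1))
```

The last column prints `True` for each $n$, in line with the analytic bound $\frac{\mathrm e}{\mathrm e-1}(n+1)$.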
\begin{figure}[htb!]
\centerline{\includegraphics[height=7cm, width=9 cm]{BC.pdf}}
\caption{Plots of the randomized ($\beta_r^*(n)$) and the deterministic ($\beta^*(n)$)
acceleration ratios, as functions of $n$.}
\label{fig:randomization}
\end{figure}
\section{Trade-offs between performance and executions of searches/algorithms}
\label{sec:preemptions}
Most previous work on ray searching assumes that the searcher
can switch directions at no cost. In practice, turning is a costly operation in robotics,
and thus should not be ignored. In a similar vein, we usually
assume that there is no setup cost upon execution of a contract algorithm; however,
some initialization cost may be incurred in practice. One could address this requirement by
incorporating the turn/setup cost in the performance
evaluation (see~\cite{demaine:turn} for ray searching with turn cost).
In this section we follow a different approach by studying the trade-off between performance
and the number of searches and/or executions of algorithms.
We will make a distinction between two possible settings. In the first setting, we use the standard
definitions of search and scheduling as given in Section~\ref{sec:introduction}. Specifically, we
address the question: Given a target at distance $t$ (resp. an interruption $t$) what is the minimum
number of turns (resp. executions of contracts) so as to guarantee a certain competitive ratio
(resp. acceleration ratio)? We call this the {\em standard} model.
The second setting is motivated by applications in which searching previously explored
territory comes at no cost. One such example is the expanding search paradigm~\cite{thomas:expanding}.
Another example is parallel linear searching on arrays modeled as ray searching~\cite{hyperbolic}, in which
the searcher can ``jump'' to the last-explored position.
While the latter setting does not have a true counterpart in the realm of contract scheduling, it still gives rise
to a scheduling problem. Suppose we have $n$ problems, each with its own statement of an
{\em interruptible} algorithm (as opposed to a contract algorithm). In addition, we allow the use of
{\em preemptions}, in that we can preempt, and later resume the execution of an algorithm. In this context,
we face the scheduling problem of interleaving the executions of interruptible algorithms. Note that we
can still use the acceleration ratio, given by~(\ref{eq:acceleration.ratio}), as the
performance measure, with the notable difference that here $\ell_{i,t}$ denotes the {\em total} (aggregate) time of algorithm
executions for problem $i$, by time $t$.
We call the above model the {\em preemptive} model.
\subsection{Trade offs in the preemptive model}
\label{subsec:preemptive}
We consider first the problem of scheduling interleaved executions of interruptible algorithms. Clearly,
the optimal acceleration ratio is $n$: simply assign each time unit uniformly across all problems, in
a round-robin fashion. However, this optimal strategy results in a linear number of preemptions, as a function of time.
We thus consider the following {\em geometric} round-robin strategy, which is a combination of uniform and
exponential strategies. The strategy works in phases; namely, in phase $i$ ($i \geq 0$), it
executes algorithms for problems $0, \ldots, n-1$, with each algorithm allotted a time span
equal to $b^i$, for fixed $b>1$ (we will call each algorithm execution for a given problem a {\em job} for that problem).
\begin{lemma}
The geometric strategy has (worst-case) acceleration ratio $n(b+1)$, asymptotic acceleration ratio $nb$, and for any
$t$, the number of preemptions incurred up to $t$ is at most $n \log_b \left(\frac{t(b-1)}{n}+1 \right) +n$.
\label{lemma:preemptions.upper}
\end{lemma}
\begin{proof}
The worst-case acceleration ratio of the geometric strategy is attained at interruptions right before the end of a phase, say phase $i$,
in other words, for interruption time $t=n\frac{b^{i+1}-1}{b-1}$. At this time, every problem has been completed to an aggregate
job length equal to $\ell=\sum_{j=0}^{i-1} b^j=\frac{b^i-1}{b-1}$. It is easy to verify that $\frac{t}{\ell} \leq n(b+1)$, and
that $\frac{t}{\ell} \rightarrow nb$, as $t \rightarrow \infty$ (i.e., for $i \rightarrow \infty$).
We now focus on bounding the number of preemptions. Suppose that
the interruption $t$ occurs in the $i$-th phase; then, we can express $t$ as
$t=n \sum_{j=0}^{i-1}b^j+xnb^i$, where $x\in[0,1)$. Therefore, we obtain that $\frac{t}{n} \geq \frac{b^i-1}{b-1}$,
and hence $i \leq \log_b \left(\frac{t(b-1)}{n}+1 \right)$.
On the other hand, the number of preemptions by time $t$ is $I_t \leq in+\lceil xn \rceil \leq n(i+1)$. The result follows.
\end{proof}
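The lemma's bounds can also be checked by direct computation. The sketch below evaluates the worst-case interruptions identified in the proof (just before the end of each phase), under the proof's accounting in which only completed jobs count toward the aggregate length; the parameter values are arbitrary.

```python
import math

def check(n, b, phases=8):
    # geometric round-robin: phase i gives each of the n problems a job of length b**i;
    # only completed jobs count toward the aggregate length ell
    for i in range(1, phases):
        t = n * (b**(i + 1) - 1) / (b - 1)   # interruption just before phase i ends
        ell = (b**i - 1) / (b - 1)           # aggregate of the worst-off problem
        ratio = t / ell
        preemptions = n * (i + 1)            # jobs started by time t
        bound = n * math.log(t * (b - 1) / n + 1, b) + n
        assert ratio <= n * (b + 1) + 1e-9
        assert preemptions <= bound + 1e-9
    return ratio                             # approaches n*b from above as i grows

print(round(check(3, 2.0), 3))   # → 6.024, between n*b = 6 and n*(b+1) = 9
```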
We will now show that the geometric strategy attains essentially the optimal trade-offs.
\begin{theorem}
For any strategy with (worst-case) acceleration ratio $n(1+b)-\epsilon$ for any $b>1$, and constant $\epsilon>0$,
there exists $t$ such that the number of preemptions up to time $t$ is at least $n \log_b \left(\frac{t(b-1)}{n}+1 \right) -n$.
Moreover, any strategy with asymptotic acceleration ratio $nb(1-\epsilon)$, for any constant $\epsilon>0$, incurs
$n \log_b \left(\frac{t(b-1)}{n}+1 \right) -o\left(n \log_b \left(\frac{t(b-1)}{n}+1 \right)\right)$ preemptions by time $t>t_0$,
for some $t_0$.
\label{thm:preemptions.lower}
\end{theorem}
\begin{proof}
For the first part of the theorem, suppose that a strategy $S$ has (worst-case) acceleration ratio $\beta=n(b+1)-\epsilon$,
and incurs fewer than $n \log_b \left(\frac{t(b-1)}{n}+1 \right) -n$ preemptions for any $t$.
We will first show that there exists a strategy $S'$ with the following properties: i) in its first phase, $S'$ executes $n$
jobs, all of unit length, one for each of the $n$ problems; ii) the number of preemptions of $S'$ at time $t$
does not exceed the number of preemptions of $S$ by more than $n$; and iii) $S'$ has no worse acceleration ratio than $S$.
To see this, we will use the canonical assumption that interruptions occur only after at least a job per problem has been
executed (see~\cite{steins}). Let $l_1,l_2, \ldots ,l_n$ denote the aggregate lengths of jobs in this first phase, in
non-decreasing order; here $l_i\geq 1$, for all $i$ (since we may assume, by normalization, that the smallest job length is equal to 1).
Consider then a strategy $S'$ which first schedules $n$ unit jobs, one per problem, followed by $n$ more jobs (again one per problem)
of lengths $l_1-1, \ldots, l_n-1$. From that point onwards, $S'$ is precisely $S$. In other words, $S'$ is derived by substituting the
initial phase of $S$ by two sub-phases, as defined above. It is easy to see that $S'$ has no worse acceleration ratio than $S$.
Moreover, $S'$ introduces at most $n$ new job executions in comparison to $S$. Therefore, $S'$ is such that at time $t$ at most
$n \log_b \left(\frac{t(b-1)}{n}+1 \right)$ preemptions are incurred.
Let $t$ be arbitrarily close to, but smaller than $n(b+1)$.
Then, from the assumption, $S'$ must incur fewer than $n \log_b\left( (b+1)(b-1)+1\right)=2n$ preemptions by time $t$. This would imply
that there is a problem for which $S'$ does not schedule a job within the interval $[n, (b+1)n]$, from which it follows that the acceleration
ratio of $S'$ is at least $n(b+1)$, since at time $t$ there is a problem that has been executed to aggregate length equal to 1, which
is a contradiction.
For the second part of the theorem, fix a strategy $S$ of asymptotic acceleration ratio $\beta=nb(1-\epsilon)$.
Consider a partition of the timeline in phases,
such that the $i$-th phase ($i \geq 0$) spans the interval $[n\sum_{j=0}^{i-1}b^j, n\sum_{j=0}^i b^j)$, and thus has length $nb^i$.
We will show that there exists $i_0>0$ such that for all $i \geq i_0$, $S$ must incur at least $n$ preemptions in its $i$-th phase.
Since the geometric strategy with base $b$ incurs exactly $n$ preemptions in this interval, for all $i$,
this will imply that we can partition the timeline $t \geq i_0$ in intervals with the property that in each interval,
$S$ incurs at least as many preemptions as the geometric strategy, which suffices to prove the result.
Suppose, by way of contradiction, that $S$ incurred at most $n-1$ preemptions within $T=[n\sum_{j=0}^{i-1} b^j, n\sum_{j=0}^{i} b^j]$.
Therefore, there exists at least one problem $p$ with no execution in $T$. Consider an interruption at time
$t=n\sum_{j=0}^{i} b^j -\delta$, for arbitrarily small $\delta>0$. Thus, the aggregate job length for $p$ by time
$t$ in $S$ is $\ell_{p,t}\leq n \sum_{j=0}^{i-1} b^j=n\frac{b^{i}-1}{b-1}$. Since $S$ has asymptotic
acceleration ratio $\beta$, there must exist $i_0$ and $ \epsilon'$ with $0<\epsilon'<\epsilon$ such that for all $i \geq i_0$,
$
n\frac{b^{i+1}-1}{b-1} -\delta \leq nb(1-\epsilon') \frac{b^{i}-1}{b-1},
$
which in turn implies that
$\epsilon' \frac{b^{i}-b}{b-1} \leq \delta$ for all $i>i_0$. This is a contradiction, since $\epsilon'$
depends only on $i_0$, and $\delta$ can be arbitrarily small.
\end{proof}
Next, we consider ray-searching and the trade-offs between the competitive ratio and the number of turns.
Recall that in the model we study, the searcher incurs cost only upon visiting newly explored territory.
In particular, we define the geometric search strategy as a round-robin search of the rays; more precisely,
in the $i$-th phase of the strategy, each ray is explored by an additional distance $b^i$ beyond its previously searched depth.
\begin{theorem}
The geometric search strategy has (worst-case) competitive ratio $(b+1)m$, asymptotic competitive ratio $bm$,
and is such that if the searcher had incurred cost $d$, the overall number of turns is at most
$m \log_b \left(\frac{d(b-1)}{m}+1 \right)+m$.
Moreover, for any search strategy with (worst-case) competitive ratio $m(1+b)-\epsilon$ for any $b>1$, and constant $\epsilon>0$,
there exists a target placement such that the searcher incurs cost $d$, and the
number of turns is at least $m \log_b \left(\frac{d(b-1)}{m}+1 \right) -m$.
Last, any strategy with asymptotic competitive ratio $mb(1-\epsilon)$, for any constant $\epsilon>0$, makes
$m \log_b \left(\frac{d(b-1)}{m}+1 \right) -o\left(m \log_b \left(\frac{d(b-1)}{m}+1 \right)\right)$
turns for search cost $d>d_0$, for some $d_0$.
\label{thm:rays.preemption}
\end{theorem}
\begin{proof}
The proof follows by arguments very similar to the proofs of Lemma~\ref{lemma:preemptions.upper} and
Theorem~\ref{thm:preemptions.lower}.
Concerning the performance of geometric search,
let $0, \ldots ,m-1$ denote the rays visited (in round-robin
order) during each phase. We note that the worst-case placement of the target is attained at points right after the
turn point of the searcher in the end of phase $i$,
and in particular after the searcher has incurred cost $d=m\frac{b^{i+1}-1}{b-1}$, whereas the distance of the target from the origin is
equal to $\sum_{j=0}^{i-1}b^j=\frac{b^i-1}{b-1}$. The bounds on the competitive ratio and the number of turns follow
then by the arguments in Lemma~\ref{lemma:preemptions.upper}.
The trade-offs between the competitive ratio and the number of turns follow from ideas very similar to Theorem~\ref{thm:preemptions.lower}.
More precisely, suppose that a strategy $S$ has (worst-case) competitive ratio $\alpha=m(b+1)-\epsilon$,
and incurs fewer than $m \log_b \left(\frac{d(b-1)}{m}+1 \right) -m$ turns, where $d$ is the search cost.
We can show that there exists a strategy $S'$ of competitive ratio at most $\alpha$, with at most
$m \log_b \left(\frac{d(b-1)}{m}+1 \right)$ turns for some $d$, and is such that in its initial phase, each
ray is searched up to distance 1 from the origin. We then use strategy $S'$ to derive a contradiction
(this construction is as in the proof of Theorem~\ref{thm:preemptions.lower}).
Last, we can show the claimed trade-off between competitive ratio and the number of turns.
Fix a strategy $S$ of asymptotic competitive ratio $\alpha=mb(1-\epsilon)$.
Consider a partition of the timeline in phases, such that the $i$-th phase ($i \geq 0$) spans the interval
$[m\sum_{j=0}^{i-1}b^j, m\sum_{j=0}^i b^j)$, and thus has length $mb^i$. Since the searcher has unit speed, we obtain the same partition
concerning the cost incurred by the searcher.
We will show that there exists $i_0>0$ such that for all $i \geq i_0$, $S$ must make at least $m$ turns in its $i$-th phase.
Since the geometric strategy with base $b$ makes exactly $m$ turns in this interval, for all $i$,
this will imply that we can partition the cost incurred by the searcher $d \geq d_0$ in intervals with the property that in each interval,
$S$ incurs at least as many turns as the geometric strategy, which suffices to prove the result.
Suppose, by way of contradiction, that $S$ made at most $m-1$ turns within $D=[m\sum_{j=0}^{i-1} b^j, m\sum_{j=0}^{i} b^j]$.
Therefore, there exists at least one ray $r$ which was not searched in $D$. This implies that at time
$t=m\sum_{j=0}^{i} b^j -\delta$, for arbitrarily small $\delta>0$, there is a ray that has not been searched to depth more than
$m \sum_{j=0}^{i-1} b^j=m\frac{b^{i}-1}{b-1}$. The remainder of the proof follows precisely as the proof of
Theorem~\ref{thm:preemptions.lower}, by considering a target on ray $r$ placed at distance $m\frac{b^{i}-1}{b-1}+\delta$ from the origin.
\end{proof}
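The upper bound on the number of turns can be checked by simulating the strategy directly. The sketch below assumes, as in the proof's accounting, that phase $i$ adds $b^i$ of new territory on each ray and that each ray visit ends with one turn; the parameter values are arbitrary.

```python
import math

def turns_by_cost(m, b, d):
    # round-robin geometric search in the jump-back model: in phase i the searcher
    # extends each ray by an additional depth b**i; cost counts new territory only
    cost, turns, i = 0.0, 0, 0
    while True:
        for _ in range(m):
            if cost + b**i >= d:   # cost budget d is exhausted mid-visit
                return turns + 1
            cost += b**i
            turns += 1
        i += 1

m, b = 4, 2.0
for d in (10.0, 100.0, 1000.0):
    bound = m * math.log(d * (b - 1) / m + 1, b) + m
    print(turns_by_cost(m, b, d) <= bound)   # True in each case
```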
\subsection{Trade offs in the standard model}
\label{subsec:standard}
The ideas of Section~\ref{subsec:preemptive} can also be applied in the standard
model. In this setting, however, exponential strategies are a more suitable candidate.
\begin{theorem}
\noindent
For contract scheduling, the exponential strategy with base $b$ has acceleration ratio $\frac{b^{n+1}}{b-1}$,
and schedules at most $\log_b (t(b-1)+1)+1$ contracts by $t$. Moreover, any strategy with acceleration
ratio at most $\frac{b^{n+1}}{b-1}-\epsilon$ for $b>1$, and any $\epsilon>0$ must schedule at least
$\log_b (t(b-1)+1)-o(\log_b (t(b-1)+1))$ contracts by $t$, for all $t \geq t_0$.
\label{thm:scheduling.standard}
\end{theorem}
\begin{proof}
It is known that any exponential strategy with base $b$ has acceleration ratio $\frac{b^{n+1}}{b-1}$.
If an interruption $t$ occurs during the $i$-th execution of a contract, then $t \geq \sum_{j=0}^{i-1}b^j=\frac{b^i-1}{b-1}$.
Thus, $i \leq \log_b(t(b-1)+1)$, and since the number of contracts by time $t$ is at most $i+1$, we obtain the desired upper bound.
For the lower bound, we will show a result even stronger than claimed in the statement of the theorem. More precisely, we
will show that any schedule $S$ with acceleration ratio $\beta=\frac{b^{n+1}}{b-1}$
must schedule at least $n+1$ contracts in the timespan $T=[\sum_{j=0}^{i-n-1} b^j, \sum_{j=0}^{i} b^j]$, for all $i$
(we will thus allow even $\epsilon=0$).
Since the exponential strategy with base $b$ schedules exactly $n+1$ contracts in this interval, this will imply that
we can partition the timeline in intervals with the property that in each interval, $S$ schedules at least as
many contracts as the exponential strategy, which suffices to prove the result.
Suppose, by way of contradiction, that $S$ scheduled at most $n$ contracts in the timespan $T=[\sum_{j=0}^{i-n-1} b^j, \sum_{j=0}^{i} b^j]$.
Therefore, at most one contract for each problem has been executed in this interval.
Consider an interruption at time
$t=\sum_{j=0}^{i} b^j -\delta$, for arbitrarily small $\delta>0$. From the assumption, there is at least one problem $p$
for which $S$ did not {\em complete} any contract in the time span $[\sum_{j=0}^{i-n-1} b^j, \sum_{j=0}^{i} b^j-\delta]$.
Thus, the largest contract for $p$ that has completed by time $t$ in $S$
can have length at most $l=\sum_{j=0}^{i-n-1} b^j=\frac{b^{i-n}-1}{b-1}$. Since $S$ has acceleration ratio $\beta$, it must be that
$t \leq \beta l$, which gives
\[
\frac{b^{i+1}-1}{b-1} -\delta \leq \frac{b^{n+1}}{b-1} \frac{b^{i-n}-1}{b-1},
\]
which in turn implies that $\delta \geq \frac{b^{n+1}}{(b-1)^2}-\frac{1}{b-1}$.
This is a contradiction, since $\delta$ can be chosen to be arbitrarily small, and in particular, smaller than
$\frac{b^{n+1}}{(b-1)^2}-\frac{1}{b-1}$.
\end{proof}
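Both claims in the first paragraph of the proof can be reproduced numerically. The sketch below simulates the cyclic exponential schedule, evaluating adversarial interruptions just before each contract completes; variable names and parameter values are illustrative only.

```python
import math

def worst_acceleration_ratio(n, b, num_contracts):
    # cyclic exponential schedule: contract j has length b**j, assigned to problem j % n
    t, worst = 0.0, 0.0
    longest = [0.0] * n            # longest completed contract per problem
    for j in range(num_contracts):
        t += b**j                  # contract j completes at time t
        longest[j % n] = b**j
        if min(longest) > 0.0:
            # adversarial interruption just before contract j+1 completes,
            # querying the problem with the smallest completed contract
            worst = max(worst, (t + b**(j + 1)) / min(longest))
        # contract-count bound at time t: j+1 contracts versus log_b(t(b-1)+1)+1
        assert j + 1 <= math.log(t * (b - 1) + 1, b) + 1 + 1e-9
    return worst

n, b = 2, 2.0
print(worst_acceleration_ratio(n, b, 25), b**(n + 1) / (b - 1))
```

The simulated worst-case ratio approaches $b^{n+1}/(b-1)=8$ for these parameters.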
For ray searching in the standard model, we can obtain similar results.
Recall that in this model, the searcher incurs cost at all times it moves,
regardless of whether it explores new territory.
We can prove the following theorem along the lines of the proof of Theorem~\ref{thm:scheduling.standard}.
\begin{theorem}
For ray searching in the standard model, the exponential strategy with base $b$ has competitive ratio $1+2\frac{b^{m}-1}{b-1}$, and for any distance
$d$ traversed by the searcher it makes at most $\log_b (d(b-1)+1)+1$ turns.
Moreover, any strategy with competitive ratio at most $1+2\frac{b^{m}-1}{b-1}$, for any $b>1$
incurs at least $\log_b (d(b-1)+1)-o(\log_b (d(b-1)+1))$ turns.
\label{thm:searching.standard}
\end{theorem}
\begin{proof}
It is known that the exponential strategy with base $b$ has competitive ratio $1+2\frac{b^{m}-1}{b-1}$. For given distance $d$
traversed by the searcher, the number of turns is computed using precisely the same argument as the number of contract executions of the
exponential strategy in the proof of Theorem~\ref{thm:scheduling.standard}. Likewise, for the lower bound,
we will show that any strategy $S$ with competitive ratio $\alpha=1+2\frac{b^{m}}{b-1}$ must make at least $m$
turns in the timespan $[\sum_{j=0}^{i-m} b^j, \sum_{j=0}^{i} b^j]$
(recall that since the searcher has unit speed, time coincides with the distance traversed by the searcher).
Since the exponential strategy with base $b$
searches exactly $m$ rays in this interval, this will imply that we can partition the timeline (and thus the distances traversed by the searcher)
in intervals with the property that in each interval, $S$ searches at least as many rays as the exponential
strategy, which suffices to prove the result.
Suppose, by way of contradiction, that $S$ made fewer than $m$ turns in the timespan $T=[\sum_{j=0}^{i-m} b^j, \sum_{j=0}^{i} b^j]$.
Therefore, there exists a ray that has not been searched in $T$, say ray $r$. Let $\rho$ denote the depth at which $r$ has been
searched up to time $\sum_{j=0}^{i-m} b^j$, and consider a target placement in $r$ at distance $\rho+\delta$, for
arbitrarily small $\delta$. For this target placement, the competitive ratio is minimized when $\rho$ is maximized; moreover, since
$r$ was not searched in $T$, we obtain that $\rho$ is at most $\frac{1}{2}\sum_{j=0}^{i-m} b^j=\frac{b^{i-m+1}-1}{2(b-1)}$.
Here, the factor $1/2$ is due to the search traversing each ray in both directions (away and towards the origin).
Note also that the target is discovered, at the earliest, at time $\sum_{j=0}^i b^j+\rho+\delta=\frac{b^{i+1}-1}{b-1}+\rho+\delta$.
Since the strategy has competitive ratio $\alpha=1+2\frac{b^{m}}{b-1}$, it must be that
\[
\frac{b^{i+1}-1}{b-1}+\rho+\delta \leq \left(1+2\frac{b^{m}}{b-1}\right) (\rho+\delta),
\]
from which, after some simplifications, we obtain that
$2\frac{b^m}{b-1} \delta \geq \frac{b^m}{(b-1)^2}-\frac{1}{b-1}$, which is a contradiction, since
$\delta$ can be arbitrarily small.
\end{proof}
\section{Conclusion}
\label{sec:conclusion}
In this paper we demonstrated that many variants of searching for
a target on concurrent rays and scheduling contract algorithms on a single processor
are amenable to a common approach.
There are some intriguing questions that remain open.
Can we obtain a $\Theta(m/p)$-competitive algorithm for searching with probabilistic detection?
We believe that cyclic strategies are not better than $\Theta(m/p^2)$-competitive.
What are the optimal (non-monotone) algorithms for searching/scheduling
with redundancy? Note that the precise competitive ratio of these problems is open even when $m=2$. As a broader
research direction, it would be very interesting to address searching and scheduling in heterogeneous environments.
For example, one may consider the setting in which each ray is characterized by its own probability of successful target detection.
\bibliographystyle{named}
\label{sec:intro}
Rare $B$ decay modes provide one of the best opportunities in the search for
physics beyond the Standard Model (BSM). Among them, $B\rightarrow K^*l^+l^-$ is regarded as one of the most important channels, as the polarization of the $K^*$ allows a precise angular reconstruction resulting in many observables which can
be tested in the Standard Model (SM) and its extensions \cite{Ali:1999mm,Bobeth:2012vn,Melikhov:1998cd,Kruger:2005ep,Altmannshofer:2008dz,Matias:2012xw}.
In 2013, LHCb \cite{Aaij:2013qta} published the first analysis of a set of optimized observables, presenting an interesting
pattern of deviations, confirmed by later measurements with larger
statistics \cite{Aaij:2015oid}, as well as by a recent analysis from the Belle
collaboration \cite{Abdesselam:2016llu}. A first interpretation of this
pattern of deviations was proposed \cite{Descotes-Genon:2013wba}, where the Wilson coefficient $C_9$ of the pertinent semileptonic operator (and, possibly, other
coefficients as well) receives a contribution from BSM
physics. Further experimental
results have indicated deviations concerning the branching ratios of $B\to
K^*\mu^+\mu^-$, but also $B_s\to\phi\mu^+\mu^-$ and $B\to K\mu^+\mu^-$, with the
possibility of a violation of lepton flavour universality between electron
and muon modes~\cite{Aaij:2014pli,Aaij:2014ora,Aaij:2015dea}. These results have triggered a lot of
activity on the theoretical side and, in particular, their consequences for global
fits are being studied \cite{Altmannshofer:2014rta,Hurth:2016fbr,Descotes-Genon:2015uva}. In these global fits,
special
attention has to be paid to
the theoretical uncertainties arising from the {\it form factors} of the corresponding hadronic matrix
elements, which affect the branching ratios involved in the
fit.
In the low recoil region, which will be our main focus here, these form
factors are mostly
known from light cone sum rules, which suffer from relatively large uncertainties \cite{Ball:2004rg,Straub:2015ica}. It would thus be particularly interesting
to have information on these quantities from lattice QCD simulations. Also, the
method used
to calculate these form factors could be applied to other
interesting processes, such as $B\rightarrow K^*\gamma$.
Recently, the first unquenched lattice QCD calculations of the $B\rightarrow K^*$ form factors have appeared \cite{Liu:2011raa,Horgan:2013hoa,Horgan:2015vla} (see also Refs.~\cite{Bowler:1993rz,Bernard:1993yt,Burford:1995fc,Abada:1995fa,Becirevic:2006nm,Abada:2002ie,Bowler:2004zb} for quenched results). Although this work represents major progress in the field, the simulations have been performed at quark mass values such that the $K^*(892)$ resonance could be treated as a stable particle. Correspondingly, the standard methods of lattice QCD could be used for the analysis of the data. However, these methods are no longer applicable when the $K^*$ eventually decays into $\pi K$.
The following question has to be addressed: how to compute the matrix elements involving two strongly interacting particles in the in- or out-state? Briefly, the answer is given by the so-called Lellouch-L\"uscher method \cite{Lellouch:2000pv}. It is a generalization of the L\"uscher finite-volume approach \cite{Luscher:1990ux}, which provides a method to extract the elastic phase shifts and the resonance parameters (the mass and width) from the two-particle discrete energy levels spectrum, measured on the lattice.
At the next step, it should be understood how to {\it define} the matrix elements involving resonances such as $K^*,\rho,$ or $\Delta$. As has been argued in Refs.~\cite{Agadjanov:2014kha,Bernard:2012bi}, the only plausible field-theoretical definition necessitates an analytic continuation of the matrix element to the resonance pole position in the complex plane. Therefore, strictly speaking, the corresponding form factor can only be defined at the resonance pole. The other well-known definition of the form factor is based on the Breit-Wigner parameterization of the resonant amplitude (see, e.g., Refs.~\cite{Aznauryan,Drechsel}). However, this definition yields a model- and process-dependent result, since the background is unknown.
If the width of the resonance is not very small (it is roughly 50 MeV in the case of the
$K^*(892)$), using different definitions might have an effect on the extracted observables.
There is an additional effect, which is due to the presence of the $\eta K$ threshold. For physical quark masses, it is approximately 150 MeV above the $K^*$ mass, and this value will be reduced when the light quark masses, used in the simulations, are higher. One could expect that the effect of this threshold might be seen in the data. The recent lattice calculation by the Hadron Spectrum Collaboration, however, indicates that the coupling between the $\eta K$ and $\pi K$ channels remains small even at the pion mass as large as roughly 400 MeV \cite{Dudek:2014qha,Wilson:2014cna}. Nevertheless, the two-channel problem has to be addressed. Although of academic interest in the present context, a similar theoretical framework could be useful, e.g., for the lattice extraction of the electromagnetic form factors of the $\Lambda(1405)$ resonance (see Refs.~\cite{Menadue:2013xqa,Hall:2014uca} for the recent lattice results).
Recently, the Lellouch-L\"uscher method has been generalized to include multiple strongly-coupled decay channels \cite{Hansen:2012tf,Briceno:2012yi,Briceno:2014uqa,Briceno:2015csa}. In particular, the authors of Ref.~\cite{Briceno:2014uqa} provide general formulas for spinless particles, which are also valid for the $B\rightarrow \pi K (\eta K)$ transition. On the contrary, the extraction of the form factors at the resonance pole in the multi-channel case has not been studied yet. It has been done only in the one-channel problem \cite{Agadjanov:2014kha}. In the present work, we fill this gap by considering the $\pi K-\eta K$ coupled-channel system.
In order to establish a relation between the finite volume quantities, measured on the lattice, and infinite volume observables, a systematic theoretical framework is needed. We apply the so-called non-relativistic effective field theory in a finite volume in its covariant formulation \cite{Colangelo:2006va,Gasser:2011ju}. We find this approach algebraically simpler than the one based on the Bethe-Salpeter equation (see, e.g., Refs.~\cite{Kim:2005gf,Christ:2005gi}). In the end, both methods have the same range of applicability and one arrives at the same results.
The paper is organized as follows: In section~2, we introduce form factors governing the $B\rightarrow K^*$ transition. We also consider the proper kinematics, which should be used in lattice measurements of matrix elements. Further, in section~3, we set up the non-relativistic effective field theory in a finite volume. The two-channel analogue of the Lellouch-L\"uscher formula is re-derived. In section~4, we obtain the equation for the extraction of the form factors at the resonance pole in the two-channel case. Additionally, in view of different
opinions expressed in the literature (see, e.g., Refs.~\cite{Briceno:2015dca,Briceno:2016kkp}), we
address the issue of defining the photon virtuality at the resonance pole. In section~5, we consider the infinitely small width approximation for our results. Section~6 contains our conclusions.
\section{Matrix elements on the lattice}
\subsection{Formalism}
The effective theory of the $b\rightarrow s$ decays is based on the weak Hamiltonian \cite{Grinstein:1987vj,Grinstein:1990tj,Buras:1993xp,Ciuchini:1993ks,Ciuchini:1993fk,Ciuchini:1994xa}
\begin{equation}
\mathcal{H}_{\mathrm{eff}} ~=~ -\frac{4 G_F}{\sqrt{2}}
V_{ts}^* V_{tb}^{} \sum_i C_i W_i \,,
\label{eq:Heff}
\end{equation}
where $G_F$ denotes the Fermi constant, $V_{ts},\, V_{tb}$ are elements of the CKM matrix and the
$C_i$ are Wilson coefficients. In the SM, one has 10 effective local operators $W_i$. Such a description is applicable at energies much below the masses of the weak gauge bosons.
The seven $B\rightarrow K^*$ form factors are contained in the matrix elements of the $W_7,\,W_9$ and $W_{10}$ operators:
\begin{eqnarray}
W_7 = \frac{m_b e}{16\pi^2} \bar{s}\sigma^{\mu\nu} P_R b \, F_{\mu\nu},\quad
W_9 = \frac{e^2}{16\pi^2}\bar{s} \gamma^\mu P_L b\, \bar\ell \gamma_\mu \ell,\quad
W_{10} = \frac{e^2}{16\pi^2}\bar{s} \gamma^\mu P_L b\, \bar\ell \gamma_\mu
\gamma^5 \ell,
\end{eqnarray}
where $F_{\mu\nu}$ is the electromagnetic field strength tensor, and
\begin{equation}
P_{L/R} = \tfrac{1}{2}(1 \mp \gamma^5),\quad \sigma^{\mu\nu} =
\tfrac{i}{2}[\gamma^\mu,\gamma^\nu].
\end{equation}
They are defined, in Minkowski space, through the following expressions (see, e.g., Ref.~\cite{Horgan:2013hoa}):
\begin{eqnarray}\label{formfact1}
\langle V(k,\lambda) | \bar{s}\gamma^\mu b| B(p)\rangle
&=& \frac{2 i V(q^2)}{m_B + m_V}
\epsilon^{\mu\nu\rho\sigma} \epsilon^*_\nu k_\rho p_\sigma, \\[4mm]
\langle V(k,\lambda)|\bar{s}\gamma^\mu \gamma^5 b| B(p)\rangle
&=& 2 m_V A_0(q^2) \frac{\epsilon^* \cdot q}{q^2} q^\mu
+ (m_B + m_V)A_1(q^2)\left(
\epsilon^{*\mu} - \frac{\epsilon^* \cdot q}{q^2} q^\mu \right)\nonumber \\
&&- \; A_2(q^2) \frac{\epsilon^*\cdot q}{m_B + m_V}\left[
(p+k)^\mu - \frac{m_B^2 - m_V^2}{q^2} q^\mu\right], \\[4mm]
q_\nu\langle V(k,\lambda) | \bar{s}\sigma^{\mu\nu}b| B(p)\rangle
&=& 2 T_1(q^2) \epsilon^{\mu\rho\tau\sigma} \epsilon^*_\rho p_\tau k_\sigma,
\\[4mm]
q_\nu\langle V(k,\lambda) |
\bar{s}\sigma^{\mu\nu} \gamma^5 b| B(p)\rangle
& = & i T_2(q^2)
[(\epsilon^* \cdot q)(p+k)^\mu - \epsilon^{*\mu} (m_B^2-m_V^2)
] \nonumber \\\label{formfact4}
&&+\; i T_3(q^2)(\epsilon^*
\cdot q)\left[ \frac{q^2}{m_B^2
-m_V^2}(p+k)^\mu - q^\mu \right]\,,
\end{eqnarray}
where $q=p-k$ is a momentum transfer to the lepton pair, and $\epsilon(k,\lambda)$ denotes a polarization vector of the vector meson ($K^*$) with momentum $k$ and spin polarization $\lambda=1,2,3$ (see, e.g., Ref.~\cite{Horgan:2013hoa}). Here, it is assumed that the $K^*$ is a stable particle with mass $m_V$ and appropriate quantum numbers.
We note that the contributions of other operators to the full decay amplitude are seen to be small in the low recoil region, in which both $B$ and $K^*$ are roughly at rest (see, e.g., Refs.~\cite{Khodjamirian:1997tg,Khodjamirian:2010vf,Grinstein:2004vb,Beylich:2011aq}). Correspondingly, we consider the decay process in this kinematic region, so that the amplitude, extracted from lattice data, coincides approximately with the full one.
\subsection{Finite volume}
Since lattice simulations are performed in a finite spatial volume, the continuous rotational symmetry is broken down to the cubic one. Consequently, some particular irreducible representations (irreps) of the cubic group, or of its subgroups in the moving frames, should be chosen. Since the neglect of D- and higher partial waves seems to be justified at energies below multi-particle thresholds, it is preferable, in order to cleanly extract the P-wave scattering phase shift through the L\"uscher equation, to choose irreps in which no mixing between the S- and P-waves occurs.
For that purpose, we consider the process in the $K^*$ rest frame:
\begin{equation}
{\bf k}=0,\quad {\bf p}={\bf q}=\frac{2\pi}{L}{\bf d}, \quad {\bf d} \in \mathbb{Z}^3,
\end{equation}
where $L$ denotes the side length of the volume, $V=L^3$.
When the $K^*$ is not at rest, only some of the form factors can be extracted without mixing. We provide the details in Appendix A. In the following, we write down the expressions for the current matrix elements when the ${\bf d}$ vector is chosen along the third axis, ${\bf d}=(0,0,n)$. The two other cases, ${\bf d}=(n,n,0)$ and ${\bf d}=(n,n,n)$, can be treated along the same lines.
The polarization vector of the free massive spin-1 particle with momentum ${\bf k}$ takes the form:
\begin{equation}
\epsilon^\mu(k,\lambda)=\biggl(\frac{{\bf k}\cdot{\boldsymbol \epsilon}^{(\lambda)}}{m_V},\,{\boldsymbol \epsilon}^{(\lambda)}+\frac{{\bf k}\cdot{\boldsymbol \epsilon}^{(\lambda)}}{m_V(k_0+m_V)}{\bf k} \biggr),
\end{equation}
where the arbitrary vectors ${\boldsymbol \epsilon}^{(\lambda)}$ form an orthonormal basis. In particular, one can choose them as
\begin{equation}
{\boldsymbol \epsilon}^{(+)}=\frac{1}{\sqrt{2}}(1,i,0),\quad{\boldsymbol \epsilon}^{(-)}=\frac{1}{\sqrt{2}}(1,-i,0),\quad{\boldsymbol \epsilon}^{(0)}=(0,0,1).
\end{equation}
Obviously, the polarization vectors $\epsilon^\mu(k,\lambda)$ satisfy the transversality condition
\begin{equation}
k_\mu\epsilon^\mu(k,\lambda)=0,\quad \lambda=+,-,0.
\end{equation}
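As a numerical cross-check of the explicit construction above, the following Python sketch (the momentum and mass values are illustrative assumptions, not lattice inputs) builds the three polarization vectors and verifies both the transversality condition and the normalization $\epsilon_\mu\epsilon^{*\mu}=-1$ in the Minkowski metric $(+,-,-,-)$:

```python
import numpy as np

def polarization_vectors(kvec, mV):
    """Minkowski polarization vectors eps^mu(k, lam) of a massive vector
    meson, metric (+,-,-,-); a sketch of the construction in the text."""
    kvec = np.asarray(kvec, dtype=float)
    k0 = np.sqrt(mV**2 + kvec @ kvec)
    basis = {'+': np.array([1.0, 1.0j, 0.0]) / np.sqrt(2.0),
             '-': np.array([1.0, -1.0j, 0.0]) / np.sqrt(2.0),
             '0': np.array([0.0, 0.0, 1.0], dtype=complex)}
    eps = {}
    for lam, e in basis.items():
        ke = kvec @ e                       # k . eps^(lam)
        eps[lam] = np.concatenate(([ke / mV],
                                   e + ke * kvec / (mV * (k0 + mV))))
    return k0, eps

kvec, mV = np.array([0.0, 0.0, 0.7]), 0.892   # GeV; illustrative K*-like values
k0, eps = polarization_vectors(kvec, mV)
for e in eps.values():
    # transversality: k_mu eps^mu = k0 eps^0 - k . eps_vec = 0
    assert abs(k0 * e[0] - kvec @ e[1:]) < 1e-12
    # normalization: eps_mu eps*^mu = |eps^0|^2 - |eps_vec|^2 = -1
    assert abs(abs(e[0])**2 - np.vdot(e[1:], e[1:]).real + 1.0) < 1e-12
```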
Further, Eqs.~(\ref{formfact1})-(\ref{formfact4}) first have to be rewritten in Euclidean space. This can be done by applying the prescription
\begin{equation}
a^E_\mu=({\bf a},ia_0),\quad \gamma^E_\mu=(-i{\boldsymbol \gamma},\gamma_0),\quad \gamma^E_5=\gamma^5,\qquad \mu=1,2,3,4,
\end{equation}
where $a^\mu$ is an arbitrary four-momentum in Minkowski space. The superscript $E$ will be suppressed from now on.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline Little group & Irrep & Form factor\\ [1ex]
\hline\hline \multirow{2}{*}{$C_{4v}$} & $\mathbb{E}$ & $V$,\,$A_1$,\,$T_1$,\,$T_2$ \\[1ex]
& $\mathbb{A}_1$ & $A_0$,\,$A_{12}$,\,$T_{23}$\\
\hline
\end{tabular}
\end{center}
\caption{\small Extraction of matrix elements in the irreps without partial-wave mixing.}
\end{table}
With this in mind, we pick out the following current matrix elements
\begin{eqnarray}\nonumber\label{7ffs}
\langle V(+) | J^{(+)}| B(p)\rangle
&=& -\frac{2 i m_V|{\bf q}|V(q^2)}{m_B + m_V},\\[2mm]
\langle V(0) | i(E_B-m_V) J_A + |{\bf q}| J^{(0)}_A | B(p)\rangle
&=& - 2im_V|{\bf q}| A_0(q^2),\nonumber\\[2mm]
\langle V(+)|J^{(+)}_A| B(p)\rangle
&=& -i(m_B+m_V) A_1(q^2),\nonumber\\[2mm]
\langle V(0) | i(E_B-m_V)J^{(0)}_A-|{\bf q}| J_A | B(p)\rangle
&=& 8m_Bm_V A_{12}(q^2),\nonumber\\[2mm]
\langle V(+) | i(E_B-m_V)I^{(+)} + |{\bf q}|I^{(+)}_0 | B(p)\rangle
&=& 2im_V|{\bf q}| T_1(q^2),\nonumber\\[2mm]
\langle V(+) | i(E_B-m_V)I^{(+)}_{A} + |{\bf q}|I^{(+)}_{0A} | B(p)\rangle
&=& -i(m_B^2-m_V^2) T_2(q^2),\nonumber\\[2mm]
\langle V(0) |
I^{(0)}_A| B(p)\rangle
&=&- \frac{4m_Bm_V}{m_B+m_V} T_{23}(q^2),
\end{eqnarray}
where $E_B=\sqrt{m^2_B+{\bf q}^2}$ is the energy of the $B$ meson, and $\langle V(+)|$ is a state vector with positive circular polarization,
\begin{equation}
\langle V(+)|=\frac{\langle V(1)|-i\langle V(2)|}{\sqrt{2}}~.
\end{equation}
Here, the current operators are given by
\begin{eqnarray}\nonumber
J^{(\pm)}&=&\frac{1}{\sqrt{2}}\bar{s}(\gamma_1\pm i\gamma_2) b, \quad J^{(\pm)}_A=\frac{1}{\sqrt{2}}\bar{s}(\gamma_1\pm i\gamma_2) \gamma_5 b,
\\[2mm]\nonumber
J^{(0)}_A&=&\bar s \gamma_3\gamma_5 b,\quad J_A=\bar s \gamma_4\gamma_5 b,\quad I^{(0)}_A=\bar{s}\sigma_{34} \gamma_5 b,
\\[2mm]\nonumber
I^{(\pm)}_0&=&\frac{1}{\sqrt{2}}\bar{s}(\sigma_{13}\pm i\sigma_{23})b,\quad I^{(\pm)}_{0A}=\frac{1}{\sqrt{2}}\bar{s}(\sigma_{13}\pm i\sigma_{23})\gamma_5b,
\\[2mm]
I^{(\pm)}&=&\frac{1}{\sqrt{2}}\bar{s}(\sigma_{14}\pm i\sigma_{24})b,\quad I^{(\pm)}_{A}=\frac{1}{\sqrt{2}}\bar{s}(\sigma_{14}\pm i\sigma_{24})\gamma_5b,
\end{eqnarray}
and the quantities $A_{12}(q^2)$, $T_{23}(q^2)$ are related to the form factors through
\begin{align}
A_{12}(q^2) \;=\; & \frac{(m_B + m_V)^2(m_B^2 - m_V^2 - q^2) A_1(q^2) -
\lambda A_2(q^2)}{16 m_B m_V^2 (m_B + m_V)}, \\
T_{23}(q^2) \;=\; & \frac{m_B+m_V}{8m_B m_V^2}
\left[\left(m_B^2 + 3m_V^2 - q^2\right)T_2(q^2)
- \frac{\lambda T_3(q^2)}{m_B^2 - m_V^2} \right],
\end{align}
where $\lambda\equiv\lambda(m_B^2,m_V^2, q^2)=[(m_B+m_V)^2-q^2][(m_B-m_V)^2-q^2]$ denotes the K\"all\'en triangle function. In the following, we denote the matrix elements of Eq.~(\ref{7ffs}) collectively as $F^M$, $M=1,\dots,7$.
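For completeness, the kinematic relations above can be written as a short Python sketch; the masses, the momentum transfer and the form-factor arguments below are purely illustrative assumptions:

```python
def kallen(x, y, z):
    # Kaellen triangle function lambda(x, y, z)
    return x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)

def A12(q2, mB, mV, A1, A2):
    lam = kallen(mB**2, mV**2, q2)
    return ((mB + mV)**2 * (mB**2 - mV**2 - q2) * A1
            - lam * A2) / (16.0 * mB * mV**2 * (mB + mV))

def T23(q2, mB, mV, T2, T3):
    lam = kallen(mB**2, mV**2, q2)
    return (mB + mV) / (8.0 * mB * mV**2) * (
        (mB**2 + 3.0 * mV**2 - q2) * T2 - lam * T3 / (mB**2 - mV**2))

mB, mV, q2 = 5.28, 0.892, 17.0     # GeV units; illustrative low-recoil point
lam = kallen(mB**2, mV**2, q2)
# the factorized form of the Kaellen function quoted in the text
assert abs(lam - ((mB + mV)**2 - q2) * ((mB - mV)**2 - q2)) < 1e-9
# below the zero-recoil point q^2 = (mB - mV)^2 the function is positive
assert lam > 0.0
```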
When the $K^*$ is taken at rest, it is necessary to consider lattice simulations in asymmetric boxes (see below). These boxes, of the type $L\times L\times L'$, have the same symmetry properties as symmetric boxes boosted in the ${\bf d}=(0,0,n)$ direction. In Table~1, we list the irreps of the corresponding little group in which the matrix elements of Eq.~(\ref{7ffs}) should be measured.
The states $\langle V(\pm) |,\,\langle V(0) |$ are created by acting with
the following local field operators, transforming according to these irreps, on the vacuum state $\langle 0 |$:
\begin{eqnarray}\label{operators}
{\cal O}_{\mathbb{E}}^{(\pm)}({\bf 0},t)=\frac{1}{\sqrt{2}}\sum_{\bf x}\big(O_1({\bf x},t)\mp iO_2({\bf x},t)\big),\quad {\cal O}_{\mathbb{A}_1}^{(0)}({\bf 0},t)=\sum_{\bf x}O_3({\bf x},t),
\end{eqnarray}
where $O_i(x)$ are the spatial components of the vector field potential (see, e.g., Ref.~\cite{Gockeler:2012yj}). Such operators are constructed out of local quark bilinears. In practice, it is important to also add meson-meson-type non-local operators in lattice simulations. These can be constructed along the lines described in Ref.~\cite{Gockeler:2012yj}.
Until now, the $K^*$ has been assumed to be a stable vector meson. When the $K^*$ becomes a resonance in lattice simulations, the matrix elements of Eq.~(\ref{7ffs}) can still be measured. However, one gets the matrix elements of the current between a one-meson state $|B(p)\rangle$ and a certain eigenstate of the finite-volume Hamiltonian. The mass $m_V$ is now replaced by the discrete energy $E_n$ of the $n$-th eigenstate ($n=0,1,...$). The dependence of the energy $E_n$ on the volume is not suppressed exponentially (unlike the case of a stable $K^*$) \cite{Luscher:1990ux}. A similar statement holds for the quantities $F^M$.
The matrix elements $F^M$ are functions of the total center-of-mass (CM) energy $E_n$ and of the 3-momentum $|{\bf q}|$ of the $B$ meson: $F^M=F^M(E_n,|{\bf q}|)$. As previously discussed in the case of the $\Delta N\gamma^*$ transition in Ref.~\cite{Agadjanov:2014kha}, in order to determine the form factors at the $K^*$ resonance pole, the quantities $F^M$ should be measured at different values of the energy $E_n$ (for a given value of $n$), while keeping $|{\bf q}|$ fixed. Again, this could be achieved by employing asymmetric volumes $L\times L\times L'$, with the asymmetry along the third axis, or by (partial) twisting of the $b$-quark (see Ref.~\cite{Agadjanov:2014kha} for more details).
Below, we study in detail the extraction of the form factors on the real energy axis, as well as at the complex resonance pole. We emphasize once more that only the definition based on analytic continuation leads to process-independent values of the resonance form factors.
\section{Lellouch-L\"uscher formula}
\subsection{Infinite volume}
In this section, the analogue of the Lellouch-L\"uscher formula in the two-channel case is reproduced. For that purpose, we apply non-relativistic effective field theory in a finite volume, along the lines of Refs.~\cite{Bernard:2012bi,Agadjanov:2014kha}, and generalize the formulas given there to suit our needs. In the following, the $K^*$ is taken at rest, so that there is no S- and P-wave mixing.
Further, we specify the matrix elements of the scattering amplitude. The actual physics cannot, of course, depend on the chosen parameterization. In the literature, there exists a parameterization of the $S$-matrix due to Stapp {\it et al.}~\cite{Stapp:1956mz}. In this work, we rather follow the one from Refs.~\cite{Hansen:2012tf,Blatt:1952zza} and write the $T$-matrix in terms of three real parameters, the so-called eigenphases $\delta_1(p_1)$, $\delta_2(p_2)$ and the mixing parameter $\varepsilon(E)$:
\begin{equation}\label{T-matrix}
T={8\pi\sqrt{s}}
\begin{pmatrix}
\frac{1}{p_1}(c_\varepsilon^2e^{i\delta_1}\sin\delta_1+s_\varepsilon^2e^{i\delta_2}\sin\delta_2) &\frac{1}{\sqrt{p_1p_2}}c_\varepsilon s_\varepsilon(e^{i\delta_1}\sin\delta_1-e^{i\delta_2}\sin\delta_2)\\
\frac{1}{\sqrt{p_1p_2}}c_\varepsilon s_\varepsilon(e^{i\delta_1}\sin\delta_1-e^{i\delta_2}\sin\delta_2) & \frac{1}{p_2}(c_\varepsilon^2e^{i\delta_2}\sin\delta_2+s_\varepsilon^2e^{i\delta_1}\sin\delta_1)
\end{pmatrix},
\end{equation}
where $s_\varepsilon\equiv\sin\varepsilon(E)$, $c_\varepsilon\equiv\cos\varepsilon(E)$. Here, $p_1$ and $p_2$ denote the relative 3-momenta in the $\pi K$ and $\eta K$ channels, respectively. They are related to the total energy $E$ through the equations
\begin{equation}\label{3-momenta}
|p_1|=\frac{\lambda^{1/2}(m_\pi^2,m_K^2, s)}{2\sqrt{s}},\quad |p_2|=\frac{\lambda^{1/2}(m_\eta^2,m_K^2, s)}{2\sqrt{s}},
\end{equation}
where $s=E^2$. We note that the eigenphases $\delta_1$, $\delta_2$ have the meaning of phase shifts in the corresponding channels $\pi K$ and $\eta K$, respectively, only in the decoupling limit $\varepsilon\rightarrow 0$. Otherwise, their behaviour with energy is non-trivial (see, e.g., Refs.~\cite{Dalitz:1970,Workman:2012hx}).
Firstly, thanks to the no-crossing theorem \cite{Wigner:1929}, the curves of the functions $\delta_1(E)$, $\delta_2(E)$ cannot intersect. Secondly, assuming the Breit-Wigner approximation, it can be shown that only one of these curves crosses $\pi/2$ in the vicinity of the resonance energy (see below). Lattice data should not be in contradiction with
these properties.
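Numerically, the channel momenta of Eq.~(\ref{3-momenta}) can be evaluated in a few lines of Python (the meson masses and the energy inserted below are illustrative values only):

```python
import math

def kallen(x, y, z):
    # Kaellen triangle function lambda(x, y, z)
    return x * x + y * y + z * z - 2.0 * (x * y + y * z + z * x)

def cm_momentum(m1, m2, E):
    """Relative 3-momentum of a two-particle channel at total CM energy E."""
    return math.sqrt(kallen(m1 * m1, m2 * m2, E * E)) / (2.0 * E)

m_pi, m_K, m_eta = 0.140, 0.494, 0.548      # GeV; illustrative meson masses
E = 0.892                                   # around the K* mass
p1 = cm_momentum(m_pi, m_K, E)
# consistency: E = sqrt(m_pi^2 + p1^2) + sqrt(m_K^2 + p1^2)
assert abs(math.hypot(m_pi, p1) + math.hypot(m_K, p1) - E) < 1e-12
# the eta K channel is still closed at this energy (negative Kaellen function)
assert kallen(m_eta**2, m_K**2, E**2) < 0.0
```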
On the other hand, the $T$-matrix obeys the Lippmann-Schwinger equation (see Ref.~\cite{Bernard:2012bi}):
\begin{equation}\label{LS}
T=V+VGT,
\end{equation}
where the angular momentum index $l$ has been suppressed. Here, $V$ denotes the potential and $G(s)$ is the loop-function matrix given by
\begin{equation}\label{loop function}
G=
\begin{pmatrix}
\frac{ip_1}{8\pi\sqrt{s}} & 0\\
0 & \frac{ip_2}{8\pi\sqrt{s}}
\end{pmatrix}.
\end{equation}
In Eq.~(\ref{LS}), all quantities have been taken on the energy shell $p_1=p'_1,\,p_2=p'_2$,
where $p_1,\,p_2$ and $p'_1,\,p'_2$ are respective relative momenta in the initial and final
two-particle states.
The parameterization of the potential $V$ in terms of parameters $\delta_1(p_1)$, $\delta_2(p_2)$ and $\varepsilon(E)$ is obtained readily from Eqs.~(\ref{T-matrix}) and~(\ref{LS}):
\begin{equation}\label{potential}
V={8\pi\sqrt{s}}
\begin{pmatrix}
\frac{1}{p_1}(t_1+s_\varepsilon^2t) &-\frac{1}{\sqrt{p_1p_2}}c_\varepsilon s_\varepsilon t\\
-\frac{1}{\sqrt{p_1p_2}}c_\varepsilon s_\varepsilon t & \frac{1}{p_2}(t_2-s_\varepsilon^2t)
\end{pmatrix},
\end{equation}
where $t_i\equiv\tan\delta_i(p_i)$ and $t=t_2-t_1$. Clearly, the potential matrix $V$ is real and symmetric.
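The consistency of the parameterizations in Eqs.~(\ref{T-matrix}) and~(\ref{potential}) with the Lippmann-Schwinger equation can be verified numerically; the values of the eigenphases, the mixing angle and the momenta below are arbitrary illustrative inputs:

```python
import numpy as np

# arbitrary illustrative values of the eigenphases, mixing angle and momenta
d1, d2, epsmix = 0.6, 0.2, 0.15
p1, p2, E = 0.29, 0.11, 1.1
ce, se = np.cos(epsmix), np.sin(epsmix)
t1, t2 = np.tan(d1), np.tan(d2)
t = t2 - t1
pref = 8.0 * np.pi * E                 # 8 pi sqrt(s), with sqrt(s) = E

# potential, Eq. (potential), and loop function, Eq. (loop function)
V = pref * np.array([[(t1 + se**2 * t) / p1, -ce * se * t / np.sqrt(p1 * p2)],
                     [-ce * se * t / np.sqrt(p1 * p2), (t2 - se**2 * t) / p2]])
G = np.diag([1j * p1 / pref, 1j * p2 / pref])

# solve the on-shell equation T = V + V G T  =>  T = (1 - V G)^{-1} V
T = np.linalg.solve(np.eye(2) - V @ G, V)

# explicit parameterization, Eq. (T-matrix)
x1, x2 = np.exp(1j * d1) * np.sin(d1), np.exp(1j * d2) * np.sin(d2)
T_param = pref * np.array(
    [[(ce**2 * x1 + se**2 * x2) / p1, ce * se * (x1 - x2) / np.sqrt(p1 * p2)],
     [ce * se * (x1 - x2) / np.sqrt(p1 * p2), (ce**2 * x2 + se**2 * x1) / p2]])
assert np.allclose(T, T_param)
```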
\subsection{Finite volume}
\subsubsection{Two-point function}
We return to the derivation of the two-channel Lellouch-L\"uscher formula. Our goal is to calculate the two- and three-point correlation functions relevant to the $B\rightarrow K^*$ form factors. Let $O(x)$ be a local operator with the quantum numbers of the $K^*$, transforming according to a given irrep, as provided explicitly in Eq.~(\ref{operators}). Following the methodology of lattice calculations, one is interested in the Euclidean two-point function of the form
\begin{equation}\label{two-point}
D(x_0-y_0)=\langle0|{\cal O}(x_0){\cal O}^{\dagger}(y_0)|0\rangle,
\end{equation}
where ${\cal O}(t)$ is given by the Fourier transformation of the $O(x)$ in the rest frame:
\begin{equation}\label{O-def}
{\cal O}(t)=\sum_{{\bf x}}O({\bf x},t).
\end{equation}
Note that we always work in the limit of zero lattice spacing, in which the right-hand side of Eq. (\ref{O-def}) contains an integral over the finite volume instead of a sum over the lattice sites.
It is clear from the spectral representation\footnote{In this work, we use a normalization of the eigenstates of the total Hamiltonian that differs from Ref.~\cite{Agadjanov:2014kha}. While the single $B$-meson state in a finite
volume is still normalized according to
$\langle B({p})|B({p})\rangle=2E_B$, the normalization of the two-particle states $|E_n\rangle$ is given by $\langle E_n|E_n\rangle=1$.} of the function $D(x_0-y_0)$,
\begin{equation}\label{2-point-spectral}
D(x_0-y_0)=\sum_{{n}}e^{-E_n(x_0-y_0)}|\langle 0|{\cal O}(0)|E_n\rangle|^2,
\end{equation}
that the energy levels $E_n$ can be extracted by studying the decay pattern of $D(x_0-y_0)$ in the formal limit $x_0\rightarrow+\infty,\, y_0\rightarrow-\infty$.
The diagrammatic representation of the two-point function Eq.~(\ref{two-point}) within the non-relativistic effective field theory below the inelastic threshold is shown in Fig.~1. The quantities $X_\alpha,\,\alpha=1,2$, denote the couplings of the operator ${\cal O}$ to the respective channels. Since the corresponding Lagrangian contains terms with an arbitrary number of spatial derivatives, one has $X_\alpha=A_\alpha+B_\alpha {\bf p}^2_\alpha+\cdots$, where $A_\alpha,B_\alpha,\dots$ contain only short-range physics. Here, ${\bf p}^2_\alpha,\, \alpha=1,2$, are the {\it external} relative 3-momenta squared in the corresponding channels. Although the expansion for $X_\alpha$ is written in the CM frame, it can be brought to a covariant form in an arbitrary moving frame (see Ref.~\cite{Gasser:2011ju}). It is important to note that the quantities $X_\alpha$ will drop out in the final result.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.55]{two-point}
\end{center}
\caption{\small Two-point function $D(x_0-y_0)$ in the non-relativistic effective field theory in a finite volume. The grey circle, square, and triangle depict different couplings in the $\pi K-\eta K$ system. The quantities $X_1,X_2$ are the couplings of the operator ${\cal O}$ to the respective channels. Similar diagrams are obtained by the replacements $X_1\to X_2$ and $X_2\to X_1$.}
\end{figure}
After summing up all two-particle reducible diagrams, the two-point function reads
\begin{equation}\label{2point-integral}
D(x_0-y_0)={\cal V}\int_{-\infty}^{+\infty}\frac{dP_0}{2\pi}\,e^{iP_0(x_0-y_0)}X^T[G_L(P_0)+G_L(P_0) T_L(P_0) G_L(P_0)]X,
\end{equation}
where $X^T=(X_1,X_2)$, ${\cal V}$ is the lattice volume, and $G_L$ denotes a finite-volume counterpart of the loop function matrix Eq.~(\ref{loop function}):
\begin{equation}
G_L=
\begin{pmatrix}
-\frac{p_1}{8\pi\sqrt{s}}\cot\phi(p_1)\ & 0\\
0 & -\frac{p_2}{8\pi\sqrt{s}}\cot\phi(p_2)
\end{pmatrix},\quad s=-P_0^2.
\end{equation}
Here, $\phi(p_\alpha)$ are volume-dependent functions related to the L\"uscher zeta-function. In the irreps of interest, $\mathbb{E}$ and $\mathbb{A}_1$, they are given by the following expressions (see, e.g., Ref.~\cite{Gockeler:2012yj}):
\begin{eqnarray}
\cot\phi^\mathbb{E}(p_\alpha)=-\frac{1}{{\pi}^{3/2}\eta_\alpha}\,\biggl\{\hat Z_{00}(1;\eta_\alpha^2)-\frac{1}{\sqrt{5}\eta_\alpha^2}\,
\hat Z_{20}(1;\eta_\alpha^2)\biggr\},\\
\cot\phi^{\mathbb{A}_1}(p_\alpha)=-\frac{1}{{\pi}^{3/2}\eta_\alpha}\,\biggl\{\hat Z_{00}(1;\eta^2_\alpha)+\frac{2}{\sqrt{5}\eta^2_\alpha}\,
\hat Z_{20}(1;\eta_\alpha^2)\biggr\},
\end{eqnarray}
where $\eta_\alpha=p_\alpha L/2\pi$. For generic asymmetric volumes $L\times L\times L'$ with $L'=xL$, the L\"uscher zeta-function $\hat Z_{lm}(1;\eta^2)$ reads
\begin{equation}
\hat Z_{lm}(1;\eta^2)=\frac{1}{x}\sum_{{\bf n}\in\mathbb{Z}^3}\frac{{\cal Y}_{lm}({\bf r})}
{{\bf r}^2-\eta^2}\, ,\quad\quad r_{1,2}=n_{1,2}\, ,\quad r_3=\frac{1}{x}\,n_3\, .
\end{equation}
Note that $\hat Z_{20}(1;\eta^2)\neq 0$ in such asymmetric volumes.
Further, the $T_L$-matrix is the scattering amplitude in a finite volume, defined formally through a Lippmann-Schwinger equation with the same potential $V$:
\begin{equation}
T_L=V+VG_LT_L.
\end{equation}
Substituting the potential $V$, Eq.~(\ref{potential}), into this equation, we obtain:
\begin{equation}\label{2T_L}
T_L=\frac{8\pi\sqrt{s}}{f(E)}
\begin{pmatrix}
\frac{1}{p_1}[t_1\tau_1(t_2+\tau_2)+s_\varepsilon^2\tau_1\tau_2 t] &-\frac{1}{\sqrt{p_1p_2}}c_\varepsilon s_\varepsilon \tau_1\tau_2 t\\
-\frac{1}{\sqrt{p_1p_2}}c_\varepsilon s_\varepsilon \tau_1\tau_2 t& \frac{1}{p_2}[t_2\tau_2(t_1+\tau_1)-s_\varepsilon^2\tau_1\tau_2 t]
\end{pmatrix},
\end{equation}
where $\tau_\alpha\equiv\tan\phi(p_\alpha)$ and
\begin{equation}\label{L-equation}
f(E)\equiv(t_1+\tau_1)(t_2+\tau_2)+s_\varepsilon^2(t_2-t_1)(\tau_2-\tau_1).
\end{equation}
The two-channel L\"uscher equation \cite{Hansen:2012tf,He:2005ey,Liu:2005kr}, which allows to determine the infinite-volume $T$-matrix elements ~\cite{Hansen:2012tf,Bernard:2010fp,Doring:2011vk}, follows directly from Eq.~(\ref{L-equation})
\begin{equation}
(t_1+\tau_1)(t_2+\tau_2)+s_\varepsilon^2(t_2-t_1)(\tau_2-\tau_1)\big|_{E=E_n}=0,
\end{equation}
where all quantities are taken at
the energies $E=E_n$ of the simple poles of the $T_L$-matrix, or equivalently, the eigenvalues of the corresponding strong Hamiltonian in a finite volume.
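As an illustration of how a finite-volume energy level follows from this quantization condition, the sketch below finds a real root of $f(E)$ for toy, smooth parameterizations of the phase shifts and of the functions $\tau_\alpha$; all model inputs are assumptions for illustration only (in an actual analysis, the $\tau_\alpha$ are computed from the L\"uscher zeta-function):

```python
import math

se2 = 0.01                                    # toy value of s_eps^2

def tan_d1(E): return 0.1 / (0.95 - E)        # Breit-Wigner-like resonant phase
def tan_d2(E): return math.tan(0.1 * E)       # smooth non-resonant phase
def tau1(E):   return 2.0 * (E - 0.80)        # toy finite-volume functions
def tau2(E):   return 1.5 * (E - 0.85)        # tau_alpha = tan(phi(p_alpha))

def f(E):
    t1, t2 = tan_d1(E), tan_d2(E)
    return ((t1 + tau1(E)) * (t2 + tau2(E))
            + se2 * (t2 - t1) * (tau2(E) - tau1(E)))

# bisect f(E) = 0 on a bracket with a sign change: the root plays the role of
# a finite-volume energy eigenvalue predicted by the quantization condition
a, b = 0.79, 0.80
assert f(a) * f(b) < 0.0
for _ in range(200):
    m = 0.5 * (a + b)
    if f(a) * f(m) > 0.0:
        a = m
    else:
        b = m
E_n = 0.5 * (a + b)
assert abs(f(E_n)) < 1e-10
```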
The integral in Eq.~(\ref{2point-integral}) is evaluated by applying Cauchy's theorem. It can be shown explicitly that only the poles of the $T_L(P_0)$-matrix contribute to the integral, while the free poles cancel in the integrand \cite{Bernard:2012bi,Briceno:2014uqa}. The residue of the $T_L(P_0)$-matrix factorizes at the $n$-th pole $P_0=iE_n$:
\begin{equation}
T_L^{\alpha\beta}=\frac{f_\alpha f_\beta}{E_n+iP_0}+\cdots.
\end{equation}
Here, the quantities $f_1,\,f_2$ can be brought to the following form by applying the L\"uscher equation:
\begin{eqnarray}
f_1^2=\frac{8\pi\sqrt{s}}{p_1}\frac{\tau_1^2(t_2+\tau_2-s_\varepsilon^2 t)}{f'(E)}\bigg|_{E=E_n},\quad
f_2^2=\frac{8\pi\sqrt{s}}{p_2}\frac{\tau_2^2(t_1+\tau_1+s_\varepsilon^2 t)}{f'(E)}\bigg|_{E=E_n},
\end{eqnarray}
where $f'(E)\equiv{df(E)}/{dE}$. Performing the integration over $P_0$, we get
\begin{equation}
D(x_0-y_0)={\cal V}\sum_{{n}}\frac{e^{-E_n(x_0-y_0)}}{64\pi^2E_n^2} \,\bigg[\sum_{\alpha=1}^{2} X_\alpha p_\alpha(E_n)\tau_\alpha^{-1} (E_n)f_\alpha(E_n)\bigg]^2.
\end{equation}
Comparing this equation with the spectral representation Eq.~(\ref{2-point-spectral}), we finally obtain
\begin{equation}\label{2point-final}
|\langle 0|{\cal O}(0)|E_n\rangle|=\frac{{\cal V}^{1/2}}{8\pi E_n}\bigg|\sum_{\alpha=1}^{2} X_\alpha p_\alpha(E_n)\tau_\alpha^{-1} (E_n)f_\alpha(E_n)\bigg|.
\end{equation}
\subsubsection{Three-point function}
We proceed to evaluate the current matrix elements $F^M(E,|{\bf q}|)$ in a finite volume. To this end, we start from the quantity
\begin{equation}
\Gamma^M(x_0,p)=\langle 0|{\cal O}(x_0)J^M(0)|B(p)\rangle,\quad M=1,\dots,7.
\end{equation}
Here, $J^M(0)$ denote the operators entering the matrix elements of Eq.~(\ref{7ffs}). Inserting
a complete set of states, we get the spectral representation of $\Gamma^M(x_0,p)$:
\begin{equation}\label{3point-spectral}
\Gamma^M(x_0,p)=\sum_ne^{-E_nx_0}\langle 0|{\cal O}(0)|E_n\rangle F^M(E_n,|{\bf q}|).
\end{equation}
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.38]{three-point}
\end{center}
\caption{\small Diagrams contributing to the $B\to K^*$ transition in a finite volume (see Fig.~1 for notations). The quantities $\bar F^M_\alpha(E,|{\bf q}|),\, \alpha=1,2$, are volume-independent up to exponentially suppressed contributions.}
\end{figure}
Diagrammatically, the $B\rightarrow K^*$ transition matrix elements are shown in Fig.~2. The quantities $\bar F^M_\alpha(E,|{\bf q}|),\, \alpha=1,2$, denote the sum of all two-particle irreducible diagrams in the respective channels. They do not depend on the volume, up to exponentially suppressed contributions. The volume dependence arises due to the final-state meson-meson interaction. We note that the diagrams in which the photon is attached to one of the internal lines, or to the external $B$-meson line, do not contribute to the matrix elements of the {\it flavor-changing neutral currents}. As a result, summing up the bubble diagrams, we obtain
\begin{equation}
\label{3point-integral}
\Gamma^M(x_0,p)={\cal V}^{-1/2}\int_{-\infty}^{+\infty}\frac{dP_0}{2\pi}\,e^{iP_0x_0}X^T[G_L(P_0)+G_L(P_0) T_L(P_0) G_L(P_0)]\bar F^M(P_0,|{\bf q}|),
\end{equation}
where $\bar F^M(P_0,|{\bf q}|)$ denotes a two-component vector with elements $\bar F^M_\alpha(P_0,|{\bf q}|)$. Similarly to the case of the two-point function, only the poles of the $T_L(P_0)$-matrix contribute to the integral. Integrating over $P_0$, one gets
\begin{eqnarray}
\Gamma^M(x_0,p)&=&{\cal V}^{-1/2}\sum_{{n}}\frac{e^{-E_nx_0}}{64\pi^2E_n^2}\sum_{\alpha,\beta=1}^{2} [X_\alpha p_\alpha (E_n)\tau^{-1}_\alpha(E_n)f_\alpha(E_n)]\nonumber\\[0.2cm]&\times&[p_\beta(E_n)\tau^{-1}_\beta(E_n)f_\beta(E_n)\bar F_\beta^M(E_n,|{\bf q}|) ].
\end{eqnarray}
Comparing this formula with Eq.~(\ref{3point-spectral}) and using Eq.~(\ref{2point-final}), we arrive at the final result:
\begin{equation}\label{LL-equation}
|F^M(E_n,|{\bf q}|)|=\frac{{\cal V}^{-1}}{8\pi E}\big|p_1\tau^{-1}_1f_1\,\bar F^M_1+p_2\tau^{-1}_2f_2\,\bar F^M_2\big|\bigg|_{E=E_n}.
\end{equation}
The last step that needs to be done is to relate the above defined quantities $\bar F_1^M,\,\bar F_2^M$ to the (infinite-volume) decay amplitudes ${\cal A}^M_1(B\rightarrow\pi Kl^+l^-)$ and ${\cal A}^M_2(B\rightarrow\eta Kl^+l^-)$ through the two-channel Watson theorem. After summing up the two-particle reducible diagrams in the infinite volume, one gets
\begin{equation}
{\cal A}^M=(1-VG)^{-1}\bar F^M,
\end{equation}
or
\begin{equation}
{\cal A}^M=TV^{-1}\bar F^M,
\end{equation}
where the Lippmann-Schwinger equation has been used. We obtain:
\begin{eqnarray}\label{Watson}
{\cal A}^M_1=\frac{1}{\sqrt{p_1}}(u_1^Mc_\varepsilon e^{i\delta_1}-u_2^Ms_\varepsilon e^{i\delta_2}),\quad {\cal A}^M_2=\frac{1}{\sqrt{p_2}}(u_2^Mc_\varepsilon e^{i\delta_2}+u_1^Ms_\varepsilon e^{i\delta_1}),
\end{eqnarray}
where
\begin{equation}\label{u1u2}
u_1^M=(\sqrt{p_1}c_\varepsilon\bar F^M_1+\sqrt{p_2}s_\varepsilon\bar F^M_2)\cos\delta_1,\quad u_2^M=(\sqrt{p_2}c_\varepsilon\bar F^M_2-\sqrt{p_1}s_\varepsilon\bar F^M_1)\cos\delta_2.
\end{equation}
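The structure of Eqs.~(\ref{Watson}) and~(\ref{u1u2}) can be checked numerically with arbitrary real inputs (all values below are illustrative assumptions); in particular, the interference terms cancel in the flux-weighted combination $p_1|{\cal A}^M_1|^2+p_2|{\cal A}^M_2|^2$, which reduces to $(u_1^M)^2+(u_2^M)^2$:

```python
import cmath, math

# arbitrary illustrative real inputs
d1, d2, epsmix = 0.7, 0.3, 0.2
p1, p2 = 0.29, 0.11
F1, F2 = 1.3, -0.4                    # irreducible amplitudes \bar F^M_alpha
ce, se = math.cos(epsmix), math.sin(epsmix)

# Eq. (u1u2)
u1 = (math.sqrt(p1) * ce * F1 + math.sqrt(p2) * se * F2) * math.cos(d1)
u2 = (math.sqrt(p2) * ce * F2 - math.sqrt(p1) * se * F1) * math.cos(d2)
# Eq. (Watson)
A1 = (u1 * ce * cmath.exp(1j * d1) - u2 * se * cmath.exp(1j * d2)) / math.sqrt(p1)
A2 = (u2 * ce * cmath.exp(1j * d2) + u1 * se * cmath.exp(1j * d1)) / math.sqrt(p2)

# interference terms cancel in the flux-weighted sum
assert abs(p1 * abs(A1)**2 + p2 * abs(A2)**2 - (u1**2 + u2**2)) < 1e-12
```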
We have arrived at the two-channel analog of the Lellouch-L\"uscher formula for the $B\to K^*$ transition. Note that, writing Eq.~(\ref{LL-equation}) in terms of the amplitudes $u_1^M,\, u_2^M$, one obtains expressions similar to the ones given in Ref.~\cite{Hansen:2012tf}. Later, we will consider the limit of this result, when the $K^*$ resonance becomes infinitely narrow.
Hence, in the two-channel case, two quantities ${\bar F}^M_1$, ${\bar F}^M_2$ and their relative sign have to be determined from a single equation, whereas in the one-channel case only one quantity was determined from one equation. Consequently, one needs at least three different measurements at the same energy. This involves the extraction of excited energy levels (see Ref.~\cite{Hansen:2012tf}). An alternative would be to measure the same energy level in asymmetric volumes of the type $yL\times yL \times L'$ for different values of the parameter $y$, with $L'$ fixed.
Also, as long as one does not insist on keeping the variable $|{\bf q}|$ fixed
and is ready to perform a two-variable fit for the quantities
${\bar F}^M_\alpha$, (partial) twisting in the $s$-quark or boosts can be applied.
Then, the spectrum becomes dependent on the value of the twisting angle and/or the boost
momentum. Although this option appears to be promising \cite{Doring:2011vk}, a
(potentially large) S- and P-wave mixing is inevitable in this case.
\section{Form factors at the {\boldmath$K^*$} resonance pole}
The current matrix elements involving resonances have a proper field-theoretical meaning only if they are analytically continued to the resonance pole position. The advantage of such a definition is that it is process-independent. On the other hand, a definition based on the Breit-Wigner parameterization is, in general, not free of process- and model-dependent ambiguities, since the non-resonant background is unknown.
\subsection{Effective-range expansion}
The first step towards the pole extraction of the $B\to K^*$ form factors consists in the determination of the $K^*$ resonance position. As is well known, resonances are associated with complex poles of the scattering amplitude $T$ on the unphysical Riemann sheets in the energy plane ($s$ plane). The $T$-matrix itself is analytic on the whole plane, except for cuts and poles. Here, we assume that all singularities distant from the pole do not affect the determination of its position. Thus, given the analytic structure of the functions $p_1(E),\, p_2(E)$, Eq.~(\ref{3-momenta}), the only relevant singularities for our purpose are two cuts, which run from the branch points at the threshold energies $E_1=m_K+m_\pi$ and $E_2=m_K+m_\eta$, respectively, along the positive real axis to infinity.
The imaginary parts of the $p_\alpha(s),\,\alpha=1,2$, change sign when one goes from one sheet to another through these cuts. The four Riemann sheets are classified according to the signs of ${\rm Im}\,p_1$ and ${\rm Im}\,p_2$ (see, e.g., Ref.~\cite{Badalian:1981xj}). For example, on sheet II one has ${\rm Im}\,p_1<0$ and ${\rm Im}\,p_2>0$, etc.
Further, it is convenient to formulate the problem in the $K$-matrix formalism. The $l=1$ partial-wave amplitude $T$ is defined in terms of the $K$-matrix as follows:
\begin{equation}
T=(8\pi\sqrt{s})(K^{-1}-iP )^{-1},
\end{equation}
where $P={\rm diag} (p_1,p_2)$ is a diagonal matrix.
A comparison of this equation with Eq.~(\ref{LS}) leads to the conclusion that the $K$-matrix is proportional to the potential $V$:
\begin{equation}
K=(8\pi\sqrt{s})^{-1}V.
\end{equation}
The poles of the scattering amplitude $T$ appear as the complex solutions of the secular equation, which we write as
\begin{equation}\label{pole-position}
\det(PK^{-1}P-iP^3)=0.
\end{equation}
The explicit form of this equation is different on each Riemann sheet. For instance, if one is interested in the solutions on sheets II and III, the matrix $P$ must be chosen as $P_{II}={\rm diag}\, (-p_1,p_2)$ and $P_{III}={\rm diag}\, (-p_1,-p_2)$, respectively. The change of sign of the momenta $p_1$ and/or $p_2$ is equivalent to the transition from one sheet to another.
The analytic properties of the $K$-matrix ensure that the $PK^{-1}P$ function obeys a polynomial expansion of the form (see Refs.~\cite{Badalian:1981xj,Ross:1961})
\begin{equation}\label{ERE}
PK^{-1}P=A+B(E-E_0)+\cdots,
\end{equation}
where $E_0$ is an arbitrary point on the real axis, around which the Taylor expansion is performed. Eq.~(\ref{ERE}) is a multi-channel generalization of the well-known effective-range approximation \cite{Bethe:1949yr}. Its additional advantage is the freedom to choose the value of the energy $E_0$: one does not need to start the expansion at the threshold energies, as is usually done. Consequently, the convergence of the series in Eq.~(\ref{ERE}) can be substantially improved. Moreover, the analytic continuation to the resonance pole position will not be spoiled by the presence of distant singularities. This expansion might, in particular, be useful in the case of the $\rho$ resonance, when lattice simulations are performed at nearly physical quark masses.
In principle, one could also expand the $K$-matrix itself, see, e.g., Refs.~\cite{Badalian:1981xj,Hyams:1973zf}.
However, such an expansion contains pole terms, which makes the fitting to data more complicated, although not impossible. In fact, both parameterizations of the $K$-matrix have recently been used in the lattice study of the resonances in the coupled $\pi K-\eta K$ system \cite{Dudek:2014qha,Wilson:2014cna}.
The procedure to determine the resonance pole position consists in the following steps:
\begin{itemize}
\item[a)] The $K$-matrix is numerically extracted on the lattice, by applying the
L\"uscher approach;
\item[b)] The parameters $A,\, B,\dots$ are fitted to lattice data;
\item[c)] Eq.~(\ref{pole-position}) is solved on each unphysical Riemann sheet. The complex solution closest to the $\pi K,\,\eta K$ thresholds is identified with the $K^*$ resonance pole.
\end{itemize}
Next, we assume that the $K^*$ resonance is located on the sheet II. Other cases can be studied along the same lines.
\subsection{Pole extraction of the form factors}
We proceed with the evaluation of two- and three-point functions in the infinite volume. Afterwards, the result will be analytically continued to the resonance pole. The two-point function in Minkowski space is given by
\begin{equation}\label{2point-pole}
i\,\langle0|T [O(x) O^{\dagger}(y)]|0\rangle=\int\frac{d^4P}{(2\pi)^4}\,e^{-iP(x-y)}D(P^2),
\end{equation}
where the function $D(P^2)$ reads
\begin{equation}\label{2-point-infinite}
D(P^2)=X^T[G_{II}(s)+G_{II}(s) T_{II}(s) G_{II}(s)]X.
\end{equation}
Here, $P^2=s$ and the loop function $G_{II}(s)$ is chosen as
\begin{equation}
G_{II}(s)=
\begin{pmatrix}
-\frac{ip_1}{8\pi\sqrt{s}} & 0\\
0 & \frac{ip_2}{8\pi\sqrt{s}}
\end{pmatrix}.
\end{equation}
The form of $G_{II}$ guarantees that the scattering amplitude $T$, which is obtained from the Lippmann-Schwinger equation,
\begin{equation}
T_{II}=(V^{-1}-G_{II})^{-1},
\end{equation}
has poles on the sheet II. The simplest way to determine the $T_{II}$-matrix is to make the replacements $\tau_1\rightarrow\,-i$, $\tau_2\rightarrow\,+i$ in Eqs.~(\ref{2T_L},\ref{L-equation}).
We get
\begin{equation}\label{T-matrix_L}
T_{II}=\frac{8\pi\sqrt{s}}{h(E)}
\begin{pmatrix}
\frac{1}{p_1}[t_1(1-it_2)+s_\varepsilon^2 t] &-\frac{1}{\sqrt{p_1p_2}}c_\varepsilon s_\varepsilon t\\
-\frac{1}{\sqrt{p_1p_2}}c_\varepsilon s_\varepsilon t& \frac{1}{p_2}[t_2(1+it_1)-s_\varepsilon^2 t]
\end{pmatrix},
\end{equation}
where the quantity $h(E)$ is given by
\begin{equation}
h(E)\equiv(t_1-i)(t_2+i)+2is_\varepsilon^2(t_2-t_1).
\end{equation}
The resonance pole position $E=E_R\equiv\sqrt{s_R}$ is obtained from the equation
\begin{equation}\label{seculareq}
h(E_R)=0.
\end{equation}
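The solution of Eq.~(\ref{seculareq}) can be illustrated in a simple toy model. In the decoupling limit $s_\varepsilon^2=0$, with the Breit-Wigner form $\tan\delta_1=(\Gamma/2)/(M-E)$ and a smooth $\tan\delta_2$, the condition $t_1(E_R)=+i$ places the root at $E_R=M+i\Gamma/2$ in this convention. The Newton iteration below (all parameter values are illustrative assumptions) recovers this root:

```python
import cmath

M, Gam = 0.892, 0.050        # illustrative Breit-Wigner mass and width (GeV)
se2 = 0.0                    # decoupling limit, s_eps^2 = 0

def h(E):
    t1 = (Gam / 2.0) / (M - E)       # Breit-Wigner form of tan(delta_1)
    t2 = 0.1 * E                     # smooth non-resonant tan(delta_2) (toy)
    return (t1 - 1j) * (t2 + 1j) + 2j * se2 * (t2 - t1)

# Newton iteration with a numerical derivative, started near the expected root
E = complex(M, 0.5 * Gam + 0.01)
for _ in range(100):
    dh = (h(E + 1e-8) - h(E - 1e-8)) / 2e-8
    E -= h(E) / dh
assert abs(h(E)) < 1e-10
assert abs(E - (M + 0.5j * Gam)) < 1e-8
```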
Inverting the integral in Eq.~(\ref{2point-pole}), one finds that, in the vicinity of the pole,
\begin{equation}\label{2-matrix element}
D(P^2)=\frac{Z_R}{s_R-P^2}+\cdots,
\end{equation}
where $Z_R$ is the (complex) wave-function renormalization constant of the resonance. From Eq.~(\ref{2-matrix element}) it follows that
\begin{equation}
Z_R=\lim_{P^2\rightarrow s_R}(s_R-P^2)\,D(P^2).
\end{equation}
On the other hand, the $T(s)$-matrix on the second Riemann sheet has a pole at
$P^2=s_R$. In the vicinity of the pole, one has
\begin{equation}\label{T_L-pole}
T_{II}^{\alpha\beta}(s)=\frac{h_\alpha h_\beta}{s_R-P^2}+\cdots.
\end{equation}
Here, the quantities $h_1, h_2$ are given by
\begin{eqnarray}
h_1^2=-\frac{8\pi\sqrt{s}}{p_1}\frac{2E(t_2+i-s_\varepsilon^2 t)}{h'(E)}\bigg|_{E=E_R},\quad
h_2^2=-\frac{8\pi\sqrt{s}}{p_2}\frac{2E(t_1-i+s_\varepsilon^2 t)}{h'(E)}\bigg|_{E=E_R},
\end{eqnarray}
where $h'(E)\equiv dh(E)/dE$. Consequently, we obtain the renormalization constant $Z_R$:
\begin{equation}
Z_R=-\frac{1}{64\pi^2E_R^2}\biggl [\sum_{\alpha=1}^{2} (-1)^\alpha\,X_\alpha p_\alpha(E_R)h_\alpha(E_R)\biggr]^2.
\end{equation}
The calculation of the three-point function in the infinite volume proceeds in a similar manner. One gets
\begin{equation}
\label{3point-pole}
i\,\langle 0|T[O(x)J^M(0)] |B(p)\rangle=\int\frac{d^4P}{(2\pi)^4}\,e^{-iPx}\Gamma^M(P,p),
\end{equation}
where the quantity $\Gamma^M(P,p)$ in the frame $P^\mu=(P_0,{\bf 0})$, $p^\mu=(\sqrt{m_B^2+{\bf q}^2},\,{\bf q})$ reads
\begin{equation}\label{3-point-pole}
\Gamma^M(P,p)=X^T[G_{II}(s)+G_{II}(s) T_{II}(s) G_{II}(s)]\bar F^M(P_0,|{\bf q}|).
\end{equation}
Further, recall that the irreducible amplitudes $\bar F^M_\alpha(P_0,|{\bf q}|),\,\alpha=1,2$, are analytic functions in the complex energy plane. Then,
following Refs.~\cite{Mandelstam,Huang-Weldon}, in which matrix elements between bound states were first studied, we {\it define} the current matrix elements at the resonance pole as
\begin{equation}\label{FF-pole definition}
F_R^M=\lim_{P^2\rightarrow s_R}Z_R^{-1/2}(s_R-P^2)\,\Gamma^M(P,p).
\end{equation}
Using Eqs.~(\ref{T_L-pole},\ref{3-point-pole},\ref{FF-pole definition}), we arrive at the final result:
\begin{equation}\label{ff-pole}
F^M_R(E_R,|{\bf q}|)=-\frac{i}{8\pi E}\big(p_1h_1{\bar F}^M_1-p_2h_2{\bar F}^M_2\big)\bigg|_{E=E_R}.
\end{equation}
Note that one still has an overall sign ambiguity in this formula. The corresponding form factors can be read off from Eq.~(\ref{7ffs}), in which the kinematic factors are low-energy polynomials.
In order to reproduce the one-channel result of Ref.~\cite{Agadjanov:2014kha}, the mixing between the channels should be neglected. Then, $h(E)$ takes the form
\begin{equation}
h(E)=(t_1-i)(t_2+i).
\end{equation}
So, one has at the pole position either $t_1(E_R)=+i$ or $t_2(E_R)=-i$. Consider, for instance, the first alternative $t_1(E_R)-i=0$. The derivative $h'(E)$ at $E=E_R$ reads
\begin{equation}
h'(E_R)=(t_2(E_R)+i)t_1'(E_R),
\end{equation}
so that the quantities $h_1,\,h_2$ are given by
\begin{equation}
h_1^2=-\frac{16\pi E^2}{p_1t_1'(E)}\bigg|_{E=E_R},\quad h_2^2=0.
\end{equation}
Consequently, from Eq. (\ref{ff-pole}) we obtain
\begin{equation}\label{ff-1channel}
F^M_R(E_R,|{\bf q}|)=\sqrt{\frac{p_1}{4\pi \,t_1'(E)}}{\bar F}_1^M(E,|{\bf q}|)\bigg|_{E=E_R}.
\end{equation}
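As a consistency check (ours, not part of the original derivation), the general coupling $h_1^2$ given above reduces to this one-channel expression once the mixing is switched off: for $s_\varepsilon=0$ one has $h'(E_R)=(t_2+i)\,t_1'(E_R)$ and, with $\sqrt{s}=E$, the factor $(t_2+i)$ cancels. A minimal sympy sketch:

```python
import sympy as sp

E, p1, t1p, t2 = sp.symbols('E p1 t1p t2')
# General coupling h_1^2 with sqrt(s) = E and no channel mixing (s_eps = 0),
# evaluated at the pole, where h'(E_R) = (t_2 + i) t_1'(E_R):
h1_sq_general = -(8*sp.pi*E/p1) * 2*E*(t2 + sp.I) / ((t2 + sp.I)*t1p)
h1_sq_one_channel = -16*sp.pi*E**2/(p1*t1p)
assert sp.simplify(h1_sq_general - h1_sq_one_channel) == 0
```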
A similar formula holds for the $K\eta$ channel.
\subsection{Photon virtuality}
The analytic continuation to the resonance pole yields the quantity
$F^M_R(E_R,|{\bf q}|)$. Below, we would like to briefly discuss a few conceptual
issues, related to the interpretation of this quantity. Namely, we wish
to know:
\begin{itemize}
\item
What is the photon virtuality $q^2$ for the resonance form factor, extracted
at the pole?
\item
How should one compare with the experimental results?
\end{itemize}
In the literature, different statements have been made on this issue so far.
We think that a clarification is needed at this point.
According to the procedure, which is proposed in the present paper (see also Ref.~\cite{Agadjanov:2014kha}), the finite-volume
matrix element is measured at different two-particle energies $E_n(L)$ and
a fixed value of $|{\bf q}|$. After that, an analytic continuation is performed
to the complex resonance pole, keeping $|{\bf q}|$ fixed. Further, the
photon virtuality becomes complex at the pole
\begin{eqnarray}\label{eq:virt}
q^2=\bigg(E_R-\sqrt{m_B^2+{\bf q}^2}\bigg)^2-{\bf q}^2\, .
\end{eqnarray}
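Numerically, Eq.~(\ref{eq:virt}) indeed yields a complex virtuality once $E_R$ acquires an imaginary part; one has ${\rm Im}\,q^2=2\,{\rm Re}\,(E_R-\sqrt{m_B^2+{\bf q}^2})\,{\rm Im}\,E_R$. The sketch below uses purely illustrative numbers (a $K^*$-like pole and hypothetical kinematics, not fitted values):

```python
import cmath

E_R = 0.892 - 0.025j          # hypothetical complex pole position (GeV)
m_B, q_abs = 5.279, 0.5       # B mass and photon 3-momentum |q| (GeV), illustrative
q2 = (E_R - cmath.sqrt(m_B**2 + q_abs**2))**2 - q_abs**2
# Im q^2 = 2 Re(E_R - sqrt(m_B^2 + |q|^2)) Im E_R is nonzero for a finite width
expected_im = 2*(E_R.real - (m_B**2 + q_abs**2)**0.5)*E_R.imag
assert abs(q2.imag - expected_im) < 1e-9
```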
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7.cm]{factorization.eps}
\end{center}
\caption{\small The factorization of the amplitudes at the resonance
pole (see Fig.~2 for notations). The photon virtuality, given by Eq.~(\ref{eq:virt}), is complex.
}
\label{fig:factorization}
\end{figure}
On the other hand, in Refs.~\cite{Briceno:2015dca,Briceno:2016kkp}, where the $\rho\to\pi\gamma^*$ transition form factor is considered, the authors simultaneously
parameterize the energy- and
$q^2$-dependence of the measured matrix element by some phenomenological fit function
and perform the analytic continuation to the complex value of energy at
a fixed $q^2$. The quantity $q^2$ is taken real at the pole.
Given these two different procedures for determining the matrix element at the pole, it may seem that the result is not unique. In order to show that the form factor
can be uniquely defined, we note that the residue of the full amplitude at the pole should
factorize into the product of the resonance form factor
and the vertex, describing the transition of a resonance
into the final state, see Fig.~\ref{fig:factorization}. The background becomes irrelevant, which leads to the determination of the form factor at the pole in a process-independent manner. From this figure
it is clear that the photon virtuality, defined through the use of the
4-momentum conservation, coincides with the one given in Eq.~(\ref{eq:virt})
and thus must be complex. One could of course consider the electroproduction
amplitude at a different (even at a real) photon virtuality as well. However,
in this case, the background does not vanish completely, so the continuation
to the pole does not make sense, since the result is process-dependent anyway.
It should be stressed that this argument equally holds both in the analysis
of the data from the electroproduction experiments as well as for the results of lattice QCD simulations.
Another argument addresses the analytic properties of the amplitudes which are
extrapolated into the complex plane. We have shown
that the irreducible amplitudes are low-energy polynomials
in the vicinity of a resonance in the CM energy $E$, if
the photon 3-momentum $|{\bf q}|$ is fixed (see Ref.~\cite{Agadjanov:2014kha}). This fact implies that the analytic
continuation to the complex energies is robust. To the best of our knowledge,
no such statement exists for a function of the two independent variables $E$ and $q^2$, and this might
render the analytic continuation unstable. It remains to be seen whether
the information about the analytic properties of the form factors in the
variable $q^2$ can be reasonably included. This could greatly constrain the
fit and would be very useful in the analysis of the presently available data,
which correspond to different values of $q^2$ (see, e.g.,
Refs.~\cite{Briceno:2015dca,Briceno:2016kkp}).
\section{Infinitely narrow width}
In this section, for illustrative purposes, we consider the case of a resonance
with an infinitely narrow width having in mind the hypothetical case of a
$K^*$ pole located above the $\eta K$ threshold with a very small width.
The arguments follow the path of Ref.~\cite{Agadjanov:2014kha}, where the
same problem has been considered in the case of elastic scattering (see also Ref.~\cite{Briceno:2015csa}).
It has been shown there that, in the limit of the infinitely narrow width,
the matrix element, measured on the lattice, coincides with the infinite-volume
resonance form factor up to a constant, which takes into account the
difference between the normalization of the one- and two-particle states in a finite volume. However, the multi-channel
case is more subtle, since different two-particle states occur,
and the relation between the infinite- and finite-volume
matrix elements becomes
obscure. Still, as we will see, the final result has exactly the same form as in the one-channel problem\footnote{Inadvertently, in Ref. \cite{Agadjanov:2014kha}, the factor ${\cal V}^{-1/2}$ was missing on the right-hand side of the counterpart of Eq. (\ref{3point-integral}).}.
We start with the two-body
potential from Eq.~(\ref{potential}), which can be written in the following form
\begin{eqnarray}
V=8\pi\sqrt{s}P^{-1/2}O\tilde VO^TP^{-1/2}\, ,
\end{eqnarray}
where
\begin{eqnarray}
P=\mbox{diag}\,(p_1,p_2)\, ,\quad\quad
\tilde V=\mbox{diag}\,(t_1,t_2)\, ,\quad\quad
O=\begin{pmatrix}c_\varepsilon & -s_\varepsilon \cr s_\varepsilon & c_\varepsilon\end{pmatrix}\, .
\end{eqnarray}
Suppose that the resonance behavior near the (real) energy $E=E_0$ emerges in the
quantity $t_1=\tan\delta_1$, whereas the quantity $t_2$ stays regular in this
energy interval. Then, in the vicinity of $E=E_0$, one can write
\begin{eqnarray}
\delta_1(E)=\delta_R(E)+\phi(E)\, ,\quad\quad
\tan\delta_R(E)=\frac{\Gamma_0/2}{E_0-E}\, ,
\end{eqnarray}
and assume that a (small) background phase $\phi(E)$ stays regular.
Further, one may straightforwardly ensure that
\begin{eqnarray}\label{eq:t1}
\cot\delta_1(E)=\frac{ E_{BW}-E}{\Gamma/2}+\cdots\, ,
\end{eqnarray}
where
\begin{eqnarray}
E_{BW}=E_0-\frac{\Gamma_0}{2}\,\tan\phi(E_{BW})\, ,\quad\quad
\frac{\Gamma}{2}=\frac{\Gamma_0}{2}\,(1+\tan^2\phi(E_{BW}))\, .
\end{eqnarray}
This shows that, in the vicinity of a narrow resonance, one can always
get rid of the background phase by a redefinition of the resonance parameters. We note that the second background phase still remains.
The quantities $ E_{BW}$ and $\Gamma$ are the Breit-Wigner mass and
width of the (narrow) resonance.
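For a constant background phase $\phi$, the relations above are in fact exact: $\cot\delta_1$ vanishes at $E=E_{BW}$ with slope $-1/(\Gamma/2)$. This can be verified numerically, as in the following sketch with purely illustrative parameter values:

```python
import math

Gamma0_half, E0, phi = 0.01, 1.0, 0.2   # hypothetical Gamma_0/2, E_0, constant phi
E_BW = E0 - Gamma0_half*math.tan(phi)
Gamma_half = Gamma0_half*(1 + math.tan(phi)**2)

def cot_delta1(E):
    delta_R = math.atan2(Gamma0_half, E0 - E)   # tan(delta_R) = (Gamma_0/2)/(E_0 - E)
    return 1.0/math.tan(delta_R + phi)

eps = 1e-7
slope = (cot_delta1(E_BW + eps) - cot_delta1(E_BW - eps))/(2*eps)
assert abs(cot_delta1(E_BW)) < 1e-9        # cot(delta_1) vanishes at E = E_BW
assert abs(slope + 1.0/Gamma_half) < 1e-3  # d cot(delta_1)/dE = -1/(Gamma/2)
```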
In the vicinity of a narrow resonance, the scattering amplitude Eq.~(\ref{T-matrix}), which can be represented on the first Riemann sheet as
\begin{eqnarray}
T=8\pi\sqrt{s}P^{-1/2}O\tilde TO^TP^{-1/2},\quad \tilde T=\mbox{diag}\,(e^{i\delta_1}\sin\delta_1,e^{i\delta_2}\sin\delta_2),
\end{eqnarray}
becomes
\begin{eqnarray}\label{eq:T}
T_{\alpha\beta}=\frac{b_\alpha b_\beta}{s_{BW}-s-i\sqrt{s_{BW}}\,\Gamma}+\mbox{regular terms at $E\to E_{BW}$}\,,
\end{eqnarray}
where $s_{BW}=E_{BW}^2$. Here, the quantities $b_1,\,b_2$ are given by
\begin{equation}
b_1=\sqrt{\frac{8\pi s_{BW}\Gamma}{p_1}}\,c_\varepsilon\, ,\quad
b_2=\sqrt{\frac{8\pi s_{BW}\Gamma}{p_2}}\,s_\varepsilon\, ,
\end{equation}
and the regular terms emerge from the contribution of $t_2$.
In order to find a complex pole on the second Riemann sheet, one has to solve the
secular equation, Eq.~(\ref{seculareq}), $h(E_R)=0$. Recalling that $t_1,t_2$ are single-valued
functions and using the explicit representation of $t_1$ from
Eq.~(\ref{eq:t1}), at $E=E_R$ we get
\begin{eqnarray}
t_1(E_R)=\frac{\Gamma/2}{E_{BW}-E_R}
=\frac{i(t_2+i)-2is_\varepsilon^2t_2}{t_2+i-2is_\varepsilon^2}\bigg|_{E=E_R}.
\end{eqnarray}
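As a consistency check, this value of $t_1(E_R)$ is indeed the root of the secular equation $h(E_R)=0$, Eq.~(\ref{seculareq}); it also reduces to the one-channel condition $t_1(E_R)=i$ when the mixing is switched off. A sympy sketch:

```python
import sympy as sp

t1, t2, se = sp.symbols('t1 t2 s_eps')
h = (t1 - sp.I)*(t2 + sp.I) + 2*sp.I*se**2*(t2 - t1)
# Value of t_1 at the pole, solved from h(E_R) = 0:
t1_pole = (sp.I*(t2 + sp.I) - 2*sp.I*se**2*t2)/(t2 + sp.I - 2*sp.I*se**2)
assert sp.simplify(h.subs(t1, t1_pole)) == 0
# Without mixing (s_eps = 0) this reduces to t_1(E_R) = i
assert sp.simplify(t1_pole.subs(se, 0) - sp.I) == 0
```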
\subsection{Real axis}
\begin{sloppypar}
On the real energy axis, one can introduce the infinite-volume quantities (``form factors''), which parameterize the imaginary parts of the decay amplitudes ${\cal A}^M_1,\,{\cal A}^M_2$ in the vicinity of the Breit-Wigner resonance.
We denote these volume-independent matrix elements as $F_A^M(E,|{\bf q}|)$. In analogy to the one-channel case (see, e.g. Refs.~\cite{Aznauryan,Drechsel,Briceno:2015csa}), we consider the resonance exchange mechanism at tree level, as shown in Fig. 4. Consequently, the amplitudes ${\cal A}^M_1,\,{\cal A}^M_2$ near $E=E_{BW}$ read
\end{sloppypar}
\begin{equation}
{\cal A}^M_\alpha(E,|{\bf q}|)=\frac{b_\alpha F_A^M(E_{BW},|{\bf q}|)}{E^2_{BW}-E^2-iE_{BW}\Gamma}+\cdots\,,\quad \alpha=1,2,
\end{equation}
where the ellipses stand for the terms emerging from the regular contributions
in Eq.~(\ref{eq:T}).
Setting further $E=E_{BW}$, we get the imaginary parts of the ${\cal A}^M_\alpha$
\begin{eqnarray}\label{amplitudes-narrow}
{\rm Im}{\cal A}^M_1(E_{BW},|{\bf q}|)&=&\sqrt{\frac{8\pi}{p_1\Gamma}}F_A^M(E_{BW},|{\bf q}|)c_\varepsilon+O(1)\, ,
\nonumber\\[2mm]
{\rm Im}{\cal A}^M_2(E_{BW},|{\bf q}|)&=&\sqrt{\frac{8\pi}{p_2\Gamma}}F_A^M(E_{BW},|{\bf q}|)s_\varepsilon+O(1)\, .
\end{eqnarray}
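The first of these leading terms can be checked symbolically: inserting $b_1=\sqrt{8\pi s_{BW}\Gamma/p_1}\,c_\varepsilon$ with $s_{BW}=E_{BW}^2$ and setting $E=E_{BW}$ in the Breit-Wigner denominator reproduces ${\rm Im}\,{\cal A}^M_1$ as stated. A sympy sketch:

```python
import sympy as sp

p1, Gamma, E_BW, F, ceps = sp.symbols('p1 Gamma E_BW F c_eps', positive=True)
b1 = sp.sqrt(8*sp.pi*E_BW**2*Gamma/p1)*ceps        # s_BW = E_BW^2
A1 = b1*F/(E_BW**2 - E_BW**2 - sp.I*E_BW*Gamma)    # amplitude at E = E_BW
lhs = sp.im(A1)
rhs = sp.sqrt(8*sp.pi/(p1*Gamma))*F*ceps           # leading term of Im A_1
assert sp.simplify(lhs - rhs) == 0
```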
Note that the leading terms in this expression are of order $\Gamma^{-1/2}$, and
the sub-leading $O(1)$ terms emerge from the regular contributions.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.5]{amplitude-BW}
\end{center}
\caption{\small Decay amplitudes ${\cal A}^M_\alpha,\,\alpha=1,2$, in the vicinity of the infinitely narrow $K^*$. The quantities $b_\alpha,\,\alpha=1,2$, denote the couplings of the $K^*$ to the respective channels at $E=E_{BW}$.}
\end{figure}
Further, comparing Eqs.~(\ref{Watson}) and (\ref{amplitudes-narrow}), we see that the following condition has to be satisfied at the Breit-Wigner pole $E=E_{BW}$:
\begin{equation}
u_2^M(E_{BW},|{\bf q}|)=O(1),
\end{equation}
while for the amplitudes $u_1^M(E_{BW},|{\bf q}|)$ one has
\begin{equation}
u_1^M(E_{BW},|{\bf q}|)=\sqrt{p_1}c_\varepsilon\,{\rm Im}{\cal A}^M_1(E,|{\bf q}|)\big|_{E_n\rightarrow E_{BW}}+\sqrt{p_2}s_\varepsilon\,{\rm Im}{\cal A}^M_2(E,|{\bf q}|)\big|_{E_n\rightarrow E_{BW}},
\end{equation}
or
\begin{equation}\label{u1-narrow}
u_1^M(E_{BW},|{\bf q}|)=\sqrt{\frac{8\pi}{\Gamma}}F_A^M(E_{BW},|{\bf q}|)+O(1)=O(\Gamma^{-1/2})\, .
\end{equation}
Consequently, in the limit $\Gamma\to 0$, the leading contribution to the ${\cal A}^M_\alpha$ comes
from $u_1^M$.
However, the amplitudes $u_\alpha^M,\,\alpha=1,2$ are not low-energy polynomials
in the vicinity of $E=E_{BW}$. In order to establish quantities which have such a
property, we first note that, in the case of a very narrow resonance, the function
$\cot\delta_1(E)$ is a polynomial in $E$ (see Eq.~(\ref{eq:t1})). Furthermore, even if the
radius of convergence of the modified effective range expansion, Eq.~(\ref{ERE}),
is assumed to be much larger than the width $\Gamma$, it is still limited from above
by the distance to the nearest threshold. Since the limit $\Gamma\to 0$ is considered
here, it is natural to assume that the mixing parameter $s_\varepsilon(E)$ and
$\cot\delta_2(E)$ are also low-energy polynomials in the vicinity of the resonance.
It is then straightforward to check that the functions
\begin{eqnarray}\label{u-tilde}
\tilde u_\alpha^M=\frac{u_\alpha^M}{\sin\delta_\alpha}
\end{eqnarray}
are low-energy polynomials. Indeed, the irreducible amplitudes $\bar F_\alpha^M,\alpha=1,2$, diverge at $E=E_{BW}$, due to the propagation of the bare $K^*$ in the $s$-channel (see Ref.\cite{Agadjanov:2014kha}). According to Eqs.~(\ref{u1u2}) and~(\ref{u-tilde}), this divergence is exactly canceled in the amplitudes $\tilde u_\alpha^M,\,\alpha=1,2$. Consequently, they can be safely expanded in the vicinity of the narrow resonance. This property, in particular, is important, if one considers an analytic continuation into the complex plane.
Rewriting the two-channel Lellouch-L\"uscher formula
in terms of $\tilde u_\alpha^M$, we get
\begin{equation}\label{2LL-alternative}
\big|F^M(E_n,|{\bf q}|)\big|= \frac{{\cal V}^{-1}}{\sqrt{8\pi E}}\,
\big(a_1 \tilde u_1^M+a_2\tilde u^M_2\big)\bigg|_{E=E_n},
\end{equation}
where the quantities $a_1,a_2$ are given by
\begin{eqnarray}
a_1^2&=&t_1^2\frac{t_2+\tau_2-s_\varepsilon^2(\tau_2-\tau_1) }{f'(E)}\bigg|_{E=E_n}\, ,
\nonumber\\[2mm]
a_2^2&=&t_2^2\frac{t_1+\tau_1+s_\varepsilon^2(\tau_2-\tau_1)}{f'(E)}\bigg|_{E=E_n}\ .
\end{eqnarray}
Evaluating the quantities $a_1,a_2$ in the limit of the infinitely narrow width
is somewhat less trivial than in the one-channel case. In order to proceed
further here, let us first recall the line of reasoning used
in the one-channel case. In this case, the L\"uscher equation has a simple form
\begin{eqnarray}
\delta_1+\varphi_1=n\pi\, ,\quad\quad n\in\mathbb{Z}\, ,\quad\quad
\tan\varphi_1=\tau_1\, .
\end{eqnarray}
For sufficiently small $\Gamma$, this equation will have a solution at
$E_n= E_{BW}+O(\Gamma)$. At this energy, the quantities
$t_1,\tau_1$ are of order $O(1)$. However, the {\em derivatives} of $t_1$ and $\tau_1$ behave differently at $E_n\to E_{BW}$. One has $t_1'=\delta_1'/\cos^2\delta_1$
and $\tau_1'=\varphi_1'/\cos^2\varphi_1$, where $\cos^2\delta_1=\cos^2\varphi_1$,
due to the L\"uscher equation. According to Eq.~(\ref{eq:t1}), the derivative of the phase
shift $\delta_1$ diverges as $\Gamma^{-1}$ whereas $\varphi_1'$ stays finite
as $E_n\to E_{BW}$, since it is a kinematical function that does not contain any small scales of order $\Gamma$. Consequently, as $E_n\to E_{BW}$ and $\Gamma\to 0$, one may neglect $\tau_1'$ as compared to $t_1'$.
A similar argument can be carried out in the two-channel case, rewriting the L\"uscher equation in the form
\begin{eqnarray}
\delta_1+\varphi_1=n\pi\, ,\quad\quad
\tan\varphi_1=\frac{\tau_1(t_2+\tau_2)+s_\varepsilon^2t_2(\tau_2-\tau_1)}
{t_2+\tau_2-s_\varepsilon^2(\tau_2-\tau_1)}\, .
\end{eqnarray}
The function $\varphi_1$ is not purely kinematical as it contains $t_2$.
However, it still does not contain small scales of order $\Gamma$. Consequently,
the derivatives of $\varphi_1$ are finite and the quantities
$\tau_1',\tau_2'$ are of order $O(1)$, while $t_1'=O(\Gamma^{-1})$.
Next, retaining only the most divergent terms in $f'(E_n)$ at
$E_n\to E_{BW}$, one gets
\begin{equation}
f'(E_n\rightarrow E_{BW})=t_1'\big(t_2+\tau_2-s_\varepsilon^2(\tau_2-\tau_1)\big)\big|_{E_n\rightarrow E_{BW}}+\cdots\, .
\end{equation}
Consequently, the quantities $a_1^2,a_2^2$ take the values
\begin{eqnarray}
a_1^2&=&\frac{t_1^2}{t_1'}\bigg|_{E_n\to E_{BW}}=\frac{\Gamma}{2}+O(\Gamma^2)\, ,
\nonumber\\[2mm]
a_2^2&=&\frac{t_2^2}{t_1'}\,\frac{t_1+\tau_1+s_\varepsilon^2(\tau_2-\tau_1)}{t_2+\tau_2-s_\varepsilon^2(\tau_2-\tau_1)}\bigg|_{E_n\to E_{BW}}=O(\Gamma).
\end{eqnarray}
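For the pure Breit-Wigner form $t_1=(\Gamma/2)/(E_{BW}-E)$ of Eq.~(\ref{eq:t1}), the first of these relations is in fact exact, as a quick symbolic check confirms:

```python
import sympy as sp

E, E_BW, Gamma = sp.symbols('E E_BW Gamma', positive=True)
t1 = (Gamma/2)/(E_BW - E)        # pure Breit-Wigner tangent, Eq. (eq:t1)
a1_sq = t1**2/sp.diff(t1, E)     # a_1^2 = t_1^2 / t_1'
assert sp.simplify(a1_sq - Gamma/2) == 0
```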
Hence, it follows that the leading contribution to the matrix element
$F^M(E_n,|{\bf q}|)$ in the limit $\Gamma\to 0$ comes only from the term, proportional to $\tilde u^M_1$, whereas the second term is sub-leading. As a result, we obtain
\begin{equation}
\big|F^M(E_n,|{\bf q}|)\big|=\frac{{\cal V}^{-1}}{\sqrt{2E_{n}}}\,\big|F_A^M(E_{n},|{\bf q}|)\big|+O(\Gamma^{1/2}),\quad E_n = E_{BW}+O(\Gamma).
\end{equation}
As seen, the Lellouch-L\"uscher formula has a fairly simple form in the
vicinity of the Breit-Wigner resonance: the
infinite-volume quantities
$F_A^M(E_{BW},|{\bf q}|)$ are equal to the current matrix elements
$F^M(E_{BW},|{\bf q}|)$, measured on the lattice, up to a normalization
factor (note that, in Ref.~\cite{Agadjanov:2014kha}, a different
normalization of the states has been used).
The form factors can be found from Eq.~(\ref{7ffs}).
\subsection{Complex plane}
The values of the form factors at the resonance pole in the infinitely narrow width limit can be determined along the same lines, as discussed above.
We again express the final result Eq.~(\ref{ff-pole}) through the
amplitudes $\tilde u_1^M,\,\tilde u_2^M$ to get
\begin{equation}\label{ff-pole-alternative}
F^M_R(E_R,|{\bf q}|)= \frac{1}{\sqrt{4\pi}}\,\big(r_1 \tilde u_1^M+r_2\tilde u^M_2\big)\big|_{E=E_R}.
\end{equation}
Here, the quantities $r_1,r_2$ read
\begin{eqnarray}\label{f1f2}
r_1^2&=&t_1^2\frac{t_2+i-2is_\varepsilon^2 }{h'(E)}\bigg|_{E=E_R}\, ,
\nonumber\\[2mm]
r_2^2&=&t_2^2\frac{t_1-i+2is_\varepsilon^2}{h'(E)}\bigg|_{E=E_R}\, .
\end{eqnarray}
Since the functions $\tilde u_\alpha^M$ are low-energy polynomials in
the vicinity of the Breit-Wigner pole, one can analytically
continue them from the real axis to the pole. Consequently, in the limit $\Gamma\to 0$, their values at the pole and at the real axis are equal, up to the terms of order $O(\Gamma)$. We note that
this procedure cannot be applied to the $u^M_\alpha$. Calculating the quantities $r_1,r_2$ at $E_R\to E_{BW}$, we get
\begin{eqnarray}
r_1^2&=&\frac{t_1^2}{t_1'}\bigg|_{E_R\to E_{BW}}=\frac{\Gamma}{2}+O(\Gamma^2)\, ,
\nonumber\\[2mm]
r_2^2&=&\frac{t_2^2}{t_1'}\,\frac{t_1-i+2is_\varepsilon^2}{t_2+i-2is_\varepsilon^2}\bigg|_{E_R\to E_{BW}}=O(\Gamma).
\end{eqnarray}
As on the real axis, the leading contribution to the $F^M_R$ is dominated by the
$\tilde u_1^M$ term in Eq. (\ref{ff-pole-alternative}). The final expression takes the form
\begin{equation}
F^M_R(E_R,|{\bf q}|)\big|_{\Gamma\to 0}=F_A^M(E_{BW},|{\bf q}|)+O(\Gamma^{1/2}).
\end{equation}
As expected, for an infinitely narrow resonance, the form factors $F_A^M(E,|{\bf q}|)$ and $F_R^M(E,|{\bf q}|)$, defined on the real energy axis and in the complex plane, respectively, coincide.
\section{Conclusions}
In this work, we have studied the extraction of the
$B\to K^*$ transition form factors on the lattice. We have
taken into account, in particular, the possible admixture of the
$\eta K$ to $\pi K$ final states. To this end, we have applied the
non-relativistic effective field theory in a finite volume
and reproduced the two-channel analogue of the Lellouch-L\"uscher formula,
which allows one to extract the $B\to K^*l^+l^-$ decay amplitude in the
low-recoil region.
Since the $K^*$ is a resonance, the corresponding current matrix elements
are properly defined and free of process-dependent ambiguities only if
the analytic continuation in the complex energy plane to the resonance
pole position is performed. Consequently, we have set up a framework for
the determination of the form factors at the $K^*$ pole. This is a
generalization of
the one-channel formula, which has been derived in Ref.~\cite{Agadjanov:2014kha}.
In addition, we have discussed in detail the consistent determination of the photon
virtuality at the resonance pole.
Finally, we have considered the limit of an infinitely small width in our
results. The equations in the multi-channel case are more involved and this
limit cannot be performed in a straightforward manner. Nevertheless, we have demonstrated that, even in the multi-channel case, the current matrix element measured on the lattice is equal to the one in the infinite volume, up to a normalization factor that does not depend on the dynamics. This result represents a useful check of our framework.
\bigskip\bigskip\bigskip
\noindent{\it Acknowledgments:} We thank R. Brice\~no and M. Mai for useful discussions.
We acknowledge the support by the DFG (CRC 16,
``Subnuclear Structure of Matter'' and CRC 110 ``Symmetries and the Emergence of Structure in QCD'') and by the Bonn-Cologne Graduate School of Physics and
Astronomy. This research is supported in part by Volkswagenstiftung
under contract no. 86260, and by the Chinese Academy of Sciences (CAS) President's
International Fellowship Initiative (PIFI) (Grant No. 2015VMA076).
\bigskip\bigskip\bigskip
\section{Introduction}
Mirror symmetry has been extended to the non-Calabi-Yau setting,
notably to Fano manifolds, by the works of
Givental~\cite{Givental94},~\cite{Givental96},~\cite{Givental97a},
Kontsevich~\cite{Kontsevich98} and Hori-Vafa~\cite{HV00}. If
$\bar{X}$ is a Fano manifold, then its mirror is conjectured to be a
pair $(Y,W)$, where $Y$ is a non-compact K\"{a}hler manifold and
$W:Y\rightarrow\mathbb{C}$ is a holomorphic Morse function. In the
physics literature, the pair $(Y,W)$ is called a
\emph{Landau-Ginzburg model}, and $W$ is called the
\emph{superpotential} of the model. One of the very first
mathematical predictions of this mirror symmetry is that there
should be an isomorphism between the small quantum cohomology ring
$QH^*(\bar{X})$ of $\bar{X}$ and the Jacobian ring $Jac(W)$ of the
function $W$. This has been verified (at least) for toric Fano
manifolds by the works of Batyrev~\cite{Batyrev93},
Givental~\cite{Givental97a} and many others. A version of the
\emph{Homological Mirror Symmetry Conjecture} has also been
formulated by Kontsevich~\cite{Kontsevich98}, which again has been
checked in many
cases~\cite{Seidel00},~\cite{Ueda04},~\cite{AKO04},~\cite{AKO05},
~\cite{Abouzaid05},~\cite{Abouzaid06}. However, no direct geometric
explanation for the mirror symmetry phenomenon for Fano manifolds
had been given, until the works of Cho-Oh \cite{CO03}, which showed
that, when $\bar{X}$ is a toric Fano manifold, the superpotential
$W$ can be computed in terms of the counting of Maslov index two
holomorphic discs in $\bar{X}$ with boundary in Lagrangian torus
fibers.
On the other hand, the celebrated \emph{Strominger-Yau-Zaslow (SYZ)
Conjecture}~\cite{SYZ96} suggested that mirror symmetry for
Calabi-Yau manifolds should be understood as a \emph{T-duality},
i.e. dualizing special Lagrangian torus fibrations, modified with
suitable quantum corrections. This will explain the geometry
underlying mirror symmetry \cite{Morrison96}. Recently, Gross and
Siebert~\cite{GS07} made a breakthrough in the study of this
conjecture, after earlier works of Fukaya~\cite{Fukaya02} and
Kontsevich-Soibelman~\cite{KS04}. It is expected that their program
will finally provide a very explicit and geometric way to see how
mirror symmetry works for \textit{both} Calabi-Yau and
non-Calabi-Yau manifolds (more precisely, for varieties with
effective anticanonical class). On the other hand, in
\cite{Auroux07}, Auroux started his program which is aimed at
understanding mirror symmetry in the non-Calabi-Yau setting by
applying the SYZ approach. More precisely, he studied the mirror
symmetry between a general compact K\"{a}hler manifold equipped with
an anticanonical divisor and a Landau-Ginzburg model, and
investigated how the superpotential can be computed in terms of
holomorphic disc counting on the compact K\"{a}hler manifold. In
particular, this includes the mirror symmetry for toric Fano
manifolds as a special case.\\
In this paper, we shall again follow the SYZ philosophy and study
the mirror symmetry phenomenon for toric Fano manifolds by using
T-duality. The main point of this work, which is also the crucial
difference between this and previous works, is that explicit
transformations, which we call \textit{SYZ mirror transformations},
are constructed and used to understand the results (e.g.
$QH^*(\bar{X})\cong Jac(W)$) implied by mirror symmetry. From this
perspective, this paper may be regarded as a sequel to the second
author's work \cite{Leung00}, where semi-flat SYZ mirror
transformations (i.e. fiberwise real Fourier-Mukai transforms) were
used to study mirror symmetry for semi-flat Calabi-Yau manifolds.
While in that case, quantum corrections do not arise because the
Lagrangian torus fibrations are smooth (i.e. they are fiber
bundles), we will have to deal with quantum corrections in the toric
Fano case.
However, we shall emphasize that the quantum corrections which arise
in the toric Fano case are only due to contributions from the
anticanonical toric divisor (the toric boundary); correspondingly,
the Lagrangian torus fibrations do \textit{not} have proper singular
fibers (i.e. singular fibers which are contained in the complement
of the anticanonical divisor), so that their bases are affine
manifolds with boundaries but \textit{without singularities}. This
is simpler than the general non-Calabi-Yau case treated by
Gross-Siebert~\cite{GS07} and Auroux~\cite{Auroux07}, where further
quantum corrections could arise, due to the fact that general
Lagrangian torus fibrations do admit proper singular fibers, so that
their bases are affine manifolds with \textit{both} boundaries and
singularities. Hence, the toric Fano case is in-between the
semi-flat case, which corresponds to nonsingular affine manifolds
without boundary, and the general case. In particular, in the toric
Fano case, we do not need to worry about wall-crossing phenomena,
and this is one of the reasons why we can construct the SYZ mirror
transformations explicitly as fiberwise Fourier-type transforms,
much like what was done in the semi-flat case \cite{Leung00}.
(Another major reason is that holomorphic discs in toric manifolds
with boundary in Lagrangian torus fibers are completely classified
by Cho-Oh \cite{CO03}.) It is interesting to generalize the results
here to non-toric settings, but, certainly, much work needs to be
done before we can see how SYZ mirror transformations are
constructed and used in the general case. For more detailed
discussions of mirror symmetry and the wall-crossing phenomena in
non-toric situations, we refer the reader to the works of
Gross-Siebert~\cite{GS07} and Auroux~\cite{Auroux07}.\\
What follows is an outline of our main results. We will focus on one
half of the mirror symmetry between a complex $n$-dimensional toric
Fano manifold $\bar{X}$ and the mirror Landau-Ginzburg model
$(Y,W)$, namely, the correspondence between the symplectic geometry
(A-model) of $\bar{X}$ and the complex geometry (B-model) of
$(Y,W)$.
To describe our results, let us fix some notations first. Let
$N\cong\mathbb{Z}^n$ be a lattice and
$M=N^\vee=\textrm{Hom}(N,\mathbb{Z})$ the dual lattice. Also let
$N_\mathbb{R}=N\otimes_\mathbb{Z}\mathbb{R}$,
$M_\mathbb{R}=M\otimes_\mathbb{Z}\mathbb{R}$, and denote by
$\langle\cdot,\cdot\rangle:M_\mathbb{R}\times
N_\mathbb{R}\rightarrow\mathbb{R}$ the dual pairing. Let $\bar{X}$
be a toric Fano manifold, i.e. a smooth projective toric variety
such that the anticanonical line bundle $K_{\bar{X}}$ is ample. Let
$v_1,\ldots,v_d\in N$ be the primitive generators of the
1-dimensional cones of the fan $\Sigma$ defining $\bar{X}$. Then a
polytope $\bar{P}\subset M_\mathbb{R}$ defined by the inequalities
$$\langle x,v_i\rangle\geq\lambda_i,\quad i=1,\ldots,d,$$
and with normal fan $\Sigma$, associates a K\"{a}hler structure
$\omega_{\bar{X}}$ to $\bar{X}$. Physicists \cite{HV00} predicted
that the mirror of ($\bar{X}$, $\omega_{\bar{X}}$) is given by the
pair $(Y,W)$, where $Y$, which we call \emph{Hori-Vafa's mirror
manifold}, is biholomorphic to the non-compact K\"{a}hler manifold
$(\mathbb{C}^*)^n$, and $W:Y\rightarrow\mathbb{C}$ is the Laurent
polynomial
$$e^{\lambda_1}z^{v_1}+\ldots+e^{\lambda_d}z^{v_d},$$
where $z_1,\ldots,z_n$ are the standard complex coordinates of
$Y\cong(\mathbb{C}^*)^n$ and $z^v$ denotes the monomial
$z_1^{v^1}\ldots z_n^{v^n}$ if $v=(v^1,\ldots,v^n)\in
N\cong\mathbb{Z}^n$.
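To illustrate these conventions in the simplest nontrivial example (ours, not taken from the text above): for $\bar{X}=\mathbb{CP}^2$ one may take $v_1=(1,0)$, $v_2=(0,1)$, $v_3=(-1,-1)$ and $(\lambda_1,\lambda_2,\lambda_3)=(0,0,-t)$, giving $W=z_1+z_2+e^{-t}/(z_1z_2)$. Its critical points satisfy $z_1=z_2$ and $z_1^3=e^{-t}$, mirroring the quantum cohomology relation $H^{\star 3}=q$ in $QH^*(\mathbb{CP}^2)$ with $q=e^{-t}$. A sympy sketch:

```python
import sympy as sp

z1, z2, q = sp.symbols('z1 z2 q', positive=True)   # q = e^{-t}
W = z1 + z2 + q/(z1*z2)          # Hori-Vafa superpotential for CP^2
z0 = q**sp.Rational(1, 3)
# z1 = z2 = q^{1/3} is a critical point of W: dW/dz1 = dW/dz2 = 0
assert sp.simplify(sp.diff(W, z1).subs({z1: z0, z2: z0})) == 0
assert sp.simplify(sp.diff(W, z2).subs({z1: z0, z2: z0})) == 0
```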
The symplectic manifold $(\bar{X},\omega_{\bar{X}})$ admits a
Hamiltonian action by the torus $T_N=N_\mathbb{R}/N$, and the
corresponding moment map $\mu_{\bar{X}}:\bar{X}\rightarrow\bar{P}$
is naturally a Lagrangian torus fibration. While this fibration is
singular (with collapsed fibers) along $\partial\bar{P}$, the
restriction to the open dense $T_N$-orbit $X\subset\bar{X}$ is a
Lagrangian torus bundle
$$\mu=\mu_{\bar{X}}|_X:X\rightarrow P,$$ where
$P=\bar{P}\setminus\partial\bar{P}$ is the interior of the polytope
$\bar{P}$.\footnote{$\mu:X\rightarrow P$ is a special Lagrangian
fibration if we equip $X$ with the standard holomorphic volume form
on $(\mathbb{C}^*)^n$, so that $X$ becomes an almost Calabi-Yau
manifold. See Definition 2.1 and Lemma 4.1 in Auroux
\cite{Auroux07}.} Applying T-duality and the \emph{semi-flat SYZ
mirror transformation} (see Definition~\ref{def3.1}) to this torus
bundle, we can, as suggested by the SYZ philosophy, obtain the
mirror manifold $Y$ (see Proposition~\ref{prop3.1} and
Proposition~\ref{prop3.2}).\footnote{In fact, we prove that
T-duality gives a bounded domain in the Hori-Vafa mirror manifold.
This result also appeared in Auroux's paper (\cite{Auroux07},
Proposition 4.2).} However, we are not going to get the
superpotential $W:Y\rightarrow\mathbb{C}$ because we have ignored
the anticanonical toric divisor $D_\infty=\bigcup_{i=1}^d
D_i=\bar{X}\setminus X$, and hence quantum corrections. Here, for
$i=1,\ldots,d$, $D_i$ denotes the toric prime divisor which
corresponds to $v_i\in N$. To recapture the quantum corrections, we
consider the (trivial) $\mathbb{Z}^n$-cover
$$\pi:LX=X\times N\rightarrow X$$
and various functions on it.\footnote{In the expository paper
\cite{CL08}, we interpreted $LX$ as a finite dimensional subspace of
the free loop space $\mathcal{L}\bar{X}$ of $\bar{X}$.} Let
$\mathcal{K}(\bar{X})\subset H^2(\bar{X},\mathbb{R})$ be the
K\"{a}hler cone of $\bar{X}$. For each
$q=(q_1,\ldots,q_l)\in\mathcal{K}(\bar{X})$ (here
$l=d-n=\textrm{Picard number of $\bar{X}$}$), we define a
$T_N$-invariant function $\Phi_q:LX\rightarrow\mathbb{R}$ in terms
of the counting of Maslov index two holomorphic discs in $\bar{X}$
with boundary in Lagrangian torus fibers of $\mu:X\rightarrow P$
(see Definition~\ref{def2.1} and Remark~\ref{rmk2.1}). If we further
assume that $\bar{X}$ is a product of projective spaces, then this
family of functions $\{\Phi_q\}_{q\in\mathcal{K}(\bar{X})}\subset
C^\infty(LX)$ can be used to compute the small quantum cohomology
ring $QH^*(\bar{X})$ of $\bar{X}$ as follows (see Section~\ref{sec2}
for details).
\begin{prop}\label{prop1.1}$\mbox{}$
\begin{enumerate}
\item[1.] The logarithmic derivatives of $\Phi_q$, with respect to
$q_a$, for $a=1,\ldots,l$, are given by
$$q_a\frac{\partial\Phi_q}{\partial q_a}=\Phi_q\star\Psi_{n+a}.$$
Here, for each $i=1,\ldots,d$, the function
$\Psi_i:LX\rightarrow\mathbb{R}$ is defined in terms of the
counting of Maslov index two holomorphic discs in $\bar{X}$ with
boundary in Lagrangian torus fibers which intersect the toric prime
divisor $D_i$ at an interior point (see the statement of
Proposition~\ref{prop2.1} and the subsequent discussion), and
$\star$ denotes a convolution product of functions on $LX$ with
respect to the lattice $N$.
\item[2.] We have a natural isomorphism of $\mathbb{C}$-algebras
\begin{equation}\label{isom1.1}
QH^*(\bar{X})\cong\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]/\mathcal{L},
\end{equation}
where $\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]$ is the
polynomial algebra generated by $\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}$
with respect to the convolution product $\star$, and $\mathcal{L}$
is the ideal generated by linear relations that are defined by the
linear equivalence among the toric divisors $D_1,\ldots,D_d$,
provided that $\bar{X}$ is a product of projective spaces.
\end{enumerate}
\end{prop}
The proof of the above isomorphism (\ref{isom1.1}) given in
Subsection~\ref{subsec2.1} will be combinatorial in nature and is
done by a simple computation of certain Gromov-Witten invariants.
While this result may follow easily from known results in the
literature, we choose to include an elementary proof to make this
paper more self-contained. Our proof relies on the assumption that
$\bar{X}$ is a product of projective spaces. However, the more
important reason for us to impose such a strong assumption is that,
when $\bar{X}$ is a product of projective spaces, there is a better
way to understand the geometry underlying the isomorphism
(\ref{isom1.1}) by using \textit{tropical geometry}. A brief
explanation is now in order. More details can be found in
Subsection~\ref{subsec2.2}.
Suppose that $\bar{X}$ is a product of projective spaces. We first
define a tropical analog of the small quantum cohomology ring of
$\bar{X}$, call it $QH^*_{trop}(\bar{X})$. The results of
Mikhalkin~\cite{Mikhalkin03} and Nishinou-Siebert~\cite{NS04}
provided a one-to-one correspondence between those holomorphic
curves in $\bar{X}$ which have contribution to the quantum product
in $QH^*(\bar{X})$ and those tropical curves in $N_\mathbb{R}$ which
have contribution to the tropical quantum product in
$QH^*_{trop}(\bar{X})$. From this follows the natural isomorphism
$$QH^*(\bar{X})\cong QH^*_{trop}(\bar{X}).$$
Next comes a simple but crucial observation: \textit{each tropical
curve which contributes to the tropical quantum product in
$QH^*_{trop}(\bar{X})$ can be obtained by gluing tropical
discs}.\footnote{More recently, Gross \cite{Gross09} generalized
this idea further to give a tropical interpretation of the
\textit{big} quantum cohomology of $\mathbb{P}^2$.} Now, making use
of the fundamental results of Cho and Oh~\cite{CO03} on the
classification of holomorphic discs in toric Fano manifolds, we get
a one-to-one correspondence between the relevant tropical discs and
the Maslov index two holomorphic discs in $\bar{X}$ with boundary in
Lagrangian torus fibers of $\mu:X\rightarrow P$. The latter were
used to define the functions $\Psi_i$'s. So we naturally have
another canonical isomorphism
$$QH^*_{trop}(\bar{X})\cong\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]/\mathcal{L}.$$
Hence, by factoring through the tropical quantum cohomology ring
$QH^*_{trop}(\bar{X})$ and using the correspondence between
symplectic geometry (holomorphic curves and discs) of $\bar{X}$ and
tropical geometry (tropical curves and discs) of $N_\mathbb{R}$, we
obtain a more conceptual and geometric understanding of the
isomorphism (\ref{isom1.1}). This is in line with the general
philosophy advocated in the Gross-Siebert program \cite{GS07}.
Notice that all these can only be done for products of projective
spaces because, as is well known, tropical geometry cannot be used
to count curves which have irreducible components mapping to the
toric boundary divisor, and if $\bar{X}$ is not a product of
projective spaces, those curves \textit{do} contribute to
$QH^*(\bar{X})$ (see Example 3 in Section~\ref{sec4}). This is the
main reason why we confine ourselves to the case of products of
projective spaces, although the isomorphism (\ref{isom1.1}) holds
for all toric Fano manifolds (see Remark~\ref{rmk2.3}).\\
Now we come to the upshot of this paper, namely, we can explicitly
construct and apply SYZ mirror transformations to understand the
mirror symmetry between $\bar{X}$ and $(Y,W)$. We shall define the
SYZ mirror transformation $\mathcal{F}$ for the toric Fano manifold
$\bar{X}$ as \emph{a combination of the semi-flat SYZ mirror
transformation and taking fiberwise Fourier series} (see
Definition~\ref{def3.2} for the precise definition). Our first
result says that the SYZ mirror transformation of $\Phi_q$ is
precisely the exponential of the superpotential $W$, i.e.
$\mathcal{F}(\Phi_q)=\exp(W)$. Then, by proving that the SYZ mirror
transformation $\mathcal{F}(\Psi_i)$ of the function $\Psi_i$ is
nothing but the monomial $e^{\lambda_i}z^{v_i}$, for $i=1,\ldots,d$,
we show that $\mathcal{F}$ exhibits a natural and canonical
isomorphism between the small quantum cohomology ring
$QH^*(\bar{X})$ and the Jacobian ring $Jac(W)$, which takes the
quantum product $\ast$ (which can now, by Proposition~\ref{prop1.1},
be realized as the convolution product $\star$) to the ordinary
product of Laurent polynomials, just as what classical Fourier
series do. This is our main result (see Section~\ref{sec3}):
\begin{thm}\label{main_thm}$\mbox{}$
\begin{enumerate}
\item[1.] The SYZ mirror transformation of the function
$\Phi_q\in C^\infty(LX)$, defined in terms of the counting of Maslov
index two holomorphic discs in $\bar{X}$ with boundary in Lagrangian
torus fibers, is the exponential of the superpotential $W$ on the
mirror manifold $Y$, i.e.
$$\mathcal{F}(\Phi_q)=e^W.$$
Furthermore, we can incorporate the symplectic structure
$\omega_X=\omega_{\bar{X}}|_X$ on $X$ to give the holomorphic volume
form on the Landau-Ginzburg model $(Y,W)$ through the SYZ mirror
transformation $\mathcal{F}$, in the sense that,
\begin{equation*}
\mathcal{F}(\Phi_q e^{\sqrt{-1}\omega_X})=e^W\Omega_Y.
\end{equation*}
\item[2.] The SYZ mirror transformation gives a canonical isomorphism
of $\mathbb{C}$-algebras
\begin{equation*}\label{isom1.2}
\mathcal{F}:QH^*(\bar{X})\overset{\cong}{\longrightarrow} Jac(W),
\end{equation*}
provided that $\bar{X}$ is a product of projective spaces.
\end{enumerate}
\end{thm}
\noindent Here we view $\Phi_q e^{\sqrt{-1}\omega_X}$ as the
\emph{symplectic structure corrected by Maslov index two holomorphic
discs},\footnote{In \cite{CL08}, we rewrote the function $\Phi_q$ as
$\exp(\Psi_1+\ldots+\Psi_d)$, so that $\Phi_q
e^{\sqrt{-1}\omega_X}=\exp(\sqrt{-1}\omega_X+\Psi_1+\ldots+\Psi_d)$
and the formula in part 1. of Theorem~\ref{main_thm} becomes
$\mathcal{F}(e^{\sqrt{-1}\omega_X+\Psi_1+\ldots+\Psi_d})=e^W\Omega_Y$.
Perhaps it is more appropriate to call
$\sqrt{-1}\omega_X+\Psi_1+\ldots+\Psi_d\in\Omega^2(LX)\oplus\Omega^0(LX)$
the symplectic structure corrected by Maslov index two holomorphic
discs.} and $e^W\Omega_Y$ as the holomorphic volume form of the
Landau-Ginzburg model $(Y,W)$.
As mentioned at the beginning, the existence of an isomorphism
$QH^*(\bar{X})\cong Jac(W)$ is not a new result, and was established
before by the works of Batyrev~\cite{Batyrev93} and
Givental~\cite{Givental97a}. However, we shall emphasize that the
key point here is that there is an isomorphism which is realized by
an explicit Fourier-type transformation, namely, the SYZ mirror
transformation $\mathcal{F}$. This hopefully provides a more
conceptual understanding of what is going on.
In \cite{FOOO08a} (Section 5), Fukaya-Oh-Ohta-Ono studied the
isomorphism $QH^*(\bar{X})\cong Jac(W)$ from the point of view of
Lagrangian Floer theory. They worked over the Novikov ring, instead
of $\mathbb{C}$, and gave a proof (Theorem 1.9) of this isomorphism
(over the Novikov ring) for all toric Fano manifolds, based on
Batyrev's formulas for presentations of the small quantum cohomology
rings of toric manifolds and Givental's mirror theorem
\cite{Givental97a}. Their proof was also combinatorial in nature,
but they claimed that a geometric proof would appear in a sequel of
\cite{FOOO08a}. A brief history and a more detailed discussion of
the proof of the isomorphism were also contained in Remark 1.10 of
\cite{FOOO08a}. See also the sequel \cite{FOOO08b}.\\
The rest of this paper is organized as follows. In the next section,
we define the family of functions
$\{\Phi_q\}_{q\in\mathcal{K}(\bar{X})}$ in terms of the counting of
Maslov index two holomorphic discs and give a combinatorial proof of
Proposition~\ref{prop1.1}, which is followed by a discussion of the
role played by tropical geometry. The heart of this paper is
Section~\ref{sec3}, where we construct explicitly the SYZ mirror
transformation $\mathcal{F}$ for a toric Fano manifold $\bar{X}$ and
show that it indeed transforms the symplectic structure of $\bar{X}$
to the complex structure of $(Y,W)$, and vice versa. This is the
first part of Theorem~\ref{main_thm}. We then move on to prove the
second part, which shows how the SYZ mirror transformation
$\mathcal{F}$ can realize the isomorphism $QH^*(\bar{X})\cong
Jac(W)$. Section~\ref{sec4} contains some examples. We conclude with
some discussions in the final section.
\section{Maslov index two holomorphic discs and $QH^*(\bar{X})$}\label{sec2}
In the first part of this section, we define the functions $\Phi_q$,
$q\in\mathcal{K}(\bar{X})$, and $\Psi_1,\ldots,\Psi_d$ on $LX$ in
terms of the counting of Maslov index two holomorphic discs in
$\bar{X}$ with boundary in Lagrangian torus fibers of the moment map
$\mu:X\rightarrow P$, and show how they can be used to compute the
small quantum cohomology ring $QH^*(\bar{X})$ in the case when
$\bar{X}$ is a product of projective spaces. In particular, we
demonstrate how the quantum product can be realized as a convolution
product (part 2. of Proposition~\ref{prop1.1}). In the second part,
we explain the geometry of these results by using tropical geometry.
\subsection{Computing $QH^*(\bar{X})$ in terms of functions on
$LX$}\label{subsec2.1}
Recall that the primitive generators of the 1-dimensional cones of
the fan $\Sigma$ defining the toric Fano manifold $\bar{X}$ are
denoted by $v_1,\ldots,v_d\in N$. Without loss of generality, we can
assume that $v_1=e_1,\ldots,v_n=e_n$ is the standard basis of
$N\cong\mathbb{Z}^n$. The map
$$\partial:\mathbb{Z}^d\rightarrow N,\ (k_1,\ldots,k_d)\mapsto\sum_{i=1}^d k_iv_i$$
is surjective since $\bar{X}$ is compact. Let $K$ be the kernel of
$\partial$, so that the sequence
\begin{equation}\label{seq2.1}
0\longrightarrow K\overset{\iota}{\longrightarrow}\mathbb{Z}^d
\overset{\partial}{\longrightarrow}N\longrightarrow0
\end{equation}
is exact (see, for example, Appendix 1 in the book of Guillemin
\cite{Guillemin94}).
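To fix ideas, the exact sequence (\ref{seq2.1}) can be checked by hand in a small example. The following sketch (our own illustration, not part of the argument; all names are ours) takes $\bar{X}=\mathbb{C}P^1\times\mathbb{C}P^1$, so that $d=4$ and $n=2$, and verifies that the displayed basis of $K$ lies in $\ker\partial$ and that $\partial$ hits the standard basis of $N$:

```python
# Hypothetical illustration of the exact sequence 0 -> K -> Z^d -> N -> 0
# for the toric Fano surface P^1 x P^1 (d = 4 rays, n = 2).
rays = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # v_1, ..., v_4 in N = Z^2

def boundary(k):
    """The map d: Z^4 -> N, (k_1,...,k_4) |-> sum_i k_i v_i."""
    return tuple(sum(k[i] * rays[i][j] for i in range(4)) for j in range(2))

# A Z-basis of K = ker(boundary), chosen as in the remark below:
Q1, Q2 = (1, 0, 1, 0), (0, 1, 0, 1)
assert boundary(Q1) == (0, 0) and boundary(Q2) == (0, 0)

# Surjectivity: the standard basis of N is hit by the first two rays.
assert boundary((1, 0, 0, 0)) == (1, 0)
assert boundary((0, 1, 0, 0)) == (0, 1)
```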
Now consider the K\"{a}hler cone $\mathcal{K}(\bar{X})\subset
H^2(\bar{X},\mathbb{R})$ of $\bar{X}$, and let
$q_1,\ldots,q_l\in\mathbb{R}_{>0}$ ($l=d-n$) be the coordinates of
$\mathcal{K}(\bar{X})$. For each
$q=(q_1,\ldots,q_l)\in\mathcal{K}(\bar{X})$, we choose $\bar{P}$ to
be the polytope defined by
$$\bar{P}=\{x\in M_\mathbb{R}:\langle x,v_i\rangle\geq\lambda_i,\quad i=1,\ldots,d\}$$
with $\lambda_i=0$, for $i=1,\ldots,n$, and $\lambda_{n+a}=\log
q_a$, for $a=1,\ldots,l$. This associates a K\"{a}hler structure
$\omega_{\bar{X}}$ to $\bar{X}$.
\begin{nb}\label{rmk2.0}
Let $\bar{P}$ be the polytope defined by the inequalities
$$\langle x,v_i\rangle\geq\lambda_i,\quad i=1,\ldots,d.$$
Also let
$$Q_1=(Q_{11},\ldots,Q_{d1}),\ldots,Q_l=(Q_{1l},\ldots,Q_{dl})\in\mathbb{Z}^d$$
be a $\mathbb{Z}$-basis of $K$. Then the coordinates
$q=(q_1,\ldots,q_l)\in\mathcal{K}(\bar{X})$ of the K\"{a}hler cone
are given by $q_a=e^{-r_a}$, where
$$r_a=-\sum_{i=1}^d Q_{ia}\lambda_i,$$
for $a=1,\ldots,l$. Hence, different choices of the
$\mathbb{Z}$-basis of $K$ and the constants
$\lambda_1,\ldots,\lambda_d$ can give rise to the same K\"{a}hler
structure parametrized by $q\in\mathcal{K}(\bar{X})$. We choose the
$\mathbb{Z}$-basis $\{Q_1,\ldots,Q_l\}$ of $K$ such that
$(Q_{n+a,b})_{1\leq a,b\leq l}=\textrm{Id}_{l\times l}$, and the
constants $\lambda_1,\ldots,\lambda_d$ such that
$\lambda_1=\ldots=\lambda_n=0$.
\end{nb}
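As a numerical illustration of the remark (our own sketch, again for $\bar{X}=\mathbb{C}P^1\times\mathbb{C}P^1$, with the basis $Q_1=(1,0,1,0)$, $Q_2=(0,1,0,1)$ of $K$ and the normalization $\lambda_1=\lambda_2=0$, $\lambda_{2+a}=\log q_a$), one can check that $q_a=e^{-r_a}$ recovers the K\"ahler parameters:

```python
import math

# Illustrative check of the remark for P^1 x P^1 (d = 4, n = l = 2).
Q = [(1, 0, 1, 0), (0, 1, 0, 1)]                 # Z-basis of K
q = (0.3, 1.7)                                   # a sample point of K(X)
lam = [0.0, 0.0, math.log(q[0]), math.log(q[1])] # lambda_1, ..., lambda_4

for a in range(2):
    r_a = -sum(Q[a][i] * lam[i] for i in range(4))
    assert abs(math.exp(-r_a) - q[a]) < 1e-12    # q_a = e^{-r_a}
```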
Recall that $\mu:X\rightarrow P$ is the restriction of the moment
map $\mu_{\bar{X}}:\bar{X}\rightarrow\bar{P}$ to the open dense
$T_N$-orbit $X\subset\bar{X}$, where $P$ is the interior of the
polytope $\bar{P}$. For a point $x\in P$, we let
$L_x=\mu^{-1}(x)\subset X$ be the Lagrangian torus fiber over $x$.
Then the groups $H_2(\bar{X},\mathbb{Z})$, $\pi_2(\bar{X},L_x)$ and
$\pi_1(L_x)$ can be identified canonically with $K$, $\mathbb{Z}^d$
and $N$ respectively, so that the exact sequence (\ref{seq2.1})
above coincides with the following exact sequence of homotopy groups
associated to the pair $(\bar{X},L_x)$:
$$0\longrightarrow H_2(\bar{X},\mathbb{Z})\overset{\iota}{\longrightarrow}
\pi_2(\bar{X},L_x)\overset{\partial}{\longrightarrow}\pi_1(L_x)\longrightarrow0.$$
To proceed, we shall recall some of the fundamental results of
Cho-Oh~\cite{CO03} on the classification of holomorphic discs in
$(\bar{X},L_x)$:
\begin{thm}[Theorem 5.2 and Theorem
8.1 in Cho-Oh~\cite{CO03}]\label{cho-oh} $\pi_2(\bar{X},L_x)$ is
generated by $d$ Maslov index two classes
$\beta_1,\ldots,\beta_d\in\pi_2(\bar{X},L_x)$, which are represented
by holomorphic discs with boundary in $L_x$. Moreover, given a point
$p\in L_x$, then, for each $i=1,\ldots,d$, there is a unique (up to
automorphism of the domain) Maslov index two holomorphic disc
$\varphi_i:(D^2,\partial D^2)\rightarrow(\bar{X},L_x)$ in the class
$\beta_i$ whose boundary passes through $p$,\footnote{Another way to
state this result: Let $\mathcal{M}_1(L_x,\beta_i)$ be the moduli
space of holomorphic discs in $(\bar{X},L_x)$ in the class $\beta_i$
and with one boundary marked point. In the toric Fano case,
$\mathcal{M}_1(L_x,\beta_i)$ is a smooth compact manifold of real
dimension $n$. Let $ev:\mathcal{M}_1(L_x,\beta_i)\rightarrow L_x$ be
the evaluation map at the boundary marked point. Then we have
$ev_*[\mathcal{M}_1(L_x,\beta_i)]=[L_x]$ as $n$-cycles in $L_x$. See
Cho-Oh \cite{CO03} and Auroux \cite{Auroux07} for details.} and the
symplectic area of $\varphi_i$ is given by
\begin{equation}\label{area}
\textrm{Area}(\varphi_i)=\int_{\beta_i}\omega_{\bar{X}}=\int_{D^2}\varphi_i^*\omega_{\bar{X}}=2\pi(\langle
x,v_i\rangle-\lambda_i).
\end{equation}
\end{thm}
Furthermore, for each $i=1,\ldots,d$, the disc $\varphi_i$
intersects the toric prime divisor $D_i$ at a unique interior point.
(We can in fact choose the parametrization of $\varphi_i$ so that
$\varphi_i(0)\in D_i$.) Indeed, a result of Cho-Oh (Theorem 5.1 in
\cite{CO03}) says that, the Maslov index of a holomorphic disc
$\varphi:(D^2,\partial D^2)\rightarrow(\bar{X},L_x)$ representing a
class $\beta\in\pi_2(\bar{X},L_x)$ is given by twice the algebraic
intersection number $\beta\cdot D_\infty$, where
$D_\infty=\bigcup_{i=1}^dD_i$ is the toric boundary divisor (see
also Auroux~\cite{Auroux07}, Lemma 3.1).
Let $LX$ be the product $X\times N$. We view $LX$ as a (trivial)
$\mathbb{Z}^n$-cover over $X$:
$$\pi:LX=X\times N\rightarrow X,$$
and we equip $LX$ with the symplectic structure $\pi^*(\omega_X)$,
so that it becomes a symplectic manifold. We are now in a position
to define $\Phi_q$.
\begin{defn}\label{def2.1}
Let $q=(q_1,\ldots,q_l)\in\mathcal{K}(\bar{X})$. The function
$\Phi_q:LX\rightarrow\mathbb{R}$ is defined as follows. For
$(p,v)\in LX=X\times N$, let $x=\mu(p)\in P$ and $L_x=\mu^{-1}(x)$
be the Lagrangian torus fiber containing $p$. Denote by
$$\pi_2^+(\bar{X},L_x)=\Big\{\sum_{i=1}^d
k_i\beta_i\in\pi_2(\bar{X},L_x):k_i\in\mathbb{Z}_{\geq0},\
i=1,\ldots,d\Big\}$$ the positive cone generated by the Maslov index
two classes $\beta_1,\ldots,\beta_d$ which are represented by
holomorphic discs with boundary in $L_x$. For $\beta=\sum_{i=1}^d
k_i\beta_i\in\pi_2^+(\bar{X},L_x)$, we denote by $w(\beta)$ the
number $k_1!\ldots k_d!$. Then set
$$\Phi_q(p,v)=\sum_{\beta\in\pi_2^+(\bar{X},L_x),\ \partial\beta=v}
\frac{1}{w(\beta)}e^{-\frac{1}{2\pi}\int_\beta\omega_{\bar{X}}}.$$
\end{defn}
\begin{nb}\label{rmk2.1}$\mbox{}$
\begin{enumerate}
\item[1.] We say that $\Phi_q$ is defined by the counting of Maslov
index two holomorphic discs because of the following: Let $(p,v)\in
LX, x=\mu(p)\in P, L_x\subset X$ and
$\beta_1,\ldots,\beta_d\in\pi_2(\bar{X},L_x)$ be as before. For
$i=1,\ldots,d$, let $n_i(p)$ be the (algebraic) number of Maslov
index two holomorphic discs $\varphi:(D^2,\partial
D^2)\rightarrow(\bar{X},L_x)$ in the class $\beta_i$ whose boundary
passes through $p$. This number is well-defined since $\bar{X}$ is
toric Fano (see Section 3.1 and Section 4 in Auroux
\cite{Auroux07}). Then we can re-define
$$\Phi_q(p,v)=\sum_{\beta\in\pi_2^+(\bar{X},L_x),\ \partial\beta=v}
\frac{n_\beta(p)}{w(\beta)}e^{-\frac{1}{2\pi}\int_\beta\omega_{\bar{X}}},$$
where $n_\beta(p)=n_1(p)^{k_1}\ldots n_d(p)^{k_d}$ if
$\beta=\sum_{i=1}^d k_i\beta_i$. Defining $\Phi_q$ in this way makes
it explicit that $\Phi_q$ carries enumerative meaning. By
Theorem~\ref{cho-oh}, we have $n_i(p)=1$, for all $i=1,\ldots,d$ and
for any $p\in X$. So this definition reduces to the one above.
\item[2.] By definition, $\Phi_q$ is invariant under the
$T_N$-action on $X\subset\bar{X}$. Since
$X=T^*P/N=P\times\sqrt{-1}T_N$ (and the moment map $\mu:X\rightarrow
P$ is nothing but the projection to the first factor), we may view
$\Phi_q$ as a function on $P\times N$.
\item[3.] The function $\Phi_q$ is well-defined, i.e. the infinite
sum in its definition converges. To see this, notice that, by the
symplectic area formula (\ref{area}) of Cho-Oh, we have
$$\Phi_q(p,v)=\Bigg(\sum_{\substack{k_1,\ldots,k_d\in\mathbb{Z}_{\geq0},\\ \sum_{i=1}^d k_iv_i=v}}
\frac{q_1^{k_{n+1}}\ldots q_l^{k_d}}{k_1!\ldots
k_d!}\Bigg)e^{-\langle x,v\rangle},$$ and the sum inside the big
parentheses is less than $e^{n+q_1+\ldots+q_l}$.
\end{enumerate}
\end{nb}
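For $\bar{X}=\mathbb{C}P^1$ (so $n=l=1$, $d=2$, $v_1=1$, $v_2=-1$), the convergence bound in part 3.\ of the remark can be checked numerically. The sketch below (our own illustration) truncates the sum inside the big parentheses and compares it with $e^{n+q_1+\ldots+q_l}=e^{1+q}$:

```python
import math

def bracket_sum(q, v, cutoff=60):
    """Truncation of the sum inside the big parentheses, for P^1:
    sum over k1, k2 >= 0 with k1 - k2 = v of q^{k2} / (k1! k2!)."""
    return sum(q ** k2 / (math.factorial(v + k2) * math.factorial(k2))
               for k2 in range(max(0, -v), cutoff))

q = 2.5
for v in range(-4, 5):
    # n = 1 and a single Kaehler parameter q, so the bound is e^{1+q}.
    assert bracket_sum(q, v) < math.exp(1 + q)
```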
For $T_N$-invariant functions $f,g:LX\rightarrow\mathbb{R}$, we
define their \textit{convolution product} $f\star
g:LX\rightarrow\mathbb{R}$ by
$$(f\star g)(p,v)=\sum_{v_1,v_2\in N,\ v_1+v_2=v}f(p,v_1)g(p,v_2),$$
for $(p,v)\in LX$. As in the theory of Fourier analysis, for the
convolution $f\star g$ to be well-defined, we need some conditions
for both $f$ and $g$. We leave this to Subsection~\ref{subsec3.2}
(see Definition~\ref{def3.3} and the subsequent discussion).
Nevertheless, if one of the functions is nonzero only for finitely
many $v\in N$, then the sum in the definition of $\star$ is a finite
sum, so it is well-defined. This is the case in the following
proposition.
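A minimal computational sketch of the convolution product $\star$ (our own illustration; a $T_N$-invariant function with finite support in $v$, at a fixed point $p$, is stored as a dictionary, with $N=\mathbb{Z}$ for concreteness), including a convolution-inverse pair of the type introduced later in this subsection:

```python
# Sketch of the convolution product at a fixed p; a function of finite
# support in v is a dict {v: value}, with v ranging over N = Z.
def convolve(f, g):
    h = {}
    for v1, a in f.items():
        for v2, b in g.items():
            h[v1 + v2] = h.get(v1 + v2, 0.0) + a * b
    return h

one = {0: 1.0}       # the identity element for the convolution product
psi = {1: 0.25}      # a Psi_i-type function supported at v_i = 1
psi_inv = {-1: 4.0}  # its convolution inverse

assert convolve(psi, psi_inv) == one
assert convolve(psi, one) == psi
```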
\begin{prop}[=part 1. of Proposition~\ref{prop1.1}]\label{prop2.1}
The logarithmic derivatives of $\Phi_q$, with respect to $q_a$ for
$a=1,\ldots,l$, are given by
$$q_a\frac{\partial\Phi_q}{\partial q_a}=\Phi_q\star\Psi_{n+a}$$
where $\Psi_i:LX\rightarrow\mathbb{R}$ is defined, for
$i=1,\ldots,d$, by
$$\Psi_i(p,v)=\left\{ \begin{array}{ll}
e^{-\frac{1}{2\pi}\int_{\beta_i}\omega_{\bar{X}}} & \textrm{if $v=v_i$}\\
0 & \textrm{if $v\neq v_i$,}
\end{array}\right.$$
for $(p,v)\in LX=X\times N$, and with $x=\mu(p)\in P$,
$L_x=\mu^{-1}(x)$ and $\beta_1,\ldots,\beta_d\in\pi_2(\bar{X},L_x)$
as before.
\end{prop}
\begin{proof}
We will compute $q_l\frac{\partial\Phi_q}{\partial q_l}$. The others
are similar. By using Cho-Oh's formula (\ref{area}) and our choice
of the polytope $\bar{P}$, we have
$$e^{\langle x,v\rangle}\Phi_q(p,v)=
\sum_{\substack{k_1,\ldots,k_d\in\mathbb{Z}_{\geq0},\\ \sum_{i=1}^d
k_iv_i=v}}\frac{q_1^{k_{n+1}}\ldots q_l^{k_d}}{k_1!\ldots k_d!}.$$
Note that the right-hand-side is independent of $p\in X$.
Differentiating both sides with respect to $q_l$ gives
\begin{eqnarray*}
e^{\langle x,v\rangle}\frac{\partial\Phi_q(p,v)}{\partial q_l} & = &
\sum_{\substack{k_1,\ldots,k_{d-1}\in\mathbb{Z}_{\geq0},\
k_d\in\mathbb{Z}_{\geq1},\\ \sum_{i=1}^d
k_iv_i=v}}\frac{q_1^{k_{n+1}}\ldots
q_{l-1}^{k_{d-1}}q_l^{k_d-1}}{k_1!\ldots k_{d-1}!(k_d-1)!}\\
& = & \sum_{\substack{k_1,\ldots,k_d\in\mathbb{Z}_{\geq0},\\
\sum_{i=1}^d k_iv_i=v-v_d}}\frac{q_1^{k_{n+1}}\ldots
q_l^{k_d}}{k_1!\ldots
k_d!}\\
& = & e^{\langle x,v-v_d\rangle}\Phi_q(p,v-v_d).
\end{eqnarray*}
Hence, we obtain
\begin{eqnarray*}
q_l\frac{\partial\Phi_q(p,v)}{\partial q_l}=q_l e^{-\langle
x,v_d\rangle}\Phi_q(p,v-v_d).
\end{eqnarray*}
Now, by the definition of the convolution product $\star$, we have
\begin{eqnarray*}
\Phi_q\star\Psi_d(p,v)=\sum_{v_1,v_2\in N,\
v_1+v_2=v}\Phi_q(p,v_1)\Psi_d(p,v_2)=\Phi_q(p,v-v_d)\Psi_d(p,v_d),
\end{eqnarray*}
and
$\Psi_d(p,v_d)=e^{-\frac{1}{2\pi}\int_{\beta_d}\omega_{\bar{X}}}=e^{\lambda_d-\langle
x,v_d\rangle}=q_le^{-\langle x,v_d\rangle}$. The result follows.
\end{proof}
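As a numerical sanity check of the proposition (our own sketch, not part of the proof), take $\bar{X}=\mathbb{C}P^1$ with $\lambda_1=0$ and $\lambda_2=\log q$, so that $\Psi_2(p,v_2)=e^{\lambda_2-\langle x,v_2\rangle}=qe^{x}$ at $x=\mu(p)$. The following compares a finite-difference logarithmic derivative of $\Phi_q$ with the convolution $\Phi_q\star\Psi_2$:

```python
import math

def S(q, v, cutoff=60):
    """Truncation of the bracketed sum for P^1 (cf. Remark 2.1(3))."""
    return sum(q ** k2 / (math.factorial(v + k2) * math.factorial(k2))
               for k2 in range(max(0, -v), cutoff))

def Phi(q, x, v):
    """Phi_q(p, v) for P^1; it depends on p only through x = mu(p)."""
    return S(q, v) * math.exp(-x * v)

q, x, v = 0.2, 0.5, 1
h = 1e-6
lhs = q * (Phi(q + h, x, v) - Phi(q - h, x, v)) / (2 * h)  # q dPhi/dq
# (Phi_q * Psi_2)(p, v) = Phi_q(p, v - v_2) Psi_2(p, v_2), with v_2 = -1
# and Psi_2(p, v_2) = q e^x:
rhs = Phi(q, x, v + 1) * (q * math.exp(x))
assert abs(lhs - rhs) < 1e-6
```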
In the previous proposition, we introduced the $T_N$-invariant
functions $\Psi_1,\ldots,\Psi_d\in C^\infty(LX)$. Similar to what
has been said in Remark~\ref{rmk2.1}(1), these functions carry
enumerative meanings, and we could equally well have defined
$\Psi_i(p,v)$, $i=1,\ldots,d$, in terms of the counting of Maslov
index two holomorphic discs in $(\bar{X},L_{\mu(p)})$ with boundary
$v$ which pass through $p$, i.e.
$$\Psi_i(p,v)=\left\{ \begin{array}{ll}
n_i(p)e^{-\frac{1}{2\pi}\int_{\beta_i}\omega_{\bar{X}}} & \textrm{if $v=v_i$}\\
0 & \textrm{if $v\neq v_i$,}
\end{array}\right.$$
for $(p,v)\in LX=X\times N$, where $x=\mu(p)\in P,
L_x=\mu^{-1}(x)\subset X$ and
$\beta_1,\ldots,\beta_d\in\pi_2(\bar{X},L_x)$ are as before. Again,
since the number $n_i(p)$ is always equal to one, for any $p\in X$
and for all $i=1,\ldots,d$, this definition of $\Psi_i$ is the same
as the previous one. But we should keep in mind that the function
$\Psi_i\in C^\infty(LX)$ encodes the following enumerative
information: for each $p\in X$, there is a unique Maslov index two
holomorphic disc $\varphi_i$ in the class $\beta_i$ with boundary in
the Lagrangian torus fiber $L_{\mu(p)}$ whose boundary passes
through $p$ and whose interior intersects the toric prime divisor
$D_i$ at one point. In view of this, we put the $d$ functions
$\{\Psi_i\}_{i=1}^d$, the $d$ families of Maslov index two
holomorphic discs $\{\varphi_i\}_{i=1}^d$ and the $d$ toric prime
divisors $\{D_i\}_{i=1}^d$ in one-to-one correspondences:
\begin{equation}\label{1-1}
\{\Psi_i\}_{i=1}^d\ \overset{1-1}{\longleftrightarrow}\
\{\varphi_i\}_{i=1}^d\ \overset{1-1}{\longleftrightarrow}\
\{D_i\}_{i=1}^d.
\end{equation}
Through these correspondences, we introduce linear relations in the
$d$-dimensional $\mathbb{C}$-vector space spanned by the functions
$\Psi_1,\ldots,\Psi_d$ using the linear equivalences among the
divisors $D_1,\ldots,D_d$.
\begin{defn}
Two linear combinations $\sum_{i=1}^d a_i\Psi_i$ and $\sum_{i=1}^d
b_i\Psi_i$, where $a_i,b_i\in\mathbb{C}$, are said to be linearly
equivalent, denoted by $\sum_{i=1}^d a_i\Psi_i\sim\sum_{i=1}^d
b_i\Psi_i$, if the corresponding divisors $\sum_{i=1}^d a_iD_i$ and
$\sum_{i=1}^d b_iD_i$ are linearly equivalent.
\end{defn}
We further define $\Psi_i^{-1}:LX\rightarrow\mathbb{R}$,
$i=1,\ldots,d$, by
$$\Psi_i^{-1}(p,v)=\left\{ \begin{array}{ll}
e^{\frac{1}{2\pi}\int_{\beta_i}\omega_{\bar{X}}} & \textrm{if $v=-v_i$}\\
0 & \textrm{if $v\neq-v_i$,}
\end{array}\right.$$
for $(p,v)\in LX$, so that $\Psi_i^{-1}\star\Psi_i=\mathbb{1}$,
where $\mathbb{1}:LX\rightarrow\mathbb{R}$ is the function defined
by
$$\mathbb{1}(p,v)=\left\{ \begin{array}{ll}
1 & \textrm{if $v=0$}\\
0 & \textrm{if $v\neq 0$.}
\end{array}\right.$$
The function $\mathbb{1}$ serves as a multiplicative identity for
the convolution product, i.e. $\mathbb{1}\star f=f\star\mathbb{1}=f$
for any $f\in C^\infty(LX)$. Now the second part of
Proposition~\ref{prop1.1} says that
\begin{prop}[=part 2. of Proposition~\ref{prop1.1}]\label{prop2.2}
We have a natural isomorphism of $\mathbb{C}$-algebras
\begin{equation}\label{isom2.3}
QH^*(\bar{X})\cong\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]/\mathcal{L},
\end{equation}
where $\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]$ is the
polynomial algebra generated by $\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}$
with respect to the convolution product $\star$ and $\mathcal{L}$ is
the ideal generated by linear equivalences, provided that $\bar{X}$
is a product of projective spaces.
\end{prop}
In the rest of this subsection, we will give an elementary proof of
this proposition by simple combinatorial arguments and computation
of certain Gromov-Witten invariants.
First of all, each toric prime divisor $D_i$ ($i=1,\ldots,d$)
determines a cohomology class in $H^2(\bar{X},\mathbb{C})$, which
will be, by abuse of notations, also denoted by $D_i$. It is known
by the general theory of toric varieties that the cohomology ring
$H^*(\bar{X},\mathbb{C})$ of the compact toric manifold $\bar{X}$ is
generated by the classes $D_1,\ldots,D_d$ in
$H^2(\bar{X},\mathbb{C})$ (see, for example, Fulton \cite{Fulton93}
or Audin \cite{Audin04}). More precisely, there is a presentation of
the form:
$$H^*(\bar{X},\mathbb{C})=\mathbb{C}[D_1,\ldots,D_d]/(\mathcal{L}+\mathcal{SR}),$$
where $\mathcal{L}$ is the ideal generated by linear equivalences
and $\mathcal{SR}$ is the \textit{Stanley-Reisner ideal} generated
by \textit{primitive relations} (see Batyrev \cite{Batyrev91}). Now,
by a result of Siebert and Tian (Proposition 2.2 in \cite{ST94}),
$QH^*(\bar{X})$ is also generated by $D_1,\ldots,D_d$ and a
presentation of $QH^*(\bar{X})$ is given by replacing each relation
in $\mathcal{SR}$ by its quantum counterpart. Denote by
$\mathcal{SR}_Q$ the quantum Stanley-Reisner ideal. Then we can
rephrase what we said as:
$$QH^*(\bar{X})=\mathbb{C}[D_1,\ldots,D_d]/(\mathcal{L}+\mathcal{SR}_Q).$$
The computation of $QH^*(\bar{X})$ (as a presentation) therefore
reduces to computing the generators of the ideal $\mathcal{SR}_Q$.
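For orientation, we record the most familiar special case, which is recovered by the computation below:

```latex
% Standard special case: for $\bar{X}=\mathbb{C}P^n$, all toric prime
% divisors $D_1,\ldots,D_{n+1}$ are linearly equivalent to the hyperplane
% class $H$, and the unique Stanley-Reisner relation $D_1\cdots D_{n+1}=0$
% is deformed to its quantum counterpart $D_1\ast\cdots\ast D_{n+1}=q$,
% so that
\begin{align*}
H^*(\mathbb{C}P^n,\mathbb{C}) &\cong \mathbb{C}[H]/(H^{n+1}),\\
QH^*(\mathbb{C}P^n) &\cong \mathbb{C}[H]/(H^{n+1}-q).
\end{align*}
```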
Let $\bar{X}=\mathbb{C}P^{n_1}\times\ldots\times\mathbb{C}P^{n_l}$
be a product of projective spaces. The complex dimension of
$\bar{X}$ is $n=n_1+\ldots+n_l$. For $a=1,\ldots,l$, let
$v_{1,a}=e_1,\ldots,v_{n_a,a}=e_{n_a},v_{n_a+1,a}=-\sum_{j=1}^{n_a}e_j\in
N_a$ be the primitive generators of the 1-dimensional cones in the
fan of $\mathbb{C}P^{n_a}$, where $\{e_1,\ldots,e_{n_a}\}$ is the
standard basis of $N_a\cong\mathbb{Z}^{n_a}$. For
$j=1,\ldots,n_a+1$, $a=1,\ldots,l$, we use the same symbol $v_{j,a}$
to denote the vector
$$(0,\ldots,\underbrace{v_{j,a}}_{\textrm{$a$-th}},\ldots,0)
\in N=N_1\oplus\ldots\oplus N_l,$$ where $v_{j,a}$ sits in the $a$-th
place. These $d=\sum_{a=1}^l(n_a+1)=n+l$ vectors in $N$ are the
primitive generators of the 1-dimensional cones of the fan $\Sigma$
defining $\bar{X}$. In the following, we shall also denote the toric
prime divisor, the relative homotopy class, the family of Maslov
index two holomorphic discs with boundary in Lagrangian torus fibers
and the function on $LX$ corresponding to $v_{j,a}$ by $D_{j,a}$,
$\beta_{j,a}$, $\varphi_{j,a}$ and $\Psi_{j,a}$ respectively.
\begin{lem}
There are exactly $l$ primitive collections given by
$$\mathfrak{P}_a=\{v_{j,a}:j=1,\ldots,n_a+1\},\ a=1,\ldots,l,$$
and hence the Stanley-Reisner ideal of
$\bar{X}=\mathbb{C}P^{n_1}\times\ldots\times\mathbb{C}P^{n_l}$ is
given by
$$\mathcal{SR}=\langle D_{1,a}\cup\ldots\cup D_{n_a+1,a}:a=1,\ldots,l\rangle.$$
\end{lem}
\begin{proof}
Let $\mathfrak{P}$ be any primitive collection. By definition,
$\mathfrak{P}$ is a collection of primitive generators of
1-dimensional cones of the fan $\Sigma$ defining $\bar{X}$ such that
for any $v\in\mathfrak{P}$, $\mathfrak{P}\setminus\{v\}$ generates a
$(|\mathfrak{P}|-1)$-dimensional cone in $\Sigma$, while
$\mathfrak{P}$ itself does not generate a
$|\mathfrak{P}|$-dimensional cone in $\Sigma$. Suppose that
$\mathfrak{P}\not\subset\mathfrak{P}_a$ for any $a$. For each $a$,
choose $v\in\mathfrak{P}\setminus(\mathfrak{P}\cap\mathfrak{P}_a)$.
By definition, $\mathfrak{P}\setminus\{v\}$ generates a cone in
$\Sigma$. But all the cones in $\Sigma$ are direct sums of cones in
the fans of the factors. So, in particular,
$\mathfrak{P}\cap\mathfrak{P}_a$, whenever it is nonempty, will
generate a cone in the fan of $\mathbb{C}P^{n_a}$. Since
$\mathfrak{P}=\bigsqcup_{a=1}^l\mathfrak{P}\cap\mathfrak{P}_a$, this
implies that the set $\mathfrak{P}$ itself generates a cone, which
is impossible. We therefore conclude that $\mathfrak{P}$ must be
contained in, and hence equal to one of the $\mathfrak{P}_a$'s.
\end{proof}
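The combinatorics in the above proof can be checked directly in a small case. The following sketch (our own, for $\bar{X}=\mathbb{C}P^1\times\mathbb{C}P^1$, whose fan has four rays and four maximal cones) enumerates the primitive collections by brute force from the list of cones:

```python
from itertools import combinations

# Illustrative check of the lemma for P^1 x P^1: exactly two primitive
# collections, {v_1, v_3} and {v_2, v_4}.
rays = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # v_1, v_2, v_3, v_4
# All cones of the fan, as frozensets of ray indices (frozenset() = {0}).
cones = [frozenset()] + [frozenset([i]) for i in range(4)] + \
        [frozenset(c) for c in [(0, 1), (1, 2), (2, 3), (3, 0)]]

def is_primitive(S):
    """S generates no cone, but every S minus one element does."""
    return frozenset(S) not in cones and \
           all(frozenset(S) - {v} in cones for v in S)

prims = [set(S) for r in range(1, 5)
         for S in combinations(range(4), r) if is_primitive(S)]
assert prims == [{0, 2}, {1, 3}]   # {v_1, v_3} and {v_2, v_4}
```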
Hence, to compute the quantum Stanley-Reisner ideal
$\mathcal{SR}_Q$, we must compute the expression
$D_{1,a}\ast\ldots\ast D_{n_a+1,a}$, for $a=1,\ldots,l$, where
$\ast$ denotes the small quantum product of $QH^*(\bar{X})$. Before
doing this, we shall recall the definitions and properties of the
relevant Gromov-Witten invariants and the small quantum product for
$\bar{X}=\mathbb{C}P^{n_1}\times\ldots\times\mathbb{C}P^{n_l}$ as
follows.\\
For $\delta\in H_2(\bar{X},\mathbb{Z})$, let
$\overline{\mathcal{M}}_{0,m}(\bar{X},\delta)$ be the moduli space
of genus 0 stable maps with $m$ marked points and class $\delta$.
Since $\bar{X}$ is convex (i.e. for all maps
$\varphi:\mathbb{C}P^1\rightarrow\bar{X}$,
$H^1(\mathbb{C}P^1,\varphi^*T\bar{X})=0$), the moduli space
$\overline{\mathcal{M}}_{0,m}(\bar{X},\delta)$, if nonempty, is a
variety of pure complex dimension
$\textrm{dim}_\mathbb{C}(\bar{X})+c_1(\bar{X})\cdot\delta+m-3$ (see,
for example, the book \cite{Aluffi97}, p.3). For $k=1,\ldots,m$, let
$ev_k:\overline{\mathcal{M}}_{0,m}(\bar{X},\delta)\rightarrow\bar{X}$
be the evaluation map at the $k$th marked point, and let
$\pi:\overline{\mathcal{M}}_{0,m}(\bar{X},\delta)\rightarrow\overline{\mathcal{M}}_{0,m}$
be the forgetful map, where $\overline{\mathcal{M}}_{0,m}$ denotes
the Deligne-Mumford moduli space of genus 0 stable curves with $m$
marked points. Then, given cohomology classes $A\in
H^*(\overline{\mathcal{M}}_{0,m},\mathbb{Q})$ and
$\gamma_1,\ldots,\gamma_m\in H^*(\bar{X},\mathbb{Q})$, the
Gromov-Witten invariant is defined by
$$GW_{0,m}^{\bar{X},\delta}(A;\gamma_1,\ldots,\gamma_m)=
\int_{[\overline{\mathcal{M}}_{0,m}(\bar{X},\delta)]}\pi^*(A)\wedge
ev_1^*(\gamma_1)\wedge\ldots\wedge ev_m^*(\gamma_m),$$ where
$[\overline{\mathcal{M}}_{0,m}(\bar{X},\delta)]$ denotes the
fundamental class of $\overline{\mathcal{M}}_{0,m}(\bar{X},\delta)$.
Let $\ast$ be the small quantum product of $QH^*(\bar{X})$. Then it
is not hard to show that, for any classes
$\gamma_1,\ldots,\gamma_r\in H^*(\bar{X},\mathbb{Q})$, the
expression $\gamma_1\ast\ldots\ast\gamma_r$ can be computed by the
formula
$$\gamma_1\ast\ldots\ast\gamma_r=\sum_{\delta\in H_2(\bar{X},\mathbb{Z})}\sum_{i}
GW_{0,r+1}^{\bar{X},\delta}(\textrm{PD(pt)};\gamma_1,\ldots,\gamma_r,t_i)t^iq^\delta,$$
where $\{t_i\}$ is a basis of $H^*(\bar{X},\mathbb{Q})$, $\{t^i\}$
denotes the dual basis of $\{t_i\}$ with respect to the Poincar\'{e}
pairing, and $\textrm{PD(pt)}\in
H^{2m-6}(\overline{\mathcal{M}}_{0,m},\mathbb{Q})$ denotes the
Poincar\'{e} dual of a point in $\overline{\mathcal{M}}_{0,m}$ (see,
e.g. formula (1.4) in Spielberg \cite{Spielberg01}). Moreover, since
$\bar{X}$ is homogeneous of the form $G/P$, where $G$ is a Lie group
and $P$ is a parabolic subgroup, the Gromov-Witten invariants are
enumerative, in the sense that
$GW_{0,r+1}^{\bar{X},\delta}(\textrm{PD(pt)};\gamma_1,\ldots,\gamma_r,t_i)$
is equal to the number of holomorphic maps
$\varphi:(\mathbb{C}P^1;x_1,\ldots,x_r,x_{r+1})\rightarrow\bar{X}$
with $\varphi_*([\mathbb{C}P^1])=\delta$ such that
$(\mathbb{C}P^1;x_1,\ldots,x_r,x_{r+1})$ is a given point in
$\overline{\mathcal{M}}_{0,r+1}$, $\varphi(x_k)\in\Gamma_k$, for
$k=1,\ldots,r$, and $\varphi(x_{r+1})\in T_i$, where
$\Gamma_1,\ldots,\Gamma_r,T_i$ are representatives of cycles
Poincar\'{e} duals to the classes $\gamma_1,\ldots,\gamma_r,t_i$
respectively (see \cite{Aluffi97}, p.12).\\
We shall now use the above facts to compute $D_{1,a}\ast\ldots\ast
D_{n_a+1,a}$, which is given by the formula
$$D_{1,a}\ast\ldots\ast D_{n_a+1,a}=\sum_{\delta\in H_2(\bar{X},\mathbb{Z})}\sum_{i}
GW_{0,n_a+2}^{\bar{X},\delta}(\textrm{PD(pt)};D_{1,a},\ldots,D_{n_a+1,a},t_i)t^iq^\delta.$$
First of all, since $H_2(\bar{X},\mathbb{Z})$ is the kernel of the
boundary map
$\partial:\pi_2(\bar{X},L_x)=\mathbb{Z}^d\rightarrow\pi_1(L_x)=N$, a
homology class $\delta\in H_2(\bar{X},\mathbb{Z})$ can be
represented by a $d$-tuple of integers
$$\delta=(c_{1,1},\ldots,c_{n_1+1,1},\ldots,c_{1,b},\ldots,c_{n_b+1,b},\ldots,c_{1,l},
\ldots,c_{n_l+1,l})\in\mathbb{Z}^d$$ satisfying
$\sum_{b=1}^l\sum_{j=1}^{n_b+1}c_{j,b}v_{j,b}=0\in N$. Then we have
$c_1(\bar{X})\cdot\delta=\sum_{b=1}^l\sum_{j=1}^{n_b+1}c_{j,b}$. For
the Gromov-Witten invariant
$GW_{0,n_a+2}^{\bar{X},\delta}(\textrm{PD(pt)};D_{1,a},\ldots,D_{n_a+1,a},t_i)$
to be nonzero, $\delta$ must be represented by irreducible
holomorphic curves which pass through all the divisors
$D_{1,a},\ldots,D_{n_a+1,a}$. This implies that $c_{j,a}\geq1$, for
$j=1,\ldots,n_a+1$, and moreover, $\delta$ lies in the cone of
effective classes $H_2^{\textrm{eff}}(\bar{X},\mathbb{Z})\subset
H_2(\bar{X},\mathbb{Z})$. By Theorem 2.15 of Batyrev
\cite{Batyrev91}, $H_2^{\textrm{eff}}(\bar{X},\mathbb{Z})$ is given
by the kernel of the restriction of the boundary map
$\partial|_{\mathbb{Z}_{\geq0}^d}:\mathbb{Z}_{\geq0}^d\rightarrow
N$. So we must also have $c_{j,b}\geq0$ for all $j$ and $b$, and we
conclude that
$$c_1(\bar{X})\cdot\delta=\sum_{b=1}^l\sum_{j=1}^{n_b+1}c_{j,b}\geq n_a+1.$$
By dimension counting,
$GW_{0,n_a+2}^{\bar{X},\delta}(\textrm{PD(pt)};D_{1,a},\ldots,D_{n_a+1,a},t_i)\neq0$
only when
$$2(\textrm{dim}_\mathbb{C}(\bar{X})+c_1(\bar{X})\cdot\delta+(n_a+2)-3)=
2((n_a+2)-3)+2(n_a+1)+\textrm{deg}(t_i).$$ Combining this equality
with the inequality $c_1(\bar{X})\cdot\delta\geq n_a+1$ above gives
$\textrm{deg}(t_i)\geq2\textrm{dim}(\bar{X})$. We
therefore must have $t_i=\textrm{PD(pt)}\in
H^{2\textrm{dim}(\bar{X})}(\bar{X},\mathbb{Q})$ and $\delta\in
H_2(\bar{X},\mathbb{Z})$ is represented by the $d$-tuple of integers
$\delta_a:=(c_{1,1},\ldots,c_{n_l+1,l})\in\mathbb{Z}^d$, where
$$c_{j,b}=\left\{\begin{array}{ll}
1 & \textrm{if $b=a$ and $j=1,\ldots,n_a+1$}\\
0 & \textrm{otherwise,}
\end{array}\right.$$
i.e., $\delta=\delta_a$ is the class of a line in
the factor $\mathbb{C}P^{n_a}$. Hence,
$$D_{1,a}\ast\ldots\ast D_{n_a+1,a}=
GW_{0,n_a+2}^{\bar{X},\delta_a}(\textrm{PD(pt)};D_{1,a},\ldots,D_{n_a+1,a},\textrm{PD(pt)})q^{\delta_a}.$$
By Theorem 9.3 in Batyrev~\cite{Batyrev93} (see also Siebert
\cite{Siebert95}, section 4), the Gromov-Witten invariant on the
right-hand-side is equal to 1. Geometrically, this means that, for
any given point $p\in X\subset\bar{X}$, there is a unique
holomorphic map
$\varphi_a:(\mathbb{C}P^1;x_1,\ldots,x_{n_a+2})\rightarrow\bar{X}$
with class $\delta_a$, $\varphi_a(x_j)\in D_{j,a}$, for
$j=1,\ldots,n_a+1$, $\varphi_a(x_{n_a+2})=p$ and such that
$(\mathbb{C}P^1;x_1,\ldots,x_{n_a+2})$ is a given configuration in
$\overline{\mathcal{M}}_{0,n_a+2}$. Also note that, for
$a=1,\ldots,l$,
$q^{\delta_a}=\exp(-\frac{1}{2\pi}\int_{\delta_a}\omega_{\bar{X}})=e^{-r_a}=q_a$,
where $(q_1,\ldots,q_l)$ are the coordinates of the K\"{a}hler cone
$\mathcal{K}(\bar{X})$. Thus, we have the following lemma.
\begin{lem}\label{lem2.2} For $a=1,\ldots,l$, we have
$$D_{1,a}\ast\ldots\ast D_{n_a+1,a}=q_a.$$
Hence, the quantum Stanley-Reisner ideal of
$\bar{X}=\mathbb{C}P^{n_1}\times\ldots\times\mathbb{C}P^{n_l}$ is
given by
$$\mathcal{SR}_Q=\langle D_{1,a}\ast\ldots\ast D_{n_a+1,a}-q_a:a=1,\ldots,l\rangle,$$
and the quantum cohomology ring of $\bar{X}$ has a presentation
given by
\begin{eqnarray*}
QH^*(\bar{X})=&\frac{\mathbb{C}[D_{1,1},\ldots,D_{n_1+1,1},\ldots,D_{1,l},\ldots,D_{n_l+1,l}]}
{\big\langle D_{j,a}-D_{n_a+1,a}:j=1,\ldots,n_a,\
a=1,\ldots,l\big\rangle
+\big\langle\prod_{j=1}^{n_a+1}D_{j,a}-q_a:a=1,\ldots,l\big\rangle}.
\end{eqnarray*}
\end{lem}
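For example, for $\bar{X}=\mathbb{C}P^2$ (so $l=1$, $n_1=2$,
$d=3$), the linear relations identify the three toric divisors with
the hyperplane class $H$, and the presentation in
Lemma~\ref{lem2.2} reduces to the familiar
$$QH^*(\mathbb{C}P^2)\cong\mathbb{C}[H]/\langle H^3-q\rangle,$$
the quantum relation being $H\ast H\ast H=q$.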
Proposition~\ref{prop2.2} now follows from a simple combinatorial
argument:
\begin{proof}[Proof of Proposition~\ref{prop2.2}] For a general toric Fano manifold $\bar{X}$,
recall, from Remark~\ref{rmk2.0}, that we have chosen a
$\mathbb{Z}$-basis
$$Q_1=(Q_{11},\ldots,Q_{d1}),\ldots,Q_l=(Q_{1l},\ldots,Q_{dl})\in\mathbb{Z}^d$$
of $K=H_2(\bar{X},\mathbb{Z})$ such that $(Q_{n+a,b})_{1\leq a,b\leq
l}=\textrm{Id}_{l\times l}$. So, by Cho-Oh's symplectic area formula
(\ref{area}), for $a=1,\ldots,l$, we have
\begin{eqnarray*}
\Big(\sum_{i=1}^nQ_{ia}\int_{\beta_i}\omega_{\bar{X}}\Big)+\int_{\beta_{n+a}}\omega_{\bar{X}}
& = & 2\pi\Big(\sum_{i=1}^nQ_{ia}(\langle
x,v_i\rangle-\lambda_i)\Big)+2\pi(\langle x,v_{n+a}\rangle-\lambda_{n+a})\\
& = & 2\pi\langle
x,\sum_{i=1}^nQ_{ia}v_i+v_{n+a}\rangle-2\pi(\sum_{i=1}^nQ_{ia}\lambda_i+\lambda_{n+a})\\
& = & 2\pi r_a.
\end{eqnarray*}
Then, by the definition of the convolution product of functions on
$LX$, we have
\begin{eqnarray*}
\Psi_1^{Q_{1a}}\star\ldots\star\Psi_n^{Q_{na}}\star\Psi_{n+a}(x,v)
& = & \left\{
\begin{array}{ll}
e^{-\frac{1}{2\pi}(\sum_{i=1}^nQ_{ia}\int_{\beta_i}\omega_{\bar{X}})
-\frac{1}{2\pi}\int_{\beta_{n+a}}\omega_{\bar{X}}} & \textrm{if $v=0$}\\
0 & \textrm{if $v\neq0$}
\end{array} \right.\\
& = & \left\{ \begin{array}{ll}
e^{-r_a} & \textrm{if $v=0$}\\
0 & \textrm{if $v\neq0$}
\end{array} \right.\\
& = & q_a\mathbb{1},
\end{eqnarray*}
or
$\Psi_{n+a}=q_a(\Psi_1^{-1})^{Q_{1a}}\star\ldots\star(\Psi_n^{-1})^{Q_{na}}$,
for $a=1,\ldots,l$.
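For instance, for $\bar{X}=\mathbb{C}P^2$, where $n=2$, $d=3$,
$l=1$ and $Q_1=(1,1,1)$, the above computation reads
$$\Psi_1\star\Psi_2\star\Psi_3=q\mathbb{1},\quad\textrm{or}\quad
\Psi_3=q\Psi_1^{-1}\star\Psi_2^{-1},$$
which mirrors the quantum relation $D_1\ast D_2\ast D_3=q$ in
Lemma~\ref{lem2.2}.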
Suppose that the following condition is satisfied: $Q_{ia}\geq0$
for $i=1,\ldots,n$, $a=1,\ldots,l$, and, for each $i=1,\ldots,n$,
there exists $1\leq a\leq l$ such that $Q_{ia}>0$. This condition
holds when $\bar{X}$ is a product of projective spaces. Then the
inclusion
$$\mathbb{C}[\Psi_1,\ldots,\Psi_n,\Psi_{n+1},\ldots,\Psi_d]\hookrightarrow
\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]$$ is an isomorphism:
the condition guarantees that each inverse $\Psi_i^{-1}$,
$i=1,\ldots,n$, is a constant multiple of a monomial in
$\Psi_1,\ldots,\Psi_d$.
Consider the surjective map
$$\rho:\mathbb{C}[D_1,\ldots,D_d]\rightarrow\mathbb{C}[\Psi_1,\ldots,\Psi_d]$$
defined by mapping $D_i$ to $\Psi_i$ for $i=1,\ldots,d$. This map is
not injective because there are nontrivial relations in
$\mathbb{C}[\Psi_1,\ldots,\Psi_d]$ generated by the relations
$$\Psi_1^{Q_{1a}}\star\ldots\star\Psi_n^{Q_{na}}\star\Psi_{n+a}-q_a\mathbb{1}=0,\ a=1,\ldots,l.$$
By Lemma~\ref{lem2.2}, the kernel of $\rho$ is exactly given by the
ideal $\mathcal{SR}_Q$ when $\bar{X}$ is a product of projective
spaces. Thus, we have an isomorphism
$$\mathbb{C}[D_1,\ldots,D_d]/\mathcal{SR}_Q\overset{\cong}{\longrightarrow}\mathbb{C}[\Psi_1,\ldots,\Psi_d].$$
Since
$(\mathbb{C}[D_1,\ldots,D_d]/\mathcal{SR}_Q)/\mathcal{L}=\mathbb{C}[D_1,\ldots,D_d]/(\mathcal{L}+\mathcal{SR}_Q)
=QH^*(\bar{X})$, we obtain the desired isomorphism
$$QH^*(\bar{X})\cong\mathbb{C}[\Psi_1,\ldots,\Psi_d]/\mathcal{L}\cong
\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]/\mathcal{L},$$
provided that $\bar{X}$ is a product of projective spaces.
\end{proof}
\begin{nb}\label{rmk2.3}$\mbox{}$
\begin{enumerate}
\item[1.] In \cite{Batyrev93}, Theorem 5.3, Batyrev gave a ``formula'' for the
quantum Stanley-Reisner ideal $\mathcal{SR}_Q$ for any compact toric
K\"{a}hler manifold, using his own definition of the small quantum
product (which is different from the usual one because Batyrev
counted only holomorphic maps from $\mathbb{C}P^1$). By Givental's
mirror theorem~\cite{Givental97a}, Batyrev's formula is true, using
the usual definition of the small quantum product, for all toric
Fano manifolds. Our proof of Lemma~\ref{lem2.2} is nothing but a
simple verification of Batyrev's formula in the case of products of
projective spaces, without using Givental's mirror theorem.
\item[2.] In any event,
Batyrev's formula in \cite{Batyrev93} for a presentation of the
small quantum cohomology ring $QH^*(\bar{X})$ of a toric Fano
manifold $\bar{X}$ is correct. In the same paper, Batyrev also
proved that $QH^*(\bar{X})$ is canonically isomorphic to the
Jacobian ring $Jac(W)$, where $W$ is the superpotential mirror to
$\bar{X}$ (Theorem 8.4 in \cite{Batyrev93}). Now, by
Theorem~\ref{thm3.3} in Subsection~\ref{subsec3.3}, the inverse SYZ
transformation $\mathcal{F}^{-1}$ gives a canonical isomorphism
$\mathcal{F}^{-1}:Jac(W)\overset{\cong}{\rightarrow}
\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]/\mathcal{L}$. Then,
the composition map
$QH^*(\bar{X})\rightarrow\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]/\mathcal{L}$,
which maps $D_i$ to $\Psi_i$, for $i=1,\ldots,d$, is an isomorphism.
This proves Proposition~\ref{prop2.2} for all toric Fano manifolds.
We choose not to use this proof because all the geometry is then
hidden by the use of Givental's mirror theorem.
\end{enumerate}
\end{nb}
\subsection{The role of tropical geometry}\label{subsec2.2}
While our proof of the isomorphism (\ref{isom2.3}) in
Proposition~\ref{prop2.2} is combinatorial in nature, the best way
to understand the geometry behind it is through the correspondence
between holomorphic curves and discs in $\bar{X}$ and their tropical
counterparts in $N_\mathbb{R}$. Indeed, this is the main reason why
we confine ourselves to the case of products of projective spaces.
Our first task is to define a tropical analog $QH^*_{trop}(\bar{X})$
of the small quantum cohomology ring of $\bar{X}$, when $\bar{X}$ is
a product of projective spaces. For this, we shall recall some
notions in tropical geometry. We will not state the precise
definitions, for which we refer the reader to Mikhalkin
\cite{Mikhalkin03}, \cite{Mikhalkin06}, \cite{Mikhalkin07} and
Nishinou-Siebert \cite{NS04}.\\
A \textit{genus 0 tropical curve with $m$ marked points} is a
connected tree $\Gamma$ with exactly $m$ unbounded edges (also
called leaves), in which each bounded edge is assigned a positive
length.
Let $\overline{\mathcal{M}}_{0,m}^{trop}$ be the moduli space of
genus 0 tropical curves with $m$ marked points (modulo
isomorphisms). The combinatorial types of $\Gamma$ partition
$\overline{\mathcal{M}}_{0,m}^{trop}$ into disjoint subsets, each of
which has the structure of a polyhedral cone $\mathbb{R}_{>0}^e$
(where $e$ is the number of bounded edges in $\Gamma$). There is a
distinguished point in $\overline{\mathcal{M}}_{0,m}^{trop}$
corresponding to the (unique) tree $\Gamma_m$ with exactly one
($m$-valent) vertex $V$, $m$ unbounded edges $E_1,\ldots,E_m$ and
\textit{no} bounded edges. See Figure 2.1 below. We will fix this
point in $\overline{\mathcal{M}}_{0,m}^{trop}$; this is analogous to
fixing a point in $\overline{\mathcal{M}}_{0,m}$.
\begin{figure}[ht]
\setlength{\unitlength}{1mm}
\begin{picture}(100,30)
\curve(30,11.5, 12,27.5) \put(15,25.5){$E_2$} \curve(30,11.5,
12,-1.5) \put(16.5,-1){$E_3$} \curve(30,11.5, 40,30.5)
\put(39,26.5){$E_1$} \curve(30,11.5, 48,0.5) \put(41,-0.2){$E_4$}
\put(29.2,10.7){$\bullet$} \put(32,11){$V$} \put(55,11.5){Figure
2.1: $\Gamma_4\in\overline{\mathcal{M}}_{0,4}^{trop}.$}
\end{picture}
\end{figure}
Let $\Sigma$ be the fan defining
$\bar{X}=\mathbb{C}P^{n_1}\times\ldots\times\mathbb{C}P^{n_l}$, and
denote by
$\Sigma[1]=\{v_{1,1},\ldots,v_{n_1+1,1},\ldots,v_{1,a},\ldots,v_{n_a+1,a},\ldots,
v_{1,l},\ldots,v_{n_l+1,l}\}\subset N$ the set of primitive
generators of 1-dimensional cones in $\Sigma$. Let
$h:\Gamma_m\rightarrow N_\mathbb{R}$ be a continuous embedding such
that, for each $k=1,\ldots,m$,
$h(E_k)=h(V)+\mathbb{R}_{\geq0}v(E_k)$ for some
$v(E_k)\in\Sigma[1]$, and the following \textit{balancing condition}
is satisfied:
$$\sum_{k=1}^m v(E_k)=0.$$
Then the tuple $(\Gamma_m;E_1,\ldots,E_m;h)$ is a
\textit{parameterized $m$-marked, genus 0 tropical curve in
$\bar{X}$}. The \textit{degree} of $(\Gamma_m;E_1,\ldots,E_m;h)$ is
the $d$-tuple of integers
$\delta(h)=(c_{1,1},\ldots,c_{n_1+1,1},\ldots,c_{1,a},\ldots,c_{n_a+1,a},\ldots,
c_{1,l},\ldots,c_{n_l+1,l})\in\mathbb{Z}^d$, where
$$c_{j,a}=\left\{\begin{array}{ll}
1 & \textrm{if $v_{j,a}\in\{v(E_1),\ldots,v(E_m)\}$}\\
0 & \textrm{otherwise.}
\end{array}\right.$$
By the balancing condition, we have
$\sum_{a=1}^l\sum_{j=1}^{n_a+1}c_{j,a}v_{j,a}=0$, i.e. $\delta(h)$
lies in the kernel of $\partial:\mathbb{Z}^d\rightarrow N$, and so
$\delta(h)\in H_2(\bar{X},\mathbb{Z})$.
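For instance, for $\bar{X}=\mathbb{C}P^2$ with the standard fan,
where $\Sigma[1]=\{v_{1,1}=(1,0),v_{2,1}=(0,1),v_{3,1}=(-1,-1)\}$,
mapping $\Gamma_3$ so that its three unbounded edges point in these
three directions satisfies the balancing condition
$(1,0)+(0,1)+(-1,-1)=0$, and the resulting tropical curve has degree
$\delta(h)=(1,1,1)$, the class of a line.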
We want to consider the tropical counterpart, denoted by
$$TGW_{0,n_a+2}^{\bar{X},\delta}(\textrm{PD(pt)};D_{1,a},\ldots,D_{n_a+1,a},t_i),$$
of the Gromov-Witten invariant
$GW_{0,n_a+2}^{\bar{X},\delta}(\textrm{PD(pt)};D_{1,a},\ldots,D_{n_a+1,a},t_i)$.
\footnote{``TGW'' stands for ``tropical Gromov-Witten''.} Since a
general definition is not available, we introduce a tentative
definition as follows.
\begin{defn}
We define
$TGW_{0,n_a+2}^{\bar{X},\delta}(\textrm{PD(pt)};D_{1,a},\ldots,D_{n_a+1,a},t_i)$
to be the number of parameterized $(n_a+1)$-marked, genus 0 tropical
curves of the form $(\Gamma_{n_a+1};E_1,\ldots,E_{n_a+1};h)$ with
$\delta(h)=\delta$ such that
$h(E_j)=h(V)+\mathbb{R}_{\geq0}v_{j,a}$, for $j=1,\ldots,n_a+1$, and
$h(V)\in\textrm{Log}(T_i)$, where $T_i$ is a cycle Poincar\'{e} dual
to $t_i$, whenever this number is finite. We set
$TGW_{0,n_a+2}^{\bar{X},\delta}(\textrm{PD(pt)};D_{1,a},\ldots,D_{n_a+1,a},t_i)$
to be 0 if this number is infinite. Here, $\textrm{Log}:X\rightarrow
N_\mathbb{R}$ is the map, after identifying $X$ with
$(\mathbb{C}^*)^n$, defined by
$\textrm{Log}(w_1,\ldots,w_n)=(\log|w_1|,\ldots,\log|w_n|)$, for
$(w_1,\ldots,w_n)\in X$.
\end{defn}
We then define the \textit{tropical small quantum cohomology ring}
$QH^*_{trop}(\bar{X})$ of
$\bar{X}=\mathbb{C}P^{n_1}\times\ldots\times\mathbb{C}P^{n_l}$ by the
presentation
$$QH^*_{trop}(\bar{X})=\mathbb{C}[D_{1,1},\ldots,D_{n_1+1,1},\ldots,D_{1,l},\ldots,D_{n_l+1,l}]/
(\mathcal{L}+\mathcal{SR}_Q^{trop}),$$ where $\mathcal{SR}_Q^{trop}$
is the tropical version of the quantum Stanley-Reisner ideal,
defined to be the ideal generated by the relations
$$D_{1,a}\ast_T\ldots\ast_T D_{n_a+1,a}=\sum_{\delta\in H_2(\bar{X},\mathbb{Z})}\sum_{i}
TGW_{0,n_a+2}^{\bar{X},\delta}(\textrm{PD(pt)};D_{1,a},\ldots,D_{n_a+1,a},t_i)t^iq^\delta,$$
for $a=1,\ldots,l$. Here $\ast_T$ denotes the product in
$QH^*_{trop}(\bar{X})$, which we call the tropical small quantum
product. It is not hard to see that, as in the holomorphic case, we
have
$$TGW_{0,n_a+2}^{\bar{X},\delta}(\textrm{PD(pt)};D_{1,a},\ldots,D_{n_a+1,a},t_i)
=\left\{\begin{array}{ll}
1 & \textrm{if $t_i=\textrm{PD(pt)}$ and $\delta=\delta_a$}\\
0 & \textrm{otherwise.}
\end{array}\right.$$
Indeed, as a special case of the \textit{correspondence theorem} of
Mikhalkin \cite{Mikhalkin03} and Nishinou-Siebert \cite{NS04}, we
have: For a given point $p\in X$, let $\xi:=\textrm{Log}(p)\in
N_\mathbb{R}$. Then the unique holomorphic curve
$\varphi_a:(\mathbb{C}P^1;x_1,\ldots,x_{n_a+2})\rightarrow\bar{X}$
with class $\delta_a$, $\varphi_a(x_j)\in D_{j,a}$, for
$j=1,\ldots,n_a+1$, $\varphi_a(x_{n_a+2})=p$ and such that
$(\mathbb{C}P^1;x_1,\ldots,x_{n_a+2})$ is a given configuration in
$\overline{\mathcal{M}}_{0,n_a+2}$, corresponds to the unique
parameterized $(n_a+1)$-marked tropical curve
$(\Gamma_{n_a+1};E_1,\ldots,E_{n_a+1};h_a)$ of genus 0 and degree
$\delta_a$ such that $h_a(V)=\xi$ and
$h_a(E_j)=\xi+\mathbb{R}_{\geq0}v_{j,a}$, for $j=1,\ldots,n_a+1$. It
follows that
$$\mathcal{SR}_Q^{trop}=\langle D_{1,a}\ast_T\ldots\ast_T D_{n_a+1,a}-q_a:a=1,\ldots,l\rangle,$$
and there is a canonical isomorphism
\begin{equation}\label{step1}
QH^*(\bar{X})\cong QH^*_{trop}(\bar{X}).
\end{equation}
\begin{nb}
All these arguments and definitions rely, in an essential way, on
the fact that $\bar{X}$ is a product of projective spaces, so that
Gromov-Witten invariants are enumerative and all the (irreducible)
holomorphic curves, which contribute to $QH^*(\bar{X})$, are not
mapped into the toric boundary divisor $D_\infty$. Remember that
tropical geometry cannot be used to count nodal curves or curves
with irreducible components mapping into $D_\infty$.
\end{nb}
Next, we take a look at tropical discs. Consider the point
$\Gamma_1\in\overline{\mathcal{M}}_{0,1}^{trop}$. This is nothing
but a half line, consisting of an unbounded edge $E$ emanating from
a univalent vertex $V$. See Figure 2.2 below.
\begin{figure}[ht]
\setlength{\unitlength}{1mm}
\begin{picture}(100,20)
\curve(20,3, 63,20) \put(19,2){$\bullet$} \put(16,1){$V$}
\put(41,7){$E$} \put(64,10){Figure 2.2:
$\Gamma_1\in\overline{\mathcal{M}}_{0,1}^{trop}.$}
\end{picture}
\end{figure}
\noindent A \textit{parameterized Maslov index two tropical disc in
$\bar{X}$} is a tuple $(\Gamma_1,E,h)$, where $h:\Gamma_1\rightarrow
N_\mathbb{R}$ is an embedding such that
$h(E)=h(V)+\mathbb{R}_{\geq0}v$ for some
$v\in\Sigma[1]$.\footnote{For precise definitions of general
tropical discs (with higher Maslov indices), we refer the reader to
Nishinou \cite{Nishinou06}; see also the recent work of Gross
\cite{Gross09}.} For any given point $\xi\in N_\mathbb{R}$ and any
$v_{j,a}\in\Sigma[1]$, it is obvious that there is a unique
parameterized Maslov index two tropical disc $(\Gamma_1,E,h_{j,a})$
such that $h_{j,a}(V)=\xi$ and
$h_{j,a}(E)=\xi+\mathbb{R}_{\geq0}v_{j,a}$. Comparing this to the
result
(Theorem~\ref{cho-oh}) of Cho-Oh on the classification of Maslov
index two holomorphic discs in $\bar{X}$ with boundary in the
Lagrangian torus fiber $L_\xi:=\textrm{Log}^{-1}(\xi)\subset X$, we
get a one-to-one correspondence between the families of Maslov index
two holomorphic discs in $(\bar{X},L_\xi)$ and the parameterized
Maslov index two tropical discs $(\Gamma_1,E,h)$ in $\bar{X}$ such
that $h(V)=\xi$. We have the holomorphic disc
$\varphi_{j,a}:(D^2,\partial D^2)\rightarrow(\bar{X},L_\xi)$
corresponding to the tropical disc
$(\Gamma_1,E,h_{j,a})$.\footnote{This correspondence also holds for
other toric manifolds, not just for products of projective spaces.}
Then, by (\ref{1-1}), we also get a one-to-one correspondence
between the parameterized Maslov index two tropical discs
$(\Gamma_1,E,h_{j,a})$ in $\bar{X}$ and the functions
$\Psi_{j,a}:L_X\rightarrow\mathbb{R}$:
\begin{equation*}
\{\varphi_{j,a}\}\ \overset{1-1}{\longleftrightarrow}\
\{(\Gamma_1,E,h_{j,a})\}\ \overset{1-1}{\longleftrightarrow}\
\{\Psi_{j,a}\}.
\end{equation*}
Now, while the canonical isomorphism
\begin{equation}\label{step2}
QH^*_{trop}(\bar{X})\cong\mathbb{C}[\Psi_{1,1}^{\pm1},\ldots,\Psi_{n_1,1}^{\pm1},\ldots,
\Psi_{1,l}^{\pm1},\ldots,\Psi_{n_l,l}^{\pm1}]/\mathcal{L}
\end{equation}
follows from the same simple combinatorial argument in the proof of
Proposition~\ref{prop2.2}, the geometry underlying it is exhibited
by a simple but crucial observation, which we formulate as the
following proposition.
\begin{prop}\label{prop2.3}
Let $\xi\in N_\mathbb{R}$, then the unique parameterized
$(n_a+1)$-marked, genus 0 tropical curve
$(\Gamma_{n_a+1};E_1,\ldots,E_{n_a+1};h_a)$ such that $h_a(V)=\xi$
and $h_a(E_j)=\xi+\mathbb{R}_{\geq0}v_{j,a}$, for
$j=1,\ldots,n_a+1$, is obtained by gluing the $n_a+1$ parameterized
Maslov index two tropical discs
$(\Gamma_1,E,h_{1,a}),\ldots,(\Gamma_1,E,h_{n_a+1,a})$ with
$h_{j,a}(V)=\xi$, for $j=1,\ldots,n_a+1$, in the following sense:
The map $h:(\Gamma_{n_a+1};E_1,\ldots,E_{n_a+1})\rightarrow
N_\mathbb{R}$ defined by $h|_{E_j}=h_{j,a}|_E$, for
$j=1,\ldots,n_a+1$, gives a parameterized $(n_a+1)$-marked, genus 0
tropical curve, which coincides with
$(\Gamma_{n_a+1};E_1,\ldots,E_{n_a+1};h_a)$.
\end{prop}
\begin{proof}
Since $\sum_{j=1}^{n_a+1}v_{j,a}=0$, the balancing condition at
$V\in\Gamma_{n_a+1}$ is automatically satisfied. So $h$ defines a
parameterized $(n_a+1)$-marked, genus 0 tropical curve, which
satisfies the same conditions as
$(\Gamma_{n_a+1};E_1,\ldots,E_{n_a+1};h_a)$.
\end{proof}
For example, in the case of $\bar{X}=\mathbb{C}P^2$, this can be
seen in Figure 2.3 below.
\begin{figure}[ht]
\setlength{\unitlength}{1mm}
\begin{picture}(100,35)
\curve(20,17, 38,17) \curve(20,17, 20,35) \curve(20,17, 5,2)
\put(10,1){$(\Gamma_3,h)$} \put(19.1,16.1){$\bullet$}
\put(16,16){$V$} \put(42,16){glued from} \curve(74,17, 93,17)
\put(73,16.1){$\bullet$} \put(83,13.5){$(\Gamma_1,h_1)$}
\curve(72,19, 72,37) \put(71.1,18){$\bullet$}
\put(72.5,29){$(\Gamma_1,h_2)$} \curve(71,16, 56,1)
\put(70,15){$\bullet$} \put(60,2){$(\Gamma_1,h_3)$}
\put(35,-2){Figure 2.3}
\end{picture}
\end{figure}
The functions $\Psi_{j,a}$ could have been defined by counting
parameterized Maslov index two tropical discs, instead of counting
Maslov index two holomorphic discs. So the above proposition indeed
gives a geometric reason to explain why the relation
$$D_{1,a}\ast_T\ldots\ast_T D_{n_a+1,a}=q_a$$
in $QH^*_{trop}(\bar{X})$ should coincide with the relation
$$\Psi_{1,a}\star\ldots\star\Psi_{n_a+1,a}=q_a\mathbb{1}$$
in $\mathbb{C}[\Psi_{1,1}^{\pm1},\ldots,\Psi_{n_1,1}^{\pm1},\ldots,
\Psi_{1,l}^{\pm1},\ldots,\Psi_{n_l,l}^{\pm1}]/\mathcal{L}$. The
convolution product $\star$ may then be thought of as a way to
encode the gluing of tropical discs.
We summarize what we have said as follows: In the case of products
of projective spaces, we factor the isomorphism
$QH^*(\bar{X})\cong\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]/\mathcal{L}$
in Proposition~\ref{prop2.2} into two isomorphisms (\ref{step1}) and
(\ref{step2}). The first one comes from the correspondence between
holomorphic curves in $\bar{X}$ which contribute to $QH^*(\bar{X})$
and tropical curves in $N_\mathbb{R}$ which contribute to
$QH^*_{trop}(\bar{X})$. The second isomorphism is due to, on the one
hand, the fact that each tropical curve which contributes to
$QH^*_{trop}(\bar{X})$ can be obtained by gluing Maslov index two
tropical discs, and, on the other hand, the correspondence between
these tropical discs in $N_\mathbb{R}$ and Maslov index two
holomorphic discs in $\bar{X}$ with boundary on Lagrangian torus
fibers. See Figure 2.4 below.
\begin{figure}[ht]
\setlength{\unitlength}{1mm}
\begin{picture}(100,31)
\put(5,30){$QH^*(\bar{X})$} \put(11,16){\vector(0,1){12.5}}
\put(11,16){\vector(0,-1){11}} \put(4,1){$QH^*_{trop}(\bar{X})$}
\put(28.5,26){\vector(2,-1){13}} \put(28.5,26){\vector(-2,1){9}}
\put(31,25.5){Prop~\ref{prop2.2}} \put(28.5,6){\vector(2,1){13}}
\put(28.5,6){\vector(-2,-1){7}} \put(31,4){Prop~\ref{prop2.3}}
\put(41,14.5){$\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]/\mathcal{L}$}
\put(80,15.5){\vector(1,0){9}} \put(80,15.5){\vector(-1,0){8}}
\put(80,16.5){$\mathcal{F}$} \put(90,14.5){$Jac(W)$}
\put(50,-2){Figure 2.4}
\end{picture}
\end{figure}
\noindent Here $\mathcal{F}$ denotes the SYZ mirror transformation
for $\bar{X}$, which is the subject of Section~\ref{sec3}.
\section{SYZ mirror transformations}\label{sec3}
In this section, we first derive Hori-Vafa's mirror manifold using
semi-flat SYZ mirror transformations. Then we introduce the main
character in this paper: the SYZ mirror transformations for toric
Fano manifolds, and prove our main result.
\subsection{Derivation of Hori-Vafa's mirror manifold by T-duality}\label{subsec3.1}
Recall that we have an exact sequence (\ref{seq2.1}):
\begin{equation}\label{seq3.1}
0\longrightarrow K\overset{\iota}{\longrightarrow}\mathbb{Z}^d
\overset{\partial}{\longrightarrow}N\longrightarrow0,
\end{equation}
and we denote by
$$Q_1=(Q_{11},\ldots,Q_{d1}),\ldots,Q_l=(Q_{1l},\ldots,Q_{dl})\in\mathbb{Z}^d$$
a $\mathbb{Z}$-basis of $K$. The mirror manifold of $\bar{X}$,
derived by Hori and Vafa in \cite{HV00} using physical arguments, is
the complex submanifold
$$Y_{HV}=\Big\{(Z_1,\ldots,Z_d)\in\mathbb{C}^d:\prod_{i=1}^d Z_i^{Q_{ia}}=q_a,\ a=1,\ldots,l\Big\},$$
in $\mathbb{C}^d$, where $q_a=e^{-r_a}=\exp(\sum_{i=1}^d
Q_{ia}\lambda_i)$, for $a=1,\ldots,l$. As a complex manifold,
$Y_{HV}$ is biholomorphic to the algebraic torus $(\mathbb{C}^*)^n$.
By our choice of the $\mathbb{Z}$-basis $Q_1,\ldots,Q_l$ of $K$ in
Remark~\ref{rmk2.0}, $Y_{HV}$ can also be written as
$$Y_{HV}=\Big\{(Z_1,\ldots,Z_d)\in\mathbb{C}^d:Z_1^{Q_{1a}}\ldots Z_n^{Q_{na}}Z_{n+a}=q_a,\ a=1,\ldots,l\Big\}.$$
Note that, in fact, $Y_{HV}\subset(\mathbb{C}^*)^d$. In terms of
these coordinates, Hori and Vafa predicted that the superpotential
$W:Y_{HV}\rightarrow\mathbb{C}$ is given by
\begin{eqnarray*}
W & = & Z_1+\ldots+Z_d\\
& = & Z_1+\ldots+Z_n+\frac{q_1}{Z_1^{Q_{11}}\ldots Z_n^{Q_{n1}}}+
\ldots+\frac{q_l}{Z_1^{Q_{1l}}\ldots Z_n^{Q_{nl}}}.
\end{eqnarray*}
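For example, for $\bar{X}=\mathbb{C}P^2$, where $d=3$, $l=1$ and
$Q_1=(1,1,1)$, Hori-Vafa's mirror is
$$Y_{HV}=\{(Z_1,Z_2,Z_3)\in\mathbb{C}^3:Z_1Z_2Z_3=q\}\cong(\mathbb{C}^*)^2,\quad
W=Z_1+Z_2+\frac{q}{Z_1Z_2}.$$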
The goal of this subsection is to show that the SYZ mirror manifold
$Y_{SYZ}$, which is obtained by applying T-duality to the open dense
orbit $X\subset\bar{X}$, is contained in Hori-Vafa's manifold
$Y_{HV}$ as a bounded open subset. The result itself is not new, and
can be found, for example, in Auroux \cite{Auroux07}, Proposition
4.2. For the sake of completeness, we give a self-contained proof,
which will show how T-duality, i.e. fiberwise dualizing torus
bundles, transforms the \textit{symplectic quotient space} $X$ into
the \textit{complex subspace} $Y_{SYZ}$.\\
We shall first briefly recall the constructions of $\bar{X}$ and $X$
as symplectic quotients. For more details, we refer the reader to
Appendix 1 in Guillemin \cite{Guillemin94}.
From the above exact sequence (\ref{seq3.1}), we get an exact
sequence of real tori
\begin{equation}\label{seq3.2}
1\longrightarrow T_K\overset{\iota}{\longrightarrow}T^d
\overset{\partial}{\longrightarrow}T_N\longrightarrow1,
\end{equation}
where $T^d=\mathbb{R}^d/(2\pi\mathbb{Z})^d$ and we denote by
$K_\mathbb{R}$ and $T_K$ the real vector space
$K\otimes_\mathbb{Z}\mathbb{R}$ and the torus $K_\mathbb{R}/K$
respectively. Considering their Lie algebras and dualizing give
another exact sequence
\begin{equation}\label{seq3.2'}
0\longrightarrow
M_\mathbb{R}\overset{\check{\partial}}{\longrightarrow}(\mathbb{R}^d)^\vee
\overset{\check{\iota}}{\longrightarrow}K_\mathbb{R}^\vee\longrightarrow0.
\end{equation}
Denote by $W_1,\ldots,W_d\in\mathbb{C}$ the complex coordinates on
$\mathbb{C}^d$. The standard diagonal action of $T^d$ on
$\mathbb{C}^d$ is Hamiltonian with respect to the standard
symplectic form $\frac{\sqrt{-1}}{2}\sum_{i=1}^d dW_i\wedge
d\bar{W}_i$ and the moment map
$h:\mathbb{C}^d\rightarrow(\mathbb{R}^d)^\vee$ is given by
$$h(W_1,\ldots,W_d)=\frac{1}{2}(|W_1|^2,\ldots,|W_d|^2).$$
Restricting to $T_K$, we get a Hamiltonian action of $T_K$ on
$\mathbb{C}^d$ with moment map $h_K=\check{\iota}\circ
h:\mathbb{C}^d\rightarrow K^\vee_\mathbb{R}$. In terms of the
$\mathbb{Z}$-basis $\{Q_1,\ldots,Q_l\}$ of $K$, the map
$\check{\iota}:(\mathbb{R}^d)^\vee\rightarrow K^\vee_\mathbb{R}$ is
given by
\begin{equation}\label{iota*}
\check{\iota}(X_1,\ldots,X_d)=\Bigg(\sum_{i=1}^d
Q_{i1}X_i,\ldots,\sum_{i=1}^d Q_{il}X_i\Bigg),
\end{equation}
for $(X_1,\ldots,X_d)\in(\mathbb{R}^d)^\vee$, in the coordinates
associated to the dual basis $\check{Q}_1,\ldots,\check{Q}_l$ of
$K^\vee=\textrm{Hom}(K,\mathbb{Z})$. The moment map
$h_K:\mathbb{C}^d\rightarrow K^\vee_\mathbb{R}$ can thus be written
as
$$h_K(W_1,\ldots,W_d)=\frac{1}{2}\Bigg(\sum_{i=1}^d
Q_{i1}|W_i|^2,\ldots,\sum_{i=1}^d Q_{il}|W_i|^2\Bigg)\in
K^\vee_\mathbb{R}.$$ In these coordinates,
$r=(r_1,\ldots,r_l)=-\check{\iota}(\lambda_1,\ldots,\lambda_d)$ is
an element in $K^\vee_\mathbb{R}=H^2(\bar{X},\mathbb{R})$, and
$\bar{X}$ and $X$ are given by the symplectic quotients
$$\bar{X}=h_K^{-1}(r)/T_K\textrm{ and }X=(h_K^{-1}(r)\cap(\mathbb{C}^*)^d)/T_K$$
respectively.
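For example, for $\bar{X}=\mathbb{C}P^2$, the torus $T_K\cong S^1$
acts diagonally on $\mathbb{C}^3$,
$h_K(W_1,W_2,W_3)=\frac{1}{2}(|W_1|^2+|W_2|^2+|W_3|^2)$, and
$$\mathbb{C}P^2=\Big\{(W_1,W_2,W_3)\in\mathbb{C}^3:\sum_{i=1}^3|W_i|^2=2r\Big\}\Big/S^1,$$
while removing the three coordinate hyperplanes before taking the
quotient yields the open dense orbit $X\cong(\mathbb{C}^*)^2$.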
In the above process, the image of $h_K^{-1}(r)$ under the map
$h:\mathbb{C}^d\rightarrow(\mathbb{R}^d)^\vee$ lies inside the
affine linear subspace
$M_\mathbb{R}(r)=\{(X_1,\ldots,X_d)\in(\mathbb{R}^d)^\vee:\check{\iota}(X_1,\ldots,X_d)=r\}$,
i.e. a translate of $M_\mathbb{R}$. In fact,
$h(h_K^{-1}(r))=\check{\iota}^{-1}(r)\cap\{(X_1,\ldots,X_d)\in(\mathbb{R}^d)^\vee:X_i\geq0,\textrm{
for $i=1,\ldots,d$}\}$ is the polytope $\bar{P}\subset
M_\mathbb{R}(r)$, and $h(h_K^{-1}(r)\cap(\mathbb{C}^*)^d)=
\check{\iota}^{-1}(r)\cap\{(X_1,\ldots,X_d)\in(\mathbb{R}^d)^\vee:X_i>0,\textrm{
for $i=1,\ldots,d$}\}$ is the interior $P$ of $\bar{P}$. Now,
restricting $h$ to $h_K^{-1}(r)\cap(\mathbb{C}^*)^d$ gives a
$T^d$-bundle $h:h_K^{-1}(r)\cap(\mathbb{C}^*)^d\rightarrow P$ (which
is trivial), and $X$ is obtained by taking the quotient of this
$T^d$-bundle fiberwise by $T_K$. Hence, $X$ is naturally a
$T_N$-bundle over $P$, which can be written as
$$X=T^*P/N=P\times\sqrt{-1}T_N$$
(cf. Abreu \cite{Abreu00}).\footnote{We have, by abuse of notation,
used $N$ to denote the family of lattices $P\times\sqrt{-1}N$ over
$P$. Similarly, we denote by $M$ the family of lattices
$P\times\sqrt{-1} M$ below.} The reduced symplectic form
$\omega_X=\omega_{\bar{X}}|_X$ is the standard symplectic form
$$\omega_X=\sum_{j=1}^n dx_j\wedge du_j$$
where $x_1,\ldots,x_n\in\mathbb{R}$ and
$u_1,\ldots,u_n\in\mathbb{R}/2\pi\mathbb{Z}$ are respectively the
coordinates on $P\subset M_\mathbb{R}(r)$ and $T_N$. In other words,
the $x_j$'s and $u_j$'s are \textit{symplectic coordinates} (i.e.
action-angle coordinates). The moment map is then given by the
projection to $P$:
$$\mu:X\rightarrow P.$$
We define the \textit{SYZ mirror manifold} by T-duality as follows.
\begin{defn}
The SYZ mirror manifold $Y_{SYZ}$ is defined as the total space of
the $T_M$-bundle, where $T_M=M_\mathbb{R}/M=(T_N)^\vee$, which is
obtained by fiberwise dualizing the $T_N$-bundle $\mu:X\rightarrow
P$.
\end{defn}
In other words, we have
$$Y_{SYZ}=TP/M=P\times\sqrt{-1}T_M\subset M_\mathbb{R}(r)\times\sqrt{-1}T_M.$$
$Y_{SYZ}$ has a natural complex structure, which is induced from the
one on $M_\mathbb{R}(r)\times\sqrt{-1}T_M\cong(\mathbb{C}^*)^n$. We
let $z_j=\exp(-x_j-\sqrt{-1}y_j)$, $j=1,\ldots,n$, be the
\textit{complex coordinates} on
$M_\mathbb{R}(r)\times\sqrt{-1}T_M\cong(\mathbb{C}^*)^n$, which
restrict to complex coordinates on $Y_{SYZ}$; here
$y_1,\ldots,y_n\in\mathbb{R}/2\pi\mathbb{Z}$ are the coordinates on
$T_M=(T_N)^\vee$ dual to $u_1,\ldots,u_n$. We also let
$\Omega_{Y_{SYZ}}$ be the following nowhere vanishing holomorphic
$n$-form on $Y_{SYZ}$:
$$\Omega_{Y_{SYZ}}=\bigwedge_{j=1}^n(-dx_j-\sqrt{-1}dy_j)
=\frac{dz_1}{z_1}\wedge\ldots\wedge\frac{dz_n}{z_n},$$ and denote
by
$$\nu:Y_{SYZ}\rightarrow P$$
the torus fibration dual to $\mu:X\rightarrow P$.
\begin{prop}\label{prop3.1}
The SYZ mirror manifold $Y_{SYZ}$ is contained in Hori-Vafa's mirror
manifold $Y_{HV}$ as an open complex submanifold. More precisely,
$Y_{SYZ}$ is the bounded domain $\{(Z_1,\ldots,Z_d)\in
Y_{HV}:|Z_i|<1,\ i=1,\ldots,d\}$ inside $Y_{HV}$.
\end{prop}
\begin{proof}
Dualizing the sequence (\ref{seq3.2}), we get
\begin{equation*}
1\longrightarrow T_M\overset{\check{\partial}}{\longrightarrow}
(T^d)^\vee\overset{\check{\iota}}{\longrightarrow}(T_K)^\vee\longrightarrow1,
\end{equation*}
while we also have the sequence (\ref{seq3.2'})
\begin{equation*}
0\longrightarrow
M_\mathbb{R}\overset{\check{\partial}}{\longrightarrow}(\mathbb{R}^d)^\vee
\overset{\check{\iota}}{\longrightarrow}K^\vee_\mathbb{R}\longrightarrow0.
\end{equation*}
Let $T_i=X_i+\sqrt{-1}Y_i\in\mathbb{C}/2\pi\sqrt{-1}\mathbb{Z}$,
$i=1,\ldots,d$, be the complex coordinates on
$(\mathbb{R}^d)^\vee\times\sqrt{-1}(T^d)^\vee\cong(\mathbb{C}^*)^d$.
If we let $Z_i=e^{-T_i}\in\mathbb{C}^*$, $i=1,\ldots,d$, then, by
the definition of $Y_{HV}$ and by (\ref{iota*}), we can identify
$Y_{HV}$ with the following complex submanifold in
$(\mathbb{C}^*)^d$:
\begin{equation*}
\check{\iota}^{-1}(r\in
K^\vee_\mathbb{R})\cap\check{\iota}^{-1}(1\in
(T_K)^\vee)\subset(\mathbb{R}^d)^\vee\times\sqrt{-1}(T^d)^\vee=(\mathbb{C}^*)^d.
\end{equation*}
Hence,
$$Y_{HV}=M_\mathbb{R}(r)\times\sqrt{-1}T_M\cong(\mathbb{C}^*)^n$$
as complex submanifolds in $(\mathbb{C}^*)^d$. Since
$Y_{SYZ}=TP/M=P\times\sqrt{-1}T_M$, $Y_{SYZ}$ is a complex
submanifold in $Y_{HV}$. In fact, as
$P=\check{\iota}^{-1}(r)\cap\{(X_1,\ldots,X_d)\in(\mathbb{R}^d)^\vee:X_i>0,i=1,\ldots,d\}$,
we have
$$Y_{SYZ}=\{(Z_1,\ldots,Z_d)\in Y_{HV}:|Z_i|<1,\textrm{ for $i=1,\ldots,d$}\}.$$
So $Y_{SYZ}$ is a bounded domain in $Y_{HV}$.
\end{proof}
We remark that, in terms of the complex coordinates
$z_j=\exp(-x_j-\sqrt{-1}y_j)$, $j=1,\ldots,n$, on
$Y_{HV}=M_\mathbb{R}(r)\times\sqrt{-1}T_M\cong(\mathbb{C}^*)^n$, the
embedding $\check{\partial}:Y_{HV}\hookrightarrow(\mathbb{C}^*)^d$
is given by
$$\check{\partial}(z_1,\ldots,z_n)=(e^{\lambda_1}z^{v_1},\ldots,e^{\lambda_d}z^{v_d}),$$
where $z^v$ denotes the monomial $z_1^{v^1}\ldots z_n^{v^n}$ if
$v=(v^1,\ldots,v^n)\in N=\mathbb{Z}^n$. So the coordinates $Z_i$'s
and $z_i$'s are related by
$$Z_i=e^{\lambda_i}z^{v_i},$$
for $i=1,\ldots,d$, and the SYZ mirror manifold $Y_{SYZ}$ is given
by the bounded domain
$$Y_{SYZ}=\{(z_1,\ldots,z_n)\in Y_{HV}=(\mathbb{C}^*)^n:|e^{\lambda_i}z^{v_i}|<1,\ i=1,\ldots,d\}.$$
Now, the superpotential $W:Y_{SYZ}\rightarrow\mathbb{C}$ (or
$W:Y_{HV}\rightarrow\mathbb{C}$) is of the form
$$W=e^{\lambda_1}z^{v_1}+\ldots+e^{\lambda_d}z^{v_d}.$$
From the above proposition, the SYZ mirror manifold $Y_{SYZ}$ is
strictly \textit{smaller} than Hori-Vafa's mirror manifold $Y_{HV}$.
This issue was discussed in Hori-Vafa \cite{HV00}, Section 3 and
Auroux \cite{Auroux07}, Section 4.2, and may be resolved by a
process called \textit{renormalization}. We refer the interested
reader to those references for the details. In this paper, we shall
always be (except in this subsection) looking at the SYZ mirror
manifold, and the letter $Y$ will also be used exclusively to denote
the SYZ mirror manifold.
\subsection{SYZ mirror transformations as fiberwise Fourier transforms}\label{subsec3.2}
In this subsection, we first give a brief review of semi-flat SYZ
mirror transformations (for details, see Hitchin \cite{Hitchin97},
Leung-Yau-Zaslow \cite{LYZ00} and Leung \cite{Leung00}). Then we
introduce the SYZ mirror transformations for toric Fano manifolds,
and prove part 1. of Theorem~\ref{main_thm}.\\
To begin with, recall that the dual torus $T_M=(T_N)^\vee$ of $T_N$
can be interpreted as the moduli space of flat $U(1)$-connections on
the trivial line bundle $T_N\times\mathbb{C}\rightarrow T_N$. In
more explicit terms, a point $y=(y_1,\ldots,y_n)\in
M_\mathbb{R}\cong\mathbb{R}^n$ corresponds to the flat
$U(1)$-connection $\nabla_y=d+\frac{\sqrt{-1}}{2}\sum_{j=1}^n y_j
du_j$ on the trivial line bundle $T_N\times\mathbb{C}\rightarrow
T_N$. The \textit{holonomy} of this connection is given, in our
convention, by the map
$$\textrm{hol}_{\nabla_y}:N\rightarrow U(1),\ v\mapsto
e^{-\sqrt{-1}\langle y,v\rangle}.$$ $\nabla_y$ is gauge equivalent
to the trivial connection $d$ if and only if
$(y_1,\ldots,y_n)\in(2\pi\mathbb{Z})^n=M$. So, in the following, we
will regard $(y_1,\ldots,y_n)\in
T_M\cong\mathbb{R}^n/(2\pi\mathbb{Z})^n$. Moreover, this
construction gives all flat $U(1)$-connections on
$T_N\times\mathbb{C}\rightarrow T_N$ up to unitary gauge
transformations. The universal $U(1)$-bundle $\mathcal{P}$, i.e. the
Poincar\'{e} line bundle, is the trivial line bundle over the
product $T_N\times T_M$ equipped with the $U(1)$-connection
$d+\frac{\sqrt{-1}}{2}\sum_{j=1}^n (y_jdu_j-u_jdy_j)$, where
$u_1,\ldots,u_n\in\mathbb{R}/2\pi\mathbb{Z}$ are the coordinates on
$T_N\cong\mathbb{R}^n/(2\pi\mathbb{Z})^n$. The curvature of this
connection is given by the two-form
$$F=\sqrt{-1}\sum_{j=1}^n dy_j\wedge du_j\in\Omega^2(T_N\times T_M).$$
From this perspective, the SYZ mirror manifold $Y$ is the moduli
space of pairs $(L_x,\nabla_y)$, where $L_x$ ($x\in P$) is a
Lagrangian torus fiber of $\mu:X\rightarrow P$ and $\nabla_y$ is a
flat $U(1)$-connection on the trivial line bundle
$L_x\times\mathbb{C}\rightarrow L_x$. The construction of the mirror
manifold in this way is originally advocated in the SYZ Conjecture
\cite{SYZ96} (cf. Hitchin \cite{Hitchin97} and Sections 2 and 4 in
Auroux \cite{Auroux07}).
Now recall that we have the dual torus bundles $\mu:X\rightarrow
P\textrm{ and }\nu:Y\rightarrow P$. Consider their fiber product
$X\times_P Y=P\times\sqrt{-1}(T_N\times T_M)$.
\begin{equation*}
\begin{CD}
X\times_P Y @>\pi_Y>> Y \\
@VV\pi_X V @VV \nu V \\
X @>\mu>> P
\end{CD}\\
\end{equation*}
By abuse of notation, we still use $F$ to denote the fiberwise
universal curvature two-form $\sqrt{-1}\sum_{j=1}^n dy_j\wedge
du_j\in\Omega^2(X\times_P Y)$.
\begin{defn}\label{def3.1}
The semi-flat SYZ mirror transformation
$\mathcal{F}^{\textrm{sf}}:\Omega^*(X)\rightarrow\Omega^*(Y)$ is
defined by
\begin{eqnarray*}
\mathcal{F}^{\textrm{sf}}(\alpha) & = &
(-2\pi\sqrt{-1})^{-n}\pi_{Y,*}(\pi_X^*(\alpha)\wedge e^{\sqrt{-1}F})\\
& = & (-2\pi\sqrt{-1})^{-n}\int_{T_N}\pi_X^*(\alpha)\wedge
e^{\sqrt{-1}F},
\end{eqnarray*}
where $\pi_X:X\times_P Y\rightarrow X$ and $\pi_Y:X\times_P
Y\rightarrow Y$ are the two natural projections.
\end{defn}
The key point is that, the semi-flat SYZ mirror transformation
$\mathcal{F}^{\textrm{sf}}$ transforms the (exponential of
$\sqrt{-1}$ times the) symplectic structure
$\omega_X=\sum_{j=1}^ndx_j\wedge du_j$ on $X$ to the holomorphic
$n$-form
$\Omega_Y=\frac{dz_1}{z_1}\wedge\ldots\wedge\frac{dz_n}{z_n}$ on
$Y$, where $z_j=\exp(-x_j-\sqrt{-1}y_j)$, $j=1,\ldots,n$. This is
probably well-known and implicitly contained in the literature, but
we include a proof here because we cannot find a suitable
reference.\\
\begin{prop}~\label{prop3.2}
We have
$$\mathcal{F}^{\textrm{sf}}(e^{\sqrt{-1}\omega_X})=\Omega_Y.$$
Moreover, if we define the inverse SYZ transformation
$(\mathcal{F}^{\textrm{sf}})^{-1}:\Omega^*(Y)\rightarrow\Omega^*(X)$
by
\begin{eqnarray*}
(\mathcal{F}^{\textrm{sf}})^{-1}(\alpha) & = &
(-2\pi\sqrt{-1})^{-n}\pi_{X,*}(\pi_Y^*(\alpha)\wedge e^{-\sqrt{-1}F})\\
& = & (-2\pi\sqrt{-1})^{-n}\int_{T_M}\pi_Y^*(\alpha)\wedge
e^{-\sqrt{-1}F},
\end{eqnarray*}
then we also have
$$(\mathcal{F}^{\textrm{sf}})^{-1}(\Omega_Y)=e^{\sqrt{-1}\omega_X}.$$
\end{prop}
\begin{proof}
The proof is by straightforward computations.
\begin{eqnarray*}
\mathcal{F}^{\textrm{sf}}(e^{\sqrt{-1}\omega_X}) & = &
(-2\pi\sqrt{-1})^{-n}\int_{T_N}\pi_{X}^*(e^{\sqrt{-1}\omega_X})\wedge e^{\sqrt{-1}F}\\
& = & (-2\pi\sqrt{-1})^{-n}\int_{T_N}e^{\sqrt{-1}\sum_{j=1}^n (dx_j+\sqrt{-1}dy_j)\wedge du_j}\\
& = & (-2\pi\sqrt{-1})^{-n}\int_{T_N} \bigwedge_{j=1}^n\big(1+\sqrt{-1}(dx_j+\sqrt{-1}dy_j)\wedge du_j\big)\\
& = & (2\pi)^{-n}\int_{T_N}
\Bigg(\bigwedge_{j=1}^n(-dx_j-\sqrt{-1}dy_j)\Bigg)\wedge
du_1\wedge\ldots\wedge du_n\\
& = & \Omega_Y,
\end{eqnarray*}
where we have used $\int_{T_N}du_1\wedge\ldots\wedge
du_n=(2\pi)^n$ in the last equality. On the other hand,
\begin{eqnarray*}
(\mathcal{F}^{\textrm{sf}})^{-1}(\Omega_Y) & = &
(-2\pi\sqrt{-1})^{-n}\int_{T_M}\pi_Y^*(\Omega_Y)\wedge e^{-\sqrt{-1}F}\\
& = & (-2\pi\sqrt{-1})^{-n}\int_{T_M}
\Bigg(\bigwedge_{j=1}^n(-dx_j-\sqrt{-1}dy_j)\Bigg)\wedge
e^{\sum_{j=1}^n dy_j\wedge du_j}\\
& = & (2\pi\sqrt{-1})^{-n}\int_{T_M}\bigwedge_{j=1}^n\big((dx_j+\sqrt{-1}dy_j)\wedge e^{dy_j\wedge du_j}\big)\\
& = & (2\pi\sqrt{-1})^{-n}\int_{T_M}\bigwedge_{j=1}^n\big(dx_j+\sqrt{-1}dy_j-dx_j\wedge du_j\wedge dy_j\big)\\
& = & (2\pi)^{-n}\int_{T_M}\bigwedge_{j=1}^n \Big(1+\sqrt{-1}dx_j\wedge du_j\Big)\wedge dy_j\\
& = &
(2\pi)^{-n}\int_{T_M}\bigwedge_{j=1}^n\big(e^{\sqrt{-1}dx_j\wedge
du_j}\wedge dy_j\big)
\end{eqnarray*}
\begin{eqnarray*}
& = & (2\pi)^{-n}\int_{T_M}e^{\sqrt{-1}\sum_{j=1}^n dx_j\wedge du_j}\wedge dy_1\wedge\ldots\wedge dy_n\\
& = & e^{\sqrt{-1}\omega_X},
\end{eqnarray*}
where we again used $\int_{T_M}dy_1\wedge\ldots\wedge dy_n=(2\pi)^n$
in the last step.
\end{proof}
One can also apply the semi-flat SYZ mirror transformations to other
geometric structures and objects. For details, see Leung
\cite{Leung00}.\\
The semi-flat SYZ mirror transformation $\mathcal{F}^{\textrm{sf}}$
can transform the symplectic structure $\omega_X$ on $X$ to the
holomorphic $n$-form $\Omega_Y$ on $Y$. However, as we mentioned in
the introduction, we are not going to obtain the superpotential
$W:Y\rightarrow\mathbb{C}$ in this way because we have ignored the
toric boundary divisor $\bar{X}\setminus X=D_\infty=\bigcup_{i=1}^d
D_i$. Indeed, it is the toric boundary divisor $D_\infty$ which
gives rise to the quantum corrections in the A-model of $\bar{X}$.
More precisely, these quantum corrections are due to the existence
of holomorphic discs in $\bar{X}$ with boundary in Lagrangian torus
fibers which have intersections with the divisor $D_\infty$. To
restore this information, our way out is to look at the (trivial)
$\mathbb{Z}^n$-cover
$$\pi:LX=X\times N\rightarrow X.$$
Recall that we equip $LX$ with the symplectic structure
$\pi^*(\omega_X)$; by abuse of notation, we will use $\omega_X$ to
denote both the symplectic structure on $X$ and that on $LX$. We
will further abuse notation by using $\mu$ to denote the
fibration
$$\mu:LX\rightarrow P,$$
which is the composition of the map $\pi:LX\rightarrow X$ with
$\mu:X\rightarrow P$.\\
We are now ready to define the SYZ mirror transformation
$\mathcal{F}$ for the toric Fano manifold $\bar{X}$. It will be
constructed as \textit{a combination of the semi-flat SYZ
transformation $\mathcal{F}^{\textrm{sf}}$ and taking fiberwise
Fourier series}.
In analogy with the semi-flat case, consider the fiber product
$$LX\times_P Y=P\times N\times\sqrt{-1}(T_N\times T_M)$$
of the maps $\mu:LX\rightarrow P$ and $\nu:Y\rightarrow P$.
\begin{equation*}
\begin{CD}
LX\times_P Y @>\pi_Y>> Y \\
@VV\pi_{LX} V @VV \nu V \\
LX @>\mu>> P
\end{CD}\\
\end{equation*}
Note that we have a covering map $LX\times_P Y\rightarrow X\times_P
Y$. Pulling back $F\in\Omega^2(X\times_P Y)$ to $LX\times_P Y$ by
this covering map, we get the fiberwise universal curvature two-form
$$F=\sqrt{-1}\sum_{j=1}^n dy_j\wedge du_j\in\Omega^2(LX\times_P Y).$$
We further define the \emph{holonomy function}
$\textrm{hol}:LX\times_P Y\rightarrow U(1)$ as follows. For
$(p,v)\in LX$ and $z=(z_1,\ldots,z_n)\in Y$ such that
$\mu(p)=\nu(z)=:x\in P$, we let $x=(x_1,\ldots,x_n)$, and write
$z_j=\exp(-x_j-\sqrt{-1}y_j)$, so that $y=(y_1,\ldots,y_n)\in
(L_x)^\vee:=\nu^{-1}(x)\subset Y$. Then we set
$$\textrm{hol}(p,v,z):=\textrm{hol}_{\nabla_y}(v)=e^{-\sqrt{-1}\langle y,v\rangle},$$
where $\nabla_y$ is the flat $U(1)$-connection on the trivial line
bundle $L_x\times\mathbb{C}\rightarrow L_x$ over $L_x:=\mu^{-1}(x)$
corresponding to the point $y\in(L_x)^\vee$.
\begin{defn}\label{def3.2}
The SYZ mirror transformation
$\mathcal{F}:\Omega^*(LX)\rightarrow\Omega^*(Y)$ for the toric Fano
manifold $\bar{X}$ is defined by
\begin{eqnarray*}
\mathcal{F}(\alpha) & = &
(-2\pi\sqrt{-1})^{-n}\pi_{Y,*}(\pi_{LX}^*(\alpha)\wedge e^{\sqrt{-1}F}\textrm{hol})\\
& = & (-2\pi\sqrt{-1})^{-n}\int_{N\times
T_N}\pi_{LX}^*(\alpha)\wedge e^{\sqrt{-1}F}\textrm{hol},
\end{eqnarray*}
where $\pi_{LX}:LX\times_P Y\rightarrow LX$ and $\pi_Y:LX\times_P
Y\rightarrow Y$ are the two natural projections.
\end{defn}
Before stating the basic properties of $\mathcal{F}$, we introduce
the class of functions on $LX$ relevant to our applications.
\begin{defn}\label{def3.3} A $T_N$-invariant function
$f:LX\rightarrow\mathbb{C}$ is said to be admissible if for any
$(p,v)\in LX=X\times N$,
$$f(p,v)=f_v e^{-\langle x,v\rangle},$$
where $x=\mu(p)\in P$ and $f_v\in\mathbb{C}$ is a constant, and the
fiberwise Fourier series
\begin{equation*}
\widehat{f}:=\sum_{v\in N}f_v e^{-\langle
x,v\rangle}\textrm{hol}_{\nabla_y}(v)=\sum_{v\in N}f_vz^v,
\end{equation*}
where $z^v=\exp(\langle-x-\sqrt{-1}y,v\rangle)$, is convergent and
analytic, as a function on $Y$. We denote by $\mathcal{A}(LX)\subset
C^\infty(LX)$ the set of all admissible functions on $LX$.
\end{defn}
Examples of admissible functions on $LX$ include those
$T_N$-invariant functions which are not identically zero on
$X\times\{v\}\subset LX$ for only finitely many $v\in N$. In
particular, the functions $\Psi_1,\ldots,\Psi_d$ are all in
$\mathcal{A}(LX)$. We will see shortly (in the proof of
Theorem~\ref{thm3.2}) that $\Phi_q$ is also admissible.
Now, for functions $f,g\in\mathcal{A}(LX)$, we define their
\textit{convolution product} $f\star g:LX\rightarrow\mathbb{C}$, as
before, by
$$(f\star g)(p,v)=\sum_{v_1,v_2\in N,\ v_1+v_2=v}f(p,v_1)g(p,v_2).$$
That the right-hand side converges can be seen as follows. By
definition, $f,g\in\mathcal{A}(LX)$ implies that for any $p\in X$
and any $v_1,v_2\in N$,
$$f(p,v_1)=f_{v_1}e^{-\langle x,v_1\rangle},\ g(p,v_2)=g_{v_2}e^{-\langle x,v_2\rangle},$$
where $x=\mu(p)$ and $f_{v_1},g_{v_2}\in\mathbb{C}$ are constants;
also, the series $\widehat{f}=\sum_{v_1\in N}f_{v_1}z^{v_1}$ and
$\widehat{g}=\sum_{v_2\in N}g_{v_2}z^{v_2}$ are convergent and
analytic. Then their product, given by
\begin{eqnarray*}
\widehat{f}\cdot\widehat{g}=\Bigg(\sum_{v_1\in
N}f_{v_1}z^{v_1}\Bigg)\Bigg(\sum_{v_2\in
N}g_{v_2}z^{v_2}\Bigg)=\sum_{v\in N}\Bigg(\sum_{\substack{v_1,v_2\in
N,\\ v_1+v_2=v}}f_{v_1}g_{v_2}\Bigg)z^v,
\end{eqnarray*}
is also analytic. This shows that the convolution product $f\star g$
is well defined and gives another admissible function on $LX$.
Hence, the $\mathbb{C}$-vector space $\mathcal{A}(LX)$, together
with the convolution product $\star$, forms a
$\mathbb{C}$-algebra.\\
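The compatibility between the convolution product on $\mathcal{A}(LX)$ and the ordinary product of the associated series can be checked numerically. The following Python sketch uses hypothetical, finitely supported coefficients $f_v$, $g_v$ (illustrative data, not taken from the text, with $n=1$ and $N=\mathbb{Z}$) and verifies that the series of $f\star g$ agrees with the product $\widehat{f}\cdot\widehat{g}$ at a sample point:

```python
from itertools import product

# Hypothetical finitely supported Fourier coefficients (n = 1, N = Z);
# such data defines admissible functions as in Definition 3.3.
f = {0: 1.0, 1: 2.0}        # series: 1 + 2z
g = {-1: 3.0, 2: 0.5}       # series: 3/z + 0.5 z^2

def convolution(f, g):
    """(f * g)_v = sum over v1 + v2 = v of f_{v1} g_{v2}."""
    out = {}
    for v1, v2 in product(f, g):
        out[v1 + v2] = out.get(v1 + v2, 0.0) + f[v1] * g[v2]
    return out

def series(coeffs, z):
    """Evaluate the finite Laurent series sum_v c_v z^v."""
    return sum(c * z**v for v, c in coeffs.items())

# The series of f * g agrees with the product of the two series.
z = 0.7
assert abs(series(convolution(f, g), z) - series(f, z) * series(g, z)) < 1e-12
```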
Let $\mathcal{O}(Y)$ be the $\mathbb{C}$-algebra of holomorphic
functions on $Y$. Recall that $Y=TP/M=P\times\sqrt{-1}T_M$. For
$\phi\in\mathcal{O}(Y)$, the restriction of $\phi$ to a fiber
$(L_x)^\vee=\nu^{-1}(x)\cong T_M$ gives a $C^\infty$ function
$\phi_x:T_M\rightarrow\mathbb{C}$ on the torus $T_M$. For $v\in N$,
the $v$-th Fourier coefficient of $\phi_x$ is given by
$$\widehat{\phi}_x(v)=(2\pi)^{-n}\int_{T_M}\phi_x(y)e^{\sqrt{-1}\langle y,v\rangle}dy_1\wedge\ldots\wedge dy_n.$$
Then, we define a function $\widehat{\phi}:LX\rightarrow\mathbb{C}$
on $LX$ by
$$\widehat{\phi}(p,v)=\widehat{\phi}_x(v),$$
where $x=\mu(p)\in P$. $\widehat{\phi}$ is clearly admissible. We
call the process,
$\phi\in\mathcal{O}(Y)\mapsto\widehat{\phi}\in\mathcal{A}(LX)$,
taking \textit{fiberwise Fourier coefficients}. The following lemma
follows from the standard theory of Fourier analysis on tori (see,
for example, Edwards \cite{Edwards79}).
\begin{lem}\label{lem3.1}
Taking fiberwise Fourier series, i.e. the map
$$\mathcal{A}(LX)\rightarrow\mathcal{O}(Y),\quad f\mapsto\widehat{f}$$
is an isomorphism of $\mathbb{C}$-algebras, where we equip
$\mathcal{A}(LX)$ with the convolution product and $\mathcal{O}(Y)$
with the ordinary product of functions. The inverse is given by
taking fiberwise Fourier coefficients. In particular,
$\widehat{\widehat{f}}=f$ for any $f\in\mathcal{A}(LX)$.
\end{lem}
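The inversion statement of the lemma is ordinary Fourier inversion on the torus. Here is a small numerical sketch for $n=1$, with hypothetical coefficients: it recovers each coefficient from the series via the normalized integral $(2\pi)^{-1}\int_0^{2\pi}\phi(y)e^{\sqrt{-1}yv}\,dy$, approximated by a Riemann sum (which is exact here, since $\phi$ is a trigonometric polynomial):

```python
import cmath, math

# Hypothetical Fourier coefficients of a function on T_M (n = 1).
coeffs = {-1: 0.5, 0: 2.0, 3: 1.5}

def phi(y):
    # The series sum_v c_v e^{-i*y*v}, matching the holonomy convention.
    return sum(c * cmath.exp(-1j * v * y) for v, c in coeffs.items())

# Recover each coefficient as (2*pi)^{-1} * int_0^{2pi} phi(y) e^{i*y*v} dy.
M = 1024
for v, c in coeffs.items():
    integral = sum(phi(2*math.pi*k/M) * cmath.exp(1j * 2*math.pi*k/M * v)
                   for k in range(M)) / M
    assert abs(integral - c) < 1e-9
```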
The basic properties of the SYZ mirror transformation $\mathcal{F}$
are summarized in the following theorem.
\begin{thm}\label{thm3.1}
Let
$\mathcal{A}(LX)e^{\sqrt{-1}\omega_X}:=\{fe^{\sqrt{-1}\omega_X}:f\in
\mathcal{A}(LX)\}\subset\Omega^*(LX)$ and
$\mathcal{O}(Y)\Omega_Y:=\{\phi\Omega_Y:\phi\in\mathcal{O}(Y)\}\subset\Omega^*(Y)$.
\begin{enumerate}
\item[(i)] For any admissible function $f\in\mathcal{A}(LX)$,
$$\mathcal{F}(fe^{\sqrt{-1}\omega_X})=\widehat{f}\Omega_Y\in\mathcal{O}(Y)\Omega_Y.$$
\item[(ii)] If we define the inverse SYZ mirror transformation
$\mathcal{F}^{-1}: \Omega^*(Y)\rightarrow\Omega^*(LX)$ by
\begin{eqnarray*}
\mathcal{F}^{-1}(\alpha) & = &
(-2\pi\sqrt{-1})^{-n}\pi_{LX,*}(\pi_Y^*(\alpha)\wedge e^{-\sqrt{-1}F}\textrm{hol}^{-1})\\
& = & (-2\pi\sqrt{-1})^{-n}\int_{T_M}\pi_Y^*(\alpha)\wedge
e^{-\sqrt{-1}F}\textrm{hol}^{-1},
\end{eqnarray*}
where $\textrm{hol}^{-1}:LX\times_P Y\rightarrow\mathbb{C}$ is the
function defined by
$\textrm{hol}^{-1}(p,v,z)=1/\textrm{hol}(p,v,z)=e^{\sqrt{-1}\langle
y,v\rangle}$, for any $(p,v,z)\in LX\times_P Y$, then
$$\mathcal{F}^{-1}(\phi\Omega_Y)=\widehat{\phi}e^{\sqrt{-1}\omega_X}\in\mathcal{A}(LX)e^{\sqrt{-1}\omega_X},$$
for any $\phi\in\mathcal{O}(Y)$.
\item[(iii)] The restriction map
$\mathcal{F}:\mathcal{A}(LX)e^{\sqrt{-1}\omega_X}
\rightarrow\mathcal{O}(Y)\Omega_Y$ is a bijection with inverse
$\mathcal{F}^{-1}:\mathcal{O}(Y)\Omega_Y\rightarrow\mathcal{A}(LX)e^{\sqrt{-1}\omega_X}$,
i.e. we have
\begin{equation*}
\mathcal{F}^{-1}\circ\mathcal{F}=\textrm{Id}_{\mathcal{A}(LX)e^{\sqrt{-1}\omega_X}},\
\mathcal{F}\circ\mathcal{F}^{-1}=\textrm{Id}_{\mathcal{O}(Y)\Omega_Y}.
\end{equation*}
This shows that the SYZ mirror transformation $\mathcal{F}$ has the
inversion property.
\end{enumerate}
\end{thm}
\begin{proof}
Let $f\in\mathcal{A}(LX)$. Then, for any $v\in N$,
$f(p,v)=f_ve^{-\langle x,v\rangle}$ for some constant
$f_v\in\mathbb{C}$. By observing that both functions $\pi_{LX}^*(f)$
and $\textrm{hol}$ are $T_N$-invariant functions on $LX\times_P Y$,
we have
\begin{eqnarray*}
\mathcal{F}(fe^{\sqrt{-1}\omega_X}) & = &
(-2\pi\sqrt{-1})^{-n}\int_{N\times
T_N}\pi_{LX}^*(fe^{\sqrt{-1}\omega_X})\wedge
e^{\sqrt{-1}F}\textrm{hol}\\
& = & (-2\pi\sqrt{-1})^{-n}\sum_{v\in
N}\pi_{LX}^*(f)\cdot\textrm{hol}
\int_{T_N}\pi_{LX}^*(e^{\sqrt{-1}\omega_X})\wedge e^{\sqrt{-1}F}\\
& = & (-2\pi\sqrt{-1})^{-n}\Bigg(\sum_{v\in
N}\pi_{LX}^*(f)\cdot\textrm{hol}\Bigg)
\Bigg(\int_{T_N}\pi_X^*(e^{\sqrt{-1}\omega_X})\wedge
e^{\sqrt{-1}F}\Bigg).
\end{eqnarray*}
The last equality is due to the fact that the forms
$\pi_{LX}^*(e^{\sqrt{-1}\omega_X})=\pi_X^*(e^{\sqrt{-1}\omega_X})$
and $e^{\sqrt{-1}F}$ are independent of $v\in N$. By Proposition~\ref{prop3.2},
the second factor is given by
$$\int_{T_N}\pi_X^*(e^{\sqrt{-1}\omega_X})\wedge e^{\sqrt{-1}F}
=(-2\pi\sqrt{-1})^n\mathcal{F}^{\textrm{sf}}(e^{\sqrt{-1}\omega_X})=(-2\pi\sqrt{-1})^n\Omega_Y,$$
while the first factor is the function on $Y$ given, for
$x=(x_1,\ldots,x_n)\in P$ and $y=(y_1,\ldots,y_n)\in T_M$, by
\begin{eqnarray*}
\Bigg(\sum_{v\in N}\pi_{LX}^*(f)\cdot\textrm{hol}\Bigg)(x,y) & = &
\sum_{v\in N} f_ve^{-\langle x,v\rangle}e^{-\sqrt{-1}\langle y,v\rangle}\\
& = & \sum_{v\in N} f_vz^v\\
& = & \widehat{f}(z),
\end{eqnarray*}
where
$z=(z_1,\ldots,z_n)=(\exp(-x_1-\sqrt{-1}y_1),\ldots,\exp(-x_n-\sqrt{-1}y_n))\in
Y$. Hence
$\mathcal{F}(fe^{\sqrt{-1}\omega_X})=\widehat{f}\Omega_Y\in\mathcal{O}(Y)\Omega_Y$.
This proves (i).
For (ii), expand $\phi\in\mathcal{O}(Y)$ into a fiberwise Fourier
series
$$\phi(z)=\sum_{w\in N}\widehat{\phi}_x(w)e^{-\sqrt{-1}\langle
y,w\rangle},$$ where $x,y,z$ are as before. Then
\begin{eqnarray*}
\mathcal{F}^{-1}(\phi\Omega_Y) & = &
(-2\pi\sqrt{-1})^{-n}\int_{T_M}\pi_Y^*(\phi\Omega_Y)\wedge e^{-\sqrt{-1}F}\textrm{hol}^{-1}\\
& = & (-2\pi\sqrt{-1})^{-n}\sum_{w\in
N}\Bigg(\widehat{\phi}_x(w)\int_{T_M}e^{\sqrt{-1}\langle
y,v-w\rangle}\pi_Y^*(\Omega_Y)\wedge e^{-\sqrt{-1}F}\Bigg).
\end{eqnarray*}
Here comes the key observation: If $v-w\neq0\in N$, then, using (the
proof of) the second part of Proposition~\ref{prop3.2}, we have
\begin{eqnarray*}
& & \int_{T_M} e^{\sqrt{-1}\langle
y,v-w\rangle}\pi_Y^*(\Omega_Y)\wedge e^{-\sqrt{-1}F}\\
& = & \int_{T_M} e^{\sqrt{-1}\langle
y,v-w\rangle}\Bigg(\bigwedge_{j=1}^n(-dx_j-\sqrt{-1}dy_j)\Bigg)\wedge
e^{\sum_{j=1}^n dy_j\wedge du_j}\\
& = & (-\sqrt{-1})^n
e^{\sqrt{-1}\omega_X}\int_{T_M}e^{\sqrt{-1}\langle
y,v-w\rangle} dy_1\wedge\ldots\wedge dy_n\\
& = & 0.
\end{eqnarray*}
Hence,
\begin{eqnarray*}
\mathcal{F}^{-1}(\phi\Omega_Y) & = &
(-2\pi\sqrt{-1})^{-n}\widehat{\phi}_x(v)\int_{T_M}\pi_Y^*(\Omega_Y)\wedge
e^{-\sqrt{-1}F}\\
& = &
\widehat{\phi}\,(\mathcal{F}^{\textrm{sf}})^{-1}(\Omega_Y)
=\widehat{\phi}e^{\sqrt{-1}\omega_X}\in\mathcal{A}(LX)e^{\sqrt{-1}\omega_X},
\end{eqnarray*}
again by Proposition~\ref{prop3.2}.
(iii) follows from (i), (ii) and Lemma~\ref{lem3.1}.
\end{proof}
We will, again by abuse of notation, also use
$\mathcal{F}:\mathcal{A}(LX)\rightarrow\mathcal{O}(Y)$ to denote the
process of taking fiberwise Fourier series:
$\mathcal{F}(f):=\widehat{f}$ for $f\in\mathcal{A}(LX)$. Similarly,
we use $\mathcal{F}^{-1}:\mathcal{O}(Y)\rightarrow\mathcal{A}(LX)$
to denote the process of taking fiberwise Fourier coefficients:
$\mathcal{F}^{-1}(\phi):=\widehat{\phi}$ for
$\phi\in\mathcal{O}(Y)$. To which meanings of the symbols
$\mathcal{F}$ and $\mathcal{F}^{-1}$ are we referring will be clear
from the context.
We can now prove the first part of Theorem~\ref{main_thm}, as a
corollary of Theorem~\ref{thm3.1}.
\begin{thm}[=part 1. of Theorem~\ref{main_thm}]\label{thm3.2}
The SYZ mirror transformation of the function $\Phi_q\in
C^\infty(LX)$, defined in terms of the counting of Maslov index two
holomorphic discs in $\bar{X}$ with boundary in Lagrangian torus
fibers, is the exponential of the superpotential $W$ on the mirror
manifold $Y$, i.e.
$$\mathcal{F}(\Phi_q)=e^W.$$
Conversely, we have
$$\mathcal{F}^{-1}(e^W)=\Phi_q.$$
Furthermore, we can incorporate the symplectic structure
$\omega_X=\omega_{\bar{X}}|_X$ on $X$ to give the holomorphic volume
form on the Landau-Ginzburg model $(Y,W)$ through the SYZ mirror
transformation $\mathcal{F}$, and vice versa, in the following
sense:
\begin{equation*}
\mathcal{F}(\Phi_q e^{\sqrt{-1}\omega_X})=e^W\Omega_Y,\
\mathcal{F}^{-1}(e^W\Omega_Y)=\Phi_q e^{\sqrt{-1}\omega_X}.
\end{equation*}
\end{thm}
\begin{proof}
By Theorem~\ref{thm3.1}, we only need to show that $\Phi_q\in
C^\infty(LX)$ is admissible and
$\mathcal{F}(\Phi_q)=\widehat{\Phi}_q=e^W\in\mathcal{O}(Y)$. Recall
that, for $(p,v)\in LX=X\times N$ and $x=\mu(p)\in P$,
$$\Phi_q(p,v)=\sum_{\beta\in\pi_2^+(\bar{X},L_x),\
\partial\beta=v}\frac{1}{w(\beta)}e^{-\frac{1}{2\pi}\int_\beta\omega_{\bar{X}}}.$$
For $\beta\in\pi_2^+(\bar{X},L_x)$ with $\partial\beta=v$, by the
symplectic area formula (\ref{area}) of Cho-Oh, we have
$\int_\beta\omega_{\bar{X}}=2\pi\langle x,v\rangle+\textrm{const}$.
So $\Phi_q(p,v)$ is of the form $\textrm{const}\cdot e^{-\langle
x,v\rangle}$. Now,
\begin{eqnarray*}
\sum_{v\in N}\Phi_q(p,v)\textrm{hol}_{\nabla_y}(v) & = & \sum_{v\in
N}\Bigg(\sum_{\beta\in\pi_2^+(\bar{X},L_x),\
\partial\beta=v}\frac{1}{w(\beta)}e^{-\frac{1}{2\pi}\int_\beta\omega_{\bar{X}}}\Bigg)e^{-\sqrt{-1}\langle y,v\rangle}\\
& = & \sum_{k_1,\ldots,k_d\in\mathbb{Z}_{\geq0}}\frac{1}{k_1!\ldots
k_d!}e^{-\sum_{i=1}^d k_i(\langle
x,v_i\rangle-\lambda_i)}e^{-\sum_{i=1}^d k_i\sqrt{-1}\langle
y,v_i\rangle}\\
& = & \prod_{i=1}^d\Bigg(\sum_{k_i=0}^\infty\frac{1}{k_i!}
\big(e^{\lambda_i-\langle
x+\sqrt{-1}y,v_i\rangle}\big)^{k_i}\Bigg)\\
& = & \prod_{i=1}^d \exp(e^{\lambda_i}z^{v_i})=e^W.
\end{eqnarray*}
This shows that $\Phi_q$ is admissible and $\widehat{\Phi}_q=e^W$.
\end{proof}
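The key series manipulation in this proof — the sum over $(k_1,\ldots,k_d)$ factoring into a product of exponentials — can be sanity-checked numerically. A sketch with $\mathbb{C}P^2$-type data, where the monomials $e^{\lambda_i}z^{v_i}$ are $z_1$, $z_2$, $q/(z_1z_2)$ (the specific numerical values below are illustrative only):

```python
import math
from itertools import product

# Hypothetical sample point; the monomials e^{lambda_i} z^{v_i} for the
# CP^2 fan are z1, z2, q/(z1*z2).
z1, z2, q = 0.3, 0.4, 0.05
monomials = [z1, z2, q / (z1 * z2)]

# Left side: sum over all (k_1,...,k_d) of prod_i monomial_i^{k_i}/k_i!,
# i.e. the fiberwise Fourier series of Phi_q, truncated at K.
K = 25
lhs = sum(
    math.prod(m**k / math.factorial(k) for m, k in zip(monomials, ks))
    for ks in product(range(K), repeat=len(monomials))
)

# Right side: e^W, with W the sum of the monomials.
rhs = math.exp(sum(monomials))
assert abs(lhs - rhs) < 1e-9
```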
The form $\Phi_q e^{\sqrt{-1}\omega_X}\in\Omega^*(LX)$ can be viewed
as \textit{the symplectic structure modified by quantum corrections
from Maslov index two holomorphic discs in $\bar{X}$ with boundaries
on Lagrangian torus fibers}. That we call $e^W\Omega_Y$ the
holomorphic volume form of the Landau-Ginzburg model $(Y,W)$ can be
justified in several ways. For instance, in the theory of
singularities, one studies the complex oscillating integrals
$$I=\int_{\Gamma} e^{\frac{1}{\hbar}W}\Omega_Y,$$
where $\Gamma$ is some real $n$-dimensional cycle in $Y$ constructed
by the Morse theory of the function $\mathrm{Re}(W)$. These
integrals are reminiscent of the periods of holomorphic volume forms
on Calabi-Yau manifolds, and they satisfy certain Picard-Fuchs
equations (see, for example, Givental~\cite{Givental97b}). Hence,
one may think of $e^W\Omega_Y$ as playing the same role as the
holomorphic volume form on a Calabi-Yau manifold.
\subsection{Quantum cohomology vs. Jacobian ring}\label{subsec3.3}
The purpose of this subsection is to give a proof of the second part
of Theorem~\ref{main_thm}. Before that, let us recall the definition
of the Jacobian ring $Jac(W)$. Recall that the SYZ mirror manifold
$Y$ is given by the bounded domain
$$Y=\{(z_1,\ldots,z_n)\in(\mathbb{C}^*)^n:|e^{\lambda_i}z^{v_i}|<1,\ i=1,\ldots,d\},$$
in $(\mathbb{C}^*)^n$, and the superpotential
$W:Y\rightarrow\mathbb{C}$ is the Laurent polynomial
$$W=e^{\lambda_1}z^{v_1}+\ldots+e^{\lambda_d}z^{v_d},$$
where, as before, $z^v$ denotes the monomial $z_1^{v^1}\ldots
z_n^{v^n}$ if $v=(v^1,\ldots,v^n)\in N=\mathbb{Z}^n$. Let
$\mathbb{C}[Y]=\mathbb{C}[z_1^{\pm1},\ldots,z_n^{\pm1}]$ be the
$\mathbb{C}$-algebra of Laurent polynomials restricted to $Y$. Then
the Jacobian ring $Jac(W)$ of $W$ is defined as the quotient of
$\mathbb{C}[Y]$ by the ideal generated by the logarithmic
derivatives of $W$:
\begin{eqnarray*}
Jac(W) & = & \mathbb{C}[Y]\Big/\Big\langle z_j\frac{\partial
W}{\partial z_j}:j=1,\ldots,n\Big\rangle\\
& = &\mathbb{C}[z_1^{\pm1},\ldots,z_n^{\pm1}]\Big/\Big\langle
z_j\frac{\partial W}{\partial z_j}:j=1,\ldots,n\Big\rangle.
\end{eqnarray*}
The second part of Theorem~\ref{main_thm} is now an almost immediate
corollary of Proposition~\ref{prop2.2} and Theorem~\ref{thm3.3}.
\begin{thm}\label{thm3.3}
The SYZ mirror transformation $\mathcal{F}$ gives an isomorphism
$$\mathcal{F}:\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]/\mathcal{L}\rightarrow
Jac(W)$$ of $\mathbb{C}$-algebras. Hence, $\mathcal{F}$ induces a
natural isomorphism of $\mathbb{C}$-algebras between the small
quantum cohomology ring of $\bar{X}$ and the Jacobian ring of $W$:
\begin{eqnarray*}
\mathcal{F}:QH^*(\bar{X})\overset{\cong}{\longrightarrow}Jac(W),
\end{eqnarray*}
provided that $\bar{X}$ is a product of projective spaces.
\end{thm}
\begin{proof}
The functions $\Psi_1,\Psi_1^{-1},\ldots,\Psi_n,\Psi_n^{-1}$ are all
admissible, so $\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]$ is a
subalgebra of $\mathcal{A}(LX)$. It is easy to see that, for
$i=1,\ldots,d$, the SYZ mirror transformation
$\mathcal{F}(\Psi_i)=\widehat{\Psi}_i$ of $\Psi_i$ is nothing but
the monomial $e^{\lambda_i}z^{v_i}$. By our choice of the polytope
$\bar{P}\subset M_\mathbb{R}$, $v_1=e_1,\ldots,v_n=e_n$ is the
standard basis of $N=\mathbb{Z}^n$ and
$\lambda_1=\ldots=\lambda_n=0$. Hence,
$$\mathcal{F}(\Psi_i)=z_i,$$
for $i=1,\ldots,n$, and the induced map
$$\mathcal{F}:\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]\rightarrow\mathbb{C}[z_1^{\pm1},\ldots,z_n^{\pm1}]$$
is an isomorphism of $\mathbb{C}$-algebras. Now, notice that
$$z_j\frac{\partial W}{\partial z_j}
=\sum_{i=1}^d z_j\frac{\partial}{\partial
z_j}(e^{\lambda_i}z_1^{v_i^1}\ldots z_n^{v_i^n})=\sum_{i=1}^d v_i^j
e^{\lambda_i}z_1^{v_i^1}\ldots z_n^{v_i^n}=\sum_{i=1}^d v_i^j
e^{\lambda_i}z^{v_i},$$ for $j=1,\ldots,n$. The inverse SYZ
transformation of $z_j\frac{\partial W}{\partial z_j}$ is thus given
by
$$\mathcal{F}^{-1}(z_j\frac{\partial W}{\partial z_j})
=\widehat{\sum_{i=1}^d v_i^je^{\lambda_i}z^{v_i}}=\sum_{i=1}^d
v_i^j\Psi_i.$$ Thus,
$$\mathcal{F}^{-1}\Big(\Big\langle
z_j\frac{\partial W}{\partial
z_j}:j=1,\ldots,n\Big\rangle\Big)=\mathcal{L},$$
the ideal in
$\mathbb{C}[\Psi_1^{\pm1},\ldots,\Psi_n^{\pm1}]$ generated by linear
equivalences. The result follows.
\end{proof}
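The identity $z_j\frac{\partial W}{\partial z_j}=\sum_{i=1}^d v_i^je^{\lambda_i}z^{v_i}$ used in this proof can be verified symbolically. The following sketch instantiates it with the $\mathbb{C}P^2$ fan data as an illustration (with $q=e^{-t}$ standing in for $e^{\lambda_3}$):

```python
import sympy as sp

z1, z2, q = sp.symbols('z1 z2 q')

# Illustrative fan data (the CP^2 fan): v1=(1,0), v2=(0,1), v3=(-1,-1),
# lambda_1 = lambda_2 = 0, lambda_3 = -t, so e^{lambda_3} = q.
mons = [z1, z2, q/(z1*z2)]            # the monomials e^{lambda_i} z^{v_i}
vs = [(1, 0), (0, 1), (-1, -1)]
W = sum(mons)

# Check z_j dW/dz_j = sum_i v_i^j e^{lambda_i} z^{v_i} for j = 1, 2.
for j, zj in enumerate((z1, z2)):
    log_deriv = sp.expand(zj * sp.diff(W, zj))
    expected = sp.expand(sum(v[j] * m for v, m in zip(vs, mons)))
    assert sp.simplify(log_deriv - expected) == 0
```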
\section{Examples}\label{sec4}
In this section, we give some examples to illustrate our results.\\
\noindent\textbf{Example 1. $\bar{X}=\mathbb{C}P^2$.} In this case,
$N=\mathbb{Z}^2$. The primitive generators of the 1-dimensional
cones of the fan $\Sigma$ defining $\mathbb{C}P^2$ are given by
$v_1=(1,0),v_2=(0,1),v_3=(-1,-1)\in N$, and the polytope
$\bar{P}\subset M_\mathbb{R}\cong\mathbb{R}^2$ we chose is defined
by the inequalities
$$x_1\geq0,\ x_2\geq0,\ x_1+x_2\leq t,$$
where $t>0$. See Figure 4.1 below.
\begin{figure}[ht]
\setlength{\unitlength}{1mm}
\begin{picture}(100,35)
\put(10,2){\vector(0,1){35}} \put(10,2){\vector(1,0){35}}
\curve(10,32, 40,2) \curve(40,2.8, 40,1.2) \put(40,-1){$t$}
\curve(10.8,32, 9.2,32) \put(7.8,31){$t$} \put(8,0){0}
\put(13,11){$\bar{P}\subset M_\mathbb{R}$} \put(5.3,14.5){$D_1$}
\put(23,-1.3){$D_2$} \put(24.9,17.9){$D_3$} \curve(75,17, 93,17)
\curve(75,17, 75,35) \curve(75,17, 60,2)
\put(74.15,16.15){$\bullet$} \put(71,16){$\xi$} \put(87,18){$E_1$}
\put(75.5,30){$E_2$} \put(67,6.3){$E_3$} \put(75,4){$(\Gamma_3,h)$
in $N_\mathbb{R}$} \put(44,-3){Figure 4.1}
\end{picture}
\end{figure}
The mirror manifold $Y$ is given by
\begin{eqnarray*}
Y & = & \{(Z_1,Z_2,Z_3)\in\mathbb{C}^3:Z_1Z_2Z_3=q, |Z_i|<1,\ i=1,2,3\}\\
& = & \{(z_1,z_2)\in(\mathbb{C}^*)^2:|z_1|<1, |z_2|<1,
|\frac{q}{z_1z_2}|<1\},
\end{eqnarray*}
where $q=e^{-t}$ is the K\"{a}hler parameter, and, the
superpotential $W:Y\rightarrow\mathbb{C}$ can be written, in two
ways, as
$$W=Z_1+Z_2+Z_3=z_1+z_2+\frac{q}{z_1z_2}.$$
In terms of the coordinates $Z_1,Z_2,Z_3$, the Jacobian ring
$Jac(W)$ is given by
\begin{eqnarray*}
Jac(W) & = & \mathbb{C}[Z_1,Z_2,Z_3]\big/\big\langle
Z_1-Z_3,Z_2-Z_3,Z_1Z_2Z_3-q\big\rangle\\
& \cong & \mathbb{C}[Z]\big/\big\langle Z^3-q\big\rangle.
\end{eqnarray*}
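One can verify symbolically that the two logarithmic-derivative relations eliminate to the single relation $z_1^3=q$, matching the presentation $\mathbb{C}[Z]/\langle Z^3-q\rangle$ above; a sketch using SymPy:

```python
import sympy as sp

z1, z2, q = sp.symbols('z1 z2 q')
W = z1 + z2 + q/(z1*z2)

# Logarithmic derivatives, with denominators cleared (z1, z2 are
# invertible on Y, so this does not change the ideal):
p1 = sp.expand(sp.expand(z1 * sp.diff(W, z1)) * z1*z2)   # z1**2*z2 - q
p2 = sp.expand(sp.expand(z2 * sp.diff(W, z2)) * z1*z2)   # z1*z2**2 - q

# Eliminate z2: p1 = 0 gives z2 = q/z1**2; substituting into p2
# leaves the single relation z1**3 = q.
z2_sol = sp.solve(p1, z2)[0]
rel = sp.expand(p2.subs(z2, z2_sol) * z1**3 / q)
assert rel == sp.expand(q - z1**3)
```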
There are three toric prime divisors $D_1,D_2,D_3$, which correspond
to the three admissible functions
$\Psi_1,\Psi_2,\Psi_3:LX\rightarrow\mathbb{C}$ defined by
\begin{eqnarray*}
\Psi_1(p,v) & = & \left\{ \begin{array}{ll}
e^{-x_1} & \textrm{if $v=(1,0)$}\\
0 & \textrm{otherwise,}
\end{array} \right.\\
\Psi_2(p,v) & = & \left\{ \begin{array}{ll}
e^{-x_2} & \textrm{if $v=(0,1)$}\\
0 & \textrm{otherwise,}
\end{array} \right.\\
\Psi_3(p,v) & = & \left\{ \begin{array}{ll}
e^{-(t-x_1-x_2)} & \textrm{if $v=(-1,-1)$}\\
0 & \textrm{otherwise,}
\end{array} \right.
\end{eqnarray*}
for $(p,v)\in LX$ and where $x=\mu(p)\in P$, respectively. The small
quantum cohomology ring of $\mathbb{C}P^2$ has the following
presentation:
\begin{eqnarray*}
QH^*(\mathbb{C}P^2) & = & \mathbb{C}[D_1,D_2,D_3]\big/\big\langle
D_1-D_3,D_2-D_3,D_1\ast D_2\ast D_3-q\big\rangle\\
& \cong & \mathbb{C}[H]\big/\big\langle H^3-q\big\rangle,
\end{eqnarray*}
where $H\in H^2(\mathbb{C}P^2,\mathbb{C})$ is the hyperplane class.
Quantum corrections appear only in one relation, namely,
$$D_1\ast D_2\ast D_3=q.$$
Fix a point $p\in X$. Then the quantum correction is due to the
unique holomorphic curve
$\varphi:(\mathbb{C}P^1;x_1,x_2,x_3,x_4)\rightarrow\mathbb{C}P^2$ of
degree 1 (i.e. a line) with 4 marked points such that
$\varphi(x_4)=p$ and $\varphi(x_i)\in D_i$ for $i=1,2,3$. The
parameterized 3-marked, genus 0 tropical curve corresponding to this
line is $(\Gamma_3;E_1,E_2,E_3;h)$, which is glued from three half
lines emanating from the point $\xi=\textrm{Log}(p)\in N_\mathbb{R}$
in the directions $v_1$, $v_2$ and $v_3$. See Figure 4.1 above.
These half lines are the parameterized Maslov index two tropical
discs $(\Gamma_1,h_i)$, where $h_i(V)=\xi$ and
$h_i(E)=\xi+\mathbb{R}_{\geq0}v_i$, for $i=1,2,3$ (see Figure 2.3).
They correspond to the Maslov index two holomorphic discs
$\varphi_1,\varphi_2,\varphi_3:(D^2,\partial
D^2)\rightarrow(\mathbb{C}P^2,L_{\mu(p)})$ which pass through $p$
and intersect the corresponding toric divisors $D_1,D_2,D_3$
respectively.\\
\noindent\textbf{Example 2.
$\bar{X}=\mathbb{C}P^1\times\mathbb{C}P^1$.} The primitive
generators of the 1-dimensional cones of the fan $\Sigma$ defining
$\mathbb{C}P^1\times\mathbb{C}P^1$ are given by
$v_{1,1}=(1,0),v_{2,1}=(-1,0),v_{1,2}=(0,1),v_{2,2}=(0,-1)\in
N=\mathbb{Z}^2$. We choose the polytope $\bar{P}\subset
M_\mathbb{R}=\mathbb{R}^2$ to be defined by the inequalities
$$0\leq x_1\leq t_1,\ 0\leq x_2\leq t_2$$
where $t_1,t_2>0$. See Figure 4.2 below.
\begin{figure}[ht]
\setlength{\unitlength}{1mm}
\begin{picture}(100,36)
\put(2,6){\vector(0,1){31}} \put(2,6){\vector(1,0){42}} \curve(2,32,
39,32) \curve(39,6, 39,32) \curve(39,6.8, 39,5.2)
\put(38,2.4){$t_1$} \curve(3.8,32, 6.2,32) \put(-1,31){$t_2$}
\put(0,4){0} \put(13.8,17.3){$\bar{P}\subset M_\mathbb{R}$}
\put(-4.5,18){$D_{1,1}$} \put(39.5,18){$D_{2,1}$}
\put(17.5,3.1){$D_{1,2}$} \put(17.5,33.2){$D_{2,2}$} \curve(57,23,
103,23) \curve(85,6, 85,35) \put(84.1,22){$\bullet$}
\put(82.8,19.7){$\xi$} \put(80,2.3){$(\Gamma_2,h_2)$}
\put(58.5,20){$(\Gamma_2,h_1)$} \put(92,30){$N_\mathbb{R}$}
\put(42,-2){Figure 4.2}
\end{picture}
\end{figure}
The mirror Landau-Ginzburg model $(Y,W)$ consists of
\begin{eqnarray*}
Y\!\!\!&=&\!\!\!\{(Z_{1,1},Z_{2,1},Z_{1,2},Z_{2,2})\in
\mathbb{C}^4:Z_{1,1}Z_{2,1}=q_1,Z_{1,2}Z_{2,2}=q_2,
|Z_{i,j}|<1,\textrm{ all $i,j$}\}\\
\!\!\!&=&\!\!\!\{(z_1,z_2)\in(\mathbb{C}^*)^2:|z_1|<1,|z_2|<1,
|\frac{q_1}{z_1}|<1,|\frac{q_2}{z_2}|<1\},
\end{eqnarray*}
where $q_1=e^{-t_1}$ and $q_2=e^{-t_2}$ are the K\"{a}hler
parameters, and
$$W=Z_{1,1}+Z_{2,1}+Z_{1,2}+Z_{2,2}=z_1+\frac{q_1}{z_1}+z_2+\frac{q_2}{z_2}.$$
The Jacobian ring $Jac(W)$ is given by
\begin{eqnarray*}
Jac(W) & = &
\frac{\mathbb{C}[Z_{1,1},Z_{2,1},Z_{1,2},Z_{2,2}]}{\big\langle
Z_{1,1}-Z_{2,1},Z_{1,2}-Z_{2,2},Z_{1,1}Z_{2,1}-q_1,Z_{1,2}Z_{2,2}-q_2\big\rangle}\\
& \cong & \mathbb{C}[Z_1,Z_2]\big/\big\langle
Z_1^2-q_1,Z_2^2-q_2\big\rangle.
\end{eqnarray*}
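Similarly, the relations $Z_1^2=q_1$ and $Z_2^2=q_2$ can be read off directly from the logarithmic derivatives of $W$; a quick symbolic check:

```python
import sympy as sp

z1, z2, q1, q2 = sp.symbols('z1 z2 q1 q2')
W = z1 + q1/z1 + z2 + q2/z2

# Clearing the (invertible) denominators in the logarithmic derivatives
# recovers the two relations of the Jacobian ring directly.
r1 = sp.expand(sp.expand(z1 * sp.diff(W, z1)) * z1)   # z1**2 - q1
r2 = sp.expand(sp.expand(z2 * sp.diff(W, z2)) * z2)   # z2**2 - q2
assert r1 == z1**2 - q1 and r2 == z2**2 - q2
```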
The four toric prime divisors $D_{1,1},D_{2,1},D_{1,2},D_{2,2}$
correspond respectively to the four admissible functions
$\Psi_{1,1},\Psi_{2,1},\Psi_{1,2},\Psi_{2,2}:LX\rightarrow\mathbb{C}$
defined by
\begin{eqnarray*}
\Psi_{1,1}(p,v) & = & \left\{ \begin{array}{ll}
e^{-x_1} & \textrm{if $v=(1,0)$}\\
0 & \textrm{otherwise,}
\end{array} \right.\\
\Psi_{2,1}(p,v) & = & \left\{ \begin{array}{ll}
e^{-(t_1-x_1)} & \textrm{if $v=(-1,0)$}\\
0 & \textrm{otherwise,}
\end{array} \right.\\
\Psi_{1,2}(p,v) & = & \left\{ \begin{array}{ll}
e^{-x_2} & \textrm{if $v=(0,1)$}\\
0 & \textrm{otherwise,}
\end{array} \right.\\
\Psi_{2,2}(p,v) & = & \left\{ \begin{array}{ll}
e^{-(t_2-x_2)} & \textrm{if $v=(0,-1)$}\\
0 & \textrm{otherwise,}
\end{array} \right.
\end{eqnarray*}
for $(p,v)\in LX$, where $x=\mu(p)\in P$. The small quantum
cohomology ring of $\mathbb{C}P^1\times\mathbb{C}P^1$ is given by
\begin{eqnarray*}
QH^*(\mathbb{C}P^1\times\mathbb{C}P^1)\!\!\!&=&\!\!\!
\frac{\mathbb{C}[D_{1,1},D_{2,1},D_{1,2},D_{2,2}]}{\big\langle
D_{1,1}-D_{2,1},D_{1,2}-D_{2,2},D_{1,1}\ast D_{2,1}-q_1,D_{1,2}\ast
D_{2,2}-q_2\big\rangle}\\
\!\!\!&\cong&\!\!\!\mathbb{C}[H_1,H_2]\big/\big\langle
H_1^2-q_1,H_2^2-q_2\big\rangle
\end{eqnarray*}
where $H_1,H_2\in H^2(\mathbb{C}P^1\times\mathbb{C}P^1)$ are the
pullbacks of the hyperplane classes in the first and second factors
respectively. Quantum corrections appear in two relations:
$$D_{1,1}\ast D_{2,1}=q_1\textrm{ and }D_{1,2}\ast D_{2,2}=q_2.$$
Let us focus on the first one, as the other one is similar. For any
$p\in X$, there are two Maslov index two holomorphic discs
$\varphi_{1,1},\varphi_{2,1}:(D^2,\partial
D^2)\rightarrow(\mathbb{C}P^1\times\mathbb{C}P^1,L_{\mu(p)})$
intersecting the corresponding toric divisors. An interesting
feature of this example is that, since the sum of the boundaries of
the two holomorphic discs is zero as a \textit{chain}, instead of as
a class, in $L_{\mu(p)}$, they glue together \textit{directly} to
give the unique holomorphic curve
$\varphi_1:(\mathbb{C}P^1:x_1,x_2,x_3)\rightarrow\mathbb{C}P^1\times\mathbb{C}P^1$
of degree 1 with $\varphi_1(x_1)\in D_{1,1}$, $\varphi_1(x_2)\in
D_{2,1}$ and $\varphi_1(x_3)=p$. So the relation $D_{1,1}\ast
D_{2,1}=q_1$ corresponds directly to
$\Psi_{1,1}\star\Psi_{2,1}=q_1\mathbb{1}$, without going through the
corresponding relation in $QH^*_{trop}(\bar{X})$. In other words, we
do not need to go to the tropical world to see the geometry of the
isomorphism $QH^*(\mathbb{C}P^1\times\mathbb{C}P^1)\cong
\mathbb{C}[\Psi_{1,1}^{\pm1},\Psi_{1,2}^{\pm1}]/\mathcal{L}$
(although in Figure 4.2 above, we have still drawn the tropical
lines $h_1$ and $h_2$ passing through $\xi=\textrm{Log}(p)\in
N_\mathbb{R}$).\\
\noindent\textbf{Example 3. $\bar{X}$ is the toric blowup of
$\mathbb{C}P^2$ at one point.} Let $\bar{P}\subset\mathbb{R}^2$ be
the polytope defined by the inequalities
$$x_1\geq0,\ 0\leq x_2\leq t_2,\ x_1+x_2\leq t_1+t_2,$$
where $t_1,t_2>0$.
\begin{figure}[ht]
\setlength{\unitlength}{1mm}
\begin{picture}(100,32)
\put(18,4){\vector(0,1){30}} \put(18,4){\vector(1,0){60}}
\curve(18,28, 44,28) \curve(18,28.1, 44,28.1) \curve(18,27.9,
44,27.9) \put(17.1,27.1){$\bullet$} \put(43.1,27.1){$\bullet$}
\curve(44,28, 68,4) \curve(18.8,28, 17.2,28) \put(14.1,27){$t_2$}
\curve(68,3.2, 68,4.8) \put(63.5,0.6){$t_1+t_2$} \put(16,1){0}
\put(28,14){$\bar{P}\subset M_\mathbb{R}$} \put(13.5,15.5){$D_1$}
\put(38,1){$D_2$} \put(56,17){$D_3$} \put(28,29){$D_4$}
\put(48,30.5){\vector(-1,0){15}} \put(49,29.5){exceptional curve}
\put(80,15){Figure 4.3}
\end{picture}
\end{figure}
The toric Fano manifold $\bar{X}$ corresponding to this trapezoid
(see Figure 4.3 above) is the blowup of $\mathbb{C}P^2$ at a
$T_N$-fixed point. The primitive generators of the 1-dimensional
cones of the fan $\Sigma$ defining $\bar{X}$ are given by
$v_1=(1,0),v_2=(0,1),v_3=(-1,-1),v_4=(0,-1)\in N=\mathbb{Z}^2$. As
in the previous examples, we have the mirror manifold
\begin{eqnarray*}
Y & = &
\{(Z_1,Z_2,Z_3,Z_4)\in\mathbb{C}^4:Z_1Z_3=q_1Z_4,Z_2Z_4=q_2,|Z_i|<1\textrm{ for
all $i$}\}\\
& = &
\{(z_1,z_2)\in(\mathbb{C}^*)^2:|z_1|<1,|z_2|<1,|\frac{q_1q_2}{z_1z_2}|<1,|\frac{q_2}{z_2}|<1\},
\end{eqnarray*}
and the superpotential
$$W=Z_1+Z_2+Z_3+Z_4=z_1+z_2+\frac{q_1q_2}{z_1z_2}+\frac{q_2}{z_2},$$
where $q_1=e^{-t_1},q_2=e^{-t_2}$. The Jacobian ring of $W$ is
\begin{equation*}
Jac(W)=\frac{\mathbb{C}[Z_1,Z_2,Z_3,Z_4]}{\big\langle
Z_1-Z_3,Z_2-Z_3-Z_4,Z_1Z_3-q_1Z_4,Z_2Z_4-q_2\big\rangle}
\end{equation*}
and the small quantum cohomology ring of $\bar{X}$ is given by
$$QH^*(\bar{X})=\frac{\mathbb{C}[D_1,D_2,D_3,D_4]}{\big\langle
D_1-D_3,D_2-D_3-D_4,D_1\ast D_3-q_1D_4,D_2\ast
D_4-q_2\big\rangle}.$$ Obviously, we have an isomorphism
$QH^*(\bar{X})\cong Jac(W)$ and the isomorphism
$$QH^*(\bar{X})\cong\mathbb{C}[\Psi_1^{\pm1},\Psi_2^{\pm1}]/\mathcal{L}$$
in Proposition~\ref{prop2.2} still holds, as we have said in
Remark~\ref{rmk2.3}. However, the geometric picture that we have
derived in Subsection~\ref{subsec2.2} using tropical geometry breaks
down. This is because there is a \textit{rigid} holomorphic curve
contained in the toric boundary $D_\infty$ which \textit{does}
contribute to $QH^*(\bar{X})$. Namely, the quantum relation
$$D_1\ast D_3=q_1D_4$$
is due to the holomorphic curve
$\varphi:\mathbb{C}P^1\rightarrow\bar{X}$ such that
$\varphi(\mathbb{C}P^1)\subset D_4$. This curve is exceptional since
$D_4^2=-1$, and thus cannot be deformed to a curve outside the toric
boundary. See Figure 4.3 above. Hence, it does \textit{not}
correspond to any tropical curve in $N_\mathbb{R}$. This means
that tropical geometry cannot ``see'' the curve $\varphi$, and it is
not clear how one could define the tropical analog of the small
quantum cohomology ring in this case.
\section{Discussions}
In this final section, we speculate on possible generalizations of
the results of this paper. The discussion will be rather informal.\\
The proofs of the results in this paper rely heavily on the
classification of holomorphic discs in a toric Fano manifold
$\bar{X}$ with boundary in Lagrangian torus fibers, and on the
explicit nature of toric varieties. Nevertheless, it is still
possible to generalize these results, in particular, the
construction of SYZ mirror transformations, to non-toric situations.
For example, one may consider a \textit{complex flag manifold}
$\bar{X}$, where the Gelfand-Cetlin integrable system provides a
natural Lagrangian torus fibration structure on $\bar{X}$ (see, for
example, Guillemin-Sternberg \cite{GS83}). The base of this
fibration is again an affine manifold with boundary but without
singularities. In fact, there is a \textit{toric degeneration} of
the complex flag manifold $\bar{X}$ to a toric variety, and the base
is nothing but the polytope associated to that toric variety.
Furthermore, the classification of holomorphic discs in a complex
flag manifold $\bar{X}$ with boundary in Lagrangian torus fibers was
recently done by Nishinou-Nohara-Ueda \cite{NNU08}, and, at least
for the full flag manifolds, there is an isomorphism between the
small quantum cohomology ring and the Jacobian ring of the mirror
superpotential (cf. Corollary 12.4 in \cite{NNU08}). Hence, one can
try to construct the SYZ mirror transformations for a complex flag
manifold $\bar{X}$ and prove results like Proposition \ref{prop1.1}
and Theorem \ref{main_thm} as in the toric Fano case.\\
Certainly, the more important (and more ambitious) task is to
generalize the constructions of SYZ mirror transformations to the
most general situations, where the bases of Lagrangian torus
fibrations are affine manifolds with both boundary and
singularities. To do this, the first step is to make the
construction of the SYZ mirror transformations become a local one.
One possible way is the following: Suppose that we have an
$n$-dimensional compact K\"{a}hler manifold $\bar{X}$, together with
an anticanonical divisor $D$. Assume that there is a Lagrangian
torus fibration $\mu:\bar{X}\rightarrow\bar{B}$, where $\bar{B}$ is
a real $n$-dimensional (possibly) singular affine manifold with
boundary $\partial\bar{B}$. We should also have
$\mu^{-1}(\partial\bar{B})=D$. Now let $U\subset
B:=\bar{B}\setminus\partial\bar{B}$ be a small open ball contained
in an affine chart of the nonsingular part of $B$, i.e.
$\mu^{-1}(b)$ is a nonsingular Lagrangian torus in $\bar{X}$ for any
$b\in U$, so that we can identify each fiber $\mu^{-1}(b)$ with
$T^n$ and identify $\mu^{-1}(U)$ with $T^*U/\mathbb{Z}^n\cong
U\times T^n$. Let $N\cong\mathbb{Z}^n$ be the fundamental group of
any fiber $\mu^{-1}(b)$, and consider the $\mathbb{Z}^n$-cover
$L\mu^{-1}(U)=\mu^{-1}(U)\times N$. Locally, the mirror manifold
should be given by the dual torus fibration
$\nu:U\times(T^n)^\vee\rightarrow U$. Denote by $\nu^{-1}(U)$ the
local mirror $U\times(T^n)^\vee$. Then we can define the
\textit{local SYZ mirror transformation}, as before, through the
fiber product $L\mu^{-1}(U)\times_U\nu^{-1}(U)$.
\begin{equation*}
\begin{CD}
L\mu^{-1}(U)\times_U\nu^{-1}(U) @> >> \nu^{-1}(U) \\
@VV V @VV \nu V \\
L\mu^{-1}(U) @>\mu>> U
\end{CD}
\end{equation*}
Now, fix a reference fiber $L_0=\mu^{-1}(b_0)$. Given $v\in N$,
define a function $\Upsilon_v:L\mu^{-1}(U)\rightarrow\mathbb{R}$ as
follows. For any point $p\in\mu^{-1}(U)$, let $L_b=\mu^{-1}(b)$ be
the fiber containing $p$, where $b=\mu(p)\in U$. Regard $v$ as an
element in $\pi_1(L_b)$. Consider the 2-chain $\gamma$ in
$\mu^{-1}(U)$ with boundary in $v\cup L_0$, and define
$$\Upsilon_v(p,v)=\exp(-\frac{1}{2\pi}\int_\gamma\omega_{\mu^{-1}(U)}),$$
where $\omega_{\mu^{-1}(U)}=\omega_{\bar{X}}|_{\mu^{-1}(U)}$ is the
restriction of the K\"{a}hler form to $\mu^{-1}(U)$. Also set
$\Upsilon_v(p,w)=0$ for any $w\in N\setminus\{v\}$ (cf. the
discussion after Lemma 2.7 in Auroux \cite{Auroux07}). This is
analogous to the definitions of the functions $\Psi_1,\ldots,\Psi_d$ in
the toric Fano case, and it is easy to see that the local SYZ mirror
transformations of functions of this kind give local holomorphic
functions on the local mirror $\nu^{-1}(U)=U\times(T^n)^\vee$. We
expect that these constructions will be sufficient for the purpose
of understanding quantum corrections due to the boundary divisor
$D$. However, to take care of the quantum corrections which arise
from the proper singular Lagrangian fibers (i.e. singular fibers
contained in $X=\mu^{-1}(B)$), one must modify and generalize the
constructions of the local SYZ mirror transformations to the case
where $U\subset B$ contains singular points. For this, new ideas are
needed in order to incorporate the wall-crossing phenomena.\\
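To illustrate the local construction in the simplest situation, suppose
(as a simplifying assumption) that the fibration is flat over $U$, i.e.
$\omega_{\mu^{-1}(U)}=\sum_{i=1}^n dx_i\wedge d\theta_i$ on
$\mu^{-1}(U)\cong U\times T^n$, and take $\gamma$ to be the cylinder
$$\gamma(s,t)=\big(x_0+s(x-x_0),\,tv\big),\qquad s\in[0,1],\ t\in[0,2\pi],$$
swept out by the class $v$ along the straight path from $b_0$ to
$b=\mu(p)$. Then
$\omega_{\mu^{-1}(U)}(\partial_s\gamma,\partial_t\gamma)=\langle v,x-x_0\rangle$,
so that
$$\int_\gamma\omega_{\mu^{-1}(U)}=2\pi\langle v,x-x_0\rangle
\quad\textrm{and}\quad
\Upsilon_v(p,v)=e^{-\langle v,x-x_0\rangle},$$
recovering, in the toric case with $x_0$ formally placed on the
corresponding facet of $\bar{P}$, the functions $\Psi_j$ (e.g.
$\Psi_{1,1}=e^{-x_1}$ in Example 2).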
\noindent\textbf{Acknowledgements.} We thank the referees for very
useful comments and suggestions. The work of the second author was
partially supported by RGC grants from the Hong Kong Government.
\section{Introduction}
In recent years, hybrid transaction/analytical processing (in short, HTAP) systems are proliferating fast.
Many giant companies provide HTAP systems, including Oracle database in-memory~\cite{lahiri2015oracle}, SQL Server~\cite{larson2015real}, SAP HANA~\cite{lee2017parallel}, MemSQL~\cite{MemSQL111} and TiDB~\cite{huang2020tidb}.
HTAP systems are popular for two reasons.
First, giant companies are demanding fresher data in shorter shipping duration for real-time customer analysis~\cite{harvard111}.
Throughout this paper, real-time means performing a task, such as data analysis or user behavior simulation, interactively in contexts like financial fraud monitoring or recommendation~\cite{pi2020search}~\cite{ma2020temporal}. Our usage of real-time differs from its traditional definition, which requires operations to complete within a hard or soft deadline~\cite{realtime111}.
The value of mass business data will diminish with time~\cite{pavlo2016s}.
Moving data from the OLTP system to the OLAP system is complex and time-consuming.
Meanwhile, it is impossible to perform real-time analysis on data that has passed a long turnaround time.
Besides, software development and maintenance for two separate systems are also expensive.
Second, the advanced modern software and hardware technologies, including in-memory computing (IMC) techniques~\cite{IMC}, multi-core processors, various levels of memory caches, and large memories~\cite{bog2012interactive}~\cite{ozcan2017hybrid}, contribute to the HTAP systems' rapid development.
There are three primary architectures for HTAP systems design and implementation.
The first one introduces extract-transform-load (ETL) processing~\cite{vassiliadis2002conceptual} between the OLTP DBMS and the data warehouse to complete data migration, data format transformation, and complex data analysis. However, ETL systems are not suitable for real-time analysis, as they introduce non-negligible time and space costs. The second one is to utilize a stream processing system~\cite{toshniwal2014storm}~\cite{zaharia2013discretized} that feeds the incoming data to a batch processing system~\cite{shvachko2010hadoop}~\cite{armbrust2015spark}.
Due to several unfavorable factors such as strong consistency semantics and double operating cost, it is not easy to use separate stream processing systems as HTAP solutions.
The other architecture uses a single HTAP DBMS~\cite{kemper2011hyper, lee2017parallel, makreshanski2017batchdb, huang2020tidb}, achieving high performance for OLTP and OLAP.
It eliminates data movement overhead but puts pressure on performance isolation and consistency guarantees. In the face of numerous HTAP solutions, it is hard to compare the performance of different systems.
We describe these three architectures in detail in Section~\ref{sec: 3.1}.
HTAP benchmarks provide quantitative metrics, methodologies, and tools to evaluate different systems and their specific design decisions, offering valuable design and implementation inputs. However, as summarized in Table~\ref{table: 4}, most state-of-the-art~\cite{coelho2017htapbench} and state-of-the-practice~\cite{cole2011mixed} HTAP benchmarks fail to consider real-time queries, semantically consistent schemas, and domain-specific workloads~\cite{qu2022current}. Hence, they are at best incommensurable and at worst misleading in benchmarking, designing, and implementing HTAP systems. We quantitatively justify why we consider these three critical factors.
First, being real-time is essential. On the one hand, real-time is crucial to customer analysis -- the fresher the data, the higher the value. On the other hand, there are widely-observed user behavior patterns -- performing real-time analysis before making a quick decision. For example, if the customer wants to order an item in e-commerce, a query to get the lowest price rather than the random price of the item is most likely to happen before ordering the item. We propose the abstraction of a hybrid transaction, which performs a real-time query in-between an online transaction, to model this user behavior pattern, which the previous HTAP benchmarks overlook.
Figure~\ref{fig: 1} shows the impact of a hybrid transaction on the performance of the online transactions in TiDB~\cite{huang2020tidb} -- a state-of-the-art HTAP system against that of a sole online transaction as a baseline.
The real-time query increases the baseline latency by a factor of 5.9, and decreases the baseline throughput by a factor of 5.9.
Second, the previous works use stitch schema. For example, the stitch schema in state-of-the-art HTAP benchmarks~\cite{cole2011mixed}\cite{coelho2017htapbench} just reuses the schema from TPC-C~\cite{TPCC111} and TPC-H~\cite{TPCH111}.
Instead, in real-world application scenarios, the data operated by the analytical query is generated by the online transaction, so the OLAP schema is a subset of the OLTP schema. We call this characteristic semantically consistent.
The stitch schema can not disclose the severe interferences between analytical workload and transactional workloads in real-world scenarios. Our experiment result shows that the analytical workloads decrease transactional throughput by 89\% using the semantically consistent schema, rather than 10\% using stitch schema in the previous work~\cite{huang2020tidb}.
Last but not least, most of the previous works only provide a general HTAP benchmark.
The generic benchmark reflects a wide range of use cases. Instead, real-world applications have different workloads and schema with varying resource demands. So we propose the domain-specific benchmarks, which are specialized to evaluate the performance of HTAP systems in one or several specific scenarios.
In Section~\ref{sec: 6}, our evaluation shows that the peak transactional throughput of a general benchmark is nearly 20 times that of a domain-specific benchmark on the same testbed.
Their performance varies greatly depending on the complexity of the relationships in the table, the read/write ratio of the transaction, and different system resource requirements.
We propose an HTAP benchmark suite in addition to a benchmarking framework named OLxPBench to help users perform performance comparisons. The main contributions of this paper are as follows.
(1) We quantitatively justify why we should consider real-time queries, semantically consistent schema, and domain-specific workloads in HTAP benchmarking~\cite{zhan2021call}. OLxPBench proposes new built-in hybrid workloads that perform a real-time query in-between an online transaction; a semantically consistent HTAP schema; one general benchmark; and two domain-specific benchmarks to evaluate the HTAP systems, including retail, banking, and telecommunications~\cite{harvard111}.
(2) We design and implement an extensible HTAP benchmarking framework and three comprehensive HTAP benchmarks: subenchmark, fibenchmark, and tabenchmark.
Compared with the most related work, CH-benCHmark~\cite{cole2011mixed}, our work is non-trivial: there are eighteen analytical queries and seventeen hybrid queries tailored for the different benchmarks.
OLxPBench provides valuable experience in designing and implementing schematically consistent schema, hybrid workloads, and domain-specific benchmarks in HTAP benchmarking.
(3) Extensive experiments are conducted on the two mainstream HTAP systems: MemSQL and TiDB using OLxPBench against CH-benCHmark~\cite{cole2011mixed}. We have observed the following insights.
The vertical table technique adopted by MemSQL is not very helpful for hybrid workloads because the large number of join operations generated by relationship query statements increases the waiting time of the hybrid transactions;
The mainstream HTAP systems have poor performance in scanning operations for composite primary keys; The mutual interference between online transactions and analytical queries causes poor performance isolation.
\section{Related Works} ~\label{sec: 2}
We compare OLxPBench with five other state-of-the-art and state-of-the-practice HTAP benchmarks in Table~\ref{table: 4}.
We classify the HTAP benchmarks into two groups according to the complexity of their workloads.
The one contains intricate transactions and queries, such as CH-benCHmark~\cite{cole2011mixed}, CBTR~\cite{bog2012interactive}, and HTAPBench~\cite{coelho2017htapbench}.
The other includes a mix of simple insert/select operations, i.e., ADAPT~\cite{arulraj2016bridging} and HAP~\cite{athanassoulis2019optimal}.
The real-time queries generally involve simple
$aggregate$ operations and the analytical queries include more complex operations.
\begin{table*}
\centering
\caption{Comparison of OLxPBench With State-of-the-art and State-of-the-practice Benchmarks.}
\scalebox{0.80}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
Name&Online transaction&Analytical query&Hybrid transaction&Real-time query&Semantically consistent schema&General benchmark&Domain-specific benchmark\\
\hline
CH-benCHmark&$\surd$&$\surd$&$\times$&$\times$&$\times$&$\surd$&$\times$\\
\hline
CBTR&$\surd$&$\surd$&$\times$&$\times$&$\surd$&$\times$&$\surd$\\
\hline
HTAPBench&$\surd$&$\surd$&$\times$&$\times$&$\times$&$\surd$&$\times$\\
\hline
ADAPT&$\times$&$\times$&$\times$&$\times$&$\surd$&$\surd$&$\times$\\
\hline
HAP&$\times$&$\times$&$\times$&$\times$&$\surd$&$\surd$&$\times$\\
\hline
OLxPBench&$\surd$&$\surd$&$\surd$&$\surd$&$\surd$&$\surd$&$\surd$\\
\hline
\end{tabular}}
\label{table: 4}
\vspace{-0.5cm}
\end{table*}
CH-benCHmark launches the online transactions adopted from TPC-C and analytical queries from TPC-H concurrently on the stitch schema; however, they have different business semantics.
Moreover, CH-benCHmark never updates the Supplier, Nation, and Region tables used by OLAP, since the online transactions only update part of the OLAP tables under the stitch schema. Thus, its OLTP and OLAP workloads operate on different data, which further covers up the contention under massive concurrency.
HTAPBench uses the same schema model with CH-benCHmark.
Besides, HTAPBench~\cite{coelho2017htapbench} implements the Client Balancer to control the number of analytical queries to avoid too many analytical queries affecting the performance of online transactions.
CBTR mimics the order-to-cash process of real enterprise systems~\cite{bog2012interactive} and provides more complex as well as dynamic workloads.
Indeed, CBTR provides effective guidance for ad-hoc database design.
Unfortunately, CBTR does not include real-time queries and is not open-source.
Besides, its single domain-specific benchmark is insufficient to evaluate HTAP solutions in various scenarios.
ADAPT benchmark has two tables -- a narrow table and a wide table~\cite{arulraj2016bridging}.
The operations are abstracted from a real production environment, and read-only operations account for 80\%.
The HAP benchmark is based on the ADAPT benchmark and adds update and delete operations to test the storage engine.
The read-only operations are 50\%. Overall, the operations are too simple to represent the complex transactions of real business environments.
When designing OLxPBench, we choose the semantically consistent schema rather than the stitched schema~\cite{coelho2017htapbench}\cite{cole2011mixed} to expose the original interference between OLTP workloads and OLAP workloads.
Besides, we increase the real-time query for real-time user behavior analyzing and simulating.
We also obey both general~\cite{TPCC111} and domain-specific~\cite{seltzer1999case} principles.
The general benchmark helps designers perform performance comparisons, and the domain-specific benchmarks help the user select the HTAP DBMS that best support their specific workloads.
Moreover, we compare the semantically consistent schema to stitched schema~\cite{cole2011mixed} in Section~\ref{sec: 5.3.1} because CH-benCHmark~\cite{cole2011mixed} originates from the HTAP transactional benchmark.
\section{Background and Motivation} ~\label{sec: 3}
\subsection{The Background of HTAP Systems} ~\label{sec: 3.1}
HTAP DBMSs need to perform trade-offs considering different performance requirements of different workloads.
Currently, there are three types of HTAP solutions, and we compare their pros and cons in the following subsections.
The first solution is to use separate DBMSs~\cite{raman2013db2, lahiri2013oracle, raza2020adaptive, yang2020f1} to achieve high performance of online transactions and analytical queries. Generally, online transactions adopt a row-based store due to its high efficiency for records insert and update. Analytical queries often adopt a column-based data warehouse since it supports efficient data scans. However, separate DBMS needs to convert row-based data to column-based ones, i.e., ETL processing, which is too time-consuming to analyze the latest data and make instant decisions.
For example, Pavlo et al.~\cite{pavlo2016s}\cite{ozcan2017hybrid} refer to the standard ETL process to migrate data from the OLTP system to the OLAP system as one of the solutions to HTAP.
Second, the lambda architecture~\cite{marz2013big, kiran2015lambda, lin2017lambda}, which consists of a real-time stream processing system and a batch processing system, can perform real-time analytics on incoming data, but it is expensive.
The real-time stream processing~\cite{toshniwal2014storm} systems provide views of online data while simultaneously using batch processing~\cite{HADOOP111} to provide comprehensive and accurate views of batch data.
Besides, the serving layer merges the real-time view with the batch views and then responses to the end-user.
The lambda architecture provides a real-time analysis at a considerable cost, including double write costs, double or more development costs, and so on.
In brief, the cost of maintaining two systems is also very high.
Third, using a single HTAP DBMS to handle online transactions and real-time queries.
Because real-time analytics on fresh data is valuable, the lightweight propagation technique is developed to transfer recent transactional logs to the analytical storage node in a more short and flexible schedule~\cite{makreshanski2017batchdb}.
Microsoft SQL Server~\cite{larson2015real} stores hot inserted and updated rows in middle delta storage to transfer them to OLAP storage and speed up query processing.
Oracle in-memory database~\cite{lahiri2015oracle} keeps a dual-format store for OLTP and OLAP workloads without double memory requirements.
It uses a read-only snapshot maintained in memory for analysis.
Their latest work~\cite{mukherjee2016fault} provides a more available distributed architecture and fault tolerance than the original.
MemSQL uses an in-memory row-store and an on-disk column-store to handle highly concurrent operational and analytical workloads~\cite{MemSQL111}.
The storage layer of TiDB consists of a row-based store and a column-based store.
To analyze real-time queries of the fresh data, TiDB uses asynchronous log replication to keep the data consistent~\cite{huang2020tidb}.
The IBM Db2 Analytics Accelerator uses replication technology to enhance its real-time capacity~\cite{butterstein2020replication}.
VEGITO~\cite{shen2021retrofitting} retrofits the high availability mechanism to support HTAP workloads.
\subsection{Motivation} ~\label{sec: 3.2}
\subsubsection{It is mandatory to include real-time queries in HTAP benchmarking\label{sec: 3.2.1}}
HTAP benchmarking should contain real-time queries for the following reasons.
First, real-time queries matter in customer analysis.
HTAP DBMSs enable more informed and in-business real-time decision-making~\cite{MGHIC}.
Real-time customer analysis is crucial because it is the basis of instant decision-making and fraud detection.
The fresher the data, the higher the value.
Real-time queries are usually executed on the recent data committed by transactions.
For example, if an item requested by a customer has been sold out according to the real-time inventory, similar ones will be recommended instantly.
Second, real-time queries can be used to mimic real-time user behavior.
For example, if a customer wants to create a ${New\_Order}$ transaction in TPC-C~\cite{TPCC111}, what is most likely to happen before selecting an item during the ${New\_Order}$ transaction~\cite{cole2011mixed} is a real-time query that finds the lowest price of the goods, rather than the random price.
However, none state-of-the-art~\cite{coelho2017htapbench} and state-of-the-practice~\cite{cole2011mixed} HTAP benchmarks provide the workloads that include real-time queries imitating user behavior.
Figure~\ref{fig: 1} shows the impact of a real-time query on the performance of TiDB~\cite{huang2020tidb} -- a state-of-the-art HTAP system. The ${New\_Order}$ transaction is the same as the ${New\_Order}$ transaction in TPC-C~\cite{TPCC111}. The real-time query is an aggregate operation that gets the item's lowest price in real time.
Real-time queries in the subenchmark are from a top-tier E-commerce internet service provider.
The experimental setup is the same as Section~\ref{sec: 5.1}.
When a real-time query is injected into the ${New\_Order}$ transaction~\cite{coelho2017htapbench}~\cite{cole2011mixed}, the average latency increases by 5.9x and the throughput decreases by 5.9x.
So it is mandatory to include real-time queries in the HTAP benchmarks, or else the evaluation result will be misleading~\cite{zhan2021call}.
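This user-behavior pattern, an aggregate query issued inside the write transaction, can be sketched in a few lines. The sketch below is an illustrative toy using SQLite; the table and column names are hypothetical and are not OLxPBench's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Offers for item 1 from three sellers (hypothetical schema).
cur.execute("CREATE TABLE offer (i_id INTEGER, seller TEXT, price REAL)")
cur.executemany("INSERT INTO offer VALUES (?, ?, ?)",
                [(1, "a", 9.9), (1, "b", 7.5), (1, "c", 8.2)])
cur.execute("CREATE TABLE orders (o_id INTEGER PRIMARY KEY, i_id INTEGER, price REAL)")
conn.commit()

# Hybrid transaction: a New_Order-style write that embeds a real-time
# query (the item's current lowest price) instead of a random price.
with conn:  # a single transaction
    (lowest,) = cur.execute(
        "SELECT MIN(price) FROM offer WHERE i_id = ?", (1,)).fetchone()
    cur.execute("INSERT INTO orders (i_id, price) VALUES (?, ?)", (1, lowest))

print(cur.execute("SELECT price FROM orders").fetchone()[0])  # 7.5
```

The point of the sketch is that the aggregate and the insert run in the same transaction, so the write is based on the freshest committed data.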
\subsubsection{Semantically consistent schema is essential\label{sec: 3.2.2}}
\begin{table*}[t]
\centering
\caption{Features of the OLxPBench workloads.}
\scalebox{0.89}{
\begin{tabular}{c c c c c c c c c}
\hline
Benchmark&Tables&Columns&Indexes&OLTP Transactions&Read-only OLTP Transactions&Queries&Hybrid Transactions&Read-only Hybrid Transactions\\
\hline
Subenchmark&9&92&3&5&8.0\%&9&5&60.0\%\\
Fibenchmark&3&6&4&6&15.0\%&4&6&20.0\%\\
Tabenchmark&4&51&5&7&80.0\%&5&6&40.0\%\\
\hline
\end{tabular}}
\label{table: 3}
\vspace{-0.5cm}
\end{table*}
The state-of-the-art~\cite{coelho2017htapbench} and state-of-the-practice~\cite{cole2011mixed} HTAP benchmarks all use stitch schema.
The stitch schema integrates the TPC-C schema~\cite{TPCC111} with TPC-H~\cite{TPCH111} schema and includes 12 tables.
The ${NEW-ORDER}$, ${STOCK}$, ${CUSTOMER}$, ${ORDERLINE}$, ${ORDERS}$, and ${ITEM}$ tables are accessed by TPC-C and TPC-H.
The ${WAREHOUSE}$, ${DISTRICT}$, and ${HISTORY}$ tables are only accessed by TPC-C. The ${SUPPLIER}$, ${NATION}$, and ${REGION}$ tables are accessed by TPC-H only.
Both TPC-C and TPC-H keep the third normal form to reduce data duplication.
There are two flaws in such a stitch schema:
First, OLTP and OLAP operate on the same business data in real scenarios; however, the query only analyzes one-sided business data with the stitch schema, leading to biased decisions.
For example, in CH-benCHmark, the stitch schema only allows queries to analyze data from the shared six tables between TPC-C and TPC-H.
When the ${Payment}$ transaction in CH-benCHmark is completed, a record will be written in the history table.
The records in the history table are essential for analyzing the custom's behavior.
However, none of the analysis queries in previous benchmarks~\cite{coelho2017htapbench}~\cite{cole2011mixed} can analyze the tens of thousands of records in the history table.
In addition, there is no query to analyze the warehouse table and district table of TPC-C~\cite{coelho2017htapbench} ~\cite{cole2011mixed}.
It is very costly to discard valuable parts of OLTP data.
The stitch schema leads the results of the analytical queries to be partial, perplexing, and incorrect.
Second, with stitch schema, the competitions between analytical workload and transactional workloads are hidden, making it impossible to fairly evaluate the interference between analytical workload and transactional workloads in real-world scenarios.
The online transactions and analytical queries operate on the same business data in the real world, so intense competitions for resources are not avoidable.
However, in the previous benchmarks~\cite{coelho2017htapbench, cole2011mixed}, 45.4\%, 40.9\%, and 13.6\% of the 22 queries on the stitch schema access the ${SUPPLIER}$, ${NATION}$, and ${REGION}$ tables, respectively, in which records are never updated or inserted.
The low competition between the analytical and transactional workloads propagates a false impression that the HTAP system can guarantee isolated performance for separate OLTP and OLAP workloads~\cite{huang2020tidb}.
In Section~\ref{sec: 6.1.2}, we use the general benchmark in OLxPBench, which uncovers the high competition between the OLTP and OLAP workloads, to evaluate TiDB and find that the throughput interference on OLTP and OLAP is as high as 89\% and 59\%, respectively.
\subsubsection{Domain-specific benchmarks should be included\label{sec: 3.2.3}}
CH-benCHmark~\cite{cole2011mixed} is a general benchmark that fails to evaluate the HTAP system performance in a particular application scenario.
In Section~\ref{sec: 6}, our evaluation shows that the peak transactional throughput of a general benchmark is nearly 20 times that of a domain-specific benchmark on the same testbed.
In the face of numerous HTAP solutions, there is an urgent need to consider generic and domain-specific HTAP DBMS benchmarks.
OLxPBench provides one generic benchmark and two domain-specific benchmarks for evaluating HTAP systems.
In Sections~\ref{sec: 6.2.1} and~\ref{sec: 6.3.2}, our evaluation shows that the peak transactional throughput of a domain-specific benchmark is nearly 200 times that of another domain-specific benchmark on the same testbed.
OLxPBench implements an extensible framework that makes it easy for developers to add a new benchmark.
\section{The design and implementation~\label{sec: 4}}
To fully evaluate HTAP DBMSs, we present OLxPBench, consisting of a general benchmark and two domain-specific benchmarks.
The general benchmark, which we name subenchmark, extracts complex operations from retail activity and does not attempt to model an actual application scenario~\cite{TPCC111}; it is intended for performance comparisons across HTAP DBMSs.
Meanwhile, OLxPBench has two domain-specific benchmarks, named fibenchmark and tabenchmark, which model the financial~\cite{alomari2008cost} and telecommunication~\cite{TATP111} scenarios and help users select the HTAP DBMS that best supports their specific workloads.
This section introduces the OLxPBench suite from the schema model design, the details of workloads, the benchmark categories, and implementation.
\subsection{HTAP Schema model design}~\label{sec: 4.1}
We follow three principles in designing the HTAP schema.
(1) Any record accessible to OLTP should be accessible to OLAP, because online transactions generate the data that the analytical queries will analyze.
Hence, the OLTP schema set should include the OLAP schema.
We are the first to propose that, in HTAP benchmarks, the mixed OLTP and OLAP workloads should use a semantically consistent schema, which reveals the inherent interference between the OLTP and OLAP workloads.
(2) The schema models should be diverse and practical for thoroughly evaluating the various HTAP solutions.
The diversity of schema models is reflected in the diversity of practical uses.
Therefore, we provide the generic schema model for performance comparisons and two domain-specific schema models for users to select the HTAP DBMS that best supports their specific workloads.
We choose schema models for retail, banking, and telecommunications activities because the above practitioners were among the first to adopt the HTAP solutions~\cite{harvard111}.
(3) The design of integrity constraints should accommodate the implementation of specific HTAP databases.
For example, some HTAP DBMSs (such as MemSQL~\cite{MemSQL111}) do not currently support foreign keys.
As a result, OLxPBench's schema models come in two versions, one without foreign-key constraints and one with them, which users can choose on demand.
\subsection{The details of HTAP Workloads}~\label{sec: 4.2}
OLxPBench contains nine built-in workloads with different types and complexity.
Three online transaction workloads are extracted from popular benchmarks~\cite{TATP111, TPCC111, cahill2009serializable}.
In addition, we add three analytical query workloads and three hybrid transaction workloads for real-time customer analysis and for simulating real-time user behavior.
We distill the E-commerce services from an industry partner, which we keep anonymous at its request, into representative real-time queries.
The analytical workloads contain complex analytical operations such as \textit{multi-join}, \textit{sub-selection}, \textit{Group-By}, and \textit{Order-By} operations on the different schema models.
Table~\ref{table: 3} describes the features of these benchmarks.
In more specific implementations, we modify the integrity constraints of the schemas of SmallBank~\cite{alomari2008cost} and TATP~\cite{TATP111} to fit the implementation of MemSQL~\cite{MemSQL111}.
Furthermore, we add a composite primary key to TATP~\cite{TATP111}, which is common in real business scenarios.
OLxPBench provides valuable experience for schema model design and hybrid transaction abstraction.
The request rates, transaction/query weights, and schema relations are configurable for different testing purposes.
This subsection will introduce the details of these benchmarks in turn.
\subsubsection{Subenchmark} ~\label{sec: 4.2.1}
The subenchmark is inspired by TPC-C~\cite{TPCC111}, which is not bound to a specific scenario and which the community considers a general benchmark for OLTP system evaluation.
The online workloads of the subenchmark are the same as TPC-C's transactions, which are write-heavy with merely 8\% read-only transactions.
The online transactions include ${NewOrder}$, ${Payment}$, ${OrderStatus}$, ${Delivery}$, and ${StockLevel}$.
The nine analytical queries in the subenchmark keep the essential characteristics such as the complexity of operations. The analytical queries perform multi-join, aggregation, grouping, and sorting operations on a semantically consistent schema.
For example, the Orders Analytical Report Query (Q1) is designed for getting the magnitude summary for all $\textit{ORDER\_LINE}$ items as of a given date.
The query lists the total quantity, total amount, average quantity, and average amount for further analysis.
The above aggregates are grouped by their number and listed in ascending order.
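The Orders Analytical Report Query described above can be sketched as follows; the column names (`ol_number`, `ol_quantity`, `ol_amount`, `ol_delivery_d`) follow the TPC-C $\textit{ORDER\_LINE}$ convention but are assumptions here, and an in-memory SQLite database stands in for the HTAP engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_line "
             "(ol_number INT, ol_quantity INT, ol_amount REAL, ol_delivery_d TEXT)")
conn.executemany("INSERT INTO order_line VALUES (?, ?, ?, ?)",
                 [(1, 5, 50.0, "2022-01-02"),
                  (1, 3, 30.0, "2022-01-03"),
                  (2, 4, 80.0, "2022-01-04")])

# Q1 sketch: totals and averages per line number as of a given date,
# grouped by their number and listed in ascending order.
q1 = conn.execute("""
    SELECT ol_number,
           SUM(ol_quantity), SUM(ol_amount),
           AVG(ol_quantity), AVG(ol_amount)
    FROM order_line
    WHERE ol_delivery_d >= '2022-01-01'
    GROUP BY ol_number
    ORDER BY ol_number ASC
""").fetchall()
```

Each result row carries the total quantity, total amount, average quantity, and average amount for one line number, as the query description requires.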
We newly add five hybrid transactions, and the default configuration of the subenchmark has 60\% read-only hybrid transactions.
The real-time queries that simulate user behavior are representative aggregation operations in actual E-commerce production applications: for example, when a customer creates a ${New\_Order}$ transaction, a query fetches the lowest price of the item rather than a random price (X1).
\subsubsection{Fibenchmark}~\label{sec: 4.2.2}
The fibenchmark is inspired by SmallBank~\cite{alomari2008cost}, which aims at bank scenarios.
Hence, it is a domain-specific benchmark.
The fibenchmark contains three tables: $\textit{ACCOUNT}$, $\textit{SAVING}$, and $\textit{CHECKING}$, and the transactions mainly modify the customers' accounts.
The online transactions are $\textit{Amalgamate}$, $\textit{Balance}$, $\textit{DepositChecking}$, $\textit{SendPayment}$, $\textit{TransactSavings}$, and $\textit{WriteCheck}$.
Fifteen percent of the above transactions are read-only in the default configuration.
We keep all the online transactions of SmallBank~\cite{alomari2008cost}, and we newly add the analytical workloads and the hybrid transactions in fibenchmark.
The analytical workloads perform real-time customer account analytics.
The complex queries include \textit{join}, \textit{aggregate}, \textit{sub-selection}, \textit{Order-By}, and \textit{Group-By} operations.
For example, the Account Name Query (Q1) lists the names from the rows that combine the $\textit{ACCOUNT}$ and $\textit{CHECKING}$ tables.
Besides, the real-time queries in hybrid transactions are generally the ${aggregate}$ operations and perform the real-time financial analysis on the user's account.
There are six hybrid transactions, and the default configuration of the fibenchmark has 20\% read-only hybrid transactions.
For example, the Checking Balance Transactions (X6) checks whether the cheque balance is sufficient and aggregates the value of the minimum savings.
The volatility of extreme values is also an important research topic in the financial field.
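The Account Name Query (Q1) above can be sketched as a join; the table and column names are assumptions loosely based on the SmallBank schema, with an in-memory SQLite database as a stand-in back-end:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account  (custid INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE checking (custid INTEGER, bal REAL)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, "alice"), (2, "bob")])
conn.execute("INSERT INTO checking VALUES (1, 100.0)")

# Q1 sketch: list the names from the rows combining ACCOUNT and CHECKING.
names = conn.execute("""
    SELECT a.name
    FROM account a JOIN checking c ON a.custid = c.custid
""").fetchall()
```

Only customers present in both tables appear in the result, which is the combining-row semantics described above.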
\begin{figure}[!t]
\centering
\includegraphics[scale=0.30]{olxparch.pdf}
\caption{OLxPBench architecture.}
\label{fig: 2}
\vspace{-0.5cm}
\end{figure}
\subsubsection{Tabenchmark} ~\label{sec: 4.2.3}
The tabenchmark is inspired by TATP~\cite{TATP111}, which aims at telecom scenarios.
Hence, it is a domain-specific benchmark.
The online transactions in the tabenchmark simulate a typical Home Location Register (HLR) database used by a mobile carrier~\cite{TATP111}.
Eighty percent of online transactions are read-only, and the transactions are $\textit{DeleteCallForwarding}$, $\textit{GetAccessData}$, $\textit{GetNewDestination}$, $\textit{GetSubscriberData}$, $\textit{InsertCallForwarding}$, $\textit{UpdateLocation}$, and $\textit{UpdateSubscriberData}$.
We modify the primary key of the ${SUBSCRIBER}$ table from ${s\_id}$ to ${(s\_id, sf\_type)}$ because the composite primary key is standard in real business scenarios.
The original data definition language file remains available as an option.
We keep all the online transactions of TATP~\cite{TATP111}, and we newly add the analytical workloads and the hybrid transactions in tabenchmark.
The analytical queries help the mobile network operators to analyze user behavior in real-time.
The analytical queries also comprise \textit{arithmetic} operations in addition to the operations in the fibenchmark.
For example, the Start Time Query (Q3) calculates the average of the starting time of the call forwarding.
The average value of start time is essential for load forecasting.
The real-time queries perform the real-time activities on practical mobile users.
The real-time query not only performs an aggregation operation but also does a fuzzy search based on a sub-string.
For example, the Fuzzy Search Transaction (X6) queries all information about the subscriber.
It selects the subscriber IDs whose user data matches the fuzzy search criteria.
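The sub-string fuzzy search of X6 can be sketched with a `LIKE` predicate; the schema and data below are hypothetical, with SQLite standing in for the HTAP engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscriber (s_id INTEGER, sub_nbr TEXT)")
conn.executemany("INSERT INTO subscriber VALUES (?, ?)",
                 [(1, "100042"), (2, "555000")])

# X6 sketch: select the subscriber IDs whose user data matches a
# fuzzy sub-string criterion.
ids = conn.execute(
    "SELECT s_id FROM subscriber WHERE sub_nbr LIKE ?", ("%42%",)
).fetchall()
```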
\subsection{The implementation of OLxPBench}~\label{sec: 4.3}
OLxPBench is used for evaluating distributed HTAP DBMSs and other HTAP DBaaS systems that support SQL through JDBC. The architecture of the OLxPBench is shown in Figure~\ref{fig: 2}. OLxPBench parses configuration files at runtime and generates the corresponding hybrid workloads. Then the hybrid workloads are populated in request queues.
The request rates, transaction types, real-time query types, weights, and target DB configuration are specified in the XML file.
The thread connects to the target database by JDBC pool and pulls requests from the request queue.
The threads also measure the latency and throughput metrics.
Finally, the statistics module aggregates the above metrics and stores the min, max, median, 90th, 95th, 99.9th, and 99.99th percentile latency in a file specified by the user in the terminal.
The open-loop mode sends requests with a precise request-rate control mechanism: the open-loop load generator sends a request without waiting for the previous one to come back.
In contrast, in the closed-loop mode, the response to a request triggers the sending of a new request.
Besides, the users can customize the weights of various online transactions and analytical queries.
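The open- and closed-loop send disciplines can be sketched as follows; `open_loop_send` and `closed_loop_send` are hypothetical helper names, not part of the OLxPBench code base:

```python
import queue
import time

def open_loop_send(request_rate, duration, out_queue):
    """Open-loop: enqueue requests at a fixed rate, never waiting for replies."""
    interval = 1.0 / request_rate
    for i in range(int(duration * request_rate)):
        out_queue.put(("request", i))
        time.sleep(interval)          # paced by the wall clock, not by responses

def closed_loop_send(n_requests, handle):
    """Closed-loop: the response to one request triggers the next one."""
    for i in range(n_requests):
        handle(i)                     # blocks until the response comes back
```

In the open-loop case the offered load is fixed by the configured rate, while in the closed-loop case it adapts to the server's response times.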
OLxPBench is inspired by OLTP-Bench's OLTP module~\cite{difallah2013oltp} and newly adds analytical and hybrid modules.
OLxPBench implements three combination modes of online and analytical agents for different HTAP solutions.
The first mode sequentially sends the online transactions or analytical queries.
The second mode concurrently invokes the transactional workload and analytical workload.
The last mode sends hybrid transactions performing a real-time query in-between an online transaction to simulate the user behavior.
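A hybrid transaction of the third mode can be illustrated with a minimal sketch; the table and column names are hypothetical, and SQLite stands in for the HTAP back-end:

```python
import sqlite3

# Autocommit mode so that BEGIN/COMMIT are managed explicitly below.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE item   (i_id INTEGER, i_price REAL)")
conn.execute("CREATE TABLE orders (o_id INTEGER, i_id INTEGER, price REAL)")
conn.executemany("INSERT INTO item VALUES (?, ?)", [(1, 9.5), (1, 7.0), (2, 3.0)])

# One hybrid transaction: a real-time aggregate query runs in-between
# the online statements, inside the same transaction.
cur = conn.cursor()
cur.execute("BEGIN")
lowest = cur.execute("SELECT MIN(i_price) FROM item WHERE i_id = 1").fetchone()[0]
cur.execute("INSERT INTO orders VALUES (?, ?, ?)", (100, 1, lowest))
cur.execute("COMMIT")
```

Because the aggregate and the insert share one transaction, the insert cannot proceed until the real-time query finishes, which is the waiting effect discussed in the evaluation.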
The OLxPBench client is a Java program and is easy to extend with new hybrid database back-ends.
\section{Evaluation}~\label{sec: 5}
Our evaluation illustrates the effectiveness of OLxPBench. In Section~\ref{sec: 5.3}, we compare OLxPBench with the state-of-the-practice~\cite{cole2011mixed} work, testing the key features of OLxPBench and reporting the standard deviation of absolute value. From Section~\ref{sec: 6.1} to Section~\ref{sec:find}, we evaluate the mainstream distributed HTAP DBMSs using OLxPBench and pinpoint the bottlenecks of two mainstream distributed HTAP DBMSs. In Section~\ref{sec: 6.3}, we evaluate the scaling capability of TiDB, MemSQL, and OceanBase.
\subsection{Experimental Setup~\label{sec: 5.1}}
\subsubsection{Cluster Deployment}~\label{sec: 5.1.1}
We deploy a 4-node cluster for our evaluation. Each server includes 2 Intel Xeon [email protected] CPUs, 64 GB memory, and 1TB SSD.
Each CPU has 6 physical cores, and hyper-thread is enabled.
We use all of the 24 hardware threads.
For the scaling capability experiments in Section~\ref{sec: 6.3}, we deploy a 16-node cluster, with each cloud server including 8 Intel Xeon Platinum [email protected] virtual CPUs, 32 GiB memory, and 140GiB enhanced solid-state disk (ESSD).
We use all of the 8 threads.
All machines are configured with Intel Ethernet 1GbE NICs.
The operating system is Ubuntu 16.04 with Linux kernel 4.4.
\subsubsection{Database Deployment} ~\label{sec: 5.1.1}
In our experiments, the 4-node configuration ensures that the components of the systems under test are deployed in a distributed manner.
TiDB~\cite{huang2020tidb} is a Raft-based HTAP database. TiSpark is a powerful analysis engine that helps TiDB connect to the Hadoop ecosystem.
The SQL engine processes online transactions and analytical queries.
The distributed storage layer consists of a row store (TiKV) and a columnar store (TiFlash).
Two TiKV instances are deployed on two servers with a TiDB SQL engine instance.
Two TiFlash instances are deployed on the other servers with a TiSpark instance.
The TiDB~\cite{huang2020tidb} version is 5.0.0-RC, and the replication factor is two.
TiDB provides snapshot isolation or repeatable read isolation levels, and we choose repeatable read isolation in the experiments.
The MemSQL~\cite{MemSQL111} cluster consists of aggregator nodes and leaf nodes.
The aggregator nodes receive queries and distribute them to leaf nodes.
The leaf nodes store data and execute queries issued by the aggregator nodes.
The version of MemSQL is 7.3, and the replication factor is two.
A MemSQL cluster consists of at least one master aggregator node and one leaf node.
We deploy two leaf nodes, one aggregator node, and one master aggregator node on four separate servers.
MemSQL only supports the read committed isolation level.
We retain the default configurations of the above two distributed hybrid databases.
\begin{figure}[!t]
\includegraphics[height=3.8cm, width=9.0cm,center]{schemamodel.png}
\caption{Comparing the schema model of OLxPBench and CH-benCHmark on TiDB cluster.}
\label{fig: schemamodel}
\vspace{-0.3cm}
\end{figure}
\begin{figure}
\includegraphics[height=3.81cm, width=8.9cm, left]{lock.png}
\caption{Comparing the lock overhead of different schema models.}
\label{fig: lock}
\vspace{-0.5cm}
\end{figure}
\subsubsection{Workloads Deployment}~\label{sec: 5.1.2}
The workloads in the following experiments have three sources: subenchmark, fibenchmark, and tabenchmark.
Each benchmark contributes two workload mixes: (1) the OLTP agents and OLAP agents launch mixtures of online transactions and analytical queries; (2) the hybrid agents send hybrid workloads performing a real-time query in-between an online transaction to simulate user behavior.
We do not test the performance of the cold start procedures, so there is a 60-second warm-up period before the 240-second run time.
All the workloads are open-loop, and the request rates used in the experiment depend on the cluster's peak performance.
The request rates vary during the experiment, and the warehouse quantity is 50.
The interference between online transactions and analytical queries increases with the increasing request rates. The transactional/analytical request rate unit is transactions per second (tps). OLxPBench reports the average latency, tail latency, and throughput.
\subsection{The evaluation of OLxPBench key design features}\label{sec: 5.3}
In this subsection, we demonstrate the design features of OLxPBench: \textbf{(1) which schema model to adopt}, \textbf{(2) how real-time queries impact the performance of HTAP systems}, and \textbf{(3) why domain-specific benchmarks should be considered}.
The following results are averaged over three runs.
\subsubsection{Schema Model}~\label{sec: 5.3.1}
We explore the performance difference between the semantically consistent and traditional stitched schema in the same TiDB~\cite{huang2020tidb} cluster.
We choose the state-of-the-practice HTAP benchmark, CH-benCHmark~\cite{cole2011mixed}, as the reference because it is still the most popular HTAP benchmark.
The average number of requests \textit{L} equals the long-term average arrival rate \textit{$\lambda$} multiplied by the average latency \textit{W} that a request spends in the system. Little's Law~\cite{little2008little} states
\begin{equation}
L=\lambda W
\end{equation}
According to Little's Law~\cite{little2008little}, the load stress in the TiDB is directly influenced by the average number of requests \textit{L} rather than the different average request arrival rates \textit{$\lambda$} of open data generator (OLxPBench) and closed data generator (CH-benCHmark). So, when the average number of requests \textit{L} in the queuing system (TiDB) is fixed, the load stress in the TiDB is fixed.
The average number of online transactions in a stable TiDB cluster over the long term is around 45.
We drop the write-heavy transactions such as \emph{NewOrder} and \emph{Payment} to reduce the possibility of load imbalance.
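As a quick arithmetic check of Little's Law, with illustrative numbers rather than measured values:

```python
def little_law_L(arrival_rate_tps, avg_latency_s):
    """Little's Law: L = lambda * W."""
    return arrival_rate_tps * avg_latency_s

# Illustrative only: requests arriving at 90 tps with 0.5 s average
# latency keep L = 45 requests in the system, matching the long-term
# average observed above.
L = little_law_L(90, 0.5)
```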
\textbf{Test Case 1: Varied OLAP pressures.}
The number of OLAP threads increases from zero to two. We put incremental OLAP pressure on the tested systems to disturb the performance of online transactions and compare the latency of the different schema designs.
We normalize against the respective baselines of OLxPBench and CH-benCHmark to make a fair comparison.
Little's Law does not guarantee that OLxPBench and CH-benCHmark have the same request transmission rate, so comparing absolute values would be unfair.
Figure~\ref{fig: schemamodel} shows that the normalized average latency of online transactions in OLxPBench more than doubles under the lowest OLAP pressure compared to no OLAP pressure.
However, the normalized average latency of online transactions in CH-benCHmark increases by no more than one-fifth of the baseline under the same OLAP pressure.
Under enormous OLAP pressure, the normalized average latency of online transactions in OLxPBench increases more than three times.
Each OLAP thread sends one OLAP query per second.
The OLAP queries are time-consuming table-scan operations that bring a large amount of data into the buffer pool and evict an equivalent amount of older data.
Two OLAP threads can generate enormous pressure and cause a significant increase in server-side average CPU utilization.
At the same time, the normalized average latency of online transactions in CH-benCHmark increases by around 48 percent of the baseline.
TiDB~\cite{huang2020tidb} provides a row-based store, TiKV, and a column-based store, TiFlash.
The data in TiFlash is kept consistent with the data in TiKV through asynchronous log replication.
Nevertheless, the table-scan operations can occur in the row store of TiKV or the column store of TiFlash.
As the number of OLAP agents increases, the number of table-scan operations increases.
We use the Linux Perf performance-monitoring tool to obtain the performance events and count the lock overhead.
According to the Linux Perf tool's manual, samples are performance data collected using the \textit{perf record} command. Lock samples indicate the number of samples collected in lock functions.
Lock overhead includes the syscall overhead of mutual exclusion (mutex) locks, fast userspace mutex (futex), and spinlock.
Lock overhead \textit{LO} equals the number of lock samples \textit{LS} divided by the total number of samples \textit{TS}.
The baseline lock overhead \textit{BLO} is the lock overhead of the online transactions without analytical query influencing.
\begin{figure}[!t]
\includegraphics[height=3.80cm, width=9cm, center]{query.png}
\caption{Comparing the analytical queries to the real-time queries of subenchmark on the TiDB cluster.}
\label{fig: query}
\vspace{-0.5cm}
\end{figure}
The normalized lock overhead \textit{NLO} is the lock overhead \textit{LO} divided by the baseline lock overhead \textit{BLO}.
\begin{equation}
NLO = \frac{LS}{TS*BLO} \times 100\%
\end{equation}
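As a sanity check, the normalized lock overhead above can be computed directly from raw sample counts; the numbers below are hypothetical, not measured data:

```python
def normalized_lock_overhead(lock_samples, total_samples, baseline_lo):
    """NLO = (LS / TS) / BLO * 100%, following the definitions above."""
    lo = lock_samples / total_samples          # lock overhead LO
    return lo / baseline_lo * 100.0            # normalized to the baseline BLO

# Hypothetical counts: a baseline lock overhead of 5% and 88 lock
# samples out of 1000 total samples give an NLO of 176%.
nlo = normalized_lock_overhead(88, 1000, 0.05)
```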
When the number of analytical agents increases, the throughput of online transactions is affected.
Hence, the normalized lock overhead decreases as the number of analytical agents increases in Figure~\ref{fig: lock}.
The difference in performance isolation measured by OLxPBench is far more significant than that measured by CH-benCHmark. Better performance isolation indicates that executing OLTP together with OLAP workloads affects each other's performance much less.
Figure~\ref{fig: lock} reports that the lock overhead gap between semantically consistent schema and stitched schema is 1.76x using one OLAP thread and 1.68x using two OLAP threads.
It indicates that the contention on data shared between OLTP and OLAP is more significant on a semantically consistent schema than on the stitched one.
The low competition in CH-benCHmark between OLTP and OLAP workloads propagates a false impression that the HTAP system can guarantee isolated performance.
\textbf{Implication 1}
\emph{Experiments show that a semantically consistent schema reveals the inherent competition between OLTP and OLAP better than a stitched schema.}
\begin{figure}[!t]
\includegraphics[height=3.8cm, width=8cm, left]{benchmarks.png}
\caption{Comparing the generic benchmark to the domain-specific benchmarks on TiDB cluster.}
\label{fig: benchmarks}
\vspace{-0.5cm}
\end{figure}
\subsubsection{Real-time Query}~\label{sec: 5.3.2}
We now compare the two main queries common in real-world scenarios: \emph{analytical queries} and \emph{real-time queries}.
First, the analytical queries keep the essential characteristics such as the complexity of operations and perform multidimensional customer analysis.
Second, the real-time queries in OLxPBench are extracted from the existing production applications.
The real-time queries are used in the production applications to perform real-time user behavior simulations.
However, no state-of-the-art~\cite{coelho2017htapbench} or state-of-the-practice~\cite{cole2011mixed} HTAP benchmark provides workloads that include real-time queries imitating user behavior.
We compare the two queries with different intentions on the TiDB cluster.
\textbf{Test Case 2: Queries comparison.}
We run the subenchmark using the semantically consistent schema at 30 online transactions per second as the baseline.
Then we inject analytical queries at 1 query per second into the baseline as experimental group one.
Meanwhile, we send the hybrid transaction performing a real-time query in-between an online transaction at 30 requests per second as the experimental group two.
Figure~\ref{fig: query} shows that the analytical queries increase the baseline latency by around three times.
The real-time queries in hybrid transactions increase the baseline latency by more than nine times.
The hybrid transaction contains both OLTP statements and OLAP statements, but the SQL engine can only choose a row-based store or column-based store to handle the hybrid transaction.
However, the analytical queries and the online transactions can be handled separately by the column-based TiFlash and the row-based TiKV.
Therefore, the impact of real-time query simulating the user behavior is more significant than the impact of analytical queries on the performance of online transactions.
Besides, the standard deviation of the average baseline latency is 2.21.
With the analytical queries interference, the standard deviation of average baseline latency increases from 2.21 to 9.16.
Under the real-time queries interference, the standard deviation of average baseline latency increases from 2.21 to 38.91.
It indicates that the interference of real-time queries with online transactions is greater than that of analytical queries.
\begin{figure*} \centering
\subfigure[Throughput of OLTP.] {
\label{fig.suoltp}
\includegraphics[height=3.3cm, width=5.5cm]{figure_benchmark_suoltp.pdf}
}
\subfigure[Throughput of OLAP.] {
\label{fig.suolap}
\includegraphics[height=3.3cm, width=5.4cm]{figure_benchmark_suolap.pdf}
}
\subfigure[Throughput of OLxP.] {
\label{fig.suolxp}
\includegraphics[height=3.3cm, width=5.4cm]{figure_benchmark_suolxp.pdf}
}
\caption{ OLTP, OLAP and OLxP performance of subenchmark. }
\label{fig.su}
\vspace{-0.3cm}
\end{figure*}
So it is necessary to include real-time queries extracted from the production environment in the HTAP benchmark to help users choose an appropriate HTAP system for handling real-time queries.
\textbf{Implication 2}
\emph{It is necessary to include the real-time queries
in the HTAP benchmark for testing whether the HTAP system can handle real-time queries from users.}
\subsubsection{Domain-specific Benchmark}~\label{sec: 5.3.3}
In this paper, we classify the benchmarks into two categories: \emph{generic benchmark} and \emph{domain-specific benchmark}.
The subenchmark is inspired by TPC-C~\cite{TPCC111}, which is not bound to a specific scenario, and the community considers a general benchmark. The fibenchmark and the tabenchmark model a banking scenario and a telecom scenario.
Hence, they are domain-specific benchmarks.
\textbf{Test Case 3: Domain-specific benchmark.}
We run the subenchmark, the fibenchmark, and the tabenchmark at 80 online transactions per second as the baseline.
Then we send the analytical queries at 1 query per second with the baseline.
Figure~\ref{fig: benchmarks} shows that the baseline of the above three benchmarks is 53.47ms, 10.25ms, and 69.53ms.
Moreover, the standard deviations of the baseline of the above three benchmarks are 0.23, 0.05, and 0.47.
The online transactions of fibenchmark perform read-heavy and simple update operations, so its baseline latency is the smallest one.
Slow queries, which take longer than one second, occur in tabenchmark's online transactions, so the baseline of the tabenchmark is the biggest one.
We will analyze the reason for slow queries in Section~\ref{sec: 6.3}.
Only 8\% of online transactions in subenchmark do not modify tables, and the tables in subenchmark contain complex relations.
So its baseline average latency falls in the middle.
Under the OLAP pressure, the OLTP latency of subenchmark increases by more than five times, the OLTP latency of fibenchmark increases by less than forty percent, and the OLTP latency of tabenchmark increases by less than twenty percent.
\begin{figure*} \centering
\subfigure[Throughput of OLTP.] {
\label{fig.fioltp}
\includegraphics[height=3.3cm, width=5.5cm]{figure_benchmark_fioltp.pdf}
}
\subfigure[Throughput of OLAP.] {
\label{fig.fiolap}
\includegraphics[height=3.3cm, width=5.6cm]{figure_benchmark_fiolap.pdf}
}
\subfigure[Throughput of OLxP.] {
\label{fig.fiolxp}
\includegraphics[height=3.3cm, width=5.4cm]{figure_benchmark_fiolxp.pdf}
}
\caption{ OLTP, OLAP and OLxP performance of fibenchmark. }
\label{fig.fi}
\vspace{-0.3cm}
\end{figure*}
Meanwhile, the standard deviations of the above three benchmarks increase to 14.10, 0.58, and 4.05 under the analytical queries interference.
The complex analytical queries in subenchmark generate many table scan operations, which increase the waiting time of online transactions.
The read-heavy online transactions of fibenchmark are only mildly affected by the OLAP agents.
Therefore, the online transactions of subenchmark are most affected by OLAP pressure, followed by fibenchmark's online transactions, while tabenchmark's online transactions are the least affected.
\textbf{Implication 3}
\emph{The domain-specific benchmarks help users identify system bottlenecks in their specific scenarios.
Besides, they also help system designers identify directions for system optimization. }
\section{Evaluation of The Mainstream Distributed HTAP DBMSs}\label{sec: 6}
In addition to the feature evaluation of OLxPBench, we also thoroughly test end-to-end performance for mainstream distributed HTAP DBMSs.
We will provide detailed evaluation data in the following subsections, including peak performance.
The peak performance refers to the saturation value that a single workload can reach in the test cluster.
Furthermore, we will describe and deeply analyze the mutual interference between OLTP and OLAP~\cite{9458644} using the control variate method.
The transactional/analytical request rates are divided into four numerically increasing groups with the same interval based on peak throughput.
The transactional/analytical request rates in each group are the same, and the analytical/transactional request rates increase from zero to peak to explore the influence of the analytical/transactional agents on the transactional/analytical agents.
Besides, CH-benCHmark~\cite{cole2011mixed} uses the stitched schema while OLxPBench uses a semantically consistent one. In addition, OLxPBench uses hybrid workloads.
The difference in performance isolation measured by OLxPBench is far more significant than that measured by CH-benCHmark. Better performance isolation indicates that executing OLTP together with OLAP workloads affects each other's performance much less. Moreover, we find that the lock overhead gap between OLxPBench and CH-benCHmark is 1.76x under the same OLAP pressure in TiDB. The low competition in CH-benCHmark between OLTP and OLAP workloads propagates a false impression that the HTAP system can guarantee isolated performance.
So, we do not report the experimental results of CH-benCHmark.
\subsection{Subenchmark evaluation}\label{sec: 6.1}
\subsubsection{Peak performance\label{sec: 6.1.1}}
Figure~\ref{fig.suoltp} shows that the transactional throughput increases with the incremental transactional request rates.
In the MemSQL cluster, the throughput reaches the top when the transactional request rates are 2400 tps.
The average latency of transactions is 29.7 milliseconds without OLAP agent interference.
And the 95th percentile latency of transactions is 78.53 milliseconds.
In the TiDB cluster, the maximum transactional throughput is 800 tps.
Figure~\ref{fig.suoltp} illustrates that the throttled transactional throughput of subenchmark in the TiDB cluster is one-third that of the MemSQL cluster.
The reason is that MemSQL processes data in memory rather than on the solid-state disk.
Figure~\ref{fig.suolap} shows that the maximum analytical throughput is around eight tps in the MemSQL cluster.
Moreover, the analytical throughput reaches the top when the analytical request rates are four tps in the TiDB cluster.
Under the same analytical request rates, the average latency of OLAP increases with the incremental transactional request rates.
The performance of OLxP in the different hybrid request rates is shown in Figure~\ref{fig.suolxp}.
The OLxP workloads include hybrid transactions, which perform a real-time query in-between an online transaction to simulate the user behavior.
The real-time query is a time-consuming aggregate operation.
So, the transactional statements behind the real-time query must wait for the real-time query execution because of the atomicity property of the transaction. Thus, the maximum throughput of OLxP is 4.28 tps and 15.98 tps in Figure~\ref{fig.suolxp}.
In the MemSQL cluster, the maximum average hybrid latency is 133.44 seconds.
The gap between the maximum and minimum average latency is 223 times.
And the 95th percentile latency of hybrid workload is 209.50 seconds.
MemSQL adopts the vertical partitioning technology, which results in many join operations generated by relationship query statements in hybrid transactions and increases the waiting time of hybrid transactions.
In the TiDB cluster, the throttled OLxP throughput is 16 tps, and the maximum average hybrid latency is 397 milliseconds.
The maximum average latency is 1.47 times the minimum average latency.
And the 95th percentile latency of hybrid workload is 905.36 milliseconds.
The above results indicate that TiDB's separated storage engines handle the OLxP workloads better than MemSQL's single storage engine.
\subsubsection{Performance interference between OLTP agents and OLAP agents\label{sec: 6.1.2}}
The performance impact of analytical agents on transactional agents is shown in Figure~\ref{fig.suoltp}.
When the transactional request rates are controlled, the average latency of the transactional agents increases by up to 17.4 times compared with the absence of the analytical agents in the MemSQL cluster.
And the gap of the 95th percentile latency is 33.7x.
The performance impact of the online transactions on the analytical queries is shown in Figure~\ref{fig.suolap}.
When the analytical request rates are controlled, the average latency of the analytical agents increases by up to 2.2 times compared with the absence of the transactional agents.
And the gap of the 95th percentile latency is 2.0x.
It indicates that severe interference exists between transactional agents and analytical agents.
The expensive analytical queries compete for the resource with the online transactions in the single storage and increase the latency of online transactions.
In the TiDB cluster, under the same analytical request rates, the analytical throughput decreases to 59\% of the baseline as the transactional request rate increases.
Furthermore, when the transactional request rates are 800 tps, the transactional throughput plummets by up to 89\% as the analytical request rates increase.
The transactional agents significantly affect the execution of analytical agents.
The higher the request rates, the more table scan operations.
Time-consuming table scan operations increase the waiting time of requests and reduce the request throughput.
\subsection{Fibenchmark evaluation}\label{sec: 6.2}
\subsubsection{Peak performance\label{sec: 6.2.1}}
As shown in Figure~\ref{fig.fioltp}, the peak transactional throughput is around 23476 tps in the MemSQL cluster.
The maximum transactional throughput is 9165 tps in the TiDB cluster.
The read-only transaction ratio of fibenchmark is higher than that of subenchmark, so the peak transactional throughput of fibenchmark is higher than that of subenchmark.
A large number of queries are blocked until the previous complex queries are completed.
So the peak analytical throughput of MemSQL is around 0.12 tps in Figure~\ref{fig.fiolap}.
And the maximum analytical throughput of TiDB is 0.25 tps.
There are many table scan operations in the workloads of fibenchmark, and scanning row-format tables in TiKV~\cite{huang2020tidb} is random and expensive, as explained in Section~\ref{sec: 5.3.1}.
Figure~\ref{fig.fiolxp} shows that the hybrid throughput increases as the hybrid request rates increase when the hybrid request rates are no more than four tps.
In the MemSQL cluster, the peak hybrid throughput is 2.9 tps.
In the TiDB cluster, the peak hybrid throughput is 4 tps.
The average hybrid latency increases at most 17.2\% as the hybrid request rates increase.
And the 95th percentile latency increases up to 36.4\% as the hybrid request rates increase.
The hybrid latency increases as the hybrid request rates increase without bound.
The increasing average and 95th percentile latency of hybrid transactions result from more waiting time with the higher hybrid request rates.
\begin{figure*} \centering
\subfigure[Throughput of OLTP.] {
\label{fig.taoltp}
\includegraphics[height=3.4cm, width=5.6cm]{figure_benchmark_taoltp.pdf}
}
\subfigure[Throughput of OLAP.] {
\label{fig.taolap}
\includegraphics[height=3.4cm, width=5.5cm]{figure_benchmark_taolap.pdf}
}
\subfigure[Throughput of OLxP.] {
\label{fig.taolxp}
\includegraphics[height=3.4cm, width=5.5cm]{figure_benchmark_taolxp.pdf}
}
\caption{ OLTP, OLAP and OLxP performance of tabenchmark. }
\label{fig.ta}
\vspace{-0.3cm}
\end{figure*}
\subsubsection{Performance interference between OLTP agents and OLAP agents\label{sec: 6.2.2}}
The performance impact of analytical agents on transactional agents is shown in Figure~\ref{fig.fioltp}.
And the performance impact of transactional agents on analytical agents is shown in Figure~\ref{fig.fiolap}.
In the MemSQL cluster, the transactional throughput decreases with the analytical request rates increasing under the same transactional request rates.
And the average latency of the transactional request rates increases as the analytical request rates increase.
The analytical throughput decreases with the incremental transactional request rates when the analytical request rates are less than three tps, and the transactional request rates are less than 7000 tps.
The analytical throughput fluctuates wildly when the analytical request rates exceed the processing capacity of the MemSQL.
Meanwhile, the long-term running analytical queries increase the waiting time of online transactions.
In the TiDB cluster, under the same transactional request rates, the transactional throughput decreases as the analytical request rates increase when the analytical request rates are no more than 3 tps.
Under the same analytical request rates, the analytical throughput decreases as the transactional request rates increase when the analytical request rates are no more than 2 tps, and the transactional request rates are no more than 5000 tps.
And the average analytical latency increases as the transactional request rates increase when the transactional request rates are no more than 2500 tps.
\subsection{Tabenchmark evaluation}\label{sec: 6.3te}
\subsubsection{Peak performance\label{sec: 6.3.2}}
Figure~\ref{fig.taoltp} shows that the maximum transactional throughput is 124 tps in the MemSQL cluster.
The transactional throughput increases as the transactional request rates increase when the transactional request rate is no more than 140 tps.
The maximum transactional throughput is 43 tps in the TiDB cluster.
The transactional throughput increases as the transactional request rates increase when the transactional request rates are no more than 50 tps.
Tabenchmark has the highest percentage of read-only transactions among the three benchmarks.
However, the \textit{DeleteCallForwarding} transaction contains a slow query that takes longer than one second, so tabenchmark has the lowest transactional throughput of the three benchmarks.
The SQL statement is \textit{"explain SELECT s\_id FROM SUBSCRIBER WHERE sub\_nbr = ?"}.
The s\_id and sub\_nbr are the composite keys of the SUBSCRIBER table.
The full table scan in memory is time-consuming when the slow query is executed in MemSQL.
Even worse, when a slow query is executed in the storage engine of the TiDB, the index full scan will perform a random read on the solid-state disk.
Therefore, the maximum transactional throughput of MemSQL is higher than the maximum transaction throughput of TiDB.
Figure~\ref{fig.taolap} shows that the maximum analytical throughput is 0.7 tps in the MemSQL cluster.
The analytical throughput increases as the analytical request rates increase when the analytical request rates are no more than two tps.
And the maximum analytical throughput is 0.23 tps in the TiDB cluster.
The analytical throughput increases as the analytical request rates increase when the analytical request rates are no more than two tps.
Figure~\ref{fig.taolxp} shows that the MemSQL cluster is saturated when the hybrid request rate increases to 12 tps.
Figure~\ref{fig.taolxp} shows that the maximum hybrid throughput is around five tps in the TiDB cluster.
And the average latency increases as the hybrid request rates increase without bound.
The gap between 95th percentile latency and average latency is up to 2.2x.
\subsubsection{Performance interference between OLTP agents and OLAP agents\label{sec: 6.3.3}}
Figure~\ref{fig.taoltp} shows the performance impact of analytical agents on transactional agents.
The performance impact of transactional agents on analytical agents is shown in Figure~\ref{fig.taolap}.
In the MemSQL cluster, OLxPBench controls the transactional request rates precisely when the transactional request rates are no more than 105 tps and the analytical request rates are no more than three tps.
Under the same transactional request rates, the average transactional latency increases more than 34.4 times.
And the 95th percentile latency increases by 12.8x.
Under the same analytical request rates, the analytical throughput decreases as the transactional request rates, which are no more than 50 tps, rise.
The transactional throughput decreases to 49.8\% with the analytical agents' interference in the TiDB cluster.
It indicates that the analytical agents significantly increase the online transaction waiting time.
The performance impact of transactional agents on the analytical agents is shown in Figure~\ref{fig.taolap}.
The analytical throughput decreases by up to 89\% under the transactional agents' interference.
The average latency increases by 30.8\% under the transactional agents' interference.
And the 95th percentile latency increases by 12.2\% under the transactional agents' interference.
It indicates that the slow queries in the online transaction block the analytical agents' execution.
\begin{figure*} \centering
\subfigure[OLTP latency.] {
\label{fig.scaleoltp}
\includegraphics[height=3.3cm, width=5.72cm]{scale_oltp.png}
}
\subfigure[OLTP latency with OLAP interference.] {
\label{fig.scaleolap}
\includegraphics[height=3.3cm, width=5.72cm]{scale_htap.png}
}
\subfigure[OLxP latency.] {
\label{fig.scaleolxp}
\includegraphics[height=3.3cm, width=5.72cm]{scale_olxp.png}
}
\caption{ OLTP, HTAP and OLxP latency as cluster size increases. }
\label{fig.scale16}
\vspace{-0.35cm}
\end{figure*}
\subsection{The main findings of differences between MemSQL and TiDB}\label{sec:find}
First, the enormous transactional performance gap between MemSQL and TiDB results from the different storage mediums for data processing, i.e., memory for MemSQL and solid-state disk for TiDB. The peak transactional throughput gap between MemSQL and TiDB is 3.0x, 2.6x, and 2.9x using subenchmark, fibenchmark, and tabenchmark.
Second, compared to the single storage engine of MemSQL, the separated storage engines (row-based and column-based) of TiDB handle the hybrid workload better, performing real-time queries in-between an online transaction. The peak hybrid workload throughput gap between TiDB and MemSQL is 3.7x and 1.4x using subenchmark and fibenchmark.
Third, both MemSQL and TiDB handle queries using the composite keys awkwardly. MemSQL uses time-consuming full table scans in memory, while TiDB uses index full scans that perform random reads on the solid-state disk. The maximum hybrid workload throughput of MemSQL is 2.2x that of TiDB.
\subsection{Scalability}\label{sec: 6.3}
We choose the mainstream HTAP DBMSs -- TiDB~\cite{huang2020tidb}, MemSQL~\cite{MemSQL111}, and OceanBase~\cite{OceanBase111} for scale-out experiments.
We test the scaling capability of TiDB and OceanBase by varying the cluster sizes from 4 to 16 nodes.\footnote{Owing to the high cost of the commercial software, we test MemSQL on only four servers.}
Meanwhile, the data size and target request rates rise in proportion to the increasing cluster size.
The following results are the average results of the five runs.
TiDB decouples the computational engine layer and storage layer.
Due to complex execution plans and compute-intensive queries, we set the ratio of storage instances (SI) to computational instances (CI) at 2:1 in the TiDB cluster.
Storage instances are deployed on all servers in the cluster, and computational instances are deployed on half of the servers in the cluster.
OceanBase has a shared-nothing architecture, and all OceanBase servers (OBServers) are identical.
The number of OBServers is the cluster size.
The average latency and 95th percentile latency for workloads in subenchmark are shown in Figure~\ref{fig.scale16}.
First, OLxPBench clients run on a separate eight vCPU machine and can spawn up to 300 threads to generate target request rates.
OLxPBench clients can be deployed on separate client servers, so client servers are not a bottleneck.
The more OLxPBench clients are deployed, the more requests are generated.
Second, OceanBase and TiDB cannot scale out well when dealing with the OLTP workloads, OLxP workloads, and the mixtures of OLTP and OLAP workloads.
In the OceanBase cluster, the average latency and 95th percentile latency of OLTP workloads increase by 20\% and 24\% as the cluster size increases from 4 to 16 nodes.
In the TiDB cluster, the average latency and 95th percentile latency of OLTP workloads increase by more than 1x as the cluster size increases from 4 to 16 nodes.
Significantly, the latency of OLxP workloads increases sharply as the cluster size increases from 4 to 16 nodes.
It is challenging for the above HTAP DBMSs to deal with the OLxP workloads.
Third, compared with OceanBase, TiDB provides better performance isolation as the cluster size increases.
Under the same OLAP pressure, the average latency of OLTP workloads increases by 6\% and 18\% in the TiDB cluster and the OceanBase cluster, respectively.
Meanwhile, TiDB is better than OceanBase at dealing with OLxP workloads.
The performance isolation benefits from the decoupled storage layer consisting of a row store (TiKV) and a columnar store (TiFlash).
\section{Conclusions}
This paper quantitatively discloses that the previous HTAP benchmarks provide misleading information in evaluating, designing, and implementing HTAP systems.
We design and implement an extensible HTAP benchmarking framework named OLxPBench. OLxPBench proposes the abstraction of a hybrid transaction to model the widely-observed behavior pattern -- making a quick decision while consulting real-time analysis; a semantically consistent schema to express the relationships between OLTP and OLAP schemas; the combination of domain-specific and general benchmarks to characterize diverse application scenarios with varying resource demands.
Extensive experiments demonstrate its merit and pinpoint the bottlenecks of the mainstream distributed HTAP DBMSs.
\bibliographystyle{IEEEtran}
\section{Code Constructions (Theorem \ref{Reed-Solomon Update-Efficient MVC Theorem} and Theorem \ref{Slepian-Wolf Storage Cost Theorem}) }
\allowdisplaybreaks{
In this section, we provide our code constructions. We study the case where $\delta_K$ is not known and present an MVC based on a Reed-Solomon code in Section \ref{Update-efficient Multi-version Codes}. Later in this section, we study the case where $\delta_K$ is known and propose a random binning argument in Section \ref{Slepian-Wolf Based Multi-version Codes}.
\label{Code Constructions}
\subsection{Update-efficient Multi-version Codes}
\label{Update-efficient Multi-version Codes}
We develop a simple multi-version coding scheme that exploits the correlation between the different versions and has a smaller storage cost compared with \cite{multi_arxiv}. In this scheme, the servers do not know the correlation degree $\delta_K$ in advance. We begin by recalling the definition of the update efficiency of a code from \cite{anthapadmanabhan2010update}. \begin{definition}[Update efficiency]
For a code $\mathcal {C}$ of length $N$ and dimension $K$ with encoder $\mathcal{C}:\ \mathbb{F}^{K} \rightarrow \ \mathbb{F}^{N},$ the update efficiency of the code is the maximum number of codeword symbols that must be updated when a single message symbol is changed and is expressed as follows
\begin{align}
t=\max_{\substack{\mathbf W, \mathbf W' \in \mathbb{F}^K:\\ d_H(\mathbf W, \mathbf W')=1}} d_H(\mathcal{C}(\mathbf W), \mathcal{C}(\mathbf W')).
\end{align}
\end{definition}
\noindent An $(N, K)$ code $\mathcal{C}$ is referred to as an update-efficient code if it has an update efficiency of $o(N)$.
\begin{definition}[Update efficiency of a server]
Suppose that $\mathcal{C}^{(i)}:\mathbb{F}^{K} \rightarrow \ \mathbb{F}^{N/n}$ denotes the $i$-th co-ordinate of the output of $\mathcal{C}$ stored by the $i$-th server. The update efficiency of the $i$-th server is the maximum number of codeword symbols that must be updated in this server when a single message symbol is changed and is expressed as follows
\begin{align}
t^{(i)}=\max_{\substack{\mathbf W, \mathbf W' \in \mathbb{F}^K: \\ d_H(\mathbf W, \mathbf W')=1}} d_H (\mathcal{C}^{(i)}(\mathbf W), \mathcal{C}^{(i)}(\mathbf W')).
\end{align}
\end{definition} \noindent
Suppose that $G=(G^{(1)}, G^{(2)}, \cdots, G^{(n)})$ is the generator matrix of a linear code $\mathcal{C}$, where $G^{(i)}$ is a $K \times N/n$ matrix that corresponds to the $i$-th server. The update efficiency of the $i$-th server is the maximum row weight of $G^{(i)}$. \color{black}
\begin{definition}[The per-server maximum update efficiency]
The per-server maximum update efficiency is the maximum number of codeword symbols that must be updated in any server when a single message symbol is changed and is given by
\begin{equation}
t_s=\max_{\substack{i \in [n]}} \ t^{(i)}.
\end{equation}
\end{definition}
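The three notions of update efficiency above can be made concrete for a small binary linear code. The following is a minimal sketch; the generator matrix and the server partition are illustrative examples of ours, not taken from the constructions in the text:

```python
import itertools

# Toy binary generator matrix G (K = 3 message symbols, N = 6 codeword
# symbols) split across n = 3 servers, two coordinates per server.
G = [
    [1, 0, 0, 1, 0, 1],
    [0, 1, 0, 1, 1, 0],
    [0, 0, 1, 0, 1, 1],
]
K, N, n = 3, 6, 3

def encode(w):
    """GF(2) encoding: codeword = w G (mod 2)."""
    return [sum(w[k] * G[k][j] for k in range(K)) % 2 for j in range(N)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def update_efficiency(lo=0, hi=N):
    """Max number of codeword symbols (within coordinates [lo, hi)) that
    change when a single message symbol is flipped."""
    t = 0
    for w in itertools.product([0, 1], repeat=K):
        for k in range(K):
            w2 = list(w)
            w2[k] ^= 1  # flip one message bit
            t = max(t, hamming(encode(list(w))[lo:hi], encode(w2)[lo:hi]))
    return t

t = update_efficiency()                                  # code-wide t
per_server = [update_efficiency(i * N // n, (i + 1) * N // n)
              for i in range(n)]                         # t^{(i)} per server
t_s = max(per_server)                                    # per-server maximum
```

For a linear code, flipping message symbol $k$ adds row $k$ of $G$ to the codeword, so $t$ equals the maximum row weight of $G$ and $t^{(i)}$ the maximum row weight of $G^{(i)}$, which the brute-force search above confirms.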
We next present an update-efficient MVC construction, illustrated in Fig. \ref{Reed-Solomon}, that is based on Reed-Solomon code and has a maximum update efficiency per-server $t_s=1$.
\begin{construction}[Reed-Solomon Update-Efficient MVC]
\label{update-efficient construction} Suppose that the $i$-th server receives the versions $\mathbf S(i)=\{s_1, s_2, \ldots, s_{|\mathbf S(i)|}\}$, where $s_1<s_2<\cdots<s_{|\mathbf S(i)|}$. A version $\mathbf W_{s_j}$ is divided into $\frac{K}{c \log n_p}$ blocks of length $ c \log n_p,$ where $n_p=2^{\lceil \log_{2}n\rceil}$. In each block, every consecutive string of $\log n_p$ bits is represented by a symbol in $\mathbb{F}_{n_p}$. The representation of $\mathbf W_{s_j}$ over $\mathbb{F}_{n_p}$ is denoted by $\mathbf {\overline W}_{s_j}$. Each block is then encoded by an $(n, c)$ Reed-Solomon code with a generator matrix $\tilde G$ that is given by
\begin{align}
\tilde G &=\left(
\begin{array}{c c c c}
1 & 1 & \cdots & 1 \\
\lambda_1 & \lambda_2 & \cdots & \lambda_n \\
\lambda_1^2 & \lambda_2^2 & \cdots & \lambda_n^2 \\
\vdots & \vdots & \vdots & \vdots \\
\lambda_1^{c-1}& \lambda_2^{c-1} & \cdots & \lambda_n^{c-1}
\end{array}
\right),
\end{align}
where $\Lambda=\left\lbrace \lambda_1, \lambda_2, \cdots, \lambda_n \right\rbrace \subset \mathbb{F}_{n_p}$ is a set of distinct elements.
For $\mathbf W_{s_1}$, the $i$-th server stores $\mathbf {\overline W}_{s_1}^{\rm T} G^{(i)}$, where $G^{(i)}$ is a $ \frac{K}{\log n_p} \times \frac{K}{c \log n_p}$ matrix that is given by
\begin{align}
G ^{(i)}&=\left(
\begin{array}{c c c c c}
\tilde G \mathbf e_i & 0 & \cdots &0 & 0 \\
0 & \tilde G \mathbf e_i & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots\\
0 & 0 & \cdots &0 & \tilde G \mathbf e_i
\end{array}
\right),
\end{align}
where $\mathbf{e}_{i}$ is $i$-th standard basis vector over $\mathbb F_{n_p}$. For $\mathbf{W}_{s_m}$, where $m>1$, the server may only store the updated symbols from the old version $\mathbf{\overline{W}}_{s_{m-1}}$ or store $\mathbf {\overline W}_{s_m}^{\rm T} G^{(i)}$.
\end{construction}
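A minimal sketch of Construction \ref{update-efficient construction} follows. The paper works over $\mathbb{F}_{n_p}$ with $n_p$ a power of two; to keep the arithmetic elementary, the sketch below uses a small prime field $\mathbb{F}_p$ instead, and the parameters $(p, n, c)$ and the message are illustrative:

```python
import itertools

p = 7                    # field size (prime-field simplification)
n, c = 4, 2              # n servers; any c of them must recover the data
lambdas = [1, 2, 3, 4]   # distinct evaluation points, one per server

def encode_block(block):
    """Vandermonde (Reed-Solomon) encoding of one length-c block: server i
    stores sum_k block[k] * lambda_i^k (mod p)."""
    return [sum(b * pow(lam, k, p) for k, b in enumerate(block)) % p
            for lam in lambdas]

def encode(message):
    """Each server stores one symbol per block, so changing one message
    symbol updates at most one stored symbol per server (t_s = 1)."""
    blocks = [message[j:j + c] for j in range(0, len(message), c)]
    per_block = [encode_block(blk) for blk in blocks]
    return [[cw[i] for cw in per_block] for i in range(n)]

def decode_block(server_ids, symbols):
    """Recover a block from any c servers; for these tiny parameters a
    brute-force search over F_p^c stands in for Lagrange interpolation."""
    for cand in itertools.product(range(p), repeat=c):
        if all(encode_block(list(cand))[i] == s
               for i, s in zip(server_ids, symbols)):
            return list(cand)

msg = [3, 5, 1, 6]              # two blocks of c = 2 symbols over F_7
shares = encode(msg)

# MDS property: any c = 2 servers (here servers 0 and 2) recover the data.
recovered = []
for j in range(len(msg) // c):
    recovered += decode_block([0, 2], [shares[0][j], shares[2][j]])

# Update efficiency: change one message symbol and count how many stored
# symbols change at each server.
msg2 = list(msg)
msg2[1] = (msg2[1] + 1) % p
shares2 = encode(msg2)
changed_per_server = [sum(a != b for a, b in zip(s1, s2))
                      for s1, s2 in zip(shares, shares2)]
```

Changing one message symbol touches exactly one block, hence at most one stored symbol per server, which is the property the figure illustrates.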
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth,height=.22\textheight]{Reed-Solomon}
\caption{Illustration of Construction \ref{update-efficient construction}. A bit that changes in the message leads at most to one updated codeword symbol in the $i$-th server, $\forall i \in [n]$, hence $t_s=1$. \label{Reed-Solomon} \color{black}}
\end{figure}
\begin{theorem}[Reed-Solomon Update-Efficient MVC]
\label{Reed-Solomon Update-Efficient MVC Theorem}
Construction \ref{update-efficient construction} is a $0$-error $(n, c, \nu, 2^K, q, \delta_K)$ multi-version code with a worst-case storage cost of at most
\begin{equation}
\frac{K}{c}+ (\nu-1) \min(\delta_K K \log \left( \frac{Kn_p}{c\log n_p}\right), K/c),
\end{equation}
where $n_p=2^{\lceil \log_{2}n\rceil}$.
\end{theorem}
\begin{proof} [Proof of Theorem \ref{Reed-Solomon Update-Efficient MVC Theorem}]
We observe that Construction \ref{update-efficient construction} is a valid multi-version code as the latest common version is recoverable from any $c$ servers by the MDS property of the Reed-Solomon code. In order to characterize the worst-case storage cost, we observe that the update efficiency of the $i$-th server is equal to the maximum row weight of $G^{(i)}$, which is equal to $1, \forall i \in [n]$. Thus, the per-server maximum update efficiency $t_s$ is also equal to $1$. \\ The worst-case storage cost corresponds to the case where a server receives all versions. In this case, the server stores $\mathbf {\overline W}_{1}^{\rm T} G^{(i)}$ for the first version. For $\mathbf{W}_{u}$, where $u>1$, the server may only store the updated symbols from the old version $\mathbf{\overline{W}}_{{u-1}}$ or store $\mathbf {\overline W}_{u}^{\rm T} G^{(i)}$. Storing the index of an updated symbol requires $\log (\frac{K}{c\log n_p})$ bits and storing its value requires $\log n_p$ bits. Therefore, the per-server storage cost is upper-bounded as follows
\begin{align*}
\alpha &\leq \frac{K}{c}+ \sum_{u=2}^{\nu} \min(d_H(\mathbf W_{u}, \mathbf W_{u-1}) \log \left( \frac{Kn_p}{c\log n_p}\right), K/c) \\
& \leq \frac{K}{c}+ (\nu-1) \min(\delta_K K \log \left( \frac{Kn_p}{c\log n_p}\right), K/c).
\end{align*}
\end{proof}
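For concreteness, the worst-case bound of Theorem \ref{Reed-Solomon Update-Efficient MVC Theorem} can be evaluated numerically and compared with the baseline of MDS-coding every version separately at $K/c$ bits per server. The following is a minimal sketch; the parameter values are illustrative:

```python
import math

def rs_mvc_worst_case_bits(K, c, n, nu, delta_K):
    """Worst-case per-server storage (in bits) of Construction 1:
    K/c bits for the first version; for each later version, the cheaper of
    (i) storing up to delta_K*K updated symbols together with their
    indices, or (ii) re-encoding the whole version at K/c bits."""
    n_p = 2 ** math.ceil(math.log2(n))
    per_update = delta_K * K * math.log2(K * n_p / (c * math.log2(n_p)))
    return K / c + (nu - 1) * min(per_update, K / c)

def store_all_versions_bits(K, c, nu):
    """Baseline: MDS-code every version independently."""
    return nu * K / c

cost = rs_mvc_worst_case_bits(K=2 ** 20, c=4, n=8, nu=5, delta_K=1e-4)
baseline = store_all_versions_bits(K=2 ** 20, c=4, nu=5)
```

For highly correlated versions (small $\delta_K$), the update term dominates only mildly, so the bound stays close to $K/c$ rather than growing linearly in $\nu$ as the baseline does.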
\color{black}
\subsection{Random Binning Based Multi-version Codes}
\label{Slepian-Wolf Based Multi-version Codes}
We next introduce a random binning argument showing the existence of a multi-version code that can harness both the erasure and the delta coding gains for all versions in the case where $\delta_K$ is known. Recall that Slepian-Wolf coding \cite{slepian1973noiseless,wolf1973data} is a distributed data compression technique for correlated sources that are drawn in an independent and identically distributed manner according to a given distribution. In the Slepian-Wolf setting, the decoder is interested in decoding the data of all sources. In the multi-version coding problem, the decoder is interested in decoding the latest common version, or a later version, among any set of $c$ servers.
We notice that our model differs from the Slepian-Wolf setting as we do not aspire to decode all the versions.
The lossless source coding problem with a helper \cite{ahlswede1975source, wyner1975source, cover1975proof} may also seem related to our approach, since the side information of the older versions may be interpreted as helpers. In the optimal strategy for the helper setting, the helper side information is encoded via a joint typicality encoding scheme, whereas random binning is used for the message. However, in the multi-version coding setting, a version that serves as side information in one state may be required to be decoded in another state. For this reason, a random binning scheme for all versions leads to schemes with a near-optimal storage cost. We next present the code construction, which is inspired by Cover's random binning proof of the Slepian-Wolf problem \cite{cover1975proof}.
\begin{construction}[Random binning multi-version code]
\label{Random binning multi-version code}
Suppose that the $i$-th server receives the versions $\mathbf S(i)=\{s_1, s_2, \cdots, s_{|\mathbf S(i)|}\} \subseteq [\nu]$, where $s_1<s_2<\cdots<s_{|\mathbf S(i)|}$.
\begin{itemize}
\item \emph{Random code generation:} At the $i$-th server, for a version $s_j$, the encoder assigns an index at random from $\lbrace 1, 2, \cdots, 2^{K R_{s_j}^{(i)}/c} \rbrace$ uniformly and independently to each vector of length $K$ bits, where $R_{s_j}^{(i)}/c$ is the rate assigned by the $i$-th server to version $s_j$.
\item \emph{Encoding}:\label{Slepian-Wolf Encoder} The server stores the index corresponding to each version that it receives, and the decoder is also aware of this mapping. The encoding function of the $i$-th server is given by
\begin{align}
\varphi_{\mathbf S(i)}^{(i)}=( \varphi_{ s_1 }^{(i)}, \varphi_{ s_2 }^{(i)}, \cdots, \varphi_{s_{|\mathbf S(i)|}}^{(i)} ),
\end{align}
where $\varphi_{ s_j }^{(i)} \colon [2^K] \to \lbrace 1, 2, \cdots, 2^{K R_{s_j}^{(i)}/c} \rbrace$
and we choose the rates as follows
\begin{align}
K R_{s_1}^{(i)}&=K+(s_1-1)\log Vol(\delta_K K, K)+ (s_1-1)-\log \epsilon 2^{- \nu n},\label{eq:SWrate1}\\
KR_{s_j}^{(i)}&=(s_j-s_{j-1}) \log Vol(\delta_K K, K)+(s_j-1)-\log \epsilon 2^{-\nu n}, j \in \lbrace 2, 3, \cdots, |\mathbf S(i)|\rbrace. \label{eq:SWrate2}
\end{align}
\item \emph{Decoding}:\label{Slepian-Wolf Decoder}
Consider a state $\mathbf{S} \in \mathcal{P}([\nu])^n$ and suppose that the decoder connects to the servers $T=\{t_1, t_2, \cdots, t_c\} \subseteq [n]$. If a version $s_j$ is received by a set of servers $\{i_1, i_2, \cdots, i_r \} \subseteq T$, then the bin index corresponding to this version is given by
\begin{align}
\varphi_{ s_j }= (\varphi_{ s_j }^{(i_1)}, \varphi_{ s_j }^{(i_2)}, \cdots, \varphi_{ s_j }^{(i_r)}).
\end{align}
In this case, the rate of version $s_j$ is given by
\begin{align}
\label{sum_rate_all_servers}
R_{s_j}=\frac{1}{c}\sum_{i \in \{i_1, i_2, \cdots, i_r \} } R_{s_j}^{(i)}.
\end{align}
The decoder employs the \emph{possible set decoding} strategy as follows. Assume that $\mathbf W_{u_L}$ is the latest common version among the servers in $T$ and that the versions $\mathbf W_{u_1}, \mathbf W_{u_2}, \cdots, \mathbf W_{u_{L-1}}$ are the older versions such that each of them is received by at least one of these $c$ servers. We denote this set of versions by $\mathbf S_{\rm T}$ and define it formally as follows
\begin{align}
\label{set of versions}
\mathbf S_{\rm T} =\{u_1, u_2, \cdots, u_L\}=\left( \bigcup_{t \in T} \mathbf{S}(t)\right) \setminus \{u_L+1, u_L+2, \cdots, \nu \},
\end{align}
where $u_1<u_2< \cdots<u_L$. Given the bin indices $(b_{u_1}, b_{u_2}, \cdots, b_{u_L})$, the decoder finds all tuples $(\mathbf w_{u_1}, \mathbf w_{u_2}, \cdots, \mathbf w_{u_L}) \in A_{\delta_K}$ such that $(\varphi_{u_1} (\mathbf w_{u_1})=b_{u_1}, \varphi_{u_2} (\mathbf w_{u_2})=b_{u_2}, \cdots, \varphi_{u_L} (\mathbf w_{u_L})=b_{u_L})$. If all of these tuples have the same latest common version $\mathbf w_{u_L}$, the decoder declares $\mathbf w_{u_L}$ to be the estimate $\mathbf{\hat W}_{u_L}$ of the latest common version. Otherwise, it declares an error.
\end{itemize}
\end{construction}
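A toy instance of Construction \ref{Random binning multi-version code} can be simulated directly. The sketch below uses two versions, two servers, and bin sizes chosen loosely in the spirit of the chosen rates, i.e., the first version carries roughly the full word while the second carries only the delta information plus slack; all parameter values are illustrative:

```python
import itertools
import random

# Tiny illustrative instance: K-bit versions, nu = 2 versions within
# Hamming distance d of each other, and c = 2 servers whose bin indices
# are concatenated by the decoder.
random.seed(1)
K, d, c = 8, 1, 2

def all_words():
    return list(itertools.product([0, 1], repeat=K))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def make_bins(num_bins):
    """Random binning: assign every K-bit word a uniformly random bin
    index; the decoder is aware of the mapping."""
    return {w: random.randrange(num_bins) for w in all_words()}

bins_v1 = [make_bins(2 ** 6) for _ in range(c)]  # near-full rate for W_1
bins_v2 = [make_bins(2 ** 4) for _ in range(c)]  # delta rate for W_2

def possible_set_decode(b1, b2):
    """Possible set decoding: collect every candidate latest version that
    is consistent with the received bins and the correlation model."""
    latest = set()
    for w1 in all_words():
        if any(bins_v1[i][w1] != b1[i] for i in range(c)):
            continue
        for w2 in all_words():
            if hamming(w1, w2) <= d and \
               all(bins_v2[i][w2] == b2[i] for i in range(c)):
                latest.add(w2)
    return latest

W1 = tuple(random.randrange(2) for _ in range(K))
W2 = list(W1)
W2[3] ^= 1                  # the newer version differs in one bit
W2 = tuple(W2)
cands = possible_set_decode([bins_v1[i][W1] for i in range(c)],
                            [bins_v2[i][W2] for i in range(c)])
success = (len(cands) == 1)  # decoding succeeds iff the set is a singleton
```

The true latest version always lies in the possible set; decoding fails only when a spurious tuple collides with the same bins, which is the event bounded in the proof below.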
\begin{theorem}[Random Binning MVC]
\label{Slepian-Wolf Storage Cost Theorem}
There exists an $\epsilon$-error $(n, c, \nu, 2^K, q, \delta_K)$ multi-version code whose worst-case storage cost is at most
\begin{align}
\frac{K}{c} + \frac{(\nu -1) \log Vol(\delta_K K, K)}{c}+\frac{\nu(\nu-1)/2- \nu \log \epsilon 2^{-\nu n}}{c}.
\end{align}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{Slepian-Wolf Storage Cost Theorem}]
We show that Construction \ref{Random binning multi-version code} is an $\epsilon$-error multi-version code. \\ We denote the error event by $E$ and express it as follows
\begin{align}
\label{binning error event}
E = \{ \exists (\mathbf{w}'_{u_1}, \mathbf{w}'_{u_2}, \ldots,\mathbf{w}_{u_L}') \in A_{\delta_K}: \mathbf{w}_{u_L}' \neq \mathbf W_{u_L} \ \textrm{and} ~~\varphi_u(\mathbf w'_{u})=\varphi_u(\mathbf W_{u}), \forall u \in \mathbf S_T\}.
\end{align}
The error event in decoding can be equivalently expressed as follows
\begin{align}
E= \bigcup_{ \mathcal I \subseteq \mathbf {S}_T: u_L \in \mathcal{I}} E_{ \mathcal I}
\end{align}
where
\begin{align}
\label{Error Events}
E_{\mathcal I} = &\{\exists \mathbf w'_u \neq \mathbf W_u, \forall u \in {\mathcal I}: \varphi_u(\mathbf w'_{u} )= \varphi_u(\mathbf W_{u}), \forall u \in {\mathcal I} \ \text{and} \ (\mathbf w'_{\mathcal I}, \mathbf W_{\mathbf S_T \setminus \mathcal I}) \in A_{\delta_K}\},
\end{align}
for $ \mathcal{I} \subseteq \mathbf S_T$ such that $u_L \in \mathcal I$. By the union bound, we have
\begin{equation}
\begin{aligned}
P_e (\mathbf S, T)& \coloneqq P(E) =P \left( \bigcup_{ \mathcal I \subseteq \mathbf {S}_T: u_L \in \mathcal I} E_{ \mathcal I}\right)
\leq \sum_{ \mathcal I \subseteq {\mathbf{S}_T: u_L \in \mathcal I}} P(E_{ \mathcal I}),
\end{aligned}
\end{equation}
and we require that $P_e (\mathbf S, T) \leq \epsilon 2^{-\nu n}$. Since there are $2^{L-1}$ subsets $\mathcal I \subseteq \mathbf S_T$ such that $u_L \in \mathcal I$, it suffices to show that $P(E_\mathcal I) \leq \epsilon 2^{-(L-1)} 2^{-\nu n}$ for every such subset. \\
\begin{figure}[h]
\centering
\includegraphics[width=.65\textwidth,height=.25\textheight]{Recursive_Argument}
\caption{Illustration of the error analysis of Construction \ref{Random binning multi-version code}. \color{black} \label{Recursive Argument}}
\end{figure}
\noindent We now proceed in a case-by-case manner as shown in Fig. \ref{Recursive Argument}. We first consider the case where $u_{L-1} \notin \mathcal I$, and later we consider the case where $u_{L-1} \in \mathcal{I}$. For the case where $u_{L-1} \notin \mathcal I$, we have
\begin{align}
E_{ \mathcal I} \subset \tilde E_{u_{L-1}} &\coloneqq \{\exists \mathbf w_{u_L}' \neq \mathbf W_{u_L}: \varphi_{u_L}(\mathbf w_{u_L}')=\varphi_{u_L}(\mathbf W_{u_L}) \ \text{and} \ (\mathbf W_{u_{L-1}}, \mathbf w_{u_L}') \in A_{\delta_K} \}.
\end{align}
\noindent Consequently, we have
\begin{align}
P(E_{ \mathcal I}) & \leq P(\tilde E_{u_{L-1}})
= \sum_{(\mathbf w_{u_{L-1}}, \mathbf w_{u_L})} p(\mathbf w_{u_{L-1}}, \mathbf w_{u_L}) \notag \\ &P( \exists \mathbf w_{u_L}' \neq \mathbf w_{u_L}: \varphi_{u_L}(\mathbf w_{u_L}')=\varphi_{u_L}(\mathbf w_{u_L}) \ \text{and} \ (\mathbf w_{u_{L-1}}, \mathbf w_{u_L}') \in A_{\delta_K}) \notag \\
& \stackrel{(a)}{\leq} \sum_{(\mathbf w_{u_{L-1}}, \mathbf w_{u_L})} p(\mathbf w_{u_{L-1}}, \mathbf w_{u_L})\sum_{\substack{\mathbf w_{u_L}' \neq \mathbf w_{u_L}\\ (\mathbf w_{u_{L-1}}, \mathbf w_{u_L}') \in A_{\delta_K}}} P(\varphi_{u_L}(\mathbf w_{u_L}')=\varphi_{u_L}(\mathbf w_{u_L})) \notag \\
& \stackrel{(b)}{=} \sum_{(\mathbf w_{u_{L-1}}, \mathbf w_{u_L})} p(\mathbf w_{u_{L-1}}, \mathbf w_{u_L})\sum_{\substack{\mathbf w_{u_L}' \neq \mathbf w_{u_L}\\ (\mathbf w_{u_{L-1}}, \mathbf w_{u_L}') \in A_{\delta_K}}} \prod_{i=1}^{c} P(\varphi_{u_L}^{(t_i)}(\mathbf w_{u_L}')=\varphi_{u_L}^{(t_i)}(\mathbf w_{u_L})) \notag \\
&\leq \sum_{(\mathbf w_{u_{L-1}}, \mathbf w_{u_L})} p(\mathbf w_{u_{L-1}}, \mathbf w_{u_L}) |A_{\delta_K} (\mathbf W_{u_L}|\mathbf w_{u_{L-1}})| \ \prod_{i=1}^{c} 2^{-K R_{u_L}^{(t_i)}/c }
\notag \\
&\stackrel{(c)}{=} \sum_{(\mathbf w_{u_{L-1}}, \mathbf w_{u_L})} p(\mathbf w_{u_{L-1}}, \mathbf w_{u_L}) |A_{\delta_K} (\mathbf W_{u_L}|\mathbf w_{u_{L-1}})| \ 2^{-K R_{u_L} } \notag \\
&= \sum_{(\mathbf w_{u_{L-1}}, \mathbf w_{u_L})} p(\mathbf w_{u_{L-1}}, \mathbf w_{u_L}) 2^{-(KR_{u_L} -\log Vol ((u_L-u_{L-1}) \delta_K K, K))} \notag \\
& = 2^{-(KR_{u_L} -\log Vol ((u_L-u_{L-1}) \delta_K K, K))},
\end{align}
where $(a)$ follows by the union bound, $(b)$ follows since each server assigns an index independently from the other servers and $(c)$ follows from (\ref{sum_rate_all_servers}). Choosing $R_{u_L}$ to satisfy
\begin{align*}
KR_{u_L} \geq \log Vol ((u_L-u_{L-1}) \delta_K K, K)+(L-1)- \log \epsilon 2^{-\nu n}
\end{align*}
ensures that $P(E_\mathcal I) \leq \epsilon 2^{-(L-1)} 2^{-\nu n}$.
Now, we consider the case where $u_{L-1} \in \mathcal I$. In this case, we consider the following two cases.
First, we consider the case where $u_{L-2} \notin \mathcal I $, later we consider the case where $u_{L-2} \in \mathcal{I}$. For the case where $u_{L-2} \notin \mathcal I,$ we have
\begin{align}
E_\mathcal I \subseteq \tilde E_{u_{L-2}} &\coloneqq \{\exists \mathbf w_{u_{L-1}}' \neq \mathbf W_{u_{L-1}}, \mathbf w_{u_L}' \neq \mathbf W_{u_L}: \varphi_{u_{L-1}}(\mathbf w_{u_{L-1}}')=\varphi_{u_{L-1}}(\mathbf W_{u_{L-1}}), \notag \\ &\varphi_{u_L}(\mathbf w_{u_L}')=\varphi_{u_L}(\mathbf W_{u_L}) \ \text{and} \ (\mathbf W_{u_{L-2}}, \mathbf w_{u_{L-1}}', \mathbf w_{u_L}') \in A_{\delta_K} \}.
\end{align}
Therefore, we have
\begin{align}
P(E_\mathcal I) &< P(\tilde E_{u_{L-2}}) \leq \sum_{(\mathbf w_{u_{L-2}}, \mathbf w_{u_{L-1}}, \mathbf w_{u_L})} p(\mathbf w_{u_{L-2}}, \mathbf w_{u_{L-1}}, \mathbf w_{u_L}) \notag \\ &\sum_{\substack{\mathbf w_{u_{L-1}}' \neq \mathbf w_{u_{L-1}}\\ \mathbf w_{u_{L}}' \neq \mathbf w_{u_{L}} \\ (\mathbf w_{u_{L-2}}, \mathbf w_{u_{L-1}}', \mathbf w_{u_{L}}') \in A_{\delta_K}}} P(\varphi_{u_{L-1}}(\mathbf w_{u_{L-1}}')=\varphi_{u_{L-1}}(\mathbf w_{u_{L-1}})) P(\varphi_{u_L}(\mathbf w_{u_L}')=\varphi_{u_L}(\mathbf w_{u_L})) \notag \\
&\leq \sum_{(\mathbf w_{u_{L-2}}, \mathbf w_{u_{L-1}}, \mathbf w_{u_L})}
p(\mathbf w_{u_{L-2}}, \mathbf w_{u_{L-1}}, \mathbf w_{u_L}) 2^{-K(R_{u_{L-1}}+R_{u_L})} |A_{\delta_K} (\mathbf W_{u_{L-1}}, \mathbf W_{u_{L}}|\mathbf w_{u_{L-2}})| \notag \\
&= \sum_{(\mathbf w_{u_{L-2}}, \mathbf w_{u_{L-1}}, \mathbf w_{u_L})}
p(\mathbf w_{u_{L-2}}, \mathbf w_{u_{L-1}}, \mathbf w_{u_L}) 2^{-K(R_{u_{L-1}}+R_{u_L})} \notag \\ & \ \ \ \ \ 2^{\log Vol ((u_L-u_{L-1}) \delta_K K, K)+\log Vol ( (u_{L-1}-u_{L-2}) \delta_K K, K)}.
\end{align}
In this case, we choose the rates as follows
\begin{align*}
K(R_{u_{L-1}}+R_{u_L}) \geq \sum_{j=L-1}^{L} \log Vol ( (u_{j}-u_{j-1}) \delta_K K, K)+(L-1)-\log \epsilon 2^{-\nu n}.
\end{align*}
We next consider the other case where $u_{L-2} \in \mathcal I$. In this case, we also have two cases based on whether $u_{L-3}$ is in $\mathcal I$ or not. By applying the above argument repeatedly, we obtain the following conditions for the overall probability of error to be upper bounded by $\epsilon 2^{-\nu n}$.
\begin{align}
\label{Rate Region}
K \sum_{j=i}^{L} R_{u_j} & \geq \sum_{j=i}^{L} \log Vol ( (u_j-u_{j-1}) \delta_K K, K)+(L-1)-\log \epsilon 2^{-\nu n}, \forall i \in \{2, 3, \cdots, L\}, \notag \\
K \sum_{j=1}^{L} R_{u_j} & \geq K+\sum_{j=2}^{L} \log Vol ((u_{j}-u_{j-1}) \delta_K K, K)+(L-1)-\log \epsilon 2^{-\nu n}.
\end{align}
Since $\log Vol (m \delta_K K, K) \leq m \log Vol (\delta_K K, K), \forall m \in \mathbb{Z}^{+}$, it suffices if the rates satisfy
\begin{align}
\label{relaxed rate region}
K \sum_{j=i}^{L} R_{u_j} & \geq \sum_{j=i}^{L} (u_j-u_{j-1}) \log Vol (\delta_K K, K)+(L-1)-\log \epsilon 2^{-\nu n}, \forall i \in \{2, 3, \cdots, L\}, \notag \\
K \sum_{j=1}^{L} R_{u_j} & \geq K+\sum_{j=2}^{L} (u_{j}-u_{j-1}) \log Vol (\delta_K K, K)+(L-1)-\log \epsilon 2^{-\nu n}.
\end{align}
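The volume subadditivity $\log Vol(m \delta_K K, K) \leq m \log Vol(\delta_K K, K)$ invoked in this relaxation can be checked numerically; the following is a small sketch with illustrative parameters:

```python
from math import comb, log2

def log_vol(radius: int, K: int) -> float:
    # log2 of the volume of a Hamming ball of the given radius in {0,1}^K.
    return log2(sum(comb(K, j) for j in range(radius + 1)))

# Subadditivity used above: log Vol(m*r, K) <= m * log Vol(r, K) for m in Z+.
K, r = 200, 5
for m in range(1, 8):
    assert log_vol(m * r, K) <= m * log_vol(r, K)
```

The inequality holds because a ball of radius $m r$ is contained in the $m$-fold sumset of balls of radius $r$, whose size is at most the product of the individual volumes.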
The rates chosen in (\ref{eq:SWrate1}), (\ref{eq:SWrate2}) satisfy the above inequalities, therefore our construction has a probability of error bounded by $\epsilon 2^{-\nu n}$. Moreover, as in \cite[Chapter 7]{el2011network}, it follows that there exists a deterministic code that has a probability of error bounded by $\epsilon$. The worst-case storage cost is when a server receives all versions and is given by
\begin{align*}
\frac{K-\log \epsilon 2^{-\nu n}}{c}+\frac{(\nu -1) (\log Vol(\delta_K K, K)-\log \epsilon 2^{-\nu n}+\nu/2)}{c}.
\end{align*}
\end{proof}
\noindent Motivated by the fact that linear codes have lower complexity, in the Appendix, we show that linear codes exist that achieve the storage cost of Theorem \ref{Slepian-Wolf Storage Cost Theorem}. Our proof is inspired by \cite{wyner1974recent}.
\begin{remark}
The proof of Theorem \ref{Slepian-Wolf Storage Cost Theorem} uses simultaneous non-unique decoding ideas \cite{Bandemer_nonunique} used in several multi-user scenarios. In particular, with our non-unique decoding approach, to decode $\mathbf W_{u_L}$ the decoder picks the unique $\mathbf{w}_{u_L}$ such that $(\mathbf w_{u_1}, \mathbf w_{u_2},\ldots, \mathbf w_{u_L}) \in A_{\delta_K}$ for \emph{some} $\mathbf w_{u_1}, \mathbf w_{u_2},\ldots, \mathbf w_{u_{L-1}}$ that are consistent with the bin indices. We use this strategy since, unlike the Slepian-Wolf problem where all the messages are to be decoded, we are only required to decode the latest common version. In contrast, the \emph{unique decoding} approach employed by Slepian-Wolf coding would require the decoder to obtain, for some subset $S\subseteq \{u_1, u_2,\ldots, u_L\}$ with $u_L \in S$, the unique $\mathbf w_{S}$ in the possible set that is consistent with the bin indices; unique decoding, for instance, would not allow for correct decoding if there are multiple possible tuples, even if they happen to have the same latest common version $\mathbf{w}_{u_L}$. The discussion in \cite{Bidhokti_nonunique}, which examined the necessity of non-unique decoding, motivates the following question: can we use the decoding ideas of Slepian-Wolf, where all the messages are decoded, for an appropriately chosen subset of versions and obtain the same rates? In other words, if we take the union of the unique decoding rate regions over all possible subsets of $\{\mathbf W_{u_1}, \mathbf W_{u_2}, \ldots, \mathbf W_{u_L}\}$, does the rate allocation of (\ref{eq:SWrate1}), (\ref{eq:SWrate2}) lie in this region? The answer is that non-unique decoding provides better rates than unique decoding in our case, as we explain in Example \ref{counterexample}.
\begin{example}
\label{counterexample}
We consider the case where $\delta_K = \delta,$ $c=2$ and $\nu=3$. Consider the state where server $2$ does not receive $\mathbf W_1$. Then, the storage allocation of our scheme is given by Table \ref{table: Example}. We use $KR_u$ to denote the total number of bits stored for version $u \in [\nu]$ in Table \ref{table: Example}.
\begin{table*}[h]
\renewcommand{\arraystretch}{1.2}
\centering
\begin{tabular}{ |p{2 cm} | p{3 cm} | p{3 cm} | }
\hline
\textbf{Version} &\textbf{Server 1} & \textbf{Server 2} \\ \hline
$\mathbf W_1$ & $\frac{K+o(K)}{2}$ &-
\\ \hline
$\mathbf W_2$ & $\frac{KH(\delta)+o(K)}{2}$ & $\frac{K+KH(\delta)+o(K)}{2}$
\\ \hline
$\mathbf W_3$ & $\frac{KH(\delta)+o(K)}{2}$ & $\frac{KH(\delta)+o(K)}{2}$
\\ \hline
\end{tabular}
\caption{Storage Allocation of Example $1$.}\label{table: Example}
\end{table*}
We first examine unique decoding based decoders that aim to recover $\mathbf W_{3}$. It is clear that the decoder cannot recover the $K$ bits of $\mathbf W_{3}$ without using side information, since the total number of bits of $\mathbf W_3$ stored is only $KH(\delta)+o(K)$.
We next consider the case where a unique decoding based decoder uses the subset $\{\mathbf W_1, \mathbf W_3\}$. The rates $(R_1^{(unique, \mathbf W_1, \mathbf W_3)}, R_3^{(unique, \mathbf W_1, \mathbf W_3)})$ for a vanishing probability of error must satisfy
\begin{align*}
K R_3^{(unique, \mathbf W_1, \mathbf W_3)} &\geq K H(\delta* \delta)+o(K), \\
K( R_1^{(unique, \mathbf W_1, \mathbf W_3)}+R_3^{(unique, \mathbf W_1, \mathbf W_3)}) &\geq K+ K H(\delta* \delta) + o(K),
\end{align*}
where $\delta * \delta = 2 \delta(1-\delta)$. However, $K R_3 = K H(\delta)+o(K) < KR_{3}^{(unique, \mathbf W_1, \mathbf W_3)}$.
We next consider the case where a unique decoding based decoder uses $\{\mathbf W_2,\mathbf W_3\}$ for decoding. In this case, the decoder requires
\begin{align*}
K R_3^{(unique, \mathbf W_2, \mathbf W_3)} &\geq KH(\delta) +o(K), \\
K(R_2^{(unique, \mathbf W_2, \mathbf W_3)}+ R_3^{(unique, \mathbf W_2, \mathbf W_3)}) &\geq K+ KH(\delta) + o(K).
\end{align*}
We notice $K(R_2+R_3) = K/2 + 2KH(\delta)+o(K)< K(R_2^{(unique, \mathbf W_2, \mathbf W_3)}+ R_3^{(unique, \mathbf W_2, \mathbf W_3)})$, for $\delta < H^{-1}(0.5).$
Finally, consider the case where a unique decoding based decoder uses all three messages $\{\mathbf{W}_{1},\mathbf W_2,\mathbf W_3\}.$ In this case, the rate tuples $(R_1^{(unique, \mathbf W_1, \mathbf W_2, \mathbf W_3)}, R_3^{(unique, \mathbf W_1, \mathbf W_2, \mathbf W_3)})$ have to satisfy seven inequalities, including the following inequality
$$K( R_1^{(unique, \mathbf W_1, \mathbf W_2, \mathbf W_3)}+R_3^{(unique, \mathbf W_1, \mathbf W_2, \mathbf W_3)}) \geq K+ K H(\delta* \delta) + o(K).$$
Clearly $K(R_1 + R_3) < K(R_1^{(unique, \mathbf W_1, \mathbf W_2, \mathbf W_3)}+R_3^{(unique, \mathbf W_1, \mathbf W_2, \mathbf W_3)}).$
Thus, the union of the unique decoding rate regions for vanishing error probabilities, taken over all possible subsets of $\{\mathbf W_1, \mathbf W_2, \mathbf W_3\},$ does not include the rate tuple of Table \ref{table: Example}.
\end{example}
\end{remark}
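The rate comparisons of Example \ref{counterexample} can be verified numerically. The following sketch normalizes all quantities by $K$ and drops the $o(K)$ terms; the choice $\delta = 0.05$ is an illustrative value below $H^{-1}(0.5)$:

```python
from math import log2

def H(p: float) -> float:
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

delta = 0.05                         # any delta < H^{-1}(0.5) works here
conv = 2 * delta * (1 - delta)       # the convolution delta * delta

# Per-version totals from Table 1, normalized by K (o(K) terms dropped).
R1 = 0.5
R2 = H(delta) / 2 + (1 + H(delta)) / 2
R3 = H(delta)

# Unique decoding over {W1, W3} needs R3 >= H(delta * delta): violated.
assert R3 < H(conv)
# Unique decoding over {W2, W3} needs R2 + R3 >= 1 + H(delta): violated.
assert R2 + R3 < 1 + H(delta)
# Unique decoding over {W1, W2, W3} needs R1 + R3 >= 1 + H(delta * delta): violated.
assert R1 + R3 < 1 + H(conv)
```

Each assertion corresponds to one of the unique-decoding sub-cases above, confirming that the non-unique rate point lies outside every unique decoding region.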
\begin{remark}
\label{rem:zeroerror}
{
The result of Theorem \ref{Slepian-Wolf Storage Cost Theorem} allows for erroneous decoding with probability at most $\epsilon.$ A natural question is whether a similar storage cost can be obtained if we want the probability of error to be $0$, i.e., if we want our decoder to decode correctly for \emph{every} possible message tuple. The answer to this question is, in general, connected to whether the $0$-error rate region and the $\epsilon$-error rate region are identical for our setting. There are several instances in network information theory where, even though there is no noise in the network, the $\epsilon$-error capacity is strictly larger than the zero-error capacity (e.g., multiple access \cite{cover_thomas}). For some networks, the answer to this question is unknown and involves deep connections to other related questions \cite{langberg2011network}. Note also that for distributed source coding setups such as our problem, the determination of the $0$-error capacity is more complicated and involves the use of graph entropy \cite{korner1973coding}. In this paper, we leave open the question of whether the $0$-error and $\epsilon$-error rate regions are the same.}
\end{remark}
\section{Introduction}
Distributed key-value stores are an important part of modern cloud computing infrastructure\footnote{A \emph{read-write} key-value store is a shared database that supports \emph{get} or read operations, and \emph{put} or write operations.}. Key-value stores are commonly used by several applications including reservation systems, financial transactions, collaborative text editing and multi-player gaming. Owing to their utility, there are numerous commercial and open-source cloud-based key-value store implementations such as Amazon Dynamo \cite{Decandia}, Google Spanner \cite{corbett2013spanner} and Apache Cassandra \cite{CassandradB}. Distributed data storage services including key-value stores commonly build fault tolerance and availability by replicating the data across multiple nodes. Unlike archival data storage services, in key-value stores, the data stored is updated frequently, and the time scales of data updates and access are often comparable to the time scale of dispersing the data to the nodes \cite{Decandia}. In fact, distributed key-value stores are commonly \emph{asynchronous} as the time scale of data propagation is unpredictable and different nodes can receive the updates at different points in time. In such settings, ensuring that a client gets the latest version of the data requires careful and delicate protocol design.
To provide fast operations, modern key-value stores depend on high-speed memory, which is expensive compared with hard drives. Hence, the goal of using memory efficiently has motivated significant interest in erasure-coded key-value stores. In the absence of consistency requirements, erasure-coded ``in-memory'' storage systems can significantly improve latency as compared to replication-based counterparts \cite{EC-Cache}. Systems research related to erasure-coded consistent data stores has also received significant interest (see, e.g., \cite{taranov2018fast} and also \cite{Giza}, which uses erasure coding for Microsoft's data centers even for non-archival consistent data storage services). Delta coding \cite{deltacompression} is another classical technique that improves memory efficiency in data storage systems where it is desired to store older versions of the data. Delta coding relies on the idea of compressing differences between subsequent versions to improve the storage cost.
The main contribution of our paper is developing a coding-theoretic approach that combines erasure coding and delta coding to exploit correlations between subsequent updates and enable significant memory savings as compared to replication-based schemes and erasure coding approaches that do not exploit correlations. We begin with an overview of consistent data storage algorithms, and then discuss the \emph{multi-version coding} framework, an information-theoretic framework for distributed storage codes tailor-made for consistent key-value stores.
\subsection{Overview of Key-Value Stores}
The design principles of key-value stores are rooted in the distributed computing-theoretic abstraction known as \emph{shared memory emulation} \cite{Lynch1996}. The goal of the \emph{read-write} shared memory emulation problem is to implement a read-write variable over a distributed system. While there has been interest in archival data storage systems in coding theory, e.g. \cite{Dimakis, Tamo_Barg}, the shared memory emulation setup differs in the following aspects.
\begin{enumerate}
\item \emph{Asynchrony:} a new version of the data may not arrive at all servers simultaneously.
\item \emph{Decentralized nature:} there is no single encoder that is aware of all versions of the data simultaneously, and a server is not aware of which versions are received by other servers.
\end{enumerate}
Shared memory emulation algorithms use \emph{quorum-based} techniques to deal with the asynchrony. Specifically, in a system of $n$ servers that tolerates $f$ crash-stop server failures \cite{Lynch1996}, a write operation sends a request to all servers and waits for the acknowledgments from any $c_W\leq n-f$ servers for the operation to be considered \emph{complete}. Similarly, a read operation sends a request to all servers and waits for the response of any $c_R \leq n-f$ servers. This strategy ensures that, for every complete version, there are at least $c \coloneqq c_W+c_R-n$ servers that received that version and responded to the read request. Shared memory emulation algorithms require that the \emph{latest} complete version, or a later version, must be recoverable by the read operation. This requirement is referred to as consistency\footnote{More specifically, our decoding requirement is inspired by the consistency criterion known as \emph{atomicity}, or \emph{linearizability}.} in distributed systems \cite{Lynch1996}.
In a replication-based protocol, where servers store an uncoded copy of the latest version that they receive, selecting $c_W$ and $c_R$ such that $c_W+c_R>n$ ensures consistency \cite{Decandia, Lynch1996}. Since for every complete write operation, there are $c$ servers that store the value of that write operation and respond to a given read operation, it seems natural to use a maximum distance separable (MDS) code of dimension $c$ to obtain storage cost savings over replication-based algorithms. However, the use of erasure coding in asynchronous distributed systems where consistency is important leads to interesting algorithmic and coding challenges. This is because, when erasure coding is used, no single server stores the data in its entirety; for instance, if an MDS code of dimension $c$ is used, each server only stores a fraction of $1/c$ of the entire value. Therefore, for a read operation to get a version of the data, at least $c$ servers must send the codeword symbols corresponding to this version. As a consequence, when a write operation updates the data, a server cannot delete the symbol corresponding to the old version before symbols corresponding to a new version propagate to a sufficient number of servers. That is, servers cannot simply store the latest version they receive; they have to store older versions as shown in Fig. \ref{MDS}.
\begin{figure}[h]
\centering
\includegraphics[width=.65\textwidth,height=.25\textheight]{MDS}
\caption{An erasure-coded algorithm, with $n=6, c_W=5$ and $c_R=5$, in which servers store only the codeword symbol of the latest version they receive. The old value of the variable is $(x_1, x_2, x_3, x_4)$ and is updated to $(y_1, y_2, y_3, y_4)$. Since all servers may not receive the new codeword symbols simultaneously, a read operation may not be able to decode. \label{MDS}}
\end{figure}
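The failure mode depicted in Fig. \ref{MDS} can be reproduced in a few lines; the snapshot below is one reachable mid-update state (a toy sketch, with the same parameters as the figure):

```python
n, c_W, c_R = 6, 5, 5
c = c_W + c_R - n  # = 4: guaranteed overlap between a write and a read quorum

# Snapshot mid-update: servers 0-2 hold a symbol of the new version y,
# while servers 3-5 still hold a symbol of the old version x.
servers = ['y', 'y', 'y', 'x', 'x', 'x']

# A read hears back from some c_R = 5 servers; here, servers 0-4 respond.
responses = servers[:c_R]

# With an MDS code of dimension c, decoding needs c symbols of the *same*
# version; neither version reaches that threshold in this state.
assert max(responses.count(v) for v in 'xy') < c  # 3 < 4: the read is stuck
```

This is why servers cannot simply overwrite old symbols: they must retain symbols of older versions until a newer version has propagated sufficiently.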
Given that storing multiple versions is inevitable in consistent erasure-coded systems \cite{CadambeCoded_NCA, Dutta, multi_arxiv}, an important opportunity to improve memory efficiency is to exploit correlations between the various versions; this opportunity is the main motivation of our paper. We conduct our study through the \emph{multi-version coding} framework \cite{multi_arxiv}.
\color{black}
\subsection{Multi-Version Coding}
The \emph{multi-version coding} problem abstracts out algorithmic details of shared memory emulation while retaining the essence of consistent storage systems. Specifically, the multi-version coding problem \cite{multi_arxiv} considers a decentralized storage system with $n$ servers that tolerates $f$ crash-stop failures, where the objective is storing a message (read-write variable) of length $K$ bits with $\nu$ versions\footnote{We study the case where each version is $K$ bits long; although certain applications might benefit from dynamic allocation, several important applications use a fixed size for various versions of the value. Furthermore, popular key-value stores expose a fixed-sized value to the client and do not expose ``malloc''-type dynamic allocation.}. The versions are totally ordered; versions with higher ordering are referred to as later versions, and lower ordering as earlier versions. Each server receives an arbitrary subset of the $\nu$ versions, and encodes them. Because of the decentralized nature, a server is unaware of which versions are available at other servers. Inspired by the quorum-based protocols, we refer to any version that has reached at least $c_W$ servers as a \emph{complete} version. A decoder connects to any $c_R$ servers, and must recover the latest complete version, or a later version.
In \cite{multi_arxiv}, it was shown that maintaining consistency in asynchronous decentralized storage systems entails an inevitable storage cost. In multi-version coding, for any complete version and any decoder, there are at least $c$ servers that have received that version and responded to the decoder. In the classical erasure coding model, where $\nu=1$, the Singleton bound dictates that the storage cost per server is at least $K/c$. However, for $\nu > 1$, a server cannot simply store the codeword symbol corresponding to one version. In the case where the versions are independent, it was shown in \cite{multi_arxiv} that the storage cost per server is at least
$\frac{\nu}{c+\nu-1}K-\Theta(1)$. Since, for $\nu < c,$ we have $\frac{\nu}{c+\nu-1} \geq \frac{\nu}{2c}$, and since the per-server cost of storing a version is $K/c$, we may interpret the result as follows: when the versions are independent, to compensate for the asynchrony and maintain consistency, a server has to store an amount of data that is, from a cost perspective, tantamount to at least $\nu/2$ versions, each stored using an MDS code of dimension $c$.
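The interpretation above can be made concrete with a quick numerical sketch (the parameter values are illustrative):

```python
K, c, nu = 1_000_000, 4, 3           # illustrative parameters, with nu < c

replication = K                      # per-server cost of storing the full value
single_mds = K / c                   # Singleton bound for nu = 1
mv_bound = nu * K / (c + nu - 1)     # lower bound of [multi_arxiv], nu versions

assert single_mds < mv_bound < replication
# nu < c implies nu/(c+nu-1) >= nu/(2c): the cost is tantamount to storing
# at least nu/2 versions, each with an MDS code of dimension c.
assert mv_bound >= (nu / (2 * c)) * K
```

In this example the bound is $\nu K/(c+\nu-1) = K/2$, double the single-version MDS cost of $K/4$ but still half of replication.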
Although the study of \cite{multi_arxiv} focuses on coding-theoretic aspects, the insights obtained from this study have been incorporated into distributed algorithms \cite{ zorgui2018storage} and a lower-bound was developed in \cite{Cadambe_Wang_Lynch2016} on the storage cost of any read-write memory emulation algorithm by creating a worst-case execution mimicking the converse of \cite{multi_arxiv}. We believe that merging the coding-theoretic ideas in our paper and the algorithmic insights of \cite{zorgui2018storage} is an interesting area of future work.
\subsection{Contributions}
In this paper, we extend the scope of the multi-version coding problem to the case where the different versions are correlated. Specifically, we consider a decentralized storage system with $n$ servers storing $\nu$ possibly correlated versions of a message. We assume that each message version is $K$ bits long, and model the correlation between successive versions in terms of the bit-strings that represent them. Given a version, we assume that the subsequent version is uniformly distributed in the Hamming ball of radius $\delta_KK$ centered around that given version. Hence, this version can be represented using $\log Vol(\delta_K K, K)$ bits, where $Vol(\delta_K K, K)$ is the volume of the Hamming Ball of radius $\delta_K K$. We derive three main results for this system.
\begin{enumerate}
\item We first study the case where $\delta_K$ is not known and propose an effective scheme based on Reed-Solomon codes with a per-server storage cost of $\frac{K}{c}+ (\nu-1) \delta_K K (\log K + o(\log K))$ bits. This scheme obtains the $1/c$ \emph{erasure coding gain} for the first version and stores every subsequent version via delta coding with a cost of $\delta_K K (\log K+o(\log K))$ bits. Thus, this scheme is unable to simultaneously obtain the gains of both erasure and delta coding.
\item We then study the case where $\delta_K$ is known and derive a random binning based scheme with a per-server storage cost of $\frac{K}{c}+\frac{\nu -1}{c} {\log Vol(\delta_K K, K)} + o(\log K)$ bits. From a cost viewpoint, this scheme is tantamount to storing one version using erasure coding with a cost of $K/c$ and performing delta and erasure coding for the subsequent versions leading to a cost of $\frac{\log Vol(\delta_K K, K)}{c}$ bits per version. This scheme outperforms our first scheme as it simultaneously obtains the gains of both erasure coding and delta coding for subsequent versions. We also show the existence of linear codes that obtain this storage cost.
A cost of $\frac{K}{c}+\frac{\nu -1}{c} {\log Vol(\delta_K K, K)} + o(\log K)$ bits is readily achievable in a setting where every server receives all the versions, and each server is aware that the other servers have indeed received all the versions. In such a setting, each server can store a fraction of $1/c$ of the first version it receives using an MDS code of dimension $c$. For a new version, each server can store $1/c$ of the compressed difference between this version and the old version using an MDS code of dimension $c$. However, this scheme would fail in our setting because of the decentralized asynchronous nature. For instance, a server which receives versions $1$ and $3$ needs to compress version $3$ with respect to version $1$ and then encode it, but a different server that receives only versions $2$ and $3$ needs to compute the increment of version $3$ with respect to version $2$ and then encode it; from a decoder's viewpoint, the erasure-coded symbols stored at the two servers are not compatible. Furthermore, the decentralized nature implies that the server that receives versions $1$ and $3$ must store some data that would enable successful decoding no matter what versions are received by the other servers.
Handling the decentralized and asynchronous nature while achieving both the erasure and the delta coding gain is our main technical contribution.
\item We extend the lower bound of \cite{multi_arxiv} to the case of correlated versions and show our random binning scheme is within a factor $2$ of the information-theoretic optimum in certain regimes.
\end{enumerate}
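The way random binning resolves the compatibility problem of item 2 can be sketched in a toy simulation. This is an illustration of the general binning principle rather than the paper's exact construction; the hash-based `bin_index`, the toy length $K=12$, the radius-$1$ correlation, and the $20$ bin bits are all assumptions made for the sketch. The key point is that the encoder bins the new version \emph{without knowing} which earlier version the decoder will hold, and the decoder searches the Hamming ball around whichever version it has:

```python
import hashlib

K = 12                                    # toy message length in bits

def bin_index(word: int, rate_bits: int, salt: int) -> int:
    # A shared binning function: a deterministic hash into 2**rate_bits bins.
    digest = hashlib.sha256(f"{salt}:{word}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % (1 << rate_bits)

def decode(idx: int, rate_bits: int, salt: int, side_info: int):
    # Search the radius-1 Hamming ball around the side information for
    # words whose bin index matches; uniqueness holds with high probability.
    ball = [side_info] + [side_info ^ (1 << i) for i in range(K)]
    return [w for w in ball if bin_index(w, rate_bits, salt) == idx]

w1 = 0b101100110101                       # version 1 (decoder's side information)
w2 = w1 ^ (1 << 3)                        # version 2 differs from w1 in one bit

# The encoder bins w2 with ~log Vol(1, K) bits plus slack, independently of
# which earlier version any particular decoder happens to hold.
rate_bits, salt = 20, 7
idx = bin_index(w2, rate_bits, salt)
matches = decode(idx, rate_bits, salt, side_info=w1)
assert w2 in matches                      # the true version always survives
```

Unlike explicit delta coding against a specific older version, the bin index is meaningful against \emph{any} sufficiently close side information, which is what the decentralized setting requires.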
\subsection{Related Work}
The idea of exploiting the correlation to efficiently update, store or exchange data has a rich history of study starting from the classical Slepian-Wolf problem \cite{slepian1973noiseless,wyner1974recent, pradhan2003distributed, schonberg2004distributed, schonberg2007practical} for compressing correlated distributed sources. Encoding incremental updates efficiently is the motivation of the delta compression techniques used commonly in data storage. The notion of delta compression was refined in \cite{wang2015file,ma2012compression} by modeling the data updates using the edit distance; in particular, these references develop schemes that synchronize a small number of edits between a client and a server efficiently. While we note that the edit distance is relevant to applications such as collaborative text editing, we focus on the classical Hamming metric used more widely in coding theory for the sake of fundamental understanding and other applications that view the data as a table as in Apache Cassandra \cite{CassandradB}, and the writes update only a few entries of the table.
Exploiting correlations to improve efficiency in \emph{distributed} storage and caching settings has been of significant interest \cite{harshan2015compressed, milenkovic_updates, anthapadmanabhan2010update, mazumdar2014update, nakkiran2014fundamental, Prakash_Medard, Tulino_Correlated}. In \cite{harshan2015compressed} and \cite{milenkovic_updates}, coding schemes were developed that use as input, the old and the new version of the data, and output a code that can be used to store both versions efficiently. Capacity-achieving \emph{update-efficient} codes for binary symmetric and erasure channels were studied in \cite{anthapadmanabhan2010update,mazumdar2014update}, where a small change in the message leads to a codeword which is close to the original codeword in Hamming distance. In \cite{nakkiran2014fundamental}, the problem of minimizing the communication cost of updating a ``stale'' server that did not get an updated message, by downloading data from already updated servers, was studied and constructions and tight bounds were developed. A side information problem is presented in \cite{Prakash_Medard}, where the goal is to send an updated version to a remote entity that has as side information an arbitrary linear transform of an old version. The reference shows that the optimal encoding function is related to a maximally recoverable subcode of the linear transform associated with the side information.
Although our problem has common ingredients with previous works, our setting differs as it captures the asynchrony, decentralized nature, and the consistency requirements. An important outcome of our study is that correlation can reduce storage costs despite these requirements.
\subsection*{Organization of the paper}
This paper is organized as follows. Section \ref{Model} presents the multi-version coding problem and the main results of this paper. In Section \ref{Code Constructions}, we provide our code constructions. Section \ref{Lower Bound on the storage cost} provides a lower bound on the storage cost. Finally, conclusions are discussed in Section \ref{Conclusion}.
}
\section{Lower Bound on The Storage Cost}
\label{Lower Bound on the storage cost}
In this section, we derive a lower bound based on the idea of \cite{multi_arxiv} for the case where the versions are correlated and decoding is required to succeed with probability at least $1-\epsilon_K$. It was noted in \cite{multi_arxiv} that, for the case where $W_1 \rightarrow W_2 \rightarrow \ldots \rightarrow W_{\nu}$ form a Markov chain, where the $W_{i}\in\{0,1\}^{K}$ are Bernoulli-$1/2$ vectors with $W_{i}$ obtained from $W_{i-1}$ by independently flipping each bit with probability $p,$ the storage cost of a multi-version code is $\frac{H(W_1,W_2,\ldots,W_{\nu})}{\nu+c-1} = \frac{K}{c+\nu-1} + \frac{(\nu-1) K H(p)}{\nu+c-1}.$ We note that the remark of \cite{multi_arxiv} is applicable when $\delta_K=p,$ that is, when $\delta_K$ is constant: the law of large numbers implies that, in both the i.i.d. case and the case of constant $\delta_K$, most realizations of $W_{i}$ are concentrated in a thin spherical shell of radius approximately $pK$ around $W_{i-1}$. We are interested in the more general case where $\delta_{K}$ is not necessarily a constant. The ideas of \cite{multi_arxiv} can be extended to this more general case to show a lower bound of $\frac{K}{c+\nu-1}+\frac{(\nu-1)\log Vol(\delta_K K, K)}{c+\nu-1}$. For the case where $\delta_K=o(1)$, note that there is a large gap between this lower bound and our achievable schemes; specifically, all our achievable schemes have a storage cost of $\frac{K}{c}+o(K)$, whereas the lower bound is $\frac{K}{c+\nu-1} + o(K)$.
Since our goal is to evaluate the effectiveness of the scheme of Theorem \ref{}, we will develop a new lower bound of the form $\frac{K}{c}+\lambda \log Vol(\delta_K K, K) + o(\log Vol(\delta_K K, K)),$ where $\lambda$ is a constant. Once we obtain such a bound, the second-order coefficient $\lambda$ in the lower bound can be compared with the second-order terms of the coding schemes presented in the previous sections.
To keep our exposition clear, we will simply present the proof for $\nu=2$.
We begin with some lemmas. Recall that a ball in the Hamming space is a set of the form $\mathcal{B}(\vec{v},r)$.
\begin{lemma}[Gilbert-Varshamov Bound]
The set $\{0,1\}^{K}$ contains at least $\frac{2^{K}}{Vol(2r, K)}$ disjoint balls of radius $r$.
\end{lemma}
\begin{lemma}
Consider any function $\rho(W_1, W_2, \ldots, W_{\nu})$ where $\rho :\{0,1\}^{\nu K} \rightarrow S$ for some set $S$. Then
$$E\bigg[E\big[ |\{(w_1, w_2,\ldots w_{\nu}): \rho(w_1, w_2,\ldots w_{\nu}) = \rho(W_1, W_2,\ldots W_{\nu})\}| \, \big| \, W_1, W_2 \ldots W_{\nu} \big]\bigg] \geq 2^{\nu K-\log_{2}|S|}.$$
\end{lemma}
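This lemma is a pigeonhole/convexity statement: the average preimage size seen by a uniformly drawn tuple is at least $2^{\nu K}/|S|$. A tiny numerical instance (with an arbitrary map $\rho$ chosen only for illustration) confirms it:

```python
# Tiny check of the counting lemma: for any map rho from 2^(nu*K) tuples
# into a set S, the expected size of the preimage containing a uniformly
# drawn tuple is at least 2^(nu*K - log2 |S|).
nu, K = 2, 3
num_tuples = 2 ** (nu * K)           # 64 tuples, indexed 0..63
S_size = 5
rho = lambda t: t % S_size           # an arbitrary (illustrative) map

sizes = {}
for t in range(num_tuples):
    sizes[rho(t)] = sizes.get(rho(t), 0) + 1

avg_preimage = sum(sizes[rho(t)] for t in range(num_tuples)) / num_tuples
assert avg_preimage >= num_tuples / S_size
```

By Cauchy-Schwarz, $\sum_i s_i^2 / \sum_i s_i \geq \sum_i s_i / |S|$ for any preimage sizes $s_i$, so the inequality holds for every choice of $\rho$, not just this one.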
Let $\mathcal{N}:\{0,1\}^{\nu K} \rightarrow \mathcal{P}(\{0,1\}^{K})$ be the set-valued function defined by
$\mathcal{N}(W_1, W_2, \ldots, W_{\nu}) = \{\vec{x}: W_{1}, W_{2},\ldots, W_{\nu} \in \mathcal{B}(\vec{x},(\nu-1)\delta_{K}K)\}.$
Note that $W_{i} \in \mathcal{N}(W_1, W_2, \ldots, W_{\nu})$ for all $i \in [\nu].$ Our main goal will be to show that
\begin{itemize}
\item From every set of $c$ servers in the system, in every state with a common version among the servers, a decoder can recover at least one element of $\mathcal{N}(W_1, W_2, \ldots, W_{\nu}),$ and
\item There exists a state such that,
\end{itemize}
In this section, we briefly summarize the idea of the proof for $\nu=2$.
\begin{proof}[Proof of Theorem \ref{Theorem 5}]
Consider any $\epsilon_K$-error $(n, c, \nu=2, 2^K, q)$ multi-version code, and consider the first $c \leq n$ servers $T=[c]$ for the converse. Suppose we have two versions $\mathbf W_{[2]}=(\mathbf W_1, \mathbf W_2)$. We partition the set of possible tuples $A_{\delta_K}$ as follows
\begin{align}
A_{\delta_K}= A_{\delta_K, 1} \cup A_{\delta_K, 2},
\end{align}
where $A_{\delta_K, 1}$ is the set of all tuples $(\mathbf{w}_1, \mathbf{w}_2) \in A_{\delta_K}$ for which we can decode the latest common version or a later version successfully for all $\mathbf S \in \mathcal{P}([\nu])^n$. $A_{\delta_K, 2}$ is the set of tuples where we cannot decode successfully at least for one state $\mathbf S \in \mathcal{P}([\nu])^n$, which can be expressed as follows
\begin{align}
A_{\delta_K, 2}= \bigcup_{\mathbf{S} \in \mathcal{P}([\nu])^n} A_{\delta_K, 2}^{(\mathbf S)},
\end{align}
where $A_{\delta_K, 2}^{(\mathbf S)}$ is the set of tuples for which we cannot decode successfully given a particular state $\mathbf S \in \mathcal{P}([\nu])^n$. Consequently, we have
\begin{align}
|A_{\delta_K, 2}| \leq \sum_{\mathbf S \in \mathcal{P}([\nu])^n} |A_{\delta_K, 2}^{(\mathbf S)}|.
\end{align}
For any state $\mathbf S \in \mathcal{P}([\nu])^n$, we require that
$P_e(\mathbf{S}, T) < \epsilon_K.$
Since all tuples in the set $A_{\delta_K}$ are equiprobable, we have
\begin{align}
P_e(\mathbf{S}, T)=\frac{|A_{\delta_K, 2}^{(\mathbf S)}|}{|A_{\delta_K}|},
\end{align} \color{black}
consequently, we have
\begin{align}
|A_{\delta_K, 1}|&=|A_{\delta_K}|-|A_{\delta_K, 2}| \notag \\
& \geq |A_{\delta_K}|-\sum_{\mathbf S \in \mathcal{P}([\nu])^n} |A_{\delta_K, 2}^{(\mathbf S)}| \notag \\
& > |A_{\delta_K}|-\sum_{\mathbf S \in \mathcal{P}([\nu])^n} \epsilon_K |A_{\delta_K}| \notag \\
& = |A_{\delta_K}|(1-\epsilon_K 2^{\nu n} ).
\end{align}
Suppose that $(\mathbf W_1, \mathbf W_2) \in A_{\delta_K, 1}.$ \color{black}Because of the decoding requirements, if $\mathbf W_1$ is available at all servers, then the decoder must be able to obtain $\mathbf W_1$, and if $\mathbf W_2$ is available at all servers, then the decoder must return $\mathbf W_2$. Therefore, there must be two states $\mathbf{S}_1, \mathbf{S}_2 \in \mathcal{P}([\nu])^n$ such that
\begin{itemize}
\item $\mathbf{S}_1$ and $\mathbf{S}_2$ differ only in the state of one server indexed by $B \in [c]$.
\item $\mathbf W_1$ can be recovered from the first $c$ servers in state $\mathbf{S}_1$ and $\mathbf W_2$ can be recovered from the first $c$ servers in $\mathbf{S}_2$.
\end{itemize}
Therefore, both $\mathbf W_1$ and $\mathbf W_2$ are decodable from the $c$ codeword symbols of the first $c$ servers in state $\mathbf{S}_1$, together with the codeword symbol of the $B$-th server in state $\mathbf{S}_2$. Since there are at most $c$ choices for the index $B$, we require the following
\begin{align*}
c \ q^{c+1} &\geq |A_{\delta_K, 1}| \\
& > |A_{\delta_K}|(1-\epsilon_K 2^{\nu n} ).
\end{align*}
Because $\mathbf{W}_{1}$ is uniformly distributed over the set of all $K$-length binary vectors and, given $\mathbf W_{1}$, $\mathbf W_{2}$ is uniformly distributed in a Hamming ball of radius $\delta_K K$ centered around $\mathbf W_1$, we have
\begin{align*}
|A_{\delta_K}| = 2^K Vol(\delta_K K, K).
\end{align*}
In this case, the storage cost can be lower-bounded as follows
\begin{align}
\log q \geq \frac{K+ \log Vol(\delta_K K, K)}{c+1}+\frac{\log (1-\epsilon_K 2^{\nu n} )-\log c}{c+1}.
\end{align}
\end{proof}
The lower bound in Corollary \ref{lower bound on binomial} follows since
\begin{align*}
Vol (\delta_K K, K)=\sum\limits_{j=0}^{\delta_K K} \binom{K}{j} &\geq \binom{K}{\delta_K K} \\
&= \prod_{i=0}^{\delta_K K-1} \frac{K-i}{\delta_K K-i} \\
&\geq \left( \frac{1}{\delta_K } \right) ^{\delta_K K}.
\end{align*}
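This chain of inequalities is easy to sanity-check numerically; a minimal sketch (the helper name \texttt{vol} is ours):

```python
from math import comb

def vol(r, K):
    # Hamming-ball volume: sum of C(K, j) for j = 0..r
    return sum(comb(K, j) for j in range(r + 1))

K, delta = 100, 0.1
r = int(delta * K)                     # radius delta_K * K = 10
assert vol(r, K) >= comb(K, r)         # keep only the largest term of the sum
assert comb(K, r) >= (1 / delta) ** r  # the (1/delta)^{delta_K K} bound
```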
\section{Lower Bound on The Storage Cost (Theorem \ref{Lower Bound Theorem})}
\label{Lower Bound on the storage cost}
In this section, we extend the lower bound on the per-server storage cost of \cite{multi_arxiv} for the case where we have correlated versions, and we require the probability of error to be at most $\epsilon$.
\begin{theorem}[Storage Cost Lower Bound]
\label{Lower Bound Theorem}
An $\epsilon$-error $(n, c, \nu, 2^K, q, \delta_K)$ multi-version code with correlated versions
such that $\mathbf W_{1}\rightarrow \mathbf W_{2} \rightarrow \ldots \rightarrow \mathbf W_{\nu}$ form a Markov chain, $\mathbf W_{m} \in [2^K]$ and given $\mathbf W_{m}$, $\mathbf W_{m+1}$ is uniformly distributed in a Hamming ball of radius $\delta_K K$ centered around $\mathbf W_{m}$ must satisfy
\begin{align}
\log q \geq \frac{K+(\nu-1) \log Vol (\delta_K K, K)}{c+\nu-1} +
\frac{\log (1-\epsilon 2^{\nu n})-\log \binom{c+\nu-1}{\nu} \nu!}{c+\nu-1},
\end{align}
where $\epsilon<1/2^{\nu n}$.
\end{theorem}
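Before turning to the proof, it is instructive to evaluate the right-hand side for concrete parameters. The sketch below is ours; the function name and parameter values are purely illustrative and chosen so that $\epsilon < 1/2^{\nu n}$:

```python
from math import comb, log2, factorial

def vol(r, K):
    # Hamming-ball volume: sum of C(K, j) for j = 0..r
    return sum(comb(K, j) for j in range(r + 1))

def storage_lower_bound(K, c, nu, n, delta_K, eps):
    # Right-hand side of the theorem, in bits per server.
    assert eps < 2.0 ** (-nu * n), "theorem requires eps < 1/2^(nu*n)"
    r = int(delta_K * K)
    main = (K + (nu - 1) * log2(vol(r, K))) / (c + nu - 1)
    slack = (log2(1 - eps * 2 ** (nu * n))
             - log2(comb(c + nu - 1, nu) * factorial(nu))) / (c + nu - 1)
    return main + slack

# e.g. K = 1000 bits, c = 5, nu = 2 versions, n = 10 servers
b = storage_lower_bound(K=1000, c=5, nu=2, n=10, delta_K=0.05, eps=2.0 ** -25)
```

For these illustrative values the bound sits strictly below both the replication cost $K$ and the simple-erasure-coding cost $\nu K/c$.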
\begin{proof}[Proof of Theorem \ref{Lower Bound Theorem} for $\nu=2$]
Consider any $\epsilon$-error $(n,c,2, 2^K, q, \delta_K)$ multi-version code, and consider the first $c$ servers, $T=[c]$, for decoding. We recall that the set of possible tuples $A_{\delta_K}$ is partitioned into disjoint sets as follows
\begin{align*}
A_{\delta_K}= A_{\delta_K, 1} \cup A_{\delta_K, 2},
\end{align*}
where $A_{\delta_K, 1}$ is the set of tuples $(\mathbf{w}_1, \mathbf{w}_2) \in A_{\delta_K}$ for which we can decode successfully for all $\mathbf S \in \mathcal{P}([\nu])^n$ and $A_{\delta_K, 2}$ is the set of tuples for which we cannot decode successfully for at least one state $\mathbf S \in \mathcal{P}([\nu])^n$, which can be expressed as follows
\begin{align}
A_{\delta_K, 2}= \bigcup_{\mathbf{S} \in \mathcal{P}([\nu])^n} A_{\delta_K, 2}^{(\mathbf S)},
\end{align}
where $A_{\delta_K, 2}^{(\mathbf S)}$ is the set of tuples for which we cannot decode successfully given a particular state $\mathbf S \in \mathcal{P}([\nu])^n$. Consequently, we have
\begin{align}
|A_{\delta_K, 2}| \leq \sum\limits_{\mathbf S \in \mathcal{P}([\nu])^n} |A_{\delta_K, 2}^{(\mathbf S)}|.
\end{align}
For any state $\mathbf S \in \mathcal{P}([\nu])^n$, we require the probability of error, $P_e$, to be at most $\epsilon$. Since all tuples in the set $A_{\delta_K}$ are equiprobable, we have
\begin{align}
P_e=\frac{|A_{\delta_K, 2}^{(\mathbf S)}|}{|A_{\delta_K}|} \leq \epsilon.
\end{align}
Therefore, we have
\begin{align}
\label{lower bound on A1}
|A_{\delta_K, 1}|&=|A_{\delta_K}|-|A_{\delta_K, 2}| \notag \\
& \geq |A_{\delta_K}|-\sum_{\mathbf S \in \mathcal{P}([\nu])^n} |A_{\delta_K, 2}^{(\mathbf S)}| \notag \\ & \geq |A_{\delta_K}|-\sum_{\mathbf S \in \mathcal{P}([\nu])^n} \epsilon |A_{\delta_K}| \notag \\
& \geq |A_{\delta_K}|(1-\epsilon 2^{\nu n} ).
\end{align}
Suppose that $(\mathbf W_1, \mathbf W_2) \in A_{\delta_K, 1}.$ Because of the decoding requirements, if $\mathbf W_1$ is available at all servers, the decoder must be able to obtain $\mathbf W_1$ and if $\mathbf W_2$ is available at all servers, the decoder must return $\mathbf W_2$. Hence, as shown in \cite{multi_arxiv}, there exist $\mathbf{S}_1, \mathbf{S}_2 \in \mathcal{P}([\nu])^n$ such that
\begin{itemize}
\item $\mathbf{S}_1$ and $\mathbf{S}_2$ differ only in the state of one server indexed by $B \in [c]$, and
\item $\mathbf W_1$ can be recovered from the first $c$ servers in state $\mathbf{S}_1$ and $\mathbf W_2$ can be recovered from the first $c$ servers in $\mathbf{S}_2$.
\end{itemize}
Therefore both $\mathbf W_1$ and $\mathbf W_2$ are decodable from the $c$ codeword symbols of the first $c$ servers in state $\mathbf{S}_1$, and the codeword symbol of the $B$-th server in state $\mathbf{S}_2$. Thus, we require the following
\begin{align}
c \ q^{c+1} &\geq |A_{\delta_K, 1}| \notag \\ & \geq |A_{\delta_K}|(1-\epsilon 2^{\nu n} ).
\end{align}
We also have $|A_{\delta_K}| = 2^K Vol(\delta_K K, K)$. Therefore, the storage cost is lower-bounded as follows
\begin{align}
\log q \geq \frac{K+ \log Vol(\delta_K K, K)}{c+1}+\frac{\log (1-\epsilon 2^{\nu n} )-\log c}{c+1}.
\end{align}
\end{proof}
\label{proof for any number of versions}
\noindent We now provide a proof sketch for the case where $\nu \geq 3$.
\begin{proof}[Proof sketch of Theorem \ref{Lower Bound Theorem} for $\nu \geq 3$] Consider any $\epsilon$-error $(n, c, \nu, 2^K, q, \delta_K)$ multi-version code, and consider the first $c \leq n$ servers, $T=[c]$, for decoding. Suppose we have $\nu$ versions $\mathbf W_{[\nu]}=(\mathbf W_1, \mathbf W_2, \cdots, \mathbf W_{\nu})$ and that $\mathbf W_{[\nu]} \in A_{\delta_K, 1}$. We construct auxiliary variables $Y_{[c-1]}, Z_{[\nu]}, B_{[\nu]}$, where $Y_i, Z_j \in [q]$ for $i \in [c-1], j \in [\nu]$, $1 \leq B_1 \leq \cdots \leq B_{\nu} \leq c$, and a permutation $\Pi: [\nu] \rightarrow [\nu]$, such that there is a bijection from these variables to $A_{\delta_K, 1}$. In order to construct these auxiliary variables, we use the algorithm of \cite{multi_arxiv}.
Therefore, we have
\begin{align}
q^{c+\nu-1} \binom{c+\nu-1}{\nu} \nu!
& \geq |A_{\delta_K, 1}| \notag \\ &> |A_{\delta_K}|(1-\epsilon 2^{\nu n} ),
\end{align}
where the first inequality follows since $Y_i, Z_j \in [q]$, there are at most $\binom{c+\nu-1}{\nu}$ possibilities for $B_{[\nu]}$ and at most $\nu!$ possible permutations. We also have $|A_{\delta_K}| = 2^K Vol(\delta_K K, K)^{\nu-1}$. Therefore, the storage cost is lower-bounded as follows
\begin{align}
\log q \geq \frac{K+(\nu-1) \log Vol (\delta_K K, K)}{c+\nu-1} +
\frac{\log (1-\epsilon 2^{\nu n})-\log \binom{c+\nu-1}{\nu} \nu!}{c+\nu-1}.
\end{align}
\end{proof}
\section{System Model and Background of Multi-version Codes}
\label{Model}
We start with some notation. We use boldface for vectors. In the $n$-dimensional space over a finite field $\mathbb{F}_p$, the standard basis column vectors are denoted by $\{ \mathbf e_1, \mathbf e_2, \cdots, \mathbf e_n \}$. We denote the Hamming weight of a vector $\mathbf{x}$ by $w_H(\mathbf{x})$ and the Hamming distance between any two vectors $\mathbf{x_1}$ and $\mathbf{x_2}$ by $d_H(\mathbf{x_1}, \mathbf{x_2})$. For a positive integer $i$, we denote by $[i]$ the set $\lbrace 1, 2, \cdots, i\rbrace$. For any set of ordered indices $S=\lbrace s_1, s_2, \cdots, s_{|S|}\rbrace \subseteq \mathbb{Z}$, where $s_1< s_2 < \cdots < s_{|S|}$, and for any ensemble of variables $\lbrace X_i : i \in S\rbrace$, the tuple $(X_{s_1}, X_{s_2}, \cdots, X_{s_{|S|}})$ is denoted by $X_S$. We use $\log (.)$ to denote the logarithm to the base $2$ and $H(.)$ to denote the binary entropy function. We use the notation $[2^K]$ to denote the set of $K$-length binary strings. A \emph{code} of length $n$ and dimension $k$ over alphabet $\mathcal{A}$ consists of an injective mapping $\mathcal{C}:\mathcal{A}^{k} \rightarrow \mathcal{A}^{n}$. When $\mathcal{A}$ is a finite field and the mapping $\mathcal{C}$ is linear, then the code is referred to as a \emph{linear code}. We refer to a linear code $\mathcal{C}$ of length $n$ and dimension $k$ as an $(n,k)$ code. An $(n,k)$ linear code is called MDS if the mapping projected to \emph{any} $k$ co-ordinates is invertible.
\subsection{Multi-version Codes (MVCs)}
We now present a variant of the definition of the multi-version code \cite{multi_arxiv}, where we model the correlations. We consider a distributed storage system with $n$ servers that can tolerate $f$ crash-stop server failures. The system stores $\nu$ possibly correlated versions of a message where $\mathbf W_u \in [2^K]$ is the $u$-th version, $u \in [\nu]$, and $K$ is the message length in bits. The versions are assumed to be totally ordered, i.e., if $u>l$, $\mathbf W_u$ is interpreted as a \emph{later} version with respect to $\mathbf W_l.$ We assume that $\mathbf W_{1}\rightarrow \mathbf W_{2} \rightarrow \ldots \rightarrow \mathbf W_{\nu}$ form a Markov chain. $\mathbf W_{1}$ is uniformly distributed over the set of $K$-length binary vectors. Given $\mathbf W_{m}$, $\mathbf W_{m+1}$ is uniformly distributed in a Hamming ball of radius $\delta_K K$, $B(\mathbf W_m, \delta_K K)=\{ \mathbf W: d_H(\mathbf W, \mathbf W_m) \leq \delta_K K \}$, and the volume of the Hamming ball is given by
\begin{align}
Vol(\delta_K K, K)=|B(\mathbf W_m, \delta_K K)| = \sum\limits_{j=0}^{\delta_K K} \binom{K}{j}.
\end{align}
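This correlation model is straightforward to simulate: draw $\mathbf W_1$ uniformly, then at each step draw a uniform point of the Hamming ball around the current version (choose a weight $j$ with probability proportional to $\binom{K}{j}$, then a uniform support of that size). A sketch with our own function names:

```python
import random
from math import comb

def sample_in_ball(w, r):
    # Uniform sample from the radius-r Hamming ball around the bit tuple w.
    K = len(w)
    weights = [comb(K, j) for j in range(r + 1)]
    j = random.choices(range(r + 1), weights=weights)[0]
    out = list(w)
    for i in random.sample(range(K), j):
        out[i] ^= 1
    return tuple(out)

def sample_versions(K, nu, delta_K, seed=0):
    # One realisation of the chain W_1 -> W_2 -> ... -> W_nu.
    random.seed(seed)
    r = int(delta_K * K)
    chain = [tuple(random.randint(0, 1) for _ in range(K))]
    for _ in range(nu - 1):
        chain.append(sample_in_ball(chain[-1], r))
    return chain

def d_H(a, b):
    # Hamming distance between two equal-length tuples.
    return sum(x != y for x, y in zip(a, b))
```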
Given a correlation coefficient $\delta_K$, we denote the set of possible tuples $(\mathbf w_1, \mathbf w_2, \cdots, \mathbf w_{\nu})$ under our correlation model by $A_{\delta_K}$. We provide the formal definition next.
\begin{definition}[$\delta_K$-possible Set of Tuples]
\label{def:possible-set}
The set $A_{\delta_K}$ of $\delta_K$-possible tuples \\ $(\mathbf w_{1}, \mathbf w_{2}, \cdots, \mathbf w_{\nu})$ is defined as follows
\begin{align}
&A_{\delta_K} (\mathbf W_{1}, \mathbf W_{2}, \cdots, \mathbf W_{\nu})= \{ (\mathbf w_{1}, \mathbf w_{2}, \cdots, \mathbf w_{\nu}): \mathbf w_{1} \in [2^K], \mathbf w_{2} \in B(\mathbf w_1, \delta_K K), \notag \\ &\mathbf w_{3} \in B(\mathbf w_{2}, \delta_K K), \cdots, \mathbf w_{\nu} \in B(\mathbf w_{\nu-1}, \delta_K K) \}.
\end{align}
\end{definition}
\noindent We omit the dependency on the messages and simply write $A_{\delta_K}$, when it is clear from the context. Similarly, we can also define the set of possible tuples $\mathbf w_{F_1}$ given a particular tuple $\mathbf w_{F_2}$, $A_{\delta_K}(\mathbf W_{F_1}|{\mathbf w_{F_2}})$, where $F_1, F_2$ are two subsets of $[\nu]$.
\begin{remark}
Unlike the case of the twin binary symmetric source, in our model, the correlation coefficient $\delta_K$ is a function of $K$ in general and is not necessarily a constant. The more familiar expressions that involve entropies can be obtained when $\delta_K$ is equal to a constant $\delta$ using Stirling's inequality \cite{cover_thomas}. Specifically, for $\delta<1/2$, we have
\begin{align}
KH(\delta)-o(K) \leq \log Vol(\delta K, K) \leq K H(\delta).
\end{align}
\end{remark}
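Both bounds are easy to verify numerically: $\log Vol(\delta K, K) \leq K H(\delta)$ holds for every $\delta \leq 1/2$, while $Vol(\delta K, K) \geq \binom{K}{\delta K} \geq 2^{KH(\delta)}/(K+1)$ supplies the matching lower bound with an $o(K)$ correction. A quick check (helper names ours):

```python
from math import comb, log2

def vol(r, K):
    # Hamming-ball volume: sum of C(K, j) for j = 0..r
    return sum(comb(K, j) for j in range(r + 1))

def H(p):
    # binary entropy, in bits
    return -p * log2(p) - (1 - p) * log2(1 - p)

delta = 0.2
for K in (50, 200, 800):
    rate = log2(vol(int(delta * K), K)) / K
    assert rate <= H(delta) + 1e-12              # upper bound K*H(delta)
    assert rate >= H(delta) - log2(K + 1) / K    # o(K) correction term
```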
The $i$-th server receives an arbitrary subset of versions $\mathbf{S}(i) \subseteq [\nu]$ that denotes the \emph{state} of that server. We denote the system state by $\mathbf{S}= \lbrace \mathbf{S}(1), \mathbf{S}(2), \cdots, \mathbf{S}(n)\rbrace \in \mathcal{P}([\nu])^n$, where $\mathcal{P}([\nu])$ is the power set of $[\nu]$. For the $i$-th server with state $\mathbf S(i)=\lbrace s_1, s_2, \cdots, s_{|\mathbf S(i)|} \rbrace$, where $s_1 < s_2 < \cdots <s_{|\mathbf S(i)|}$, the server stores a codeword symbol generated by the encoding function $\varphi_{\mathbf S(i)}^{(i)}$ that takes an input $\mathbf W_{\mathbf S(i)}$ and outputs an element from the set $[q]$ that can be represented by $\log q$ bits. In state $\mathbf S \in \mathcal{P}([\nu])^n$, we denote the set of servers that have received $\mathbf W_{u}$ by $\mathcal A_{\mathbf S}(u)=\{j \in [n]: u \in \mathbf{S}(j)\}$ and a version $u \in [\nu]$ is termed \emph{complete} if $|\mathcal A_{\mathbf S}(u)| \geq c_W$, where $c_W \leq n-f$.
The set of complete versions in state $\mathbf S \in \mathcal P([\nu])^n$ is given by $\mathcal C_{\mathbf S} = \{u \in [\nu]: |\mathcal A_{\mathbf S}(u)| \geq c_W\}$ and the latest among them is denoted by $L_{\mathbf S}\coloneqq\max \ \mathcal C_{\mathbf S}$.
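In code, a system state is just a list of per-server version sets, and $\mathcal A_{\mathbf S}(u)$, $\mathcal C_{\mathbf S}$ and $L_{\mathbf S}$ become one-liners. A minimal sketch (function names are ours):

```python
def servers_with(state, u):
    # A_S(u): indices of servers whose state contains version u
    return {i for i, S_i in enumerate(state) if u in S_i}

def complete_versions(state, nu, c_W):
    # C_S: versions received by at least c_W servers
    return {u for u in range(1, nu + 1)
            if len(servers_with(state, u)) >= c_W}

def latest_complete(state, nu, c_W):
    # L_S = max C_S, or None when C_S is empty
    C = complete_versions(state, nu, c_W)
    return max(C) if C else None

# toy state: n = 4 servers, nu = 2 versions, c_W = 3
state = [{1, 2}, {1}, {1, 2}, {2}]
```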
The goal of the multi-version coding problem is to devise encoders such that for every decoder that connects to any arbitrary set of $c_R \leq n-f$ servers, the latest complete version or a later version is decodable with probability of error that is at most $\epsilon$ while minimizing the per-server worst-case storage cost. We express this formally next.
\begin{definition}[$\epsilon$-error $(n,c_W, c_R, \nu, 2^K, q, \delta_K)$ multi-version code (MVC)] \label{Multi-version Code Definition} An $\epsilon$-error \\ $(n,c_W, c_R, \nu, 2^K, q, \delta_K)$ multi-version code (MVC) consists of the following
\begin{itemize}
\item encoding functions
$
\varphi_{\mathbf S(i)}^{(i)} \colon {[2^K]}^{|\mathbf S(i)|} \to [q],\ \textrm{for every}\ i \in [n]\ \textrm{and every state}\ \mathbf S(i) \subseteq [\nu]$
\item decoding functions $\psi_{\mathbf{S}}^{(T)} \colon [q]^{c_R} \to [2^K] \cup \lbrace \textit{NULL} \rbrace,$
\end{itemize}
that satisfy the following
\begin{align*}
\Pr \left[ \psi_{\mathbf S}^{(T)} \left( \varphi_{\mathbf S(t_1)}^{(t_1)}, \cdots, \varphi_{\mathbf S(t_{c_R})}^{(t_{c_R})} \right)
=\mathbf W_m \ \text{for some} \ m \geq L_{\mathbf S}, \text{if} \ \mathcal C_{\mathbf S} \neq \emptyset \right] \geq 1-\epsilon,
\end{align*}
for every possible system state $\mathbf{S} \in \mathcal{P}([\nu])^n$ and every set of servers $T=\lbrace t_1, t_2, \cdots, t_{c_R} \rbrace \subseteq [n]$, where the probability is computed over all possible tuples of the message versions.
\end{definition}
We notice that the set of possible tuples $A_{\delta_K}$ can be partitioned into disjoint sets as follows
\begin{align}
A_{\delta_K}= A_{\delta_K, 1} \cup A_{\delta_K, 2},
\end{align}
where $A_{\delta_K, 1}$ is the set of tuples $(\mathbf{w}_1, \mathbf{w}_2, \cdots, \mathbf w_\nu) \in A_{\delta_K}$ for which we can decode successfully for all $\mathbf S \in \mathcal{P}([\nu])^n$ and $A_{\delta_K, 2}$ is the set of tuples for which we cannot decode successfully for at least one state $\mathbf S \in \mathcal{P}([\nu])^n$.
\begin{definition}[Storage Cost of a Multi-version Code]
The storage cost of an $\epsilon$-error \\ $(n, c_W, c_R, \nu, 2^K, q, \delta_K)$ MVC is equal
to $\alpha=\log q$ bits.
\end{definition}
We next present an alternative decoding requirement that is shown in \cite{multi_arxiv} to be equivalent to the multi-version coding problem defined above. For any set of servers $T \subseteq [n]$, note that $\max \cap_{i \in T} \mathbf S(i)$ denotes \emph{the latest common version among} these servers. The alternative decoding requirement, which we refer to as the multi-version coding problem with Decoding Requirement A, replaces $c_W, c_R$ by a single parameter $c$. It requires that the decoder connects to any $c$ servers and decodes the \emph{latest common version} among those $c$ servers, or a later version.
\begin{definition} [$\epsilon$-error $(n, c, \nu, 2^K, q, \delta_K)$ Multi-version code (MVC) with Decoding Requirement A]
An $\epsilon$-error $(n,c, \nu, 2^K, q, \delta_K)$ multi-version code (MVC) consists of the following
\begin{itemize}
\item encoding functions
$\varphi_{\mathbf S(i)}^{(i)} \colon {[2^K]}^{|\mathbf S(i)|} \to [q],\ \textrm{for every}\ i \in [n]\ \textrm{and every state}\ \mathbf S(i) \subseteq [\nu]$
\item decoding functions
$\psi_{\mathbf{S}}^{(T)} \colon [q]^{c} \to [2^K] \cup \lbrace \textit{NULL} \rbrace,$
\end{itemize}
that satisfy the following
\begin{align*}
\Pr \left[ \psi_{\mathbf{S}}^{(T)} \left( \varphi_{\mathbf S(t_1)}^{(t_1)}, \cdots, \varphi_{\mathbf S(t_c)}^{(t_c)} \right) = \mathbf W_m, \text{for some} \ m \geq \max \cap_{i \in T} \mathbf S(i), \ \text{if} \cap_{i \in T} \mathbf S(i) \neq \emptyset \right] \geq 1-\epsilon,
\end{align*}
for every possible system state $\mathbf{S} \in \mathcal{P}([\nu])^n$ and every set of servers $T=\lbrace t_1, t_2, \cdots, t_{c} \rbrace \subseteq [n]$, where the probability is computed over all possible tuples of the message versions.
\end{definition}
\noindent In this paper, we present our achievability results for Decoding Requirement A, and Lemma \ref{achievability lemma} of \cite{multi_arxiv} establishes the connection between the two decoding requirements.
\begin{lemma}
\label{achievability lemma}
Consider any positive integers $n,c_W,c_R,c$ such that $c=c_W+c_R-n$.
An $\epsilon$-error $(n, c, \nu, 2^K, q, \delta_K)$ MVC with decoding requirement A exists if and only if an $\epsilon$-error $(n, c_W, c_R, \nu, 2^K, q, \delta_K)$ MVC exists.
\end{lemma}
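The relation $c = c_W + c_R - n$ is the classical quorum-intersection count: any write quorum of size $c_W$ and any read quorum of size $c_R$ out of $n$ servers overlap in at least $c_W + c_R - n$ servers, and this is tight. A brute-force check (our own sketch):

```python
from itertools import combinations

def min_overlap(n, c_W, c_R):
    # Smallest |W ∩ R| over all write quorums W and read quorums R in [n].
    servers = range(n)
    return min(len(set(W) & set(R))
               for W in combinations(servers, c_W)
               for R in combinations(servers, c_R))
```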
\remove{Although the mathematical definition of multi-version codes expects that all the versions are available at the server for encoding simultaneously, in practical systems, the versions arrive at the server one after another.
Motivated by the requirement to capture scenarios where versions arrive one after another at the servers, reference \cite{multi_arxiv} defined the notion of \emph{causal} multi-version codes. Informally, the encoding function of a causal multi-version code has the following property: for any state $S$ and any $j \in S$, the symbol stored by a server in state $S$ can be expressed as a function of a codeword symbol created using versions in $S-\{j\}$ and the $j$-th version. We anticipate that causal multi-version codes are more amenable to applications in practical systems, as the encoding does not depend on the order of their arrival. We next provide the formal definition.
\begin{definition}\textbf{(Causal Codes)}
\label{Causal Codes Definition}
A multi-version code is called causal if it satisfies:
for all $S \subseteq [\nu], j \in S, i \in [n]$, there exists a function
\begin{align*}
\hat{\varphi}_{S, j}^{(i)}: [q] \times [2^K] \to [q],
\end{align*}
such that
\begin{align*}
& \varphi_S^{(i)}(\mathbf W_S)= \hat{\varphi}_{S, j}^{(i)} (\varphi_{S \setminus \lbrace j \rbrace}^{(i)} (\mathbf W_{S \setminus \lbrace j \rbrace}), \mathbf W_j).
\end{align*}
\end{definition}}
We now make some remarks interpreting the multi-version coding problem in terms of the underlying system and algorithm that it aims to model.
\begin{remark} \begin{enumerate}
\item[]
\item[A.] The system model has an implicit failure tolerance of $f$, as long as the quorum sizes $c_W, c_R$ are chosen such that $c_W, c_R \leq n-f$. This is because, for a version to be complete, a writer must contact $c_W$ servers, and a reader must obtain responses from $c_R$ servers; choosing $c_W, c_R \leq n-f$ ensures that write and read operations complete provided that the number of failed servers is no larger than $f$ (see \cite{Lynch1996, ABD} for more details).
\item[B.] The parameter $\nu$ can be interpreted as a measure of the number of concurrent writes in the system \cite{multi_arxiv,Cadambe_Wang_Lynch2016,CadambeCoded_NCA, Dutta}. In distributed algorithms, the ordering among the various write operations is determined by carefully constructed protocols, usually through the use of \emph{Lamport timestamps} (also known as logical timestamps) \cite{lamport1978time}. For instance, several protocols (e.g., \cite{ABD, Dutta, CadambeCoded_NCA}) involve a separate round of communication for a write to determine the Lamport clock (i.e., version number) before proceeding with dispersing the data. The multi-version coding problem abstracts these protocol details into the version number. However, it is worth noting that a ``later version'' does not necessarily arrive in the system after an earlier version; the two can be concurrent, and can in fact arrive at different nodes in different orders (e.g., see \cite{multi_arxiv, Cadambe_Wang_Lynch2016}). A later version may simply be viewed as one that could receive a higher Lamport timestamp in a protocol execution.
\item[C.] Unlike the study of \cite{multi_arxiv}, which considers $0$-error MVCs, we allow the probability of error to be at most $\epsilon$. See also Remark \ref{rem:zeroerror} for more details.
\end{enumerate}
\end{remark}
\subsection{Background - Replication and Simple Erasure Coding}
\label{sec:background}
Replication and simple MDS codes provide two natural MVC constructions. Suppose that the state of the $i$-th server is $\mathbf S(i)=\{s_1, s_2, \ldots, s_{|\mathbf S(i)|}\}$, where $s_1 < s_2< \ldots< s_{|\mathbf S(i)|}$.
\begin{itemize}
\item \emph{Replication-based MVCs:} In this scheme, each server only stores the latest version it receives. The encoding function is $\varphi_{\mathbf S(i)}^{(i)}(\mathbf W_{\mathbf S(i)}) = \mathbf W_{s_{|\mathbf S(i)|}}$, hence the storage cost is $K$.
\item \emph{Simple MDS-code-based MVC (MDS-MVC):} In this scheme, an $(n, c)$ MDS code is used to encode each version separately. Specifically, suppose that $\mathcal{C}:[2^K]\rightarrow [2^{K/c}]^{n}$ is an $(n,c)$ MDS code over alphabet $[2^{K/c}],$ and denote the $i$-th co-ordinate of the output of $\mathcal{C}$ by $\mathcal{C}^{(i)}:[2^K]\rightarrow[2^{K/c}]$. The encoding function is constructed as $\varphi_{\mathbf S(i)}^{(i)}(\mathbf W_{\mathbf S(i)}) = (\mathcal{C}^{(i)}(\mathbf W_{s_1}), \mathcal{C}^{(i)}(\mathbf W_{s_2}),\ldots, \mathcal{C}^{(i)}(\mathbf W_{s_{|\mathbf S(i)|}}))$. That is, each server stores one codeword symbol for each version it receives and the storage cost is $\nu \frac{K}{c}$.
\end{itemize}
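The MDS-MVC is easy to prototype with a systematic Reed-Solomon code over a small prime field: take the message symbols to be the values of a degree-$(c-1)$ polynomial at $x = 1, \dots, c$, store the evaluations at $x = 1, \dots, n$, and recover the message from any $c$ evaluations by Lagrange interpolation. The sketch below is ours, and the field size is purely illustrative:

```python
P = 8191  # a prime field size, purely illustrative

def lagrange_eval(pts, x):
    # Evaluate at x the unique degree-< len(pts) polynomial through pts (mod P).
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def rs_encode(msg, n):
    # Systematic (n, c) Reed-Solomon: the message is the codeword at x = 1..c.
    pts = list(zip(range(1, len(msg) + 1), msg))
    return [lagrange_eval(pts, x) for x in range(1, n + 1)]

def rs_decode(symbols, c):
    # symbols: any c pairs (server index in 1..n, stored value).
    return [lagrange_eval(symbols, x) for x in range(1, c + 1)]

# MDS-MVC: each server stores one symbol per version it has received
versions = [[17, 42, 99], [18, 42, 100]]          # nu = 2 toy versions, c = 3
codewords = [rs_encode(w, 5) for w in versions]   # n = 5 servers
```

Each server's cost is one symbol per received version, matching the $\nu \frac{K}{c}$ figure when $\log q = K/c$.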
An important outcome of the study of \cite{multi_arxiv} is that, when the different versions are independent, i.e., if $\delta_K = 1$, then the storage cost is at least $\frac{\nu K}{\nu+c-1}-\Theta(1)$. In particular, because $\frac{\nu}{\nu+c-1} \geq \frac{1}{2} \min(\frac{\nu}{c}, 1)$, the best possible MVC scheme is, for large $K$, at most twice as cost-efficient as the better among replication and simple erasure coding. In this paper, we show that replication and simple erasure coding are significantly inefficient if the different versions are correlated. Our schemes resemble simple erasure codes in their construction; however, we exploit the correlation between the versions to store fewer bits per server.
\section*{Acknowledgement}
The authors would like to thank A. Tulino and J. Llorca for their helpful comments.
\section{Conclusion}
\label{Conclusion}
In this paper, we have proposed multi-version codes to efficiently store correlated updates of data in a decentralized asynchronous storage system. These constructions are based on Reed-Solomon codes and random binning. An outcome of our results is that the correlation between versions can be used to reduce storage costs in asynchronous decentralized systems, even if there is no single server or client node who is aware of all data versions, in applications where consistency is important. In addition, our converse result shows that these constructions are within a factor of $2$ from the information-theoretic optimum in certain interesting regimes. The development of practical coding schemes for the case where $\delta_K$ is known a priori is an open research question, which would require non-trivial generalizations of previous code constructions for the Slepian-Wolf problem \cite{pradhan2003distributed, schonberg2004distributed}.
\subsection{Summary of Results}
\label{Main Results}
\label{sec:motivation_results}
In order to explain the significance of our results, summarized in Table \ref{table:Schemes Comparison}, we begin with a simple motivating scheme. Consider the MDS-MVC scheme of Section \ref{sec:background}. Assume that we use a Reed-Solomon code over a field $\mathbb{F}_{p}$ of characteristic two, i.e., with $p$ a power of $2$. The generator matrix of a Reed-Solomon code is usually expressed over $\mathbb{F}_{p}$. However, every element in $\mathbb{F}_{p}$ is a vector over $\mathbb{F}_{2}$, and a multiplication over the extension field $\mathbb{F}_{p}$ is a linear transformation over $\mathbb{F}_{2}.$ Therefore, the generator matrix of the Reed-Solomon code can be equivalently expressed over $\mathbb{F}_{2}$ as follows
$$ G = (G^{(1)}, G^{(2)}, \ldots, G^{(n)}),$$
where $G$ is a $K \times nK/c$ binary generator matrix, and $G^{(i)}$ has dimension $K \times K/c$. Because Reed-Solomon codes can tolerate $n-c$ erasures, every matrix of the form $(G^{(t_1)}, G^{(t_2)}, \ldots, G^{(t_c)}),$ where $t_1, t_2,\ldots, t_c$ are distinct elements of $[n]$, has full rank $K$ over $\mathbb{F}_2$. \\
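The step from $\mathbb{F}_p$ to $\mathbb{F}_2$ rests on the fact that multiplication by a fixed field element is an $\mathbb{F}_2$-linear map, hence a binary matrix. A minimal illustration in $\mathbb{F}_{2^3}$ (our own toy code; the modulus $x^3 + x + 1$ is one convenient irreducible choice):

```python
M = 0b1011  # x^3 + x + 1, irreducible over F_2: defines F_8

def gf_mul(a, b):
    # Multiplication in F_8 via shift-and-add with reduction mod M.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= M
        b >>= 1
    return r

def mul_matrix(g):
    # 3x3 binary matrix of the F_2-linear map v -> g*v;
    # column j is g * x^j, written in the basis {1, x, x^2}.
    cols = [gf_mul(g, 1 << j) for j in range(3)]
    return [[(cols[j] >> i) & 1 for j in range(3)] for i in range(3)]

def apply_matrix(T, v):
    # Multiply the binary matrix T by the bit vector of v over F_2.
    bits = [(v >> i) & 1 for i in range(3)]
    out = [sum(T[i][j] & bits[j] for j in range(3)) % 2 for i in range(3)]
    return sum(b << i for i, b in enumerate(out))

# multiplication by any g agrees with its binary-matrix representation
for g in range(8):
    for v in range(8):
        assert apply_matrix(mul_matrix(g), v) == gf_mul(g, v)
```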
We now describe a simple scheme that extends the MDS-MVC by exploiting the correlations and requires the knowledge of $\delta_{K}$. Suppose that the $i$-th server receives the set of versions $\mathbf S(i)=\lbrace s_1, s_2, \cdots, s_{|\mathbf S(i)|}\rbrace$, where $s_1 < s_2< \ldots< s_{|\mathbf S(i)|}$. The server encodes $\mathbf W_{s_{1}}$ using the binary code as $\mathbf W_{s_1} ^{\rm T} G^{(i)}$. For $\mathbf W_{s_{m}}$, where $m>1$, the server finds a difference vector $\mathbf y^{(i)}_{s_m, s_{m-1}}$ that satisfies the following
\begin{enumerate}
\item $\mathbf y^{(i) \ \rm T}_{s_m, s_{m-1}} G^{(i)} = (\mathbf W_{s_{m}}- \mathbf W_{s_{m-1}})^{\rm T} G^{(i)}$ and
\item $w_H(\mathbf y^{(i)}_{s_m, s_{m-1}}) \leq (s_m-s_{m-1}) \delta_K K.$
\end{enumerate}
Although it is not necessary that $\mathbf y^{(i)}_{s_m, s_{m-1}} = \mathbf W_{s_{m}}-\mathbf W_{s_{m-1}}$, the fact that $\mathbf W_{s_{m}}-\mathbf W_{s_{m-1}}$ satisfies these two conditions implies that the encoder can find at least one vector $\mathbf{y}^{(i)}_{s_m,s_{m-1}}$ satisfying these conditions. Since $w_H(\mathbf y^{(i)}_{s_m, s_{m-1}}) \leq (s_m-s_{m-1}) \delta_K K$, an encoder aware of $\delta_{K}$ can represent $\mathbf y^{(i)}_{s_m, s_{m-1}}$ by $\log Vol((s_m-s_{m-1})\delta_K K, K)$ bits. The first condition implies that a decoder that connects to the $i$-th server can obtain $\mathbf W_{s_m}^{\rm T}G^{(i)}$ by applying $\mathbf W_{s_1}^{\rm T}G^{(i)}+ \sum\limits_{\ell=2}^{m}\mathbf y^{(i) \ \rm T}_{s_\ell, s_{\ell-1}} G^{(i)} = \mathbf W_{s_m}^{\rm T}G^{(i)}$. Therefore, from any subset $\{t_1,t_2,\ldots,t_{c}\}$ of $c$ servers, for any common version $s_{m}$ among these servers, a decoder can recover $\mathbf W_{s_m}^{\rm T}G^{(t_{1})}, \mathbf W_{s_m}^{\rm T}G^{(t_{2})},\ldots,\mathbf W_{s_m}^{\rm T}G^{(t_{c})}$ from these servers and can therefore recover $\mathbf W_{s_{m}}$.
The worst-case storage cost of this scheme is obtained when each server receives all the $\nu$ versions, which results in a storage cost of $\frac{K}{c}+ (\nu-1) \log Vol(\delta_K K, K).$ Intuitively, the above scheme stores the first version using erasure coding - $K/c$ bits - and the remaining $(\nu-1)$ versions using delta coding, which adds a storage cost of $\log Vol(\delta_K K, K)$ bits per version.
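The per-server mechanics can be checked end-to-end with the trivial choice $\mathbf y^{(i)}_{s_m, s_{m-1}} = \mathbf W_{s_m} - \mathbf W_{s_{m-1}}$, which always satisfies both conditions. Over $\mathbb{F}_2$ the difference is an XOR, and the telescoping sum reproduces $\mathbf W_{s_m}^{\rm T} G^{(i)}$. The sketch below (random binary $G^{(i)}$, toy sizes, all names ours) verifies exactly that identity, not the full any-$c$-servers decodability:

```python
import random

random.seed(1)
K, cols = 24, 8   # toy sizes; cols plays the role of K/c

G = [[random.randint(0, 1) for _ in range(cols)] for _ in range(K)]

def encode(w):
    # w^T G over F_2
    return tuple(sum(w[k] & G[k][j] for k in range(K)) % 2
                 for j in range(cols))

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

# three versions; each update flips a few coordinates (small Hamming weight)
W1 = [random.randint(0, 1) for _ in range(K)]
W2 = xor(W1, [1 if k < 3 else 0 for k in range(K)])
W3 = xor(W2, [1 if 5 <= k < 7 else 0 for k in range(K)])

# the server stores the encoded first version plus encoded differences
stored = [encode(W1), encode(xor(W2, W1)), encode(xor(W3, W2))]

# a decoder accumulates the stored symbols to recover W3^T G
acc = list(stored[0])
for s in stored[1:]:
    acc = [a ^ b for a, b in zip(acc, s)]
assert tuple(acc) == encode(W3)
```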
\begin{table*}[t]
\renewcommand{\arraystretch}{1.2}
\centering
\begin{tabular}{ | p{3.7 cm} | p{7 cm} | p{4cm} | }
\hline
\textbf{Scheme} & \textbf{Worst-case Storage Cost} & \textbf{Regime} \\ \hline
Replication & $K$ & oblivious to $\delta_K$, $\epsilon=0$
\\ \hline
Simple erasure codes & $\nu \frac{K}{c}$& outperforms replication if $\nu<c$, oblivious to $\delta_K$, $\epsilon=0$ \\ \hline
Theorem \ref{Reed-Solomon Update-Efficient MVC Theorem} [Reed-Solomon update-efficient code] & $\frac{K}{c}+ (\nu-1) \delta_K K (\log K + o(\log K))$& asymptotically outperforms the above schemes for $\nu<c$ and $\delta_K=o(1/\log K)$, oblivious to $\delta_K$, $\epsilon=0$ \\\hline
Motivating scheme of Section \ref{sec:motivation_results} &$
\frac{K}{c}+(\nu -1) \log Vol(\delta_K K, K)$& not oblivious to $\delta_K$, $\epsilon=0$\\ \hline
Theorem \ref{Slepian-Wolf Storage Cost Theorem} [Random binning] &$
\frac{K}{c}+\frac{\nu -1}{c} {\log Vol(\delta_K K, K)} + o(\log K)$& asymptotically outperforms the above schemes for $\nu < c$, not oblivious to $\delta_K$, $\epsilon=1/\log K$ \\ \hline
Theorem \ref{Lower Bound Theorem} [Lower bound]
& $\frac{K}{c+\nu-1}+\frac{\nu-1}{c+\nu-1} \log Vol (\delta_K K, K) - \Theta(1)$
& applicable for all $\delta_K$
\\ \hline
\end{tabular}
\caption{Storage cost.}\label{table:Schemes Comparison}
\end{table*}
The scheme we described above motivates the following two questions.\\
\emph{\textbf{Q1}: Can we obtain an MVC construction that is oblivious to the parameter $\delta_K$ with a storage cost of $\frac{K}{c}+ (\nu-1) \log Vol(\delta_K K, K)$?}\\
\emph{\textbf{Q2}: Can we use erasure coding to achieve a storage cost of $\frac{K}{c}+ \frac{\nu-1}{c} \log Vol(\delta_K K, K)$?}
\noindent In Section \ref{Update-efficient Multi-version Codes}, we provide Theorem \ref{Reed-Solomon Update-Efficient MVC Theorem} that answers $Q1$ by developing a $0$-error Reed-Solomon based scheme that does not require the knowledge of $\delta_K$ and obtains the erasure coding gain of $1/c$ for the first version available at a server and stores the subsequent versions via delta coding.
In Section \ref{Slepian-Wolf Based Multi-version Codes}, we provide
Theorem \ref{Slepian-Wolf Storage Cost Theorem} that gives a positive answer to $Q2$ by showing the existence of an $\epsilon$-error storage efficient scheme that obtains the erasure coding factor of $1/c$, not only for the first version, but also for the subsequent versions. Moreover, the scheme is able to harness the delta compression gain.
Finally, in Section \ref{Lower Bound on the storage cost},
Theorem \ref{Lower Bound Theorem} provides a lower bound on the per-server storage cost which implies that for $\nu < c$, constant $\delta_K=\delta$ and $\epsilon=2^{-o(K)}$, the achievable scheme of Theorem \ref{Slepian-Wolf Storage Cost Theorem} is asymptotically at most twice the lower bound. We notice that the regime where $\nu<c$ is interesting as the degree of asynchrony is typically limited, as pointed out in \cite{chen2017giza}.
\section{Introduction}
Deep convolutional neural networks (CNNs) are particularly successful in certain tasks such as image classification. Such tasks generally entail the approximation of functions of a large number of variables, for instance the number of pixels which determine the content of an image. Learning a generic high-dimensional function is plagued by the \emph{curse of dimensionality}: the rate at which the generalisation error $\epsilon$ decays with the number of training samples $n$ vanishes as the dimensionality $d$ of the input space grows, i.e. $\epsilon(n) \sim n^{-\beta}$ with $\beta=O(1/d)$~\citep{wainwright2019high}. Therefore, the success of CNNs in classifying data whose dimension can be in the hundreds or more~\citep{hestness2017deep,spigler2020asymptotic}
points to the existence of some underlying structure in the task that CNNs can leverage. Understanding the structure of learnable tasks is arguably one of the most fundamental problems in deep learning, and also one of central practical importance---as it determines how many examples are required to learn up to a certain error.
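The $\beta = O(1/d)$ rate has a simple geometric origin: with $n$ uniform samples in $[0,1]^d$, the typical distance from a fresh query to its nearest training point scales as $n^{-1/d}$, so even for a Lipschitz target a local method can only improve at that rate. A toy numerical illustration (our own setup):

```python
import math
import random

def mean_nn_dist(n, d, trials=200, seed=0):
    # Average distance from a fresh query point to its nearest neighbour
    # among n uniform samples in [0, 1]^d.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pts = [[rng.random() for _ in range(d)] for _ in range(n)]
        q = [rng.random() for _ in range(d)]
        total += min(math.dist(q, p) for p in pts)
    return total / trials
```

Quadrupling $n$ shrinks the distance by roughly $4^{-1/d}$: a large gain for $d = 1$, a barely visible one for $d = 8$.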
A popular hypothesis is that learnable tasks are local and hierarchical: features at any scale are made of sub-features of smaller scales. Although many works have investigated this hypothesis~\citep{biederman1987recognition, poggio2017and, kondor2018generalization, zhou2018building, deza2020hierarchically, kohler2020rate, poggio2020theoretical, schmidt2020nonparametric, finocchio2021posterior, giordano2022inability}, there are no available predictions for the exponent $\beta$ for deep CNNs trained on tasks with a varying degree of locality or a truly hierarchical structure.
In this paper we perform such a computation in the overparameterised regime, where the width of the hidden layer of the neural networks diverges and the network output is rescaled so as to converge to that of a kernel method~\citep{jacot2018neural, lee2019wide}. Although the deep networks deployed in real scenarios do not generally operate in such regime, the connection with the theory of kernel regression provides a recipe for computing the decay of the generalisation error with the number of training examples. Namely, given an infinitely wide neural network, its generalisation abilities depend on the spectrum of the corresponding kernel~\citep{caponnetto2007optimal, bordelon2020spectrum}: the main challenge is then to characterise this spectrum, especially for deep CNNs whose kernels are rather cumbersome and defined recursively~\citep{arora2019exact}. This characterisation is the main result of our paper, together with the ensuing study of generalisation in deep CNNs.
\subsection{Our contributions}
More specifically, this paper studies the generalisation properties of deep CNNs with non-overlapping patches and no pooling (defined in~\autoref{sec:setup}, see~\autoref{fig:main-msg} for an illustration), trained on a target function $f^*$ by empirical minimisation of the mean squared loss. We consider the infinite-width limit (\autoref{sec:kernels}) where the model parameters change infinitesimally over training, thus the trained network coincides with the predictor of kernel regression with the Neural Tangent Kernel (NTK) of the network. Due to the equivalence with kernel methods, generalisation is fully characterised by the spectrum of the integral operator of the kernel: in simple terms, the projections on the eigenfunctions with larger eigenvalues can be learnt (up to a fixed generalisation error) with fewer training points (see, e.g.,~\citet{bach2021learning}).
\paragraph{Spectrum of deep hierarchical kernels.} Due to the network architecture, the hidden neurons of each layer depend only on a subset of the input variables, known as the receptive field of that neuron (highlighted by coloured boxes in~\autoref{fig:main-msg}, left panel). We find that the eigenfunctions of the NTK of a hierarchical CNN of depth $L\,{+}\,1$ can be organised into sectors $l\,{=}\,1,\dots,L$ associated with the hidden layers of the network (\autoref{sec:kernels}). The eigenfunctions of each sector depend only on the receptive fields of the neurons of the corresponding hidden layer: if we denote with $d_{\text{eff}}(l)$ the size of the receptive fields of neurons in the $l$-th hidden layer, then the eigenfunctions of the $l$-th sector are effectively functions of $d_{\text{eff}}(l)$ variables. We characterise the asymptotic behaviour of the NTK eigenvalues with the degree of the corresponding eigenfunctions (\autoref{th:eig-scaling}) and find that it is controlled by $d_{\text{eff}}(l)$. As a consequence, the eigenfunctions with the largest eigenvalues---the easiest to learn---are those which depend on small subsets of the input variables and have low polynomial degree. This is our main technical contribution and all of our conclusions follow from it.
\paragraph{Adaptivity to the spatial structure of the target.} We use the above result to prove that deep CNNs can adapt to the spatial scale of the target function (\autoref{sec:adaptivity}). More specifically, by using rigorous bounds from the theory of kernel ridge regression~\citep{caponnetto2007optimal} (reviewed in the first paragraph of~\autoref{sec:adaptivity}), we show that when learning with the kernel of a CNN and optimal regularisation, the decay of the error depends on the effective dimensionality of the target $f^*$---i.e., if $f^*$ only depends on $d_{\text{eff}}$ adjacent coordinates of the $d$-dimensional input, then $\epsilon\sim n^{-\beta}$ with $\beta\geq O(1/d_{\text{eff}})$ (\autoref{co:adaptivity}, see~\autoref{fig:main-msg} for a pictorial representation). We find a similar picture in ridgeless regression by using non-rigorous results derived with the replica method~\citep{bordelon2020spectrum,loureiro2021learning} (\autoref{sec:examples}). Notice that for targets which are spatially localised (or sums of spatially localised functions), the rates achieved with deep CNNs are much closer to the Bayes-optimal rates---realised when the architecture is fine-tuned to the structure of the target---than $\beta=O(1/d)$ obtained with the kernel of a fully-connected network. Moreover, we find that hierarchical functions generated by the output of deep CNNs are too rich to be efficiently learnable in high dimensions. We confirm these results through extensive numerical studies.
\begin{figure}[ht]
\centering
\begin{subfigure}[c]{0.58\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/main_msg_tree.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[c]{0.40\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/main_msg_curves.pdf}
\end{subfigure}
\caption{\textbf{Left:} Computational skeleton of a convolutional neural network of depth $L+1\,{=}\,4$ ($L\,{=}\,3$ hidden layers). The leaves of the graph (squares) correspond to input coordinates, and the root (empty circle) to the output. All other nodes represent (infinitely wide layers of) hidden neurons. We define as `meta-patches' (i.e. patches of patches) the sets of input variables that share a common ancestor node along the tree (such as the squares within each coloured rectangle). Each meta-patch coincides with the receptive field of the neuron represented by this common ancestor node, as indicated below the input coordinates. For each hidden layer $l\,{=}\,1,\dots,L$, there is a family of meta-patches having dimensionality $d_{\text{eff}}(l)$.
\textbf{Right:} Sketches of learning curves $\epsilon(n)$ obtained by learning target functions of varying spatial scale with the network on the left. More specifically, the target is a function of a $3$-dimensional patch for the blue curve, a $6$-dimensional patch for the orange curve, and the full input for the green curve. We predict (and confirm empirically) that both the decay of $\epsilon$ with $n$ (full lines) and the rigorous upper bound (dashed lines) are controlled by the effective dimensionality of the target.}
\label{fig:main-msg}
\end{figure}
\subsection{Related work}
\looseness=-1 The benefits of shallow CNNs in the kernel regime have been investigated by~\citet{bietti2022approximation, favero2021locality, misiakiewicz2021learning, xiao2022synergy, xiao2022eigenspace, geifman2022spectral}. \citet{favero2021locality}, and later~\citet{misiakiewicz2021learning, xiao2022synergy}, studied generalisation properties of shallow CNNs, finding that they are able to beat the curse of dimensionality on local target functions. However, these architectures can only approximate functions of single input patches or linear combinations thereof. \citet{bietti2022approximation}, in addition, includes generic pooling layers and begins to consider the role of depth by studying the approximation properties of kernels which are integer powers of other kernels. We generalise this line of work by studying CNNs of any depth with nonanalytic (ReLU) activations: we find that the depth and nonanalyticity of the resulting kernel are crucial for understanding the inductive bias of deep CNNs. This result should also be contrasted with the spectrum of the kernels of deep fully-connected networks, whose asymptotics do not depend on depth~\citep{bietti2021deep}. Furthermore, we extend the analysis of generalisation to target functions that have a hierarchical structure similar to that of the networks themselves.
\citet{geifman2022spectral} derive bounds on the spectrum of the kernels of deep CNNs. However, they consider only filters of size one in the first layer and do not include a theoretical analysis of generalisation. Instead, we allow filters of general dimension and give tight estimates of the asymptotic behaviour of eigenvalues, which
allow us to predict generalisation properties. \citet{xiao2022eigenspace} is the closest to our work, as it also investigates the spectral bias of deep CNNs in the kernel regime. However, it considers a different limit where both the input dimension and the number of training points diverge and does not characterise the asymptotic decay of generalisation error with the number of training samples.
\citet{paccolat2021isotropic, malach2021computational, abbe2022merged} use sparse target functions which depend only on a few of the input variables to prove sample complexity separation results between networks operating in the kernel regime and in the feature regime---where the change in parameters during training can be arbitrarily large. In this respect, our work shows that when the few relevant input variables are adjacent, i.e. the target function is spatially localised, deep CNNs achieve near-optimal performances even in the kernel regime.
\section{Notation and setup}
\label{sec:setup}
Our work considers CNNs with nonoverlapping patches and no pooling layers. These networks are fully characterised by the depth $L\,{+}\,1$ (or number of hidden layers $L$) and a set of filter sizes $\left\lbrace s_l\right\rbrace_l$ (one per hidden layer). We call such networks \emph{hierarchical} CNNs.
\begin{definition}[$L$-hidden-layer hierarchical CNN] \label{def:hierarchical-cnn}
Denote by $\sigma$ the normalised ReLU function, $\sigma(x) = \sqrt{2}\max(0,x)$. For each input $\bm{x}\in \mathbb{R}^d$~\footnote{Notice that all our results can be readily extended to image-like input signals $\lbrace x_{ij}\rbrace_{i,j}$ or tensorial objects with an arbitrary number of indices.} and $s$ a divisor of $d$, denote by $\bm{x}_i$ the $i$-th $s$-dimensional patch of $\bm{x}$, $\bm{x}_i\,{=}\,(x_{(i-1)\times s+1},\dots,x_{i\times s})$ for all $i\,{=}\,1,\dots,d/s$. The output of an $L$-hidden-layer hierarchical neural network is defined recursively as follows.
\begin{align}
&f^{(1)}_{h,i}(\bm{x}) = \sigma\left( \bm{w}^{(1)}_{h}\cdot\bm{x}_{i}\right),\,
\forall h\in[1,\dots,H_1],\,\forall i\in[1,\dots,p_1];\nonumber\\
&f^{(l)}_{h,i}(\bm{x}) = \sigma\left( \frac{1}{\sqrt{H_{l-1}}}\sum_{h'} \frac{\bm{w}^{(l)}_{h,h'}\cdot\left(\bm{f}^{(l-1)}_{h'}\right)_i}{\sqrt{s_l}}\right),\,
\forall h\in[1,\dots,H_l],\, i\in[1,\dots,p_l],\, l\in[2,\dots,L];\nonumber\\
&f(\bm{x}) = f^{(L+1)}(\bm{x}) = \frac{1}{\sqrt{H_L}} \sum_{h=1}^{H_L}\sum_{i=1}^{p_{L}} \frac{w^{(L+1)}_{h,i}f^{(L)}_{h,i}(\bm{x})}{\sqrt{p_{L}}}.
\end{align}
$H_l$ denotes the width of the $l$-th layer, $s_l$ the filter size ($s_1\,{=}\,s$),
$p_l$ the number of patches ($p_1\,{\equiv}\,p\,{=}\,d/s$).
$\bm{w}^{\scriptscriptstyle(1)}_h\in\mathbb{R}^{s_1}$,
$\bm{w}^{\scriptscriptstyle(l)}_{h,h'}\in\mathbb{R}^{s_l}$,
$w_{h,i}^{\scriptscriptstyle(L+1)}\in\mathbb{R}$.
\end{definition}
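As a concrete illustration of the definition above, the following is a minimal NumPy sketch of the forward pass (the function names, widths, and the Gaussian weight-initialisation helper are ours; only the architecture and the $1/\sqrt{H_{l-1}s_l}$, $1/\sqrt{H_L p_L}$ normalisations follow the definition):

```python
import math
import numpy as np

def normalised_relu(x):
    # sigma(x) = sqrt(2) * max(0, x)
    return np.sqrt(2.0) * np.maximum(0.0, x)

def init_weights(d, filter_sizes, widths, rng):
    """Standard-Gaussian weights for each layer (a hypothetical helper)."""
    weights = [rng.standard_normal((widths[0], filter_sizes[0]))]
    for l in range(1, len(filter_sizes)):
        weights.append(
            rng.standard_normal((widths[l], widths[l - 1], filter_sizes[l]))
        )
    p_L = d // math.prod(filter_sizes)  # patches left at the last hidden layer
    weights.append(rng.standard_normal((widths[-1], p_L)))
    return weights

def hierarchical_cnn(x, weights, filter_sizes):
    """Forward pass of the L-hidden-layer hierarchical CNN
    (non-overlapping patches, no pooling)."""
    s1 = filter_sizes[0]
    # first layer: apply each filter to every s1-dimensional patch
    f = normalised_relu(x.reshape(-1, s1) @ weights[0].T).T       # (H_1, p_1)
    for l in range(1, len(filter_sizes)):
        s = filter_sizes[l]
        H_prev, p_prev = f.shape
        f_patched = f.reshape(H_prev, p_prev // s, s)             # group s adjacent positions
        pre = np.einsum('abc,bdc->ad', weights[l], f_patched)     # (H_l, p_l)
        f = normalised_relu(pre / np.sqrt(H_prev * s))
    H_L, p_L = f.shape
    return float(np.sum(weights[-1] * f) / np.sqrt(H_L * p_L))
```

For instance, $d\,{=}\,12$ with filter sizes $(3,2,2)$ reproduces the computational skeleton of~\autoref{fig:main-msg}: $4$ patches after the first layer, $2$ after the second, $1$ after the third.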
Hierarchical CNNs are best visualised by considering their computational skeleton, i.e. the directed acyclic graph obtained by setting $H_l\,{=}\,1$ $\forall$ $l$ (example in \autoref{fig:main-msg}, left, with $L\,{=}\,3$ hidden layers and filter sizes $(s_1,s_2,s_3)\,{=}\,(3,2,2)$). Having nonoverlapping patches, the computational skeleton is an ordered tree, whose root is the output (empty circle at the top of the figure) and whose leaves are the input coordinates (squares at the bottom). All the other nodes represent neurons, and all the neurons belonging to the same hidden layer have the same distance from the input nodes. The tree structure highlights that the post-activations $f^{(l)}_{h,i}$ of the $l$-th layer depend only on a subset of the input variables, also known as the \emph{receptive field}.
Since the first layer of a hierarchical CNN acts on $s_1$-dimensional patches of the input, it is convenient to consider each $d$-dimensional input signal as the concatenation of $p$ $s$-dimensional patches, with $s\,{=}\,s_1$ and $p\times s\,{=}\,d$. We assume that each patch is normalised to $1$, so that the input space is a product of $p$ $s$-dimensional unit spheres (called multisphere in~\citet{geifman2022spectral}):
\begin{equation}
{\sf M}^p\mathbb{S}^{s-1}:=\prod_{i=1}^p \mathbb{S}^{s-1} \subset \mathbb{S}^{d-1}.
\end{equation}
Notice that the $s$-dimensional patches are also the receptive fields of the first-hidden-layer neurons (as in the blue rectangle in~\autoref{fig:main-msg} for $s\,{=}\,3$). In general, the receptive field of a neuron in the $l$-th hidden layer with $l\,{>}\,1$ is a group of ${\prod_{l'=2}^{l}} s_{l'}$ adjacent patches (as in the orange rectangle of~\autoref{fig:main-msg} for $l\,{=}\,2$, $s_2\,{=}\,2$ or the green rectangle for $l\,{=}\,3$, $s_3\,{=}\,s_2\,{=}\,2$), which we refer to as a \textit{meta-patch}. Due to the correspondence with the receptive fields, each meta-patch is identified with one path on the computational skeleton: the path which connects the output node to the hidden neuron whose receptive field coincides with the meta-patch. If such a hidden neuron belongs to the $l$-th hidden layer, the path is specified by a tuple of $L-l\,{+}\,1$ indices, $i_{l+1\to L+1}\,{:=}\, i_{L+1}\dots i_{l+1}$, where each index indicates which branch to select when descending from the root to the neuron node. With this notation, $\bm{x}_{i_{l+1\to L+1}}$ denotes one of the $p_{l}$ meta-patches of size $\prod_{\scriptscriptstyle l'\leq l} s_{l'}$. Because of the normalisation of the $s_1$-dimensional patches, each meta-patch has an \emph{effective dimensionality} which is lower than its size:
\begin{equation}\label{eq:effective-dim}
\bm{x}_{i_{2\to L+1}} \in \mathbb{S}^{s_1-1}\Rightarrow
\begin{cases}
d_{\text{eff}}(1):=\text{dim}(\bm{x}_{i_{2\to L+1}}) =(s_1-1),&\\
d_{\text{eff}}(l) := \text{dim}(\bm{x}_{i_{l+1\to L+1}}) = (s_1-1){\prod_{l'=2}^{l}} s_{l'},&\forall l\in[2,\dots,L].
\end{cases}
\end{equation}
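The effective dimensionality above reduces to a one-line computation (the function name is ours):

```python
import math

def effective_dim(l, filter_sizes):
    """d_eff(l) = (s_1 - 1) * s_2 * ... * s_l for a layer-l meta-patch."""
    s = list(filter_sizes)
    return (s[0] - 1) * math.prod(s[1:l])
```

For filter sizes $(2,2,2)$ this gives $d_{\text{eff}}\,{=}\,1, 2, 4$ for $l\,{=}\,1,2,3$: each $2$-dimensional patch carries a single angular degree of freedom, and meta-patches multiply it by the filter sizes of the intermediate layers.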
\section{Hierarchical kernels and their spectra}
\label{sec:kernels}
We turn now to the infinite-width limit $H_l\to\infty$: because of the aforementioned equivalence with kernel methods, this limit allows us to deduce the generalisation properties of the network from the spectrum of a kernel. In this section, we present the kernels corresponding to the hierarchical models of~\autoref{def:hierarchical-cnn} and characterise the spectra of the associated integral operators.
We consider specifically two kernels: the \emph{Neural Tangent Kernel} (NTK), corresponding to training all the network parameters~\citep{jacot2018neural}; and the \emph{Random Feature Kernel} (RFK), corresponding to training only the weights of the linear output layer~\citep{rahimi2007random, daniely2016toward}. In both cases, the kernel reads:
\begin{equation}\label{eq:kernel-trick}
\mathcal{K}(\bm{x},\bm{y}) = \sum_{\text{trained params }\theta} \partial_\theta f(\bm{x}) \partial_\theta f(\bm{y}).
\end{equation}
The NTK and RFK of deep CNNs have been derived previously by~\citet{arora2019exact}. In \autoref{app:kernel-lemmas} we report the functional forms of these kernels in the case of hierarchical CNNs. These kernels inherit the hierarchical structure of the original architecture, and their operations can be visualised again via the tree graph of~\autoref{fig:main-msg}. In this case, the leaves represent products between the corresponding elements of two inputs $\bm{x}$ and $\bm{y}$, i.e. $x_1 y_1$ to $x_d y_d$, and the root the kernel output $\mathcal{K}(\bm{x},\bm{y})$. The output can be built layer by layer by following the same recipe for each node: first sum the outputs of the previous layer which are connected to the present node, then apply a nonlinear function which depends on the activation function of the network. In particular, for each pair of inputs $\bm{x}$ and $\bm{y}$ on the multisphere ${\sf M}^p\mathbb{S}^{s-1}$, hierarchical kernels depend on $\bm{x}$ and $\bm{y}$ via the $p$ dot products between corresponding $s$-dimensional patches of $\bm{x}$ and $\bm{y}$. As a comparison, \citet{bietti2021deep} showed that the NTK and RFK of a fully-connected network of any depth depend on the full dot product $\bm{x}\cdot\bm{y}$, whereas those of a shallow CNN can be written as the sum of $p$ kernels, each depending on only one of the patch dot products~\citep{favero2021locality}.
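The tree-structured recipe can be sketched in a few lines for the RFK, using the degree-1 arc-cosine function (the ReLU random-feature kernel on the unit sphere). This is only an illustrative sketch: the per-layer normalisation here is our simplification, and the exact kernels are those of~\citet{arora2019exact} reported in \autoref{app:kernel-lemmas}.

```python
import numpy as np

def kappa1(t):
    """Degree-1 arc-cosine function: E[sigma(w.x) sigma(w.y)] over Gaussian w,
    for sigma(x) = sqrt(2) max(0, x) and unit-norm x, y with overlap t = x.y."""
    t = np.clip(t, -1.0, 1.0)
    return (np.sqrt(1.0 - t**2) + (np.pi - np.arccos(t)) * t) / np.pi

def hierarchical_rfk(x, y, filter_sizes):
    """Sketch of a hierarchical RFK: start from the p patch dot products,
    then repeatedly average over sibling nodes and apply kappa1, following
    the tree of Fig. 1. Normalisations are simplified."""
    s1 = filter_sizes[0]
    t = np.sum(x.reshape(-1, s1) * y.reshape(-1, s1), axis=1)  # patch overlaps
    k = kappa1(t)
    for s in filter_sizes[1:]:
        k = kappa1(k.reshape(-1, s).mean(axis=1))
    return float(k.mean())
```

Note that the result depends on $\bm{x}$ and $\bm{y}$ only through the patch dot products, as stated above, and equals $1$ when $\bm{x}\,{=}\,\bm{y}$ with unit-norm patches.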
Given the kernel, the associated integral operator reads
\begin{equation}\label{eq:kernel-integral}
\left(T_{\mathcal{K}} f\right)(\bm{x}) := \int_{{\sf M}^p\mathbb{S}^{s-1}} \mathcal{K}(\bm{x},\bm{y}) f(\bm{y})dp(\bm{y}),
\end{equation}
with $dp(\bm{x})$ denoting the uniform distribution of input points on the multisphere. The spectrum of this operator provides, via Mercer's theorem~\citep{mercer1909xvi}, an alternative representation of the kernel $\mathcal{K}(\bm{x},\bm{y})$ and a basis for the space of functions that the kernel can approximate. The asymptotic decay of the eigenvalues, in particular, is crucial for the generalisation properties of the kernel, as will be clarified in~\autoref{sec:adaptivity}. Since the input space is a product of $s$-dimensional unit spheres and the kernel depends on the $p$ scalar products between corresponding $s$-dimensional patches of $\bm{x}$ and $\bm{y}$, the eigenfunctions of $T_{\mathcal{K}}$ are products of spherical harmonics acting on the patches (see \autoref{app:harmonics} for definitions and the relevant background). For the sake of clarity, we limit the discussion in the main paper to the case $s\,{=}\,2$, where, since each patch $\bm{x}_i$ is entirely determined by an angle $\theta_i$, the multisphere ${\sf M}^p\mathbb{S}^{s-1}$ reduces to the $p$-dimensional torus and the eigenfunctions to $p$-dimensional plane waves $e^{i\bm{k}\cdot\bm{\theta}}$, with $\bm{\theta}\,{:=}\,(\theta_1,\dots,\theta_p)$ and label $\bm{k}\,{:=}\,(k_1,\dots,k_p)$. In this case the eigenvalues coincide with the $p$-dimensional Fourier transform of the kernel $\mathcal{K}\left(\cos{\theta_1},\dots,\cos{\theta_p}\right)$, and the large-$\bm{k}$ asymptotics are controlled by the nonanalyticities of the kernel~\citep{widom1963asymptotic}. The general case with patches of arbitrary dimension is presented in the appendix.
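This mechanism is easy to check numerically in the simplest case, $s\,{=}\,2$ with a single patch ($p\,{=}\,1$): the eigenvalues are the Fourier coefficients of the kernel seen as a function of the angle between the inputs. Below we use the shallow ReLU random-feature kernel, whose $|\theta|^3$-type nonanalyticity at $\theta\,{=}\,0$ yields $\Lambda_k\sim k^{-4}$, i.e. $k^{-(2\nu+d_{\rm eff})}$ with $\nu\,{=}\,3/2$ and $d_{\rm eff}\,{=}\,1$. This is our own sanity check, not an experiment from the paper; for this particular kernel the odd coefficients beyond $k\,{=}\,1$ vanish, so the fit uses even $k$.

```python
import numpy as np

def rfk_shallow(theta):
    """Shallow ReLU random-feature kernel on the circle, as a function of
    the angle theta between the two inputs."""
    t = np.cos(theta)
    return (np.sqrt(np.maximum(1.0 - t**2, 0.0))
            + (np.pi - np.arccos(np.clip(t, -1.0, 1.0))) * t) / np.pi

N = 2**14
theta = 2.0 * np.pi * np.arange(N) / N
# Fourier coefficients of the kernel = eigenvalues of T_K for p = 1, s = 2
lam = np.fft.rfft(rfk_shallow(theta)).real / N

ks = np.arange(16, 257, 2)  # even k in the asymptotic regime
slope = np.polyfit(np.log(ks), np.log(lam[ks]), 1)[0]
# expected: slope close to -(2*nu + d_eff) = -4
```

The fitted slope comes out very close to $-4$, consistent with the nonanalyticity argument of~\citet{widom1963asymptotic}.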
\begin{theorem}[Spectrum of hierarchical kernels]\label{th:eig-scaling}
Let $T_{\mathcal{K}}$ be the integral operator associated with a $d$-dimensional hierarchical kernel of depth $L+1$, $L\,{>}\,1$ and filter sizes ($s_1,\dots,s_L$) with ${s_1=2}$. Eigenvalues and eigenfunctions of $T_{\mathcal{K}}$ can be organised into $L$ sectors associated with the hidden layers of the kernel/network. For each $1\,{\leq}\,l\,{\leq}\,L$, the $l$-th sector consists of $(\textstyle\prod_{\scriptscriptstyle l'=1}^{\scriptscriptstyle l} s_{l'})$-\emph{local} eigenfunctions: functions of a single meta-patch $\bm{x}_{i_{l+1\to L+1}}$ which cannot be written as linear combinations of functions of smaller meta-patches. The labels $\bm{k}$ of these eigenfunctions are such that there is a meta-patch $\bm{k}_{i_{l+1 \to L+1}}$ of $\bm{k}$ with no vanishing sub-meta-patches, and all the $k_i$'s outside of $\bm{k}_{i_{l+1 \to L+1}}$ are $0$ (because the eigenfunction is constant outside of $\bm{x}_{i_{l+1 \to L+1}}$). The corresponding eigenvalue is degenerate with respect to the location of the meta-patch: we call it $\Lambda^{\scriptscriptstyle(l)}_{\bm{k}_{i_{l+1\to L+1}}}$. When $\|\bm{k}_{i_{l+1 \to L+1}}\|\to\infty$, with $k\,{=}\,\|\bm{k}_{i_{l+1 \to L+1}}\|$,
\begin{equation}\label{eq:eig-scaling-2d}
\Lambda^{(l)}_{\bm{k}_{i_{l+1\to L+1}}}
= \mathcal{C}_{2,l}\, k^{-2\nu -d_{\rm eff}(l)} + o\left(k^{-2\nu -d_{\rm eff}(l)}\right),
\end{equation}
with $\nu_{\rm NTK}=1/2,\, \nu_{\rm RFK}=3/2$ and $d_{\rm eff}$ the effective dimensionality of the meta-patches defined in~\autoref{eq:effective-dim}. $\mathcal{C}_{2,l}$ is a strictly positive constant for $l\,{\geq}\,2$ whereas for $l\,{=}\,1$ it can take two distinct strictly positive values depending on the parity of $k_{i_{2\to L+1}}$.
\end{theorem}
The proof is in \autoref{app:spectra}, together with the extension to the $s \geq 3$ case (\autoref{th:eig-scaling-app}). Some remarks are in order. It is useful to compare the spectrum in the theorem with the limiting cases of a deep fully-connected network and a shallow CNN. In the former case, the spectrum consists only of the $L$-th sector with $p_L\,{=}\,1$---the global sector. The eigenvalues decay as $\norm{\bm{k}}^{-2\nu -p}$, with $\nu$ depending ultimately on the nonanalyticity of the network activation function (see~\citet{bietti2021deep} or~\autoref{app:spectra}) and $p\,{=}\,d_{\text{eff}}(L)$ the effective dimensionality of the input. As a result, all eigenfunctions with the same $\norm{\bm{k}}$ have the same eigenvalue, even those depending on a subset of the input coordinates. For example, assume that all the components of $\bm{k}$ are zero but $k_1$, i.e. the eigenfunction depends only on the first $2$-dimensional patch: the eigenvalue is $O(k_1^{-2\nu-p})$. By contrast, for a hierarchical kernel, the eigenvalue is $O(k_1^{-2\nu-1})$, much larger than the former as $p\,{>}\,1$.
In the case of a shallow CNN the spectrum consists only of the first sector, so that each eigenfunction depends only on one of the input patches. In this case only one of the $\bm{k}$ can be non-zero, say $k_1$, and the eigenvalue is $O(k_1^{-2\nu-1})$. However, from~\citep{favero2021locality}, a kernel of this kind is only able to approximate functions which depend on one of the input patches or linear combinations of such functions. Instead, for a hierarchical kernel with $p_L\,{=}\,1$, the eigenfunctions of the $L$-th sector are supported on the full input space. Then, if $\Lambda_{\bm{k}}\,{>}\,0$ for all $\bm{k}$, hierarchical kernels are able to approximate any function on the multisphere, dispensing with the need for fine-tuning the kernel to the structure of the target function.
Overall, given an eigenfunction of a hierarchical kernel, the asymptotic scaling of the corresponding eigenvalue depends on the spatial structure of the eigenfunction support: more specifically, on the effective dimensionality of the smallest meta-patch containing all the variables that the eigenfunction depends on. In simple terms, the decay of an eigenvalue with $k$ is slower if the associated eigenfunction depends on a few adjacent patches---but not if the patches are far apart! This is a property of hierarchical architectures which use nonlinear activation functions at all layers. Such a feature disappears if all hidden layers apart from the first have polynomial~\citep{bietti2022approximation} or infinitely smooth~\citep{azevedo2015eigenvalues, scetbon2021spectral} activation functions, or if the kernels are assumed to factorise over patches as in~\citet{geifman2022spectral}.
\section{Generalisation properties and adaptivity to spatial structure}\label{sec:adaptivity}
In this section, we study the implications of the peculiar spectra of hierarchical NTKs and RFKs on the generalisation properties of the corresponding kernel methods, and prove a form of adaptivity to the spatial structure of the target function. We follow the classical analysis of~\citet{caponnetto2007optimal} for kernel ridge regression (see~\citet{bach2021learning, bietti2022approximation} for a modern treatment) and employ a spectral bias ansatz for the ridgeless limit~\citep{bordelon2020spectrum, spigler2020asymptotic, loureiro2021learning, tomasini22failure}.
\paragraph{Theory of kernel ridge regression and source-capacity conditions.} Given a set of $n$ training points $\left\lbrace (\bm{x}_\mu,y_\mu)\right\rbrace_{\mu=1}^n\stackrel{{{\scriptscriptstyle \rm i.i.d.}}}{\sim} p(\bm{x},y)$ for some probability density function $p(\bm{x},y)$ and a regularisation parameter ${\lambda}\,{>}\,0$, the kernel ridge regression estimate of the functional relation between $\bm{x}$'s and $y$'s, or \emph{predictor}, is
\begin{equation}\label{eq:predictor}
f_\lambda^n(\bm{x}) = \argmin_{f\in\mathcal{H}} \left\lbrace \frac{1}{n}\sum_{\mu=1}^n\left( f(\bm{x}_\mu) - y_\mu \right)^2 + \lambda \norm{f}_\mathcal{H}^2 \right\rbrace,
\end{equation}
\looseness=-1 where $\mathcal{H}$ is the Reproducing Kernel Hilbert Space (RKHS) of a (hierarchical) kernel $\mathcal{K}$. If $f(\bm{x})$ denotes the model from which the kernel was obtained via~\autoref{eq:kernel-trick}, the space $\mathcal{H}$ is contained in the span of the network features $\left\lbrace \partial_\theta f(\bm{x})\right\rbrace_{\theta}$ in the infinite-width limit. Alternatively, $\mathcal{H}$ can be defined via the kernel's eigenvalues $\Lambda_{\bm{k}}$ and eigenfunctions $Y_{\bm{k}}$: denoting with $f_{\bm{k}}$ the projections of a function $f$ onto the kernel eigenfunctions, then $f$ belongs to $\mathcal{H}$ if it belongs to the span of the eigenfunctions and
\begin{equation}
\left\lVert f \right\rVert_{\mathcal{H}}^2 = \sum_{\bm{k}\geq\bm{0}} (\Lambda_{\bm{k}})^{-1} \lvert f_{\bm{k}}\rvert^2 < + \infty.
\end{equation}
The performance of the kernel is measured by the generalisation error and its expectation over training sets of fixed size $n$ (denoted with $\mathbb{E}_n$)
\begin{equation}\label{eq:generalisation-real}
\epsilon( f^n_\lambda) = \int d\bm{x}dy\, p(\bm{x},y) \left( f^n_\lambda(\bm{x})-y\right)^2, \quad \overline{\epsilon}(\lambda, n) =\mathbb{E}_n\left[ \epsilon(f^n_\lambda)\right],
\end{equation}
or the \emph{excess} generalisation error, obtained by subtracting from $\overline{\epsilon}(\lambda, n)$ the error of the optimal predictor $f^*(\bm{x})\,{=}\,\textstyle \int dy\, p(\bm{x},y) y$. The decay of the error with $n$ can be controlled via two exponents, depending on the details of the kernel and the target function. Specifically, if $\alpha\,{\geq}\,1$ and $r\,{\geq}\,1-1/\alpha$ satisfy the following conditions,
\begin{align}\label{eq:source-capacity}
\text{capacity: }&\text{Tr}\left(T_{\mathcal{K}}^{1/\alpha}\right) = \sum_{\bm{k}\geq\bm{0}} (\Lambda_{\bm{k}})^{1/\alpha} < +\infty,\nonumber\\
\text{source: }&\norm{T_{\mathcal{K}}^{\frac{1-r}{2}}f^*}_{\mathcal{H}}^2 = \sum_{\bm{k}\geq\bm{0}} (\Lambda_{\bm{k}})^{-r} \lvert f^*_{\bm{k}}\rvert^2 < +\infty,
\end{align}
then, by choosing a $n$-dependent regularisation parameter $\lambda_n\sim n^{-\sfrac{\alpha}{(\alpha r + 1)}}$, one gets the following bound on generalisation~\citep{caponnetto2007optimal}:
\begin{equation}\label{eq:generalisation-bound}
\overline{\epsilon}(\lambda_n, n)-\epsilon(f^*) \leq \mathcal{C}'n^{-\frac{\alpha r}{\alpha r + 1}}.
\end{equation}
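The bound is a direct transcription of the two exponents: given $\alpha$ and $r$, the optimal ridge and the resulting rate follow immediately (a minimal sketch; the function name is ours):

```python
def krr_rate(alpha, r, n):
    """Caponnetto-De Vito rate: with lambda_n ~ n^{-alpha/(alpha r + 1)},
    the excess error is bounded by C' n^{-beta}, beta = alpha r/(alpha r + 1)."""
    beta = alpha * r / (alpha * r + 1.0)
    lam_n = n ** (-alpha / (alpha * r + 1.0))
    return lam_n, beta
```

For instance, $\alpha\,{=}\,r\,{=}\,1$ gives $\beta\,{=}\,1/2$ and $\lambda_n\,{\sim}\,n^{-1/2}$; larger $\alpha$ (faster spectral decay) or larger $r$ (smoother target relative to the kernel) both push $\beta$ towards $1$.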
\paragraph{Spectral bias ansatz for ridgeless regression.} The bound above is actually tight in the noisy setting, for instance when the labels are $y_\mu\,{=}\,f^*(\bm{x}_\mu) + \xi_\mu$ with $\xi_\mu$ Gaussian. In a noiseless problem where $y_\mu\,{=}\,f^*(\bm{x}_\mu)$, one expects the best performance in the ridgeless limit $\lambda\to0$, so that the rate of~\autoref{eq:generalisation-bound} is only an upper bound. In the ridgeless case---where the correspondence between kernel methods and infinitely-wide neural networks actually holds---there are unfortunately no rigorous results for the decay of the generalisation error. Therefore, we provide a heuristic derivation of the error decay based on a spectral bias ansatz. Consider the projections $f^*_{\bm{k}}$ of the target function $f^*$ onto the eigenfunctions $Y_{\bm{k}}$ of the student kernel~\footnote{We are again limiting the presentation to the case $s\,{=}\,2$ but the extension to the general case is immediate.} and assume that kernel methods learn only the $n$ projections corresponding to the highest eigenvalues. Then, if the decay of $f^*_{\bm{k}}$ with $\bm{k}$ is sufficiently slow, one has (recall that both $\lambda$ and $\epsilon(f^*)$ vanish in this setting)
\begin{equation}\label{eq:spectralbias}
\overline{\epsilon}(n) \sim \sum_{\bm{k} \text{ s.t. } \Lambda_{\bm{k}}<\Lambda(n)} \lvert f^*_{\bm{k}}\rvert^2,
\end{equation}
with $\Lambda(n)$ the value of the $n$-th largest eigenvalue of the kernel. This result can be derived using the replica method of statistical physics~(see~\citet{canatar2021spectral, loureiro2021learning, tomasini22failure} and~\autoref{app:spectralbias}) or by assuming that input points lie on a lattice~\citep{spigler2020asymptotic}.
These two approaches rely on the very same features of the problem, namely the asymptotic decay of $\Lambda_{\bm{k}}$ and $\lvert f^*_{\bm{k}}\rvert^2$---see also~\citet{cui2021generalization}. For instance, the capacity condition depends only on the kernel spectrum: $\alpha\geq1$ since $\text{Tr}\left(T_{\mathcal{K}}\right)$ is finite~\citep{scholkopf2002learning}; the specific value is determined by the decay of the ordered eigenvalues with their rank, which in turn depends on the scaling of $\Lambda_{\bm{k}}$ with $\bm{k}$. Similarly, the power-law decay of the ordered eigenvalues with the rank determines the scaling of the $n$-th largest eigenvalue, $\Lambda(n) \sim n^{-\alpha}$. The source condition characterises the regularity of the target function relative to the kernel and depends explicitly on the decay of $\lvert f^*_{\bm{k}}\rvert^2$ with $\bm{k}$, as does the right-hand side of~\autoref{eq:spectralbias}. The source condition was used by~\citet{bach2021learning} to prove that kernel methods are adaptive to the smoothness of the target function: the projections of smoother targets on the eigenfunctions display a faster decay with $\bm{k}$, thus allowing one to choose a larger $r$ and leading to better generalisation performance.
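The mechanics of the ansatz can be illustrated with a one-dimensional toy computation (our own sanity check, not an experiment from the paper): with $\Lambda_k\,{=}\,k^{-a}$ the $n$-th largest eigenvalue is $n^{-a}$, so the unlearned modes are exactly $k\,{>}\,n$, and a target with $|f^*_k|^2\,{=}\,k^{-b}$ gives $\overline{\epsilon}(n)\sim n^{-(b-1)}$.

```python
import numpy as np

def eps_ansatz(n, b, kmax=10**6):
    # sum of unlearned target modes: eps(n) = sum_{k > n} |f*_k|^2, |f*_k|^2 = k^{-b}
    k = np.arange(n + 1, kmax + 1, dtype=float)
    return np.sum(k ** (-b))

ns = np.array([100, 200, 400, 800])
eps = np.array([eps_ansatz(n, b=3.0) for n in ns])
slope = np.polyfit(np.log(ns), np.log(eps), 1)[0]  # expect -(b - 1) = -2
```

The fitted exponent matches $-(b-1)$ closely; the hierarchical rates below follow the same logic with the power laws of~\autoref{th:eig-scaling}.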
The following corollary of~\autoref{th:eig-scaling} (extension to $s_1\,{\geq}\,3$ presented in~\autoref{app:minimax},~\autoref{co:adaptivity-app}) shows that, when the spectrum can be partitioned as in~\autoref{th:eig-scaling}, $r$ also depends on the spatial scale of $f^*$, thus showing that hierarchical kernels display adaptivity to targets which depend only on a subset of the input variables. Specific examples of bounds are considered explicitly in~\autoref{sec:examples}.
\begin{corollary}[Adaptivity to spatial structure]\label{co:adaptivity}
Let $\bm{x},\bm{y}\in {\sf M}^p\mathbb{S}^{s-1}$ with $s\,{=}\,2$ and $\mathcal{K}$ be the RFK/NTK of a hierarchical CNN with $L$ hidden layers, filter sizes ($s_1,\dots,s_L$) with ${s_1=2}$, and $p_L\,{\geq}\,1$. Denote with $\Lambda_{\bm{k}}$ the eigenvalues of the corresponding integral operator $T_{\mathcal{K}}$. Consider a target function $f^*$ on ${\sf M}^p\mathbb{S}^{s-1}$. If there is $l\,{=}\,1,\dots,L$ such that $f^*$ depends only on some meta-patch $\bm{x}_{i_{l+1\to L+1}}$ of the input $\bm{x}$, then only the first $l$ sectors of the spectrum of $T_{\mathcal{K}}$ contribute to the source condition, i.e.
\begin{equation}\label{eq:adaptivity}
\norm{T_{\mathcal{K}}^{\frac{1-r}{2}}f^*}_{\mathcal{H}}^2 = \sum_{l'=1}^l \sum_{i_{l'+1\to L+1}}\sum_{\substack{\bm{k}_{i_{l'+1\to L+1}}}}\left(\Lambda^{(l')}_{\bm{k}_{i_{l'+1\to L+1}}}\right)^{-r} \left\lvert f^*_{\bm{k}_{i_{l'+1\to L+1}}} \right\rvert^2.
\end{equation}
The same holds if $f^*$ is a linear combination of such functions. Similarly, only the projections of $f^*$ onto the first $l$ sectors of the spectrum contribute to~\autoref{eq:spectralbias}.
\end{corollary}
\section{Examples and experiments}
\label{sec:examples}
\paragraph{Source-capacity bound for functions of controlled smoothness and $d_{\rm eff}$.}
Fix $l\in[1,\dots,L]$ and consider a target function $f^*$ which only depends on the meta-patch $\bm{x}_{i_{l+1\to L+1}}$. Combining the source condition~(\autoref{eq:adaptivity}) with the asymptotic scaling of eigenvalues~(\autoref{eq:eig-scaling-2d}), we get
\begin{equation}\label{eq:adaptivity-2d}
\norm{T_{\mathcal{K}}^{\frac{1-r}{2}}f^*}_{\mathcal{H}}^2 < +\infty \;\Leftrightarrow\; \sum_{\bm{k}} \|\bm{k}\|^{r(2\nu + d_\text{eff}(l))} \left\lvert f^*_{\bm{k}}\right\rvert^2 < +\infty,
\end{equation}
where $\nu=1/2$ ($3/2$) for the NTK (RFK) and $\bm{k}$ denotes the meta-patch $\bm{k}_{i_{l+1\to L+1}}$ without the subscript to ease notation. Since the eigenvalues depend on the norm of $\bm{k}$, \autoref{eq:adaptivity-2d} is equivalent to a finite-norm condition for all the derivatives of $f^*$ up to order $m\,{<}\,r\,(2\nu + d_{\text{eff}}(l))/2$, $\|\Delta^{m/2} f^*\|^2=\sum_{\bm{k}}\|\bm{k}\|^{2m} |f^*_{\bm{k}}|^2\,{<}\,+\infty$ with $\Delta$ denoting the Laplace operator. As a result, if $f^*$ has derivatives of finite norm up to the $m$-th, then the source exponent can be tuned to $r = 2m/(2\nu+d_{\text{eff}}(l))$, inversely proportional to the effective dimensionality of $f^*$. Since the exponent on the right-hand side of~\autoref{eq:generalisation-bound} is an increasing function of $r$, the smaller the effective dimensionality of $f^*$ the faster the decay of the error---hence hierarchical kernels are adaptive to the spatial structure of $f^*$.
To fully determine the error bound, we need to find the largest $\alpha$ satisfying the capacity condition, which coincides with the exponent describing the decay of eigenvalues sorted in descending order with the rank. This exponent is controlled by the $L$-th sector, which is the most numerous, thus $\alpha\,{=}\,1+2\nu/d_{\text{eff}}(L)$ (see~\autoref{ssec:spectral-decay-app}). As a result, from~\autoref{eq:generalisation-bound},
\begin{equation}\label{eq:ts-bound}
\overline{\epsilon}(n) \leq \mathcal{C}'n^{-\beta}\text{ with }\beta = \frac{2m\,(2\nu+d_{\rm eff}(L))}{2m\,(2\nu+d_{\rm eff}(L))+(2\nu+d_{\rm eff}(l))\,d_{\rm eff}(L)}.
\end{equation}
For instance, if $p_L\,{=}\,1$ then $d_{\rm eff}(L)\,{=}\,p\,{=}\,d/2$ (the number of $2$-dim. patches). Even when $p\,{\gg}\,1$, if $f^*$ depends only on a finite-dimensional meta-patch (or is a sum of such functions) the exponent $\beta$ converges to the finite value $2m/(2(m+\nu) + d_{\rm eff}(l))$. In stark contrast, using a fully-connected kernel to learn the same target results in $r = 2m/(2\nu+p)$, thus $\beta\,{=}\,2m/(2m + p)$---vanishing as $1/p$ when $p\,{\gg}\,1$, i.e. cursed by dimensionality.
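The behaviour of these exponents is easy to inspect numerically. The following Python sketch evaluates $\beta$ from the bound above for a hierarchical kernel and for the fully-connected rate $2m/(2m+p)$; the values of $m$, $\nu$ and $d_{\rm eff}(l)$ are purely illustrative, not tied to a specific experiment of the paper.

```python
def beta_hierarchical(m, nu, d_eff_l, d_eff_L):
    # exponent of the source-capacity bound for a hierarchical kernel
    num = 2 * m * (2 * nu + d_eff_L)
    return num / (num + (2 * nu + d_eff_l) * d_eff_L)

def beta_fully_connected(m, p):
    # same target learnt with a fully-connected kernel: beta = 2m / (2m + p)
    return 2 * m / (2 * m + p)

m, nu, d_eff_l = 2.0, 0.5, 3          # illustrative smoothness, NTK nu, patch dimension
for p in (4, 16, 64, 256):            # d_eff(L) = p when p_L = 1
    print(p, round(beta_hierarchical(m, nu, d_eff_l, p), 3),
          round(beta_fully_connected(m, p), 3))
```

As $p$ grows, the hierarchical exponent approaches the finite limit $2m/(2(m+\nu)+d_{\rm eff}(l))$ while the fully-connected one vanishes as $1/p$.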
\paragraph{Rates from spectral bias ansatz.} The same picture emerges when estimating the actual decay of the error from~\autoref{eq:spectralbias}. In this case $\Lambda(n)\sim n^{-\alpha}$, whereas $\sum_{\bm{k}}\|\bm{k}\|^{2m} |f^*_{\bm{k}}|^2\,{<}\,+\infty$ implies $|f^*_{\bm{k}}|^2 \lesssim \| \bm{k}\|^{-2 m - d_{\rm eff}(l)}$ for a target supported on a $d_{\rm eff}(l)$-dimensional meta-patch. Plugging such decays in~\autoref{eq:spectralbias} we obtain (details in~\autoref{ssec:sb-app})
\begin{equation}\label{eq:t-depth-one-s-depth-two}
\overline{\epsilon}(n) \sim n^{-\beta} \text{ with }\beta = \frac{2m}{2\nu+d_{\text{eff}}(l)} \frac{2\nu+d_{\rm eff}(L)}{d_{\rm eff}(L)}.
\end{equation}
Again, with $p_L\,{=}\,1$ and $d_{\rm eff}(L)\,{=}\,p$, the exponent remains finite for $p\,{\gg}\,1$. Notice that we recover the results of \citet{favero2021locality} by using a shallow local kernel if the target is supported on $s$-dimensional patches. These results show that hierarchical kernels strike a significantly better approximation-estimation trade-off than shallow local kernels, as they are able to approximate global functions of the input while not being cursed when the target function has a local structure.
\paragraph{Numerical experiments.} We test our predictions by training a hierarchical kernel (\textit{student}) on a random Gaussian function with zero mean and covariance given by another hierarchical kernel (\textit{teacher}). A learning problem is fully specified by the depths, sets of filter sizes, and smoothness exponents ($\nu$) of teacher and student kernels. In particular, the depth and the set of filter sizes of the teacher kernel control the effective dimension of the target function. \autoref{fig:learning_curves_main} shows the learning curves (solid lines) together with the predictions from~\autoref{eq:t-depth-one-s-depth-two} (dashed lines), confirming the picture emerging from our calculations. Panel (a) of \autoref{fig:learning_curves_main} shows a depth-four student learning depth-two, depth-three, and depth-four teachers. This student is not cursed in the first two cases and is cursed in the third one, which corresponds to a global target function. Panel (b) illustrates the curse of dimensionality with the effective input dimension $d_{\rm eff}(L)$ by comparing the learning curves of depth-three students learning global target functions with an increasing number of variables. All our simulations are in excellent agreement with the predictions of~\autoref{eq:t-depth-one-s-depth-two}. The bounds coming from~\autoref{eq:ts-bound} would display a slightly slower decay, as sketched in~\autoref{fig:main-msg}, right panel. All the details of numerical experiments are reported in \autoref{app:numerics}, together with additional results for $s_1\geq 3$~(\autoref{fig:ternary_app}), a comparison between the ridgeless and optimally-regularised cases~(\autoref{fig:noise_app}), and experiments with the CIFAR-10 dataset (\autoref{fig:real_data}).
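A minimal version of this teacher-student protocol can be sketched in a few lines of Python. The sketch below is only illustrative: it replaces the deep hierarchical kernels of the experiments with a single shallow dot-product kernel ($\kappa_1$ of the scalar product) on the circle, and the jitter values, sample sizes, and number of trials are arbitrary choices. The teacher is a Gaussian random function with covariance given by the kernel, and the student performs ridgeless kernel regression with the same kernel.

```python
import numpy as np

def kappa1(t):
    t = np.clip(t, -1.0, 1.0)
    return ((np.pi - np.arccos(t)) * t + np.sqrt(1.0 - t * t)) / np.pi

def gram(xa, xb):
    # dot-product kernel between points on the unit circle (angles xa, xb)
    return kappa1(np.cos(xa[:, None] - xb[None, :]))

rng = np.random.default_rng(0)
x_test = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)

def avg_test_error(n, trials=20):
    errs = []
    for _ in range(trials):
        x_tr = rng.uniform(0.0, 2 * np.pi, n)
        x_all = np.concatenate([x_tr, x_test])
        K = gram(x_all, x_all)
        # teacher: Gaussian random function with covariance K
        chol = np.linalg.cholesky(K + 1e-8 * np.eye(len(K)))
        f = chol @ rng.standard_normal(len(K))
        # student: (nearly) ridgeless kernel regression with the same kernel
        alpha = np.linalg.solve(K[:n, :n] + 1e-8 * np.eye(n), f[:n])
        pred = K[n:, :n] @ alpha
        errs.append(np.mean((pred - f[n:]) ** 2))
    return float(np.mean(errs))

print(avg_test_error(8), avg_test_error(64))   # the error drops markedly with n
```

In the matched setting of this sketch the test error decays with $n$ at the rate predicted by the theory for a one-dimensional input space.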
\paragraph{Curse of dimensionality for hierarchical targets.}
Notice that when the teacher kernel is a hierarchical RFK, the target function can be thought of as the output of a randomly-initialised, infinitely-wide, deep hierarchical CNN. The rate for this target is given by~\autoref{eq:t-depth-one-s-depth-two} with $l\,{=}\,L$ and $m\,{=}\,3/2$, i.e. $\beta\,{=}\,3/d_{\rm eff}(L)$, as we would have obtained for a global non-hierarchical target. Thus, the intrinsically hierarchical structure of these targets is not sufficient to beat the curse of dimensionality. In fact, because of the equivalence of the predictors of kernel ridgeless regression and Bayesian inference, the rate obtained in the case where teacher and student kernel match is Bayes-optimal. We conclude that it is impossible to beat the curse of dimensionality with any method when learning the output of an infinitely-wide, deep hierarchical network with random weights. Therefore, target functions of this kind cannot be a good model of learnable tasks.
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/main_ts_a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/main_ts_b.pdf}
\end{subfigure}
\caption{Learning curves for deep convolutional NTKs in a teacher-student setting. \textbf{a.} Depth-four student learning depth-two, depth-three, and depth-four teachers. \textbf{b.} Depth-three models cursed by the effective input dimensionality $d_{\rm eff}(L)$. The numbers inside brackets are the sequence of filter sizes of the kernels. Solid lines are the results of experiments averaged over 16 realisations, with the shaded areas representing the empirical standard deviations. The predicted asymptotic scaling $\epsilon \sim n^{-\beta}$ is reported as a dashed line for each curve. Details on the numerical experiments are reported in \autoref{app:numerics}.}
\label{fig:learning_curves_main}
\end{figure}
\section{Conclusions and outlook}
\looseness=-1 We have proved that deep CNNs can adapt to the spatial scale of the target function, thus beating the curse of dimensionality if the target depends only on local groups of variables. Yet, if considered as `teachers', they generate functions that cannot be learnt efficiently in high dimensions, even in the Bayes-optimal setting where the student is matched to the teacher. Thus, the architectures we considered are not good models of the hierarchical structure of real data which are efficiently learnable.
Enforcing a stronger notion of compositionality is an interesting endeavour for the future. Following~\citet{poggio2017and}, one may consider a much smaller family of functions of the form
\begin{equation}\label{eq:strong-compositonal-function}
f_{i_{l+1 \to L+1}}(\bm{x}_{i_{l+1 \to L+1}})=g_{i_{l+1 \to L+1}}\left(f_{i_l=1,\,i_{l+1\to L+1}}(\bm{x}_{i_l=1,\,i_{l+1 \to L+1}}),\dots, f_{i_l=s_l,\,i_{l+1\to L+1}}(\bm{x}_{i_l=s_l,\,i_{l+1\to L+1}})\right).
\end{equation}
From an information theory viewpoint, \citet{schmidt2020nonparametric, finocchio2021posterior} showed that it is possible to learn such functions efficiently. However, these arguments do not provide guarantees for any practical algorithm such as stochastic gradient descent. Moreover, preliminary results (not shown) assuming that the constituent functions $f$ are random Gaussian functions suggest that these tasks are not learnable efficiently by a hierarchical CNN in the kernel regime---see also~\cite{giordano2022inability}. It is unclear whether this remains true when the networks closely resemble the structure of~\autoref{eq:strong-compositonal-function} as in~\citet{poggio2017and}, or when the networks are trained in a regime where features can be learnt from data. Recently, for instance,~\citet{ingrosso2022data} have observed that under certain conditions locality can be learnt from scratch. It is not clear whether compositionality can also be learnt, beyond some very stylised settings~\citep{abbe2022merged}.
Finally, another direction to explore is the stability of the task toward smooth transformations or diffeomorphisms. This form of stability has been proposed as a key element to understanding how the curse of dimensionality is beaten for image datasets~\citep{bruna2013invariant, petrini2021relative}. Such a property can be enforced with pooling operations~\citep{bietti2019inductive,bietti2021sample}; therefore diagonalising the NTK in this case as well would be of high interest.
\section*{Acknowledgements}
We thank Massimo Sorella for pointing to us the relationship between differentiability and asymptotics of the Fourier coefficients. We also thank Antonio Sclocchi, Leonardo Petrini, Umberto Maria Tomasini and Marcello d'Abbicco for helpful discussions. This work was supported by a grant from the Simons Foundation (\#454953 Matthieu Wyart).
\section{Harmonic analysis on the sphere}\label{app:harmonics}
This appendix collects some introductory background on spherical harmonics and dot-product kernels on the sphere~\cite{smola2000regularization}. See~\cite{efthimiou2014spherical, atkinson2012spherical, bach2017breaking} for a complete description. Spherical harmonics are the restrictions to the sphere $\mathbb{S}^{s-1}\,{=}\,\lbrace \bm{x}\in\mathbb{R}^s\,|\,\lVert \bm{x} \rVert\,{=}\,1\rbrace$, with $\lVert \cdot \rVert$ denoting the L2 norm, of harmonic homogeneous polynomials. Given the polynomial degree $k\in\mathbb{N}$, there are $\mathcal{N}_{k,s}$ linearly independent spherical harmonics of degree $k$ on $\mathbb{S}^{s-1}$, with
\begin{equation}
\mathcal{N}_{k,s} = \frac{2k+s-2}{k}\binom{s+k-3}{k-1},\; \left\lbrace\begin{aligned} &\mathcal{N}_{0,s}=1\quad\forall s,\\ &\mathcal{N}_{k,s}\sim k^{s-2}\quad\text{for } k\gg 1. \end{aligned}\right.
\end{equation}
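The counting formula above is easy to check numerically. The short Python helper below (a hypothetical function name, introduced only for illustration) reproduces the familiar special cases $\mathcal{N}_{k,3}=2k+1$ and $\mathcal{N}_{k,2}=2$, and the asymptotic growth $k^{s-2}$:

```python
from math import comb

def n_harmonics(k, s):
    # N_{k,s}: number of linearly independent spherical harmonics
    # of degree k on the sphere S^{s-1}
    if k == 0:
        return 1
    return (2 * k + s - 2) * comb(s + k - 3, k - 1) // k

print(n_harmonics(5, 3), n_harmonics(5, 2))   # 11 (= 2k+1) and 2
```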
Thus, we can introduce a set of $\mathcal{N}_{k,s}$ spherical harmonics $Y_{k,\ell}$ for each $k$, with $\ell$ ranging in $1,\dots,\mathcal{N}_{k,s}$, which are orthonormal with respect to the uniform measure on the sphere $d\tau(\bm{x})$,
\begin{equation}\label{eq:sphere-dot-prod}
\left\langle Y_{k,\ell}, Y_{k,\ell'} \right\rangle_{\mathbb{S}^{s-1}} := \int_{\mathbb{S}^{s-1}} d\tau(\bm{x})\, Y_{k,\ell}(\bm{x}) Y_{k,\ell'}(\bm{x}) = \delta_{\ell,\ell'}.
\end{equation}
Because of the orthogonality of harmonic polynomials of different degrees, the set $\left\lbrace Y_{k,\ell} \right\rbrace_{k,\ell}$ is a complete orthonormal basis for the space of square-integrable functions on the $s$-dimensional unit sphere. Furthermore, spherical harmonics are eigenfunctions of the Laplace-Beltrami operator $\Delta$, which is nothing but the restriction of the standard Laplace operator to $\mathbb{S}^{s-1}$:
\begin{equation}\label{eq:slap-eigvals}
\Delta Y_{k,\ell} = -k(k+s-2)Y_{k,\ell}.
\end{equation}
The Laplace-Beltrami operator $\Delta$ can also be used to characterise the differentiability of functions $f$ on the sphere via the L2 norm of some power of $\Delta$ applied to $f$.
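For $s\,{=}\,3$, \autoref{eq:slap-eigvals} reduces, in the Legendre-polynomial representation introduced below, to the classical Sturm-Liouville equation $\frac{d}{dt}\big[(1-t^2)P_k'(t)\big]=-k(k+1)P_k(t)$. The following sketch verifies this eigenvalue relation with exact polynomial arithmetic (pure numpy, no assumptions beyond $s=3$):

```python
import numpy as np
from numpy.polynomial import Polynomial, legendre

def sturm_liouville_residual(k):
    # P_{k,3} is the standard Legendre polynomial P_k; build it in the power basis
    P = Polynomial(legendre.leg2poly([0.0] * k + [1.0]))
    w = Polynomial([1.0, 0.0, -1.0])          # weight (1 - t^2)
    lhs = (w * P.deriv()).deriv()             # d/dt[(1 - t^2) P_k'(t)]
    rhs = -k * (k + 1) * P                    # eigenvalue -k(k + s - 2) with s = 3
    t = np.linspace(-1.0, 1.0, 101)
    return float(np.max(np.abs(lhs(t) - rhs(t))))

print([sturm_liouville_residual(k) for k in range(6)])  # machine-precision residuals
```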
By fixing a direction $\bm{y}$ in $\mathbb{S}^{s-1}$ one can select, for each $k$, the only spherical harmonic of degree $k$ which is invariant under rotations that leave $\bm{y}$ unchanged. This particular spherical harmonic is, in fact, a function of $\bm{x}\cdot\bm{y}$ and is called the Legendre polynomial of degree $k$, $P_{k,s}(\bm{x}\cdot\bm{y})$ (also referred to as Gegenbauer polynomial). Legendre polynomials can be written as a combination of the orthonormal spherical harmonics $Y_{k,\ell}$ via the addition formula~\cite[Thm.~2.9]{atkinson2012spherical},
\begin{equation}
P_{k,s}(\bm{x}\cdot\bm{y}) = \frac{1}{\mathcal{N}_{k,s}} \sum_{\ell=1}^{\mathcal{N}_{k,s}} Y_{k,\ell}(\bm{x})Y_{k,\ell}(\bm{y}).
\end{equation}
Alternatively, $P_{k,s}$ is given explicitly as a function of $t\,{=}\,\bm{x}\cdot\bm{y}\in[-1,+1]$ via the Rodrigues formula~\cite[Thm.~2.23]{atkinson2012spherical},
\begin{equation}\label{eq:rodrigues}
P_{k,s}(t) = \left(-\frac{1}{2}\right)^k \frac{\Gamma\left(\frac{s-1}{2}\right)}{\Gamma\left(k+\frac{s-1}{2}\right)}\left(1-t^2\right)^{\frac{3-s}{2}} \frac{d^k}{dt^k}\left(1-t^2\right)^{k+\frac{s-3}{2}}.
\end{equation}
Legendre polynomials are orthogonal on $[-1,+1]$ with respect to the measure with density $(1-t^2)^{(s-3)/2}$, which is the probability density function of the scalar product between two points on $\mathbb{S}^{s-1}$:
\begin{equation}\label{eq:leg-ortho}
\int_{-1}^{+1} dt\left(1-t^2\right)^{\frac{s-3}{2}}\, P_{k,s}(t)P_{k',s}(t) = \frac{|\mathbb{S}^{s-1}|}{|\mathbb{S}^{s-2}|}\frac{\delta_{k,k'}}{\mathcal{N}_{k,s}},
\end{equation}
with $|\mathbb{S}^{s-1}|$ denoting the surface area of the $s$-dimensional unit sphere.
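The orthogonality relation~\autoref{eq:leg-ortho} can be checked by Gauss-Legendre quadrature. For $s\,{=}\,3$ the weight $(1-t^2)^{(s-3)/2}$ is flat, $|\mathbb{S}^{2}|/|\mathbb{S}^{1}|\,{=}\,4\pi/2\pi\,{=}\,2$ and $\mathcal{N}_{k,3}\,{=}\,2k+1$, so the right-hand side is $2\,\delta_{k,k'}/(2k+1)$:

```python
import numpy as np
from numpy.polynomial import legendre

nodes, weights = legendre.leggauss(64)   # Gauss-Legendre quadrature on [-1, 1]

def overlap(k, kp):
    Pk = legendre.legval(nodes, [0.0] * k + [1.0])
    Pkp = legendre.legval(nodes, [0.0] * kp + [1.0])
    return float(np.sum(weights * Pk * Pkp))

print(overlap(3, 3), 2 / (2 * 3 + 1))   # both ≈ 0.2857
print(overlap(3, 5))                    # ≈ 0
```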
To sum up, given $\bm{x},\bm{y}\in\mathbb{S}^{s-1}$, functions of $\bm{x}$ or $\bm{y}$ can be expressed as a sum of projections on the orthonormal spherical harmonics $\left\lbrace Y_{k,\ell} \right\rbrace_{k,\ell}$, whereas functions of $\bm{x}\cdot\bm{y}$ can be expressed as a sum of projections on the Legendre polynomials $\left\lbrace P_{k,s}(\bm{x}\cdot\bm{y})\right\rbrace_k $. The relationship between the two expansions is elucidated in the Funk-Hecke formula~\cite[Thm.~2.22]{atkinson2012spherical},
\begin{equation}
\int_{\mathbb{S}^{s-1}} d\tau(\bm{y})\,f(\bm{x}\cdot\bm{y}) Y_{k,\ell}(\bm{y}) = Y_{k,\ell}(\bm{x})\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\int_{-1}^{+1} dt\left(1-t^2\right)^{\frac{s-3}{2}}\, f(t) P_{k,s}(t).
\end{equation}
If the function $f$ has continuous derivatives up to the $k$-th order in $[-1,+1]$, then one can plug Rodrigues' formula in the right-hand side of Funk-Hecke formula and get, after $k$ integrations by parts,
\begin{equation}
\int_{\mathbb{S}^{s-1}} d\tau(\bm{y})\,f(\bm{x}\cdot\bm{y}) Y_{k,\ell}(\bm{y}) = Y_{k,\ell}(\bm{x})\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\frac{\Gamma\left(\frac{s-1}{2}\right)}{2^k \Gamma\left(k+\frac{s-1}{2}\right)}\int_{-1}^{+1} dt\, f^{(k)}(t)\left(1-t^2\right)^{k+\frac{s-3}{2}},
\end{equation}
with $f^{(k)}(t)$ denoting the $k$-th order derivative of $f$ in $t$. This trick also applies to functions which are not $k$ times differentiable at $\pm 1$, provided the boundary terms due to integration by parts vanish.
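The integrated-by-parts identity above can be verified on a simple smooth function; a convenient choice (an illustrative example, not taken from the text) is $f(t)=t^4$ with $s\,{=}\,3$ and $k\,{=}\,2$, for which both sides equal $8/35$:

```python
import numpy as np
from math import gamma
from numpy.polynomial import legendre

nodes, weights = legendre.leggauss(80)
s, k = 3, 2                                  # sphere S^2, degree-2 projection
f = lambda t: t ** 4                         # a smooth test function
fk = lambda t: 12 * t ** 2                   # its k-th (here second) derivative
Pk = legendre.legval(nodes, [0.0] * k + [1.0])

lhs = np.sum(weights * f(nodes) * Pk * (1 - nodes ** 2) ** ((s - 3) / 2))
rhs = (gamma((s - 1) / 2) / (2 ** k * gamma(k + (s - 1) / 2))
       * np.sum(weights * fk(nodes) * (1 - nodes ** 2) ** (k + (s - 3) / 2)))
print(lhs, rhs, 8 / 35)                      # all three ≈ 0.22857
```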
\subsection{Dot-product kernels on the sphere}
Dot-product kernels are kernels which depend on the two inputs $\bm{x}$ and $\bm{y}$ only via their scalar product $\bm{x}\cdot\bm{y}$. When the inputs lie on the unit sphere $\mathbb{S}^{s-1}$, one can use the machinery introduced in the previous section to arrive immediately at Mercer's decomposition of the kernel~\cite{smola2000regularization}:
\begin{equation}\label{eq:dot-prod-mercer}\begin{aligned}
\mathcal{K}(\bm{x}\cdot\bm{y}) &= \sum_{k\geq 0} \left(\mathcal{N}_{k,s}\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\int_{-1}^{+1} dt\left(1-t^2\right)^{\frac{s-3}{2}}\, \mathcal{K}(t) P_{k,s}(t)\right) P_{k,s}(\bm{x}\cdot\bm{y})\\
&= \sum_{k\geq 0} \left(\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\int_{-1}^{+1} dt\left(1-t^2\right)^{\frac{s-3}{2}}\, \mathcal{K}(t) P_{k,s}(t)\right) \sum_{\ell=1}^{\mathcal{N}_{k,s}} Y_{k,\ell}(\bm{x})Y_{k,\ell}(\bm{y})\\
&:= \sum_{k\geq 0} \Lambda_k \sum_{\ell=1}^{\mathcal{N}_{k,s}} Y_{k,\ell}(\bm{x})Y_{k,\ell}(\bm{y}).
\end{aligned}\end{equation}
In the first line we have just decomposed $\mathcal{K}$ into projections onto the Legendre polynomials, the second line follows immediately from the addition formula, and the third is just a definition of the eigenvalues $\Lambda_k$. Notice that the eigenfunctions of the kernel are orthonormal spherical harmonics and the eigenvalues are degenerate with respect to the index $\ell$. The Reproducing Kernel Hilbert Space (RKHS) of $\mathcal{K}$ can be characterised as follows,
\begin{equation}
\mathcal{H} = \left\lbrace f:\mathbb{S}^{s-1}\to \mathbb{R} \text{ s. t. } \left\lVert f \right\rVert_{\mathcal{H}}^2:= \sum_{k\geq 0,\Lambda_k\neq 0} \sum_{\ell=1}^{\mathcal{N}_{k,s}} \frac{\left\langle f, Y_{k,\ell} \right\rangle_{\mathbb{S}^{s-1}}^2}{\Lambda_k}< +\infty\right\rbrace.
\end{equation}
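A quick numerical illustration of \autoref{eq:dot-prod-mercer}: take the arc-cosine function $\kappa_1$ used throughout the paper as the kernel on $\mathbb{S}^2$ ($s=3$), compute the $\Lambda_k$ by Gauss-Legendre quadrature, and resum the truncated Mercer series. The truncation order and quadrature size below are illustrative choices:

```python
import numpy as np
from numpy.polynomial import legendre

def kappa1(t):
    t = np.clip(t, -1.0, 1.0)
    return ((np.pi - np.arccos(t)) * t + np.sqrt(1.0 - t * t)) / np.pi

nodes, weights = legendre.leggauss(400)
kmax = 60
# Lambda_k for s = 3: flat weight (1 - t^2)^0 and |S^1|/|S^2| = 1/2
lam = np.array([0.5 * np.sum(weights * kappa1(nodes)
                             * legendre.legval(nodes, [0.0] * k + [1.0]))
                for k in range(kmax + 1)])
assert np.all(lam > -1e-6)      # positive semi-definiteness, up to quadrature error

# truncated Mercer sum: K(t) ≈ sum_k N_{k,3} Lambda_k P_k(t), with N_{k,3} = 2k+1
t0 = 0.3
recon = sum((2 * k + 1) * lam[k] * legendre.legval(t0, [0.0] * k + [1.0])
            for k in range(kmax + 1))
print(recon, kappa1(t0))        # nearly equal: the series reconstructs the kernel
```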
\subsection{Multi-dot-product kernels on the multi-sphere}
Mercer's decomposition of dot-product kernels extends naturally to the case considered in this paper, where the input space is the Cartesian product of $p$ $s$-dimensional unit spheres,
\begin{equation}
{\sf M}^p\mathbb{S}^{s-1} =\left\lbrace \bm{x}=(\bm{x}_1,\dots,\bm{x}_p) \right.\left| \bm{x}_i\in\mathbb{S}^{s-1} \,\forall\,i=1,\dots,p \right\rbrace = \bigtimes_{i=1}^p \mathbb{S}^{s-1},
\end{equation}
which we refer to as the \emph{multi-sphere} following the notation of~\cite{geifman2022spectral}. After defining a scalar product between functions on ${\sf M}^p\mathbb{S}^{s-1}$ by direct extension of~\autoref{eq:sphere-dot-prod}, one can immediately find a set of orthonormal polynomials by taking products of spherical harmonics. With the multi-index notation $\bm{k}\,{=}\,(k_1, \dots, k_p)$, $\bm{\ell}\,{=}\,(\ell_1, \dots, \ell_p)$, for all $\bm{x}\in{\sf M}^p\mathbb{S}^{s-1}$
\begin{equation}
\tilde{Y}_{\bm{k},\bm{\ell}}(\bm{x}) = \prod_{i=1}^p Y_{k_i,\ell_i}(\bm{x}_i),\text{ with }k_i\geq 0\text{, }\ell_i=1,\dots,\mathcal{N}_{k_i,s}=\frac{2k_i+s-2}{k_i}\binom{s+k_i-3}{k_i-1}.
\end{equation}
These product spherical harmonics $\tilde{Y}_{\bm{k},\bm{\ell}}(\bm{x})$ span the space of square-integrable functions on ${\sf M}^p\mathbb{S}^{s-1}$. Furthermore, as each spherical harmonic is an eigenfunction of the Laplace-Beltrami operator, $\tilde{Y}_{\bm{k},\bm{\ell}}$ is an eigenfunction of the sum of Laplace-Beltrami operators on the $p$ unit spheres,
\begin{equation}\label{eq:mslap-eigvals}
\Delta_{p,s} \tilde{Y}_{\bm{k},\bm{\ell}} := \left(\sum_{i=1}^p \Delta_i\right)\prod_{i=1}^p Y_{k_i,\ell_i} = \left(\sum_{i=1}^p\left((-k_i)(k_i+s-2)\right)\right)\ \tilde{Y}_{\bm{k},\bm{\ell}}.
\end{equation}
We can thus characterise the differentiability of functions on the multi-sphere ${\sf M}^p\mathbb{S}^{s-1}$ via the finiteness of the L2 norm of some power of $\Delta_{p,s}$ applied to them.
Similarly, we can consider products of Legendre polynomials to obtain a set of orthogonal polynomials on $[-1,1]^p$~(see~\cite{geifman2022spectral}, appendix A). Then, any function $f$ on ${\sf M}^p\mathbb{S}^{s-1}\times{\sf M}^p\mathbb{S}^{s-1}$ which depends only on the $p$ scalar products between patches,
\begin{equation}\label{eq:multi-dot-product}
f(\bm{x},\bm{y}) = g(\bm{x}_1\cdot\bm{y}_1, \dots, \bm{x}_p\cdot\bm{y}_p),
\end{equation}
can be written as a sum of projections on products of Legendre polynomials
\begin{equation}
\tilde{P}_{\bm{k},s}(\bm{t}) := \prod_{i=1}^p P_{k_i,s}(t_i).
\end{equation}
Following~\cite{geifman2022spectral}, we call such functions \emph{multi-dot-product} kernels. When fixing one of the two arguments of $f$ (say $\bm{x}$), $f$ becomes a function on ${\sf M}^p\mathbb{S}^{s-1}$ and can be written as a sum of projections on the $\tilde{Y}_{\bm{k},\bm{\ell}}$'s. The two expansions are related by the following generalised Funk-Hecke formula,
\begin{equation}\label{eq:multi-fh}\begin{aligned}
\left(\prod_{i=1}^p \int_{\mathbb{S}^{s-1}} d\tau(\bm{y}_i)\right) &g(\bm{x}_1\cdot\bm{y}_1, \dots, \bm{x}_p\cdot\bm{y}_p) \tilde{Y}_{\bm{k},\bm{\ell}}(\bm{y}) = \\
\tilde{Y}_{\bm{k},\bm{\ell}}(\bm{x}) &\left(\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\right)^p \left(\prod_{i=1}^p \int_{-1}^{+1} dt_i\left(1-t_i^2\right)^{\frac{s-3}{2}} P_{k_i,s}(t_i)\right)\,g(t_1,\dots,t_p).
\end{aligned}\end{equation}
Having introduced the product spherical harmonics $\tilde{Y}_{\bm{k},\bm{\ell}}$ as a basis for square-integrable functions on ${\sf M}^p\mathbb{S}^{s-1}$ and the product Legendre polynomials $\tilde{P}_{\bm{k},s}(\bm{t})$ as a basis for functions on $[-1,+1]^p$, Mercer's decomposition of multi-dot-product kernels follows immediately:
\begin{equation}\label{eq:multi-dp-mercer}\begin{aligned}
\mathcal{K}\left(\left\lbrace \bm{x}_i\cdot\bm{y}_i\right\rbrace_i \right) &= \sum_{\bm{k}\geq \bm{0}} \left(\prod_{i=1}^p\mathcal{N}_{k_i,s}\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\int_{-1}^{+1} dt_i\left(1-t_i^2\right)^{\frac{s-3}{2}}\, P_{k_i,s}(t_i)\right)\mathcal{K}\left( \left\lbrace t_i \right\rbrace_i\right) \tilde{P}_{\bm{k},s}\left(\left\lbrace \bm{x}_i\cdot\bm{y}_i\right\rbrace_i \right)\\
&= \sum_{\bm{k}\geq \bm{0}} \Lambda_{\bm{k}} \sum_{\bm{\ell}} \tilde{Y}_{\bm{k},\bm{\ell}}(\bm{x})\tilde{Y}_{\bm{k},\bm{\ell}}(\bm{y}),
\end{aligned}\end{equation}
where the second line follows from the addition formula applied to each factor, $\bm{\ell}$ runs over $\ell_i=1,\dots,\mathcal{N}_{k_i,s}$, and $\Lambda_{\bm{k}}$ denotes the projection of $\mathcal{K}$ onto $\tilde{P}_{\bm{k},s}$ stripped of the degeneracy factors $\mathcal{N}_{k_i,s}$.
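For a kernel which factorises over the patches, the multi-sphere eigenvalues are simply products of single-sphere eigenvalues. The sketch below checks this for the toy separable kernel $\mathcal{K}(t_1,t_2)=\kappa_1(t_1)\kappa_1(t_2)$ with $p\,{=}\,2$ and $s\,{=}\,3$; note that this separable kernel is only an illustration of the product basis, since the hierarchical kernels of the paper do not factorise:

```python
import numpy as np
from numpy.polynomial import legendre

def kappa1(t):
    t = np.clip(t, -1.0, 1.0)
    return ((np.pi - np.arccos(t)) * t + np.sqrt(1.0 - t * t)) / np.pi

nodes, weights = legendre.leggauss(200)

def lam_1d(k):
    # single-sphere eigenvalue of kappa1 on S^2 (s = 3)
    return 0.5 * np.sum(weights * kappa1(nodes)
                        * legendre.legval(nodes, [0.0] * k + [1.0]))

def lam_2d(k1, k2):
    # projection of the separable kernel K(t1, t2) = kappa1(t1) kappa1(t2)
    P1 = legendre.legval(nodes, [0.0] * k1 + [1.0])
    P2 = legendre.legval(nodes, [0.0] * k2 + [1.0])
    integrand = np.outer(kappa1(nodes) * P1, kappa1(nodes) * P2)
    return 0.25 * np.sum(np.outer(weights, weights) * integrand)

print(lam_2d(2, 4), lam_1d(2) * lam_1d(4))   # equal: the eigenvalues factorise
```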
\section{RFK and NTK of deep convolutional networks} \label{app:kernel-lemmas}
This appendix gives the functional forms of the RFK and NTK of hierarchical CNNs. We refer the reader to \cite{arora2019exact} for the derivation.
\begin{definition}[RFK and NTK of hierarchical CNNs]\label{def:hierarchical-kernels}
Let $\bm{x},\bm{y}\in {\sf M}^p\mathbb{S}^{s-1}={\textstyle\prod_{i=1}^p} \mathbb{S}^{s-1}$. Denote tuples of the kind $i_{l} i_{l+1} \dots i_{m}$ with $i_{l \to m}$ for $m\,{\geq}\,l$. For $m\,{<}\,l$, $i_{l\to m}$ denotes the empty tuple. For each tuple $i_{2\to L+1}$, denote with $t_{i_{2\to L+1}}$ the scalar product between the $s$-dimensional patches of $\bm{x}$ and $\bm{y}$ identified by the same tuple, i.e.
\begin{equation}
t_{i_{2\to L+1}} =\bm{x}_{i_{2\to L+1}}\cdot \bm{y}_{i_{2\to L+1}}.
\end{equation}
For $1\,{\leq}\,l\,{\leq}\,L+1$, denote with $\left\lbrace t_{i_{2\to L+1}}\right\rbrace_{i_{2\to l}}$ the sequence of $t$'s obtained by letting the indices of the tuple $i_{2\to l}$ vary in their respective range. Consider a hierarchical CNN with $L$ hidden layers, filter sizes $(s_1,\dots,s_L)$, $p_L\,{\geq}\,1$ and all the weights $w^{\scriptstyle(1)}_{\scriptstyle h,i}, w^{\scriptstyle(l)}_{\scriptstyle h,h',i}, w^{\scriptstyle (L+1)}_{\scriptstyle h,i}$ initialised as Gaussian random numbers with zero mean and unit variance.
\textbf{RFK.} The corresponding RFK (or covariance kernel) is a function $\mathcal{K}_{\text{RFK}}^{\scriptscriptstyle(L+1)}$ of the $p_1\,{=}\,d/s_1$ scalar products $t_{i_{2\to L+1}}$ which can be obtained recursively as follows. With $\kappa_1(t)\,{=}\,\left((\pi-\arccos{t})\, t +\sqrt{1-t^2}\right)/\pi$,
\begin{align}
&\mathcal{K}_{\text{RFK}}^{(1)} (t_{i_{2\to L+1}}) = \kappa_1(t_{i_{2\to L+1}}); \nonumber\\
&\mathcal{K}_{\text{RFK}}^{(l)}\left(\left\lbrace t_{i_{2\to L+1}}\right\rbrace_{i_{2\to l}}\right) = \kappa_1\left(\frac{1}{s_l}\sum_{i_l} \mathcal{K}_{\text{RFK}}^{(l-1)}\left( \left\lbrace t_{i_{2\to L+1}}\right\rbrace_{i_{2\to l-1}}\right) \right),\; \forall\, l\in\{2,\dots,L\}\,\text{ if }L\,{>}\,1;\nonumber\\
&\mathcal{K}_{\text{RFK}}^{(L+1)} \left( \left\lbrace t_{i_{2\to L+1}}\right\rbrace_{i_{2\to L+1}} \right)= \frac{1}{p_L}\sum_{i_{L+1}=1}^{p_L} \mathcal{K}_{\text{RFK}}^{(L)}\left( \left\lbrace t_{i_{2\to L+1}}\right\rbrace_{i_{2\to L}} \right).
\end{align}
\textbf{NTK.} The NTK of the same hierarchical CNN is also a function of the $p_1\,{=}\,d/s_1$ scalar products $t_{i_{2\to L+1}}$ which can be obtained recursively as follows. With $\kappa_0(t)\,{=}\,\left(\pi-\arccos{t}\right)/\pi$,
\begin{align}
&\mathcal{K}_{\text{NTK}}^{(1)}\left( t_{i_{2\to L+1}} \right) = \kappa_1(t_{i_{2\to L+1}}) + t_{i_{2\to L+1}}\, \kappa_0(t_{i_{2\to L+1}});\nonumber\\
&\mathcal{K}_{\text{NTK}}^{(l)} \left(\left\lbrace t_{i_{2\to L+1}}\right\rbrace_{i_{2\to l}}\right) = \mathcal{K}_{\text{RFK}}^{(l)} \left(\left\lbrace t_{i_{2\to L+1}}\right\rbrace_{i_{2\to l}}\right) + \left(\frac{1}{s_l}\sum_{i_l} \mathcal{K}_{\text{NTK}}^{(l-1)}\left( \left\lbrace t_{i_{2\to L+1}}\right\rbrace_{i_{2\to l-1}}\right)\right)\nonumber\\
&\qquad\qquad\qquad\qquad\qquad\quad \times \kappa_0\left(\frac{1}{s_l}\sum_{i_l} \mathcal{K}_{\text{RFK}}^{(l-1)}\left( \left\lbrace t_{i_{2\to L+1}}\right\rbrace_{i_{2\to l-1}}\right) \right),\; \forall\, l\in\{2,\dots,L\}\,\text{ if }L\,{>}\,1;\nonumber\\
&\mathcal{K}_{\text{NTK}}^{(L+1)} \left( \left\lbrace t_{i_{2\to L+1}}\right\rbrace_{i_{2\to L+1}} \right)= \frac{1}{p_L}\sum_{i_{L+1}=1}^{p_L} \mathcal{K}_{\text{NTK}}^{(L)}\left( \left\lbrace t_{i_{2\to L+1}}\right\rbrace_{i_{2\to L}} \right).
\end{align}
\end{definition}
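The recursions of \autoref{def:hierarchical-kernels} translate directly into code. The sketch below is a hypothetical helper (not taken from an official implementation): it assumes the $s_1$-patch scalar products have already been computed and are ordered so that consecutive entries belong to the same higher-level patch. It checks the normalisation implied by the definition: $\mathcal{K}_{\text{RFK}}(\bm{x},\bm{x})\,{=}\,1$ and $\mathcal{K}_{\text{NTK}}(\bm{x},\bm{x})\,{=}\,L+1$, since $\kappa_0(1)\,{=}\,\kappa_1(1)\,{=}\,1$.

```python
import numpy as np

def kappa0(t):
    t = np.clip(t, -1.0, 1.0)
    return (np.pi - np.arccos(t)) / np.pi

def kappa1(t):
    t = np.clip(t, -1.0, 1.0)
    return ((np.pi - np.arccos(t)) * t + np.sqrt(1.0 - t * t)) / np.pi

def hierarchical_kernels(t, filter_sizes, p_last):
    """RFK and NTK values from the s_1-patch scalar products t.

    t is ordered so that consecutive entries share the same level-2 patch,
    and len(t) = s_2 * ... * s_L * p_last."""
    t = np.asarray(t, dtype=float)
    rfk = kappa1(t)                                  # layer-1 covariance
    ntk = rfk + t * kappa0(t)                        # layer-1 NTK
    for s in filter_sizes[1:]:
        rfk_prev = rfk.reshape(-1, s).mean(axis=1)   # average within each filter
        ntk_prev = ntk.reshape(-1, s).mean(axis=1)
        rfk = kappa1(rfk_prev)
        ntk = rfk + ntk_prev * kappa0(rfk_prev)
    assert rfk.size == p_last                        # p_L outputs remain
    return rfk.mean(), ntk.mean()

t = np.ones(2 * 2 * 3)                     # all patches aligned, i.e. x = y
rfk, ntk = hierarchical_kernels(t, (2, 2, 2), 3)    # L = 3 hidden layers
print(rfk, ntk)                            # 1.0 and L + 1 = 4.0
```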
\section{Spectra of deep convolutional kernels} \label{app:spectra}
In this section we state and prove a generalised version of~\autoref{th:eig-scaling} which includes non-binary patches. Our proof strategy is to relate the asymptotic decay of eigenvalues to the singular behaviour of the kernel, as is customary in Fourier analysis and was done in~\cite{bietti2021deep} for standard dot-product kernels. In~\autoref{ssec:proof-singular} we perform the singular expansion of hierarchical kernels; in~\autoref{ssec:proof-fourier} we use this expansion to prove \autoref{th:eig-scaling} with $L\,{=}\,2$ ($2$ hidden layers) and $s_1\,{=}\,2$ (patches on the ring), which we then generalise to general $s_1$ in~\autoref{ssec:proof-depth2} and to general depth in~\autoref{ssec:proof-general}.
\begin{theorem}[Spectrum of hierarchical kernels]\label{th:eig-scaling-app}
Let $T_{\mathcal{K}}$ be the integral operator associated with a $d$-dimensional hierarchical kernel of depth $L+1$, $L\,{>}\,1$ and filter sizes ($s_1,\dots,s_L$). Eigenvalues and eigenfunctions of $T_{\mathcal{K}}$ can be organised into $L$ sectors associated with the hidden layers of the kernel/network. For each $1\,{\leq}\,l\,{\leq}\,L$, the $l$-th sector consists of $(\textstyle\prod_{\scriptscriptstyle l'=1}^{\scriptscriptstyle l} s_{l'})$-\emph{local} eigenfunctions: functions of a single meta-patch $\bm{x}_{i_{l+1\to L+1}}$ which cannot be written as linear combinations of functions of smaller meta-patches. The labels $\bm{k}$ of these eigenfunctions are such that there is a meta-patch $\bm{k}_{i_{l+1 \to L+1}}$ of $\bm{k}$ with no vanishing sub-meta-patches and all the $k_i$'s outside of $\bm{k}_{i_{l+1 \to L+1}}$ are $0$ (because the eigenfunction is constant outside of $\bm{x}_{i_{l+1 \to L+1}}$). The corresponding eigenvalue is degenerate with respect to the location of the meta-patch: we call it $\Lambda^{\scriptscriptstyle(l)}_{\bm{k}_{i_{l+1\to L+1}}}$. When $\|\bm{k}_{i_{l+1 \to L+1}}\|\to\infty$, with $k\,{=}\,\|\bm{k}_{i_{l+1 \to L+1}}\|$,
\begin{enumerate}
\item[i.] if $s_1=2$, then
\begin{equation}
\Lambda^{(l)}_{\bm{k}_{i_{l+1\to L+1}}}
= \mathcal{C}_{2,l}\, k^{-2\nu -d_{\rm eff}(l)} + o\left(k^{-2\nu -d_{\rm eff}(l)}\right),
\end{equation}
with $\nu_{\rm NTK}=1/2,\, \nu_{\rm RFK}=3/2$ and $d_{\rm eff}$ the effective dimensionality of the meta-patches defined in~\autoref{eq:effective-dim}. $\mathcal{C}_{2,l}$ is a strictly positive constant for $l\,{\geq}\,2$ whereas for $l\,{=}\,1$ it can take two distinct strictly positive values depending on the parity of $k_{i_{2\to L+1}}$. \\
\item[ii.] if $s_1 \geq 3$, then for fixed non-zero angles $\bm{k}/k$,
\begin{equation}\label{eq:eig-scaling-general}\begin{aligned}
\Lambda^{(l)}_{\bm{k}_{i_{l+1\to L+1}}} = \mathcal{C}_{s_1,l}\left(\frac{\bm{k}_{i_{l+1 \to L+1}}}{k}\right) k^{-2\nu -d_{\rm eff}(l)} + o\left(k^{-2\nu -d_{\rm eff}(l)}\right),
\end{aligned}\end{equation}
where $\mathcal{C}_{s_1,l}$ is a positive function for $l\,{\geq}\,2$, whereas for $l\,{=}\,1$ it is a strictly positive constant which depends on the parity of $k_{i_{2\to L+1}}$.
\end{enumerate}
\end{theorem}
\subsection{Singular expansion of hierarchical kernels}\label{ssec:proof-singular}
Both the RFK and NTK of ReLU networks, whether deep or shallow, are built by applying the two functions $\kappa_0$ and $\kappa_1$~\cite{Cho2009} (see also~\autoref{def:hierarchical-kernels}),
\begin{equation}
\kappa_0(t) = \frac{(\pi-\arccos{t})}{\pi},\quad \kappa_1(t) = \frac{(\pi-\arccos{t})\,t + \sqrt{1-t^2}}{\pi}.
\end{equation}
The functions $\kappa_0$ and $\kappa_1$ are non-analytic in $t\,{=}\,\pm 1$, with the following singular expansion~\cite{bietti2021deep}. Near $t\,{=}\,1$, with $u\,{=}\,1-t$,
\begin{equation}\label{eq:proof-taylor-plus}
\left\lbrace\begin{aligned} \kappa_0(1-u) &= 1 -\frac{\sqrt{2}}{\pi} u^{1/2}+ O(u^{3/2}),\\\kappa_1(1-u) &= 1-u +\frac{2\sqrt{2}}{3\pi} u^{3/2}+ O(u^{5/2}). \end{aligned}\right.
\end{equation}
Near $t\,{=}\,-1$, with $u\,{=}\,1+t$,
\begin{equation}\label{eq:proof-taylor-minus}
\left\lbrace\begin{aligned} \kappa_0(-1+u) &= \frac{\sqrt{2}}{\pi} u^{1/2}+ O(u^{3/2}),\\\kappa_1(-1+u) &= \frac{2\sqrt{2}}{3\pi} u^{3/2}+ O(u^{5/2}). \end{aligned}\right.
\end{equation}
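These expansions are straightforward to check numerically; in the sketch below, the residuals of the truncated expansions shrink as $u^{5/2}$ for $\kappa_1$ and $u^{3/2}$ for $\kappa_0$ as $u\to 0$:

```python
import numpy as np

def kappa0(t):
    return (np.pi - np.arccos(t)) / np.pi

def kappa1(t):
    return ((np.pi - np.arccos(t)) * t + np.sqrt(1.0 - t * t)) / np.pi

c = 2 * np.sqrt(2) / (3 * np.pi)
for u in (1e-2, 1e-4, 1e-6):
    print(u,
          kappa1(1 - u) - (1 - u + c * u ** 1.5),              # O(u^{5/2})
          kappa1(-1 + u) - c * u ** 1.5,                       # O(u^{5/2})
          kappa0(1 - u) - (1 - np.sqrt(2) / np.pi * u ** 0.5)) # O(u^{3/2})
```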
As a result, hierarchical kernels have a singular expansion when the $t_{i_{2\to L+1}}$'s are close to $\pm 1$. In particular, the following expansions are relevant for computing the asymptotic scaling of eigenvalues.
\begin{proposition}[RFK when $\bm{x}\,{=}\,\bm{y}$]
The RFK of a hierarchical network of depth $L\,{+}\,1$, filter sizes $(s_1,\dots,s_L)$ and $p_L\,{\geq}\,1$ has the following singular expansion when all $t_{i_{2\to L+1}}\to 1$. With $u_{i_{2\to L+1}}\,{=}\,1-t_{i_{2\to L+1}}$, $c\,{=}\,2\sqrt{2}/(3\pi)$, and $\prod_{l\in I} s_l\,{:=}\,1$ if $I$ is the empty set,
\begin{equation}\begin{aligned}\label{eq:rfk-sing-plus}
\mathcal{K}_{\text{RFK}}^{(L+1)} \left( \left\lbrace 1-u_{i_{2\to L+1}}\right\rbrace_{i_{2\to L+1}} \right) &= 1-\frac{1}{\left(\displaystyle\prod_{2\leq l' \leq L}s_{l'}\right)p_L} \sum_{i_{2\to L+1}} u_{i_{2\to L+1}}\\
& + \frac{c}{p_L}\sum_{l'=1}^{L} \frac{1}{\left(\displaystyle\prod_{l' < l'' \leq L}s_{l''}\right)} \sum_{i_{{l'+1}\to{L+1}}}\left( \frac{\sum_{i_{2\to{l'}}} u_{i_{2\to L+1}}}{\left(\displaystyle\prod_{2 \leq l'' \leq l'}s_{l''}\right)} \right)^{3/2}\\
&+ O(u_{i_{2\to L+1}}^{5/2}).
\end{aligned}\end{equation}
\end{proposition}
\emph{Proof.} With $L\,{=}\,1$ one has
(recall that $i_{2\to 1+1}=i_{2\to 2}$ reduces to a single index)
\begin{align}\label{eq:proof-hierarchical-rfk-taylor-1}
\mathcal{K}_{\text{RFK}}^{(1)} (1-u_{i_2}) &= 1-u_{i_{2}}+cu_{i_{2}}^{3/2} + O(u_{i_{2}}^{5/2})\Rightarrow \nonumber\\
\mathcal{K}_{\text{RFK}}^{(1+1)} \left(\left\lbrace 1-u_{i_2}\right\rbrace_{i_2}\right) &= 1-\frac{1}{p_1} \sum_{i_2} u_{i_{2}}+\frac{c}{p_1}\sum_{i_2} u_{i_{2}}^{3/2} + O(u_{i_{2}}^{5/2}).
\end{align}
With $L\,{=}\,2$,
\begin{align}
\mathcal{K}_{\text{RFK}}^{(2)}\left(\left\lbrace 1-u_{i_2,i_3}\right\rbrace_{i_2}\right) &= \kappa_1\left(1 - \frac{1}{s_2}\sum_{i_2}u_{i_2,i_3} + \frac{c}{s_2}\sum_{i_2}u_{i_2,i_3}^{3/2} + O(u_{i_2,i_3}^{5/2}) \right)\nonumber\\
&= 1 - \frac{1}{s_2}\sum_{i_2}u_{i_2,i_3} + \frac{c}{s_2}\sum_{i_2}u_{i_2,i_3}^{3/2} + c\left( \frac{1}{s_2}\sum_{i_2}u_{i_2,i_3} \right)^{3/2} + O(u_{i_2,i_3}^{5/2}),
\end{align}
therefore
\begin{align}\label{eq:hierarchical-rfk-taylor-2}
\mathcal{K}_{\text{RFK}}^{(2+1)}\left(\left\lbrace 1-u_{i_2,i_3}\right\rbrace_{i_2,i_3}\right) = &1 - \frac{1}{s_2p_2}\sum_{i_2,i_3} u_{i_2,i_3} + \frac{c}{p_2}\frac{1}{s_2}\sum_{i_2,i_3}u_{i_2,i_3}^{3/2} + \frac{c}{p_2}\sum_{i_3}\left( \frac{1}{s_2}\sum_{i_2}u_{i_2,i_3} \right)^{3/2}\nonumber\\
&+ O(u_{i_2,i_3}^{5/2}).
\end{align}
The proof of the general case follows by induction, applying the function $\kappa_1$ to the singular expansion of the kernel with $L-1$ hidden layers and then using~\autoref{eq:proof-taylor-plus}.
\begin{proposition}[RFK when $\bm{x}\,{=}\,-\bm{y}$]
The RFK of a hierarchical network of depth $L\,{+}\,1$, filter sizes $(s_1,\dots,s_L)$ and $p_L\,{\geq}\,1$ has the following singular expansion when all $t_{i_{2\to L+1}}\to -1$. With $u_{i_{2\to L+1}}\,{=}\,1+t_{i_{2\to L+1}}$, $c\,{=}\,2\sqrt{2}/(3\pi)$ and $\prod_{l\in I} s_l\,{:=}\,1$ if $I$ is the empty set,
\begin{equation}\begin{aligned}
\mathcal{K}_{\text{RFK}}^{(L+1)} \left( \left\lbrace -1+u_{i_{2\to L+1}}\right\rbrace_{i_{2\to L+1}} \right) &= b_L + \frac{c_L}{\left(\displaystyle\prod_{2\leq l' \leq L}s_{l'}\right)p_L} \sum_{i_{2\to L+1}} u_{i_{2\to L+1}}^{3/2} + O(u_{i_{2\to L+1}}^{5/2}),
\end{aligned}\end{equation}
with $b_L\,{=}\,\kappa_1(b_{L-1})$, $b_1\,{=}\,0$; and $c_L\,{=}\, c_{L-1}\kappa_1'(b_{L-1})$, $c_1\,{=}\,c$.
\end{proposition}
\emph{Proof.} This can be proved again by induction. For $L\,{=}\,1$,
\begin{align}
\mathcal{K}_{\text{RFK}}^{(1)} (-1+u_{i_2}) &= cu_{i_{2}}^{3/2} + O(u_{i_{2}}^{5/2})\Rightarrow \nonumber \\ \mathcal{K}_{\text{RFK}}^{(1+1)} \left(\left\lbrace -1+u_{i_2}\right\rbrace_{i_2}\right) &= \frac{c}{p_1} \sum_{i_2} u_{i_{2}}^{3/2} + O(u_{i_{2}}^{5/2}).
\end{align}
Thus, for $L\,{=}\,2$,
\begin{align}
\mathcal{K}_{\text{RFK}}^{(2)} \left(\left\lbrace -1+u_{i_2,i_3}\right\rbrace_{i_2}\right) &= \kappa_1\left(\frac{c}{s_2} \sum_{i_2} u_{i_2,i_3}^{3/2} + O(u_{i_2,i_3}^{5/2})\right)\nonumber \\ &= \kappa_1(0) + \kappa_1'(0)\left(\frac{c}{s_2} \sum_{i_2} u_{i_2,i_3}^{3/2}\right) + O(u_{i_2,i_3}^{5/2}),
\end{align}
so that
\begin{align}
\mathcal{K}_{\text{RFK}}^{(2+1)} \left(\left\lbrace -1+u_{i_2,i_3}\right\rbrace_{i_2,i_3}\right) = \kappa_1(0) + \frac{\kappa_1'(0)c}{s_2 p_2} \sum_{i_2, i_3} u_{i_2,i_3}^{3/2} + O(u_{i_2,i_3}^{5/2}).
\end{align}
The proof is completed by applying the function $\kappa_1$ to the singular expansion of the kernel with $L-1$ hidden layers.
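The coefficient $c\,{=}\,2\sqrt{2}/(3\pi)$ can be cross-checked numerically. The sketch below assumes $\kappa_1$ is the standard degree-one arc-cosine function of Cho and Saul, $\kappa_1(t)\,{=}\,\bigl(\sqrt{1-t^2}+t(\pi-\arccos t)\bigr)/\pi$, which is what the notation suggests; it verifies $\kappa_1(1-u)\,{=}\,1-u+c\,u^{3/2}+O(u^{5/2})$.

```python
import numpy as np

# Sketch (assumption: kappa1 is the standard degree-1 arc-cosine function).
def kappa1(t):
    return (np.sqrt(1 - t ** 2) + t * (np.pi - np.arccos(t))) / np.pi

c = 2 * np.sqrt(2) / (3 * np.pi)
u = np.logspace(-5, -2, 50)
# the residual of kappa1(1-u) - (1 - u + c u^{3/2}) must be O(u^{5/2})
residual = kappa1(1 - u) - (1 - u + c * u ** 1.5)
assert np.all(np.abs(residual) < 10 * u ** 2.5)
```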
\begin{proposition}[NTK when $\bm{x}\,{=}\,\bm{y}$]
The NTK of a hierarchical network of depth $L\,{+}\,1$, filter sizes $(s_1,\dots,s_L)$ and $p_L\,{\geq}\,1$ has the following singular expansion when all $t_{i_{2\to L+1}}\to 1$. With $u_{i_{2\to L+1}}\,{=}\,1-t_{i_{2\to L+1}}$, $c\,{=}\,\sqrt{2}/\pi$, and $\prod_{l\in I} s_l\,{:=}\,1$ if $I$ is the empty set,
\begin{equation}\begin{aligned}
\mathcal{K}_{\text{NTK}}^{(L+1)} \left( \left\lbrace 1-u_{i_{2\to L+1}}\right\rbrace_{i_{2\to L+1}} \right) &= L+1-\frac{c}{p_L}\sum_{l'=1}^{L} \frac{l'}{\left(\displaystyle\prod_{l' < l'' \leq L}s_{l''}\right)} \\ &\times \sum_{i_{{l'+1}\to{L+1}}} \left( \frac{1}{\left(\displaystyle\prod_{2 \leq l'' \leq l'}s_{l''}\right)} \sum_{i_{2\to{l'}}} u_{i_{2\to L+1}} \right)^{1/2} + O(u_{i_{2\to L+1}}^{3/2})
\end{aligned}\end{equation}
\end{proposition}
\begin{proposition}[NTK when $\bm{x}\,{=}\,-\bm{y}$]
The NTK of a hierarchical network of depth $L\,{+}\,1$, filter sizes $(s_1,\dots,s_L)$ and $p_L\,{\geq}\,1$ has the following singular expansion when all $t_{i_{2\to L+1}}\to -1$. With $u_{i_{2\to L+1}}\,{=}\,1+t_{i_{2\to L+1}}$, $c\,{=}\,\sqrt{2}/\pi$ and $\prod_{l\in I} s_l\,{:=}\,1$ if $I$ is the empty set,
\begin{equation}\begin{aligned}
\mathcal{K}_{\text{NTK}}^{(L+1)} \left( \left\lbrace -1+u_{i_{2\to L+1}}\right\rbrace_{i_{2\to L+1}} \right) &= a_L + \frac{c_L}{\left(\displaystyle\prod_{2\leq l' \leq L}s_{l'}\right)p_L} \sum_{i_{2\to L+1}} u_{i_{2\to L+1}}^{3/2} + O(u_{i_{2\to L+1}}^{5/2}),
\end{aligned}\end{equation}
with $a_L\,{=}\,b_L + b_{L-1}\kappa_0(b_{L-1})$, $b_L\,{=}\,\kappa_1(b_{L-1})$, $b_1\,{=}\,0$; and $c_L\,{=}\,c_{L-1}\kappa_0(b_{L-1})$, $c_1\,{=}\,c$. Notice that both $\kappa_1$ and $\kappa_0$ are positive and strictly increasing in $[0,1]$ and $\kappa_1(1)\,{=}\,\kappa_0(1)\,{=}\,1$, thus $b_L\in(0,1)$ and $c_L\,{<}\,c_{L-1}$.
\end{proposition}
The proofs of the two propositions above are omitted, as they follow the exact same steps as the previous two proofs.
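As a quick numerical cross-check of the constant $c\,{=}\,\sqrt{2}/\pi$ appearing in the NTK expansions, the sketch below assumes $\kappa_0(t)\,{=}\,1-\arccos(t)/\pi$, the standard degree-zero arc-cosine function, and verifies its singular expansion near $t\,{=}\,1$.

```python
import numpy as np

# Sketch (assumption: kappa0(t) = 1 - arccos(t)/pi, the degree-0 arc-cosine
# function). Near t = 1: kappa0(1-u) = 1 - (sqrt(2)/pi) u^{1/2} + O(u^{3/2}).
def kappa0(t):
    return 1 - np.arccos(t) / np.pi

u = np.logspace(-5, -2, 50)
residual = kappa0(1 - u) - (1 - np.sqrt(2) / np.pi * np.sqrt(u))
assert np.all(np.abs(residual) < u ** 1.5)
```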
\subsection{Patches on the ring}\label{ssec:proof-fourier}
In this section, we prove a restricted version of~\autoref{th:eig-scaling} for the case of $2$-dimensional input patches, since the reduction of spherical harmonics to the Fourier basis simplifies the proof significantly. We also consider, for convenience, hierarchical kernels of depth $3$ with the filter size of the second hidden layer set to $p\,{=}\,d/2$, the total number of $2$-patches of the input. Once this case is understood, the extension to arbitrary filter sizes and depths is straightforward.
\begin{theorem}[Spectrum of depth-$3$ kernels on $2$-patches]\label{th:eig-scaling-2d}
Let $T_{\mathcal{K}}$ be the integral operator associated with a $d$-dimensional hierarchical kernel of depth $3$, ($2$ hidden layers), with filter sizes ($s_1\,{=}\,2,s_2$) and $p_2\,{=}\,1$, such that $2 s_2\,{=}\,d$ and $s_2\,{=}\,p$ (the number of $2$-patches). Eigenvalues and eigenfunctions of $T_{\mathcal{K}}$ can be organised into $2$ sectors associated with the hidden layers of the kernel/network.
\begin{enumerate}
\item[i.] The first sector consists of $s_1$-\emph{local} eigenfunctions, which are functions of a single patch $\bm{x}_{i}$ for $i\,{=}\,1,\dots,p$. The labels $\bm{k},\bm{\ell}$ of local eigenfunctions are such that all the $k_j$'s with $j\neq i$ are zero (because the eigenfunction is constant outside $\bm{x}_{i}$). The corresponding eigenvalue is degenerate with respect to the location of the patch: we call it $\Lambda^{\scriptscriptstyle(1)}_{k_i}$. When $k_i\to\infty$,
\begin{equation}\label{eq:eig-scaling-2d-local}
\Lambda^{(1)}_{k_i} = \mathcal{C}_{2,1}\, k_i^{-2\nu -1} + o\left(k_i^{-2\nu -1}\right),
\end{equation}
with $\nu_{\rm NTK}=1/2,\, \nu_{\rm RFK}=3/2$. $\mathcal{C}_{2,1}$ can take two distinct strictly positive values depending on the parity of $k_{i}$;
\item[ii.] The second sector consists of \emph{global} eigenfunctions, which are functions of the whole input $\bm{x}$. The labels $\bm{k},\bm{\ell}$ of global eigenfunctions are such that at least two of the $k_i$'s are non-zero. We call the corresponding eigenvalue $\Lambda^{\scriptscriptstyle(2)}_{\bm{k}}$. When $\|\bm{k}\|\to\infty$, with $k\,{=}\,\|\bm{k}\|$,
\begin{equation}\label{eq:eig-scaling-2d-global}
\Lambda^{(2)}_{\bm{k}} = \mathcal{C}_{2,2}\, k^{-2\nu -p} + o\left(k^{-2\nu -p}\right),
\end{equation}
\end{enumerate}
\end{theorem}
\emph{Proof.} With two-dimensional patches in the first layer, the input space becomes the Cartesian product of $p$ unit circles, $\mathcal{X}=\prod_{i=1}^p\mathbb{S}^1$. Then, each patch $\bm{x}_i$ corresponds to an angle $\theta_i$ and the spherical harmonics are equivalent to Fourier atoms,
\begin{equation}
Y_0(\theta) = 1, \quad Y_{k,1}(\theta) = e^{ik\theta}, \quad Y_{k,2}(\theta) = e^{-ik\theta}, \quad \forall k \geq 1.
\end{equation}
Therefore, solving the eigenvalue problem for a dot-product kernel $\mathcal{K}(\bm{x}\cdot\bm{y}) = \mathcal{K}\left(\cos(\theta_x - \theta_y)\right)$ with $\bm{x},\,\bm{y} \in \mathbb{S}^1$ reduces to computing its Fourier transform. With $|\mathbb{S}^0|\,{=}\,2$ and $|\mathbb{S}^1|\,{=}\,2\pi$,
\begin{equation}
\frac{1}{2\pi} \int_{-\pi}^{\pi} d\theta_x \, \mathcal{K}\left(\cos(\theta_x - \theta_y)\right) e^{\pm ik\theta_x} = \Lambda_k e^{\pm ik\theta_y}\Rightarrow \Lambda_k = \frac{1}{2\pi} \int_{-\pi}^{\pi} d\theta \, \mathcal{K}\left(\cos\theta\right) e^{\pm ik\theta},
\end{equation}
where we denoted with $\theta$ the difference between the two angles. Similarly, for a multi-dot-product kernel, the eigenvalues coincide with the $p$-dimensional Fourier transform of the kernel, where $p$ is the number of patches,
\begin{align}\label{eq:eval-hier}
\Lambda_{\bm{k}} &= \frac{1}{(2\pi)^p} \int_{-\pi}^{\pi} \left( \prod_{i=1}^p d\theta_i \, e^{\pm ik_i\theta_i} \right) \mathcal{K}\left(\{\cos\theta_i\}_{i=1}^p\right) \nonumber \\
&= \frac{1}{(2\pi)^p} \int_{-\pi}^{\pi} d^p\bm{\theta} \, e^{\pm i\bm{k}\cdot\bm{\theta}} \mathcal{K}\left(\{\cos\theta_i\}_{i=1}^p\right),
\end{align}
with $\bm{k}=(k_1,\dots,k_p)^\top$ the vector of the patch wavevectors and $\bm{\theta}=(\theta_1,\dots,\theta_p)^\top$ the vector of the patch angle differences $\theta_i=\theta_{x,i}-\theta_{y,i}$.
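This reduction can be illustrated numerically: on a uniform grid of the circle, the Gram matrix of a dot-product kernel is circulant, so its eigenvalues coincide with the discrete Fourier coefficients of $\mathcal{K}(\cos\theta)$. The kernel $e^{t}$ below is an arbitrary smooth choice used only for illustration.

```python
import numpy as np

# A dot-product kernel K(cos(th_x - th_y)) on a uniform grid of N angles gives
# a circulant Gram matrix: its eigenvalues equal the DFT of its first row.
N = 256
theta = 2 * np.pi * np.arange(N) / N
row = np.exp(np.cos(theta))                # first row of the circulant matrix
gram = np.array([np.roll(row, j) for j in range(N)]) / N

eig_direct = np.sort(np.linalg.eigvalsh(gram))
eig_fourier = np.sort(np.fft.fft(row).real / N)
assert np.allclose(eig_direct, eig_fourier, atol=1e-10)
```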
The nonanalyticities of the kernel at $t_i\,{=}\,1$ for all $i$ move to $\theta_i\,{=}\,0$ for all $i$, whereas those at $t_i\,{=}\,-1$ move to $\theta_i\,{=}\,\pi$ and $-\pi$. The corresponding singular expansion is obtained from~\autoref{eq:rfk-sing-plus} after replacing $t_i$ with $\cos{(\theta_i)}$ and expanding $\cos{(\theta_i)}$ as $1-\theta_i^2/2$, resulting in
\begin{equation}
\mathcal{K}_{\text{RFK}}^{(2+1)} (\{\cos\theta_i\}_{i=1}^p) = 1 - \frac{1}{2p} \sum_{i=1}^p \theta_i^2 + \frac{1}{3\pi p} \sum_{i=1}^p |\theta_i|^3 + \frac{2\sqrt{2}}{3\pi} \left( \frac{1}{p} \sum_{i=1}^p \frac{\theta_i^2}{2}\right)^{3/2} + \sum_{i=1}^p O(\theta_i^4).
\end{equation}
The first nonanalytic terms are $\frac{1}{3\pi p} \sum_{i=1}^p |\theta_i|^3$ and $\frac{2\sqrt{2}}{3\pi} \left( \frac{1}{p} \sum_{i=1}^p \frac{\theta_i^2}{2}\right)^{3/2}$. After recalling that the Fourier transform of $\|\bm{\theta}\|^{2\nu}$ with $\bm{\theta} \in \mathbb{R}^p$ decays asymptotically as $\|\bm{k}\|^{-2\nu-p}$~\cite{widom1963asymptotic}, one has ($\nu\,{=}\,3/2$)
\begin{align}
\frac{1}{(2\pi)^p} \int_{-\pi}^\pi d^p\bm{\theta} \, e^{\pm i\bm{k}\cdot\bm{\theta}} \frac{1}{3\pi p} \sum_{i=1}^p |\theta_i|^3 \sim \sum_{i=1}^p k_i^{-4} \prod_{j\neq i}\delta_{k_j,0}, \quad \text{for } \|\bm{k}\| \to \infty
\end{align}
and
\begin{align}
\frac{1}{(2\pi)^p} \int_{-\pi}^\pi d^p\bm{\theta} \, e^{\pm i\bm{k}\cdot\bm{\theta}}\|\bm{\theta}\|^{3} \sim \|\bm{k}\|^{-p-3}, \quad \text{for } \|\bm{k}\| \to \infty.
\end{align}
All the other terms in the kernel expansion will result in subleading contributions in the Fourier transform. Therefore, the former of the two equations above yields the asymptotic scaling of eigenvalues of the local sector, whereas the latter yields the asymptotic scaling of the global sector.
The proof for the NTK case is analogous to the RFK case, except that the singular expansion near $\theta_i\,{=}\,0$ is given by
\begin{equation}
\mathcal{K}_{\text{NTK}}^{(2+1)} (\{\cos\theta_i\}_{i=1}^p) = 3 - \frac{1}{p}\sum_{i=1}^p \frac{|\theta_i|}{2} - \frac{\sqrt{2}}{\pi}\left( \frac{1}{p}\sum_{i=1}^p \frac{\theta_i^2}{2} \right)^{1/2} + \sum_{i=1}^p O(\theta_i^{3/2}).
\end{equation}
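The local-sector decay can be checked numerically: a function behaving as $|\theta|^{2\nu}$ with $\nu\,{=}\,3/2$ near $\theta\,{=}\,0$ has Fourier coefficients decaying as $k^{-2\nu-1}\,{=}\,k^{-4}$. The smooth window $((1+\cos\theta)/2)^4$ below is an arbitrary choice that suppresses periodisation artefacts at $\theta\,{=}\,\pm\pi$, leaving $\theta\,{=}\,0$ as the only nonanalytic point.

```python
import numpy as np

# Fourier coefficients of |theta|^3 times a smooth trigonometric window:
# the k^{-4} decay predicted for nu = 3/2 shows up as a log-log slope of -4.
N = 1 << 16
theta = -np.pi + 2 * np.pi * np.arange(N) / N
f = np.abs(theta) ** 3 * ((1 + np.cos(theta)) / 2) ** 4

c = np.abs(np.fft.fft(f)) / N              # |c_k|; aliasing is negligible here
k = np.arange(20, 400)
slope = np.polyfit(np.log(k), np.log(c[k]), 1)[0]
assert abs(slope + 4.0) < 0.2, slope
```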
\subsection{Patches on the \texorpdfstring{$s$}{s}-dimensional hypersphere}\label{ssec:proof-depth2}
In this section, we make an additional step towards~\autoref{th:eig-scaling} by extending~\autoref{th:eig-scaling-2d} to the case of $s$-dimensional input patches. We still consider hierarchical kernels of depth $3$ with the filter size of the second hidden layer set to $p\,{=}\,d/s$ (the total number of $s$-patches of the input) so as to ease the presentation. The extension to general depth and filter sizes is presented in~\autoref{ssec:proof-general}.
\begin{theorem}[Spectrum of depth-$3$ kernels on $s$-patches]\label{th:eig-scaling-sd}
Let $T_{\mathcal{K}}$ be the integral operator associated with a $d$-dimensional hierarchical kernel of depth $3$, ($2$ hidden layers), with filter sizes ($s_1\,{=}\,s,s_2$) and $p_2\,{=}\,1$, such that $2 s_2\,{=}\,d$ and $s_2\,{=}\,p$ (the number of $s$-patches). Eigenvalues and eigenfunctions of $T_{\mathcal{K}}$ can be organised into $2$ sectors associated with the hidden layers of the kernel/network.
\begin{enumerate}
\item[i.] The first sector consists of $s_1$-\emph{local} eigenfunctions, which are functions of a single patch $\bm{x}_{i}$ for $i\,{=}\,1,\dots,p$. The labels $\bm{k},\bm{\ell}$ of local eigenfunctions are such that all the $k_j$'s with $j\neq i$ are zero (because the eigenfunction is constant outside of $\bm{x}_{i}$). The corresponding eigenvalue is degenerate with respect to the location of the patch: we call it $\Lambda^{\scriptscriptstyle(1)}_{k_i}$. When $k_i\to\infty$,
\begin{equation}\label{eq:eig-scaling-sd-local}
\Lambda^{(1)}_{k_i} = \mathcal{C}_{s,1}\, k_i^{-2\nu -(s-1)} + o\left(k_i^{-2\nu -(s-1)}\right),
\end{equation}
with $\nu_{\rm NTK}=1/2,\, \nu_{\rm RFK}=3/2$. $\mathcal{C}_{s,1}$ can take two distinct strictly positive values depending on the parity of $k_{i}$;
\item[ii.] The second sector consists of \emph{global} eigenfunctions, which are functions of the whole input $\bm{x}$. The labels $\bm{k},\bm{\ell}$ of global eigenfunctions are such that at least two of the $k_i$'s are non-zero. We call the corresponding eigenvalue $\Lambda^{\scriptscriptstyle(2)}_{\bm{k}}$. When $k\equiv \| \bm{k}\|\to\infty$, for fixed non-zero angles $\bm{k}/k$,
\begin{equation}\label{eq:eig-scaling-sd-global}\begin{aligned}
\Lambda^{(2)}_{\bm{k}} = \mathcal{C}_{s,2}\left(\frac{\bm{k}}{k}\right) k^{-2\nu -p(s-1)} + o\left(k^{-2\nu -p(s-1)}\right),
\end{aligned}\end{equation}
where $\mathcal{C}_{s,2}$ is a positive function.
\end{enumerate}
\end{theorem}
\emph{Proof.} A hierarchical RFK/NTK is a multi-dot-product kernel, therefore its eigenfunctions are products of spherical harmonics $\tilde{Y}_{\bm{k},\bm{\ell}}(\bm{x}) = \prod_{i=1}^p Y_{k_i,\ell_i}(\bm{x}_i)$ and the eigenvalues of $\mathcal{K}$ are given by~\autoref{eq:multi-dp-mercer},
\begin{equation}\label{eq:proof-eigvals}
\Lambda_{\bm{k}} = \left(\prod_{i=1}^p\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\int_{-1}^{+1} dt_i\left(1-t_i^2\right)^{\frac{s-3}{2}}\, P_{k_i,s}(t_i)\right)\mathcal{K}\left( \left\lbrace t_i \right\rbrace_i\right).
\end{equation}
The strategy of the proof is as follows: first, we show that the infinitely differentiable part of $\mathcal{K}$ results in eigenvalues which decay faster than any polynomial of the degrees $k_i$. We then show that the decay is controlled by the most singular term of the singular expansion of the kernel, and finally compute such decay by relating it to the number of derivatives of the kernel having a finite $L^2$ norm.
When $\mathcal{K}$ is infinitely differentiable in $[-1,+1]^p$, we can plug in Rodrigues' formula~\autoref{eq:rodrigues} for each $P_{k_i,s}(t_i)$ and get
\begin{equation}\label{eq:proof-rodrigues}
\Lambda_{\bm{k}} = \left(\prod_{i=1}^p\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\left(-\frac{1}{2}\right)^{k_i} \frac{\Gamma\left(\frac{s-1}{2}\right)}{\Gamma\left(k_i+\frac{s-1}{2}\right)}\right) \int_{-1}^{+1} d\bm{t}\, \mathcal{K}\left( \bm{t}\right) \left(\prod_{i=1}^p\frac{d^{k_i}}{dt^{k_i}_i}\left(1-t_i^2\right)^{k_i+\frac{s-3}{2}} \right),
\end{equation}
with $\int_{-1}^{+1} d\bm{t}$ denoting integration over the $p$-dimensional hypercube $[-1,+1]^p$. We can simplify the integral further via integration by parts, so as to obtain
\begin{equation}\label{eq:proof-parts}
\Lambda_{\bm{k}} = \left(\prod_{i=1}^p\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\left(\frac{1}{2}\right)^{k_i} \frac{\Gamma\left(\frac{s-1}{2}\right)}{\Gamma\left(k_i+\frac{s-1}{2}\right)}\right) \int_{-1}^{+1} d\bm{t}\, \mathcal{K}^{(\bm{k})}\left( \bm{t}\right) \left(\prod_{i=1}^p\left(1-t_i^2\right)^{k_i+\frac{s-3}{2}} \right),
\end{equation}
where $\mathcal{K}^{(\bm{k})}$ denotes the partial derivative of order $k_1$ with respect to $t_1$, $k_2$ with respect to $t_2$ and so on until $k_p$ with respect to $t_p$. Notice that the function $(1-t^2)^{\frac{d-3}{2}}$ is proportional to the probability measure of the scalar product $t$ between two points sampled uniformly at random on the unit sphere~\cite[Sec.~1.3]{atkinson2012spherical},
\begin{equation}
\lvert\mathbb{S}^{d-1} \rvert = \int_{-1}^{+1}dt\, (1-t^2)^{\frac{d-3}{2}} \int_{\mathbb{S}^{d-2}} dS^{d-2} \Rightarrow \frac{\lvert\mathbb{S}^{d-1} \rvert}{\lvert\mathbb{S}^{d-2} \rvert} \int_{-1}^{+1}dt\, (1-t^2)^{\frac{d-3}{2}} = 1.
\end{equation}
This probability measure converges weakly to a Dirac mass $\delta(t)$ when $d\to\infty$. Recall, in addition, that $\lvert\mathbb{S}^{d-1} \rvert\,{=}\,2 \pi^{d/2}/\Gamma(d/2)$, where $\Gamma$ denotes the Gamma function $\Gamma(z)\,{=}\,\int_0^\infty dx\, x^{z-1}e^{-x}$. Thus, choosing $d$ such that $(d-3)/2\,{=}\,k_i+(s-3)/2$, i.e. $d\,{=}\,2k_i+s$, the properly rescaled function $(1-t_i^2)^{k_i+(s-3)/2}$ converges weakly to a Dirac measure $\delta(t_i)$ as $k_i\to\infty$,
\begin{equation}
\lim_{k_i\to\infty} \frac{\Gamma\left(k_i+\frac{s}{2}\right)}{\sqrt{\pi}\Gamma\left(k_i+\frac{s-1}{2}\right)}\left(1-t_i^2\right)^{k_i+\frac{s-3}{2}} = \delta(t_i).
\end{equation}
As a result, when $\mathcal{K}$ is infinitely differentiable, one has the following equivalence in the limit where all $k_i$'s are large,
\begin{equation}\label{eq:proof-smooth}
\Lambda_{\bm{k}} \sim \left(\prod_{i=1}^p\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\left(\frac{1}{2}\right)^{k_i} \frac{\Gamma\left(\frac{s-1}{2}\right)}{\Gamma\left(k_i+\frac{s}{2}\right)}\right) \mathcal{K}^{(\bm{k})}\left( \bm{0}\right),
\end{equation}
which implies that, when $\mathcal{K}$ is infinitely differentiable, the eigenvalues decay exponentially or faster with the $k_i$.
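The weak convergence used above can be illustrated numerically: the normalised weight $\Gamma(k+s/2)/(\sqrt{\pi}\,\Gamma(k+(s-1)/2))\,(1-t^2)^{k+(s-3)/2}$ integrates to one on $[-1,1]$ and concentrates around $t\,{=}\,0$ as $k$ grows. Grid sizes and the choice $s\,{=}\,3$ are arbitrary.

```python
import numpy as np
from math import lgamma, log, pi

# Normalised weight appearing in the Dirac-delta limit, computed in log space
# so that large k does not overflow the Gamma functions.
def weight(t, k, s):
    lognorm = lgamma(k + s / 2) - 0.5 * log(pi) - lgamma(k + (s - 1) / 2)
    return np.exp(lognorm + (k + (s - 3) / 2) * np.log1p(-t ** 2))

t = np.linspace(-1, 1, 400001)[1:-1]       # open interval avoids log1p(-1)
dt = t[1] - t[0]
for k in (10, 100, 1000):
    assert abs(np.sum(weight(t, k, s=3)) * dt - 1) < 1e-3
# the mass inside |t| < 0.1 approaches one as k grows
mass = lambda k: np.sum(weight(t[np.abs(t) < 0.1], k, 3)) * dt
assert mass(10) < 0.9 < 0.99 < mass(1000)
```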
Let us now consider the nonanalytic part of $\mathcal{K}$. There are three kinds of terms appearing in the singular expansion of depth-$3$ kernels (cf.~\autoref{ssec:proof-singular}):
\begin{itemize}
\item[\emph{ia)}] $c_{+}\sum_{i} (1-t_i)^\nu$ near $t_i\,{=}\,+1$;
\item[\emph{ib)}] $c_{-}\sum_{i} (1+t_i)^\nu$ near $t_i\,{=}\,-1$;
\item[\emph{ii)}] $c_{+,\text{all}}\left( \sum_{i}(1-t_i)/p \right)^\nu$ near $t_i\,{=}\,+1$ for all $i$;
\end{itemize}
where the exponent $\nu$ is $1/2$ for the NTK and $3/2$ for the RFK. We will not consider terms of the kind \emph{ib)} explicitly, as the analysis is equivalent to that of terms of the kind \emph{ia)}. After replacing $t_i$ with $\cos(\theta_i)$, as in~\autoref{ssec:proof-fourier}, we get again $\sum_i |\theta_i|^{2\nu}$ and $\|\bm{\theta}\|^{2\nu}$ as leading nonanalytic terms. Therefore, we can rewrite the nonanalytic part of the kernel as follows,
\begin{equation}
\mathcal{K}_\text{n.a.}(\bm{\theta}) = \sum_i f_1(|\theta_i|) + f_2(\|\bm{\theta}\|) + \mathcal{\tilde K}(\bm{\theta}),
\end{equation}
where $f_1$, $f_2$ are single-variable functions which behave as $\theta^{2\nu}$ near zero and have compact support, whereas $\mathcal{\tilde K}$ has a singular expansion near $\theta_i\,{=}\,0$ analogous to that of $\mathcal{K}$ but with leading nonanalyticities controlled by an exponent $\nu'\,{\geq}\nu+1$.
Let us look at the contribution to the eigenvalue $\Lambda_{\bm{k}}$ due to the term $f_1(|\theta_i|)$:
\begin{equation}\begin{aligned}\label{eq:local-contribution}
&\left(\prod_{j=1}^p\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\int_{0}^{\pi} d\theta_j\left(\sin{(\theta_j)}\right)^{s-2}\, P_{k_j,s}(\cos{(\theta_j)})\right)f_1(|\theta_i|)\\
= &\left(\prod_{j\neq i}\delta_{k_j,0}\right)\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\int_{0}^{\pi} d\theta\left(\sin{(\theta)}\right)^{s-2}\, P_{k_i,s}(\cos{(\theta)})f_1(|\theta|) = \left(\prod_{j\neq i}\delta_{k_j,0}\right)\left(f_1\right)_{k_i},
\end{aligned}\end{equation}
where we have introduced $\left(f_1\right)_{k}$ as the projection of $f_1(\theta)$ on the $k$-th Legendre polynomial. The asymptotic decay of $\left(f_1\right)_{k}$ is strictly related to the differentiability of $f_1$, which is in turn controlled by the action of the Laplace-Beltrami operator $\Delta$ on $f_1$. As a function on the sphere $\mathbb{S}^{s-1}$, $f_1$ depends only on one angle, therefore the Laplace-Beltrami operator acts as follows,
\begin{equation}
\Delta f_1(\theta) = \frac{1}{\sin{(\theta)}^{s-2}}\frac{d}{d\theta}\left(\sin{(\theta)}^{s-2} \frac{df_1}{d\theta}(\theta)\right) = f_1''(\theta) + (s-2)\frac{\cos{(\theta)}}{\sin{(\theta)}}f_1'(\theta).
\end{equation}
In terms of singular behaviour near $\theta\,{=}\,0$, $f_1(\theta)\sim|\theta|^{2\nu}$ implies $\Delta f_1(\theta)\sim|\theta|^{2\nu-2}$, thus $\Delta^m f_1(\theta)\sim|\theta|^{2(\nu-m)}$. Given $\nu$, repeated applications of $\Delta$ eventually result in a function whose $L^2$ norm on the sphere diverges. On the one hand,
\begin{equation}
\| \Delta^{m/2}f_1 \|^2 = \int_0^\pi d\theta\, \sin^{s-2}{(\theta)} f_1(\theta)\Delta^m f_1(\theta).
\end{equation}
The integrand behaves as $|\theta|^{s-2+4\nu-2m}$ near $0$, thus the integral diverges for $m\geq 2\nu + (s-1)/2$. On the other hand, from~\autoref{eq:slap-eigvals},
\begin{equation}
\| \Delta^{m/2}f_1 \|^2 = \sum_k \mathcal{N}_{k,s} \left(k(k+s-2)\right)^m |(f_1)_k|^2.
\end{equation}
As $\mathcal{N}_{k,s}\sim k^{s-2}$ and the sum must converge for $m\,{<}\,2\nu + (s-1)/2$ and diverge otherwise, $(f_1)_k\sim k^{-2\nu -(s-1)}$. The projections of all the other terms in $\mathcal{K}$ on Legendre polynomials of one of the $p$ angles $\theta_i$ display a faster decay with $k$, therefore the above results imply the asymptotic scaling of local eigenvalues. Notice that such scaling matches the result of~\cite{bietti2021deep}, which was obtained with a different argument.
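This decay can be verified numerically for $s\,{=}\,3$ and $\nu\,{=}\,1/2$, where $P_{k,s}$ reduces to the Legendre polynomial $P_k$: the function $\sqrt{2(1-t)}$, which behaves as $|\theta|$ near $t\,{=}\,1$ and is analytic at $t\,{=}\,-1$, has projections decaying as $k^{-2\nu-(s-1)}\,{=}\,k^{-3}$. The quadrature order and degree range are arbitrary choices.

```python
import numpy as np

# Legendre projections of a function with a |theta|^{2 nu} singularity
# (nu = 1/2) at theta = 0: expected log-log slope of -3 for s = 3.
nodes, weights = np.polynomial.legendre.leggauss(4000)
f = np.sqrt(2 * (1 - nodes))               # ~ |theta| near t = 1
deg = 120
P = np.polynomial.legendre.legvander(nodes, deg)   # P[:, k] = P_k(nodes)
proj = P.T @ (weights * f)                 # raw projections  int P_k(t) f(t) dt
k = np.arange(20, deg + 1)
slope = np.polyfit(np.log(k), np.log(np.abs(proj[k])), 1)[0]
assert abs(slope + 3.0) < 0.3, slope
```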
Finally, let us look at the contribution to the eigenvalue $\Lambda_{\bm{k}}$ due to the term $f_2(\|\bm{\theta}\|)$:
\begin{equation}\begin{aligned}\label{eq:global-contribution}
&\left(\prod_{j=1}^p\frac{|\mathbb{S}^{s-2}|}{|\mathbb{S}^{s-1}|}\int_{0}^{\pi} d\theta_j\left(\sin{(\theta_j)}\right)^{s-2}\, P_{k_j,s}(\cos{(\theta_j)})\right)f_2(\|\bm{\theta}\|) = \left(f_2\right)_{\bm{k}},
\end{aligned}\end{equation}
where we have introduced $\left(f_2\right)_{\bm{k}}$ as the projection of $f_2(\|\bm{\theta}\|)$ on the multi-Legendre polynomial with multi-degree $\bm{k}$. The asymptotic decay of $\left(f_2\right)_{\bm{k}}$ is again related to the differentiability of $f_2$, controlled by the action of the multi-sphere Laplace-Beltrami operator $\Delta_{p,s}$ in~\autoref{eq:mslap-eigvals}. As $f_2$ depends only on one angle per sphere,
\begin{equation}
\Delta_{p,s} f_2(\|\bm{\theta}\|) = \sum_{i=1}^p\left( \partial^2_{\theta_i} f_2(\|\bm{\theta}\|) + (s-2)\frac{\cos{(\theta_i)}}{\sin{(\theta_i)}}\partial_{\theta_i} f_2(\|\bm{\theta}\|)\right).
\end{equation}
Further simplifications occur since $f_2$ depends only on the norm of $\bm{\theta}$. In terms of the singular behaviour near $\|\bm{\theta}\|\,{=}\,0$, $f_2\sim \|\bm{\theta}\|^{2\nu}$ implies $\Delta_{p,s}^m f_2 \sim \|\bm{\theta}\|^{2(\nu-m)}$, thus
\begin{equation}
\| \Delta_{p,s}^{m/2}f_2 \|^2 = \int_{[0,\pi]^p} d^p \bm{\theta}\, \prod_{i=1}^p\left(\sin^{s-2}{(\theta_i)}\right) f_2(\|\bm{\theta}\|)\Delta_{p,s}^m f_2(\|\bm{\theta}\|) < +\infty
\end{equation}
requires $m<2\nu + p(s-1)/2$ (compare with $m<2\nu + (s-1)/2$ for the local contributions). Therefore, one has
\begin{equation}\label{eq:ms-condition}
\| \Delta_{p,s}^{m/2}f_2 \|^2 = \sum_{\bm{k}} \left(\prod_{i=1}^p \mathcal{N}_{k_i,s}\right)\left(\sum_{i=1}^p k_i(k_i+s-2)\right)^m |(f_2)_{\bm{k}}|^2 < +\infty\quad \forall\,m<2\nu + p(s-1)/2,
\end{equation}
while the sum diverges for $m\geq 2\nu + p(s-1)/2$. In addition, since $f_2$ is a radial function of $\bm{\theta}$ which is homogeneous (or scale-invariant) near $\|\bm{\theta}\|\,{=}\,0$, $(f_2)_{\bm{k}}$ can be factorised in the large-$\|\bm{k}\|$ limit into a power of the norm $\|\bm{k}\|^\alpha$ and a finite angular part $\mathcal{C}(\bm{k}/\|\bm{k}\|)$. By plugging the factorisation into~\autoref{eq:ms-condition}, we get
\begin{equation}
(f_2)_{\bm{k}} \sim \mathcal{C}(\bm{k}/\|\bm{k}\|) \|\bm{k}\|^{-2\nu -p(s-1)}, \quad \sum_{\bm{k},\|\bm{k}\|=k} \left( \left(\prod_{i=1}^p (k_i/k)^{s-2}\right) \mathcal{C}(\bm{k}/\|\bm{k}\|)^2 \right) < +\infty
\end{equation}
The projections of all the other terms in $\mathcal{K}$ on multi-Legendre polynomials display a faster decay with $\|\bm{k}\|$, therefore the above results imply the asymptotic scaling of global eigenvalues.
\subsection{General depth}\label{ssec:proof-general}
The generalisation to arbitrary depth is trivial once the depth-$3$ case is understood. For global and $s_1$-local eigenvalues, the analysis of the previous section carries over unaltered. All the other intermediate sectors correspond to the other terms of the singular expansion of the kernel: from~\autoref{ssec:proof-singular}, these terms can be written as
\begin{equation}\label{eq:proof-final}
\frac{c}{p_L}\frac{1}{\left(\displaystyle\prod_{l' < l'' \leq L}s_{l''}\right)} \sum_{i_{{l'+1}\to{L+1}}}\left( \frac{1}{\left(\displaystyle\prod_{2 \leq l'' \leq l'}s_{l''}\right)} \sum_{i_{2\to{l'}}} \left(1-t_{i_{2\to L+1}}\right) \right)^{\nu},
\end{equation}
for some $l'=2,\dots,L-1$ and fractional $\nu$. In practice, this term is a sum over the $p_{l'}\,{=}\,p_L\prod_{l' < l'' \leq L}s_{l''}$ meta-patches of $\bm{t}$ having size $s_{2\to l'}:=\prod_{2 \leq l'' \leq l'}s_{l''}$. Each summand is the fractional power $\nu$ of the average of the $t_i$'s within a meta-patch. When plugging such a term into~\autoref{eq:proof-eigvals}, the integrals over the $t_i$'s which do not belong to that meta-patch yield Kronecker deltas for the corresponding $k_i$'s. The integrals over the $t_i$'s within the meta-patch, instead, can be written as in \autoref{eq:global-contribution} with the product and the norm restricted to the elements of that meta-patch, i.e., $\|\bm{\theta}\| \to \left(\sum_{i_{2 \to l'}} \theta_{i_{2\to L+1}}^2\right)^{1/2}$. Therefore, the scaling of the eigenvalue with $k$ is given again by~\autoref{eq:eig-scaling-sd-global}, but with $p$ replaced by the size of the meta-patch $\prod_{2 \leq l'' \leq l'}s_{l''}$, so that the effective dimension of~\autoref{eq:effective-dim} appears at the exponent.
\section{Generalisation bounds for kernel regression and spatial adaptivity}\label{app:minimax}
This appendix provides an introduction to classical generalisation bounds for kernel regression and extends \autoref{co:adaptivity} to patches on the hypersphere.
\subsection{Classical generalisation bounds}
Consider the regression setting detailed in \autoref{sec:adaptivity} of the main text. First, assume that the target function $f^*$ belongs to the RKHS $\mathcal{H}$ of the kernel $\mathcal{K}$. Then, without further assumptions on $\mathcal{K}$, we have the following dimension-free bound on the excess risk, based on Rademacher complexity \cite[Chs. 4, 7]{bach2021learning}, \cite{bietti2022approximation},
\begin{equation}
\overline{\epsilon}(\lambda, n) - \epsilon(f^*)
\leq \mathcal{C} \, \| f^* \|_{\mathcal{H}} \sqrt{\frac{{\rm Tr}(\mathcal{T_K})}{n}},
\end{equation}
where $\mathcal{T_K}$ is the integral operator associated to $\mathcal{K}$. For a hierarchical kernel, having a target with more power in the local sectors can result in a smaller $\| f^* \|_{\mathcal{H}}$, hence a smaller excess risk. However, this gain is only a constant factor in terms of sample complexity and, more importantly, being in the RKHS requires an order of smoothness which is typically of the order of the dimension, a very restrictive assumption in high-dimensional settings. This result can be extended by including more details about the kernel and the target function.
In particular, \cite[Prop. 7.2]{bach2021learning} states that, for $f^*$ in the closure of $\mathcal{H}$, regularisation $\lambda \leq 1$ and $n \geq \frac{5}{\lambda}(1+\log(1/\lambda))$, one has
\begin{equation}\label{eq:bach-risk-decomposition}
\overline{\epsilon}(\lambda, n) - \epsilon(f^*) \leq 16 \, \frac{\sigma^2}{n} \, {\rm Tr}\left((\mathcal{T_K} + \lambda I)^{-1} \mathcal{T_K} \right) + 16 \inf_{f \in \mathcal{H}}\left\{ \| f-f^* \|_{L_2}^2 + \lambda\| f \|_{\mathcal{H}}^2\right\} + \frac{24}{n^2} \, \| f^* \|_{L_{\infty}},
\end{equation}
where $\sigma^2$ bounds the conditional variance of the labels, i.e. ${\mathbb{E}_{(\bm{x},y)\sim p}\left[\left(y-f^*(\bm{x})\right)^2\,|\,\bm{x}\right] < \sigma^2}$.
Then, let us consider the following standard assumptions in the kernel literature \cite{caponnetto2007optimal},
\begin{align}
\text{capacity: }&\text{Tr}\left(\mathcal{T}_{\mathcal{K}}^{1/\alpha}\right) = \sum_{\bm{k}\geq\bm{0}} \sum_{\bm{\ell}} (\Lambda_{\bm{k}})^{1/\alpha} < +\infty,\nonumber\\
\text{source: }&\norm{T_{\mathcal{K}}^{\frac{1-r}{2}}f^*}_{\mathcal{H}}^2 = \sum_{\bm{k}\geq\bm{0}} \sum_{\bm{\ell}} (\Lambda_{\bm{k}})^{-r} (f^*_{\bm{k},\bm{\ell}})^2 < +\infty.
\end{align}
In short, the first assumption characterises the `size' of the RKHS (the larger $\alpha$, the smaller the number of functions in the RKHS), while the second assumption defines the regularity of the target function relative to that of the kernel (when $r=1$, $f^*\in\mathcal{H}$; when $r<1$, $f^*$ is less smooth; when $r>1$, $f^*$ is smoother). Combining these assumptions with \autoref{eq:bach-risk-decomposition}, one gets
\begin{equation}
\overline{\epsilon}(\lambda, n) - \epsilon(f^*)
\leq 16 \, \frac{\sigma^2}{n} \, \mathcal{C}_1 \lambda^{-1/\alpha} + 16 \, \mathcal{C}_2 \, \lambda^r + \frac{24}{n^2} \, \| f^* \|_{L_{\infty}}.
\end{equation}
Optimising for $\lambda$ results in
\begin{equation}
\lambda_n
= \left( \frac{\mathcal{C}_1 \sigma^2}{ \alpha \, r \, \mathcal{C}_2 \, n} \right)^{\frac{\alpha}{\alpha r + 1}},
\end{equation}
and the bound becomes
\begin{equation}
\overline{\epsilon}(\lambda_n, n) - \epsilon(f^*)
\lesssim \mathcal{C}_2^{\frac{2}{\alpha r + 1}} \left( \frac{\mathcal{C}_1 \sigma^2}{n} \right)^{\frac{\alpha r}{\alpha r + 1}} + \frac{1}{n^2} \, \| f^* \|_{L_{\infty}}.
\end{equation}
Finally, when $r>(\alpha-1)/\alpha$, $n \geq \frac{5}{\lambda_n}(1+\log(1/\lambda_n))$ is always satisfied for $n$ large enough.
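The closed-form $\lambda_n$ can be sanity-checked by minimising the two $\lambda$-dependent terms of the bound numerically; all constants below are arbitrary choices.

```python
import numpy as np

# Minimise B(lambda) = 16 sigma^2 C1 lambda^{-1/alpha} / n + 16 C2 lambda^r
# on a log grid and compare the minimiser with the closed-form lambda_n.
alpha, r, C1, C2, sigma2, n = 2.0, 1.0, 0.7, 1.3, 0.5, 10_000

def bound(lam):
    return 16 * sigma2 * C1 * lam ** (-1 / alpha) / n + 16 * C2 * lam ** r

lam_n = (C1 * sigma2 / (alpha * r * C2 * n)) ** (alpha / (alpha * r + 1))
grid = np.exp(np.linspace(np.log(1e-8), 0.0, 200_001))
lam_star = grid[np.argmin(bound(grid))]
assert abs(np.log(lam_star / lam_n)) < 1e-3
```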
\subsection{Extension of \autoref{co:adaptivity} to patches on the hypersphere}
\begin{corollary}[Adaptivity to spatial structure]\label{co:adaptivity-app}
Let $\bm{x},\bm{y}\in {\sf M}^p\mathbb{S}^{s-1}$ and $\mathcal{K}$ be the RFK/NTK of a hierarchical CNN with $L$ hidden layers, filter sizes ($s_1,\dots,s_L$), and $p_L\,{\geq}\,1$. Denote with $\Lambda_{\bm{k}}$ the eigenvalues of the corresponding integral operator $T_{\mathcal{K}}$. Consider a target function $f^*$ on ${\sf M}^p\mathbb{S}^{s-1}$. If there is $l\,{=}\,1,\dots,L$ such that $f^*$ depends only on some meta-patch $\bm{x}_{i_{l+1\to L+1}}$ of the input $\bm{x}$, then only the first $l$ sectors of the spectrum of $T_{\mathcal{K}}$ contribute to the source condition, i.e.
\begin{equation}
\norm{T_{\mathcal{K}}^{\frac{1-r}{2}}f^*}_{\mathcal{H}}^2 = \sum_{l'=1}^l \sum_{i_{l'+1\to L+1}}\sum_{\substack{\bm{k}_{i_{l'+1\to L+1}}\\\bm{\ell}_{i_{l'+1\to L+1}}}}\left(\Lambda^{(l')}_{\bm{k}_{i_{l'+1\to L+1}}}\right)^{-r} \left(f^*_{\bm{k}_{i_{l'+1\to L+1}},\,\bm{\ell}_{i_{l'+1\to L+1}}}\right)^2.
\end{equation}
The same holds if $f^*$ is a linear combination of such functions.
\end{corollary}
\section{Statistical mechanics of generalisation in kernel regression}\label{app:spectralbias}
In \cite{bordelon2020spectrum, canatar2021spectral}, the authors derived a heuristic expression for the average-case mean-squared error of kernel (ridge) regression with the replica method of statistical physics \cite{mezard1987spin}. Denoting with $\{\phi_\rho(\bm{x}),\; \Lambda_\rho\}_{\rho\geq1}$ the eigenfunctions and eigenvalues of the kernel and with $c_\rho$ the coefficients of the target function in this basis, i.e. $f^*(\bm{x}) = \sum_{\rho\geq 1} c_\rho \phi_\rho(\bm{x})$, one has
\begin{equation}\label{eq:bordelon1}
\epsilon(\lambda, n)
= \partial_{\lambda}\left(\frac{\kappa_{\lambda}(n)}{n}\right) \sum_\rho \frac{\kappa_\lambda(n)^2}{\left(n\Lambda_\rho + \kappa_\lambda(n)\right)^2} \, \mathbb{E}[c_\rho^2] ,\end{equation}
where $\lambda$ is the ridge and $\kappa_\lambda(n)$ satisfies the implicit equation
\begin{equation}\label{eq:bordelon2}
\frac{\kappa_\lambda(n)}{n}
= \lambda + \frac{1}{n}\sum_\rho \frac{\Lambda_\rho \kappa_\lambda(n)/n}{\Lambda_\rho + \kappa_\lambda(n)/n}.
\end{equation}
In short, the replica calculation used to obtain these equations consists in defining an energy functional $\mathcal{E}(f)$ related to the empirical MSE and assigning to the predictor $f$ a Boltzmann measure, i.e. $P(f) \propto e^{-\beta \mathcal{E}(f)}$. When $\beta \to \infty$, the measure concentrates around the minimum of $\mathcal{E}(f)$, which coincides with the minimiser of the empirical MSE. Then, since $\mathcal{E}(f)$ depends only quadratically on the projections $c_\rho$, computing the average over data that appears in the definition of the generalisation error reduces to computing Gaussian integrals. While non-rigorous, this method has been successfully used in physics---to study disordered systems---and in machine learning theory. In particular, the predictions obtained with \autoref{eq:bordelon1} and \autoref{eq:bordelon2} have been validated numerically for both synthetic and real datasets.
In \autoref{eq:bordelon1}, $\kappa_\lambda(n)/n$ plays the role of a threshold: the modal contributions to the error tend to $0$ for $\rho$ such that $\Lambda_\rho \gg \kappa_\lambda(n)/n$, and to $\mathbb{E}[c_\rho^2]$ for $\rho$ such that $\Lambda_\rho \ll \kappa_\lambda(n)/n$. This is equivalent to saying that kernel regression can capture only the modes corresponding to the eigenvalues larger than $\kappa_\lambda(n)/n$ (see also~\cite{jacot2020implicit, jacot2020kernel}).
In the ridgeless limit $\lambda \to 0^+$, this threshold asymptotically tends to the $n$-th largest eigenvalue of the kernel, resulting in the intuitive picture presented in the main text. Namely, given $n$ training points, ridgeless regression learns the $n$ projections corresponding to the highest eigenvalues. In particular, assume that the kernel spectrum and the target function projections decay as power laws, namely $\Lambda_\rho \sim \rho^{-a}$ and $\mathbb{E}[{c_\rho}^2] \sim \rho^{-b}$, with $2a\,{>}\,b-1$. Furthermore, we can approximate the summations over modes with an integral by using the Euler-MacLaurin formula. Hence, we substitute the eigenvalues with their asymptotic limit $\Lambda_\rho = A\rho^{-a}$. Since $\kappa_0(n)/n \to 0$ as $n\to \infty$, these two operations result in an error which is asymptotically independent of $n$. In particular,
\begin{align}
\frac{\kappa_0(n)}{n} &= \frac{\kappa_0(n)}{n} \frac{1}{n} \left(\int_0^\infty \frac{A\rho^{-a} }{A\rho^{-a} + \kappa_0(n)/n} \, d\rho + O(1) \right) \nonumber \\
&= \frac{\kappa_0(n)}{n} \frac{1}{n} \left( \left( \frac{\kappa_0(n)}{n} \right)^{-\frac{1}{a}}\int_0^\infty \frac{\sigma^{\frac{1}{a}-1}A^{\frac{1}{a}}a^{-1}}{1 + \sigma} \, d\sigma + O(1) \right).
\end{align}
Since the integration over $\sigma$ is finite and independent of $n$, we obtain that $\kappa_0(n)/n= O(n^{-a})$. Similarly, we find that the mode-independent prefactor $\partial_\lambda \left(\kappa_\lambda(n)/n\right)|_{\lambda=0} = O(1)$.
As a result, we have
\begin{equation}\label{eq:error-scaling1}
\epsilon(n) \sim \sum_{\rho} \frac{n^{-2a}}{\left(A\rho^{-a}+n^{-a}\right)^2} \, \mathbb{E}[c_\rho^2].
\end{equation}
Following the intuitive argument about the thresholding action of $\kappa_0(n)/n \sim n^{-a}$, we can split the summation in \autoref{eq:error-scaling1} into modes where $\Lambda_\rho\gg\kappa_0(n)/n$, $\Lambda_\rho \sim \kappa_0(n)/n$ and $\Lambda_\rho\ll\kappa_0(n)/n$,
\begin{equation}
\epsilon(n) \sim \sum_{\rho \ll n} \frac{n^{-2a}}{\left(A\rho^{-a}\right)^2}\mathbb{E}[c_\rho^2] +\sum_{\rho \sim n} \frac{1}{2}\mathbb{E}[c_\rho^2] + \sum_{\rho \gg n} \mathbb{E}[c_\rho^2].
\end{equation}
Finally, \autoref{eq:spectralbias} is obtained by noticing that, under the assumption on the decay of $\mathbb{E}[c_\rho^2]$, the contribution of the summation over $\rho \ll n$ is subleading in $n$, whereas the other two can be merged together.
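The scaling $\kappa_0(n)/n = O(n^{-a})$ can be checked with a short numerical sketch (illustrative only, not part of the derivation; the function name and parameter values are ours). It solves the ridgeless self-consistent equation $n = \sum_\rho \Lambda_\rho/(\Lambda_\rho + t)$ for $t = \kappa_0(n)/n$ by bisection, for a pure power-law spectrum $\Lambda_\rho = \rho^{-a}$, and fits the decay exponent of $t$ against $n$:

```python
import numpy as np

def kappa_over_n(n, a=2.0, modes=200_000):
    # Solve the ridgeless self-consistency  n = sum_rho Lambda_rho / (Lambda_rho + t)
    # for t = kappa_0(n)/n, with a pure power-law spectrum Lambda_rho = rho^{-a}.
    lam = np.arange(1, modes + 1, dtype=float) ** (-a)
    lo, hi = 1e-12, 1.0
    for _ in range(80):                     # bisection: the sum decreases with t
        t = 0.5 * (lo + hi)
        if np.sum(lam / (lam + t)) > n:
            lo = t
        else:
            hi = t
    return t

ns = np.array([50, 100, 200, 400])
ts = np.array([kappa_over_n(n) for n in ns])
slope = np.polyfit(np.log(ns), np.log(ts), 1)[0]
print(slope)   # close to -a = -2, i.e. kappa_0(n)/n = O(n^{-a})
```

For $a=2$ the fitted exponent is close to $-2$, consistent with $\kappa_0(n)/n = O(n^{-a})$.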
\section{Examples}\label{app:examples}
\subsection{Decay of eigenvalues with the rank}\label{ssec:spectral-decay-app}
\paragraph{Shallow kernels.} Consider a depth-two kernel with filters of size $s$. Our goal is to compute the scaling of the eigenvalues of the kernel $\Lambda_{\bk}$ with their rank $\rho$. The eigenvalues decay with $\bm{k}$ as
\begin{equation}
\Lambda_\bk
\sim \sum_{i=1}^p k_i^{-2\nu_S - (s-1)} \prod_{j \neq i} \delta_{k_j,0}.
\end{equation}
In order to take into account their algebraic multiplicity, we introduce the eigenvalue density $\mathcal{D}(\Lambda)$, whose asymptotic form for small eigenvalues is
\begin{align}
\mathcal{D}(\Lambda)
&= \sum_{\bk,\,\bell} \delta(\Lambda - \Lambda_\bk) \nonumber \\
&\sim \sum_\bk \left(\prod_{i=1}^p k_i^{s-2}\right) \delta\left(\Lambda - \sum_{i=1}^p k_i^{-2\nu-(s-1)} \prod_{j \neq i} \delta_{k_j,0}\right) \nonumber \\
&\sim \sum_{i=1}^p \sum_{k_i} k_i^{s-2} \delta\left(\Lambda - k_i^{-2\nu-(s-1)}\right) \nonumber \\
&\sim \int_1^{\infty} dk \, k^{s-2} \delta\left(\Lambda - k^{-2\nu-(s-1)}\right) \nonumber \\
&\sim \Lambda^{-1-\frac{s-1}{2\nu+(s-1)}}.
\end{align}
Thus, the scaling of $\Lambda(\rho)$ can be determined self-consistently,
\begin{equation}
\rho
= \int_{\Lambda(\rho)}^{\Lambda(1)} d\Lambda \, \mathcal{D}(\Lambda) \sim \Lambda(\rho)^{-\frac{s-1}{2\nu+(s-1)}}
\,\Rightarrow\,
\Lambda(\rho)
\sim \rho^{-1-\frac{2\nu}{s-1}}.
\end{equation}
\paragraph{Deep kernels.} Consider a kernel of depth $L+1$ with filter sizes $(s_1, \dots, s_L)$ and $p_L=1$. For each sector $l$, one can compute the density of eigenvalues $\mathcal{D}_{(l)}(\Lambda)$. Depending on $s_1$, there are two different cases.
If $s_1=2$,
\begin{align}
\mathcal{D}_{(l)}(\Lambda)
&= \sum_{\bk} \delta(\Lambda - \Lambda_\bk^{(l)}) \nonumber \\
&\sim \sum_{i_{l+1\to L+1}} \sum_{\bk_{i_{l+1\to L+1}}} \delta\left(\Lambda - \mathcal{C}_{2,l} \, \|\bk_{i_{l+1\to L+1}}\|^{-2\nu -d_{\rm eff}(l)}\right) \nonumber \\
&\sim \int_1^{\infty} dk \, k^{d_{\rm eff}(l)-1} \delta\left(\Lambda - \mathcal{C}_{2,l} \, k^{-2\nu -d_{\rm eff}(l)}\right) \nonumber \\
&\sim \Lambda^{-1-\frac{d_{\rm eff}(l)}{2\nu+d_{\rm eff}(l)}}.
\end{align}
If $s_1 \geq 3$,
\begin{align}
\mathcal{D}_{(l)}(\Lambda)
&= \sum_{\bk,\,\bell} \delta(\Lambda - \Lambda_\bk^{(l)}) \nonumber \\
&\sim \sum_{i_{l+1\to L+1}} \sum_{\substack{\bk_{i_{l+1\to L+1}},\\ \bell_{i_{l+1\to L+1}}}} \delta\left(\Lambda - \mathcal{C}_{s_1,l}\left(\frac{\bm{k}_{i_{l+1 \to L+1}}}{\|\bk_{i_{l+1\to L+1}}\|}\right) \, \|\bk_{i_{l+1\to L+1}}\|^{-2\nu -d_{\rm eff}(l)}\right) \nonumber \\
&\sim \Lambda^{-1-\frac{d_{\rm eff}(l)}{2\nu+d_{\rm eff}(l)}}.
\end{align}
When summing over all layers $l$, the asymptotic behaviour of the total density of eigenvalues $\mathcal{D}(\Lambda) = \sum_l \mathcal{D}_{(l)}(\Lambda)$ is dictated by the density of the sector with the slowest decay, i.e. the last one. Hence,
\begin{equation}
\mathcal{D}(\Lambda)
\sim \Lambda^{-1-\frac{d_{\rm eff}(L)}{2\nu+d_{\rm eff}(L)}}.
\end{equation}
Therefore, similarly to the shallow case, one finds self-consistently that the $\rho$-th eigenvalue of the kernel decays as
\begin{equation}
\Lambda(\rho)
\sim \rho^{-1-\frac{2\nu}{d_{\rm eff}(L)}}.
\end{equation}
\subsection{Rates from spectral bias ansatz}\label{ssec:sb-app}
Consider a target function $f^*$ which only depends on the meta-patch $\bm{x}_{i_{l+1\to L+1}}$ and with square-integrable derivatives up to order $m$, i.e. $\|\Delta^{m/2} f^*\|^2<+\infty$, with $\Delta$ denoting the Laplace operator. Moreover, consider a hierarchical kernel of depth $L+1$ with filter sizes $(s_1, \dots, s_L)$ and $p_L=1$. We want to compute the asymptotic scaling of the error by using \autoref{eq:spectralbias}, i.e.
\begin{equation}\label{eq:spectral-app}
\overline{\epsilon}(n) \sim \sum_{\bm{k},\bm{\ell} \text{ s.t. } \Lambda_{\bm{k}}<\Lambda(n)} \lvert f^*_{\bm{k},\bm{\ell}}\rvert^2.
\end{equation}
In the previous section, we showed that the $n$-th eigenvalue of the kernel $\Lambda(n)$ decays as
\begin{equation}
\Lambda(n)
\sim n^{-1-\frac{2\nu}{d_{\rm eff}(L)}}.
\end{equation}
Since by construction the target function depends only on a meta-patch of the $l$-th sector, the only non-zero projections will be the ones on eigenfunctions of the first $l$ sectors. Thus, all the $\bk$'s corresponding to the sectors of layers with $l'>l$ do not contribute to the sum. In particular, the sum is dominated by the $\bk$'s of the largest sector and the set $\{\bk \text{ s.t. } \Lambda_{\bk} < \Lambda(n)\}$ is the set of $\bk_{i_{l+1\to L+1}}$'s with norm larger than $n^{\frac{2\nu + d_{\rm eff}(L)}{(2\nu + d_{\rm eff}(l))\, d_{\rm eff}(L)}}$.
Finally, we notice that the finite-norm condition on the derivatives,
\begin{equation}
\|\Delta^{m/2}f^* \|^2 = \sum_{\bm{k}} \left(\prod_{i=1}^p \mathcal{N}_{k_i,s}\right) \left(\sum_{i=1}^p k_i(k_i+s-2)\right)^m |f^*_{\bm{k},\bm{\ell}}|^2 < + \infty,
\end{equation}
implies $|f^*_{\bm{k},\bm{\ell}}|^2 \lesssim \|\bm{k}\|^{-2m-d_{\rm eff}(L)}$ (see \autoref{ssec:proof-depth2}).
Hence, plugging everything in \autoref{eq:spectral-app} we find
\begin{equation}
\overline{\epsilon}(n)
\sim n^{-\frac{2m}{2\nu+d_{\text{eff}}(l)} \frac{2\nu+d_{\rm eff}(L)}{d_{\rm eff}(L)}}.
\end{equation}
\section{Numerical experiments}\label{app:numerics}
\subsection{Experimental setup}
Experiments were run on a high-performance computing cluster with nodes having Intel Xeon Gold processors with 20 cores and 192 GB of DDR4 RAM. All codes are written in PyTorch~\cite{paszke2019pytorch}.
The repository containing all codes used to obtain the reported results can be found at \href{https://github.com/pcsl-epfl/convolutional\_neural\_kernels}{https://github.com/pcsl-epfl/convolutional\_neural\_kernels}.
\subsection{Teacher-student learning curves}
In order to obtain the learning curves, we generate $n+n_{\rm test}$ random points uniformly distributed on the product of hyperspheres over the patches. We use $n \in \{128,\; 256,\; 512,\; 1024,\; 2048,\; 4096,\; 8192\}$ and $n_{\rm test}=8192$. For each value of $n$, we sample a Gaussian random field with zero mean and covariance given by the teacher kernel. Then, we compute the kernel regression predictor of the student kernel, and we estimate the generalisation error as the mean squared error of the obtained predictor on the $n_{\rm test}$ unseen examples. The expectation over the teacher randomness is obtained by averaging over 16 independent sets of random input points and realisations of the Gaussian random fields. As teacher and student kernels, we use the analytical forms of the neural tangent kernels of hierarchical convolutional networks, with different combinations of depths and filter sizes.
\paragraph{Depth-two and depth-three architectures.} \autoref{fig:learning_curves_app} reports the learning curves of depth-two and depth-three kernels with binary filters at all layers. Depth-three students defeat the curse of dimensionality when learning depth-two teachers, achieving a performance similar to that of depth-two students matched to the teacher's structure. However, as predicted, these students encounter the curse of dimensionality when learning depth-three teachers.
\paragraph{Ternary filters.} \autoref{fig:ternary_app} reports the learning curves for kernels with 3-dimensional filters and confirms our predictions in the $s_1 \geq 3$ case.
\paragraph{Comparison with the noisy and optimally-regularised case.} Panel (a) of \autoref{fig:noise_app} compares the learning curves obtained in the optimally-regularised and ridgeless cases for noisy and noiseless data, respectively. The first case corresponds to the setting studied in \cite{caponnetto2007optimal}, in which the source-capacity formalism applies. In contrast with the second setting---which is the one used in the teacher-student scenarios and in which the correspondence between kernel methods and neural networks holds---\textit{i)} we add to the labels a Gaussian random noise with standard deviation $\sigma=0.1$, and \textit{ii)} for each $n$, we select the ridge resulting in the best generalisation performance. We observe that the decay obtained in the bound derived from the source-capacity conditions is exactly the one found numerically, i.e. the rate of the bound is tight. As a further check, panel (b) shows that the optimal ridge decays as prescribed.
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/app_2layers_a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/app_2layers_b.pdf}
\end{subfigure}
\caption{Learning curves for deep convolutional NTKs ($\nu=1/2$) in a teacher-student setting. \textbf{a.} Depth-two teachers learned by depth-two (matched) and depth-three (mismatched) students. Neither of these students is cursed by the input dimension. \textbf{b.} Depth-three students learning depth-two and depth-three teachers. These students are cursed only in the second case. The numbers inside brackets are the sequence of filter sizes of the kernels. Solid lines are the results of experiments averaged over 16 realisations with the shaded areas representing the empirical standard deviations. The predicted asymptotic scalings $\epsilon \sim n^{-\beta}$ are reported as dashed lines.}
\label{fig:learning_curves_app}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/app_ternary_2layers.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/app_ternary_3layers.pdf}
\end{subfigure}
\caption{Learning curves for deep convolutional NTKs ($\nu=1/2$) with filters of size 3 in a teacher-student setting. \textbf{a.} Depth-three students learning depth-two and depth-three teachers. These students are cursed only in the second case. \textbf{b.} Depth-three models are cursed by the effective input dimensionality. The numbers inside brackets are the sequence of filter sizes of the kernels. Solid lines are the results of experiments averaged over 16 realisations with the shaded areas representing the empirical standard deviations. The predicted asymptotic scalings $\epsilon \sim n^{-\beta}$ are reported as dashed lines.}
\label{fig:ternary_app}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/app_noise_lc.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/app_noise_regu.pdf}
\end{subfigure}
\caption{Noisy (optimally-regularised) vs noiseless (ridgeless) learning curves for depth-three deep convolutional NTKs ($\nu=1/2$) in a teacher-student setting. \textbf{a.} Comparison between the learning curves in the noisy and noiseless case. Dashed lines represent the rates predicted with source-capacity bounds and replica calculations, respectively. Shaded areas represent the empirical standard deviations. \textbf{b.} Decay of the optimal ridge with the number of training points.}
\label{fig:noise_app}
\end{figure}
\subsection{Illustration of different teacher-student scenarios}
In this subsection, we comment on the results obtained in the different teacher-student scenarios of \autoref{fig:learning_curves_main}, panel (a), and \autoref{fig:learning_curves_app}, panel (a). To ease notation, in the following we always consider the NTK for both teacher and student kernels, i.e. smoothness exponent $\nu_T = \nu_S = 1/2$. However, we point out that when the teacher kernel is a hierarchical RFK ($\nu_T=3/2$), the target function corresponds to the output of an infinitely-wide, deep hierarchical network at initialisation\footnote{See, e.g., \citet{lee2017deep} for the equivalence between infinitely-wide networks and Gaussian random fields with covariance given by the RFK.}. The error rates are obtained from \autoref{eq:t-depth-one-s-depth-two}, after setting the smoothness exponent $m=\nu_T$ (the smoothness exponent of the teacher covariance kernel).
The first case we consider consists of one-hidden-layer convolutional teacher (left) and student (right) kernels.
\begin{figure*}[ht!]
\centering
\begin{minipage}[c]{0.60\textwidth}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{figures/app_dag_0.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{figures/app_dag_4.pdf}
\end{subfigure}
\end{minipage}
\hfill
\begin{minipage}[c]{0.3\textwidth}
\centering
$$\overline{\epsilon}(n) \sim n^{-\frac{1}{s_1-1}}$$
\end{minipage}
\addtocounter{figure}{-1}
\end{figure*}
As highlighted in blue, the output of the teacher is a linear combination (dashed lines indicate the linear output weights) of $s_1$-dimensional functions of the input patches. If the structure of the student is matched to the one of the teacher, the learning problem becomes effectively $(s_1-1)$-dimensional and the error decays as $n^{-1/(s_1-1)}$, instead of $n^{-1/d_{\rm eff}}$, with $d_{\rm eff}$ denoting the total input dimension minus the number of spherical constraints (one per patch). Notice that the role of the student's structure, i.e. the algorithm, is as crucial as the role of the teacher, i.e. the task. Indeed, using a fully-connected student with no prior on the task's locality would result in an error decay cursed by dimensionality. However, in contrast to fully-connected students, shallow convolutional students are only able to learn tasks with the same structure. In particular, any task entailing non-linear interactions between patches---which are arguably crucial in order to learn image data---belongs to their null space.
\pagebreak
As we illustrated in the main text, to overcome this strong constraint on the hypothesis space, one has to consider deep convolutional architectures. In particular, consider the same shallow teacher of the previous paragraph (left) learnt by a depth-four convolutional student (right).
\begin{figure*}[ht!]
\centering
\begin{minipage}[c]{0.60\textwidth}
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{figures/app_dag_0.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\includegraphics[width=\textwidth]{figures/app_dag_3.pdf}
\end{subfigure}
\end{minipage}
\hfill
\begin{minipage}[c]{0.3\textwidth}
\centering
$$\overline{\epsilon}(n) \sim n^{-\frac{1}{s_1} \frac{1+d_{\rm eff}(3)}{d_{\rm eff}(3)}}$$
\end{minipage}
\addtocounter{figure}{-1}
\end{figure*}
Remarkably, this student is able to learn the teacher without being cursed by input dimensionality. Indeed, as the number of patches diverges, the error decay asymptotes to $n^{-1/s_1}$. This rate is slightly worse than the one obtained by the student matched with the teacher, which is proven to be the Bayes-optimal case, but far from being cursed. Intuitively, this fast rate is obtained because the student eigenfunctions of the first sector, i.e. constant outside a single patch, correspond to large eigenvalues and bias the learning dynamics towards $s_1$-local functions. Yet, this student is also able to represent functions which are considerably more complex.
Now consider a depth-three teacher (left) learned by a depth-four student (right).
\begin{figure*}[ht!]
\centering
\begin{minipage}[c]{0.60\textwidth}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/app_dag_1.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/app_dag_3.pdf}
\end{subfigure}
\end{minipage}
\hfill
\begin{minipage}[c]{0.3\textwidth}
\centering
$$\overline{\epsilon}(n) \sim n^{-\frac{1}{1+d_{\rm eff}(2)} \frac{1+d_{\rm eff}(3)}{d_{\rm eff}(3)}}$$
\end{minipage}
\addtocounter{figure}{-1}
\end{figure*}
As highlighted in orange, the output of the teacher is a linear combination of a composition of non-linear functions acting on patches and coupling them. In this setting, the error decay is controlled by the effective dimension of the second layer. In fact, when the number of patches diverges, the error decay asymptotes to $n^{-1/d_{\rm eff}(2)}$. In general, this behaviour is a result of what we called `adaptivity to the spatial structure' of the target.
Finally, consider both teacher and student with the complete hierarchy, i.e. the receptive fields of the neurons in the penultimate layers coincide with the full input.
\begin{figure*}[ht!]
\centering
\begin{minipage}[c]{0.60\textwidth}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/app_dag_2.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figures/app_dag_3.pdf}
\end{subfigure}
\end{minipage}
\hfill
\begin{minipage}[c]{0.3\textwidth}
\centering
$$\overline{\epsilon}(n) \sim n^{-\frac{1}{d_{\rm eff}(3)}}$$
\end{minipage}
\addtocounter{figure}{-1}
\end{figure*}
In this case, we show that the error decays as $n^{-1/d_{\rm eff}(3)}$, i.e. the rate is cursed by the input dimension. The physical meaning of this result is that the hierarchical structure we are considering is still too complex and cannot be learnt efficiently. In other words, these hierarchical convolutional networks are excellent students, since they can adapt to the spatial structure of the task, but bad teachers, since they generate global functions which are too complex to be learnt efficiently.
\subsection{CIFAR-2 learning curves}
\autoref{fig:real_data} shows the learning curves of the
neural tangent kernels of different architectures applied to pairs of classes of the CIFAR-10 dataset. In particular, the task is built by selecting two CIFAR-10 classes, e.g. plane and car, and assigning label $+1$ to the elements belonging to one class and label $-1$ to the remaining ones. Learning is again achieved by minimising the empirical mean squared error using a `student' kernel. We find that the kernels with the worst performance are the ones corresponding to shallow fully-connected and convolutional architectures. Instead, for all the pairs of classes considered here, deep hierarchical convolutional kernels achieve the best performance.
\begin{figure}
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/app_cifar_plane_car.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/app_cifar_bird_cat.pdf}
\end{subfigure}
\caption{Learning curves of the neural tangent kernels of fully-connected (F-NTK) and convolutional (C-NTK) networks with various depths learning to classify two CIFAR-10 classes in a regression setting. Deep hierarchical convolutional kernels achieve the best performance. Shaded areas represent the empirical standard deviations obtained by averaging over different training sets. \textbf{a.} Plane vs car. \textbf{b.} Cat vs bird.}
\label{fig:real_data}
\end{figure}
\section{Introduction}
The study of perturbed Hammerstein integral equations often arises in the study of real world phenomena. For example the equation
\begin{equation*}
u(t)= t h(u(1)) + \int_0^1 k(t,s)f(s,u(s))\,ds,
\end{equation*}
occurs when dealing with the solvability of the boundary value problem~(BVP)
\begin{equation}\label{bvp-intro}
u''(t)+f(t,u(t))=0,\ u(0)=0,\ u'(1)=h(u(\beta)).
\end{equation}
The BVP~\eqref{bvp-intro} can be used as a model for the steady-states of heated bar of length~$1$, where the left end is kept at ambient temperature and a controller in the right end adds or removes heat according to the temperature registered by a sensor placed in a point $\beta$ of the bar. The controller placed in the right end may act in a linear or in a nonlinear manner, depending on the nature of the function $h$. There exists now a (relatively) wide literature on heat-flow problems of this kind, we refer the reader to the papers~\cite{fama, df-gi-jp-prse, hGlkDaC, gijwnodea, gijwems,
nipime, jwpomona, jwwcna04, webb-therm} for the cases of linear response and to~\cite{gi-caa, gi-tmna, kamkee, kamkee2, kapala, palamides} for the nonlinear cases.
Note that the idea of using perturbed Hammerstein integral equations in order to deal with the existence of solutions of BVPs with nonlinear
BCs has been used with success in a number of papers, see the manuscripts~\cite{amp, Cabada1, ac-gi-at-tmna, genupa, dfdorjp, Goodrich3, Goodrich4, Goodrich5, Goodrich6, Goodrich7, Goodrich8, Goodrich9, gi-caa, kejde, paola, ya1,ya2} and references therein. In particular, in the recent paper~\cite{Goodrich9}, by means of the classical Krasnosel'ski\u\i{}-Guo fixed point theorem of cone compression/expansion, Goodrich studied the existence of positive solutions of the equation
\begin{equation}\label{Chris}
u(t)={\gamma_1}(t)h_{1}(\alpha_{1}[u])+ h_{2}(\alpha_{2}[u]) +\lambda \int_0^1 k(t,s)f(s,u(s))\,ds,
\end{equation}
where $\lambda$ is a parameter and $\alpha_{1}, \alpha_{2}$ are linear functionals on the space $C[0,1]$ realized as Stieltjes integrals with signed measures, namely
\begin{equation}\label{signed}
\alpha_{i}[u]:=\int_{0}^{1}u(s)\,dA_i(s),
\end{equation}
with $A_i$ a function of bounded variation. The results of~\cite{Goodrich9} complement the earlier ones by the author~\cite{gi-caa}, where only positive measures were employed.
The functional formulation~\eqref{signed} has proven to be particularly useful in order to handle multi-point and integral BCs. For an introduction to nonlocal BCs, we refer the reader to the reviews~\cite{Cabada1, Conti, rma, sotiris, Stik, Whyburn} and the papers~\cite{kttmna, ktejde, Nieto, Picone, jw-gi-jlms}.
On the other hand, in a recent paper~\cite{webb-ejqtde}, Webb gave, using fixed point index theory, a general set-up for the existence of positive solutions of second order BVPs where linear BCs of the type $\alpha[u']=\int_{0}^{1}u'(s)\,dA(s)$ occur, a particular example being the BVP
\begin{equation*}
u''(t)+f(t,u(t))=0,\ u(0)=0,\ u'(1)=\alpha[u'].
\end{equation*}
Also by means of fixed point index theory, Zang and co-authors~\cite{zang} discussed the existence of positive, increasing solutions of the BVP
\begin{equation*}
u''(t)+f(t,u(t),u'(t))=0,\ u(0)=\alpha[u],\ u'(1)=0,
\end{equation*}
where $\alpha[u]$ is a linear, bounded functional on the space $C[0,1]$.
Nonlinear functional BCs were investigated by Mawhin et al. in~\cite{Mawhin}, where the authors prove, by means of degree theory, the existence of a solution of a system of BVPs which, in the scalar case, reduces to
\begin{equation*}
u''(t)+f(t,u(t),u'(t))=0,\ u(0)=a,\ u'(1)=N[u'],
\end{equation*}
where $a$ is a fixed number and $N$ is a compact functional defined on the space $C[0,1]$.
Here we study an integral equation related to~\eqref{Chris}, where we allow a dependence in the derivative of the nonlinearity $f$ and we allow the (not necessarily linear) functionals to act on the space $C^1[0,1]$, namely
\begin{equation}\label{perhamm-intro}
u(t)=\eta_1{\gamma_1}(t)h_1[u]+ \eta_2{\gamma_2}(t)h_2[u] +\lambda \int_0^1 k(t,s)f(s,u(s), u'(s))\,ds,
\end{equation}
where $h_1, h_2$ are suitable compact functionals on the space $C^1[0,1]$ and $\eta_1, \eta_2, \lambda$ are non-negative parameters. Multi-parameter problems of this kind have been studied recently by the author~\cite{gi-tmna} in the context of systems of elliptic equations (without gradient dependence) subject to functional BCs. Here, in the spirit of the paper~\cite{gi-tmna}, we provide existence and non-existence results for the equation~\eqref{perhamm-intro} that take into account the parameters $\eta_1, \eta_2, \lambda$. One advantage of considering the functionals in the space $C^1[0,1]$ is that it allows us to consider an interplay between function and derivative dependence in the BCs; this is illustrated in the examples of Section 3. Our methodology involves the classical fixed point index for the existence result and an elementary argument for the non-existence
result.
As an application we discuss the solvability of the BVP
\begin{equation*}
u''(t)+\lambda f(t,u(t),u'(t))=0,\ u(0)=\eta_1h_1[u],\ u'(1)=\eta_2h_2[u],
\end{equation*}
and illustrate, in two examples,
how our methodology can be used in the presence of nonlinear functionals that also involve nonlocal conditions.
\section{Main results}
In this Section we study the existence and non-existence of solutions of the perturbed Hammerstein equation of the type
\begin{equation}\label{perhamm}
u(t)=\eta_1{\gamma_1}(t)h_1[u]+ \eta_2{\gamma_2}(t)h_2[u] +\lambda \int_0^1 k(t,s)f(s,u(s), u'(s))\,ds:=Tu(t).
\end{equation}
Throughout the paper we make the following assumptions on the terms that occur in~\eqref{perhamm}.
\begin{itemize}
\item[$(C_1)$] $k:[0,1] \times[0,1]\rightarrow [0,+\infty)$ is measurable and continuous in $t$ for almost every (a.e.) ~$s$,
that is, for every $\tau\in [0,1] $ we have
\begin{equation*}
\lim_{t \to \tau} |k(t,s)-k(\tau,s)|=0 \ \text{for a.e.}\ s \in [0,1] ,
\end{equation*}{}
furthermore there exists a function $\Phi \in L^{1}(0,1)$ such that
$0\leq k(t,s)\leq \Phi(s)$ for $t \in [0,1]$ and a.e.~$s\in [0,1]$.
\item[$(C_2)$] The partial derivative $\partial_t k(t,s)$ is non-negative and continuous in $t$ for a.e.~$s$ and there exists $\Psi \in L^{1}(0,1)$ such that
$0\leq \partial_t k(t,s) \leq \Psi(s)$ for $t \in [0,1]$ and a.e.~$s\in [0,1]$.
\item[$(C_3)$] $f:[0,1]\times [0,+\infty) \times [0,+\infty) \to [0,+\infty)$ is continuous.
\item[$(C_4)$] $\gamma_1, \gamma_2 \in C^1 [0,1] $ and $\gamma_1 (t), \gamma_2 (t), \gamma'_1 (t), \gamma'_2 (t)\geq 0\ \text{for every}\ t\in [0,1]$.
\item[$(C_5)$] $\eta_1, \eta_2, \lambda \in [0,+\infty)$.
\end{itemize}
Due to the hypotheses above, we use the space $C^1[0,1]$ endowed with the norm $$\|u\|:=\max\{\|u\|_\infty, \|u'\|_\infty\},$$ where $\|u\|_\infty:=\max_{t\in[0,1]}|u(t)|$.
We recall that a cone $K$ in a real Banach space $X$ is a closed convex set such that $\lambda x\in K$ for every $x \in K$ and for all $\lambda\geq 0$ and satisfying $K\cap (-K)=\{0\}$. Here, in order to discuss the solvability of~\eqref{perhamm}, we work in the cone of non-negative, non-decreasing functions
$$
P:=\{u\in C^1[0,1]:\ u(t),u'(t)\ge 0\ \text{for every}\ t\in [0,1] \},
$$
and we require the nonlinear functionals $h_1,h_2$ to act positively on the cone $P$ and to be compact, that is:
\begin{itemize}
\item[$(C_6)$] $h_1,h_2: P \to [0,+\infty)$ are continuous and map bounded sets into bounded sets.
\end{itemize}
We make use of the following basic properties of the fixed point index, we refer the reader to~\cite{amann, guolak} for more details.
\begin{pro}\cite{amann, guolak} Let $K$ be a cone in a real Banach space $X$ and let
$D$ be an open bounded set of $X$ with $0 \in D_{K}$ and
$\overline{D}_{K}\ne K$, where $D_{K}=D\cap K$.
Assume that $\tilde{T}:\overline{D}_{K}\to K$ is a compact map such that
$x\neq \tilde{T}x$ for $x\in \partial D_{K}$. Then the fixed point index
$i_{K}(\tilde{T}, D_{K})$ has the following properties:
\begin{itemize}
\item[$(1)$] If there exists $e\in K\setminus \{0\}$
such that $x\neq \tilde{T}x+\lambda e$ for all $x\in \partial D_K$ and all
$\lambda>0$, then $i_{K}(\tilde{T}, D_{K})=0$.
\item[$(2)$] If $\tilde{T}x \neq \lambda x$ for all $x\in
\partial D_K$ and all $\lambda > 1$, then $i_{K}(\tilde{T}, D_{K})=1$.
\item[(3)] Let $D^{1}$ be open in $X$ such that
$\overline{D^{1}}_{K}\subset D_K$. If $i_{K}(\tilde{T}, D_{K})=1$ and $i_{K}(\tilde{T},
D_{K}^{1})=0$, then $\tilde{T}$ has a fixed point in $D_{K}\setminus
\overline{D_{K}^{1}}$. The same holds if
$i_{K}(\tilde{T}, D_{K})=0$ and $i_{K}(\tilde{T}, D_{K}^{1})=1$.
\end{itemize}
\end{pro}
We define the set
$$
P_{\rho}:=\{u\in P: \|u\|<\rho\}
$$
and the quantities
$$
\overline{f}_{\rho}:=\max_{[0,1]\times [0,\rho]^2}f(t,u,v),\quad \underline{f}_{\rho}:=\min_{[0,1]\times [0,\rho]^2}f(t,u,v),\quad H_{i,\rho}:=\sup_{u\in \partial P_{\rho}} h_{i}[u],
$$
$$
K:=\int_0^1 k(1,s)\,ds,\quad K^*:=\sup_{t\in[0,1]}\int_0^1 \partial_t k(t,s)\,ds.
$$
With these ingredients we can state the following existence and localization result.
\begin{thm}\label{thmsol}
Assume there exist $r,R\in (0,+\infty)$, with $r<R$ such that the following two inequalities are satisfied:
\begin{equation}\label{idx1}
\max\Bigl\{\lambda \overline{f}_R K+\sum_{i=1}^{2}\eta_i{\gamma_i}(1)H_{i,R},\ \lambda \overline{f}_R K^{*}+\sum_{i=1}^{2} \eta_i\|{\gamma'_i}\|_{\infty} H_{i,R} \Bigr \}\leq R,
\end{equation}
\begin{equation}\label{idx0}
\lambda\underline{f}_r \min\{ K, K^{*} \}\geq r.
\end{equation}
Then the equation~\eqref{perhamm} has a solution $u\in P$ such that $$r\leq \|u\| \leq R.$$
\end{thm}
\begin{proof}
It is routine to prove that, under the assumptions $(C_{1})-(C_{6})$, the operator $T$ maps $P$ into $P$ and is compact.
If $T$ has a fixed point either on $\partial {P_r}$ or $\partial {P_R}$ we are done.
Assume now that $T$ is fixed point free on $\partial {P_r}\cup\partial {P_R}$, we are going to prove that $T$ has a fixed point in
$ P_R\setminus \overline {P_r}$.
We firstly prove that
$
\sigma u\neq Tu\ \text{for every}\ u\in \partial P_{R}\
\text{and every}\ \sigma >1.
$
If this does not hold, then there exist $u\in \partial P_{R}$ and $\sigma >1$ such that $\sigma u= Tu$.
Note that if $\| u\| = R$ either $\| u\|_{\infty} = R$ or $\| u'\|_{\infty} = R$.
Assume that $\| u\|_{\infty} = R$. In this case we obtain, for $t\in [0,1]$,
\begin{multline}\label{ineq1}
\sigma u(t)=\eta_1{\gamma_1}(t)h_1[u]+ \eta_2{\gamma_2}(t)h_2[u] +\lambda \int_0^1 k(t,s)f(s,u(s), u'(s))\,ds \\
\leq \eta_1{\gamma_1}(1)H_{1,R}+ \eta_2{\gamma_2}(1)H_{2,R}+\lambda \overline{f}_{R} \int_0^1 k(1,s)\,ds\leq R.
\end{multline}
Taking the supremum for $t\in [0,1]$ in~\eqref{ineq1} gives $\sigma \leq 1$, a contradiction.
Assume that $\| u'\|_{\infty} = R$. In this case we obtain, for $t\in [0,1]$,
\begin{multline}\label{ineq2}
\sigma u'(t)=\eta_1{\gamma'_1}(t)h_1[u]+ \eta_2{\gamma'_2}(t)h_2[u] +\lambda \int_0^1 \partial_t k(t,s)f(s,u(s), u'(s))\,ds \\
\leq \eta_1\|{\gamma'_1}\|_{\infty} H_{1,R}+ \eta_2\|{\gamma'_2}\|_{\infty} H_{2,R}+\lambda \overline{f}_{R} \int_0^1 \partial_t k(t,s)\,ds\leq R.
\end{multline}
Taking the supremum for $t\in [0,1]$ in~\eqref{ineq2} yields $\sigma \leq 1$, a contradiction.
Therefore we have $i_{P}(T, P_R)=1.$
We now consider the function $g(t):= t$ in $[0,1]$, note that $g\in P$. We show that
\begin{equation*}
u\neq Tu+\sigma g\ \text{for every}\ u\in \partial P_{r}\ \text{and every}\ \sigma >0.
\end{equation*}
If not, there exists $u\in \partial P_{r}$ and $\sigma >0$ such that
$
u= Tu+\sigma g .
$
Assume that $\| u\|_{\infty} = r$. In this case we obtain, for $t\in [0,1]$,
\begin{multline}\label{ineq3}
u(t)= \eta_1{\gamma_1}(t)h_1[u]+ \eta_2{\gamma_2}(t)h_2[u] \\
+ \lambda \int_0^1 k(t,s)f(s,u(s), u'(s))\,ds +\sigma t
\geq \lambda \underline{f}_{r} \int_0^1 k(t,s) \,ds+\sigma t.
\end{multline}
Taking the supremum for $t\in [0,1]$ in~\eqref{ineq3} gives $r\geq r+ \sigma$, a contradiction.
Assume that $\| u'\|_{\infty} = r$. In this case we obtain, for $t\in [0,1]$,
\begin{multline}\label{ineq4}
u'(t)=\eta_1{\gamma'_1}(t)h_1[u]+ \eta_2{\gamma'_2}(t)h_2[u] \\
+\lambda \int_0^1 \partial_t k(t,s)f(s,u(s), u'(s))\,ds + \sigma \geq \lambda \underline{f}_{r} \int_0^1 \partial_t k(t,s)\,ds +\sigma.
\end{multline}
Taking the supremum for $t\in [0,1]$ in~\eqref{ineq4} yields $r\geq r+ \sigma$, a contradiction.
Thus we obtain $i_{P}(T, P_{r})=0.$
Therefore we have
$$i_{P}(T, P_R \setminus \overline{P_r})=i_{P}(T, P_R )-i_{P}(T, P_r )=1,$$
which proves the result.
\end{proof}
We now prove, by an elementary argument, a non-existence result.
\begin{thm}\label{nonexthm}
Assume that there exist $\tau, \xi_1, \xi_2\in (0,+\infty)$ such that
\begin{equation}
0\leq f (t,u,v)\leq \tau u,\ \text{for every}\ (t,u,v)\in [0,1]\times[0,\infty)^2,
\end{equation}
\begin{equation}
h_i[u]\leq \xi_i \|u\|_{\infty}, \ \text{for every}\ u \in P\ \text{and}\ i=1,2,
\end{equation}
\begin{equation}\label{nonexineq}
\lambda \tau K+\sum_{i=1}^{2}\eta_i \xi_i{\gamma_i}(1)<1.
\end{equation}
Then the only possible solution of the equation~\eqref{perhamm} in $P$ is the zero solution.
\end{thm}
\begin{proof}
Assume that there exists $u\in P\setminus \{0\}$ such that $u$ is a fixed point of $T$. Then $\|u\|_{\infty}=\rho$ for some $\rho>0$, and we have
\begin{multline}\label{ineq5}
u(t)=\eta_1{\gamma_1}(t)h_1[u]+ \eta_2{\gamma_2}(t)h_2[u] +\lambda \int_0^1 k(t,s)f(s,u(s), u'(s))\,ds \\
\leq \eta_1{\gamma_1}(1)h_1[u]+ \eta_2{\gamma_2}(1)h_2[u]+\lambda \tau \int_0^1 k(1,s) u(s)\,ds\\
\leq \eta_1{\gamma_1}(1)\xi_1\|u\|_{\infty}+ \eta_2{\gamma_2}(1)\xi_2\|u\|_{\infty}+\lambda \tau \|u\|_{\infty}\int_0^1 k(1,s) \,ds.
\end{multline}
Taking the supremum for $t\in [0,1]$ in~\eqref{ineq5} and using~\eqref{nonexineq} gives $\rho\leq \bigl(\lambda \tau K+\sum_{i=1}^{2}\eta_i \xi_i{\gamma_i}(1)\bigr)\rho<\rho$, a contradiction.
\end{proof}
\section{Two examples}
We now illustrate the applicability of the results of Section 2. In particular we focus on the BVP
\begin{equation}\label{bvpex}
u''(t)+\lambda f(t,u(t),u'(t))=0,\ u(0)=\eta_1h_1[u],\ u'(1)=\eta_2h_2[u].
\end{equation}
It is routine to show (for some details, see for example~\cite{gi-mc})
that the solutions of~\eqref{bvpex} can be written in the form
\begin{equation*}
u(t)=\eta_1{\gamma_1}(t)h_1[u]+ \eta_2{\gamma_2}(t)h_2[u] +\lambda \int_0^1 k(t,s)f(s,u(s), u'(s))\,ds,
\end{equation*}
where the kernel $k$ is the Green's function associated with the right focal BCs
$$u(0)=u'(1)=0,$$ namely
\begin{equation*}
k(t,s)=\begin{cases}
s & \text{if}\;s \leq t,\\
t &\text{if}\; s>t,
\end{cases}
\end{equation*}
and ${\gamma_1}(t)=1$ and ${\gamma_2}(t)=t$ are solutions of the BVPs
$$
{\gamma_1}''(t)=0,\ {\gamma_1}(0)=1,\ {\gamma_1}'(1)=0.
$$
$$
{\gamma_2}''(t)=0,\ {\gamma_2}(0)=0,\ {\gamma_2}'(1)=1.
$$
In this case we have
\begin{equation*}
{\gamma_1}'(t)=0,\quad {\gamma_2}'(t)=1\quad \text{and}\quad
\partial_t k(t,s)=\begin{cases}
0 & \text{if}\;s \leq t,\\
1 &\text{if}\; s>t.
\end{cases}
\end{equation*}
Therefore the assumptions $(C_1), (C_2)$ and $(C_4)$ are satisfied with $\Phi(s)=s$ and $\Psi(s)=1$.
By direct calculation we have $K=\frac{1}{2}$ and $K^*=1$.
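The direct calculation can be spelled out as follows (assuming, as in Section 2, that $K=\sup_{t\in[0,1]}\int_0^1 k(t,s)\,ds$ and $K^{*}=\sup_{t\in[0,1]}\int_0^1 \partial_t k(t,s)\,ds$):
\begin{equation*}
\int_0^1 k(t,s)\,ds=\int_0^t s\,ds+\int_t^1 t\,ds=t-\frac{t^2}{2},\qquad K=\sup_{t\in[0,1]}\Bigl(t-\frac{t^2}{2}\Bigr)=\frac{1}{2},
\end{equation*}
\begin{equation*}
\int_0^1 \partial_t k(t,s)\,ds=\int_t^1 1\,ds=1-t,\qquad K^{*}=\sup_{t\in[0,1]}(1-t)=1.
\end{equation*}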
\begin{ex}
Let us consider the BVP
\begin{equation}\label{bvpex1}
u''(t)+{\lambda e^{t(u(t)+u'(t))}}=0,\ u(0)=\eta_1h_1[u],\ u'(1)=\eta_2h_2[u],
\end{equation}
where
$$
h_{1}[u]=u(1/4)+(u'(3/4))^2,\ h_{2}[u]=\int_{0}^1\bigl(u^3(s)+u'(s)\bigr)\,ds.
$$
Let us fix $r=1/20$ and $R=1$; then we have
$$\overline{f}_{1}=e^2,\quad \underline{f}_{1/20}=1,\quad H_{1,1},\,H_{2,1}\leq 2.$$
Therefore the condition~\eqref{idx1} is satisfied if
\begin{equation}\label{idx1ex}
\max\Bigl\{\lambda \frac{e^2}{2}+2\eta_1+2\eta_2,\ \lambda e^2 +2\eta_2 \Bigr \}\leq 1,
\end{equation}
and the condition~\eqref{idx0} reads
\begin{equation}\label{idx0ex}
\lambda\geq \frac{1}{10}.
\end{equation}
For the range of parameters that satisfy the inequalities~\eqref{idx1ex} and~\eqref{idx0ex}, Theorem~\ref{thmsol} provides the existence of at least one nondecreasing, nonnegative solution $u$ of the BVP~\eqref{bvpex1} with $1/20\leq \|u\| \leq 1$; this occurs, for example, for $\lambda=1/10$, $\eta_1=1/11$, $\eta_2=1/12$.
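As a quick numerical sanity check (a minimal sketch; the values $K=1/2$, $K^{*}=1$ and the bounds $H_{1,1},H_{2,1}\leq 2$ are taken from the text above), the chosen parameters can be tested against conditions~\eqref{idx1ex} and~\eqref{idx0ex}:

```python
import math

# Example parameters: lambda = 1/10, eta_1 = 1/11, eta_2 = 1/12
lam, eta1, eta2 = 1 / 10, 1 / 11, 1 / 12

# Condition (idx1ex): max{lam*e^2/2 + 2*eta1 + 2*eta2, lam*e^2 + 2*eta2} <= 1
lhs = max(lam * math.e**2 / 2 + 2 * eta1 + 2 * eta2,
          lam * math.e**2 + 2 * eta2)

print(lhs <= 1)        # condition (idx1ex) holds (lhs is about 0.906)
print(lam >= 1 / 10)   # condition (idx0ex) holds
```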
\end{ex}
\begin{ex}
Let us now consider the BVP
\begin{equation}\label{bvpex2}
u''(t)+{\lambda u(t)(2-t\sin (u(t)u'(t)))}=0,\ u(0)=\eta_1h_1[u],\ u'(1)=\eta_2h_2[u],
\end{equation}
where
$$
h_{1}[u]=u(1/4)\cos^2 (u'(3/4)),\ h_{2}[u]=u(3/4)\sin^2 (u'(1/4)).
$$
In this case we may take $\tau=3, \xi_1=\xi_2=1$. Then the condition~\eqref{nonexineq} required by Theorem~\ref{nonexthm} reads
\begin{equation}\label{bvpex2rg}
\frac{3}{2}\lambda+\eta_1+\eta_2<1.
\end{equation}
For the range of parameters that satisfy the inequality~\eqref{bvpex2rg}, Theorem~\ref{nonexthm} guarantees that the only possible solution in $P$ of the BVP~\eqref{bvpex2} is the trivial one; this occurs, for example, for $\lambda=1/3$, $\eta_1=1/4$, $\eta_2=1/5$.
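Again as a small sanity check (a sketch only, using $\tau=3$ and $\xi_1=\xi_2=1$ as chosen above), the example parameters can be tested against condition~\eqref{bvpex2rg}:

```python
# Example parameters: lambda = 1/3, eta_1 = 1/4, eta_2 = 1/5
lam, eta1, eta2 = 1 / 3, 1 / 4, 1 / 5

# Condition (bvpex2rg): (3/2)*lambda + eta1 + eta2 < 1
lhs = 3 / 2 * lam + eta1 + eta2
print(lhs < 1)  # True: lhs is about 0.95, so the condition is satisfied
```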
\end{ex}
\section*{Acknowledgement}
G. Infante was partially supported by G.N.A.M.P.A. - INdAM (Italy).
\chapter{Acknowledgments}
I want to thank my advisor, Professor Jonatan Gomez, for his guidance and academic support during this research. I value his insights, remarks, motivation, and expertise that notably contributed to this thesis.
I want to thank Professor Luis Eugenio Andrade, Biologist, for his support at the beginning of this work, his help with Epigenetics understanding, and the comments and suggestions during academic meetings.
I express my gratitude to Aimer Alonso Gutierrez, a Biologist who is passionate about Epigenetics, for all the comments and suggestions. His knowledge reinforced and refined different aspects of this thesis.
My recognition goes out to the Artificial Life (ALife) Research Group for all the comments and suggestions during formal and informal meetings.
Finally, I would like to acknowledge my family for their support and constant encouragement to finish this research.
\chapter{Examples of Individuals with Tags}\label{append}
\begin{table}[H]
\centering
\caption{Individual representation for Binary functions, $D=20$.}
\label{apptable1}
\begin{tabular}{lllllllllllllllllllll}
& \cellcolor[HTML]{34CDF9}0 & & & & 1 & 1 & & 0 & & \cellcolor[HTML]{34CDF9}0 & & & 1 & & 0 & & & 0 & 1 & \\
& \cellcolor[HTML]{34CDF9}1 & & & & 1 & 0 & & 0 & & \cellcolor[HTML]{34CDF9}1 & & & 0 & & 0 & & & 0 & 1 & \\
& \cellcolor[HTML]{34CDF9}0 & & & & 0 & 1 & & 0 & & \cellcolor[HTML]{34CDF9}0 & & & 1 & & 1 & & & 0 & 1 & \\
& \cellcolor[HTML]{FFCC67}0 & & & & 1 & 0 & & 1 & & \cellcolor[HTML]{67FD9A}1 & & & 0 & & 0 & & & 0 & 0 & \\
& \cellcolor[HTML]{FFCC67}1 & & & & 0 & 0 & & 1 & & \cellcolor[HTML]{67FD9A}1 & & & 0 & & 0 & & & 0 & 0 & \\
& \cellcolor[HTML]{FFCC67}0 & & & & 1 & 1 & & 1 & & \cellcolor[HTML]{67FD9A}0 & & & 1 & & 1 & & & 1 & 1 & \\
& \cellcolor[HTML]{FFCC67}0 & & & & 0 & 1 & & 0 & & \cellcolor[HTML]{67FD9A}0 & & & 1 & & 0 & & & 0 & 0 & \\
\multirow{-8}{*}{Epigenotype} & \cellcolor[HTML]{FFCC67}0 & & & & 1 & 1 & & 1 & & \cellcolor[HTML]{67FD9A}0 & & & 0 & & 1 & & & 0 & 0 & \\
Genotype & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 & 0 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 \\
BitString & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 \\
Phenotype & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}0
\end{tabular}
\end{table}
The individual illustrated in Appendix \ref{apptable1} describes a solution for the Deceptive Order Four Trap function with a dimension of $20$. The individual depicted in Appendix \ref{apptable2} describes a solution for the Rastrigin function with a problem dimension of $2$. The individual representation shows, in the first row, the tags attached to specific alleles; only colored tags are read during the tag decoding process ({\em epiGrowingFunction}). The second row presents the genotype code; the third row exhibits the bit string generated by the epigenetic growth function. Finally, the fourth row shows the phenotype representation of the individual. For real-valued functions, {\em 32}-bit binary strings encode real values.
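The real-valued decoding step can be sketched as follows; the linear fixed-point mapping and the Rastrigin search domain $[-5.12, 5.12]$ are assumptions for illustration, since the exact encoding scheme is not spelled out here:

```python
def decode_real(bits: str, lo: float, hi: float) -> float:
    """Map a binary string to a real value in [lo, hi].

    Hypothetical decoding: the string is read as an unsigned integer
    and rescaled linearly onto the target interval.
    """
    n = int(bits, 2)
    return lo + (hi - lo) * n / (2 ** len(bits) - 1)

# A 32-bit all-ones string maps to the upper bound of the assumed domain.
print(decode_real("1" * 32, -5.12, 5.12))  # close to 5.12
```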
\begin{landscape}
\begin{table}
\centering
\caption{Individual representation for Real functions, $D=2$.}
\label{apptable2}
\begin{tabular}{lllllllllllllllllllllllllllllllll}
& & & \cellcolor[HTML]{34CDF9}0 & 1 & & & & & 0 & 1 & 0 & 0 & & 1 & 1 & 1 & 0 & & & & & 0 & & 1 & & & & & \cellcolor[HTML]{34CDF9}0 & & 1 & \\
& & & \cellcolor[HTML]{34CDF9}1 & 0 & & & & & 1 & 1 & 0 & 0 & & 0 & 0 & 1 & 1 & & & & & 0 & & 1 & & & & & \cellcolor[HTML]{34CDF9}0 & & 0 & \\
& & & \cellcolor[HTML]{34CDF9}0 & 1 & & & & & 1 & 1 & 1 & 1 & & 1 & 1 & 0 & 1 & & & & & 0 & & 0 & & & & & \cellcolor[HTML]{34CDF9}1 & & 1 & \\
& & & \cellcolor[HTML]{FFC702}1 & 0 & & & & & 0 & 1 & 1 & 1 & & 1 & 1 & 0 & 1 & & & & & 0 & & 0 & & & & & \cellcolor[HTML]{FD6864}1 & & 0 & \\
& & & \cellcolor[HTML]{FFC702}0 & 0 & & & & & 0 & 1 & 0 & 0 & & 0 & 0 & 0 & 1 & & & & & 0 & & 0 & & & & & \cellcolor[HTML]{FD6864}1 & & 0 & \\
& & & \cellcolor[HTML]{FFC702}1 & 1 & & & & & 1 & 1 & 1 & 1 & & 1 & 1 & 1 & 1 & & & & & 0 & & 0 & & & & & \cellcolor[HTML]{FD6864}0 & & 1 & \\
& & & \cellcolor[HTML]{FFC702}1 & 1 & & & & & 0 & 0 & 1 & 1 & & 1 & 0 & 1 & 1 & & & & & 1 & & 0 & & & & & \cellcolor[HTML]{FD6864}0 & & 0 & \\
\multirow{-8}{*}{Epigenotype} & & & \cellcolor[HTML]{FFC702}0 & 1 & & & & & 1 & 0 & 0 & 0 & & 1 & 0 & 1 & 0 & & & & & 1 & & 1 & & & & & \cellcolor[HTML]{FD6864}1 & & 1 & \\
Genotype & 1 & 0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}0 & 1 & 1 & 1 & 1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 \\
BitString & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 \\
Phenotype & \multicolumn{32}{c}{\cellcolor[HTML]{FFFC9E}2.552499999404536} \\
& & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & \\
Epigenotype & 1 & 0 & & 1 & & & & & & & 0 & & & & & 0 & 0 & 0 & & & & \cellcolor[HTML]{34CDF9}0 & & 1 & & & 1 & & & & & 0 \\
& 1 & 1 & & 1 & & & & & & & 0 & & & & & 0 & 0 & 1 & & & & \cellcolor[HTML]{34CDF9}0 & & 0 & & & 0 & & & & & 0 \\
& 1 & 1 & & 1 & & & & & & & 0 & & & & & 1 & 0 & 1 & & & & \cellcolor[HTML]{34CDF9}1 & & 0 & & & 0 & & & & & 0 \\
& 1 & 0 & & 0 & & & & & & & 1 & & & & & 0 & 1 & 0 & & & & \cellcolor[HTML]{FD6864}1 & & 1 & & & 0 & & & & & 0 \\
& 0 & 1 & & 0 & & & & & & & 1 & & & & & 0 & 0 & 1 & & & & \cellcolor[HTML]{FD6864}1 & & 0 & & & 1 & & & & & 1 \\
& 1 & 0 & & 0 & & & & & & & 0 & & & & & 1 & 0 & 1 & & & & \cellcolor[HTML]{FD6864}0 & & 0 & & & 0 & & & & & 0 \\
& 1 & 0 & & 1 & & & & & & & 1 & & & & & 1 & 0 & 1 & & & & \cellcolor[HTML]{FD6864}0 & & 1 & & & 1 & & & & & 1 \\
& 0 & 0 & & 1 & & & & & & & 0 & & & & & 0 & 1 & 1 & & & & \cellcolor[HTML]{FD6864}1 & & 1 & & & 1 & & & & & 0 \\
Genotype & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}0 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 & \cellcolor[HTML]{C0C0C0}1 \\
BitString & \cellcolor[HTML]{EFEFEF}0 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 & \cellcolor[HTML]{EFEFEF}1 \\
Phenotype & \multicolumn{32}{c}{\cellcolor[HTML]{9AFF99}-0.005000001190928138}
\end{tabular}
\end{table}
\end{landscape}
\chapter{Standard and ReGen EAs Samples for Statistical Analysis}\label{appendB}
Standard and ReGen GA samples are tabulated by function in Appendices \ref{table1} to \ref{table8}. Each table presents fitness samples of twenty implementations with crossover rates (X) from $0.6$ to $1.0$ and generational (G) or steady-state (SS) population replacement. Columns report EA implementations, and rows contain the runs per algorithm; in total, there are thirty runs. Algorithms are represented by numbers from one to twenty; for example, GGAX06 refers to a standard generational GA with crossover rate $0.6$, and its samples are tabulated under the column named {\em1}\footnote{
GGAX06 (1), GGAX07 (2), GGAX08 (3), GGAX09 (4), GGAX10 (5), SSGAX06 (6), SSGAX07 (7), SSGAX08 (8), SSGAX09 (9), SSGAX10 (10), ReGenGGAX06 (11), ReGenGGAX07 (12), ReGenGGAX08 (13), ReGenGGAX09 (14), ReGenGGAX10 (15), ReGenSSGAX06 (16), ReGenSSGAX07 (17), ReGenSSGAX08 (18), ReGenSSGAX09 (19), ReGenSSGAX10 (20).}.
On the other hand, \textsc{HaEa}\xspace implementations in Appendix \ref{table9} are grouped by function; each function reports four implementations: standard generational \textsc{HaEa}\xspace (GHAEA (1)), steady-state \textsc{HaEa}\xspace (SSHAEA (2)), ReGen generational \textsc{HaEa}\xspace (ReGenGHAEA (3)), and ReGen steady-state \textsc{HaEa}\xspace (ReGenSSHAEA (4)). Columns report EA implementations, and rows contain the runs per algorithm; in total, there are thirty runs. Algorithms are represented by numbers from one to four.
\begin{landscape}
\begin{table}
\centering
\caption{Deceptive Order Three Fitness Sampling: Ten Classic GAs and Ten ReGen GAs with different crossover rates and 30 runs. Best fitness value per run.}
\label{table1}
\footnotesize
\begin{tabular}{llllllllllllllllllll}
\hline
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\hline
3438 & 3426 & 3456 & 3448 & 3434 & 3422 & 3444 & 3442 & 3420 & 3438 & 3562 & 3564 & 3542 & 3546 & 3584 & 3570 & 3578 & 3578 & 3592 & 3578 \\
3436 & 3426 & 3434 & 3428 & 3448 & 3448 & 3446 & 3424 & 3428 & 3446 & 3560 & 3570 & 3578 & 3570 & 3596 & 3584 & 3584 & 3542 & 3554 & 3580 \\
3426 & 3414 & 3436 & 3440 & 3430 & 3436 & 3424 & 3418 & 3438 & 3414 & 3582 & 3578 & 3576 & 3596 & 3580 & 3564 & 3570 & 3592 & 3566 & 3588 \\
3438 & 3428 & 3436 & 3450 & 3432 & 3432 & 3438 & 3412 & 3432 & 3440 & 3570 & 3584 & 3548 & 3566 & 3548 & 3572 & 3570 & 3582 & 3582 & 3582 \\
3422 & 3420 & 3432 & 3438 & 3442 & 3422 & 3454 & 3432 & 3444 & 3444 & 3574 & 3596 & 3586 & 3556 & 3584 & 3546 & 3586 & 3586 & 3566 & 3574 \\
3454 & 3426 & 3446 & 3446 & 3442 & 3426 & 3438 & 3424 & 3426 & 3434 & 3554 & 3574 & 3584 & 3594 & 3560 & 3568 & 3530 & 3588 & 3592 & 3588 \\
3438 & 3436 & 3448 & 3416 & 3430 & 3422 & 3424 & 3438 & 3438 & 3454 & 3570 & 3584 & 3584 & 3576 & 3582 & 3584 & 3580 & 3568 & 3590 & 3592 \\
3442 & 3438 & 3450 & 3438 & 3426 & 3420 & 3422 & 3434 & 3446 & 3436 & 3568 & 3566 & 3578 & 3590 & 3552 & 3554 & 3566 & 3594 & 3598 & 3580 \\
3438 & 3450 & 3422 & 3438 & 3438 & 3412 & 3448 & 3444 & 3418 & 3448 & 3566 & 3582 & 3576 & 3588 & 3576 & 3566 & 3578 & 3576 & 3598 & 3564 \\
3436 & 3420 & 3418 & 3438 & 3440 & 3438 & 3440 & 3438 & 3442 & 3442 & 3584 & 3586 & 3590 & 3590 & 3568 & 3568 & 3586 & 3590 & 3598 & 3582 \\
3428 & 3432 & 3436 & 3450 & 3448 & 3440 & 3428 & 3444 & 3434 & 3406 & 3578 & 3580 & 3592 & 3590 & 3584 & 3596 & 3590 & 3576 & 3574 & 3576 \\
3448 & 3428 & 3440 & 3422 & 3426 & 3434 & 3432 & 3434 & 3418 & 3432 & 3584 & 3566 & 3580 & 3586 & 3596 & 3566 & 3566 & 3586 & 3586 & 3594 \\
3438 & 3446 & 3430 & 3452 & 3442 & 3422 & 3432 & 3450 & 3434 & 3422 & 3580 & 3592 & 3580 & 3574 & 3588 & 3566 & 3580 & 3564 & 3576 & 3594 \\
3424 & 3430 & 3432 & 3452 & 3426 & 3442 & 3426 & 3442 & 3428 & 3430 & 3578 & 3570 & 3568 & 3566 & 3584 & 3588 & 3570 & 3584 & 3586 & 3562 \\
3426 & 3426 & 3418 & 3444 & 3430 & 3420 & 3432 & 3448 & 3424 & 3434 & 3578 & 3600 & 3588 & 3592 & 3580 & 3576 & 3556 & 3594 & 3576 & 3586 \\
3444 & 3430 & 3434 & 3430 & 3424 & 3424 & 3446 & 3434 & 3436 & 3452 & 3576 & 3566 & 3584 & 3588 & 3594 & 3578 & 3576 & 3582 & 3588 & 3596 \\
3464 & 3442 & 3422 & 3454 & 3444 & 3436 & 3434 & 3444 & 3444 & 3424 & 3592 & 3564 & 3578 & 3576 & 3592 & 3574 & 3574 & 3594 & 3578 & 3586 \\
3438 & 3420 & 3440 & 3434 & 3432 & 3450 & 3432 & 3438 & 3436 & 3444 & 3562 & 3584 & 3580 & 3576 & 3578 & 3586 & 3574 & 3566 & 3586 & 3588 \\
3448 & 3420 & 3438 & 3458 & 3438 & 3438 & 3434 & 3416 & 3448 & 3450 & 3590 & 3544 & 3560 & 3572 & 3580 & 3554 & 3572 & 3570 & 3590 & 3584 \\
3418 & 3432 & 3424 & 3430 & 3432 & 3438 & 3428 & 3444 & 3424 & 3446 & 3594 & 3586 & 3540 & 3562 & 3586 & 3570 & 3576 & 3588 & 3588 & 3596 \\
3444 & 3428 & 3418 & 3444 & 3418 & 3432 & 3432 & 3434 & 3448 & 3432 & 3586 & 3586 & 3582 & 3578 & 3558 & 3586 & 3582 & 3598 & 3588 & 3566 \\
3438 & 3430 & 3442 & 3438 & 3438 & 3420 & 3440 & 3416 & 3452 & 3446 & 3592 & 3584 & 3580 & 3588 & 3590 & 3596 & 3568 & 3586 & 3586 & 3598 \\
3422 & 3450 & 3430 & 3440 & 3428 & 3424 & 3428 & 3434 & 3422 & 3420 & 3580 & 3588 & 3582 & 3600 & 3586 & 3586 & 3546 & 3582 & 3594 & 3592 \\
3422 & 3430 & 3420 & 3416 & 3436 & 3450 & 3436 & 3446 & 3442 & 3446 & 3580 & 3596 & 3598 & 3580 & 3586 & 3584 & 3576 & 3548 & 3586 & 3594 \\
3432 & 3414 & 3438 & 3448 & 3452 & 3434 & 3434 & 3450 & 3434 & 3454 & 3596 & 3574 & 3580 & 3590 & 3578 & 3560 & 3586 & 3572 & 3592 & 3596 \\
3434 & 3442 & 3436 & 3438 & 3440 & 3432 & 3452 & 3442 & 3442 & 3428 & 3570 & 3576 & 3590 & 3588 & 3574 & 3572 & 3542 & 3574 & 3586 & 3580 \\
3434 & 3436 & 3442 & 3440 & 3432 & 3412 & 3444 & 3414 & 3436 & 3436 & 3596 & 3596 & 3564 & 3566 & 3600 & 3582 & 3594 & 3580 & 3568 & 3592 \\
3422 & 3422 & 3440 & 3434 & 3446 & 3428 & 3424 & 3440 & 3432 & 3440 & 3566 & 3576 & 3578 & 3558 & 3588 & 3574 & 3570 & 3590 & 3576 & 3564 \\
3424 & 3424 & 3438 & 3428 & 3428 & 3420 & 3432 & 3442 & 3440 & 3420 & 3578 & 3576 & 3592 & 3584 & 3590 & 3596 & 3594 & 3594 & 3580 & 3592 \\
3420 & 3432 & 3420 & 3432 & 3458 & 3424 & 3422 & 3420 & 3414 & 3426 & 3580 & 3576 & 3590 & 3586 & 3570 & 3568 & 3582 & 3570 & 3566 & 3600
\\\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Deceptive Order Four Fitness Sampling: Ten Classic GAs and Ten ReGen GAs with different crossover rates and 30 runs. Best fitness value per run.}
\label{table2}
\footnotesize
\begin{tabular}{llllllllllllllllllll}
\hline
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\hline
390 & 389 & 389 & 389 & 387 & 394 & 389 & 387 & 386 & 390 & 440 & 445 & 442 & 444 & 448 & 441 & 437 & 445 & 447 & 437 \\
391 & 396 & 388 & 389 & 401 & 386 & 391 & 384 & 392 & 397 & 449 & 447 & 447 & 444 & 447 & 444 & 447 & 445 & 444 & 445 \\
388 & 394 & 396 & 391 & 392 & 391 & 387 & 395 & 385 & 393 & 441 & 447 & 445 & 449 & 447 & 443 & 447 & 447 & 444 & 443 \\
387 & 390 & 395 & 388 & 399 & 385 & 384 & 393 & 395 & 394 & 447 & 446 & 444 & 445 & 445 & 445 & 446 & 439 & 448 & 446 \\
394 & 390 & 396 & 395 & 387 & 388 & 392 & 396 & 393 & 388 & 447 & 446 & 443 & 445 & 444 & 444 & 443 & 440 & 449 & 443 \\
387 & 389 & 386 & 396 & 388 & 382 & 392 & 391 & 388 & 388 & 439 & 446 & 433 & 443 & 444 & 444 & 449 & 448 & 445 & 444 \\
387 & 393 & 387 & 390 & 393 & 389 & 386 & 391 & 387 & 391 & 442 & 444 & 447 & 447 & 447 & 443 & 441 & 448 & 449 & 446 \\
395 & 388 & 393 & 397 & 396 & 390 & 383 & 392 & 390 & 394 & 449 & 446 & 440 & 444 & 446 & 445 & 446 & 445 & 447 & 446 \\
389 & 383 & 388 & 388 & 395 & 383 & 385 & 389 & 388 & 386 & 442 & 445 & 447 & 449 & 447 & 443 & 446 & 444 & 446 & 446 \\
390 & 381 & 391 & 387 & 392 & 391 & 386 & 390 & 388 & 389 & 439 & 445 & 446 & 446 & 449 & 447 & 448 & 439 & 444 & 446 \\
388 & 387 & 389 & 387 & 383 & 388 & 383 & 395 & 385 & 392 & 446 & 442 & 443 & 443 & 438 & 442 & 442 & 445 & 446 & 448 \\
382 & 386 & 385 & 390 & 391 & 385 & 381 & 386 & 389 & 393 & 445 & 446 & 448 & 443 & 443 & 435 & 446 & 445 & 447 & 442 \\
385 & 389 & 391 & 398 & 386 & 383 & 392 & 391 & 385 & 397 & 447 & 446 & 449 & 448 & 448 & 449 & 444 & 445 & 446 & 446 \\
397 & 389 & 391 & 391 & 394 & 384 & 384 & 384 & 394 & 384 & 443 & 444 & 443 & 448 & 449 & 443 & 447 & 447 & 443 & 446 \\
393 & 393 & 392 & 390 & 395 & 392 & 387 & 394 & 381 & 389 & 443 & 447 & 446 & 447 & 449 & 444 & 448 & 446 & 446 & 446 \\
384 & 385 & 384 & 395 & 387 & 392 & 387 & 387 & 397 & 391 & 443 & 447 & 441 & 445 & 448 & 446 & 445 & 444 & 447 & 448 \\
387 & 393 & 391 & 390 & 390 & 389 & 387 & 386 & 390 & 384 & 442 & 446 & 441 & 448 & 444 & 449 & 446 & 446 & 444 & 445 \\
399 & 379 & 384 & 391 & 385 & 397 & 386 & 392 & 390 & 389 & 450 & 446 & 441 & 446 & 441 & 442 & 444 & 443 & 448 & 448 \\
388 & 397 & 380 & 391 & 397 & 387 & 389 & 387 & 373 & 393 & 447 & 447 & 438 & 448 & 441 & 443 & 443 & 447 & 449 & 446 \\
397 & 393 & 393 & 390 & 398 & 391 & 393 & 395 & 383 & 387 & 447 & 449 & 444 & 447 & 446 & 444 & 445 & 446 & 447 & 439 \\
395 & 392 & 390 & 387 & 391 & 386 & 386 & 388 & 388 & 386 & 445 & 449 & 443 & 445 & 448 & 441 & 447 & 445 & 445 & 446 \\
384 & 388 & 387 & 390 & 393 & 386 & 392 & 389 & 390 & 388 & 446 & 448 & 446 & 441 & 444 & 442 & 446 & 443 & 441 & 447 \\
387 & 383 & 389 & 393 & 383 & 379 & 385 & 387 & 396 & 392 & 448 & 444 & 446 & 445 & 448 & 446 & 450 & 445 & 444 & 447 \\
397 & 384 & 393 & 383 & 398 & 382 & 390 & 394 & 390 & 388 & 445 & 445 & 445 & 445 & 448 & 446 & 443 & 438 & 444 & 448 \\
388 & 395 & 394 & 390 & 393 & 396 & 387 & 379 & 382 & 396 & 443 & 448 & 442 & 449 & 449 & 443 & 449 & 449 & 447 & 448 \\
393 & 389 & 391 & 389 & 390 & 393 & 388 & 390 & 394 & 399 & 439 & 449 & 443 & 444 & 446 & 442 & 444 & 445 & 447 & 439 \\
394 & 385 & 389 & 392 & 386 & 387 & 382 & 386 & 382 & 397 & 448 & 446 & 447 & 447 & 442 & 442 & 447 & 441 & 443 & 447 \\
388 & 391 & 399 & 389 & 399 & 390 & 386 & 391 & 388 & 395 & 449 & 446 & 449 & 449 & 445 & 445 & 448 & 445 & 448 & 444 \\
386 & 383 & 390 & 391 & 393 & 384 & 381 & 394 & 390 & 395 & 444 & 447 & 448 & 446 & 448 & 447 & 446 & 445 & 447 & 445 \\
385 & 388 & 390 & 386 & 395 & 376 & 390 & 383 & 390 & 394 & 445 & 444 & 447 & 448 & 444 & 440 & 447 & 447 & 443 & 444
\\\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Royal Road Fitness Sampling: Ten Classic GAs and Ten ReGen GAs with different crossover rates and 30 runs. Best fitness value per run.}
\label{table3}
\footnotesize
\begin{tabular}{llllllllllllllllllll}
\hline
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\hline
256 & 240 & 240 & 256 & 280 & 80 & 112 & 104 & 104 & 128 & 352 & 360 & 360 & 360 & 352 & 344 & 352 & 352 & 360 & 344 \\
216 & 184 & 264 & 272 & 312 & 104 & 96 & 120 & 128 & 112 & 360 & 360 & 360 & 360 & 352 & 344 & 352 & 344 & 352 & 360 \\
208 & 248 & 240 & 264 & 256 & 120 & 96 & 128 & 128 & 96 & 360 & 360 & 360 & 360 & 360 & 344 & 344 & 352 & 352 & 360 \\
168 & 200 & 248 & 280 & 280 & 136 & 80 & 112 & 96 & 144 & 360 & 344 & 360 & 360 & 360 & 344 & 352 & 360 & 360 & 352 \\
224 & 224 & 264 & 272 & 272 & 80 & 112 & 128 & 80 & 112 & 360 & 360 & 360 & 360 & 360 & 352 & 360 & 352 & 360 & 360 \\
200 & 256 & 264 & 272 & 296 & 96 & 128 & 120 & 56 & 120 & 360 & 360 & 360 & 360 & 360 & 360 & 336 & 312 & 360 & 360 \\
232 & 216 & 264 & 288 & 272 & 80 & 96 & 88 & 120 & 128 & 360 & 360 & 360 & 360 & 360 & 352 & 360 & 352 & 360 & 344 \\
184 & 224 & 248 & 272 & 272 & 72 & 88 & 144 & 104 & 136 & 360 & 360 & 360 & 352 & 352 & 336 & 352 & 360 & 352 & 352 \\
184 & 216 & 240 & 280 & 312 & 88 & 88 & 160 & 120 & 128 & 360 & 360 & 360 & 360 & 360 & 360 & 352 & 360 & 360 & 352 \\
264 & 200 & 216 & 264 & 280 & 96 & 88 & 80 & 144 & 136 & 360 & 360 & 360 & 360 & 360 & 352 & 344 & 352 & 344 & 352 \\
200 & 224 & 232 & 264 & 264 & 88 & 96 & 104 & 104 & 136 & 360 & 360 & 360 & 360 & 360 & 344 & 360 & 360 & 360 & 360 \\
224 & 216 & 232 & 232 & 304 & 88 & 96 & 128 & 176 & 112 & 344 & 360 & 360 & 360 & 360 & 352 & 336 & 336 & 344 & 360 \\
168 & 208 & 240 & 264 & 312 & 112 & 120 & 88 & 104 & 104 & 360 & 360 & 360 & 360 & 360 & 360 & 336 & 352 & 360 & 352 \\
224 & 216 & 240 & 208 & 320 & 96 & 120 & 128 & 128 & 112 & 360 & 360 & 360 & 360 & 360 & 328 & 360 & 352 & 360 & 360 \\
176 & 232 & 248 & 216 & 296 & 136 & 104 & 112 & 104 & 120 & 360 & 360 & 360 & 360 & 360 & 352 & 344 & 320 & 360 & 360 \\
160 & 216 & 248 & 248 & 280 & 88 & 88 & 112 & 104 & 128 & 360 & 360 & 360 & 360 & 360 & 344 & 328 & 344 & 360 & 352 \\
256 & 248 & 256 & 296 & 280 & 72 & 64 & 120 & 96 & 120 & 360 & 352 & 360 & 360 & 360 & 352 & 352 & 352 & 352 & 344 \\
232 & 192 & 216 & 248 & 288 & 88 & 128 & 104 & 128 & 144 & 352 & 360 & 352 & 352 & 360 & 344 & 352 & 352 & 344 & 360 \\
216 & 208 & 248 & 296 & 264 & 104 & 88 & 120 & 104 & 96 & 352 & 352 & 360 & 360 & 360 & 352 & 360 & 352 & 360 & 360 \\
208 & 224 & 240 & 240 & 272 & 96 & 88 & 96 & 136 & 120 & 352 & 360 & 360 & 352 & 360 & 344 & 352 & 352 & 360 & 360 \\
168 & 224 & 248 & 288 & 288 & 128 & 96 & 112 & 104 & 120 & 360 & 360 & 360 & 352 & 360 & 360 & 352 & 336 & 352 & 352 \\
200 & 224 & 248 & 288 & 288 & 88 & 128 & 96 & 112 & 136 & 360 & 352 & 360 & 360 & 360 & 360 & 360 & 344 & 360 & 360 \\
200 & 208 & 256 & 264 & 288 & 136 & 96 & 96 & 88 & 80 & 352 & 360 & 360 & 360 & 360 & 344 & 352 & 360 & 352 & 360 \\
184 & 216 & 264 & 240 & 264 & 120 & 96 & 72 & 112 & 128 & 352 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 352 & 352 \\
160 & 232 & 264 & 288 & 312 & 80 & 112 & 120 & 96 & 104 & 360 & 360 & 360 & 360 & 360 & 352 & 360 & 360 & 352 & 352 \\
168 & 216 & 248 & 264 & 256 & 104 & 72 & 80 & 136 & 160 & 360 & 360 & 360 & 360 & 360 & 344 & 336 & 360 & 336 & 360 \\
216 & 248 & 256 & 272 & 264 & 120 & 72 & 96 & 128 & 128 & 360 & 360 & 360 & 352 & 360 & 336 & 360 & 360 & 360 & 352 \\
184 & 208 & 264 & 256 & 264 & 80 & 112 & 104 & 152 & 112 & 360 & 360 & 360 & 360 & 360 & 352 & 360 & 352 & 352 & 352 \\
184 & 240 & 232 & 248 & 288 & 96 & 80 & 136 & 120 & 120 & 352 & 344 & 360 & 360 & 360 & 360 & 344 & 360 & 352 & 360 \\
176 & 208 & 216 & 240 & 280 & 104 & 88 & 96 & 80 & 176 & 352 & 360 & 352 & 360 & 360 & 352 & 328 & 344 & 360 & 352
\\\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Max Ones Fitness Sampling: Ten Classic GAs and Ten ReGen GAs with different crossover rates and 30 runs. Best fitness value per run.}
\label{table4}
\footnotesize
\begin{tabular}{llllllllllllllllllll}
\hline
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\hline
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 \\
360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360 & 360
\\\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Rastrigin Fitness Sampling: best fitness value per run over 30 runs. Columns 1--10 are ten Classic GAs and columns 11--20 are ten ReGen GAs with different crossover rates.}
\label{table5}
\scriptsize
\begin{tabular}{llllllllllllllllllll}
\hline
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\hline
8.821 & 16.925 & 17.668 & 6.890 & 12.467 & 10.058 & 5.658 & 13.224 & 7.675 & 8.346 & 1.025 & 1.034 & 3.340 & 0.020 & 0.029 & 0.028 & 0.020 & 1.025 & 0.018 & 0.029 \\
4.695 & 8.487 & 6.029 & 5.685 & 7.170 & 6.469 & 8.996 & 4.190 & 8.969 & 7.050 & 2.038 & 0.025 & 0.025 & 0.031 & 1.017 & 1.026 & 0.033 & 0.034 & 1.020 & 0.030 \\
10.883 & 11.481 & 11.148 & 7.373 & 9.042 & 16.263 & 16.694 & 11.852 & 9.621 & 5.029 & 0.037 & 2.027 & 2.010 & 0.025 & 0.010 & 1.019 & 0.018 & 0.015 & 0.016 & 0.020 \\
6.638 & 12.630 & 16.947 & 9.698 & 11.349 & 8.382 & 13.939 & 13.364 & 15.860 & 5.565 & 0.036 & 0.034 & 1.022 & 3.003 & 0.020 & 1.102 & 1.038 & 0.023 & 0.019 & 0.026 \\
9.881 & 24.151 & 23.382 & 9.000 & 11.102 & 13.115 & 7.649 & 15.759 & 8.495 & 11.857 & 0.028 & 1.012 & 0.016 & 0.025 & 1.334 & 0.014 & 0.016 & 0.031 & 0.029 & 0.019 \\
9.811 & 11.291 & 10.149 & 13.611 & 12.122 & 6.996 & 14.611 & 10.682 & 10.328 & 4.372 & 2.203 & 0.020 & 0.022 & 0.037 & 0.010 & 0.033 & 0.013 & 1.028 & 0.015 & 0.021 \\
8.844 & 7.067 & 10.632 & 6.024 & 4.705 & 13.976 & 10.343 & 7.968 & 7.535 & 10.845 & 1.022 & 2.198 & 0.025 & 0.026 & 0.016 & 0.042 & 2.014 & 1.003 & 0.035 & 1.025 \\
13.239 & 10.120 & 7.492 & 9.639 & 8.633 & 12.806 & 12.604 & 6.014 & 15.484 & 11.868 & 1.021 & 0.024 & 0.018 & 0.016 & 0.024 & 1.016 & 0.010 & 0.022 & 1.015 & 0.026 \\
9.183 & 10.611 & 4.700 & 16.800 & 14.149 & 8.325 & 9.458 & 14.993 & 11.339 & 5.372 & 0.018 & 0.025 & 0.045 & 0.025 & 0.030 & 1.019 & 2.014 & 1.020 & 1.025 & 0.020 \\
14.170 & 18.893 & 7.502 & 10.305 & 6.317 & 6.687 & 9.338 & 4.855 & 8.526 & 5.165 & 1.005 & 1.011 & 0.015 & 0.010 & 0.026 & 1.013 & 1.013 & 0.028 & 0.034 & 0.015 \\
14.924 & 17.254 & 15.112 & 12.265 & 19.823 & 15.109 & 8.299 & 13.318 & 15.874 & 6.029 & 1.010 & 2.000 & 0.035 & 0.021 & 0.020 & 0.024 & 0.017 & 0.034 & 0.029 & 0.020 \\
6.550 & 14.500 & 10.645 & 11.607 & 4.700 & 7.642 & 19.495 & 19.126 & 7.311 & 8.644 & 0.027 & 0.020 & 0.009 & 0.024 & 0.029 & 1.021 & 0.026 & 0.016 & 0.021 & 0.019 \\
15.662 & 5.325 & 9.670 & 24.116 & 8.540 & 14.253 & 4.674 & 8.021 & 11.656 & 6.160 & 1.010 & 1.163 & 1.000 & 1.009 & 0.025 & 1.015 & 1.011 & 1.012 & 0.020 & 1.022 \\
19.493 & 13.650 & 10.990 & 6.315 & 8.159 & 7.503 & 9.472 & 9.700 & 12.141 & 3.512 & 0.030 & 0.025 & 0.011 & 0.025 & 1.026 & 1.036 & 0.024 & 3.021 & 0.023 & 0.029 \\
7.368 & 6.654 & 9.818 & 13.285 & 14.133 & 8.692 & 18.326 & 10.503 & 5.695 & 3.517 & 2.010 & 0.019 & 0.029 & 0.035 & 0.020 & 0.030 & 0.020 & 2.007 & 0.034 & 0.040 \\
14.452 & 5.669 & 10.522 & 5.831 & 4.831 & 13.592 & 14.474 & 12.150 & 7.155 & 13.340 & 1.013 & 1.019 & 1.041 & 1.015 & 1.025 & 5.344 & 1.341 & 0.023 & 0.036 & 0.025 \\
16.839 & 8.163 & 8.630 & 11.120 & 5.884 & 4.679 & 14.938 & 12.344 & 3.512 & 5.831 & 0.019 & 0.999 & 0.025 & 0.021 & 0.013 & 0.024 & 1.020 & 0.030 & 1.011 & 0.029 \\
6.854 & 10.935 & 18.322 & 8.482 & 4.690 & 10.639 & 6.826 & 7.642 & 9.251 & 13.978 & 0.015 & 2.141 & 0.021 & 1.015 & 0.037 & 1.025 & 1.349 & 0.018 & 0.018 & 0.034 \\
13.080 & 6.987 & 2.203 & 4.517 & 4.836 & 16.136 & 5.530 & 13.505 & 11.321 & 7.633 & 1.311 & 1.203 & 1.019 & 0.037 & 1.000 & 1.030 & 0.011 & 0.022 & 1.014 & 0.025 \\
16.137 & 8.192 & 13.963 & 9.959 & 12.315 & 10.293 & 8.355 & 6.170 & 5.388 & 8.000 & 0.030 & 0.028 & 0.025 & 1.010 & 0.025 & 0.026 & 1.014 & 1.020 & 1.015 & 0.020 \\
11.318 & 21.257 & 12.999 & 14.507 & 10.389 & 13.303 & 14.338 & 9.793 & 11.878 & 10.674 & 0.038 & 0.033 & 1.096 & 0.027 & 0.019 & 1.010 & 1.010 & 0.013 & 0.025 & 1.005 \\
11.154 & 16.072 & 6.319 & 11.093 & 7.479 & 10.960 & 8.500 & 8.312 & 12.010 & 6.502 & 0.022 & 0.005 & 1.344 & 0.031 & 0.019 & 0.024 & 1.015 & 0.021 & 0.024 & 0.030 \\
2.390 & 15.257 & 5.535 & 10.178 & 7.537 & 11.290 & 12.347 & 11.220 & 11.103 & 6.343 & 1.006 & 0.020 & 0.034 & 0.020 & 1.012 & 0.020 & 1.099 & 0.023 & 0.016 & 2.010 \\
11.824 & 4.648 & 6.357 & 7.315 & 6.348 & 11.189 & 11.477 & 8.494 & 14.770 & 2.208 & 0.016 & 0.027 & 0.025 & 2.012 & 0.025 & 1.041 & 0.021 & 2.029 & 0.015 & 1.024 \\
13.144 & 10.822 & 9.216 & 5.986 & 2.203 & 21.758 & 3.522 & 7.165 & 8.031 & 5.993 & 0.028 & 0.036 & 0.003 & 1.015 & 0.025 & 0.034 & 0.019 & 1.006 & 0.019 & 0.008 \\
8.827 & 11.003 & 14.320 & 8.952 & 5.997 & 7.512 & 12.128 & 4.686 & 11.498 & 17.617 & 2.043 & 0.029 & 0.018 & 1.339 & 0.021 & 3.653 & 1.020 & 1.015 & 1.336 & 1.004 \\
10.677 & 10.301 & 13.601 & 21.457 & 9.704 & 21.965 & 12.598 & 10.948 & 3.371 & 9.165 & 0.030 & 0.028 & 0.020 & 0.015 & 0.284 & 2.038 & 0.018 & 0.032 & 2.024 & 0.025 \\
6.552 & 8.296 & 21.234 & 8.192 & 12.798 & 18.785 & 7.385 & 11.239 & 8.346 & 3.371 & 1.018 & 1.010 & 0.029 & 1.035 & 0.036 & 0.024 & 0.034 & 0.024 & 0.015 & 1.010 \\
14.923 & 4.512 & 10.328 & 10.220 & 5.195 & 10.001 & 14.270 & 4.296 & 12.598 & 5.507 & 1.011 & 2.006 & 0.026 & 1.038 & 1.030 & 0.016 & 2.031 & 0.016 & 0.010 & 1.020 \\
21.910 & 11.736 & 11.500 & 9.798 & 7.497 & 14.096 & 19.618 & 11.801 & 5.685 & 8.516 & 1.015 & 0.022 & 1.000 & 0.022 & 0.015 & 0.014 & 1.019 & 2.193 & 1.003 & 0.016
\\\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Rosenbrock Fitness Sampling: best fitness value per run over 30 runs. Columns 1--10 are ten Classic GAs and columns 11--20 are ten ReGen GAs with different crossover rates.}
\label{table6}
\scriptsize
\begin{tabular}{llllllllllllllllllll}
\hline
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\hline
0.389 & 0.350 & 0.597 & 0.464 & 0.282 & 1.056 & 0.504 & 1.107 & 1.423 & 0.361 & 0.254 & 0.335 & 0.220 & 1.734 & 0.160 & 0.445 & 0.205 & 0.113 & 0.975 & 0.321 \\
2.150 & 4.640 & 0.277 & 0.385 & 2.225 & 1.048 & 0.411 & 0.471 & 1.071 & 11.332 & 0.335 & 0.226 & 0.593 & 0.283 & 0.434 & 0.643 & 0.199 & 0.235 & 0.295 & 0.486 \\
0.505 & 0.360 & 1.144 & 10.815 & 0.496 & 0.973 & 0.417 & 0.341 & 0.300 & 0.623 & 0.203 & 0.257 & 0.251 & 0.215 & 0.135 & 0.254 & 0.169 & 0.268 & 0.144 & 0.153 \\
0.356 & 0.594 & 0.404 & 0.658 & 0.332 & 11.167 & 0.485 & 0.308 & 0.445 & 0.363 & 0.332 & 0.589 & 1.376 & 0.133 & 0.083 & 0.046 & 0.282 & 0.212 & 0.150 & 0.411 \\
0.480 & 0.283 & 0.331 & 11.445 & 10.102 & 0.623 & 1.847 & 4.572 & 0.446 & 0.304 & 0.331 & 0.492 & 0.219 & 0.314 & 0.163 & 3.257 & 0.260 & 0.147 & 0.268 & 0.169 \\
0.330 & 0.413 & 10.829 & 0.343 & 0.685 & 1.058 & 0.660 & 0.408 & 0.596 & 0.276 & 0.239 & 0.219 & 0.101 & 0.251 & 0.099 & 0.151 & 0.135 & 0.318 & 0.343 & 0.211 \\
0.353 & 0.407 & 10.731 & 0.479 & 0.277 & 0.295 & 4.506 & 10.195 & 5.862 & 0.346 & 0.302 & 0.212 & 0.077 & 0.386 & 0.297 & 0.075 & 0.223 & 0.102 & 0.185 & 0.288 \\
0.421 & 0.646 & 0.280 & 0.604 & 0.364 & 0.367 & 11.399 & 0.386 & 11.196 & 0.378 & 0.102 & 0.126 & 0.685 & 0.234 & 0.267 & 0.050 & 0.167 & 0.051 & 0.268 & 0.238 \\
0.649 & 0.610 & 0.414 & 0.474 & 0.834 & 1.465 & 0.277 & 0.333 & 0.565 & 0.293 & 0.514 & 0.204 & 0.260 & 0.534 & 0.142 & 0.716 & 1.489 & 0.110 & 0.196 & 0.143 \\
0.379 & 1.885 & 0.463 & 0.375 & 0.362 & 11.532 & 0.398 & 5.946 & 0.413 & 0.379 & 0.240 & 0.226 & 0.135 & 0.399 & 0.222 & 0.178 & 0.298 & 0.226 & 0.308 & 0.270 \\
0.292 & 0.287 & 10.438 & 0.344 & 11.398 & 0.977 & 0.928 & 1.527 & 0.593 & 0.513 & 0.664 & 0.278 & 0.294 & 0.246 & 0.336 & 0.230 & 0.147 & 0.320 & 0.285 & 0.275 \\
0.459 & 0.282 & 6.017 & 11.475 & 0.358 & 0.298 & 0.664 & 0.276 & 5.661 & 4.479 & 0.246 & 0.175 & 0.323 & 0.412 & 0.144 & 0.318 & 0.473 & 0.360 & 0.089 & 1.810 \\
18.694 & 0.382 & 0.564 & 4.563 & 0.789 & 0.982 & 1.151 & 0.314 & 0.349 & 0.316 & 0.229 & 0.143 & 0.225 & 0.187 & 0.118 & 0.255 & 0.580 & 1.908 & 0.325 & 0.320 \\
0.417 & 0.410 & 11.389 & 0.395 & 4.438 & 0.273 & 0.465 & 0.326 & 0.981 & 1.097 & 0.234 & 0.341 & 0.277 & 0.140 & 0.256 & 1.028 & 0.080 & 0.138 & 0.063 & 0.198 \\
2.793 & 5.910 & 0.418 & 4.491 & 0.526 & 11.398 & 11.347 & 0.981 & 0.467 & 11.208 & 0.512 & 0.380 & 0.185 & 0.213 & 0.202 & 0.309 & 0.481 & 0.239 & 0.150 & 0.136 \\
0.342 & 0.412 & 10.513 & 0.522 & 0.445 & 5.729 & 0.312 & 0.623 & 4.911 & 0.976 & 0.209 & 0.295 & 0.097 & 0.302 & 1.063 & 0.249 & 0.700 & 0.078 & 0.236 & 0.100 \\
0.467 & 10.865 & 0.411 & 0.465 & 0.370 & 0.729 & 0.634 & 1.062 & 0.429 & 0.669 & 0.316 & 0.222 & 0.072 & 0.267 & 0.087 & 0.063 & 0.181 & 0.093 & 0.190 & 0.509 \\
2.238 & 0.335 & 0.638 & 0.323 & 0.266 & 0.612 & 0.364 & 1.425 & 0.319 & 0.440 & 2.440 & 0.606 & 0.253 & 0.265 & 0.173 & 0.308 & 0.447 & 1.774 & 0.141 & 0.319 \\
0.475 & 0.269 & 0.282 & 0.469 & 0.366 & 0.354 & 0.441 & 0.506 & 0.414 & 5.724 & 0.580 & 0.144 & 0.136 & 0.266 & 0.218 & 3.389 & 0.219 & 0.359 & 0.149 & 0.094 \\
0.765 & 0.618 & 4.479 & 2.749 & 0.342 & 4.473 & 0.611 & 0.477 & 0.330 & 0.631 & 0.189 & 0.319 & 0.059 & 0.225 & 0.225 & 0.209 & 0.366 & 0.460 & 0.121 & 0.490 \\
10.093 & 0.449 & 0.472 & 0.333 & 1.105 & 4.270 & 0.569 & 0.412 & 10.488 & 0.623 & 0.280 & 0.203 & 0.232 & 0.151 & 0.166 & 0.500 & 0.438 & 0.238 & 0.288 & 0.322 \\
0.919 & 4.056 & 0.702 & 0.433 & 0.367 & 0.512 & 10.093 & 10.151 & 0.405 & 2.066 & 0.329 & 0.379 & 0.288 & 0.188 & 0.100 & 1.457 & 0.400 & 1.137 & 0.617 & 0.267 \\
4.622 & 4.517 & 2.198 & 6.003 & 0.335 & 1.192 & 0.283 & 0.672 & 0.550 & 0.465 & 0.179 & 0.147 & 0.201 & 0.185 & 0.088 & 0.542 & 3.254 & 0.142 & 0.623 & 0.259 \\
0.489 & 4.479 & 11.402 & 0.483 & 0.481 & 1.940 & 0.332 & 0.397 & 0.273 & 0.860 & 0.188 & 0.245 & 0.212 & 0.194 & 0.258 & 0.221 & 0.245 & 0.292 & 0.271 & 0.257 \\
0.981 & 0.324 & 0.402 & 0.382 & 0.981 & 2.239 & 0.387 & 0.928 & 0.476 & 0.494 & 0.610 & 0.377 & 0.284 & 0.126 & 0.290 & 0.148 & 0.193 & 0.318 & 0.290 & 0.061 \\
0.312 & 0.374 & 0.481 & 1.115 & 0.414 & 1.022 & 0.469 & 2.408 & 0.337 & 0.353 & 0.258 & 1.093 & 0.474 & 0.278 & 0.064 & 0.233 & 1.670 & 0.050 & 0.185 & 0.214 \\
0.418 & 1.330 & 0.330 & 1.882 & 0.455 & 4.911 & 0.545 & 0.444 & 0.787 & 0.347 & 0.232 & 0.285 & 0.252 & 0.063 & 0.341 & 0.115 & 0.124 & 0.158 & 0.565 & 0.149 \\
0.985 & 0.399 & 0.444 & 5.745 & 0.376 & 0.460 & 0.708 & 0.440 & 0.419 & 0.466 & 0.363 & 0.121 & 0.256 & 0.271 & 0.215 & 0.248 & 0.202 & 0.524 & 0.149 & 0.317 \\
0.472 & 0.467 & 6.004 & 0.462 & 1.791 & 4.644 & 0.333 & 0.382 & 0.283 & 1.105 & 0.307 & 0.311 & 0.320 & 0.272 & 0.112 & 0.178 & 0.292 & 0.343 & 0.114 & 0.157 \\
0.342 & 0.367 & 4.288 & 0.326 & 0.445 & 0.436 & 0.501 & 0.272 & 11.482 & 11.131 & 0.329 & 0.381 & 0.485 & 0.239 & 0.139 & 0.209 & 0.115 & 0.353 & 0.124 & 0.110
\\\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Schwefel Fitness Sampling: best fitness value per run over 30 runs. Columns 1--10 are ten Classic GAs and columns 11--20 are ten ReGen GAs with different crossover rates.}
\label{table7}
\tiny
\begin{tabular}{llllllllllllllllllll}
\hline
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\hline
267.4 & 30.5 & 91.5 & 61.0 & 4E-04 & 91.5 & 508.7 & 237.6 & 131.5 & 297.9 & 3.4E-04 & 118.439 & 17.208 & 7.656 & 3.5E-04 & 3.1E-04 & 2.8E-04 & 2.3E-04 & 30.497 & 2.6E-04 \\
192.5 & 61.0 & 118.4 & 118.5 & 61.0 & 354.5 & 162.0 & 118.4 & 30.5 & 4E-04 & 7.655 & 2.8E-04 & 2.3E-04 & 30.497 & 7.655 & 30.497 & 2.4E-04 & 2.7E-04 & 2.4E-04 & 2.6E-04 \\
4E-04 & 162.0 & 4E-04 & 148.9 & 118.5 & 119.1 & 61.0 & 2E-04 & 149.0 & 30.5 & 3.4E-04 & 2.9E-04 & 30.497 & 3.0E-04 & 2.6E-04 & 17.208 & 17.208 & 3.2E-04 & 3.0E-04 & 2.9E-04 \\
30.5 & 179.4 & 30.5 & 118.5 & 118.4 & 328.4 & 148.9 & 213.7 & 4E-04 & 30.5 & 23.391 & 3.6E-04 & 2.6E-04 & 47.705 & 2.7E-04 & 17.208 & 3.2E-04 & 2.6E-04 & 2.9E-04 & 2.6E-04 \\
61.0 & 30.5 & 4E-04 & 118.4 & 122.2 & 402.6 & 61.0 & 30.5 & 236.9 & 30.5 & 2.5E-04 & 2.6E-04 & 2.6E-04 & 3.2E-04 & 2.6E-04 & 2.9E-04 & 118.681 & 7.655 & 57.327 & 2.3E-04 \\
162.0 & 148.9 & 122.2 & 61.0 & 30.5 & 212.2 & 192.5 & 119.2 & 420.0 & 4E-04 & 2.4E-04 & 3.1E-04 & 3.2E-04 & 2.6E-04 & 2.7E-04 & 2.7E-04 & 3.1E-04 & 2.7E-04 & 2.6E-04 & 2.4E-04 \\
61.0 & 118.5 & 30.5 & 30.5 & 6E-04 & 179.5 & 118.5 & 61.0 & 162.0 & 149.6 & 30.490 & 3.2E-04 & 10.106 & 3.7E-04 & 17.208 & 3.6E-04 & 2.5E-04 & 2.4E-04 & 17.208 & 3.5E-04 \\
583.4 & 209.9 & 30.5 & 148.9 & 30.5 & 271.1 & 192.5 & 361.1 & 61.0 & 118.4 & 2.4E-04 & 3.5E-04 & 2.8E-04 & 2.9E-04 & 3.4E-04 & 3.6E-04 & 47.705 & 2.6E-04 & 118.439 & 2.4E-04 \\
517.3 & 4E-04 & 148.9 & 299.4 & 3E-04 & 61.0 & 61.0 & 61.0 & 4E-04 & 30.5 & 2.7E-04 & 3.7E-04 & 2.7E-04 & 3.1E-04 & 3.3E-04 & 2.3E-04 & 2.8E-04 & 118.439 & 2.6E-04 & 2.6E-04 \\
162.0 & 416.4 & 30.5 & 30.5 & 4E-04 & 148.9 & 148.9 & 118.4 & 30.5 & 61.0 & 30.497 & 30.242 & 2.7E-04 & 2.9E-04 & 2.6E-04 & 30.497 & 17.208 & 3.2E-04 & 38.152 & 3.2E-04 \\
4E-04 & 417.0 & 148.9 & 30.5 & 118.5 & 30.5 & 149.0 & 355.6 & 179.4 & 118.4 & 0.488 & 38.152 & 2.5E-04 & 3.0E-04 & 3.0E-04 & 2.3E-04 & 119.143 & 3.1E-04 & 2.3E-04 & 2.3E-04 \\
298.6 & 61.0 & 364.2 & 30.5 & 149.0 & 459.9 & 30.5 & 4E-04 & 4E-04 & 192.5 & 2.9E-04 & 2.8E-04 & 2.6E-04 & 3.6E-04 & 2.8E-04 & 148.936 & 3.2E-04 & 3.1E-04 & 2.6E-04 & 2.7E-04 \\
223.0 & 356.8 & 148.9 & 131.5 & 30.5 & 209.9 & 267.4 & 30.5 & 118.5 & 61.0 & 2.3E-04 & 2.6E-04 & 2.6E-04 & 3.5E-04 & 3.2E-04 & 30.497 & 118.460 & 2.6E-04 & 2.6E-04 & 2.6E-04 \\
91.5 & 187.9 & 30.5 & 30.5 & 4E-04 & 267.4 & 162.0 & 179.7 & 131.5 & 122.2 & 3.7E-04 & 31.046 & 39.757 & 2.9E-04 & 3.2E-04 & 2.9E-04 & 3.0E-04 & 3.1E-04 & 17.208 & 3.5E-04 \\
391.0 & 61.0 & 61.0 & 118.5 & 4E-04 & 310.9 & 280.4 & 148.9 & 118.5 & 149.6 & 26.830 & 2.9E-04 & 3.0E-04 & 2.7E-04 & 2.6E-04 & 2.8E-04 & 118.439 & 2.9E-04 & 3.4E-04 & 2.9E-04 \\
148.9 & 267.4 & 119.1 & 4E-04 & 148.9 & 179.5 & 3E-04 & 398.9 & 4E-04 & 210.0 & 2.9E-04 & 118.460 & 3.2E-04 & 3.4E-04 & 3.0E-04 & 30.497 & 2.5E-04 & 30.497 & 2.6E-04 & 2.9E-04 \\
118.4 & 61.0 & 131.5 & 4E-04 & 267.4 & 443.1 & 30.5 & 61.0 & 91.5 & 152.7 & 30.497 & 30.497 & 2.3E-04 & 2.8E-04 & 3.2E-04 & 3.5E-04 & 2.6E-04 & 2.7E-04 & 118.439 & 3.0E-04 \\
179.4 & 420.8 & 3E-04 & 30.5 & 4E-04 & 280.5 & 180.1 & 61.0 & 162.0 & 131.5 & 2.3E-04 & 3.2E-04 & 2.3E-04 & 2.6E-04 & 2.9E-04 & 45.808 & 30.497 & 3.1E-04 & 7.655 & 2.7E-04 \\
30.5 & 30.5 & 30.5 & 162.0 & 30.5 & 284.2 & 310.9 & 61.0 & 242.0 & 4E-04 & 3.7E-04 & 3.2E-04 & 2.3E-04 & 7.656 & 2.8E-04 & 3.7E-04 & 118.439 & 3.3E-04 & 3.5E-04 & 2.3E-04 \\
192.5 & 148.9 & 149.0 & 61.0 & 30.5 & 210.0 & 162.0 & 118.4 & 149.6 & 30.5 & 7.655 & 3.1E-04 & 133.749 & 3.2E-04 & 3.2E-04 & 2.4E-04 & 2.4E-04 & 2.9E-04 & 2.3E-04 & 2.3E-04 \\
293.5 & 149.0 & 148.9 & 61.0 & 3E-04 & 162.0 & 30.5 & 118.5 & 280.4 & 30.5 & 30.497 & 17.208 & 2.6E-04 & 3.0E-04 & 3.4E-04 & 2.8E-04 & 30.497 & 2.4E-04 & 2.7E-04 & 2.4E-04 \\
30.5 & 179.4 & 61.0 & 583.4 & 118.4 & 179.4 & 183.2 & 3E-04 & 118.5 & 131.5 & 3.8E-04 & 148.681 & 30.497 & 2.8E-04 & 2.6E-04 & 148.936 & 5.1E-04 & 7.655 & 17.208 & 3.1E-04 \\
148.9 & 148.9 & 61.0 & 30.5 & 30.5 & 91.5 & 4E-04 & 149.0 & 30.5 & 4E-04 & 2.9E-04 & 2.6E-04 & 2.9E-04 & 3.4E-04 & 2.5E-04 & 57.327 & 3.3E-04 & 7.655 & 161.375 & 2.9E-04 \\
741.9 & 149.0 & 148.9 & 30.5 & 4E-04 & 223.0 & 61.0 & 368.4 & 280.4 & 4E-04 & 2.3E-04 & 30.497 & 2.6E-04 & 2.9E-04 & 3.2E-04 & 118.740 & 2.6E-04 & 7.656 & 2.4E-04 & 3.7E-04 \\
242.0 & 179.5 & 3E-04 & 30.5 & 149.0 & 91.5 & 606.8 & 4E-04 & 4E-04 & 61.0 & 4.1E-04 & 3.2E-04 & 2.6E-04 & 2.9E-04 & 2.5E-04 & 0.001 & 118.439 & 2.8E-04 & 118.439 & 2.6E-04 \\
30.5 & 148.9 & 3E-04 & 5E-04 & 61.0 & 192.5 & 30.5 & 293.5 & 4E-04 & 118.5 & 7.655 & 2.4E-04 & 23.391 & 2.6E-04 & 2.3E-04 & 69.199 & 3.5E-04 & 3.4E-04 & 118.439 & 2.6E-04 \\
122.7 & 183.2 & 119.1 & 30.5 & 61.0 & 297.9 & 162.0 & 131.5 & 30.5 & 30.5 & 2.4E-04 & 4.6E-04 & 17.208 & 3.3E-04 & 3.0E-04 & 2.4E-04 & 2.7E-04 & 30.497 & 2.6E-04 & 2.9E-04 \\
209.9 & 355.3 & 179.4 & 61.0 & 5E-04 & 30.5 & 30.5 & 118.4 & 118.4 & 118.5 & 2.6E-04 & 2.6E-04 & 3.3E-04 & 2.9E-04 & 2.9E-04 & 26.830 & 3.3E-04 & 3.5E-04 & 2.9E-04 & 2.7E-04 \\
149.0 & 61.0 & 250.0 & 3E-04 & 131.5 & 118.4 & 209.9 & 4E-04 & 179.7 & 30.5 & 30.497 & 7.656 & 2.7E-04 & 2.6E-04 & 2.8E-04 & 2.6E-04 & 3.8E-04 & 3.6E-04 & 2.6E-04 & 3.1E-04 \\
179.5 & 122.0 & 61.0 & 118.4 & 250.0 & 122.0 & 149.0 & 298.6 & 237.2 & 119.1 & 118.740 & 31.046 & 2.7E-04 & 3.2E-04 & 3.5E-04 & 119.143 & 2.3E-04 & 2.7E-04 & 2.9E-04 & 2.7E-04
\\\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Griewank Fitness Sampling: best fitness value per run over 30 runs. Columns 1--10 are ten Classic GAs and columns 11--20 are ten ReGen GAs with different crossover rates.}
\label{table8}
\scriptsize
\begin{tabular}{llllllllllllllllllll}
\hline
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\\hline
0.146 & 0.237 & 0.120 & 0.152 & 0.090 & 0.077 & 0.202 & 0.075 & 0.131 & 0.334 & 0.085 & 0.030 & 0.083 & 0.268 & 0.122 & 0.058 & 0.070 & 0.084 & 0.045 & 0.176 \\
0.092 & 0.503 & 0.156 & 0.124 & 0.114 & 0.586 & 0.244 & 0.167 & 0.119 & 0.131 & 0.021 & 0.037 & 0.190 & 0.042 & 0.100 & 0.111 & 0.016 & 0.057 & 0.066 & 0.024 \\
0.201 & 0.262 & 0.113 & 0.119 & 0.041 & 0.141 & 0.269 & 0.220 & 0.389 & 0.145 & 0.084 & 0.020 & 0.117 & 0.010 & 0.036 & 0.206 & 0.045 & 0.067 & 0.083 & 0.064 \\
0.235 & 0.148 & 0.196 & 0.132 & 0.274 & 0.140 & 0.158 & 0.160 & 0.105 & 0.299 & 0.058 & 0.087 & 0.160 & 0.098 & 0.063 & 0.055 & 0.109 & 0.081 & 0.199 & 0.062 \\
0.163 & 0.409 & 0.199 & 0.370 & 0.162 & 0.261 & 0.157 & 0.118 & 0.244 & 0.310 & 0.050 & 0.085 & 0.084 & 0.080 & 0.076 & 0.134 & 0.037 & 0.253 & 0.064 & 0.025 \\
0.135 & 0.262 & 0.189 & 0.131 & 0.048 & 0.144 & 0.218 & 0.463 & 0.197 & 0.232 & 0.050 & 0.040 & 0.028 & 0.139 & 0.051 & 0.042 & 0.088 & 0.080 & 0.012 & 0.077 \\
0.132 & 0.089 & 0.220 & 0.093 & 0.165 & 0.244 & 0.111 & 0.169 & 0.160 & 0.056 & 0.244 & 0.086 & 0.157 & 0.008 & 0.076 & 0.109 & 0.058 & 0.036 & 0.044 & 0.057 \\
0.252 & 0.079 & 0.212 & 0.093 & 0.110 & 0.271 & 0.208 & 0.433 & 0.215 & 0.179 & 0.072 & 0.078 & 0.088 & 0.165 & 0.154 & 0.090 & 0.094 & 0.059 & 0.013 & 0.022 \\
0.365 & 0.223 & 0.143 & 0.168 & 0.134 & 0.218 & 0.129 & 0.179 & 0.215 & 0.131 & 0.070 & 0.074 & 0.032 & 0.044 & 0.130 & 0.027 & 0.026 & 0.051 & 0.079 & 0.047 \\
0.235 & 0.156 & 0.137 & 0.121 & 0.394 & 0.493 & 0.226 & 0.203 & 0.172 & 0.217 & 0.035 & 0.092 & 0.018 & 0.069 & 0.046 & 0.121 & 0.086 & 0.151 & 0.053 & 0.127 \\
0.123 & 0.246 & 0.487 & 0.157 & 0.129 & 0.413 & 0.098 & 0.271 & 0.122 & 0.161 & 0.100 & 0.059 & 0.154 & 0.049 & 0.081 & 0.035 & 0.073 & 0.038 & 0.178 & 0.090 \\
0.264 & 0.256 & 0.207 & 0.211 & 0.149 & 0.110 & 0.087 & 0.334 & 0.155 & 0.290 & 0.072 & 0.020 & 0.074 & 0.062 & 0.165 & 0.045 & 0.027 & 0.000 & 0.026 & 0.010 \\
0.396 & 0.155 & 0.307 & 0.128 & 0.114 & 0.130 & 0.124 & 0.142 & 0.113 & 0.083 & 0.071 & 0.017 & 0.044 & 0.160 & 0.030 & 0.081 & 0.110 & 0.067 & 0.077 & 0.048 \\
0.119 & 0.134 & 0.213 & 0.207 & 0.037 & 0.106 & 0.171 & 0.268 & 0.097 & 0.186 & 0.069 & 0.074 & 0.084 & 0.081 & 0.090 & 0.072 & 0.074 & 0.116 & 0.037 & 0.033 \\
0.094 & 0.251 & 0.136 & 0.271 & 0.262 & 0.568 & 0.114 & 0.072 & 0.147 & 0.190 & 0.040 & 0.102 & 0.061 & 0.045 & 0.046 & 0.065 & 0.056 & 0.030 & 0.146 & 0.041 \\
0.223 & 0.085 & 0.378 & 0.165 & 0.062 & 0.184 & 0.129 & 0.099 & 0.149 & 0.236 & 0.035 & 0.062 & 0.003 & 0.064 & 0.067 & 0.039 & 0.089 & 0.034 & 0.080 & 0.049 \\
0.152 & 0.224 & 0.215 & 0.126 & 0.074 & 0.199 & 0.130 & 0.363 & 0.166 & 0.105 & 0.047 & 0.068 & 0.069 & 0.077 & 0.116 & 0.039 & 0.073 & 0.034 & 0.071 & 0.097 \\
0.156 & 0.164 & 0.134 & 0.224 & 0.080 & 0.267 & 0.183 & 0.191 & 0.139 & 0.289 & 0.048 & 0.002 & 0.039 & 0.050 & 0.031 & 0.075 & 0.008 & 0.037 & 0.055 & 0.036 \\
0.142 & 0.184 & 0.026 & 0.196 & 0.088 & 0.320 & 0.201 & 0.225 & 0.172 & 0.199 & 0.057 & 0.030 & 0.056 & 0.035 & 0.185 & 0.053 & 0.010 & 0.023 & 0.104 & 0.077 \\
0.188 & 0.037 & 0.227 & 0.293 & 0.082 & 0.158 & 0.115 & 0.085 & 0.317 & 0.103 & 0.103 & 0.080 & 0.243 & 0.202 & 0.091 & 0.026 & 0.031 & 0.003 & 0.015 & 0.100 \\
0.207 & 0.280 & 0.123 & 0.147 & 0.188 & 0.138 & 0.150 & 0.260 & 0.232 & 0.397 & 0.089 & 0.015 & 0.000 & 0.060 & 0.158 & 0.054 & 0.124 & 0.033 & 0.020 & 0.148 \\
0.277 & 0.257 & 0.210 & 0.072 & 0.124 & 0.187 & 0.158 & 0.209 & 0.087 & 0.091 & 0.040 & 0.044 & 0.054 & 0.184 & 0.110 & 0.094 & 0.178 & 0.091 & 0.187 & 0.044 \\
0.113 & 0.218 & 0.351 & 0.175 & 0.152 & 0.559 & 0.493 & 0.118 & 0.120 & 0.329 & 0.120 & 0.018 & 0.076 & 0.007 & 0.052 & 0.006 & 0.055 & 0.091 & 0.044 & 0.103 \\
0.117 & 0.143 & 0.186 & 0.182 & 0.199 & 0.107 & 0.399 & 0.492 & 0.181 & 0.185 & 0.206 & 0.098 & 0.018 & 0.028 & 0.086 & 0.101 & 0.037 & 0.071 & 0.096 & 0.145 \\
0.119 & 0.089 & 0.418 & 0.152 & 0.142 & 0.249 & 0.204 & 0.149 & 0.140 & 0.141 & 0.079 & 0.060 & 0.102 & 0.097 & 0.075 & 0.072 & 0.012 & 0.095 & 0.087 & 0.032 \\
0.148 & 0.142 & 0.106 & 0.201 & 0.171 & 0.522 & 0.211 & 0.301 & 0.286 & 0.142 & 0.067 & 0.070 & 0.057 & 0.130 & 0.035 & 0.039 & 0.027 & 0.082 & 0.067 & 0.023 \\
0.101 & 0.147 & 0.045 & 0.134 & 0.131 & 0.143 & 0.089 & 0.188 & 0.194 & 0.210 & 0.082 & 0.057 & 0.086 & 0.064 & 0.055 & 0.064 & 0.078 & 0.085 & 0.068 & 0.132 \\
0.130 & 0.193 & 0.108 & 0.234 & 0.148 & 0.095 & 0.128 & 0.364 & 0.113 & 0.189 & 0.053 & 0.084 & 0.138 & 0.075 & 0.142 & 0.067 & 0.043 & 0.059 & 0.022 & 0.071 \\
0.186 & 0.260 & 0.133 & 0.266 & 0.256 & 0.142 & 0.205 & 0.153 & 0.149 & 0.088 & 0.013 & 0.234 & 0.096 & 0.077 & 0.045 & 0.099 & 0.027 & 0.130 & 0.032 & 0.059 \\
0.119 & 0.282 & 0.165 & 0.094 & 0.167 & 0.162 & 0.142 & 0.104 & 0.088 & 0.188 & 0.129 & 0.082 & 0.033 & 0.086 & 0.137 & 0.064 & 0.201 & 0.043 & 0.010 & 0.037
\\\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Fitness Sampling: best fitness value per run over 30 runs. For each function, columns 1--2 are two standard \textsc{HaEa}\xspace implementations and columns 3--4 are two ReGen \textsc{HaEa}\xspace implementations.}
\label{table9}
\tiny
\begin{tabular}{llllllllllllllllllll}
\hline
\multicolumn{4}{c}{Deceptive Order Three} & \multicolumn{4}{c}{Deceptive Order Four} & \multicolumn{4}{c}{Rastrigin} & \multicolumn{4}{c}{Schwefel} & \multicolumn{4}{c}{Griewank} \\
1 & 2 & 3 & 4 & 1 & 2 & 3 & 4 & 1 & 2 & 3 & 4 & 1 & 2 & 3 & 4 & 1 & 2 & 3 & 4 \\\hline
3430 & 3440 & 3596 & 3576 & 392 & 387 & 447 & 447 & 5.8084 & 5.6855 & 0.0337 & 0.0298 & 1.489E+02 & 9.149E+01 & 2.284E-04 & 2.636E-04 & 0.1294 & 0.3327 & 0.0609 & 0.0619 \\
3456 & 3438 & 3586 & 3584 & 391 & 393 & 446 & 446 & 17.8005 & 18.9239 & 0.0147 & 0.0223 & 1.184E+02 & 2.979E+02 & 2.672E-04 & 2.459E-04 & 0.0805 & 0.1648 & 0.1170 & 0.0261 \\
3444 & 3446 & 3592 & 3594 & 390 & 387 & 446 & 443 & 5.3688 & 19.2643 & 0.0198 & 0.0160 & 9.149E+01 & 4.131E-04 & 2.870E-04 & 3.050E+01 & 0.2237 & 0.3112 & 0.0248 & 0.0559 \\
3454 & 3436 & 3590 & 3590 & 395 & 393 & 448 & 449 & 14.7865 & 10.5025 & 0.0000 & 0.0099 & 3.515E-04 & 1.496E+02 & 2.284E-04 & 2.284E-04 & 0.2458 & 0.1858 & 0.1020 & 0.0985 \\
3436 & 3450 & 3570 & 3590 & 393 & 382 & 441 & 447 & 6.6905 & 14.0264 & 0.0246 & 0.0259 & 6.099E+01 & 6.099E+01 & 2.591E-04 & 1.184E+02 & 0.0965 & 0.4865 & 0.0517 & 0.0822 \\
3430 & 3434 & 3588 & 3574 & 391 & 394 & 449 & 444 & 12.0701 & 20.2973 & 0.0239 & 0.0099 & 4.130E-04 & 4.438E-04 & 2.365E-04 & 2.591E-04 & 0.1813 & 0.0913 & 0.0228 & 0.0607 \\
3438 & 3426 & 3578 & 3598 & 389 & 395 & 449 & 449 & 15.1366 & 6.5026 & 1.3411 & 0.0296 & 6.099E+01 & 2.099E+02 & 2.284E-04 & 2.591E-04 & 0.0622 & 0.4339 & 0.0614 & 0.0283 \\
3438 & 3444 & 3590 & 3586 & 394 & 392 & 446 & 446 & 4.1784 & 13.2038 & 0.0149 & 0.0048 & 4.131E-04 & 3.109E+02 & 3.096E-04 & 7.655E+00 & 0.3097 & 0.2075 & 0.0494 & 0.0099 \\
3422 & 3442 & 3576 & 3590 & 392 & 405 & 442 & 446 & 12.1856 & 6.9979 & 0.0148 & 0.0187 & 3.823E-04 & 2.099E+02 & 2.283E-04 & 2.899E-04 & 0.2064 & 0.1752 & 0.1728 & 0.0426 \\
3426 & 3428 & 3598 & 3596 & 395 & 392 & 449 & 446 & 5.7429 & 22.4438 & 0.0248 & 0.0298 & 3.050E+01 & 3.321E+02 & 3.044E-04 & 3.515E-04 & 0.5050 & 0.1125 & 0.0142 & 0.0356 \\
3432 & 3418 & 3588 & 3590 & 393 & 391 & 449 & 446 & 8.9663 & 16.6233 & 0.0112 & 0.0248 & 3.823E-04 & 2.935E+02 & 2.283E-04 & 2.364E-04 & 0.1039 & 0.3101 & 0.0007 & 0.0567 \\
3430 & 3422 & 3596 & 3592 & 396 & 394 & 448 & 446 & 16.6328 & 12.2330 & 0.0209 & 0.0149 & 6.074E+01 & 3.050E+01 & 2.284E-04 & 2.591E-04 & 0.2305 & 0.0818 & 0.0173 & 0.0617 \\
3444 & 3420 & 3582 & 3570 & 399 & 388 & 447 & 444 & 9.4359 & 16.1912 & 0.0198 & 0.0273 & 3.823E-04 & 1.220E+02 & 2.591E-04 & 2.591E-04 & 0.2643 & 0.3328 & 0.0466 & 0.0738 \\
3450 & 3442 & 3594 & 3596 & 394 & 389 & 449 & 446 & 16.1239 & 10.1660 & 0.0248 & 0.0177 & 3.823E-04 & 2.674E+02 & 3.188E-04 & 2.591E-04 & 0.2036 & 0.2282 & 0.0497 & 0.0495 \\
3442 & 3426 & 3596 & 3590 & 395 & 394 & 446 & 448 & 7.2021 & 5.3528 & 0.0389 & 0.0284 & 6.099E+01 & 4.462E+02 & 2.591E-04 & 2.364E-04 & 0.1272 & 0.1607 & 0.0522 & 0.0296 \\
3426 & 3430 & 3566 & 3596 & 393 & 391 & 446 & 448 & 13.8471 & 12.2382 & 0.0112 & 0.0112 & 4.368E+02 & 1.184E+02 & 2.591E-04 & 2.445E-04 & 0.2504 & 0.1152 & 0.0780 & 0.0342 \\
3422 & 3432 & 3578 & 3588 & 389 & 394 & 442 & 449 & 13.9872 & 16.4488 & 0.0099 & 1.0186 & 3.207E-04 & 1.185E+02 & 2.283E-04 & 3.196E-04 & 0.2690 & 0.3538 & 0.0745 & 0.0684 \\
3438 & 3436 & 3588 & 3588 & 397 & 395 & 448 & 447 & 2.0198 & 7.3038 & 1.0236 & 1.0225 & 4.130E-04 & 1.492E+02 & 2.592E-04 & 2.283E-04 & 0.1056 & 0.0869 & 0.0484 & 0.0531 \\
3430 & 3420 & 3570 & 3586 & 394 & 387 & 447 & 446 & 19.4519 & 3.8559 & 0.0198 & 0.0099 & 3.050E+01 & 4.131E-04 & 2.284E-04 & 3.050E+01 & 0.1198 & 0.2346 & 0.0314 & 0.0307 \\
3446 & 3448 & 3576 & 3594 & 396 & 385 & 447 & 445 & 6.8260 & 16.2976 & 0.0197 & 0.0198 & 3.050E+01 & 3.050E+01 & 2.591E-04 & 2.592E-04 & 0.2196 & 0.2147 & 0.0261 & 0.0591 \\
3452 & 3418 & 3592 & 3592 & 390 & 395 & 449 & 449 & 11.1479 & 15.3320 & 0.0274 & 0.0149 & 3.050E+01 & 1.490E+02 & 2.283E-04 & 2.899E-04 & 0.3161 & 0.4101 & 0.0746 & 0.0559 \\
3446 & 3448 & 3576 & 3562 & 398 & 392 & 448 & 447 & 7.6809 & 12.3748 & 0.0294 & 0.0298 & 3.050E+01 & 6.099E+01 & 2.283E-04 & 2.591E-04 & 0.2613 & 0.1142 & 0.0404 & 0.0298 \\
3440 & 3442 & 3588 & 3586 & 388 & 391 & 447 & 448 & 11.1480 & 13.3377 & 0.0182 & 0.0177 & 4.131E-04 & 2.711E+02 & 3.050E+01 & 2.591E-04 & 0.3308 & 0.1421 & 0.0370 & 0.0969 \\
3446 & 3432 & 3596 & 3596 & 395 & 387 & 449 & 444 & 16.5726 & 17.9414 & 0.0197 & 1.0143 & 4.130E-04 & 3.050E+01 & 2.899E-04 & 2.284E-04 & 0.2313 & 0.0914 & 0.1219 & 0.0530 \\
3436 & 3426 & 3598 & 3590 & 397 & 391 & 449 & 444 & 9.8528 & 20.9124 & 0.0099 & 0.0098 & 1.222E+02 & 1.489E+02 & 2.592E-04 & 2.283E-04 & 0.5083 & 0.2356 & 0.1160 & 0.0425 \\
3456 & 3434 & 3578 & 3600 & 401 & 394 & 448 & 447 & 12.8651 & 6.3322 & 0.0112 & 0.0246 & 3.515E-04 & 3.050E+01 & 2.672E-04 & 2.284E-04 & 0.1499 & 0.0751 & 0.0680 & 0.0693 \\
3446 & 3430 & 3592 & 3594 & 393 & 387 & 446 & 449 & 6.4923 & 16.9398 & 0.0099 & 0.0147 & 3.050E+01 & 1.801E+02 & 2.592E-04 & 3.050E+01 & 0.1251 & 0.7126 & 0.0270 & 0.0746 \\
3432 & 3460 & 3586 & 3588 & 396 & 396 & 449 & 442 & 13.9234 & 17.4008 & 0.0215 & 0.0198 & 3.207E-04 & 1.191E+02 & 2.284E-04 & 2.591E-04 & 0.3905 & 0.3456 & 0.0044 & 0.0447 \\
3436 & 3440 & 3566 & 3598 & 394 & 383 & 448 & 446 & 14.8389 & 15.8254 & 1.0198 & 0.0198 & 4.746E-04 & 1.490E+02 & 2.672E-04 & 9.049E-01 & 0.1547 & 0.1656 & 0.0188 & 0.0387 \\
3422 & 3444 & 3584 & 3576 & 395 & 393 & 445 & 446 & 10.2835 & 13.0179 & 1.0322 & 0.0198 & 4.131E-04 & 6.099E+01 & 2.283E-04 & 2.591E-04 & 0.1183 & 0.2691 & 0.0149 & 0.0960
\\\hline
\end{tabular}
\end{table}
\end{landscape}
\chapter{Introduction}
Optimization is a common task in people's lives. Investors use passive investment strategies that avoid excessive risk while still obtaining substantial returns. A conventional application of calculus is computing the minimum or maximum value of a function. Manufacturers strive for maximum efficiency in their production procedures. Companies reduce production costs or maximize revenue, for example, by reducing the amount of material used to pack a product of a particular size without detriment to quality. Software engineers design applications to improve the management of companies' processes. The school bus route that picks a group of students up from their homes, takes them to school, and brings them back every school day must take into account distances between homes and travel time. Optimization is an essential process: it is present in many activities, contributes to decision science, and is relevant to the analysis of physical systems.
Making use of the optimization process requires identifying an objective, a quantitative measure of the performance of the system under consideration. The objective can be profit, time, energy, or any resource that can be represented numerically; it depends on certain characteristics of the problem, called variables or unknowns. The goal is to find the values of the variables that optimize the objective; variables may be subject to constraints or restrictions, for example, quantities such as the distance between two points or the interest rate on a loan must be positive. The process of identifying the objective, variables, and constraints for a problem is known as modeling. The first step in the optimization process is to build an appropriate model. Once the model is formulated, an optimizer (a problem-solving strategy for optimization problems, such as a system of equations, an analytic solver, or an algorithm) can be applied to find a satisfactory solution. There is no unique optimization solver but a set of optimizers, each of which is suited to a particular type of optimization problem. Picking a suitable optimizer for a specific problem is fundamental; it may determine whether the problem is tractable and whether a solution is found \cite{CAVAZZUTI, KOZIEL, NOCEDAL}.
After a problem-solving strategy is applied to the model, the next step is to recognize whether it succeeds in finding a solution. Mathematical expressions known as optimality conditions help to check whether a candidate set of variable values satisfies the model's assumptions. If the conditions are not satisfied, they may indicate how the current estimate of the solution should be adjusted. The model itself may be enhanced with techniques such as one-at-a-time {\em sensitivity analysis}, which analyzes the influence of one parameter on the cost function at a time and exposes how susceptible the solution is to changes in the model and data. Interpreting the obtained solutions in terms of their applicability may suggest ways in which the model can be refined or corrected. The optimization process is repeated if changes are introduced to the model \cite{CAVAZZUTI, KOZIEL, NOCEDAL}.
An optimization algorithm is a method that iteratively generates and compares several solutions until it finds an optimal or satisfactory one. Two widely used types of optimization algorithms are deterministic and stochastic. Deterministic algorithms do not involve randomness; they follow well-defined rules and steps to find a solution. In contrast, stochastic algorithms incorporate probabilistic transition rules by nature \cite{CAVAZZUTI}. The use of randomness may enable the method to escape from local optima and subsequently reach a global optimum. Indeed, this principle of randomization is an effective way to design algorithms that perform consistently well across multiple data sets and many problem types \cite{CAVAZZUTI, KOZIEL}. Evolutionary algorithms are one kind of stochastic optimization method.
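The contrast between deterministic and stochastic search can be illustrated with a short sketch. The toy landscape, the greedy hill climber, and the random-restart wrapper below are illustrative assumptions, not methods from this thesis; the point is only that the deterministic climber can get stuck on a local peak, while the stochastic restarts eventually reach the global one.

```python
import random

# Toy 1D landscape (an assumption for illustration): a local peak at
# x = 2 (value 3) and the global peak at x = 8 (value 10).
LANDSCAPE = [0, 1, 3, 2, 1, 0, 4, 7, 10, 6]

def f(x):
    return LANDSCAPE[x]

def hill_climb(x):
    """Deterministic greedy ascent: always move to the better neighbor.
    Starting from the same point, it always ends at the same optimum."""
    while True:
        neighbors = [n for n in (x - 1, x + 1) if 0 <= n < len(LANDSCAPE)]
        best = max(neighbors, key=f)
        if f(best) <= f(x):
            return x  # stuck at a (possibly only local) optimum
        x = best

def random_restart(restarts=25, seed=1):
    """Stochastic variant: random starting points let the search escape
    the basin of attraction of a local optimum."""
    rng = random.Random(seed)
    starts = [rng.randrange(len(LANDSCAPE)) for _ in range(restarts)]
    return max((hill_climb(s) for s in starts), key=f)
```

Starting the deterministic climber at `x = 1` ends on the local peak at `x = 2`, whereas the randomized restarts almost surely place at least one start in the basin of the global peak at `x = 8`.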
Evolutionary Algorithms (EAs) are a subset of population-based, metaheuristic optimization algorithms from the field of Evolutionary Computation, which describes mechanisms inspired by natural evolution, the process that drives biological evolution. There are many types of evolutionary algorithms; the most widely known are the Genetic Algorithm (GA), Genetic Programming (GP), Evolution Strategies (ES), and Hybrid Evolutionary Algorithms (HEAs). The common underlying idea behind all EAs is the same: an initial population of individuals, a parent selection process that considers the fitness of each individual, and a transformation process that creates new individuals through crossover and mutation. Candidate solutions to an optimization problem act as the population's individuals, and a fitness function determines the quality of solutions. The population then evolves through the repeated application of these mechanisms \cite{CAVAZZUTI, EIBEN, HOLLAND, KOZIEL}.
Many computer scientists have been interested in understanding the phenomenon of adaptation as it occurs in nature and in developing ways in which natural adaptation mechanisms might be brought into computational methods. Current evolutionary algorithms are suitable for some of the most important computational problems in many areas, for example, linear programming problems (manufacturing and transportation), convex problems (communications and networks), complementarity problems (economics and engineering), and combinatorial problems (mathematics) such as the traveling salesman problem (TSP), the minimum spanning tree problem (MST), and the knapsack problem \cite{EIBEN, MITCHELL}. However, some computational problems involve searching through a large number of solution possibilities and require strategies that facilitate the adaptability of individuals in order to perform well in a changing environment \cite{MITCHELL}. In recent years, several authors have worked on hybrid strategies to improve the efficiency of population-based global search methods, so that the adaptive behavior of populations can be rapidly manifested under selective pressure. Nevertheless, the possibility of finding good solutions remains dependent on parameter settings or on long-term search.
Developing problem solvers (algorithms) is a common task in Computer Science. There are different motives behind the design of algorithms. One is that engineers analyze processes in nature to mimic ``natural problem solvers'', which may help to achieve solutions that approximate the optimal one \cite{BEDAU, EIBEN}. Another motivation, from a technical perspective, is the growing demand for algorithms to solve hard or intractable problems (with high time and space requirements). This implies a requirement for robust algorithms with satisfactory performance; consequently, there is a need for algorithms that apply to a wide range of problems, require less customization for specific optimization problems, and produce suitable (not necessarily optimal) solutions within a reasonable time \cite{EIBEN}. A third motivation is one that is present in every science: inquisitiveness. For example, evolutionary processes are topics of scientific investigation whose primary purpose is to understand evolution. From this viewpoint, evolutionary computing serves as a tool to conduct experiments that emulate processes from traditional biology. Evolutionary Algorithms may provide an answer to the challenge of applying automated solution methods to a broader set of problems \cite{BEDAU, EIBEN}.
It is challenging to find simpler ways to improve any kind of search process. However, a future full of possibilities may start by designing new models that capture the essence of vital mechanisms present in living systems. An example of this is Epigenetics, which studies the epigenome, the epigenome-influencing factors present in the environment, epigenetic changes, and the inheritable epigenetic changes in gene expression. The DNA consists of two long chains of nucleotides, on which thousands of genes are encoded. The complete set of genes in an organism is known as its genome. The DNA is spooled around proteins called histones. Both the DNA and histones are marked with chemical tags, also known as epigenetic tags. Some regulatory proteins, histones, and epigenetic tags form a second structural layer called the epigenome \cite{GHR, NIGMS, UTAH}. All chemical tags attached to the DNA and histones regulate the activity and expression of the genes within the genome. The biological mechanisms that attach epigenetic tags to or remove them from the genome are known as epigenetic changes or epigenetic mechanisms. Two examples of epigenetic changes are DNA Methylation and Histone Acetylation. When epigenetic tags bind to DNA and alter its function, they have ``marked'' the genome; such marks do not modify the DNA sequence. Instead, they modify the way DNA's instructions are used by cells \cite{GHR, NHGRI, NIGMS}.
Epigenetics also brings up the concept of environment. Some changes occur during individuals' lifespan, and environmental factors can originate those changes. Epigenetic changes remain as cells divide and, in some cases, might be inherited through generations. Environmental factors, such as an individual's diet and exposure to pollutants, may also impact the epigenome \cite{GHR, NIGMS, UTAH}. Epigenetics has a potential role in our understanding of inheritance, natural selection, and evolution. For example, a change in gene expression in the adult \emph{$P_0$} generation caused by the environment might also be carried over into the \emph{$P_1$} generation or beyond, leading to a kind of long-term memory \cite{UTAH}.
Epigenetics drives modern technological advances and also challenges and reconsiders conventional paradigms of evolution and biology. Due to recent epigenetic discoveries, early findings in genetics are being explored in different directions. Both genetics and epigenetics help to better understand the functions and relations that DNA, RNA, proteins, and the environment produce regarding heritage and health conditions. Epigenetics will not only help to understand complex processes related to embryology, imprinting, cellular differentiation, aging, gene regulation, and disease, but also to study therapeutic methods. The incorporation of epigenetic elements in EAs provides robustness, a beneficial feature in changing environments where learning and adaptation are required along the evolutionary process \cite{RICALDE, GHR, NIGMS, UTAH}. Adaptation is a crucial evolutionary process that adjusts the fitness of traits and species to become better suited for survival in specific environments \cite{JEFFERY}.
Epigenetic mechanisms like DNA Methylation and Histone Modification are vital memory process regulators. Their ability to dynamically control gene transcription in response to environmental factors promotes prolonged memory formation. The consistent and self-propagating nature of these mechanisms, especially DNA methylation, implies a molecular mechanism for memory preservation. Learning and memory are seen as permanent changes of behavior generated in response to a temporary environmental input \cite{ZOVKIC}. Organisms' ability to permanently adapt their behavior in response to environmental stimuli relies on functional phenotypic plasticity \cite{DEANS}. Epigenetic mechanisms intervene in biological processes such as phenotypic plasticity, memory formation between generations, and epigenetic modification of behavior influenced by the environment. These facts have led researchers to contemplate the usage of epigenetic mechanisms to improve evolutionary algorithms' performance in solving hard mathematical problems or real-world problems with continuous environmental changes \cite{RICALDE}.
\section{Contributions of the Thesis}
This thesis presents a technique that models the adaptive and learning principles of epigenetics, inspired by the epigenetic regulation process, a biological mechanism widely studied in the Epigenetics field. The dynamics of DNA Methylation and Histone Modification have been summarized into five fundamental elements: first, a metaphorical representation of {\em Epigenetic Tags}; second, a structural layer above the chromosome structure used to bind tags (the {\em Epigenotype}); third, a marker ({\em Marking Function}) that comprises three actions: add, modify, and remove tags located on alleles; fourth, a tags interpreter or decoder (the {\em Epigenetic Growing Function}); and fifth, an inheritance process ({\em Crossover Operator}) to pass such tags on to the offspring. In this way, the technique may reflect the adaptability of individuals during evolution, the ability of individuals to learn from the environment, the possibility of reducing the time spent in the search process, and the discovery of better solutions in a given time.
The epigenetic mechanisms DNA Methylation and Histone Modification have been characterized to abstract principles and behaviors (from the biological elements: epigenotype, tags, marking, reading, and inheritance) for the metaphorical design of epigenetic components. In this thesis, the {\em Epigenotype} structure represents the individuals' second layer, where tags are attached. The designed technique takes advantage of this layer to influence the direction of the search process. The {\em Epigenotype} is made up of tags. {\em Epigenetic Tags} are represented as binary strings of 0's and 1's; a tag encodes a rule for interpreting segments (alleles) of an individual's genome. Each tag contains two sections: the first denotes a binary operation applied to the individual's chromosome; the second contains the gene size, that is, the number of alleles on which the binary operation acts. Tags determine individuals' gene expression; in other words, how alleles are expressed, whether as 1 or 0, depending on the operation applied to them.
The {\em Marking Function} involves writing, modifying, and erasing tags, based on a metaphorical representation of writer, eraser, and maintenance enzymes. These actions are applied with a given probability to a single allele of a chromosome. These epigenetic changes are also framed into marking periods; such periods represent the environment, an abstract element that serves as a point of reference for assessing the results of this technique. This mechanism allows the marking process to be performed in defined periods during the evolution process. The {\em Epigenetic Growing Function} represents the behavior of reader enzymes that interpret the epigenetic code (tags) on genotypes and then build phenotypes; it plays the role of tags decoder or interpreter. After scanning the original individual's genome with its corresponding epigenotype (tags) and applying the operations to a copy of the chromosome, the {\em Epigenetic Growing Function} generates a resulting bit string used to build phenotypes, which are then evaluated and scored. The {\em Crossover Operator} keeps its essence, but it now includes tags as part of the exchange of genetic and epigenetic material between two chromosomes to create progeny. These epigenetic components are part of the proposed technique for the evolutionary algorithms' framework.
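A minimal sketch may help to fix ideas about how tags could decorate a genotype and how a growing function could decode them. The two-bit operation codes, the particular operation set, and the tuple representation below are illustrative assumptions for this sketch only, not the exact encoding defined by the technique; the essential point is that operations are applied to a copy of the chromosome, so the genotype itself is never altered.

```python
# Hypothetical operation set (an assumption for illustration): each
# two-bit code maps to a binary operation applied to a gene segment.
OPERATIONS = {
    "00": lambda bits: bits,                   # identity: express as-is
    "01": lambda bits: [1 - b for b in bits],  # NOT: flip the segment
    "10": lambda bits: [1] * len(bits),        # force expression to 1s
    "11": lambda bits: [0] * len(bits),        # silence: force 0s
}

def grow(genotype, tags):
    """Sketch of an epigenetic growing function: decode each tag
    (operation code, gene size) and apply the operation to a *copy*
    of the chromosome, leaving the genotype untouched."""
    phenotype = list(genotype)  # work on a copy
    pos = 0
    for op_code, gene_size in tags:
        segment = phenotype[pos:pos + gene_size]
        phenotype[pos:pos + gene_size] = OPERATIONS[op_code](segment)
        pos += gene_size
    return phenotype

genotype = [1, 0, 1, 1, 0, 0, 1, 0]
tags = [("00", 3), ("01", 3), ("11", 2)]  # identity, NOT, silence
phenotype = grow(genotype, tags)          # [1, 0, 1, 0, 1, 1, 0, 0]
```

Here the first three alleles pass through unchanged, the next three are flipped, and the last two are silenced, while `genotype` keeps its original values, mirroring the idea that epigenetic marks change how the DNA is read without changing the sequence.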
The epigenetic technique is implemented in classic Genetic Algorithms (GAs) and standard versions of \textsc{HaEa}\xspace (Hybrid Adaptive Evolutionary Algorithm, designed to adapt genetic operator rates while solving a problem \cite{GOMEZa, GOMEZb}). The epigenetic components described previously are embedded in the algorithms' logic. The epigenetic implementations are named ReGen GA (GA with regulated genes) and ReGen \textsc{HaEa}\xspace (\textsc{HaEa}\xspace with regulated genes). Finally, the technique is validated by comparing the classic versions of GA and \textsc{HaEa}\xspace against the epigenetic implementations of GA and \textsc{HaEa}\xspace through a set of experiments/benchmarks that determine the proposed approach's applicability. The comparison includes a set of experiments with binary- and real-encoded problems to identify the behavior of individuals under marking pressure. After comparing the results of standard EAs and epigenetic EAs, it can be noticed that the proposed technique finds solutions as good as those of standard EAs, and better ones in subsequent iterations. The optimal solution (global optimum) is not always reached, but the model still performs better in most cases.
\section{Overview}
Chapter~\ref{chapter2} presents the state of the art: an overview of optimization, evolutionary algorithms with an emphasis on genetic algorithms, and hybrid evolutionary algorithms. The chapter also describes the biological basis of this research and briefly reviews related work in the literature.
Chapter~\ref{chapter3} explains the proposed approach in detail. The chapter includes {\em Tags} encoding, selected operations, gene sizes, {\em Epigenotype} representation, the {\em Marking Function}, the {\em Epigenetic Growing Function}, the {\em Crossover Operator}, and a generic evolutionary algorithm pseudocode with the epigenetic components of the proposed technique.
Chapter~\ref{chapter4} introduces the implementation of the epigenetic technique on Genetic Algorithms. This chapter reports results on selected experimental functions with binary and real encoding for determining the model's applicability. Additionally, the chapter presents the analysis and discussion of results.
Chapter~\ref{chapter5} presents the implementation of the epigenetic technique on \textsc{HaEa}\xspace. The \textsc{HaEa}\xspace variations use two genetic operators: single-point Crossover (enhanced to include tags) and single-bit Mutation. Experimental functions with binary and real encoding are presented along with the analysis and discussion of results.
Chapter~\ref{chapter6} brings this book to a close with a short recapitulation and future research directions of this thesis.
Appendix~\ref{append} exhibits an example of the representation of individuals. The appendix includes individuals with a marked genotype in binary representation and its phenotypic interpretation. Appendix~\ref{appendB} includes samples of standard and ReGen EAs for statistical analysis.
\chapter{State of the Art}\label{chapter2}
Optimization is a significant paradigm with extensive applications. In many engineering and industrial tasks, some processes require optimization to minimize cost and energy consumption or to maximize profit, production, performance, and process efficiency. The effective use of available resources requires a shift in scientific thinking, since most real-world tasks involve far more complicated circumstances and variables that change a system's behavior. Optimization is all the more meaningful in practice because resources and time are always limited. Three components of an optimization process are modeling, the use of specific problem-solving strategies (the optimizer), and a simulator \cite{CAVAZZUTI, KOZIEL, NOCEDAL}.
A problem can be represented by using mathematical equations that can be transformed into a numerical model and be numerically solved. This phase must ensure that the right numerical schemes for discrete or continuous optimization are used. Another fundamental step is to implement the right algorithm or optimizer to find an optimal combination of design variables. A vital optimization capability is to produce or search for new solutions from previous solutions, which leads to the search process convergence. The final aim of a search process is to discover solutions that converge at the global optimum, even though this can be hard to achieve. In terms of computing time and cost, the most crucial step is using an efficient evaluator or simulator. In most cases, an optimization process often involves evaluating an objective function, which will verify if a proposed solution is feasible \cite{CAVAZZUTI, KOZIEL, NOCEDAL}.
Optimization includes problem-solving strategies in which randomness is present in the search procedure (stochastic) and mechanical rules without any random nature (deterministic). Deterministic algorithms work in a mechanical and predetermined manner without any arbitrary context: such an algorithm reaches the same final solution if it starts from the same state. Conversely, if there is some randomness in the algorithm, it usually reaches a different output every time it runs, even when it starts with the same input. Evolutionary Algorithms are examples of stochastic strategies \cite{CAVAZZUTI, KOZIEL}. Stochastic optimization methods seem to be the most innovative and advanced strategies for optimization. Compared to deterministic methods, they have both advantages and disadvantages: they are less mathematically complex and use randomness in the search scheme, but they converge more slowly toward the optimum \cite{CAVAZZUTI}. Optimization attempts to find the best possible solution among all available ones; it models a problem in terms of some evaluation function and then employs a problem-solving strategy to minimize or maximize the objective function. The evaluation function represents the quality of the given solutions. Some methodologies aim to optimize such functions, but most problems are so large that it can be impossible to guarantee whether the obtained solution is optimal \cite{BURKE}.
Developing problem solvers (algorithms) is a common task in Computer Science. Engineers have always looked at nature's solutions to mimic ``natural problem solvers'' which may help to achieve approximate solutions to the optimal one \cite{EIBEN}. When complex natural phenomena are analyzed in the context of computational processes, our understanding of nature changes, leading to the design of powerful bio-inspired techniques to solve hard problems. Life has shown outstanding growth in complexity over time; life exhibits complex adaptive behavior in many natural elements, starting from individual cells to any living being, and even to evolving systems. Artificial Life (ALife), at first, focuses on understanding the essential properties of any life form, then uses synthetic methods ({\em soft, hard, and wet}) to represent such systems \cite{BEDAU, MITPRESS}. A characteristic of computing inspired by nature is the metaphorical use of concepts, principles, and biological mechanisms. ALife concentrates on complex systems that involve life, adaptation, and learning. By creating new types of life-like phenomena, artificial life continually challenges researchers to review and think over what it is to be alive, adaptive, intelligent, creative, and whether it is possible to represent such phenomena. Besides, ALife aims to capture the simple essence of vital processes, abstracting away as many details of living systems or biological mechanisms as possible \cite{BEDAU}.
An example of this is the evolution process by natural selection, a central idea in biology. Biological evolution is the change in acquired traits over succeeding generations of organisms. The alteration of traits arises when variations are introduced into a population by gene mutation or genetic recombination, or erased by natural selection or genetic drift. Adaptation is a crucial evolutionary process in which the fitness of traits and species adjusts so that they become better suited for survival in specific environments. The environment acts to promote evolutionary change through shifts in development \cite{JEFFERY}. The evolution of artificial systems is an essential element of ALife, facilitating valuable modeling tools and automated design methods \cite{MITCHELL3}. Evolutionary Algorithms are used as tools to solve real problems and as scientific models of the evolutionary process. They have been applied to a large variety of optimization tasks, including transportation problems, manufacturing, and networks, as well as numerical optimization \cite{EIBEN, MITCHELL3}. However, the search for optimal solutions to some problem is not the only application of EAs; their nature as flexible and adaptive methods allows them to be applied in diverse areas, from economic modeling and simulation to the study of diverse biological processes during adaptation \cite{EIBEN}.
\section{Evolutionary Algorithms}
Evolutionary Algorithms (EAs) are a subset of population-based, metaheuristic optimization algorithms from the field of Evolutionary Computation, which uses mechanisms inspired by natural evolution, the process that drives biological evolution. There are many sorts of evolutionary algorithms; the most widely known are the Genetic Algorithm (GA), Genetic Programming (GP), Evolution Strategies (ES), and Hybrid Evolutionary Algorithms (HEA). The general fundamental idea behind all these EAs is the same: given a population of individuals within some environment with limited resources, only the fittest survive. To define a particular EA, some components or operators need to be specified. The most important are: the representation of individuals, an evaluation (fitness) function, an initial population of individuals, a parent selection process that considers the fitness of each individual, a transformation process that creates new individuals through crossover and mutation, and a survivor selection (replacement) mechanism \cite{CAVAZZUTI, EIBEN, HOLLAND, KOZIEL}.
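The components listed above can be assembled into a minimal generational EA. The sketch below is an illustrative assumption, not an algorithm from this thesis: it uses binary tournament selection, one-point crossover, bit-flip mutation, and full replacement with one elite individual, and it is demonstrated on the classic OneMax problem (fitness = number of ones); all parameter values are arbitrary choices for the example.

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, generations=60,
           p_mut=0.05, seed=42):
    """Minimal generational EA sketch: tournament parent selection,
    one-point crossover, bit-flip mutation, elitist replacement."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def tournament():
        # Binary tournament: pick two individuals, keep the fitter one.
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = [max(pop, key=fitness)]            # elitism: keep the best
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with per-gene probability p_mut.
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)  # OneMax: fitness is the number of ones
```

Swapping `sum` for another fitness function (and adjusting the representation) is all that is needed to target a different problem, which is what makes this skeleton so widely reusable.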
\subsection{Genetic Algorithms}
GAs are adaptive heuristic search computational methods based on genetics and on the process that drives biological evolution, namely natural selection \cite{EIBEN}. Holland \cite{HOLLAND} presented the GA as an abstraction of the biological evolution process and formulated a theory about adaptation. Holland intended to understand adaptation and to discover ways in which natural adaptation mechanisms might be brought into computational methods. The traditional GA is the most used EA for solving constrained and unconstrained optimization problems \cite{CAVAZZUTI, HOLLAND, KOZIEL}, and GAs are today among the most prominent and widely used evolution models in artificial life systems. They have been implemented both as scientific models of evolutionary processes and as tools for solving real problems \cite{MITCHELL3}.
A Genetic Algorithm explores a space of chromosomes, and each chromosome denotes a candidate solution to a particular problem. Bit strings usually represent chromosomes in a GA population; each bit position (locus) in the chromosome has one of two possible values (alleles), 0 and 1. These concepts are brought analogically from biology, but GAs use a simpler abstraction of those biological elements \cite{MITCHELL, MITCHELL2}. The most important elements in defining a GA are the encoding scheme (which hugely depends on the problem), an initial population, a parent selection mechanism, variation operators such as recombination and mutation, and a replacement mechanism \cite{EIBEN, MITCHELL2}. The GA also requires a fitness objective function that assigns a score to each chromosome in the current population \cite{MITCHELL, MITCHELL2}. Once an optimization problem has been set up, the search process takes place by evaluating the population of individuals during several iterations. In the course of the evolution process, the chromosomes change from one population to another. An individual's performance depends on how well it satisfies the specified problem with its current string schema (most commonly over the binary alphabet \{0,1\}). Genetic algorithms obey a population evolution model in which the fittest survive \cite{HOLLAND}.
\subsubsection{Binary Codification}
The binary encoding uses the binary digit, or bit, as the fundamental unit of information; there are only two possibilities, $0$ or $1$. The genotype simply consists of a string or vector of ones and zeroes. For a particular problem, it is important to decide the string's length and how it will be interpreted to produce a phenotype. When deciding the genotype-to-phenotype mapping for a problem, it is essential to ensure that the encoding allows every possible bit string to express a valid solution to the given problem \cite{EIBEN}.
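One common genotype-to-phenotype mapping, shown here only as an illustrative example (the problems in this thesis define their own interpretations), treats the bit string as an unsigned integer and rescales it linearly into a target interval. Because every bit string maps to a value inside the interval, all genotypes express valid solutions, satisfying the requirement above.

```python
def decode(bits, lo, hi):
    """Interpret a bit string as an unsigned integer and rescale it
    linearly into the interval [lo, hi]. Every possible bit string
    maps to a valid value, so no genotype is wasted."""
    as_int = int("".join(map(str, bits)), 2)
    return lo + as_int * (hi - lo) / (2 ** len(bits) - 1)

print(decode([0, 0, 0, 0], -1.0, 1.0))  # -1.0  (all zeros -> lower bound)
print(decode([1, 1, 1, 1], -1.0, 1.0))  # 1.0   (all ones  -> upper bound)
```

Longer strings give finer resolution: with $k$ bits the interval is divided into $2^k - 1$ equal steps, which is one way the choice of string length matters.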
\subsubsection{Real Codification}
Real numbers represent any quantity along a number line. Because reals lie on a number line, their sizes are comparable: one real can be greater or less than another, and reals can be used in arithmetic operations. Real numbers ($\mathbb{R}$) include the rational numbers ($\mathbb{Q}$), which include the integers ($\mathbb{Z}$), which include the natural numbers ($\mathbb{N}$). Examples: $3.44, -56.1, 2, 3/6, -1$. When values come from a continuous rather than a discrete distribution, usually the most sensible way to represent a problem's candidate solution is through real values. For example, they may represent physical quantities such as the dimensions of an object. The genotype for a solution with {\em k} genes is a vector $(x_1,...,x_k)$ with $x_i \in \mathbb{R}$ \cite{EIBEN}.
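For real-coded genotypes, a typical variation operator (shown here as an illustrative sketch, not an operator defined by this thesis) perturbs each gene with Gaussian noise and clips the result back into the feasible interval. The step size `sigma` and the bounds below are arbitrary example values.

```python
import random

def gaussian_mutation(genotype, sigma=0.1, bounds=(-5.0, 5.0), seed=7):
    """Perturb each real-valued gene with Gaussian noise N(0, sigma)
    and clip the result into [lo, hi] to keep the solution feasible."""
    rng = random.Random(seed)
    lo, hi = bounds
    return [min(hi, max(lo, x + rng.gauss(0.0, sigma))) for x in genotype]

parent = [0.5, -1.2, 3.3]
child = gaussian_mutation(parent)  # each gene shifted slightly, still in bounds
```

Unlike bit-flip mutation, this operator exploits the ordering of the reals: small `sigma` values produce offspring close to the parent, which supports fine-grained local search near good solutions.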
\subsection{Hybrid Evolutionary Algorithms}
Hybridization of evolutionary algorithms is growing in the EA community due to its capability of handling several real-world problems involving complexity, changing environments, imprecision, uncertainty, and ambiguity. For diverse problems, a standard evolutionary algorithm might be efficient in finding solutions; however, as stated in the literature, standard evolutionary algorithms may fail to obtain optimal solutions for many types of problems. This exposes the need for creating hybrid EAs, mixed with other heuristics. Some possible motives for hybridization include improving the performance of evolutionary algorithms (for example, the speed of convergence), enhancing the quality of the solutions they obtain, and incorporating evolutionary algorithms as part of a larger system \cite{EIBEN, GROSAN}.
There are many ways of mixing techniques or strategies, from population initialization to offspring generation. Populations may be initialized by consolidating previous solutions, using heuristics, or local search, among others. Local search methods may be applied to initial population members or to the offspring. EA hybridization may involve operators from other algorithms, penalty-reward mechanisms, or the addition of domain-specific knowledge to the search process. The balance between exploitation and exploration produced during execution determines an evolutionary algorithm's behavior. Adaptive evolutionary algorithms produce exploitation/exploration relations that may avoid premature convergence and improve final results. Incorporating problem-specific knowledge for particular problems can also enhance evolutionary algorithms' performance \cite{EIBEN, GROSAN}.
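One of the most common hybridization patterns, applying a local search step to offspring before they re-enter the population (the so-called memetic or Lamarckian style), can be sketched as follows. This is a generic illustration under assumed parameters, not a specific published HEA: a greedy bit-flip local search that only ever accepts improvements, so the refined individual is never worse than the original offspring.

```python
import random

def local_search(x, fitness, tries=10, rng=random):
    """Greedy bit-flip local search: flip one random bit at a time and
    accept the flip only if it improves fitness, so the result is
    guaranteed to be at least as fit as the input individual."""
    for _ in range(tries):
        i = rng.randrange(len(x))
        y = x[:i] + [1 - x[i]] + x[i + 1:]
        if fitness(y) > fitness(x):
            x = y
    return x

# In a memetic EA, each offspring would be refined before evaluation:
rng = random.Random(3)
offspring = [rng.randint(0, 1) for _ in range(12)]
refined = local_search(offspring, sum, tries=24, rng=rng)
```

Here the EA supplies global exploration while the local search supplies exploitation, which is exactly the exploitation/exploration trade-off discussed above; the extra fitness evaluations spent in `local_search` are the usual cost of this design choice.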
As described in the literature, various techniques, heuristics, or metaheuristics are used to improve the general efficiency of evolutionary algorithms. Common hybrid strategies are compiled as follows: hybridization between two EAs, neural network-assisted EAs, fuzzy logic-assisted EAs, particle swarm optimization (PSO)-assisted EAs, ant colony optimization (ACO)-assisted EAs, bacterial foraging optimization-assisted EAs, and hybridization between EAs and other heuristics (such as local search, tabu search, simulated annealing, hill climbing, dynamic programming, and the greedy randomized adaptive search procedure, among others) \cite{GROSAN}.
\section{Overview of Epigenetics}
How living beings (particularly humans) respond to their environment is determined by inheritance, and the different experiences lived during development. Inheritance is traditionally viewed as the transfer of variations in DNA (Deoxyribonucleic Acid) sequence from parent to child. However, another possibility to consider in the gene-environment interaction is the trans-generational response. This response requires a mechanism to transmit environmental exposure information that alters the gene expression of the next generations \cite{PEMBREY}.
Two examples of trans-generational effects were found in Överkalix, a remote town in northern Sweden, and in the Netherlands. The study conducted in Överkalix with three generations born in 1890, 1905, and 1920 revealed that the high or low availability of food for paternal grandfathers and fathers (during childhood or their slow growth period) influenced the risk of cardiovascular disease and diabetes mellitus mortality in their male children and grandchildren \cite{KAATIa, KAATIb, PEMBREY}. On the other hand, the study carried out on a group of people who were in gestation or childhood during the famine experienced between the winter of 1944 and 1945 in the Netherlands showed that people with low birth weight were more likely to develop health problems such as diabetes, hypertension, obesity, or cardiovascular disease during their adult life. The research concludes that famine during gestation and childhood has life-long effects on health, and that such effects vary depending on the timing of exposure and the evolution of the recovery period \cite{KYLE}.
In this sense, gene expression can be affected in such a way that it reflects the habits that shape an individual's lifestyle; even the ``experiences'' of a generation might be passed down to progeny that have not necessarily lived in conditions similar to their parents'. It is in this context that epigenetics, the area that studies the modifications that affect gene expression, offers many answers regarding epigenetic regulation (which includes gene activation and the recruitment of enzymes that add or remove epigenetic tags) and the inheritance of genetic conditions (susceptibility to disease) by offspring \cite{GHR, UTAH}. As an organism grows and develops, chemical markers silence genome sections at strategic times and specific locations. Epigenetics is the study of those chemical reactions and the factors that influence them \cite{UTAH}. Certain factors, conditioned by habits and environment, are capable of interacting with genes and modifying their function without altering their molecular composition (the DNA base sequence) \cite{GHR, LOBO}.
In 1942, Waddington described epigenetics as ``the branch of biology which studies the causal interactions between genes and their products, which bring the phenotype into being'' \cite{WADDINGTON}. However, the term epigenetics has come to be understood more broadly, recognizing its practical and theoretical importance in biology without leaving aside Waddington's conception and the different concepts that have emerged around the subject \cite{TAMAYO}, such as the recognition of alternative development pathways, the existence of complex networks in development processes, phenotypic stability, plasticity, and the influence of the environment on organisms throughout their development \cite{JABLONKA, TAMAYO}. Today, Waddington's views on epigenetics are closely associated with phenotypic plasticity, which is a gene's ability to produce multiple phenotypes. Identifying regulatory interactions, gene to gene and gene to protein, explains the gene expression changes that Waddington named epigenetics \cite{DEANS}.
Holliday (1994) offered two definitions of epigenetics. The first describes epigenetics as ``the study of the changes in gene expression, which occur in organisms with differentiated cells, and the mitotic inheritance of given patterns of gene expression'' \cite{HOLLIDAY}. The second states that epigenetics is ``nuclear inheritance, which is not based on differences in DNA sequence'' \cite{HOLLIDAY}. Holliday thus redefined epigenetics in a way that was more precise and deliberately focused on the inheritance of expression states \cite{DEANS}. Following these two definitions, the literature refers to gene regulation (Waddington's definition) and epigenetic inheritance (Holliday's definition) as intragenerational and transgenerational epigenetics, respectively, and the study of epigenetics involves both. The former refers to gene expression modifications through epigenetic marks (e.g., DNA methylation and histone modification) that result in a modified phenotype within an individual's lifespan. The latter refers to the inheritance of a modified phenotype from parental generations with no DNA sequence changes. The same epigenetic markers may be responsible in both cases; the transgenerational category, however, focuses on the act of inheritance \cite{BURGGREN}.
The NIH Roadmap Initiative, in turn, defines the field as follows: ``Epigenetics is an emerging frontier of science that involves the study of changes in gene activity regulation and expression that are not dependent on gene sequence. Epigenetics refers to both heritable changes in gene activity and expression (in the progeny of cells or individuals) and also stable, long-term alterations in the transcriptional potential of a cell that are not necessarily heritable'' \cite{NIEHS}. Both Waddington's and Holliday's definitions are visible in this contemporary description.
Today, a wide variety of behaviors and health indicators in people are linked to epigenetic mechanisms \cite{WEINHOLD}. Epigenetic processes are natural and essential to many organism functions; however, if epigenetic misregulation occurs (due to unfavorable environmental conditions, for example), significant adverse effects on health and behavior can follow. Types of epigenetic modification include methylation, acetylation, phosphorylation, ubiquitylation, and sumoylation; among the best known and most widely studied mechanisms are DNA methylation and histone modification (see Fig.~\ref{c2fig1}). Other epigenetic modifications and considerations may emerge as research in the area progresses \cite{GHR, NIGMS, WEINHOLD}. In fact, any mechanism that places regulatory data on genes without altering the nucleotide sequence is considered ``epi'', ``on top of'', or ``in addition to'' genetics. Examples of how epigenetic mechanisms affect gene expression are seen in processes like gene imprinting, cellular differentiation, and gene regulation during the lifetime \cite{GHR, NIGMS, UTAH}.
\subsection{Epigenetic Mechanisms}
Humans have 23 pairs of chromosomes in each body cell; each pair has one chromosome from the mother and another from the father. A chromosome is composed of DNA and proteins. The DNA consists of two long chains made up of nucleotides, on which thousands of genes are encoded. The complete set of genes in an organism is known as its genome. The DNA is spooled around proteins called histones. Both the DNA and the Histones are marked with chemical tags, also known as epigenetic tags. The histones and the epigenetic tags form a second structural layer that is called the epigenome. The epigenome (epigenotype) comprises all chemical tags adhered to the entire DNA and Histones as a way to regulate the genes' activity (gene expression) within the genome. The biological mechanisms that involve attaching epigenetic tags to or removing them from the genome are known as epigenetic changes or epigenetic mechanisms \cite{GHR, NIGMS, UTAH}.
\begin{figure*}
\centering
\includegraphics[width=6.5in]
{imagesThesis/dnahistone.jpg}
\includegraphics[width=6.5in]
{imagesThesis/Metilacion.png}
\caption{Epigenetic Mechanisms.}
\label{c2fig1}
\end{figure*}
\subsubsection{DNA Methylation}
The DNA methylation mechanism governs the addition or elimination of methyl groups ({\em CH3}), predominantly where cytosine bases occur consecutively \cite{WEINHOLD}. In other words, chemical markers called methyl groups bind to cytosines at {\em CpG sites} in the DNA. Methyl groups silence genes by disrupting the interactions between DNA and the proteins that regulate it \cite{NHGRI}. Genome regions with a high density of {\em CpGs} are known as {\em CpG islands}, and DNA methylation of these islands leads to transcriptional repression \cite{AARON}.
Methylation is sparse but globally distributed across {\em CpG sequences} throughout the genome, except for {\em CpG islands}: specific stretches (approximately one kilobase in length) with high {\em CpG content}. Methylation of these sequences can lead to improper gene silencing, such as the silencing of tumor suppressor genes in cancer cells. Studies confirm that methylation close to gene promoters differs considerably among cell types, with methylated promoters associated with low or no transcription \cite{PHILLIPS}.
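Since a {\em CpG site} is simply a cytosine immediately followed by a guanine, locating such sites and scoring CpG density is straightforward to express in code. The sketch below is an illustrative toy, not a published island-detection algorithm; the function name and the use of the observed/expected CpG ratio as a score are our own choices.

```python
# Illustrative toy (not from the cited works): count CpG sites and score
# CpG density in a DNA string. A "CpG site" is a C immediately followed
# by a G; CpG islands are regions where such sites are unusually dense.
def cpg_stats(seq):
    seq = seq.upper()
    sites = [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]
    gc_fraction = (seq.count("C") + seq.count("G")) / len(seq)
    # Observed/expected CpG ratio; dense islands score well above 1.
    c, g = seq.count("C"), seq.count("G")
    obs_exp = len(sites) * len(seq) / (c * g) if c and g else 0.0
    return sites, gc_fraction, obs_exp

sites, gc, ratio = cpg_stats("ATCGCGTACGCGAT")
```

In real analyses these statistics are computed over sliding windows and combined with a minimum-length criterion (on the order of one kilobase, as noted above).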
DNA methylation represents the best-characterized epigenetic mechanism. It is established on the genome by the {\em Dnmt3a} and {\em Dnmt3b} methyltransferases, enzymes that catalyze the attachment of a methyl group to the fifth carbon ({\em C5}) of the cytosine DNA base. DNA methylation is then maintained when cells divide by the {\em Dnmt1} enzyme. Together, these enzymes guarantee that DNA methylation markers are fixed and passed on to succeeding cellular generations. In this way, DNA methylation is a cellular memory mechanism that transmits essential gene expression programming data \cite{PAIGE}.
\subsubsection{Histone Modification}
Histone modification is a covalent posttranslational change to histone proteins; it includes methylation, acetylation, phosphorylation, ubiquitylation, and sumoylation, all of which influence the DNA transcription process. Histone acetyltransferases, for example, are responsible for histone acetylation; these enzymes attach acetyl groups to lysine residues on histone tails. In contrast, histone deacetylases ({\em HDACs}) remove acetyl groups from acetylated lysines. Usually, the presence of acetylated lysine on histone tails leads to an accessible chromatin state that promotes transcriptional activation of selected genes; conversely, deacetylation of lysine residues leads to condensed chromatin and transcriptional inactivation \cite{GILBERT}.
Histone modification affects the DNA indirectly. DNA in cells is wound around the histones, which form reel-like structures that keep DNA molecules ordered in the form of chromosomes within the cell's nucleus, as depicted in Fig.~\ref{c2fig1}. When histones carry chemical labels, other proteins in the cell detect these markers and determine whether the DNA region is accessible or ignored in a particular cell \cite{NHGRI}.
\subsection{Gene Regulatory Proteins}
Epigenetic regulation comprises the mechanisms by which epigenetic changes such as methylation and acetylation can impact the phenotype. Regulatory proteins conduct the epigenetic regulation process. These proteins have two main functions: the first involves switching specific genes on or off (gene activation); the second is the recruitment of enzymes that add, read, or remove epigenetic tags from genes \cite{UTAH}.
\subsubsection{Enzymes: Writers, Readers, and Erasers}
Gene regulatory proteins recruit enzymes to add, read, and remove epigenetic tags; these processes act on the DNA, the histones, or both, as explained previously \cite{UTAH}. These enzymes can be seen as epigenetic tools, a family of epigenetic proteins known as readers, writers, and erasers \cite{RAO}. Epigenetics involves a highly complex and dynamically reversible set of structural modifications to DNA and histone proteins at the molecular level; these modifications constitute a second layer on the chromatin structure.
The progress of epigenetic research has allowed the identification of crucial players performing these changes. These chemical alterations are catalyzed by enzymes referred to as epigenetic modifiers of the chromatin. The different enzymes that catalyze these modifications portray the epigenetic space's diversity and the complexity of gene expression regulation \cite{RAO}.
These epigenetic modifiers are classified as ``writers'', ``readers'', and ``erasers''. {\em Writers} add chemical units to DNA or histones, ranging from a single methyl group to ubiquitin proteins. For example, DNA methyltransferases ({\em DNMTs}) are responsible for introducing the {\em C5-methylation} on {\em CpG} dinucleotide sequences. Such molecular structures not only influence the relation between DNA and histone proteins but also recruit non-coding RNAs ({\em ncRNAs}) and chromatin remodellers. {\em Readers}, in turn, are specialized domain-containing proteins that recognize and interpret those modifications: through so-called reader modules, they bind specific modification codes or marks within the chemically modified nucleic acids and proteins, induce conformational changes in chromatin, and provide signals that control chromatin dynamics. Finally, {\em erasers}, a dedicated type of enzyme specialized in removing chemical markers, guarantee that the process is reversible: a group of eraser enzymes catalyzes the removal of the written information, ensuring a balanced and dynamic process \cite{RAO, PAIGE, UTAH}.
\subsubsection{Gene Silencing and Repression}
As explained above, epigenetics means ``upon'', ``above'', or ``over'' genetics. It describes chemical reactions, resulting from epigenetic modifications, that alter DNA's physical structure without altering its nucleotide sequence. These modifications make genes more or less accessible during the transcription process. In general, environmental conditions influence the interactions and chemical reactions of the epigenotype, which can mark genes with specific chemical labels that direct actions such as silencing, activation, or repression of a gene, translating into a modification of its function \cite{GHR, NIGMS, UTAH}.
Epigenetic mechanisms, in particular histone and DNA modifications, go beyond the idea of switching genes off and on. Gene silencing refers to a mechanism whereby large regions of a chromosome become transcriptionally inactive because the compact wrapping of histone proteins restricts the activity of DNA polymerases, the enzymes that place nucleotide units into a new chain of nucleic acid \cite{SANTER}. Highly packed DNA regions are known as heterochromatin, while relatively extended DNA forms what is known as euchromatin. For a long time, heterochromatin was assumed to be transcriptionally inactive compared with euchromatin; nevertheless, many recent studies have questioned this conception of transcriptionally silent heterochromatin \cite{SHAFA}. Those studies indicate that the equivalence of open chromatin with active transcription and of compact chromatin with inactive transcription does not hold for all genes: active genes have been located in tight chromatin regions and inactive genes in open chromatin regions \cite{CLARK}.
Gene repression is the inactivation of individual genes whose products are necessary to preserve cell functions, such as producing essential enzymes or cofactors \cite{SANTER}. Preventing the formation of the basal transcription machinery is considered the first mechanism through which gene expression is down-regulated; this type of transcriptional repression is obtained by directly altering the functional interactions of a transcription factor. Another repression mechanism is the inactivation of the transcriptional function of an activator: through different mechanisms (for example, protein-protein interaction, covalent modification, and degradation), the repressor can affect an activator's capacity. These repression mechanisms require repressors to bind components of the basal transcriptional machinery or transcriptional activators, and epigenetic modifications that affect the chromatin structure close to the target genes may trigger them \cite{CESARO}.
The recruitment of histone acetyltransferase enzymes ({\em HATs}) allows the acetylation of the {\em H3} and {\em H4} histone tails. This mechanism weakens the interactions between DNA and histones, and the result is a relaxed structure surrounding the core promoter that is accessible to the general transcription machinery. Activator proteins interact with the general transcription factors to intensify DNA binding and the initiation of transcription; recruiting activator proteins thus helps raise the transcription rate, leading to gene activation \cite{CLARK}. Methylation is related to both gene activation and repression, with the outcome depending on the degree of methylation: inactive genes or silent chromosome regions are highly methylated in their {\em CpG islands} compared with the same gene on the active chromosome \cite{FESTEN, SHAFA}.
There are other important considerations around the expression of genes. Gene regulation mechanisms (silencing, repression, activation) depend mostly on the cell's epigenetic condition, which controls the timing and degree of gene expression at a specific moment \cite{SHAFA}. Silencing is not just about switching chromatin areas entirely off, nor does repression fully suppress a gene's function. The dynamics of these mechanisms also involve decreasing the level of transcription by gradually reducing gene expression, depending on where tags bind and how many tag groups are attached. It is therefore possible to find sections of the chromosome where gene expression is not totally inactivated but strongly reduced and, in the same way, active genes and regions with moderated expression levels. The binding of proteins to particular DNA elements or regulatory regions to control transcription, and the mechanisms that modulate translation of {\em mRNA}, may also be moderated \cite{CESARO, CLARK, FESTEN, SHAFA}.
\subsection{Epigenetic Memory and Adaptation}
Today, epigenetic modifications such as DNA methylation and histone tail modifications are recognized as essential regulators in the consolidation and propagation of memory. The ability of these mechanisms to dynamically control gene transcription in response to environmental factors underlies the consolidation of long-term memory. Additionally, DNA methylation in particular has a persistent and self-propagating nature that suggests a molecular mechanism for memory maintenance \cite{ZOVKIC}.
Learning and memory are seen as permanent alterations of behavior produced as a result of a changing environment \cite{ZOVKIC}. For a temporary stimulus to promote long-term behavioral change, cells must experience several cellular stimuli and molecular modifications that establish a lasting memory. The molecular foundation for memory preservation becomes apparent when short-lived environmental stimuli induce the self-perpetuating biochemical reactions required to maintain molecular changes. Holliday \cite{HOLLIDAYb} proposed that epigenetic mechanisms, particularly DNA methylation, possess the biochemical properties required to transmit memories throughout life. DNA methylation is recognized as a robust and self-perpetuating regulator of cellular identity through the establishment and spread of persistent heritable changes in gene expression across cell divisions \cite{BIRD}. This suggests that epigenetic mechanisms can provide a suitable molecular baseline for memory consolidation and maintenance.
In this sense, epigenetic tags help memorize, over the long term, how genes should be expressed, and changes in gene expression can lead living beings to adapt to their environment. Epigenetic markers represent a type of cellular memory: a cell's epigenetic profile, a collection of tags describing the expression states of genes and the totality of the signals received during an individual's existence \cite{UTAH}. Adaptation is vital in the development process of organisms. The fitness of traits and species is continuously adjusted, so that individuals are better suited to survive in particular environments and qualified to face different conditions \cite{JEFFERY}. The environment continually acts to promote transformation through changes in development. An organism's ability to permanently adapt its behavior to environmental changes depends entirely on functional phenotypic plasticity and the genome's capability to produce multiple phenotypes. Propagation of expression states and cell memory is part of the conception of heritable memory, an explicit property of epigenetic gene regulation \cite{DEANS, UTAH}.
\subsection{Epigenetics and Evolution}
Nowadays, epigenetics is known not only for its relevance to medicine, farming, and species preservation, but also because studies have revealed the importance of epigenetic mechanisms in inheritance and evolution, particularly by evidencing epigenetic inheritance in systems where non-DNA alterations are transmitted between cells. Organism diversity also broadens the theory of heredity and defies the current gene-centered, neo-Darwinian version of Darwinism \cite{JABLONKA}. Epigenetics as a science does not intend to oppose early ideas of evolutionary theory; in fact, some authors suggest considering modern epigenetics as neo-Lamarckian \cite{PENNY} or close to the original argument proposed by Baldwin (known as the Baldwin effect) \cite{EGAS}. Early authors were undertaking studies that are still expanding our knowledge of inheritance and evolution, and the epigenetics research community continues learning through epigenetics studies \cite{PENNY}, even though the idea of epigenetic inheritance and its influence on evolution remains controversial \cite{BURGGREN}. This set of theories, along with others like the Mendelian principles and the Hardy-Weinberg law, tries to explain inheritance and the diversity of living organisms in terms of the heredity of genetic traits from parent organisms to their children \cite{BURGGREN, JABLONKA, PENNY}; those theories rest on factors or conditions that may be statistical or environmental, or relate to needs and survival, among others. Despite this, some researchers think it is not helpful to attribute modern ideas to early researchers, since it can be misleading \cite{PENNY}.
A fundamental principle of evolution is that natural selection alters organisms over long periods by shaping the traits of populations. Natural selection has no particular inclination: it acts on organisms with poor or improved fitness, which derives from the accumulation of mutations that can enhance the resulting phenotypic modifications. However, phenotypic changes at the population level and beyond generally occur over many thousands of generations, as a genotype with a modified phenotype of higher fitness slowly establishes itself in the general population or a genotype with lower fitness is eliminated from it \cite{BURGGREN}. Epigenetic inheritance changes this evolutionary perspective. As mentioned previously, the genome transforms slowly through the processes of random variation and natural selection, and it takes a large number of generations for a genetic trait to prevail in a population. The epigenome, in contrast, changes rapidly as a consequence of being affected by signals from the environment. Epigenetic changes may occur in many organisms at one time; through epigenetic heritage, some parental experiences may pass on to the next generations; and the epigenome remains flexible as the environment changes. Epigenetic inheritance thus allows an organism to continuously adjust its gene expression to suit its environment without affecting its DNA code \cite{UTAH}.
The increment of individuals' fitness in a population may derive from epigenetic or genetic changes over thousands and thousands of generations. However, the impact of epigenetic inheritance might not only be evidenced in later generations but also be perceived immediately in a population. The inheritance of epigenetically shaped phenotypes may result from the continuous inheritance of epigenetic tags over generations. An epigenetically inherited phenotype does not need to be fixed in the genotype to have a prominent influence on the evolution of traits. Instead, what directs genotype variation in a population is the individuals' capability to survive despite unevenly distributed epigenetic tags that produce suitable or unsuitable phenotypes subject to natural selection. Intragenerational and transgenerational epigenetics, therefore, are not mutually exclusive: an alteration in gene expression in the adult generation phenotype {\em P0} by DNA methylation and histone acetylation, for example, might also be passed on to the {\em P1} generation or beyond \cite{BURGGREN}.
\section{Related Work}
There is a predominant focus in the literature on the in-depth study of epigenetic mechanisms, especially those that may be associated with the diagnosis, prevention, and treatment of diseases, with apparently less emphasis on what these mechanisms do at the phenotypic level of an individual, particularly between generations. To identify the most recent achievements around this topic in the evolutionary algorithms community, we surveyed models and strategies focused on epigenetic changes occurring during one generation's life span, transmitted through generations, or acting at the individual level. Those models have been developed with different approaches. Several authors have worked on hybrid strategies to improve the solution capacity of population-based global search methods, so that the adaptive behavior of populations can be rapidly manifested under selective pressure. Such strategies aim to address a wider variety of computational problems by mimicking biological mechanisms or social changes. Below, some approaches are briefly described; they entail adaptation and learning behaviors, the two characteristics that this thesis studies.
Dasgupta et al. (1993) \cite{DASGUPTA} introduce the structured Genetic Algorithm (sGA). Though this strategy does not mention epigenetic mechanisms, it involves gene activation, an essential mechanism in gene regulation that controls gene states: repression (different from silencing) and expression. The genetic model includes redundant genetic material and a gene activation mechanism that uses a multi-layered (hierarchical) structure for the chromosome. Each gene in the higher levels acts as a switchable pointer that activates or deactivates sets of lower-level genes. At the evaluation stage, only the active genes of an individual are translated into phenotypic functionality. The model also includes a long-term distributed memory within the population, enabling adaptation in non-stationary environments. Its main disadvantage is the use of a multi-level representation with optional search spaces that can be activated at the same time, leading to bit strings that may be longer than the problem solution requires.
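The core of the sGA, a hierarchical chromosome in which high-level switch genes gate blocks of low-level genes, can be sketched as follows (a simplified illustration with our own naming, not Dasgupta et al.'s implementation):

```python
# Sketch of a two-level structured-GA chromosome: each high-level bit
# switches an entire block of low-level genes on or off; only active
# blocks are expressed into the phenotype at evaluation time.
def express(control_bits, gene_blocks):
    phenotype = []
    for switch, block in zip(control_bits, gene_blocks):
        if switch:                    # active pointer: expose this block
            phenotype.extend(block)
    return phenotype

blocks = [[1, 0, 1], [0, 0, 0], [1, 1, 1]]
phenotype = express([1, 0, 1], blocks)   # middle block stays dormant
```

Dormant blocks travel with the chromosome as redundant material, which is what provides the long-term memory: a later flip of a switch bit can re-expose a previously useful block.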
Tanev and Yuta (2008) \cite{TENEV} describe the first model mentioning epigenetics in the EA community, focusing on an extended predator-prey pursuit problem. They present individuals with a double cell, comprising a somatic cell and a germ cell, each with its respective chromatin granules. In the simulation, they use histone modification to evidence the role this mechanism plays in regulating gene expression and memory (epigenetic learning, EL). A Genetic Programming (GP) algorithm defines a set of stimulus-response rules to model the reactive behavior of predator agents, and the beneficial effect of EL on GP's performance characteristics is verified on the evolution of the predator agents' social behavior. They report that EL roughly doubles the computational performance of the implemented GP. Additionally, the simulation evidences the phenotypic variety of genotypically similar individuals and their protection from the negative effects of genetic recombination. Phenotypic preservation is achieved by silencing particular genotypic regions and turning them on when the probability of expressing beneficial phenotypic traits is high.
Periyasamy et al. (2008) present the Epigenetic Algorithm (EGA), based on the intragenerational adaptation of biological cells, the optimization strategy that bio-molecules use for their functional processes. They adapt concepts of bio-molecular degradation and autocatalysis, which are ubiquitous cellular processes and pivotal for the adaptive dynamics and evolution of an intelligent cellular organization. The algorithm is used to optimize organizations' internal structures, with a focus on the autopoietic behavior of the systems. Additionally, the authors present a simulation with agent-based cell modeling. This artificial model, called SwarmCell, is built as an autopoietic system representing a minimal biological cell. The authors state that their epigenetic algorithm can prove to be a fundamental extension to existing evolutionary systems and swarm intelligence models, and they discuss improving problem-solving capabilities by implementing epigenetic strategies in their model \cite{PERIYA}.
Epigenetic Tracking, by Alessandro Fontana (2009), is a mathematical model of biological cells \cite{FONTANA}. The computer simulation generates complex three-dimensional cellular structures with the potential to reproduce the complexity typical of living beings. The simulation shows a homogeneous distribution of stem cells, which are dynamically and continuously created during development from non-stem cells. The model uses an integer-number genetic encoding scheme controlled by a regulatory network with epigenetic regulatory states (on and off) to represent signals in distinct development phases. A cellular grid and a GA operating on the genome allow arbitrary two- or three-dimensional shapes to be generated.
The EpiAL model by Sousa and Costa (2010) \cite{SOUSAa, SOUSAb} is based on two main entities, agents and the environment, for which epigenetics is considered the ability of an agent to modify its phenotypic expression due to environmental conditions. An agent has regulatory structures that, given inputs from the environment, can act upon the genotype, regulating its expression. Epigenetic marks are also inherited between generations through the transmission of partial or full markers (methylation values off/on), allowing acquired traits (methyl marks) to be transmitted through generations of agents. The environment is modeled as a two-dimensional grid of locations, traversable or separated by walls. Each location has different characteristics (temperature, light, and food) that can change gradually, and the agents intend to survive and procreate. Agents' behavior is encoded in binary strings, methylation marks regulate the activation of genes, and an EA controls the survival and reproduction of the different organisms. Non-epigenetically modified populations find it difficult to survive in changing environments, while epigenetically modified populations are capable of regulating themselves under changing conditions.
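The regulatory idea behind EpiAL, methylation marks gating gene expression and being partially transmitted to offspring, can be sketched like this (a simplified illustration with invented names; the actual model couples the marks to environmental inputs through regulatory structures):

```python
import random

# Toy sketch of EpiAL-style regulation: a methyl mark of 1 silences the
# corresponding gene, and each mark is passed to the offspring only with
# a given transmission rate (partial epigenetic inheritance).
def expressed(genotype, methyl):
    # Only genes without a methyl mark contribute to the phenotype.
    return [g for g, m in zip(genotype, methyl) if not m]

def inherit_marks(methyl, rate, rng=random):
    # Each mark survives into the offspring with probability `rate`.
    return [m if rng.random() < rate else 0 for m in methyl]

genotype = [1, 0, 1, 1, 0]
methyl = [0, 1, 0, 1, 0]          # genes at positions 1 and 3 silenced
active = expressed(genotype, methyl)
child_marks = inherit_marks(methyl, rate=0.5)
```

With full transmission (`rate=1.0`) offspring inherit the parent's entire epigenetic profile; with `rate=0.0` every mark is reset, reducing the model to a purely genetic one.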
Chikumbo et al. (2012) \cite{CHIKUMBOa} propose a Multi-Objective Evolutionary Algorithm with epigenetic silencing for the land use management problem. The algorithm's intention is to decrease the ecological footprint while ensuring sustainability in land use management through asserted decision making. The chromosome encodes each possible paddock use, and the system emulates gene regulation with epigenetic silencing based on histone modification and RNA editing mechanisms. A visualization tool for the Pareto frontier is used, composing fourteen objectives into three general criteria: a set of time series, farm management strategies, and their related spatial arrangements of land uses. However, the improvements due to the epigenetic variations are not estimated, as the approach is not compared to any standard Multi-Objective EA. In 2015, the authors introduced an improvement based on a similar epigenetics-inspired modification, described in Triple Bottomline Many-Objective-Based Decision Making for a Land Use Management Problem \cite{CHIKUMBOb}. The change involves the use of Hyper Radial Visualization (HRV), 3D modeling, and virtual reality to reduce the fourteen objective functions and visualize solutions in a representation that is simpler for an expert group to interpret. The triple bottom line is represented by economic, environmental, and social (stakeholder preferences) factors.
Arnold et al. (2013) \cite{ARNOLD} propose a theoretical mechanism to explain how a cell can reconstruct its epigenetic state after the replication process, which may be responsible for the loss of epigenetic information accumulated over a cell's lifetime. They hypothesize that the different combinations of reader and writer units in histone-modifying enzymes implement local rewriting rules capable of recomputing the acquired patterns. To demonstrate this, they study the dynamics of histone modification states with a flexible stochastic simulation system based on the Gillespie algorithm, which models the master equation of a detailed chemical model. An evolutionary algorithm is then used to find enzyme mixtures that achieve stable inheritance of patterns across multiple cell divisions with high precision. Once such patterns are found, chromatin is rebuilt.
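The flavor of such local rewriting rules can be conveyed with a deliberately crude toy of our own (not Arnold et al.'s chemical model): after replication leaves gaps in a histone-mark pattern, a reader/writer rule copies a mark onto any position whose neighbour still carries one.

```python
# Toy reader/writer rule: an unmodified position acquires the mark if a
# neighbouring position is still modified. Repeated application re-fills
# the gaps left by replication (in this crude version the mark also
# spreads outward -- realistic models must bound the modified domain).
def rewrite_step(marks):
    new = marks[:]
    for i, m in enumerate(marks):
        if m == 0:
            left = marks[i - 1] if i > 0 else 0
            right = marks[i + 1] if i < len(marks) - 1 else 0
            if left or right:
                new[i] = 1            # writer recruited by a read mark
    return new

pattern = [1, 0, 1, 0, 0, 1, 0, 0]    # marks with replication gaps
for _ in range(3):
    pattern = rewrite_step(pattern)
```

A pattern with no surviving marks stays empty, which illustrates why the inherited half-pattern after replication is essential to the reconstruction.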
Turner et al. (2013) formally describe the Artificial Epigenetic Regulatory Network (AERN), an extended version of their previous artificial gene regulatory network (AGN) model. AERN uses an analog of DNA methylation combined with chromatin modifications as its epigenetic elements, giving the network the ability to change its epigenetic information during evolution and execution; this epigenetic control enables the network to adapt. In the model, subsets of genes are more likely to perform a given objective when in an active state. The inclusion of epigenetic data gives the network the capacity to assign different genes to diverse tasks, directing gene expression entirely according to its operating environment. The goal is to follow specific trajectories governed by evolution rules and represented in chaotic dynamics (Chirikov's standard map), and the network evolves by means of a GA. Because of the ability to deactivate genes, the network increases its efficiency: for objectives that involve deactivated genes, less computational effort is required per iteration of the network simulation. According to the authors' report, the epigenetic mechanism improves the performance of the model \cite{TURNER}.
Przybilla and colleagues (2014) \cite{PRZYBILLA} introduce a computational model of epigenetic changes in aging stem cells, simulating a population of cells that each contain an artificial genome. The transcription of the genes encoded in the genome is controlled by DNA methylation, histone modification, and the action of a cis-regulatory network, and the dynamics of the model are determined by the molecular crosstalk between the different epigenetic mechanisms. The epigenetic states of genes are subject to different types of fluctuations, and the model provides a mechanistic understanding of age-related epigenetic drifts. The researchers aim at linking epigenetic mechanisms to phenotypic changes of cells in order to derive hypotheses on the emergence of age-related phenotypes (ARPs) at the population level. They combine their model of transcriptional regulation with an individual cell-based model of stem cell populations, which allows them to simulate aging at the molecular, cellular, and population levels. The authors hypothesize that ARPs are a consequence of epigenetic drifts, which originate from the cells' limited capability to inherit epigenetic information.
La Cava and colleagues (2014) \cite{LACAVAa} describe a method for solving the symbolic regression problem using Developmental Linear Genetic Programming (DLGP) with an epigenetic hill climber (EHC). The EHC optimizes the epigenetic properties of linear genotypes that represent equations. In addition to a genotype composed of a list of instructions, each individual carries a binary array of the same length, referred to as an Epiline. During genotype-to-phenotype mapping, only the instructions whose corresponding Epiline position is active are executed. Their implementation rests on two main characteristics: first, heritability, through the coevolution of Epilines with their respective genotypes; and second, the EHC itself, which mimics the acquisition of lifetime learning through epigenetic adaptation. The EHC evaluates epigenetic changes to determine whether individuals should be updated: epigenetic modifications to an individual are kept only if the fitness, computed from the active genes (instructions), improves or remains unchanged. The same method is applied to program synthesis problems in Inheritable Epigenetics in GP (2015) \cite{LACAVAb} and GP with Epigenetic Local Search (2015) \cite{LACAVAc}. La Cava reports that the addition of epigenetics results in faster convergence, less bloat, and an improved ability to find exact solutions on several symbolic regression problems.
The epigenetic algorithm (EGA) by Birogul (2016) \cite{BIROGUL} adapts epigenetic concepts to the classical GA structure. The EGA includes an epicrossover operator, an epimutation operator, and epigenetic factors, and the technique also explains how epigenetic inheritance is achieved across populations. The EGA is applied to Mobile Network Frequency Planning, a constrained optimization problem, using data from real BCCH (Broadcast Control Channel) base stations in a GSM (Global System for Mobile Communications) network. Birogul states that, on this problem, the EGA obtains better results than classical GAs in less time and with fewer iterations.
The epiGenetic algorithm (epiGA) by Daniel Stolfi and Enrique Alba (2018) \cite{STOLFIa, STOLFIb} consists of four epigenetic operators. The first, Individual Initialization, creates individuals made up of cells. The second, Nucleosome Generation, creates the nucleosome structure in which the DNA is collapsed and made inaccessible during reproduction. The third, Nucleosome Based Reproduction, combines the most promising cells following epigenetic rules. The last, Epigenetic Mechanisms, applies rules based on DNA methylation and the surrounding environment. Each individual in the population contains M cells, which can represent different solutions to the problem, and each cell comprises four binary vectors of the same size as the chromosome of the problem representation: one vector contains the encoded solution, two hold the chromosomes of the cell's parents, and one stores the binary mask (nucleosome mask) representing the nucleosome structure. The foundation of epiGA is epigenesis: the authors focus on how DNA and histones are collapsed to form nucleosomes, how this affects gene replication during reproduction, and how epigenetic mechanisms modify gene expression through methylation, and they use these ideas to build the bio-inspired operators of the algorithm. The epiGA is used to solve the Multidimensional Knapsack Problem (MKP), and the authors report that it performs similarly to or better than published results in the literature.
Esteban Ricalde (2019) proposes a new mechanism for Genetic Programming inspired by epigenetics. The mechanism activates and deactivates chromosomal regions in response to environmental changes. It is based on the DNA methylation mechanism and implements a tree-based scheme for evolving executable instructions, with only conditional nodes affected by the mechanism. Ricalde also introduces an adaptive factor strategy to assess local variations of the environment, so that environmental changes drive the epigenetic mutations. The author reports GP performance improvements when solving problems with changing environmental conditions, since such environments favor individuals that adapt more easily. The strategy is applied to the traffic signal control problem: the method evolves actuated traffic controllers by means of GP, with the adaptive factor strategy focused on the traffic signal optimization context \cite{RICALDE}.
The Memetic Algorithm (MA) is a cultural-based strategy (1989) \cite{MOSCATO} inspired by the description of memes in Dawkins' {\em The Selfish Gene}. A `meme' denotes a unit of imitation in cultural evolution, which is in some respects analogous to the gene in GAs. Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots, food, music, and ways of building arches \cite{DAWKINS}. MAs extend the notion of memes to cover conceptual entities of knowledge-enhanced procedures or representations. The MA combines population-based global search with a local search heuristic carried out by each individual, which can perform local refinements without genetic representation constraints. The latter may represent a high computational cost, due to the separate individual learning process or local improvement for a problem search. Moscato coined the name `Memetic Algorithm' to cover a wide range of techniques in which evolutionary search is extended by adding one or more phases of local search or by the use of problem-specific information.
According to the analysis of the state of the art, it can be noticed that the different approaches (except the Memetic Algorithm) focus on common elements abstracted from the dynamics of the epigenetic mechanisms. These elements involve:
\begin{enumerate}
\item The activation and deactivation (gene activation) of individuals' chromosomes through epigenetic mutations triggered by environmental changes. These mutations modify the markers (with off and on states) during the lifespan of the individual.
\item The use of active genes to evaluate individuals' ability to adapt and survive (fitness).
\item The learning behavior through the notion of memory across generations by propagating the best active genes (the ones that make the individual fittest).
\item The particular effects the environment can produce during the development of individuals within a generation and in their progeny.
\end{enumerate}
Despite their different uses of epigenetics, the previous approaches have shown that incorporating epigenetic components into EAs facilitates robustness. Robustness \cite{FELIX} is essential to ensure the permanence of phenotypic attributes potentially subject to genetic and non-genetic modification; robustness also permits genetic and non-genetic variation to accumulate. Such variations may introduce evolutionary alterations into a population and help individuals adapt.
\chapter{Evolutionary Algorithms with Regulated Genes: ReGen EAs}\label{chapter3}
The previous chapter described optimization and evolutionary processes as inspiration for designing problem solvers. It also presented an overview of epigenetics, the relation between epigenetics and evolution, memory and adaptation from an epigenetics point of view, and the different approaches that have incorporated epigenetics into Evolutionary Algorithms.
The state of the art shows that epigenetic mechanisms play a fundamental role in biological processes, such as phenotypic plasticity, memory consolidation within generations, and environmentally influenced epigenetic modification. This leads researchers to consider applying epigenetic mechanisms to enhance the performance of evolutionary algorithms in solving hard mathematical problems or real-world problems with continuous environmental changes \cite{RICALDE}.
Unlike most of the approaches described above, this approach is not based on the idea of switching genes off and on (gene activation) or silencing chromosome sections. Epigenetic mechanisms, in particular histone and DNA modifications, go beyond activating and deactivating genes. As mentioned in the state of the art, these mechanisms also decrease or promote the level of transcription by gradually reducing or increasing expression, depending on where tags bind and how many tag groups are attached \cite{CESARO, CLARK, FESTEN, SHAFA}. Methylation, for example, is sparse but globally spread across indefinite {\em CpG sequences} throughout the entire genome, except for {\em CpG islands}, specific stretches with high {\em CpG content} \cite{PHILLIPS}.
Based on the preceding, this thesis assumes individuals' chromosomes to be entirely active; that is, on/off epigenetic states do not restrict gene/allele expression. Instead, individuals' genotypes are regulated by designed epigenetic tags that encode meanings richer than on and off states: tags encode rules with specific operations to be applied to the chromosome during the reading (decoding) process, which generates the respective phenotype for subsequent evaluation.
Another important consideration is that the operations used here to perform the decoding process are not taken from biology. They are not present in epigenetic mechanisms either, but they are plausible operations for solving computational problems; they belong to the epigenetic components of this approach and do not represent any biological operation. The biochemical processes of methylation and histone modification are regulated by an ``epigenetic code'' written as modifications on DNA and histones and read by molecular units that specifically identify single or combined modifications; in this approach, an operation plus a gene size represents a modification.
Epigenetics can be seen as a set of self-referential interactions within a group of genes: not necessarily of a gene on itself, but of one gene on another. Writers, readers, and erasers, for example, direct some of these interactions. In the literature, such interactions are referred to as epigenetic mutations and are reversible \cite{RAO}. Hence, they are not classical mutations (on the nucleotide sequence), nor are they classical mutations in the algorithmic context; they are transient, highly reversible changes without any restriction on reversion. They can be seen as mutations at the level of interactions, not of the interacting objects (genes). Traditional evolutionary algorithms assume a finite number of genes, and to obtain novelty they require not only mutations in the chromosome but also new genes; epigenetics satisfies that need. Epigenetics thus becomes a problem solver: it optimizes the number of genes and reduces the dependence on classic mutation. It accelerates reversible mutation (the environment may help here) and reduces the cost of deleterious mutations (those that reduce an individual's fitness) and of unresponsive mutations. In this approach, the interactions carried out during the marking (writing, erasing, and modifying) and reading processes may be seen as ``mutations''; however, they are reversible, which is not the case for biological mutations on nucleotide sequences.
This approach aims to introduce a technique for Evolutionary Algorithms based on the adaptive and learning dynamics of the epigenetic regulation process, so that the technique may reflect the adaptability of individuals during evolution, the ability of individuals to learn from the environment, the possibility of reducing search time, and the discovery of better solutions in a given time. To this end, the dynamics of DNA methylation and histone modification have been summarized into five fundamental elements that form the basis of this approach. First, a metaphorical representation of {\em epigenetic tags} that are not on/off states; instead, they represent reading rules used to interpret sections (alleles) of an individual's genome. Second, a structural layer above the chromosome used to bind tags (the {\em Epigenotype}). Third, a marker (the {\em Marking Function}) that adds, removes, and modifies tags; this process is performed during defined marking periods, simulating periods in which individuals' genetic codes are affected by external factors (as seen in the Överkalix and Dutch famine case studies). Fourth, a tag interpreter or decoder (the {\em Epigenetic Growing Function}) that generates individuals' phenotypes from their epigenotype-genotype structures; the marker and decoder are based on the behaviors and principles of different enzymes (writers, readers, erasers, and those that maintain markers). Finally, an inheritance process (the {\em Crossover Operator}) that passes such tags on to the offspring (transgenerational non-genetic inheritance over subsequent generations).
The purpose of this chapter is to introduce the proposed technique for designing epigenetic evolutionary algorithms with binary encoding. The epigenetic components of the model are described as follows: section \ref{c3s1} gives a detailed description of the {\em Tags'} representation in the model; section \ref{c3s2} briefly characterizes the {\em Epigenotype}; section \ref{c3s3} explains the {\em Marking process}; section \ref{c3s4} describes the {\em Epigenetic Growing function}; section \ref{c3s5} illustrates the adjustments to a {\em Crossover operator} needed to inherit tags in succeeding generations; finally, the pseudocode of the epigenetic EA and a summary of this chapter are presented in sections \ref{c3s6} and \ref{c3s7}, respectively.
\section{Tags and Encoding}\label{c3s1}
Epigenetic tags in the ReGen EA are represented as binary strings of 0's and 1's located on alleles. Each tag is built from {\em 8} bits: the first three bits encode a bit operation {\em(Circular shift, Transpose, Set to, Do nothing, Right shift by one, Add one, Left shift by one, and Subtract one)} and the last {\em 5} bits encode the gene size. The decimal values of the {\em 5}-bit strings from {\em 00001} to {\em 11111} are used unchanged, while {\em 00000} is assigned the value thirty-two. The {\em 3}-bit sequence uses a one-to-one mapping to a rule that performs a simple bit operation on chromosomes, and the gene size gives the number of alleles involved in that bit operation. Fig.~\ref{c3fig2} shows the tags' representation in the ReGen EA.
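The tag layout described above can be sketched in code. The following is a minimal illustration only, not the thesis implementation; the names (\verb"parse_tag", \verb"OPERATIONS") are hypothetical.

```python
# Hypothetical sketch of the tag layout: the first 3 bits select one of the
# eight bit operations and the last 5 bits encode the gene size, with the
# string "00000" mapping to thirty-two.

OPERATIONS = {
    "000": "circular_shift",
    "001": "transpose",
    "010": "set_to",
    "011": "do_nothing",
    "100": "right_shift_by_one",
    "101": "add_one",
    "110": "left_shift_by_one",
    "111": "subtract_one",
}

def parse_tag(tag):
    """Split an 8-bit tag string into (operation name, gene size)."""
    assert len(tag) == 8 and set(tag) <= {"0", "1"}
    op = OPERATIONS[tag[:3]]
    size = int(tag[3:], 2)
    if size == 0:          # "00000" encodes a gene size of thirty-two
        size = 32
    return op, size

# Example: a "Set to" tag with gene size 8
print(parse_tag("01001000"))  # -> ('set_to', 8)
```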
\subsection{Bit Operations}
Eight operations have been defined according to the {\em 3}-bit binary strings depicted in Fig.~\ref{c3fig2}. Each combination maps to a simple bit operation applied to a copy of the chromosome; the operations only affect the way alleles are read when the entire chromosome is evaluated. An operation affects a specific number of alleles/bits in subsequent positions, given by the {\em 5}-bit binary string that denotes the gene size $l$. The {\em Bit Operations} are described as follows:
\subsubsection{Circular shift}({\em 000}). Circularly shifts a specified number of bits to the right: starting at the marked bit up to $l$ bits ahead. Let $x$ be a binary string $x = (x_1, x_2, x_3,..., x_n)$, and $x_k$ be a bit marked with the shift tag and $l$ the gene size encoded by the tag. If the mark is read by the decoder, the decoded bit string $y$ will have $y_{k+i} = x_{k+i-1}$ for all $i=1,...,l-1$ and $y_{k}=x_{k+l-1}$.
\subsubsection{Transpose}({\em 001}). Transposes a specified number of bits: starting at the marked bit up to $l$ bits ahead. Let $x$ be a binary string $x = (x_1, x_2, x_3,..., x_n)$, and $x_k$ be a bit marked with the transposition tag and $l$ the gene size encoded by the tag. If the mark is read by the decoder, the decoded bit string $y$ will have $y_{k+i} = x_{k+l-1-i}$ for all $i=0,1,...,l-1$.
\subsubsection{Set to}({\em 010}). Sets a specified number of bits to a given value, the value of the marked bit: starting at the marked bit up to $l$ bits ahead. Let $x$ be a binary string $x = (x_1, x_2, x_3,..., x_n)$, and $x_k$ be a bit marked with the set-to tag and $l$ the gene size encoded by the tag. If the mark is read by the decoder, the decoded bit string $y$ will have $y_{k+i} = x_k$ for all $i=0,1,...,l-1$.
\subsubsection{Do nothing}({\em 011}). Does not apply any operation to a specified number of bits: starting at the marked bit up to $l$ bits ahead. Let $x$ be a binary string $x = (x_1, x_2, x_3,..., x_n)$, and $x_k$ be a bit marked with the do-nothing tag and $l$ the gene size encoded by the tag. If the mark is read by the decoder, the decoded bit string $y$ will have $y_{k+i} = x_{k+i}$ for all $i=0,1,...,l-1$.
\subsubsection{Right shift by one}({\em 100}). A right arithmetic shift of one position moves each bit to the right by one: the least significant bit is discarded and the most significant bit keeps its value. This operation shifts a specified number of bits: starting at the marked bit up to $l$ bits ahead. Let $x$ be a binary string $x = (x_1, x_2, x_3,..., x_n)$, and $x_k$ be a bit marked with the right shift by one tag and $l$ the gene size encoded by the tag. If the mark is read by the decoder, the decoded bit string $y$ will have $y_{k}=x_k$ and $y_{k+i} = x_{k+i-1}$ for all $i=1,...,l-1$.
\subsubsection{Add one}({\em 101}). Treats a specified number of bits as a binary number and adds one to it: starting at the marked bit up to $l$ bits ahead. Let $x$ be a binary string $x = (x_1, x_2, x_3,..., x_n)$, and $x_k$ be a bit marked with the add-one tag and $l$ the gene size encoded by the tag. If the mark is read by the decoder, the window $(x_k,\ldots,x_{k+l-1})$ is incremented by one, with the carry propagating from the least significant bit $x_{k+l-1}$. If the result has more bits than the original window, the decoder discards the rightmost bit (the least significant bit) in order to set the bits in the resulting chromosome.
\subsubsection{Left shift by one}({\em 110}). A left arithmetic shift of one position moves each bit to the left by one. This operation fills the least significant bit with zero and discards the most significant bit. This operation shifts a specified number of bits: starting at the marked bit up to $l$ bits ahead. Let $x$ be a binary string $x = (x_1, x_2, x_3,..., x_n)$, and $x_k$ be a bit marked with the left shift by one tag and $l$ the gene size encoded by the tag. If the mark is read by the decoder, the decoded bit string $y$ will have $y_{k+i} = x_{k+i+1}$ for all $i=0,1,...,l-2$ and $y_{k+l-1}=0$.
\subsubsection{Subtract one}({\em 111}). Treats a specified number of bits as a binary number and subtracts one from it: starting at the marked bit up to $l$ bits ahead. Let $x$ be a binary string $x = (x_1, x_2, x_3,..., x_n)$, and $x_k$ be a bit marked with the subtract-one tag and $l$ the gene size encoded by the tag. If the mark is read by the decoder, the window $(x_k,\ldots,x_{k+l-1})$ is decremented by one using the borrow method: when $1$ is subtracted from $0$, the borrowing digit obtains two (binary $10$) from the borrow, and the digit that is borrowed from is reduced by one.
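The eight operations above can be sketched as transformations of the window of $l$ bits starting at the marked bit. The snippet below is an illustrative reading, not the thesis implementation: each function maps a window (a list of 0/1 values) to a new window of the same length. The wrap-around on underflow in \verb"subtract_one" is an assumption, since the text only defines the single-digit borrow.

```python
# Hypothetical sketch of the eight bit operations on a window w of l bits.

def circular_shift(w):          # 000: rotate the window right by one
    return [w[-1]] + w[:-1]

def transpose(w):               # 001: reverse the window
    return w[::-1]

def set_to(w):                  # 010: copy the marked bit into every position
    return [w[0]] * len(w)

def do_nothing(w):              # 011: identity
    return list(w)

def right_shift_by_one(w):      # 100: keep the MSB, shift the rest right
    return [w[0]] + w[:-1]

def add_one(w):                 # 101: increment the window as an unsigned int
    v = int("".join(map(str, w)), 2) + 1
    s = format(v, "b")
    if len(s) > len(w):         # overflow: discard the least significant bit
        s = s[:-1]
    return [int(b) for b in s.zfill(len(w))]

def left_shift_by_one(w):       # 110: shift left, fill the LSB with zero
    return w[1:] + [0]

def subtract_one(w):            # 111: decrement with borrow
    # wrap-around on underflow is an assumption, not stated in the text
    v = (int("".join(map(str, w)), 2) - 1) % (2 ** len(w))
    return [int(b) for b in format(v, "b").zfill(len(w))]
```

For example, \verb"add_one([1, 1, 1])" overflows to the four-bit result \verb"1000", and discarding the least significant bit yields \verb"[1, 0, 0]".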
\par These operations were selected for their simplicity and their capacity to generate, discover, and combine many possible building blocks. The {\em Set to} operation, for example, can be dominant in maximization or minimization, depending on the optimization problem. Together, the operations combine short, high-fitness schemas into high-quality building blocks of solutions once the epigenetic growing function is applied. If an allele has tags bound to it, the corresponding region of the chromosome is read as the {\em Operation} states. In section~\ref{c3s4}, the epigenetic growing function and the application of these {\em Bit Operations} are explained in more detail.
\subsection{Gene Sizes}
As mentioned above, the last {\em 5} bits of a tag represent the gene size, which determines the number of alleles involved in the bit operation during the decoding process; Table~\ref{c3table1} shows some binary strings and their respective values. These gene sizes were chosen based on the order-$i$ schemas of the functions selected for the experiments and on the transformation of 32-bit binary strings into real values. Fig.~\ref{c3fig2} depicts the complete structure of a tag.
\begin{table}[ht]
\centering
\caption{Gene Sizes}
\label{c3table1}
\begin{tabular}{cccccccl}
\hline
String & Value & String & Value & String & Value & String & Value\\\hline
\verb"00001" & 1 & \verb"00101" & 5 & \verb"11001" & 25 & \verb"11101" & 29\\
\verb"00010" & 2 & \verb"00110" & 6 & \verb"11010" & 26 & \verb"11110" & 30\\
\verb"00011" & 3 & \verb"00111" & 7 & \verb"11011" & 27 & \verb"11111" & 31\\
\verb"00100" & 4 & \verb"01000" & 8 & \verb"11100" & 28 & \verb"00000" & 32\\
\hline
\end{tabular}
\end{table}
\section{Epigenotype}\label{c3s2}
The {\em Epigenotype} is a structural layer on top of the chromosome used to attach tags. This second layer represents the individual's epigenome and has the same size as the individual's chromosome. It holds the set of epigenetic changes that influence the direction of the search process, and it is coded as a multidimensional vector $(m \cdot n)$, where $m$ is the tag length and $n$ is the length of the individual's chromosome.
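This two-layer structure can be sketched as follows. The class and field names below are illustrative only; the thesis does not prescribe an implementation, and the per-allele optional tag shown here is one way to realize the $(m \cdot n)$ layer.

```python
# Minimal sketch of an individual with an epigenotype layer: one slot per
# allele, each slot either empty (None) or holding an 8-bit tag string.

class Individual:
    TAG_LENGTH = 8  # m: bits per tag

    def __init__(self, chromosome):
        self.chromosome = list(chromosome)           # genotype: n bits
        # epigenotype: the (m x n) layer above the chromosome
        self.epigenotype = [None] * len(self.chromosome)

    def bind_tag(self, position, tag):
        assert len(tag) == self.TAG_LENGTH
        self.epigenotype[position] = tag

ind = Individual([0, 1, 1, 0, 1])
ind.bind_tag(1, "01000101")
print(ind.epigenotype)  # [None, '01000101', None, None, None]
```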
\section{Marking Function}\label{c3s3}
The Marking function adds tags to, removes tags from, or modifies the {\em 8}-bit tags on the alleles of any chromosome in the solution space. The marking process works with a marking rate, that is, the probability of applying the function to each bit of a chromosome. When the function is applied, it either adds a tag to a single allele, removes a tag from a single allele, or modifies a tag on an allele; these actions are mutually exclusive. Tags are added to or removed from randomly chosen alleles, and the {\em modify} action randomly changes one of the eight positions in the binary string. The distribution of these actions is given in Equation~\ref{c3eq1}.
\begin{eqnarray}
P_{Marking} =
\begin{cases}
0.98 & \text{no marking}\\
0.007 & \text{add a tag (8 bits)} \\
0.007 & \text{remove a tag (8 bits)} \\
0.006 & \text{modify a tag (any bit of a tag)}
\end{cases}
\label{c3eq1}
\end{eqnarray}
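The marking step with this distribution can be sketched as below. This is a hypothetical illustration (function names are not from the text); the modify action here flips one randomly chosen bit, whereas the text applies a per-bit rate of $1.0/l$, which differs slightly in distribution.

```python
import random

# Hypothetical sketch of the marking step: per bit, 0.98 no marking,
# 0.007 add a tag, 0.007 remove a tag, 0.006 modify one bit of a tag.

def random_tag(rng):
    return "".join(rng.choice("01") for _ in range(8))

def mark(epigenotype, rng=random):
    for pos in range(len(epigenotype)):
        r = rng.random()
        if r < 0.98:
            continue                              # no marking (0.98)
        if r < 0.987:                             # add a tag (0.007)
            if epigenotype[pos] is None:
                epigenotype[pos] = random_tag(rng)
        elif r < 0.994:                           # remove a tag (0.007)
            epigenotype[pos] = None
        elif epigenotype[pos] is not None:        # modify a tag (0.006)
            tag = list(epigenotype[pos])
            i = rng.randrange(8)
            tag[i] = "1" if tag[i] == "0" else "0"
            epigenotype[pos] = "".join(tag)
    return epigenotype
```

On an initially unmarked chromosome of length 10{,}000, roughly 70 positions (0.7\%) end up tagged after one pass, matching the {\em add} rate of 0.007.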
\begin{figure}
\centering
\includegraphics[width=2.8in]{imagesThesis/a.png}
\includegraphics[width=2.8in]{imagesThesis/b.png}
\includegraphics[width=2.8in]{imagesThesis/c.png}
\includegraphics[width=2.8in]{imagesThesis/d.png}
\caption{General representation of the {\em Marking} function: a) shows a chromosome with no tags on it; b) depicts the addition of four tags to a chromosome; c) illustrates tags' bit modification in red; and d) presents a chromosome with two removed tags.}
\label{c3fig1}
\end{figure}
The probability of marking a single bit of a chromosome is defined by taking three factors into account. First, biological epigenetic marks such as methylation are dispersed over indeterminate {\em CpG sequences} of the genome, except for {\em CpG islands}, specific areas with high {\em CpG content}; despite being sparse, they can affect gene expression. Their power comes from what they encode, not from their quantity: for better or worse, a few tags can make individuals' bits be interpreted in a way that yields good or poor results. The second factor is avoiding over-marking of chromosomes: if every bit were marked, the decoding process would require excessive processing. The third factor is defining a marking probability that keeps the marking process balanced, a value that ensures tag diversity and a considerable number of marked positions.
Based on these considerations, experiments to define a marking probability involved tuning the marking process with rates from {\em 0.1} to {\em 1.0}; the higher the rate, the less effective the marking function. When the rate is reduced, the marking function reveals an equilibrium between the applied actions and the obtained solutions. After running experiments with lower rates from {\em 0.01} to {\em 0.09}, a rate of {\em 0.02} proved sufficient to influence the search process and to help ReGen EAs find solutions closer to the optimum; consequently, the probability of marking a single bit has been set to {\em\textbf{0.02}}. Given this rate, the probability distribution for adding, removing, and modifying is set based on the importance of having a considerable number and variety of tags. If tags are added, they should eventually be removed, or chromosomes will become over-marked; for this reason, the approach removes tags with the same probability as the {\em add} action. The {\em modify} action then uses a lower probability to recombine the {\em 8} bits of a tag and generate different decoding frames.
Taken together, the designed actions:
\begin{enumerate}
\item Let individuals have a reasonable quantity of tags,
\item Allow bit combination for tags, and
\item Ensure the discovery, during tag interpretation, of building blocks that can generate solutions that are not neighbors of current solutions, allowing escape from local optima.
\end{enumerate}
\subsection{Adding a tag}
This action writes tags on any chromosome. {\em Add} is a metaphorical representation of {\em writer} enzymes. Fig.~\ref{c3fig1} presents a chromosome (image {\em \textbf{a}}) with no tags. In image {\em \textbf{b}}, four tags are added at positions {\em2}, {\em6}, {\em8}, and {\em10}, based on the defined {\em add tag} probability of {\em0.007}.
\subsection{Modifying a tag}
This action modifies tags on any chromosome. {\em Modify} is a metaphorical representation of {\em maintenance} enzymes. In Fig.~\ref{c3fig1}, image {\em \textbf{c}} illustrates modified tags at positions {\em 6} and {\em 8}; the bits in red changed. This action is applied under the defined {\em modify tag} probability of {\em 0.006} and then randomly changes each of the eight positions in the binary string with a rate of $1.0/l$, where $l$ is the tag's length.
\subsection{Removing a tag}
This action erases tags from any chromosome. {\em Remove} is a metaphorical representation of {\em eraser} enzymes. Fig.~\ref{c3fig1} depicts a chromosome with removed tags: in image {\em \textbf{d}}, the two tags at positions {\em 6} and {\em 10} are no longer bound to the chromosome. Tag removal is performed with the defined {\em remove tag} probability of {\em 0.007}.
\begin{landscape}
\begin{figure}
\centering
\includegraphics[height=4in, width=9.3in]{imagesThesis/structure.png}
\caption{General representation of an individual with its epigenotype. The bottom section shows the tag's interpretation process to generate a bit string used to build the individual's phenotype.}
\label{c3fig2}
\end{figure}
\end{landscape}
\section{Epigenetic Growing Function}\label{c3s4}
This function is a metaphorical representation of {\em reader} enzymes. The epigenetic growing function generates bit strings from individuals' genotype-epigenotype structures for the eventual creation of phenotypes. Tags allow this function to build different individuals before the quality (fitness) of each solution is evaluated. The growth happens in the binary search space (coded solutions) but is reflected in the solution space (actual solutions). From a mathematical point of view, the search space is transformed and reduced when tag interpretation is performed; this supports both exploration, reaching different promising regions of a smaller search space, and exploitation, searching for optimal solutions within a given region. The bigger the {\em gene size} values, the less variety of building blocks results during decoding. Tags may lead individuals to be represented as feasible solutions closer to some extremum (minimum or maximum) of the search space: when this function is applied, chromosomes grow in the direction of minimum or maximum points, depending on the problem. This process differs from mutations or hyper-mutations, which modify chromosomes and maintain genetic diversity from one generation of a population to the next over a broader search space.
The Epigenetic Growing function acts as an interpreter or decoder of the tags located on particular alleles. It scans each allele of a chromosome together with the tags that directly affect it, so that phenotypic variations are reflected when individuals are evaluated. During the decoding process, alleles are not changed: the chromosome keeps its binary encoding fixed, which means the individual's genotype is not altered. Note that the scope of an {\em Operation} depends on the {\em gene size} indicator. If an {\em Operation} has already been applied and another one is to be applied, the epigenetic growing function takes the interpretation of the previous bits into account in order to continue its decoding. An example of this process, prior to phenotype generation, is illustrated in Fig.~\ref{c3fig2}; the example shows the decoding process for each bit with or without tags.
At the top of Fig.~\ref{c3fig2}, a chromosome of size {\em 34} with eight tags is depicted. The alleles at positions {\em 1, 9, 14, 18, 23, 26, 29}, and {\em 31} are marked with colored tags, and decoding proceeds from left to right. The first position is scanned; as it has a tag bound to it, the function initiates tag identification. The tag in red is {\em 01001000}: the first three bits, {\em 010}, indicate the {\em Set to} operation, which sets a specified number of bits to the value of the marked allele, here {\em 1}. The number of bits to set is indicated by the last five bits of the tag, {\em 01000}, which encode a gene size ($l$) of {\em 8} (see Table~\ref{c3table1}). The resulting interpretation is therefore to set to {\em 1} all bits from the marked bit up to $l-1$ positions ahead. After finishing this first decoding, the epigenetic growing function keeps the result and continues scanning at the position following the gene ($l+1$, since the first position was marked). This process is repeated until the entire chromosome has been scanned, and the partial results are concatenated to generate a final bit string; the lengths of the chromosome and of the resulting string remain fixed. At the bottom of Fig.~\ref{c3fig2}, the final interpretation is shown: the concatenated string is the source from which the phenotype is built, which is then evaluated to give the individual's score.
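The scanning loop just described can be sketched as follows. This is an illustrative reading with hypothetical names, and only two of the eight operations are wired in to keep it short; decoding produces a fresh bit string of the same length, leaving the genotype untouched.

```python
# Minimal sketch of the epigenetic growing (decoding) step. Tags are 8-bit
# strings bound per allele (None = unmarked); the genotype is never altered.

def decode(chromosome, epigenotype):
    out = list(chromosome)
    k = 0
    while k < len(out):
        tag = epigenotype[k]
        if tag is None:
            k += 1                       # untagged allele: copy as-is
            continue
        op, size = tag[:3], int(tag[3:], 2) or 32
        l = min(size, len(out) - k)      # clamp at the chromosome end
        window = out[k:k + l]
        if op == "010":                  # Set to: copy the marked bit
            out[k:k + l] = [window[0]] * l
        elif op == "001":                # Transpose: reverse the window
            out[k:k + l] = window[::-1]
        # ... the remaining operations dispatch the same way
        k += l                           # resume scanning after the gene
    return out

# "Set to" with gene size 8, starting at a marked 1:
chrom = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
epi = [None] * 10
epi[0] = "01001000"                      # Set to, l = 8
print(decode(chrom, epi))                # [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
```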
\section{Crossover Operator}\label{c3s5}
The epigenetic tags added to the chromosome are inherited by the offspring during crossover. Transgenerational epigenetic inheritance transmits epigenetic markers from parent to child, affecting the offspring's traits without altering the DNA nucleotide sequence. The idea here is to perform the recombination process with the selected operator as usual; the only difference is that the tags located on alleles are copied along with the genetic information. This model presents Single Point Crossover as an illustrative example of genetic and epigenetic recombination: a calculated cross point {\em x} is applied to the chromosome at position {\em x}, and the offspring inherit alleles together with their tags. Fig.~\ref{c3fig3} shows the exchange of genetic code and epigenetic tags under a Single Point Crossover at cross point {\em 10}. {\em Offspring 1} inherits from {\em Parent 1} part of the genetic code plus its tags at positions {\em 1, 9}, and {\em 10}, and from {\em Parent 2} part of the genetic code plus tags at positions {\em 11, 15}, and {\em 22}. {\em Offspring 2} obtains from {\em Parent 1} part of the genetic code plus its tags at positions {\em 13} and {\em 16}, and from {\em Parent 2} part of the genetic code plus its tags at positions {\em 4} and {\em 9}.
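The principle that alleles and their tags travel together can be sketched as below. The representation (a chromosome list paired with a tag list) and the function name are hypothetical; any tag strings shown are placeholders.

```python
# Hypothetical sketch of single-point crossover in which genetic material
# and the tags bound to it are exchanged as a unit.

def single_point_crossover(p1, p2, point):
    """p1, p2: (chromosome, epigenotype) pairs; point: cross position."""
    (c1, e1), (c2, e2) = p1, p2
    child1 = (c1[:point] + c2[point:], e1[:point] + e2[point:])
    child2 = (c2[:point] + c1[point:], e2[:point] + e1[point:])
    return child1, child2

parent1 = ([1, 1, 0, 1], ["01001000", None, None, "00100011"])
parent2 = ([0, 0, 1, 0], [None, "11100001", None, None])
kid1, kid2 = single_point_crossover(parent1, parent2, 2)
print(kid1)  # ([1, 1, 1, 0], ['01001000', None, None, None])
```

Because the chromosome and the epigenotype are sliced at the same point, each inherited allele keeps whatever tag was bound to it in the contributing parent.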
\begin{landscape}
\begin{figure}
\centering
\includegraphics[height=6in, width=8in]
{imagesThesis/xover.png}
\caption{Illustrative example of genetic and epigenetic recombination: Single Point Crossover operation.}
\label{c3fig3}
\end{figure}
\end{landscape}
\section{Pseudo Code}\label{c3s6}
The sequence of steps for the proposed ReGen EA is defined in Algorithm~\ref{c3algo1} and Algorithm~\ref{c3algo2}. Note that the pseudo-code includes the same elements as a generic evolutionary algorithm, with the epigenetic components embedded as defined in Algorithm~\ref{c3algo1}. The ReGen EA behaves like a standard EA until the defined marking periods and tag-decoding processes take place. Note that the reading process differs when chromosomes are marked: phenotypes are built from the interpretation of tags. This process first identifies the operation to be applied to a specific section of a chromosome, and second the gene size that defines the scope of the bit operation, as depicted in Fig.~\ref{c3fig2}.
The epigenetic EA incorporates a pressure function that performs the marking process during specific periods. A period is determined by a range of iterations. Marking periods represent the environment, an abstract element used as a point of reference to assess the results of this model. At line $7$, the function {\em markingPeriodON} checks whether a defined period has begun, indicating that the marking process may be performed from iteration $a$ to iteration $b$. Any number of marking periods can be defined, over different ranges of iterations. Additionally, the {\em epiGrowingFunction} is embedded at line $10$; it interprets tags and generates the bit string used to build the phenotype before the fitness evaluation of individuals. A standard EA evaluates the individual's genotype directly; in contrast, the epigenetic technique evaluates the phenotype resulting from the tag-decoding process. This technique is called ReGen EA, which stands for Evolutionary Algorithm with Regulated Genes.
\begin{algorithm}[H]
\caption{Pseudo code of a ReGen EA}
\begin{algorithmic}[1]
\State \textbf{initialize} {\em population with random candidate solutions}
\State \textbf{evaluate} {\em each candidate}
\Repeat
\State \textbf{select} {\em parents}
\State \textbf{recombine} {\em pairs of parents}
\State \textbf{mutate} {\em the resulting offspring}
\If {\Call{markingPeriodON}{iteration}}
\State \Call{applyMarking}{offspring}
\EndIf
\State $phenotypes \gets$ \textsc{decode}(\Call{epiGrowingFunction}{offspring})
\State \textbf{evaluate} {\em phenotypes of the new candidates}
\State \textbf{select} {\em individuals for the next generation}
\Until{\em Termination condition is satisfied}
\end{algorithmic}
\label{c3algo1}
\end{algorithm}
\begin{algorithm}[H]
\caption{Pseudo code of the ReGen EA: epigenetic functions}
\begin{algorithmic}[1]
\Function{markingPeriodON}{$it$}
\State $start \gets startValue$
\State $end \gets endValue$
\If {$start \leq it \leq end$}
\State \Return $true$
\EndIf
\State \Return $false$
\EndFunction
\newline
\Function{applyMarking}{offspring}
\State $mark \gets P_{Marking}$ \Comment{probability of 0.02}
\State $notModify \gets P_{Adding} + P_{Removing}$ \Comment{probability of 0.35 each to add and to remove}
\For{\textbf{each} allele $\in$ offspring$_{i}$ chromosome}
\If {$mark$}
\If {$notModify$}
\If {add}
\If {notMarked}
\State{add tag}
\EndIf
\Else
\If {isMarked}
\State{remove tag}
\EndIf
\EndIf
\Else
\If {isMarked}
\State{flip each tag bit with a rate of 1.0/tag length}
\EndIf
\EndIf
\EndIf
\EndFor
\EndFunction
\newline
\Function{epiGrowingFunction}{offspring}
\State $bitStrings \gets$ offspring
\If {offspring isMarked}
\State $bitStrings \gets$ \textbf{read offspring marks}
\EndIf
\State \Return $bitStrings$
\EndFunction
\end{algorithmic}
\label{c3algo2}
\end{algorithm}
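The two epigenetic functions of Algorithm~\ref{c3algo2} can be sketched in Python as follows. The probabilities mirror the pseudo code ($P_{Marking}=0.02$, $P_{Adding}=P_{Removing}=0.35$), while the data structures (a position-to-tag dictionary and a list of $(start, end)$ period ranges), the function names, and the random-tag generator are assumptions of this sketch.

```python
import random


def marking_period_on(iteration, periods):
    """True if the current iteration falls inside any marking period.

    periods: list of (start, end) iteration ranges, e.g. [(200, 350), (500, 650)].
    """
    return any(start <= iteration <= end for start, end in periods)


def apply_marking(tags, chromosome_length, rng,
                  p_mark=0.02, p_add=0.35, p_remove=0.35, random_tag=None):
    """Per-allele marking: with probability p_mark, add, remove, or modify a tag.

    tags (dict position -> tag string) is mutated in place; the remaining
    probability mass (1 - p_add - p_remove) modifies an existing tag by
    flipping each of its bits with rate 1/len(tag).
    """
    if random_tag is None:
        random_tag = lambda: "".join(rng.choice("01") for _ in range(8))
    for pos in range(chromosome_length):
        if rng.random() >= p_mark:
            continue                       # this allele is not marked this time
        r = rng.random()
        if r < p_add:
            if pos not in tags:            # add a tag only to unmarked alleles
                tags[pos] = random_tag()
        elif r < p_add + p_remove:
            tags.pop(pos, None)            # remove the tag if one is present
        elif pos in tags:                  # modify: flip each bit with rate 1/len
            tag = tags[pos]
            tags[pos] = "".join(
                ("1" if b == "0" else "0") if rng.random() < 1.0 / len(tag) else b
                for b in tag)
```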
\section{Summary}\label{c3s7}
This chapter describes the proposed epigenetic technique within the scope of this thesis. Five fundamental elements form the basis of the designed technique (ReGen EA): first, a metaphorical representation of {\em Epigenetic Tags} as binary strings; second, a layer on top of the chromosome structure used to bind tags (the {\em Epigenotype}); third, a {\em Marking Function} to add, remove, and modify tags; fourth, an {\em Epigenetic Growing Function} that acts as an interpreter, or decoder, of the tags located on the {\em Epigenotype}; and fifth, tag inheritance by the offspring during {\em Crossover}. The abstraction presented in this chapter describes a way to address a large number of computational problems with binary and real encoding. This technique may find approximately optimal solutions to hard problems that are not efficiently solved by other techniques.
\chapter{ReGen GA: Binary and Real Codification}\label{chapter4}
Genetic Algorithm with Regulated Genes (ReGen GA) is the implementation of the proposed epigenetic model on a classic GA. The general terminology of a GA includes population, chromosomes, genes, and genetic operators, among others. The ReGen GA adds a layer to attach tags and involves two functions named {\em Marking} and {\em Epigenetic Growing}. The first simulates periods in which individuals' genetic codes are affected by external factors, represented by the designed tags. The second generates bit strings from genotypes and their respective epigenotypes for phenotype formation (see Algorithm~\ref{c4algo1}). The ReGen GA also uses the Single Point {\em Crossover} operator to perform recombination and to transmit epigenetic markers from one individual to its descendants.
This chapter presents the application of the proposed epigenetic model to real and binary encoding problems. Experimental functions with binary and real encoding have been selected to determine the model's applicability, and the experiments show the effect of the tags on population behavior. In section~\ref{c4s1}, the experimental setup and parameter configuration used for the selected functions are described. In section~\ref{c4s2}, a set of binary experiments is presented, implementing the Deceptive (orders three and four), Royal Road, and Max Ones functions; experimental results and their analysis are given in subsections~\ref{c4s2ss2} and \ref{c4s2ss3}. In section~\ref{c4s3}, a set of real-encoded experiments is presented, implementing the Rastrigin, Rosenbrock, Schwefel, and Griewank functions; results and analysis are reported in subsections~\ref{c4s3ss2} and \ref{c4s3ss3}. A summary closes the chapter in section~\ref{c4s4}.
\begin{algorithm}[H]
\caption{Pseudo code of the ReGen GA}
\begin{algorithmic}[1]
\State \textbf{initialize} {\em population with random candidate solutions}
\State \textbf{evaluate} {\em each candidate}
\Repeat
\State \textbf{select} {\em parents}
\State \textbf{recombine} {\em pairs of parents}
\State \textbf{mutate} {\em the resulting offspring}
\If {\Call{markingPeriodON}{iteration}}
\State \Call{applyMarking}{offspring}
\EndIf
\State $phenotypes \gets$ \textsc{decode}(\Call{epiGrowingFunction}{offspring})
\State \textbf{evaluate} {\em phenotypes of the new candidates}
\State \textbf{select} {\em individuals for the next generation}
\Until{\em Termination condition is satisfied}
\end{algorithmic}
\label{c4algo1}
\end{algorithm}
\section{General Configuration}\label{c4s1}
The following configuration applies to all experiments presented in this chapter. It is well known that an algorithm can be tweaked (e.g., the operators in a GA) to improve performance on specific problems; nevertheless, this thesis avoids giving too many advantages to the GA implementations in terms of parametrization. The classic GA and the ReGen GA are tuned with some variations of only two standard operators, Single Bit Mutation and Single Point Crossover. For all experiments, three marking periods have been defined; this number of periods is chosen for testing purposes only. Marking periods are delineated with vertical lines in the figures of reported results: blue lines depict the start of a marking period and gray lines its end.
The setup for the classic GAs includes: $30$ runs; $1000$ iterations; a population size of $100$ individuals; tournament selection of size $4$ for parents; generational (GGA) and steady state (SSGA, with an elitist replacement policy) replacements to choose the fittest individuals for the new population; a per-bit mutation rate of $1.0/l$, where $l$ is the chromosome length; and single point crossover rates from $0.6$ to $1.0$.
The setup for the ReGen GA includes: $30$ runs; $1000$ iterations; a population size of $100$ individuals; tournament selection of size $4$ for parents; generational (GGA) and steady state (SSGA, with an elitist replacement policy) replacements to choose the fittest individuals for the new population; a per-bit mutation rate of $1.0/l$, where $l$ is the chromosome length; single point crossover rates from $0.6$ to $1.0$; and a marking probability of $0.02$ (within a marking event, the probability of adding a tag is $0.35$, of removing a tag is $0.35$, and of modifying a tag is $0.3$). Three marking periods have been defined, starting at iterations $200$, $500$, and $800$, each lasting $150$ iterations.
It is worth mentioning that a crossover rate of $0.7$, along with a mutation rate of $1.0/l$, is considered a good parameter setting for binary encoding problems \cite{TBACK, MITCHELL2}; even so, five crossover rates are used to evaluate the impact of tag inheritance. Table~\ref{c4table1} summarizes the general setup for the experiments.
\begin{table}[h]
\centering
\caption{General configuration with 5 different Crossover rates}
\label{c4table1}
\begin{tabular}{lcc}
\hline
Factor Name & \textbf{Classic GA} & \textbf{ReGen GA}\\\hline
Mutation Operator Rate & $1.0/l$ & $1.0/l$\\
Crossover Operator Rate & 0.6 - 1.0 & 0.6 - 1.0 \\
Marking Rate & {\em none} & 0.02\\
Marking Periods & {\em none} & 3 \\
Population Size & 100 & 100 \\
Generations & 1000 & 1000 \\
Runs & 30 & 30 \\
Parent selection & {\em Tournament} & {\em Tournament} \\\hline
\end{tabular}
\end{table}
\section{Binary Problems}\label{c4s2}
Chromosome encoding is one of the challenges when solving problems with GAs; the choice of encoding depends on the given problem. Binary encoding is the most traditional and simple, essentially because early GA implementations used this encoding type. This section reports experiments with four different binary functions.
\subsection{Experiments}\label{c4s2ss1}
The experiments use binary encoding to determine the applicability of the proposed technique. In binary encoding, a vector of binary values encodes the problem's solution. Table~\ref{c4table2} lists the selected functions, each with a single fixed bit-string length. These functions have been chosen as a first approximation for testing the technique. This does not mean that other bit-string lengths are not allowed; any length can be set. The selected functions and fixed string lengths simply make the experiments easier to follow.
\begin{table}[h]
\centering
\caption{Experimental Functions}
\label{c4table2}
\begin{tabular}{cccl}
\hline
Function & Genome Length & Global Optimum\\ \hline
\verb"Deceptive 3" & 360 & 3600 \\
\verb"Deceptive 4" & 360 & 450 \\
\verb"Royal Road" & 360 & 360 \\
\verb"Max Ones" & 360 & 360 \\\hline
\end{tabular}
\end{table}
\subsubsection{Deceptive Order Three and Deceptive Order Four Trap}
The deceptive functions proposed by Goldberg in 1989 are challenging problems for conventional genetic algorithms (GAs): they mislead the search toward local optima (deceptive attractors) rather than the global optimum \cite{GOLDBERG}. An individual's fitness is defined as indicated in Table~\ref{c4table3} for Deceptive order three and in Table~\ref{c4table4} for the Deceptive order four trap.
\begin{table}[H]
\centering
\caption{Order Three Function}
\label{c4table3}
\begin{tabular}{ccccl}
\hline
String & Value & String & Value \\\hline
\verb"000 "& 28 & \verb"100" & 14\\
\verb"001" & 26 & \verb"101" & 0\\
\verb"010" & 22 & \verb"110" & 0\\
\verb"011" & 0 & \verb"111" & 30\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Order Four Trap Function}
\label{c4table4}
\begin{tabular}{ccccl}
\hline
String & Value & String & Value \\\hline
\verb"0000" & 5 & \verb"1000" & 1\\
\verb"0001" & 1 & \verb"1001" & 2\\
\verb"0010" & 1 & \verb"1010" & 2\\
\verb"0011" & 2 & \verb"1011" & 3\\
\verb"0100" & 1 & \verb"1100" & 2\\
\verb"0101" & 2 & \verb"1101" & 3\\
\verb"0110" & 2 & \verb"1110" & 3\\
\verb"0111" & 3 & \verb"1111" & 4\\\hline
\end{tabular}
\end{table}
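Under the block-wise evaluation implied by the genome length and global optimum in Table~\ref{c4table2} (for order three: 120 blocks of three bits, $120 \times 30 = 3600$), the deceptive fitness can be sketched in Python. The function name and the string representation of chromosomes are assumptions of this sketch; the block values come directly from Table~\ref{c4table3}.

```python
# Block values from the Deceptive order-three table.
D3_VALUES = {"000": 28, "001": 26, "010": 22, "011": 0,
             "100": 14, "101": 0, "110": 0, "111": 30}


def deceptive3_fitness(bits):
    """Deceptive order-three fitness: sum the table value of each 3-bit block."""
    return sum(D3_VALUES[bits[i:i + 3]] for i in range(0, len(bits), 3))
```

Note the deceptive structure: the attractor \verb"000" scores 28, close to the optimum block \verb"111" at 30, so hill-climbing on block averages is pulled away from the global optimum.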
\subsubsection{Max Ones and Royal Road}
The Max Ones problem (or BitCounting) is a simple problem that consists of maximizing the number of $1$'s in a bit string. The fitness of an individual is defined as the number of bits set to $1$. Formally, the problem is to find a string $x = (x_1, x_2, x_3, \ldots, x_n)$, with $x_i \in \{0,1\}$, that maximizes Equation~\ref{c4eq1}:
\begin{eqnarray}
f(x)=\sum_{i=1}^{n}x_i\;
\label{c4eq1}
\end{eqnarray}
The Royal Road function, developed by Forrest and Mitchell in 1993 \cite{FORREST-MITCHELL}, consists of a list of partially specified bit strings (schemas) containing sequences of 0's and 1's. A schema contributes to the fitness only when all of its defined bits are set to 1. For the experiments, order-8 schemas are configured.
A simple Royal Road function, $R_{1}$, is defined by Equation~\ref{c4eq2}. $R_{1}$ consists of a list of partially specified bit strings (schemas) $s_{i}$, in which (\textquoteleft${*}$\textquoteright) denotes a wild card (i.e., allowed to be either 0 or 1). A bit string $x$ is said to be an instance of a schema $s$, written $x \in s$, if $x$ matches $s$ in the defined (i.e., non-\textquoteleft${*}$\textquoteright) positions. With $o(s_{i})$ denoting the order of $s_{i}$ (its number of defined bits), the fitness $R_{1}(x)$ of a bit string $x$ is defined as follows:
\begin{eqnarray}
R_{1}(x)=\sum_{i=1}^{8}\delta_{i}(x)\,o(s_{i}), \quad \text{where} \quad \delta_{i}(x) =
\begin{cases}
1 & \text{if $x \in s_{i}$} \\
0 & \text{otherwise}
\end{cases}
\label{c4eq2}
\end{eqnarray}
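Assuming the order-8 schemas tile the 360-bit chromosome contiguously (45 blocks, $45 \times 8 = 360$, matching the global optimum in Table~\ref{c4table2}), both fitness functions can be sketched in Python; the function names and string representation are assumptions of this sketch.

```python
def royal_road_fitness(bits, order=8):
    """Royal Road R1: each contiguous block of `order` bits contributes
    o(s_i) = order points when every bit in the block is 1 (x is an
    instance of the schema)."""
    return sum(order for i in range(0, len(bits), order)
               if all(b == "1" for b in bits[i:i + order]))


def max_ones_fitness(bits):
    """Max Ones: number of bits set to 1."""
    return sum(1 for b in bits if b == "1")
```

The stepwise reward of Royal Road (a block pays off only when complete) is what makes it hard for a classic GA, whereas Max Ones rewards every single-bit improvement.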
\subsection{Results}\label{c4s2ss2}
Based on the defined configuration, the classic GA and the ReGen GA are compared to identify the behavior of tags during the individuals' evolution. Results are tabulated from Table~\ref{c4table5} to Table~\ref{c4table8}; these tables cover the binary functions Deceptive order three (D3), Deceptive order four trap (D4), Royal Road (RR), and Max Ones (MO), for both EA implementations with generational (GGA) and steady state (SSGA) replacements, and five crossover rates per technique. For each rate, the best fitness based on the maximum median performance is reported, followed by the standard deviation of the observed value and the iteration at which the reported fitness is found; the iteration is enclosed in square brackets.
Graphs from Fig.~\ref{c4fig1} to Fig.~\ref{c4fig4} illustrate the fitness of the best individuals of the populations in the experiments; the reported fitnesses are based on the maximum median performance. Each graph shows the tendency of the best individuals per technique. For the ReGen GA and the classic GA, two replacement methods are applied: steady state and generational. The fitness evolution of individuals can be followed through the green and red lines, which depict the best individual's fitness for the classic GAs, while blue and black lines trace the best individual's fitness for the ReGen GAs. From top to bottom, each figure displays the individuals' behavior for crossover rates from $0.6$ to $1.0$. The rightmost figures show the defined marking periods: vertical blue lines depict the start of a marking period, and gray lines delimit its end.
\newpage
\begin{table}[H]
\centering
\caption{Results of the experiments for Generational and Steady replacements: Deceptive Order 3}
\label{c4table5}
\begin{tabular}{p{1cm}cccccl}
\hline
\multirow{2}{5cm}{\textbf{Rate}} & \multicolumn{4}{c}{\textbf{ Deceptive Order 3}} \\
\cline{2-5} & \textbf{Classic GGA} & \textbf{Classic SSGA} & \textbf{ReGen GGA} & \textbf{ReGen SSGA} \\
\hline
0.6 & $3436 \pm11.86 [206]$ & $3430 \pm10.80 [192]$ & $3578 \pm11.70 [894]$ & $3573 \pm12.60 [926]$\\
0.7 & $3429 \pm09.50 [846]$ & $3433 \pm09.06 [263]$ & $3577 \pm13.31 [919]$ & $3576 \pm14.77 [929]$\\
0.8 & $3436 \pm10.31 [224]$ & $3438 \pm12.38 [171]$ & $3580 \pm16.20 [911]$ & $3582 \pm13.90 [884]$\\
0.9 & $3438 \pm11.12 [171]$ & $3435 \pm10.60 [163]$ & $3582 \pm15.54 [928]$ & $3586 \pm11.76 [918]$\\
1.0 & $3435 \pm09.33 [218]$ & $3437 \pm12.51 [145]$ & $3584 \pm13.49 [957]$ & $3587 \pm11.83 [854]$\\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=1.9in]{imagesThesis/D3063.pdf}
\includegraphics[width=1.9in]{imagesThesis/D3062.pdf}
\includegraphics[width=1.9in]{imagesThesis/D306.pdf}
\includegraphics[width=1.9in]{imagesThesis/D3073.pdf}
\includegraphics[width=1.9in]{imagesThesis/D3072.pdf}
\includegraphics[width=1.9in]{imagesThesis/D307.pdf}
\includegraphics[width=1.9in]{imagesThesis/D3083.pdf}
\includegraphics[width=1.9in]{imagesThesis/D3082.pdf}
\includegraphics[width=1.9in]{imagesThesis/D308.pdf}
\includegraphics[width=1.9in]{imagesThesis/D3093.pdf}
\includegraphics[width=1.9in]{imagesThesis/D3092.pdf}
\includegraphics[width=1.9in]{imagesThesis/D309.pdf}
\includegraphics[width=1.9in]{imagesThesis/D3103.pdf}
\includegraphics[width=1.9in]{imagesThesis/D3102.pdf}
\includegraphics[width=1.9in]{imagesThesis/D310.pdf}
\caption{Deceptive Order $3$. Generational replacement (GGA) and Steady State replacement (SSGA). From top to bottom, crossover rates from $0.6$ to $1.0$.}
\label{c4fig1}
\end{figure}
\begin{table}[H]
\centering
\caption{Results of the experiments for Generational and Steady replacements: Deceptive Order 4}
\label{c4table6}
\begin{tabular}{p{1cm}cccccl}
\hline
\multirow{2}{5cm}{\textbf{Rate}} & \multicolumn{4}{c}{\textbf{ Deceptive Order 4}} \\
\cline{2-5} & \textbf{Classic GGA} & \textbf{Classic SSGA} & \textbf{ReGen GGA} & \textbf{ReGen SSGA} \\
\hline
0.6 & $388.0 \pm4.62 [175]$ & $387.5\pm4.84 [273]$ & $445.0 \pm3.32 [916]$ & $443.5 \pm2.86 [909] $\\
0.7 & $389.0 \pm4.82 [155]$ & $387.0 \pm3.37 [191]$ & $446.0 \pm1.83 [900]$ & $446.0 \pm2.74 [846] $\\
0.8 & $390.0 \pm3.88 [156]$ & $390.0 \pm4.32 [158]$ & $444.5\pm3.94 [898]$ &
$445.0 \pm3.57 [608] $\\
0.9 & $390.0 \pm3.40 [139]$ & $ 388.5\pm5.20 [131]$ & $ 445.5\pm2.16 [943]$ & $446.0 \pm2.14 [897] $\\
1.0 & $392.5\pm4.82 [148]$& $ 391.5\pm4.07 [155]$ & $ 446.5\pm2.89 [963]$ & $446.0 \pm3.12 [854]$ \\\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=1.9in]{imagesThesis/D4063.pdf}
\includegraphics[width=1.9in]{imagesThesis/D4062.pdf}
\includegraphics[width=1.9in]{imagesThesis/D406.pdf}
\includegraphics[width=1.9in]{imagesThesis/D4073.pdf}
\includegraphics[width=1.9in]{imagesThesis/D4072.pdf}
\includegraphics[width=1.9in]{imagesThesis/D407.pdf}
\includegraphics[width=1.9in]{imagesThesis/D4083.pdf}
\includegraphics[width=1.9in]{imagesThesis/D4082.pdf}
\includegraphics[width=1.9in]{imagesThesis/D408.pdf}
\includegraphics[width=1.9in]{imagesThesis/D4093.pdf}
\includegraphics[width=1.9in]{imagesThesis/D4092.pdf}
\includegraphics[width=1.9in]{imagesThesis/D409.pdf}
\includegraphics[width=1.9in]{imagesThesis/D4103.pdf}
\includegraphics[width=1.9in]{imagesThesis/D4102.pdf}
\includegraphics[width=1.9in]{imagesThesis/D410.pdf}
\caption{Deceptive Order $4$. Generational replacement (GGA) and Steady State replacement (SSGA). From top to bottom, crossover rates from $0.6$ to $1.0$.}
\label{c4fig2}
\end{figure}
\begin{table}[H]
\centering
\caption{Results of the experiments for Generational and Steady replacements: Royal Road}
\label{c4table7}
\begin{tabular}{p{1cm}cccccl}
\hline
\multirow{2}{5cm}{\textbf{Rate}} & \multicolumn{4}{c}{\textbf{ Royal Road}} \\
\cline{2-5} & \textbf{Classic GGA} & \textbf{Classic SSGA} & \textbf{ReGen GGA} & \textbf{ReGen SSGA} \\
\hline
0.6 & $ 200 \pm28.34 [928]$ & $ 96.0 \pm18.25 [810]$ & $ 360 \pm6.64 [632]$ & $ 352 \pm10.81 [879]$\\
0.7 & $ 216 \pm19.19 [919]$ & $ 96.0 \pm15.72 [440]$ & $ 360 \pm9.27 [593]$ & $ 352 \pm14.92 [523]$\\
0.8 & $ 248 \pm15.93 [977]$ & $ 112 \pm21.83 [569]$ & $ 360 \pm7.86 [523]$ & $ 352 \pm13.45 [539]$\\
0.9 & $ 264 \pm23.76 [950]$ & $ 180 \pm23.37 [887]$ & $ 360 \pm6.64 [499]$ & $ 360 \pm08.27 [929]$\\
1.0 & $ 280 \pm20.58 [951]$ & $ 188 \pm16.74 [436]$ & $ 360 \pm6.12 [381]$ & $ 356 \pm07.20 [867] $\\\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=1.9in]{imagesThesis/RR063.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR062.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR06.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR073.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR072.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR07.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR083.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR082.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR08.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR093.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR092.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR09.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR103.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR102.pdf}
\includegraphics[width=1.9in]{imagesThesis/RR10.pdf}
\caption{Royal Road. Generational replacement (GGA) and Steady State replacement (SSGA). From top to bottom, crossover rates from $0.6$ to $1.0$.}
\label{c4fig3}
\end{figure}
\begin{table}[H]
\centering
\caption{Results of the experiments for Generational and Steady replacements: Max Ones}
\label{c4table8}
\begin{tabular}{p{1cm}cccccl}
\hline
\multirow{2}{5cm}{\textbf{Rate}} & \multicolumn{4}{c}{\textbf{ Max Ones}} \\
\cline{2-5} & \textbf{Classic GGA} & \textbf{Classic SSGA} & \textbf{ReGen GGA} & \textbf{ReGen SSGA} \\
\hline
0.6 & $ 360 \pm0.92 [197]$ & $ 360 \pm0.74 [192]$ & $ 360 \pm0.89 [194]$ & $ 360 \pm0.87 [183]$\\
0.7 & $ 360 \pm1.03 [172]$ & $ 360 \pm0.66 [174]$ & $ 360 \pm0.98 [169]$ & $ 360 \pm0.98 [164]$\\
0.8 & $ 360 \pm1.27 [155]$ & $ 360 \pm0.83 [158]$ & $ 360 \pm0.87 [159]$ & $ 360 \pm1.06 [155]$\\
0.9 & $ 360 \pm0.89 [145]$ & $ 360 \pm0.92 [143]$ & $ 360 \pm0.89 [146]$ & $ 360 \pm1.00 [135]$\\
1.0 & $ 360 \pm0.89 [136]$ & $ 360 \pm0.87 [138]$ & $ 360 \pm0.83 [138]$ & $ 360 \pm0.89 [130]$\\\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=1.9in]{imagesThesis/MO063.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO062.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO06.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO073.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO072.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO07.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO083.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO082.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO08.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO093.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO092.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO09.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO103.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO102.pdf}
\includegraphics[width=1.9in]{imagesThesis/MO10.pdf}
\caption{Max Ones. Generational replacement (GGA) and Steady State replacement (SSGA). From top to bottom, crossover rates from $0.6$ to $1.0$.}
\label{c4fig4}
\end{figure}
\newpage
Based on the tabulated results in Table~\ref{c4table5} for Deceptive Order Three, it can be noted that the ReGen GA performs better than the classic GA. The ReGen GA is able to discover varied near-optimal solutions until the configured number of iterations is reached. However, it does not report solutions with the global optimum ($3600$); the best solutions are close to the peak value, and no better solutions than those reported are reached. In Fig.~\ref{c4fig1} it is noticeable that the pressure applied to chromosomes at iterations $200$, $500$, and $800$ does cause a change in the evolution of individuals. After a marking period starts, the fitness moves closer to the optimum, and populations improve their performance once tags are added. The ReGen GA found a variety of suitable solutions during the evolution process, exposing the proposed approach's ability to discover novelties that are not identified by the classic GA. Fig.~\ref{c4fig1} also shows that the classic GA's performance is below the ReGen GA's at all crossover rate levels.
The tabulated results in Table~\ref{c4table6} for the Deceptive Order Four Trap function show that the ReGen GA performs better than the classic GA. ReGen GA solutions surpass the local maximum of $440$ but do not reach the global optimum ($450$); no better solutions than those reported are reached. In Fig.~\ref{c4fig2} it is notable that the pressure applied to chromosomes at iteration $200$ produces a change in the evolution of individuals. After the marking period starts, the fitness rises near the optimum, and populations improve their performance once individuals' chromosomes are marked. Fig.~\ref{c4fig2} also shows that the classic GA's performance is below the ReGen GA's at all crossover rate levels: ReGen GA solutions reach a local optimum above $400$, whereas classic GA solutions stay under $400$ for all crossover rates.
Next in order, the tabulated results in Table~\ref{c4table7} for the Royal Road function show that the ReGen GA reaches solutions with the global optimum ($360$) at every crossover rate under generational replacement. Nevertheless, under steady state replacement, solutions with the absolute maximum are reported for only one crossover rate. In Fig.~\ref{c4fig3} it is noticeable that the pressure applied to chromosomes at iterations $200$, $500$, and $800$ causes a significant change in the evolution of individuals. After a marking period starts, the fitness improves toward the optimum, and populations improve their performance once the tags are added. Fig.~\ref{c4fig3} also shows that the classic GA's performance is below the ReGen GA's at all crossover rate levels; the classic GA does not reach suitable solutions for this experiment. Additionally, its maximum fitnesses are obtained in late iterations, whereas the ReGen GA obtained better solutions in earlier iterations.
Finally, the tabulated results in Table~\ref{c4table8} for the Max Ones objective function show that the ReGen GA and the classic GA have similar performance. The experiments show that for the Max Ones function, optimal solutions are found by both implementations. In Fig.~\ref{c4fig4}, the pressure applied to chromosomes during marking periods does not cause any change in the evolution of individuals. The reason is that before the marking periods start, individual scores are already near the optimum, or the global optimum has already been found. After a marking period starts, the fitnesses of the best individuals remain stable. Fig.~\ref{c4fig4} also shows that both performances are similar at all crossover rate levels.
\subsection{Statistical Analysis}\label{c4s2ss3}
Three different tests are performed: a One-Way ANOVA test, a pairwise Student's t-test, and the Paired Samples Wilcoxon Test (also known as the Wilcoxon signed-rank test). The data set ReGen EAs Samples in Appendix \ref{appendB} is used; the samples contain twenty EA implementations for each of the following functions: Deceptive Order Three, Deceptive Order Four Trap, Royal Road, and Max Ones. The samples refer to the best fitness of a solution found in each run, and the number of executions per algorithm is $30$. The implementations cover classic GAs and ReGen GAs with Generational (G) and Steady State (SS) population replacements, and crossover rates from $0.6$ to $1.0$.
The null hypothesis is a type of conjecture used in statistics that proposes that there is no difference between specific characteristics of a data-generating process. The ANOVA test, an analysis of variance, is performed to evaluate the null hypothesis, i.e., to determine whether a statistically significant difference exists in the performance of the various EAs. If the p-value for a combination of EA variations is smaller than $0.05$ (the alpha value), the variances differ, so there is a statistically significant difference between the algorithms; rejecting the null hypothesis brings up the alternative hypothesis, which proposes that a difference exists. When significant differences between groups (EAs) are found, Student's t-test is used to interpret the result of the one-way ANOVA: a multiple pairwise-comparison t-test helps to determine which pairs of EAs differ, i.e., whether the mean difference between specific pairs of EAs is statistically significant. In order to identify any significant difference in the median fitness between the two experimental conditions (classic GAs and ReGen GAs), the Wilcoxon signed-rank test is performed. For the Wilcoxon test, crossover rates are ignored, and the EAs are classified into four groups: GGAs vs. ReGen GGAs and SSGAs vs. ReGen SSGAs.
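The testing procedure described above can be sketched with SciPy's statistics module. The {\tt compare\_algorithms} function name, the dictionary input, and the use of the Wilcoxon signed-rank test for every significant pairwise comparison are simplifications of this sketch, not the exact analysis pipeline used in the thesis.

```python
from scipy import stats


def compare_algorithms(samples, alpha=0.05):
    """One-way ANOVA across groups of best-fitness samples; if significant,
    follow up with a Wilcoxon signed-rank test on each pair of groups.

    samples: dict name -> list of best fitnesses (one per run, equal lengths).
    Returns (anova_p, {(name_i, name_j): wilcoxon_p}); the pairwise dict is
    empty when the ANOVA null hypothesis is not rejected.
    """
    groups = list(samples.values())
    _, anova_p = stats.f_oneway(*groups)
    pairwise = {}
    if anova_p < alpha:  # null hypothesis rejected: some group differs
        names = list(samples)
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                _, w_p = stats.wilcoxon(samples[names[i]], samples[names[j]])
                pairwise[(names[i], names[j])] = w_p
    return anova_p, pairwise
```

Note that the Wilcoxon test is paired, so it only applies when the two groups contain matched runs of equal length, as is the case here ($30$ runs per algorithm).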
Based on the ReGen EAs Samples in Appendix \ref{appendB}, the analysis of variance is computed to measure the difference between the evolutionary algorithm implementations. The variations include classic GAs and ReGen GAs, replacement strategies (Generational and Steady State), and crossover rates from $0.6$ to $1.0$, for twenty algorithms in total. Table~\ref{c4table9} shows a summary for each algorithm and function: the number of samples per algorithm ($30$), the sum of the fitnesses, the average fitness, and their variances. The results of the single-factor ANOVA are tabulated in Table~\ref{c4table10}.
\begin{landscape}
\begin{table}[H]
\centering
\caption{Anova Single Factor: SUMMARY}
\label{c4table9}
\scriptsize
\begin{tabular}{llllllllllllll}
\hline
\multicolumn{2}{l}{} & \multicolumn{3}{c}{\textbf{Deceptive Order Three}} & \multicolumn{3}{c}{\textbf{Deceptive Order Four Trap}} & \multicolumn{3}{c}{\textbf{Royal Road}} & \multicolumn{3}{c}{\textbf{Max Ones}} \\
Groups & Count & Sum & Average & Variance & Sum & Average & Variance & Sum & Average & Variance & Sum & Average & Variance \\\hline
GGAX06 & 30 & 103036 & 3434.533333 & 119.4298851 & 11695 & 389.8333333 & 20.55747126 & 6040 & 201.3333333 & 852.2298851 & 10800 & 360 & 0 \\
GGAX07 & 30 & 102898 & 3429.933333 & 88.96091954 & 11662 & 388.7333333 & 20.54712644 & 6616 & 220.5333333 & 294.3264368 & 10800 & 360 & 0 \\
GGAX08 & 30 & 103016 & 3433.866667 & 102.3264368 & 11701 & 390.0333333 & 16.3091954 & 7384 & 246.1333333 & 210.4643678 & 10800 & 360 & 0 \\
GGAX09 & 30 & 103164 & 3438.8 & 113.2689655 & 11713 & 390.4333333 & 10.87471264 & 7880 & 262.6666667 & 490.2988506 & 10800 & 360 & 0 \\
GGAX10 & 30 & 103080 & 3436 & 83.31034483 & 11757 & 391.9 & 24.50689655 & 8504 & 283.4666667 & 325.2229885 & 10800 & 360 & 0 \\
SSGAX06 & 30 & 102898 & 3429.933333 & 107.1678161 & 11626 & 387.5333333 & 23.42988506 & 2976 & 99.2 & 364.5793103 & 10800 & 360 & 0 \\
SSGAX07 & 30 & 103046 & 3434.866667 & 78.53333333 & 11611 & 387.0333333 & 11.68850575 & 2928 & 97.6 & 288.662069 & 10800 & 360 & 0 \\
SSGAX08 & 30 & 103038 & 3434.6 & 129.9724138 & 11686 & 389.5333333 & 17.42988506 & 3304 & 110.1333333 & 395.8436782 & 10800 & 360 & 0 \\
SSGAX09 & 30 & 103020 & 3434 & 100.9655172 & 11649 & 388.3 & 25.38965517 & 3392 & 113.0666667 & 563.7885057 & 10800 & 360 & 0 \\
SSGAX10 & 30 & 103084 & 3436.133333 & 148.9471264 & 11739 & 391.3 & 16.56206897 & 3696 & 123.2 & 364.5793103 & 10800 & 360 & 0 \\
ReGenGGAX06 & 30 & 107326 & 3577.533333 & 124.6022989 & 13340 & 444.6666667 & 10.43678161 & 10720 & 357.3333333 & 19.12643678 & 10800 & 360 & 0 \\
ReGenGGAX07 & 30 & 107364 & 3578.8 & 146.3724138 & 13383 & 446.1 & 2.644827586 & 10744 & 358.1333333 & 20.67126437 & 10800 & 360 & 0 \\
ReGenGGAX08 & 30 & 107328 & 3577.6 & 200.3862069 & 13324 & 444.1333333 & 12.32643678 & 10784 & 359.4666667 & 4.11954023 & 10800 & 360 & 0 \\
ReGenGGAX09 & 30 & 107372 & 3579.066667 & 175.6505747 & 13378 & 445.9333333 & 4.616091954 & 10760 & 358.6666667 & 9.195402299 & 10800 & 360 & 0 \\
ReGenGGAX10 & 30 & 107412 & 3580.4 & 163.6965517 & 13373 & 445.7666667 & 7.840229885 & 10776 & 359.2 & 5.95862069 & 10800 & 360 & 0 \\
ReGenSSGAX06 & 30 & 107234 & 3574.466667 & 161.291954 & 13310 & 443.6666667 & 7.609195402 & 10480 & 349.3333333 & 67.67816092 & 10800 & 360 & 0 \\
ReGenSSGAX07 & 30 & 107202 & 3573.4 & 211.0758621 & 13367 & 445.5666667 & 7.21954023 & 10496 & 349.8666667 & 101.2229885 & 10800 & 360 & 0 \\
ReGenSSGAX08 & 30 & 107394 & 3579.8 & 176.3724138 & 13337 & 444.5666667 & 7.564367816 & 10504 & 350.1333333 & 135.4298851 & 10800 & 360 & 0 \\
ReGenSSGAX09 & 30 & 107486 & 3582.866667 & 119.3609195 & 13375 & 445.8333333 & 4.281609195 & 10648 & 354.9333333 & 41.85747126 & 10800 & 360 & 0 \\
ReGenSSGAX10 & 30 & 107544 & 3584.8 & 115.2 & 13351 & 445.0333333 & 7.688505747 & 10656 & 355.2 & 29.13103448 & 10800 & 360 & 0 \\\hline
\end{tabular}
\end{table}
\end{landscape}
\begin{table}[H]
\centering
\caption{ANOVA Single Factor: ANOVA}
\label{c4table10}
\begin{tabular}{lllllll}
\hline
\multicolumn{7}{c}{\textbf{Deceptive Order Three}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 3141837.193 & 19 & 165359.8523 & 1240.0941 & 0 & 1.60449 \\
Within Groups & 77339.86667 & 580 & 133.3445977 & & & \\
& & & & & & \\
Total & 3219177.06 & 599 & & & & \\\hline
\multicolumn{7}{c}{\textbf{Deceptive Order Four Trap}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 465618.6183 & 19 & 24506.24307 & 1888.5604 & 0 & 1.60449 \\
Within Groups & 7526.166667 & 580 & 12.97614943 & & & \\
& & & & & & \\
Total & 473144.785 & 599 & & & & \\\hline
\multicolumn{7}{c}{\textbf{Royal Road}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 6329162.56 & 19 & 333113.8189 & 1453.2537 & 0 & 1.60449 \\
Within Groups & 132947.2 & 580 & 229.2193103 & & & \\
& & & & & & \\
Total & 6462109.76 & 599 & & & & \\\hline
\multicolumn{7}{c}{\textbf{Max Ones}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 5.53E-25 & 19 & 2.91E-26 & 1 & 0.459 & 1.60449 \\
Within Groups & 1.69E-23 & 580 & 2.91E-26 & & & \\
& & & & & & \\
Total & 1.74E-23 & 599 & & & & \\\hline
\end{tabular}
\end{table}
As the P-values for the Deceptive Order Three, Deceptive Order Four Trap, and Royal Road functions are less than the significance level $0.05$, it can be concluded that there are significant differences between groups, as shown in Table~\ref{c4table10} ({\em P-value} columns). In a one-way ANOVA test, a significant P-value indicates that some group means differ, but not which pairs of groups differ. To interpret the one-way ANOVA results, multiple pairwise comparisons with Student's t-test are performed to determine whether the mean difference between specific pairs of groups is statistically significant. In addition, paired-sample Wilcoxon tests are computed.
The ANOVA test for the Max Ones samples shows a P-value higher than the significance level $0.05$; this means that there are no significant differences between the algorithms (EAs) listed in the model summary in Table~\ref{c4table9}. Therefore, neither multiple pairwise-comparison Student's t-tests between group means nor a paired-sample Wilcoxon test is performed for this function.
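The single-factor ANOVA used above can be sketched with SciPy. The following is a minimal illustration only: the samples below are synthetic stand-ins (hypothetical means and variances loosely resembling the summary table), not the Appendix \ref{appendB} data.

```python
import numpy as np
from scipy.stats import f_oneway

# Synthetic best-fitness samples standing in for the Appendix B data
# (hypothetical group means chosen to resemble the summary table).
rng = np.random.default_rng(0)
gga = rng.normal(3434, 10, 30)        # classic generational GA
ssga = rng.normal(3434, 11, 30)       # classic steady state GA
regen_gga = rng.normal(3578, 13, 30)  # ReGen generational GA

# One-way (single-factor) ANOVA across the three groups
f_stat, p_value = f_oneway(gga, ssga, regen_gga)
print(f"F = {f_stat:.2f}, p = {p_value:.3e}")
```

A p-value below $0.05$ only signals that at least one group mean differs; identifying which pairs differ requires the pairwise comparisons reported in the tables.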
\begin{landscape}
\begin{figure}[H]
\centering
\includegraphics[width=4.2in]{imagesThesis/boxD3.pdf}
\includegraphics[width=4.2in]{imagesThesis/boxD32.pdf}
\includegraphics[width=4.2in]{imagesThesis/boxD4.pdf}
\includegraphics[width=4.2in]{imagesThesis/boxD42.pdf}
\caption{From top to bottom: Deceptive Order Three and Deceptive Order Four Trap Functions. On the left, EAs with Generational replacement (GGA) and Steady State replacement (SSGA) with Crossover rates from $0.6$ to $1.0$. On the right, EAs grouped by Generational replacement (GGA) and Steady State replacement (SSGA).}
\label{c4fig5}
\end{figure}
\end{landscape}
\begin{figure}[H]
\centering
\includegraphics[width=4.2in]{imagesThesis/boxRR.pdf}
\includegraphics[width=4.2in]{imagesThesis/boxRR2.pdf}
\caption{Royal Road Function. On top, EAs with Generational (GGA) and Steady State (SSGA) replacements with Crossover rates from $0.6$ to $1.0$. On the bottom, EAs grouped by Generational replacement (GGA) and Steady State replacement (SSGA).}
\label{c4fig6}
\end{figure}
Box plots in Fig.~\ref{c4fig5} and Fig.~\ref{c4fig6} depict the median fitness of the EAs' best solutions (ReGen EAs samples in Appendix \ref{appendB}). On the left, the twenty EA variations are shown with different crossover rates: Gray ($0.6$), Orange ($0.7$), Blue ($0.8$), White ($0.9$), and Yellow ($1.0$). On the right, the figures illustrate the median fitness of classic and epigenetic EAs grouped by population replacement type: Gray (GGA), Orange (ReGen GGA), Blue (ReGen SSGA), and White (SSGA). For the Deceptive Order Three function, the median fitness of each epigenetic EA is close to the global optimum ($3600$), while the median fitnesses of the classic GAs lie below the local optimum ($3450$). For Deceptive Order Four Trap, the median fitness is above $440$ for all epigenetic implementations; in contrast, the median fitness of the classic GAs does not exceed $400$. The same holds for the Royal Road function: the median fitness reported for the epigenetic evolutionary algorithms surpasses the local optimum ($320$), whereas the maximum median fitness of the traditional GAs is $320$. Based on these data, the epigenetic GAs appear to find better solutions than the classic GAs; however, it must be determined whether this finding is statistically significant.
\begin{landscape}
\begin{table}[H]
\centering
\caption{D3 Student's t-test pairwise comparisons with pooled standard deviation. Benjamini-Hochberg (BH) as p-value adjustment method.}
\label{c4table11}
\tiny
\begin{tabular}{llllllllllllllllllll}
\hline
EAs & GGAX06 & GGAX07 & GGAX08 & GGAX09 & GGAX10 & ReGenGGAX06 & ReGenGGAX07 & ReGenGGAX08 & ReGenGGAX09 & ReGenGGAX10 \\\hline
GGAX07 & 0.17764919 & - & - & - & - & - & - & - & - & - \\
GGAX08 & 0.87373395 & 0.24754713 & - & - & - & - & - & - & - & - \\
GGAX09 & 0.21528009 & \cellcolor[HTML]{EFEFEF}0.00554289 & 0.14859587 & - & - & - & - & - & - & - \\
GGAX10 & 0.71303353 & 0.0705529 & 0.57069862 & 0.43796354 & - & - & - & - & - & - \\
ReGenGGAX06 & \cellcolor[HTML]{EFEFEF}1.48E-203 & \cellcolor[HTML]{EFEFEF}1.40E-209 & \cellcolor[HTML]{EFEFEF}1.94E-204 & \cellcolor[HTML]{EFEFEF}1.24E-197 & \cellcolor[HTML]{EFEFEF}1.54E-201 & - & - & - & - & - \\
ReGenGGAX07 & \cellcolor[HTML]{EFEFEF}3.23E-205 & \cellcolor[HTML]{EFEFEF}3.71E-211 & \cellcolor[HTML]{EFEFEF}4.47E-206 & \cellcolor[HTML]{EFEFEF}2.13E-199 & \cellcolor[HTML]{EFEFEF}2.72E-203 & 0.75006919 & - & - & - & - \\
ReGenGGAX08 & \cellcolor[HTML]{EFEFEF}1.23E-203 & \cellcolor[HTML]{EFEFEF}1.22E-209 & \cellcolor[HTML]{EFEFEF}1.62E-204 & \cellcolor[HTML]{EFEFEF}1.01E-197 & \cellcolor[HTML]{EFEFEF}1.26E-201 & 0.98736535 & 0.76386956 & - & - & - \\
ReGenGGAX09 & \cellcolor[HTML]{EFEFEF}1.48E-205 & \cellcolor[HTML]{EFEFEF}2.02E-211 & \cellcolor[HTML]{EFEFEF}2.04E-206 & \cellcolor[HTML]{EFEFEF}8.99E-200 & \cellcolor[HTML]{EFEFEF}1.23E-203 & 0.70352874 & 0.95386568 & 0.71303353 & - & - \\
ReGenGGAX10 & \cellcolor[HTML]{EFEFEF}2.81E-207 & \cellcolor[HTML]{EFEFEF}5.69E-213 & \cellcolor[HTML]{EFEFEF}3.77E-208 & \cellcolor[HTML]{EFEFEF}1.26E-201 & \cellcolor[HTML]{EFEFEF}2.19E-205 & 0.42937099 & 0.69400018 & 0.43796354 & 0.74066234 & - \\
ReGenSSGAX06 & \cellcolor[HTML]{EFEFEF}2.62E-199 & \cellcolor[HTML]{EFEFEF}1.48E-205 & \cellcolor[HTML]{EFEFEF}3.09E-200 & \cellcolor[HTML]{EFEFEF}3.08E-193 & \cellcolor[HTML]{EFEFEF}2.93E-197 & 0.39042554 & 0.20794839 & 0.3822627 & 0.17764919 & 0.07707801 \\
ReGenSSGAX07 & \cellcolor[HTML]{EFEFEF}8.27E-198 & \cellcolor[HTML]{EFEFEF}3.48E-204 & \cellcolor[HTML]{EFEFEF}9.41E-199 & \cellcolor[HTML]{EFEFEF}1.09E-191 & \cellcolor[HTML]{EFEFEF}9.80E-196 & 0.22880804 & 0.11278114 & 0.22116359 & 0.09315297 & \cellcolor[HTML]{EFEFEF}0.03350474 \\
ReGenSSGAX08 & \cellcolor[HTML]{EFEFEF}1.73E-206 & \cellcolor[HTML]{EFEFEF}2.31E-212 & \cellcolor[HTML]{EFEFEF}2.34E-207 & \cellcolor[HTML]{EFEFEF}8.46E-201 & \cellcolor[HTML]{EFEFEF}1.33E-204 & 0.54845698 & 0.80065744 & 0.56134173 & 0.86012623 & 0.88238029 \\
ReGenSSGAX09 & \cellcolor[HTML]{EFEFEF}1.73E-210 & \cellcolor[HTML]{EFEFEF}5.11E-216 & \cellcolor[HTML]{EFEFEF}2.89E-211 & \cellcolor[HTML]{EFEFEF}5.83E-205 & \cellcolor[HTML]{EFEFEF}1.39E-208 & 0.11646875 & 0.23327459 & 0.12124417 & 0.26599333 & 0.50716527 \\
ReGenSSGAX10 & \cellcolor[HTML]{EFEFEF}9.47E-213 & \cellcolor[HTML]{EFEFEF}2.57E-218 & \cellcolor[HTML]{EFEFEF}2.10E-213 & \cellcolor[HTML]{EFEFEF}1.95E-207 & \cellcolor[HTML]{EFEFEF}4.35E-211 & 0.02681405 & 0.07375646 & 0.02823607 & 0.08927929 & 0.20079383 \\
SSGAX06 & 0.17764919 & 1 & 0.24754713 & \cellcolor[HTML]{EFEFEF}0.00554289 & 0.0705529 & \cellcolor[HTML]{EFEFEF}1.40E-209 & \cellcolor[HTML]{EFEFEF}3.71E-211 & \cellcolor[HTML]{EFEFEF}1.22E-209 & \cellcolor[HTML]{EFEFEF}2.02E-211 & \cellcolor[HTML]{EFEFEF}5.69E-213 \\
SSGAX07 & 0.94586954 & 0.14859587 & 0.80065744 & 0.24754713 & 0.77767313 & \cellcolor[HTML]{EFEFEF}4.02E-203 & \cellcolor[HTML]{EFEFEF}8.80E-205 & \cellcolor[HTML]{EFEFEF}3.33E-203 & \cellcolor[HTML]{EFEFEF}3.86E-205 & \cellcolor[HTML]{EFEFEF}7.54E-207 \\
SSGAX08 & 0.98736535 & 0.17392479 & 0.86012623 & 0.22116359 & 0.72683516 & \cellcolor[HTML]{EFEFEF}1.79E-203 & \cellcolor[HTML]{EFEFEF}3.86E-205 & \cellcolor[HTML]{EFEFEF}1.48E-203 & \cellcolor[HTML]{EFEFEF}1.80E-205 & \cellcolor[HTML]{EFEFEF}3.30E-207 \\
SSGAX09 & 0.89581456 & 0.23327459 & 0.9798171 & 0.1615203 & 0.60061513 & \cellcolor[HTML]{EFEFEF}2.90E-204 & \cellcolor[HTML]{EFEFEF}6.71E-206 & \cellcolor[HTML]{EFEFEF}2.37E-204 & \cellcolor[HTML]{EFEFEF}3.05E-206 & \cellcolor[HTML]{EFEFEF}5.59E-208 \\
SSGAX10 & 0.69400018 & 0.06448843 & 0.54845698 & 0.4643557 & 0.9798171 & \cellcolor[HTML]{EFEFEF}2.32E-201 & \cellcolor[HTML]{EFEFEF}4.02E-203 & \cellcolor[HTML]{EFEFEF}1.89E-201 & \cellcolor[HTML]{EFEFEF}1.79E-203 & \cellcolor[HTML]{EFEFEF}3.23E-205
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{D3 Student's t-test pairwise comparisons with pooled standard deviation. Benjamini-Hochberg (BH) as p-value adjustment method.}
\label{c4table12}
\tiny
\begin{tabular}{lllllllllllllllllll}
\hline
EAs & ReGenSSGAX06 & ReGenSSGAX07 & ReGenSSGAX08 & ReGenSSGAX09 & ReGenSSGAX10 & SSGAX06 & SSGAX07 & SSGAX08 & SSGAX09 \\\hline
GGAX07 & - & - & - & - & - & - & - & - & - \\
GGAX08 & - & - & - & - & - & - & - & - & - \\
GGAX09 & - & - & - & - & - & - & - & - & - \\
GGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX07 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX08 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX09 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX07 & 0.79147218 & - & - & - & - & - & - & - & - \\
ReGenSSGAX08 & 0.11646875 & 0.05569495 & - & - & - & - & - & - & - \\
ReGenSSGAX09 & \cellcolor[HTML]{EFEFEF}0.00897573 & \cellcolor[HTML]{EFEFEF}0.00290977 & 0.39042554 & - & - & - & - & - & - \\
ReGenSSGAX10 & \cellcolor[HTML]{EFEFEF}0.00105773 & \cellcolor[HTML]{EFEFEF}0.00027428 & 0.14533483 & 0.61389238 & - & - & - & - & - \\
SSGAX06 & \cellcolor[HTML]{EFEFEF}1.48E-205 & \cellcolor[HTML]{EFEFEF}3.48E-204 & \cellcolor[HTML]{EFEFEF}2.31E-212 & \cellcolor[HTML]{EFEFEF}5.11E-216 & \cellcolor[HTML]{EFEFEF}2.57E-218 & - & - & - & - \\
SSGAX07 & \cellcolor[HTML]{EFEFEF}7.65E-199 & \cellcolor[HTML]{EFEFEF}2.38E-197 & \cellcolor[HTML]{EFEFEF}4.47E-206 & \cellcolor[HTML]{EFEFEF}4.57E-210 & \cellcolor[HTML]{EFEFEF}2.21E-212 & 0.14859587 & - & - & - \\
SSGAX08 & \cellcolor[HTML]{EFEFEF}3.22E-199 & \cellcolor[HTML]{EFEFEF}1.01E-197 & \cellcolor[HTML]{EFEFEF}2.04E-206 & \cellcolor[HTML]{EFEFEF}2.05E-210 & \cellcolor[HTML]{EFEFEF}1.05E-212 & 0.17392479 & 0.95386568 & - & - \\
SSGAX09 & \cellcolor[HTML]{EFEFEF}4.72E-200 & \cellcolor[HTML]{EFEFEF}1.44E-198 & \cellcolor[HTML]{EFEFEF}3.30E-207 & \cellcolor[HTML]{EFEFEF}3.71E-211 & \cellcolor[HTML]{EFEFEF}2.66E-213 & 0.23327459 & 0.83276418 & 0.88238029 & - \\
SSGAX10 & \cellcolor[HTML]{EFEFEF}4.51E-197 & \cellcolor[HTML]{EFEFEF}1.51E-195 & \cellcolor[HTML]{EFEFEF}1.94E-204 & \cellcolor[HTML]{EFEFEF}2.06E-208 & \cellcolor[HTML]{EFEFEF}6.31E-211 & 0.06448843 & 0.75006919 & 0.70352874 & 0.57069862
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{D4 Student's t-test pairwise comparisons with pooled standard deviation. Benjamini-Hochberg (BH) as p-value adjustment method.}
\label{c4table13}
\tiny
\begin{tabular}{llllllllllllllllllll}
\hline
EAs & GGAX06 & GGAX07 & GGAX08 & GGAX09 & GGAX10 & ReGenGGAX06 & ReGenGGAX07 & ReGenGGAX08 & ReGenGGAX09 & ReGenGGAX10 \\\hline
GGAX07 & 0.29677823 & - & - & - & - & - & - & - & - & - \\
GGAX08 & 0.85224562 & 0.21774467 & - & - & - & - & - & - & - & - \\
GGAX09 & 0.59061606 & 0.09801927 & 0.71632051 & - & - & - & - & - & - & - \\
GGAX10 & \cellcolor[HTML]{EFEFEF}0.04153318 & \cellcolor[HTML]{EFEFEF}0.00125756 & 0.06818451 & 0.15999358 & - & - & - & - & - & - \\
ReGenGGAX06 & \cellcolor[HTML]{EFEFEF}1.17E-246 & \cellcolor[HTML]{EFEFEF}8.61E-251 & \cellcolor[HTML]{EFEFEF}6.80E-246 & \cellcolor[HTML]{EFEFEF}2.34E-244 & \cellcolor[HTML]{EFEFEF}1.43E-238 & - & - & - & - & - \\
ReGenGGAX07 & \cellcolor[HTML]{EFEFEF}5.36E-252 & \cellcolor[HTML]{EFEFEF}6.26E-256 & \cellcolor[HTML]{EFEFEF}2.87E-251 & \cellcolor[HTML]{EFEFEF}8.11E-250 & \cellcolor[HTML]{EFEFEF}3.13E-244 & 0.16928761 & - & - & - & - \\
ReGenGGAX08 & \cellcolor[HTML]{EFEFEF}1.31E-244 & \cellcolor[HTML]{EFEFEF}8.13E-249 & \cellcolor[HTML]{EFEFEF}7.53E-244 & \cellcolor[HTML]{EFEFEF}2.76E-242 & \cellcolor[HTML]{EFEFEF}2.03E-236 & 0.63324079 & 0.05347637 & - & - & - \\
ReGenGGAX09 & \cellcolor[HTML]{EFEFEF}2.19E-251 & \cellcolor[HTML]{EFEFEF}2.45E-255 & \cellcolor[HTML]{EFEFEF}1.13E-250 & \cellcolor[HTML]{EFEFEF}3.43E-249 & \cellcolor[HTML]{EFEFEF}1.37E-243 & 0.22613672 & 0.87161088 & 0.07995101 & - & - \\
ReGenGGAX10 & \cellcolor[HTML]{EFEFEF}8.61E-251 & \cellcolor[HTML]{EFEFEF}8.57E-255 & \cellcolor[HTML]{EFEFEF}4.55E-250 & \cellcolor[HTML]{EFEFEF}1.43E-248 & \cellcolor[HTML]{EFEFEF}6.17E-243 & 0.29677823 & 0.76019487 & 0.11371468 & 0.87161088 & - \\
ReGenSSGAX06 & \cellcolor[HTML]{EFEFEF}8.29E-243 & \cellcolor[HTML]{EFEFEF}4.82E-247 & \cellcolor[HTML]{EFEFEF}4.98E-242 & \cellcolor[HTML]{EFEFEF}1.94E-240 & \cellcolor[HTML]{EFEFEF}1.63E-234 & 0.34659749 & \cellcolor[HTML]{EFEFEF}0.01507201 & 0.67268757 & \cellcolor[HTML]{EFEFEF}0.02432668 & \cellcolor[HTML]{EFEFEF}0.03819933 \\
ReGenSSGAX07 & \cellcolor[HTML]{EFEFEF}4.55E-250 & \cellcolor[HTML]{EFEFEF}4.82E-254 & \cellcolor[HTML]{EFEFEF}2.64E-249 & \cellcolor[HTML]{EFEFEF}8.27E-248 & \cellcolor[HTML]{EFEFEF}3.71E-242 & 0.3961808 & 0.63324079 & 0.16928761 & 0.73618084 & 0.85224562 \\
ReGenSSGAX08 & \cellcolor[HTML]{EFEFEF}2.82E-246 & \cellcolor[HTML]{EFEFEF}1.97E-250 & \cellcolor[HTML]{EFEFEF}1.60E-245 & \cellcolor[HTML]{EFEFEF}5.62E-244 & \cellcolor[HTML]{EFEFEF}3.60E-238 & 0.91925507 & 0.14042521 & 0.69248564 & 0.19171249 & 0.25184377 \\
ReGenSSGAX09 & \cellcolor[HTML]{EFEFEF}4.97E-251 & \cellcolor[HTML]{EFEFEF}5.16E-255 & \cellcolor[HTML]{EFEFEF}2.61E-250 & \cellcolor[HTML]{EFEFEF}8.13E-249 & \cellcolor[HTML]{EFEFEF}3.39E-243 & 0.26627775 & 0.80406077 & 0.09801927 & 0.91925507 & 0.94288345 \\
ReGenSSGAX10 & \cellcolor[HTML]{EFEFEF}4.68E-248 & \cellcolor[HTML]{EFEFEF}4.21E-252 & \cellcolor[HTML]{EFEFEF}2.67E-247 & \cellcolor[HTML]{EFEFEF}8.95E-246 & \cellcolor[HTML]{EFEFEF}4.85E-240 & 0.73618084 & 0.31284451 & 0.3961808 & 0.3961808 & 0.49602171 \\
SSGAX06 & \cellcolor[HTML]{EFEFEF}0.02222998 & 0.25184377 & \cellcolor[HTML]{EFEFEF}0.01232805 & \cellcolor[HTML]{EFEFEF}0.00330137 & \cellcolor[HTML]{EFEFEF}6.21E-06 & \cellcolor[HTML]{EFEFEF}4.20E-255 & \cellcolor[HTML]{EFEFEF}6.88E-260 & \cellcolor[HTML]{EFEFEF}3.42E-253 & \cellcolor[HTML]{EFEFEF}2.08E-259 & \cellcolor[HTML]{EFEFEF}6.86E-259 \\
SSGAX07 & \cellcolor[HTML]{EFEFEF}0.00465945 & 0.09801927 & \cellcolor[HTML]{EFEFEF}0.00231558 & \cellcolor[HTML]{EFEFEF}0.0005018 & \cellcolor[HTML]{EFEFEF}4.40E-07 & \cellcolor[HTML]{EFEFEF}8.21E-257 & \cellcolor[HTML]{EFEFEF}4.64E-261 & \cellcolor[HTML]{EFEFEF}5.16E-255 & \cellcolor[HTML]{EFEFEF}9.71E-261 & \cellcolor[HTML]{EFEFEF}2.04E-260 \\
SSGAX08 & 0.78430534 & 0.45468783 & 0.6529301 & 0.3961808 & \cellcolor[HTML]{EFEFEF}0.01834505 & \cellcolor[HTML]{EFEFEF}8.27E-248 & \cellcolor[HTML]{EFEFEF}4.45E-253 & \cellcolor[HTML]{EFEFEF}8.95E-246 & \cellcolor[HTML]{EFEFEF}1.89E-252 & \cellcolor[HTML]{EFEFEF}7.02E-252 \\
SSGAX09 & 0.14042521 & 0.69248564 & 0.09261653 & \cellcolor[HTML]{EFEFEF}0.03509616 & \cellcolor[HTML]{EFEFEF}0.00021879 & \cellcolor[HTML]{EFEFEF}2.47E-252 & \cellcolor[HTML]{EFEFEF}2.23E-257 & \cellcolor[HTML]{EFEFEF}1.97E-250 & \cellcolor[HTML]{EFEFEF}8.21E-257 & \cellcolor[HTML]{EFEFEF}2.75E-256 \\
SSGAX10 & 0.15999358 & \cellcolor[HTML]{EFEFEF}0.01012854 & 0.22613672 & 0.41519752 & 0.59061606 & \cellcolor[HTML]{EFEFEF}5.77E-241 & \cellcolor[HTML]{EFEFEF}1.56E-246 & \cellcolor[HTML]{EFEFEF}7.78E-239 & \cellcolor[HTML]{EFEFEF}6.80E-246 & \cellcolor[HTML]{EFEFEF}2.89E-245
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{D4 Student's t-test pairwise comparisons with pooled standard deviation. Benjamini-Hochberg (BH) as p-value adjustment method.}
\label{c4table14}
\tiny
\begin{tabular}{lllllllllllllllllll}
\hline
EAs & ReGenSSGAX06 & ReGenSSGAX07 & ReGenSSGAX08 & ReGenSSGAX09 & ReGenSSGAX10 & SSGAX06 & SSGAX07 & SSGAX08 & SSGAX09 \\\hline
GGAX07 & - & - & - & - & - & - & - & - & - \\
GGAX08 & - & - & - & - & - & - & - & - & - \\
GGAX09 & - & - & - & - & - & - & - & - & - \\
GGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX07 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX08 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX09 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX07 & 0.06311581 & - & - & - & - & - & - & - & - \\
ReGenSSGAX08 & 0.3961808 & 0.34659749 & - & - & - & - & - & - & - \\
ReGenSSGAX09 & \cellcolor[HTML]{EFEFEF}0.03221118 & 0.80406077 & 0.22613672 & - & - & - & - & - & - \\
ReGenSSGAX10 & 0.19171249 & 0.63324079 & 0.67268757 & 0.45468783 & - & - & - & - & - \\
SSGAX06 & \cellcolor[HTML]{EFEFEF}1.67E-251 & \cellcolor[HTML]{EFEFEF}3.51E-258 & \cellcolor[HTML]{EFEFEF}8.57E-255 & \cellcolor[HTML]{EFEFEF}4.33E-259 & \cellcolor[HTML]{EFEFEF}2.17E-256 & - & - & - & - \\
SSGAX07 & \cellcolor[HTML]{EFEFEF}2.63E-253 & \cellcolor[HTML]{EFEFEF}7.65E-260 & \cellcolor[HTML]{EFEFEF}1.72E-256 & \cellcolor[HTML]{EFEFEF}1.53E-260 & \cellcolor[HTML]{EFEFEF}4.26E-258 & 0.6529301 & - & - & - \\
SSGAX08 & \cellcolor[HTML]{EFEFEF}5.62E-244 & \cellcolor[HTML]{EFEFEF}3.78E-251 & \cellcolor[HTML]{EFEFEF}2.01E-247 & \cellcolor[HTML]{EFEFEF}4.21E-252 & \cellcolor[HTML]{EFEFEF}3.43E-249 & \cellcolor[HTML]{EFEFEF}0.04934242 & \cellcolor[HTML]{EFEFEF}0.01232805 & - & - \\
SSGAX09 & \cellcolor[HTML]{EFEFEF}1.08E-248 & \cellcolor[HTML]{EFEFEF}1.43E-255 & \cellcolor[HTML]{EFEFEF}5.36E-252 & \cellcolor[HTML]{EFEFEF}1.72E-256 & \cellcolor[HTML]{EFEFEF}1.13E-253 & 0.47513258 & 0.22613672 & 0.23957048 & - \\
SSGAX10 & \cellcolor[HTML]{EFEFEF}5.89E-237 & \cellcolor[HTML]{EFEFEF}1.75E-244 & \cellcolor[HTML]{EFEFEF}1.44E-240 & \cellcolor[HTML]{EFEFEF}1.60E-245 & \cellcolor[HTML]{EFEFEF}2.05E-242 & \cellcolor[HTML]{EFEFEF}0.0001064 & \cellcolor[HTML]{EFEFEF}1.02E-05 & 0.08609593 & \cellcolor[HTML]{EFEFEF}0.00231558
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{RR Student's t-test pairwise comparisons with pooled standard deviation. Benjamini-Hochberg (BH) as p-value adjustment method.}
\label{c4table15}
\tiny
\begin{tabular}{llllllllllllllllllll}
\hline
EAs & GGAX06 & GGAX07 & GGAX08 & GGAX09 & GGAX10 & ReGenGGAX06 & ReGenGGAX07 & ReGenGGAX08 & ReGenGGAX09 & ReGenGGAX10 \\\hline
GGAX07 & \cellcolor[HTML]{EFEFEF}1.64E-06 & - & - & - & - & - & - & - & - & - \\
GGAX08 & \cellcolor[HTML]{EFEFEF}2.21E-27 & \cellcolor[HTML]{EFEFEF}1.83E-10 & - & - & - & - & - & - & - & - \\
GGAX09 & \cellcolor[HTML]{EFEFEF}2.49E-46 & \cellcolor[HTML]{EFEFEF}1.19E-24 & \cellcolor[HTML]{EFEFEF}3.78E-05 & - & - & - & - & - & - & - \\
GGAX10 & \cellcolor[HTML]{EFEFEF}4.37E-73 & \cellcolor[HTML]{EFEFEF}2.66E-48 & \cellcolor[HTML]{EFEFEF}5.20E-20 & \cellcolor[HTML]{EFEFEF}2.08E-07 & - & - & - & - & - & - \\
ReGenGGAX06 & \cellcolor[HTML]{EFEFEF}5.59E-168 & \cellcolor[HTML]{EFEFEF}1.09E-144 & \cellcolor[HTML]{EFEFEF}8.81E-112 & \cellcolor[HTML]{EFEFEF}8.33E-90 & \cellcolor[HTML]{EFEFEF}3.21E-62 & - & - & - & - & - \\
ReGenGGAX07 & \cellcolor[HTML]{EFEFEF}6.43E-169 & \cellcolor[HTML]{EFEFEF}1.10E-145 & \cellcolor[HTML]{EFEFEF}7.83E-113 & \cellcolor[HTML]{EFEFEF}7.12E-91 & \cellcolor[HTML]{EFEFEF}2.93E-63 & 0.86524143 & - & - & - & - \\
ReGenGGAX08 & \cellcolor[HTML]{EFEFEF}1.81E-170 & \cellcolor[HTML]{EFEFEF}2.51E-147 & \cellcolor[HTML]{EFEFEF}1.42E-114 & \cellcolor[HTML]{EFEFEF}1.20E-92 & \cellcolor[HTML]{EFEFEF}5.40E-65 & 0.6320313 & 0.7738968 & - & - & - \\
ReGenGGAX09 & \cellcolor[HTML]{EFEFEF}1.53E-169 & \cellcolor[HTML]{EFEFEF}2.42E-146 & \cellcolor[HTML]{EFEFEF}1.57E-113 & \cellcolor[HTML]{EFEFEF}1.39E-91 & \cellcolor[HTML]{EFEFEF}5.93E-64 & 0.7738968 & 0.90582906 & 0.86524143 & - & - \\
ReGenGGAX10 & \cellcolor[HTML]{EFEFEF}3.67E-170 & \cellcolor[HTML]{EFEFEF}5.31E-147 & \cellcolor[HTML]{EFEFEF}3.14E-114 & \cellcolor[HTML]{EFEFEF}2.70E-92 & \cellcolor[HTML]{EFEFEF}1.20E-64 & 0.67967992 & 0.82408949 & 0.94563689 & 0.90582906 & - \\
ReGenSSGAX06 & \cellcolor[HTML]{EFEFEF}1.92E-158 & \cellcolor[HTML]{EFEFEF}1.27E-134 & \cellcolor[HTML]{EFEFEF}3.42E-101 & \cellcolor[HTML]{EFEFEF}4.02E-79 & \cellcolor[HTML]{EFEFEF}5.72E-52 & 0.05012867 & \cellcolor[HTML]{EFEFEF}0.03093773 & \cellcolor[HTML]{EFEFEF}0.01289906 & \cellcolor[HTML]{EFEFEF}0.02203385 & \cellcolor[HTML]{EFEFEF}0.01555171 \\
ReGenSSGAX07 & \cellcolor[HTML]{EFEFEF}4.39E-159 & \cellcolor[HTML]{EFEFEF}2.69E-135 & \cellcolor[HTML]{EFEFEF}6.71E-102 & \cellcolor[HTML]{EFEFEF}7.85E-80 & \cellcolor[HTML]{EFEFEF}1.22E-52 & 0.06851925 & \cellcolor[HTML]{EFEFEF}0.04303471 & \cellcolor[HTML]{EFEFEF}0.01867431 & \cellcolor[HTML]{EFEFEF}0.03093773 & \cellcolor[HTML]{EFEFEF}0.02203385 \\
ReGenSSGAX08 & \cellcolor[HTML]{EFEFEF}2.15E-159 & \cellcolor[HTML]{EFEFEF}1.25E-135 & \cellcolor[HTML]{EFEFEF}3.02E-102 & \cellcolor[HTML]{EFEFEF}3.49E-80 & \cellcolor[HTML]{EFEFEF}5.67E-53 & 0.07937734 & 0.05012867 & 0.02203385 & \cellcolor[HTML]{EFEFEF}0.03656145 & \cellcolor[HTML]{EFEFEF}0.02624799 \\
ReGenSSGAX09 & \cellcolor[HTML]{EFEFEF}3.86E-165 & \cellcolor[HTML]{EFEFEF}1.07E-141 & \cellcolor[HTML]{EFEFEF}1.28E-108 & \cellcolor[HTML]{EFEFEF}1.34E-86 & \cellcolor[HTML]{EFEFEF}4.11E-59 & 0.58909905 & 0.45928081 & 0.28402906 & 0.38220426 & 0.31346893 \\
ReGenSSGAX10 & \cellcolor[HTML]{EFEFEF}1.88E-165 & \cellcolor[HTML]{EFEFEF}5.02E-142 & \cellcolor[HTML]{EFEFEF}5.74E-109 & \cellcolor[HTML]{EFEFEF}5.96E-87 & \cellcolor[HTML]{EFEFEF}1.87E-59 & 0.6320313 & 0.49787684 & 0.31346893 & 0.41972909 & 0.34677055 \\
SSGAX06 & \cellcolor[HTML]{EFEFEF}8.98E-100 & \cellcolor[HTML]{EFEFEF}4.98E-125 & \cellcolor[HTML]{EFEFEF}3.73E-157 & \cellcolor[HTML]{EFEFEF}1.12E-176 & \cellcolor[HTML]{EFEFEF}5.80E-200 & \cellcolor[HTML]{EFEFEF}1.07E-270 & \cellcolor[HTML]{EFEFEF}2.43E-271 & \cellcolor[HTML]{EFEFEF}3.14E-272 & \cellcolor[HTML]{EFEFEF}9.53E-272 & \cellcolor[HTML]{EFEFEF}4.43E-272 \\
SSGAX07 & \cellcolor[HTML]{EFEFEF}6.71E-102 & \cellcolor[HTML]{EFEFEF}4.27E-127 & \cellcolor[HTML]{EFEFEF}4.39E-159 & \cellcolor[HTML]{EFEFEF}1.61E-178 & \cellcolor[HTML]{EFEFEF}1.10E-201 & \cellcolor[HTML]{EFEFEF}6.43E-272 & \cellcolor[HTML]{EFEFEF}2.32E-272 & \cellcolor[HTML]{EFEFEF}5.71E-273 & \cellcolor[HTML]{EFEFEF}1.08E-272 & \cellcolor[HTML]{EFEFEF}5.71E-273 \\
SSGAX08 & \cellcolor[HTML]{EFEFEF}3.58E-85 & \cellcolor[HTML]{EFEFEF}9.95E-111 & \cellcolor[HTML]{EFEFEF}1.08E-143 & \cellcolor[HTML]{EFEFEF}7.14E-164 & \cellcolor[HTML]{EFEFEF}6.62E-188 & \cellcolor[HTML]{EFEFEF}1.60E-261 & \cellcolor[HTML]{EFEFEF}3.24E-262 & \cellcolor[HTML]{EFEFEF}2.44E-263 & \cellcolor[HTML]{EFEFEF}1.14E-262 & \cellcolor[HTML]{EFEFEF}4.01E-263 \\
SSGAX09 & \cellcolor[HTML]{EFEFEF}3.00E-81 & \cellcolor[HTML]{EFEFEF}7.45E-107 & \cellcolor[HTML]{EFEFEF}5.08E-140 & \cellcolor[HTML]{EFEFEF}2.36E-160 & \cellcolor[HTML]{EFEFEF}1.32E-184 & \cellcolor[HTML]{EFEFEF}5.25E-259 & \cellcolor[HTML]{EFEFEF}1.07E-259 & \cellcolor[HTML]{EFEFEF}7.95E-261 & \cellcolor[HTML]{EFEFEF}3.83E-260 & \cellcolor[HTML]{EFEFEF}1.32E-260 \\
SSGAX10 & \cellcolor[HTML]{EFEFEF}8.54E-68 & \cellcolor[HTML]{EFEFEF}2.34E-93 & \cellcolor[HTML]{EFEFEF}4.27E-127 & \cellcolor[HTML]{EFEFEF}5.53E-148 & \cellcolor[HTML]{EFEFEF}5.79E-173 & \cellcolor[HTML]{EFEFEF}6.52E-250 & \cellcolor[HTML]{EFEFEF}1.21E-250 & \cellcolor[HTML]{EFEFEF}7.68E-252 & \cellcolor[HTML]{EFEFEF}4.00E-251 & \cellcolor[HTML]{EFEFEF}1.32E-251
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{RR Student's t-test pairwise comparisons with pooled standard deviation. Benjamini-Hochberg (BH) as p-value adjustment method.}
\label{c4table16}
\tiny
\begin{tabular}{lllllllllllllllllll}
\hline
EAs & ReGenSSGAX06 & ReGenSSGAX07 & ReGenSSGAX08 & ReGenSSGAX09 & ReGenSSGAX10 & SSGAX06 & SSGAX07 & SSGAX08 & SSGAX09 \\\hline
GGAX07 & - & - & - & - & - & - & - & - & - \\
GGAX08 & - & - & - & - & - & - & - & - & - \\
GGAX09 & - & - & - & - & - & - & - & - & - \\
GGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX07 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX08 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX09 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX07 & 0.90582906 & - & - & - & - & - & - & - & - \\
ReGenSSGAX08 & 0.86524143 & 0.94563689 & - & - & - & - & - & - & - \\
ReGenSSGAX09 & 0.18112563 & 0.22782816 & 0.25486 & - & - & - & - & - & - \\
ReGenSSGAX10 & 0.16007994 & 0.20415385 & 0.22782816 & 0.94563689 & - & - & - & - & - \\
SSGAX06 & \cellcolor[HTML]{EFEFEF}5.03E-264 & \cellcolor[HTML]{EFEFEF}1.79E-264 & \cellcolor[HTML]{EFEFEF}1.10E-264 & \cellcolor[HTML]{EFEFEF}9.06E-269 & \cellcolor[HTML]{EFEFEF}5.73E-269 & - & - & - & - \\
SSGAX07 & \cellcolor[HTML]{EFEFEF}2.31E-265 & \cellcolor[HTML]{EFEFEF}8.37E-266 & \cellcolor[HTML]{EFEFEF}5.21E-266 & \cellcolor[HTML]{EFEFEF}4.36E-270 & \cellcolor[HTML]{EFEFEF}2.80E-270 & 0.72848115 & - & - & - \\
SSGAX08 & \cellcolor[HTML]{EFEFEF}1.75E-254 & \cellcolor[HTML]{EFEFEF}5.86E-255 & \cellcolor[HTML]{EFEFEF}3.45E-255 & \cellcolor[HTML]{EFEFEF}1.80E-259 & \cellcolor[HTML]{EFEFEF}1.07E-259 & \cellcolor[HTML]{EFEFEF}0.00713327 & \cellcolor[HTML]{EFEFEF}0.00191234 & - & - \\
SSGAX09 & \cellcolor[HTML]{EFEFEF}7.68E-252 & \cellcolor[HTML]{EFEFEF}2.61E-252 & \cellcolor[HTML]{EFEFEF}1.52E-252 & \cellcolor[HTML]{EFEFEF}7.19E-257 & \cellcolor[HTML]{EFEFEF}4.25E-257 & \cellcolor[HTML]{EFEFEF}0.00057512 & \cellcolor[HTML]{EFEFEF}0.00011766 & 0.49787684 & - \\
SSGAX10 & \cellcolor[HTML]{EFEFEF}1.87E-242 & \cellcolor[HTML]{EFEFEF}5.96E-243 & \cellcolor[HTML]{EFEFEF}3.40E-243 & \cellcolor[HTML]{EFEFEF}1.06E-247 & \cellcolor[HTML]{EFEFEF}6.13E-248 & \cellcolor[HTML]{EFEFEF}2.18E-09 & \cellcolor[HTML]{EFEFEF}1.83E-10 & \cellcolor[HTML]{EFEFEF}0.00119893 & \cellcolor[HTML]{EFEFEF}0.01289906
\\\hline
\end{tabular}
\end{table}
\end{landscape}
\paragraph{\em{Multiple pairwise t-test:}}
A multiple pairwise comparison between the means of the EA groups is performed. In the one-way ANOVA test described above, significant p-values indicate that some group means differ. To identify which pairs of groups differ, multiple pairwise comparisons are performed for the Deceptive Order Three (D3), Deceptive Order Four Trap (D4), and Royal Road (RR) best-solution samples. Tables \ref{c4table11}, \ref{c4table12}, \ref{c4table13}, \ref{c4table14}, \ref{c4table15}, and \ref{c4table16} present pairwise comparisons using t-tests with pooled standard deviation (SD) and their respective p-values, adjusted with the Benjamini-Hochberg method. Only the pairs of algorithms with values highlighted in gray are significantly different ($p < 0.05$); for those pairs, the alternative hypothesis is accepted.
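The pairwise tests with Benjamini-Hochberg adjustment can be sketched as follows. This is an approximation on synthetic samples: SciPy's `ttest_ind` pools the variance only within each pair, whereas the tables use a pooled SD across all groups (as in R's `pairwise.t.test`), and the group labels here merely mirror three of the table's algorithm names.

```python
import itertools
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Hypothetical fitness samples; labels mirror the tables' algorithm names.
groups = {
    "GGAX06":      rng.normal(3434, 11, 30),
    "SSGAX06":     rng.normal(3430, 10, 30),
    "ReGenGGAX06": rng.normal(3578, 11, 30),
}

# Raw p-values for all pairwise t-tests (variance pooled within each pair)
pairs = list(itertools.combinations(groups, 2))
raw = np.array([ttest_ind(groups[a], groups[b], equal_var=True).pvalue
                for a, b in pairs])

# Benjamini-Hochberg step-up adjustment: sort, scale by m/rank,
# then enforce monotonicity from the largest p-value downwards.
m = len(raw)
order = np.argsort(raw)
scaled = raw[order] * m / np.arange(1, m + 1)
adj = np.empty(m)
adj[order] = np.minimum(np.minimum.accumulate(scaled[::-1])[::-1], 1.0)

for (a, b), p in zip(pairs, adj):
    print(f"{a} vs {b}: adjusted p = {p:.3e}")
```

With these synthetic means, the classic-versus-ReGen pairs come out significant while the two classic variants do not, mirroring the pattern highlighted in gray in the tables.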
Now, to determine whether there is any significant difference between the median fitness of individuals in the two experimental groups (classic GAs and GAs with regulated genes), the Wilcoxon test is conducted.
\paragraph{\em{Paired Samples Wilcoxon Test:}}
For this test, the algorithms are grouped by population replacement strategy, without taking the crossover rates into account: a Wilcoxon signed-rank test for the generational EAs (GGA and ReGen GGA) and a Wilcoxon signed-rank test for the steady state EAs (SSGA and ReGen SSGA). The test compares classic EAs against epigenetic EAs.
\begin{itemize}
\item Deceptive Order Three (D3)
\begin{enumerate}
\item \par The Wilcoxon signed-rank test with continuity correction for generational EAs uses all data-set samples from GGAs and ReGen GGAs. The {\em P-value} is equal to $2.256122 \times 10^{-26}$, which is less than the significance level $\alpha = 0.05$.
\item \par The Wilcoxon signed-rank test with continuity correction for steady state EAs uses all data-set samples from SSGAs and ReGen SSGAs. The {\em P-value} is equal to $2.250642 \times 10^{-26}$, which is less than the significance level $\alpha = 0.05$.
\end{enumerate}
\item Deceptive Order Four Trap (D4)
\begin{enumerate}
\item \par The Wilcoxon signed-rank test with continuity correction for generational EAs uses all data-set samples from GGAs and ReGen GGAs. The {\em P-value} is equal to $2.163978 \times 10^{-26}$, which is less than the significance level $\alpha = 0.05$.
\item \par The Wilcoxon signed-rank test with continuity correction for steady state EAs uses all data-set samples from SSGAs and ReGen SSGAs. The {\em P-value} is equal to $2.217806 \times 10^{-26}$, which is less than the significance level $\alpha = 0.05$.
\end{enumerate}
\newpage
\item Royal Road (RR)
\begin{enumerate}
\item \par The Wilcoxon signed-rank test with continuity correction for generational EAs uses all data-set samples from GGAs and ReGen GGAs. The {\em P-value} is equal to $2.135633 \times 10^{-26}$, which is less than the significance level $\alpha = 0.05$.
\item \par The Wilcoxon signed-rank test with continuity correction for steady state EAs uses all data-set samples from SSGAs and ReGen SSGAs. The {\em P-value} is equal to $1.948245 \times 10^{-26}$, which is less than the significance level $\alpha = 0.05$.
\end{enumerate}
\end{itemize}
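The signed-rank tests above can be sketched with SciPy; the paired samples below are synthetic stand-ins (hypothetical means) for the pooled per-strategy fitness data, not the thesis results.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
# Hypothetical paired best-fitness samples: classic generational GAs versus
# their ReGen counterparts, pooled over crossover rates (150 runs each).
gga_fitness = rng.normal(3434, 11, 150)
regen_fitness = rng.normal(3578, 13, 150)

# Two-sided Wilcoxon signed-rank test with continuity correction,
# as applied in the text
stat, p_value = wilcoxon(gga_fitness, regen_fitness, correction=True)
print(f"W = {stat}, p = {p_value:.3e}")
```

Because every paired difference favors the ReGen group by a wide margin, the statistic collapses toward zero and the p-value is far below $0.05$, matching the pattern reported above.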
The above leads to the conclusion that the median fitnesses of solutions found by classic generational genetic algorithms (GGAs) are significantly different from those of solutions found by generational genetic algorithms with regulated genes (ReGen GGAs), with p-values equal to $2.256122 \times 10^{-26}$ (D3 samples), $2.163978 \times 10^{-26}$ (D4 samples), and $2.135633 \times 10^{-26}$ (RR samples). Hence, the alternative hypothesis is accepted.
Likewise, the median fitness of solutions found by classic steady state genetic algorithms (SSGAs) is significantly different from that of solutions found by steady state genetic algorithms with regulated genes (ReGen SSGAs), with p-values equal to $2.250642 \times 10^{-26}$ (D3 sampling fitness), $2.217806 \times 10^{-26}$ (D4 sampling fitness), and $1.948245 \times 10^{-26}$ (RR sampling fitness). As the p-values are less than the significance level $0.05$, it may be concluded that there are significant differences between the two EA groups in each Wilcoxon test.
\newpage
\section{Real Problems}\label{c4s3}
The real problems have been encoded as binary strings. The individuals are initialized with randomized binary strings of length $(d \cdot n)$, where $d$ is the number of dimensions of the problem and $n$ the length in bits of the binary representation of a real value. The process to obtain real values from binary strings of $32$ bits is done by taking their representation as integer numbers and then applying a decoding function. Equation \ref{c4eq3} and Equation \ref{c4eq4} define the encoding/decoding schema \cite{BODENHOFER}.
In the general form, for an arbitrary interval $[a, b]$, the coding function is defined as:
\begin{align}
C_{n,[a,b]} : \left[a, b\right]
&\longrightarrow\lbrace0,1\rbrace^{n} {\nonumber}\\
x &\longmapsto bin_n\left(round\left((2^{n}-1)\cdot \frac{x-a}{b-a}\right)\right)
\label{c4eq3}
\end{align}
where $bin_n$ is the function which converts a number from $\lbrace0,..., 2^{n}-1\rbrace$ to its binary representation of length $n$ \cite{BODENHOFER}. The corresponding decoding function is defined as follows:
\begin{align}
\widetilde{C}_{n,[a,b]}: \{0,1\}^{n} & \longrightarrow [a, b] {\nonumber} \\
s & \longmapsto a + bin_{n}^{-1}(s)\cdot \frac{b-a}{2^{n}-1}
\label{c4eq4}
\end{align}
Now, apply the above decoding function to the interval $[-5.12, 5.11]$ with $n = 32$, where the total size of the search space is $2^{32} = 4{,}294{,}967{,}296$, that is, $\lbrace0, \ldots, 4294967295\rbrace$. For the $32$-bit string {\em 11111111111111111111111111111111}, whose representation as an integer number is $4294967295$, the decoding function yields:
\begin{eqnarray*}
s \longmapsto -5.12 + 4294967295 \cdot \frac{5.11 - (-5.12)}{4294967295} = 5.11
\label{c4eq5}
\end{eqnarray*}
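Equations \ref{c4eq3} and \ref{c4eq4} translate directly into code. The following Python sketch is illustrative only (the names \texttt{encode\_real} and \texttt{decode\_real} are ours, not from the thesis implementation) and reproduces the worked example above:

```python
def encode_real(x, a, b, n=32):
    """Coding function C_{n,[a,b]}: map x in [a, b] to an n-bit string."""
    k = round((2 ** n - 1) * (x - a) / (b - a))
    return format(k, "0{}b".format(n))


def decode_real(s, a, b):
    """Decoding function: map an n-bit string back into [a, b]."""
    n = len(s)
    return a + int(s, 2) * (b - a) / (2 ** n - 1)


# Worked example from the text: the all-ones 32-bit string decodes to b = 5.11
value = decode_real("1" * 32, -5.12, 5.11)
```

A round trip \texttt{decode\_real(encode\_real(x, a, b), a, b)} recovers $x$ up to the quantization step $(b-a)/(2^{n}-1)$, which for $n = 32$ on $[-5.12, 5.11]$ is about $2.4 \times 10^{-9}$.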
\subsection{Experiments}\label{c4s3ss1}
Experiments on real-valued problems are performed to determine the applicability of the proposed technique. For the selected problems with real definition, a vector with binary values encodes the problem's solution. The real functions explained in this section are used as testbeds. For all functions, the problem dimension is fixed to $n = 10$; each real value is represented with a $32$-bit binary string.
\subsubsection{Rastrigin}
The Rastrigin function has several local minima; it is highly multimodal, but the locations of the minima are regularly distributed. Among its features: the function is continuous, non-convex, defined on an n-dimensional space, multimodal, differentiable, and separable. The function is usually evaluated on the hypercube $x_i \in [-5.12, 5.12]$ for $i = 1, ..., n$. The global minimum is $f(\textbf{x}^{\ast}) = 0$ at $\textbf{x}^{\ast} = (0, ..., 0)$ \cite{BENCHMARKS, BENCHMARKS2}. On an n-dimensional domain, it is defined by Equation \ref{c4eq6} as:
\begin{eqnarray}
f(\textbf{x})=10n + \sum_{i=1}^{n}\left(x_i^2 - 10\cos(2\pi x_i)\right)
\label{c4eq6}
\end{eqnarray}
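For reference, Equation \ref{c4eq6} can be implemented in a few lines; the sketch below is our illustrative code (not the thesis implementation) and confirms the global minimum at the origin:

```python
import math


def rastrigin(x):
    """Rastrigin function: 10*n + sum(x_i^2 - 10*cos(2*pi*x_i))."""
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)


minimum = rastrigin([0.0] * 10)  # global minimum f(0, ..., 0) = 0
```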
\subsubsection{Rosenbrock}
The Rosenbrock function, also referred to as the Valley or Banana function, is a popular test problem for gradient-based optimization algorithms. Among its features: the function is continuous, non-convex, defined on an n-dimensional space, unimodal, differentiable, and non-separable. The function is usually evaluated on the hypercube $x_i \in [-5, 10]$ for $i = 1, ..., n$, although it may be restricted to the hypercube $x_i \in [-2.048, 2.048]$ for $i = 1, ..., n$. The global minimum is $f(\textbf{x}^{\ast}) = 0$ at $\textbf{x}^{\ast} = (1, ..., 1)$. In Equation \ref{c4eq7}, the parameters $a$ and $b$ are constants and are generally set to $a = 1$ and $b = 100$ \cite{BENCHMARKS, BENCHMARKS2}. On an n-dimensional domain, it is defined by:
\begin{eqnarray}
f(\textbf{x})=\sum_{i=1}^{n-1}\left[b (x_{i+1} - x_i^2)^2 + (a - x_i)^2\right]
\label{c4eq7}
\end{eqnarray}
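A direct translation of Equation \ref{c4eq7} follows; note that the sum runs over adjacent coordinate pairs, so it stops at index $n-1$. This is our illustrative sketch, not the thesis implementation:

```python
def rosenbrock(x, a=1.0, b=100.0):
    """Rosenbrock function; the sum runs over adjacent pairs (i, i+1)."""
    return sum(b * (x[i + 1] - x[i] ** 2) ** 2 + (a - x[i]) ** 2
               for i in range(len(x) - 1))


minimum = rosenbrock([1.0] * 10)  # global minimum f(1, ..., 1) = 0
```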
\subsubsection{Schwefel}
The Schwefel function is complex, with many local minima. Among its features: the function is continuous, non-convex, multimodal, and can be defined on an n-dimensional space. The function can be defined on any input domain, but it is usually evaluated on the hypercube $x_i \in [-500, 500]$ for $i = 1, ..., n$. The global minimum is $f(\textbf{x}^{\ast}) = 0$ at $\textbf{x}^{\ast} = (420.9687, ..., 420.9687)$ \cite{BENCHMARKS, BENCHMARKS2}. On an n-dimensional domain, it is defined by Equation \ref{c4eq8} as:
\begin{eqnarray}
f(\textbf{x}) = f(x_1, x_2, ..., x_n) = 418.9829n - \sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right)
\label{c4eq8}
\end{eqnarray}
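Equation \ref{c4eq8} can be checked numerically; because the constant $418.9829$ only approximates $x^{\ast}\sin(\sqrt{x^{\ast}})$ at $x^{\ast} = 420.9687$, the value at the optimum is close to zero rather than exactly zero, which is consistent with the best fitnesses of order $10^{-4}$ reported in Table~\ref{c4table19}. A minimal sketch (ours, for illustration only):

```python
import math


def schwefel(x):
    """Schwefel function: 418.9829*n - sum(x_i * sin(sqrt(|x_i|)))."""
    return 418.9829 * len(x) - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)


near_min = schwefel([420.9687] * 10)  # close to, but not exactly, zero
```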
\subsubsection{Griewank}
The Griewank function has many widespread local minima, which are regularly distributed. Among its features: the function is continuous, non-convex, multimodal, and can be defined on an n-dimensional space. The function can be defined on any input domain, but it is usually evaluated on $x_i \in [-600, 600]$ for $i = 1, ..., n$. The global minimum is $f(\textbf{x}^{\ast}) = 0$ at $\textbf{x}^{\ast} = (0, ..., 0)$ \cite{BENCHMARKS, BENCHMARKS2}. On an n-dimensional domain, it is defined by Equation \ref{c4eq9} as:
\begin{eqnarray}
f(\textbf{x}) = f(x_1, ..., x_n) = 1 + \sum_{i=1}^{n} \frac{x_i^{2}}{4000} - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)
\label{c4eq9}
\end{eqnarray}
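Equation \ref{c4eq9} mixes a sum and a product over 1-based indices; the following illustrative sketch (ours, not the thesis code) makes the indexing explicit:

```python
import math


def griewank(x):
    """Griewank function; the product uses the 1-based coordinate index i."""
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return 1.0 + s - p


minimum = griewank([0.0] * 10)  # global minimum f(0, ..., 0) = 0
```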
\subsection{Results}\label{c4s3ss2}
Based on the defined configuration, both classic and ReGen GA are compared to identify the tags' behavior during the individuals' evolution. Results are tabulated from Table~\ref{c4table17} to Table~\ref{c4table20}; these tables cover the real-valued functions Rastrigin (RAS), Rosenbrock (ROSE), Schwefel (SCHW), and Griewank (GRIE), for both EA implementations with generational (GGA) and steady state (SSGA) replacements, and five crossover rates per technique. For each rate, the best fitness based on the minimum median performance is reported, followed by the standard deviation of the observed value and the iteration where the reported fitness is found. The latter is enclosed in square brackets.
Graphs from Fig.~\ref{c4fig7} to Fig.~\ref{c4fig10} illustrate the best individuals' fitness in the performed experiments; the reported fitnesses are based on the minimum median performance. Each figure shows the tendency of the best individuals per technique. For ReGen GA and classic GA, two methods are applied: steady state and generational population replacements. The fitness evolution of individuals can be appreciated by tracking the green and red lines, which depict the best individual's fitness for the classic GA. Blue and black lines trace the best individual's fitness for ReGen GA. From top to bottom, each figure displays the individuals' behavior with crossover rates from $0.6$ to $1.0$. Figures on the right side show the defined marking periods. Vertical lines in blue depict the start of a marking period; lines in gray delimit the end of such periods.
\newpage
\begin{table}[H]
\centering
\caption{Results of the experiments for Generational and Steady replacements: Rastrigin}
\label{c4table17}
\begin{tabular}{p{1cm}lllll}
\hline
\multirow{2}{5cm}{\textbf{Rate}} & \multicolumn{4}{c}{\textbf{Rastrigin}} \\
\cline{2-5} & \textbf{Classic GGA} & \textbf{Classic SSGA} & \textbf{ReGen GGA} & \textbf{ReGen SSGA} \\
\hline
0.6 & $ 11.018\pm4.43 [998]$ & $ 11.074 \pm4.42 [990]$ & $ 1.005 \pm0.77 [943]$ & $ 1.011 \pm1.18 [1000]$\\
0.7 & $ 10.878\pm5.07 [997]$ & $ 10.909 \pm4.34 [980]$ & $ 0.033 \pm1.00 [947]$ & $ 0.521 \pm0.70 [948]$\\
0.8 & $ 10.638\pm4.91 [1000]$ & $ 10.592 \pm3.68 [1000]$& $ 0.025 \pm0.88 [1000]$ & $ 0.031 \pm1.00 [946]$\\
0.9 & $ 09.748 \pm4.47 [1000]$ & $ 09.435 \pm3.71 [919]$ & $ 0.030 \pm0.85 [995] $ & $ 0.026 \pm0.72 [951]$\\
1.0 & $ 08.038 \pm3.88 [1000]$ & $ 06.422 \pm3.79 [960]$ & $ 0.025 \pm0.67 [964]$ & $ 0.027 \pm0.60 [954]$\\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=2in]{imagesThesis/RAS062.pdf}
\includegraphics[width=2in]{imagesThesis/RAS06.pdf}
\includegraphics[width=2in]{imagesThesis/RAS072.pdf}
\includegraphics[width=2in]{imagesThesis/RAS07.pdf}
\includegraphics[width=2in]{imagesThesis/RAS082.pdf}
\includegraphics[width=2in]{imagesThesis/RAS08.pdf}
\includegraphics[width=2in]{imagesThesis/RAS092.pdf}
\includegraphics[width=2in]{imagesThesis/RAS09.pdf}
\includegraphics[width=2in]{imagesThesis/RAS102.pdf}
\includegraphics[width=2in]{imagesThesis/RAS10.pdf}
\caption{Rastrigin. Generational replacement (GGA) and Steady State replacement (SSGA). From top to bottom, crossover rates from $0.6$ to $1.0$.}
\label{c4fig7}
\end{figure}
\begin{table}[H]
\centering
\caption{Results of the experiments for Generational and Steady replacements: Rosenbrock}
\label{c4table18}
\begin{tabular}{p{1cm}lllll}
\hline
\multirow{2}{5cm}{\textbf{Rate}} & \multicolumn{4}{c}{\textbf{Rosenbrock}} \\
\cline{2-5} & \textbf{Classic GGA} & \textbf{Classic SSGA} & \textbf{ReGen GGA} & \textbf{ReGen SSGA} \\
\hline
0.6 & $ 0.473 \pm3.95 [850]$ & $ 1.035 \pm3.71 [736]$ & $ 0.291 \pm0.42 [1000]$ & $ 0.248 \pm0.86 [987]$\\
0.7 & $ 0.412 \pm2.79 [804]$ & $ 0.502 \pm3.45 [1000]$ & $ 0.280 \pm0.19 [938]$ & $ 0.252 \pm0.67 [939]$\\
0.8 & $ 0.580 \pm5.24 [481]$ & $ 0.474 \pm2.88 [829]$ & $ 0.251 \pm0.25 [998]$ & $ 0.238 \pm0.46 [999]$\\
0.9 & $ 0.476 \pm3.92 [753]$ & $ 0.471 \pm3.77 [732]$ & $ 0.248 \pm0.29 [1000]$ & $ 0.216 \pm0.20 [950]$\\
1.0 & $ 0.445 \pm2.84 [1000]$ & $ 0.503 \pm3.67 [994]$ & $ 0.169 \pm0.18 [999]$ & $ 0.258 \pm0.31 [1000]$\\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=2in]{imagesThesis/ROSE062.pdf}
\includegraphics[width=2in]{imagesThesis/ROSE06.pdf}
\includegraphics[width=2in]{imagesThesis/ROSE072.pdf}
\includegraphics[width=2in]{imagesThesis/ROSE07.pdf}
\includegraphics[width=2in]{imagesThesis/ROSE082.pdf}
\includegraphics[width=2in]{imagesThesis/ROSE08.pdf}
\includegraphics[width=2in]{imagesThesis/ROSE092.pdf}
\includegraphics[width=2in]{imagesThesis/ROSE09.pdf}
\includegraphics[width=2in]{imagesThesis/ROSE102.pdf}
\includegraphics[width=2in]{imagesThesis/ROSE10.pdf}
\caption{Rosenbrock. Generational replacement (GGA) and Steady State replacement (SSGA). From top to bottom, crossover rates from $0.6$ to $1.0$.}
\label{c4fig8}
\end{figure}
\begin{table}[H]
\centering
\caption{Results of the experiments for Generational and Steady replacements: Schwefel}
\label{c4table19}
\begin{tabular}{p{1cm}lllll}
\hline
\multirow{2}{5cm}{\textbf{Rate}} & \multicolumn{4}{c}{\textbf{Schwefel}} \\
\cline{2-5} & \textbf{Classic GGA} & \textbf{Classic SSGA} & \textbf{ReGen GGA} & \textbf{ReGen SSGA} \\
\hline
0.6 & $161.9\pm179.5 [977]$ & $ 201.2 \pm117.4 [853]$ & $ 3.6e-4 \pm26.4 [952]$ & $ 7.1e-4 \pm55.0 [952]$\\
0.7 & $148.9\pm126.5 [979]$ & $ 148.9 \pm147.5 [915]$ & $ 3.3e-4 \pm44.2 [957]$ & $ 3.2e-4 \pm55.4 [941]$\\
0.8 & $ 76.20\pm93.80 [879]$ & $ 118.4 \pm124.4 [982]$ & $ 2.7e-4 \pm35.1 [965]$ & $ 3.1e-4 \pm23.5 [976]$\\
0.9 & $ 60.90\pm121.6 [889]$ & $ 118.4 \pm103.1 [995]$ & $ 3.0e-4 \pm10.7 [998]$ & $ 2.9e-4 \pm66.3 [858]$\\
1.0 & $ 30.40\pm84.6 [1000]$ & $ 60.90 \pm78.80 [922]$ & $ 2.9e-4 \pm3.4 [1000]$ & $ 2.6e-4 \pm1.40 [880]$\\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=2in]{imagesThesis/SCHW062.pdf}
\includegraphics[width=2in]{imagesThesis/SCHW06.pdf}
\includegraphics[width=2in]{imagesThesis/SCHW072.pdf}
\includegraphics[width=2in]{imagesThesis/SCHW07.pdf}
\includegraphics[width=2in]{imagesThesis/SCHW082.pdf}
\includegraphics[width=2in]{imagesThesis/SCHW08.pdf}
\includegraphics[width=2in]{imagesThesis/SCHW092.pdf}
\includegraphics[width=2in]{imagesThesis/SCHW09.pdf}
\includegraphics[width=2in]{imagesThesis/SCHW102.pdf}
\includegraphics[width=2in]{imagesThesis/SCHW10.pdf}
\caption{Schwefel. Generational replacement (GGA) and Steady State replacement (SSGA). From top to bottom, crossover rates from $0.6$ to $1.0$.}
\label{c4fig9}
\end{figure}
\begin{table}[H]
\centering
\caption{Results of the experiments for Generational and Steady replacements: Griewank}
\label{c4table20}
\begin{tabular}{p{1cm}lllll}
\hline
\multirow{2}{5cm}{\textbf{Rate}} & \multicolumn{4}{c}{\textbf{Griewank}} \\
\cline{2-5} & \textbf{Classic GGA} & \textbf{Classic SSGA} & \textbf{ReGen GGA} & \textbf{ReGen SSGA} \\
\hline
0.6 & $ 0.150 \pm0.12 [929]$ & $ 0.185 \pm0.16 [911]$ & $ 0.069 \pm0.05 [990]$ & $ 0.064 \pm0.04 [910]$\\
0.7 & $ 0.205 \pm0.09 [973]$ & $ 0.157 \pm0.09 [977]$ & $ 0.064 \pm0.04 [942]$ & $ 0.057 \pm0.04 [991]$\\
0.8 & $ 0.189 \pm0.11 [847]$ & $ 0.189 \pm0.11 [984]$ & $ 0.075 \pm0.06 [861]$ & $ 0.063 \pm0.04 [961]$\\
0.9 & $ 0.161 \pm0.07 [1000]$& $ 0.152 \pm0.07 [893]$ & $ 0.076 \pm0.06 [957]$ & $ 0.065 \pm0.05 [1000]$\\
1.0 & $ 0.136 \pm0.08 [945]$ & $ 0.187 \pm0.08 [865]$ & $ 0.081 \pm0.04 [944]$ & $ 0.058 \pm0.04 [989]$\\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=2in]{imagesThesis/GRIE062.pdf}
\includegraphics[width=2in]{imagesThesis/GRIE06.pdf}
\includegraphics[width=2in]{imagesThesis/GRIE072.pdf}
\includegraphics[width=2in]{imagesThesis/GRIE07.pdf}
\includegraphics[width=2in]{imagesThesis/GRIE082.pdf}
\includegraphics[width=2in]{imagesThesis/GRIE08.pdf}
\includegraphics[width=2in]{imagesThesis/GRIE092.pdf}
\includegraphics[width=2in]{imagesThesis/GRIE09.pdf}
\includegraphics[width=2in]{imagesThesis/GRIE102.pdf}
\includegraphics[width=2in]{imagesThesis/GRIE10.pdf}
\caption{Griewank. Generational replacement (GGA) and Steady State replacement (SSGA). From top to bottom, crossover rates from $0.6$ to $1.0$.}
\label{c4fig10}
\end{figure}
Based on the tabulated results in Table~\ref{c4table17} for the Rastrigin function, it can be noted that ReGen GA performs better than the classic GA. ReGen GA is able to discover varied optimal solutions until achieving the total of configured iterations. Even though the global minimum ($0.0$) is not reached, it achieves suitable solutions in general. ReGen GA reports solutions with local minima under $1.0$; in contrast, classic GA solutions are above $5.0$. In Fig.~\ref{c4fig7}, it is observable that the marking periods applied to chromosomes at iterations $200$, $500$, and $800$ produce significant changes in the evolution of individuals. After the first marking period starts, the fitness improves and moves closer to the optimum, and populations improve their performance once tags are added to individuals. Fig.~\ref{c4fig7} also shows that classic GA performance is below ReGen GA performance at all crossover rate levels. Generational and steady state replacements performed similarly on this problem.
Tabulated results in Table~\ref{c4table18} for Rosenbrock show that ReGen GA accomplishes better solutions than the classic GA, although the difference in the results is small. ReGen GA solutions are slightly closer to the global minimum ($0.0$) than the solutions reported by the classic GA. In Fig.~\ref{c4fig8}, it is noticeable that the pressure applied to chromosomes at iteration $200$ does cause a change in the evolution of individuals. After the marking period starts, the fitness slightly improves, and populations improve their performance once tags are bound. ReGen GA reports better local minima than GA, and generational and steady state replacements have almost similar results for all crossover rates.
On the other hand, the tabulated results in Table~\ref{c4table19} for Schwefel show that ReGen GA performs much better than the classic GA in the current experiments. ReGen GA reports suitable solutions nearer the global minimum ($0.0$) than the GA solutions. It can be appreciated that the best solutions are close to the optimum for all crossover rates. On the contrary, the individuals' fitness for the classic GA does not reach the same local optima. Fig.~\ref{c4fig9} shows that the pressure applied to chromosomes during the defined marking periods introduces a great change in the evolution of individuals. After the first marking period starts, the fitness improves and moves closer to the optimum, and populations improve their performance once tags are attached. The ReGen GA reaches a variety of good solutions during the evolution process, exposing the ability of the proposed approach to discover novelties that are not identified by the classic GA. Fig.~\ref{c4fig9} also shows that classic GA performance is below ReGen GA performance at all crossover rate levels; the classic GA does not find suitable solutions for this experiment. Generational replacement performed better than steady state replacement on this problem.
Likewise, the tabulated results in Table~\ref{c4table20} for the Griewank objective function show that ReGen GA and classic GA have a small margin of difference in their performances; still, ReGen GA produces better solutions than the classic GA. Both reached local optima under $1.0$. In Fig.~\ref{c4fig10}, it is evident that the marking process at iteration $200$ generates a change in the evolution of individuals. After the marking period starts, the fitness improves and remains stable for the best individuals. Both generational and steady state replacements performed similarly for all crossover rates.
Continuing with the analysis, Table~\ref{c4table21} gives an outline of the best solutions found by ReGen GA, as reported previously, and the best solutions reported by Gomez \cite{GOMEZa, GOMEZb} for GA implementations with real encoding \footnote{Gomez implements four GAs with Single Point Real Crossover (X), Gaussian (G), and Uniform (U) Mutation as genetic operators in order to compare their performance with \textsc{HaEa}\xspace: two generational GAs (GGA(XG) and GGA(XU)) and two steady state GAs (SSGA(XG) and SSGA(XU)). The GAs use a tournament of size four as the parent selection method. For the steady state implementations, the worst individual of the population is replaced with the best child generated after crossover and mutation occur. The reported results are obtained with a mutation rate of $0.5$ and a crossover rate of $0.7$.}.
\begin{table}[H]
\centering
\caption{Solutions found by different EAs on real functions}
\label{c4table21}
\begin{tabular}{lllll} \hline
\textbf{EA} & \textbf{Rosenbrock} & \textbf{Schwefel} & \textbf{Rastrigin} & \textbf{Griewank} \\\hline
ReGen GGA & $0.16954\pm0.18$ & $0.00027\pm35.15$ & $0.02539\pm0.67$ & $0.06481\pm0.04$\\
ReGen SSGA & $0.21634\pm0.20$ & $0.00026\pm1.420$ & $0.02669\pm0.72$ & $0.05725\pm0.04$\\
GGA(XU) & $0.17278\pm0.11$ & $2.00096\pm1.210$ & $0.26500\pm0.15$ & $0.63355\pm0.24$\\
GGA(XG) & $0.03852\pm0.03$ & $378.479\pm222.4$ & $12.1089\pm5.01$ & $0.05074\pm0.02$\\
SSGA(XU) & $0.06676\pm0.08$ & $0.88843\pm0.570$ & $0.12973\pm0.07$ & $0.32097\pm0.13$\\
SSGA(XG) & $0.04842\pm0.04$ & $659.564\pm277.3$ & $19.7102\pm7.80$ & $0.04772\pm0.02$\\
Digalakis \cite{DIGALAKIS} & $0.40000000$ & - & $10.000$ & $0.7000$\\
Patton \cite{PATTON} & - & - & $4.8970$ & $0.0043$\\\hline
\end{tabular}
\end{table}
ReGen GA, in general, has better performance for the Schwefel (ReGen GGA with crossover $0.8$; ReGen SSGA with crossover $1.0$) and Rastrigin (ReGen GGA with crossover $1.0$; ReGen SSGA with crossover $0.9$) functions. Nevertheless, for the Rosenbrock (ReGen GGA with crossover $1.0$; ReGen SSGA with crossover $0.9$) and Griewank (crossover $0.7$ for both implementations) functions, it obtained suitable solutions, but not always better than the ones reported by the listed EAs.
\subsection{Statistical Analysis} \label{c4s3ss3}
The statistical analysis presented in this subsection follows the same scheme as the binary problems section; therefore, some descriptions are omitted (refer to subsection~\ref{c4s2ss3} for more details). Three different tests are performed: a One-Way ANOVA test, a Pairwise Student's t-test, and a Paired Samples Wilcoxon Test (also known as the Wilcoxon signed-rank test). The data set ReGen EAs Samples in Appendix \ref{appendB} is used; the samples contain twenty EA implementations for each of the following functions: Rastrigin, Rosenbrock, Schwefel, and Griewank. The samples refer to the best fitness of a solution found in each run; the number of executions per algorithm is $30$. The different implementations involve classic GAs and ReGen GAs with Generational (G) and Steady State (SS) population replacements, and crossover rates from $0.6$ to $1.0$.
\begin{landscape}
\begin{table}[H]
\centering
\caption{Anova Single Factor: SUMMARY}
\label{c4table22}
\scriptsize
\begin{tabular}{llllllllllllll}
\hline
\multicolumn{2}{l}{} & \multicolumn{3}{c}{\textbf{Rastrigin}} & \multicolumn{3}{c}{\textbf{Rosenbrock}} & \multicolumn{3}{c}{\textbf{Schwefel}} & \multicolumn{3}{c}{\textbf{Griewank}} \\
Groups & Count & Sum & Average & Variance & Sum & Average & Variance & Sum & Average & Variance & Sum & Average & Variance \\\hline
GGAX06 & 30 & 340.2427 & 11.3414 & 19.3266 & 52.58683 & 1.75289 & 13.96380 & 5859.204 & 195.3068 & 30075.83 & 5.33366 & 0.17779 & 0.00578 \\
GGAX07 & 30 & 342.8903 & 11.4297 & 24.1491 & 46.71745 & 1.55725 & 5.79244 & 5045.616 & 168.1872 & 14482.14 & 6.11428 & 0.20381 & 0.00946 \\
GGAX08 & 30 & 336.9313 & 11.2310 & 23.8248 & 97.34299 & 3.24477 & 17.80430 & 2816.054 & 93.8685 & 6880.24 & 5.85960 & 0.19532 & 0.01076 \\
GGAX09 & 30 & 310.2216 & 10.3407 & 19.6901 & 69.00354 & 2.30012 & 11.97435 & 2676.372 & 89.2124 & 13007.02 & 5.13625 & 0.17121 & 0.00449 \\
GGAX10 & 30 & 260.1163 & 8.6705 & 14.5260 & 42.00588 & 1.40020 & 7.17257 & 2057.618 & 68.5873 & 5671.65 & 4.28734 & 0.14291 & 0.00598 \\
SSGAX06 & 30 & 352.4735 & 11.7491 & 19.0427 & 77.03410 & 2.56780 & 11.33058 & 6353.236 & 211.7745 & 13329.70 & 7.33354 & 0.24445 & 0.02425 \\
SSGAX07 & 30 & 335.8611 & 11.1954 & 18.6028 & 52.04329 & 1.73478 & 10.39958 & 4691.105 & 156.3702 & 19022.31 & 5.45293 & 0.18176 & 0.00758 \\
SSGAX08 & 30 & 303.3327 & 10.1111 & 13.3066 & 48.09139 & 1.60305 & 7.03069 & 4213.878 & 140.4626 & 15033.45 & 6.57597 & 0.21920 & 0.01325 \\
SSGAX09 & 30 & 292.4265 & 9.7475 & 11.4686 & 62.27272 & 2.07576 & 11.61922 & 3689.997 & 122.9999 & 10995.33 & 5.11276 & 0.17043 & 0.00483 \\
SSGAX10 & 30 & 228.0086 & 7.6003 & 12.6033 & 58.61603 & 1.95387 & 11.33277 & 2557.336 & 85.2445 & 5461.39 & 5.83554 & 0.19452 & 0.00733 \\
ReGenGGAX06 & 30 & 21.1354 & 0.7045 & 0.5303 & 11.54441 & 0.38481 & 0.16875 & 344.900 & 11.4967 & 559.56 & 2.29069 & 0.07636 & 0.00237 \\
ReGenGGAX07 & 30 & 19.2442 & 0.6415 & 0.6273 & 9.33206 & 0.31107 & 0.03705 & 601.931 & 20.0644 & 1538.00 & 1.90437 & 0.06348 & 0.00186 \\
ReGenGGAX08 & 30 & 13.3463 & 0.4449 & 0.6015 & 8.84472 & 0.29482 & 0.06261 & 302.420 & 10.0807 & 673.19 & 2.44422 & 0.08147 & 0.00326 \\
ReGenGGAX09 & 30 & 13.9836 & 0.4661 & 0.5390 & 8.97137 & 0.29905 & 0.08245 & 93.521 & 3.1174 & 104.54 & 2.55778 & 0.08526 & 0.00372 \\
ReGenGGAX10 & 30 & 8.2175 & 0.2739 & 0.2014 & 6.59899 & 0.21997 & 0.03338 & 24.872 & 0.8291 & 11.52 & 2.65118 & 0.08837 & 0.00197 \\
ReGenSSGAX06 & 30 & 24.7609 & 0.8254 & 1.3584 & 16.02581 & 0.53419 & 0.66559 & 891.330 & 29.7110 & 2116.61 & 2.14538 & 0.07151 & 0.00158 \\
ReGenSSGAX07 & 30 & 19.3034 & 0.6434 & 0.4815 & 14.03180 & 0.46773 & 0.40673 & 854.721 & 28.4907 & 2233.54 & 1.96305 & 0.06544 & 0.00216 \\
ReGenSSGAX08 & 30 & 17.8044 & 0.5935 & 0.6824 & 11.06604 & 0.36887 & 0.20173 & 210.062 & 7.0021 & 505.22 & 2.08242 & 0.06941 & 0.00241 \\
ReGenSSGAX09 & 30 & 10.9318 & 0.3644 & 0.3144 & 8.10569 & 0.27019 & 0.03800 & 820.391 & 27.3464 & 2287.48 & 2.08257 & 0.06942 & 0.00260 \\
ReGenSSGAX10 & 30 & 9.6430 & 0.3214 & 0.2819 & 9.05231 & 0.30174 & 0.09498 & 0.00824998 & 0.00027 & 1.41E-09 & 2.05718 & 0.06857 & 0.00186
\\\hline
\end{tabular}
\end{table}
\end{landscape}
Based on the ReGen EAs Samples in Appendix \ref{appendB}, the analysis of variance is computed to determine the difference between the evolutionary algorithms with different implementations. Variations include classic GAs and ReGen GAs, replacement strategies (Generational and Steady State), and crossover rates from $0.6$ to $1.0$; the algorithms are twenty in total. Table~\ref{c4table22} shows a summary for each algorithm and function; the summary presents the number of samples per algorithm ($30$), the sum of the fitnesses, the average fitness, and their variances. The results of the single-factor ANOVA are tabulated in Table~\ref{c4table23}.
\begin{table}[H]
\centering
\caption{Anova Single Factor: ANOVA}
\label{c4table23}
\begin{tabular}{lllllll}
\hline
\multicolumn{7}{c}{\textbf{Rastrigin}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 14947.3266 & 19 & 786.70140 & 86.3753 & 4.8815E-155 & 1.60449 \\
Within Groups & 5282.60586 & 580 & 9.10794 & & & \\
& & & & & & \\
Total & 20229.9325 & 599 & & & &
\\\hline
\multicolumn{7}{c}{\textbf{Rosenbrock}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 507.02716 & 19 & 26.68564 & 4.8426 & 1.30194E-10 & 1.60449 \\
Within Groups & 3196.1356 & 580 & 5.51057 & & & \\
& & & & & & \\
Total & 3703.1628 & 599 & & & &
\\\hline
\multicolumn{7}{c}{\textbf{Schwefel}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 2832064.8 & 19 & 149056.042 & 20.7038 & 2.4055E-53 & 1.60449 \\
Within Groups & 4175672.6 & 580 & 7199.43552 & & & \\
& & & & & & \\
Total & 7007737.3 & 599 & & & &
\\\hline
\multicolumn{7}{c}{\textbf{Griewank}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 2.26223 & 19 & 0.11906 & 20.2641 & 2.6291E-52 & 1.60449 \\
Within Groups & 3.40786 & 580 & 0.00587 & & & \\
& & & & & & \\
Total & 5.67009 & 599 & & & &
\\\hline
\end{tabular}
\end{table}
As the P-values for the Rastrigin, Rosenbrock, Schwefel, and Griewank functions are less than the significance level $0.05$, the results allow concluding that there are significant differences between groups, as shown in Table~\ref{c4table23}. In one-way ANOVA tests, significant P-values indicate that some group means are different, but not which pairs of groups differ. In order to interpret the one-way ANOVA results, multiple pairwise comparisons with Student's t-test are performed to determine whether the mean difference between specific pairs of groups is statistically significant. Also, paired-sample Wilcoxon tests are computed.
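The quantities in each ANOVA table (SS, df, MS, F) follow from the standard single-factor decomposition: the total sum of squares splits into a between-groups part and a within-groups part, each divided by its degrees of freedom to form the mean squares whose ratio is $F$. A minimal pure-Python sketch of this computation (ours, for illustration only; it does not reproduce the tabulated values, which come from the full sample data) is:

```python
def one_way_anova_f(groups):
    """Return (SS_between, SS_within, F) for a list of sample groups,
    as in a single-factor ANOVA table."""
    all_values = [v for g in groups for v in g]
    grand_mean = sum(all_values) / len(all_values)
    # between-groups sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-groups sum of squares: deviations from each group's own mean
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, f_stat
```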
\begin{landscape}
\begin{figure}[H]
\centering
\includegraphics[width=4.2in]{imagesThesis/boxRAS.pdf}
\includegraphics[width=4.2in]{imagesThesis/boxRAS2.pdf}
\includegraphics[width=4.2in]{imagesThesis/boxROSE.pdf}
\includegraphics[width=4.2in]{imagesThesis/boxROSE2.pdf}
\caption{From top to bottom: Rastrigin and Rosenbrock Functions. On the left, EAs with Generational replacement (GGA) and Steady State replacement (SSGA) with Crossover rates from $0.6$ to $1.0$. On the right, EAs grouped by Generational replacement (GGA) and Steady State replacement (SSGA).}
\label{c4fig11}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=4.6in]{imagesThesis/boxSCHW.pdf}
\includegraphics[width=4.6in]{imagesThesis/boxSCHW2.pdf}
\includegraphics[width=4.6in]{imagesThesis/boxGRIE.pdf}
\includegraphics[width=4.6in]{imagesThesis/boxGRIE2.pdf}
\caption{From top to bottom: Schwefel and Griewank Functions. On the left, EAs with Generational replacement (GGA) and Steady State replacement (SSGA) with Crossover rates from $0.6$ to $1.0$. On the right, EAs grouped by Generational replacement (GGA) and Steady State replacement (SSGA).}
\label{c4fig12}
\end{figure}
\end{landscape}
Box plots in Fig.~\ref{c4fig11} and Fig.~\ref{c4fig12} depict the median fitness of the EAs' best solutions (ReGen EAs Samples in Appendix \ref{appendB}). On the left, twenty EA variations with different crossover rates: Gray ($0.6$), Orange ($0.7$), Blue ($0.8$), White ($0.9$), and Yellow ($1.0$). On the right, the figures illustrate the median fitness of classic and epigenetic EAs, grouped by population replacement type: Gray (GGA), Orange (ReGen GGA), Blue (ReGen SSGA), and White (SSGA). For the Rastrigin function, the median fitness of each epigenetic EA is under the local minimum ($1.0$), while the median fitnesses of the classic GAs are over the local optimum ($5.0$). On the other hand, Rosenbrock's median fitness is less than $0.5$ for all epigenetic implementations; in contrast, for the standard GAs, the median fitness exceeds $1.0$. The epigenetic EAs for Schwefel achieved median fitnesses inferior to $0.1$; conversely, the GAs' median fitnesses are greater than $30$. In the Griewank function's box plots, the depicted median fitnesses are below the local optimum ($0.1$) for the epigenetic evolutionary algorithms, while the traditional GAs' median fitness values are above $0.1$. So, based on these data, it seems that epigenetic GAs find better solutions than classic GAs. However, it must be determined whether these findings are statistically significant.
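The pairwise comparisons that follow adjust their p-values with the Benjamini Hochberg (BH) step-up procedure: p-values are sorted, the $i$-th smallest is scaled by $m/i$, and monotonicity is enforced from the largest downward. A compact sketch of that adjustment (our illustration of the standard BH procedure, not the statistical package used for the tables) is:

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg step-up adjustment of a list of p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```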
\begin{landscape}
\begin{table}[H]
\centering
\caption{RAS Student T-tests pairwise comparisons with pooled standard deviation. Benjamini Hochberg (BH) as p-value adjustment method.}
\label{c4table24}
\tiny
\begin{tabular}{llllllllllllllllllll}
\hline
EAs & GGAX06 & GGAX07 & GGAX08 & GGAX09 & GGAX10 & ReGenGGAX06 & ReGenGGAX07 & ReGenGGAX08 & ReGenGGAX09 & ReGenGGAX10 \\\hline
GGAX07 & 0.96041246 & - & - & - & - & - & - & - & - & - \\
GGAX08 & 0.95257897 & 0.92910961 & - & - & - & - & - & - & - & - \\
GGAX09 & 0.29168935 & 0.24549691 & 0.36794632 & - & - & - & - & - & - & - \\
GGAX10 & \cellcolor[HTML]{EFEFEF}0.0011367 & \cellcolor[HTML]{EFEFEF}0.00076506 & \cellcolor[HTML]{EFEFEF}0.0018616 & 0.05322976 & - & - & - & - & - & - \\
ReGenGGAX06 & \cellcolor[HTML]{EFEFEF}2.66E-36 & \cellcolor[HTML]{EFEFEF}9.11E-37 & \cellcolor[HTML]{EFEFEF}9.78E-36 & \cellcolor[HTML]{EFEFEF}7.19E-31 & \cellcolor[HTML]{EFEFEF}2.44E-22 & - & - & - & - & - \\
ReGenGGAX07 & \cellcolor[HTML]{EFEFEF}1.24E-36 & \cellcolor[HTML]{EFEFEF}4.69E-37 & \cellcolor[HTML]{EFEFEF}4.64E-36 & \cellcolor[HTML]{EFEFEF}3.45E-31 & \cellcolor[HTML]{EFEFEF}1.23E-22 & 0.97139031 & - & - & - & - \\
ReGenGGAX08 & \cellcolor[HTML]{EFEFEF}1.45E-37 & \cellcolor[HTML]{EFEFEF}6.45E-38 & \cellcolor[HTML]{EFEFEF}4.69E-37 & \cellcolor[HTML]{EFEFEF}3.12E-32 & \cellcolor[HTML]{EFEFEF}1.36E-23 & 0.90599934 & 0.92910961 & - & - & - \\
ReGenGGAX09 & \cellcolor[HTML]{EFEFEF}1.80E-37 & \cellcolor[HTML]{EFEFEF}7.59E-38 & \cellcolor[HTML]{EFEFEF}6.04E-37 & \cellcolor[HTML]{EFEFEF}4.03E-32 & \cellcolor[HTML]{EFEFEF}1.71E-23 & 0.91303244 & 0.92910961 & 0.98343714 & - & - \\
ReGenGGAX10 & \cellcolor[HTML]{EFEFEF}2.71E-38 & \cellcolor[HTML]{EFEFEF}1.36E-38 & \cellcolor[HTML]{EFEFEF}7.82E-38 & \cellcolor[HTML]{EFEFEF}3.65E-33 & \cellcolor[HTML]{EFEFEF}1.96E-24 & 0.79383597 & 0.83932039 & 0.92910961 & 0.92910961 & - \\
ReGenSSGAX06 & \cellcolor[HTML]{EFEFEF}1.10E-35 & \cellcolor[HTML]{EFEFEF}4.00E-36 & \cellcolor[HTML]{EFEFEF}4.54E-35 & \cellcolor[HTML]{EFEFEF}3.19E-30 & \cellcolor[HTML]{EFEFEF}9.30E-22 & 0.95195906 & 0.92910961 & 0.83697676 & 0.83932039 & 0.67474664 \\
ReGenSSGAX07 & \cellcolor[HTML]{EFEFEF}1.24E-36 & \cellcolor[HTML]{EFEFEF}4.69E-37 & \cellcolor[HTML]{EFEFEF}4.65E-36 & \cellcolor[HTML]{EFEFEF}3.48E-31 & \cellcolor[HTML]{EFEFEF}1.24E-22 & 0.97139031 & 0.99797965 & 0.92910961 & 0.92910961 & 0.83932039 \\
ReGenSSGAX08 & \cellcolor[HTML]{EFEFEF}7.13E-37 & \cellcolor[HTML]{EFEFEF}2.77E-37 & \cellcolor[HTML]{EFEFEF}2.66E-36 & \cellcolor[HTML]{EFEFEF}1.92E-31 & \cellcolor[HTML]{EFEFEF}7.23E-23 & 0.95257897 & 0.97139031 & 0.94200353 & 0.95195906 & 0.85814598 \\
ReGenSSGAX09 & \cellcolor[HTML]{EFEFEF}6.72E-38 & \cellcolor[HTML]{EFEFEF}2.71E-38 & \cellcolor[HTML]{EFEFEF}1.91E-37 & \cellcolor[HTML]{EFEFEF}1.13E-32 & \cellcolor[HTML]{EFEFEF}5.46E-24 & 0.85648226 & 0.89695052 & 0.96340589 & 0.95659485 & 0.96041246 \\
ReGenSSGAX10 & \cellcolor[HTML]{EFEFEF}4.29E-38 & \cellcolor[HTML]{EFEFEF}1.88E-38 & \cellcolor[HTML]{EFEFEF}1.27E-37 & \cellcolor[HTML]{EFEFEF}6.62E-33 & \cellcolor[HTML]{EFEFEF}3.36E-24 & 0.83697676 & 0.85814598 & 0.95195906 & 0.94200353 & 0.97139031 \\
SSGAX06 & 0.81568977 & 0.85814598 & 0.70748155 & 0.11090763 & \cellcolor[HTML]{EFEFEF}0.00015681 & \cellcolor[HTML]{EFEFEF}3.31E-38 & \cellcolor[HTML]{EFEFEF}1.88E-38 & \cellcolor[HTML]{EFEFEF}3.11E-39 & \cellcolor[HTML]{EFEFEF}3.33E-39 & \cellcolor[HTML]{EFEFEF}1.14E-39 \\
SSGAX07 & 0.94200353 & 0.91303244 & 0.97375344 & 0.39322514 & \cellcolor[HTML]{EFEFEF}0.00216188 & \cellcolor[HTML]{EFEFEF}1.50E-35 & \cellcolor[HTML]{EFEFEF}7.11E-36 & \cellcolor[HTML]{EFEFEF}7.10E-37 & \cellcolor[HTML]{EFEFEF}8.88E-37 & \cellcolor[HTML]{EFEFEF}1.14E-37 \\
SSGAX08 & 0.17605857 & 0.14080147 & 0.22979948 & 0.91303244 & 0.10208473 & \cellcolor[HTML]{EFEFEF}1.16E-29 & \cellcolor[HTML]{EFEFEF}5.50E-30 & \cellcolor[HTML]{EFEFEF}5.08E-31 & \cellcolor[HTML]{EFEFEF}6.54E-31 & \cellcolor[HTML]{EFEFEF}6.38E-32 \\
SSGAX09 & 0.06643715 & 0.05168347 & 0.09169682 & 0.63832768 & 0.24857143 & \cellcolor[HTML]{EFEFEF}9.17E-28 & \cellcolor[HTML]{EFEFEF}4.36E-28 & \cellcolor[HTML]{EFEFEF}4.09E-29 & \cellcolor[HTML]{EFEFEF}5.18E-29 & \cellcolor[HTML]{EFEFEF}5.31E-30 \\
SSGAX10 & \cellcolor[HTML]{EFEFEF}3.71E-06 & \cellcolor[HTML]{EFEFEF}2.16E-06 & \cellcolor[HTML]{EFEFEF}7.19E-06 & \cellcolor[HTML]{EFEFEF}0.00082833 & 0.25058299 & \cellcolor[HTML]{EFEFEF}2.04E-17 & \cellcolor[HTML]{EFEFEF}1.09E-17 & \cellcolor[HTML]{EFEFEF}1.49E-18 & \cellcolor[HTML]{EFEFEF}1.84E-18 & \cellcolor[HTML]{EFEFEF}2.56E-19
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{RAS Student T-tests pairwise comparisons with pooled standard deviation. Benjamini Hochberg (BH) as p-value adjustment method.}
\label{c4table25}
\tiny
\begin{tabular}{lllllllllllllllllll}
\hline
EAs & ReGenSSGAX06 & ReGenSSGAX07 & ReGenSSGAX08 & ReGenSSGAX09 & ReGenSSGAX10 & SSGAX06 & SSGAX07 & SSGAX08 & SSGAX09 \\\hline
GGAX07 & - & - & - & - & - & - & - & - & - \\
GGAX08 & - & - & - & - & - & - & - & - & - \\
GGAX09 & - & - & - & - & - & - & - & - & - \\
GGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX07 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX08 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX09 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX07 & 0.92910961 & - & - & - & - & - & - & - & - \\
ReGenSSGAX08 & 0.91303244 & 0.97139031 & - & - & - & - & - & - & - \\
ReGenSSGAX09 & 0.76325659 & 0.89695052 & 0.91303244 & - & - & - & - & - & - \\
ReGenSSGAX10 & 0.71850131 & 0.85814598 & 0.89710033 & 0.97139031 & - & - & - & - & - \\
SSGAX06 & \cellcolor[HTML]{EFEFEF}1.14E-37 & \cellcolor[HTML]{EFEFEF}1.88E-38 & \cellcolor[HTML]{EFEFEF}1.36E-38 & \cellcolor[HTML]{EFEFEF}1.37E-39 & \cellcolor[HTML]{EFEFEF}1.14E-39 & - & - & - & - \\
SSGAX07 & \cellcolor[HTML]{EFEFEF}7.11E-35 & \cellcolor[HTML]{EFEFEF}7.14E-36 & \cellcolor[HTML]{EFEFEF}4.03E-36 & \cellcolor[HTML]{EFEFEF}2.86E-37 & \cellcolor[HTML]{EFEFEF}1.80E-37 & 0.67474664 & - & - & - \\
SSGAX08 & \cellcolor[HTML]{EFEFEF}4.98E-29 & \cellcolor[HTML]{EFEFEF}5.55E-30 & \cellcolor[HTML]{EFEFEF}3.15E-30 & \cellcolor[HTML]{EFEFEF}1.92E-31 & \cellcolor[HTML]{EFEFEF}1.15E-31 & 0.05841924 & 0.24627159 & - & - \\
SSGAX09 & \cellcolor[HTML]{EFEFEF}3.92E-27 & \cellcolor[HTML]{EFEFEF}4.41E-28 & \cellcolor[HTML]{EFEFEF}2.45E-28 & \cellcolor[HTML]{EFEFEF}1.53E-29 & \cellcolor[HTML]{EFEFEF}9.20E-30 & \cellcolor[HTML]{EFEFEF}0.01743086 & 0.10081497 & 0.83932039 & - \\
SSGAX10 & \cellcolor[HTML]{EFEFEF}6.80E-17 & \cellcolor[HTML]{EFEFEF}1.11E-17 & \cellcolor[HTML]{EFEFEF}6.77E-18 & \cellcolor[HTML]{EFEFEF}6.50E-19 & \cellcolor[HTML]{EFEFEF}4.18E-19 & \cellcolor[HTML]{EFEFEF}2.73E-07 & \cellcolor[HTML]{EFEFEF}8.82E-06 & \cellcolor[HTML]{EFEFEF}0.00227926 & \cellcolor[HTML]{EFEFEF}0.01015915
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{ROSE Student T-tests pairwise comparisons with pooled standard deviation. Benjamini Hochberg (BH) as p-value adjustment method.}
\label{c4table26}
\tiny
\begin{tabular}{llllllllllllllllllll}
\hline
EAs & GGAX06 & GGAX07 & GGAX08 & GGAX09 & GGAX10 & ReGenGGAX06 & ReGenGGAX07 & ReGenGGAX08 & ReGenGGAX09 & ReGenGGAX10 \\\hline
GGAX07 & 0.97208658 & - & - & - & - & - & - & - & - & - \\
GGAX08 & \cellcolor[HTML]{EFEFEF}0.04881414 & \cellcolor[HTML]{EFEFEF}0.02555667 & - & - & - & - & - & - & - & - \\
GGAX09 & 0.57153304 & 0.36485797 & 0.20856668 & - & - & - & - & - & - & - \\
GGAX10 & 0.83562136 & 0.97544713 & \cellcolor[HTML]{EFEFEF}0.01549184 & 0.23648078 & - & - & - & - & - & - \\
ReGenGGAX06 & 0.06582199 & 0.10908846 & \cellcolor[HTML]{EFEFEF}7.08E-05 & \cellcolor[HTML]{EFEFEF}0.01126426 & 0.16925516 & - & - & - & - & - \\
ReGenGGAX07 & 0.05376269 & 0.08887445 & \cellcolor[HTML]{EFEFEF}5.27E-05 & \cellcolor[HTML]{EFEFEF}0.00799463 & 0.13573914 & 0.99094875 & - & - & - & - \\
ReGenGGAX08 & 0.05286811 & 0.08631238 & \cellcolor[HTML]{EFEFEF}5.27E-05 & \cellcolor[HTML]{EFEFEF}0.00788007 & 0.13321453 & 0.99094875 & 0.9964496 & - & - & - \\
ReGenGGAX09 & 0.05286811 & 0.08664538 & \cellcolor[HTML]{EFEFEF}5.27E-05 & \cellcolor[HTML]{EFEFEF}0.00788007 & 0.13387032 & 0.99094875 & 0.9964496 & 0.9964496 & - & - \\
ReGenGGAX10 & \cellcolor[HTML]{EFEFEF}0.04274603 & 0.07223188 & \cellcolor[HTML]{EFEFEF}5.27E-05 & \cellcolor[HTML]{EFEFEF}0.00609823 & 0.10737367 & 0.97544713 & 0.99094875 & 0.99094875 & 0.99094875 & - \\
ReGenSSGAX06 & 0.09586036 & 0.16641953 & \cellcolor[HTML]{EFEFEF}0.00017713 & \cellcolor[HTML]{EFEFEF}0.01918161 & 0.26058028 & 0.97544713 & 0.96054387 & 0.95886064 & 0.95886064 & 0.86336478 \\
ReGenSSGAX07 & 0.08069305 & 0.13573914 & \cellcolor[HTML]{EFEFEF}0.00011926 & \cellcolor[HTML]{EFEFEF}0.01600757 & 0.21502063 & 0.99094875 & 0.97544713 & 0.97544713 & 0.97544713 & 0.953995 \\
ReGenSSGAX08 & 0.06294053 & 0.10522403 & \cellcolor[HTML]{EFEFEF}7.08E-05 & \cellcolor[HTML]{EFEFEF}0.01068613 & 0.16328371 & 0.9964496 & 0.9964496 & 0.99094875 & 0.99094875 & 0.97544713 \\
ReGenSSGAX09 & \cellcolor[HTML]{EFEFEF}0.04910282 & 0.08069305 & \cellcolor[HTML]{EFEFEF}5.27E-05 & \cellcolor[HTML]{EFEFEF}0.00745631 & 0.12296574 & 0.99094875 & 0.9964496 & 0.9964496 & 0.9964496 & 0.9964496 \\
ReGenSSGAX10 & 0.05286811 & 0.08664538 & \cellcolor[HTML]{EFEFEF}5.27E-05 & \cellcolor[HTML]{EFEFEF}0.00788007 & 0.13387032 & 0.99094875 & 0.9964496 & 0.9964496 & 0.9964496 & 0.99094875 \\
SSGAX06 & 0.29886179 & 0.17046693 & 0.42748594 & 0.93354867 & 0.10908846 & \cellcolor[HTML]{EFEFEF}0.00362422 & \cellcolor[HTML]{EFEFEF}0.00256288 & \cellcolor[HTML]{EFEFEF}0.00256288 & \cellcolor[HTML]{EFEFEF}0.00256288 & \cellcolor[HTML]{EFEFEF}0.00206334 \\
SSGAX07 & 0.9964496 & 0.97544713 & \cellcolor[HTML]{EFEFEF}0.04576565 & 0.55629676 & 0.8428942 & 0.06943874 & 0.05516519 & 0.05376269 & 0.05376269 & \cellcolor[HTML]{EFEFEF}0.04560677 \\
SSGAX08 & 0.97544713 & 0.9964496 & \cellcolor[HTML]{EFEFEF}0.02812045 & 0.41044716 & 0.97208658 & 0.09586036 & 0.08069305 & 0.07931275 & 0.07947888 & 0.06294053 \\
SSGAX09 & 0.85566196 & 0.60652592 & 0.10908846 & 0.96054387 & 0.42748594 & \cellcolor[HTML]{EFEFEF}0.02555667 & \cellcolor[HTML]{EFEFEF}0.01918161 & \cellcolor[HTML]{EFEFEF}0.01918161 & \cellcolor[HTML]{EFEFEF}0.01918161 & \cellcolor[HTML]{EFEFEF}0.01508228 \\
SSGAX10 & 0.97208658 & 0.77377559 & 0.08069305 & 0.83665025 & 0.56743955 & \cellcolor[HTML]{EFEFEF}0.03752499 & \cellcolor[HTML]{EFEFEF}0.02812045 & \cellcolor[HTML]{EFEFEF}0.02790432 & \cellcolor[HTML]{EFEFEF}0.02790432 & \cellcolor[HTML]{EFEFEF}0.0218973
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{ROSE Student T-tests pairwise comparisons with pooled standard deviation. Benjamini Hochberg (BH) as p-value adjustment method.}
\label{c4table27}
\tiny
\begin{tabular}{lllllllllllllllllll}
\hline
EAs & ReGenSSGAX06 & ReGenSSGAX07 & ReGenSSGAX08 & ReGenSSGAX09 & ReGenSSGAX10 & SSGAX06 & SSGAX07 & SSGAX08 & SSGAX09 \\\hline
GGAX07 & - & - & - & - & - & - & - & - & - \\
GGAX08 & - & - & - & - & - & - & - & - & - \\
GGAX09 & - & - & - & - & - & - & - & - & - \\
GGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX07 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX08 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX09 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX07 & 0.99094875 & - & - & - & - & - & - & - & - \\
ReGenSSGAX08 & 0.97544713 & 0.99094875 & - & - & - & - & - & - & - \\
ReGenSSGAX09 & 0.93354867 & 0.97208658 & 0.99094875 & - & - & - & - & - & - \\
ReGenSSGAX10 & 0.95886064 & 0.97544713 & 0.99094875 & 0.9964496 & - & - & - & - & - \\
SSGAX06 & \cellcolor[HTML]{EFEFEF}0.00745631 & \cellcolor[HTML]{EFEFEF}0.00569756 & \cellcolor[HTML]{EFEFEF}0.00347469 & \cellcolor[HTML]{EFEFEF}0.00256288 & \cellcolor[HTML]{EFEFEF}0.00256288 & - & - & - & - \\
SSGAX07 & 0.10151582 & 0.085759 & 0.06582199 & 0.05236195 & 0.05376269 & 0.2855976 & - & - & - \\
SSGAX08 & 0.14452907 & 0.12182725 & 0.09212144 & 0.07258523 & 0.07947888 & 0.19702636 & 0.99094875 & - & - \\
SSGAX09 & \cellcolor[HTML]{EFEFEF}0.04186549 & \cellcolor[HTML]{EFEFEF}0.03244222 & \cellcolor[HTML]{EFEFEF}0.02448551 & \cellcolor[HTML]{EFEFEF}0.01789277 & \cellcolor[HTML]{EFEFEF}0.01918161 & 0.63931284 & 0.83884205 & 0.66236082 & - \\
SSGAX10 & 0.05531127 & \cellcolor[HTML]{EFEFEF}0.04910282 & \cellcolor[HTML]{EFEFEF}0.03549543 & \cellcolor[HTML]{EFEFEF}0.02555667 & \cellcolor[HTML]{EFEFEF}0.02790432 & 0.4973954 & 0.96054387 & 0.83562136 & 0.99094875
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{SCHW Student T-tests pairwise comparisons with pooled standard deviation. Benjamini Hochberg (BH) as p-value adjustment method.}
\label{c4table28}
\tiny
\begin{tabular}{llllllllllllllllllll}
\hline
EAs & GGAX06 & GGAX07 & GGAX08 & GGAX09 & GGAX10 & ReGenGGAX06 & ReGenGGAX07 & ReGenGGAX08 & ReGenGGAX09 & ReGenGGAX10 \\\hline
GGAX07 & 0.29349563 & - & - & - & - & - & - & - & - & - \\
GGAX08 & \cellcolor[HTML]{EFEFEF}1.32E-05 & \cellcolor[HTML]{EFEFEF}0.00154568 & - & - & - & - & - & - & - & - \\
GGAX09 & \cellcolor[HTML]{EFEFEF}4.96E-06 & \cellcolor[HTML]{EFEFEF}0.00074085 & 0.88288414 & - & - & - & - & - & - & - \\
GGAX10 & \cellcolor[HTML]{EFEFEF}4.79E-08 & \cellcolor[HTML]{EFEFEF}1.91E-05 & 0.32852579 & 0.44530689 & - & - & - & - & - & - \\
ReGenGGAX06 & \cellcolor[HTML]{EFEFEF}5.04E-15 & \cellcolor[HTML]{EFEFEF}1.89E-11 & \cellcolor[HTML]{EFEFEF}0.00042857 & \cellcolor[HTML]{EFEFEF}0.00089786 & \cellcolor[HTML]{EFEFEF}0.01566357 & - & - & - & - & - \\
ReGenGGAX07 & \cellcolor[HTML]{EFEFEF}7.65E-14 & \cellcolor[HTML]{EFEFEF}2.05E-10 & \cellcolor[HTML]{EFEFEF}0.00165974 & \cellcolor[HTML]{EFEFEF}0.00325874 & \cellcolor[HTML]{EFEFEF}0.04229624 & 0.76682348 & - & - & - & - \\
ReGenGGAX08 & \cellcolor[HTML]{EFEFEF}3.60E-15 & \cellcolor[HTML]{EFEFEF}1.27E-11 & \cellcolor[HTML]{EFEFEF}0.00034065 & \cellcolor[HTML]{EFEFEF}0.0007295 & \cellcolor[HTML]{EFEFEF}0.01320684 & 0.9634304 & 0.73372792 & - & - & - \\
ReGenGGAX09 & \cellcolor[HTML]{EFEFEF}4.10E-16 & \cellcolor[HTML]{EFEFEF}1.56E-12 & \cellcolor[HTML]{EFEFEF}0.00010421 & \cellcolor[HTML]{EFEFEF}0.00023203 & \cellcolor[HTML]{EFEFEF}0.00539223 & 0.76682348 & 0.52852644 & 0.80585652 & - & - \\
ReGenGGAX10 & \cellcolor[HTML]{EFEFEF}2.03E-16 & \cellcolor[HTML]{EFEFEF}8.63E-13 & \cellcolor[HTML]{EFEFEF}6.76E-05 & \cellcolor[HTML]{EFEFEF}0.00015732 & \cellcolor[HTML]{EFEFEF}0.00394782 & 0.71707208 & 0.47538279 & 0.75214023 & 0.93656485 & - \\
ReGenSSGAX06 & \cellcolor[HTML]{EFEFEF}1.38E-12 & \cellcolor[HTML]{EFEFEF}2.53E-09 & \cellcolor[HTML]{EFEFEF}0.00634508 & \cellcolor[HTML]{EFEFEF}0.01164887 & 0.11267669 & 0.50102081 & 0.74186251 & 0.46632728 & 0.30322679 & 0.26447332 \\
ReGenSSGAX07 & \cellcolor[HTML]{EFEFEF}9.78E-13 & \cellcolor[HTML]{EFEFEF}1.88E-09 & \cellcolor[HTML]{EFEFEF}0.00541319 & \cellcolor[HTML]{EFEFEF}0.00994238 & 0.10132833 & 0.52852644 & 0.76682348 & 0.49805897 & 0.32852579 & 0.28531851 \\
ReGenSSGAX08 & \cellcolor[HTML]{EFEFEF}1.33E-15 & \cellcolor[HTML]{EFEFEF}5.10E-12 & \cellcolor[HTML]{EFEFEF}0.0002037 & \cellcolor[HTML]{EFEFEF}0.00043581 & \cellcolor[HTML]{EFEFEF}0.00889768 & 0.88404959 & 0.6425634 & 0.91726021 & 0.89709331 & 0.8306861 \\
ReGenSSGAX09 & \cellcolor[HTML]{EFEFEF}7.48E-13 & \cellcolor[HTML]{EFEFEF}1.46E-09 & \cellcolor[HTML]{EFEFEF}0.00465967 & \cellcolor[HTML]{EFEFEF}0.00863338 & 0.09089022 & 0.55086676 & 0.8031202 & 0.52488929 & 0.35275945 & 0.30322679 \\
ReGenSSGAX10 & \cellcolor[HTML]{EFEFEF}1.72E-16 & \cellcolor[HTML]{EFEFEF}7.34E-13 & \cellcolor[HTML]{EFEFEF}5.90E-05 & \cellcolor[HTML]{EFEFEF}0.00013811 & \cellcolor[HTML]{EFEFEF}0.00351558 & 0.69085485 & 0.4561716 & 0.73372792 & 0.91726021 & 0.96983594 \\
SSGAX06 & 0.53740475 & 0.07160523 & \cellcolor[HTML]{EFEFEF}3.70E-07 & \cellcolor[HTML]{EFEFEF}1.27E-07 & \cellcolor[HTML]{EFEFEF}7.75E-10 & \cellcolor[HTML]{EFEFEF}3.25E-17 & \cellcolor[HTML]{EFEFEF}4.38E-16 & \cellcolor[HTML]{EFEFEF}2.31E-17 & \cellcolor[HTML]{EFEFEF}2.82E-18 & \cellcolor[HTML]{EFEFEF}1.77E-18 \\
SSGAX07 & 0.11267669 & 0.68333237 & \cellcolor[HTML]{EFEFEF}0.00796687 & \cellcolor[HTML]{EFEFEF}0.00427897 & \cellcolor[HTML]{EFEFEF}0.00017385 & \cellcolor[HTML]{EFEFEF}4.93E-10 & \cellcolor[HTML]{EFEFEF}4.37E-09 & \cellcolor[HTML]{EFEFEF}3.38E-10 & \cellcolor[HTML]{EFEFEF}4.80E-11 & \cellcolor[HTML]{EFEFEF}2.49E-11 \\
SSGAX08 & \cellcolor[HTML]{EFEFEF}0.02007775 & 0.28531851 & 0.05230058 & \cellcolor[HTML]{EFEFEF}0.0308658 & \cellcolor[HTML]{EFEFEF}0.0021947 & \cellcolor[HTML]{EFEFEF}2.82E-08 & \cellcolor[HTML]{EFEFEF}2.09E-07 & \cellcolor[HTML]{EFEFEF}2.03E-08 & \cellcolor[HTML]{EFEFEF}3.37E-09 & \cellcolor[HTML]{EFEFEF}1.88E-09 \\
SSGAX09 & \cellcolor[HTML]{EFEFEF}0.00207074 & 0.06067189 & 0.26108727 & 0.17920944 & \cellcolor[HTML]{EFEFEF}0.02103374 & \cellcolor[HTML]{EFEFEF}1.54E-06 & \cellcolor[HTML]{EFEFEF}9.73E-06 & \cellcolor[HTML]{EFEFEF}1.15E-06 & \cellcolor[HTML]{EFEFEF}2.33E-07 & \cellcolor[HTML]{EFEFEF}1.38E-07 \\
SSGAX10 & \cellcolor[HTML]{EFEFEF}2.07E-06 & \cellcolor[HTML]{EFEFEF}0.00039175 & 0.76682348 & 0.89709331 & 0.53459108 & \cellcolor[HTML]{EFEFEF}0.00165974 & \cellcolor[HTML]{EFEFEF}0.00551935 & \cellcolor[HTML]{EFEFEF}0.00136046 & \cellcolor[HTML]{EFEFEF}0.0004371 & \cellcolor[HTML]{EFEFEF}0.00030782
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{SCHW Student T-tests pairwise comparisons with pooled standard deviation. Benjamini Hochberg (BH) as p-value adjustment method.}
\label{c4table29}
\tiny
\begin{tabular}{lllllllllllllllllll}
\hline
EAs & ReGenSSGAX06 & ReGenSSGAX07 & ReGenSSGAX08 & ReGenSSGAX09 & ReGenSSGAX10 & SSGAX06 & SSGAX07 & SSGAX08 & SSGAX09 \\\hline
GGAX07 & - & - & - & - & - & - & - & - & - \\
GGAX08 & - & - & - & - & - & - & - & - & - \\
GGAX09 & - & - & - & - & - & - & - & - & - \\
GGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX07 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX08 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX09 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX07 & 0.9634304 & - & - & - & - & - & - & - & - \\
ReGenSSGAX08 & 0.39089825 & 0.4227465 & - & - & - & - & - & - & - \\
ReGenSSGAX09 & 0.93656485 & 0.9634304 & 0.45073509 & - & - & - & - & - & - \\
ReGenSSGAX10 & 0.25082493 & 0.27097547 & 0.80585652 & 0.29040453 & - & - & - & - & - \\
SSGAX06 & \cellcolor[HTML]{EFEFEF}8.06E-15 & \cellcolor[HTML]{EFEFEF}5.65E-15 & \cellcolor[HTML]{EFEFEF}9.17E-18 & \cellcolor[HTML]{EFEFEF}4.38E-15 & \cellcolor[HTML]{EFEFEF}1.77E-18 & - & - & - & - \\
SSGAX07 & \cellcolor[HTML]{EFEFEF}4.79E-08 & \cellcolor[HTML]{EFEFEF}3.65E-08 & \cellcolor[HTML]{EFEFEF}1.47E-10 & \cellcolor[HTML]{EFEFEF}2.82E-08 & \cellcolor[HTML]{EFEFEF}2.01E-11 & \cellcolor[HTML]{EFEFEF}0.01900709 & - & - & - \\
SSGAX08 & \cellcolor[HTML]{EFEFEF}1.80E-06 & \cellcolor[HTML]{EFEFEF}1.40E-06 & \cellcolor[HTML]{EFEFEF}9.20E-09 & \cellcolor[HTML]{EFEFEF}1.11E-06 & \cellcolor[HTML]{EFEFEF}1.58E-09 & \cellcolor[HTML]{EFEFEF}0.00237449 & 0.55086676 & - & - \\
SSGAX09 & \cellcolor[HTML]{EFEFEF}6.52E-05 & \cellcolor[HTML]{EFEFEF}5.27E-05 & \cellcolor[HTML]{EFEFEF}5.75E-07 & \cellcolor[HTML]{EFEFEF}4.25E-05 & \cellcolor[HTML]{EFEFEF}1.16E-07 & \cellcolor[HTML]{EFEFEF}0.00014806 & 0.18460857 & 0.52185554 & - \\
SSGAX10 & \cellcolor[HTML]{EFEFEF}0.0188548 & \cellcolor[HTML]{EFEFEF}0.01622979 & \cellcolor[HTML]{EFEFEF}0.00083026 & \cellcolor[HTML]{EFEFEF}0.01419946 & \cellcolor[HTML]{EFEFEF}0.00026794 & \cellcolor[HTML]{EFEFEF}4.85E-08 & \cellcolor[HTML]{EFEFEF}0.00242003 & \cellcolor[HTML]{EFEFEF}0.0193023 & 0.1247525
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{GRIE Student T-tests pairwise comparisons with pooled standard deviation. Benjamini Hochberg (BH) as p-value adjustment method.}
\label{c4table30}
\tiny
\begin{tabular}{llllllllllllllllllll}
\hline
EAs & GGAX06 & GGAX07 & GGAX08 & GGAX09 & GGAX10 & ReGenGGAX06 & ReGenGGAX07 & ReGenGGAX08 & ReGenGGAX09 & ReGenGGAX10 \\\hline
GGAX07 & 0.29214033 & - & - & - & - & - & - & - & - & - \\
GGAX08 & 0.510415 & 0.78361424 & - & - & - & - & - & - & - & - \\
GGAX09 & 0.83155928 & 0.15842627 & 0.32989657 & - & - & - & - & - & - & - \\
GGAX10 & 0.12649056 & \cellcolor[HTML]{EFEFEF}0.00396273 & \cellcolor[HTML]{EFEFEF}0.01463119 & 0.24076026 & - & - & - & - & - & - \\
ReGenGGAX06 & \cellcolor[HTML]{EFEFEF}1.06E-06 & \cellcolor[HTML]{EFEFEF}1.54E-09 & \cellcolor[HTML]{EFEFEF}1.51E-08 & \cellcolor[HTML]{EFEFEF}4.91E-06 & \cellcolor[HTML]{EFEFEF}0.00153207 & - & - & - & - & - \\
ReGenGGAX07 & \cellcolor[HTML]{EFEFEF}5.06E-08 & \cellcolor[HTML]{EFEFEF}4.12E-11 & \cellcolor[HTML]{EFEFEF}4.61E-10 & \cellcolor[HTML]{EFEFEF}2.45E-07 & \cellcolor[HTML]{EFEFEF}0.0001398 & 0.6486839 & - & - & - & - \\
ReGenGGAX08 & \cellcolor[HTML]{EFEFEF}3.48E-06 & \cellcolor[HTML]{EFEFEF}5.70E-09 & \cellcolor[HTML]{EFEFEF}5.65E-08 & \cellcolor[HTML]{EFEFEF}1.57E-05 & \cellcolor[HTML]{EFEFEF}0.00365564 & 0.87028325 & 0.49701859 & - & - & - \\
ReGenGGAX09 & \cellcolor[HTML]{EFEFEF}8.27E-06 & \cellcolor[HTML]{EFEFEF}1.67E-08 & \cellcolor[HTML]{EFEFEF}1.42E-07 & \cellcolor[HTML]{EFEFEF}3.59E-05 & \cellcolor[HTML]{EFEFEF}0.00666481 & 0.77062026 & 0.3850767 & 0.90053134 & - & - \\
ReGenGGAX10 & \cellcolor[HTML]{EFEFEF}1.67E-05 & \cellcolor[HTML]{EFEFEF}3.75E-08 & \cellcolor[HTML]{EFEFEF}2.93E-07 & \cellcolor[HTML]{EFEFEF}6.94E-05 & \cellcolor[HTML]{EFEFEF}0.01072871 & 0.66683146 & 0.31512052 & 0.82281877 & 0.9185665 & - \\
ReGenSSGAX06 & \cellcolor[HTML]{EFEFEF}3.45E-07 & \cellcolor[HTML]{EFEFEF}4.14E-10 & \cellcolor[HTML]{EFEFEF}3.85E-09 & \cellcolor[HTML]{EFEFEF}1.56E-06 & \cellcolor[HTML]{EFEFEF}0.00063804 & 0.87589027 & 0.79842526 & 0.73481827 & 0.62597133 & 0.53081354 \\
ReGenSSGAX07 & \cellcolor[HTML]{EFEFEF}7.80E-08 & \cellcolor[HTML]{EFEFEF}7.51E-11 & \cellcolor[HTML]{EFEFEF}7.69E-10 & \cellcolor[HTML]{EFEFEF}3.82E-07 & \cellcolor[HTML]{EFEFEF}0.00020698 & 0.70346634 & 0.94620992 & 0.55156577 & 0.44359153 & 0.35545711 \\
ReGenSSGAX08 & \cellcolor[HTML]{EFEFEF}2.09E-07 & \cellcolor[HTML]{EFEFEF}2.29E-10 & \cellcolor[HTML]{EFEFEF}2.20E-09 & \cellcolor[HTML]{EFEFEF}9.91E-07 & \cellcolor[HTML]{EFEFEF}0.00043492 & 0.82281877 & 0.84437496 & 0.66683146 & 0.55156577 & 0.46623314 \\
ReGenSSGAX09 & \cellcolor[HTML]{EFEFEF}2.09E-07 & \cellcolor[HTML]{EFEFEF}2.29E-10 & \cellcolor[HTML]{EFEFEF}2.20E-09 & \cellcolor[HTML]{EFEFEF}9.91E-07 & \cellcolor[HTML]{EFEFEF}0.00043492 & 0.82281877 & 0.84437496 & 0.66683146 & 0.55156577 & 0.46623314 \\
ReGenSSGAX10 & \cellcolor[HTML]{EFEFEF}1.72E-07 & \cellcolor[HTML]{EFEFEF}1.91E-10 & \cellcolor[HTML]{EFEFEF}1.86E-09 & \cellcolor[HTML]{EFEFEF}8.31E-07 & \cellcolor[HTML]{EFEFEF}0.00038006 & 0.80431313 & 0.87028325 & 0.6486839 & 0.53081354 & 0.44359153 \\
SSGAX06 & \cellcolor[HTML]{EFEFEF}0.00151754 & 0.06686587 & \cellcolor[HTML]{EFEFEF}0.02281821 & \cellcolor[HTML]{EFEFEF}0.00045194 & \cellcolor[HTML]{EFEFEF}1.04E-06 & \cellcolor[HTML]{EFEFEF}4.59E-15 & \cellcolor[HTML]{EFEFEF}1.92E-16 & \cellcolor[HTML]{EFEFEF}2.84E-14 & \cellcolor[HTML]{EFEFEF}1.04E-13 & \cellcolor[HTML]{EFEFEF}2.96E-13 \\
SSGAX07 & 0.89755203 & 0.3797239 & 0.62951206 & 0.7142955 & 0.08207869 & \cellcolor[HTML]{EFEFEF}4.14E-07 & \cellcolor[HTML]{EFEFEF}1.76E-08 & \cellcolor[HTML]{EFEFEF}1.36E-06 & \cellcolor[HTML]{EFEFEF}3.37E-06 & \cellcolor[HTML]{EFEFEF}6.82E-06 \\
SSGAX08 & 0.06140684 & 0.56499111 & 0.33338764 & \cellcolor[HTML]{EFEFEF}0.02626616 & \cellcolor[HTML]{EFEFEF}0.00026054 & \cellcolor[HTML]{EFEFEF}1.87E-11 & \cellcolor[HTML]{EFEFEF}3.06E-13 & \cellcolor[HTML]{EFEFEF}8.84E-11 & \cellcolor[HTML]{EFEFEF}2.54E-10 & \cellcolor[HTML]{EFEFEF}5.91E-10 \\
SSGAX09 & 0.81757796 & 0.14718982 & 0.31512052 & 0.9735815 & 0.25697268 & \cellcolor[HTML]{EFEFEF}5.86E-06 & \cellcolor[HTML]{EFEFEF}2.93E-07 & \cellcolor[HTML]{EFEFEF}1.84E-05 & \cellcolor[HTML]{EFEFEF}4.22E-05 & \cellcolor[HTML]{EFEFEF}8.13E-05 \\
SSGAX10 & 0.53081354 & 0.75872072 & 0.9735815 & 0.3471931 & \cellcolor[HTML]{EFEFEF}0.01630752 & \cellcolor[HTML]{EFEFEF}1.78E-08 & \cellcolor[HTML]{EFEFEF}5.73E-10 & \cellcolor[HTML]{EFEFEF}6.80E-08 & \cellcolor[HTML]{EFEFEF}1.72E-07 & \cellcolor[HTML]{EFEFEF}3.51E-07
\\\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{GRIE Student T-tests pairwise comparisons with pooled standard deviation. Benjamini Hochberg (BH) as p-value adjustment method.}
\label{c4table31}
\tiny
\begin{tabular}{lllllllllllllllllll}
\hline
EAs & ReGenSSGAX06 & ReGenSSGAX07 & ReGenSSGAX08 & ReGenSSGAX09 & ReGenSSGAX10 & SSGAX06 & SSGAX07 & SSGAX08 & SSGAX09 \\\hline
GGAX07 & - & - & - & - & - & - & - & - & - \\
GGAX08 & - & - & - & - & - & - & - & - & - \\
GGAX09 & - & - & - & - & - & - & - & - & - \\
GGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX07 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX08 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX09 & - & - & - & - & - & - & - & - & - \\
ReGenGGAX10 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX06 & - & - & - & - & - & - & - & - & - \\
ReGenSSGAX07 & 0.84437496 & - & - & - & - & - & - & - & - \\
ReGenSSGAX08 & 0.94566043 & 0.89755203 & - & - & - & - & - & - & - \\
ReGenSSGAX09 & 0.94566043 & 0.89755203 & 0.99979789 & - & - & - & - & - & - \\
ReGenSSGAX10 & 0.92073066 & 0.9185665 & 0.9735815 & 0.9735815 & - & - & - & - & - \\
SSGAX06 & \cellcolor[HTML]{EFEFEF}8.07E-16 & \cellcolor[HTML]{EFEFEF}2.13E-16 & \cellcolor[HTML]{EFEFEF}4.22E-16 & \cellcolor[HTML]{EFEFEF}4.22E-16 & \cellcolor[HTML]{EFEFEF}4.22E-16 & - & - & - & - \\
SSGAX07 & \cellcolor[HTML]{EFEFEF}1.37E-07 & \cellcolor[HTML]{EFEFEF}2.96E-08 & \cellcolor[HTML]{EFEFEF}7.80E-08 & \cellcolor[HTML]{EFEFEF}7.80E-08 & \cellcolor[HTML]{EFEFEF}6.65E-08 & \cellcolor[HTML]{EFEFEF}0.00298689 & - & - & - \\
SSGAX08 & \cellcolor[HTML]{EFEFEF}3.72E-12 & \cellcolor[HTML]{EFEFEF}5.71E-13 & \cellcolor[HTML]{EFEFEF}1.90E-12 & \cellcolor[HTML]{EFEFEF}1.90E-12 & \cellcolor[HTML]{EFEFEF}1.62E-12 & 0.31027932 & 0.09591503 & - & - \\
SSGAX09 & \cellcolor[HTML]{EFEFEF}1.87E-06 & \cellcolor[HTML]{EFEFEF}4.56E-07 & \cellcolor[HTML]{EFEFEF}1.15E-06 & \cellcolor[HTML]{EFEFEF}1.15E-06 & \cellcolor[HTML]{EFEFEF}9.91E-07 & \cellcolor[HTML]{EFEFEF}0.00039995 & 0.69048565 & \cellcolor[HTML]{EFEFEF}0.02377523 & - \\
SSGAX10 & \cellcolor[HTML]{EFEFEF}4.78E-09 & \cellcolor[HTML]{EFEFEF}9.57E-10 & \cellcolor[HTML]{EFEFEF}2.67E-09 & \cellcolor[HTML]{EFEFEF}2.67E-09 & \cellcolor[HTML]{EFEFEF}2.20E-09 & \cellcolor[HTML]{EFEFEF}0.02056055 & 0.64946895 & 0.31848689 & 0.32989657
\\\hline
\end{tabular}
\end{table}
\end{landscape}
\paragraph{\em{Multiple pairwise t-test:}}
Multiple pairwise comparisons between group means are performed. In the one-way ANOVA test described above, significant p-values indicate that some group means differ; to identify which pairs of groups differ, multiple pairwise comparisons are performed for the Rastrigin (RAS), Rosenbrock (ROSE), Schwefel (SCHW), and Griewank (GRIE) best-solution samples. Tables (\ref{c4table24}, \ref{c4table25}, \ref{c4table26}, \ref{c4table27}, \ref{c4table28}, \ref{c4table29}, \ref{c4table30}, and \ref{c4table31}) present pairwise comparisons using t-tests with pooled standard deviation (SD) together with their respective p-values; the p-values are adjusted with the Benjamini-Hochberg method. Only the values highlighted in gray indicate that the two corresponding algorithms are significantly different ($p < 0.05$); for those pairs, the alternative hypothesis is accepted.
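Since the tables above all rely on Benjamini-Hochberg adjusted p-values, a minimal pure-Python sketch of the BH step-up procedure may clarify what the adjustment does. The function name is ours and this is an illustrative reimplementation, not the statistical package used to produce the reported tables:

```python
def benjamini_hochberg(pvalues):
    """Adjust a list of p-values with the Benjamini-Hochberg (BH)
    step-up procedure: sort the p-values, scale each by m / rank,
    and enforce monotonicity from the largest p-value downwards."""
    m = len(pvalues)
    # Sort p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, keeping adjusted values monotone.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        value = min(prev, pvalues[i] * m / rank)
        adjusted[i] = value
        prev = value
    return adjusted
```

Only pairs whose adjusted p-value stays below $0.05$ are highlighted in gray in the tables.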
Now, to find out any significant difference between the median fitness of individuals in the two experimental groups (classic GAs and GAs with regulated genes), the Wilcoxon test is conducted.
\paragraph{\em{Paired Samples Wilcoxon Test:}}
For this test, algorithms are grouped per population replacement strategy, ignoring crossover rates: a Wilcoxon signed-rank test for generational EAs (GGA and ReGen GGA) and a Wilcoxon signed-rank test for steady state EAs (SSGA and ReGen SSGA). The test assesses classic EAs versus epigenetic EAs. In the results, $V$ represents the sum of the ranks assigned to differences with a positive sign, and the {\em P-value} is the probability of obtaining test results at least as extreme as those observed, assuming the null hypothesis is true.
\begin{itemize}
\item Rastrigin (RAS)
\begin{enumerate}
\item \par Wilcoxon signed rank test with continuity correction for generational EAs uses all data-set samples from GGAs and ReGen GGAs. $V = 11325$, the {\em P-value} is equal to $2.322841 \times 10^{-26}$, which is less than the significance level $\alpha = 0.05$.
\item \par Wilcoxon signed rank test with continuity correction for steady state EAs uses all data-set samples from SSGAs and ReGen SSGAs. $V = 11325$, the {\em P-value} is equal to $2.322841 \times 10^{-26}$, which is less than the significance level $\alpha = 0.05$.
\end{enumerate}
\item Rosenbrock (ROSE)
\begin{enumerate}
\item \par Wilcoxon signed rank test with continuity correction for generational EAs uses all data-set samples from GGAs and ReGen GGAs. $V = 10368$, the {\em P-value} is equal to $1.068438 \times 10^{-18}$, which is less than the significance level $\alpha = 0.05$.
\item \par Wilcoxon signed rank test with continuity correction for steady state EAs uses all data-set samples from SSGAs and ReGen SSGAs. $V = 10114$, the {\em P-value} is equal to $6.760613 \times 10^{-17}$, which is less than the significance level $\alpha = 0.05$.
\end{enumerate}
\newpage
\item Schwefel (SCHW)
\begin{enumerate}
\item \par Wilcoxon signed rank test with continuity correction for generational EAs uses all data-set samples from GGAs and ReGen GGAs. $V = 11121$, the {\em P-value} is equal to $1.305395 \times 10^{-24}$, which is less than the significance level $\alpha = 0.05$.
\item \par Wilcoxon signed rank test with continuity correction for steady state EAs uses all data-set samples from SSGAs and ReGen SSGAs. $V = 10913$, the {\em P-value} is equal to $6.836875 \times 10^{-23}$, which is less than the significance level $\alpha = 0.05$.
\end{enumerate}
\item Griewank (GRIE)
\begin{enumerate}
\item \par Wilcoxon signed rank test with continuity correction for generational EAs uses all data-set samples from GGAs and ReGen GGAs. $V = 10438$, the {\em P-value} is equal to $3.275069 \times 10^{-19}$, which is less than the significance level $\alpha = 0.05$.
\item \par Wilcoxon signed rank test with continuity correction for steady state EAs uses all data-set samples from SSGAs and ReGen SSGAs. $V = 10975$, the {\em P-value} is equal to $2.134437 \times 10^{-23}$, which is less than the significance level $\alpha = 0.05$.
\end{enumerate}
\end{itemize}
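The statistic $V$ reported above, the sum of the ranks of the positive paired differences, can be computed with a small sketch. The function name and the average-rank tie handling are illustrative choices; zero differences are dropped, as is usual for the signed-rank test, and the continuity correction only affects the p-value, not $V$:

```python
def wilcoxon_v(xs, ys):
    """Wilcoxon signed-rank statistic V: rank the absolute paired
    differences (zeros dropped, ties receive average ranks) and sum
    the ranks belonging to positive differences."""
    diffs = [x - y for x, y in zip(xs, ys) if x != y]
    abs_sorted = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    pos = 0
    while pos < len(diffs):
        end = pos
        # Group equal absolute differences and assign them the average rank.
        while (end + 1 < len(diffs)
               and abs(diffs[abs_sorted[end + 1]]) == abs(diffs[abs_sorted[pos]])):
            end += 1
        avg_rank = (pos + end) / 2 + 1
        for k in range(pos, end + 1):
            ranks[abs_sorted[k]] = avg_rank
        pos = end + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)
```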
The above leads to the conclusion that the median fitnesses of solutions found by classic generational genetic algorithms (GGAs) are significantly different from the median fitnesses of solutions found by generational genetic algorithms with regulated genes (ReGen GGAs), with p-values equal to $2.322841 \times 10^{-26}$ (RAS samples), $1.068438 \times 10^{-18}$ (ROSE samples), $1.305395 \times 10^{-24}$ (SCHW samples), and $3.275069 \times 10^{-19}$ (GRIE samples). So, the alternative hypothesis is accepted.
The median fitness of solutions found by classic steady state genetic algorithms (SSGAs) is significantly different from the median fitness of solutions found by steady state genetic algorithms with regulated genes (ReGen SSGAs), with p-values equal to $2.322841 \times 10^{-26}$ (RAS sampling fitness), $6.760613 \times 10^{-17}$ (ROSE sampling fitness), $6.836875 \times 10^{-23}$ (SCHW sampling fitness), and $2.134437 \times 10^{-23}$ (GRIE sampling fitness). As the p-values are less than the significance level $0.05$, it may be concluded that there are significant differences between the two EA groups in each Wilcoxon test.
\section{Summary}\label{c4s4}
The epigenetic technique is implemented on GAs to solve both binary and real encoding problems. For real encoding, the search space must be discretized by using a binary representation of real values, and a decoding schema from binary to real values is applied in order to evaluate individuals' fitness. Results have shown that the marking process does impact the way the population evolves: the fitness of individuals improves considerably towards the optimum. The use of epigenetic tags revealed that they help the ReGen GA find better solutions (although the optimum is not always reached). Better exploration and exploitation of the search space is evident; in addition, {\em Tags} are transmitted through generations, which maintains a notion of memory between generations. The statistical analysis supports the conclusion that the epigenetic implementations performed better than the standard versions.
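The binary-to-real decoding schema mentioned above can be illustrated with a minimal sketch, assuming a plain unsigned-binary encoding linearly rescaled to the variable's interval; the thesis' actual schema may differ, e.g. in bit order or in the use of Gray coding:

```python
def decode_binary(bits, lo, hi):
    """Map a list of bits to a real value in [lo, hi]: interpret the
    bits as an unsigned integer, then rescale linearly."""
    value = 0
    for b in bits:
        value = (value << 1) | b          # most significant bit first
    max_value = (1 << len(bits)) - 1      # largest representable integer
    return lo + (hi - lo) * (value / max_value)
```

For example, with 16 bits per variable, a Rastrigin variable in $[-5.12, 5.12]$ would be decoded to one of $2^{16}$ evenly spaced values.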
\chapter{ReGen HAEA: Binary and Real Codification}\label{chapter5}
The Hybrid Adaptive Evolutionary Algorithm with Regulated Genes (ReGen \textsc{HaEa}\xspace) is the implementation of the proposed epigenetic model on the standard \textsc{HaEa}\xspace. This implementation is meant to address real and binary encoding problems. Experimental functions with binary and real encoding have been selected to determine the model's applicability. In section~\ref{c5s1}, the general settings for all experiments are described. In section~\ref{c5s2}, two binary experiments are presented, performing the Deceptive order three and Deceptive order four trap functions to evidence the tags' effect on the populations' behavior; experimental results and their analysis are exhibited in subsections~\ref{c5s2ss1} and~\ref{c5s2ss2}. In section~\ref{c5s3}, three real encoding problems are presented, implementing the Rastrigin, Schwefel, and Griewank functions; experimental results and their analysis are exhibited in subsections~\ref{c5s3ss1} and~\ref{c5s3ss2}. In section~\ref{c5s4}, the statistical analysis of the results is described. The chapter closes with a summary in section~\ref{c5s5}.
Gomez in \cite{GOMEZa, GOMEZb} proposed an evolutionary algorithm that adapts operator rates while it is solving the optimization problem. \textsc{HaEa}\xspace is a mixture of ideas borrowed from evolutionary strategies, decentralized control adaptation, and central control adaptation. Algorithm~\ref{c5algo1} presents the pseudo-code of \textsc{HaEa}\xspace with the embedded epigenetic components.
\begin{algorithm}[htb!]%
\caption{Hybrid Adaptive Evolutionary Algorithm (\textsc{HaEa}\xspace)}\label{c5algo1}%
$\textsc{HaEa}\xspace(\text{fitness},\mu,\text{terminationCondition})$%
\begin{algorithmic}[1]%
\State $t = 0$
\State $P_0 = \textsc{initPopulation}(\mu)$
\State $\text{evaluate}(P_0,\text{fitness})$
\While {\big($\text{terminationCondition}(t,P_t,\text{fitness})$ is false\big)}
\State $P_{t+1} = \varnothing$
\For{\textbf{each} ind $\in P_t$}
\State rates = extractRatesOper(ind)
\State oper = \textsc{OpSelect}(operators, rates)
\State parents = \textsc{ParentsSelection}\big($P_t$, ind, arity(oper)\big)
\State offspring = apply(oper, parents)
\If {\Call{markingPeriodON}{t}}
\State \Call{applyMarking}{offspring}
\EndIf
\State offspring $\gets$ \textsc{decode}(\Call{epiGrowingFunction}{offspring})
\If {steady}
\State child = \textsc{Best}(offspring, ind)
\Else
\State child = \textsc{Best}(offspring)
\EndIf
\State $\delta = \text{random}(0,1)$ \Comment{learning rate}
\If{$\big(\text{fitness}(\text{child}) > \text{fitness}(\text{ind})\big)$}
\State rates[oper] = $(1.0 + \delta)\ *\ $rates[oper] \Comment{reward}
\Else
\State rates[oper] = $(1.0 - \delta)\ *\ $rates[oper] \Comment{punish}
\EndIf
\State normalizeRates(rates)
\State setRates(child, rates)
\State $P_{t+1} = P_{t+1}\ \cup$ \{child\}
\EndFor
\State $t = t+1$
\EndWhile
\end{algorithmic}
\end{algorithm}
As can be noted, \textsc{HaEa}\xspace does not generate a separate parent population to produce the next generation. Among the offspring produced by the genetic operator, only one individual is chosen as the child (lines 16 and 18) and takes the place of its parent in the next population (line 28). To preserve competent individuals through evolution, \textsc{HaEa}\xspace compares the parent individual against the offspring generated by the operator for steady-state replacement, whereas for generational replacement it chooses the best individual among the offspring (lines 15 and 17).
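The central adaptation step of Algorithm~\ref{c5algo1}, rewarding or punishing the applied operator's rate and renormalizing (lines 20--27), can be sketched as follows. The sketch is illustrative only; the function and variable names are assumptions rather than the thesis' actual implementation.

```python
import random

def update_rates(rates, oper, improved):
    """Reward or punish the applied operator's rate, then renormalize.

    `rates` maps operator name -> selection probability, `oper` is the
    operator just applied, and `improved` is True when the child beat
    its parent (illustrative sketch of HaEa's adaptation step).
    """
    delta = random.random()                      # learning rate in (0, 1)
    factor = (1.0 + delta) if improved else (1.0 - delta)
    rates = dict(rates)                          # do not mutate the input
    rates[oper] *= factor                        # reward or punish
    total = sum(rates.values())                  # normalizeRates
    return {op: r / total for op, r in rates.items()}
```

After every update the rates remain a probability distribution, so operator selection (\textsc{OpSelect}) can keep drawing from them directly.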
At line 11, the marking-period function has been embedded to initiate the marking process on individuals whenever a defined period is active. The epigenetic growing function (line 14) then interprets the markers on the chromosome structure of individuals in order to generate the phenotypes that are evaluated by the objective function.
For all experiments in this chapter, three marking periods have been defined. There is no particular reason for choosing this number of periods; the periods cover different ranges of iterations and are defined purely for testing purposes. The marking periods can be seen in the figures of reported results, delineated with vertical lines: blue lines depict the starting point of a marking period and gray lines its end.
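A minimal sketch of the \textsc{markingPeriodON} test used at line 11 of Algorithm~\ref{c5algo1} is given below. The start iterations ($200$, $500$, $800$) match the vertical blue lines in the reported figures, while the end iterations are illustrative assumptions.

```python
# Marking periods as inclusive iteration ranges. The start points (200,
# 500, 800) are taken from the reported figures; the end points are
# assumed values for illustration only.
MARKING_PERIODS = [(200, 300), (500, 600), (800, 900)]

def marking_period_on(t, periods=MARKING_PERIODS):
    """Return True when iteration t falls inside any marking period."""
    return any(start <= t <= end for start, end in periods)
```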
\section{General Experimental Settings}\label{c5s1}
The following experimental settings apply to the binary and real experiments reported in sections~\ref{c5s2} and~\ref{c5s3}. For the standard \textsc{HaEa}\xspace, a population size of 100 and 1000 iterations are used. A tournament of size $4$ selects the crossover parent. Reported results are the median over 30 different runs. For the current tests, \textsc{HaEa}\xspace uses only one genetic operator combination: single-point mutation and single-point crossover. The mutation operator modifies the genome by randomly flipping one single bit with uniform distribution; the single-point crossover splits and combines the parents' chromosome sections (left and right) using a randomly selected cutting point. The setup for the standard \textsc{HaEa}\xspace also includes generational (G\textsc{HaEa}\xspace) and steady-state (SS\textsc{HaEa}\xspace) replacements to choose the fittest individuals for the new population.
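The two genetic operators just described can be sketched as follows; this is a hedged illustration over bit-list genomes, and the thesis' actual implementation may differ in representation details.

```python
import random

def single_bit_mutation(genome):
    """Flip exactly one bit, chosen uniformly at random."""
    child = list(genome)
    i = random.randrange(len(child))
    child[i] ^= 1                      # flip the selected bit
    return child

def single_point_crossover(a, b):
    """Cut both parents at one random point and swap the right-hand tails."""
    cut = random.randrange(1, len(a))  # cut strictly inside the genome
    return a[:cut] + b[cut:], b[:cut] + a[cut:]
```

Both operators preserve the genome length, and the crossover yields two offspring from which the algorithm keeps the best.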
The ReGen \textsc{HaEa}\xspace setup uses the same configuration defined for the standard \textsc{HaEa}\xspace, with an additional configuration for the epigenetic process: a marking probability of $0.02$ (given a marking event, the probability of adding a tag is $0.35$, of removing a tag is $0.35$, and of modifying a tag is $0.3$) and three marking periods.
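Under this configuration, the marking step (\textsc{applyMarking} in Algorithm~\ref{c5algo1}) can be sketched as below. The tag representation (a list where \texttt{None} means ``no tag'') and the 8-bit tag payload are assumptions made for illustration; only the probabilities come from the setup above.

```python
import random

def apply_marking(tags, marking_prob=0.02,
                  p_add=0.35, p_remove=0.35, p_modify=0.30,
                  new_tag=lambda: random.getrandbits(8)):
    """Sketch of the marking step: with probability `marking_prob` per
    position, add, remove, or modify an epigenetic tag. `tags` is a list
    where None means 'no tag'; the 8-bit tag payload is an assumption."""
    tags = list(tags)
    for i in range(len(tags)):
        if random.random() >= marking_prob:
            continue                         # no marking event here
        r = random.random()
        if r < p_add:
            if tags[i] is None:
                tags[i] = new_tag()          # add a tag
        elif r < p_add + p_remove:
            tags[i] = None                   # remove a tag
        else:
            if tags[i] is not None:
                tags[i] = new_tag()          # modify an existing tag
    return tags
```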
\section{Binary Problems}\label{c5s2}
Binary encoding experiments have been performed in order to determine the applicability of the proposed approach. In binary encoding, a vector of binary values encodes the problem's solution.
\subsection{Test Functions}\label{c5s2ss1}
Two well-known binary problems, the deceptive order-three and deceptive order-four trap functions developed by Goldberg in 1989 \cite{GOLDBERG}, have been selected. The genome length for each function is $360$; the global optimum is $3600$ for deceptive order three and $450$ for the deceptive order-four trap. A complete definition of these functions can be found in chapter~\ref{chapter4}, section~\ref{c4s2}. Chapter~\ref{chapter4} also gives a more in-depth explanation of the implemented binary-to-real decoding mechanism.
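As an illustration, one widely used formulation of Goldberg's deceptive order-three function is sketched below. The subfunction values are the classic ones from the literature and are consistent with the stated global optimum ($120$ blocks $\times\ 30 = 3600$), although the exact table used in chapter~\ref{chapter4} may differ.

```python
# Classic deceptive order-3 subfunction (Goldberg, 1989): each 3-bit
# block is scored by a lookup table whose values mislead the search
# toward all-zeros, away from the isolated optimum at 111.
TRAP3 = {
    (0, 0, 0): 28, (0, 0, 1): 26, (0, 1, 0): 22, (1, 0, 0): 14,
    (0, 1, 1): 0,  (1, 0, 1): 0,  (1, 1, 0): 0,  (1, 1, 1): 30,
}

def deceptive_order3(genome):
    """Sum the deceptive subfunction over consecutive 3-bit blocks."""
    assert len(genome) % 3 == 0
    return sum(TRAP3[tuple(genome[i:i + 3])]
               for i in range(0, len(genome), 3))
```

With a $360$-bit genome, the all-ones string scores $3600$, while the deceptive all-zeros attractor scores $28 \times 120 = 3360$.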
\subsection{Results}\label{c5s2ss2}
Based on the defined configuration, both \textsc{HaEa}\xspace and ReGen \textsc{HaEa}\xspace are compared to identify the tags' behavior during the individuals' evolution. Results are tabulated in Table~\ref{c5table1}, which presents the binary functions Deceptive order three and four with generational (G\textsc{HaEa}\xspace) and steady-state (SS\textsc{HaEa}\xspace) replacements for the standard and epigenetic implementations. The table shows the best fitness based on the maximum median performance, followed by the standard deviation of the observed value and, enclosed in square brackets, the iteration where the reported fitness is found.
Fig.~\ref{c5fig1} and Fig.~\ref{c5fig2} illustrate the fitness of the best individuals in the performed experiments; reported fitnesses are based on the maximum median performance. Each figure shows the tendency of the best individuals per technique. For \textsc{HaEa}\xspace and ReGen \textsc{HaEa}\xspace, two methods are applied: steady-state and generational population replacements. The fitness evolution of individuals can be appreciated by tracking the green and red lines, which depict the best individuals' fitness for the standard \textsc{HaEa}\xspace; blue and black lines trace the best individuals' fitness for ReGen \textsc{HaEa}\xspace. Figures on the right side show the defined marking periods: vertical lines in blue depict the start of a marking period, and lines in gray delimit its end.
\begin{table}[H]
\centering
\caption{Results of the experiments for Generational and Steady state replacements}
\label{c5table1}
\begin{tabular}{lll} \hline
\textbf{EA} & \textbf{Deceptive Order 3} & \textbf{Deceptive Order 4} \\\hline
G\textsc{HaEa}\xspace & $3438 \pm 10.16 [686]$ & $394 \pm 3.16 [198]$\\
SS\textsc{HaEa}\xspace & $3435 \pm 10.96 [265]$ & $392 \pm 4.55 [249]$\\
ReGen G\textsc{HaEa}\xspace & $3587 \pm 09.89 [936]$ & $447 \pm 2.56 [810]$\\
ReGen SS\textsc{HaEa}\xspace & $3590 \pm 09.26 [925]$ & $446 \pm 2.39 [594]$\\ \hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=2in]{imagesThesis/HAEAD32.pdf}
\includegraphics[width=2in]{imagesThesis/HAEAD3.pdf}
\caption{Deceptive Order 3. Generational replacement (G\textsc{HaEa}\xspace) and Steady state replacement (SS\textsc{HaEa}\xspace).}
\label{c5fig1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=2in]{imagesThesis/HAEAD42.pdf}
\includegraphics[width=2in]{imagesThesis/HAEAD4.pdf}
\caption{Deceptive Order 4. Generational replacement (G\textsc{HaEa}\xspace) and Steady state replacement (SS\textsc{HaEa}\xspace).}
\label{c5fig2}
\end{figure}
The results tabulated in Table~\ref{c5table1} for Deceptive Order Three and Deceptive Order Four Trap show that ReGen \textsc{HaEa}\xspace performs better than the standard \textsc{HaEa}\xspace implementations. ReGen \textsc{HaEa}\xspace is able to discover varied near-optimal solutions until reaching the total of configured iterations, even though it did not find the global optimum in the performed experiments. In Fig.~\ref{c5fig1} and Fig.~\ref{c5fig2} it is notable that the pressure applied on chromosomes at iteration $200$ causes a change in the evolution of individuals: after the marking period starts, populations improve their performance once tags are added. Following the marking period at iteration $500$, a slight change is identified again, and the individuals' fitness improves to some degree closer to the optimal solution. The same behavior can be appreciated in the next period, at iteration $800$.
\section{Real Problems}\label{c5s3}
Experiments with real-coded functions are performed to determine the applicability of the proposed technique. For the selected real-coded problems, the solution is still encoded as a vector of binary values, which is decoded into real numbers as described below.
\subsection{Test Functions}\label{c5s3ss1}
The real functions shown in Table~\ref{c5table2} are used as testbeds. Each real value is represented with a binary string of 32 bits, and for each function the dimension of the problem is fixed to $n = 10$. A complete definition of these functions can be found in chapter~\ref{chapter4}, section~\ref{c4s3}. A detailed description of the encoding/decoding scheme used to obtain real values from $32$-bit binary strings through their integer representation is presented in the same section.
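A minimal sketch of such a fixed-point decoding scheme is shown below, assuming the 32 bits are read as an unsigned integer and scaled linearly into the feasible region; the exact scheme of chapter~\ref{chapter4} may differ in detail.

```python
def bits_to_real(bits, lo, hi):
    """Decode a 32-bit list into a real in [lo, hi]: read the bits as an
    unsigned integer and scale linearly (common fixed-point scheme)."""
    assert len(bits) == 32
    n = int("".join(map(str, bits)), 2)          # unsigned 32-bit integer
    return lo + (hi - lo) * n / (2**32 - 1)

def decode_vector(genome, lo, hi, dim=10):
    """Split a dim*32-bit genome into `dim` reals, one per 32-bit slice."""
    return [bits_to_real(genome[i * 32:(i + 1) * 32], lo, hi)
            for i in range(dim)]
```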
\begin{table}[H]
\centering
\caption{Real functions tested}
\label{c5table2}
\begin{tabular}{ccl}
\hline
Name & Function & Feasible Region\\
\hline
Rastrigin &
\large $f(\textbf{x}) = 10n + \sum_{i=1}^{n}\big(x_i^2 - 10\cos(2\pi x_i)\big)$ & $-5.12 \leq x_i \leq 5.12$\\
Schwefel & \large $f(\textbf{x}) = 418.9829n - \sum_{i=1}^{n} x_i \sin\big(\sqrt{|x_i|}\big)$ &
$-500 \leq x_i \leq 500$\\
Griewank & \large $f(\textbf{x}) = 1 + \sum_{i=1}^{n} \frac{x_i^{2}}{4000} - \prod_{i=1}^{n}\cos\big(\frac{x_i}{\sqrt{i}}\big)$ &
$-600 \leq x_i \leq 600$\\
\hline
\end{tabular}
\end{table}
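The three benchmarks of Table~\ref{c5table2} translate directly into code; the sketch below follows the formulas above, with the standard minima ($f(\mathbf{0}) = 0$ for Rastrigin and Griewank, and $x_i \approx 420.9687$ for Schwefel).

```python
import math

def rastrigin(x):
    """Highly multimodal; global minimum f(0, ..., 0) = 0."""
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def schwefel(x):
    """Deceptive: the global minimum (~0) lies near x_i = 420.9687."""
    return 418.9829 * len(x) - sum(xi * math.sin(math.sqrt(abs(xi)))
                                   for xi in x)

def griewank(x):
    """Many regularly spaced local minima; global minimum f(0) = 0."""
    s = sum(xi**2 / 4000.0 for xi in x)
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return 1 + s - p
```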
\subsection{Results}\label{c5s3ss2}
Results are tabulated in Table~\ref{c5table3}, which presents the real-encoded functions Rastrigin, Schwefel, and Griewank with generational (G\textsc{HaEa}\xspace) and steady-state (SS\textsc{HaEa}\xspace) replacements for the standard and ReGen implementations. The table includes the best fitness based on the minimum median performance, followed by the standard deviation of the observed value and, enclosed in square brackets, the iteration where the reported fitness is found. The last row displays the \textsc{HaEa}\xspace(XUG) implementation with the best results reported by Gomez \cite{GOMEZa, GOMEZb} using three different genetic operators: single real-point crossover, uniform mutation, and Gaussian mutation (XUG). \textsc{HaEa}\xspace(XUG) performed its experiments with real encoding.
Fig.~\ref{c5fig3}, Fig.~\ref{c5fig4}, and Fig.~\ref{c5fig5} illustrate the fitness of the best individuals of the populations in the performed experiments; reported fitnesses are based on the minimum median performance. Each figure shows the course of the best individuals per technique. For \textsc{HaEa}\xspace and ReGen \textsc{HaEa}\xspace, two methods are applied: steady-state and generational population replacements. The fitness evolution of individuals can be noted by tracking the green and red lines, which depict the best individuals' fitness for the standard \textsc{HaEa}\xspace; blue and black lines trace the best individuals' fitness for ReGen \textsc{HaEa}\xspace. Figures on the right side show the defined marking periods: vertical lines in blue depict the start of a marking period, and lines in gray delimit its end.
\begin{table}[H]
\centering
\caption{Results of the experiments for Generational and Steady state replacements}
\label{c5table3}
\begin{tabular}{llll} \hline
\textbf{EA} & \textbf{Rastrigin} & \textbf{Schwefel} & \textbf{Griewank} \\\hline
ReGen G\textsc{HaEa}\xspace & $0.019836 \pm 0.400 [969]$ & $0.000259 \pm 05.660 [998]$ & $0.048921 \pm 0.04 [975]$\\
ReGen SS\textsc{HaEa}\xspace & $0.019799 \pm 0.32 [1000]$ & $0.000259 \pm 24.770 [923]$ & $0.054499 \pm 0.02 [956]$\\
G\textsc{HaEa}\xspace & $11.14796 \pm 4.53 [1000]$ & $15.24888 \pm 101.59 [601]$ & $0.212970 \pm 0.12 [818]$\\
SS\textsc{HaEa}\xspace & $13.68203 \pm 5.12 [1000]$ & $135.4623 \pm 115.92 [843]$ & $0.211103 \pm 0.16 [783]$\\
\textsc{HaEa}\xspace(XUG) & $0.053614\pm0.2168080$ & $0.005599\pm0.01170200$ & $0.054955\pm0.029924$\\\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=2in]{imagesThesis/HAEARAS2.pdf}
\includegraphics[width=2in]{imagesThesis/HAEARAS.pdf}
\caption{Rastrigin. Generational replacement (G\textsc{HaEa}\xspace) and Steady state replacement (SS\textsc{HaEa}\xspace).}
\label{c5fig3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=2in]{imagesThesis/HAEASCHW2.pdf}
\includegraphics[width=2in]{imagesThesis/HAEASCHW.pdf}
\caption{Schwefel. Generational replacement (G\textsc{HaEa}\xspace) and Steady state replacement (SS\textsc{HaEa}\xspace).}
\label{c5fig4}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=2in]{imagesThesis/HAEAGRIE2.pdf}
\includegraphics[width=2in]{imagesThesis/HAEAGRIE.pdf}
\caption{Griewank. Generational replacement (G\textsc{HaEa}\xspace) and Steady state replacement (SS\textsc{HaEa}\xspace).}
\label{c5fig5}
\end{figure}
Based on the results tabulated in Table~\ref{c5table3}, it can be noted that the ReGen \textsc{HaEa}\xspace implementations perform better than the standard \textsc{HaEa}\xspace implementations, including the results from \textsc{HaEa}\xspace(XUG), which used real encoding for its experiments. ReGen \textsc{HaEa}\xspace is able to discover suitable candidate solutions. In Fig.~\ref{c5fig3}, Fig.~\ref{c5fig4}, and Fig.~\ref{c5fig5} it is observable that the marking periods applied on chromosomes at iterations $200$, $500$, and $800$ cause a great change in the evolution of individuals. After the first marking period starts, populations improve their performance once tags are added, especially for the Rastrigin and Schwefel functions; it is remarkable how in every defined marking period (delimited with vertical blue lines) individuals improve their fitness. For the Griewank function, there is a small margin of difference between the two implementations' performances, even though ReGen \textsc{HaEa}\xspace accomplishes better results. In Fig.~\ref{c5fig5} it is evident that the pressure applied on chromosomes at iteration $200$ affects the evolution of individuals: the fitness improves and remains stable for the best individuals until the evolution process finishes. ReGen \textsc{HaEa}\xspace found a variety of good solutions during the evolution process, exposing the ability of the proposed approach to discover local minima that are not identified by the standard \textsc{HaEa}\xspace implementations.
\section{Statistical Analysis}\label{c5s4}
Three different tests are performed: a one-way ANOVA test, pairwise Student's t-tests, and paired-samples Wilcoxon tests (also known as Wilcoxon signed-rank tests). The data set ReGen EAs Samples in Appendix~\ref{appendB} is used. The samples contain four \textsc{HaEa}\xspace implementations for each of the following functions: Deceptive Order Three, Deceptive Order Four Trap, Rastrigin, Schwefel, and Griewank. The samples refer to the best fitness of a solution found in each run, and the number of executions per algorithm is $30$. The implementations comprise the standard \textsc{HaEa}\xspace and ReGen \textsc{HaEa}\xspace with generational (G) and steady-state (SS) population replacements.
\begin{table}[H]
\centering
\caption{Anova Single Factor: SUMMARY}
\label{c5table4}
\begin{tabular}{lllll}
\hline
\multicolumn{5}{c}{\textbf{Deceptive Order Three}} \\
Groups & Count & Sum & Average & Variance \\\hline
GHAEA & 30 & 103146 & 3438.2 & 102.993103 \\
SSHAEA & 30 & 103052 & 3435.066667 & 111.650575 \\
ReGenGHAEA & 30 & 107554 & 3585.133333 & 93.4298851 \\
ReGenSSHAEA & 30 & 107650 & 3588.333333 & 78.3678161 \\\hline
\multicolumn{5}{c}{\textbf{Deceptive Order Four Trap}} \\
Groups & Count & Sum & Average & Variance \\\hline
GHAEA & 30 & 11815 & 393.8333333 & 9.385057471 \\
SSHAEA & 30 & 11737 & 391.2333333 & 20.73678161 \\
ReGenGHAEA & 30 & 13410 & 447 & 4.75862069 \\
ReGenSSHAEA & 30 & 13390 & 446.3333333 & 3.471264368 \\\hline
\multicolumn{5}{c}{\textbf{Rastrigin}} \\
Groups & Count & Sum & Average & Variance \\\hline
GHAEA & 30 & 329.0666924 & 10.96888975 & 20.2816323 \\
SSHAEA & 30 & 403.9728574 & 13.46576191 & 26.1883055 \\
ReGenGHAEA & 30 & 4.911251327 & 0.163708378 & 0.14340697 \\
ReGenSSHAEA & 30 & 3.576815371 & 0.119227179 & 0.09299807 \\\hline
\multicolumn{5}{c}{\textbf{Schwefel}} \\
Groups & Count & Sum & Average & Variance \\\hline
GHAEA & 30 & 1344.597033 & 44.81990111 & 7258.77527 \\
SSHAEA & 30 & 4439.27726 & 147.9759087 & 13144.3958 \\
ReGenGHAEA & 30 & 30.50459322 & 1.016819774 & 31.002189 \\
ReGenSSHAEA & 30 & 218.4970214 & 7.283234047 & 527.103699 \\\hline
\multicolumn{5}{c}{\textbf{Griewank}} \\
Groups & Count & Sum & Average & Variance \\\hline
GHAEA & 30 & 6.520457141 & 0.217348571 & 0.01290201 \\
SSHAEA & 30 & 7.181644766 & 0.239388159 & 0.02090765 \\
ReGenGHAEA & 30 & 1.624738713 & 0.054157957 & 0.0015871 \\
ReGenSSHAEA & 30 & 1.61989951 & 0.05399665 & 0.000493\\\hline
\end{tabular}
\end{table}
Based on the ReGen EAs Samples in Appendix~\ref{appendB}, the analysis of variance is computed to assess the differences between the evolutionary algorithms, which include the standard \textsc{HaEa}\xspace and ReGen \textsc{HaEa}\xspace with generational and steady-state replacement strategies. The algorithms are four in total; Table~\ref{c5table4} shows a summary for each function and algorithm, presenting the number of samples per algorithm (30), the sum of fitnesses, the average fitness, and their variances. The results of the single-factor ANOVA are tabulated in Table~\ref{c5table5}.
\begin{table}[H]
\centering
\caption{Anova Single Factor: ANOVA}
\label{c5table5}
\begin{tabular}{lllllll}
\hline
\multicolumn{7}{c}{\textbf{Deceptive Order Three}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 676201.16 & 3 & 225400.38 & 2333.0875 & 1.7577E-103 & 2.6828 \\
Within Groups & 11206.8 & 116 & 96.6103 & & & \\
& & & & & & \\
Total & 687407.96 & 119 & & & & \\\hline
\multicolumn{7}{c}{\textbf{Deceptive Order Four Trap}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 88020.6 & 3 & 29340.2 & 3060.1179 & 3.2412E-110 & 2.6828 \\
Within Groups & 1112.2 & 116 & 9.5879 & & & \\
& & & & & & \\
Total & 89132.8 & 119 & & & & \\\hline
\multicolumn{7}{c}{\textbf{Rastrigin}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 4468.33 & 3 & 1489.44 & 127.5582 & 1.39871E-36 & 2.6828 \\
Within Groups & 1354.48 & 116 & 11.6765 & & & \\
& & & & & & \\
Total & 5822.8196 & 119 & & & & \\\hline
\multicolumn{7}{c}{\textbf{Schwefel}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 415496.57 & 3 & 138498.85 & 26.4294 & 4.22517E-13 & 2.6828 \\
Within Groups & 607877.03 & 116 & 5240.3192 & & & \\
& & & & & & \\
Total & 1023373.61 & 119 & & & & \\\hline
\multicolumn{7}{c}{\textbf{Griewank}} \\
Source of Variation & SS & df & MS & F & P-value & F crit \\\hline
Between Groups & 0.918607 & 3 & 0.306202 & 34.1270 & 6.92905E-16 & 2.6828 \\
Within Groups & 1.040803 & 116 & 0.008972 & & & \\
& & & & & & \\
Total & 1.959410 & 119 & & & & \\\hline
\end{tabular}
\end{table}
As the P-values for the Deceptive Order Three, Deceptive Order Four Trap, Rastrigin, Schwefel, and Griewank functions are less than the significance level $0.05$, the results allow concluding that there are significant differences between the groups, as shown in Table~\ref{c5table5}. In one-way ANOVA tests, a significant P-value indicates that some of the group means are different, but not which pairs of groups differ. In order to interpret the one-way ANOVA results, multiple pairwise comparisons with Student's t-test are performed to determine whether the mean difference between specific pairs of groups is statistically significant. Paired-samples Wilcoxon tests are computed as well.
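The single-factor ANOVA quantities reported in Table~\ref{c5table5} (SS, df, MS, and F) can be reproduced with a short routine. The sketch below is a plain-Python illustration of the standard computation, not the tool actually used to produce the thesis' tables.

```python
def one_way_anova(groups):
    """Return (SS_between, SS_within, df_between, df_within, F) for a
    list of sample groups, following the standard one-way ANOVA layout."""
    all_x = [x for g in groups for x in g]
    n, k = len(all_x), len(groups)
    grand_mean = sum(all_x) / n
    means = [sum(g) / len(g) for g in groups]
    # Variation of group means around the grand mean (weighted by size).
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    # Variation of samples around their own group mean.
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    df_b, df_w = k - 1, n - k
    f_stat = (ss_between / df_b) / (ss_within / df_w)
    return ss_between, ss_within, df_b, df_w, f_stat
```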
\begin{landscape}
\begin{figure}[H]
\centering
\includegraphics[width=4.2in]{imagesThesis/boxHAEAD3.pdf}
\includegraphics[width=4.2in]{imagesThesis/boxHAEAD4.pdf}
\includegraphics[width=4.2in]{imagesThesis/boxHAEARAS.pdf}
\includegraphics[width=4.2in]{imagesThesis/boxHAEASCHW.pdf}
\caption{EAs with Generational replacement (G\textsc{HaEa}\xspace) and Steady State replacement (SS\textsc{HaEa}\xspace). On top: Deceptive Order Three and Deceptive Order Four Trap Functions. On the bottom: Rastrigin and Schwefel functions.}
\label{c5fig6}
\end{figure}
\end{landscape}
\begin{figure}[H]
\centering
\includegraphics[width=4.2in]{imagesThesis/boxHAEAGRIE.pdf}
\caption{EAs with Generational replacement (G\textsc{HaEa}\xspace) and Steady State replacement (SS\textsc{HaEa}\xspace). Griewank function.}
\label{c5fig7}
\end{figure}
Box plots in Fig.~\ref{c5fig6} and Fig.~\ref{c5fig7} depict the median fitness of the EAs' best solutions (ReGen EAs Samples in Appendix~\ref{appendB}). Four EAs are illustrated: the epigenetic EAs in orange (ReGen G\textsc{HaEa}\xspace) and blue (ReGen SS\textsc{HaEa}\xspace), and the standard EAs in gray (G\textsc{HaEa}\xspace) and white (SS\textsc{HaEa}\xspace). For the Deceptive Order Three function, the median fitness of each epigenetic EA is close to the global optimum ($3600$), while the median fitnesses for the classic \textsc{HaEa}\xspace are under the local optimum ($3450$). For the Deceptive Order Four Trap, the median fitness surpasses $440$ for all epigenetic implementations; in contrast, for the standard \textsc{HaEa}\xspace, the median fitness does not reach $400$. For the Rastrigin function, the median fitness of each epigenetic EA is lower than $1.0$, while the median fitnesses for the standard \textsc{HaEa}\xspace are over $10$. Next in order, the epigenetic EAs for Schwefel achieved median fitnesses below $0.0003$; conversely, the \textsc{HaEa}\xspace median fitnesses are greater than $0.0004$ for the generational replacement and higher than $100$ for the steady-state implementation. Finally, for the Griewank function's box plots, the depicted median fitnesses are below $0.1$ for the epigenetic evolutionary algorithms, while the median fitness values for the standard versions of \textsc{HaEa}\xspace are above $0.2$. Based on these data, the epigenetic \textsc{HaEa}\xspace versions appear to find better solutions than the classic \textsc{HaEa}\xspace implementations; however, it must be determined whether these findings are statistically significant.
\begin{table}[H]
\centering
\caption{Student T-tests pairwise comparisons with pooled standard deviation. Benjamini Hochberg (BH) as p-value adjustment method.}
\label{c5table6}
\begin{tabular}{llll}
\hline
\multicolumn{4}{c}{\textbf{Deceptive Order Three}} \\
EAs & GHAEA & ReGenGHAEA & ReGenSSHAEA \\\hline
ReGenGHAEA & \cellcolor[HTML]{EFEFEF}2.92E-87 & - & - \\
ReGenSSHAEA & \cellcolor[HTML]{EFEFEF}3.65E-88 & 0.2194596 & - \\
SSHAEA & 0.2194596 & \cellcolor[HTML]{EFEFEF}3.65E-88 & \cellcolor[HTML]{EFEFEF}1.02E-88 \\\hline
\multicolumn{4}{c}{\textbf{Deceptive Order Four Trap}} \\
EAs & GHAEA & ReGenGHAEA & ReGenSSHAEA \\\hline
ReGenGHAEA & \cellcolor[HTML]{EFEFEF}6.52E-94 & - & - \\
ReGenSSHAEA & \cellcolor[HTML]{EFEFEF}2.04E-93 & 0.40607501 & - \\
SSHAEA & \cellcolor[HTML]{EFEFEF}0.00180068 & \cellcolor[HTML]{EFEFEF}8.80E-96 & \cellcolor[HTML]{EFEFEF}1.72E-95 \\\hline
\multicolumn{4}{c}{\textbf{Rastrigin}} \\
EAs & GHAEA & ReGenGHAEA & ReGenSSHAEA \\\hline
ReGenGHAEA & \cellcolor[HTML]{EFEFEF}1.83E-22 & - & - \\
ReGenSSHAEA & \cellcolor[HTML]{EFEFEF}1.83E-22 & 0.959877985 & - \\
SSHAEA & \cellcolor[HTML]{EFEFEF}0.006585082 & \cellcolor[HTML]{EFEFEF}1.27E-28 & \cellcolor[HTML]{EFEFEF}1.27E-28 \\\hline
\multicolumn{4}{c}{\textbf{Schwefel}} \\
EAs & GHAEA & ReGenGHAEA & ReGenSSHAEA \\\hline
ReGenGHAEA & \cellcolor[HTML]{EFEFEF}0.03120645 & - & - \\
ReGenSSHAEA & 0.05632517 & 0.73803211 & - \\
SSHAEA & \cellcolor[HTML]{EFEFEF}4.21E-07 & \cellcolor[HTML]{EFEFEF}1.29E-11 & \cellcolor[HTML]{EFEFEF}3.65E-11 \\\hline
\multicolumn{4}{c}{\textbf{Griewank}} \\
EAs & GHAEA & ReGenGHAEA & ReGenSSHAEA \\\hline
ReGenGHAEA & \cellcolor[HTML]{EFEFEF}1.35E-09 & - & - \\
ReGenSSHAEA & \cellcolor[HTML]{EFEFEF}1.35E-09 & 0.99474898 & - \\
SSHAEA & 0.44325525 & \cellcolor[HTML]{EFEFEF}2.87E-11 & \cellcolor[HTML]{EFEFEF}2.87E-11\\\hline
\end{tabular}
\end{table}
\paragraph{\em{Multiple pairwise t-test:}}
Multiple pairwise comparisons between group means are performed. In the one-way ANOVA test described above, significant p-values indicate that some group means are different; in order to know which pairs of groups differ, multiple pairwise comparisons are performed for the Deceptive Order Three (D3), Deceptive Order Four Trap (D4), Rastrigin (RAS), Schwefel (SCHW), and Griewank (GRIE) best-solutions samples. Table~\ref{c5table6} presents pairwise comparisons using t-tests with pooled standard deviation (SD), together with their respective p-values; the test adjusts p-values with the Benjamini-Hochberg method. Only for the values highlighted in gray are the two algorithms significantly different ($p < 0.05$); for those pairs, the null hypothesis of equal means is rejected.
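The Benjamini-Hochberg adjustment applied in Table~\ref{c5table6} can be sketched as follows; this is an illustrative re-implementation intended to match the usual definition (as in R's \texttt{p.adjust} with \texttt{method = "BH"}), not the statistical package actually used.

```python
def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values: scale the i-th smallest
    p-value by m/i, then enforce monotonicity from the largest down."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p-values
    adjusted = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):          # walk from the largest p down
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted
```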
Now, to find out whether there is any significant difference between the median fitness of individuals in the two experimental groups (standard \textsc{HaEa}\xspace and \textsc{HaEa}\xspace with regulated genes), the Wilcoxon test is conducted.
\paragraph{\em{Paired Samples Wilcoxon Test:}}
For this test, the algorithms are grouped per population replacement strategy: a Wilcoxon signed-rank test for the generational EAs (G\textsc{HaEa}\xspace and ReGen G\textsc{HaEa}\xspace) and a Wilcoxon signed-rank test for the steady-state EAs (SS\textsc{HaEa}\xspace and ReGen SS\textsc{HaEa}\xspace). The tests assess the standard \textsc{HaEa}\xspace against the epigenetic \textsc{HaEa}\xspace implementations.
\begin{itemize}
\item Deceptive Order Three (D3)
\begin{enumerate}
\item \par Wilcoxon signed rank test with continuity correction for generational EAs uses all data-set samples from G\textsc{HaEa}\xspace and ReGen G\textsc{HaEa}\xspace implementations. $V = 0$, {\em P-value} is equal to $1.792453e-06$, which is less than the significance level alpha ($0.05$).
\item \par Wilcoxon signed rank test with continuity correction for steady state EAs uses all data-set samples from SS\textsc{HaEa}\xspace and ReGen SS\textsc{HaEa}\xspace algorithms. $V = 0$, {\em P-value} is equal to $1.803748e-06$, which is less than the significance level $alpha = 0.05$.
\end{enumerate}
\item Deceptive Order Four Trap (D4)
\begin{enumerate}
\item \par Wilcoxon signed rank test with continuity correction for generational EAs uses all data-set samples from G\textsc{HaEa}\xspace and ReGen G\textsc{HaEa}\xspace implementations. $V = 0$, {\em P-value} is equal to $1.760031e-06$, which is less than the significance level alpha ($0.05$).
\item \par Wilcoxon signed rank test with continuity correction for steady state EAs uses all data-set samples from SS\textsc{HaEa}\xspace and ReGen SS\textsc{HaEa}\xspace versions. $V = 0$, {\em P-value} is equal to $1.768926e-06$, which is less than the significance level $alpha = 0.05$.
\end{enumerate}
\item Rastrigin (RAS)
\begin{enumerate}
\item \par Wilcoxon signed rank test with continuity correction for generational EAs uses all data-set samples from G\textsc{HaEa}\xspace and ReGen G\textsc{HaEa}\xspace implementations. $V = 465$, {\em P-value} is equal to $1.863e-09$, which is less than the significance level alpha ($0.05$).
\item \par Wilcoxon signed rank test with continuity correction for steady state EAs uses all data-set samples from SS\textsc{HaEa}\xspace and ReGen SS\textsc{HaEa}\xspace algorithms. $V = 465$, {\em P-value} is equal to $1.863e-09$, which is less than the significance level $alpha = 0.05$.
\end{enumerate}
\item Schwefel (SCHW)
\begin{enumerate}
\item \par Wilcoxon signed rank test with continuity correction for generational EAs uses all data-set samples from G\textsc{HaEa}\xspace and ReGen G\textsc{HaEa}\xspace implementations. $V = 450$, {\em P-value} is equal to $2.552e-07$, which is less than the significance level $alpha = 0.05$.
\item \par Wilcoxon signed rank test with continuity correction for steady state EAs uses all data-set samples from SS\textsc{HaEa}\xspace and ReGen SS\textsc{HaEa}\xspace versions. $V = 452$, {\em P-value} is equal to $1.639e-07$, which is less than the significance level alpha ($0.05$).
\end{enumerate}
\item Griewank (GRIE)
\begin{enumerate}
\item \par Wilcoxon signed rank test with continuity correction for generational EAs uses all data-set samples from G\textsc{HaEa}\xspace and ReGen G\textsc{HaEa}\xspace implementations. $V = 462$, {\em P-value} is equal to $9.313e-09$, which is less than the significance level $alpha = 0.05$.
\item \par Wilcoxon signed rank test with continuity correction for steady state EAs uses all data-set samples from SS\textsc{HaEa}\xspace and ReGen SS\textsc{HaEa}\xspace algorithms. $V = 465$, {\em P-value} is equal to $1.863e-09$, which is less than the significance level alpha ($0.05$).
\end{enumerate}
\end{itemize}
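The statistic $V$ reported above is the sum of the ranks of the positive paired differences (standard minus ReGen). This explains the extreme values: $V = 0$ arises when the ReGen variant wins every paired run on a maximization problem, and $V = 30 \cdot 31 / 2 = 465$ when it wins every run on a minimization problem. A minimal sketch of the statistic follows; the continuity-corrected p-value computed by the statistical package is omitted.

```python
def wilcoxon_v(x, y):
    """Wilcoxon signed-rank statistic V: sum of the ranks of the positive
    differences x - y (zero differences dropped, ties get average ranks)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    absd = sorted((abs(d), i) for i, d in enumerate(diffs))
    ranks = [0.0] * len(diffs)
    j = 0
    while j < len(absd):
        k = j
        while k < len(absd) and absd[k][0] == absd[j][0]:
            k += 1                            # extend over the tie run
        avg_rank = (j + 1 + k) / 2.0          # average 1-based rank
        for _, i in absd[j:k]:
            ranks[i] = avg_rank
        j = k
    return sum(r for r, d in zip(ranks, diffs) if d > 0)
```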
The above leads to the conclusion that the median fitnesses of solutions found by the standard generational Hybrid Adaptive Evolutionary Algorithm (G\textsc{HaEa}\xspace) are significantly different from the median fitnesses of solutions found by the generational \textsc{HaEa}\xspace with regulated genes (ReGen G\textsc{HaEa}\xspace), with p-values equal to $1.792453e-06$ (D3 samples), $1.760031e-06$ (D4 samples), $1.863e-09$ (RAS samples), $2.552e-07$ (SCHW samples), and $9.313e-09$ (GRIE samples); the null hypothesis of equal medians is therefore rejected.
Likewise, the median fitness of solutions found by the classic steady-state Hybrid Adaptive Evolutionary Algorithm (SS\textsc{HaEa}\xspace) is significantly different from the median fitness of solutions found by the steady-state \textsc{HaEa}\xspace with regulated genes (ReGen SS\textsc{HaEa}\xspace), with p-values equal to $1.803748e-06$ (D3 sampling fitness), $1.768926e-06$ (D4 sampling fitness), $1.863e-09$ (RAS sampling fitness), $1.639e-07$ (SCHW sampling fitness), and $1.863e-09$ (GRIE sampling fitness). As all p-values are less than the significance level $0.05$, it may be concluded that there are significant differences between the two EA groups in each Wilcoxon test.
\section{Summary}\label{c5s5}
The epigenetic technique is implemented on \textsc{HaEa}\xspace to solve both binary and real encoding problems. The results have shown that the marking process did impact the way populations evolve, and the fitness of individuals improves considerably toward the optimum. It is important to point out that only two operators are used: single-point crossover and single-bit mutation. This thesis intends to avoid giving too many advantages to the implemented EAs in terms of parametrization and specialized operators, in order to identify the applicability of the proposed epigenetic model. The statistical analysis supports the conclusion that the epigenetic implementations performed better than the standard versions.
\chapter{Concluding Remarks}\label{chapter6}
\section{Conclusions}
Epigenetics has proven to be a useful field of study from which to extract elements for improving the framework of evolutionary algorithms, primarily because epigenetics encompasses mechanisms that support the inheritance and prolongation of experiences, so that future generations have enough information to adapt to changing environments. Based on the preceding, some of these elements are used to bring the ReGen EA technique to life. This research abstracts fundamental concepts of epigenetics and introduces them as part of the standard evolutionary algorithms' elements and operations.
Modeling epigenetic evolutionary algorithms is not easy, mainly because epigenetics involves too many elements, concepts, principles, and interactions to describe all that is known about the epigenetic landscape today. The process of designing epigenetic algorithms requires well-defined abstractions that simplify the epigenetic dynamics and their computational implementation. Epigenetic strategies for EA variations have been designed during the last decade; those strategies have reported improvements in EA performance and reductions in computational cost when solving specific problems. Nevertheless, almost all strategies use the same idea of switching genes off and on (a gene activation mechanism), or of silencing chromosome sections in response to a changing environment. This approach is valid, but epigenetics goes beyond on and off states. The ReGen EA approach focuses on developing interactions by affecting genetic codes with tags that encode epigenetic instructions.
One thing the ReGen EA has in common with other strategies is the use of epigenetic mechanisms such as DNA methylation. This research characterizes DNA methylation along with the histone modification mechanism. These are the best characterized of all epigenetic modifications, the most studied, and they offer a description that is easy to understand and to represent computationally. Most epigenetic approaches have abstracted the repression and activation principles from these mechanisms; basically, they use a large genotype in order to activate advantageous genes/sections and deactivate others, so that only the parts of the genome producing suitable phenotypic variations are expressed. Note that traditional evolutionary algorithms assume a finite number of genes, and to obtain novelty, EAs require not only mutations in the chromosome but also new genes. Epigenetics satisfies these needs and becomes a problem-solver: it optimizes the number of genes and reduces the dependence on classic mutation. This thesis takes advantage of that; for the ReGen EA there are no intrinsically good genes, and a fixed number of genes is defined. The ReGen EA involves epigenetic tags that positively or negatively affect individuals; in this way, tags either help or hinder individuals in becoming more suitable for a specific problem.
Metaphorical representations of epigenetic elements and principles, such as the epigenotype, tags, marking (adding, modifying, and removing tags), reading (tag interpretation), and tag inheritance, help in optimizing a defined number of genes while avoiding high and varied mutation rates; the ReGen EA is capable of producing suitable solutions without enlarging the individuals' genomes and with a fixed mutation rate of {\em 1.0/chromosome length}. Compared to other approaches, the interactions emerging from the dynamics of the marking and reading processes are beneficial for combining multiple schemes and building varied phenotypes, avoiding the creation of very large genomes regulated only by activation and deactivation mechanisms. The epigenetic mechanisms mentioned above have been useful for the design of the markers, but it has been difficult to define what tags should encode. Tag design involves encoding, structure, and meaning; these properties have guided the proposal of tag sections and of the rules interpreted by the reader function.
In this thesis, gene regulation is accomplished by adding, modifying, and removing epigenetic tags from the individuals' genomes. The complete regulation process produces phenotypes that, in most cases, become feasible solutions to a problem. The tag structure contains binary operations, and defining these operations has required a process of trial and error. The operations do not represent any biological mechanism, a fact that may be misinterpreted: the decision to build the instructions from binary operations may seem to give the marking process an advantage, but this is not the case. The operations have been selected taking many factors into account, three of the most prominent being the following. First, the designed tags are meant to solve only binary and real-defined problems, since that is the scope of this research. Second, the idea has always been to avoid giving advantages to the marking process and to follow the basic principles that biological epigenetic mechanisms exhibit when attaching and interpreting tags; on this basis, simpler operations that do not cause abrupt changes in the phenotype generation process are chosen. Third, chemical tags in biology carry epigenetic codes that are interpreted to maintain the dynamics of many natural processes; here, the defined operations are considered plausible for solving binary and real-coded problems. Other operations must still be explored, and it is conceivable that better operations have not yet been considered.
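The marking-and-reading dynamic described above can be illustrated with a minimal sketch. The operation names (\texttt{FLIP}, \texttt{MASK}, \texttt{KEEP}), the tag layout, and the reader function below are hypothetical illustrations, not the thesis's actual operation set.

```python
# Hypothetical illustration of tag-based gene regulation in a ReGen-style EA.
# The operations and tag layout are assumptions for illustration only.

OPERATIONS = {
    "FLIP": lambda bit: 1 - bit,   # invert the tagged gene
    "MASK": lambda bit: 0,         # silence the tagged gene
    "KEEP": lambda bit: bit,       # neutral tag
}

def read_tags(genome, tags):
    """Reader function: interpret tags to build the expressed phenotype.

    `tags` maps a gene position to an operation name; untagged positions
    are expressed as-is."""
    return [OPERATIONS[tags[i]](g) if i in tags else g
            for i, g in enumerate(genome)]

genome = [1, 0, 1, 1, 0, 1]
tags = {0: "FLIP", 3: "MASK"}      # epigenetic marks on positions 0 and 3
phenotype = read_tags(genome, tags)
print(phenotype)  # -> [0, 0, 1, 0, 0, 1]
```

The genotype is never altered; only the expressed phenotype changes, which matches the principle that tags promote or prevent suitability without adding new genes.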
It is important to point out that there are other epigenetically heritable molecular units; such epigenetic factors are not abstracted in this thesis, because every non-genetic factor is reduced to a representation as tags. With tags-only regulation, the ReGen EA shows that epigenetic factors influence individuals' fitness by propagating permanent tags, but it does not exhibit the tag instability that many classical geneticists question about epigenetics. The influence of epigenetic mechanisms acts not only at the level of individuals but also on evolution, and it is caused by the environment. Marking periods represent the surrounding environment of individuals and abstract the Dutch Hunger Winter study case of the winter of 1944--1945 in the Netherlands. Based on this case, the ReGen EA simulates periods in which individuals' genetic codes are affected by external factors (the designed tags) during different iteration ranges that are separated rather than continuous. The impact is evident in the conducted experiments, where individuals' fitness improves to a certain extent at the start of each marking period. The impact becomes more influential when more than one period is configured; the use of marking periods reflects an acceleration towards the global optimum. On the contrary, when a single long period is set without interruptions, fitness variations are only seen at the beginning of that period, reaching good local optima, after which the fitness remains stable without variation. This confirms that repeated periods with pauses in between allow modifications to be established and made permanent, yielding more variety at the phenotypic level and the discovery of new search areas.
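The notion of separated marking periods can be sketched as iteration ranges inside the evolution loop; tags may only be added, modified, or removed while the current generation falls inside one of the configured ranges. The period boundaries and the loop skeleton below are assumptions for illustration, not the thesis's actual configuration.

```python
# Minimal sketch of separated marking periods: the marking process is only
# active during configured iteration ranges. Boundaries are hypothetical.

MARKING_PERIODS = [(100, 150), (300, 350), (500, 550)]

def in_marking_period(generation):
    """True if the given generation lies inside any marking period."""
    return any(start <= generation < end for start, end in MARKING_PERIODS)

def evolve(generations=600):
    for g in range(generations):
        # ... selection, crossover, low-rate mutation (1.0 / chromosome length)
        if in_marking_period(g):
            pass  # marking process: add / modify / remove epigenetic tags
        # ... reading process builds phenotypes, fitness is evaluated

print(in_marking_period(120), in_marking_period(200))  # -> True False
```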
The marking process through periods reveals a prominent difference between populations: the ReGen EA finds better solutions (although the optimum is not always reached) than standard EAs. There is better exploration and exploitation of the search space, observable in the trajectory of the depicted curves: individuals' fitness improves with a steep speedup in the first marking period, after which the curves stabilize slowly, with more moderate variations during the remaining periods as the end of the run approaches. For the binary experiments, pronounced jumps are seen, except for the Max Ones problem, which does not show any alteration from the epigenetic marks, mainly because it has almost reached the global maximum before the first marking period starts. For the real-defined Rosenbrock and Griewank functions, the marking periods generate more subtle changes and take more time to reach better local optima, moving slowly towards the global minimum. On the other hand, Rastrigin and Schwefel show abrupt changes from the first marking period onward, leading to better local optima. Despite these different behaviors, it is worth using markers: the epigenetic factors produce prominent breaks and behaviors in the evolution of the populations that are not observed in the classic EAs. The discrepancy between abrupt and gradual fitness changes may be related to function features such as the domain, modality, space dimensionality, constraints, and defined schemata, among others. This raises questions about the effectiveness of the designed tags: how should tags be redefined to cope with such problem restrictions? Does the gap between good and poor performances suggest other epigenetic elements that may have been missed? Can this approach be assessed on harder and broader problems? Future work should give closer answers.
Regarding the definition of the marking rate, it has taken a long time to test a variety of rates so that individuals' genomes are not over-marked and do not keep tags at arbitrary positions, mirroring biology, where marks form islands or groups as exemplified by the methylation mechanism ({\em CpG islands}). The hardest activity has been testing the entire framework through the marking and reading processes. On the positive side, the ReGen EA architecture is simple and the epigenetic elements do not add implementation complexity: the ReGen EA follows the generic idea of an individual with a genome and a well-defined epigenome structure that is shaped during the evolution process. The approach is a population-based bio-inspired technique that adapts as signals from the environment influence the epigenome; the phenotype is configured from interactions between the genotype and the environment, and tags are transmitted through generations to maintain a notion of memory between generations.
It is important to mention that the EA implementations are also statistically analyzed; the median analysis of the samples allows groups of data to be depicted graphically and explained visually, identifying the distribution of the samples' median fitnesses. For all functions except the Max Ones function, the median values outperform the samples from the standard EA versions. The analysis for statistically significant differences between EA groups also confirms that there are remarkable differences between the algorithms. This process has evidenced an improvement in the performance of the epigenetic implementations compared to the standard versions, as reported in the experimental results; the differences between the epigenetic and standard EA fitnesses vary significantly, leading to the conclusion that introducing epigenetic factors into classical EAs does accelerate the search process. These analyses also demonstrate that it is not necessary to increase or vary the classic mutation rate; all experiments use the same rate. Instead, the ReGen EA takes advantage of the recombination operator to promote inheritance, and this operator is a powerful element when evaluating results. In many strategies the mutation operator introduces diversity; this thesis does not deny that, but instead embraces the idea that mutation can be used at a low rate, closer to its occurrence in biology, and contemplates epigenetic assimilation (fixed changes) to influence the fitness.
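The median comparison between samples can be sketched with the standard library; the fitness values below are made-up illustrations, not the thesis's experimental data, which comes from the actual runs.

```python
import statistics

# Sketch of the median-based comparison of sample fitnesses between an
# epigenetic EA and a standard EA. Values are hypothetical illustrations.

regen_fitness = [0.91, 0.94, 0.93, 0.95, 0.92]     # hypothetical ReGen EA runs
standard_fitness = [0.85, 0.88, 0.86, 0.87, 0.84]  # hypothetical standard runs

m_regen = statistics.median(regen_fitness)
m_std = statistics.median(standard_fitness)
print(m_regen, m_std, m_regen > m_std)  # -> 0.93 0.86 True
```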
The epigenetic components presented in this thesis for the evolutionary algorithms framework describe a way to model epigenetic evolutionary algorithms. ReGen evolutionary algorithms involve populations of individuals with genetic and epigenetic codes. This research mainly focuses on the experiences that individuals may acquire during their life cycle and on how epigenetic mechanisms let them learn and adapt under different conditions; such experiences can be inherited over time, so populations evidence a kind of survival capacity. To validate the technique's applicability, only problems with real-defined and binary encoding schemes were selected. The designed operations are meant to cover exclusively problems with these kinds of encoding. Even so, problems with different encodings could be addressed by transforming their domain sets into binary representations; the performance and possible results of such implementations are unknown, since no experiments of that kind have been conducted, but the ReGen EA should allow them to be performed. This research allows concluding that epigenetics has many elements with which to continue improving this work and expanding it so that it can be used for any problem.
\section{Future Work}
Epigenetic mechanisms offer a variety of elements with which to extend this work and improve the adaptation of individuals in population-based methods, identifying novelties during the evolution process. From a biological point of view, some ideas are linked to the fact that there are mechanisms that keep epigenetic tags fixed and maintained over a long time. It seems to be a process that lets tags remain bound without changing, preserved in the same location under specific environmental conditions or at certain life stages of an individual, avoiding their degradation. This hypothesis supports the idea of memory consolidation and adds another dimension to describing a kind of intelligence at the molecular level.
It is intended to extend this model to cover a wider set of optimization problems, beyond binary and real encoding problems. The plan is to design a mechanism that creates dynamic tags during the evolution process: tags that use a generic encoding and do not depend on problem-specific encodings, so that problems encoded with number forms, chars, instructions, permutations, commands, expressions, and other schemes can be influenced by generically defined tags. Based on the foregoing, the designed operations need to be redefined, the reading process might be extended, any domain-specific problem must keep its encoding rather than being transformed into a binary representation (as is currently the case), and the marking process may also be expanded.
Currently, the application of marking actions by the ReGen EA is mutually exclusive: marking actions do not happen at the same time. This opens the possibility of another marking process enhancement in which the adding, removing, and modifying actions are applied independently according to their distributed probabilities. Preliminary experiments with this configuration reveal that applying the actions simultaneously, each with its own probability rate, can produce more suitable individuals with scores closer to the global optimum. However, further experiments are required to observe individuals' behavior on problems with encodings different from binary and real-defined ones, such as a permutation problem.
From the computational point of view, efforts are made to facilitate this model's replicability within the evolutionary algorithms community. More tests are planned, including the design of a complete benchmark to continue assessing the ReGen EA's performance and improving what it integrates today.
\chapter{Dedicatory}
\begin{quote}
\centering
\large{\textit{To Abba...}}
\end{quote}
\chapter{Glossary}\label{glossary}
\begin{small}
\begin{enumerate}[A]
\item[A]
\begin{verbatim}
\end{verbatim}
\item[B]
\begin{verbatim}
\end{verbatim}
\item[C]
\begin{verbatim}
\end{verbatim}
\item[D]
\begin{verbatim}
\end{verbatim}
\item[E]
\begin{verbatim}
\end{verbatim}
\item[F]
\begin{verbatim}
\end{verbatim}
\item[G]
\item[H]
\begin{verbatim}
\end{verbatim}
\item[I]
\begin{verbatim}
\end{verbatim}
\item[J]
\item[K]
\begin{verbatim}
\end{verbatim}
\item[L]
\begin{verbatim}
\end{verbatim}
\item[M]
\begin{verbatim}
\end{verbatim}
\item[N]
\begin{verbatim}
\end{verbatim}
\item[O]
\item[P]
\begin{verbatim}
\end{verbatim}
\item[Q]
\item[R]
\begin{verbatim}
\end{verbatim}
\item[S]
\begin{verbatim}
\end{verbatim}
\item[T]
\begin{verbatim}
\end{verbatim}
\item[U]
\begin{verbatim}
\end{verbatim}
\item[V]
\begin{verbatim}
\end{verbatim}
\item[W]
\item[X]
\item[Y]
\item[Z]
\end{enumerate}
\end{small}
\section{Introduction}\label{Sec:Introduction}
The evaluation of the symbol error probability (SEP) and bit error probability (BEP) of a multidimensional constellation over an additive white Gaussian noise (AWGN) channel is a classical problem in digital communications. This problem traces back to \cite{Gilbert52} in 1952, where upper and lower bounds on the SEP of multidimensional constellations based on the maximum likelihood (ML) detector were first presented.
When nonuniform signaling is used, i.e., when constellation points are transmitted using different probabilities, the optimal detection strategy is the maximum a posteriori (MAP) detector. The main drawback of MAP detection is that its implementation requires decision regions that vary as a function of the signal-to-noise ratio (SNR). Practical implementations therefore favor the (suboptimal) ML approach where the a priori probabilities are essentially ignored. For ML detection, the decision regions are the so-called Voronoi regions, which do not depend on the SNR, and thus, are simpler to implement.
Error probability analysis of constellations for the AWGN channel has been extensively investigated in the literature, see e.g., \cite{Wozencraft65_Book,Jacobs67,Foschini74,Smith75,Forney89a,Kschischang93}. In fact, this is a problem treated in many---if not all---digital communication textbooks. To the best of our knowledge, and to our surprise, the general problem of error probability analysis for multidimensional constellations with arbitrary input distributions and MAP detection has not been investigated in such a general setup.
As the SNR increases, the MAP decision regions tend towards the ML regions. Intuitively, one would then expect that both detectors are asymptotically equivalent, which would justify the use of ML detection. In this paper, we show that this is not the case. MAP and ML detection give different SEPs and BEPs asymptotically, where the difference lies in the factors before the dominant Q-function expression. More precisely, the ratio between the SEPs with MAP and ML detection approaches a constant, and the ratio between their BEPs approaches another constant. These constants are analytically calculated for arbitrary constellations, labelings, and input distributions. To the best of our knowledge, this has never been previously reported in the literature. Numerical results support our analytical results and clearly show the asymptotic suboptimality of ML detection.
All the results in this paper are presented in the context of detection of constellations with arbitrary number of dimensions. These multidimensional constellations, however, can be interpreted as finite-length codewords from a code, where the cardinality of the constellation and its dimensionality correspond to the codebook size and codeword length, respectively. In this context, the results of this paper can be used to study the performance of the MAP and ML \emph{sequence decoders} at asymptotically high SNRs.
This paper is organized as follows. In \secref{Sec:Preliminaries}, the model is introduced and in \secref{Sec:Bounds}, the error probability bounds are presented. The main results of this paper are given in \secref{Sec:Asymptotics}. Conclusions are drawn in \secref{Sec:Conclusions}. All proofs are deferred to Appendices.
\section{Preliminaries}\label{Sec:Preliminaries}
\subsection{System Model}\label{Sec:Preliminaries:Model}
The system model under consideration is shown in Fig.~\ref{model}.
We consider the discrete-time, real-valued, $N$-dimensional, AWGN channel
\begin{align}\label{AWGN}
\boldsymbol{Y} = \boldsymbol{X}+\boldsymbol{Z},
\end{align}
where the transmitted symbol $\boldsymbol{X}$ belongs to a discrete constellation $\mc{X}=\set{\boldsymbol{x}_1,\boldsymbol{x}_2,\ldots,\boldsymbol{x}_{M}}$ and $\boldsymbol{Z}$ is an $N$-dimensional vector, independent of $\boldsymbol{X}$, whose components are independent and identically distributed Gaussian random variables with zero mean and variance $\SZ^{2}$ per dimension. The conditional channel transition probability is
\begin{align}\label{pdf.channel}
\pdf(\boldsymbol{y}|\boldsymbol{x}) = \frac{1}{(2\pi\SZ^{2})^{N/2}}\tr{exp}{\left(-\frac{\|\boldsymbol{y}-\boldsymbol{x}\|^{2}}{2\SZ^{2}}\right)}.
\end{align}
We assume that the symbols are distinct and that each of them is transmitted with probability $p_i= \Pr\set{\boldsymbol{X}=\boldsymbol{x}_i}$, $0< p_i <1$. Neither the constellation points nor their probabilities depend on $\SZ$. We use the set $\mc{I}= \set{1,\ldots,M}$ to enumerate the constellation points. The average symbol energy is $\SX= \sum_{i\in\mcIX} p_i \|\boldsymbol{x}_i\|^2 < \infty$. The Euclidean distance between $\boldsymbol{x}_{i}$ and $\boldsymbol{x}_{j}$ is defined as $\delta_{ij}= \|\boldsymbol{x}_i-\boldsymbol{x}_j\|$ and the minimum Euclidean distance (MED) of the constellation as $d= \min_{i,j\in\mc{I}:i\neq j}\delta_{ij}$.
For the BEP analysis, assuming that $M$ is a power of two, we consider a binary source that produces length-$m$ binary labels. These labels are mapped to symbols in $\mc{X}$ using a \emph{binary labeling}, which is a one-to-one mapping between the $M=2^{m}$ different length-$m$ binary labels and the constellation points. The length-$m$ binary labels have an arbitrary input distribution, and thus, the same distribution is induced on the constellation points. The binary label of $\boldsymbol{x}_{i}$ is denoted by $\boldsymbol{c}_{i}$, where $i\in\mc{I}$. The Hamming distance between $\boldsymbol{c}_{i}$ and $\boldsymbol{c}_{j}$ is denoted by $\hd{i}{j}$.
At the receiver, we assume that (hard-decision) symbol-wise decisions are made. The estimated symbol is then mapped to a binary label to obtain an estimate on the transmitted bits.\footnote{This detector based on symbols has been shown in \cite{Ivanov13a} to be suboptimal in terms of BEP; however, differences are expected only at high BEP values.}
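The geometric quantities defined above (pairwise Euclidean distances $\delta_{ij}$, the MED $d$, and Hamming distances $\hd{i}{j}$ between binary labels) are straightforward to compute; the sketch below uses an arbitrary 4-PAM constellation with Gray labeling as illustration.

```python
import math
from itertools import combinations

# Compute pairwise Euclidean distances, the MED d, and Hamming distances
# between binary labels for a small example constellation (arbitrary choice).

points = [(-3.0,), (-1.0,), (1.0,), (3.0,)]   # 4-PAM, N = 1
labels = ["00", "01", "11", "10"]             # Gray labeling

def euclid(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def hamming(c1, c2):
    return sum(a != b for a, b in zip(c1, c2))

deltas = {(i, j): euclid(points[i], points[j])
          for i, j in combinations(range(len(points)), 2)}
med = min(deltas.values())                    # minimum Euclidean distance d
print(med, hamming(labels[0], labels[3]))     # -> 2.0 1
```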
\begin{figure}[tpb]
\newlength{\blocksep}\setlength{\blocksep}{6mm}
\begin{center}
\small{
\hspace{-2ex}
\begin{tikzpicture}[>=stealth,auto,tight background,
block/.style={rectangle,rounded corners=3pt,thick,draw,inner sep=1.5pt,minimum width=18mm,
minimum height=9mm,fill=figyellow,drop shadow,align=center,execute at begin node=\setlength{\baselineskip}{2.5ex}},
plain/.style={align=center,execute at begin node=\setlength{\baselineskip}{5.5ex}}
]
\node[plain] (C) {};
\coordinate (AWGN) at ($(C)+(-0.35cm,0)$);
\node[draw,thick,circle,fill=white,inner sep=1.5pt] (N1) at (AWGN) {+};
\node[plain,above=20pt of AWGN] (Z) {};
\node[block,fill=figgreen,left=1.2\blocksep of AWGN] (M) {Mapper\\ $\set{0,1}^{m} \rightarrow \mc{X}$};
\node[block,fill=figgreen,right=\blocksep of AWGN] (HD) {MAP or ML\\Detector};
\node[plain,left=0.9\blocksep of M] (SRC) {};
\node[block,fill=figgreen,right=\blocksep of HD] (D) {Demapper\\ $\mc{X} \rightarrow \set{0,1}^{m}$};
\node[plain,right=0.8\blocksep of D] (SINK) {};
\draw[thick,->] (SRC) -- node[plain,above] {$\boldsymbol{c}_{i}$} (M);
\draw[thick,->] (M) -- node[plain,above] {$\boldsymbol{x}_{i}$} (N1);
\draw[thick,->] (N1) -- node[plain,above] {$\boldsymbol{Y}$} (HD);
\draw[thick,->] (Z) -- node[plain,left] {$\boldsymbol{Z}$} (N1);
\draw[thick,->] (HD) -- node[plain,above] {$\hat{\boldsymbol{X}}$} (D);
\draw[thick,->] (D) -- node[plain,above] {$\hat{\boldsymbol{C}}$} (SINK);
\end{tikzpicture}
}
\end{center}
\caption{System model under consideration. For a given length-$m$ transmitted binary label $\boldsymbol{c}_{i}$, the received vector $\boldsymbol{Y}$ is processed by the MAP or ML detector. The estimated symbol $\hat{\boldsymbol{X}}$ is then converted to an estimated binary label $\hat{\boldsymbol{C}}$.}
\label{model}
\end{figure}
For any received symbol $\boldsymbol{y}$, the MAP decision rule is\footnote{Throughout the paper, the superscripts ${\tnr{``map''}}$ and ${\tnr{``ml''}}$ denote quantities associated with MAP and ML detection, respectively.}
\begin{align}
\label{map}
\hat{\boldsymbol{X}}^{\tnr{map}}(\boldsymbol{y}) & = \mathop{\mathrm{argmax}}_{j\in\mc{I}}\set{p_j\pdf(\boldsymbol{y}|\boldsymbol{x}_{j})}.
\end{align}
This decision rule generates MAP decision regions defined as
\begin{align}\label{map.j.region}
\mc{R}_{j}^{\tnr{map}}(\SZ) &= \set{\boldsymbol{y}\in\mathbb{R}^{N}: p_j\pdf(\boldsymbol{y}|\boldsymbol{x}_{j}) \geq p_i\pdf(\boldsymbol{y}|\boldsymbol{x}_{i}), \forall i\in\mc{I}}
\end{align}
for all $j\in\mc{I}$. Similarly, the ML detection rule is
\begin{align}
\label{ml}
\hat{\boldsymbol{X}}{}^{\tnr{ml}}(\boldsymbol{y}) &= \mathop{\mathrm{argmax}}_{j\in\mc{I}}\set{\pdf(\boldsymbol{y}|\boldsymbol{x}_{j})},
\end{align}
which results in the decision regions
\begin{align}\label{ml.j.region}
\mc{R}_{j}^{\tnr{ml}}(\SZ) &= \set{\boldsymbol{y}\in\mathbb{R}^{N}: \pdf(\boldsymbol{y}|\boldsymbol{x}_{j}) \geq \pdf(\boldsymbol{y}|\boldsymbol{x}_{i}), \forall i\in\mc{I}}.
\end{align}
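The decision rules \eqref{map} and \eqref{ml} follow directly from the Gaussian density \eqref{pdf.channel}; working with log-domain metrics avoids numerical underflow. The constellation, priors, and noise variance below are arbitrary illustrations.

```python
import math

# MAP and ML detection for the N-dimensional AWGN channel, using log-domain
# metrics equivalent to maximizing p_j f(y|x_j) and f(y|x_j), respectively.
# The constellation and priors below are arbitrary illustrations.

def sq_dist(y, x):
    return sum((a - b) ** 2 for a, b in zip(y, x))

def detect_map(y, points, priors, sigma2):
    # argmax_j { log p_j - ||y - x_j||^2 / (2 sigma^2) }
    return max(range(len(points)),
               key=lambda j: math.log(priors[j]) - sq_dist(y, points[j]) / (2 * sigma2))

def detect_ml(y, points):
    # argmin_j ||y - x_j||^2  (nearest-neighbor rule)
    return min(range(len(points)), key=lambda j: sq_dist(y, points[j]))

points = [(-1.0,), (1.0,)]
priors = [0.9, 0.1]
y = (0.05,)   # slightly closer to the second point
print(detect_ml(y, points), detect_map(y, points, priors, 0.5))  # -> 1 0
```

The example shows the two rules disagreeing: ML picks the nearest point, while MAP, biased by the larger prior, picks the other one at this noise variance.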
\begin{example}\label{Example.Regions}
Consider the $32$-ary constellation with the nonuniform input distribution in \cite[Fig.~2, Table~I]{Valenti12}\footnote{Using three shaping bits, which results in radii $1,2.53, 4.30$.}. The constellation is shown in \figref{Valenti_32APSK}, where the area of the constellation points is proportional to the corresponding probabilities. In \figref{Valenti_32APSK}, the MAP and ML decision regions in \eqref{map.j.region} and \eqref{ml.j.region} are shown for three values of the noise variance. These results show how the MAP regions converge to the ML regions as the noise variance decreases.
\end{example}
\begin{figure}[tbp]
\centering
\begin{tikzpicture
\begin{axis}[axis lines=none,
xminorgrids=true,
width=0.34\textwidth,
height=0.34\textwidth,
xmin=-1.8,xmax=1.8,
ymin=-1.8,ymax=1.8,
xlabel style={yshift=0.1cm},
xtick={-1.5,-0.5,0.5,1.5},
every axis/.append style={font=\footnotesize},
legend style={legend pos=south west,font=\scriptsize,legend cell align=left},
title={(a) $\SZ^2=0.1$}
]
\input{data/Valenti_32APSK_constellation.tikz}
\input{data/Valenti_32APSK_constellation_Voronoi.tikz}
\input{data/Valenti_32APSK_constellation_MAP_10_dB.tikz}
\end{axis}
\end{tikzpicture}
\hspace{5pt}
\begin{tikzpicture
\begin{axis}[axis lines=none,
xminorgrids=true,
width=0.34\textwidth,
height=0.34\textwidth,
xmin=-1.8,xmax=1.8,
ymin=-1.8,ymax=1.8,
xlabel style={yshift=0.1cm},
xtick={-1.5,-0.5,0.5,1.5},
every axis/.append style={font=\footnotesize},
legend style={legend pos=south west,font=\scriptsize,legend cell align=left},
title={(b) $\SZ^2=0.05$}
]
\input{data/Valenti_32APSK_constellation.tikz}
\input{data/Valenti_32APSK_constellation_Voronoi.tikz}
\input{data/Valenti_32APSK_constellation_MAP_13_dB.tikz}
\end{axis}
\end{tikzpicture}
\hspace{5pt}
\begin{tikzpicture
\begin{axis}[axis lines=none,
xminorgrids=true,
width=0.34\textwidth,
height=0.34\textwidth,
xmin=-1.8,xmax=1.8,
ymin=-1.8,ymax=1.8,
xlabel style={yshift=0.1cm},
xtick={-1.5,-0.5,0.5,1.5},
every axis/.append style={font=\footnotesize},
legend style={legend pos=south west,font=\scriptsize,legend cell align=left},
title={(c) $\SZ^2=0.025$}
]
\input{data/Valenti_32APSK_constellation.tikz}
\input{data/Valenti_32APSK_constellation_Voronoi.tikz}
\input{data/Valenti_32APSK_constellation_MAP_16_dB.tikz}
\end{axis}
\end{tikzpicture}
\caption{ML (dashed blue) and MAP (solid red) decision regions for the constellation in Example~\ref{Example.Regions} and three values of the noise variance. The area of the constellation points is proportional to their probabilities. As the noise variance decreases, the MAP regions converge to the ML regions.}
\label{Valenti_32APSK}
\end{figure}
\subsection{Error Probability}\label{Sec:Preliminaries:EP}
Throughout this paper, the SEP and BEP are denoted by $P_{\tnr{s}}(\SZ)$ and $P_{\tnr{b}}(\SZ)$, respectively. Furthermore, we are interested in the error probability (SEP and BEP) of both the MAP and ML detectors. To study these four error probabilities, we define the generic error probability function
\begin{align}\label{P}
P(\SZ) & = \sum_{i\in\mcIX} p_i \sum_{\substack{j\in\mcIX\\j\neq i}} h_{ij} \tp{i}{j}(\SZ),
\end{align}
where the transition probability $\tp{i}{j}(\SZ)$ is given by
\begin{align}\label{Fij}
\tp{i}{j}(\SZ) & = \Pr\Set{\hat{\boldsymbol{X}}(\boldsymbol{Y})= \boldsymbol{x}_{j}|\boldsymbol{X}=\boldsymbol{x}_{i}}\\
\label{Fij.2}
& = \Pr\Set{\boldsymbol{Y} \in \mc{R}_{j}(\SZ)|\boldsymbol{X}=\boldsymbol{x}_{i}}.
\end{align}
The expressions \eqref{P}--\eqref{Fij.2} represent both the MAP and ML detectors, as well as both the SEP and BEP, as explained in the following.
The error probability with MAP detection is obtained by using $\mc{R}_{j}(\SZ)=\mc{R}_{j}^{\tnr{map}}(\SZ)$ in \eqref{Fij.2}, where $\mc{R}_{j}^{\tnr{map}}(\SZ)$ is given by \eqref{map.j.region}. Similarly, the use of $\mc{R}_{j}(\SZ)=\mc{R}_{j}^{\tnr{ml}}(\SZ)$ in \eqref{Fij.2}, where $\mc{R}_{j}^{\tnr{ml}}(\SZ)$ is given by \eqref{ml.j.region}, leads to the error probability with ML detection.
To study the SEP, $h_{ij}$ in \eqref{P} should be set to one, which gives the well-known expression
\begin{align}\label{sep}
P_{\tnr{s}}(\SZ) = \sum_{i\in\mcIX} p_i \sum_{\substack{j\in\mcIX\\j\neq i}} \tp{i}{j}(\SZ).
\end{align}
Similarly, the BEP expression \cite[Eq.~(1)]{Lassing03}, \cite[Eq.~(1)]{Agrell04}
\begin{align}\label{bep}
\BEP(\SZ) & = \sum_{i\in\mcIX} p_i \sum_{\substack{j\in\mcIX\\j\neq i}} \frac{\hd{i}{j}}{m} \BEPijMAP
\end{align}
is obtained by using $h_{ij}=\hd{i}{j}/m$ in \eqref{P}.
The four cases discussed above are summarized in the first three columns of Table~\ref{Table:EP}.
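The error probabilities in \eqref{sep}--\eqref{bep} can also be estimated by Monte Carlo simulation of the channel \eqref{AWGN}; the sketch below estimates the SEP of binary antipodal signaling with ML detection and compares it against the exact value $\QF(1/\SZ)$. Parameters are arbitrary illustrations.

```python
import math
import random

# Monte Carlo estimate of the SEP for binary antipodal signaling with ML
# detection, compared against the exact value Q(1/sigma).

def qfunc(x):
    # Gaussian Q-function via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def sep_mc(sigma, trials=200_000, seed=1):
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        x = rng.choice([-1.0, 1.0])        # uniform priors
        y = x + rng.gauss(0.0, sigma)      # AWGN, variance sigma^2
        xhat = 1.0 if y >= 0 else -1.0     # ML decision (nearest point)
        errors += (xhat != x)
    return errors / trials

sigma = 0.5
print(abs(sep_mc(sigma) - qfunc(1 / sigma)) < 5e-3)  # close to Q(2)
```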
\begin{table}[tpb]
\renewcommand\arraystretch{1.45}
\caption{Values of $\hat{\boldsymbol{X}}$ and $h_{ij}$ that, when used in \eqref{P}--\eqref{Fij.2}, give SEP and BEP expressions for both the MAP and ML detectors. The last column shows the values of $w_{ij}$ for the asymptotic expressions in \secref{Sec:Asymptotics}.}
\centering
\begin{tabular}{ l l c c}
\hline
\hline
$P(\SZ)$ & $\hat{\boldsymbol{X}}$ & $h_{ij}$ & $w_{ij}$ \\
\hline
\hline
$P_{\tnr{s}}^{\tnr{map}}(\SZ)$ & $\hat{\boldsymbol{X}}^{\tnr{map}}$ & $1$ & $\sqrt{\dfrac{p_j}{p_i}}$\\
$P_{\tnr{s}}^{\tnr{ml}}(\SZ)$ & $\hat{\boldsymbol{X}}{}^{\tnr{ml}}$ & $1$ & $1$\\
$P_{\tnr{b}}^{\tnr{map}}(\SZ)$ & $\hat{\boldsymbol{X}}^{\tnr{map}}$ & $\dfrac{\hd{i}{j}}{m}$ & $\sqrt{\dfrac{p_j}{p_i}}$\\
$P_{\tnr{b}}^{\tnr{ml}}(\SZ)$ & $\hat{\boldsymbol{X}}{}^{\tnr{ml}}$ & $\dfrac{\hd{i}{j}}{m}$ & $1$\\
\hline
\end{tabular}
\label{Table:EP}
\end{table}
\section{Error Probability Bounds}\label{Sec:Bounds}
Error probability calculations for arbitrary multidimensional constellations and finite SNR are difficult because the decision regions defining the transition probabilities $\tp{i}{j}(\SZ)$ in \eqref{Fij.2} are in general irregular. Therefore, to analytically study the error probability, bounding techniques are usually the preferred alternative. In this section, we present two lemmas that give upper and lower bounds on the transition probability $\tp{i}{j}(\SZ)$. These bounds are expressed in terms of the Gaussian Q-function $\QF(x) = (1/\sqrt{2\pi}) \int_{x}^{\infty} \explow{-\xi^2/2}\tr{d}\xi$ and will then be used to upper- and lower-bound the SEP and BEP in \secref{Sec:Asymptotics}.
\begin{lemma}\label{Lemma.UB}
For any $i,j\in\mc{I}$, $j\neq i$,
\begin{align}\label{Lemma.UB.eq}
\tp{i}{j}(\SZ) \leq \QF\biggl(\frac{\Delta_{ij}(\SZ)}{\SZ}\biggr),
\end{align}
where
\begin{align}\label{delta.ij}
\Delta_{ij}(\SZ)=
\begin{cases}
\frac{\delta_{ij}}{2} \left(1+\frac{2\SZ^{2}}{\delta_{ij}^{2}}\log\frac{p_i}{p_j}\right), & \tnr{for MAP},\\
\frac{\delta_{ij}}{2}, & \tnr{for ML}.
\end{cases}
\end{align}
\end{lemma}
\begin{IEEEproof}
See Appendix~\ref{Appendix.Lemma.UB}.
\end{IEEEproof}
\begin{lemma}\label{Lemma.LB}
For any $i,j\in\mc{I}$, $j\neq i$ and any $\SZ<\tau_{ij}$,
\begin{align}\label{Lemma.LB.eq}
\tp{i}{j}(\SZ) \geq
\begin{cases}
0, & \textnormal{if $\delta_{ij}>d$},\\
\left(\QF\Bigl(\frac{\Delta_{ij}(\SZ)}{\SZ}\Bigr)-\QF\Bigl(\frac{d}{2\SZ}+\frac{r(\SZ)}{\sqrt{N}\SZ}\Bigr)\right)\cdot & \\
\qquad\qquad\quad \left(1-2\QF\left(\frac{r(\SZ)}{\sqrt{N}\SZ}\right)\right)^{N-1}, &\textnormal{if $\delta_{ij}=d$},\\
\end{cases}
\end{align}
where $\Delta_{ij}(\SZ)$ is given by \eqref{delta.ij},
\begin{align}\label{r}
r(\SZ) = \frac{d^{2}-4\SZ^{2}\log\max_{a,b\in\mc{I}}\left\{p_{a}/p_{b}\right\}}{2(1+\sqrt{3})d},
\end{align}
and
\begin{align}\label{sigma.zero}
\tau_{ij} = {d}\left(2(1+\sqrt{3})\sqrt{N}\left|\log{\left(\frac{p_i}{p_j}\right)}\right|+4\log\max_{a,b\in\mc{I}}\left\{\frac{p_{a}}{p_{b}}\right\}\right)^{-\frac{1}{2}}.
\end{align}
\end{lemma}
\begin{IEEEproof}
See Appendix~\ref{Appendix.Lemma.LB}.
\end{IEEEproof}
The results in Lemmas~\ref{Lemma.UB} and \ref{Lemma.LB} can be combined with \eqref{P} to obtain upper and lower bounds on the error probability:
\begin{align}
P(\SZ) & \leq \sum_{i\in\mcIX} p_i \sum_{\substack{j\in\mcIX\\j\neq i}} h_{ij} \QF\biggl(\frac{\Delta_{ij}(\SZ)}{\SZ}\biggr) \label{P.UBound}
\end{align}
and
\begin{align}
\nonumber
\hspace{-1ex}
P(\SZ) &\ge \sum_{i\in\mcIX} p_i \sum_{\substack{j\in\mcIX\\ \delta_{ij}=\MED}} h_{ij} \left(\QF\Bigl(\frac{\Delta_{ij}(\SZ)}{\SZ}\Bigr)-\QF\Bigl(\frac{d}{2\SZ}+\frac{r(\SZ)}{\sqrt{N}\SZ}\Bigr)\right)\cdot\\
&\quad\quad\left(1-2\QF\left(\frac{r(\SZ)}{\sqrt{N}\SZ}\right)\right)^{N-1}, \quad \SZ<\min_{i,j\in\mc{I} }\tau_{ij}, \label{P.LBound}
\end{align}
where $\Delta_{ij}(\SZ)$, $r(\SZ)$, and $\tau_{ij}$ are given by \eqref{delta.ij}, \eqref{r}, and \eqref{sigma.zero}, respectively. In the next section, it will be proved that both these bounds are tight for asymptotically high SNR.
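The upper bound \eqref{P.UBound} with the MAP offset $\Delta_{ij}(\SZ)$ from \eqref{delta.ij} is straightforward to evaluate numerically; the sketch below computes it for the SEP ($h_{ij}=1$) on an arbitrary nonuniform binary constellation.

```python
import math
from itertools import product

# Evaluate the union-type upper bound (P.UBound) on the SEP (h_ij = 1) for
# MAP detection. Constellation, priors, and sigma are arbitrary illustrations.

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def sep_upper_bound_map(points, priors, sigma):
    total = 0.0
    for i, j in product(range(len(points)), repeat=2):
        if i == j:
            continue
        dij = math.dist(points[i], points[j])
        # MAP offset: Delta_ij = (d_ij/2)(1 + (2 sigma^2 / d_ij^2) log(p_i/p_j))
        delta = (dij / 2) * (1 + (2 * sigma**2 / dij**2)
                             * math.log(priors[i] / priors[j]))
        total += priors[i] * qfunc(delta / sigma)
    return total

points = [(-1.0,), (1.0,)]
priors = [0.8, 0.2]
print(sep_upper_bound_map(points, priors, 0.4))
```

For uniform priors the offset reduces to $\delta_{ij}/2$ and the bound collapses to the familiar ML union bound, which provides a simple sanity check.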
\section{High-SNR Asymptotics of the SEP and BEP}\label{Sec:Asymptotics}
\subsection{Main Results}\label{Sec:MainResults}
The following theorem gives an asymptotic expression for the error probability in \eqref{P}, i.e., it describes the asymptotic behavior of the MAP and ML detectors, for both SEP and BEP. The results are given in terms of the input probabilities $p_i$, the Euclidean distances between constellation points $\delta_{ij}$, the MED of the constellation $d$, and the Hamming distances between the binary labels of the constellation points $\hd{i}{j}$. {The results in this section will be discussed later in \secref{Sec:Discussion}. Numerical results will be presented in \secref{Sec:Examples}.}
\begin{theorem}\label{EP.Asym.Theo}
For any input distribution,
\begin{align}\label{EP.Asym}
\limsnszero{
\frac{P(\SZ)}{\QF\bigl(\frac{d}{2\SZ}\bigr)}
} = B,
\end{align}
where
\begin{align}\label{EP.B}
B = \sum_{i\in\mcIX} p_i \sum_{\substack{j\in\mcIX\\ \delta_{ij}=\MED}} h_{ij}w_{ij}
\end{align}
and where $h_{ij}$ and $w_{ij}$ are constants given in Table~\ref{Table:EP} and $\QF(\cdot)$ is the Gaussian Q-function.
\end{theorem}
\begin{IEEEproof}
See Appendix~\ref{Appendix.EP.Asym.Theo}.
\end{IEEEproof}
{The following corollary shows that, at high SNR, the ratio between the error probability with MAP and ML detection approaches a constant $R\leq 1$. This constant shows the asymptotic suboptimality of ML detection: when $R<1$ an asymptotic penalty is expected, but when $R=1$ both detectors are asymptotically equivalent.}
\begin{corollary}\label{EP.MAP_vs_ML.Theo}
For any input distribution and for either SEP or BEP,
\begin{align}\label{EP.MAP_vs_ML}
\limsnszero{
\frac{P^{\tnr{map}}(\SZ)}{P^{\tnr{ml}}(\SZ)}
}
=
R,
\end{align}
where
\begin{align}\label{R}
R
=
\frac{B^{\tnr{map}}}{B^{\tnr{ml}}}=
\dfrac{\sum_{i\in\mcIX} p_i \sum_{\substack{j\in\mcIX\\ \delta_{ij}=\MED}} h_{ij}\sqrt{\dfrac{p_j}{p_i}}}{\sum_{i\in\mcIX} p_i \sum_{\substack{j\in\mcIX\\ \delta_{ij}=\MED}} h_{ij}},
\end{align}
where $h_{ij}$ is a constant given in Table~\ref{Table:EP}. Furthermore, $R\leq 1$ with equality if and only if $p_i=p_j,\, \forall i,j:\delta_{ij}=d$.
\end{corollary}
\begin{IEEEproof}
See Appendix~\ref{Appendix.EP.MAP_vs_ML.Theo}.
\end{IEEEproof}
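The ratio $R$ in \eqref{R} is straightforward to evaluate numerically. The sketch below (a hedged illustration for the SEP case, taking $h_{ij}=1$ since Table~\ref{Table:EP} is not reproduced here) computes $R$ for a one-dimensional constellation and confirms that $R=1$ for a uniform distribution and $R<1$ otherwise.

```python
import math

def asymptotic_ratio(points, probs, tol=1e-9):
    """R = B_map / B_ml in (R) for the SEP case (h_ij = 1 assumed)."""
    M = len(points)
    dist = lambda i, j: abs(points[i] - points[j])
    d = min(dist(i, j) for i in range(M) for j in range(M) if i != j)
    b_map = b_ml = 0.0
    for i in range(M):
        for j in range(M):
            if i != j and abs(dist(i, j) - d) < tol:  # MED pairs only
                b_map += probs[i] * math.sqrt(probs[j] / probs[i])
                b_ml += probs[i]
    return b_map / b_ml

# Uniform 4-PAM: equal probabilities on all MED pairs, so R = 1
assert abs(asymptotic_ratio([-3, -1, 1, 3], [0.25] * 4) - 1) < 1e-12
# Nonuniform 4-PAM: R < 1, i.e., ML detection is asymptotically suboptimal
assert asymptotic_ratio([-3, -1, 1, 3], [0.4, 0.1, 0.1, 0.4]) < 1
```

For the nonuniform example, $B^{\tnr{ml}}=1.2$ and $B^{\tnr{map}}=1.0$, giving $R\approx 0.833$.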
\subsection{Discussion}\label{Sec:Discussion}
\theoref{EP.Asym.Theo} generalizes \cite[Ths.~3 and 7]{Alvarado12b} to arbitrary multidimensional constellations.\footnote{All the results in \cite{Alvarado12b} are valid for one-dimensional constellations only.} Somewhat surprisingly, \theoref{EP.Asym.Theo} in fact shows that \cite[Ths.~3 and 7]{Alvarado12b} apply verbatim to multidimensional constellations. The result in \theoref{EP.Asym.Theo} for the particular case of SEP with ML detection also coincides with the approximation presented in \cite[Eqs.~(1)--(2)]{Kschischang93}. \theoref{EP.Asym.Theo} can therefore be seen as a formal proof of the asymptotic approximation in \cite[Eqs.~(1)--(2)]{Kschischang93} as well as its generalization to MAP detection for SEP and to both MAP and ML detection for BEP.
Recognizing $B \QF(d/(2\SZ))$ as the dominant term in the union bound, \theoref{EP.Asym.Theo} in fact proves that the union bound is tight for both SEP and BEP with arbitrary multidimensional constellations, arbitrary labelings and input distributions, and both MAP and ML detection, which, to the best of our knowledge, has not been previously reported in the literature. The special case of SEP with uniform input distribution and ML detection was elegantly proved in \cite[Eqs.~(7.10)--(7.15)]{Zetterberg77} using an asymptotically tight lower bound.\footnote{An earlier attempt to lower-bound the SEP in the same scenario was presented in \cite[Th.~3]{Gilbert52}, but that bound was incorrect, which can be shown by considering a constellation consisting of three points on a line.}
Note that the lower bound in \eqref{P.LBound} is identical for MAP and ML detection when $R=1$, since $\Delta_{ij}(\SZ)=d / 2$ in both cases. The upper bound in \eqref{P.UBound}, however, is always different for MAP and ML detection as long as the symbols are not equally likely, even when $R=1$.
\subsection{Examples}\label{Sec:Examples}
\begin{example}\label{example.1D}
{
Consider the one-dimensional \emph{asymmetric} constellation with $M=3$, $(x_1,x_2,x_3) = (-1,0,+2)$, and $(p_1,p_2,p_3) = (0.62,0.07,0.31)$. This probability distribution is chosen so that the average of the constellation is zero. \figref{Steiner94}~(a) shows the exact SEP with MAP and ML detection, which can be analytically calculated, together with the upper bounds in \eqref{P.UBound} (green), the lower bounds in \eqref{P.LBound} (cyan), and the asymptotic approximations $P_{\tnr{s}}(\SZ)\approx B_{\tnr{s}}\QF(d/(2\SZ))$ from \eqref{EP.Asym} (blue) with
$B^{\tnr{map}}_{\tnr{s}}=2\sqrt{p_1 p_2}=0.4167$ and
$B^{\tnr{ml}}_{\tnr{s}}=p_1+p_2=0.6900$, i.e.,
$R_{\tnr{s}}=0.6038$, given by \cororef{EP.MAP_vs_ML.Theo}. The solid and dotted curves represent MAP and ML detection, respectively. The lower bounds are only defined when $\SX/\SZ^{2}>15.8$ dB, due to the restrictions on $\SZ$ in \eqref{P.LBound}. In this example the asymptotic approximation for the ML detector is below the exact SEP, while for the MAP detector the asymptotic approximation is above the exact SEP when $\SX/\SZ^{2}>2.6$ dB. \figref{Steiner94}~(a) also shows that there is a difference in the SEP between the MAP and ML detector and that the upper and lower bounds are tight and converge to both the exact SEP and the asymptotic approximation for high SNR.
}
{Now consider instead the one-dimensional \emph{symmetric}} constellation with $M=3$, $(x_1,x_2,x_3) = (-1,0,+1)$, and $(p_1,p_2,p_3) = (p_1,1-2p_1,p_1)$, where $0<p_{1}<1/2$. If $p_{1}=1/3$, an equally likely and equally spaced $3$-ary constellation is obtained. If $p_1 = 1/K$, this constellation is equivalent to a constellation with $K$ equally likely points, of which $K-2$ are located at the origin; such a constellation was used in \cite{Steiner94} to disprove the so-called strong simplex conjecture.
In \figref{Steiner94}~(b), the exact SEP with MAP and ML detection, which can be analytically calculated, is shown for different values of $p_{1}$.
There is a clear performance difference between the two detectors when $p_{1}\neq 1/3$. According to \cororef{EP.MAP_vs_ML.Theo}, $B^{\tnr{map}}_{\tnr{s}}=4\sqrt{p_1(1-2p_1)}$ and $B^{\tnr{ml}}_{\tnr{s}}=2(1-p_1)$, i.e., $R_{\tnr{s}}=2\sqrt{p_1(1-2p_1)}/(1-p_1)$. \figref{Steiner94}~(c) shows the ratio of the SEP curves and how these converge to $R_{\tnr{s}}$ as $\SZ\rightarrow0$ (indicated by the horizontal dashed lines).
For $p_{1}=0.167$ and $p_{1}=0.444$, the asymptote is the same ($R_{\tnr{s}}=0.8$); however, their SEP performance is quite different (see \figref{Steiner94}~(b)). This can be explained using the results in \figref{Steiner94}~(d), where the solid line shows $R_{\tnr{s}}$.
The two markers when $R_{\tnr{s}}=0.8$ correspond to $p_{1}=0.167$ and $p_{1}=0.444$, which explains the results in \figref{Steiner94}~(c) for those values of $p_{1}$.
\end{example}
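The closed-form ratio $R_{\tnr{s}}=2\sqrt{p_1(1-2p_1)}/(1-p_1)$ from the example can be verified numerically; in particular, the two markers at $R_{\tnr{s}}=0.8$ apparently correspond to the exact values $p_1=1/6\approx 0.167$ and $p_1=4/9\approx 0.444$ (an inference from the quoted decimals). A short Python check:

```python
import math

def R_s(p1):
    # Asymptotic MAP/ML SEP ratio for the symmetric 3-point
    # constellation: R_s = 2*sqrt(p1*(1 - 2*p1)) / (1 - p1)
    return 2 * math.sqrt(p1 * (1 - 2 * p1)) / (1 - p1)

# Both markers in panel (d) sit at R_s = 0.8
assert abs(R_s(1 / 6) - 0.8) < 1e-12
assert abs(R_s(4 / 9) - 0.8) < 1e-12
# Equally likely points give R_s = 1; p1 = 0.25 gives the quoted 0.9428
assert abs(R_s(1 / 3) - 1.0) < 1e-12
assert abs(R_s(0.25) - 0.9428) < 1e-4
```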
\begin{figure*
\centering
\begin{tikzpicture}
\pgfplotscreateplotcyclelist{color list}{black,black,red,green,blue,cyan,red,green,blue,cyan}
\begin{semilogyaxis}[cycle list name=color list,
legend columns=1,
xminorgrids=true,
width=0.43\textwidth,
height=0.37\textwidth,
grid=both,
xmin=1,xmax=19,
ymin=5e-4,ymax=0.5,
xlabel={$\SX/\SZ^{2}$~dB},
xlabel style={yshift=0.1cm},
ylabel={$P_{\tnr{s}}(\SZ)$},
ylabel style={yshift=-0.2cm},
xtick={1,3,...,23},
every axis/.append style={font=\footnotesize},
legend style={legend pos=south west,font=\scriptsize,legend cell align=left},
grid style={dashed},
title={(a)}
]
\addplot[very thick] coordinates {(1e-5,1e-5) (1e-6,1e-6)};\addlegendentry{MAP};
\addplot[very thick,dotted] coordinates {(1e-5,1e-5) (1e-6,1e-6)};\addlegendentry{ML};
\addplot+[thick] file {data/M3_SER_vs_SNR_MAP_exact.dat};
\addplot+[thick] file {data/M3_SER_vs_SNR_MAP_UB.dat};
\addplot+[thick] file {data/M3_SER_vs_SNR_MAP_Asymp.dat};
\addplot+[thick] file {data/M3_SER_vs_SNR_MAP_LB.dat};
\addplot+[thick,dotted] file {data/M3_SER_vs_SNR_ML_exact.dat};
\addplot+[thick,dotted] file {data/M3_SER_vs_SNR_ML_UB.dat};
\addplot+[thick,dotted] file {data/M3_SER_vs_SNR_ML_Asymp.dat};
\addplot+[thick,dotted] file {data/M3_SER_vs_SNR_ML_LB.dat};
\node[coordinate] (A) at (axis cs:3,3.25e-1) {};
\node[coordinate] (B) at (axis cs:3,1.65e-1) {};
\node[anchor=west,align=left,inner sep=0.2ex] (UB) at (axis cs:7,3.5e-1) {Upper bound};
\draw[->,thick] (UB.west) -- (A);
\draw[->,thick] (UB.west) -- (B);
\node[coordinate] (C) at (axis cs:8,7.5e-2) {};
\node[coordinate] (D) at (axis cs:8,1.22e-1) {};
\node[anchor=west,align=left,inner sep=0.2ex] (AS) at (axis cs:11,1.5e-1) {Asymptotic};
\draw[->,thick] (AS.west) -- (C);
\draw[->,thick] (AS.west) -- (D);
\node[coordinate] (E) at (axis cs:6,1.8e-1) {};
\node[coordinate] (F) at (axis cs:6,8e-2) {};
\node[anchor=east,align=left,inner sep=0.2ex] (EX) at (axis cs:4,2e-2) {Exact};
\draw[->,thick] (EX.east) -- (E);
\draw[->,thick] (EX.east) -- (F);
\node[coordinate] (G) at (axis cs:16,5.3e-3) {};
\node[coordinate] (H) at (axis cs:15.8,2.3e-3) {};
\node[anchor=east,align=left,inner sep=0.2ex] (LB) at (axis cs:13,2.5e-3) {Lower bound};
\draw[->,thick] (LB.east) -- (G);
\draw[->,thick] (LB.east) -- (H);
\end{semilogyaxis}
\end{tikzpicture}
\begin{tikzpicture}
\pgfplotscreateplotcyclelist{color list}{black,black,green,red,blue,cyan,green,red,blue,cyan}
\begin{semilogyaxis}[cycle list name=color list,
legend columns=1,
xminorgrids=true,
width=0.43\textwidth,
height=0.37\textwidth,
grid=both,
xmin=8,xmax=18,
ymin=1e-5,ymax=0.1,
xlabel={$\SX/\SZ^{2}$~dB},
xlabel style={yshift=0.1cm},
ylabel={$P_{\tnr{s}}(\SZ)$},
ylabel style={yshift=-0.2cm},
xtick={8,10,...,18},
every axis/.append style={font=\footnotesize},
legend style={legend pos=north east,font=\scriptsize,legend cell align=left},
grid style={dashed},
title={(b)}
]
\addplot[very thick] coordinates {(1e-5,1e-5) (1e-6,1e-6)};\addlegendentry{MAP};
\addplot[very thick,dotted] coordinates {(1e-5,1e-5) (1e-6,1e-6)};\addlegendentry{ML};
\foreach \i in {0.100,0.167,0.250,0.444}
\addplot+[thick] file {data/Steiner_SER_vs_SNR_MAP_p_\i.dat};
\foreach \i in {0.100,0.167,0.250,0.444}
\addplot+[thick,dotted] file {data/Steiner_SER_vs_SNR_ML_p_\i.dat};
\node[coordinate] (A) at (axis cs:10.6,1e-4) {};
\node[coordinate] (B) at (axis cs:11.8,5e-4) {};
\node[coordinate] (C) at (axis cs:14.8,7e-5) {};
\node[coordinate] (D) at (axis cs:14.8,1.8e-3) {};
\node[rotate=-65,anchor=north] at (axis cs:10.2,1e-4) {$p_{1}=0.1$};
\node[rotate=-65,anchor=north] at (axis cs:11.4,5e-4) {$p_{1}=0.167$};
\node[rotate=-65,anchor=south] at (axis cs:15.0,7e-5) {$p_{1}=0.25$};
\node[rotate=-55,anchor=south] at (axis cs:15.0,2e-3) {$p_{1}=0.444$};
\end{semilogyaxis}
\draw[thick,green,rotate=90] (A) ellipse (0.06 and .2);
\draw[thick,red,rotate=90] (B) ellipse (0.06 and .2);
\draw[thick,blue,rotate=90] (C) ellipse (0.06 and .2);
\draw[thick,cyan,rotate=90] (D) ellipse (0.06 and .2);
\end{tikzpicture}\\
\begin{tikzpicture}
\pgfplotscreateplotcyclelist{color list}{black,green,red,blue,cyan,green,red,blue,cyan}
\begin{axis}[cycle list name=color list,
legend columns=2,
xminorgrids=true,
width=0.43\textwidth,
height=0.37\textwidth,
grid=both,
xmin=8,xmax=23,
ymin=0.5,ymax=1,
xlabel={$\SX/\SZ^{2}$~dB},
xlabel style={yshift=0.1cm},
ylabel={${\frac{P^{\tnr{map}}_{\tnr{s}}(\SZ)}{P^{\tnr{ml}}_{\tnr{s}}(\SZ)}}$},
ylabel style={yshift=-0.2cm},
xtick={0,2,...,23},
ytick={0.5,0.6,0.7,...,1},
every axis/.append style={font=\footnotesize},
legend style={legend pos=south east,font=\scriptsize,legend cell align=left},
grid style={dashed},
title={(c)}
]
\addplot[very thick,dashed,black] coordinates {(1e-5,1e-5) (1e-6,1e-6)};\addlegendentry{$R_{\tnr{s}}$ in \eqref{R}};
\foreach \i in {0.100,0.167,0.250,0.444}
\addplot+[thick] file {data/Steiner_Ratio_vs_SNR_p_\i.dat};
\foreach \i in {0.100,0.167,0.250,0.444}
\addplot+[very thick,dashed,black] file {data/Steiner_Asymptotic_Ratio_vs_SNR_p_\i.dat};
\node[rotate=0,anchor=north west,inner sep=2pt] at (axis cs:8.8,0.94) {$p_{1}=0.25$};
\node[rotate=0,anchor=north west,inner sep=2pt] at (axis cs:8.8,0.61) {$p_{1}=0.1$};
\node[rotate=0,anchor=north west,inner sep=2pt] at (axis cs:9.8,0.71) {$p_{1}=0.444$};
\node[rotate=0,anchor=south west,inner sep=2pt] at (axis cs:9.8,0.82) {$p_{1}=0.167$};
\draw[->,thick] (axis cs:9.8,0.82) -- (axis cs:8.8,0.775);
\draw[->,thick] (axis cs:9.8,0.71) -- (axis cs:8.8,0.74);
\end{axis}
\end{tikzpicture}
\begin{tikzpicture}
\pgfplotscreateplotcyclelist{color list}{red,red,red,red,red,red,red,red}
\begin{axis}[cycle list name=color list,
legend columns=2,
xminorgrids=true,
width=0.43\textwidth,
height=0.37\textwidth,
grid=both,
xmin=0,xmax=0.5,
ymin=0,ymax=1,
xlabel={$p_{1}$},
xlabel style={yshift=0.1cm},
ylabel={$R_{\tnr{s}}=\limsnszero{\frac{P^{\tnr{map}}_{\tnr{s}}(\SZ)}{P^{\tnr{ml}}_{\tnr{s}}(\SZ)}}$},
ylabel style={yshift=-0.2cm},
xtick={0,0.1,0.2,...,0.5},
ytick={0,0.1,0.2,...,1},
every axis/.append style={font=\footnotesize},
legend style={legend pos=south west,font=\scriptsize,legend cell align=left},
grid style={dashed},
title={(d)}
]
\addplot[color=black,thick] table {data/Steiner_ratio_vs_p.dat};
\addplot[color=black,mark=pentagon*,fill=red,thick,only marks,mark size=2.5pt] coordinates {(0.167,0.8)};
\addplot[color=black,mark=pentagon*,fill=cyan,thick,only marks,mark size=2.5pt] coordinates {(0.444,0.8)};
\addplot[color=black,mark=pentagon*,fill=green,thick,only marks,mark size=2.5pt] coordinates {(0.1,0.6285)};
\addplot[color=black,mark=pentagon*,fill=blue,thick,only marks,mark size=2.5pt] coordinates {(0.25,0.9428)};
\end{axis}
\end{tikzpicture}
\caption{Results obtained for the constellations in Example~\ref{example.1D}: {(a) SEP and bounds with MAP and ML detection for the asymmetric constellation}, (b) SEP with MAP and ML detection {for the symmetric constellation}, (c) ratio of SEPs and asymptote {for the symmetric constellation} given by \cororef{EP.MAP_vs_ML.Theo}, and (d) asymptote {for the symmetric constellation} as a function of the symbol probability $p_{1}$.}
\label{Steiner94}
\end{figure*}
\begin{example}\label{Example.Valenti}
Consider again the constellation in Example~\ref{Example.Regions} (see \figref{Valenti_32APSK}) with the labeling specified in \cite[Fig.~2]{Valenti12}. \figref{Valenti12}~(a) shows the simulated BEPs (red markers) together with the upper bounds in \eqref{P.UBound} (green), the lower bounds in \eqref{P.LBound} (cyan), and the asymptotic approximations $P_{\tnr{b}}(\SZ)\approx B_{\tnr{b}}\QF(d/(2\SZ))$ from \eqref{EP.Asym} (blue). The solid and dotted curves represent MAP and ML detection, respectively. The lower bounds are only defined when $\SX/\SZ^{2}>20.78$ dB, due to the restrictions on $\SZ$ in \eqref{P.LBound}. These results show very small differences between the MAP and ML detectors. To see the asymptotic behavior more clearly, \figref{Valenti12}~(b) shows the ratio between the eight curves in \figref{Valenti12}~(a) and $\QF(d/(2\SZ))$. It is clear that the simulated BEPs closely follow the upper bounds at these SNR values. These results also show that both the upper and lower bounds converge to $B^{\tnr{map}}_{\tnr{b}}=0.1450$ and $B^{\tnr{ml}}_{\tnr{b}}=0.1495$ for MAP and ML detection, respectively, as predicted by \theoref{EP.Asym.Theo}. Unlike \figref{Valenti12}~(a), \figref{Valenti12}~(b) clearly shows the asymptotic difference between the MAP and ML detectors, {since $R_{\tnr{b}}=0.97$}.
{
The gap between $P_{\tnr{b}}^{\tnr{map}}$ and $P_{\tnr{b}}^{\tnr{ml}}$ depends on the bit labeling, but not as strongly as on the probability distribution. It can be shown that for the probabilities in this example, $0.956 < R_{\tnr{b}} < 0.989$ for any labeling. On the other hand, for this labeling, $R_{\tnr{b}}$ can be made equal to any value in the interval $(0,1]$ by changing the probabilities.
}
\end{example}
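The dependence of the asymptotic BEP coefficient on the binary labeling can be illustrated on a smaller constellation than the 32-point one above. The sketch below applies the weights of the summary expressions \eqref{BEP.map.app}--\eqref{BEP.ml.app}, i.e., $h_{ij}=\hd{i}{j}/m$, to uniform 4-PAM (a toy stand-in; the actual $h_{ij}$ entries of Table~\ref{Table:EP} are not reproduced here) and recovers the classical factor $3/4$ for the Gray labeling.

```python
import math

def bep_coefficients(points, probs, labels):
    """B_map, B_ml for BEP, summing Hamming-weighted MED pairs as in
    (BEP.map.app)-(BEP.ml.app)."""
    M, m = len(points), len(labels[0])
    dist = lambda i, j: abs(points[i] - points[j])
    d = min(dist(i, j) for i in range(M) for j in range(M) if i != j)
    b_map = b_ml = 0.0
    for i in range(M):
        for j in range(M):
            if i != j and abs(dist(i, j) - d) < 1e-9:
                h = sum(a != b for a, b in zip(labels[i], labels[j])) / m
                b_ml += probs[i] * h
                b_map += math.sqrt(probs[i] * probs[j]) * h
    return b_map, b_ml

pts, pr = [-3, -1, 1, 3], [0.25] * 4
gray = ["00", "01", "11", "10"]
natural = ["00", "01", "10", "11"]
# Uniform 4-PAM with Gray labeling: B_b = 3/4, the classical BER factor
assert abs(bep_coefficients(pts, pr, gray)[1] - 0.75) < 1e-12
# The natural binary labeling (B_b = 1) is worse at high SNR
assert bep_coefficients(pts, pr, natural)[1] > bep_coefficients(pts, pr, gray)[1]
```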
\begin{figure*
\centering
\begin{tikzpicture}
\pgfplotscreateplotcyclelist{color list}{black,black,green,red,blue,cyan,green,red,blue,cyan}
\begin{semilogyaxis}[cycle list name=color list,
legend columns=1,
xminorgrids=true,
width=0.48\textwidth,
height=0.37\textwidth,
grid=both,
xmin=10,xmax=26,
ymin=1e-6,ymax=1,
xlabel={$\SX/\SZ^{2}$~dB},
xlabel style={yshift=0.1cm},
ylabel={$P_{\tnr{b}}(\SZ)$},
ylabel style={yshift=-0.2cm},
every axis/.append style={font=\footnotesize},
legend style={legend pos=north east,font=\scriptsize,legend cell align=left},
grid style={dashed},
title={(a)}
]
\addplot[very thick] coordinates {(1e-5,1e-5) (1e-6,1e-6)};\addlegendentry{MAP};
\addplot[very thick,dotted] coordinates {(1e-5,1e-5) (1e-6,1e-6)};\addlegendentry{ML};
\addplot+[thick,dotted] file {data/Valenti_BEP_ML_UB.txt};
\addplot+[thick,dotted,mark=*,mark size=1.7,mark options={fill=white,solid}] file {data/Valenti_BEP_ML_Sim.txt};
\addplot+[thick,dotted] file {data/Valenti_BEP_ML_Asym.txt};
\addplot+[thick,dotted] file {data/Valenti_newBEP_ML_LB.txt};
\addplot+[thick] file {data/Valenti_BEP_MAP_UB.txt};
\addplot+[thick,mark=+,mark size=1.8,mark options={fill=white}] file {data/Valenti_BEP_MAP_Sim.txt};
\addplot+[thick] file {data/Valenti_BEP_MAP_Asym.txt};
\addplot+[thick] file {data/Valenti_newBEP_MAP_LB.txt};
\node[coordinate] (EL1) at (axis cs:12,0.3) {};
\node[coordinate] (EL3) at (axis cs:19.3,3e-3) {};
\node[coordinate] (EL4) at (axis cs:23,6e-5) {};
\draw[->,thick,red] (axis cs:12,5e-3)node[red,anchor=north]{{\scriptsize{Simulations}}} -- (axis cs:13.5,8e-2);
\end{semilogyaxis}
\draw[thick,green,rotate=0] (EL1) ellipse (0.06 and .2) node[anchor=south west,inner sep=2pt]{{\scriptsize{Upper bounds}}};
\draw[thick,blue,rotate=0] (EL3) ellipse (0.06 and .2) node[anchor=north east,inner sep=2pt]{{\scriptsize{Asymptotes}}};
\draw[thick,cyan,rotate=0] (EL4) ellipse (0.06 and .2) node[anchor=north east,inner sep=2pt]{{\scriptsize{Lower bounds}}};
\end{tikzpicture}
\begin{tikzpicture}
\pgfplotscreateplotcyclelist{color list}{black,black,green,red,blue,cyan,green,red,blue,cyan}
\begin{axis}[cycle list name=color list,
legend columns=1,
xminorgrids=true,
width=0.48\textwidth,
height=0.37\textwidth,
grid=both,
xmin=24,xmax=35,
ymin=0.12,ymax=0.21,
xlabel={$\SX/\SZ^{2}$~dB},
xlabel style={yshift=0.1cm},
ylabel={$\frac{P_{\tnr{b}}(\SZ)}{\QF(d/(2\SZ))}$},
ylabel style={yshift=-0.1cm},
every axis/.append style={font=\footnotesize},
legend style={legend pos=north east,font=\scriptsize,legend cell align=left},
grid style={dashed},
title={(b)}
]
\addplot[very thick] coordinates {(1e-5,1e-5) (1e-6,1e-6)};\addlegendentry{MAP};
\addplot[very thick,dotted] coordinates {(1e-5,1e-5) (1e-6,1e-6)};\addlegendentry{ML};
\addplot+[thick] file {data/Valenti_BEPQ_MAP_UB.txt};
\addplot+[thick,only marks,mark=+,mark size=1.8,mark options={fill=white}] file {data/Valenti_BEPQ_MAP_Sim.txt};
\addplot+[thick] file {data/Valenti_BEPQ_MAP_Asym.txt};
\addplot+[thick] file {data/Valenti_BEPQ_MAP_LB.txt};
\addplot+[thick,dotted] file {data/Valenti_BEPQ_ML_UB.txt};
\addplot+[thick,only marks,dotted,mark=*,mark size=1.7,mark options={fill=white,solid}] file {data/Valenti_BEPQ_ML_Sim.txt};
\addplot+[thick,dotted] file {data/Valenti_BEPQ_ML_Asym.txt};
\addplot+[thick,dotted] file {data/Valenti_BEPQ_ML_LB.txt};
\node[coordinate] (ER1) at (axis cs:27,0.163) {};
\node[coordinate] (ER2) at (axis cs:25,0.186) {};
\node[coordinate] (ER3) at (axis cs:26,0.147) {};
\node[coordinate] (ER4) at (axis cs:30,0.134) {};
\end{axis}
\draw[thick,green,rotate=0] (ER1) ellipse (0.06 and .3) node[anchor=south west,inner sep=2pt]{{\scriptsize{Upper bounds}}};
\draw[thick,red,rotate=0] (ER2) ellipse (0.08 and .35) node[anchor=south west,inner sep=4pt]{{\scriptsize{Simulations}}};
\draw[thick,blue,rotate=0] (ER3) ellipse (0.06 and .3) node[anchor=north west,inner sep=4pt]{{\scriptsize{Asymptotes}}};
\draw[thick,cyan,rotate=0] (ER4) ellipse (0.06 and .3) node[anchor=north west,inner sep=4pt]{{\scriptsize{Lower bounds}}};
\end{tikzpicture}
\caption{Results obtained for the constellation in Example~\ref{Example.Valenti}: (a) BEP with MAP and ML detection, and (b) ratio between BEPs in (a) and $\QF(d/(2\SZ))$.}
\label{Valenti12}
\end{figure*}
\cororef{EP.MAP_vs_ML.Theo} gives necessary and sufficient conditions for the asymptotic optimality of ML detection for both SEP and BEP. A nonuniform distribution will in general give $R<1$, although there are exceptions. Consider for example a constellation that can be divided into \emph{clusters}, where all pairs of constellation points in different clusters are at distances larger than the MED. Then ML detection is asymptotically optimal (i.e., $R=1$) if the probabilities of all constellations points \emph{within} a cluster are equal, even if the clusters have different probabilities. In this special case, \eqref{EP.B} for SEP yields $B^{\tnr{map}}_{\tnr{s}}=B^{\tnr{ml}}_{\tnr{s}}=\sum_{i\in\mcIX} p_i G_i$, where $G_i$ is the number of neighbors at MED from point $i$. We illustrate this concept with the following example.
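The cluster argument above can be checked numerically on a minimal toy constellation: two widely separated clusters of two points each, with equal probabilities within each cluster but different probabilities between clusters. A hedged Python sketch (again taking $h_{ij}=1$ for SEP, since Table~\ref{Table:EP} is not shown in this section):

```python
import math

def sep_coefficients(points, probs, tol=1e-9):
    # B_map and B_ml in (EP.B) for SEP, with h_ij = 1 assumed
    M = len(points)
    dist = lambda i, j: abs(points[i] - points[j])
    d = min(dist(i, j) for i in range(M) for j in range(M) if i != j)
    b_map = b_ml = 0.0
    for i in range(M):
        for j in range(M):
            if i != j and abs(dist(i, j) - d) < tol:
                b_ml += probs[i]
                b_map += math.sqrt(probs[i] * probs[j])
    return b_map, b_ml

# Clusters {0, 1} and {10, 11}: each point has G_i = 1 MED neighbor,
# which lies in the same cluster and has the same probability
points = [0.0, 1.0, 10.0, 11.0]
q, s = 0.35, 0.15                 # 2q + 2s = 1, clusters differ
b_map, b_ml = sep_coefficients(points, [q, q, s, s])
assert abs(b_map - b_ml) < 1e-12  # R = 1: ML is asymptotically optimal
```

Here $B^{\tnr{map}}_{\tnr{s}}=B^{\tnr{ml}}_{\tnr{s}}=\sum_i p_i G_i=1$, even though the two clusters carry different probabilities.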
\begin{example}\label{example.2D}
Fig.~\ref{Thomas74}~(a) illustrates the two-dimensional constellation in \cite[Fig.~3 (d)]{Thomas74}. We let the symbols in the inner ring be used with probability $p_{1}$ each and the symbols in the outer ring with probability $p_{2}=(1-4p_{1})/12$. {The radii of the two rings are $r_1 = 0.71d$ and $r_2 = 1.93 d$, and the average symbol energy is $\SX = 4 p_1 r_1^2 + 12 p_2 r_2^2$.} Fig.~\ref{Thomas74}~(b) shows the simulated ratio $P_{\tnr{s}}(\SZ)/\QF(d/(2\SZ))$ when $p_1=0.22$ and $p_2=0.01$ for ML (red circles) and MAP (red crosses) detection.
The upper bounds in \eqref{P.UBound}, the lower bounds in \eqref{P.LBound}, and the asymptotic expression, all divided by $\QF(d/(2\SZ))$, are included as green, cyan, and blue curves, respectively. In this case, the lower bounds for ML and MAP detection are identical, as are the asymptotes. For this specific constellation, $G_i=2$ for all $i\in\mc{I}$, and hence, $B^{\tnr{map}}_{\tnr{s}}=B^{\tnr{ml}}_{\tnr{s}}=2$, independently of $p_1$ and $p_2$, which implies $R_{\tnr{s}}=1$. {In terms of BEP, $B^{\tnr{map}}_{\tnr{b}}=B^{\tnr{ml}}_{\tnr{b}}$, hence $R_{\tnr{b}}=1$, regardless of the labeling and of $p_1$ and $p_2$. This shows that for this particular nonuniform constellation, the ML and MAP detectors are asymptotically equivalent for SEP and BEP for all labelings.}
\end{example}
\begin{figure
\centering
\raisebox{1ex}{
\hspace{6ex}
\begin{tikzpicture}
\pgfplotscreateplotcyclelist{color list}{black,black,green,red,blue,cyan,green,red,blue,cyan}
\begin{axis}[
axis lines=none,
width=0.22\textwidth,
height=0.22\textwidth,
grid=both,
xmin=-4,xmax=4,
ymin=-4,ymax=4,
every axis/.append style={font=\footnotesize},
legend style={legend pos=north east,font=\scriptsize,legend cell align=left},
grid style={dashed},
title={(a)}
]
\addplot+[black,thick,mark=*,mark options={fill=black},mark size=4.69pt] file {data/dodeca2_16_inner_ring.txt};
\addplot+[black,thick,mark=*,mark options={fill=black},mark size=1pt] file {data/dodeca2_16_outer_ring.txt};
\end{axis}
\end{tikzpicture}
}
\begin{tikzpicture}
\pgfplotscreateplotcyclelist{color list}{black,black,green,red,blue,cyan,green,red,blue,cyan}
\begin{axis}[cycle list name=color list,
legend columns=1,
xminorgrids=true,
width=0.48\textwidth,
height=0.37\textwidth,
grid=both,
xmin=0,xmax=30,
ymin=1.8,ymax=2.8,
xlabel={$\SX/\SZ^{2}$~dB},
xlabel style={yshift=0.1cm},
ylabel={$\frac{P_{\tnr{s}}(\SZ)}{\QF(d/(2\SZ))}$},
ylabel style={yshift=-0.2cm},
every axis/.append style={font=\footnotesize},
legend style={legend pos=north east,font=\scriptsize,legend cell align=left},
grid style={dashed},
title={(b)}
]
\addplot[very thick] coordinates {(1e-5,1e-5) (1e-6,1e-6)};\addlegendentry{MAP};
\addplot[very thick,dotted] coordinates {(1e-5,1e-5) (1e-6,1e-6)};\addlegendentry{ML};
\addplot+[thick] file {data/APSK-4-12_SEPQ_MAP_UB.txt};
\addplot+[thick,only marks,mark=+,mark size=1.8,mark options={fill=white}] file {data/APSK-4-12_SEPQ_MAP_Sim.txt};
\addplot+[thick] file {data/APSK-4-12_SEPQ_MAP_Asym.txt};
\addplot+[thick] file {data/APSK-4-12_SEPQ_MAP_LB.txt};
\addplot+[thick,dotted] file {data/APSK-4-12_SEPQ_ML_UB.txt};
\addplot+[thick,only marks,dotted,mark=*,mark size=1.7,mark options={fill=white,solid}] file {data/APSK-4-12_SEPQ_ML_Sim.txt};
\addplot+[thick,dotted] file {data/APSK-4-12_SEPQ_ML_Asym.txt};
\addplot+[thick,dotted] file {data/APSK-4-12_SEPQ_ML_LB.txt};
\node[coordinate] (T1) at (axis cs:9.8,2.77) {};
\node[blue,anchor=north] (T2) at (axis cs:9,2) {{\scriptsize{Asymptotes}}};
\node[cyan,anchor=east] (T3) at (axis cs:23.5,1.88) {{\scriptsize{Lower bounds}}};
\draw[->,thick,red] (axis cs:5,2.4) node[red,anchor=north]{{\scriptsize{Simulations}}} -- (axis cs:7.9,2.72);
\draw[->,thick,red] (axis cs:5,2.31) -- (axis cs:7.9,2.199);
\end{axis}
\draw[thick,green,rotate=90] (T1) ellipse (0.06 and .5) node[anchor=west,inner sep=17pt]{{\scriptsize{Upper bounds}}};
\end{tikzpicture}
\caption{Results obtained for the constellation in Example~\ref{example.2D}: (a) Constellation where the pairs of symbols at MED are marked with solid lines and the symbol probabilities are indicated by the point areas, and (b) asymptotic performance shown as the ratio between SEPs and $\QF(d/(2\SZ))$.}
\label{Thomas74}
\end{figure}
\section{Conclusions}\label{Sec:Conclusions}
In this paper, an analytical characterization of the asymptotic behavior of the MAP and ML detectors in terms of SEP and BEP for arbitrary multidimensional constellations over the AWGN channel was presented. The four results obtained from \theoref{EP.Asym.Theo} and Table~\ref{Table:EP} can be summarized as
\begin{align}
\label{BEP.map.app}
P_{\tnr{b}}^{\tnr{map}}(\SZ) &\approx \QF\biggl(\frac{d}{2\SZ}\biggr) \sum_{\substack{i,j\in\mcIX\\ \delta_{ij}=\MED}} \dfrac{\hd{i}{j}}{m}\sqrt{p_ip_j},\\
\label{BEP.ml.app}
P_{\tnr{b}}^{\tnr{ml}}(\SZ) &\approx \QF\biggl(\frac{d}{2\SZ}\biggr) \sum_{\substack{i,j\in\mcIX\\ \delta_{ij}=\MED}} \dfrac{\hd{i}{j}}{m} p_i,\\
\label{SEP.map.app}
P_{\tnr{s}}^{\tnr{map}}(\SZ) &\approx \QF\biggl(\frac{d}{2\SZ}\biggr) \sum_{\substack{i,j\in\mcIX\\ \delta_{ij}=\MED}} \sqrt{p_ip_j},\\
\label{SEP.ml.app}
P_{\tnr{s}}^{\tnr{{ml}}}(\SZ) &\approx \QF\biggl(\frac{d}{2\SZ}\biggr) \sum_{\substack{i,j\in\mcIX\\ \delta_{ij}=\MED}} p_i,
\end{align}
where the relative error in all approximations approaches zero as $\SZ\rightarrow0$. The expressions for MAP and ML are equal if and only if $p_i=p_j,\, \forall i,j:\delta_{ij}=d$.
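The four summary expressions above translate directly into a short numerical routine. The following sketch (an illustration, not part of the paper) evaluates \eqref{BEP.map.app}--\eqref{SEP.ml.app} for an 8-PSK constellation with a nonuniform input distribution and a natural binary labeling, and checks that the MAP approximations never exceed the ML ones, which follows from the AM--GM inequality applied to each MED pair.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def asymptotic_error_probabilities(points, probs, labels, sigma):
    """Evaluate (BEP.map.app)-(SEP.ml.app); `points` are complex,
    `labels` are binary strings of length m."""
    M, m = len(points), len(labels[0])
    dist = lambda i, j: abs(points[i] - points[j])
    d = min(dist(i, j) for i in range(M) for j in range(M) if i != j)
    s_map = s_ml = b_map = b_ml = 0.0
    for i in range(M):
        for j in range(M):
            if i != j and abs(dist(i, j) - d) < 1e-9 * d:
                h = sum(a != b for a, b in zip(labels[i], labels[j])) / m
                s_map += math.sqrt(probs[i] * probs[j])
                s_ml += probs[i]
                b_map += h * math.sqrt(probs[i] * probs[j])
                b_ml += h * probs[i]
    q = Q(d / (2 * sigma))
    return {"bep_map": b_map * q, "bep_ml": b_ml * q,
            "sep_map": s_map * q, "sep_ml": s_ml * q}

# 8-PSK, natural 3-bit labeling, nonuniform input distribution
pts = [complex(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
pr = [0.2, 0.1, 0.1, 0.1, 0.2, 0.1, 0.1, 0.1]
lab = [format(k, "03b") for k in range(8)]
out = asymptotic_error_probabilities(pts, pr, lab, sigma=0.05)
assert out["sep_map"] < out["sep_ml"]  # MAP strictly better asymptotically
assert out["bep_map"] < out["bep_ml"]
```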
Somewhat surprisingly, the results in this paper are the first ones that address the problem in such a general setup. The theoretical analysis shows that for nonuniform input distributions, ML detection is in general asymptotically suboptimal. In most practically relevant cases, however, MAP and ML detection give very similar asymptotic results. The results in this paper are first-order only. An asymptotic analysis considering higher order terms is left for further investigation.
{Most modern transceivers based on high-order modulation formats use a receiver that operates at a bit level (i.e., a bit-wise receiver). Because of this, the binary labeling of the constellation plays a fundamental role in the system design. Furthermore, the use of nonequally likely symbols (probabilistic shaping) has recently received renewed attention in the literature. In this context, the asymptotic BER expressions presented in this paper can be used to optimize both the constellations and the binary labeling. This is also left for further investigation.}
\section*{Acknowledgment}\label{Sec:Ack}
We would like to thank Prof.~F.~R.~Kschischang for insightful comments on our poster ``Asymptotic Error Probability Expressions for the MAP Detector and Multidimensional Constellations,'' presented at the Recent Results Poster Session of ISIT 2016, Barcelona.
\appendices
\section{Proof of \lemmaref{Lemma.UB}}\label{Appendix.Lemma.UB}
For MAP detection, we have from \eqref{Fij.2} and \eqref{map.j.region} that
\begin{align}\label{Fij.UB.1}
\tpMAP{i}{j}(\SZ) \leq \Pr\bigl\{\boldsymbol{Y} \in \mc{H}_{ij}^{\tnr{map}}(\SZ)|\boldsymbol{X}=\boldsymbol{x}_{i}\bigr\},
\end{align}
where $\mc{H}_{ij}^{\tnr{map}}(\SZ)$ is the half-space determined by a \emph{pairwise} MAP decision (see \eqref{map}--\eqref{map.j.region}), i.e.,
\begin{align}\label{sep.i.map.halfspace.def}
\mc{H}_{ij}^{\tnr{map}}(\SZ) &= \set{\boldsymbol{y}\in\mathbb{R}^{N}: p_i\pdf(\boldsymbol{y}|\boldsymbol{x}_{i}) \leq p_j\pdf(\boldsymbol{y}|\boldsymbol{x}_{j})}.
\end{align}
Using \eqref{pdf.channel}, \eqref{sep.i.map.halfspace.def} can be expressed as
\begin{align}
\mc{H}_{ij}^{\tnr{map}}(\SZ)
\label{sep.i.map.halfspace}
& = \biggl\{\boldsymbol{y}\in\mathbb{R}^{N}:\inner{\boldsymbol{y}-\boldsymbol{x}_{i}}{\frac{\boldsymbol{d}_{ij}}{\delta_{ij}}} \geq \Delta_{ij}(\SZ)\biggr\},
\end{align}
where $\Delta_{ij}(\SZ)$ is given by \eqref{delta.ij} and $\boldsymbol{d}_{ij}= \boldsymbol{x}_{j}-\boldsymbol{x}_{i}$. The value of $|\Delta_{ij}(\SZ)|$ is the shortest Euclidean distance between $\boldsymbol{x}_{i}$ and the hyperplane defining the half-space $\mc{H}_{ij}^{\tnr{map}}(\SZ)$. For a geometric interpretation, see $\mc{H}_{ij}^{\tnr{map}}(\SZ)$ in \figref{proof_sketch_2D}.
\begin{figure}[t]
\scalebox{0.9}{
\centerline{
\footnotesize{
\hspace{-3.5ex}
\begin{tikzpicture}[tight background]
\draw [lightgray,fill=lightgray] plot [smooth] coordinates {(-100pt,45pt) (-110pt,130pt) (-20pt,110pt) (30pt,130pt) (70pt,115pt) (130pt,130pt) (100pt,45pt)};
\draw [black,fill=lightblue,opacity=0.5] plot coordinates {(-50pt,45pt) (-70pt,100pt) (80pt,110pt) (50pt,45pt)};
\draw [black,fill=lightred,opacity=0.5] plot coordinates {(50pt,45pt) (70pt,2.5pt) (130pt,-15pt) (130pt,90pt) (80pt,110pt) };
\draw [black,fill=lightgreen,opacity=0.5] plot coordinates {(70pt,2.5pt) (20pt,-56pt) (80pt,-90pt) (120pt,-70pt) (130pt,-15pt)};
\node[] at (55pt,95pt) {$\mc{R}_{j}^{\tnr{map}}(\SZ)$};
\node[] at (110pt,5pt) {$\mc{R}_{k}^{\tnr{map}}(\SZ)$};
\node[] at (100pt,-50pt) {$\mc{R}_{n}^{\tnr{map}}(\SZ)$};
\node[] at (-90pt,110pt) {$\mc{H}_{ij}^{\tnr{map}}(\SZ)$};
\fill[thick] (0pt,0pt) circle (2pt) node[anchor=south east] {$\boldsymbol{x}_{i}$};
\draw[black,thick,dashed] (0pt,0pt) circle (80pt);
\draw[thick,dashed] (0pt,80pt) circle (80pt);
\draw[<->,thick] (0.5pt,81.5pt) -- node[anchor=east] {$d$} +(55:78pt) ;
\fill[thick,black] (0pt,80pt) circle (2pt) node[anchor=south east] {$\boldsymbol{x}_{j}$};
\fill[thick,black] (90pt,40pt) circle (2pt) node[anchor=north west] {$\boldsymbol{x}_{k}$};
\draw[<->,thick] (-120:2pt) -- node[anchor=west] {$d$} (-120:80pt) ;
\draw[black,thick] (-10pt,30pt) rectangle (10pt,50pt);
\draw[black,thick,fill=gray] (-10pt,45pt) rectangle (10pt,50pt);
\draw[black,thick,dashed] (0pt,40pt) circle (14.1pt);
\draw[->,thick,black] (0pt,2pt) -- +(90:76pt) node[anchor=north west] {$\boldsymbol{d}_{ij}$};
\draw[<->,thick] (0pt,40pt) -- node[anchor=south east,inner sep=-0.5pt] {$r$} (-10pt,30pt);
\draw[-,black,thick] (-100pt,45pt) -- (100pt,45pt) ;
\draw[-,thick,black] (-2pt,40pt) -- (2pt,40pt) {};
\draw[<-,thick] (2.2pt,40pt) -- (30pt,35pt) node[anchor=west,inner sep=1pt] {$\ov{\boldsymbol{x}}_{ij}$};
\draw[<-,thick] (-5pt,47.5pt) -- (-65pt,-10pt) node[anchor=north,inner sep=1pt] {Orthotope $\mc{R}_{j}^{\tnr{map}}(\SZ)\cap\mc{C}_{ij}(\SZ)$};
\draw[<->,thick] (-100pt,0pt) -- node[anchor=east] {$|\Delta_{ij}(\SZ)|$} (-100pt,45pt) ;
\draw[-,thick,dashed] (-100pt,0pt) -- (0pt,0pt) ;
\fill[thick,black] (-40:80pt) circle (2pt) node[anchor=north west] {$\boldsymbol{x}_{n}$};
\draw[black,thick,dashed] (-40:40pt) circle (14.1pt);
\draw[black,thick,rotate=-40,fill=gray] (30pt,-10pt) rectangle (50pt,10pt);
\draw[<-,thick] (30pt,-20pt) -- (35pt,-5pt) node[anchor=south west,inner sep=0pt] {$\mc{C}_{in}(\SZ)$};
\draw[-,thick,black,rotate=-40] (40pt,-2pt) -- (40pt,2pt) {};
\draw[->,thick] (-40:2pt) -- (-40:35pt) -- node[anchor=west] {$\boldsymbol{d}_{in}$} (-40:78pt);
\end{tikzpicture}}}
}
\caption{Geometric representation of the proofs of Lemma~\ref{Lemma.UB} and \ref{Lemma.LB} for a 2D constellation with MAP detection. The MAP decision regions are shown for $\boldsymbol{x}_{j}$, $\boldsymbol{x}_{k}$, and $\boldsymbol{x}_{n}$, the half-space in \eqref{sep.i.map.halfspace} for $\boldsymbol{x}_{j}$ ($\mc{H}_{ij}^{\tnr{map}}(\SZ)$), and the hypercube for $\boldsymbol{x}_{n}$ ($\mc{C}_{in}(\SZ)$).}
\label{proof_sketch_2D}
\end{figure}
Using \eqref{sep.i.map.halfspace}, \eqref{Fij.UB.1} can be calculated as
\begin{align}\label{Fij.UB.2}
\tpMAP{i}{j}(\SZ) & \leq
\Pr\left\{ \inner{\boldsymbol{Y}-\boldsymbol{X}}{\frac{\boldsymbol{d}_{ij}}{\delta_{ij}}} \ge \Delta_{ij}(\SZ) \mid \boldsymbol{X} = \boldsymbol{x}_i \right\} \\
\label{Fij.UB.3}
& = \QF\biggl(\frac{\Delta_{ij}(\SZ)}{\SZ}\biggr),
\end{align}
where \eqref{Fij.UB.3} follows from \eqref{AWGN}--\eqref{pdf.channel} by recognizing $\inner{\boldsymbol{Y}-\boldsymbol{X}}{\boldsymbol{d}_{ij}/\delta_{ij}}$ as a zero-mean Gaussian random variable with variance $\SZ^2$.
The proof for the ML case is analogous but starts from \eqref{ml.j.region} instead of \eqref{map.j.region}. It follows straightforwardly that $\mc{H}_{ij}^{\tnr{ml}}(\SZ)$ is also given by \eqref{sep.i.map.halfspace}, where now $\Delta_{ij}(\SZ)={\delta_{ij}}/{2}$ defines a hyperplane half-way between $\boldsymbol{x}_{i}$ and $\boldsymbol{x}_{j}$.
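The key step above, identifying $\inner{\boldsymbol{Y}-\boldsymbol{X}}{\boldsymbol{d}_{ij}/\delta_{ij}}$ as a zero-mean Gaussian of variance $\SZ^2$, is easy to check numerically. The following Python sketch (illustrative only, not part of the proof) estimates the half-space probability by Monte Carlo and compares it with $\QF(\Delta_{ij}/\SZ)$:

```python
import math
import random

def Q(x):
    """Gaussian tail probability Q(x) = P(Z >= x) for Z ~ N(0,1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def halfspace_prob_mc(x_i, x_j, sigma, Delta, n_trials=200_000, seed=1):
    """Monte Carlo estimate of Pr{<Y - x_i, d_ij/delta_ij> >= Delta}
    for Y = x_i + Gaussian noise of variance sigma^2 per dimension."""
    rng = random.Random(seed)
    d = [b - a for a, b in zip(x_i, x_j)]
    delta = math.sqrt(sum(c * c for c in d))
    u = [c / delta for c in d]  # unit vector along d_ij
    hits = 0
    for _ in range(n_trials):
        noise = [rng.gauss(0.0, sigma) for _ in u]
        if sum(ui * ni for ui, ni in zip(u, noise)) >= Delta:
            hits += 1
    return hits / n_trials

est = halfspace_prob_mc([0.0, 0.0], [1.0, 0.0], sigma=0.5, Delta=0.5)
print(est, Q(0.5 / 0.5))  # agree to Monte Carlo accuracy, ~0.159
```

The projection of isotropic Gaussian noise onto any unit vector is again Gaussian with the same per-dimension variance, which is all the bound requires.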
\section{Proof of \lemmaref{Lemma.LB}}\label{Appendix.Lemma.LB}
To lower-bound \eqref{Fij.2} for MAP detection, we first ignore all the contributions of constellation points not at MED, i.e., we use $\Pr\Set{\boldsymbol{Y} \in \mc{R}_{j}^{\tnr{map}}(\SZ)|\boldsymbol{X}=\boldsymbol{x}_{i}}\geq 0$, for all $j$ such that $\delta_{ij}>d$. This gives the first case in \eqref{Lemma.LB.eq}.
The case of $\delta_{ij}=d$ is addressed by first defining an $N$-dimensional hypersphere centered at the mid-point $\ov{\boldsymbol{x}}_{ij}= \frac{1}{2}(\boldsymbol{x}_{i}+\boldsymbol{x}_{j})$ and of radius $r(\SZ)$ defined in \eqref{r}. Second, we inscribe an $N$-dimensional hypercube with half-side $r(\SZ)/\sqrt{N}$ inside this hypersphere. And third, we rotate this hypercube so that one of its sides is perpendicular to $\boldsymbol{d}_{ij}$. We denote this hypercube by $\mc{C}_{ij}(\SZ)$. For a geometric interpretation when $N=2$, see \figref{proof_sketch_2D}.
To lower-bound \eqref{Fij.2}, we integrate $\pdf(\boldsymbol{y}|\boldsymbol{x})$ in \eqref{pdf.channel} over the intersection of the MAP region $\mc{R}_{j}^{\tnr{map}}(\SZ)$ and the hypercube $\mc{C}_{ij}(\SZ)$, i.e.,
\begin{align}\label{Lemma.LB.1}
\tpMAP{i}{j}(\SZ) & = \int_{\mc{R}_{j}^{\tnr{map}}(\SZ)} \pdf(\boldsymbol{y}|\boldsymbol{x}_{i})\, \tnr{d}\boldsymbol{y}\\
\label{Lemma.LB.2}
& \geq \int_{\mc{R}_{j}^{\tnr{map}}(\SZ)\,\cap\,\mc{C}_{ij}(\SZ)} \pdf(\boldsymbol{y}|\boldsymbol{x}_{i})\, \tnr{d}\boldsymbol{y}.
\end{align}
We will prove that for sufficiently low values of $\SZ$, the integration region in \eqref{Lemma.LB.2} is an orthotope (hyperrectangle), as illustrated for $\boldsymbol{x}_{i}$ and $\boldsymbol{x}_{j}$ in \figref{proof_sketch_2D}. This will be done in three steps. We will show first that the hypercube $\mc{C}_{ij}(\SZ)$ is nonempty, second that it does not intersect any region $\mc{R}_{k}^{\tnr{map}}(\SZ)$ for $k \notin \set{i,j}$, and third that $\mc{C}_{ij}(\SZ)$ intersects both $\mc{R}_{i}^{\tnr{map}}(\SZ)$ and $\mc{R}_{j}^{\tnr{map}}(\SZ)$. Together these three facts imply that $\mc{R}_{j}^{\tnr{map}}(\SZ)\cap\mc{C}_{ij}(\SZ)$ is an orthotope, whose dimensions are then determined, which allows the integral in \eqref{Lemma.LB.2} to be calculated exactly.
For the first step, combining \eqref{r} and \eqref{sigma.zero} and rearranging terms yields for any $i,j\in\mc{I}$ with $j\neq i$
\begin{align}\label{d2diff}
\frac{d^2}{\SZ^2} - \frac{d^2}{\tau_{ij}^2}
= 2(1+\sqrt{3}) \left(\frac{d r(\SZ)}{\SZ^2} - \sqrt{N}\left|\log{\frac{p_i}{p_j}}\right| \right).
\end{align}
If $\SZ < \tau_{ij}$, then \eqref{d2diff} implies
\begin{align}\label{r.ineq}
\frac{r(\SZ)}{\sqrt{N}} &> \frac{\SZ^2}{d}\left|\log{\frac{p_i}{p_j}}\right|.
\end{align}
This inequality, to which we will return later, shows that $r(\SZ) > 0$ and hence that $\mc{C}_{ij}(\SZ)$ is not empty.
For the second step, we consider any $i,j,k\in\mc{I}$ such that $\delta_{ij} = d$ and $k \notin \set{i,j}$. We have
\begin{align}
d^2 &\le \min_{\ell \in \set{i,j}} \|\boldsymbol{x}_k - \boldsymbol{x}_\ell\|^2 \\
& = \min_{\ell \in \set{i,j}} \|(\boldsymbol{x}_k - \ov{\boldsymbol{x}}_{ij}) - (\boldsymbol{x}_\ell - \ov{\boldsymbol{x}}_{ij})\|^2 \\
& = \|\boldsymbol{x}_k - \ov{\boldsymbol{x}}_{ij}\|^2 - 2\max_{\ell \in \set{i,j}} \inner{\boldsymbol{x}_k - \ov{\boldsymbol{x}}_{ij}}{\boldsymbol{x}_\ell - \ov{\boldsymbol{x}}_{ij}}+ \frac{d^2}{4} \label{xlbound1} \\
& \le \|\boldsymbol{x}_k - \ov{\boldsymbol{x}}_{ij}\|^2 + \frac{d^2}{4}, \label{xlbound2}
\end{align}
where \eqref{xlbound1} follows because $\|\boldsymbol{x}_i - \ov{\boldsymbol{x}}_{ij}\|^2 = \|\boldsymbol{x}_j - \ov{\boldsymbol{x}}_{ij}\|^2 = d^2/4$ and \eqref{xlbound2} because $\boldsymbol{x}_i - \ov{\boldsymbol{x}}_{ij} = - (\boldsymbol{x}_j - \ov{\boldsymbol{x}}_{ij})$ in the second term of \eqref{xlbound1}. Hence, $\|\boldsymbol{x}_k-\ov{\boldsymbol{x}}_{ij}\| \ge d\sqrt{3}/2$.
Consider now any point $\boldsymbol{y} \in \mc{C}_{ij}(\SZ)$. By the triangle inequality, $\|\boldsymbol{y}-\boldsymbol{x}_k\| \ge \|\boldsymbol{x}_k-\ov{\boldsymbol{x}}_{ij}\| - \|\boldsymbol{y}-\ov{\boldsymbol{x}}_{ij}\| \ge d\sqrt{3}/2-r(\SZ)$ and $\|\boldsymbol{y}-\boldsymbol{x}_i\| \le \|\boldsymbol{x}_i-\ov{\boldsymbol{x}}_{ij}\| + \|\boldsymbol{y}-\ov{\boldsymbol{x}}_{ij}\| \le d/2+r(\SZ)$,
which are then combined into
\begin{align}
\label{empty.eq.2}
\|\boldsymbol{y}-\boldsymbol{x}_{k}\|^{2}-\|\boldsymbol{y}-\boldsymbol{x}_{i}\|^{2} & \geq \left(\frac{d\sqrt{3}}{2}-r(\SZ)\right)^{2}-\left(\frac{d}{2}+r(\SZ)\right)^{2}\\
&= \frac{d^2}{2} - \left(1+\sqrt{3}\right) d r(\SZ) \\
\label{empty.eq.3}
& = 2\SZ^{2}\log\max_{a,b\in\mc{I}}\left\{\frac{p_{a}}{p_{b}}\right\} \\
\label{empty.eq.4}
& \ge 2\SZ^{2}\log\frac{p_k}{p_i},
\end{align}
where \eqref{empty.eq.3} follows from \eqref{r}. Rearranging terms,
\begin{align} \label{map.ineq}
p_k \explow{-\frac{\|\boldsymbol{y}-\boldsymbol{x}_{k}\|^2}{2\SZ^2}} \le p_i \explow{-\frac{\|\boldsymbol{y}-\boldsymbol{x}_{i}\|^2}{2\SZ^2}},
\end{align}
which via \eqref{pdf.channel} and \eqref{map.j.region} implies $\boldsymbol{y} \notin \mc{R}_{k}^{\tnr{map}}(\SZ).$\footnote{If \eqref{map.ineq} is an equality, $\boldsymbol{y}$ may lie on the boundary of $\mc{R}_{k}^{\tnr{map}}(\SZ)$, but such points do not influence the integral in \eqref{Lemma.LB.2} and are neglected.} Hence, $\mc{C}_{ij}(\SZ) \cap \mc{R}_{k}^{\tnr{map}}(\SZ) = \varnothing$ and \eqref{Lemma.LB.2} can be written as
\begin{align}\label{Lemma.LB.3}
\tpMAP{i}{j}(\SZ) \geq \int_{\mc{H}_{ij}^{\tnr{map}}(\SZ)\,\cap\,\mc{C}_{ij}(\SZ)} \pdf(\boldsymbol{y}|\boldsymbol{x}_{i})\, \tnr{d}\boldsymbol{y}.
\end{align}
For the third step, we return to \eqref{r.ineq}, which holds for any pair $i,j\in\mc{I}$ with $j\neq i$. In the special case when $\delta_{ij} = d$, \eqref{r.ineq} and \eqref{delta.ij} yield
\begin{align}\label{orthotope.ineq}
\frac{d}{2}-\frac{r(\SZ)}{\sqrt{N}} < \Delta_{ij}(\SZ) < \frac{d}{2}+\frac{r(\SZ)}{\sqrt{N}}.
\end{align}
Since $\Delta_{ij}(\SZ)$ gives the distance between $\boldsymbol{x}_{i}$ and $\mc{R}_{j}^{\tnr{map}}(\SZ)$, and $d/2 \pm r(\SZ)/\sqrt{N}$ gives the distance between $\boldsymbol{x}_{i}$ and two opposite facets of the hypercube $\mc{C}_{ij}(\SZ)$, \eqref{orthotope.ineq} implies that $\mc{H}_{ij}^{\tnr{map}}(\SZ)\cap\mc{C}_{ij}(\SZ)$ is a (nonempty) orthotope with thickness $d/2 + r(\SZ)/\sqrt{N}-\Delta_{ij}(\SZ)$. Carrying out the integration in \eqref{Lemma.LB.3} over this orthotope gives the second case of \eqref{Lemma.LB.eq}, which completes the proof of \lemmaref{Lemma.LB} for MAP detection.
The proof for ML detection is obtained similarly. The hypercube $\mc{C}_{ij}(\SZ)$ is defined in the same way as before, and the analysis is identical up to \eqref{empty.eq.3}. Equation \eqref{empty.eq.4} is replaced by $\|\boldsymbol{y}-\boldsymbol{x}_k\|^2-\|\boldsymbol{y}-\boldsymbol{x}_i\|^2 \ge 0$, which via \eqref{pdf.channel} and \eqref{ml.j.region} implies $\boldsymbol{y} \notin \mc{R}_{k}^{\tnr{ml}}(\SZ)$. Hence, \eqref{Lemma.LB.3} is still valid and so is \eqref{orthotope.ineq}, except that $\Delta_{ij}(\SZ)$ in \eqref{orthotope.ineq} is now equal to $d/2$ in accordance with \eqref{delta.ij}. The thickness of $\mc{H}_{ij}^{\tnr{ml}}(\SZ)\cap\mc{C}_{ij}(\SZ)$ is thus $r(\SZ)/\sqrt{N}$, which proves \eqref{Lemma.LB.eq} for ML detection.
\section{Proof of Theorem~\ref{EP.Asym.Theo}}\label{Appendix.EP.Asym.Theo}
To prove \theoref{EP.Asym.Theo}, we first use \eqref{P} to obtain
\begin{align}\label{proof.1}
\limsnszero{
\frac{P(\SZ)}{\QF\bigl(\frac{d}{2\SZ}\bigr)}
}
= \sum_{i\in\mcIX} p_i \sum_{\substack{j\in\mcIX\\j\neq i}} h_{ij}
\limsnszero{
\frac{\tp{i}{j}(\SZ)}{\QF\bigl(\frac{d}{2\SZ}\bigr)}
}.
\end{align}
As will become apparent later, the limit on the right hand side of \eqref{proof.1} exists and, hence, so does the limit on the left hand side. To calculate the limit in the right hand side of \eqref{proof.1}, we will sandwich it using \lemmasref{Lemma.UB}{Lemma.LB}.
For MAP detection, we first study the asymptotic behavior of the upper bound in \lemmaref{Lemma.UB}
\begin{align}
\nonumber
&\limsnszero{\frac{\tpMAP{i}{j}(\SZ)}{\QF\bigl(\frac{d}{2\SZ}\bigr)}}\\
&\le
\label{B.3}
\limsnszero{
\frac{\QF\bigl(\frac{\Delta_{ij}(\SZ)}{\SZ}\bigr)}{\QF\bigl(\frac{d}{2\SZ}\bigr)}
}\\
& =
\limsnszero{
\frac{
\QF\bigl(\frac{\delta_{ij}}{2\SZ}+\frac{\SZ\log({p_i}/{p_j})}{\delta_{ij}}\bigr)
}
{
\QF\bigl(\frac{d}{2\SZ}\bigr)}
}\\
\label{B.35}
& =
\nonumber
\limsnszero{
\frac{\frac{\delta_{ij}}{2\SZ^{2}}-\frac{\log({p_i}/{p_j})}{\delta_{ij}}}{\frac{d}{2\SZ^{2}}} }
\cdot
\\
&
\,\,
\mathrm{exp}\Biggl(
-\frac{\delta_{ij}^{2}}{8\SZ^{2}}
-\frac{(\SZ\log({p_i}/{p_j}))^{2}}{2\delta_{ij}^{2}}
-\frac{\log({p_i}/{p_j})}{2}
+\frac{d^{2}}{8\SZ^{2}}\Biggr)
\\
& =
\limsnszero{
\frac{\delta_{ij}}{d}\sqrt{\frac{p_j}{p_i}}
\explow{
-\frac{\delta_{ij}^{2}-d^{2}}{8\SZ^{2}}}
}\\
\label{B.4}
&=
\begin{cases}
0, & \textnormal{if $\delta_{ij}>d$},\\
\sqrt{\frac{p_j}{p_i}},& \textnormal{if $\delta_{ij}=d$},
\end{cases}
\end{align}
where \eqref{B.35} follows from l'H\^{o}pital's rule {\cite[Sec.~11.2]{Marsden85}}.
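The case distinction in \eqref{B.4} can be illustrated numerically. The Python sketch below (illustrative; $\Delta_{ij}(\SZ)$ is written out as the MAP offset $\delta_{ij}/2+\SZ^{2}\log(p_i/p_j)/\delta_{ij}$ used in \eqref{B.3}) shows the ratio approaching $\sqrt{p_j/p_i}$ when $\delta_{ij}=d$ and vanishing when $\delta_{ij}>d$:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(Z >= x) for Z ~ N(0,1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ratio(delta, d, sigma, p_i, p_j):
    """Q(Delta_ij(sigma)/sigma) / Q(d/(2 sigma)) for the MAP offset
    Delta_ij(sigma) = delta/2 + sigma^2 * log(p_i/p_j) / delta."""
    Delta = delta / 2 + sigma**2 * math.log(p_i / p_j) / delta
    return Q(Delta / sigma) / Q(d / (2 * sigma))

d, p_i, p_j = 1.0, 0.7, 0.3
for sigma in (0.2, 0.1, 0.05):
    # delta = d: the ratio approaches sqrt(p_j/p_i) ~ 0.6547 as sigma -> 0;
    # delta > d: the ratio is driven to zero exponentially
    print(sigma, ratio(d, d, sigma, p_i, p_j), ratio(1.5 * d, d, sigma, p_i, p_j))
```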
Next, we study the asymptotic behavior of the lower bound in \lemmaref{Lemma.LB} for $\delta_{ij}=d$ (the lower bound \eqref{Lemma.LB.eq} is zero for $\delta_{ij}>d$). Assuming that all the limits exist, we obtain
\begin{align}
\nonumber
&\limsnszero{\frac{\tpMAP{i}{j}(\SZ)}{\QF\bigl(\frac{d}{2\SZ}\bigr)}}\\
& \hspace{-2ex} \ge
\limsnszero{
\frac{
\left(\QF\Bigl(\frac{\Delta_{ij}(\SZ)}{\SZ}\Bigr)-\QF\Bigl(\frac{d}{2\SZ}+\frac{r(\SZ)}{\sqrt{N}\SZ}\Bigr)\right)\left(1-2\QF\left(\frac{r(\SZ)}{\sqrt{N}\SZ}\right)\right)^{N-1}
}
{
\QF\bigl(\frac{d}{2\SZ}\bigr)
}
}\\
\label{B.5}
& \hspace{-2ex}=
\nonumber
\left[
\limsnszero{\frac{\QF\Bigl(\frac{\Delta_{ij}(\SZ)}{\SZ}\Bigr)}{\QF\bigl(\frac{d}{2\SZ}\bigr)}}
-
\limsnszero{\frac{\QF\Bigl(\frac{d+2r(\SZ)/\sqrt{N}}{2\SZ}\Bigr)}{\QF\bigl(\frac{d}{2\SZ}\bigr)}}
\right]\cdot\\
&
\qquad\qquad\qquad\qquad\quad\,\,
\limsnszero{\left(1-2\QF\left(\frac{r(\SZ)}{\sqrt{N}\SZ}\right)\right)^{N-1}}.
\end{align}
The first limit in \eqref{B.5} is the same as in \eqref{B.3}--\eqref{B.4}. The second limit is zero and the last limit is one because by \eqref{r}, $\limsnszero{r(\SZ)} = d/(2(1+\sqrt{3}))$. Hence, all limits exist and asymptotically, both lower and upper bounds converge to \eqref{B.4}. Using this in \eqref{proof.1} gives
\begin{align}\label{proof.2}
\limsnszero{
\frac{P(\SZ)}{\QF\bigl(\frac{d}{2\SZ}\bigr)}
}
= \sum_{i\in\mcIX} p_i \sum_{\substack{j\in\mcIX\\ \delta_{ij}=\MED}} h_{ij}
\sqrt{\frac{p_j}{p_i}},
\end{align}
which completes the proof for MAP detection.
The proof for ML detection follows similar steps. Substituting $\Delta_{ij}(\SZ)=\delta_{ij}/2$ from \eqref{delta.ij} into \eqref{B.3} yields
\begin{align}
\limsnszero{\frac{\tpML{i}{j}(\SZ)}{\QF\bigl(\frac{d}{2\SZ}\bigr)}}
&\le
\limsnszero{
\frac{\QF\bigl(\frac{\delta_{ij}}{2\SZ}\bigr)}{\QF\bigl(\frac{d}{2\SZ}\bigr)}
}\\
&=
\begin{cases}
0, & \textnormal{if $\delta_{ij}>d$},\\
1, & \textnormal{if $\delta_{ij}=d$}.
\label{B.3.ML}
\end{cases}
\end{align}
The asymptotic expression for the lower bound in \eqref{B.5} holds unchanged in the ML case too. In this case, the first limit is given by \eqref{B.3.ML}, the second is zero, and the third is one. This combined with \eqref{proof.1} completes the proof for ML detection.
\section{Proof of Corollary~\ref{EP.MAP_vs_ML.Theo}}\label{Appendix.EP.MAP_vs_ML.Theo}
Equations \eqref{EP.MAP_vs_ML}--\eqref{R} follow immediately from \eqref{EP.Asym}--\eqref{EP.B}.
To prove $R\leq 1$, we need to prove
\begin{align}
\sum_{\substack{i,j\in\mcIX\\ \delta_{ij}=\MED}} h_{ij}p_i-\sum_{\substack{i,j\in\mcIX\\ \delta_{ij}=\MED}} h_{ij}\sqrt{p_ip_j}\geq 0.
\end{align}
Using
$h_{ij}=h_{ji}$ and $\delta_{ij}=\delta_{ji}$, we obtain
\begin{align}\label{B.C.inequality}
\sum_{\substack{i,j\in\mcIX\\ \delta_{ij}=\MED}} h_{ij}(p_i - \sqrt{p_ip_j})
\nonumber
& = \frac{1}{2} \sum_{\substack{i,j\in\mcIX\\ \delta_{ij}=\MED}} h_{ij}(p_i-\sqrt{p_ip_j})+\\
& \quad\,\, \frac{1}{2} \sum_{\substack{j,i\in\mcIX\\ \delta_{ji}=\MED}} h_{ji}(p_j-\sqrt{p_ip_j})\\
& = \frac{1}{2} \sum_{\substack{i,j\in\mcIX\\ \delta_{ij}=\MED}} h_{ij}(p_i+ p_j - 2\sqrt{p_ip_j})\\
& = \frac{1}{2} \sum_{\substack{i,j\in\mcIX\\ \delta_{ij}=\MED}} h_{ij}(\sqrt{p_i}-\sqrt{p_j})^{2}\\
& \geq 0,
\end{align}
which holds with equality if and only if $p_i=p_j, \, \forall i,j:\delta_{ij}=d$.
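The symmetrization argument above holds for any symmetric weights. As a sanity check (illustrative Python, with random $h_{ij}=h_{ji}$ and a random probability vector standing in for the MED-pair structure):

```python
import math
import random

rng = random.Random(0)
n = 5
# random symmetric nonnegative weights h_ij = h_ji and a probability vector p
h = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        h[i][j] = h[j][i] = rng.random()
p = [rng.random() for _ in range(n)]
total = sum(p)
p = [x / total for x in p]

pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
lhs = sum(h[i][j] * (p[i] - math.sqrt(p[i] * p[j])) for i, j in pairs)
rhs = 0.5 * sum(h[i][j] * (math.sqrt(p[i]) - math.sqrt(p[j])) ** 2
                for i, j in pairs)
print(lhs, rhs)  # identical, and nonnegative
```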
\balance
\section{Foreword}
This article is written in memory of Danny Segal, who was a colleague of one of us (Rob Nyman) in the Quantum Optics and Laser Science group at Imperial College for many years. The topic of this article touches on the subject of dye lasers, the stuff of nightmares for any AMO physicist of his generation, but a stronger connection to Danny is that he was very supportive of my application for the fellowship that pushed my career forward, and funded this research. One of Danny's quirks was a strong dislike of flying. As a consequence, I had the pleasure of joining him on a 24~hour, four-train journey from London to Italy to a conference. That's a lot of time for story telling and forging memories for life. Danny was one of the good guys, and I sorely miss his good humour and advice.
This article presents a gentle introduction to thermalization and Bose-Einstein condensation (BEC) of photons in dye-filled microcavities, followed by a review of the state of the art. We then note the similarity to microlasers, particularly when there are very few photons involved. We compare a simple non-equilibrium model for microlasers with an even simpler thermal equilibrium model for BEC and show that the models coincide for similar values of a `smallness' parameter.
\section{Introduction to thermalization and condensation of photons}
Bose-Einstein condensation can be defined as macroscopic occupation of the ground state at thermal equilibrium~\cite{Pathria}\footnote{The formal Penrose-Onsager definition~\cite{Penrose56} is essentially that the largest eigenvalue of the single-particle density matrix that solves the full many-body problem is extensive with system size. But that's not the kind of definition that helps the non-expert.}. It is a natural consequence of the exchange statistics of identical bosons, and therefore occurs in a wide variety of physical systems, such as liquid helium, ultracold atomic gases or electron pairs in superconductors.
The Bose-Einstein distribution for non-interacting identical bosons may be familiar to most physicists, but it's worth looking at again:
\begin{equation}
f(\epsilon) = \frac{1}{\exp\left[(\epsilon-\mu)/k_B T\right]-1}
, \label{eqn:BE distribution}
\end{equation}
which gives the occupancy $f$ of a state at energy $\epsilon$. It has two parameters. $T$ is the temperature, which tells us immediately that we are discussing thermal-equilibrium phenomena. The chemical potential is $\mu$, which is the thermal energy required to add another particle to the system from a reservoir, and it dictates the number of particles in the system. The chemical potential is always lower than the energy of the ground state in the system, but as it approaches from below, the distribution shows a divergence in the ground-state population. That's BEC.
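The divergence of the ground-state occupancy as $\mu$ approaches the ground-state energy from below is easy to see numerically; a minimal Python sketch of \eqnref{eqn:BE distribution}:

```python
import math

def bose_einstein(eps, mu, kT):
    """Occupancy of a mode at energy eps; requires mu < eps."""
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

kT = 1.0    # work in units of k_B T
eps0 = 1.0  # ground-state energy
for mu in (0.0, 0.9, 0.99, 0.999):
    print(mu, bose_einstein(eps0, mu, kT))
# the ground-state occupancy diverges as mu -> eps0 from below
```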
Photons can be brought to thermal equilibrium in a black box, but their number is not conserved, which implies that the chemical potential is either not well defined or strictly zero, depending on your point of view. In either case, BEC is not possible. It is however possible to give light a non-zero chemical potential in a medium which has an optical transition between two broad bands, such as a semiconductor (with valence and conduction bands) or a fluorescent dye (with ro-vibrationally broadened electronic states), as explained by W\"urfel~\cite{Wurfel82}. The chemical potential then sets the population of the excited band. There is a kind of chemical equilibrium between photons and excitations in the medium (induced for example by optical pumping), with reactions being absorption and emission. Thus the chemical potential of the photons will equal that of the medium, which is non-zero, provided there are enough absorption and emission events, i.e. enough time for the chemical reaction to have taken place before other loss processes occur or the photons leave the system.
Not only will the photons acquire a non-zero chemical potential, they will also reach a thermal equilibrium population, dictated by the ratio of absorption (loss) and emission (gain) of the optically interesting medium. That relation is known as the Kennard-Stepanov~\cite{Kennard26}\footnote{The canonical citation of Stepanov is Ref.~\cite{Stepanov57}, but we cannot find a copy of it, nor read Russian.} or McCumber relation~\cite{McCumber64}. The ratio is dictated by a principle of detailed balance, through rapid relaxation among the states within the bands, i.e. vibrational relaxation of dye molecules, which takes perhaps 100~fs, fast compared to typical spontaneous emission lifetimes of a few ns. It is usually possible to identify a zero-phonon line (ZPL) about which the spectra are symmetric with a Boltzmann factor between them:
\begin{equation}
\frac{A(\epsilon)}{F(\epsilon)} = \mathrm{e}^{(\epsilon-\epsilon_{ZPL})/k_B T} \label{eqn:KSR}
\end{equation}
where $F$ and $A$ are fluorescence emission and absorption respectively, normalized to their peak values, $\epsilon$ is the energy of the light and $\epsilon_{ZPL}$ the energy of the ZPL~\footnote{For the sake of simplicity we have omitted a factor $\epsilon^3$ which accounts for the density of states available for spontaneous emission.}. The temperature $T$ is the temperature of the phonons that cause vibrational relaxation, assumed to be the same as the temperature of the bulk medium. In \figref{fig:abs and fluo} we show the absorption and emission spectra of Rhodamine~6G in ethylene glycol, which show the expected symmetry. Analysis reveals that the Kennard-Stepanov/McCumber relation \eqnref{eqn:KSR} is satisfied very well at room temperature over a wide range of wavelengths.
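In practice, \eqnref{eqn:KSR} can be used to extract a spectral temperature from measured spectra, via the slope of $\log(A/F)$ against photon energy. The Python sketch below uses a toy absorption lineshape and an approximate ZPL energy (both assumptions, not the measured Rhodamine data) and recovers the temperature it was built with:

```python
import math

kT_true = 0.0257  # eV, room temperature
e_zpl = 2.27      # eV (~546 nm), roughly the Rhodamine 6G ZPL (assumed)

def absorption(eps):
    """Toy normalized absorption lineshape peaked above the ZPL (assumption)."""
    return math.exp(-((eps - e_zpl - 0.05) / 0.07) ** 2)

def fluorescence(eps):
    """Emission obeying the Kennard-Stepanov relation by construction."""
    return absorption(eps) * math.exp(-(eps - e_zpl) / kT_true)

# recover the spectral temperature from the slope of log(A/F) vs energy,
# as one would do with measured spectra:
e1, e2 = 2.22, 2.32
slope = (math.log(absorption(e2) / fluorescence(e2))
         - math.log(absorption(e1) / fluorescence(e1))) / (e2 - e1)
print(1.0 / slope)  # recovers kT_true
```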
\begin{figure}[htb]
\centering
\includegraphics[width=0.95\columnwidth]{R6G_spectra}
\caption{
Absorption and fluorescence emission spectra for Rhodamine~6G in Ethylene Glycol, the solution most used for thermalizing and condensing photons. Both are normalized to their peak value, and the ZPL is the wavelength at which they are equal.
The data in this figure are available from Ref.~\cite{Nyman17Rhodamine}.
}
\label{fig:abs and fluo}
\end{figure}
But BEC requires a unique ground state into which condensation can occur. We engineer the density of states for the light using a pair of near-planar mirrors, a Fabry-Perot optical cavity. Let us make the mirrors so close together that the free spectral range, the difference between longitudinal modes of the cavity, is at least as large as the widths of the absorption and emission spectra of the dye. Then, only one longitudinal mode is relevant, but there can be many transverse modes. This is known as a microcavity. For photon thermalization in Rhodamine, the cavity is typically about 8 half wavelengths long, so we label the longitudinal mode $q=8$.
The cavity-resonant energy is minimum for light propagating parallel to the cavity optical axis. Light propagating at an angle must have higher energy to match the boundary conditions for resonance. For small angles, the energy is proportional to the square of the in-plane wavenumber (or momentum) of the light, just like the kinetic energy of a massive particle.
We can understand how the transverse modes of the cavity relate to the shape of the mirrors, by considering a local cavity length, and hence local cavity-resonant energy for the light. Where the mirrors are closer together, the energy is higher, so there is an effective potential energy cost, dependent on the mirror shape.
Thus the light can be considered as a massive particle moving in a trapping potential (assuming that the cavity is convex, longer in the middle than the edges), whose energy as a function of momentum ${\bf p}$ and position ${\bf r}$ is:
\begin{align}
E({\bf r, p}) = m {c^*}^2 + \frac{p^2}{2 m} + V({\bf r})
\label{eqn:particle energy}
\end{align}
Here the effective mass is set by the cavity length $L_0$ and the speed of light in the intracavity medium, $c^*=c/n$: $m = h n^2 / (c\lambda_0)$. The cutoff wavelength $\lambda_0$ is the longest wavelength (for light emitted from the cavity) which is resonant with the cavity in the pertinent longitudinal mode: $\lambda_0 = 2n L_0 / q$. The local potential energy $V({\bf r})$ is given by local deviations of the cavity length, $\delta L({\bf r})$, and can be simply written as $V({\bf r}) / m {c^*}^2 = \delta L({\bf r})/L_0$.
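For typical numbers the effective mass is tiny compared with atomic masses, which is part of why condensation is possible at room temperature. A quick evaluation in Python (the refractive index of ethylene glycol is an approximate, assumed value):

```python
# Effective photon mass m = h n^2 / (c * lambda0) for a dye microcavity
h = 6.62607e-34   # J s, Planck constant
c = 2.99792e8     # m / s, speed of light in vacuum
n = 1.43          # refractive index of ethylene glycol (approximate)
lambda0 = 580e-9  # m, cutoff wavelength

m = h * n**2 / (c * lambda0)
print(m)          # ~ 8e-36 kg

m_e = 9.10938e-31  # kg, electron mass for comparison
print(m / m_e)     # ~ 1e-5 electron masses
```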
\subsection{Experimental apparatus}
To trap the light long enough for thermalization through multiple absorption events, the cavity mirrors must have reflectivity of at least 99.99\% ($<100$~ppm loss). Such mirrors are commercially available, and use dielectric coatings of several pairs of layers. The simplest configuration for a cavity uses spherical mirrors, either a pair, or one in conjunction with a planar mirror, as shown in \figref{fig:apparatus}. The spherical cavity length variation leads to a harmonic potential, at least close to the longest part of the cavity, where the photons are trapped. Typically the mirror radius of curvature is about 0.5~m, leading to mode spacings (trapping frequencies) around 40~GHz. Because of the curvature and the proximity of the mirrors, being just a few half-wavelengths apart, at least one of the mirrors is ground down to about 1~mm diameter. Light is pumped at an angle to the optic axis, to take advantage of the transmission maximum of the dielectric mirror coating. As a result, the mirrors are often glued to other components which make alignment of the pump easier: see \figref{fig:apparatus} for one example of how the assembly can be done.
\begin{figure}[htb]
\centering
\includegraphics[width=0.95\columnwidth]{schematic_dye_cavity_v4}
\includegraphics[width=0.6\columnwidth]{optical_assembly_edit}
\caption{
Top: Schematic of the apparatus required for photon thermalization and condensation. Because the distance between the mirrors is short (about 1.5~\micron) and one mirror is curved, the planar mirror is ground down to about 1~mm diameter. To pump at an angle, taking advantage of an angle-dependent transparency of the dielectric mirrors, the planar mirror is built into a simple optical assembly (bottom).
}
\label{fig:apparatus}
\end{figure}
To align the cavity, the mirrors require five degrees of freedom for their relative position and orientation. The separation on the optic axis must be actively controlled with nanometre precision using, for example, a piezo-electric translation stage. It is very likely that the cavity length will need to be actively stabilized, yet scanned so that the resonant wavelength varies by tens of nanometres. The solution is to shine a collimated beam of narrowband light at the edge of the stop-band of the mirrors, at which wavelength they transmit at least an order of magnitude more than at the wavelengths used for thermalized light, and the dye does not strongly absorb this light. HeNe laser light of 633~nm wavelength is a good match to mirrors designed for thermalizing light at around 580~nm using Rhodamine~6G. This narrowband light then forms rings, similar to Newton's rings. The ring images are acquired by a camera and processed to find the ring radius, and feedback is applied to actively control the cavity length.
Light emitted from the cavity is collected by an objective. Our imaging system uses a simple achromatic doublet in an afocal setup, i.e. producing an image at infinity. This collimated light is then split, by dichroic mirrors (to extract the stabilization reference light) and by non-polarizing beamsplitters, after which it is sent to a variety of diagnostic tools. The most important tools are a camera and a spectrometer.
\begin{figure}[htb]
\centering
\includegraphics[width=0.57\columnwidth]{20140805_143546_power_variation}
\includegraphics[width=0.41\columnwidth]{example_images_false_colour_crop}
\caption{
Typical data showing photon BEC. Left: spectra. At low pump powers, the spectrum is compatible with the Boltzmann distribution at room temperature, showing both a cutoff (ground state) and a density of states equivalent to a two-dimensional harmonic oscillator. At higher powers, extra photons go into the ground state and the populations of the excited states saturate. Right: a real-space, real-colour image just above threshold, showing a thermal cloud of photons around a large population in the centre, where the lowest-energy transverse mode of the cavity is located.
}
\label{fig:characteristic results}
\end{figure}
Typical data are shown in \figref{fig:characteristic results}. The spectrum shows, at low power, a distribution which is compatible with the Bose-Einstein distribution, \eqnref{eqn:BE distribution}, at room temperature, taking into account the density of states for a two-dimensional (2D) symmetric harmonic trapping potential. There is a clear cutoff showing that there is a well-defined lowest-energy mode, in this case around $\lambda_0 = 576$~nm. As the pump power is increased, the chemical potential approaches zero from below, and the population of the lowest mode increases dramatically while all other modes saturate. BEC is the explanation. The image is taken just above threshold. It shows a surrounding Gaussian fuzz, which is the non-condensed thermal cloud, whose size depends mainly on the temperature and the trapping potential. In the centre is a bright spot, indicating the large occupation of the smallest, lowest-energy mode. It is noteworthy that the light is green (high energy) at the edge, where the potential energy is highest, and yellow (low energy) in the middle. At high pump intensities, the pump must be pulsed with a low duty cycle, so that the population of a scientifically-uninteresting triplet state~\cite{Schaefer, Coles15} is kept low. Typically, pulses last about 500~ns and are repeated at about 500~Hz.
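The threshold photon number implied by such data can be estimated from the standard ideal-gas result for a 2D isotropic harmonic trap, $N_c = (\pi^2/3)(k_B T/\hbar\Omega)^2$, where the numerical factor includes the two polarization states. A quick Python estimate with the $\sim$40~GHz mode spacing quoted above (a typical value, not a fit to any particular dataset):

```python
import math

kB = 1.380649e-23    # J / K
hbar = 1.054572e-34  # J s
T = 300.0            # K, room temperature
Omega = 2 * math.pi * 40e9  # rad/s, trap (mode-spacing) frequency

# critical photon number, including the factor 2 for polarization
Nc = (math.pi**2 / 3) * (kB * T / (hbar * Omega))**2
print(Nc)  # ~ 8e4 photons
```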
\subsection{Things which are like Photon BEC but which are not Photon BEC}
At this point we will digress and discuss three other kinds of condensates of light, none of which is photon BEC: exciton-polariton condensates; classical wave condensation of light and BEC of plasmons.
There is a large community working with light and solid-state matter which are strongly-coupled, in the cavity QED sense that the coherent coupling is faster than incoherent mechanisms like spontaneous emission or cavity loss, using microcavities. Strongly coupled light-matter systems are known as polaritons. Typically the light interacts with a quasiparticle made of a bound electron and hole pair known as an exciton, making an exciton-polariton. In near-planar microcavities, sufficient pump power leads polariton condensation~\cite{Carusotto13}. Condensation is considered distinct from lasing in that the excitons interact with each other substantially (see Ref.~\cite{MicrocavitiesBook}, p362), approaching thermal equilibrium, even if imperfectly. The excitons associated with the condensed polaritons can be free to move (Wannier excitons, typical of inorganic semiconductors~\cite{Kasprzak06}) or bound to individual sites (Frenkel excitons, typical of organic fluorescent solids~\cite{Daskalakis14, Plumhof14}). By contrast, thermalization and BEC of photons as described above is performed in the weak-coupling limit, and with liquid-state matter.
Classical wave condensation, sometimes known as Rayleigh-Jeans condensation, of light is a nonlinear wave phenomenon. Let us consider light with spatial amplitude or phase noise, propagating through a nonlinear medium. Stochastically, the spatial spectrum will redistribute to follow a Rayleigh-Jeans distribution in transverse wavenumber $k$: \mbox{$f(k) = T / \left( \epsilon_k -\mu \right)$} where $T$ is the amount of noise which is equivalent to a temperature, $\epsilon_k \propto k^2$ the equivalent of kinetic energy, and $\mu$ represents the total light power propagating relative to the nonlinearity. It has been achieved using simple imaging optics~\cite{Sun12} and in multimode pulsed lasers~\cite{Weill10}. There is no mention of quantization of light or matter here, and indeed the distribution is the high-temperature limit of Bose-Einstein distribution ($\epsilon_k \ll T$), \eqnref{eqn:BE distribution}, provided that modes are very closely spaced ($\hbar \rightarrow 0$). Since photon BEC shows an exponential decay of population at high energy, it cannot be classical wave condensation.
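The relation between the two distributions is easy to check numerically: the Rayleigh-Jeans form is the $\epsilon-\mu \ll k_B T$ limit of \eqnref{eqn:BE distribution}. A short Python comparison:

```python
import math

def bose_einstein(eps, mu, kT):
    """Bose-Einstein occupancy; requires mu < eps."""
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

def rayleigh_jeans(eps, mu, kT):
    """Classical (Rayleigh-Jeans) occupancy."""
    return kT / (eps - mu)

kT, mu = 1.0, -0.01
# classical (low-energy) limit: the two agree to better than 1%
print(bose_einstein(0.001, mu, kT), rayleigh_jeans(0.001, mu, kT))
# high energies: Bose-Einstein is exponentially suppressed, Rayleigh-Jeans is not
print(bose_einstein(5.0, mu, kT), rayleigh_jeans(5.0, mu, kT))
```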
Very recently, BEC of plasmons has been achieved, using a lattice of metallic nanoparticles~\cite{Hakala17}, laser pumped and immersed in a bath of fluorescent dye. The band structure of this lattice shows a quadratic dispersion. Thermalization of plasmons occurs via scattering of light from the dye, in very much the same way as for dye-microcavity photon BEC.
\section{State of the art}
Having presented the physical principles and experimental basics, we now review the history and state of the art of Photon BEC.
The thermalization of photons in a dye-filled microcavity was first shown by Martin Weitz's group in Bonn~\cite{Klaers10a}, far below condensation threshold. Proof of thermalization relied on showing that the distribution of light in the cavity is largely independent of the details of the pumping, e.g. pump light position. They also showed that thermalization works well only with detuning of the cutoff wavelength close to resonance so that re-absorption of a cavity photon is likely to happen before loss from the cavity. Having proven thermal equilibrium, they cranked up the pump power. Macroscopic occupation of the ground state, much as in \figref{fig:characteristic results}, was observed~\cite{Klaers10b}. Together with the thermal equilibrium, that is sufficient evidence for most commentators to declare that BEC has been achieved~\cite{Anglin10}. In addition, they inferred a thermo-optic repulsive interaction between photons, whose value $\tilde{g}$ in dimensionless units is $\tilde{g} \simeq 7\times 10^{-4}$, which is very small indeed and indicates that photon BEC is, for the most part, an ideal gas of non-interacting bosons.
Since these initial observations, there have been a great number of theoretical discussions of how photon BEC happens, and what one expects its properties to be. Measurements have been rarer, with only the Weitz group and ours publishing experimental articles on the topic, although Dries van Oosten's group (Utrecht) has more recently achieved photon BEC.
There have been a small number of review articles on photon BEC, some of which explain the concepts of photon BEC for a non-specialist audience~\cite{Rajan16}. Jan Klaers's tutorials~\cite{Klaers11,Klaers14} provide an excellent introduction to the field. Schmitt \textit{et al.}~\cite{Schmitt16review} is more up to date. There are a few chapters of the book on \textit{Universal Themes of Bose-Einstein Condensation}~\cite{UniversalBECbook} which are relevant to photon BEC and available open-access, most notably Ref.~\cite{Klaers16}. In this section, we attempt a more comprehensive review, covering the majority of the published literature directly on the topic of photon BEC, even if some of the theory work may have little hope of experimental implementation.
\subsection{Observed phenomena}
After their first observation of thermalization and condensation of photons, the Weitz group attempted to move from a liquid to a solid-state sample of dye dissolved in a UV-setting polymer, cured while inside a microcavity~\cite{Schmitt12}. Thermalization works equally well, although the concentration of dye is limited by fluorescence quenching, as explored in Ref.~\cite{Palatnik16}. While they did observe BEC, it was not reproducible as the dye photobleached at high pump intensity in a matter of seconds. They have also made progress by dissolving materials with large thermo-optic effects into the dye solution, and then locally heating. The resultant position-dependent refractive index translates to a controllable potential energy landscape for the microcavity photons~\cite{Klaers13}.
They then measured both second-order correlations and number fluctuations of the condensate mode~\cite{Schmitt14}. With thermalizing photons it is possible to interpolate between canonical and grand-canonical ensembles by changing the cavity detuning from resonance, effectively changing the ratio of photons to molecular excitations. The molecular excitations form a reservoir. In the canonical ensemble, far detuned from resonance, the photon number is large relative to the square-root of the excitation number, and fluctuations are largely Poissonian. By contrast, close to resonance, the reservoir of excitations is large, and the fluctuations super-Poissonian. The result is that the condensate number can fluctuate wildly, even leading to the Grand Canonical Catastrophe where frequently there are no photons at all in the condensate. While this work was guided by earlier statistical modelling~\cite{Klaers12}, the measured correlations have also been explained through photon-photon interactions~\cite{VanDerWurff14}. The conclusion is that the photon-photon interaction is certainly weak ($\tilde{g} < 10^{-3}$), depends on the detuning from resonance, and that perhaps counter-intuitively the fewer molecules involved, the stronger the interactions.
Two studies indicated how thermalization happens, and how it breaks down. At Imperial College, we produced the first photon condensates outside the Weitz group, and showed how simple parameters such as the shape of the pump spot affect the distribution of photons~\cite{Marelic15}. Light is imperfectly redistributed from pump spot towards the thermal equilibrium distribution. One can achieve condensation with very low pump powers using a small spot, but only for larger spots does the spatial distribution of photons match thermal equilibrium. Using a streak camera and 15-ps pump, Schmitt \textit{et al.}~\cite{Schmitt15} observed the dynamics of thermalization of photons, showing how thermalization happens on the timescale of photon absorption by dye molecules.
BEC is a thermodynamic phase transition. Damm \textit{et al.}~\cite{Damm16} measured the internal energy of the photons (from the spectrum) and defined an equivalent for heat capacity, as a function of not absolute temperature but temperature relative to threshold for condensation. From this, they inferred a heat capacity, which shows a lambda transition characteristic of BEC.
Condensates are typically characterized by their long-range coherence, first hinted at in photon BEC by a single image in Ref.~\cite{Klaers11}. Marelic \textit{et al.}~\cite{Marelic16a} systematically studied stationary first-order coherence using imaging interferometers with slow cameras. They showed that non-dissipative thermal Bose gas theory describes the data well below and just above threshold, with the condensate showing long-range spatial and temporal coherence. Below threshold, the thermal cloud has a position dependent potential energy, which makes for interesting images but complicated analysis: see \figref{fig:interferometer image}. Far above threshold the coherence decreases, which can be explained by multimode condensation, in which several modes become macroscopically occupied.
\begin{figure}[htb]
\centering
\includegraphics[width=0.75\columnwidth]{pbec_20141210_171813_fringes_image014_sqrt_crop}
\caption{
Image of the interference pattern of a thermal cloud of photons away from the interferometer white-light fringe. Rings appear because the potential energy landscape is rotationally symmetric, with increasing energy near the edges.
}
\label{fig:interferometer image}
\end{figure}
Non-stationary measurements of the phase of the condensate mode show phase slips~\cite{Schmitt16}, associated with the occurrences of fluctuations in population number down to zero photons. These phase slips are a clear example of spontaneous phase symmetry breaking in a driven-dissipative system: when the population is large, the phase diffuses only slowly, but with small populations the phase is ill-defined and can jump discontinuously. The condensate re-forms with a spontaneously chosen phase.
Further experiments have explored the momentum distribution of thermalized light~\cite{Marelic16b} showing that the photons interact only weakly with themselves ($\tilde{g} < 10^{-3}$) and with the molecules. More recently, a third group, that of Dries van Oosten in Utrecht, have achieved photon BEC. Their preliminary results show that the condensate is polarized, whereas the thermal cloud around it is unpolarized~\cite{Jagers16}.
\subsection{Theoretical models}
The theoretical works on photon condensates can loosely be divided into those that assume approximate thermal equilibrium and those that do not. Some take a fully quantized approach to fluctuations, some are semiclassical mean-field models, and others apply statistical mechanics.
\subsubsection{A nonequilibrium model of photon condensation}
Foremost among the nonequilibrium models is the Kirton and Keeling model, as first presented in Ref.~\cite{Kirton13}. The model starts from conservative dynamics based on a standard cavity QED model, the Jaynes-Cummings model, with multiple emitters and multiple light modes, and with the addition of a phonon degree of freedom associated with each molecule. The molecular electronic state is coupled to the vibrational state via a Huang-Rhys parameter (a continuum generalization of a Franck-Condon factor), since the molecule shape is slightly different between ground and excited electronic states. Drive and dissipation are then included via standard Markovian assumptions.
The resulting master equation contains terms which include cavity loss, pumping of molecules by an external source, decay of molecules via spontaneous emission out of the cavity, and most crucially both emission of light into the cavity and absorption from the cavity. The latter two terms come with amplitudes which depend on the absorption and emission strength of the dye, and it is these processes which lead to thermalization of light. The master equation, realistically involving millions of molecules and thousands of light modes, is far too unwieldy to solve directly, but the averages of populations can be solved for quite efficiently. The solutions of the rate equation for photon populations show thermalization and condensation matching the Bose-Einstein distribution when the rate at which cavity photons scatter from dye molecules is larger than the cavity loss rate. For larger loss rates, a mode can show threshold behaviour, but it is not necessarily the ground state, indicating lasing rather than BEC.
Kirton and Keeling elaborated further results of their model~\cite{Kirton15}, looking at the dynamics of photon populations after a pulsed pumping event, and evaluating both first- and second-order correlations for individual photon modes. In response to observations of the breakdown of thermalization due to inhomogeneous pumping, in both stationary~\cite{Marelic15} and time-resolved experiments~\cite{Schmitt15}, they modified their model to include spatial distributions of pumping and molecular excitation~\cite{Keeling16}. The results match the salient points of the experimental data. They were able to show that the multimode condensation seen in Ref.~\cite{Marelic16a} was due to imperfect clamping of the molecular excited-state population in regions adjacent to the central condensate light mode which leaves the possibility of positive gain for other modes.
Hesten \textit{et al.} used the Kirton and Keeling model to explore a large parameter space, describing a non-equilibrium phase diagram for dye-microcavity photons~\cite{Hesten17}. The phase diagram proved to be particularly rich, with many possible multimode condensate phases in the crossover between well-thermalized BEC and un-thermalized laser states. In particular, they predict decondensation, where population in a mode decreases with increasing pumping rate, due to mode competition.
A full master equation approach using just a small number of light modes can be tractable. Kopylov \textit{et al.}~\cite{Kopylov15} have worked with two modes, which is enough to draw conclusions about condensation but not about thermalization.
\subsubsection{Quantum field-theory models}
There are several papers treating near-equilibrium as a given, and using quantum field theory techniques such as the Schwinger-Keldysh~\cite{deLeeuw13} or quantum Langevin~\cite{Chiocchetta14} formalisms to access not only the average behaviour but also fluctuations. These techniques are needed to deal with the fact that photon BEC is a driven-dissipative system with both pumping and loss processes (like exciton-polariton condensates), rather than a conservative system (like atomic BEC).
The theory group of Henk Stoof have applied the Schwinger-Keldysh formalism to calculate the effects of drive and dissipation on both temporal~\cite{deLeeuw14b} and spatial~\cite{deLeeuw14a} coherence. They have also shown how interacting photons in a lattice potential behave differently from the superfluid-Mott insulator transition known from conservative systems~\cite{deLeeuw15}.
The fluctuations of a system at thermal equilibrium are understood to be related to compressibilities and susceptibilities via the temperature in what are known as fluctuation-dissipation relations. Chiocchetta \textit{et al.}~\cite{Chiocchetta15} propose testing the fluctuation-dissipation relations as a means of quantifying how close driven-dissipative systems like photon BEC come to true thermal equilibrium.
Snoke and Girvin~\cite{Snoke13} point out that it is rather unusual that coherence in a photon BEC can build up despite the absence of direct photon-photon interactions. They show that, remarkably, the coherence is generated through incoherent interactions with the thermal bath of molecular vibrations.
\subsubsection{Mean-field models}
The equation of motion for the condensate order parameter is typically derived in the same way as for nonlinear optical systems, and is sometimes known as the Lugiato-Lefever equation~\cite{Lugiato87}, or a dissipative Gross-Pitaevskii equation or a complex Ginzburg-Landau equation. There are various nearly-equivalent forms, which include:
\begin{align}
-{\rm i}\hbar\partialderiv{\psi}{t} &= \label{Eqn: cGPE}
\left[ V({\bf r}) - \frac{\hbar^2}{2 m}\nabla^2_\perp
+ g|\psi|^2 + {\rm i}\left(\gamma_{net} -
\Gamma|\psi|^2\right) \right]\psi
\end{align}
where $\psi$ is the order parameter, which is the electric field of the condensate mode; $g$ the strength of interactions; $\gamma_{net}$ is the difference between the pump rate and cavity decay rate; $\Gamma$ describes the saturation of molecular excited states; and $m$ and $V$ are effective mass and potential as described earlier. The effective kinetic energy $\nabla^2_\perp$ comes from diffraction of light. The dissipative terms modify beyond-mean-field properties such as correlations and depend on the fact that the pump light is incoherent with the condensate mode~\cite{Carusotto13}. Excluding the dissipative terms, the order parameter equation reduces to \eqnref{eqn:particle energy} for plane waves, ignoring the rest-mass energy term.
Similar equations were first derived for multimode lasers, then applied to light in microcavities~\cite{Chiao99}. Their solutions are Bogoliubov modes of sound~\cite{Chiao00, Bolda01} or collective breathing~\cite{Vyas14} or scissors~\cite{deLeeuw16} modes. The equation can be derived from Maxwell's equations~\cite{Nyman14}, and coincides with the mean of the equations coming from quantum field treatments~\cite{Chiocchetta14, deLeeuw13}. Interactions in photon BEC are expected to include retarded thermo-optic effects, and nonlocal effects have also been considered~\cite{Strinati14}.
\subsubsection{Statistical models}
It is possible to treat photon BEC with non-quantum formalisms from statistical mechanics or laser rate equations, where quantum effects only come in through bosonic stimulation or exchange statistics. Average populations for effectively two-dimensional~\cite{Kruchkov14} and one-dimensional~\cite{Kruchkov16,Cheng16} landscapes are readily calculated from the Bose-Einstein distribution.
Fluctuations in numbers of photons are correctly calculated only if the finite size of the reservoir of molecular excitations is taken into account~\cite{Klaers12, Sobyanin12}. When taking into account polarization modes of the light, there is at least one prediction that the second-order correlations of condensed light could show sub-Poissonian statistics (anti-bunched) with not unreasonable parameters~\cite{Sobyanin13}. Although the approximation that photons do not interact among themselves is both simplifying and usually applicable to photon BEC, it has been shown that interacting photons should show non-Gaussian statistics, and perhaps suppress the Grand Canonical Catastrophe~\cite{Weiss16, Zannetti15}.
\subsection{Suggestions for alternative systems for photon condensation}
So far, experiments on photon BEC have been restricted to near-planar microcavities filled with one of a small number of fluorescent dyes (mostly Rhodamine 6G and Perylene Red) in liquid water or ethylene glycol, with the exception of Ref.~\cite{Schmitt12} in a UV-set polymer. The requirements on the thermalizing medium are rather general: satisfying the Kennard-Stepanov/McCumber relation, having a good fluorescence quantum yield, and absorbing strongly at high concentrations. Other dyes and perhaps colloidal quantum dots are obvious candidate replacement materials. Suitable media may also include optomechanical devices~\cite{Weitz13}. Preliminary measurements suggest that both molecular gases at high pressure with ultraviolet light~\cite{Wahl16} and erbium-doped fibres in the infrared~\cite{Weill16} would be suitable media. BEC of photons thermalizing by scattering from plasmas is probably the oldest of all the proposals~\cite{Zeldovich69} but still relevant~\cite{Mendonca17}.
The optical environment need only provide a minimum-energy mode and a gap, and retain photons longer than the re-scatter time from the thermalizing medium. For example, planar photonic crystals filled with semiconductors have been proposed for photon thermalization and condensation~\cite{deLeeuw16}, as have arrays of superconducting qubits~\cite{Marcos12}.
There are a few outlandish theoretical proposals to combine photon BEC with quantum optomechanics~\cite{Fani16, Fani17} or atomic BEC~\cite{Zhang16, Zhang12} but, while not technically impossible, it is unlikely that anyone will go to the effort to implement the ideas experimentally.
One unusual proposal~\cite{Chiocchetta16} interpolates between the classical Rayleigh-Jeans condensation of waves and quantum BEC. Light with spatial noise propagates in a non-linear medium with a gradient of linear refractive index perpendicular to the propagation direction, and light is made to selectively leak out for large transverse wavenumbers. This system is then formally equivalent to evaporative cooling of trapped, interacting bosons in two dimensions, where the propagation direction plays the role of time.
\section{When photon BEC gets small}
A question that is often asked about photon BEC is `how is it not a laser'? There are many answers, but it is not unreasonable to argue that photon BEC systems are a very special case of a laser, where the re-scatter of photons is rapid enough to redistribute the light among many cavity modes. But if we look at very tight confinement, i.e. small mirror radii of curvature, the mode spacing can become as large as the thermal energy, and only one cavity mode has significant occupation. In this regime, photons in dye-filled microcavities can exhibit BEC with very small numbers of photons far away from the thermodynamic limit, and they can also act as microlasers, where the spontaneous emission is more likely to go into a cavity mode than into free space.
In this section we will first see how the concept of threshold in an equilibrium, Bose-Einstein distribution becomes ambiguous for large mode spacings. We will then take a look at a simple rate-equation model which shows microlasers exhibit remarkably similar effects.
\subsection{Tiny Bose-Einstein condensates}
The thermal-equilibrium Bose-Einstein distribution \eqnref{eqn:BE distribution} is the very simplest statistical model relevant for photon condensation. In \figref{fig:BE distribution} we show the result for a two-dimensional harmonic oscillator potential of angular frequency $\omega$, where the states involved are discrete, with the $i$th state having energy $i\times \hbar\omega$ and degeneracy $i+1$. We choose a chemical potential, and from that calculate the total population and the ground-state number, as displayed. A well-known result is that the total number of particles in the system at threshold is $N_C = \left({\pi^2}/{6}\right)\left( \hbar\omega / k_B T \right)^{-2}$.
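The curves in \figref{fig:BE distribution} can be reproduced directly from the Bose-Einstein distribution. Below is a minimal numerical sketch; the level cutoff and the chosen value of $\hbar\omega/k_B T$ are illustrative, not taken from any experiment.

```python
import math

def be_populations(mu, r, levels=4000):
    """Occupation numbers for a Bose gas in a 2D harmonic trap.

    mu     : chemical potential in units of hbar*omega (mu < 0)
    r      : hbar*omega / (k_B T), mode spacing over thermal energy
    levels : cutoff; state i has energy i (same units) and degeneracy i+1
    Returns (total particle number, ground-state number).
    """
    n0 = 1.0 / (math.exp(-mu * r) - 1.0)            # ground state, energy 0
    ntot = n0 + sum((i + 1) / (math.exp((i - mu) * r) - 1.0)
                    for i in range(1, levels))
    return ntot, n0

# Thermodynamic-limit critical number: N_c = (pi^2/6) (k_B T / hbar omega)^2
r = 0.05                                            # small spacing: sharp threshold
n_c = (math.pi ** 2 / 6) / r ** 2
ntot, n0 = be_populations(mu=-1e-4, r=r)            # mu -> 0^-: excited states saturate
```

As $\mu \to 0^-$ the excited-state population saturates near $N_C$ while the ground state absorbs any further particles, giving the sharp threshold; re-running with $r \sim 1$ gives the broad, shallow crossover described below.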
\begin{figure}[htb]
\centering
\includegraphics[width=0.95\columnwidth]{BE_distribution_small}
\caption{
The ground-state population in the Bose-Einstein distribution in a two-dimensional harmonic oscillator potential, as a function of total particle number. When the mode spacing is small compared to the temperature, the threshold tends to a thermodynamic (sharp) transition. Conversely, for very large mode spacings, only one mode is occupied and no threshold is apparent in the population of the ground state.
}
\label{fig:BE distribution}
\end{figure}
With a small mode spacing (or equivalently a high temperature), the threshold is deep and narrow in the sense that there is a large jump in population for a small change in total population. In the thermodynamic limit of infinitesimal mode spacing, the threshold is infinitely sharp, and is a true phase transition. On the other hand, for large mode spacing (low temperature), the difference between below- and above-threshold populations is indistinct, and there is a wide range of population where it is not clear if the system is above or below threshold: the threshold is broad and shallow. For extremely small systems, there is just one mode with non-negligible population, so the population of that mode is equal to the total. In that case, there is no threshold in terms of average population, although there may be distinctive correlation or fluctuation behaviour.
\subsection{Microlasers}
Photon BEC takes place inside microcavities, where the spontaneous emission from the dye molecules is modified by the resonator, an effect known as the Purcell effect, which can lead to enhancement (on resonance) or reduction (off resonance) of the spontaneous emission rate~\cite{MicrocavitiesBook}. The factor by which the emission is sped up depends on the cavity parameters (the Purcell Factor, $F_P$) and on the exact position of the molecule in the cavity mode, i.e. the emission rate for a molecule at a node of a cavity mode is very different from that of a molecule at an antinode. The latter factor can vary greatly, so it is difficult to make better than order-of-magnitude estimates for the overall emission enhancement. $F_P$ notably depends inversely on the cavity mode volume: smaller cavities result in large modifications to the emission rate, provided they have large quality factors.
Lasers using microcavities are parameterized principally by the fraction of spontaneous emission directed into the one cavity mode of interest, given the symbol $\beta$. With Purcell enhancement, $\beta = F_P / (1+F_P)$. For large laser systems, where the cavity does not markedly affect the spontaneous emission rate, $F_P\ll 1$ and so $\beta \ll 1$.
The simplest rate equation model for a microlaser with a single cavity mode containing photon number $P$ interacting with a number of molecular excitations $N$ is:
\begin{align}
\dot{P} &= [\gamma\beta N - \kappa] P + \gamma \beta N\label{eqn:microlaser rate photons}\\
\dot{N} &= R_p - \gamma N - \gamma\beta N P \label{eqn:microlaser rate excitations}
\end{align}
where $\gamma$ is the total spontaneous emission rate including the effects of the cavity, and $R_p$ the pumping rate. Recasting \eqnref{eqn:microlaser rate photons} we find \mbox{$\dot{P} = \gamma \beta N (P+1) - \kappa P$} where the terms in parentheses make clear the roles of stimulated ($P$) and spontaneous ($+1$) emission. This model neglects re-absorption by the fluorescent medium, non-radiative loss, saturation of excited state population and fluctuations but still captures the essential behaviours~\cite{deMartini88, Yokoyama89, Yokoyama91, Yokoyama92, deMartini92, Bjork94, Rice94}.
The equations are readily solved for the steady state population:
\begin{align}
P = \frac{
(\beta \rho - 1) +
\sqrt{ (1 - \beta \rho)^2 + 4\beta^2 \rho}
}{2\beta}
\end{align}
where $\rho = R_p / \kappa$, the rate at which molecules are excited in units of the cavity loss rate. The positive root of the quadratic equation is taken since $P>0$.
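The closed form above is easy to evaluate numerically. The sketch below (parameter values are illustrative) also makes the thresholdless limit explicit: for $\beta = 1$ the quadratic collapses and $P = \rho$ exactly.

```python
import math

def microlaser_photon_number(rho, beta):
    """Steady-state photon number P of the single-mode microlaser model.

    rho  : pump rate in units of the cavity loss rate, R_p / kappa
    beta : fraction of spontaneous emission into the cavity mode
    Positive root of  beta*P^2 + (1 - beta*rho)*P - beta*rho = 0.
    """
    return ((beta * rho - 1.0)
            + math.sqrt((1.0 - beta * rho) ** 2 + 4.0 * beta ** 2 * rho)) / (2.0 * beta)

# Small beta: a sharp jump in P as the pump crosses rho = 1/beta.
beta = 1e-4
p_below = microlaser_photon_number(0.9 / beta, beta)
p_above = microlaser_photon_number(1.1 / beta, beta)
```

Evaluating this over a grid of $\rho$ for several $\beta$ reproduces the family of curves in \figref{fig:microlaser}.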
\begin{figure}[htb]
\centering
\includegraphics[width=0.95\columnwidth]{microlaser}
\caption{
Mode population in the microlaser model as a function of pump rate in units of cavity lifetime, for various fractions $\beta$ of spontaneous emission directed into the cavity mode (as opposed to into free space). When $\beta$ is small, the threshold tends to a sharp transition. Conversely, for $\beta \rightarrow 1$, all light is directed into the cavity mode and no threshold is apparent in the population.
}
\label{fig:microlaser}
\end{figure}
The result is shown in \figref{fig:microlaser}. For comparison, standard lasers have $\beta = 10^{-5}$ -- $10^{-8}$. For small $\beta$, the laser shows a clear threshold, with a large jump in population for a small change in pump rate. When more of the spontaneous emission is directed into the cavity mode, $\beta \rightarrow 1$ and the threshold is less clear, being both shallow, meaning that threshold is accompanied by only a small increase in population, and broad, so over a large range of power it is unclear if the system is above or below threshold.
When all of the spontaneous emission goes into the cavity, there is no obvious threshold. Two competing ideas can be invoked: either this is a thresholdless laser~\cite{Rice94} or we can define the threshold in any reasonable way, for example when the population of the mode exceeds unity~\cite{Bjork94}. In any case, the laser (stimulated emission) action happens at very low pump powers, which is where the industrial interest in microlasers may come from.
\subsection{Which system is smaller?}
Figs.~\ref{fig:BE distribution} and \ref{fig:microlaser} show that tiny Bose-Einstein condensates and microlasers exhibit very similar behaviours in terms of reduction and broadening of threshold when the parameter indicating the system size, respectively $k_B T/\hbar \omega$ and $1/\beta$, becomes small. In \figref{fig:comparison} we show results of the models side-by-side. A value of $\beta$ is set, and then $\hbar \omega / k_B T$ is adjusted to match the mode population in the limit of low pump rate or total population. For small systems, the two models very nearly coincide, although there are deviations for larger parameters. It is therefore difficult to distinguish BEC from lasing, although saturation of excited-state populations, as in \figref{fig:characteristic results}, may be a hint. The number of modes thermally available in the BEC model is approximately $(k_B T/\hbar \omega)^2$, and perhaps it is this parameter which should be more directly compared to $1/\beta$. In this respect, BEC is an exclusively multi-mode phenomenon, but if there is only one occupied cavity mode, then there really is not much difference between BEC and lasing.
\begin{figure}[htb]
\centering
\includegraphics[width=0.95\columnwidth]{comparison}
\caption{
Comparison of the two models, BEC and microlaser. $n_{tot}$ and $n$ are the total population and ground-state population (BEC model); $\rho$ and $P$ are pump rate and mode population (microlaser model). For a chosen $\beta$, we set $\hbar\omega/k_B T$ to match the low-number population, and there are no other adjustable parameters.
}
\label{fig:comparison}
\end{figure}
Microcavities suitable for photon BEC can be constructed using focussed ion beam milling to pattern the confining potential through the surface shape~\cite{Dolan10}. Notably, Ref.~\cite{Palatnik17} shows a solid-state dye microlaser operating in a regime of strong re-absorption, showing features reminiscent of thermalization and BEC. With small radii of curvature, near-single-mode operation, i.e. $k_B T / \hbar \omega \sim 1$, is certainly possible. Very small mode volumes and high quality factors can be simultaneously achieved~\cite{Coles15}. While the bare Purcell factor can be large, $F_P \gg 1$, in a fluorescent dye rapid dephasing due to vibrational relaxation makes it difficult to predict exactly what proportion of the spontaneous emission will be emitted into the cavity mode. Nonetheless, lasers using these microcavities should show $\beta$ parameters which approach unity. It seems it is possible to make a device which can be tuned between tiny BEC and tiny laser, by tuning for example the re-scattering rate via the detuning from the molecular resonance. At threshold, the photon numbers will be rather similar, with barely more than one photon in the lowest-energy cavity mode, despite the different physical origins of the threshold behaviour.
\section{Conclusions}
We have seen how photons can be made to thermalize and condense at room temperature. There is a growing body of literature on this subject, which is connected to wider fields of driven-dissipative condensates of light. When we push the concept of photon BEC to fewer photons, we run into ideas from microlasers, and the distinction between the two concepts becomes blurred, despite the fact that BEC is an equilibrium phenomenon and lasing is dynamic. In this regime of few photons, we expect to find interesting quantum correlations among the photons which may lead to applications of photon BEC in quantum information processing.
\section*{Acknowledgements}
We thank the UK Engineering and Physical Sciences Research Council for supporting this work through fellowship EP/J017027/1 and the Controlled Quantum Dynamics CDT EP/L016524/1 which was co-directed by Danny for many years.
\bibliographystyle{prsty}
When training machine learning models in a distributed
fashion, the underlying constraints on how workers (or nodes)
communicate have a significant impact on the training
algorithm. When workers cannot form a fully connected
communication topology
or the communication latency is high ({\color{black}
e.g., in sensor networks or mobile networks}),
decentralizing the
communication comes to the rescue. On the other hand,
when the amount of data sent through the network is an
optimization objective (maybe to lower the cost or energy consumption),
or the network bandwidth is low,
compressing the traffic,
either via sparsification \citep{wangni2017gradient, konevcny2016randomized} or quantization \citep{zhang2017zipml, pmlr-v70-suresh17a} is a popular strategy.
In this paper, our goal is to develop a
novel framework that works robustly in an environment
that {\em both} decentralization and communication
compression could be beneficial. {\color{black} In this paper, we focus on quantization, the process of lowering the precision of data representation, often in a
stochastically unbiased way. But the same techniques
would apply to other unbiased compression schemes such as
sparsification.}
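To make the notion of stochastically unbiased quantization concrete, here is a minimal QSGD-style sketch. The choice of $s+1$ magnitude levels and the normalization by $\|v\|$ are illustrative assumptions for this example, not the exact operator analysed in this paper.

```python
import numpy as np

def stochastic_quantize(v, s=4, rng=None):
    """Unbiased stochastic quantization of a vector to s+1 magnitude levels.

    Each |v_i|/||v|| is rounded to a multiple of 1/s, rounding up with
    probability equal to the fractional part, so that E[C(v)] = v.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    level = np.abs(v) / norm * s                  # each entry lies in [0, s]
    lower = np.floor(level)
    lower = lower + (rng.random(v.shape) < (level - lower))
    return np.sign(v) * norm * lower / s
```

Each coordinate then costs roughly $\log_2(s+1)$ bits plus a sign, with the single float $\|v\|$ sent alongside, which is where the communication saving comes from.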
Both decentralized training and quantized (or compressed more generally) training have
attracted intensive interests recently \citep{Yuan_dsgd, zhao2016decentralized, Lian_dsgd, konevcny2016randomized, alistarh2017qsgd}.
Decentralized algorithms usually exchange local models
among nodes, which consumes the main communication budget; on
the other hand, quantized algorithms usually exchange
quantized gradient, and update an un-quantized model.
A straightforward idea to combine these two is to directly
quantize the models sent through the network during
decentralized training. However, this simple strategy does not
converge to the right solution as the quantization error would
accumulate during training. The technical contribution of
this paper is to develop novel algorithms that combine {\em both}
decentralized training and quantized training together.
\paragraph{ Problem Formulation.} We consider the following decentralized optimization:
{\small
\begin{equation}
\min_{x\in\mathbb{R}^{N}}\quad f(x) = {1\over n} \sum_{i=1}^n \underbrace{\mathbb{E}_{\xi\sim\mathcal{D}_i}F_{i}(x; \xi)}_{=: f_i(x)},\label{eq:main}
\end{equation}
}
where $n$ is the number of nodes and $\mathcal{D}_i$ is the local data distribution for node $i$. The $n$ nodes form a connected graph and each node can only communicate with its neighbors. {\color{black} Here we only assume that each $f_i(x)$ has an $L$-Lipschitz gradient.}
\paragraph{ Summary of Technical Contributions.}
In this paper, we propose two decentralized parallel stochastic gradient descent algorithms (D-PSGD): extrapolation compression D-PSGD (ECD-PSGD) and
difference compression D-PSGD (DCD-PSGD). Both algorithms can be proven to converge at a rate of roughly $O(1/\sqrt{nT})$, where $T$ is the number of iterations. The convergence rates are consistent with those of two special cases: centralized parallel stochastic gradient descent (C-PSGD) and D-PSGD. {\color{black} To the best of our knowledge, this is the first work to combine quantization algorithms and decentralized algorithms for generic optimization.}
The key difference between ECD-PSGD and
DCD-PSGD is that DCD-PSGD quantizes the {\em difference} between
the last two local models, and ECD-PSGD quantizes the
{\em extrapolation} between the last two local models. DCD-PSGD admits a slightly better convergence rate than ECD-PSGD when the data variation among nodes is very large. On the other hand, ECD-PSGD is more
robust to more aggressive quantization, as extremely low-precision
quantization can cause DCD-PSGD to diverge, since DCD-PSGD has a strict constraint on quantization.
In this paper, we analyze both algorithms, and empirically
validate our theory. We also show that when the underlying
network has both high latency and low bandwidth,
both algorithms outperform state-of-the-arts
significantly. {\color{black} We present both algorithms because
we believe both of them are theoretically interesting. In
practice, ECD-PSGD could potentially be a more robust
choice.}
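To convey the difference-compression idea behind DCD-PSGD, here is a toy sketch: nodes on a ring minimize local quadratics, mix replicas of their neighbors' models, and broadcast only a stochastically quantized model difference each iteration. This is an illustrative caricature, not the exact algorithm analysed later; the ring topology, quadratic objectives, step size, and quantization grid are all assumed for the example.

```python
import numpy as np

def quantize_diff(v, s=256, rng=None):
    """Unbiased stochastic rounding of v to a grid of spacing max|v|/s."""
    rng = np.random.default_rng() if rng is None else rng
    scale = np.max(np.abs(v)) / s
    if scale == 0.0:
        return v.copy()
    level = v / scale
    lower = np.floor(level)
    lower = lower + (rng.random(v.shape) < (level - lower))
    return lower * scale

rng = np.random.default_rng(0)
n, d, lr = 8, 5, 0.05
b = rng.standard_normal((n, d))        # node i minimizes 0.5*||x - b_i||^2
x = np.zeros((n, d))                   # exact local models
xhat = np.zeros((n, d))                # quantized replicas known to neighbors
W = np.zeros((n, n))                   # symmetric doubly stochastic ring weights
for i in range(n):
    for j in (i, (i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0

for t in range(600):
    grad = x - b + 0.01 * rng.standard_normal((n, d))   # noisy local gradients
    x = W @ xhat - lr * grad           # mix neighbor replicas, gradient step
    diff = x - xhat                    # only this difference is broadcast
    xhat = xhat + np.stack([quantize_diff(diff[i], rng=rng) for i in range(n)])

xbar = x.mean(axis=0)                  # consensus average across nodes
```

Because each broadcast difference shrinks as the models converge, the quantization error shrinks with it rather than accumulating, which is the intuition for why compressing differences (instead of the models themselves) preserves convergence.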
\paragraph{Definitions and notations}
Throughout this paper, we use following notations and definitions:
\begin{itemize}[fullwidth]
\item $\nabla f(\cdot)$ denotes the gradient of a function $f$.
\item $f^{*}$ denotes the optimal value of \eqref{eq:main}.
\item $\lambda_{i}(\cdot)$ denotes the $i$-th largest eigenvalue of a matrix.
\item $\bm{1}=[1,1,\cdots,1]^{\top}\in\mathbb{R}^n$ denotes the all-ones vector.
\item $\|\cdot\|$ denotes the $\ell_2$ norm for vectors.
\item $\|\cdot\|_F$ denotes the Frobenius norm for matrices.
\item $\bm{C}(\cdot)$ denotes the compression operator.
\item $f_i(x) := \mathbb{E}_{\xi\sim\mathcal{D}_i}F_{i}(x; \xi)$.
\end{itemize}
\section{Related work}
\paragraph{Stochastic gradient descent} \textsl{Stochastic Gradient Descent} (\textbf{SGD}) \citep{Ghadimi_dsgd,Moulines_dsgd,Nemi_dsgd} - a stochastic variant of the gradient descent method - has been widely used for solving large-scale machine learning problems \citep{Leon_sgd}. It admits the optimal convergence rate $O ( 1/\sqrt{T} )$ {\color{black} for non-convex functions}. \\
\paragraph{Centralized algorithms} Centralized algorithms are a widely used scheme for parallel computation, adopted by systems such as TensorFlow \citep{abadi2016tensorflow}, MXNet \citep{chen2015mxnet}, and CNTK \citep{Seide:2016:CMO:2939672.2945397}. A central node controls all leaf nodes. In \textsl{Centralized Parallel Stochastic Gradient Descent} (\textbf{C-PSGD}), the central node performs parameter updates while the leaf nodes compute stochastic gradients based on local information in parallel. In \citet{agarwal2011distributed,zinkevich2010parallelized}, the effectiveness of C-PSGD is studied with latency taken into consideration. Distributed mini-batch SGD, which requires each leaf node to compute the stochastic gradient more than once before each parameter update, is studied in \citet{dekel2012optimal}. \citet{recht2011hogwild} proposed a variant of C-PSGD, HOGWILD, and proved that it still works even when memory is shared and private models may be overwritten by other workers. Asynchronous non-convex C-PSGD optimization is studied in \citet{lian2015asynchronous}. \citet{zheng2016asynchronous} proposed an algorithm to improve the performance of asynchronous C-PSGD. In \citet{alistarh2017qsgd,de2017understanding}, quantized SGD is proposed to save communication cost for both convex and non-convex objective functions. The convergence rate of C-PSGD is $O (1/\sqrt{Tn}) $. The tradeoff between the number of mini-batches and the number of local SGD steps is studied in \citet{DBLP:journals/corr/abs-1808-07217,Stich18local}.\\
\paragraph{Decentralized algorithms}
Recently, decentralized training algorithms have attracted a significant
amount of attention. Decentralized algorithms
are mostly applied to solve the consensus problem \citep{zhang2017projection,Lian_dsgd,Sirb_dsgd}, where the network topology is decentralized. A recent work shows that decentralized algorithms can outperform their centralized counterparts for distributed training \citep{Lian_dsgd}. The main advantage of decentralized algorithms over centralized ones lies in avoiding the communication traffic at the central node. In particular, decentralized algorithms can be much more efficient than centralized algorithms when the network bandwidth is small and the latency is large.
A decentralized algorithm (also called a gossip algorithm in some literature under certain scenarios \citep{colin2016gossip}) only assumes a connected computational network, without using a central node to collect information from all nodes. Each node owns its local data and can only exchange information with its neighbors. The goal is still to learn a model over all distributed data.
The decentralized structure can be applied to solve multi-task multi-agent reinforcement learning \citep{omidshafiei2017deep,mhamdi2017dynamic}. \citet{Boyd_dsgd} used a randomized {\color{black} weighted} matrix and studied its effectiveness in different situations. Two methods \citep{li2017decentralized,Shi_dgd} were proposed to reduce the steady-state error in decentralized gradient descent for convex optimization. \citet{dobbe2017fully} applied an information-theoretic framework to decentralized analysis. The performance of decentralized algorithms depends on the second largest eigenvalue of the {\color{black} weighted} matrix. In \citet{NIPS2018_7705}, a gradient-descent-based algorithm (\textbf{CoLA}) is proposed for decentralized learning of linear classification and regression models, with convergence rates proved for the strongly convex and general convex cases.\\
\paragraph{Decentralized parallel stochastic gradient descent} The \textsl{Decentralized Parallel Stochastic Gradient Descent} (\textbf{D-PSGD}) algorithm \citep{nedic2009distributed,Yuan_dsgd} requires each node to exchange its own stochastic gradient and to update its parameters using the information it receives. In \citet{nedic2009distributed}, the convergence rate for a time-varying topology is proved under the assumption that the maximum of the subgradients is bounded. In \citet{Lan_dsgd}, a new decentralized primal-dual type method is proposed with a computational complexity of {\color{black}$O(\sqrt{n/T})$} for general convex objectives. The linear speedup of D-PSGD is proved in \citet{Lian_dsgd}, where the computational complexity is {\color{black}$O (1/\sqrt{nT})$}. An asynchronous variant of D-PSGD is studied in \citet{Lian_adsgd}. \\
\paragraph{Compression} To guarantee convergence and correctness, this paper only considers unbiased stochastic compression techniques. Existing methods include randomized quantization \citep{zhang2017zipml, pmlr-v70-suresh17a} and randomized sparsification \citep{wangni2017gradient, konevcny2016randomized}. Other compression methods can be found in~\citet{kashyap2007quantized,lavaei2012quantized,nedic2009quan}. In \citet{DBLP:conf/nips/DrumondLJF18}, a compressed DNN training algorithm is proposed. In \citet{NIPS2018_7697}, a centralized biased sparsified parallel SGD with memory is studied and proved to admit a factor of acceleration.
\begin{wrapfigure}{R}{10cm}
\centering
\vspace{-0.5cm}
\includegraphics[width=7.5cm]{naive.eps}
\vspace{0cm}
\caption{D-PSGD vs. D-PSGD with naive compression }
\vspace{0.5cm}
\label{Fig:naive}
\end{wrapfigure}
\section{Preliminary: decentralized parallel stochastic gradient descent (D-PSGD)}
Unlike traditional (centralized) parallel stochastic gradient descent (C-PSGD), which requires a central node to compute the average value of all leaf nodes, the decentralized parallel stochastic gradient descent (D-PSGD) algorithm does not need such a central node. Each node (say node $i$) only exchanges its local model $\bm{x}^{(i)}$ with its neighbors to take a weighted average, specifically, $\bm{x}^{(i)} = \sum_{j=1}^nW_{ij}\bm{x}^{(j)}$, where $W_{ij} \geq 0$ in general and $W_{ij}=0$ means that node $i$ and node $j$ are not connected.
At the $t$-th iteration, D-PSGD consists of three steps ($i$ is the node index):
{\bf 1.} Each node computes the stochastic gradient $\nabla F_i(\bm{x}^{(i)}_t;\xi_t^{(i)})$, where $\xi^{(i)}_t$ is a sample from its local dataset and $\bm{x}^{(i)}_t$ is the local model on node $i$.
{\bf 2.} Each node queries its neighbors' variables and updates its local model using $\bm{x}^{(i)} = \sum_{j=1}^nW_{ij}\bm{x}^{(j)}$.
{\bf 3.} Each node updates its local model {\small$\bm{x}^{(i)}_t \gets \bm{x}^{(i)}_t - \gamma_t \nabla F_i\left(\bm{x}_t^{(i)};\xi^{(i)}_t\right) $} using the stochastic gradient, where $\gamma_t$ is the learning rate.
To view the D-PSGD algorithm from a global perspective, define{\small
\begin{align*}
&X := [\bm{x}^{(1)}, \bm{x}^{(2)}, \cdots, \bm{x}^{(n)}] \in \mathbb{R}^{N\times n},\quad
G(X; \xi) := [\nabla F_1(\bm{x}^{(1)}; \xi^{(1)}), \cdots, \nabla F_n(\bm{x}^{(n)}; \xi^{(n)})] \\
&\nabla f(\overline{X}) := \frac{1}{n}\sum_{i=1}^{n}\nabla f_i\left(\frac{1}{n}\sum_{j=1}^n\bm{x}^{(j)}\right),\quad
\overline{\nabla f}(X):= \mathbb{E}_{\xi}G(X;\xi)\frac{\bm{1}}{n}=\frac{1}{n}\sum_{i=1}^n\nabla f_i(\bm{x}^{(i)}),
\end{align*}}
so that D-PSGD can be summarized in the compact form
$ X_{t+1} = X_t W - \gamma_t G(X_t; \xi_t)$.
The convergence rate of D-PSGD can be shown to be {\small$O \left(\frac{\sigma}{\sqrt{nT}} + \frac{n^{\frac{1}{3}} \zeta^{\frac{2}{3}}}{T^{\frac{2}{3}}}\right) $} (without assuming convexity), where $\sigma$ and $\zeta$ are variance parameters (please refer to Assumption~\ref{ass:global} for detailed definitions), if the learning rate is chosen appropriately.
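To make this compact update concrete, here is a minimal NumPy simulation of D-PSGD on a toy problem (the ring topology, the quadratic local losses $f_i(x)=\frac{1}{2}\|x-b_i\|^2$, and all constants are illustrative assumptions of ours, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, T, gamma = 4, 8, 400, 0.05     # model dim, nodes, iterations, step size

# Toy local objectives f_i(x) = 0.5 * ||x - b_i||^2, so grad f_i(x) = x - b_i
# (the b_i stand in for each node's local data distribution).
b = rng.normal(size=(N, n))

# Symmetric doubly stochastic mixing matrix W for a ring of n nodes.
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1.0 / 3.0

X = np.zeros((N, n))                 # column i is node i's local model x^(i)
for t in range(T):
    G = (X - b) + 0.1 * rng.normal(size=(N, n))   # noisy local gradients G(X; xi)
    X = X @ W - gamma * G                         # X_{t+1} = X_t W - gamma * G(X_t; xi_t)

x_bar = X.mean(axis=1)               # average model, the quantity analyzed above
x_star = b.mean(axis=1)              # minimizer of f = (1/n) sum_i f_i
```

With a constant step size the columns stay within a neighborhood of their average (of size governed by $\gamma$, $\zeta$, and $1-\rho$) while the average model approaches the global minimizer.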
\section{Quantized, Decentralized Algorithms}
We introduce two quantized decentralized algorithms that compress the information exchanged between nodes. All communication in decentralized algorithms consists of exchanging the local models $\bm{x}^{(i)}$.
To reduce the communication cost, a straightforward idea is to compress the information exchanged within the decentralized network, just like centralized algorithms send compressed stochastic gradients \citep{2016arXiv161002132A}. Unfortunately, such a naive combination does not work, even with unbiased stochastic compression and a diminishing learning rate, as shown in Figure~\ref{Fig:naive}. The reason can be seen from the detailed derivation (provided in the Supplement).
Before proposing our solutions to this issue, let us first state some common optimization assumptions for analyzing decentralized stochastic algorithms~\citep{Lian_adsgd}.
\begin{assumption}
\label{ass:global}
Throughout this paper, we make the following commonly used assumptions:
\begin{enumerate}
\item \textbf{Lipschitzian gradient:} All functions $f_i(\cdot)$ have $L$-Lipschitz gradients.
\item \textbf{Symmetric doubly stochastic matrix:} The weighted matrix $W$ is a real doubly stochastic matrix that satisfies $W=W^{\top}$ and $W\bm{1}=\bm{1}$.
\item \textbf{Spectral gap:} Given the symmetric doubly stochastic matrix $W$,
we define $\rho := \max \{| \lambda_2 (W) |, | \lambda_n (W) |\}$ and assume $\rho<1$.
\item \textbf{Bounded variance:} Assume the variance of the stochastic gradient is bounded:
{\small\begin{align*}
\mathbb{E}_{\xi\sim \mathcal{D}_i} \left\| \nabla F_i (\bm{x}; \xi) - \nabla f_i (\bm{x})\right\|^2 \leqslant \sigma^2,
\quad
{1\over n}\sum_{i=1}^n\left\| \nabla f_i (\bm{x})-\nabla f (\bm{x})\right\|^2 \leqslant \zeta^2, \quad \forall i, \forall \bm{x},
\end{align*}}
\item \textbf{Independent and unbiased stochastic compression:} The stochastic compression operation $\bm{C}(\cdot)$ is unbiased, that is, $\mathbb{E}(\bm{C}(Z)) = Z$ for any $Z$, and the stochastic compressions are independent across different workers and different time points.
\end{enumerate}
\end{assumption}
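For intuition on items 2 and 3, the snippet below (our own illustrative sketch, not part of the analysis) builds the uniform mixing matrix for a ring of $n=8$ nodes, as used later in our experiments, and computes $\rho = \max\{|\lambda_2(W)|, |\lambda_n(W)|\}$:

```python
import numpy as np

n = 8
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):      # each node averages itself and its two ring neighbors
        W[i, j % n] = 1.0 / 3.0

eig = np.sort(np.linalg.eigvalsh(W))[::-1]   # eigenvalues in descending order
rho = max(abs(eig[1]), abs(eig[-1]))         # rho = max(|lambda_2|, |lambda_n|)
```

For this ring the eigenvalues are $(1+2\cos(2\pi k/n))/3$, so $\rho = (1+\sqrt{2})/3 \approx 0.805 < 1$, satisfying the spectral-gap assumption.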
The last assumption essentially restricts the compression to be lossy but unbiased. Biased stochastic compression generally makes it hard to ensure convergence, and lossless compression can be combined with any algorithm; both are beyond the scope of this paper. Commonly used unbiased stochastic compression schemes include random quantization\footnote{A real number is randomly quantized to one of the closest thresholds; for example, given the thresholds $\{0, 0.3, 0.8, 1\}$, the number ``$0.5$'' is quantized to $0.3$ with probability $60\%$ and to $0.8$ with probability $40\%$. Here, we assume that all numbers have been normalized into the range $[0,1]$.} \citep{zhang2017zipml} and random sparsification\footnote{A real number $z$ is set to $0$ with probability $1-p$ and to $z/p$ with probability $p$.} \citep{wangni2017gradient,konevcny2016randomized}.
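Both footnoted schemes are easy to implement; the sketch below is a minimal NumPy illustration (the function names, the 4-level grid, and the Monte-Carlo unbiasedness check are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_quantize(z, levels):
    """Randomly round each entry of z (assumed normalized to [0, 1)) to one of
    the two nearest of the equispaced thresholds, so that E[C(z)] = z."""
    lo = np.floor(z * levels) / levels        # nearest threshold below z
    hi = lo + 1.0 / levels                    # nearest threshold above z
    p_up = (z - lo) * levels                  # P(round up) chosen to keep the mean
    return np.where(rng.random(z.shape) < p_up, hi, lo)

def random_sparsify(z, p):
    """Keep each entry with probability p, rescaled by 1/p, so E[C(z)] = z."""
    mask = rng.random(z.shape) < p
    return np.where(mask, z / p, 0.0)

# Monte-Carlo check of unbiasedness: averaging many draws recovers z.
z = rng.random(1000)
q_mean = np.mean([random_quantize(z, 4) for _ in range(2000)], axis=0)
s_mean = np.mean([random_sparsify(z, 0.25) for _ in range(2000)], axis=0)
```

Averaged over many independent draws, both compressors recover the input, which is exactly the unbiasedness required by Assumption~\ref{ass:global}.5.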
\subsection{Difference compression approach}
\begin{figtab}[t!]
\vspace{-0.4cm}
\scriptsize
\begin{minipage}[b]{0.43\textwidth}
\begin{algorithm}[H]\label{alg_2}
\caption{DCD-PSGD}\label{alg2}
\begin{algorithmic}[1]
\scriptsize
\STATE {\bfseries Input:} Initial point $\bm{x}^{(i)}_1=\bm{x}_1$, initial replica $\hat{\bm{x}}^{(i)}_1=\bm{x}_1$, iteration step length $\gamma$, {\color{black} weighted} matrix $W$, and number of total iterations T
\FOR{t = 1,2,...,T}
\STATE Randomly sample $\xi^{(i)}_t$ from local data of the $i$th node
\STATE Compute local stochastic gradient $\nabla F_i(\bm{x}^{(i)}_t;\xi^{(i)}_t)$ using $\xi^{(i)}_t$ and current optimization variable $\bm{x}^{(i)}_t$
\STATE \label{alg:step} Update the local model using the local stochastic gradient and the weighted average of its connected neighbors' replicas {\color{black} (denoted by $\hat{\bm{x}}^{(j)}_t$)}
\begin{align*}
\bm{x}_{t+\frac{1}{2}}^{(i)}=\sum_{j=1}^{n}W_{ij}\hat{\bm{x}}^{(j)}_t -\gamma\nabla F_i(\bm{x}^{(i)}_t;\xi^{(i)}_t),
\end{align*}
\STATE Each node computes
$\bm{z}^{(i)}_{t} = \bm{x}_{t+{1\over 2}}^{(i)} - \bm{x}^{(i)}_t,$
and compress this $\bm{z}^{(i)}_{t}$ into $\bm{C}(\bm{z}_t^{(i)})$.
\STATE Update the local optimization variables
\begin{align*}
\bm{x}_{t+1}^{(i)}\gets \bm{x}_{t}^{(i)} + \bm{C}(\bm{z}_t^{(i)}).
\end{align*}
\STATE Send $\bm{C}(\bm{z}_t^{(i)})$ to its connected neighbors; upon receiving $\bm{C}(\bm{z}_t^{(j)})$ from each connected neighbor $j$, update the replica of that neighbor's value
\begin{align*}
\hat{\bm{x}}_{t+1}^{(j)} = \hat{\bm{x}}_{t}^{(j)} + \bm{C}(\bm{z}_t^{(j)}).
\end{align*}
\ENDFOR
\STATE {\bfseries Output:} $\frac{1}{n}\sum_{i=1}^{n}\bm{x}^{(i)}_T$
\end{algorithmic}
\end{algorithm}
\end{minipage}\quad
\begin{minipage}[b]{0.5\textwidth}
\begin{algorithm}[H]
{
\caption{ECD-PSGD}\label{alg1}
\begin{algorithmic}[1]
\scriptsize
\STATE {\bfseries Input:} Initial point $\bm{x}^{(i)}_1=\bm{x}_1$, initial estimate ${\color{black} \tilde{\bm{x}}}^{(i)}_1=\bm{x}_1$, iteration step length $\gamma$, {\color{black} weighted} matrix $W$, and number of total iterations T.
\FOR{$t = 1,2,\cdots,T$}
\STATE Randomly sample $\xi^{(i)}_t$ from local data of the $i$th node
\STATE Compute local stochastic gradient $\nabla F_i(\bm{x}^{(i)}_t;\xi^{(i)}_t)$ using $\xi^{(i)}_t$ and current optimization variable $\bm{x}^{(i)}_t$
\STATE Compute the neighborhood weighted average using the estimated values of the connected neighbors
\[
\bm{x}_{t+\frac{1}{2}}^{(i)}=\sum_{j=1}^{n}W_{ij}{\color{black} \tilde{\bm{x}}}^{(j)}_t
\]
\STATE Update the local model
\[
\bm{x}_{t+1}^{(i)}\gets \bm{x}_{t+\frac{1}{2}}^{(i)}-\gamma\nabla F_i(\bm{x}^{(i)}_t,\xi^{(i)}_t)
\]
\STATE Each node computes the $z$-value of itself
\[
\bm{z}^{(i)}_{t+1} = \left(1-0.5(t+1)\right)\bm{x}_t^{(i)}+0.5(t+1)\bm{x}_{t+1}^{(i)}
\]
and compresses this $\bm{z}^{(i)}_{t+1}$ into $\bm{C}(\bm{z}_{t+1}^{(i)})$.
\STATE Each node updates the estimates for its connected neighbors:
\[{\color{black} \tilde{\bm{x}}}_{t+1}^{(j)}=\left(1-2(t+1)^{-1}\right){\color{black} \tilde{\bm{x}}}^{(j)}_t+2(t+1)^{-1}\bm{C}(\bm{z}_{t+1}^{(j)})
\]
\ENDFOR
\STATE {\bfseries Output:} $\frac{1}{n}\sum_{i=1}^{n}\bm{x}^{(i)}_T$
\end{algorithmic}
}
\end{algorithm}
\end{minipage}
\end{figtab}
In this section, we introduce a difference-based approach, namely difference compression D-PSGD (DCD-PSGD), to ensure efficient convergence.
DCD-PSGD basically follows the framework of D-PSGD, except that nodes exchange the compressed differences of local models between two successive iterations, instead of exchanging the local models themselves. More specifically, each node needs to store its neighbors' models from the last iteration $\{{\color{black}{\hat{\bm{x}}}^{(j)}_{t}}: j~\text{is node $i$'s neighbor}\}$ and then performs the following steps:
\begin{enumerate}
\item take the weighted average and apply stochastic gradient descent step:
$\bm{x}_{t+\frac{1}{2}}^{(i)}=\sum_{j=1}^{n}W_{ij}{{\color{black}\hat{\bm{x}}}^{(j)}_t} -\gamma\nabla F_i(\bm{x}^{(i)}_t;\xi^{(i)}_t)$,
where {\color{black}$\hat{\bm{x}}^{(j)}_t$} is just the replica of $\bm{x}^{(j)}_t$ but is stored on node $i$\footnote{Actually each neighbor of node $j$ maintains a replica of $\bm{x}^{(j)}_t$.};
\item compress the difference between $\bm{x}^{(i)}_{t}$ and $\bm{x}_{t+\frac{1}{2}}^{(i)}$ and update the local model: $\bm{z}_t^{(i)} = \bm{x}_{t+\frac{1}{2}}^{(i)} -\bm{x}_t^{(i)},\quad \bm{x}^{(i)}_{t+1} =\bm{x}^{(i)}_t + \bm{C}(\bm{z}_t^{(i)})$;
\item send $\bm{C}(\bm{z}_t^{(i)})$ to its neighbors and query its neighbors' $\bm{C}(\bm{z}_t^{(j)})$ to update the local replicas:
for every node $j$ that is a neighbor of node $i$,
$\hat{\bm{x}}_{t+1}^{(j)} = \hat{\bm{x}}^{(j)}_t + \bm{C}(\bm{z}_t^{(j)})$.
\end{enumerate}
The full DCD-PSGD algorithm is described in Algorithm~\ref{alg2}.
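The three steps can be sketched as one synchronous NumPy iteration (a hypothetical minimal implementation of ours; the ring driver, the quadratic local losses, and the identity compressor used as a sanity check are all our own assumptions):

```python
import numpy as np

def dcd_iteration(x, x_hat, W, grads, gamma, compress):
    """One synchronous DCD-PSGD iteration over all n nodes (rows = nodes)."""
    x_half = W @ x_hat - gamma * grads          # step 1: average the replicas + SGD step
    z = x_half - x                              # step 2: differences z_t^(i) ...
    c = np.stack([compress(z_i) for z_i in z])  #         ... compressed with C
    return x + c, x_hat + c                     # step 3: apply C(z) to models and replicas

# Toy driver: node i minimizes 0.5 * ||x - b_i||^2 on a ring of n nodes.
rng = np.random.default_rng(0)
n, N, gamma = 6, 3, 0.1
b = rng.normal(size=(n, N))
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1.0 / 3.0
x = np.zeros((n, N))
x_hat = np.zeros((n, N))                        # replicas start in sync with the models
identity = lambda v: v                          # lossless C for the sanity check
for t in range(500):
    x, x_hat = dcd_iteration(x, x_hat, W, x - b, gamma, identity)
```

Because every node applies the same $\bm{C}(\bm{z}_t^{(i)})$ to its model and to the neighbors' replicas, the replicas never drift from the true models, which is the key property the difference scheme relies on.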
To {\color{black} ensure convergence,} we need to place some restrictions on the compression operator $\bm{C}(\cdot)$. Again, this compression operator could be random quantization, random sparsification, or any other unbiased operator. We introduce the signal-to-noise related parameter $\alpha$:
let $\alpha := \sqrt{\sup_{Z\neq 0} {\|Q\|^2_F / \|Z\|^2_F}}$, where $Q=Z-\bm{C}(Z)$. We have the following theorem.
\begin{theorem}\label{theo_2}
Under the Assumption~\ref{ass:global},
if $\alpha$ satisfies $(1-\rho)^2-4\mu^2\alpha^2>0$ and
$\gamma$ is chosen to satisfy $1-3D_1L^2\gamma^2>0$, then we have the following convergence rate for
Algorithm~\ref{alg2}{\small
\begin{align*}
& \sum_{t=1}^T\left(\left(1-D_3\right)\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 + D_4\mathbb{E}\|\overline{\nabla f}(X_t)\|^2\right)
\leq \frac{2(f(0)-f^*)}{\gamma} + \frac{L\gamma T\sigma^2}{n}\\
& + \left(\frac{T\gamma^2LD_2}{2} + \frac{\left(4L^2 + 3L^3D_2\gamma^2\right)D_1T\gamma^2}{1-3D_1L^2\gamma^2}\right)\sigma^2
+ \frac{\left(4L^2 + 3L^3D_2\gamma^2\right)3D_1T\gamma^2}{1-3D_1L^2\gamma^2}\zeta^2, \numberthis \label{bound_theo_2}
\end{align*}}
where $\mu := \max_{i\in\{2,\cdots ,n\}}|\lambda_i-1|$, and {\small
\begin{alignat*}{2}
&D_1 := \frac{2\alpha^2}{1-\rho^2}\left(\frac{2\mu^2(1+2\alpha^2)}{(1-\rho)^2 - 4\mu^2\alpha^2}+1\right) + \frac{1}{(1-\rho)^2},
&\quad &
D_2 := 2\alpha^2\left(\frac{2\mu^2(1+2\alpha^2)}{(1-\rho)^2 - 4\mu^2\alpha^2}+1\right)\\
&D_3 := \frac{\left(4L^2 + 3L^3D_2\gamma^2\right)3D_1\gamma^2}{1-3D_1L^2\gamma^2} + \frac{3LD_2\gamma^2}{2},
&\quad &
D_4 := \left(1-L\gamma\right).
\end{alignat*}}
\end{theorem}
To make the result clearer, we choose the step length appropriately in the following corollary:
\begin{corollary} \label{cor:convergence_alg2}
Choose {\small $\gamma = \left(6\sqrt{D_1}L + 6\sqrt{D_2L}+\frac{\sigma}{\sqrt{n}}T^{\frac{1}{2}} + \zeta^{\frac{2}{3}}T^{\frac{1}{3}}\right)^{-1}$}
in Algorithm~\ref{alg2}. If $\alpha$ is small enough that satisfies $(1-\rho)^2-4\mu^2\alpha^2>0$, then we have{\small
\begin{align*}
\frac{1}{T}\sum_{t=1}^T\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 \lesssim & \frac{\sigma}{\sqrt{nT}} + \frac{\zeta^{\frac{2}{3}}}{T^{\frac{2}{3}}}+ \frac{1}{T},
\end{align*}
}
where $D_1$ and $D_2$ follow the same definitions as in Theorem~\ref{theo_2}, and we treat $f(0)- f^*$, $L$, and $\rho$ as constants.
\end{corollary}
The leading term of the convergence rate is {\small$O \left(1/\sqrt{Tn}\right)$}, and we also prove a convergence rate for {\small$\mathbb{E}\left[\sum_{i=1}^n\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2\right]$} (see \eqref{finalcoro_1} in the Supplement). The following discussion illustrates the tightness of our result.
\paragraph{Linear speedup} Since the leading term of the convergence rate is {\small$O \left(1/\sqrt{Tn}\right)$} when $T$ is large, which is consistent with the convergence rate of C-PSGD, we achieve a linear speedup with respect to the number of nodes.
\paragraph{Consistency with D-PSGD} Setting $\alpha = 0$ to match the scenario of D-PSGD, DCD-PSGD admits the rate {\small$O \left(\frac{\sigma}{\sqrt{nT}} + \frac{ \zeta^{\frac{2}{3}}}{T^{\frac{2}{3}}}\right)$}, which is slightly better than the rate of D-PSGD proved in \citet{Lian_adsgd}, {\small$O \left(\frac{\sigma}{\sqrt{nT}} + \frac{n^{\frac{2}{3}} \zeta^{\frac{2}{3}}}{T^{\frac{2}{3}}}\right) $}. {\color{black} The non-leading terms' dependence on the spectral gap $(1-\rho)$ is also consistent with the result for D-PSGD.}
\subsection{Extrapolation compression approach}
{\color{black} From Theorem~\ref{theo_2}, we can see that there is an upper bound on the compression level $\alpha$ in DCD-PSGD. Moreover, since the spectral gap $(1-\rho)$ decreases as the number of workers grows, DCD-PSGD will fail to work under very aggressive compression. In this section, we therefore propose another approach, namely ECD-PSGD, that removes the restriction on the compression level at a small sacrifice in computational efficiency.}
For ECD-PSGD, we make the following assumption that the noise brought by compression is bounded.
\begin{assumption}
\label{ass:alg1}
(\textbf{Bounded compression noise}) We assume the noise due to compression is unbiased and its variance is bounded; that is, for any $\bm{z}$,
\begin{align*}
\mathbb{E}\|\bm{C}(\bm{z}) - \bm{z}\|^2\leq \tilde{\sigma}^2/2.
\end{align*}
\end{assumption}
Instead of sending the local model $\bm{x}^{(i)}_t$ directly to its neighbors, each node sends a $z$-value extrapolated from $\bm{x}^{(i)}_t$ and $\bm{x}^{(i)}_{t-1}$ at each iteration. Each node (say, node $i$) estimates its neighbors' values $\bm{x}^{(j)}_t$ from the compressed $z$-values at the $t$-th iteration. This procedure ensures a diminishing estimation error; in particular, {\small$\mathbb{E}\|\tilde{\bm{x}}^{(j)}_t - \bm{x}^{(j)}_t\|^2 \leq \mathcal{O}\left(t^{-1}\right)$.}
At the $t$-th iteration, node $i$ performs the following steps to estimate $\bm{x}^{(j)}_t$ by $\tilde{\bm{x}}^{(j)}_t$:
\begin{itemize}
\setlength{\itemsep}{1.5pt}
\setlength{\parsep}{2pt}
\setlength{\parskip}{1pt}
\item Node $j$ computes the $z$-value obtained through extrapolation:{\small
\begin{align*}
\bm{z}_{t}^{(j)}=\left(1-0.5t\right)\bm{x}_{t-1}^{(j)}+0.5t{\bm{x}}_{t}^{(j)},\numberthis \label{alg1_noisecontrol1}
\end{align*}}
\item Compress $\bm{z}_t^{(j)}$ and send it to its neighbors, {\color{black} say node $i$. Node $i$ computes $\tilde{\bm{x}}_{t}^{(j)}$ using}{\small
\begin{align*}
\tilde{\bm{x}}_{t}^{(j)} = \left(1-2t^{-1}\right)\tilde{\bm{x}}^{(j)}_{t-1}+2t^{-1}\bm{C}(\bm{z}_t^{(j)}). \numberthis \label{alg1_noisecontrol2}
\end{align*}}
\end{itemize}
Using Lemma~\ref{lemma3} (see the Supplemental Materials), if the compression noise ${\color{black} \bm{q}}^{(j)}_t:= \bm{z}_t^{(j)} - \bm{C}(\bm{z}_t^{(j)})$ {\color{black} has variance globally bounded} by $\tilde{\sigma}^2/2$, we will have{\small
\begin{align*}
\mathbb{E}(\|\tilde{\bm{x}}^{(j)}_t - \bm{x}^{(j)}_t\|^2) \leq \tilde{\sigma}^2/t.
\end{align*}}
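This $\mathcal{O}(t^{-1})$ behavior of the recursion \eqref{alg1_noisecontrol1}--\eqref{alg1_noisecontrol2} can be checked numerically. In the sketch below (our own scalar Monte-Carlo setup with Gaussian compression noise of variance $\tilde{\sigma}^2/2$ and a slowly drifting $x_t$), the extrapolation cancels the drift exactly, and the final squared estimation error stays below $\tilde{\sigma}^2/T$:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_tilde, T, R = 1.0, 500, 400    # noise level, horizon, independent runs

x_prev = np.zeros(R)                 # x_{t-1} across R parallel scalar runs
x_tilde = np.zeros(R)                # running estimates tilde{x}_{t-1}
for t in range(1, T + 1):
    x_cur = x_prev + 0.01 * rng.normal(size=R)                  # x_t drifts slowly
    z = (1 - 0.5 * t) * x_prev + 0.5 * t * x_cur                # extrapolated z-value
    c = z + rng.normal(scale=sigma_tilde / np.sqrt(2), size=R)  # unbiased C(z_t)
    x_tilde = (1 - 2.0 / t) * x_tilde + (2.0 / t) * c           # estimate update
    x_prev = x_cur

mse = float(np.mean((x_tilde - x_cur) ** 2))  # empirical E|tilde{x}_T - x_T|^2
```

Substituting the $z$-value into the estimate update shows that the error obeys $e_t = (1-2/t)e_{t-1} + (2/t)\bm{q}_t$ regardless of how $x_t$ moves, which is why the estimation error shrinks even though the target keeps changing.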
Using this way of estimating the neighbors' local models leads to the following equivalent update rule:{\small
\begin{align*}
X_{t+1} = & \tilde{X}_tW - \gamma_t G(X_t; \xi_t)
= X_t W + \underbrace{{Q}_tW}_{\text{diminishing estimation error}} - \gamma_t G(X_t; \xi_t),
\end{align*}}
where $Q_t := \tilde{X}_t - X_t$.
The full extrapolation compression D-PSGD (ECD-PSGD) algorithm is summarized in Algorithm~\ref{alg1}.
Below we show that the ECD-PSGD algorithm admits the same convergence rate and the same computational complexity as D-PSGD.
\begin{theorem}[Convergence of Algorithm \ref{alg1}] \label{theo_1}
\label{thm:conv-bounded-variance}
Under Assumptions~\ref{ass:global} and~\ref{ass:alg1}, choosing $\gamma_t$ in Algorithm~\ref{alg1} to be a constant $\gamma$ {satisfying} {\small$1-6C_1L^2\gamma^2>0$}, we have the following convergence rate for
Algorithm~\ref{alg1} {\small
\begin{align*}
& \sum_{t=1}^T\left(\left(1-C_3\right)\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 + C_4\mathbb{E}\|\overline{\nabla f}(X_t)\|^2\right)
\\ \leq & \frac{2(f(0)-f^*)}{\gamma} +
\frac{L\log T}{n\gamma}\tilde{\sigma}^2 + \frac{LT\gamma}{n}\sigma^2 + \frac{4C_2\tilde{\sigma}^2L^2}{1-\rho^2}\log T + 4L^2C_2\left(\sigma^2+3\zeta^2\right)C_1T\gamma^2. \numberthis \label{eq_theo_1}
\end{align*}}
where {\small
$C_1 := \frac{1}{(1-\rho)^2}$,
$C_2 := \frac{1}{1-6\rho^{-2}C_1L^2\gamma^2}$,
$C_3 := 12L^2C_2C_1\gamma^2$, and
$C_4 := 1-L\gamma$. }
\end{theorem}
To make the result clearer, we choose the step length in the following corollary:
\begin{corollary} \label{cor:convergence}
In Algorithm~\ref{alg1}, choose the step length {\small$\gamma=\left(12\sqrt{C_1}L+\frac{\sigma}{\sqrt{n}}T^{\frac{1}{2}} + \zeta^{\frac{2}{3}}T^{\frac{1}{3}}\right)^{-1}$}. Then it admits the following convergence rate (with $f(0)- f^*$, $L$, and $\rho$ treated as constants). {\small
\begin{align*}
\frac{1}{T}\sum_{t=1}^T\mathbb{E}\|\nabla f(\overline{X}_t)\|^2
\lesssim & \frac{\sigma(1+\frac{\tilde{\sigma}^2\log T}{n})}{\sqrt{nT}} + \frac{\zeta^{\frac{2}{3}}(1+\frac{\tilde{\sigma}^2\log T}{n})}{T^{\frac{2}{3}}} + \frac{1}{T} + \frac{\tilde{\sigma}^2\log{T}}{T}, \numberthis
\label{eq:cor:alg1}
\end{align*}}
\end{corollary}
This result suggests that the algorithm converges at roughly the rate {\small$O({1 / \sqrt{nT}})$}, and we also prove a convergence rate for {\small$\mathbb{E}\left[\sum_{i=1}^n\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2\right]$} (see \eqref{final_coro2} in the Supplement). The following analysis provides a more detailed interpretation showing the tightness of our result.
\paragraph{Linear speedup} Since the leading term of the convergence rate is {\small$O (1/\sqrt{nT})$} when $T$ is large, which is consistent with the convergence rate of C-PSGD, we achieve a linear speedup with respect to the number of nodes.
\paragraph{Consistency with D-PSGD} Setting $\tilde{\sigma} = 0$ to match the scenario of D-PSGD, ECD-PSGD admits the rate {\small$O \left(\frac{\sigma}{\sqrt{nT}} + \frac{\zeta^{\frac{2}{3}}}{T^{\frac{2}{3}}}\right)$}, which is slightly better than the rate of D-PSGD proved in \citet{Lian_adsgd}, {\small$O \left(\frac{\sigma}{\sqrt{nT}} + \frac{n^{\frac{1}{3}} \zeta^{\frac{2}{3}}}{T^{\frac{2}{3}}}\right) $}. {\color{black} The non-leading terms' dependence on the spectral gap $(1-\rho)$ is also consistent with the result for D-PSGD.}
\paragraph{Comparison between DCD-PSGD and ECD-PSGD}
On the one hand, in terms of the convergence rate, ECD-PSGD is slightly worse than DCD-PSGD due to the additional terms {\small$\left( \frac{\sigma\tilde{\sigma}^2\log{T}}{n\sqrt{nT}} + \frac{\zeta^{\frac{2}{3}}\tilde{\sigma}^2\log{T}}{nT^{\frac{2}{3}}} + \frac{\tilde{\sigma}^2\log{T}}{T}\right)$}, which suggests that if $\tilde{\sigma}$ is relatively large compared to $\sigma$, the additional terms dominate the convergence rate.
On the other hand, DCD-PSGD does not allow too aggressive compression or quantization and may diverge, due to the requirement {\small$\alpha \leq \frac{1-\rho}{2\sqrt{2}\mu}$}, while ECD-PSGD is quite robust to aggressive compression or quantization.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.14]{1.pdf}
\caption{Performance Comparison between Decentralized and AllReduce implementations.}
\label{Fig:TrainingLoss}
\end{figure*}
\section{Experiments}
In this section we evaluate the two decentralized algorithms by comparing
them with an Allreduce implementation of centralized SGD. We run experiments
under diverse network conditions and show that decentralized algorithms with low precision can speed up training without hurting
convergence.
\subsection{Experimental Setup}
We choose the image classification task as a benchmark to evaluate our theory. We train ResNet-20~\citep{he2016deep} on the CIFAR-10 dataset, which has 50,000 images for training and 10,000 images for testing. The two proposed algorithms are implemented in Microsoft CNTK and compared with CNTK's original implementation of distributed SGD:
\begin{figure*}[t]
\centering
\includegraphics[scale=0.14]{2.pdf}
\caption{Performance Comparison in Diverse Network Conditions.}
\label{Fig:EpochTime}
\end{figure*}
\begin{itemize}[fullwidth]
\item{\bf{Centralized:}} This implementation is based on MPI Allreduce primitive with full precision (32 bits). It is the standard training method for multiple nodes in CNTK.
\item{\bf{Decentralized\_32bits/8bits:}} The implementation of the proposed decentralized approach with OpenMPI. The full precision is 32 bits, and the compressed precision is 8 bits.
\item In this paper, we omit the comparison with quantized centralized
training because the difference between Decentralized 8bits
and Centralized 8bits would be similar to that reported in the original
decentralized training paper~\cite{Lian_dsgd} -- when the network
latency is high, the decentralized algorithm outperforms the centralized
algorithm in terms of the time for each epoch.
\end{itemize}
We run experiments on 8 Amazon \texttt{p2.xlarge} EC2 instances, each of which has one Nvidia K80 GPU. We use each GPU as a node. In the decentralized cases, the 8 nodes are connected in a ring topology, which means each node communicates only with its two neighbors. The batch size for each node is the same as the default configuration in CNTK. We also tune the learning rate for each variant.
\subsection{Convergence and Run Time Performance}
We first study the convergence of our algorithms. Figure~\ref{Fig:TrainingLoss}(a) shows the convergence w.r.t.\ the number of epochs for the centralized and decentralized cases.
We only show ECD-PSGD in the figure (and call it Decentralized) because
DCD-PSGD has almost identical convergence behavior in this experiment.
We can see that with our algorithms, decentralization and compression
do not hurt the convergence rate.
We then compare the runtime performance. Figure \ref{Fig:TrainingLoss}(b, c, d) demonstrates how the training loss decreases with run time under different network conditions. We use the \texttt{tc} command to change the bandwidth and latency of the underlying network. By default, 1.4 Gbps bandwidth and 0.13 ms latency is the best network condition we can get in this cluster. In this case, all implementations have very similar runtime performance because communication is not the bottleneck of the system. When the latency is high, as shown in Figure~\ref{Fig:TrainingLoss}(c), decentralized algorithms in both low and full precision outperform the Allreduce method because they require fewer communication rounds. In the low-bandwidth case, however, training time is mainly dominated by the amount of communicated data, so the low-precision method is clearly faster than the full-precision methods.
\subsection{Speedup in Diverse Network Conditions}
To better understand the influence of bandwidth and latency on the speedup, we compare the time per epoch under a variety of network conditions. Figure \ref{Fig:EpochTime}(a, b) shows the trend of epoch time as bandwidth decreases from 1.4 Gbps to 5 Mbps. When the latency is low (Figure \ref{Fig:EpochTime}(a)), the low-precision algorithm is faster than its full-precision counterpart because it only needs to exchange around one quarter of the data exchanged by the full-precision method. Note that the full-precision decentralized case has no advantage over Allreduce in this situation, because they exchange exactly the same amount of data. Under the high latency shown in Figure \ref{Fig:EpochTime}(b), both full- and low-precision cases are much better than Allreduce in the beginning, but the full-precision method degrades dramatically as the bandwidth declines.
Figure \ref{Fig:EpochTime}(c, d) shows how latency influences the epoch time under good and bad bandwidth conditions. When bandwidth is not the bottleneck (Figure \ref{Fig:EpochTime}(c)), the decentralized approaches with full and low precision have similar epoch times because they perform the same number of communication rounds. As expected, Allreduce is slower in this case. When the bandwidth is very low (Figure \ref{Fig:EpochTime}(d)), only the decentralized algorithm with low precision achieves the best performance among all implementations.
\begin{figure}
\centering
\includegraphics[scale=0.15]{3.pdf}
\caption{Comparison of Alg. 1 and Alg. 2 }
\label{Fig:4bits}
\end{figure}
\subsection{Discussion}
Our previous experiments validate the efficiency of the decentralized algorithms on 8 nodes with 8 bits. However, we wonder whether we can scale to more nodes or compress the exchanged data even more aggressively. We first conducted experiments on 16 nodes with 8 bits as before. According to Figure \ref{Fig:4bits}(a), Alg. 1 and Alg. 2 on 16 nodes still achieve essentially the same convergence rate as Allreduce, which shows the scalability of our algorithms. However, they cannot match Allreduce with 4 bits, as shown in Figure \ref{Fig:4bits}(b). What is noteworthy is that the two compression approaches behave quite differently at 4 bits. Although Alg. 1 converges much more slowly than Allreduce, its training loss keeps decreasing. Alg. 2, in contrast, diverges at the beginning of training. This observation is consistent with our theoretical analysis.
\section{Conclusion}
In this paper, we studied the problem of combining two tricks
for training distributed stochastic gradient descent under
imperfect network conditions: quantization and decentralization.
We developed two novel algorithms for quantized, decentralized
training, analyzed the theoretical properties of both algorithms,
and empirically studied their performance under various
network conditions. We found that when the underlying communication
network has {\em both} high latency and low bandwidth, the
quantized, decentralized algorithm outperforms other strategies
significantly.
\newpage
\bibliographystyle{abbrvnat}
\section{General bound with compression noise} \label{general}
In this section, to see the influence of the compression more clearly, we prove two general bounds (see Lemma~\ref{lemma_bound_all_X_ave} and Lemma~\ref{lemma:boundfplus}) for compressed D-PSGD following the updating rule \eqref{general_eq}. These bounds will be used in the subsequent proofs for both of our algorithms.
The most challenging part of analyzing a decentralized algorithm, unlike a centralized one, is to ensure that the local model on each node converges to {\color{black} the average value $\overline{X}_t$}. So we start with an analysis of the quantity $\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2$ and its influence on the final convergence rate. For both ECD-PSGD and DCD-PSGD, we are going to prove that
\begin{align*}
\sum_{t=1}^{T}\sum_{i=1}^{n}\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq & \frac{2}{1-\rho^2}\sum_{t=1}^{T}\|Q_t\|^2_F + \frac{2}{(1-\rho)^2}\sum_{t=1}^{T}\gamma_t^2\|G(X_{t};\xi_{t})\|^2_F,
\end{align*}
and
\begin{align*}
\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 + (1-L\gamma_t)\mathbb{E}\|\overline{\nabla f}(X_t)\|^2 \leq & \frac{2}{\gamma_t}\left(\mathbb{E}f(\overline{X}_t) - \mathbb{E}f(\overline{X}_{t+1})\right) +
\frac{L^2}{n}\mathbb{E}\sum_{i=1}^n\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2 \\
& + \frac{L}{\gamma_t}\mathbb{E}\|\overline{Q}_t\|^2 + \frac{L\gamma_t}{n}\sigma^2.
\end{align*}
From the above two inequalities, we can see how the extra noise term slows down the convergence of $\bm{x}_t^{(i)}$ to the average $\overline{X}_t$.
The proof of the general bound for \eqref{general_eq} is divided into two parts. In subsection~\ref{secA1}, we provide a new perspective for understanding decentralization, which simplifies the subsequent proofs. In subsection~\ref{secA2}, we give the detailed proof of the general bound.
\subsection{A more intuitive way to understand decentralization}\label{secA1}
To better understand how decentralized algorithms work, and how a consensus among the local variables on all nodes is ensured, we provide a new perspective on decentralization based on a coordinate transformation, which simplifies the following analysis.
The confusion matrix $W$ is symmetric and doubly stochastic, so it admits the eigendecomposition $W = \sum_{i=1}^{n}\lambda_i\bm{v}^i\left(\bm{v}^i\right)^{\top} = P\Lambda P^{\top}$, where $P = \left(\bm{v}^1,\bm{v}^2,\cdots,\bm{v}^n\right)$ satisfies $P^{\top}P=PP^{\top}=I$. Without loss of generality, we assume $\lambda_1\geq \lambda_2 \geq \cdots \geq \lambda_{n}$. Then we have the following equalities:
\begin{align*}
X_{t+1} = & X_tW-\gamma_tG(X_t;\xi_t) +Q_t ,\\
X_{t+1} = & X_tP\Lambda P^{\top}-\gamma_tG(X_t;\xi_t) +Q_t ,\\
X_{t+1}P = & X_tP\Lambda - \gamma_tG(X_t;\xi_t)P + Q_tP.
\end{align*}
Consider the coordinate transformation using $P$ as the base change matrix, and
denote $Y_t = X_tP$, $H(X_t;\xi_t) = \gamma_tG(X_t;\xi_t)P$, and $R_t = Q_tP$. Then the above equation can be rewritten as
\begin{align*}
Y_{t+1} = & Y_t\Lambda -H(X_t;\xi_t) + R_t,\numberthis\label{Y_t}.
\end{align*}
Since $\Lambda$ is a diagonal matrix, we use $\bm{y}_t^{(i)}$, $\bm{h}_t^{(i)}$, $\bm{r}_t^{(i)}$ to indicate the $i$-th columns of $Y_t$, $H(X_t;\xi_t)$, and $R_t$, respectively. Then \eqref{Y_t} becomes
\begin{align*}
\bm{y}_{t+1}^{(i)} = & \lambda_i\bm{y}_{t}^{(i)} - \bm{h}_t^{(i)} + \bm{r}_t^{(i)}, \quad \forall i \in \{1,\cdots,n\} \numberthis\label{y_t}.
\end{align*}
\eqref{y_t} offers a much more intuitive way to analyze the algorithm. Since all eigenvalues of $W$, except $\lambda_1$, satisfy $|\lambda_i|<1$, the corresponding $\bm{y}_t^{(i)}$ ``decays to zero'' due to the scaling factor $\lambda_i$.
Moreover, since the eigenvector corresponding to $\lambda_1$ is $\frac{1}{\sqrt{n}}(1,1,\cdots,1)^{\top}$, we have $\bm{y}_t^{(1)} = \sqrt{n}\,\overline{X}_t$. So, as $t \to \infty$, intuitively $\bm{y}_t^{(i)}\to \bm{0}$ for $i \neq 1$, hence $Y_t\to (\sqrt{n}\,\overline{X}_t,0,\cdots,0)$ and $X_t\to \overline{X}_t\bm{1}^{\top}$. This whole process shows how the confusion matrix works under the coordinate transformation.
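This intuition is easy to check numerically. The sketch below (assuming a hypothetical symmetric 4-node ring as the confusion matrix; not the topology used in our experiments) verifies that repeated confusion steps kill every coordinate of $Y_t = X_tP$ except the one along $\bm{v}^{(1)}$:

```python
import numpy as np

# Hypothetical 4-node symmetric ring: each node averages with its neighbors.
n = 4
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

lam, P = np.linalg.eigh(W)          # W = P diag(lam) P^T, P orthogonal
order = np.argsort(-lam)            # sort so lambda_1 >= lambda_2 >= ...
lam, P = lam[order], P[:, order]
if P[0, 0] < 0:                     # fix sign so v^(1) = (1/sqrt(n)) * ones
    P[:, 0] = -P[:, 0]

rng = np.random.default_rng(0)
X = rng.standard_normal((3, n))     # local models as columns
Y = X @ P                           # transformed coordinates

# After t confusion steps every y^(i) with |lambda_i| < 1 has decayed,
# so X W^t approaches xbar * 1^T and y^(1) stays equal to sqrt(n) * xbar.
t = 50
Xt = X @ np.linalg.matrix_power(W, t)
xbar = X.mean(axis=1, keepdims=True)
assert np.allclose(Xt, xbar @ np.ones((1, n)), atol=1e-6)
assert np.allclose(Y[:, 0], np.sqrt(n) * xbar[:, 0])
```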
\subsection{Analysis for the general updating form in \eqref{general_eq} }\label{secA2}
\begin{lemma}\label{lemma_bound_substract_mean}
For any matrix $X_t\in \mathbb{R}^{N\times n}$, decompose the confusion matrix $W$ as $W = \sum_{i=1}^n \lambda_i\bm{v}^{(i)}\left(\bm{v}^{(i)}\right)^{\top} = P\Lambda P^{\top}$, where $P = (\bm{v}^{(1)},\bm{v}^{(2)},\cdots,\bm{v}^{(n)})\in \mathbb{R}^{n\times n}$, $\bm{v}^{(i)}$ is the normalized eigenvector of $\lambda_i$, and $\Lambda$ is a diagonal matrix with $\lambda_i$ as its $i$-th diagonal element. We have
\begin{align*}
\sum_{i=1}^n\left\| X_tW^t\bm{e}^{(i)}-X_t\frac{\bm{1}_n}{n} \right\|^2 =& \left\| X_tW^t-X_t\bm{v}^{(1)}\left(\bm{v}^{(1)}\right)^{\top}\right\|^2_F \leq
\left\| \rho^{t}X_t\right\|^2_F,
\end{align*}
where $\rho$ follows the definition in Theorem~\ref{theo_1}.
\end{lemma}
\begin{proof}
Since $W^t = P\Lambda^t P^{\top}$, we have
\begin{align*}
\sum_{i=1}^n\left\| X_tW^t\bm{e}^{(i)}-X_t\frac{\bm{1}_n}{n} \right\|^2 = & \sum_{i=1}^n\left\| \left(X_tW^t-X_t\frac{\bm{1}_n\bm{1}_n^{\top}}{n}\right)\bm{e}^{(i)} \right\|^2\\
= & \left\| X_tW^t-X_t\bm{v}^{(1)}\left(\bm{v}^{(1)}\right)^{\top}\right\|^2_F\\
= & \left\| X_tP\Lambda^tP^{\top}-X_tP\begin{pmatrix}
1,0,\cdots,0\\0,0,\cdots,0\\ \cdots \\0,0,\cdots,0
\end{pmatrix}P^{\top}\right\|^2_F\\
= & \left\| X_tP\Lambda^t-X_tP\begin{pmatrix}
1,0,\cdots,0\\0,0,\cdots,0\\ \cdots \\0,0,\cdots,0
\end{pmatrix}\right\|^2_F\\
= & \left\| X_tP\begin{pmatrix}
&0,&0,&0,&\cdots,&0\\
&0,&\lambda_2^t,&0,&\cdots,&0\\
&0,&0,&\lambda_3^t,&\cdots,&0\\
& \hdotsfor{5}\\
&0,&0,&0,&\cdots,&\lambda_n^t
\end{pmatrix}\right\|^2_F\\
\leq & \left\|\rho^{t}X_tP\right\|^2_F\\
= & \left\|\rho^{t}X_t\right\|^2_F.
\end{align*}
Specifically, when $t=0$, we have
\begin{align*}
\sum_{i=1}^n\left\| X_t\bm{e}^{(i)}-X_t\frac{\bm{1}_n}{n} \right\|^2 = & \left\| X_tP\begin{pmatrix}
&0,&0,&0,&\cdots,&0\\
&0,&1,&0,&\cdots,&0\\
&0,&0,&1,&\cdots,&0\\
& \hdotsfor{5}\\
&0,&0,&0,&\cdots,&1
\end{pmatrix}\right\|^2_F\\
= & \sum_{i=2}^n \left\|\bm{y}_t^{(i)}\right\|^2,\numberthis \label{lemma_bound_substract_mean_eq0}
\end{align*}
where $\bm{y}_t^{(i)} = X_tP\bm{e^{(i)}}$.
\end{proof}
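As a numerical sanity check of this bound (with a hypothetical symmetric doubly stochastic $W$, for which $\rho = 1/2$, rather than the matrix used in our experiments):

```python
import numpy as np

# W = (1/(2n)) * ones + I/2 is symmetric and doubly stochastic, with
# eigenvalues 1 (on the all-ones vector) and 1/2 (elsewhere), so rho = 1/2.
rng = np.random.default_rng(0)
n = 5
W = np.full((n, n), 1 / (2 * n)) + np.eye(n) / 2
lam = np.sort(np.abs(np.linalg.eigvalsh(W)))[::-1]
rho = lam[1]                        # second-largest |eigenvalue|

X = rng.standard_normal((3, n))
avg = X @ np.full((n, n), 1 / n)    # X * (1 1^T / n)
for t in range(6):
    # Check ||X W^t - X 1 1^T / n||_F <= rho^t ||X||_F for small t.
    lhs = np.linalg.norm(X @ np.linalg.matrix_power(W, t) - avg)
    assert lhs <= rho**t * np.linalg.norm(X) + 1e-12
```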
\begin{lemma}\label{lemma1}
Given two non-negative sequences $\{a_t\}_{t=1}^{\infty}$ and $\{b_t\}_{t=1}^{\infty}$ that satisfying
\begin{equation}
a_t = \sum_{s=1}^t\rho^{t-s}b_{s}, \numberthis \label{eqn1}
\end{equation}
with $\rho\in[0,1)$, we have
\begin{align*}
S_k:=\sum_{t=1}^{k}a_t \leq & \sum_{s=1}^k\frac{b_s}{1-\rho},\\
D_k:=\sum_{t=1}^{k}a_t^2 \leq &
\frac{1}{(1-\rho)^2} \sum_{s=1}^kb_s^2.
\end{align*}
\end{lemma}
\begin{proof}
From the definition, we have
\begin{align*}
S_k= & \sum_{t=1}^{k}\sum_{s=1}^t\rho^{t-s}b_{s}
= \sum_{s=1}^{k}\sum_{t=s}^k\rho^{t-s}b_{s}
= \sum_{s=1}^{k}\sum_{t=0}^{k-s}\rho^{t}b_{s}
\leq \sum_{s=1}^{k}{b_{s}\over 1-\rho}, \numberthis \label{eqn3}\\
D_k= & \sum_{t=1}^{k}\sum_{s=1}^t\rho^{t-s}b_{s}\sum_{r=1}^t\rho^{t-r}b_{r}\\
= & \sum_{t=1}^{k}\sum_{s=1}^t\sum_{r=1}^t\rho^{2t-s-r}b_{s}b_{r} \\
\leq & \sum_{t=1}^{k}\sum_{s=1}^t\sum_{r=1}^t\rho^{2t-s-r}{b_{s}^2+b_{r}^2\over2}\\
= & \sum_{t=1}^{k}\sum_{s=1}^t\sum_{r=1}^t\rho^{2t-s-r}b_{s}^2 \\
\leq & {1\over 1-\rho}\sum_{t=1}^{k}\sum_{s=1}^t\rho^{t-s}b_{s}^2\\
\leq & {1\over (1-\rho)^2}\sum_{s=1}^{k}b_{s}^2. \quad \text{(due to \eqref{eqn3})}
\end{align*}
\end{proof}
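A quick numerical check of the two bounds in Lemma~\ref{lemma1}, on a randomly generated non-negative sequence:

```python
import numpy as np

# a_t = sum_{s=1}^t rho^{t-s} b_s; verify sum a_t <= sum b_s / (1 - rho)
# and sum a_t^2 <= sum b_s^2 / (1 - rho)^2, as the lemma states.
rng = np.random.default_rng(1)
rho, k = 0.7, 200
b = rng.random(k)                   # non-negative sequence b_1, ..., b_k
a = np.array([sum(rho**(t - s) * b[s - 1] for s in range(1, t + 1))
              for t in range(1, k + 1)])
assert a.sum() <= b.sum() / (1 - rho) + 1e-9
assert (a**2).sum() <= (b**2).sum() / (1 - rho)**2 + 1e-9
```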
Lemma~\ref{lemma_bound_substract_mean} gives an overall picture of how the confusion matrix works, while Lemma~\ref{lemma1} is an important tool for analyzing the sequences that appear in Lemma~\ref{lemma_bound_substract_mean}. Next we give an upper bound on the difference between the local models and the averaged model.
\begin{lemma}\label{lemma_bound_all_X_ave}
Under Assumption~\ref{ass:global}, we have
\begin{align*}
\sum_{t=1}^{T}\sum_{i=1}^{n}\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq & \frac{2}{1-\rho^2}\sum_{t=1}^{T}\|Q_t\|^2_F + \frac{2}{(1-\rho)^2}\sum_{t=1}^{T}\gamma_t^2\|G(X_{t};\xi_{t})\|^2_F.
\end{align*}
\end{lemma}
\begin{proof}
From the updating rule, we have
\begin{align*}
X_{t} = &\sum_{s=1}^{t-1}\gamma_s G\left(X_s;\xi_s\right)W^{t-s-1} +\sum_{s=1}^{t-1}Q_sW^{t-s},\\
\overline{X}_t = &\sum_{s=1}^{t-1}\gamma_sG\left(X_s;\xi_s\right)W^{t-s-1}\frac{\bm{1}}{n} +\sum_{s=1}^{t-1}Q_sW^{t-s}\frac{\bm{1}}{n}\\
=& \sum_{s=1}^{t-1}\gamma_s \overline{G}\left(X_s;\xi_s\right) +\sum_{s=1}^{t-1}\overline{Q}_s.\quad \text{(due to $W\frac{\bm{1}}{n} = \frac{\bm{1}}{n}$)}
\end{align*}
Therefore it yields
\begin{align*}
&\sum_{i=1}^n\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2\\
= & \sum_{i=1}^n\mathbb{E}\left\|\sum_{s=1}^{t-1}\left(Q_{s}W^{t-s}\bm{e}^{(i)}-\overline{Q}_s\right) - \sum_{s=1}^{t-1}\gamma_s\left( G(X_{s};\xi_{s})W^{t-s-1}\bm{e}^{(i)} - \overline{G}(X_{s};\xi_{s})\right)\right\|^2 \\
\leq & 2\sum_{i=1}^n\mathbb{E}\left\|\sum_{s=1}^{t-1}\left(Q_{s}W^{t-s}\bm{e}^{(i)}-\overline{Q}_s\right)\right\|^2 + 2\sum_{i=1}^n\mathbb{E}\left\|\sum_{s=1}^{t-1}\gamma_s\left( G(X_{s};\xi_{s})W^{t-s-1}\bm{e}^{(i)} - \overline{G}(X_{s};\xi_{s})\right)\right\|^2\\
= & 2\sum_{i=1}^n\mathbb{E}\left\|\sum_{s=1}^{t-1}\left(Q_{s}W^{t-s}\bm{e}^{(i)}-\overline{Q}_s\right)\right\|^2 + 2\sum_{i=1}^n\mathbb{E}\left\|\sum_{s=1}^{t-1}\gamma_s\left( G(X_{s};\xi_{s})W^{t-s-1}\bm{e}^{(i)} - \overline{G}(X_{s};\xi_{s})\right)\right\|^2\\
= & 2\sum_{i=1}^n\sum_{s=1}^{t-1}\mathbb{E}\left\|\left(Q_{s}W^{t-s}\bm{e}^{(i)}-\overline{Q}_s\right)\right\|^2 + 2\sum_{i=1}^n\mathbb{E}\left\|\sum_{s=1}^{t-1}\gamma_s\left( G(X_{s};\xi_{s})W^{t-s-1}\bm{e}^{(i)} - \overline{G}(X_{s};\xi_{s})\right)\right\|^2\\
& + 4\sum_{i=1}^n\sum_{s\neq s'}\mathbb{E}\left\langle \mathbb{E}_{_{Q_s}}Q_{s}W^{t-s}\bm{e}^{(i)}- \mathbb{E}_{_{Q_s}}\overline{Q}_s , \mathbb{E}_{_{Q_{s'}}}Q_{s'}W^{t-s'}e^{(i)}- \mathbb{E}_{_{Q_{s'}}}\overline{Q}_{s'}\right\rangle\\
= & 2\sum_{i=1}^n\sum_{s=1}^{t-1}\mathbb{E}\left\|\left(Q_{s}W^{t-s}\bm{e}^{(i)}-\overline{Q}_s\right)\right\|^2 + 2\sum_{i=1}^n\mathbb{E}\left\|\sum_{s=1}^{t-1}\gamma_s\left( G(X_{s};\xi_{s})W^{t-s-1}\bm{e}^{(i)} - \overline{G}(X_{s};\xi_{s})\right)\right\|^2\\
= & 2\sum_{s=1}^{t-1}\mathbb{E}\left\|\left(Q_{s}W^{t-s}-Q_s\bm{v}_1\bm{v}_1^{\top}\right)\right\|^2_F + 2\mathbb{E}\left\|\sum_{s=1}^{t-1}\gamma_s\left( G(X_{s};\xi_{s})W^{t-s-1} - G(X_{s};\xi_{s})\bm{v}_1\bm{v}_1^{\top}\right)\right\|^2_F\\
\leq & 2\mathbb{E}\sum_{s=1}^{t-1}\left\|\rho^{t-s}Q_s\right\|^2_F + 2\mathbb{E}\left(
\sum_{s=1}^{t-1}\gamma_s\rho^{t-s-1}\left\|G(X_{s};\xi_{s})\right\|_F\right)^2, \quad \text{(due to Lemma~\ref{lemma_bound_substract_mean})}
\end{align*}
We can see that $\sum_{s=1}^{t-1}\rho^{2(t-s)}\left\|Q_s\right\|^2_F$ and $\sum_{s=1}^{t-1}\gamma_s\rho^{t-s-1}\left\|G(X_{s};\xi_{s})\right\|_F$ have the same structure as the sequence in Lemma~\ref{lemma1}, which leads to
\begin{align*}
\sum_{t=1}^{T}\sum_{i=1}^{n}\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq & \frac{2}{1-\rho^2}\mathbb{E}\sum_{t=1}^{T}\|Q_t\|^2_F + \frac{2}{(1-\rho)^2}\mathbb{E}\sum_{t=1}^{T}\gamma_t^2\|G(X_{t};\xi_{t})\|^2_F.
\end{align*}
\end{proof}
\begin{lemma}\label{lemma:boundfplus}
Under Assumption~\ref{ass:global}, we have
\begin{align*}
\frac{\gamma_t}{2}\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 + (\frac{\gamma_t}{2}-\frac{L\gamma_t^2}{2})\mathbb{E}\|\overline{\nabla f}(X_t)\|^2 \leq & \mathbb{E}f(\overline{X}_t) - \mathbb{E}f(\overline{X}_{t+1}) +
\frac{L^2\gamma_t}{2n}\mathbb{E}\sum_{i=1}^n\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2 \\
& + \frac{L}{2}\mathbb{E}\|\overline{Q}_t\|^2 + \frac{L\gamma_t^2}{2n}\sigma^2.\numberthis \label{lemma:boundfplus_eq}
\end{align*}
\begin{proof}
From the updating rule, we have
\begin{align*}
X_{t+1} = \tilde{X}_tW - \gamma_t G(X_t;\xi_t) = X_tW + Q_tW -\gamma_t G(X_t;\xi_t),
\end{align*}
which implies
\begin{align*}
\overline{X}_{t+1} = & \left(X_tW + Q_tW -\gamma_t G(X_t;\xi_t)\right)\frac{\bm{1}}{n}\\
& = \frac{X_t\bm{1}}{n} + \frac{Q_t\bm{1}}{n} -\gamma_t\frac{G(X_t;\xi_t)\bm{1}}{n}\\
& = \overline{X}_t +\overline{Q}_t - \gamma_t \overline{G}(X_t;\xi_t).
\end{align*}
From the Lipschitzian condition for the objective function $f_i$, we know that $f$ also satisfies the Lipschitzian condition. Then we have
\begin{align*}
&\mathbb{E}f(\overline{X}_{t+1})\\
\leq & \mathbb{E}f(\overline{X}_t)+\mathbb{E}\left\langle\nabla f(\overline{X_t}), -\gamma_t\overline{G}(X_t;\xi_t)+\overline{Q}_t\right\rangle + \frac{L}{2}\mathbb{E}\left\|-\gamma_t\overline{G}(X_t;\xi_t) + \overline{Q}_t\right\|^2 \\
= & \mathbb{E}f(\overline{X}_t) + \mathbb{E}\langle\nabla f(\overline{X}_t), -\gamma_t\overline{G}(X_t;\xi_t)+\mathbb{E}_{_{Q_t}}\overline{Q}_t\rangle \\
& + \frac{L}{2}(\mathbb{E}\|\gamma_t\overline{G}(X_t;\xi_t)\|^2 + \mathbb{E}\|\overline{Q}_t\|^2 + 2\mathbb{E}\langle-\gamma_t\overline{G}(X_t;\xi_t),\mathbb{E}_{_{Q_t}}\overline{Q}_t\rangle)\\
= & \mathbb{E}f(\overline{X}_t) + \mathbb{E}\langle\nabla f(\overline{X}_t), -\gamma_t\mathbb{E}_{\xi_t}\overline{G}(X_t;\xi_t)\rangle + \frac{L\gamma_t^2}{2}\mathbb{E}\|\overline{G}(X_t;\xi_t)\|^2 + \frac{L}{2}\mathbb{E}\|\overline{Q}_t\|^2\quad \text{(due to $\mathbb{E}_{Q_t}\overline{Q}_t=\bm{0}$)}\\
= & \mathbb{E}f(\overline{X}_t) - \gamma_t\mathbb{E}\langle\nabla f(\overline{X}_t), \overline{\nabla f} (X_t)\rangle + \frac{L\gamma_t^2}{2}\mathbb{E}\|(\overline{G}(X_t;\xi_t) - \overline{\nabla f} (X_t))+\overline{\nabla f}(X_t)\|^2 + \frac{L}{2}\mathbb{E}\|\overline{Q}_t\|^2\\
= & \mathbb{E}f(\overline{X}_t) - \gamma_t\mathbb{E}\langle\nabla f(\overline{X}_t), \overline{\nabla f} (X_t)\rangle + \frac{L\gamma_t^2}{2}\mathbb{E}\|\overline{G}(X_t;\xi_t) - \overline{\nabla f} (X_t)\|^2 + \frac{L\gamma_t^2}{2}\mathbb{E}\|\overline{\nabla f}(X_t)\|^2 \\
& + L\gamma_t^2\mathbb{E}\langle\mathbb{E}_{\xi_t}\overline{G}(X_t;\xi_t) - \overline{\nabla f}(X_t),\overline{\nabla f}(X_t)\rangle + \frac{L}{2}\mathbb{E}\|\overline{Q}_t\|^2\\
= & \mathbb{E}f(\overline{X}_t) - \gamma_t\mathbb{E}\langle\nabla f(\overline{X}_t), \overline{\nabla f} (X_t)\rangle + \frac{L\gamma_t^2}{2}\mathbb{E}\|\overline{G}(X_t;\xi_t) - \overline{\nabla f} (X_t)\|^2\\
& + \frac{L\gamma_t^2}{2}\mathbb{E}\|\overline{\nabla f}(X_t)\|^2 +
\frac{L}{2}\mathbb{E}\|\overline{Q}_t\|^2\\
= & \mathbb{E}f(\overline{X}_t) - \gamma_t\mathbb{E}\langle\nabla f(\overline{X}_t), \overline{\nabla f} (X_t)\rangle + \frac{L\gamma_t^2}{2n^2}\mathbb{E}\left\|\sum_{i=1}^n\left(\nabla F_i(x_t^{(i)};\xi_t^{(i)}) - \nabla f_i (x_t^{(i)})\right)\right\|^2\\
& + \frac{L\gamma_t^2}{2}\mathbb{E}\|\overline{\nabla f}(X_t)\|^2 +
\frac{L}{2}\mathbb{E}\|\overline{Q}_t\|^2\\
= & \mathbb{E}f(\overline{X}_t) - \gamma_t\mathbb{E}\langle\nabla f(\overline{X}_t), \overline{\nabla f} (X_t)\rangle + \frac{L\gamma_t^2}{2n^2}\sum_{i=1}^n\mathbb{E}\left\|\nabla F_i(x_t^{(i)};\xi_t^{(i)}) - \nabla f_i (x_t^{(i)})\right\|^2\\
& + \sum_{i\neq i'}^n\mathbb{E}\left\langle \mathbb{E}_{\xi_t}\nabla F_i(x_t^{(i)};\xi_t^{(i)}) - \nabla f_i(x_t^{(i)}), \mathbb{E}_{\xi_t}\nabla F_{i'}(x_t^{(i')};\xi_t^{(i')}) - \nabla f_{i'} (x_t^{(i')}) \right\rangle\\
& + \frac{L\gamma_t^2}{2}\mathbb{E}\|\overline{\nabla f}(X_t)\|^2 +
\frac{L}{2}\mathbb{E}\|\overline{Q}_t\|^2\\
\leq & \mathbb{E}f(\overline{X}_t) - \gamma_t\mathbb{E}\langle\nabla f(\overline{X}_t),
\overline{\nabla f}(X_t)\rangle + \frac{L\gamma_t^2}{2n}\sigma^2 + \frac{L\gamma_t^2}{2}\mathbb{E}\|\overline{\nabla f}(X_t)\|^2
+ \frac{L}{2}\mathbb{E}\|\overline{Q}_t\|^2 \\
= & \mathbb{E}f(\overline{X}_t) - \frac{\gamma_t}{2}\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 - \frac{\gamma_t}{2}\mathbb{E}\|\overline{\nabla f}(X_t)\|^2 +
\frac{\gamma_t}{2}\mathbb{E}\|\nabla f(\overline{X}_t) -
\overline{\nabla f}(X_t)\|^2\\
& + \frac{L\gamma_t^2}{2}\mathbb{E}\|\overline{\nabla f}(X_t)\|^2 + \frac{L}{2}\mathbb{E}\|\overline{Q}_t\|^2 + \frac{L\gamma_t^2}{2n}\sigma^2\quad \text{(due to $2\langle \bm{a},\bm{b}\rangle=\|\bm{a}\|^2+\|\bm{b}\|^2-\|\bm{a}-\bm{b}\|^2$)}\\
= & \mathbb{E}f(\overline{X}_t) - \frac{\gamma_t}{2}\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 - (\frac{\gamma_t}{2}-\frac{L\gamma_t^2}{2})\mathbb{E}\|\overline{\nabla f}(X_t)\|^2 +
\frac{\gamma_t}{2}\mathbb{E}\|\nabla f(\overline{X}_t) -
\overline{\nabla f}(X_t)\|^2 \\
& + \frac{L}{2}\mathbb{E}\|\overline{Q}_t\|^2 + \frac{L\gamma_t^2}{2n}\sigma^2. \numberthis \label{lemma:boudnfplus_long}
\end{align*}
To estimate the upper bound for
$\mathbb{E}\|\nabla f(\overline{X}_t) - \overline{\nabla f}(X_t)\|^2$, we have
\begin{align*}
\mathbb{E}\|\nabla f(\overline{X}_t) - \overline{\nabla f}(X_t)\|^2 = & \frac{1}{n^2}\mathbb{E}{\left\|
\sum_{i=1}^n\left(\nabla f_i(\overline{X}_t) - \nabla f_i(\bm{x}_t^{(i)})\right)\right\|^2}\\
\leq & \frac{1}{n}\sum_{i=1}^n\mathbb{E}\left\|\nabla f_i(\overline{X}_t) - \nabla f_i(\bm{x}_t^{(i)})\right\|^2\\
\leq & \frac{L^2}{n}\mathbb{E}\sum_{i=1}^n\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2.\numberthis \label{lemma:boudnfplus_short}
\end{align*}
Combining \eqref{lemma:boudnfplus_long} and \eqref{lemma:boudnfplus_short} together, we have
\begin{align*}
\frac{\gamma_t}{2}\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 + (\frac{\gamma_t}{2}-\frac{L\gamma_t^2}{2})\mathbb{E}\|\overline{\nabla f}(X_t)\|^2 \leq & \mathbb{E}f(\overline{X}_t) - \mathbb{E}f(\overline{X}_{t+1}) +
\frac{L^2\gamma_t}{2n}\mathbb{E}\sum_{i=1}^n\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2 \\
& + \frac{L}{2}\mathbb{E}\|\overline{Q}_t\|^2 + \frac{L\gamma_t^2}{2n}\sigma^2.
\end{align*}
This completes the proof.
\end{proof}
\end{lemma}
\section{Analysis for Algorithm~\ref{alg2}}\label{sec3}
In Algorithm~\ref{alg2}, we have
\begin{align*}
Z_t = X_t(W_t-I) -\gamma F\left(X_t;\xi_t\right).\numberthis \label{global_z_t}
\end{align*}
We will prove that
\begin{align*}
\sum_{t=1}^{T}\mathbb{E}_{q_t}\|Q_t\|^2_F
\leq & 2\alpha^2\left(\frac{2\mu^2(1+2\alpha^2)}{(1-\rho)^2 - 4\mu^2\alpha^2}+1\right)\sum_{t=1}^{T}\gamma_t^2\|G\left(X_t;\xi_t\right)\|^2_F,
\end{align*}
which leads to
\begin{align*}
\sum_{i=1}^{n}\sum_{t=1}^{T}\left(1-3D_1L^2\gamma_t^2\right)^2\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq & 2nD_1(\sigma^2+3\zeta^2)\sum_{t=1}^T\gamma_t^2 + 6nD_1\sum_{t=1}^T\gamma_t^2\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2.\end{align*}
$D_1$, $\mu$ and $\rho$ are defined in Theorem~\ref{theo_2}.
\begin{lemma}\label{lemma_bound_Q_t_alg2}
Under Assumption~\ref{ass:global}, when using Algorithm~\ref{alg2}, we have
\begin{align*}
\sum_{t=1}^{T}\mathbb{E}_{q_t}\|Q_t\|^2_F
\leq & 2\alpha^2\left(\frac{2\mu^2(1+2\alpha^2)}{(1-\rho)^2 - 4\mu^2\alpha^2}+1\right)\sum_{t=1}^{T}\gamma_t^2\|G\left(X_t;\xi_t\right)\|^2_F
\end{align*}
when $(1-\rho)^2 - 4\mu^2\alpha^2>0$, where
\begin{align*}
\mu = & \max_{i\in\{2\cdots n\}}|\lambda_i-1|
\end{align*}
\end{lemma}
\begin{proof}
In the proof below, we use $[A]^{(i,j)}$ to indicate the $(i,j)$ element of matrix $A$.
For the noise induced by quantization, we have
\begin{align*}
\bm{r}_t^{(i)} = R_t\bm{e}^{(i)} = Q_tP\bm{e}^{(i)},
\end{align*}
so
\begin{align*}
\|\bm{r}_t^{(i)}\|^2 = & \bm{e}^{(i)\top}P^{\top}Q_t^{\top}Q_tP\bm{e}^{(i)}\\
= & \left(\bm{v}^{(i)}\right)^{\top}Q_t^{\top}Q_t\bm{v}^{(i)}.
\end{align*}
As for $Q_t^{\top}Q_t$, the expectation of the off-diagonal elements is zero because the compression noise on node $i$ is independent of that on node $j$, which leads to
\begin{align*}
\mathbb{E}_{q_t}\left[Q_t^{\top}Q_t\right]^{(i,j)} = & \mathbb{E}_{q_t}\sum_{k=1}^{N}Q_t^{(k,i)}Q_t^{(k,j)}\\
= & \tau_{ij}\sum_{k=1}^{N}\mathbb{E}_{q_t}\left(Q_t^{(k,i)}\right)^2, \quad \text{(due to $\mathbb{E}_{q_t} Q_t^{(k,i)} = 0$ for $\forall i \in \{1\cdots n\}$)}
\end{align*}
where $\tau_{ij} = 1$ if $i=j$, else $\tau_{ij}=0$.
Then
\begin{align*}
\mathbb{E}_{q_t}\|\bm{r}_t^{(i)}\|^2 = & \mathbb{E}_{q_t}\left(\bm{v}^{(i)}\right)^{\top}Q_t^{\top}Q_t\bm{v}^{(i)}\\
= & \sum_{j=1}^{n}\sum_{k=1}^{N}\left(\bm{v}_j^{(i)}\right)^2\mathbb{E}_{q_t}\left(Q_t^{(k,j)}\right)^2,
\end{align*}
where $\bm{v}_j^{(i)}$ is the $j$th element of $\bm{v}^{(i)}$. So we have
\begin{align*}
\sum_{i=2}^n\mathbb{E}_{q_t}\|\bm{r}_t^{(i)}\|^2 = & \sum_{i=2}^n\sum_{j=1}^{n}\sum_{k=1}^{N}\left(\bm{v}_j^{(i)}\right)^2\mathbb{E}_{q_t}\left(Q_t^{(k,j)}\right)^2\\
\leq & \sum_{i=1}^n\sum_{j=1}^{n}\sum_{k=1}^{N}\left(\bm{v}_j^{(i)}\right)^2\mathbb{E}_{q_t}\left(Q_t^{(k,j)}\right)^2\\
\leq & \sum_{j=1}^{n}\sum_{k=1}^{N}\mathbb{E}_{q_t}\left(Q_t^{(k,j)}\right)^2 \quad \text{(due to $\sum_{i=1}^n\left(\bm{v}_j^{(i)}\right)^2 = 1$)}\\
= & \mathbb{E}_{q_t}\left\|Q_t\right\|^2_F. \numberthis \label{bound_r_t^2}
\end{align*}
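The step in \eqref{bound_r_t^2} only uses that $P$ is orthogonal, so $\|Q_tP\|_F = \|Q_t\|_F$ and dropping the first column can only decrease the sum. A quick check on random data:

```python
import numpy as np

# Orthogonal P preserves the Frobenius norm, so the sum of the squared
# norms of the columns of R = Q P from i = 2 onward is at most ||Q||_F^2.
rng = np.random.default_rng(2)
N, n = 6, 4
Q = rng.standard_normal((N, n))
P, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
R = Q @ P
assert np.isclose(np.linalg.norm(R), np.linalg.norm(Q))
assert (np.linalg.norm(R[:, 1:], axis=0)**2).sum() <= np.linalg.norm(Q)**2 + 1e-12
```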
$\mathbb{E}_{q_t}\left(Q_t^{(k,j)}\right)^2$ is the noise brought by quantization, which satisfies
\begin{align*}
\mathbb{E}_{q_t}\left(Q_t^{(k,j)}\right)^2 \leq & \alpha^2 \left(z^{(k,j)}_t\right)^2\\
= & \alpha^2\left(\left[X_t(W-I)\right]^{(k,j)} + [G\left(X_t;\xi_t\right)]^{(k,j)}\right)^2 \quad \text{(due to \eqref{global_z_t})} \\
\leq & 2\alpha^2\left(\left[X_t(W-I)\right]^{(k,j)}\right)^2 + 2\alpha^2\left([G\left(X_t;\xi_t\right)]^{(k,j)}\right)^2\\
= & 2\alpha^2\left(\left[X_tP(\Lambda-I)P^{\top}\right]^{(k,j)}\right)^2 + 2\alpha^2\left([G\left(X_t;\xi_t\right)]^{(k,j)}\right)^2\\
= & 2\alpha^2\left(\left[Y_t(\Lambda-I)P^{\top}\right]^{(k,j)}\right)^2 + 2\alpha^2\left([G\left(X_t;\xi_t\right)]^{(k,j)}\right)^2,
\end{align*}
then
\begin{align*}
\mathbb{E}_{q_t}\left\|Q_t\right\|^2_F \leq & 2\alpha^2\left\|Y_t(\Lambda-I)P^{\top}\right\|^2_F + 2\alpha^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F\\
= & 2\alpha^2\left\|Y_t(\Lambda-I)\right\|^2_F + 2\alpha^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F\\
= & 2\alpha^2\left\|Y_t \begin{pmatrix}
&0,&0,&\cdots,&0\\
&0,&\lambda_2-1,&\cdots,&0\\
&\hdotsfor{4}\\
&0,&0,&\cdots,&\lambda_n-1
\end{pmatrix}
\right\|^2_F + 2\alpha^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F \\
\leq & 2\alpha^2\sum_{i=2}^n(\lambda_i -1)^2\|\bm{y}_t^{(i)}\|^2 + 2\alpha^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F\\
\leq & 2\alpha^2\mu^2\sum_{i=2}^n\left\|\bm{y}_t^{(i)}\right\|^2 + 2\alpha^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F. \quad \text{(due to $\mu = \max_{i\in\{2\cdots n\}}|\lambda_i-1|$)} \numberthis \label{alg2:Q_t^2}
\end{align*}
Together with \eqref{bound_r_t^2}, it comes to
\begin{align*}
\sum_{i=2}^n\mathbb{E}_{q_t}\left\|\bm{r}_t^{(i)}\right\|^2 \leq & 2\alpha^2\mu^2\sum_{i=2}^n\left\|\bm{y}_t^{(i)}\right\|^2 + 2\alpha^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F. \numberthis \label{bound_r_t^2_second}
\end{align*}
From \eqref{y_t}, we have
\begin{align*}
\bm{y}_t^{(i)} = & \sum_{s=1}^{t-1}\lambda_i^{t-s-1}\left(-\bm{h}_s^{(i)}+\bm{r}_s^{(i)}\right),\\
\left\|\bm{y}_t^{(i)}\right\|^2 \leq & \left(\sum_{s=1}^{t-1}|\lambda_i|^{t-s-1}\left\|-\bm{h}_s^{(i)}+\bm{r}_s^{(i)}\right\|\right)^2.\quad \text{(by the triangle inequality)}
\end{align*}
Denoting $m_t^{(i)} = \sum_{s=1}^{t-1}|\lambda_i|^{t-1-s}\left\|-\bm{h}_s^{(i)}+\bm{r}_s^{(i)}\right\|$, we can see that $m_t^{(i)}$ has the same structure as the sequence in Lemma~\ref{lemma1}. Therefore
\begin{align*}
\sum_{i=2}^n\sum_{t=1}^{T}\left(m_t^{(i)}\right)^2\leq & \sum_{i=2}^n\sum_{t=1}^{T}\frac{1}{(1-|\lambda_i|)^2}\left\|-\bm{h}_t^{(i)}+\bm{r}_t^{(i)}\right\|^2\\
\leq & \sum_{i=2}^n\sum_{t=1}^{T}\frac{2}{(1-|\lambda_i|)^2}\left(\left\|\bm{h}_t^{(i)}\right\|^2 + \left\|\bm{r}_t^{(i)}\right\|^2\right)\\
= & \sum_{i=2}^n\sum_{t=1}^{T}\frac{2}{(1-|\lambda_i|)^2}\left(\gamma_t^2\left\|G\left(X_t;\xi_t\right)\bm{v}^{(i)}\right\|^2+\left\|\bm{r}_t^{(i)}\right\|^2\right)\\
= & \sum_{i=2}^n\sum_{t=1}^{T}\frac{2\gamma_t^2\left\|G\left(X_t;\xi_t\right)\bm{v}^{(i)}\right\|^2}{(1-|\lambda_i|)^2} + \sum_{i=2}^n\sum_{t=1}^{T}\frac{2}{(1-|\lambda_i|)^2}\left\|\bm{r}_t^{(i)}\right\|^2\\
\leq & \frac{2}{(1-\rho)^2}\sum_{i=2}^n\sum_{t=1}^{T}\gamma_t^2\left\|G\left(X_t;\xi_t\right)\bm{v}^{(i)}\right\|^2 + \frac{2}{(1-\rho)^2}\sum_{i=2}^n\sum_{t=1}^{T}\left\|\bm{r}_t^{(i)}\right\|^2 \\
\leq & \frac{2}{(1-\rho)^2}\sum_{t=1}^{T}\gamma_t^2\left\|G\left(X_t;\xi_t\right)P\right\|^2_F + \frac{2}{(1-\rho)^2}\sum_{i=2}^n\sum_{t=1}^{T}\left\|\bm{r}_t^{(i)}\right\|^2 \\
\leq & \frac{2}{(1-\rho)^2}\sum_{t=1}^{T}\gamma_t^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F + \frac{2}{(1-\rho)^2}\sum_{i=2}^n\sum_{t=1}^{T}\left\|\bm{r}_t^{(i)}\right\|^2 .\numberthis \label{bound_y_t_second}
\end{align*}
Combining \eqref{bound_r_t^2_second} and \eqref{bound_y_t_second} together, we have
\begin{align*}
\sum_{i=2}^n\sum_{t=1}^T\left\|\bm{y}_t^{(i)}\right\|^2 \leq\sum_{i=2}^n\sum_{t=1}^{T}\left(m_t^{(i)}\right)^2
\leq & \frac{2(1+2\alpha^2)}{(1-\rho)^2}\sum_{t=1}^{T}\gamma_t^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F + \frac{4\mu^2\alpha^2}{(1-\rho)^2}\sum_{l=2}^n\sum_{t=1}^T\left\|\bm{y}_t^{(l)}\right\|^2,
\end{align*}
It follows that
\begin{align*}
\sum_{i=2}^n\sum_{t=1}^T\left\|\bm{y}_t^{(i)}\right\|^2 \leq & \frac{2(1+2\alpha^2)}{(1-\rho)^2}\sum_{t=1}^{T}\gamma_t^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F + \frac{4\mu^2\alpha^2}{(1-\rho)^2}\sum_{t=1}^T\sum_{l=2}^n\|\bm{y}_t^{(l)}\|^2,\\
\left(1-\frac{4\mu^2\alpha^2}{(1-\rho)^2}\right)\sum_{i=2}^n\sum_{t=1}^T\left\|\bm{y}_t^{(i)}\right\|^2 \leq & \frac{2(1+2\alpha^2)}{(1-\rho)^2}\sum_{t=1}^{T}\gamma_t^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F.
\end{align*}
If $\alpha$ is small enough that satisfies $(1-\rho)^2 - 4\mu^2\alpha^2>0$, then we have
\begin{align*}
\sum_{i=2}^n\sum_{t=1}^T\left\|\bm{y}_t^{(i)}\right\|^2 \leq & \frac{2(1+2\alpha^2)}{(1-\rho)^2 - 4\mu^2\alpha^2}\sum_{t=1}^{T}\gamma_t^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F.\numberthis \label{bound_y_t_third}
\end{align*}
Combining \eqref{bound_y_t_third} with \eqref{alg2:Q_t^2}, we have
\begin{align*}
\sum_{t=1}^{T}\mathbb{E}_{q_t}\|Q_t\|^2_F \leq & 2\alpha^2\mu^2\sum_{t=1}^T\sum_{l=2}^{n}\|\bm{y}_t^{(l)}\|^2 + 2\alpha^2\sum_{t=1}^{T}\gamma_t^2\left\|G\left(X_t;\xi_t\right)\right\|^2_F\\
\leq & 2\alpha^2\left(\frac{2\mu^2(1+2\alpha^2)}{(1-\rho)^2 - 4\mu^2\alpha^2}+1\right)\sum_{t=1}^{T}\gamma_t^2\|G\left(X_t;\xi_t\right)\|^2_F.
\end{align*}
{\color{black} Moreover, setting $\gamma_t = \gamma$ and denoting $2\alpha^2\left(\frac{2\mu^2(1+2\alpha^2)}{(1-\rho)^2 - 4\mu^2\alpha^2}+1\right) = D_2$, we apply Lemma~\ref{lemma3} to bound $\|G\left(X_t;\xi_t\right)\|^2_F$ and obtain}
\begin{align*}
\sum_{t=1}^{T}\mathbb{E}_{q_t}\|Q_t\|^2_F \leq & n\sigma^2D_2\gamma^2T+3L^2D_2\gamma^2\sum_{t=1}^T\sum_{i=1}^n\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2+3n\zeta^2D_2\gamma^2T\\
&+3nD_2\gamma^2\mathbb{E}\sum_{t=1}^T\left\|\nabla f(\overline{X}_t)\right\|^2.\numberthis \label{alg2_mymiss1}
\end{align*}
\end{proof}
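For concreteness, an unbiased stochastic quantizer of the kind assumed in this analysis can be sketched as follows. This is a minimal illustration with a hypothetical fixed grid spacing \texttt{delta}: it demonstrates the unbiasedness $\mathbb{E}_{q_t}Q_t = 0$ and a bounded per-entry variance, whereas the bound used above is the relative one $\mathbb{E}_{q_t}\left(Q_t^{(k,j)}\right)^2 \leq \alpha^2 \left(z_t^{(k,j)}\right)^2$, which a norm-scaled quantizer would provide.

```python
import numpy as np

rng = np.random.default_rng(3)

def stochastic_round(z, delta=0.1):
    # Round each entry of z up or down to the nearest grid point k*delta,
    # with P(round up) chosen so that E[q(z)] = z (unbiased quantization).
    lo = np.floor(z / delta) * delta
    p_up = (z - lo) / delta
    return lo + delta * (rng.random(z.shape) < p_up)

# Empirically check E[Q] = 0 and Var(Q) <= delta^2 / 4 for Q = q(z) - z.
z = rng.standard_normal(5)
samples = np.stack([stochastic_round(z) - z for _ in range(200_000)])
assert np.allclose(samples.mean(axis=0), 0.0, atol=2e-3)   # noise is unbiased
assert (samples.var(axis=0) <= 0.1**2 / 4 + 1e-4).all()    # Var <= delta^2/4
```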
\begin{lemma}\label{lemma_bound_X_t_alg2}
Under Assumption~\ref{ass:global}, we have
\begin{align*}
\sum_{i=1}^{n}\sum_{t=1}^{T}\left(1-3D_1L^2\gamma_t^2\right)^2\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq & 2nD_1(\sigma^2+3\zeta^2)\sum_{t=1}^T\gamma_t^2 + 6nD_1\sum_{t=1}^T\gamma_t^2\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2.
\end{align*}
when $(1-\rho)^2 - 4\mu^2\alpha^2>0$, where
\begin{align*}
\mu = & \max_{l\in\{2\cdots n\}}|\lambda_l-1|\\
\rho = & \max_{l\in\{2\cdots n\}}|\lambda_l|\\
D_1 = & \frac{2\alpha^2}{1-\rho^2}\left(\frac{2\mu^2(1+2\alpha^2)}{(1-\rho)^2 - 4\mu^2\alpha^2}+1\right) + \frac{1}{(1-\rho)^2}.
\end{align*}
\end{lemma}
\begin{proof}
From Lemma~\ref{lemma_bound_all_X_ave}, we have
\begin{align*}
&\sum_{t=1}^{T}\sum_{i=1}^{n}\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2\\
\leq & \frac{2}{1-\rho^2}\sum_{t=1}^{T}\|Q_t\|^2_F + \frac{2}{(1-\rho)^2}\sum_{t=1}^{T}\gamma_t^2\|G(X_{t};\xi_{t})\|^2_F\\
\leq & 2\left(\frac{2\alpha^2}{1-\rho^2}\left(\frac{2\mu^2(1+2\alpha^2)}{(1-\rho)^2 - 4\mu^2\alpha^2}+1\right) + \frac{1}{(1-\rho)^2}\right)\sum_{t=1}^T\gamma_t^2\|G(X_{t};\xi_{t})\|^2. \quad \text{(due to Lemma~\ref{lemma_bound_Q_t_alg2})}
\end{align*}
Denoting $\frac{2\alpha^2}{1-\rho^2}\left(\frac{2\mu^2(1+2\alpha^2)}{(1-\rho)^2 - 4\mu^2\alpha^2}+1\right) + \frac{1}{(1-\rho)^2} = D_1$, from Lemma~\ref{lemma3} we have
\begin{align*}
\sum_{t=1}^{T}\sum_{i=1}^{n}\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq & 2nD_1(\sigma^2+3\zeta^2)\sum_{t=1}^T\gamma_t^2 + 6nD_1\sum_{t=1}^T\gamma_t^2\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2\\
& + 3D_1L^2\sum_{t=1}^T\sum_{i=1}^n\gamma_t^2\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2,\\
\sum_{t=1}^{T}\left(1-3D_1L^2\gamma_t^2\right)^2\sum_{i=1}^{n}\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq & 2nD_1(\sigma^2+3\zeta^2)\sum_{t=1}^T\gamma_t^2 + 6nD_1\sum_{t=1}^T\gamma_t^2\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2.
\end{align*}
If $1-3D_1L^2\gamma_t^2 > 0$, then $\sum_{t=1}^{T}\sum_{i=1}^{n}\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2$ would be bounded.
\end{proof}
{\bf \noindent Proof to Theorem \ref{theo_2}}
\begin{proof}
Combining Lemma~\ref{lemma:boundfplus} and \eqref{alg2_mymiss1}, we have
\begin{align*}
\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 + \left(1-L\gamma\right)\mathbb{E}\|\overline{\nabla f}(X_t)\|^2
\leq & \frac{2}{\gamma}\left(\mathbb{E}f(\overline{X}_t)-f^*-\left(\mathbb{E}f(\overline{X}_{(t+1)})-f^*\right)\right)\\
& + \left(\frac{2L^2}{n} + \frac{3L^3D_2\gamma^2}{2n}\right)\sum_{i=1}^{n}\|\overline{X}_t-x_t^{(i)}\|^2
\\
& + \left(\frac{\gamma^2LD_2}{2} + \frac{L\gamma}{n}\right)T\sigma^2 + \frac{3LD_2\gamma^2\zeta^2T}{2}\\
& + \frac{3LD_2\gamma^2}{2}\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2. \numberthis \label{bound_nabla_f_alg2_second}
\end{align*}
From Lemma~\ref{lemma_bound_X_t_alg2}, we have
\begin{align*}
\left(1-3D_1L^2\gamma^2\right)^2\sum_{t=1}^{T}\sum_{i=1}^{n}\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq & 2nD_1(\sigma^2+3\zeta^2)T\gamma^2 + 6nD_1\gamma^2\sum_{t=1}^T\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2.
\end{align*}
If $\gamma$ is not too large that satisfies $1-3D_1L^2\gamma^2 > 0$, we have
\begin{align*}
\sum_{t=1}^{T}\sum_{i=1}^{n}\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq & \frac{2nD_1(\sigma^2+3\zeta^2)T\gamma^2}{1-3D_1L^2\gamma^2} + \frac{6nD_1\gamma^2}{1-3D_1L^2\gamma^2}\sum_{t=1}^T\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2. \numberthis \label{bound_xmean_alg2}
\end{align*}
Summing both sides of \eqref{bound_nabla_f_alg2_second} over $t$ and applying \eqref{bound_xmean_alg2} yields
\begin{align*}
& \sum_{t=1}^T\left(\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 + \left(1-L\gamma\right)\mathbb{E}\|\overline{\nabla f}(X_t)\|^2\right) \\
\leq & \frac{2(f(0)-f^*)}{\gamma} +
\left(\frac{T\gamma^2LD_2}{2} + \frac{TL\gamma}{n} + \frac{\left(4L^2 + 3L^3D_2\gamma^2\right)D_1T\gamma^2}{1-3D_1L^2\gamma^2}\right)\sigma^2 \\
& + \frac{\left(4L^2 + 3L^3D_2\gamma^2\right)3D_1T\gamma^2}{1-3D_1L^2\gamma^2}\zeta^2 + \frac{3LD_2\gamma^2T}{2}\zeta^2\\
& + \left(\frac{\left(4L^2 + 3L^3D_2\gamma^2\right)3D_1\gamma^2}{1-3D_1L^2\gamma^2} + \frac{3LD_2\gamma^2}{2} \right)\sum_{t=1}^T\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2.
\end{align*}
It implies
\begin{align*}
& \sum_{t=1}^T\left(\left(1-\frac{\left(4L^2 + 3L^3D_2\gamma^2\right)3D_1\gamma^2}{1-3D_1L^2\gamma^2} - \frac{3LD_2\gamma^2}{2}\right)\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 + \left(1-L\gamma\right)\mathbb{E}\|\overline{\nabla f}(X_t)\|^2\right)
\\
\leq & \frac{2(f(0)-f^*)}{\gamma} +
\left(\frac{T\gamma^2LD_2}{2} + \frac{L\gamma T}{n} + \frac{\left(4L^2 + 3L^3D_2\gamma^2\right)D_1T\gamma^2}{1-3D_1L^2\gamma^2}\right)\sigma^2 \\
& + \left( \frac{\left(4L^2 + 3L^3D_2\gamma^2\right)3D_1T\gamma^2}{1-3D_1L^2\gamma^2} + \frac{3LD_2\gamma^2T}{2} \right)\zeta^2.
\end{align*}
Denoting $D_3 = \frac{\left(4L^2 + 3L^3D_2\gamma^2\right)3D_1\gamma^2}{1-3D_1L^2\gamma^2} + \frac{3LD_2\gamma^2}{2}$ and $D_4 = 1-L\gamma$, we have
\begin{align*}
& \sum_{t=1}^T\left(\left(1-D_3\right)\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 + D_4\mathbb{E}\|\overline{\nabla f}(X_t)\|^2\right)
\\
\leq & \frac{2(f(0)-f^*)}{\gamma} +
\left(\frac{T\gamma^2LD_2}{2} + \frac{L\gamma T}{n} + \frac{\left(4L^2 + 3L^3D_2\gamma^2\right)D_1T\gamma^2}{1-3D_1L^2\gamma^2}\right)\sigma^2 \\
& + \left( \frac{\left(4L^2 + 3L^3D_2\gamma^2\right)3D_1T\gamma^2}{1-3D_1L^2\gamma^2} + \frac{3LD_2\gamma^2T}{2} \right)\zeta^2.
\end{align*}
This completes the proof.
\end{proof}
{\bf \noindent Proof of Corollary \ref{cor:convergence_alg2}}
\begin{proof}
Setting $\gamma = \frac{1}{6\sqrt{D_1}L + 6\sqrt{D_2L}+\frac{\sigma}{\sqrt{n}}T^{\frac{1}{2}} + \zeta^{\frac{2}{3}}T^{\frac{1}{3}}}$, we can verify
\begin{align*}
3D_1L^2\gamma^2 \leq & \frac{1}{12}\\
3LD_2\gamma^2 \leq & \frac{1}{12}\\
D_3 \leq & \frac{1}{2}\\
D_4 \geq & 0.
\end{align*}
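These conditions can be verified numerically for any concrete instantiation of the constants; the following sketch (all constants are arbitrary illustrative values, not taken from the paper's experiments) checks them for the step size above:

```python
import math

# Arbitrary illustrative constants (not taken from the paper's experiments).
D1, D2, L = 2.0, 0.5, 3.0
sigma, zeta, n, T = 1.0, 0.8, 8, 10_000

# Step size from the corollary.
gamma = 1.0 / (6 * math.sqrt(D1) * L + 6 * math.sqrt(D2 * L)
               + sigma / math.sqrt(n) * T ** 0.5 + zeta ** (2 / 3) * T ** (1 / 3))

# The four conditions used to simplify the bound.
assert 3 * D1 * L ** 2 * gamma ** 2 <= 1 / 12
assert 3 * L * D2 * gamma ** 2 <= 1 / 12
D3 = ((4 * L ** 2 + 3 * L ** 3 * D2 * gamma ** 2) * 3 * D1 * gamma ** 2
      / (1 - 3 * D1 * L ** 2 * gamma ** 2)) + 1.5 * L * D2 * gamma ** 2
D4 = 1 - L * gamma
assert D3 <= 0.5 and D4 >= 0
```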
So we can drop the $\|\overline{\nabla f}(X_t)\|^2$ term on the LHS and replace $(1-D_3)$ by $\frac{1}{2}$. Therefore \eqref{bound_theo_2} becomes
\begin{align*}
\frac{1}{T}\sum_{t=1}^T\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 \leq & 4(f(0)-f^*)\frac{\sigma}{\sqrt{Tn}} + \frac{4L\sigma}{\sqrt{Tn}} + \frac{\zeta^{\frac{2}{3}}}{T^{\frac{2}{3}}}(4LD_2 + 30L^2D_1 + 4(f(0) - f^*))\\
& + \frac{n\sigma^2}{nD_1L^2 + \sigma^2T}\left(5D_1L^2 + \frac{LD_2}{2}\right) + \frac{4(f(0) - f^*)(6\sqrt{D_1}L + 6\sqrt{D_2L})}{T}.\numberthis\label{alg2_pro_coro_1}
\end{align*}
{\color{black} From Lemma~\ref{lemma_bound_X_t_alg2}, we have
\begin{align*}
\frac{1}{T}\sum_{i=1}^{n}\sum_{t=1}^{T}\left(1-3D_1L^2\gamma^2\right)^2\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq & 2nD_1(\sigma^2+3\zeta^2)\gamma^2 + 6nD_1\gamma^2\frac{1}{T}\sum_{t=1}^T\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2\\
\leq& \frac{2n\sqrt{n}D_1}{T} + \frac{6n\zeta^{\frac{2}{3}}}{T^{\frac{2}{3}}} + 6nD_1\gamma^2\frac{1}{T}\sum_{t=1}^T\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2.\\
\frac{1}{T}\sum_{i=1}^{n}\sum_{t=1}^{T}\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq & \frac{4n\sqrt{n}D_1}{T} + \frac{12n\zeta^{\frac{2}{3}}}{T^{\frac{2}{3}}} + 12nD_1\gamma^2\frac{1}{T}\sum_{t=1}^T\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2.\numberthis\label{alg2_pro_coro_2}
\end{align*}
If $\alpha^2 \leq \min\left\{\frac{(1-\rho)^2}{8\mu^2},\frac{1}{4}\right\}$, then
\begin{align*}
D_2 = & 2\alpha^2\left(\frac{2\mu^2(1+2\alpha^2)}{(1-\rho)^2 - 4\mu^2\alpha^2}+1\right)
\leq \frac{3\mu^2\alpha^2}{(1-\rho)^2} + 2\alpha^2,\\
D_1 = & \frac{D_2}{1-\rho^2} + \frac{1}{(1-\rho)^2}
\leq \frac{D_2 + 1}{(1-\rho)^2}, \quad \text{(due to $\rho < 1$)}
\end{align*}
which means $D_2 = O\left(\alpha^2\right) $ and $D_1 = O\left(\alpha^2 +1\right)$.\\
So \eqref{alg2_pro_coro_1} becomes
\begin{align*}
\frac{1}{T}\sum_{t=1}^T\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 \lesssim & \frac{\sigma}{\sqrt{nT}} + \frac{\zeta^{\frac{2}{3}}(1+\alpha)}{T^{\frac{2}{3}}}+ \frac{1+\alpha^2}{T}.
\end{align*}
Combining the inequality above with \eqref{alg2_pro_coro_2}, we have
\begin{align*}
\frac{1}{T}\sum_{i=1}^{n}\sum_{t=1}^{T}\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\lesssim & \frac{n\zeta^{\frac{2}{3}}}{T^{\frac{2}{3}}} + \frac{n\sqrt{n}(1+\alpha^2)}{T}+ \frac{n(1+\alpha)}{T^2}.\numberthis\label{finalcoro_1}
\end{align*}
}
\end{proof}
\section{Analysis for Algorithm~\ref{alg1}}\label{sec2}
We are going to prove that, by using \eqref{alg1_noisecontrol1} and \eqref{alg1_noisecontrol2} in Algorithm~\ref{alg1}, the compression noise admits the upper bound
\begin{align*}
\mathbb{E}\|Q_t\|^2_F\leq \frac{n\tilde{\sigma}^2}{t}.
\end{align*}
Therefore, combining this with Lemma~\ref{lemma:boundfplus} and Lemma~\ref{lemma_bound_all_X_ave}, we are able to prove the convergence rate for Algorithm~\ref{alg1}. In particular, we obtain
\begin{align*}
\sum_{t=1}^{T}\sum_{i=1}^n\left(1-6C_1L^2\gamma_t^2\right)\mathbb{E}\left\|\overline{X}_t-x_t^{(i)}\right\|^2
\leq &\frac{2n\tilde{\sigma}^2}{1-\rho^2}\log T + 2\left(\sigma^2+3\zeta^2\right)nC_1\sum_{t=1}^{T-1}\gamma_t^2\\
&+6C_1n\sum_{t=1}^{T-1}\mathbb{E}\gamma_t^2\left\|\nabla f(\overline{X}_t)\right\|^2,
\end{align*}
which ensures that all nodes converge to the same value.
\begin{lemma}\label{lemma:add}
For any non-negative sequences $\{a_n\}_{n=1}^{+\infty}$ and $\{b_n\}_{n=1}^{+\infty}$ satisfying
\begin{align*}
& a_t= \left(1-\frac{2}{t}\right)^2a_{t-1}+\frac{4}{t^2}b_t\\
& b_t\leq \frac{\tilde{\sigma}^2}{2}\quad \forall t\in\{1,2,3,\cdots\}\\
& a_1=0,
\end{align*}
we have
\begin{align*}
a_t\leq\frac{\tilde{\sigma}^2}{t}.
\end{align*}
\begin{proof}
We use induction to prove the lemma. The base case holds since $a_1=0\leq \tilde{\sigma}^2$. Suppose the claim holds for some $k\geq 1$, i.e., $a_k\leq\frac{\tilde{\sigma}^2}{k}$. Then
\begin{align*}
a_{k+1}= & \left(1-\frac{2}{k+1}\right)^2a_{k}+\frac{4}{(k+1)^2}b_{k+1}\\
\leq & \left(1-\frac{2}{k+1}\right)^2\frac{\tilde{\sigma}^2}{k}+\frac{2\tilde{\sigma}^2}{(k+1)^2}\\
= & \frac{(k-1)^2}{(k+1)^2}\cdot\frac{\tilde{\sigma}^2}{k} + \frac{2\tilde{\sigma}^2}{(k+1)^2}\\
= & \frac{\tilde{\sigma}^2}{(k+1)^2}\cdot\frac{(k-1)^2+2k}{k}\\
= & \frac{\tilde{\sigma}^2(k^2+1)}{k(k+1)^2}\\
\leq & \frac{\tilde{\sigma}^2}{k+1}. \quad\text{(due to $k^2+1\leq k(k+1)$ for $k\geq 1$)}
\end{align*}
This completes the proof.
\end{proof}
\end{lemma}
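A quick numerical sanity check of the lemma (with an arbitrary value for $\tilde{\sigma}^2$) simulates the recursion with the worst-case $b_t = \tilde{\sigma}^2/2$:

```python
# Simulate the recursion of the lemma with the worst-case b_t = sigma_tilde^2/2
# and confirm a_t <= sigma_tilde^2 / t along the whole trajectory.
sigma_tilde2 = 4.0  # arbitrary illustrative value of \tilde{sigma}^2
a = 0.0             # a_1 = 0
for t in range(2, 10_001):
    a = (1 - 2 / t) ** 2 * a + (4 / t ** 2) * (sigma_tilde2 / 2)
    assert a <= sigma_tilde2 / t + 1e-12
```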
\begin{lemma} \label{lemma3}
Under Assumption \ref{ass:global}, when using Algorithm~\ref{alg1}, we have
\begin{align*}
\mathbb{E}\left\|G(X_t,\xi_t)\right\|^2_F\leq & n\sigma^2+3L^2\sum_{i=1}^n\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2+3n\zeta^2+3n\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2,\\
\mathbb{E}\left\|Q_t\right\|^2_F \leq &\frac{n\tilde{\sigma}^2}{t},\\
\mathbb{E}\left\|\overline{Q}_t\right\|^2_F \leq &\frac{\tilde{\sigma}^2}{nt}.
\end{align*}
\end{lemma}
\begin{proof}
Notice that
\begin{align*}
\mathbb{E}\left\|G(X_t,\xi_t)\right\|^2_F
= \sum_{i=1}^{n}\mathbb{E}\left\|\nabla F_i(x_t^{(i)};\xi_t^{(i)})\right\|^2.
\end{align*}
We next estimate the upper bound of $\mathbb{E}\left\|\nabla F_i(\bm{x}_t^{(i)};\xi_t^{(i)})\right\|^2$ in the following
\begin{align*}
\mathbb{E}\left\|\nabla F_i(\bm{x}_t^{(i)};\xi_t^{(i)})\right\|^2
= & \mathbb{E}\left\|\left(\nabla F_i(\bm{x}_t^{(i)};\xi_t^{(i)})-\nabla f_i(\bm{x}_t^{(i)})\right)+\nabla f_i(\bm{x}_t^{(i)})\right\|^2\\
= & \mathbb{E}\left\|\nabla F_i(\bm{x}_t^{(i)};\xi_t^{(i)})-\nabla f_i(\bm{x}_t^{(i)})\right\|^2+\mathbb{E}\left\|\nabla f_i(\bm{x}_t^{(i)})\right\|^2\\
& + 2\mathbb{E}\left\langle\mathbb{E}_{\xi_t}\nabla F_i(\bm{x}_t^{(i)};\xi_t^{(i)})-\nabla f_i(\bm{x}_t^{(i)}),\nabla f_i(\bm{x}_t^{(i)})\right\rangle\\
= &\mathbb{E}\left\|\nabla F_i(\bm{x}_t^{(i)};\xi_t^{(i)})-\nabla f_i(\bm{x}_t^{(i)})\right\|^2+\mathbb{E}\left\|\nabla f_i(\bm{x}_t^{(i)})\right\|^2\\
\leq & \sigma^2 + \mathbb{E}\left\|\left(\nabla f_i(\bm{x}_t^{(i)})-\nabla f_i(\overline{X}_t)\right)+\left(\nabla f_i(\overline{X}_t)-\nabla f(\overline{X}_t)\right)+\nabla f(\overline{X}_t)\right\|^2\\
\leq & \sigma^2 + 3\mathbb{E}\left\|\nabla f_i(\bm{x}_t^{(i)})-\nabla f_i(\overline{X}_t)\right\|^2 + 3\mathbb{E}\left\|\nabla f_i(\overline{X}_t)-\nabla f(\overline{X}_t)\right\|^2\\
& + 3\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2\\
\leq & \sigma^2+3L^2\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2+3\zeta^2+3\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2,
\end{align*}
which means
\begin{align*}
\mathbb{E}\left\|G(X_t,\xi_t)\right\|^2_F
= \sum_{i=1}^{n}\mathbb{E}\left\|\nabla F_i(\bm{x}_t^{(i)};\xi_t^{(i)})\right\|^2
\leq & n\sigma^2+3L^2\sum_{i=1}^n\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2+3n\zeta^2\\
&+3n\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2.
\end{align*}
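The decomposition above repeatedly uses the elementary inequality $\|a+b+c\|^2 \leq 3\left(\|a\|^2+\|b\|^2+\|c\|^2\right)$, a consequence of Jensen's inequality; a small randomized check (illustrative only, arbitrary dimension and sample count):

```python
import random

# Randomized check of ||a+b+c||^2 <= 3(||a||^2 + ||b||^2 + ||c||^2) in dimension 5.
random.seed(0)
for _ in range(1000):
    a, b, c = ([random.gauss(0, 1) for _ in range(5)] for _ in range(3))
    lhs = sum((ai + bi + ci) ** 2 for ai, bi, ci in zip(a, b, c))
    rhs = 3 * (sum(x * x for x in a) + sum(x * x for x in b) + sum(x * x for x in c))
    assert lhs <= rhs + 1e-9
```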
From \eqref{alg1_noisecontrol2}, we have
\begin{align*}
\tilde{\bm{x}}^{(j)}_t - \bm{x}^{(j)}_t
= & (1-2t^{-1})(\tilde{\bm{x}}^{(j)}_{t-1} - \bm{x}^{(j)}_{t-1}) + 2t^{-1} \bm{q}^{(j)}_t.\numberthis\label{alg1_noise}
\end{align*}
Then we have
\begin{align*}
&\mathbb{E}\|\tilde{\bm{x}}^{(j)}_t - \bm{x}^{(j)}_t\|^2\\
= & \left(1-2t^{-1}\right)^2\mathbb{E}\|\tilde{\bm{x}}^{(j)}_{t-1} - \bm{x}^{(j)}_{t-1}\|^2+4t^{-2}\mathbb{E}\|\bm{q}_t^{(j)}\|^2
+ 2t^{-1}\left(1-2t^{-1}\right)\mathbb{E}\left\langle\tilde{\bm{x}}^{(j)}_{t-1} - \bm{x}^{(j)}_{t-1},\mathbb{E}_{\bm{q}_t^{(j)}}\bm{q}_t^{(j)}\right\rangle.
\end{align*}
Since $\mathbb{E}_{\bm{q}_t^{(j)}}\bm{C}(\bm{z}_t^{(j)}) = \bm{z}_t^{(j)}$, we have $\mathbb{E}_{\bm{q}_t^{(j)}}\bm{q}_t^{(j)} = 0$, and thus
\begin{align*}
\mathbb{E}\|\tilde{\bm{x}}^{(j)}_t - \bm{x}^{(j)}_t\|^2 = \left(1-2t^{-1}\right)^2\mathbb{E}\|\tilde{\bm{x}}^{(j)}_{t-1} - \bm{x}^{(j)}_{t-1}\|^2+4t^{-2}\mathbb{E}\|\bm{q}_t^{(j)}\|^2.\numberthis\label{noise_evalution}
\end{align*}
Meanwhile, \eqref{noise_evalution} indicates that
\begin{align*}
\mathbb{E}\left\|\bm{q}_t^{(i)}\right\|^2 \leq \left(1-\frac{2}{t}\right)^2\mathbb{E}\left\|\bm{q}_{t-1}^{(i)}\right\|^2 + \frac{4}{t^2}\frac{\tilde{\sigma}^2}{2}.\numberthis \label{control_noise_1}
\end{align*}
So applying Lemma~\ref{lemma:add} to \eqref{control_noise_1}, we have
\begin{align*}
\mathbb{E}\left\|\bm{q}_t^{(i)}\right\|^2 \leq \frac{\tilde{\sigma}^2}{t}.
\end{align*}
Therefore
\begin{align*}
\mathbb{E}\left\|Q_t\right\|^2_F = &
\sum_{i=1}^n\mathbb{E}\left\|\bm{q}_t^{(i)}\right\|^2
\leq \frac{n\tilde{\sigma}^2}{t}, \quad \left( \text{due to $\mathbb{E}\left\|\bm{q}_t^{(i)}\right\|^2\leq\frac{\tilde{\sigma}^2}{t}$}\right)\\
\mathbb{E}\left\|\overline{Q}_t\right\|^2_F = &
\frac{1}{n^2}\mathbb{E}\left\|\sum_{i=1}^n\bm{q}_t^{(i)}\right\|^2\\
= & \frac{1}{n^2}\sum_{i=1}^n\mathbb{E}\left\|\bm{q}_t^{(i)}\right\|^2+ \sum_{i\neq i'}^n\mathbb{E}\left\langle\bm{q}_t^{(i)},\bm{q}_t^{(i')}\right\rangle\\
\leq & \frac{\tilde{\sigma}^2}{nt}. \quad\left(\text{due to $\mathbb{E}\bm{q}_t^{(i)}=0$ for $\forall i\in \{1,\cdots,n\}$}\right)
\end{align*}
\end{proof}
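The factor $1/n$ in the last bound stems from the vanishing cross terms of independent zero-mean noise; a Monte Carlo sketch illustrates this (dimension, node count, and sample size are arbitrary):

```python
import random

# Monte Carlo sketch: averaging n independent zero-mean noise vectors shrinks the
# second moment by a factor of n, since the cross terms vanish in expectation.
random.seed(1)
n, trials, d = 16, 20_000, 4
per_node, averaged = 0.0, 0.0
for _ in range(trials):
    q = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]
    per_node += sum(x * x for x in q[0])                     # E||q^{(1)}||^2
    mean = [sum(qi[j] for qi in q) / n for j in range(d)]    # (1/n) sum_i q^{(i)}
    averaged += sum(x * x for x in mean)                     # E||mean||^2
per_node /= trials
averaged /= trials
assert abs(averaged - per_node / n) < 0.05 * per_node / n
```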
\begin{lemma}\label{lemma:boundx}
Under Assumption \ref{ass:global}, when using Algorithm~\ref{alg1}, we have
\begin{align*}
\sum_{t=1}^{T}\sum_{i=1}^n\left(1-6C_1L^2\gamma_t^2\right)\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq &\frac{2n\tilde{\sigma}^2}{1-\rho^2}\log T + 2\left(\sigma^2+3\zeta^2\right)nC_1\sum_{t=1}^{T-1}\gamma_t^2\\
&+6C_1n\sum_{t=1}^{T-1}\mathbb{E}\gamma_t^2\left\|\nabla f(\overline{X}_t)\right\|^2, \numberthis \label{theo:boundxtmode}
\end{align*}
{\color{black} where $C_1$ is defined in Theorem~\ref{theo_1}. }
\end{lemma}
\begin{proof}
From Lemma~\ref{lemma_bound_all_X_ave} and Lemma~\ref{lemma3}, we have
\begin{align*}
\sum_{i=1}^n\sum_{t=1}^T\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2 \leq &
\frac{2n\tilde{\sigma}^2}{1-\rho^2}\sum_{t=1}^{T-1}\frac{1}{t} + 2C_1\sum_{t=1}^{T-1}\gamma_t^2\mathbb{E}\left\|G(X_{t};\xi_{t})\right\|^2\\
\leq & \frac{2n\tilde{\sigma}^2}{1-\rho^2}\log T + 2\left(n\sigma^2+3n\zeta^2\right)C_1\sum_{t=1}^{T-1}\gamma_t^2+6C_1n\sum_{t=1}^{T-1}\gamma_t^2\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2\\
& + 6C_1L^2\sum_{t=1}^{T-1}\sum_{i=1}^n\gamma_t^2\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2. \quad\text{(due to Lemma~\ref{lemma3})}
\end{align*}
Rearranging it, we obtain the following
\begin{align*}
\sum_{t=1}^{T}\sum_{i=1}^n\left(1-6C_1L^2\gamma_t^2\right)\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq &\frac{2n\tilde{\sigma}^2}{1-\rho^2}\log T + 2\left(\sigma^2+3\zeta^2\right)nC_1\sum_{t=1}^{T-1}\gamma_t^2\\
&+6C_1n\sum_{t=1}^{T-1}\mathbb{E}\gamma_t^2\left\|\nabla f(\overline{X}_t)\right\|^2,
\end{align*}
which completes the proof.
\end{proof}
{\bf \noindent Proof of Theorem \ref{theo_1}}
\begin{proof}
Setting $\gamma_t = \gamma$, then from Lemma~\ref{lemma:boundfplus}, we have
\begin{align*}
\mathbb{E}\|\nabla f(\overline{X_t})\|^2 + \left(1-L\gamma\right)\mathbb{E}\|\overline{\nabla f}(X_t)\|^2
\leq & \frac{2}{\gamma}\left(\mathbb{E}f(\overline{X}_t)-f^*-\left(\mathbb{E}f(\overline{X}_{(t+1)})-f^*\right)\right)\\
&+ \frac{L^2}{n}\sum_{i=1}^{n}\|\overline{X}_t-x_t^{(i)}\|^2 + \frac{L\tilde{\sigma}^2}{nt\gamma}+ \frac{L\gamma}{n}\sigma^2. \numberthis\label{boundxplusfinal}
\end{align*}
From Lemma~\ref{lemma:boundx}, we have
\begin{align*}
\sum_{t=1}^{T}\sum_{i=1}^n\left(1-6C_1L^2\gamma^2\right)\mathbb{E}\left\|\overline{X}_t-x_t^{(i)}\right\|^2
\leq &\frac{2n\tilde{\sigma}^2}{1-\rho^2}\log T + 2\left(\sigma^2+3\zeta^2\right)nC_1T\gamma^2\\
&+6C_1n\sum_{t=1}^{T-1}\mathbb{E}\gamma^2\left\|\nabla f(\overline{X}_t)\right\|^2.
\end{align*}
If $\gamma$ is small enough that $1-6C_1L^2\gamma^2 > 0$, we have
\begin{align*}
\sum_{t=1}^{T}\sum_{i=1}^n\mathbb{E}\left\|\overline{X}_t-x_t^{(i)}\right\|^2 \leq
& \frac{1}{1-6C_1L^2\gamma^2}\left( \frac{2n\tilde{\sigma}^2}{1-\rho^2}\log T + 2\left(\sigma^2+3\zeta^2\right)nC_1T\gamma^2\right)\\
&+\frac{6C_1n}{1-6C_1L^2\gamma^2}\sum_{t=1}^{T-1}\mathbb{E}\gamma^2\left\|\nabla f(\overline{X}_t)\right\|^2.\numberthis\label{theo:boundxtfinal}
\end{align*}
Summing \eqref{boundxplusfinal} over $t$ and applying \eqref{theo:boundxtfinal} yields
\begin{align*}
& \sum_{t=1}^T\left(\mathbb{E}\|\nabla f(\overline{X_t})\|^2 + \left(1-L\gamma\right)\mathbb{E}\|\overline{\nabla f}(X_t)\|^2\right) \\
\leq & \frac{2\mathbb{E}f(\overline{X}_1)-2f^*}{\gamma} +\frac{L^2}{n}\sum_{t=1}^{T-1}\sum_{i=1}^n\mathbb{E}\left\|\overline{X}_t-x_t^{(i)}\right\|^2 + \frac{L\log T}{n\gamma}\tilde{\sigma}^2 + \frac{LT\gamma}{n}\sigma^2\\
\leq & \frac{2(f(0)-f^*)}{\gamma} +
\frac{L\log T}{n\gamma}\tilde{\sigma}^2 + \frac{LT\gamma}{n}\sigma^2 + \frac{4C_2\tilde{\sigma}^2L^2}{1-\rho^2}\log T\\
& +4L^2C_2\left(\sigma^2+3\zeta^2\right)C_1T\gamma^2 + 12L^2C_2C_1\sum_{t=1}^{T-1}\mathbb{E}\gamma^2\left\|\nabla f(\overline{X}_t)\right\|^2,
\end{align*}
where $C_2 = \frac{1}{1-6C_1L^2\gamma^2}$. It implies
\begin{align*}
& \sum_{t=1}^T\left(\left(1-C_3\right)\mathbb{E}\|\nabla f(\overline{X}_t)\|^2 + C_4\mathbb{E}\|\overline{\nabla f}(X_t)\|^2\right)
\\ \leq & \frac{2(f(0)-f^*)}{\gamma} +
\frac{L\log T}{n\gamma}\tilde{\sigma}^2 + \frac{LT\gamma}{n}\sigma^2 + \frac{4C_2\tilde{\sigma}^2L^2}{1-\rho^2}\log T + 4L^2C_2\left(\sigma^2+3\zeta^2\right)C_1T\gamma^2,
\end{align*}
where $C_3 = 12L^2C_2C_1\gamma^2$ and $C_4 = \left(1-L\gamma\right)$. This completes the proof.
\end{proof}
{\bf \noindent Proof of \Cref{cor:convergence}}
\begin{proof}
When $\gamma=\frac{1}{12\sqrt{C_1}L+\frac{\sigma}{\sqrt{n}}T^{\frac{1}{2}} + \zeta^{\frac{2}{3}}T^{\frac{1}{3}} }$, we have
\begin{align*}
1 - L\gamma \geq & 0,\\
C_2 = \frac{1}{1-6C_1L^2\gamma^2} < & 2,\\
12L^2C_2C_1\gamma^2 \leq &\frac{1}{2}.
\end{align*}
So we can remove the $\left(1-L\gamma\right)\mathbb{E}\|\overline{\nabla f}(X_t)\|^2$ term on the left side of \eqref{eq_theo_1} and lower-bound $(1-C_3)$ by $\frac{1}{2}$; then we have
\begin{align*}
\sum_{t=1}^T \frac{1}{2}\mathbb{E}\|\nabla f(\overline{X_t})\|^2
\leq & 2(f(0) - f^*)\cdot 6\sqrt{2C_1}L + \frac{2(f(0)-f^*)\sigma}{\sqrt{n}}T^{\frac{1}{2}} + 2(f(0) - f^*)\zeta^{\frac{2}{3}}T^{\frac{1}{3}}\\
& + \frac{6\sqrt{2C_1}L^2\tilde{\sigma}^2}{n}\log T + \frac{L\log T\sigma\tilde{\sigma}^2}{n\sqrt{n}}T^{\frac{1}{2}} + \frac{L\log{T}\tilde{\sigma}^2\zeta^{\frac{2}{3}}}{n}T^{\frac{1}{3}}\\
& + \frac{L\sigma T^{\frac{1}{2}}}{\sqrt{n}} + \frac{8\tilde{\sigma}^2L^2\log T}{1-\rho^2}\\
& + \frac{8nL^2\sigma^2C_1T}{72C_1nL^2+\sigma^2T} + \frac{24nL^2\zeta^2C_1T}{72C_1nL^2+ \sigma^2T + n\zeta^{\frac{4}{3}}T^{\frac{2}{3}}}.
\end{align*}
Dividing both sides by $T/2$ gives
\begin{align*}
\frac{1}{T}\sum_{t=1}^T\mathbb{E}\|\nabla f(\overline{X_t})\|^2
\leq & \sigma\frac{4(f(0)-f^*) + 4L}{\sqrt{nT}} + \zeta^{\frac{2}{3}}\frac{4(f(0)-f^*) + 24L^2C_1}{T^{\frac{2}{3}}} + \tilde{\sigma}^2\frac{10L^2\log{T}}{(1-\rho^2)T}\\
& + \sigma\tilde{\sigma}^2\frac{2L\log{T}}{n\sqrt{nT}} + \zeta^{\frac{2}{3}}\tilde{\sigma}^2\frac{L\log{T}}{nT^{\frac{2}{3}}} + \frac{2(f(0)-f^*)L}{T} + \sigma^2\frac{8nL^2C_1}{72nC_1L^2 + \sigma^2T} ,
\end{align*}
which means
\begin{align*}
\frac{1}{T}\sum_{t=1}^T\mathbb{E}\|\nabla f(\overline{X_t})\|^2
\lesssim & \frac{\sigma}{\sqrt{nT}} + \frac{\zeta^{\frac{2}{3}}}{T^{\frac{
2}{3}}} + \frac{1}{T} + \frac{\tilde{\sigma}^2\sigma\log T}{n\sqrt{nT}}+\frac{\zeta^{\frac{2}{3}}\tilde{\sigma}^2\log{T}}{nT^{\frac{2}{3}}} + \frac{\tilde{\sigma}^2\log T}{T}.\numberthis \label{alg1_coro_1}
\end{align*}
From Lemma~\ref{lemma:boundx}, we have
\begin{align*}
\frac{1}{T}\sum_{t=1}^{T}\sum_{i=1}^n\left(1-6C_1L^2\gamma_t^2\right)\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\leq &\frac{2n\tilde{\sigma}^2\log T}{(1-\rho^2)T} + 2\left(\sigma^2+3\zeta^2\right)nC_1\gamma^2\\
&+6C_1n\gamma^2\frac{1}{T}\sum_{t=1}^{T-1}\mathbb{E}\left\|\nabla f(\overline{X}_t)\right\|^2.
\end{align*}
Combining it with \eqref{alg1_coro_1}, we have
\begin{align*}
\frac{1}{T}\sum_{t=1}^{T}\sum_{i=1}^n\mathbb{E}\left\|\overline{X}_t-\bm{x}_t^{(i)}\right\|^2
\lesssim & \frac{n\tilde{\sigma}^2\log T}{T} + \frac{n\sqrt{n}}{T} + \frac{n\zeta^{\frac{2}{3}}}{T^{\frac{2}{3}}} + \frac{1}{T^2}+ \frac{\tilde{\sigma}^2\sigma\log T}{T^{\frac{3}{2}}}+\frac{\zeta^{\frac{2}{3}}\tilde{\sigma}^2\log{T}}{T^{\frac{5}{3}}}.\numberthis\label{final_coro2}
\end{align*}
\end{proof}
\section{Why a naive combination of compression and D-PSGD does not work}\label{sec:supp:naive}
Consider combining compression with the D-PSGD algorithm. Let the compression of exchanged models $X_t$ be
\[
\tilde{X}_t = C(X_t) = X_t + Q_t,
\]
where $Q_t=[\bm{q}_t^{(1)},\bm{q}_t^{(2)},\cdots,\bm{q}_t^{(n)}]$, and $\bm{q}_t^{(i)}=\tilde{\bm{x}}_t^{(i)}-\bm{x}^{(i)}_t$ is the random noise. Then the update iteration becomes
\begin{align*}
X_{t+1} = &\tilde{X}_t W - \gamma_t G(X_t; \xi_t)\\
= & X_t W + \underbrace{Q_tW}_{\text{does not diminish}}- \gamma_t G(X_t; \xi_t).
\end{align*}
This naive combination does not work because the compression error $Q_t$ does not diminish, unlike the stochastic gradient variance, which can be controlled by choosing $\gamma_t$ either decaying to zero or small enough.
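As a numerical illustration of this effect (a toy sketch, not part of the analysis above; the ring topology, horizon, and noise levels are arbitrary choices), the following simulation compares consensus iterates under constant-variance compression noise with iterates under a decaying variance schedule in the spirit of Algorithm~\ref{alg1}'s noise control:

```python
import random

# Toy simulation: consensus iterates x_{t+1} = W x_t + q_t on a 4-node ring,
# where q_t models the compression noise. With constant noise variance the
# disagreement between nodes never vanishes; with a decaying variance schedule
# it dies out.
random.seed(0)
n, T = 4, 20_000
W = [[0.0] * n for _ in range(n)]
for i in range(n):
    W[i][i] = 0.5
    W[i][(i + 1) % n] = 0.25
    W[i][(i - 1) % n] = 0.25

def disagreement_after(noise_std):
    x = [float(i) for i in range(n)]
    for t in range(1, T + 1):
        x = [sum(W[i][j] * x[j] for j in range(n)) + random.gauss(0, noise_std(t))
             for i in range(n)]
    mean = sum(x) / n
    return sum((xi - mean) ** 2 for xi in x) / n

const = disagreement_after(lambda t: 0.1)             # naive: noise floor remains
decay = disagreement_after(lambda t: 0.1 / t ** 0.5)  # decaying schedule: vanishes
assert decay < const
```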
\section*{Acknowledgment}
The authors gratefully acknowledge financial support by the Central Innovation Programme ZF4086004LP7 of the German Federal Government.
\bibliographystyle{IEEEtran}
\subsection{Identification for control}\label{sec:methods:simultaneous}
Given the optimization framework in Sec. \ref{sec:methods:framework}, the formulation of identification for control is straightforward: the control synthesis objective \eqref{eq:methods:OCOutputCost} replaces the open-loop identification cost \eqref{eq:methods:cost}, while the constraints \eqref{eq:methods:constraint} and \eqref{eq:methods:OCOutputConstraint}, as well as the parameters $\mathcal{P}$, are merged. For linear systems, the optimization problem takes the form
\begin{subequations} \label{eq:methodsD:optimization}
\begin{alignat}{2}
&\!\min_{\mathcal{P}} &\qquad& ||\mathcal{R}_z(t_\infty)||,\\
&\text{subject to} & & \forall k \in [0,\infty]: \forall m: N_k y_{a,m}[k] \leq D_k \vec{\xi}, \label{eq:methodsD:constraint}\\
& & & \forall t: \mathcal{R}_\mathrm{con}(t) \subseteq \mathcal{Y}_\mathrm{c},
\end{alignat}
\begin{equation*}
\mathcal{P} := \lbrace A,B,C,D,E,F,\mathcal{V},\mathcal{W},A_c,B_c,C_c,D_c\rbrace,
\end{equation*}
\end{subequations}
which we solve using standard nonlinear programming algorithms. Although the formulation is straightforward, several points must be considered for the actual application:
\begin{itemize}
\item Constraint \eqref{eq:methodsD:constraint} requires $k$ to go to $t_\infty/t_s$, which can be too large, leading to a high number of linear constraints. During experiments, we discovered that one can choose a $k_\mathrm{max} < t_\infty/t_s$, above which the result of the control synthesis does not change anymore.
\item The set of parameters $\mathcal{P}$ is very large. In practice, we do not optimize all parameters. Rather, we make an educated selection based on engineering knowledge. In Sec. \ref{sec_evaluation}, we will give many hints on what such a choice might look like for robot systems.
\item The chosen order (system dimension) of the plant model can be too low, such that certain dynamic behaviors of the system are not sufficiently covered.
\end{itemize}
The last point is a problem which we frequently encountered: a plant model whose order is too low lets the synthesis algorithm optimize for a controller that would destabilize the plant. There are two reasons for this behavior: 1) the formal explanation is that a reachset conformant plant model does not transfer stability properties of a model to the real plant \cite{Roehm2019}; rather, all unknown dynamics should be reflected within non-deterministic parameters, including the ones which have been excited by the destabilization; and 2) the chosen test cases for identification (see Def. \ref{def:reachsetConformance}) do not sufficiently excite the relevant unknown dynamics.
Regarding the first reason: it is not practical to create a model that covers all dynamics of a physical system, since the synthesis task would become increasingly complex; however, dynamically relevant behavior for the application should be considered. Nonetheless, without experience, it is hard to know a priori which dynamics are relevant. Regarding the second reason: explicitly testing unstable behavior could pose danger for the robotic hardware, as well as the operator.
To address these challenges, we propose an iterative synthesis procedure, which is inspired by the one proposed in \cite{VanDenHof1995}: instead of using one model, we approach the synthesis problem with multiple model candidates of differing model orders. Infeasible model candidates are eliminated after each iteration. From the model candidates that converge to feasible solutions, we choose the best one according to our cost criterion. The iterative process is depicted in Fig. \ref{fig:methods:iterative} and goes as follows for each model candidate:
\begin{figure}
\includegraphics[width=\columnwidth]{figs/iterative.pdf}
\caption{Iterative procedure for simultaneous reachability-based identification and control synthesis. The iteration stops when new test data do not lead to an updated identified model and optimal controller.}
\label{fig:methods:iterative}
\end{figure}
\begin{enumerate}
\item From an initial set of test cases we solve \eqref{eq:methodsD:optimization}.
\item Using this controller, we run the robot and obtain a new set of test data.
\item If the new test data are reachset conformant, then the control synthesis has converged. Otherwise, repeat step 1 including the new data.
\end{enumerate}
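The three steps above can be sketched as the following loop; \texttt{synthesize}, \texttt{run\_experiments}, and \texttt{is\_reachset\_conformant} are hypothetical placeholders for solving \eqref{eq:methodsD:optimization}, collecting data on the robot, and checking reachset conformance of the new data, respectively:

```python
# Schematic sketch of the iterative procedure; the three callables are
# hypothetical placeholders for the actual synthesis, hardware runs, and
# conformance check.
def iterative_synthesis(model, data, synthesize, run_experiments,
                        is_reachset_conformant, max_iter=20):
    for _ in range(max_iter):
        controller = synthesize(model, data)      # step 1: solve the optimization
        new_data = run_experiments(controller)    # step 2: run the robot
        if is_reachset_conformant(model, data, new_data):
            return controller                     # step 3: converged
        data = data + new_data                    # otherwise repeat with new data
    return None  # candidate did not converge; eliminate this model order
```

A model candidate for which the loop returns \texttt{None} is eliminated; among the remaining candidates, the one with the smallest cost is kept.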
\section{Discussion \& Conclusion}\label{sec_discussion}
In the case study, we have shown that reachability-based controller synthesis, together with identification, can be a powerful tool for the formal analysis of various control problems in robotics. We are able to give guarantees on the error of tracking controllers and the estimation errors of observers, and to compute optimal output-feedback controllers minimizing these errors.
The performance of the synthesis greatly depends on the models that have been chosen for the robot. As we examined in the case study, one should model behaviors that are relevant to the synthesis problem, such as delay and observer dynamics. Finding these, however, requires application-related experience. The rewards for accurate modeling are smaller identified non-determinisms, such that more feasible solutions can be found, and a faster convergence of the iterative synthesis approach. The number of iterations also depends on the initial dataset. If only a small amount of data is provided in the beginning, it is much more likely to find non-conforming behavior in the subsequent iterations.
We provide a general formulation of the reachability-based identification and controller synthesis problem, and present a computationally efficient solution for linear systems. We especially make use of the superposition principle to reduce the amount of set inclusion checks and to analyze the tracking error independently from any reference trajectory. The extension to nonlinear and hybrid systems is challenging, since the superposition principle does not generally apply to them. We have focused mostly on computing minimal robust positively invariant sets. Our approach can also be applied to other safe sets, such as the ones shown in \cite{Gruber2020}.
Together with this paper, we provide software tools to replicate the results and to analyze further control problems of linear systems, e.g., other feedback-linearizing controllers of robots. The compositional framework also makes it easy to analyze networked linear systems. In the future, we plan to implement visual tools, such that an identification and synthesis problem can be directly derived and solved from block diagram descriptions of the system.
\section{Reachability-based methods}\label{sec:methods}
This section describes the theoretical contribution, which is our methodology for reachability-based identification and control. These methods share a common optimization framework, which we introduce in Sec. \ref{sec:methods:framework}. We subsequently derive reachset conformant model identification in Sec. \ref{sec_confTest}, reachability-based control synthesis in Sec. \ref{sec:methods:controlSynthesis}, and the combination of both in Sec. \ref{sec:methods:simultaneous}.
\subsection{Reachability-based optimization framework}\label{sec:methods:framework}
In our previous work \cite{Schuermann2017b}, we showed how we can obtain a safe controller by optimizing over the reachable sets of the closed-loop dynamics. We extend this idea to more general system structures. As we will see, all problems at hand, i.e., reachset conformant identification, controller synthesis, and the combination of both, can be solved by optimizing over reachable sets.
To do so, we consider the interconnected system, e.g., the closed-loop system resulting from the plant in combination with the controller, and compute its reachable sets $\mathcal{R}$. In all cases, we obtain a linear system with variable parameters, e.g., unknown model or controller parameters. Denoting this parameter set as $ \mathcal{P}, $ the optimization problem in its general form can be written as
\begin{subequations}
\begin{alignat}{2}
&\!\min_{\mathcal{P}} &\qquad& \texttt{cost}(\mathcal{R}\label{eq:methods:costgeneral}),\\
&\text{subject to} & & \texttt{constraints}(\mathcal{R}).\label{eq:methods:constraintgeneral}
\end{alignat}
\end{subequations}
The cost and constraints functions both depend on the reachable set or projections of the reachable set on subspaces and are going to be defined in detail in the following subsections. Depending on the type of parameters, the cost and constraint functions might become nonlinear and nonconvex. In this case we can use nonlinear programming to obtain optimal parameters.
Overall, we solve all problems at hand with the same set of optimization techniques, while the incorporation of reachability analysis in the synthesis provides formal guarantees and ensures constraint satisfaction despite the presence of disturbances and uncertain measurements.
\subsection{Reachability-based controller synthesis}\label{sec:methods:controlSynthesis}
We consider a linear time-invariant controller system
\begin{align} \label{eq:methods:controller}
\begin{split}
\dot{\vec{x}}_c(t) &= A_c \vec{x}_c(t) + B_c \vec{u}_c(t),\\
\vec{y}_c(t) &= C_c \vec{x}_c(t) + D_c \vec{u}_c(t),
\end{split}
\end{align}
with subscript $c$. We denote the plant with subscript $p$ and the closed-loop linear system with subscript $z$ (see Fig. \ref{fig:methods:controllerSynthesis}). State $ \vec{x}_c $ describes the internal state of the controller, output $ \vec{y}_c $ is connected to the input of the plant, and input $ \vec{u}_c $ accepts feedback from the plant. We consider the problem of synthesizing a controller that lets the output of the closed-loop system $\vec{y}_z(t)$ optimally track the reference $\vec{y}_\mathrm{ref}(t)$, while formally guaranteeing the satisfaction of state and input constraints.
\begin{figure}
\includegraphics[width=\columnwidth]{figs/controllerSynthesis.pdf}
\caption{Plant and controller are subsystems of the closed-loop system. The interconnection of $\vec{y}_\mathrm{ref}, \vec{u}_c, \vec{y}_c, \vec{y}_z$ depends on the application.}
\label{fig:methods:controllerSynthesis}
\end{figure}
To do so, we use techniques from \cite{Schuermann2017b,Schuermann2021a}, where we combine the controller synthesis with reachable set computation in a single optimization problem in the form of \eqref{eq:methods:costgeneral}--\eqref{eq:methods:constraintgeneral}.
We use the superposition principle \cite{Schuermann2017b} to separate the problems of generating the reference input $\vec{u}_\mathrm{ref}(t)$, and synthesizing the optimal disturbance rejection provided by $\vec{y}_c(t)$, by defining the plant input as
\begin{align}
\vec{u}_p(t):=\vec{u}_\mathrm{ref}(t) + \vec{y}_c(t),
\end{align}
which, in turn, means that the output of the closed-loop system will be the sum of the reference and the tracking error $\vec{y}_e(t)$:
\begin{align}
\vec{y}_z(t)=\vec{y}_\mathrm{ref}(t) + \vec{y}_e(t).
\end{align}
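The superposition argument can be illustrated with a toy discrete-time example (arbitrary scalar dynamics and inputs, for illustration only): the response of a linear system to $\vec{u}_\mathrm{ref} + \vec{y}_c$ from an initial state equals the response to $\vec{u}_\mathrm{ref}$ from that state plus the response to $\vec{y}_c$ from the origin.

```python
# Toy sketch of superposition for x_{k+1} = A x_k + B u_k: the response to
# u_ref + y_c from x0 equals the response to u_ref from x0 plus the response
# to y_c from the origin. All numbers are arbitrary.
A, B = 0.9, 0.5

def simulate(x0, u):
    x, traj = x0, []
    for uk in u:
        x = A * x + B * uk
        traj.append(x)
    return traj

u_ref = [1.0, -0.5, 0.2, 0.0]
y_c   = [0.1, 0.3, -0.2, 0.4]
full = simulate(2.0, [ur + yc for ur, yc in zip(u_ref, y_c)])
ref  = simulate(2.0, u_ref)
err  = simulate(0.0, y_c)
assert all(abs(f - (r + e)) < 1e-12 for f, r, e in zip(full, ref, err))
```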
In the following, we discuss our choice of parameters, constraint, and cost function.
\paragraph*{Parameters} The controller is parametrized through the choice of controller matrices $ A_c, B_c, C_c, $ and $ D_c $. Note that a fixed state feedback of the form $ \vec{u}(\vec{x})=K \vec{x} $, as regarded in our previous work in \cite{Schuermann2017b}, is a special case of \eqref{eq:methods:controller}, with $ A_c=B_c=C_c=0 $, $ D_c=K $, and $C=I$ for the plant.
\paragraph*{Cost} Analogously to \cite{Schuermann2017b}, we use the norm of the final reachable set $ ||\mathcal{R}_z(t_\infty)|| $ of the closed-loop system as a cost function for the optimization problem. To obtain the final reachable set $ \mathcal{R}_z(t_\infty) $, we compute the reachable set starting from a small initial set until it converges, i.e., until $ \mathcal{R}_z(t+\Delta t) \subseteq \mathcal{R}_z(t) $ for some $ \Delta t \ge 0. $ In this case, the reachable set has converged, and since we consider time-invariant systems, any future reachable sets will remain in $ \mathcal{R}_z(t) $. By setting $\vec{u}_\mathrm{ref}(t) := 0$, which implies $\vec{y}_\mathrm{ref}(t)=0$, the set $\mathcal{R}_z(t_\infty)$ corresponds to the minimal robust positively invariant set \cite{Gruber2020} of the tracking error. Note that, depending on the stability of the system and the numerical computation, this convergence might not happen; therefore, we apply a convergence tolerance criterion similar to the one in \cite{Gruber2020} to terminate the computation.
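For intuition, consider a scalar stable system $x^{+} = a x + w$ with $|w| \leq \bar{w}$: the interval reachable sets converge geometrically to the minimal robust positively invariant interval $[-\bar{w}/(1-|a|),\, \bar{w}/(1-|a|)]$. A minimal sketch of the fixed-point iteration with a convergence tolerance (arbitrary constants):

```python
# Fixed-point iteration for the minimal robust positively invariant set of the
# scalar system x+ = a x + w with |w| <= w_bar; the reachable set is the
# interval [-r, r], which converges to r = w_bar / (1 - |a|).
a, w_bar = 0.8, 0.5
r, tol = 0.0, 1e-9
while True:
    r_next = abs(a) * r + w_bar   # propagate the interval one step
    if abs(r_next - r) <= tol:    # convergence criterion R(t + dt) ~ R(t)
        break
    r = r_next
assert abs(r - w_bar / (1 - abs(a))) < 1e-6
```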
\paragraph*{Constraints}
We consider constraints sets $\mathcal{U}_p$ and $\mathcal{X}_p$ for the plant input and state by combining them into an output constraint set $\mathcal{Y}_\mathrm{con} = \mathcal{U}_p \times \mathcal{X}_p$, and provide an output signal $\vec{y}_\mathrm{con}(t) = [\vec{u}_p(t),\vec{x}_p(t)]^T$, such that
\begin{align}
\vec{y}_\mathrm{con}(t) &\in \mathcal{Y}_\mathrm{con},~\forall t \in \mathbb{R}^+_0.\label{eq:methods:OutputConstraint}
\end{align}
In order to ensure that the constraints are satisfied despite the decoupled synthesis of the tracking controller and reference trajectory, we divide the input constraints into two parts $ \mathcal{U}_\mathrm{ref} $ and $ \mathcal{Y}_{c} $ for the reference trajectory and for the tracking controller, respectively. We choose these sets such that
\begin{align}\label{eq:methodsC:referenceSet}
\mathcal{U}_\mathrm{ref} \oplus \mathcal{Y}_{c} \subseteq \mathcal{U}_p.
\end{align}
For simpler computation, we choose $ \mathcal{Y}_{c} $ as a polytope.
Notice that, in order to analyze the reachability of $\vec{x}_p$, the plant model must also be reachset conformant with respect to $\vec{x}_p$. The identification in Sec. \ref{sec_confTest} can be straightforwardly extended by considering $\vec{x}_p$ as additional plant model outputs and by extending the output measurements $\vec{y}_m$ (see Def. \ref{def:reachsetConformance}) with an estimate of the states, e.g., through observers. However, due to the estimation errors, we introduce additional conservatism into the identified model, as can be seen in \eqref{eq:methods:estimError}. Therefore, $\vec{y}_\mathrm{con}$ should only include the elements of $\vec{x}_p$ that are relevant for state constraints.
We formulate the resulting optimal control problem
\begin{subequations}
\begin{alignat}{2}
&\!\min_{A_c, B_c, C_c, D_c} &\qquad& ||\mathcal{R}_z(t_\infty)||,\label{eq:methods:OCOutputCost}\\
&\text{subject to} & & \forall t: \mathcal{R}_\mathrm{con}(t) \subseteq \mathcal{Y}_c.\label{eq:methods:OCOutputConstraint}
\end{alignat}
\end{subequations}
Since $ \mathcal{R}_z $ and $ \mathcal{R}_\mathrm{con}(t) $ are zonotopes, checking the constraint \eqref{eq:methods:OCOutputConstraint} only requires checking whether a zonotope is contained in a polytope. As shown in \cite{Schuermann2021a}, this can be computed very efficiently.
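The containment check admits a simple closed form: a zonotope lies inside a polytope iff, for every halfspace, the support value of the zonotope does not exceed the halfspace offset. A hedged sketch (the helper name is hypothetical, and this is not the implementation of \cite{Schuermann2021a}):

```python
import numpy as np

def zonotope_in_polytope(c, G, N, d):
    """Check (c, G) ⊆ {x : N x <= d}: for each halfspace with normal n,
    the support value n^T c + Σ_j |n^T g_j| must not exceed the offset."""
    support = N @ c + np.sum(np.abs(N @ G), axis=1)
    return bool(np.all(support <= d))
```

Each row check costs one matrix-vector product, which is why the test scales well with the number of generators.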
In contrast to identification, optimizing the feedback matrices, which are multiplied with the output, can no longer be expressed as a linear program. To be formally safe, we also consider time-varying disturbances when computing an over-approximative reachable set during the optimization, which prevents us from using under-approximations such as \eqref{eq:methods:reachableSetRecursive}, see Lemma~\ref{lemma:constantInput}. As discussed in \cite{Schuermann2017b}, the resulting optimization problem can be solved using standard nonlinear programming techniques.
\subsection{Reachset conformant model identification}\label{sec_confTest}
Verifying reachability-based properties of a robot requires a reachset conformant model. We apply the definition of \cite{Roehm2019} to measurable physical systems:
\begin{definition}[Reachset conformance]\label{def:reachsetConformance}
Given is a physical system and its model. From the physical system, we perform a series of test cases, where the $m$-th case consists of the input $u_m(t)$, an initial state $x(0)$, and the measured output $y_m(t)$, where $t \in [0,t^*]$.
From the model, we compute the reachable set $\mathcal{R}^{(m)}(t)$ for each $u_m(t)$ and $x(0)$.
The model is reachset conformant in the time interval $[0,t^*]$, iff
\begin{equation*}
\forall m: \forall t \in [0,t^*]: y_m(t) \subseteq \mathcal{R}^{(m)}(t),
\end{equation*}
which is a set inclusion problem.
\end{definition}
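For zonotopic reachable sets, the per-time-step membership test of Def. \ref{def:reachsetConformance} reduces to a linear feasibility problem: $y$ lies in the zonotope $(\vec{c}, G)$ iff some $\vec{\beta}$ with $\|\vec{\beta}\|_\infty \le 1$ satisfies $\vec{c} + G\vec{\beta} = y$. A sketch assuming SciPy is available (the helper name is illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def in_zonotope(y, c, G):
    """Membership y ∈ (c, G): feasibility of c + G beta = y with |beta_j| <= 1,
    solved as a linear program with a zero objective."""
    p = G.shape[1]
    res = linprog(np.zeros(p), A_eq=G, b_eq=np.asarray(y) - np.asarray(c),
                  bounds=[(-1.0, 1.0)] * p, method="highs")
    return res.status == 0   # status 0: a feasible beta exists
```

Running this check for every test case $m$ and time step yields the conformance verdict of the definition.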
The task of model identification is thus to find an optimal set of model parameters $\mathcal{P}$ such that reachset conformance is preserved. For the general open-loop identification problem, we propose to minimize the norm of the reachable set integrated over $t \in [0,t^*]$ and summed over all test cases $m$:
\begin{subequations}
\begin{alignat}{2}
&\!\min_{\mathcal{P}} &\qquad& \sum_m \int_0^{t^*}||\mathcal{R}^{(m)}(t)|| dt,\\
&\text{subject to} & & \forall m: \forall t: y_m(t) \subseteq \mathcal{R}^{(m)}(t) .
\end{alignat}
\end{subequations}
This general formulation is applicable to nonlinear systems. For the remainder of this subsection, we derive a version for linear systems with $\mathcal{P} = \lbrace A,B,C,D,E,F,\mathcal{V},\mathcal{W}\rbrace$ that is much more computationally efficient to solve. First, we show that we can remove the sum $\sum_m$ and the quantifier $\forall m$ for linear systems by using the superposition principle. We subtract the nominal output solution
\begin{equation}
y_m^*[k] := C \left(\tilde{A}^k x[0] + \sum_{i=0}^{k-1}\tilde{A}^i \tilde{B} u_m[i]\right) + D u_m[k],
\end{equation}
which is \eqref{eq:methods:discreteSystem} excluding the non-deterministic parameters, from the reachable set defined in \eqref{eq:methods:reachableSet}:
\begin{align*}
\mathcal{R}_a[k] &:= \mathcal{R}^{(m)}[k] - y_m^*[k] = \bigoplus_{i=0}^{k-1}\bar{E}_i\mathcal{W}
\oplus F\mathcal{V},
\end{align*}
where $\bar{E}_i = C \tilde{A}^{i}\tilde{E}$. We define the non-deterministic parameters as zonotopes $\mathcal{V} := (\vec{c}_V,G_V)$ and $\mathcal{W} := (\vec{c}_W,G_W)$, such that $\mathcal{R}_a[k]$ has a closed-form solution:
\begin{gather}
\mathcal{R}_a[k] = (\vec{c}_k,G_k), \vec{c}_k := \begin{bmatrix} \sum_{i=0}^{k-1}\bar{E}_i & F \end{bmatrix} \begin{bmatrix} \vec{c}_W \\ \vec{c}_V \end{bmatrix},\label{eq:methods:reachA}\\
G_k := \begin{bmatrix} \bar{E}_0 G_W & \dots & \bar{E}_{k-1} G_W & F G_V \end{bmatrix}.
\end{gather}
When applying Def. \ref{def:zonotopeNorm}, we immediately see that the zonotope norm $||\mathcal{R}_a[k]|| = ||\mathcal{R}^{(m)}[k]||$ is independent of $m$ for linear systems.
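The closed-form center and generators of $\mathcal{R}_a[k]$ in \eqref{eq:methods:reachA} can be assembled directly. A sketch under the stated linear-system assumptions (function and argument names are illustrative):

```python
import numpy as np

def reach_a(C, A_t, E_t, F, cW, GW, cV, GV, k):
    """Zonotope R_a[k] = ⊕_{i<k} Ē_i W ⊕ F V with Ē_i = C Ã^i Ẽ;
    returns the center c_k and generator matrix G_k."""
    Ebar = [C @ np.linalg.matrix_power(A_t, i) @ E_t for i in range(k)]
    c_k = F @ cV + sum((Eb @ cW for Eb in Ebar), start=np.zeros(F.shape[0]))
    G_k = np.hstack([Eb @ GW for Eb in Ebar] + [F @ GV])
    return c_k, G_k
```

Note that the generator matrix grows linearly with $k$, which is harmless here since only its column-wise absolute sums enter the norm and constraints.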
Also using the superposition principle, we subtract $y_m^*[k]$ from the measurement $y_m[k]$, such that for each test case, $y_{a,m}[k] := y_m[k] - y_m^*[k]$ and the following holds for linear systems:
\begin{align*}
\forall m: \forall k: y_m[k] \subseteq \mathcal{R}^{(m)}[k] &\\
\iff &\forall m: \forall k: y_{a,m}[k] \subseteq \mathcal{R}_{a}[k] \\
\iff &\forall k: \bigcup_m y_{a,m}[k] \subseteq \mathcal{R}_{a}[k].
\end{align*}
Thus, we formulate the open-loop identification problem for linear systems
\begin{subequations}
\begin{alignat}{2}
&\!\min_{A,B,C,D,E,F,\mathcal{V},\mathcal{W}} &\qquad& \int_0^{t^*} ||\mathcal{R}_a(t)|| \, dt,\label{eq:methods:cost}\\
&\text{subject to} & & \forall k: \bigcup_m y_{a,m}[k] \subseteq \mathcal{R}_{a}[k] .\label{eq:methods:constraint}
\end{alignat}
\end{subequations}
The following two lemmas present the cost and constraint functions for the above optimization problem, which then result in Theorems \ref{theorem:linearSystemUnc} and \ref{theorem:linearSystem}.
\begin{lemma}\label{lemma:linearCost}
The cost \eqref{eq:methods:cost} for the identification of linear systems is linear in the scaling parameters $\alpha_W$ and $\alpha_V$ of the zonotopic non-determinisms $\mathcal{W}, \mathcal{V}$:
\begin{gather}\label{eq:methods:linearCost}
\int_0^{t^*}||\mathcal{R}_a(t)||dt = \vec{\gamma} \begin{bmatrix} \alpha_W\\ \alpha_V \end{bmatrix} \\
\vec{\gamma} := \vec{1}^T \sum_{k=0}^{a} t_s \begin{bmatrix}\sum_{i=0}^{k-1} \left|\bar{E}_i G'_W\right|, & |F G'_V|\end{bmatrix},
\end{gather}
where $\vec{1}$ is a vector of ones and $a = t^*/t_s$. Recall that we use the notation $G := G'\diag(\vec{\alpha})$ here (see Def. \ref{def:zonotopeG}).
\end{lemma}
\begin{proof}
To compute the norm (see Def. \ref{def:zonotopeNorm}), we only require the generators of $\mathcal{R}_a[k]$. Thus, for discrete-time linear systems, $\int_0^{t^*}||\mathcal{R}_a(t)||dt =\sum_{k}t_s||\mathcal{R}_a[k]||$. Each side length of $\mathcal{I}(\mathcal{R}_a[k])$ according to Def. \ref{def:intervalHull} is
\begin{align*}
\vec{\delta g}_k &= \left|\begin{bmatrix} \bar{E}_0 G_W & \dots & \bar{E}_{k-1} G_W & F G_V \end{bmatrix}\right|\vec{1} \\
&= \left|\begin{bmatrix} \bar{E}_0 G'_W & \dots & \bar{E}_{k-1} G'_W & F G'_V \end{bmatrix}\right|\begin{bmatrix} \alpha_W \\ \dots \\ \alpha_W \\ \alpha_V \end{bmatrix} \\
&= \begin{bmatrix} \sum_{i=0}^{k-1} \left|\bar{E}_i G'_W\right| & \left|F G'_V\right| \end{bmatrix}\begin{bmatrix} \alpha_W \\ \alpha_V \end{bmatrix}.
\end{align*}
With $||\vec{\delta g}_k||_1 := \vec{1}^T \vec{\delta g}_k$, we obtain $\vec{\gamma}\begin{bmatrix} \alpha_W \\ \alpha_V \end{bmatrix}$ by evaluating $\sum_{k} t_s||\mathcal{R}_a[k]|| = \sum_k t_s \vec{1}^T \vec{\delta g}_k$.
\end{proof}
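Lemma \ref{lemma:linearCost} makes the cost a precomputable linear function of the scalings. A numerical sketch of the weight vector $\vec{\gamma}$ (illustrative names, not the accompanying toolbox code; the measurement column is accumulated over all time steps, matching the integrated norm in the proof):

```python
import numpy as np

def gamma_vec(C, A_t, E_t, F, GWp, GVp, t_s, a):
    """Weights γ such that Σ_k t_s·||R_a[k]|| = γ @ [αW, αV] (box norm)."""
    gW = np.zeros(GWp.shape[1])
    gV = np.zeros(GVp.shape[1])
    accW = np.zeros((C.shape[0], GWp.shape[1]))
    FV = np.abs(F @ GVp).sum(axis=0)          # |F G'_V|, column-summed over outputs
    for k in range(a + 1):
        if k > 0:
            Ebar = C @ np.linalg.matrix_power(A_t, k - 1) @ E_t
            accW = accW + np.abs(Ebar @ GWp)  # running Σ_{i<k} |Ē_i G'_W|
        gW += t_s * accW.sum(axis=0)
        gV += t_s * FV
    return np.concatenate([gW, gV])
```

Once $\vec{\gamma}$ is computed, evaluating the cost for any candidate $\vec{\alpha}_W, \vec{\alpha}_V$ is a single dot product, which is what makes the inner identification a linear program.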
\begin{lemma}\label{lemma:linearConstraint}
The constraint \eqref{eq:methods:constraint} for the identification of linear systems is linear in $\vec{\xi} = [\vec{c}_W,\vec{c}_V,\vec{\alpha}_W,\vec{\alpha}_V]^T$, if we use the halfspace representation of $\mathcal{R}_a[k]$:
\begin{equation}\label{eq:methods:linearConstraint}
\forall k \in \left[0,\frac{t^*}{t_s}\right]: \forall m: N_k y_{a,m}[k] \leq D_k \vec{\xi},
\end{equation}
where the $j$-th rows of $N_k$ and $D_k$ describe the facets of the zonotope $\mathcal{R}_a[k]$, s.t.
\begin{align}
\vec{n}_{j,k} &= \nX ({G'_k}^{\langle\gamma,\dots,\eta\rangle})^T/ ||\nX ({G'_k}^{\langle\gamma,\dots,\eta\rangle})||_2. \label{eq:methods:linearConstraint1}\\
\begin{split}
\vec{d}_{j,k}^+ &=
\Big[\begin{matrix} \sum_{i=0}^{k-1} \vec{n}_{j,k}^+ \bar{E}_i & \vec{n}_{j,k}^+ F \end{matrix} \\
&\qquad\qquad \begin{matrix} \sum_{i=0}^{k-1} |\vec{n}_{j,k}^+ \bar{E}_i G'_W|& |\vec{n}_{j,k}^+ F G'_V| \end{matrix}\Big].
\end{split} \\
\begin{split}
\vec{d}_{j,k}^- &=
\Big[\begin{matrix} -\sum_{i=0}^{k-1} \vec{n}_{j,k}^+ \bar{E}_i & -\vec{n}_{j,k}^+ F \end{matrix} \\
&\qquad\qquad \begin{matrix} \sum_{i=0}^{k-1} |\vec{n}_{j,k}^+ \bar{E}_i G'_W|& |\vec{n}_{j,k}^+ F G'_V| \end{matrix}\Big].
\end{split}\label{eq:methods:linearConstraint2}
\end{align}
\end{lemma}
\begin{proof}
Consider the halfspace representation of a zonotope $\mathcal{R}_a[k] = (\vec{c}_k,G_k)$ using the Def. \ref{def:zonotopeH}.
We show that $\vec{n}_{j,k}^+$ is independent from $\vec{\alpha}$ for any generator matrix:
\begin{align*}
\nX (G'\diag(\vec{\alpha})) &= [\dots, (-1)^{i+1}\det(G'^{[i]}\diag(\vec{\alpha})), \dots]^T,\\
&= \det(\diag(\vec{\alpha}))[\dots, (-1)^{i+1}\det(G'^{[i]}), \dots]^T,\\
&= \left(\prod \vec{\alpha}\right) \cdot \nX (G'),
\end{align*}
and since $\prod \vec{\alpha}$ is a positive scalar, the two-norm
\begin{align*}
\big|\big| (\prod \vec{\alpha}) \cdot \nX (G')\big|\big|_2 = (\prod \vec{\alpha}) || \nX (G')||_2,
\end{align*}
such that $\vec{\alpha}$ completely cancels out.
To obtain $D_k$ we apply the definition of $\mathcal{R}_a[k]$ in \eqref{eq:methods:reachA}.
From $\Delta d_{j,k}$, we extract $\alpha_W,\alpha_V$ in a similar way as in the proof of Lemma \ref{lemma:linearCost}:
\begin{equation}
\Delta d_{j,k} = \begin{bmatrix} \sum_{i=0}^{k-1} |\vec{n}_{j,k}^+ \bar{E}_i G'_W| & |\vec{n}_{j,k}^+F G'_V| \end{bmatrix}\begin{bmatrix} \alpha_W \\ \alpha_V \end{bmatrix}.
\end{equation}
\end{proof}
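The key step of the proof, that the facet normals are independent of the scalings $\vec{\alpha}$, is easy to check numerically in the 2-D case, where the normal of the facet spanned by a single generator $\vec{g}$ is simply $\vec{g}$ rotated by $90^\circ$ (an illustrative sketch, not the general $n$-D cross-product operator):

```python
import numpy as np

def facet_normal_2d(g):
    """Unit normal of the 2-D zonotope facet spanned by generator g:
    rotate g by 90 degrees and normalize."""
    n = np.array([-g[1], g[0]], dtype=float)
    return n / np.linalg.norm(n)
```

Scaling the generator by any positive $\alpha$ leaves the unit normal unchanged, which is exactly the cancellation used in the proof.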
The following two theorems formulate the reachset conformant identification problem \eqref{eq:methods:cost} and \eqref{eq:methods:constraint} for linear systems.
\begin{theorem}[Reachset conformant identification of additive non-deterministic parameters of linear systems]\label{theorem:linearSystemUnc}
Given a linear system \eqref{eq:methods:continuousSystem}, where $\mathcal{V}$ and $\mathcal{W}$ are zonotopes, the reachset conformant identification problem is a linear program, where $\mathcal{P} = \lbrace\vec{c}_W,\vec{c}_V,\vec{\alpha}_W,\vec{\alpha}_V\rbrace$ are the model parameters to be identified, \eqref{eq:methods:linearCost} is the cost, and \eqref{eq:methods:linearConstraint} are the constraints.
\end{theorem}
\begin{proof}
Check proofs of Lemma \ref{lemma:linearCost} and \ref{lemma:linearConstraint}. Given $\mathcal{P}$, both cost \eqref{eq:methods:linearCost} and the constraint function \eqref{eq:methods:linearConstraint} are linear.
\end{proof}
\begin{theorem}[Reachset conformant identification of linear systems]\label{theorem:linearSystem}
Given a linear system \eqref{eq:methods:continuousSystem}, where $\mathcal{V}$ and $\mathcal{W}$ are zonotopes, the reachset conformant identification problem is generally a nonlinear program, where $\mathcal{P} = \lbrace A,B,C,D,E,F,\mathcal{V},\mathcal{W}\rbrace$ are the variables to be identified, \eqref{eq:methods:linearCost} is the cost, and \eqref{eq:methods:linearConstraint} are the constraints.
\end{theorem}
\begin{proof}
Check proofs of Lemma \ref{lemma:linearCost} and \ref{lemma:linearConstraint}.
\end{proof}
\begin{remark}
We provide some remarks on the implementation of the above theorems:
\begin{itemize}
\item Theorem \ref{theorem:linearSystem} can be approached in a cascading way: an inner layer solves for $\lbrace\vec{c}_W,\vec{c}_V,\vec{\alpha}_W,\vec{\alpha}_V\rbrace$ using linear programming (Theorem \ref{theorem:linearSystemUnc}), while the outer layer solves for $\lbrace A,B,C,D,E,F,G'_W,G'_V\rbrace$ through nonlinear programming. A MATLAB implementation is provided together with this paper.
\item The solution space can also be reduced by estimating $\lbrace A,B,C,D\rbrace$ using the subspace method based on least-squares optimization \cite[Chapter~4.3]{Ljung1999}, although \cite{Chen2019a} has shown that such an approach is generally not optimal.
\item To compute $y^*[k]$, an estimation of the initial state $x[0]$ is required, similar to other identification algorithms (e.g., subspace methods in \cite{Ljung1999}).
\end{itemize}
\end{remark}
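As a minimal, hypothetical instance of the inner linear program of Theorem \ref{theorem:linearSystemUnc}: for a purely static model ($\bar{E}_i = 0$, $F = 1$, scalar output), identifying $\mathcal{V} = (c, \alpha)$ reduces to finding the smallest interval covering all output residuals, which we can solve with SciPy (assumed available):

```python
import numpy as np
from scipy.optimize import linprog

def identify_measurement_error(residuals):
    """Smallest interval V = [c - a, c + a] covering all residuals:
    min a  s.t.  c - a <= y <= c + a  for every residual y; variables [c, a]."""
    y = np.asarray(residuals, dtype=float)
    ones = np.ones_like(y)
    # c + a >= y  ->  [-1, -1][c, a] <= -y ;  c - a <= y  ->  [1, -1][c, a] <= y
    A_ub = np.vstack([np.column_stack([-ones, -ones]),
                      np.column_stack([ones, -ones])])
    b_ub = np.concatenate([-y, y])
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (0.0, None)], method="highs")
    return res.x  # (center, radius)
```

The full problem has the same structure, with the residuals replaced by the facet inequalities of Lemma \ref{lemma:linearConstraint} and the objective by $\vec{\gamma}$ from Lemma \ref{lemma:linearCost}.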
The influence of non-determinism on the linear system \eqref{eq:methods:continuousSystem} is modelled by the matrices $E$ and $F$. The main motivation behind this is that, for systems with high-dimensional state spaces, engineering knowledge can be applied to accurately determine where non-determinism appears, thereby reducing the number of optimization parameters. The following lemma evaluates whether $E$ and $F$ have been chosen correctly.
\begin{lemma}\label{lemma:nonDeterminism}
A linear system with variable $\mathcal{W},\mathcal{V}$ can capture all non-determinisms of the system, if
\begin{equation}
\forall k: \quad J_k=\left[\bar{E}_0, \dots, \bar{E}_{k-1}, F\right],
\end{equation}
has full row rank. If the rank is not full for some $k$, then the signals $\vec{y}_a[k]$ must lie in $S(J_k)$, which is the image of $J_k$.
\end{lemma}
\begin{proof}
If $J_k$ has full row rank, then the linear map $J_k: \mathcal{W}\times\mathcal{V} \rightarrow y_a[k]$ in \eqref{eq:methods:reachableSet} is surjective. If the rank is not full, it is only surjective with respect to the image $S(J_k)$.
\end{proof}
\begin{remark}
To check whether $\forall k: \vec{y}_a[k] \in S(J_k)$, we can simply evaluate whether $\vec{y}_a[k] = J_k J_k^+\vec{y}_a[k]$ is satisfied, where $(\cdot)^+$ denotes the Moore-Penrose inverse \cite{James1978}.
\end{remark}
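This pseudoinverse test is a one-liner with NumPy (the helper name is illustrative):

```python
import numpy as np

def in_image(J, y, tol=1e-9):
    """Check y ∈ S(J) via the orthogonal projector J J⁺ onto the image of J."""
    return bool(np.linalg.norm(J @ np.linalg.pinv(J) @ y - y) <= tol)
```

If the test fails for some residual, the chosen $E$ and $F$ cannot explain that residual, signaling that the non-determinism structure must be extended.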
A thought experiment demonstrates the use of the above lemma: the initial state $x[0]$, which is required for reachability analysis, is usually not measurable and can only be estimated with an estimation error $\mathcal{X}_0$, which has not been explicitly modelled in our linear system \eqref{eq:methods:discreteSystem}. However, if the condition of Lemma \ref{lemma:nonDeterminism} is fulfilled, then $\mathcal{X}_0$ is remapped onto $\mathcal{W}$ and $\mathcal{V}$. After one time-step,
\begin{equation}\label{eq:methods:estimError}
\mathcal{W} \times \mathcal{V} = J_1^+J_1(\mathcal{W}^* \times \mathcal{V}^*) \oplus J_1^+C\tilde{A}\mathcal{X}_0
\end{equation}
is a possible solution of the remap. An interesting side result of this example is that it allows us to evaluate the performance of state estimation algorithms: higher-performing state estimation results in a decreasing $\mathcal{X}_0$, which strictly decreases the size of identified $\mathcal{W} \times \mathcal{V}$, as shown in \eqref{eq:methods:estimError}.
A note on extensions to nonlinear systems: since reachability algorithms for nonlinear systems generally provide no closed-form solution, strict reachset conformance as defined in Def. \ref{def:reachsetConformance} requires an inner-approximation of the reachable sets.
\section{A case study on robot manipulators}\label{sec_evaluation}
We demonstrate the applicability of our newly proposed methods for robot systems by studying the reachability-based design of feedback-linearizing controllers for a 6-DOF manipulator. We use reachability analysis to compute and minimize the ultimate bounds of the tracking error. We start with investigating modelling choices for optimal identification results. We subsequently examine the application of our methods on the synthesis of a state-feedback controller, a linear velocity observer, and an output-feedback controller.
\subsection{Modelling choices and open-loop identification}
The system at hand is a Schunk LWA-4P 6-DOF robot manipulator. Each joint has a local current controller and an encoder feedback measuring the angular joint position. A Speedgoat Real-Time Target Machine acts as a centralized controller which sends control currents over a CANopen fieldbus system, and receives the position feedback. The sampling time of the centralized controller is $t_s = 4$ ms. The following paragraphs describe the subsystems involved in this case study.
\paragraph{Robot dynamics}
The rigid-body model of a robot can be described by
\begin{equation}\label{eq:eval:robotDynamics}
M(\vec{q})\ddot{\vec{q}} + \vec{c}(\vec{q},\dot{\vec{q}}) + \vec{g}(\vec{q}) = \vec{\tau},
\end{equation}
where $\vec{q},\dot{\vec{q}},\ddot{\vec{q}}$ are the position, velocity, and acceleration of the robot joints, $M$ is the mass matrix, $\vec{c}$ are the Coriolis and centripetal forces, $ \vec{g}$ are the gravity forces, and $\vec{\tau}$ are the joint torques.
The feedback linearization technique
\begin{equation}\label{eq:eval:feedbackLinearization}
\vec{\tau} = M(\vec{q})\vec{u} + \vec{c}(\vec{q},\dot{\vec{q}}) + \vec{g}(\vec{q})
\end{equation}
implements an internal control loop with a new input $\vec{u}$, such that the system dynamics become $\ddot{\vec{q}} = \vec{u}$ by inserting \eqref{eq:eval:feedbackLinearization} into \eqref{eq:eval:robotDynamics}. From the outside, the robot behaves like a decoupled linear system. However, the feedback linearization is usually imperfect \cite{Abdallah1991}; the effects can be mitigated using disturbance observers such as \cite{Mohammadi2013b}. Nevertheless, we consider an unknown additive disturbance $\mathcal{W} \subset \mathbb{R}^2$ and an unknown position feedback error $\mathcal{V} \subset \mathbb{R}$. The resulting state-space model for each joint is:
\begin{align*}\label{eq:eval:robotLinearDynamics}
\dot{\vec{x}}_r &= \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \vec{x}_r + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_r + \mathcal{W}, \\
y_r &= \begin{bmatrix} 1 & 0\end{bmatrix} \vec{x}_r + \mathcal{V}
\end{align*}
where $\vec{x}_r = [q,\dot{q}]^T$. The discrete-time version is obtained by applying \eqref{eq:methods:discreteSystem}.
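Assuming the discretization in \eqref{eq:methods:discreteSystem} is a zero-order hold (an assumption, since that equation is defined earlier in the paper), the double integrator admits an exact discrete-time model:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: x_r = [q, qdot]
B = np.array([[0.0], [1.0]])
t_s = 0.004                               # 4 ms sampling time of the controller

# Zero-order-hold discretization via the augmented matrix exponential
M = expm(np.block([[A, B], [np.zeros((1, 3))]]) * t_s)
A_d, B_d = M[:2, :2], M[:2, 2:]
# Closed form: A_d = [[1, t_s], [0, 1]], B_d = [[t_s^2/2], [t_s]]
```

The same augmented-exponential construction applies to every linear subsystem in this case study.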
\paragraph{State-feedback control}
The inverse dynamics tracking controller \cite[Section 8.5]{Siciliano2009a} is characterized by the feedback linearization in \eqref{eq:eval:feedbackLinearization} and a state-feedback term for each robot joint:
\begin{equation}\label{eq:eval:robotController}
u_r := y_c = \begin{bmatrix} 1 & k_p & k_d\end{bmatrix} \vec{u}_c,
\end{equation}
where $\vec{u}_c= [\ddot{q}_d, q_d-\hat{q}, \dot{q}_d -\dot{\hat{q}}]^T$, the values $q_d,\dot{q}_d,\ddot{q}_d$ denote the desired trajectory, and $\hat{q},\dot{\hat{q}}$ are the observed robot position and velocity. The gains $k_p, k_d$ are designed by choosing a natural frequency $\omega$ and a damping ratio $\zeta$, s.t. $k_p := \omega^2, k_d := 2\zeta\omega$ \cite{Siciliano2009a}.
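In the nominal, disturbance-free case, this gain choice places the poles of the tracking error dynamics $\ddot{e} + k_d\dot{e} + k_p e = 0$; a quick numerical check (using the initial manual tuning reported later, $\omega = 20$, $\zeta = 0.65$):

```python
import numpy as np

def pd_gains(omega, zeta):
    """k_p = omega^2, k_d = 2*zeta*omega for the error polynomial s^2 + k_d s + k_p."""
    return omega**2, 2.0 * zeta * omega

k_p, k_d = pd_gains(20.0, 0.65)
poles = np.roots([1.0, k_d, k_p])   # roots of s^2 + 2*zeta*omega*s + omega^2
```

For $\zeta < 1$, the poles form a stable complex pair with magnitude $\omega$, so $\omega$ sets the error bandwidth and $\zeta$ the damping.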
\paragraph{Observer}
The above controller requires full state feedback; however, only the robot position is measurable. We thus require an online state estimate and therefore choose the linear high-gain observer from \cite{Nicosia1993a}. Its dynamics for each joint are
\begin{align}\label{eq:eval:robotObserver}
\dot{\vec{x}}_o &= \begin{bmatrix} -h_1 & 1 \\ -h_2 & 0 \end{bmatrix} \vec{x}_o + \begin{bmatrix} h_1 \\ h_2 \end{bmatrix} u_o, \\
\vec{y}_o &= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \vec{x}_o,
\end{align}
where $u_o := q_m$ is the measured position, and $\vec{y}_o = [\hat{q},\dot{\hat{q}}]^T$ are the observed position and velocity. The gains are designed by selecting $\tilde{h}_1$, $\tilde{h}_2$, and $\epsilon$, such that $h_1 := \tilde{h}_1/\epsilon, h_2 := \tilde{h}_2/\epsilon^2$. On our centralized controller, we implement the discrete-time version from \cite{Busawon2017b}.
\paragraph{Delay}
The amount of time delay can seldom be estimated exactly and is often time-varying \cite{Liu2012}. We assume a delay of one sampling time in each direction: due to synchronization in the fieldbus communication, a computed control signal needs to wait until the next instant to be sent. Vice versa, a position measurement is retrieved almost instantly, but has to wait for the next cycle to be sent back to the central controller. Delay is best expressed in discrete time:
\begin{align*}
x_{de}[k+1] &= u_{de}[k], \\
y_{de}[k] &= x_{de}[k],
\end{align*}
where $u_{de}$ is the input signal, and $y_{de}$ is the signal delayed by one sampling instant. The Padé approximation provides a continuous-time model for arbitrary time delays \cite{Golub1989}.
Given the subsystems introduced in the previous paragraphs, we are offered multiple options to choose plant model candidates; an optimal choice is often not immediately clear. Depending on the desired order, we can omit certain subsystems, or decide between a continuous-time or discrete-time version. In this case study, we regard six different plant model candidates with increasing order:
\begin{description}
\item[R-] Only robot dynamics (continuous-time)
\item[R+] Only robot dynamics (discrete-time)
\item[RO-] Robot dynamics with observer (continuous)
\item[RO+] Robot dynamics with observer (discrete)
\item[RD+] Robot dynamics with delay (discrete)
\item[ROD+] Robot dynamics with observer and delay (discrete)
\end{description}
\begin{figure}
\includegraphics[width=\columnwidth]{figs/robotModels.pdf}
\caption{\textit{Robot model candidates:} Block interconnection diagrams of the model structures and their system states $x_*$}
\label{fig:eval:models}
\end{figure}
The block interconnection diagrams of the models are shown in Fig. \ref{fig:eval:models}. All candidates have the same inputs and outputs, such that we can use the same dataset to identify all models. For candidates that omit the observer, we apply an alternative measurement error $\mathcal{V}' \subset \mathbb{R}^2$ to satisfy Lemma \ref{lemma:nonDeterminism}. Since all candidates are series interconnections of linear subsystems, their compositions are also linear.
Initially, we evaluate the quality of the model candidates by comparing the cost \eqref{eq:methods:linearCost} of the open-loop identification of the unknown disturbances $\mathcal{W},\mathcal{V}$. To make the comparison concise, we assume zonotopes $\mathcal{W} := (0,G_W'\diag(\vec{\alpha}_W))$ and $\mathcal{V} := (0,G_V'\diag(\vec{\alpha}_V))$ for all models, where $G_W' = I$ and $G_V' = I$ are fixed. The parameter set thus consists only of $\mathcal{P} = \lbrace \alpha_W,\alpha_V\rbrace$, so that the identification problem is a linear program that can be solved using Theorem \ref{theorem:linearSystemUnc}. The initial dataset for this and all subsequent synthesis problems has been obtained from the real robot running trapezoidal and polynomial trajectories with random target positions, velocities, and accelerations\footnote{A video showing the initial tests, and the code for reproducing all experiments, are provided within the supplementary materials.}. The initial gains for the state-feedback controller and linear observer have been manually tuned to $\omega = 20, \zeta = 0.65, \tilde{h}_1 = 15, \tilde{h}_2 = 30, \epsilon = 0.01$. An automatic preselection was done to avoid trajectories that lead to self-collision or exceed the maximum motor currents. The total duration of the initial dataset is 33 minutes and 20 seconds. We maximize the number of test cases by considering each sampling instant as the starting point of a new test case, resulting in 497,880 test cases for each joint. The initial states $x[0]$ for each model and test case can mostly be derived from the measurements and by tracking the corresponding signals on our controller. 
Only for the initial robot velocities do we use offline zero-phase filtering \cite{Oppenheim1999}, because it resulted in smaller identified disturbances compared to using the observed velocity, which, according to \eqref{eq:methods:estimError}, ultimately means that the offline method delivers a better velocity estimate. The resulting costs are shown in Tab. \ref{tab:eval:openLoopCost}, and the corresponding parameters are shown in Tab. \ref{tab:eval:openLoopParam}.
\begin{table}
\caption{Open-loop identification results: cost (Lemma \ref{lemma:linearCost})}
\label{tab:eval:openLoopCost}
\begin{center}
\begin{tabular}{ l c c c c c c}
\hline
\textbf{Model} & Axis 1 & Axis 2 & Axis 3 & Axis 4 & Axis 5 & Axis 6 \\\hline
R- & $0.0322$ & $0.0422$& $0.0325$& $0.0309$& $0.0505$& $0.0405$\\
R+ & $0.0322$ & $0.0422$& $0.0325$& $0.0309$& $0.0505$& $0.0405$\\
RO- & $0.0033$ & $0.0046$& $0.0023$& $0.0028$& $0.0035$& $0.0054$\\
RO+ & $0.0032$ & $0.0046$& $0.0022$& $0.0026$& $0.0035$& $0.0053$\\
RD+ & $0.0025$ & $0.0044$& $0.0022$& $0.0023$& $0.0035$& $0.0050$\\
ROD+& $0.0022$ & $0.0041$& $0.0021$& $0.0023$& $0.0032$& $0.0041$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Open-loop identification results: non-determinisms of axis 1}
\label{tab:eval:openLoopParam}
\begin{center}
\begin{tabular}{ l c c c c}
\hline
\textbf{Model} & $\alpha_{W,1}$ & $\alpha_{W,2}$ & $\alpha_{V,1}$ & $\alpha_{V,2}$\\\hline
R- & $0.0184$& $2.2009$& $0.0001$& $0.0298$\\
R+ & $0.0184$& $2.2009$& $0.0001$& $0.0298$\\
RO- & $0.0386$& $1.4494$& $0.0000$& $-$\\
RO+ & $0.0401$& $1.7321$& $0$ & $-$\\
RD+ & $0.0127$& $1.7762$& $0.0001$& $0.0207$\\
ROD+& $0.0434$& $0.7556$& $0$& $-$\\
\hline
\end{tabular}
\end{center}
\end{table}
The open-loop identification results clearly show that the cost decreases with increasing model order for every robot axis. A decrease is also visible for $\alpha_{W,2}$, which corresponds to the size of the non-determinism of the robot acceleration. A significant difference between discrete-time and continuous-time model candidates in terms of cost is not visible. The computation times for all models are within seconds when using MATLAB, regardless of the model order. This evaluation indicates that ROD+ is the best model candidate, with the smallest reachable set and the least amount of non-determinism.
\subsection{State-feedback control synthesis} \label{sec:eval:statefeedback}
\begin{figure}
\includegraphics[width=\columnwidth]{figs/stateFeedback.pdf}
\caption{\textit{Simultaneous state-feedback synthesis and identification:} we minimize $\mathcal{R}_{q_r,\dot{q}_r}$, while $\mathcal{R}_{u_r}$ is constrained. Variables are shown in \textbf{bold}.}
\label{fig:eval:stateFeedback}
\end{figure}
In this section of our case study, we apply our iterative synthesis approach from Sec. \ref{sec:methods:simultaneous} to the problem of designing the state-feedback controller in \eqref{eq:eval:robotController}. The feedback linearization, which decouples the dynamics of the robot joints, is a major simplification, since it allows us to synthesize the controller for each joint separately. The synthesis goal is to reduce the reachable set of the tracking error while taking into account the limited motor capabilities, and to simultaneously identify reachset conformant disturbances of the robot. We evaluate the same model candidates as in the previous experiment. The block diagram of the closed-loop system is shown in Fig. \ref{fig:eval:stateFeedback}. The reference for each axis is the desired trajectory $\vec{y}_\mathrm{ref} := [q_d,\dot{q}_d]^T$ and $u_\mathrm{ref} := \ddot{q}_d$; the outputs of the closed-loop system are $\vec{y}_z = \vec{y}_p$ and $\vec{y}_\mathrm{con} = u_r$, where $\vec{y}_p$ consists of the output position and velocity of the robot (see Fig. \ref{fig:eval:models}), and $u_r$ is the plant input.
The synthesis goal is to reduce the position and velocity error of the terminal reachable set:
\begin{subequations} \label{eq:eval:stateOptimization}
\begin{alignat*}{2}
&\!\min_{\mathcal{P}} &\qquad& ||\mathcal{R}_{q_r,\dot{q}_r}(t_\infty)||,\\
&\text{subject to} & & \eqref{eq:methods:linearConstraint},\\
& & & \forall t: \mathcal{R}_\mathrm{con}(t) \subseteq \mathcal{Y}_c,
\end{alignat*}
and
\begin{equation*}
\mathcal{P} := \lbrace \omega,\zeta,\alpha_W,\alpha_V \rbrace.
\end{equation*}
\end{subequations}
To find an appropriate $\mathcal{Y}_c$ according to \eqref{eq:methodsC:referenceSet}, we reserve $\ddot{q}_d \in \mathcal{U}_\mathrm{ref} = [-3,3]$~m/s$^2$. We then derive, for each axis $i$, the upper limit of the input $u_r \in \mathcal{U}_{p,i}$ from the peak torques of the motors, which are $\vec{\tau}_\mathrm{max} = [75.5,75.5,75.5,75.5,20,20]^T$~Nm. We fit the largest intervals $\mathcal{U}_{p,i}$ for each joint that adhere to $\vec{\tau} \leq \vec{\tau}_\mathrm{max}$ by evaluating \eqref{eq:eval:feedbackLinearization} with $\vec{u} := \mathcal{U}_{p,1} \times ... \times \mathcal{U}_{p,6}$ and randomly sampled $q,\dot{q}$. We determined that $\mathcal{U}_{p,2} =[-7.27,7.27]$~rad/s$^2$ for axis 2, and $\mathcal{U}_{p,i}=[-20,20]$~rad/s$^2$ for all other axes, are admissible intervals. Thus, by applying \eqref{eq:methodsC:referenceSet}, $\mathcal{Y}_c=[-4.27,4.27]$ for axis 2 and $\mathcal{Y}_c=[-17,17]$ for the other axes.
The iterative synthesis is performed for each axis individually. The initial dataset for the first iteration is the same as in the open-loop identification. For subsequent iterations, we run a validation trajectory to obtain new data. The results of the synthesis are shown in Tab. \ref{tab:eval:stateFeedback}.
\begin{table*}
\caption{State feedback control synthesis results for all candidate models}
\label{tab:eval:stateFeedback}
\setlength\tabcolsep{5pt}
\begin{center}
\begin{tabular}{ r r r r r r r r r r r r r r r r r }
\hline
&& \multicolumn{7}{c}{\textbf{Iteration 1}} && \multicolumn{7}{c}{\textbf{Iteration 2}} \\
\textbf{Model} &Ax.& cost & $\omega$ & $\zeta$&$\alpha_{W,1}$ & $\alpha_{W,2}$ & $\alpha_{V,1}$ & $\alpha_{V,2}$ && cost & $\omega$ & $\zeta$ & $\alpha_{W,1}$ & $\alpha_{W,2}$ & $\alpha_{V,1}$ & $\alpha_{V,2}$\\ \hline
\multirow{6}{*}{R-} &$1$&$0.16$ & $100.00$ & $0.90$ & $0.00$ & $2.15$ & $0.00$ & $0.02$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$\\
&$2$&$1.07$ & $7.12$ & $0.74$ & $0.00$ & $2.05$ & $0.00$ & $0.09$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\
&$3$&$0.22$ & $100.00$ & $0.76$ & $0.00$ & $1.50$ & $0.00$ & $0.04$&& $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\
&$4$&$0.22$ & $97.05$ & $0.80$ & $0.00$ & $2.85$ & $0.00$ & $0.03$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\
&$5$&$0.28$ & $78.81$ & $0.75$ & $0.00$ & $3.23$ & $0.00$ & $0.04$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\
&$6$&$0.43$ & $47.75$ & $0.86$ & $0.00$ & $4.93$ & $0.00$ & $0.05$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\[0.1cm]
\multirow{6}{*}{R+} &$1$&$0.21$ & $98.29$ & $1.00$ & $0.01$ & $2.25$ & $0.00$ & $0.02$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\
&$2$&$1.77$ & $3.81$ & $1.00$ & $0.06$ & $2.05$ & $0.00$ & $0.09$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\
&$3$&$0.30$ & $75.71$ & $0.85$ & $0.03$ & $1.54$ & $0.00$ & $0.04$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\
&$4$&$0.30$ & $73.75$ & $0.89$ & $0.02$ & $3.01$ & $0.00$ & $0.03$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\
&$5$&$0.36$ & $61.44$ & $0.88$ & $0.02$ & $3.32$ & $0.00$ & $0.04$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\
&$6$&$0.49$ & $43.78$ & $0.91$ & $0.01$ & $5.02$ & $0.00$ & $0.05$ && $*$ & $*$ & $*$& $*$ &$*$& $*$ &$*$ \\[0.1cm]
\multirow{6}{*}{RO-}&$1$&$\mathit{0.24}$ & $\mathit{21.18}$ & $\mathit{1.00}$ & $\mathit{0.05}$ & $\mathit{0.00}$ & $\mathit{0.00}$ & $-$ && $-$ & $-$ & $-$& $-$ &$-$ & $-$ & $-$ \\
&$2$&$0.27$ & $\mathit{16.31}$ & $\mathit{1.00}$ & $\mathit{0.05}$ & $\mathit{0.00}$ & $\mathit{0.00}$ & $-$ && $-$ & $-$ & $-$& $-$ &$-$ & $-$ & $-$\\
&$3$&$0.18$ & $42.16$ & $1.00$ & $0.04$ & $0.00$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\
&$4$&$0.12$ & $40.02$ & $1.00$ & $0.02$ & $0.00$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\
&$5$&$0.18$ & $42.48$ & $1.00$ & $0.04$ & $0.00$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\
&$6$&$0.21$ & $43.83$ & $1.00$ & $0.04$ & $0.00$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\[0.1cm]
\multirow{6}{*}{RO+}&$1$&$0.29$ & $40.03$ & $1.00$ & $0.03$ & $1.63$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\
&$2$&$\mathit{1.22}$ & $\mathit{4.73}$ & $\mathit{1.00}$ & $\mathit{0.09}$ & $\mathit{1.91}$ & $\mathit{0.00}$ & $-$ && $-$ & $-$ & $-$& $-$ &$-$ & $-$ & $-$ \\
&$3$&$0.30$ & $37.83$ & $1.00$ & $0.03$ & $1.33$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ & $-$ \\
&$4$&$0.35$ & $46.69$ & $1.00$ & $0.04$ & $2.48$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ &$-$ \\
&$5$&$0.35$ & $47.97$ & $1.00$ & $0.03$ & $3.44$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ &$-$ \\
&$6$&$0.48$ & $38.31$ & $1.00$ & $0.04$ & $4.52$ & $0.00$ & $-$ && $*$ & $*$ & $*$& $*$ &$*$ & $*$ &$-$ \\[0.1cm]
\multirow{6}{*}{RD+}&$1$&$\mathbf{0.31}$ & $\mathit{35.46}$ & $\mathit{0.79}$ & $\mathit{0.02}$ & $\mathit{1.88}$ & $\mathit{0.00}$ & $\mathit{0.02}$&& $-$&$-$&$-$&$-$&$-$&$-$&$-$\\
&$2$&$\mathit{1.74}$ & $\mathit{3.96}$ & $\mathit{1.00}$ & $\mathit{0.08}$ & $\mathit{1.99}$ & $\mathit{0.00}$ & $\mathit{0.08}$&& $-$&$-$&$-$&$-$&$-$&$-$&$-$\\
&$3$&$\mathit{0.38}$ & $\mathit{29.65}$ & $\mathit{0.80}$ & $\mathit{0.03}$ & $\mathit{1.38}$ & $\mathit{0.00}$ & $\mathit{0.03}$&& $-$&$-$&$-$&$-$&$-$&$-$&$-$\\
&$4$&$\mathbf{0.42}$ & $\mathit{35.48}$ & $\mathit{0.79}$ & $\mathit{0.03}$ & $\mathit{2.61}$ & $\mathit{0.00}$ & $\mathit{0.03}$&& $-$&$-$&$-$&$-$&$-$&$-$&$-$\\
&$5$&$0.51$ & $35.48$ & $0.79$ & $0.03$ & $3.17$ & $0.00$ & $0.03$ && $\mathit{0.56}$ & $\mathit{38.24}$ & $\mathit{0.85}$ & $\mathit{0.03}$ & $\mathit{4.08}$ & $\mathit{0.00}$ & $\mathit{0.03}$ \\
&$6$&$0.58$ & $36.51$ & $0.90$ & $0.03$ & $4.71$ & $0.00$ & $0.03$ && $\mathit{0.71}$ & $\mathit{29.76}$ & $\mathit{1.00}$ & $\mathit{0.02}$ & $\mathit{6.59}$ & $\mathit{0.00}$ & $\mathit{0.03}$ \\[0.1cm]
\multirow{6}{*}{ROD+}&$1$&$\mathit{0.37}$ & $\mathit{19.28}$ & $\mathit{1.00}$ & $\mathit{0.03}$ & $\mathit{1.37}$ & $\mathit{0.00}$& $-$&&$-$&$-$&$-$&$-$&$-$& $-$&$-$\\
&$2$&$\mathbf{1.15}$ & $\mathit{4.92}$ & $\mathit{1.00}$ & $\mathit{0.09}$ & $\mathit{1.74}$ & $\mathit{0.00}$ & $-$ &&$-$&$-$&$-$&$-$&$-$&$-$&$-$\\
&$3$&$\mathbf{0.37}$ & $\mathit{18.67}$ & $\mathit{1.00}$ & $\mathit{0.04}$ & $\mathit{1.18}$ & $\mathit{0.00}$ & $-$ &&$-$&$-$&$-$&$-$&$-$&$-$&$-$\\
&$4$&$\mathit{0.50}$ & $\mathit{20.85}$ & $\mathit{1.00}$ & $\mathit{0.04}$ & $\mathit{2.14}$ & $\mathit{0.00}$ & $-$ &&$-$&$-$&$-$&$-$&$-$&$-$&$-$\\
&$5$&$\mathbf{0.54}$ & $\mathit{19.28}$ & $\mathit{1.00}$ & $\mathit{0.05}$ & $\mathit{2.09}$ & $\mathit{0.00}$ & $-$ &&$-$&$-$&$-$&$-$&$-$&$-$&$-$\\
&$6$&$\mathbf{0.67}$ & $\mathit{21.75}$ & $\mathit{1.00}$ & $\mathit{0.04}$ & $\mathit{3.79}$ & $\mathit{0.00}$ & $-$ &&$-$&$-$&$-$&$-$&$-$&$-$&$-$\\\hline
\multicolumn{17}{c}{\textbf{bold}: best model candidate for this axis, \textit{italic}: converged values, $*$: infeasible solution, $-$: not evaluated}
\end{tabular}
\end{center}
\end{table*}
The ROD+ model is the only one that converges after a single iteration, meaning that when the validation trajectory was run with the optimized values, the robot did not produce new data for which the identified model was not reachset conformant. We can see that the R and RO models are not suitable for controller synthesis: the first iteration produced control gains that were too high, for which the real robot became unstable. Consequently, the second iteration did not yield feasible solutions, since the identified non-determinisms were too large. Only RD+ and ROD+ produced converging solutions, because they model the delay dynamics. This helps the reachability analysis to predict the instability caused by high gains, because the reachable sets grow very large, thus letting the optimization avoid them.
\subsection{Observer synthesis}
\begin{figure}
\includegraphics[width=\columnwidth]{figs/observer.pdf}
\caption{\textit{Simultaneous observer synthesis and identification (Approach 1):} we minimize $\mathcal{R}_{\dot{\hat{e}}}$. Variables are shown in \textbf{bold}.}
\label{fig:eval:observer}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{figs/observer2.pdf}
\caption{\textit{Observer synthesis (Approach 2):} we minimize the transient duration of a set of step responses. Variables are shown in \textbf{bold}.}
\label{fig:eval:observer2}
\end{figure}
We study the problem of designing the linear observer in \eqref{eq:eval:robotObserver}. In this paper, we propose two different reachability-based approaches that can be realized within our framework.
\begin{itemize}
\item \textbf{Approach 1}: we minimize the velocity estimation error, considering the closed-loop system as depicted in Fig.~\ref{fig:eval:observer}. We obtain an invariant set towards which the reachable velocity estimation error converges.
\item \textbf{Approach 2}: given a set of step responses, we minimize the duration of the transient, as well as the reachable steady-state error of the observer system. This approach is inspired by the work in \cite{Prasov2013}, where the authors formally proved the boundedness of high-gain observers with measurement errors.
\end{itemize}
As in the state-feedback example, we formulate Approach 1 for each joint as an iterative synthesis problem (Sec.~\ref{sec:methods:simultaneous}), since it involves identifying the robot plant. Approach 2 only analyses the known observer system, so the reachability-based control synthesis method of Sec.~\ref{sec:methods:controlSynthesis} is sufficient.
\subsubsection*{Approach 1}
The reference input and output are the same as for the state-feedback example: $\vec{y}_\mathrm{ref} := [q_d,\dot{q}_d]^T$ and $u_\mathrm{ref} = \ddot{q}_d$. The output of the closed-loop system is the estimation error $y_z = \hat{e} = \dot{q}-\dot{\hat{q}}$. Note that, because the unmeasurable `true' velocity $\dot{q}$ of the robot system is analysed here, we need to include an estimate of it (e.g., an observed value) within the test data for the identification. This does not conflict with the synthesis, because the test data is only relevant for the identification, while Approach 1 optimizes the dynamics resulting from a new observer. For brevity, we omit the input constraints. The overall synthesis is formalized in the following optimization problem:
\begin{subequations} \label{eq:eval:observerOptimization}
\begin{alignat*}{2}
&\!\min_{\mathcal{P}} &\qquad& ||\mathcal{R}_{\dot{\hat{e}}}(t_\infty)||,\\
&\text{subject to} & & \eqref{eq:methods:linearConstraint},
\end{alignat*}
and
\begin{equation*}
\mathcal{P} := \lbrace \tilde{h}_1,\tilde{h}_2,\alpha_W,\alpha_V \rbrace,
\end{equation*}
\end{subequations}
where $\mathcal{R}_{\dot{\hat{e}}}(t_\infty)$ is a positively invariant set towards which the velocity estimation error converges.
Since $\epsilon$ is a redundant parameter, we fix it at $\epsilon = 0.01$. As in the state-feedback synthesis, we optimize the scaling parameters $\alpha_W,\alpha_V$ of the robot non-determinism. Since the observer is implemented in discrete time, we only consider the discrete-time robot model candidates R+ and RD+. The results of the synthesis are shown in Tab.~\ref{tab:eval:observer}.
The iterative synthesis converged after one iteration for both model candidates. In contrast to the open-loop identification and the state-feedback synthesis, the R+ model leads to the smallest final reachable set. Despite the varying non-determinisms of the robot, the optimal observer parameters are very similar across all axes, which indicates that $\mathcal{W},\mathcal{V}$ have little influence on the observer dynamics, but mainly affect the final reachable set.
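The inner step of this synthesis can be illustrated with a small numerical sketch. The following Python snippet is an illustrative toy, not the implementation used in the paper: it propagates the observer-error set of a hypothetical discretized double-integrator joint with assumed noise radii, and grid-searches discrete observer gains $(l_1, l_2)$ for the smallest converged velocity-error interval (the counterpart of minimizing $||\mathcal{R}_{\dot{\hat{e}}}(t_\infty)||$):

```python
import numpy as np

def invariant_hull_radius(A_cl, G_exo, n_max=4000, tol=1e-12):
    """Interval-hull radius of the invariant set of the error dynamics
    e[k+1] = A_cl e[k] (+) Z_exo, where Z_exo has generator matrix G_exo."""
    r = np.zeros(A_cl.shape[0])
    P = np.eye(A_cl.shape[0])
    for _ in range(n_max):
        contrib = np.sum(np.abs(P @ G_exo), axis=1)
        r += contrib
        if contrib.max() < tol:
            break
        P = A_cl @ P
    return r

dt = 0.004                                 # assumed 250 Hz sampling
A = np.array([[1.0, dt], [0.0, 1.0]])      # discretized double-integrator joint
C = np.array([[1.0, 0.0]])                 # only the position is measured
G_w = np.diag([0.0, 0.01])                 # assumed process-noise generators
r_v = 0.001                                # assumed measurement-noise radius

best = None
for l1 in np.linspace(0.2, 1.8, 9):
    for l2 in np.linspace(5.0, 60.0, 12):
        H = np.array([[l1], [l2]])
        A_cl = A - H @ C                   # observer error dynamics
        if np.max(np.abs(np.linalg.eigvals(A_cl))) >= 1.0:
            continue                       # not Schur stable: skip
        G_exo = np.hstack([G_w, -H * r_v])  # process noise and -H*V generators
        r = invariant_hull_radius(A_cl, G_exo)
        cost = r[1]                        # radius of the velocity-error interval
        if best is None or cost < best[0]:
            best = (cost, l1, l2)

print(best)
```

The grid search stands in for the nonlinear programming used in the paper; the gain values and noise radii are hypothetical and do not reproduce Tab.~\ref{tab:eval:observer}.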
\begin{table}
\caption{Observer synthesis results (approach 1)}
\label{tab:eval:observer}
\begin{center}
\setlength\tabcolsep{5pt}
\begin{tabular}{ r r r r r r r r r }
\hline
&& \multicolumn{7}{c}{\textbf{Iteration 1}} \\
\textbf{Model} & Ax. & cost & $\tilde{h}_1$ & $\tilde{h}_2$ & $\alpha_{W,1}$ & $\alpha_{W,2}$ & $\alpha_{V,1}$ & $\alpha_{V,2}$\\ \hline
\multirow{6}{*}{R+}&$1$&$\mathbf{0.12}$ & $10.55$ & $29.73$ & $0.03$ & $1.58$ & $0.00$ & $0.10$ \\
&$2$&$\mathbf{0.16}$ & $11.72$ & $45.85$ & $0.02$ & $5.89$ & $0.00$ & $0.03$ \\
&$3$&$\mathbf{0.13}$ & $10.55$ & $29.73$ & $0.03$ & $1.32$ & $0.00$ & $0.09$ \\
&$4$&$\mathbf{0.15}$ & $10.55$ & $29.73$ & $0.04$ & $2.41$ & $0.00$ & $0.10$ \\
&$5$&$\mathbf{0.14}$ & $10.55$ & $29.73$ & $0.03$ & $3.07$ & $0.00$ & $0.07$ \\
&$6$&$\mathbf{0.20}$ & $10.55$ & $29.73$ & $0.04$ & $4.11$ & $0.00$ & $0.08$ \\[0.1cm]
\multirow{6}{*}{RD+} &$1$&$0.21$ & $7.99$ & $21.79$ & $0.04$ & $2.19$ & $0.00$ & $0.02$ \\
&$2$&$0.39$ & $7.99$ & $21.81$ & $0.06$ & $5.68$ & $0.00$ & $0.05$ \\
&$3$&$0.21$ & $7.99$ & $21.82$ & $0.03$ & $2.26$ & $0.00$ & $0.02$ \\
&$4$&$0.25$ & $7.99$ & $21.81$ & $0.04$ & $3.37$ & $0.00$ & $0.02$ \\
&$5$&$0.29$ & $7.99$ & $21.81$ & $0.04$ & $3.77$ & $0.00$ & $0.03$ \\
&$6$&$0.31$ & $7.99$ & $21.81$ & $0.04$ & $4.15$ & $0.00$ & $0.04$ \\[0.1cm]
\hline
\multicolumn{9}{c}{\textbf{bold}: best model candidate for this axis}
\end{tabular}
\end{center}
\end{table}
\subsubsection*{Approach 2}
The goal of the synthesis is to minimize the transient response time, as well as the steady-state error caused by the measurement error. As \cite{Prasov2013} points out, these are conflicting goals for high-gain observers, because a faster transient leads to noise amplification, while a slower transient attenuates noise. To resolve this conflict, we consider the transient response time as the sole cost, while we treat the maximum steady-state error as a constraint. The overall synthesis is formalized in the following optimization problem:
\begin{subequations} \label{eq:eval:observerOptimization2}
\begin{alignat*}{2}
&\!\min_{\mathcal{P}} &\qquad& t_\infty,\\
&\text{subject to} & & \mathcal{R}_{\hat{q},\dot{\hat{q}}}(t_\infty) \in \mathcal{Y}_s,
\end{alignat*}
and
\begin{equation*}
\mathcal{P} := \lbrace \tilde{h}_1,\tilde{h}_2 \rbrace,
\end{equation*}
\end{subequations}
where $\mathcal{R}_{\hat{q},\dot{\hat{q}}}(t_\infty)$ is the positively invariant set representing the steady-state error, towards which the system converges, and $t_\infty$ is the time of convergence, which we consider as the transient response time. We set the measurement error to $\mathcal{V} = [-1,1]$ millidegrees and consider a set of step responses by starting the reachability analysis of the observer system with a non-empty set $x(0) \in \mathcal{X}_0 = [-0.1,0.1] \times [-0.1,0.1]$, while keeping the reference signal $q_r=0$. We constrain the steady-state error of $\dot{\hat{q}}$ to $\mathcal{Y}_s = [-0.005,0.005]$. We only analyse the discrete-time observer, since it is the one implemented on the real controller. The results are shown in Tab.~\ref{tab:eval:observer2}.
Interestingly, the optimal observer gains obtained through Approach 2 are similar to those obtained through Approach 1. In addition, we find that when increasing the gains of the discrete-time observer, the transient response time $t_\infty$ decreases at first, but then increases again, because the system starts to oscillate due to discretization effects. The $t_\infty$ in Tab.~\ref{tab:eval:observer2} is therefore the actual minimum that does not violate $\mathcal{Y}_s$. We additionally show this behavior in Fig.~\ref{fig:eval:observer2conv} by varying $\epsilon$: for $\epsilon=0.02$, the steady-state error is small, but $t_\infty=0.112$ s is large. For $\epsilon=0.01$, the steady-state error is still small, and $t_\infty=0.064$ s is the smallest. For $\epsilon=0.005$, the steady-state error is large, and so is $t_\infty=0.128$ s.
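This non-monotone trade-off can be reproduced qualitatively with a toy simulation (a sketch under assumed values, not the paper's setup): a high-gain observer for a double integrator, discretized with forward Euler at an assumed 500 Hz, where we measure the time after which the velocity estimation error stays inside a band, for several $\epsilon$:

```python
import numpy as np

def transient_time(eps, h1=10.0, h2=26.0, dt=0.002, t_end=0.2,
                   band=0.005, e0=(0.1, 0.1)):
    """Forward-Euler simulation of the estimation-error dynamics of a
    high-gain observer for a double integrator; returns the first time
    after which the velocity error stays within +/- band (capped at t_end)."""
    A_c = np.array([[0.0, 1.0], [0.0, 0.0]])      # continuous error dynamics
    H = np.array([[h1 / eps], [h2 / eps**2]])     # high-gain observer gains
    C = np.array([[1.0, 0.0]])                    # position is measured
    A_d = np.eye(2) + dt * (A_c - H @ C)          # forward-Euler discretization
    e = np.array(e0, dtype=float)
    n = int(round(t_end / dt))
    vel_err = np.empty(n)
    for k in range(n):
        e = A_d @ e
        vel_err[k] = e[1]
    outside = np.where(np.abs(vel_err) > band)[0]
    return 0.0 if outside.size == 0 else (outside[-1] + 1) * dt

times = {eps: transient_time(eps) for eps in (0.005, 0.01, 0.02)}
print(times)
```

With these hypothetical numbers, the intermediate $\epsilon$ yields the shortest transient, while the smallest $\epsilon$ makes the discretized error dynamics oscillatory, mirroring the behavior in Fig.~\ref{fig:eval:observer2conv}; the absolute times do not match the paper's values.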
\begin{figure}
\includegraphics[width=\columnwidth]{figs/observer2conv.pdf}
\caption{\textit{Observer synthesis (Approach 2):} Comparison of transient response time $t_\infty$ for $\epsilon=0.005,\epsilon=0.01$, and $\epsilon=0.02$}
\label{fig:eval:observer2conv}
\end{figure}
\begin{table}
\caption{Observer synthesis results (approach 2)}
\label{tab:eval:observer2}
\begin{center}
\setlength\tabcolsep{5pt}
\begin{tabular}{ r r r }
\hline
transient response time [s]& $\tilde{h}_1$ & $\tilde{h}_2$ \\\hline
$0.064$ & $10.13$ & $25.69$\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Output-feedback control synthesis}
\begin{figure}
\includegraphics[width=\columnwidth]{figs/outputfeedback.pdf}
\caption{\textit{Simultaneous output-feedback synthesis and identification:} we minimize $\mathcal{R}_{\hat{q},\dot{\hat{q}}}$, while $\mathcal{R}_{u_r}$ is constrained. Variables are shown in \textbf{bold}.}
\label{fig:eval:outputfeedback}
\end{figure}
Merging the linear observer and the state-feedback controller, the overall mechanism becomes an output-feedback controller. In this section of the case study, we briefly show that we can also synthesise the controller and the observer at the same time. The block diagram is shown in Fig.~\ref{fig:eval:outputfeedback}, and the optimization problem is the same as in Sec.~\ref{sec:eval:statefeedback}, except that the parameter set is now $\mathcal{P} := \lbrace \omega,\zeta,\tilde{h}_1,\tilde{h}_2,\alpha_W,\alpha_V \rbrace$. For brevity, we only evaluate RD+ as the model candidate; the results are shown in Tab.~\ref{tab:eval:outputfeedback}.
Because the variable parameter set has grown, the nonlinear programming algorithms often reached local minima. We restarted the synthesis with differing initial values until we reached a global minimum. The cost of each axis is smaller than the corresponding cost of the ROD+ model in Tab.~\ref{tab:eval:stateFeedback}, for which the reason is obvious: in the previous experiment, $\tilde{h}_1$ and $\tilde{h}_2$ were manually tuned and fixed, whereas here the output-feedback synthesis has found superior values. The observer gains for axes 2 and 6 are significantly larger than the rest, but resulted in smaller reachable sets and did not lead to unstable robot behavior.
\begin{table}
\caption{Output-feedback synthesis results}
\label{tab:eval:outputfeedback}
\setlength\tabcolsep{3.2pt}
\begin{center}
\begin{tabular}{ r r r r r r r r r r }
\hline
&& \multicolumn{8}{c}{\textbf{Iteration 1}} \\
\textbf{Model} & Ax. & cost & $\omega$ & $\zeta$ &$\tilde{h}_1$ & $\tilde{h}_2$ & $\alpha_{W,1}$ & $\alpha_{W,2}$ & $\alpha_{V,1}$ \\ \hline
\multirow{6}{*}{RD+}&$1$&$\mathbf{0.37}$ & $18.72$ & $1.00$ & $11.38$ & $27.36$ & $0.04$ & $1.23$ & $0.00$ \\
&$2$&$\mathbf{0.85}$ & $6.37$ & $1.00$ & $59.05$ & $37.20$ & $0.09$ & $1.52$ & $0.00$ \\
&$3$&$\mathbf{0.37}$ & $18.06$ & $1.00$ & $10.77$ & $25.00$ & $0.04$ & $1.18$ & $0.00$ \\
&$4$&$\mathbf{0.48}$ & $20.09$ & $1.00$ & $11.51$ & $27.59$ & $0.05$ & $1.91$ & $0.00$ \\
&$5$&$\mathbf{0.48}$ & $17.96$ & $1.00$ & $11.31$ & $27.25$ & $0.05$ & $1.58$ & $0.00$ \\
&$6$&$\mathbf{0.64}$ & $22.31$ & $1.00$ & $119.82$ & $360.36$ & $0.06$ & $3.23$ & $0.00$ \\[0.1cm]
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}\label{sec_conclusion}
\section{Introduction}\label{sec_introduction}
\subsection{Motivation}
\begin{itemize}
\item Formal methods are mathematical techniques for reasoning about systems, their requirements, and their guarantees \cite{Kress-Gazit2018}.
\item Formal synthesis refers to frameworks in which tasks are specified in a precise language and automatically transformed into correct-by-construction robot controllers.
\end{itemize}
\subsection{Statement of contributions}
\noindent In the following, we list the contributions of this work:
\begin{itemize}
\item We formulate a unified optimal control framework for reachability-based model identification, controller synthesis, and the combination of both.
\item We propose a model identification method for non-deterministic systems, which preserves reachset conformance with respect to the test data of the real system. Computationally efficient solutions for continuous-time and discrete-time linear systems are presented using zonotopes as set representation.
\item We extend reachability-based controller synthesis to general linear controller systems. Using our unified framework, we combine controller synthesis with model identification and propose an iterative method to generate optimal controllers with formal guarantees for real systems.
\item We extensively study the application of reachability-based methods to feedback-linearizing tracking controllers of robots. We use our approaches to obtain formal guarantees on the tracking error, velocity estimation error, and whether input constraints can be met.
\item We provide software in the form of a reachability-based identification toolbox written in MATLAB. The underlying foundation is the COntinuous Reachability Analyzer (CORA) \cite{Althoff2015}.
\end{itemize}
\subsection{Literature overview}\label{sec_survey}
Traditionally, system identification and model-based control design for continuous dynamical systems have been regarded as two separate disciplines \cite{VanDenHof1995}: a nominal model of a robot is identified based on an optimality criterion, e.g., minimizing a least-squares error \cite{Atkeson1986a,Ljung1999}; control design and stability analysis are then applied assuming that the model is an exact representation of the physical dynamics \cite{An1988model}.
With the advance of robust control, it became clear that determining an exact model of physical systems might be infeasible, and that instead, uncertainties should be included in the control design \cite{Abdallah1991}. Such uncertainties can be additive, multiplicative \cite{VanDenHof1995}, or parametric \cite{Swevers1997,Ramdani2005}. The main criterion for the identification of such uncertainties has been their size. However, small model errors do not necessarily lead to good robust control, and large model errors do not necessarily lead to bad control performance, as \cite{Skelton1989} points out. Therein lies the motivation for \textit{identification for control}, in which the model uncertainties are determined such that they are optimal for the control goal \cite{VanDenHof1995}.
Model errors can be mainly divided in two ways: stochastic bounds vs. set bounds, and frequency-domain vs. time-domain uncertainties. A discussion of frequency-domain uncertainties for robust control can be found in \cite{Douma2005}. Stochastic aspects of model errors are treated in great detail in \cite{Ljung1999}. In \cite{Santolaria2013}, the stochastic uncertainty of the parameters of robot kinematics is identified through Monte-Carlo sampling.
In the following paragraphs, we will focus on set bounds for time domain uncertainties.
Most of the previous literature belongs to set-membership identification \cite{Vicino1996,Milanese2004,Kieffer2006,Bravo2006,Ramdani2005}, which usually refers to works based on finding a feasible solution set (FSS). Given an unknown model and a set of measurements, the goal is to identify an FSS that is consistent with all measurements of the system. The general technique \cite{Vicino1996} is to model measurements, including their a priori assumed errors, as strips in parameter space. The FSS is then the intersection of all strips, such that the unknown parameter is guaranteed to be inside. It is important to note that the FSS does not actually represent non-deterministic parameters; rather, the aim is to find the `true' deterministic parameter value by narrowing the FSS down as much as possible. The non-determinism, in fact, must be assumed a priori. For example, in \cite{Zhang2020}, the non-deterministic disturbance of a linear system must be known a priori to identify the FSS of the system matrix parameters. As \cite{Ramdani2005} showed on real robot manipulators, the set-membership identification technique frequently returns an empty FSS, such that the a priori non-determinisms actually have to be increased manually. Also, the authors exclude data considered as `outliers' if the measurement strip is far away from the FSS. The work in \cite{Reppa2008} proposes to use the outliers for fault detection of robots. The work in \cite{Bravo2006} presents a set-membership approach that aims to track time-varying parameters. In contrast to these works, we are interested in identifying bounds of time-varying and non-deterministic parameters, for which the mentioned FSS approaches are not suitable.
Since formal methods are increasingly applied to robotic systems, the question arises how far verification results obtained for a model are actually transferable to the real system. This problem is also known as \textit{model conformance} and has been treated in depth in \cite{Roehm2019}. Most literature on set-based identification is based on the \textit{simulation relation}, since it allows a transfer of, e.g., temporal logic properties for the entire state space. The model can be a coarse-grained abstraction of the state space into a discrete automaton (e.g., for the navigation of mobile robots \cite{Kress-Gazit2018}), or differential equations \cite{Chen2019a,Sadraddini2018} with non-deterministic disturbance. Chen et al. \cite{Chen2019a} identify a linear system with non-determinism such that all state measurements are within a polytopic reachable set. Sadraddini and Belta \cite{Sadraddini2018} identify piece-wise affine models using mixed-integer linear programming, also establishing a simulation relation between measured states and hyperrectangular reachable sets.
However, if a system is high-dimensional, but only few outputs are relevant for verification, then the simulation relation can be too restrictive and conservative. Thus, \textit{trace} and \textit{reachset conformance} have been proposed to relax the formal relation to the output of a system only \cite{Roehm2019}. In \cite{Schurmann2018}, the authors apply trace conformance by reconstructing disturbance traces for a real autonomous vehicle. The set of non-deterministic disturbances is then taken as the outer bounds of all disturbance traces. \textit{Reachset conformance}, on the other hand, is a further relaxation which only requires that the output traces of a system lie within the reachable set of the model, rather than matching individual traces. The main advantage is that a user can choose the internal structure more freely, as long as the output is conformant, which allows for a more flexible model-order reduction \cite{Althoff2012a}, or even for black-box identification methods \cite{Wang2021}.
Although the set of transferable properties is thus reduced to reachability-based ones, this is only a supposed disadvantage: many verification problems in formal methods are actually based on reachability, such as the computation of control invariant sets \cite{Gruber2020} and the verification of collision avoidance \cite{Althoff2019}. First works on the identification of robot manipulators based on reachset conformance can be found in \cite{Liu2018,Giusti2021}.
A different view on set-based identification is to formulate it as a synthesis problem. The authors in \cite{Dang2019,Batt2007} are able to incorporate additional model knowledge as temporal logic constraints to improve identification.
\input{sections/Literature_Formal_Synthesis.tex}
At last, we make the connection of this work to the area of robust control for robots. The approach of this paper can be directly applied to the reachability analysis of feedback-linearizing robust linear controllers, where, similarly to our work, an uncertainty of the linear system due to an imperfect model is assumed \cite{Sage1999}.
Robustness analysis involves bounding of uncertain parameters of the model, e.g., in \cite[Section 8.5.3]{Siciliano2009a} the mass matrix and other nonlinear terms of the robot dynamics are bounded to prove uniform boundedness of the tracking error. The approach in \cite{Zenieh1997} discusses a control scheme for robots, that achieves a desired tracking performance with a specified convergence rate.
Uniform ultimate boundedness despite system uncertainties of the computed torque controller (which we analyse in our work) has already been shown in previous work \cite{Qu1991}. Further works on robust control for robots are surveyed in \cite{Sage1999}.
$\mathcal{H}_\infty$-synthesis (e.g., in \cite{Kim2015,Makarov2016}) generates controllers that minimize the influence of disturbances on the system dynamics expressed in the frequency domain.
Often, such as in \cite{Makarov2016}, these approaches are validated in the time domain through a Monte-Carlo simulation of the uncertain parameters to analyse the system's reachability. In contrast, our work computes the reachable set directly to evaluate control performance.
In fact, reachability analysis can be interpreted as a direct evaluation of robust properties such as uniform ultimate boundedness.
\subsection{Structure of this paper}
In Sec. \ref{sec_preliminaries} we introduce zonotopes and reachability analysis of linear systems. The reachability-based methods are presented in Sec. \ref{sec:methods}. We then address the application of these methods to the tracking control problem of robot systems in Sec. \ref{sec_evaluation}. This paper concludes in Sec. \ref{sec_discussion}.
\subsection{Reachability-based control synthesis}
Based on the identified uncertain model from the last section, we want to compute a controller which minimizes the resulting reachable sets while formally guaranteeing the satisfaction of state and input constraints. To do so, we use techniques from \cite{Schuermann2017b,Schuermann2021a}, where we combine the controller synthesis with reachable set computation in a single optimization problem.
Since we have a linear system, we use a classical linear trajectory tracking controller which has the form
\begin{align}
u_{ctrl}(x[k])=u_{ref}[k] + K(x[k]-x_{ref}[k]).
\end{align}
Here, $ x_{ref}[\cdot] $ denotes a reference trajectory, $ u_{ref}[\cdot] $ the corresponding reference input, and $ K $ the feedback matrix which we use to track this reference trajectory.
We consider constraint sets for the states and the inputs of the form
\begin{align}
x[k] &\in \mathcal{X}, \label{eq:method:StateConstraint}\\
u[k] &\in \mathcal{U},\label{eq:method:InputConstraint}
\end{align}
$ \forall k \in \mathbb{N}^+_0. $ Due to the linearity of the system dynamics, we can use the superposition principle to independently consider the problems of finding a reference trajectory $ x_{ref}[\cdot] $ and a feedback matrix $ K. $ We use an optimization problem to find the optimal feedback matrix $ K $ offline once and use this feedback matrix to track any (online) generated reference trajectory. In order to decouple these two control problems, we divide the state constraints into two parts $ \mathcal{X}_{ref} $ and $ \mathcal{X}_{fb} $ for the reference trajectory and for the feedback controller, respectively. We do the same for the input constraints with $ \mathcal{U}_{ref} $ and $ \mathcal{U}_{fb} $. We choose these sets such that
\begin{align}
\mathcal{X}_{ref} \oplus \mathcal{X}_{fb} \subseteq \mathcal{X},\\
\mathcal{U}_{ref} \oplus \mathcal{U}_{fb} \subseteq \mathcal{U}.
\end{align}
For simpler computation, we choose $ \mathcal{X}_{fb} $ and $ \mathcal{U}_{fb} $ as polytopes.
We obtain the feedback control law by solving the following optimal control problem
\begin{subequations}
\begin{alignat}{2}
&\!\min_{K} &\qquad& \texttt{size}(\mathcal{R}),\\
&\text{subject to} & & \mathcal{R}\subseteq \mathcal{X}_{fb},\label{eq:method:OCStateConstraint}\\
& & & \forall t: K \mathcal{R}_y(t) \subseteq \mathcal{U}_{fb}.\label{eq:method:OCInputConstraint}
\end{alignat}
\end{subequations}
Since $ \mathcal{R} $ and $ \mathcal{R}_y(t) $, and therefore also $ K \mathcal{R}_y(t), $ are zonotopes, checking the constraints \eqref{eq:method:OCStateConstraint}--\eqref{eq:method:OCInputConstraint} only requires checking whether a zonotope is contained in a polytope. As shown in \cite{Schuermann2021a}, this can be computed very efficiently.
In contrast to the optimization problem in the previous subsection, optimizing the feedback matrix which is multiplied with the output can no longer be expressed as a linear problem. As discussed in \cite{Schuermann2017b}, the optimization problem can be solved using standard nonlinear programming techniques.
The result of the optimization problem is a feedback matrix $ K $ which minimizes the reachable set in which the state is guaranteed to stay around a reference trajectory, while ensuring the satisfaction of the state and input constraints. During application, one only has to find a reference trajectory with respect to $ \mathcal{X}_{ref} $ and $ \mathcal{U}_{ref} $, and it is guaranteed that the closed-loop system satisfies the actual constraints \eqref{eq:method:StateConstraint}--\eqref{eq:method:InputConstraint}. The combined tracking controller then works similarly to a tube-based robust MPC approach, with the advantage of the optimal feedback controller from a reachability-based controller synthesis.
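The containment checks in \eqref{eq:method:OCStateConstraint}--\eqref{eq:method:OCInputConstraint} reduce to comparing support values of a zonotope against the polytope offsets. A minimal sketch with illustrative numbers (assuming numpy; not the paper's implementation):

```python
import numpy as np

def zonotope_in_polytope(c, G, N, d):
    """Check (c, G) inside {x | N x <= d}: the support value of the zonotope
    in direction n_j is n_j^T c + sum_h |n_j^T g_h|, compared row-wise to d."""
    support = N @ c + np.sum(np.abs(N @ G), axis=1)
    return bool(np.all(support <= d + 1e-12))

# hypothetical feedback matrix and tracking-error zonotope
K = np.array([[-2.0, -0.5]])
c = np.zeros(2)
G = np.array([[0.3, 0.0], [0.1, 0.2]])
# input-constraint polytope U_fb = [-1, 1] in halfspace form
N_u = np.array([[1.0], [-1.0]])
d_u = np.array([1.0, 1.0])
# K * R_y is again a zonotope (K c, K G), so the input constraint
# for all t reduces to the same support-value comparison
print(zonotope_in_polytope(K @ c, K @ G, N_u, d_u))
```

Because the check is a closed-form expression in $K$, it can be evaluated inside each iteration of a nonlinear programming solver without sampling.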
\section{Preliminaries}\label{sec_preliminaries}
\subsection{Zonotopes}
The advantage of zonotopes as a set representation is that they scale well to high dimensions. In addition, algebraic operations on zonotopes are closed-form, i.e., the result of an operation involving zonotopes is again a zonotope. In the following, we define their generator-space and halfspace representations, as well as the algebraic operations. We denote sets in calligraphic letters (e.g., $\mathcal{A}$), matrices by upper case letters (e.g., $A$), vectors by $\vec{\cdot}$, and scalar values by lower case letters (e.g., $a$). The $n$-dimensional identity matrix is denoted by $I_n$.
\begin{definition}[Zonotope: generator-space representation \cite{Althoff2010a}]\label{def:zonotopeG}
A zonotope $\mathcal{Z}$ is defined by a center $\vec{c}$ and a generator matrix $G$, whose $h$-th column is $\alpha^{(h)}\vec{g}^{(h)}$, where $\alpha^{(h)}>0$ is a scaling factor determining the length of the generator $\vec{g}^{(h)}$:
\begin{align*}
\mathcal{Z} = (\vec{c},G) &:=\left\lbrace \vec{x}=\vec{c} + \sum_{h=1}^{p}\beta_h \vec{g}^{(h)} \Bigg\vert \beta_h \in [-\alpha^{(h)},\alpha^{(h)}] \right\rbrace \\
&=\left\lbrace \vec{x}=\vec{c} + \sum_{h=1}^{p}\beta_h \alpha^{(h)} \vec{g}^{(h)} \Bigg\vert \beta_h \in [-1,1] \right\rbrace = (\vec{c},G'\diag(\vec{\alpha})),
\end{align*}
where $G' := [\vec{g}^{(1)},\dots,\vec{g}^{(p)}]$ and $\vec{\alpha} := [\alpha^{(1)},\dots,\alpha^{(p)}]^T$.
\end{definition}
\begin{definition}[Zonotope: halfspace representation \cite{Althoff2010a}]\label{def:zonotopeH}
A zonotope $(\vec{c},G)$ with $p$ generators has $2{p \choose n-1}$ facets. The generators that span a facet are obtained by cancelling $p-n+1$ generators from the $G$-matrix. This is denoted by $G^{\langle\gamma,\dots,\eta\rangle}$, where $\gamma,\dots,\eta$ are the $p-n+1$ indices of the generators that are taken out of $G$. The halfspace representation of a zonotope is $N \cdot \vec{x} \leq \vec{d}$, where
\begin{equation*}
N = \begin{bmatrix}
N^+ \\ -N^+
\end{bmatrix},\quad
\vec{d} = \begin{bmatrix}
\vec{d}^+\\ \vec{d}^-
\end{bmatrix},
\end{equation*}
and the $j$-th rows, $j \in \lbrace 1,\dots,{p \choose n-1}\rbrace$, of $N^+$, $\vec{d}^+$, and $\vec{d}^-$ are given by:
\begin{align*}
\vec{n}_j^+ &:= \nX (G^{\langle\gamma,\dots,\eta\rangle})/ ||\nX (G^{\langle\gamma,\dots,\eta\rangle})||_2 \\
d_j^+ &:= \vec{n}_j^{+T} \cdot \vec{c} + \Delta d_j \\
d_j^- &:= -\vec{n}_j^{+T} \cdot \vec{c} + \Delta d_j \\
\Delta d_j &:= \sum_{\nu=1}^{p}|\vec{n}_j^{+T} \cdot g^{(\nu)}|\\
\nX(H) &:= [\dots, (-1)^{j+1}\det(H^{[j]}), \dots]^T.
\end{align*}
\end{definition}
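For the planar case $n=2$, each facet is spanned by a single generator, and the cross-product operator reduces to $\nX([g_1,g_2]^T) = [g_2,-g_1]^T$. The following Python sketch implements Definition~\ref{def:zonotopeH} for 2-D zonotopes (an illustrative aid, not the paper's implementation):

```python
import numpy as np

def halfspaces_2d(c, G):
    """Halfspace representation N x <= d of a 2-D zonotope (c, G): every
    facet is spanned by one generator, with normal [g2, -g1] (normalized)."""
    normals, d_plus, d_minus = [], [], []
    for h in range(G.shape[1]):
        g = G[:, h]
        n = np.array([g[1], -g[0]])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # zero-length generator: no facet
        n = n / norm
        delta = np.sum(np.abs(n @ G))     # offset term Delta d_j
        normals.append(n)
        d_plus.append(n @ c + delta)
        d_minus.append(-(n @ c) + delta)
    Np = np.array(normals)
    return np.vstack([Np, -Np]), np.concatenate([d_plus, d_minus])

# unit square: center 0, generators e1 and e2
N, d = halfspaces_2d(np.zeros(2), np.eye(2))
print(N)
print(d)
```

For the unit square, this yields the four halfspaces $|x_1|\leq 1$, $|x_2|\leq 1$, as expected.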
\begin{definition}[Minkowski sum of zonotopes]
The Minkowski sum of sets is defined as $\mathcal{A} \oplus \mathcal{B} = \lbrace \vec{a} + \vec{b} \mid \vec{a} \in \mathcal{A}, \vec{b} \in \mathcal{B}\rbrace$. For zonotopes, their Minkowski sum has a closed-form solution in generator space
\begin{align*}
\mathcal{Z}_1 \oplus \mathcal{Z}_2 = (\vec{c}_1,G_1) \oplus (\vec{c}_2,G_2) = (\vec{c}_1+\vec{c}_2,[G_1,G_2]).
\end{align*}
\end{definition}
\begin{definition}[Linear transformation of zonotopes]
Zonotopes are closed under linear transformation: $A \mathcal{Z} = (A\vec{c},AG)$.
\end{definition}
\begin{definition}[Interval hull of zonotopes]\label{def:intervalHull}
The interval hull $\mathcal{I}(\mathcal{Z}) = \lbrace\vec{i}^-,\vec{i}^+\rbrace$ is a tight outer-approximation of a zonotope $\mathcal{Z} = (\vec{c},[\dots,\vec{g}^{(h)},\dots])$, which is defined by
\begin{gather*}
\vec{i}^- := \vec{c} - \vec{\delta g}, \qquad \vec{i}^+ := \vec{c} + \vec{\delta g}, \qquad \vec{\delta g} := \sum_{h=1}^{p} |\vec{g}^{(h)}|.
\end{gather*}
\end{definition}
\begin{definition}[Norm of zonotopes]\label{def:zonotopeNorm}
We define the norm of a zonotope as the sum of the side lengths of its interval hull: $||\mathcal{Z}|| := ||\vec{\delta g}||_1$, where $||.||_1$ is the (scalar) 1-norm.
\end{definition}
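The operations above act directly on the $(\vec{c},G)$ pair. A minimal NumPy sketch of the four operations (an illustration of the definitions, not the implementation used in this paper):

```python
import numpy as np

def minkowski_sum(Z1, Z2):
    """(c1, G1) + (c2, G2) = (c1 + c2, [G1, G2])."""
    (c1, G1), (c2, G2) = Z1, Z2
    return c1 + c2, np.hstack([G1, G2])

def linear_map(A, Z):
    """A Z = (A c, A G): zonotopes are closed under linear maps."""
    c, G = Z
    return A @ c, A @ G

def interval_hull(Z):
    """Tight box [c - dg, c + dg] with dg = sum_h |g^(h)|."""
    c, G = Z
    dg = np.abs(G).sum(axis=1)
    return c - dg, c + dg

def znorm(Z):
    """||Z|| := ||dg||_1, the zonotope norm defined above."""
    _, G = Z
    return np.abs(G).sum()

Z1 = (np.array([1.0, 0.0]), np.eye(2))            # box of radius 1 at (1, 0)
Z2 = (np.array([0.0, 1.0]), 0.5 * np.eye(2))      # box of radius 0.5 at (0, 1)
Zs = minkowski_sum(Z1, Z2)
lo, hi = interval_hull(Zs)
print(lo, hi, znorm(Zs))   # [-0.5 -0.5] [2.5 2.5] 3.0
```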
\subsection{Reachability analysis of linear time-invariant systems}
This paper mainly considers linear systems $S$ with uncertainties, described by the following differential inclusion:
\begin{align}\label{eq:methods:continuousSystem}
\begin{split}
\dot{\vec{x}}(t) &\in A\vec{x}(t) + B\vec{u}(t) \oplus E\mathcal{W}, \\
\vec{y}(t) &\in C\vec{x}(t) + D\vec{u}(t) \oplus \mathcal{V}.
\end{split}
\end{align}
If the input $\vec{u}(t)$ and the uncertainties $\mathcal{V},\mathcal{W}$ are constant within one sampling time $\Delta t$, then we can formulate a discrete-time version $\tilde{S}$, where $k = t/\Delta t$ is an integer:
\begin{align} \label{eq:methods:discreteSystem}
\begin{split}
\vec{x}[k+1] &\in \tilde{A} \vec{x}[k] + \tilde{B} \vec{u}[k] \oplus \tilde{E} \mathcal{W},\\
\vec{y}[k] &\in C \vec{x}[k] + D \vec{u}[k] \oplus \mathcal{V},
\end{split}
\end{align}
where the system matrices are
\begin{align*}
\tilde{A} &= e^{A\Delta t}, & \tilde{B} &= \int_{0}^{\Delta t}e^{A(\Delta t-\tau)}B\,d\tau, & \tilde{E} &= \int_{0}^{\Delta t}e^{A(\Delta t-\tau)}E\,d\tau.
\end{align*}
The \textit{reachable set} $\mathcal{R}$ of a linear system $\tilde{S}$ after one time-step is computed through a set-based evaluation of \eqref{eq:methods:discreteSystem}:
\begin{multline}
\mathcal{R}[k+1] = C \tilde{A} \mathcal{X}[k] \oplus C \tilde{B} \vec{u}[k] \\ \oplus C\tilde{E} \mathcal{W}
\oplus D \vec{u}[k] \oplus \mathcal{V}, \label{eq:methods:reachableSet}
\end{multline}
where $\mathcal{X}[k]$ is the current set of states. If an initial state $\vec{x}[0]$ is given, then the reachable set at step $k$ can be computed by recursively applying \eqref{eq:methods:reachableSet}:
\begin{multline}
\mathcal{R}[k] = C \tilde{A}^k \vec{x}[0] \oplus \sum_{i=0}^{k-1} C \tilde{A}^{k-1-i} \tilde{B} \vec{u}[i] \\
\oplus \bigoplus_{i=0}^{k-1}C \tilde{A}^{i}\tilde{E}\mathcal{W}
\oplus D \vec{u}[k] \oplus \mathcal{V}. \label{eq:methods:reachableSetRecursive}
\end{multline}
Since \eqref{eq:methods:reachableSetRecursive} only involves the Minkowski sum and linear transformations, the resulting reachable set is closed-form and exact for the discrete-time linear system $\tilde{S}$.
It is an inner-approximation of the reachable set of the continuous-time linear system $S$, as shown by the following lemma.
\begin{lemma}\label{lemma:constantInput}
By moving the set $\mathcal{W}$ out of the convolution integral of the particular solution of a linear time-invariant system, i.e., assuming $w$ is constant over the interval, the result is an inner-approximation of the case of a time-varying $w(\tau)$:
\begin{multline*}
\left\{\int_{0}^{t}e^{A(t-\tau)} d\tau w \bigg| w \in \mathcal{W} \right\} \subseteq\\ \left\{\int_{0}^{t}e^{A(t-\tau)} w(\tau) d\tau \bigg| \forall \tau: w(\tau) \in \mathcal{W}\right\}.
\end{multline*}
The right-hand side contains every solution of the left-hand side (choose $w(\tau) \equiv w$) and, in general, more. $\square$
\end{lemma}
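To illustrate the recursion \eqref{eq:methods:reachableSet}, the following self-contained NumPy sketch propagates a zonotope through an exactly discretized double integrator with a bounded disturbance; the system and the numbers are toy choices of ours, not taken from this paper:

```python
import numpy as np

dt = 0.1
# double integrator x1' = x2, x2' = u + w, discretized exactly:
At = np.array([[1.0, dt], [0.0, 1.0]])       # e^{A dt}
Bt = np.array([[dt**2 / 2], [dt]])           # int_0^dt e^{A(dt-tau)} B dtau
Et = Bt                                      # disturbance enters like the input

def lmap(A, Z):                              # A Z = (A c, A G)
    return A @ Z[0], A @ Z[1]

def msum(Z1, Z2):                            # Minkowski sum in generator space
    return Z1[0] + Z2[0], np.hstack([Z1[1], Z2[1]])

X = (np.zeros(2), np.zeros((2, 0)))          # initial state x[0] = 0 (a point)
W = (np.zeros(1), 0.05 * np.eye(1))          # |w| <= 0.05
u = 1.0

for k in range(10):                          # one-step recursion, 10 times
    X = msum(lmap(At, X), ((Bt * u).ravel(), np.zeros((2, 0))))
    X = msum(X, lmap(Et, W))

c, G = X
radius = np.abs(G).sum(axis=1)               # interval-hull half-widths
print(c, radius)                             # center [0.5, 1.0]
```

Each iteration appends one generator per disturbance source, which is why zonotope order grows linearly with the time horizon unless an order-reduction step is applied.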
For nonlinear systems of the form $\dot{x} \in f(x,u,\mathcal{W})$, no closed-form solution exists in general. Further work on outer- and inner-approximations of reachable sets is surveyed in \cite{Althoff2020}.
In this work, we consider the compositional analysis of linear subsystems. We define the operators $\texttt{series}(S_1,S_2)$ and $\texttt{feedback}(S_1,S_2)$, which refer to the series and feedback interconnection of linear multi-input, multi-output subsystems $S_1$ and $S_2$; the results are again linear systems. The derivation is straightforward, and details can be found in the supplied software code and in \cite{Duke1986}.
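As an example, the \texttt{series} operator admits the textbook closed form below; \texttt{feedback} is constructed analogously. This is our own minimal sketch, not the supplied software code:

```python
import numpy as np

def series(S1, S2):
    """Series interconnection of state-space systems S = (A, B, C, D):
    the output of S1 drives the input of S2; the result is again linear."""
    A1, B1, C1, D1 = S1
    A2, B2, C2, D2 = S2
    n1, n2 = A1.shape[0], A2.shape[0]
    A = np.block([[A1, np.zeros((n1, n2))],
                  [B2 @ C1, A2]])
    B = np.vstack([B1, B2 @ D1])
    C = np.hstack([D2 @ C1, C2])
    D = D2 @ D1
    return A, B, C, D

# two integrators in series yield a double integrator
integ = (np.zeros((1, 1)), np.eye(1), np.eye(1), np.zeros((1, 1)))
A, B, C, D = series(integ, integ)
print(A)   # [[0. 0.], [1. 0.]]
```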
\section{Introduction}\label{intro}
Due to the absence of B-meson measurements and the limited precision of
D-meson measurements, it is difficult to experimentally separate the
bottom and charm contributions in current non-photonic
electron measurements, for both spectra and elliptic flow $v_2$. As
discussed previously, the suppression behavior of heavy quarks is
quite different from that of light quarks due to the ``dead cone''
effect~\cite{deadcone}, and this is especially true for the bottom
quark. Even when the elastic energy loss is included, the bottom
quark still loses much less energy. The bottom contribution may
therefore reduce the apparent energy loss of non-photonic electrons from
heavy flavor decays. However, recent measurements show that the
suppression of the non-photonic electron \mbox{$R_{AA}$}\ is as large
as that of light hadrons~\cite{starcraa}. Both the theoretical results
with charm energy loss only and the calculations with charm+bottom
energy loss, assuming a large $\hat{q}$ or including elastic energy
loss, can describe the data within
errors~\cite{DGLV06,Wicks05,RappRaa,Ivancoll,armeloss}.
Recently, PHENIX has measured the non-photonic electron
$v_2$~\cite{Phenixv2}. The observed large elliptic flow of the
non-photonic electron may indicate strong coupling of heavy quarks
with the medium. There are many theoretical calculations for the
non-photonic electron $v_2$, such as a charm thermal+flow
model~\cite{kocharmflow}, A Multi-Phase Transport (AMPT) model
assuming a cross section $\sigma_{p}=10$ mb~\cite{AmptCharmflow},
resonance states of D-/B- mesons~\cite{vanHCharmflow}, {\em etc.}
The comparison with theories also shows that both the model
results with charm only and the results with charm+bottom have
good agreement with data within errors.
Thus, the puzzle of the bottom contributions in non-photonic
electron spectra and $v_2$ still remains. We present the following
method to estimate the bottom contributions and to study the
possible charm $v_2$.
\section{Fit to non-photonic electron spectrum and relative cross section ratio}\label{techno}
The non-photonic electron spectrum up to 10 \mbox{${\mathrm{GeV/}}c$}\ has been
measured by the STAR experiment in 200 \mbox{$\mathrm{GeV}$}\ \mbox{\textit{p}+\textit{p}}\ collisions. The idea
is to use the sum of the electron spectra from charm and
bottom decays in the PYTHIA model~\cite{pythia} to fit the STAR \mbox{\textit{p}+\textit{p}}\
data~\cite{starcraa} and extract the fraction of the bottom
contribution. Since the D-meson and decay-electron spectra
obtained with default PYTHIA parameters are too soft~\cite{ffcharm}, a modified
Peterson Fragmentation Function (FF) and a tuned high-\mbox{$p_T$}\ parameter
are used to harden the spectra, making them comparable with the form
factor decays~\cite{XYLin04}.
Table~\ref{PYpar} lists the parameter initialization for PYTHIA
6.131:
\begin{table}[hbt]
\caption[PYTHIA parameters for heavy flavor decays]{PYTHIA
parameters for heavy flavor decays.} \label{PYpar} \vskip 0.1 in
\centering\begin{tabular}{|c|c|} \hline \hline
Parameter & Value \\
\hline
MSEL & 4 (charm), 5 (bottom) \\
\hline
quark mass & $m_c=1.25$, $m_b=4.8$ (\mbox{$\mathrm{GeV}$}) \\
\hline
parton dist. function & CTEQ5L \\
\hline
$Q^2$ scale & 4 \\
\hline
K factor & 3.5 \\
\hline
$\langle K_t\rangle$ & 1.5 \\
\hline
Peterson Frag. function & $\varepsilon=10^{-5}$ \\
\hline
high \mbox{$p_T$}\ tuned PARP(67) & 4 \\
\hline \hline
\end{tabular}
\end{table}
Fig.~\ref{pyspecratio} (a) shows the \mbox{$p_T$}\ distributions of the
heavy flavor hadrons and their decay electrons from PYTHIA with
above parameters. The D-meson spectrum, shown as the hatched band,
is normalized to \begin{equation} dN/dy = dN/dy(D^0)/\langle
N_{bin}\rangle/R_{dAu}/R, \end{equation} where
$dN/dy(D^0)=0.028\pm0.004\pm0.008$ measured in \textit{d}+Au\
collisions~\cite{stardAucharm}. $\langle N_{bin}\rangle=7.5\pm0.4$
in \textit{d}+Au\ collisions. $R_{dAu}=1.3\pm0.3$~\cite{Xinthesis}. $R$
factor stands for $D^0$ fraction in total charmed hadrons, the
fragmentation ratio $R(c\rightarrow D^0)\equiv
N_{D^0}/N_{c\bar{c}}=0.54\pm0.05$~\cite{PDG}. All these
normalization errors are propagated into the uncertainty band of
the D-meson spectrum. The curve in this band is the lower limit of
the D-meson spectrum in our simulation. Correspondingly, its decay
electron spectrum is shown as the solid band. The non-photonic
electron spectrum measured in \mbox{\textit{p}+\textit{p}}\ collisions at
STAR~\cite{starcraa} is shown as the open squares. The decay
electron band alone can describe the data, indicating that the
contribution of electrons from bottom decay could be very small.
In order to estimate the upper limit of bottom contribution, we
use the lower limit of the decay electron spectrum, shown as the
open circles. B-meson spectrum (solid curve) and its decay
electron spectrum (open triangles) are normalized by varying the
ratio of $\sigma_{b\bar{b}}/\sigma_{c\bar{c}}$. The summed
spectrum (solid circles) by combining the lower limit of
$D\rightarrow e$ and $B\rightarrow e$ is used to fit STAR data in
\mbox{\textit{p}+\textit{p}}\ collisions, and then the upper limit of $B\rightarrow e$
contribution will be extracted.
\begin{figure} \centering \begin{minipage}[c]{0.47\textwidth} \centering
\includegraphics[width=1.\textwidth]{PY_cbespec.eps}
\end{minipage}%
\begin{minipage}[c]{0.47\textwidth} \centering
\includegraphics[width=1.\textwidth]{PY_bcratio.eps}
\end{minipage}%
\caption[D-/B- mesons and their decay electron spectra from PYTHIA
and the relative spectra ratio]{Panel (a): D-/B- mesons and their
decay electron spectra from PYTHIA. The $B+D\rightarrow e$ fit to
STAR non-photonic electron data in \mbox{\textit{p}+\textit{p}}\ collisions. Panel (b): The
relative spectra ratio, upper limit of $B\rightarrow e$
contributions as a function of \mbox{$p_T$}.} \label{pyspecratio}
\end{figure}
Fig.~\ref{pyfitchi2} (a) shows the fit $\chi^2$ as a function of
the unique variable $\sigma_{b\bar{b}}/\sigma_{c\bar{c}}$. The
best fit with a minimum $\chi^2/ndf=16.6/14$ gives the upper limit
of the total cross section ratio as
$\sigma_{b\bar{b}}/\sigma_{c\bar{c}}\leq(0.49\pm0.09\pm0.09)\%$.
The first term of the errors is calculated from
$\chi^2=\chi_{min}^2+1$. The second term is from the 15\%
normalization error of the $dN/dy$ converted to total cross
sections due to the uncertainties of the model dependent rapidity
distributions~\cite{Xinthesis}. Fig.~\ref{pyfitchi2} (b) shows the
B-/D- mesons rapidity distributions from PYTHIA. The cross section
ratio from FONLL calculation is 0.18\%-2.6\%~\cite{cacciari}. The
upper limit is consistent with theory prediction.
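The $\chi^2=\chi_{min}^2+1$ procedure can be illustrated with a self-contained toy scan; the power-law templates below are placeholders of our own, not the actual PYTHIA spectra:

```python
import numpy as np

# toy stand-in templates (arbitrary power-law shapes, NOT the PYTHIA output)
pt = np.linspace(2.0, 10.0, 15)
d_to_e = pt ** -6.0                  # "D -> e" template
b_to_e = 0.5 * pt ** -5.0            # "B -> e" template, harder spectrum
r_true = 0.005                       # cross-section ratio of the toy data
data = d_to_e + r_true * b_to_e
err = 0.1 * data                     # assumed 10% uncertainties

def chi2(r):
    model = d_to_e + r * b_to_e
    return np.sum(((data - model) / err) ** 2)

# scan the single free parameter r = sigma_bb/sigma_cc
rs = np.linspace(0.0, 0.02, 2001)
c2 = np.array([chi2(r) for r in rs])
best = rs[np.argmin(c2)]
onesig = rs[c2 <= c2.min() + 1.0]    # Delta chi^2 = 1 interval, as in the text
print(best)                          # recovers r_true = 0.005
```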
\begin{figure} \centering \begin{minipage}[c]{0.47\textwidth} \centering
\includegraphics[width=1.\textwidth]{PY_fitchi2.eps}
\end{minipage}%
\begin{minipage}[c]{0.47\textwidth} \centering
\includegraphics[width=1.\textwidth]{PY_DBrapidity.eps}
\end{minipage}%
\caption[$\chi^2$ distribution from fitting to non-photonic
electron spectrum and rapidity distributions from PYTHIA]{Panel
(a): Fit $\chi^2$ as a function of
$\sigma_{b\bar{b}}/\sigma_{c\bar{c}}$. The straight line indicates
$\chi^2=\chi_{min}^2+1$. Panel (b): B- (solid curve) /D- (dashed
curve) mesons rapidity distributions from PYTHIA.}
\label{pyfitchi2} \end{figure}
The upper limit of $B\rightarrow e$ contributions as a function of
\mbox{$p_T$}\ is shown in Fig.~\ref{pyspecratio} (b). It increases with \mbox{$p_T$}\ and
becomes flat around 7 \mbox{${\mathrm{GeV/}}c$}. The \mbox{$p_T$}\ crossing point of the electron
spectra from B and D decays, where the bottom contribution equals that
of charm, is very sensitive to the cross section ratio, since at
high \mbox{$p_T$}\ these electron spectra have similar shapes. From the
$B+D\rightarrow e$ fit to the STAR \mbox{\textit{p}+\textit{p}}\ data, we estimate the crossing
point as $p_T^c\geq7$ \mbox{${\mathrm{GeV/}}c$}.
Table~\ref{crosspt} lists the crossing points of heavy flavor
decay electrons in several \mbox{$p_T$}\ bins.
\begin{table}[hbt]
\caption[Crossing points of heavy flavor decay electrons]{Crossing
points of heavy flavor decay electrons as a function of $p_T$.}
\label{crosspt} \vskip 0.1 in
\centering\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \hline
\mbox{$p_T$}\ (\mbox{${\mathrm{GeV/}}c$}) & 2 & 3 & 4 & 5 & 6 & 7 ($p_T^c$) & 8 \\
\hline
$(B\rightarrow e)/(D\rightarrow e)\leq$ & 0.11 & 0.31 & 0.53 & 0.77 & 0.85 & 1.2 & 1.1 \\
\hline \hline
\end{tabular}
\end{table}
\section{Fit to non-photonic electron $v_2$}\label{others}
Besides the non-photonic electron spectrum, the non-photonic
electron $v_2$ has also been measured in 200 \mbox{$\mathrm{GeV}$}\ Au+Au\
collisions at RHIC~\cite{Phenixv2}. In this measurement, the bottom
contribution has not been separated; it can be studied by
comparing simulations and data. Since the heavy flavor hadron \mbox{$p_T$}\
distributions and $v_2$ are unknown, our simulations have to be based
on the following assumptions:
\begin{description}
\item[--] The relative $(B\rightarrow e)/(D\rightarrow e)$
ratio is the same in \mbox{\textit{p}+\textit{p}}\ and Au+Au\ collisions. \item[--] The B-/D- meson $v_2$
are assumed as inputs for the simulation; here we consider three cases:
\begin{itemize}
\item \uppercase \expandafter {\romannumeral 1}: The B-/D- meson $v_2$
are similar to the light meson $v_2$. \item \uppercase \expandafter
{\romannumeral 2}: The D-meson $v_2$ follows the light meson $v_2$, but
the B-meson does not flow. \item \uppercase \expandafter {\romannumeral 3}:
The $B\rightarrow e$ contribution is neglected and the D-meson $v_2$
decreases at $p_T>2$ \mbox{${\mathrm{GeV/}}c$}.
\end{itemize}
\end{description}
Here heavy flavor baryons, $\Lambda_c$ and $\Lambda_b$, are taken into
account as 10\% of the total heavy flavor hadrons~\cite{PDG,pdgerr}.
Their $v_2$ are assumed to follow the light baryon $v_2$. The effect of
this baryon contribution in the simulation is small.
We use the light meson $v_2$ curve from a fit to experimental
data~\cite{minepiv2} as the input B/D $v_2$ distribution
(Assumption \uppercase \expandafter {\romannumeral 1}); see
Fig.~\ref{cbev2} (a). That is, in each \mbox{$p_T$}\ bin the B/D
$\Delta\phi$ distribution is initialized accordingly. The electron
$\Delta\phi$ distributions in each \mbox{$p_T$}\ bin are then obtained via
B/D decays in the PYTHIA model, and the electron $v_2$, shown in
Fig.~\ref{cbev2} (b), is extracted by fitting the
$\Delta\phi$ distributions in each \mbox{$p_T$}\ bin.
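The extraction step can be illustrated with a toy Monte Carlo: for a distribution $dN/d\Delta\phi \propto 1 + 2v_2\cos(2\Delta\phi)$, the fit reduces to $v_2 = \langle\cos 2\Delta\phi\rangle$. This sketch is our illustration, not the analysis code:

```python
import numpy as np

rng = np.random.default_rng(1)
v2_in = 0.1
n = 200000

# sample dphi from dN/ddphi ~ 1 + 2 v2 cos(2 dphi) by rejection sampling
phi = rng.uniform(-np.pi, np.pi, 4 * n)
accept = rng.uniform(0.0, 1.0 + 2.0 * v2_in, phi.size)
phi = phi[accept < 1.0 + 2.0 * v2_in * np.cos(2.0 * phi)][:n]

# for this ideal distribution the fitted v2 equals <cos(2 dphi)>
v2_out = np.mean(np.cos(2.0 * phi))
print(round(v2_out, 3))   # close to the input value 0.1
```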
\begin{figure} \centering\mbox{
\includegraphics[width=0.9\textwidth]{PY_cbev2.eps}}
\caption[Decay electron $v_2$ from assumed B-/D- mesons
$v_2$]{Panel (a): Assumed B-meson $v_2$ (open circles) and D-meson
$v_2$ (dashed curve), both following the light meson $v_2$. Panel (b): Electron
$v_2$ from B-meson decays (open circles) and D-meson decays (open
squares).} \label{cbev2} \end{figure}
Fig.~\ref{cbev2} shows an obvious mass effect: the B and D $v_2$ are
assumed to be the same, but the decay electron $v_2$ can be very
different due to decay kinematics. This is not surprising, since
the B-meson is much heavier than the D-meson and light hadrons.
The decay electrons carry only a small momentum fraction of the
B-meson, and the momentum and angular correlations between decay
electrons and B-mesons are weak, especially at low \mbox{$p_T$}. Therefore,
at low \mbox{$p_T$}\ the decay electron $\phi$ angle is almost randomly
distributed, and we see a zero or negative $v_2$ for electrons
from B-meson decays. However, as shown above, the bottom
contribution below 3 \mbox{${\mathrm{GeV/}}c$}\ is small, so the mass effect on the
total electron $v_2$ is not significant.
Fig.~\ref{v2bnob} (a) shows the total electron $v_2$ from the PYTHIA
simulation compared to data. The measured non-photonic electron
$v_2$ from PHENIX is shown as the triangles. The solid curve
(Assumption \uppercase \expandafter {\romannumeral 1}) is the sum
of the two decay electron $v_2$ distributions in
Fig.~\ref{cbev2} (b), weighted by the relative ratio
$(B\rightarrow e)/(D\rightarrow e)$. It cannot
describe the data. If we assume that the B-meson does not flow
(Assumption \uppercase \expandafter {\romannumeral 2}), the total decay
electron $v_2$ decreases, shown as the band. The band
corresponds to
$\sigma_{b\bar{b}}/\sigma_{c\bar{c}}=(0.3-0.7)\%$ (the upper
limit, 0.49\%, lies in between). It agrees better with the data,
but is still higher. The decrease of the non-photonic electron $v_2$
could be due to the $B\rightarrow e$ contribution, and the B-meson $v_2$
could be very small. However, below 3 \mbox{${\mathrm{GeV/}}c$}\ the $B\rightarrow e$
contribution is not significant. This indicates that the D-meson $v_2$
should be smaller than the light meson $v_2$ and start decreasing at
higher \mbox{$p_T$}\ ($>2$ \mbox{${\mathrm{GeV/}}c$}).
\begin{figure} \centering\mbox{
\includegraphics[width=0.9\textwidth]{PY_v2bnob.eps}}
\caption[The total electron $v_2$ from PYTHIA simulation compared
to data]{Panel (a): The total electron $v_2$ from PYTHIA
simulation assuming that bottom flows (solid curve) and bottom
does not flow (band) compared to data. Panel (b): The total
electron $v_2$ from PYTHIA simulation fit to data and the
estimated D-meson $v_2$.} \label{v2bnob} \end{figure}
Thus, ignoring the $B\rightarrow e$ contribution, we estimate the
D-meson $v_2$ by fitting the data with the decay electron $v_2$
(Assumption \uppercase \expandafter {\romannumeral 3}). In
Fig.~\ref{v2bnob} (b), the best fit of the decay electron $v_2$ is
shown as the open circles. The estimated D-meson $v_2$ is shown as
the dashed curve; it is smaller than the light meson $v_2$ above 1
\mbox{${\mathrm{GeV/}}c$}\ and starts decreasing above 2 \mbox{${\mathrm{GeV/}}c$}.
\section{Conclusions}\label{concl}
The charm/bottom and their decay electron spectra and $v_2$ have
been studied using PYTHIA simulations. From a fit to the STAR
non-photonic electron spectra in p+p collisions, we estimate the
upper limit of the total cross-section ratio as
$\sigma_{b\bar{b}}/\sigma_{c\bar{c}}\leq(0.49\pm0.09\pm0.09)\%$.
The crossing point of the electron spectra from B decays and D
decays is estimated as $p_T^c\geq7$ \mbox{${\mathrm{GeV/}}c$}.
The bottom contribution can decrease the non-photonic electron $v_2$
through the mass effect, but this effect is not significant.
The decrease of the non-photonic electron $v_2$ is mainly due to
the decrease of the parent D-meson $v_2$. The estimated D-meson
$v_2$ is smaller than the light meson $v_2$ above 1 \mbox{${\mathrm{GeV/}}c$}\ and starts
decreasing above 2 \mbox{${\mathrm{GeV/}}c$}. This most probable D-meson $v_2$
distribution shows that at $p_T<3$ \mbox{${\mathrm{GeV/}}c$}, where the bottom
contribution is negligible, the D-meson has a large $v_2$, indicating
that charm flows strongly in the hot, dense medium, which could be
evidence of light flavor thermalization in the QGP created at RHIC
energies.
\section*{Acknowledgments}
We thank S.~Esumi, H.~Huang, Y.~Miake, S.~Sakai and N.~Xu for
their cooperation, and we thank the conference organizers. We would
also like to thank Drs. L.J.~Ruan and Z.B.~Xu for helpful
discussions.
\section{Introduction}\label{sec:intro}
In April 2019, the Event Horizon Telescope Collaboration (EHTC) published the first ever image of the supermassive black hole (SMBH) at the heart of M87 \citep{EHT_M87_PaperI,EHT_M87_PaperII,EHT_M87_PaperIII,EHT_M87_PaperIV,EHT_M87_PaperV,EHT_M87_PaperVI}. The 2017 observation campaign included numerous other targets, including the Galactic Center black hole Sgr\,A$^\ast$. The existence of an SMBH in the center of our own galaxy has been inferred from stellar dynamics observations over multiple decades by two independent groups \citep{Ghez:2008,Gillessen:2009}. In the wake of the publication of the first EHT radio image of Sgr\,A$^\ast$ \citep{EHT_SgrA_PaperI,EHT_SgrA_PaperII,EHT_SgrA_PaperIII,EHT_SgrA_PaperIV,EHT_SgrA_PaperV,EHT_SgrA_PaperVI}, open questions concern the type of compact object residing in the Galactic Center. At the same time, accretion physics onto compact objects and emission processes in the direct vicinity around them are not fully understood. In the past, numerous theoretical comparisons between background spacetimes and their effect on the photon ring size \citep{Wielgus2021,Cruz-Osorio2021a}, black hole charge \citep{Kocherlakota2021} and synthetic images have been carried out \citep{Lu2014,Lu2016,Olivares2020,Mizuno2018,Fromm2021a,Younsi2021,Ozel2021,Kocherlakota2022}. Still, tests of general relativity in the strong-field regime pose an immense challenge, especially once effects such as plasma turbulence and limited telescope resolution come into play.
This work aims to compare two fundamentally different spacetimes and various combinations of accretion and emission models. We build on the pioneering work of \cite{Mizuno2018}, who compared a moderately rotating Kerr to a non-rotating dilaton black hole in a full-3D GRMHD simulation. First, we expand on this work by including a Magnetically Arrested Disk (MAD) accretion model in addition to the Standard and Normal Evolution (SANE).
From the GRMHD simulations, we find that the accretion torus consists of hot low-density collision-less ion plasma. Since we are interested in the radiating electrons, a bridging function for the respective temperatures is required. Many works such as \cite{Mizuno2018} used a constant value to relate the two; however, electrons radiate efficiently and Coulomb-collision ion cooling is suppressed due to low densities and high temperatures. Additionally, various heating processes are at play \cite{Chael2018,Mizuno2021}. Due to the different impacts of the aforementioned processes, the electron and proton temperatures are believed to be non-linearly related. Since our simulations do not include electrons, we employ a parametrization of the electron temperature as an effective model in radiative post-processing \citep{Moscibrodzka2016}. Further, plasma processes such as magnetic reconnection and instabilities lead to acceleration of electrons in some regions around black holes. Therefore, we employ a non-thermal electron energy distribution function \citep[see, e.g.][]{Ball2018a,Davelaar2018,Davelaar2019,Cruz-Osorio2021b}. Following \cite{Mizuno2018}, we also consider matching the Kerr and dilaton black holes at their unstable circular photon orbits and at their event horizons. Additionally, in order to disentangle effects of black hole spin from the presence of additional fields in the background spacetime, we include a Schwarzschild simulation.
Other works investigating alternative theories of gravity \citep[e.\,g.][]{EHT_SgrA_PaperVI,Kocherlakota2021,Kocherlakota2022,Younsi2021,Ozel2021,Vagnozzi2022} often make use of semi-analytical plasma physics and emission models but include a broad spectrum of theories in their study. In this work, we restrict ourselves to a single theory, the dilaton black hole, but aim to create a scenario more akin to reality. The dilaton parameter used in this study is well within the current constraints \citep[see Fig. 18 of][]{EHT_SgrA_PaperVI}. Instead of semi-analytic models, we use state-of-the-art GRMHD and GRRT simulations to model the accretion and emission physics in this alternative theory of gravity.
In this study, we scale our simulations to Sgr~A$^\ast$ (RA 17h 45m 40s, Dec -29$^\circ$ 0' 28'', \citealt{Petrov2011}) as a representative system. We use a mass of $M_{\rm BH}=4.148\times10^{6}$\,M$_\odot$, at a distance of $D_{\rm BH}=8.175$\,kpc \citep{Gravity2019}.
The paper is structured as follows: In Section \ref{sec:methods} we describe the setup of the GRMHD simulations and GRRT calculations. We present the results in the same order in Section \ref{sec:results} along with a spectral analysis, and discuss them in Section \ref{sec:summary}. We present our final conclusions in Section \ref{sec:conclusion}.
\section{Methods}\label{sec:methods}
\subsection{General-Relativistic Magneto-hydrodynamics (GRMHD)}\label{sec:GRMHD}
In this work, we investigate two exemplary black hole systems in full GRMHD. Following the setup in \cite{Mizuno2018}, we choose a Kerr black hole with dimensionless spin $a_\star=0.6$ and a dilaton black hole with dilaton parameter $\hat{b}=0.504$ in spherically symmetric polar coordinates. The value of $\hat{b}$ is consistent with constraints obtained in recent studies \citep{EHT_SgrA_PaperVI,Kocherlakota2021} and quantifies a deviation from GR through a contribution to the black hole mass caused by the presence of the dilaton.
The dilaton black hole is described by Einstein-Maxwell-dilaton-axion (EMDA) gravity, which in turn has its roots in the low-energy effective formulation of string theory \citep{Garcia1995}. In EMDA gravity, the scalar dilaton and axion vector fields couple to the Faraday tensor.
In order to arrive at two comparable systems with similar plasma dynamics, the black holes are matched to have the same innermost stable circular orbit (ISCO). Likewise, the event horizon or the unstable photon orbit can be chosen to match the spacetimes. To this end, the dilaton parameter was calculated from the Kerr black hole's spin by equating the respective expressions for the ISCO. The analytic expressions of the dilaton metric, along with the characteristic radii and the resulting matchings, are reported in Appendix \ref{sec:matchings_app}. For more details on EMDA gravity, see e.\,g. \citep{WeiLiu2013,Flathmann2015,Banerjee2021a,Banerjee2021b}.
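On the Kerr side, the prograde ISCO radius is given by the standard Bardeen–Press–Teukolsky expression; the dilaton expression, reported in the appendix, is then solved for $\hat{b}$ at this radius. A short sketch of the Kerr side:

```python
import numpy as np

def kerr_isco(a):
    """Prograde ISCO radius of a Kerr black hole in units of M
    (Bardeen, Press & Teukolsky 1972)."""
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

# a_star = 0.6 gives the matching radius used to fix the dilaton parameter
print(kerr_isco(0.6))   # ~3.829
```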
It is important to note that instead of simply highlighting differences between two similar objects, we show that even the differences between two fundamentally dissimilar objects are not appreciable at the moment. In the former case, it would be best to compare similar objects, e.\,g. a Schwarzschild to a non-rotating dilaton black hole. Since we take the latter approach to this challenge, we consider the rotating Kerr and the non-rotating dilaton black hole. Additionally, given a fixed black hole mass, the Schwarzschild metric does not contain any degree(s) of freedom through which systems with a common ISCO, unstable photon orbit or event horizon radius could be explored.
This work does not aim to extensively investigate spacetime parameters or non-GR spacetimes in a general manner, but rather serves as a case study on the distinguishability of two different spacetimes. Moreover, we study plasma properties in the framework of accretion models, electron temperature, and electron distribution function by changing plasma parameters like density, emissivity and opacity.
In the EMDA metric, we set both the axion field and the spin of the dilaton black hole to zero \citep{Mizuno2018}. The metric then reduces to a Schwarzschild-like expression with the dilaton parameter $\hat{b}$ as the remaining degree of freedom, in a sense quantifying the deviation from general relativity.
The fundamental GRMHD equations read \citep[e.\,g.][]{Rezzolla_book:2013,Porth2017}:
\begin{linenomath}\begin{equation} \label{eq:grmhd_eqs}
\nabla_\mu\left(\rho u^\mu\right)=0,\ \ \ \nabla_\mu T^{\mu\nu}=0,\ \ \ \nabla_\mu {}^{*}\!F^{\mu\nu}=0,
\end{equation}\end{linenomath}
and describe local conservation of mass, energy and momentum and Faraday's law. In the first equation, $\rho$ is the rest mass density and $u^\mu$ is the fluid four-velocity. The energy-momentum tensor $T^{\mu\nu}$ and the dual of the Faraday tensor $^{*}\!F^{\mu\nu}$ read:
\begin{linenomath}\begin{equation}
T^{\mu\nu}=\rho h_{\rm tot}u^\mu u^\nu +p_{\rm tot} g^{\mu\nu}-b^\mu b^\nu,\ \ \ ^{*}\!F^{\mu\nu}=b^\mu u^\nu - b^\nu u^\mu,
\end{equation}\end{linenomath}
where $p_{\rm tot}=p+b^2/2$ and $h_{\rm tot}=h+b^2/\rho$ are the total pressure and specific enthalpy, respectively.
The magnetic field strength in the fluid frame and magnetic field four-vector are denoted by $b^2=b^\mu b_\mu$ and $b^\mu$.
In total, four GRMHD simulations of the Kerr and dilaton black holes with two distinct magnetic field configurations were carried out. The extents of the numerical grid and other parameters are summarized in Table \ref{tab:grmhd_params}.
The setup is identical to \cite{Mizuno2018}: An initially stationary torus in hydrostatic equilibrium with a weak poloidal magnetic field is set up around the black hole. For both the Kerr and dilaton tori a constant specific angular momentum distribution is chosen, with $l_{\rm Kerr}=4.5$ and $l_{\rm dilaton}=4.567$. These values determine the inner edge of the torus to be $r_{\rm in}=10.3$\,M in both systems \citep{Font02b,Rezzolla_book:2013,Cruz2020}. Inside the outermost closed equipotential surface, thermodynamic quantities are computed with an ideal gas equation of state with an adiabatic index $\Gamma=4/3$. In order to avoid vacuum regions, floor values are applied whenever a cell satisfies $\rho\leq\rho_{\rm floor}=10^{-4} r^{-3/2}$ or $p \leq p_{\rm floor}=(10^{-6}/3) r^{-5/2}$ \citep{Mizuno2018}. Since the torus is stationary by construction, the magneto-rotational instability (MRI) is triggered by randomly perturbing the gas pressure by about 1\%. The MRI develops and subsequently drives the accretion process.
The vector potential of the poloidal magnetic field (with $q$ as defined below) has the general form
\begin{linenomath}\begin{equation}
A_\phi \propto \max\left(q-0.2,0\right),
\end{equation}\end{linenomath}
and is added on top of the constructed torus. In this work, both a weak and a strong magnetic field configuration are considered. The former produces a Standard and Normal Evolution (SANE) scenario \citep{Narayan2012,Mizuno2018,Ripperda2020}, while the latter is likely to result in a Magnetically Arrested Disk (MAD) state. For each case, $q$ is given by
\begin{linenomath}\begin{align}
q &= \frac{\rho} {\rho_{\rm max}}, \ &\mathrm{for\ \ SANE}\\
q &= \frac{\rho} {\rho_{\rm max}}\left(\frac{r}{r_{\rm in}}\right)^3 \sin^3 \theta \ \exp \left(-\frac{r}{400}\right), &\mathrm{for\ \ MAD}
\end{align}\end{linenomath}
see also \cite{Fishbone76,Font02b,Rezzolla_book:2013}.
In SANE, matter can continuously accrete onto the black hole since the magnetic field is weak and disordered \citep{Narayan2012}. In the strong field case, more magnetic flux can pile up near the black hole, blocking off accretion and thereby ``arresting'' the disk \citep{Narayan2012}.
The simulation domain spans $r\in\left(0.8\,r_{\rm eh}, 1,000\,{\rm M}\right)$, $\theta\in\left(0.01\pi, 0.99\pi\right)$ and $\phi\in\left(0, 2\pi\right)$, covered by the numerical grid with $(N_r,N_\theta,N_\phi)=\left(256,128,128\right)$ gridpoints. The grid is logarithmic in the radial direction to naturally grant a higher resolution near the black hole, and uniform in the azimuthal and polar directions. At the inner and outer radial boundaries, standard inflow and outflow boundary conditions are employed, respectively. Along the polar boundaries, a solid reflective wall sets the flux through it to zero \citep{Shiokawa2012,Mizuno2018}. In the azimuthal direction, boundary conditions are simply periodic.
The GRMHD equations \eqref{eq:grmhd_eqs}, in conservative and 3+1 split form, are solved by the \texttt{Black Hole Accretion Code BHAC} \citep{Porth2017}. It is a multidimensional extension to the \texttt{MPI-AMRVAC} framework \citep{Porth2014,Keppens2012} capable of evolving the GRMHD equations in a finite volume representation, in arbitrary spacetimes and coordinates. For a comparison with other state-of-the-art GRMHD codes, see \cite{Porth2019_etal}.
In our setup, grid cell-interface values of primitive variables are calculated using a piecewise-parabolic method, resulting in local Riemann problems handled by a total variation diminishing Lax-Friedrichs scheme. For the time advance, a predictor-corrector scheme is employed \citep{Porth2017}. The spherically symmetric dilaton metric is implemented in Rezzolla-Zhidenko (RZ) parametrized form \citep{Rezzolla2014,Konoplya2016a}.
Even though \texttt{BHAC} is capable of adaptive mesh refinement (AMR), our setup does not make use of it, enabling us to enforce the $\boldsymbol{\nabla\cdot B} =0$ constraint using flux-interpolated constrained transport (FCT) \citep{Olivares2019}. We further employ modified Kerr-Schild coordinates, along with their parametrized form for the dilaton system \citep{Porth2017,Mizuno2018}. \\
\begin{table*}[h!]
\vspace{-0pt}
\centering
\renewcommand{\arraystretch}{1.2}
\caption{GRMHD parameters for SANE and MAD simulations, adapted from \cite{Mizuno2018}; $r_{\rm eh}$ is the event horizon radius.}
\begin{tabular}{llll}
\hline \hline
\multicolumn{4}{c}{Plasma}\\
adiab. index $\Gamma$ & density floor $\rho_{\rm fl}$ & pressure floor $p_{\rm gas,\ fl}$ & accretion model\\
$4/3$ & $10^{-4} r^{-3/2}$ &$ (10^{-6}/3) r^{-5/2}$ & MAD, SANE\\
\hline
\multicolumn{4}{c}{Spacetime$^\ast$}\\
$a$ & $\hat{b}$ & $l_{\rm torus,\ Kerr}$ & $l_{\rm torus,\ dilaton}$ \\
$0.6$ & $0.504$ & $4.5$ & $4.567$ \\
\hline
\multicolumn{4}{c}{Grid extent}\\
radial, $r$ & polar, $\theta$ & azimuthal, $\phi$ & cells, $(N_r,N_\theta,N_\phi)$\\
$ \left(0.8\,r_{\rm eh}, 1,000\,{\rm M}\right)$ &$ \left(0.01\pi, 0.99\pi\right)$ &$\left(0, 2\pi\right) $&$\left(256,128,128\right)$ \\
\hline
\multicolumn{4}{l}{$^\ast$ Kerr dimensionless spin parameter, dilaton parameter and specific angular momenta} \\
\label{tab:grmhd_params}
\end{tabular}
\end{table*}
\subsection{Electron temperature and distribution function}\label{sec:ET}
In hot low-density plasmas, temperatures of ions and electrons are generally not equal,
resulting in a two-temperature state \citep[][and references therein]{Yuan2014}. Since our GRMHD simulation only evolves the dynamically important protons, they have to be linked to the radiating electrons. \cite{Mizuno2018} used a constant proton-to-electron temperature ratio $T_{\rm p}/T_{\rm e}=3$; in this study, $T_{\rm p}/T_{\rm e}$ is set by a parametrization depending on the plasma parameter $\beta$ and an additional free parameter $R_{\rm high}$ \citep{Moscibrodzka2016}, where $\beta\equiv p_{\rm\,gas}/p_{\rm\,mag}$ is the ratio of gas to magnetic pressure. The $T_{\rm p}/T_{\rm e}$ parametrization is defined as:
\begin{linenomath}\begin{equation} \label{eq:tratio_beta}
\frac{T_{\rm p}}{T_{\rm e}}=\frac{R_{\rm low}+R_{\rm high}\,\beta^2}{1+\beta^2}\,.
\end{equation}\end{linenomath}
For alternative electron temperature prescriptions, see \cite{Anantua2020}. The free parameters $R_{\rm high}$ and $R_{\rm low}$ control the temperature ratio in the disk ($\beta\gg1$) and in the jet ($\beta\ll1$). Throughout this work, the simplified version of the parametrization characterized by $R_{\rm low}=1$ is employed \citep{EHT_M87_PaperV}.
The electron temperature in cgs units is then calculated as
\begin{linenomath}\begin{equation}\label{eq:dim_less_T_e}
T_{\rm e}=\frac{m_{\rm e}c^2}{k_{\rm B}} \Theta_{\rm e} = \frac{m_{\rm e}c^2}{k_{\rm B}} \Theta_{\rm p} \frac{m_{\rm p}}{m_{\rm e}} \left(\frac{T_{\rm p}}{T_{\rm e}}\right)^{-1}.
\end{equation}\end{linenomath}
$\Theta_{\rm e} \equiv k_{\rm B} T_{\rm e}/m_{\rm e} c^{2}$ is the dimensionless electron temperature; its proton (ion) equivalent $\Theta_{\rm p}$ is known from the GRMHD simulation. The R-$\beta$ parametrization has been shown to well model the presence of turbulent and magnetic reconnection heating of electrons \citep{Mizuno2021,Chael2018}.
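As a concrete illustration, Eqs. \eqref{eq:tratio_beta} and \eqref{eq:dim_less_T_e} can be evaluated directly. The following Python sketch (function and constant names are our own, not part of any of the codes used in this work) computes the temperature ratio and the electron temperature in cgs units:

```python
# Physical constants in cgs units
M_E = 9.1093837e-28   # electron mass [g]
M_P = 1.67262192e-24  # proton mass [g]
C = 2.99792458e10     # speed of light [cm/s]
K_B = 1.380649e-16    # Boltzmann constant [erg/K]

def temperature_ratio(beta, r_high, r_low=1.0):
    """R-beta parametrization: T_p/T_e = (R_low + R_high*beta^2)/(1 + beta^2)."""
    b2 = beta * beta
    return (r_low + r_high * b2) / (1.0 + b2)

def electron_temperature(theta_p, beta, r_high, r_low=1.0):
    """Electron temperature in Kelvin from the dimensionless proton
    temperature Theta_p, via Theta_e = Theta_p (m_p/m_e) (T_p/T_e)^-1."""
    theta_e = theta_p * (M_P / M_E) / temperature_ratio(beta, r_high, r_low)
    return (M_E * C**2 / K_B) * theta_e
```

In the disk limit ($\beta\gg1$) the ratio tends to $R_{\rm high}$, in the jet limit ($\beta\ll1$) to $R_{\rm low}$, reproducing the behavior described above.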
In addition to the thermal electron distribution function, a non-thermal kappa model is adopted \citep[e.\,g.][or, for recent application, \citealt{Davelaar2018,Davelaar2019}]{Vasyliunas1968,Tsallis1988,Tsallis1998,LivadotisMcComas2009}. All formulations of electron energy distribution functions, absorptivities and emissivities are taken from \cite{Pandya2016}.
The thermal distribution function reads \citep{Leung2011}:
\begin{linenomath}\begin{equation}\label{eq:rel_thermal}
\frac{dn_{\rm e}}{d\gamma_{\rm e}\,d\cos\xi\,d\phi} = \frac{n_{\rm e}}{4 \pi \Theta_{\rm e}} \frac{\gamma_{\rm e} \left(\gamma_{\rm e}^2 - 1\right)^{1/2}}{K_2\left(1/\Theta_{\rm e}\right)} \exp \left(- \frac{\gamma_{\rm e}}{\Theta_{\rm e}}\right),
\end{equation}\end{linenomath}
with electron number density $n_{\rm e}$, gyrophase $\phi$, Lorentz factor $\gamma_{\rm e}$, pitch angle $\xi$ and modified Bessel function of the second kind $K_2$. The kappa distribution function can be written as \citep{Xiao2006}:
\begin{linenomath}\begin{equation} \label{eq:rel_kappa}
\frac{dn_{\rm e}}{d\gamma_{\rm e}\,d\cos{\xi}\,d\phi} = \frac{N}{4 \pi} \gamma_{\rm e} \left(\gamma_{\rm e}^2 - 1\right)^{1/2} \left(1 + \frac{\gamma_{\rm e}-1}{\kappa w}\right)^{-(\kappa+1)},
\end{equation}\end{linenomath}
with normalization factor $N$ \citep{Pandya2016}. The kappa index is related to the high-energy power law slope $s$ as $\kappa=s+1$. The width of the distribution is explained below.
In this work, $\kappa$ is not set to a constant value but is calculated from fluid variables based on particle-in-cell simulations of magnetic reconnection in current sheets \citep{Ball2018a}:
\begin{linenomath}\begin{equation}
\kappa= 2.8+0.7\,\sigma^{0.5}+3.7\,\sigma^{-0.19}\,\tanh \left(23.4\,\sigma^{0.26}\,\beta\right),
\end{equation}\end{linenomath}
where $\sigma=b^2/\rho$ is the magnetization. The analytical fitting function was obtained for $10^{-4}<\beta<1.5$ and $0.1<\sigma<7.2$; these ranges are believed to be consistent with typical values found in the outer jet wall.
The width of the distribution $w$ can be written to contain a thermal and a magnetic energy term \citep{Davelaar2019}:
\begin{linenomath}\begin{equation}\label{eq:w}
w=\frac{\kappa-3}{\kappa}\left(\Theta_{\rm e}+\frac{\varepsilon}{2}\left[1+\tanh\left(r-r_{\rm inj}\right)\right]\frac{m_{\rm p}}{m_{\rm e}} \frac{\sigma}{6}\right),
\end{equation}\end{linenomath}
where $\varepsilon$ sets the fraction of the magnetic energy contribution to the electron temperature. We set $\varepsilon$ to $0$ and $0.015$.\\
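A minimal Python sketch of the two fitting formulas above (names are of our choosing; the proton-to-electron mass ratio is hard-coded and the validity ranges of the fit are not enforced):

```python
import math

def kappa_index(sigma, beta):
    """Kappa index from magnetization sigma and plasma beta (reconnection fit)."""
    return (2.8 + 0.7 * sigma**0.5
            + 3.7 * sigma**(-0.19) * math.tanh(23.4 * sigma**0.26 * beta))

def kappa_width(kappa, theta_e, sigma, r, r_inj, eps, mp_over_me=1836.15267):
    """Width w of the kappa distribution: thermal term plus a magnetic
    term switched on smoothly outside the injection radius r_inj."""
    magnetic = 0.5 * eps * (1.0 + math.tanh(r - r_inj)) * mp_over_me * sigma / 6.0
    return (kappa - 3.0) / kappa * (theta_e + magnetic)
```

For $\varepsilon=0$ the width reduces to the purely thermal value $w=(\kappa-3)\,\Theta_{\rm e}/\kappa$.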
\subsection{General Relativistic Radiative Transfer (GRRT)}\label{sec:GRRT}
In order to model millimeter and sub-millimeter synchrotron emission, GRRT calculations are carried out on the GRMHD simulations. First, null geodesics (light rays) are integrated directly between the black hole system and a far-away observer. Then, the differential equations for intensity and optical depth are integrated along each ray \citep{Younsi2012}. They read:
\begin{equation}
\frac{d\tau_\nu}{d\lambda} = \xi^{-1}\,\alpha_{0,\nu} \vphantom{\frac12}\,,\quad
\frac{d\mathcal{I}}{d\lambda} = \xi^{-1}\left(\frac{j_{0,\nu}}{\nu^3}\right)e^{-\tau_\nu}\vphantom{\frac12}\label{eq:int_ode_I},
\end{equation}
with affine parameter $\lambda$, optical depth $\tau_\nu$, Lorentz-invariant intensity $\mathcal{I}$, frequency $\nu$, absorptivity $\alpha_{0,\nu}$ and emissivity $j_{0,\nu}$. For the latter two, the subscript 0 denotes measurement in the local fluid rest frame.
In this work, we make use of the code \texttt{Black Hole Observations in Stationary Spacetimes BHOSS} \citep{Younsi2020}. The geodesics are handled by a Runge-Kutta-Fehlberg integrator, solving the equations to fourth order and adjusting the step size using a fifth-order error estimate \citep{Fehlberg1969}. The intensity equations are integrated in an Eulerian scheme along each previously obtained light trajectory. In \texttt{BHOSS}, a far-away observer is initialized in the form of an image plane perpendicular to the line of sight (towards the black hole system). The full camera setup is reported in \cite{Younsi2016}. All generated images are averages over 101 snapshots taken between 11,000\,M and 12,000\,M of the GRMHD simulation. This time span corresponds to about six hours for Sgr A$^\ast$. For each emission model, we iterate the mass accretion rate $\dot{M}$ until an average flux of 2.5\,Jy at 230\,GHz is obtained \citep{Bower2019}.
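The Eulerian update of Eq. \eqref{eq:int_ode_I} can be sketched as follows, for the idealized case of constant rest-frame absorptivity, emissivity and frequency shift $\xi$ along a ray (a toy version with names of our choosing; \texttt{BHOSS} evaluates these quantities locally along each geodesic):

```python
import math

def integrate_ray(alpha0, j0, nu, xi, dlam, n_steps):
    """Forward-Euler integration of d(tau)/d(lambda) = alpha0/xi and
    d(I_inv)/d(lambda) = (j0/nu^3) exp(-tau) / xi along a ray with
    constant coefficients; returns optical depth and intensity I_nu."""
    tau, inv_intensity = 0.0, 0.0
    for _ in range(n_steps):
        tau += dlam * alpha0 / xi
        inv_intensity += dlam * (j0 / nu**3) * math.exp(-tau) / xi
    return tau, inv_intensity * nu**3
```

In the optically thick limit the recovered intensity approaches the source function $j_{0,\nu}/\alpha_{0,\nu}$, a useful sanity check for any such integrator.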
Recently, it has been shown in ultra-high spatial resolution GRMHD simulations that magnetic reconnection takes place in both jet and disk \citep[e.\,g.][]{Ripperda2020}. Nonetheless, we apply non-thermal emission only in a narrow region within the jet wall, consistent with existing literature \citep{Davelaar2018,Davelaar2019}.
We neglect any emission from regions where $\sigma\geq\sigma_{\rm cut}=1$. Additionally, we employ a constraint on the Bernoulli parameter $Be=-hu_t\geq1.02$. Where $Be$ exceeds unity, the gas is unbounded, feeding jet and wind outflow \citep{Moscibrodzka2013}. Image extent and resolution are summarized in Table \ref{tab:bhoss_params}, along with mass of, and distance to the black hole.
For moderate spin, the most recent EHT results favor models with lower inclinations and higher values of $R_{\rm high}$ for Sgr\,A$^\ast$ \citep{EHT_SgrA_PaperV}. The inclination used here was adapted from \cite{Mizuno2018} to maintain comparability of results; as a follow-up to that work, this study was begun well before those EHT results were published. We investigated $R_{\rm high}=80$ and 160 in \cite{Roder2022}. However, at the chosen inclination and field of view, the SANE image morphology does not change for $R_{\rm high}\geq40$, while the MAD images already stop changing for $R_{\rm high}\geq10$.
\begin{table}[h]
\vspace{-0pt}
\centering
\renewcommand{\arraystretch}{1.2}
\caption{GRRT parameters. }
\begin{tabular}{llll}
\hline \hline
\multicolumn{4}{c}{Images}\\
pixels & FOV ($\upmu$as)& inclination (deg) & $S_{\rm 230\,GHz}$ (Jy)\\
1024 & 300 &60 &2.5\\
\hline
\multicolumn{4}{c}{Emission model}\\
$R_{\rm low}$ & $R_{\rm high}$& eDF & $\varepsilon$ \\
1& 1, 10, 20, 40 & thermal, non-thermal & 0.0, 0.015\\
\hline
\label{tab:bhoss_params}
\end{tabular}
\end{table}
\begin{table} [h!]
\centering
\renewcommand{\arraystretch}{1.2}
\setlength{\tabcolsep}{2.5pt}
\caption{Mean values and standard deviations for $\dot{M}$, $\Phi_{\rm BH}$, and $\psi=\Phi_{\rm BH}/\sqrt{\dot{M}}$, computed between 11\,000\,M and 12\,000\,M. Values for the Schwarzschild spacetime are taken from \cite{Fromm2021b}.}
\begin{tabular}{ l l lll }
\hline\hline
Metric & Model & $\langle\dot{M}\rangle$ & $\langle\Phi_{\rm BH}\rangle$ & $\langle\psi\rangle$ \\
\hline
Kerr & SANE & $6.5\pm0.8$ & $5.09\pm0.13$ & $2.00\pm0.11$ \\
Dilaton & SANE & $5.2\pm0.7$ & $2.42\pm0.07$ & $1.06\pm0.07$ \\
Schwarzschild & SANE & $0.36\pm0.03$ & $0.60\pm0.02$ & $0.99\pm0.05$ \\
Kerr & MAD & $2.2\pm0.5$ & $12.79\pm0.09$ & $8.6\pm0.7$ \\
Dilaton & MAD & $2.5\pm0.4$ & $12.15\pm0.13$ & $7.8\pm0.7$ \\
Schwarzschild & MAD & $5.0\pm0.7$ & $32\pm1$ & $14\pm1$ \\
\hline
\end{tabular}
\label{tab:grmhd_mean_stdev}
\end{table}
\section{Results}\label{sec:results}
\subsection{GRMHD simulations}\label{sec:grmhd_results}
The four Kerr/dilaton and SANE/MAD model configurations are evolved until 15,000\,M. Since the Kerr and dilaton black holes were matched to have the same ISCO, the overall dynamical behavior is quite similar. Past 10,000\,M (SANE) or 11,000\,M (MAD), the systems begin to saturate and finally enter a quasi-steady state. The analysis of magnetization, $\sigma$, plasma $\beta$, Lorentz factor, $\gamma$, and electron temperature, $\Theta_{\rm e}$, as well as all GRRT calculations, are therefore carried out on the interval between 11,000\,M and 12,000\,M, equivalent to an observation time of around six hours for Sgr\,A$^\ast$.
Time averages and corresponding standard deviations for $\dot{M}$, $\Phi_{\rm BH}$ and $\psi$ over this interval are listed in Table \ref{tab:grmhd_mean_stdev}.
Within the full evolution time of 15,000\,M, the MAD simulations approach the characteristic value $\psi\approx10$ for a MAD state \citep{Tchekhovskoy2011}. This is, however, consistent with $\psi_{\rm max}\approx 15$ for $a=0.9735$ \citep{Porth2019_etal}, since the Kerr black hole in this work is only moderately rotating.
Figures \ref{fig:SBG2Te_MAD} and \ref{fig:SBG2Te_SANE} show the time- and azimuthally averaged magnetization $\sigma$, plasma $\beta$ and electron temperature $\Theta_{\rm e}$ for two values of $R_{\rm high}$. The panels extend to 30\,$r_g$, corresponding to 150\,$\upmu$as in the GRRT images. We define the jet spine to be bounded by $\sigma=1$ and the jet sheath as the region where $0.1<\sigma<1$ and $Be<1.02$; a Bernoulli parameter $Be>1$ describes unbounded gas feeding the jet and wind outflows \citep{Moscibrodzka2013}.
In SANE, the torus in both spacetimes is weakly magnetized; this is a generic feature also present in the MAD simulation (panels a and e). The jet spine, however, is much more magnetized in Kerr than the corresponding regions in the dilaton system. For Kerr, in both SANE and MAD simulations, in agreement with the Blandford-Znajek mechanism, more magnetic flux accumulates near the horizon, and the black hole's rotation causes an almost evacuated but highly magnetized separation region between sheath and spine, especially apparent in the Lorentz factor (see below).
The Kerr system shows a much wider jet opening angle compared to the dilaton case. This can be seen at $|\,z\,|\,=30\,r_g$, where the outer edge of the sheath traced by $\sigma=0.1$ in the Kerr system extends out roughly twice as far in the $x$ direction (up to $15\,r_g$ for SANE and up to $21\,r_g$ for MAD) as in the dilaton system. The $Be=1.02$ contour line shows the same qualitative behavior.
In the SANE model, the most highly magnetized region, where $\sigma\geq5$, is confined to the innermost $\sim$$5\,r_g$ for the dilaton black hole, whereas in the Kerr case it stretches out five times as far. In the dilaton system, even the $\sigma=1$ line wraps around the central region at $15\,r_g$ from the black hole (panel e), while in the Kerr system it extends in a similar direction as $\sigma=0.1$, following along the jet wall.
For both spacetimes (and both SANE and MAD accretion models), the $Be\geq1.02$ region shows a uniform distribution of low plasma $\beta$ (panels b and f). In the dilaton torus (panel f), larger parts of the torus show higher values of $\beta$ near the mid-plane. Through Eq. \eqref{eq:tratio_beta}, this difference in the distribution of $\beta$ in the torus plays an important role for the source morphology in the GRRT images (see below and Sec. \ref{sec:GRRT_images}), where the proton temperature is expected to be greater than the electron temperature.
The Lorentz factor is generically low in the torus for both spacetimes. The aforementioned low-density separation region in the Kerr system is characterized by significantly higher Lorentz factors up to $\gamma\sim10$; this region is entirely absent in the dilaton system.
The MAD simulation further enhances the above-mentioned differences between the Kerr and dilaton spacetimes. Both Kerr and dilaton jet opening angles are wider (e.g. the $\sigma=0.1$ contour line), and now the dilaton black hole also shows a highly magnetized jet spine. However, in the now larger $Be\geq1.02$ region, plasma $\beta$ remains low. For both spacetimes, the $\beta$ distribution in the torus, on the other hand, shows much lower values compared to the SANE simulation.
The last two columns of Figs. \ref{fig:SBG2Te_MAD} and \ref{fig:SBG2Te_SANE} show the dimensionless electron temperature, $\Theta_{\rm e}$, for $R_{\rm high}=1$ and 40 for the SANE and MAD simulations. The evacuated separation region in the Kerr system appears as a particularly low electron temperature zone ($T_{\rm e}\sim T_{\rm p}$), extending $\sim$$10\,r_g$ outwards from the Kerr black hole in the SANE case. At low $R_{\rm high}$, the disk is filled with hot electrons ($\Theta_{\rm e}\sim10$). When $R_{\rm high}$ is increased, the electron temperature in the disk is decreased (compare e.g. panels c and d). The increase in $R_{\rm high}$ does not impact the temperature beyond $\sigma=0.1$ in the polar direction since we have fixed $R_{\rm low}=1$, for both SANE and MAD simulations of either spacetime.
While the MAD simulation enhances the low-temperature appearance of the separation region in Kerr, the dilaton system also begins to show signs of such a region (panels c and g). At $R_{\rm high}=40$, the transition between low and high-temperature regions is sharper compared to the SANE simulation. In MAD, the Kerr jet sheath is moderately hotter, while the dilaton sheath is significantly hotter ($\Theta_{\rm e}\sim30$) than it is in SANE ($\Theta_{\rm e}\sim10$).
\begin{figure*}
\centering
\subfloat{\includegraphics[width=0.9\textwidth]{figures/SB2T_KMa06-compressed}} \\\vspace{-13pt}
\subfloat{\hspace*{-0.055cm}\includegraphics[width=0.9\textwidth]{figures/SB2T_DMa06I-compressed}} \\
\caption{Magnetization $\sigma$, plasma $\beta$ and electron temperature $\Theta_{\rm e}$ at $R_{\rm high}=1$ and 40 for MAD simulations in Kerr and dilaton spacetimes. White dashed line: Bernoulli parameter $Be=1.02$. Annotated solid contour lines: levels of $\sigma$. The azimuthally averaged GRMHD data is shown time averaged over 1000\,M.} \label{fig:SBG2Te_MAD}
\end{figure*}
\begin{figure*}
\centering
\subfloat{\includegraphics[width=0.9\textwidth]{figures/SB2T_KSa06-compressed}} \\\vspace{-13pt}
\subfloat{\hspace*{-0.055cm}\includegraphics[width=0.9\textwidth]{figures/SB2T_DSa06I-compressed}} \\
\caption{Same as Fig. \ref{fig:SBG2Te_MAD}, but for the SANE simulation.} \label{fig:SBG2Te_SANE}
\end{figure*}
\subsection{GRRT images}\label{sec:GRRT_images}
Figures \ref{fig:edf_comp_SANE} and \ref{fig:edf_comp_MAD} show
time-averaged GRRT images for Kerr and dilaton black holes in SANE and
MAD simulations at 230\,GHz, with differences between electron
distribution functions in the rightmost column and differences between
the spacetimes at a given emission model in the bottom row. There is no
visual difference in source morphology whether non-thermal emission
is included or not for two reasons: one, the kappa model is applied only
in a narrow region in the jet sheath, and two, we fix the flux at 230\,GHz and the kappa
distribution shows its effects only at much higher energies (see Fig. \ref{fig:SED_SgrA}).
In the right column of Figs. \ref{fig:edf_comp_SANE} and \ref{fig:edf_comp_MAD}, the pixel-by-pixel
differences between two images with different distribution functions are shown
(the non-thermal image is subtracted from the thermal one). Intuitively, one may assume that the
jet should be brighter in the non-thermal images and the torus should be dimmer; yet, the opposite is
the case. This can be explained by the shapes of the electron distribution functions: moving from a
thermal to a non-thermal distribution, more electrons gain energy, shifting the maximum of the distribution
and leaving the energy level we observe in the images with a lower number of electrons.
For any combination of accretion model with Kerr or dilaton spacetimes,
the absolute difference between two corresponding pixels in
different electron distribution functions (eDFs) at 230\,GHz does not exceed
$5.5\,\upmu$Jy. Comparing the Kerr and dilaton spacetimes
to a Schwarzschild one yields only marginally larger differences. In the Kerr spacetime, higher total flux is produced by non-thermal particles in the jet accelerated
by the Blandford-Znajek mechanism. The total flux in the dilaton system is
lower than for corresponding Schwarzschild simulations. For details, see Appendix \ref{sec:schwarzschild}.
\begin{figure*}
\centering
\includegraphics[width=0.83\textwidth]{figures/SgrA_SANE_cc_a06_230GHz_Rh40_Kerr_Dila}
\caption{Kerr and dilaton GRRT images at $R_{\rm high}=40$ in the SANE simulation. The images are averages of 100 snapshots over 1\,000\,M simulation time ($\sim 6$\,hrs for Sgr\,A$^*$). This model configuration shows the largest difference between electron distribution functions (eDFs; panel f) in the dilaton spacetime within the given parameter space. The bottom row shows pixel-by-pixel differences between the two spacetimes at a given eDF.} \label{fig:edf_comp_SANE}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.83\textwidth]{figures/SgrA_MAD_cc_a06_230GHz_Rh40_Kerr_Dila}
\caption{Same as Fig. \ref{fig:edf_comp_SANE}, but for the MAD case.} \label{fig:edf_comp_MAD}
\end{figure*}
\subsubsection{SANE simulation}
For $R_{\rm high}<10$, the Kerr and dilaton black holes show a very similar source morphology (panels a and d on the left side of Fig. \ref{fig:GRRT_ISCO}). The structure is highly asymmetric, with nearly all the flux concentrated in the left half of the image. The region of peak emission (down to 80\% of the peak flux, almost white) traces the left edge of the shadow at around zero relative declination in both spacetimes. It extends a few tens of micro-arcseconds outwards and is enclosed by the 60\% region (orange-red), which is still confined to the approaching side for $R_{\rm high}=1$. The same holds for the 40\% emission regime (green), wrapping half-way around the shadow and stretching out in a thin veil across it. At $R_{\rm high}=1$, the total fluxes in Kerr and dilaton images are identical, regardless of the chosen electron distribution function.
When $R_{\rm high}$ is increased, the receding side in the dilaton spacetime begins to become more prominent, while in the Kerr system it stays faint. This is due to the Kerr black hole's ergosphere, where photons and matter are frame-dragged along with the spacetime in the direction of the black hole's rotation. This effect adds to the Doppler boosting (and thereby to the source asymmetry), brightening the approaching side and darkening the receding side. The dilaton black hole is non-rotating and therefore has no ergosphere; its asymmetry is caused purely by Doppler boosting.
Up until $R_{\rm high}=10$, the 60\% flux region in the dilaton torus stretches out much farther over the torus compared to the Kerr case, up to $\sim30\,\upmu$as from the left edge of the shadow (panels b and e in Fig. \ref{fig:GRRT_ISCO}). Nevertheless, the Kerr images are generically brighter than the dilaton images for $R_{\rm high}>1$ despite the fact that in the dilaton system more emission from the jet contributes to the total flux. This can be seen from the two rightmost columns in Fig. \ref{fig:SBG2Te_SANE}, where the $\sigma_{\rm cut}=1$ contour line traces a significantly larger region in the Kerr system from where we exclude all emission.
At $R_{\rm high}=20$, filamentary structures begin to stretch out from the torus in the north and south directions. This gives the torus in both spacetimes a fuzzy appearance that develops into a more clearly defined jet onset upon further increase in $R_{\rm high}$. While the thin veil of emission across the shadow is caused by the dominant torus for low $R_{\rm high}$, it traces the jet foot-point for higher values of $R_{\rm high}$. Since the electron temperature in the torus is decreased significantly in the jet onset-dominated images, the veil cannot be identified with a plasma feature in the torus anymore, but has to be located farther away from the shadow.
\subsubsection{MAD simulation}
At $R_{\rm high}=1$, the source morphology is comparable to the corresponding SANE case for both spacetimes since the emission from the disk is dominating and the electron temperature is similar in all cases, as we can observe in panels b and f. The overall source size is smaller in the dilaton system. This is consistent with Fig. \ref{fig:SBG2Te_MAD}, where panel a shows that in the MAD case the $\sigma=1$ contour line traces a much larger jet opening angle than it does in SANE (panel a in Fig. \ref{fig:SBG2Te_SANE}).
Moving to $R_{\rm high}=10$, the MAD images for both spacetimes reach a source morphology that remains unchanged for $R_{\rm high}>10$. Kerr and dilaton images show a very similar source structure, due to the distribution of electron temperature outside $\sigma=1$, which in MAD is more similar between spacetimes compared to the SANE case (compare panels c and d of Fig. \ref{fig:SBG2Te_MAD}). Albeit much smaller, the main difference apart from the shadow size is again the appearance of the receding side. In MAD, the receding side is almost as faint in the dilaton images as in the Kerr case (e.g. panels a and d on the right side of Fig. \ref{fig:GRRT_ISCO}).
In $R_{\rm high}\geq10$ images, the electron temperature in the torus is decreased and a thin arc spans across the shadow, tracing the jet foot-point. The jet base in the MAD simulation is wider than in the SANE case; this is explained by comparing panels a and d for the Kerr or for the dilaton black hole in Figs. \ref{fig:SBG2Te_MAD} and \ref{fig:SBG2Te_SANE} (see Sec. \ref{sec:grmhd_results}). The region of peak emission is mostly confined to a $\sim20\,\upmu$as\,$\times\,20\,\upmu$as patch located at the top left of the shadow.
Around $30\%$ of total emission is concentrated in the left half in all MAD images; while this was the case for the Kerr black hole in SANE at least for $R_{\rm high}\leq10$, for the dilaton system this is a stark contrast to the SANE simulation.
\subsubsection{Image comparison}
In order to gain a wider overview of the differences between the various models, we compute the $\rm L_{2}$ norm of the pixel-by-pixel differences between images of a given electron distribution function, but varying spacetime, accretion model and $R_{\rm high}$ parameter in the electron temperature model. Figure \ref{fig:carpet} depicts $\rm l_{2}=1-L_{2}$, such that on the diagonal the comparison between identical models yields the norm value $\rm l_{2}=1$ (red fields). Since the plot is symmetric, we only show the upper triangle for clarity. Throughout the parameter space, a common feature is that $R_{\rm high}=1$ images show a high degree of similarity, which is consistent with plasma properties known from GRMHD. Overall, comparisons of SANE models show larger differences (yellow fields) to other SANE models than comparisons within the MAD parameter space. Generally, the largest differences appear for combinations with different $R_{\rm high}$. Comparing the upper left and lower right quadrants to the upper right one, it is evident that the differences among combinations of models with mixed spacetime, accretion model and electron temperature are not clearly distinguishable from comparisons between a Kerr and a dilaton model with fixed accretion and emission model and either a thermal or a non-thermal eDF.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/L2_map_GRRT_230}
\caption{$\rm L_{2}$ norm of the pixel-by-pixel differences between images of given models. The upper left and lower right quadrants show comparisons of spacetime-$R_{\rm high}$ combinations in SANE and MAD, respectively. The upper right quadrant shows comparisons also for different accretion models. The labels are abbreviated as ``K'': Kerr, ``D'': dilaton, ``S'': Schwarzschild, and $R_{\rm high}=1,40$.} \label{fig:carpet}
\end{figure}
\subsection{Spectral analysis}
Figure \ref{fig:SED_SgrA} shows thermal and non-thermal model broad-band spectra for the SANE and MAD cases, together with observational total flux measurements (see Table \ref{tab:obsSED} for details). Regardless of background spacetime, accretion model and eDF, the $R_{\rm high}=1$ spectra behave strikingly differently from all other models due to the ``cooling'' effect of non-thermal particles in the jet introduced by the R-$\beta$ model, i.e. we see purely thermal emission from the accretion disk. While they appear to be the best fit to observational data for frequencies below the $R_{\rm high}=1$ turnover of about 80\,GHz, determined by the low magnetic field in the accretion disk (see Eq. \ref{eq:turnoverfreq}), they deviate greatly from observations for $\nu\geq230$\,GHz and are therefore excluded from further discussion.
To complete our comparison, we include Schwarzschild simulations for $R_{\rm high}=40$. In SANE, the corresponding SEDs are comparable to $R_{\rm high}=10$ SEDs of both Kerr and dilaton spacetimes, while in MAD they lie distinctly in between $R_{\rm high}=1$ and $R_{\rm high}>1$ SEDs of Kerr and dilaton curves. For a more detailed discussion on the Schwarzschild simulations, see Appendix \ref{sec:schwarzschild}.
Before comparing the Kerr to the dilaton spectra, it is important to
understand the general behavior of the Spectral Energy Distribution (SED)
under changes of $R_{\rm high}$ and the eDF. In the weakly magnetized
disk, high values of plasma-$\beta$ lead to an inverse dependence of the
electron temperature on $R_{\rm high}$, i.e. $\Theta_{\rm e}\sim R_{\rm
high}^{-1}$ (see Eqs. \ref{eq:tratio_beta} and
\ref{eq:dim_less_T_e}). Applying a kappa eDF in the jet sheath as
described above, an additional high energy contribution in the form of a
power law tail is introduced in the SED, in contrast to the exponential
decay of the thermal models. If we now use the aforementioned dependence
of the electron temperature on $R_{\rm high}$ in the expressions for
thermal and kappa synchrotron emissivity $j_{\nu,\rm tot}$ taken from
\cite{Pandya2016}, we can write the high energy part of the SED as
\citep{Fromm2021b}
\begin{equation}
j_{\nu,\rm tot}\propto\exp\left(-R_{\rm high}^{2/3}\nu^{1/3}\right)+\nu^{-(\kappa-2)/2} \left[1/R_{\rm high} + \varepsilon\sigma\right]^{\kappa-2},
\label{eq:jnukapprox}
\end{equation}
where $\sigma$ is again the magnetization and $\varepsilon$ sets the magnetic contribution to the energy of the electrons. The first term in the above equation describes the thermal contribution to the total emission and the steepening of the high energy part of the SED with increasing $R_{\rm high}$ (most prominent in the near infrared, $\nu\gtrsim1.36\times10^{14}$\,Hz). On the other hand, the second term adds the non-thermal electrons in the jet wall and decreases the dependence on $R_{\rm high}$ of the high frequency ($\nu\gtrsim2\times10^5$\,GHz) spectrum compared to the purely thermal case (see top row of Fig. \ref{fig:SED_SgrA}). The lower energy part of the SED is governed by the jet, characterized by low to intermediate plasma-$\beta$, where the electron temperature is effectively independent of $R_{\rm high}$.
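The weakening of the $R_{\rm high}$ dependence by the non-thermal term can be checked numerically. The following sketch evaluates Eq. \eqref{eq:jnukapprox} purely as a scaling relation, with all proportionality constants set to unity and parameter names of our choosing:

```python
import math

def j_nu_scaling(nu, r_high, kappa=3.5, eps=0.0, sigma=1.0):
    """Scaling form of the total emissivity: thermal exponential cutoff
    plus the kappa power-law contribution from the jet wall."""
    thermal = math.exp(-r_high**(2.0 / 3.0) * nu**(1.0 / 3.0))
    nonthermal = (nu**(-(kappa - 2.0) / 2.0)
                  * (1.0 / r_high + eps * sigma)**(kappa - 2.0))
    return thermal + nonthermal
```

At high frequencies the thermal term is negligible; with $\varepsilon=0$ the ratio of emissivities for $R_{\rm high}=10$ and 40 is $(40/10)^{\kappa-2}=8$ for $\kappa=3.5$, while a non-zero $\varepsilon$ reduces this ratio, i.e. flattens the dependence on $R_{\rm high}$.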
Lastly, the turnover position is affected by the choice of $R_{\rm high}$, as well as by the mass accretion rate and the magnetic field strength
\citep[see e.g.][and \citealt{Fromm2021b} for details]{Zdziarski1998}:
\begin{linenomath}\begin{equation}
\nu_{\rm t,th}\propto B_{\rm code}\sqrt{\dot{m}}/R_{\rm high}^2.
\label{eq:turnoverfreq}
\end{equation}\end{linenomath}
The above equation explains the shifts in turnover with increasing $R_{\rm high}$ in the SANE models, and the reduced shift in the MAD cases where the magnetic field is much stronger and the mass accretion rate is smaller.
In SANE, for $\nu\leq230$\,GHz the total flux slightly increases with $R_{\rm high}$ (top left panel of Fig. \ref{fig:SED_SgrA}). As explained above, an increase in $R_{\rm high}$ decreases the electron temperature and hence the emission from the disk, making the jet comparatively brighter when the total 230\,GHz flux of the average image is fixed to the same value. Regardless of $R_{\rm high}$ and eDF, the dilaton SEDs turn over at higher frequencies with higher total fluxes, and the difference in turnover position between spacetimes increases with $R_{\rm high}$. For example, for a thermal distribution at $R_{\rm high}=10$ the turnover happens at $\sim4\times10^{11}$\,Hz for Kerr and at $\sim5\times10^{11}$\,Hz for the dilaton case, whereas for $R_{\rm high}=40$ the turnovers move to $\sim5.5\times10^{11}$\,Hz and $\sim7.5\times10^{11}$\,Hz, respectively. Past the turnover, in the thermal case the dilaton SEDs remain steeper than their Kerr counterparts throughout the rest of the frequency range. Above a certain frequency between $2\times10^{12}$ and $8\times10^{12}$\,Hz, the total flux of the dilaton models drops below that of the corresponding Kerr SEDs. In the near infrared, around $1.36\times10^{14}$\,Hz, the differences in total flux between spacetimes can reach an order of magnitude.
Non-thermal emission, despite being applied solely in the jet, introduces the characteristic power-law tail in the near infrared (NIR) for all $R_{\rm high}\geq10$ SEDs at $\nu\gtrsim10^{14}$\,Hz \citep[see][]{Cruz-Osorio2021b}. For a given spacetime, the already small dependence on $R_{\rm high}$ for $R_{\rm high}\geq20$ is decreased even further in the NIR. The flattening due to the non-thermal contribution is slightly more pronounced for the dilaton SEDs, so that for $R_{\rm high}\geq20$ the Kerr and dilaton SEDs lie close together, with the Kerr SEDs being steeper past $1.36\times10^{14}$\,Hz. Figure \ref{fig:alphaNIR_MAD} visualizes this observation: thermal SEDs show NIR spectral indices of $\alpha_{\rm NIR}\sim-2.3$, while non-thermal models are flatter, with $\alpha_{\rm NIR}\sim-2.0$, consistently for $R_{\rm high}\geq10$ and either spacetime. Thermal dilaton SEDs are steeper, and non-thermal ones slightly flatter, than their respective Kerr counterparts. The $R_{\rm high}=10$ curves are separated by more than an order of magnitude from the other SEDs, and show the same separation between Kerr and dilaton SEDs (yellow solid and dashed lines in the top right panel).\\
In thermal MAD SEDs, the dependence on both $R_{\rm high}$ and the background spacetime is much weaker than in the SANE case, especially between the turnover positions and the NIR (bottom left panel in Fig. \ref{fig:SED_SgrA}). The turnovers of the Kerr and dilaton SEDs are much closer together and show much more similar total fluxes in the MAD models. With increasing $R_{\rm high}$, the high-energy part shows the expected steepening. In contrast to the SANE models, the $R_{\rm high}=10$ curves are no longer clearly separated from the other SEDs. Introducing non-thermal emission, the dilaton SEDs again flatten more than the respective Kerr models, leading to a clearer separation of the SEDs of different spacetimes (bottom right panel in Fig. \ref{fig:SED_SgrA}). The NIR spectral index is plotted in Fig. \ref{fig:alphaNIR_MAD}, indicating steeper thermal dilaton SEDs (compared to Kerr), but much flatter non-thermal ones. This behavior, albeit weaker, is also present in the SANE models as described above.\\
The observed NIR flux and spectral index of Sgr\,A$^\ast$ are highly variable \citep[e.\,g.][]{Witzel2014,Witzel2018}. In a bright or flare state, $\alpha_{\rm NIR}=-0.6\pm0.2$ was determined from synchronous observations at $8.102\times10^{13}$\,Hz and $1.874\times10^{14}$\,Hz \citep{Hornstein2007,Witzel2014}. The NIR spectral indices calculated from the Kerr and dilaton spectra (Fig. \ref{fig:alphaNIR_MAD}) are clearly inconsistent with such a flat value. They indicate a quiescent state of the systems, regardless of background spacetime, accretion model or emission model (universally $\alpha_{\rm NIR}\lesssim-1.50$). The spectral indices are consistent with dim-state measurements giving $\alpha_{\rm NIR}=-1.7\pm0.4$ \citep{Gillessen2006} and $\alpha_{\rm NIR}=-1.64\pm0.06$ \citep{Witzel2018}, or even steeper values \citep[see e.g.][for details]{Witzel2014}.
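The NIR spectral indices quoted here follow the usual two-point convention $F_\nu\propto\nu^{\alpha}$ between the two observing frequencies. A minimal sketch (the flux values below are arbitrary placeholders, not measured data):

```python
import math

def spectral_index(nu1, f1, nu2, f2):
    """Two-point spectral index alpha, with the convention F_nu ~ nu^alpha."""
    return math.log(f2 / f1) / math.log(nu2 / nu1)

# The two NIR frequencies quoted in the text; the fluxes are illustrative
# placeholders chosen so that the recovered index is a quiescent-state value.
nu_lo, nu_hi = 8.102e13, 1.874e14
alpha = spectral_index(nu_lo, 1.0, nu_hi, (nu_hi / nu_lo) ** -1.64)
print(round(alpha, 2))  # -1.64
```

A flare-state pair of fluxes would instead yield a flatter value around $-0.6$, illustrating why the steep model indices point to a quiescent state.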
In terms of total NIR flux, the SANE SEDs fit the observational data better than the MAD ones. More precisely, the top row of Fig. \ref{fig:SED_SgrA} shows that the Kerr $R_{\rm high}=10$ images for either electron distribution function match well the $1.36\times10^{14}$\,Hz flux reported by \cite{Gravity2020c} (bright pink point on the 136\,THz line indicated in each plot). Among the thermal SEDs, the $R_{\rm high}\geq20$ dilaton SEDs not only match various $1.36\times10^{14}$\,Hz measurements, but also those taken around the $3.0\times10^{13}$\,Hz mark. The corresponding non-thermal dilaton SEDs tend to slightly overshoot the NIR observations. While around $3.0\times10^{13}$\,Hz the MAD SEDs match the data well, they collectively overshoot the NIR flux.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/SED_SgrA_NoRef_Schw}
\caption{Time-averaged spectral energy distributions for SANE/MAD and thermal/non-thermal models. For all non-thermal models, $\varepsilon=0.015$. Over-plotted: observational data (see Table \ref{tab:obsSED}).}
\label{fig:SED_SgrA}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{figures/alphaNIR_SANE}
\includegraphics[width=0.45\textwidth]{figures/alphaNIR_MAD}
\caption{Near infrared spectral indices for the SANE (left panel) and MAD (right panel) simulations, obtained from the time-averaged spectra (Fig. \ref{fig:SED_SgrA}).}
\label{fig:alphaNIR_MAD}
\end{figure*}
\section{Summary and Discussion}\label{sec:summary}
In this work, we have investigated the possibility of distinguishing between different spacetimes by means of simulations and observations of black holes, focusing on the role of different emission models. To this end, we first carried out SANE and MAD GRMHD simulations of a Kerr and a dilaton black hole in full 3D. In radiative post-processing, we parametrized the electron temperature and studied the effect of the R$-\beta$ parametrization on GRRT images obtained with a purely thermal electron energy distribution. We subsequently repeated this process with a non-thermal kappa electron distribution applied in the jet wall, with an optional contribution of magnetic energy. For each model, we fixed the mass accretion rate so as to fit the flux of an average GRRT image to 2.5\,Jy at 230\,GHz. Further, we computed synchrotron SEDs and near-infrared spectral indices for all models, and for each emission model we computed image comparison metrics between Kerr and dilaton images to quantify their differences.
\subsection{GRMHD simulations}
The goal of this work is a theoretical comparison of two background spacetimes, with the comparison to observational data playing only a minor role. Therefore, the spin of the Kerr black hole is fixed to $a=0.6$, and in the process the dilaton parameter is fixed to $\hat{b}=0.5$ by matching the black holes at their ISCO. Once both systems have entered a quasi-steady state in their evolution (past 10,000\,M), the Kerr black hole shows wider jet opening angles in both SANE and MAD simulations compared to the dilaton spacetime, as well as a more highly magnetized jet. Overall, the two systems behave rather similarly in either accretion model. In the MAD simulations, neither system fully reaches the ``MAD state'' characterized by $\psi>10$ \citep{Tchekhovskoy2011}.
\subsection{GRRT calculations}
\subsubsection{Spectral analysis}
From the multi-wavelength GRRT calculations, we generate time-averaged broad-band spectra and 230\,GHz images. We employ the R$-\beta$ parametrization, choosing $R_{\rm high}\in\{1,10,20,40\}$. In the next step, we apply a non-thermal kappa electron energy distribution function in the jet wall, without and with an additional magnetic contribution, described by $\varepsilon=0$ and 0.015, respectively. In the spectra, the accretion model affects the position of the turnover point, and non-thermal emission introduces a power-law tail in the near infrared, in contrast to the steep decrease in the thermal case. The dependence of the spectra of a given emission model on the background spacetime is much more prominent in the SANE case, where for a given frequency the total flux can differ by an order of magnitude between spacetimes. In the thermal case, an increase of $R_{\rm high}$ steepens the high-energy part of the spectrum; with non-thermal emission, for $R_{\rm high}\geq20$ the dependence on the spacetime also decreases significantly. Up until the near-infrared, the MAD spectra are almost independent of both spacetime and emission model. In the non-thermal case, the NIR spectral indices of the Kerr and dilaton black hole systems are $\gtrsim0.25$ apart; for the thermal models, there is a clear difference only for $R_{\rm high}\geq20$ (see Fig. \ref{fig:alphaNIR_MAD}). While the reported differences in spectral indices are potentially observable features, the spectra are universally steeper than recent observations indicate, with observed NIR spectral indices of Sgr\,A$^\ast$ ranging from $-0.6$ to $-1.64$ \citep{Witzel2018}.
In order to better match observations, the target flux of the average image and the contribution of magnetic energy of the kappa electron distribution may be increased in a follow-up study. Likewise, $\sigma_{\rm cut}$ can be increased to modify the extent and position of the jet sheath.
\subsubsection{GRRT images}
The R$-\beta$ parametrization splits the source morphology into torus-dominated ($R_{\rm high}\lesssim10$) and jet-dominated ($R_{\rm high}\gtrsim20$) images. In SANE, the transition is rather smooth and takes place between $R_{\rm high}=10$ and 20, for both Kerr and dilaton black holes. The wider jet opening angle of the Kerr system seen in the GRMHD simulations carries over to the jet-dominated images, and due to increased Doppler boosting from the rotation of the Kerr black hole the receding side is particularly faint. Since the dilaton black hole is non-rotating, its receding side is more prominent. Its filamentary low-flux features are fuzzier than in the Kerr images, and its shadow is smaller due to the ISCO match.
The transition from torus- to jet-dominated images is rather abrupt for the MAD models: the source morphology converges before $R_{\rm high}=10$ and stays the same for all $R_{\rm high}\geq10$. The jet opening angle is more similar between spacetimes in the MAD images, and apart from the shadow size the source morphologies of the Kerr and dilaton systems are visually identical.
\subsection{Differently matched black holes}\label{seq:matchings}
This study focused on the special case of Kerr and dilaton black holes matched at their ISCO, concluding that distinguishing between spacetimes is still challenging even in a non-observational framework. To reinforce our argument, we consider SANE simulations of the dilaton black hole matched to the Kerr one at its unstable circular photon orbit, and at its event horizon (see Appendix \ref{sec:matchings_app}). As for the ISCO case, the dilaton characteristic radii are always matched to the equatorial equivalent in the Kerr spacetime. We summarize below the size of the matched radii and their apparent size on the sky, scaled to the mass and distance of Sgr\,A$^\ast$.
\begin{linenomath}\begin{align}
&{\rm I.} & \ \ r_{\rm ISCO}^{\rm dilaton}&=r_{\rm ISCO,\ equatorial}^{\rm Kerr}&\approx 3.83\,M &\approx 19.17\, {\rm \upmu as} \\
&{\rm II.} & \ \ r_{\rm PH}^{\rm dilaton}&=r_{\rm PH,\ equatorial}^{\rm Kerr}&\approx 2.19\,M &\approx 10.96\, {\rm \upmu as}\\
&{\rm III.} & \ \ r_{\rm EH}^{\rm dilaton}&=r_{\rm EH,\ equatorial}^{\rm Kerr}&\approx 1.80\,M &\approx 9.01\, {\rm \upmu as}
\end{align}\end{linenomath}
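The apparent sizes above follow from the standard small-angle conversion $\theta = r\,GM/(c^2 D)$. A sketch with assumed values $M\simeq4.14\times10^6\,M_\odot$ and $D\simeq8.1$\,kpc for Sgr\,A$^\ast$ (close to, though not necessarily identical to, the values adopted in this work):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m s^-1
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec, m
RAD_TO_MUAS = 180.0 / 3.141592653589793 * 3600.0 * 1e6

def apparent_size_muas(r_in_M, mass_msun=4.14e6, dist_kpc=8.1):
    """Angular size on the sky of a radius given in units of M = GM/c^2."""
    r_g = G * mass_msun * M_SUN / C**2          # gravitational radius, m
    return r_in_M * r_g / (dist_kpc * KPC) * RAD_TO_MUAS

print(round(apparent_size_muas(3.83), 1))  # 19.3, cf. 19.17 muas (case I)
print(round(apparent_size_muas(1.80), 1))  # 9.1, cf. 9.01 muas (case III)
```

The residual differences at the per-cent level simply reflect the assumed mass and distance; one gravitational radius subtends roughly 5\,$\upmu$as for Sgr\,A$^\ast$.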
From Figs. \ref{fig:GRRT_ISCO} and \ref{fig:GRRT_PO_EH} it is evident that moving away from a match at the ISCO, the ability to distinguish between spacetimes decreases further. For photon orbit and event horizon matches, the dilaton source morphology becomes more similar to that in the Kerr system for any value of $R_{\rm high}$.
Even if the images differ in terms of plasma features, such differences may equally well have astrophysical rather than gravitational causes (that is, when one compares two images without prior knowledge of the background metrics). Comparing Figs. \ref{fig:GRRT_ISCO} and \ref{fig:GRRT_PO_EH}, it is apparent that choosing a different matching case has only a minuscule effect on the visual appearance of the shadow, while the dilaton source morphology becomes progressively more similar to the Kerr one as the matching radius decreases. In the photon orbit and event horizon matchings, the spacetimes are hence even harder to distinguish.
\subsection{Limitations of the models}
In this study, we investigate only a small fraction of the available parameter space.
For instance, comparisons to rotating dilaton black holes would be a valuable addition to this study. Further, magnetic field geometries and the initial conditions of the torus can affect the evolution of the black hole system \citep{Cruz2020}. Finally, our simulations only evolve the dynamically important protons, thereby neglecting the effects of electron heating mechanisms \citep[e.\,g.][]{Chael2018,Mizuno2021}, radiative feedback, and resistivity of the plasma \citep[see, e.\,g.][]{Ripperda2020}.
In the GRRT calculations, the R$-\beta$ parametrization emulates electron heating processes in the vicinity of black holes \citep{Mizuno2021}. Alternative prescriptions for the electron temperature \citep{Anantua2020} could alter the source structure considerably. When employing the kappa electron distribution function, the inner boundary of the jet wall can be modified to change the size of the region containing accelerated electrons. In the same vein, the magnetic contribution to the distribution function could enable us to better match observed NIR spectral indices. The inclination can further enhance the prominence of the jet in the GRRT images. Lastly, including polarization in the GRRT calculations would enable us to map the magnetic field geometry in the images. These combined effects and extensions to the study could increase the chances of distinguishing between two spacetimes.
This work is a phenomenological approach to the goal of testing general relativity in an imaging framework comparing exemplary models. A model-independent approach through feature extraction from the images, such as fitting crescent or ring models to images and visibilities and analyzing emission profiles, could help us to better quantify differences between spacetimes.
\section{Conclusion}\label{sec:conclusion}
Combining the results from GRMHD and GRRT simulations, we conclude that it is still challenging to distinguish black holes characterized by different background metrics, at least in the case of the dilaton metric. The overall behavior of the GRMHD simulations is very similar in the MAD case, even more so than in the SANE simulations, due to the matching at the ISCO. From the GRRT images, we see that the accretion and emission models have a much larger impact on the source morphology than the underlying spacetime does. The $R_{\rm high}$ parameter alone changes the source morphology drastically from torus to jet dominated; this transition is smooth in SANE, but takes place abruptly in MAD for some $R_{\rm high}<10$. The prominent, potentially observable differences between spacetimes in the GRRT images can be summarized as follows:
\begin{itemize}
\item The jet opening angle is wider in the Kerr spacetime;
\item The receding side of the torus is fainter in Kerr due to increased Doppler boosting;
\item The Kerr shadow is larger than the dilaton shadow due to the ISCO match.
\end{itemize}
From the spectra, the differences between spacetimes in near-infrared spectral index and total flux potentially suffer from degeneracy between accretion model, emission model and spacetime. It is questionable whether even fitting the whole spectrum to observational data would enable us to distinguish between spacetimes.
Including a Schwarzschild black hole in our investigation shows that its differences in image space from the Kerr and dilaton black holes are larger than those between the latter two spacetimes, but overall remain small. Judging from the comparison metrics (see Fig. \ref{fig:carpet}), the Schwarzschild metric is indistinguishable from the other considered models.
\begin{acknowledgements}
We thank Dr. G. Witzel and Dr. N. MacDonald for their role as internal referees at the MPIfR and for helpful discussions and comments, as well as Dr. P. Kocherlakota for his perspective and comments.
JR received financial support for this research from the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics at the Universities of Bonn and Cologne. This research is supported by the European Research Council for advanced grant ``JETSET: Launching, propagation and emission of relativistic jets from binary mergers and across mass scales'' (Grant No. 884631) and European Horizon Europe staff exchange (SE)
programme HORIZON-MSCA-2021-SE-01 Grant No. NewFunFiCO-101086251.
CMF is supported by the DFG research grant ``Jet physics on horizon scales and beyond'' (Grant No. FR 4069/2-1). ZY is supported by a UKRI Stephen Hawking Fellowship and acknowledges support from a Leverhulme Trust Early Career Fellowship. YM acknowledges the support by the National Natural Science Foundation of China (Grant No. 12273022). The simulations were performed on GOETHE-HLR LOEWE at the CSC-Frankfurt, Iboga at ITP Frankfurt, and SuperMUC-NG in Garching.
\end{acknowledgements}
{\noindent\tiny {\it Software.} {\tt BHAC}\footnote{\href{https://bhac.science/}{https://bhac.science/}} \citep{Porth2017}, {\tt BHOSS} \citep{Younsi2020}}
\bibliographystyle{aa}
\label{sect:introduction}
{}From its inception, renormalization theory in perturbative quantum
field theory (pQFT) had a combinatorial flavour, as well as an
analytic one. The former manifests itself in the self-similarity of
Feynman graphs, the building blocks of pQFT. The intricate
combinatorics of extracting and combining subgraphs, required in the
renormalization process, is encoded in the Bogoliubov recursion and in
its solution via Zimmermann's forest
formula~\cite{CaswellK1982,collins1984,Lowen1975,Vasilev04,Zimmermann}.
Kreimer's discovery of a Hopf algebra structure underlying Bogoliubov
and Zimmermann's formulae and illuminating their internal
structure~\cite{kreimer1998} was the starting point of a new approach
in the field. Then the Connes--Kreimer decomposition \`{a} la
Birkhoff--Wiener--Hopf (BWH) of Feynman rules~\cite{ck2000} captured
the process of renormalization in pQFT in the framework of dimensional
regularization (DR) with the minimal subtraction (MS) scheme. Further
work by Connes, Kreimer and others has since then established various
links between the BWH decomposition of characters in renormalizable
quantum field theories and relevant mathematical topics, culminating
recently in work by Connes and Marcolli on motivic Galois
theory~\cite{cm22004,cm2006} and by Bloch, Esnault and Kreimer on the
motivic nature of primitive Feynman graphs~\cite{BEK2005}.
In the present work, largely motivated by~\cite{cm22004}, we return to
the origin of the Connes--Kreimer theory and concentrate on algebraic
features of renormalization relevant to pQFT, trying to unravel
further fundamental properties of renormalization schemes by methods
inspired on the classical theory of free Lie algebras (FLAs).
It has been known since the mid-nineties that many properties of~FLAs,
as exposed e.g. in~\cite{OldNiko89,reutenauer1993}, can be lifted to
general graded Lie algebras and their enveloping algebras. In other
terms Lie theory is relevant to the study of arbitrary graded
connected cocommutative or commutative Hopf algebras. In particular,
the Solomon algebras of type $A_n$~\cite[Chap.~9]{reutenauer1993} act
naturally on arbitrary graded connected commutative Hopf
algebras~\cite[Thm.~II.7]{patras1994}.
The observation applies to the Hopf algebras of renormalization,
yet it has not received the attention it deserves. Here we
develop it systematically, considering abstract renormalization
Hopf algebras~$H$ and commutative target algebras~$A$ of quantum
amplitudes endowed with a Rota--Baxter operator. We show that
some of the deepest combinatorial properties of renormalization
schemes are codified in the composition with the Dynkin idempotent
of Lie theory, and in its inverse map. We derive in particular
from their study the properties of characters under locality
assumptions for counterterms in renormalization and, in the
process, establish that the data relevant to their computation are
contained in the ``beta function''. The phenomenon is well known
in pQFT; the Lie theoretic approach, however, provides a
remarkably efficient way to deduce it from the locality
assumptions.
Furthermore, the direct sum of Solomon algebras (the
\textit{descent algebra}) is naturally provided with a graded
connected cocommutative Hopf algebra structure; the corresponding
pro-unipotent group is naturally isomorphic to the universal
group~$U$ of the Connes--Marcolli Galois theory of
renormalization. This isomorphism paves the way to an algebraic
and combinatorial approach to the later theory.
Some advantages of our method are: (i)~It appears to be independent of
the DR with MS prescription, applying in principle to any
renormalization procedure which can be formulated in terms of a
Rota--Baxter structure; (ii)~Use of the Dynkin map explains naturally
the coefficients of the universal singular frame of~\cite{cm22004}
---the same as in the Connes--Moscovici index formula~\cite{gafa95}
for the case of simple dimension spectrum.
The article is organized as follows. After settling some notation in
the next section, we ponder in Section~\ref{sect:HopfCharacters} the
convolution algebra of linear maps~$\Lin(H,A)$. It cannot be made
into a Hopf algebra in general; but a suitable algebra of
\textit{characteristic functions} can. This is our playground; it
encodes, at the Hopf algebra level, the properties of the group of
$A$-valued characters of $H$. Section~\ref{sect:dynkinOperator} is
the heart of the paper: starting from a short survey on the Dynkin
idempotent for cocommutative Hopf algebras, we establish the formal
properties of its sibling in the commutative case, then introduce and
exhaustively study the inverse Dynkin map. In particular, we show
that the latter bijectively sends infinitesimal characters into
characters of commutative connected Hopf algebras ---applying in
particular to the Hopf algebras of Feynman diagrams and rooted trees
of renormalization theory and the corresponding Feynman rules. In
Section~\ref{sect:birkhoffdecomp} we recall the BWH decomposition of
characters, choosing once again to obtain it algebraically from
Rota--Baxter operator theory and the `Baker--Campbell--Hausdorff (BCH)
recursion'. After that, our Lie theoretic machine is fully
operational.
In the rest of the paper, we show the relevance of that machine to
pQFT. In Section~\ref{sect:reminder} we briefly review some
standard lore of renormalization and remind the reader of the
dictionary between it and the Connes--Kreimer paradigm. Next we
study in Section~\ref{sect:locality} the locality properties for
dimensional regularization (DR) schemes by exploiting the
properties of the Dynkin pair of maps, together with the BWH
decomposition. The main results concerning the Connes--Kreimer
beta function and the renormalization group (RG) in DR are
rederived from direct computations in the algebra of
characteristic functions; hopefully, the role of that beta
function is thereby illuminated. Sections~\ref{sect:DSlocality}
and~\ref{sect:BPHZlocality} are essays on the same question in
other renormalization frameworks; in the second we invoke the BPHZ
scheme of renormalization and exhibit the underlying Rota--Baxter
algebra structure, exemplifying with the
(Ginzburg--Landau--Wilson) $\vf^4_4$~scalar model in Euclidean
field theory.
To finish, in Section~\ref{sect:cosmic} we go back to the mathematical
setting, trying to place our results in the `great scheme of things'
of combinatorial Hopf algebra theory. We show there how the
Connes--Marcolli ``motivic Galois group'' of renormalization relates
with FLAs as well as the theory of descent algebras. Together with
the links between the same group and Connes--Moscovici's index theorem
in noncommutative geometry, these new connections give further
evidence for Connes' and Kreimer's ---already much documented--- claim
that the divergences of pQFT do reveal the presence of deep
mathematical structures.
\section{Notational conventions}
\label{sect:convs}
Let $H=\bigoplus_{n=0}^{\infty}H_n$ be a graded connected commutative
Hopf algebra (of finite type) over a field~$k$ of characteristic zero;
this is necessarily free as a commutative
algebra~\cite[Prop.~4.3]{patras1994}. We write~$\epsilon$ for the
augmentation from~$H$ to~$H_0=k\subset H$ and~$H^+$ for the
augmentation ideal $\bigoplus_{n=1}^{\infty} H_n$ of~$H$. The
identity map of~$H$ is denoted~$I$. The product in~$H$ is
written~$\pi$ or simply by concatenation. The coproduct is
written~$\delta$; we use Sweedler's notation and write $h^{(1)}\otimes
h^{(2)}$ for~$\delta (h),\,h\in H_n$; or $\sum_{i=0}^n
h_i^{(1)}\otimes h_{n-i}^{(2)}$ when the grading has to be taken into
account. The usual restrictions apply, that is, $h^{(1)}\otimes
h^{(2)}$ stands for a sum $\sum_{j\in J}h_j^{(1)}\otimes h_j^{(2)}$
and should not be interpreted as the tensor product of two elements
of~$H$. The same convention applies in forthcoming notation such as
$h^{(1)}\otimes g^{(1)}\otimes h^{(2)}\otimes g^{(2)}$, that should be
understood as $\sum_{j\in J}\sum_{k\in K}h_j^{(1)}\otimes
g_k^{(1)}\otimes h_j^{(2)}\otimes g_k^{(2)}$, where $\delta
(g)=\sum_{k\in K}g_k^{(1)}\otimes g_k^{(2)}$.
Graduation phenomena are essential for all our forthcoming
computations, since in the examples of physical interest they
incorporate information such as the number of loops (or vertices) in
Feynman graphs, and the subdivergence structure, relevant for the~RG.
They are expressed by the action of the grading operator $Y:H\to H$,
given by:
$$
Y(h) = \sum\limits_{n\in\N} n h_n \sepword{for} h =
\sum\limits_{n\in\N}h_n\in\bigoplus\limits_{n\in\N} H_n.
$$
We write $|h_n|:=n$. Notice that $Y$ is usually denoted by~$D$ in the
FLA literature. In the present article we stick to the notation most
often used in the context of the Hopf algebra approach to
renormalization~\cite{ck1998,ck2000,ck2001,cm22004,ek2005,fg2005} and
reserve~$D$ for the Dynkin operator.
\section{The Hopf algebra of characteristic functions}
\label{sect:HopfCharacters}
A \textit{character} is a map $\gamma$ of unital algebras
from~$H$ to the base field $k$:
$$
\gamma (hh') = \gamma(h)\gamma(h').
$$
It should be clear that the product on the right hand side is the one
in~$k$. We write $\gamma_n$ for the restriction of $\gamma$ to a map
from~$H_n$ to~$k$.
Let $A$ be a commutative $k$-algebra, with unit~$1_A=\eta_A(1)$,
$\eta_A:k\to A$ and with product~$\pi_A$, which we sometimes denote by
a dot: $\pi_A(u\otimes v)=:u\cdot v$. The main examples we have in
mind are $A=\CC,\,A=\CC[[\varepsilon,\varepsilon^{-1}]$ and~$A=H$. We
extend now the definition of characters and call an ($A$-valued)
character of $H$ any algebra map from~$H$ to~$A$. In particular
$H$-valued characters are simply algebra endomorphisms of $H$.
An \textit{infinitesimal character} is a linear map~$\alpha$
from~$H$ to~$k$ such that:
$$
\alpha (h h') = \alpha(h) \epsilon(h') + \epsilon (h) \alpha (h').
$$
As for characters, we write $\alpha(h) = \sum_{n\in\N} \alpha_n
(h_n)$. The same notational convention will be used in the sequel
without further notice: $f_n$ stands for the restriction of an
arbitrary map~$f$ on~$H$ to $H_n$. We remark that by the very
definition of characters and infinitesimal characters
$\gamma_0(1_H)=1$, that is $\gamma_0 = \epsilon$, whereas
$\alpha_0(1_H)=0$.
We extend as well in the obvious way the notion of infinitesimal
characters to maps from~$H$ to a commutative $k$-algebra $A$. We
have now:
$$
\alpha(hh') = \alpha(h) \cdot e(h') + e(h) \cdot \alpha (h'),
$$
where $e:=\eta_A\circ\epsilon$. They can be alternatively defined
as $k$-linear maps from $H$ to $A$ with $\alpha_0=0$ that vanish
on the square of the augmentation ideal of $H$. In particular, if
$\alpha$ is an infinitesimal character, the linear map
$\alpha^{|n}$, the restriction of which is~0 on all the graded
components of~$H$ excepted $H_n$, and $\alpha^{|n}_n= \alpha_n$,
is also an infinitesimal character. Thus $\alpha$ decomposes as a
sum of infinitesimal characters $\alpha =\sum_{n>0}\alpha^{|n}$.
The vector space of infinitesimal characters, written $\Xi_H(A)$,
or just $\Xi(A)$, decomposes accordingly as the direct product of
its graded components: $\Xi(A)=\prod_{n\in\N^\ast} \Xi_n(A)$,
where~$\Xi_n(A)$ is the linear span of the~$\alpha^{|n}$. Thus we
regard $\Xi(A)$ as the natural `topological' completion of the
graded vector space $\oplus_{n\in\N^\ast}\Xi_n(A)$. In more
detail, we consider the subspaces $\oplus_{i\ge n\in\N^\ast}
\Xi_i(A)$ and the associated surjective homomorphisms, and
take the inverse limit, whose elements are infinite series.
This property we agree to abbreviate from now on to ``$\Xi(A)$ is
a graded vector space''; the sundry objects defined below are
graded in that sense ---that is, they are actually completions of
graded vector spaces, completions of graded Lie algebras, and so
on.
The space $\Lin(H,A)$ of $k$-linear maps from~$H$ to~$A$,
$\Lin(H,A):=\prod_{n\in\N}\Lin(H_n,A)$, is naturally endowed with
an algebra structure by the \textit{convolution product}:
\allowdisplaybreaks{
\begin{equation*}
f\ast g := \pi_A\circ(f\otimes g)\circ\delta: \qquad H
\xrightarrow{\delta} H \otimes H \xrightarrow{f \otimes g} A
\otimes A \xrightarrow{\pi_{A}} A.
\end{equation*}}
The unit for the convolution product is precisely~$e: H \to A$.
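To make the convolution product concrete, one can work it out on a toy example: the binomial Hopf algebra $H=k[x]$ with $\delta(x^n)=\sum_{i}\binom{n}{i}x^i\otimes x^{n-i}$ (a stand-in chosen for simplicity, not one of the renormalization Hopf algebras of later sections). A linear map $f:H\to k$ is determined by its values on the basis $x^n$, and the convolution becomes $(f\ast g)(x^n)=\sum_i\binom{n}{i}f(x^i)\,g(x^{n-i})$:

```python
from math import comb

def conv(f, g):
    """Convolution on the binomial Hopf algebra k[x]:
    (f * g)(x^n) = sum_i C(n, i) f(x^i) g(x^{n-i})."""
    return lambda n: sum(comb(n, i) * f(i) * g(n - i) for i in range(n + 1))

e = lambda n: 1 if n == 0 else 0   # the counit, unit for *
gamma = lambda n: 1                # the character determined by x -> 1

# e is a two-sided unit for the convolution product
print([conv(e, gamma)(n) for n in range(4)])      # [1, 1, 1, 1]
print([conv(gamma, e)(n) for n in range(4)])      # [1, 1, 1, 1]
# the convolution of two characters is again a character: here x -> 2
print([conv(gamma, gamma)(n) for n in range(4)])  # [1, 2, 4, 8]
```

The last line illustrates that $G(k)$ is closed under $\ast$: $(\gamma\ast\gamma)(x^n)=2^n$ is the character sending $x$ to $2$.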
Especially when $A=H$, the convolution product provides various tools
to deal with properties of characters, such as the logarithm of the
identity, which is a projector on~$H$ with kernel the square of the
augmentation ideal. As a corollary, $A$-valued infinitesimal
characters can be characterized as those maps~$\alpha$ from~$H$ to~$A$
such that $\alpha_n \circ I^{\ast k}(h_n)=k \,\alpha_n(h_n)$, for
any~$k,n\in\N$. We refer the reader interested in a systematic study
of these phenomena and of their connections to the classical structure
theorems for Hopf algebras such as the Cartier--Milnor--Moore theorem
to~\cite{patras1993,patras1994,Cartier2006}.
The set $G_H(A)$ of characters, or just $G(A)$, is a group for the
convolution product. The inverse is given explicitly by the
formula:
\begin{equation*}
\gamma^{-1} = \Big(e + \sum\limits_{n\in\N^\ast}
\gamma_n\Big)^{-1} = e +
\sum\limits_{k\in\N^\ast}(-1)^{k}\,
\bigg(\,\sum\limits_{n\in\N^\ast}
\gamma_n\bigg)^{\ast k}.
\end{equation*}
The last sum is well-defined as a power series, since only a
finite number of terms appear in each degree. We denote as usual
by~$S$ the convolution inverse of the identity map~$I$
in~$\End(H):=\Lin(H,H)$. Then $\gamma^{-1}= \gamma \circ S \in
G(A)$; the reader unfamiliar with this identity can deduce it
easily from the next lemma and other notions introduced soon.
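As an aside not contained in the original development, the geometric-series
formula for $\gamma^{-1}$ can be checked numerically in the simplest
nontrivial example, the binomial Hopf algebra $H=k[x]$ with
$\Delta(x^n)=\sum_k\binom{n}{k}\,x^k\otimes x^{n-k}$. The Python sketch below
is a toy model (the truncation degree and the chosen character are our own
choices), storing a linear map by its values on the monomials $x^n$:

```python
from math import comb

N = 6  # truncation degree (our choice)

# A linear map k[x] -> k is stored as the list of its values f[n] = f(x^n).
def conv(f, g):
    # (f * g)(x^n) = sum_k C(n,k) f(x^k) g(x^{n-k}),
    # since Delta(x^n) = sum_k C(n,k) x^k (x) x^{n-k}.
    return [sum(comb(n, k) * f[k] * g[n - k] for k in range(n + 1))
            for n in range(N + 1)]

e = [1] + [0] * N                       # unit of convolution: e(x^n) = delta_{n,0}
a = 3
gamma = [a ** n for n in range(N + 1)]  # the character x |-> a

# gamma^{-1} = e + sum_{k>=1} (-1)^k (gamma - e)^{*k}; the sum is finite
# in each degree because (gamma - e) vanishes in degree zero.
diff = [gamma[n] - e[n] for n in range(N + 1)]
inv, power = list(e), list(e)
for k in range(1, N + 1):
    power = conv(power, diff)
    inv = [inv[n] + (-1) ** k * power[n] for n in range(N + 1)]

print(inv)                # the character x |-> -a
assert conv(gamma, inv) == e
```

The output agrees with $\gamma\circ S$, since the antipode of $k[x]$ sends
$x$ to $-x$.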
Now, $\Lin(H,A)$ is \textit{not} naturally provided with a Hopf
algebra structure over the ground field $k$, except under
particular assumptions on the target space $A$. For example, it is
(up to the completion phenomena) a Hopf algebra if $A=k$. This
follows from the usual argument to construct a Hopf algebra
structure on the graded dual of a graded connected Hopf algebra of
finite type. It is not when $A=k((\varepsilon))$, that is, when the
coefficient algebra~$A$ is the field of Laurent series ---an example
relevant to renormalization. However, as will
be shown below, a Hopf algebra structure can always be defined on
certain remarkable spaces naturally related to~$\Lin(H,A)$ and,
most importantly in view of applications to pQFT, to the group of
characters $G(A)$.
\begin{lem}
\label{lem:AstCirc}
Assume that, for given $\phi ,\psi \in \Lin(H,A)$, there exist
elements $\phi^{(1)}\otimes\phi^{(2)}$, respectively
$\psi^{(1)}\otimes \psi^{(2)}$, in $\Lin(H,A)\otimes \Lin(H,A)$ such
that, for any $h,h'\in H$\/:
$$
\phi^{(1)}(h) \cdot \phi^{(2)}(h') = \phi (hh') \sepword{and}
\psi^{(1)}(h) \cdot \psi^{(2)}(h') = \psi (hh');
$$
then
$$
\phi \ast \psi (h h') = \big(\phi^{(1)}
\ast \psi^{(1)}(h)\big) \cdot
\big(\phi^{(2)} \ast \psi^{(2)}(h')\big).
$$
Moreover, when $\psi\in \End(H)$ and $\phi\in \Lin(H,A)$, with the
same hypothesis and $\psi^{(1)},\psi^{(2)}$ now in $\End(H)$\/:
$$
\phi \circ \psi (hh') =
\big(\phi^{(1)} \circ
\psi^{(1)}(h)\big) \cdot
\big(\phi^{(2)} \circ
\psi^{(2)}(h')\big).
$$
The last identity in particular holds when $A=H$, that is,
in~$\End(H)$.
\end{lem}
\begin{proof}
Indeed, we have:
\allowdisplaybreaks{
\begin{eqnarray*}
\phi \ast \psi (hh') &=& \phi(h^{(1)}{h'}^{(1)}) \cdot
\psi(h^{(2)}{h'}^{(2)})\\
&=& \phi^{(1)}(h^{(1)}) \cdot
\phi^{(2)}({h'}^{(1)}) \cdot
\psi^{(1)}(h^{(2)}) \cdot
\psi^{(2)}({h'}^{(2)})\\
&=& \phi^{(1)}(h^{(1)}) \cdot
\psi^{(1)}(h^{(2)}) \cdot
\phi^{(2)}({h'}^{(1)}) \cdot
\psi^{(2)}({h'}^{(2)})\\
&=& \big(\phi^{(1)}\ast\psi^{(1)}(h)\big)
\cdot
\big(\phi^{(2)}\ast\psi^{(2)}(h')\big),
\end{eqnarray*}}
an identity that we also write, for later use,
$$
(\phi\ast\psi )^{(1)}\otimes (\phi\ast\psi )^{(2)}=
(\phi^{(1)}\otimes \phi^{(2)})\ast (\psi^{(1)}\otimes \psi^{(2)}).
$$
We also clearly have:
$$
\phi \circ \psi (hh') = \phi (\psi^{(1)}(h)\ \psi^{(2)}(h'))
= \big(\phi^{(1)}\circ\psi^{(1)}(h)\big) \cdot
\big(\phi^{(2)} \circ \psi^{(2)}(h')\big).
\eqno \qed
$$
\renewcommand{\qed}{}
\end{proof}
\begin{cor}
The graded vector space of infinitesimal characters $\Xi(A)$ is a
graded Lie subalgebra of~$\Lin(H,A)$ for the Lie bracket induced
on the latter by the convolution product.
\end{cor}
\begin{proof}
Indeed, infinitesimal characters are precisely the elements
$\alpha$ of~$\Lin(H,A)$ such that:
$$
\alpha^{(1)}\otimes\alpha^{(2)} = \alpha\otimes e + e\otimes\alpha
\sepword{satisfy, for any $h,h'\in H$:} \alpha^{(1)}(h) \cdot
\alpha^{(2)}(h') = \alpha(hh').
$$
According to the foregoing lemma, for $\alpha$ and $\beta$ two graded
infinitesimal characters, we obtain:
\allowdisplaybreaks{
\begin{eqnarray}
[\alpha ,\beta ](hh') &:=& (\alpha\ast\beta -\beta\ast\alpha)(hh')
\nonumber\\
&=& \pi_A [ (\alpha\otimes e + e \otimes \alpha)\ast(\beta
\otimes e + e \otimes \beta)
\nonumber\\
& & -(\beta \otimes e + e \otimes \beta) \ast (\alpha\otimes e + e
\otimes \alpha)](h \otimes h')
\nonumber\\
&=& [\alpha ,\beta ](h)\cdot e(h') + e (h)\cdot [\alpha,\beta](h'),
\nonumber
\end{eqnarray}}
hence the corollary follows.
\end{proof}
\begin{prop}
The enveloping algebra $U(\Xi (A))$ of the Lie algebra~$\Xi(A)$ maps
naturally to the convolution subalgebra of~$\Lin(H,A)$ generated
by~$\Xi (A)$.
\end{prop}
The existence of that natural algebra map from~$U(\Xi(A))$ to~$\Lin(H,A)$
follows from the previous lemma and from the universal property of
enveloping algebras.
Notice that $U(\Xi(A))$ is also, as the enveloping algebra of a graded
Lie algebra, a graded connected cocommutative Hopf algebra, which we
call the Hopf algebra~$\Char_H(A)$, or just~$\Char(A)$, of
characteristic functions on~$H$ (with values in~$A$). We write
$\barast$ for the product on~$\Char(A)$ and use~$\Delta$ for its
coproduct. Thus by definition of~$\Char(A)$ the primitive elements are
the infinitesimal $A$-valued characters of~$H$. Besides providing Hopf
algebra tools for the study of Feynman rules, the Hopf algebra of
characteristic functions ---and the associated pro-unipotent group---
will play a crucial role in Section~\ref{sect:cosmic}, when relating
the FLA approach to renormalization to the Connes--Marcolli motivic
Galois group.
Notice that $\Delta$ is not defined in general on $\Lin(H,A)$, nor
on the image of $\Char(A)$ in $\Lin(H,A)$; see~\cite{hazy04}
and~\cite{patreu2002} for a discussion of the subject in the
particular case $A=H$.
\begin{prop}
\label{prop:recilaw}
We have, for any $\phi \in \Char(A)$ and any $h,h'\in H$, the
reciprocity law:
$$
\phi (hh') = \phi^{(1)}(h) \cdot \phi^{(2)}(h'),
$$
where we use the Sweedler notation for~$\Delta(\phi)$, and where the
action of~$\phi$ on~$H$ is induced by the natural map from~$\Char(A)$
to~$\Lin(H,A)$.
\end{prop}
\begin{proof}
This is true when~$\phi$ is an infinitesimal character. According
to the previous proposition, for $\phi,\phi'$ in $\Char(A)$ we
have $\phi\barast\phi'(hh')=\phi\ast\phi'(hh')$. By
Lemma~\ref{lem:AstCirc}, the identity then holds for
$\phi\barast\phi'$ whenever it holds for $\phi$ and $\phi'$.
Since $\Char(A)$ is generated as an associative algebra by
infinitesimal characters, the proposition follows.
\end{proof}
We remark that the reciprocity law can be rewritten:
$$
\phi \circ \pi = \pi_A \circ \Delta(\phi).
$$
In the cocommutative case, the identity playing a similar role
(mutatis mutandis) is~\cite{patreu2002}:
$$
\delta \circ \phi = \Delta(\phi) \circ \delta.
$$
\smallskip
As a consequence of Proposition~\ref{prop:recilaw}, the set
$G'(A)$ of group-like elements in $\Char(A)$ maps to characters,
that is, elements of $G(A)$ ---since the identity $\Delta
(\phi)=\phi\otimes\phi$ in $\Char(A)$ translates into the identity
$\phi(hh')=\phi(h)\phi(h')$ for $h,h'\in H$. We show now that, as
usual, the convolution exponential and logarithm are mutually
inverse bijections between $\Xi(A)$ and~$G(A)$, and between
$\Xi(A)$ and $G'(A)$. Indeed, for any $\alpha\in\Xi(A)$, we have
in $\Char(A)$:
\allowdisplaybreaks{
\begin{eqnarray*}
\Delta \big(\exp(\alpha)\big) &=& \exp\big(\Delta (\alpha) \big)
= \exp(\alpha\otimes e + e \otimes\alpha )
= \exp (\alpha\otimes e) \barast \exp(e \otimes\alpha) \\
&=& \big(\exp (\alpha)\otimes e \big) \barast \big(e\otimes
\exp(\alpha)\big) = \exp(\alpha) \otimes \exp(\alpha),
\end{eqnarray*}}
which also implies that we have $\exp(\alpha )\in G(A)$
in~$\Lin(H,A)$. We have used first that $\alpha$ is a graded
infinitesimal character, then that $\alpha\otimes e$ and
$e\otimes\alpha$ commute. The other way round, if $\gamma$ is a
character:
\allowdisplaybreaks{
\begin{eqnarray*}
\log(\gamma )(hh') &=& \pi_A\big( \log(\gamma\otimes\gamma)(h
\otimes h') \big) \\
&=& \pi_A\big((\log(\gamma\otimes e) +
\log(e\otimes\gamma))(h\otimes h')\big) \\
&=& \pi_A\big((\log(\gamma)\otimes e + e
\otimes \log(\gamma))(h\otimes h')\big) \\
&=& \log(\gamma)(h) \cdot e(h') + e(h) \cdot
\log(\gamma)(h'),
\end{eqnarray*}}
whereas if $\gamma\in G'(A)$:
\allowdisplaybreaks{
\begin{eqnarray*}
\Delta (\log \gamma) &=& \log (\Delta \gamma ) =
\log (\gamma\otimes\gamma ) \\
&=& \log ((\gamma\otimes e)\barast (e\otimes\gamma )) =
\log\gamma\otimes e + e\otimes\log\gamma .
\end{eqnarray*}}
\begin{cor}
The natural algebra map from~$\Char(A)$ to~$\Lin(H,A)$ induces a
bijection between the group $G'(A)$ of group-like elements
in~$\Char(A)$ and $G(A)$, the group of $A$-valued characters on~$H$.
\end{cor}
We identify $G(A)$ with~$G'(A)$ henceforth. In particular, both the
identity map~$I$ and the antipode~$S$ can be viewed as elements
of~$\Char(H)$, and we won't distinguish between~$I,S$ and their
respective preimages in~$\Char(H)$.
\section{Logarithmic derivatives and the Dynkin operator}
\label{sect:dynkinOperator}
Although the logarithm is the simplest bijection from group-like
elements to primitive elements in a graded connected cocommutative
Hopf algebra, the most relevant bijection in view of applications to
renormalization is a bit subtler. It is a kind of logarithmic
derivative, closely related to a Lie idempotent known as the
\textit{Dynkin idempotent}. Presently we survey the properties of the
Dynkin operator (the composition of the Dynkin idempotent with the
grading map $Y$) pertinent to our forthcoming computations, and also
obtain new results on the operator, such as the existence of the
advertised bijection between~$G(A)$ and~$\Xi(A)$. The results
generalize the fine properties of Hopf algebras encapsulated in the
notion of descent algebra of a Hopf algebra~\cite{patras1994}. They
rely as well on the Hopf algebraic treatment of the Dynkin operator
given in~\cite{patreu2002}, and on more classical Lie theoretic
properties of the operator. We give in particular closed formulas for
the inverse map to Dynkin's operator, i.e., from $\Xi(A)$ to $G(A)$.
\smallskip
The classical Dynkin operator is defined as follows. Let $X=
\{x_1,\ldots,x_n,\ldots\}$ be an alphabet. The tensor algebra
$T(X):=\bigoplus_{n\geq 0}T_n(X)$ over~$X$ is a cocommutative graded
Hopf algebra with the set of words $x_{i_1}\ldots x_{i_l}$ as a linear
basis in degree $l$. It is also, canonically, the enveloping algebra
of the FLA~$\Lie(X)$ over~$X$. The product in~$T(X)$ is induced by
concatenation of words, whereas the coproduct is fully characterized
by the property that elements of~$X$ are primitive in~$T(X)$. The
Dynkin operator $D:T(X)\to\Lie(X)$ is given by:
$$
D(x_{i_1}\ldots x_{i_n}) =
[\dots[[x_{i_1},x_{i_2}],x_{i_3}],\dots,x_{i_n}];
$$
with $D\big|_{T_0(X)}=0$ and $D\big|_{T_1(X)}=\mathrm{id}_X$. According
to an idea essentially due to von~Waldenfels, this iterated
bracketing operator can be represented in a more abstract way as
the convolution product of the antipode~$S$ of~$T(X)$ with the
grading operator~$Y$, acting as multiplication by~$n$
on~$T_n(X)$:
\begin{equation*}
D = S \ast Y; \sepword{equivalently} I \ast D = Y.
\end{equation*}
The most famous property of~$D$ is the Dynkin--Specht--Wever theorem,
stating that an element~$v$ in $T_n(X)$ ---a linear combination of
words of length $n$--- is in~$\Lie(X)$ if and only if $D(v) = nv$. In
effect, such a~$v$ is in the primitive part of the tensor algebra,
and:
$$
nv = Yv = \pi(I \otimes D)(1 \otimes v + v \otimes 1) = D(v).
$$
The converse is also well known. These definitions and properties
were generalized to bialgebras in~\cite{patreu2002}, which we
mainly follow here. However, for our purposes we need to give a
somewhat detailed account: that reference, as well as the
classical theory of the Dynkin operator, focuses on
graded connected cocommutative Hopf algebras, whereas we are mainly
interested here in \textit{commutative} Hopf algebras. The interested
reader will find further historical and technical information about
the Dynkin operator and its relevance to Lie computations, as well as
other classical Lie idempotents, in references~\cite{gelfand1995}
and~\cite{reutenauer1993}.
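The Dynkin--Specht--Wever theorem, and the quasi-idempotence of $D$
established below, are easy to probe by machine on small words. The
following Python sketch is our illustration rather than part of the
original development; it realizes $T(X)$ as formal sums of words:

```python
from collections import defaultdict

# Elements of T(X) as formal sums of words: {word (tuple of letters): coeff}.
def mul(u, v):
    out = defaultdict(int)
    for wu, cu in u.items():
        for wv, cv in v.items():
            out[wu + wv] += cu * cv
    return {w: c for w, c in out.items() if c}

def bracket(u, v):
    out = defaultdict(int)
    for w, c in mul(u, v).items():
        out[w] += c
    for w, c in mul(v, u).items():
        out[w] -= c
    return {w: c for w, c in out.items() if c}

def dynkin(t):
    # D(x_{i_1}...x_{i_n}) = [...[[x_{i_1},x_{i_2}],x_{i_3}],...,x_{i_n}],
    # extended linearly, with D vanishing on the empty word.
    out = defaultdict(int)
    for word, c in t.items():
        if not word:
            continue
        acc = {word[:1]: 1}
        for letter in word[1:]:
            acc = bracket(acc, {(letter,): 1})
        for w, cc in acc.items():
            out[w] += c * cc
    return {w: c for w, c in out.items() if c}

x1, x2, x3 = ({('x%d' % i,): 1} for i in (1, 2, 3))
v = bracket(bracket(x1, x2), x3)   # a degree 3 Lie element

# Dynkin--Specht--Wever: D(v) = 3 v for the Lie element v.
assert dynkin(v) == {w: 3 * c for w, c in v.items()}

# Quasi-idempotence D o D = D o Y, checked on a word of length 3.
w = mul(mul(x1, x2), x3)
assert dynkin(dynkin(w)) == {u: 3 * c for u, c in dynkin(w).items()}
print("Dynkin operator checks passed")
```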
So let $H$ be graded, connected and commutative. Since $I\in\Char(H)$,
so are its graded components $I_n$. Notice, for further use,
that the subalgebra of $\Char(H)$ generated by the~$I_n$ maps to the
descent algebra of~$H$ ---the convolution subalgebra of~$\End(H)$
generated by the~$I_n$, see \cite{patras1994}. Moreover, the
grading operator $Y:= \sum_{n\in\N} n I_n$ belongs to~$\Char (H)$.
Its coproduct is given by:
$$
\Delta (Y) = Y \otimes I + I \otimes Y,
$$
another way of expressing that $Y$ is a derivation of~$H$:
$$
Y(hh') = Y(h)h'+ hY(h').
$$
Let us adopt the notation $Yf$ for $f\circ Y$, according to the custom
in distribution theory. Under this guise the operator $Y$ extends
naturally to a derivation on $\Lin(H,A)$. We find,
with~$f,g\in\Lin(H,A)$ and $h\in H$:
\allowdisplaybreaks{
\begin{eqnarray*}
Y(f \ast g)(h) &:=& f \ast g\,(Y(h)) = |h|(f \ast g)(h) \\
&=& |h|f(h^{(1)})g(h^{(2)}) \\
&=& |h^{(1)}|f(h^{(1)})g(h^{(2)}) +
|h^{(2)}|f(h^{(1)})g(h^{(2)}) \\
&=& Yf \ast g\,(h) + f \ast Yg\,(h),
\end{eqnarray*}}
where we used that $\Delta (Y(h))=|h|\Delta (h)=
\big(|h^{(1)}|+|h^{(2)}|\big)\,h^{(1)}\otimes h^{(2)}$.
\begin{prop} \label{prop:Sconvd}
Convolution of the $H$-valued character~$S$ with any derivation~$d$
of~$H$ yields an $H$-valued infinitesimal character.
\end{prop}
\begin{proof}
Indeed, since $d$ is a derivation, we have $d(hh')=d(h)h'+hd(h')$.
Since, furthermore, $\Delta (S)=S\otimes S$, we get:
\allowdisplaybreaks{
\begin{eqnarray*}
(S \ast d)(hh') &=& \pi\circ [ (S\otimes S) \ast (d\otimes I+I\otimes d)]
(h\otimes h') \\
&=&\pi\circ [ (S \ast d) \otimes (S \ast I)
+ (S \ast I) \otimes (S \ast d)](h\otimes h') \\
&=& S \ast d (h)\cdot e(h') + e(h) \cdot S
\ast d(h'),
\end{eqnarray*}}
hence the proposition follows.
\end{proof}
\begin{cor} \label{cor:DynkinInfChar}
The Dynkin operator $D:=S\ast Y$ is an $H$-valued infinitesimal
character of~$H$.
\end{cor}
Notice also that $D$ satisfies $D\circ D=D\circ Y$ ---in other terms,
$D$ is an idempotent up to a scalar factor depending on the grading,
that is, $D$ is a quasi-idempotent. Equivalently, $D\circ Y^{-1}$ is
an idempotent on $H^+$ (also known when $H$ is the cotensor algebra
$T^\ast(X)$ ---see below--- as the Dynkin idempotent). Indeed, for any
$h\in H$,
$$
D\circ D(h) = D\circ (S\ast Y)(h)
= D\big(S(h^{(1)}) Y(h^{(2)})\big).
$$
However, since $D$ is an infinitesimal character, $D(hh')=0$ if
$h,h' \in H^+$ and therefore,
$$
D\circ D(h) = D\big(S(h)Y(1_H)+S(1_H)Y(h)\big)
= D\circ Y(h),
$$
since $Y(1_H)=0$.
\begin{prop}
Right composition with an infinitesimal
character~$\alpha\in\Xi(H)$ induces a map from~$G(H)$ to~$\Xi(H)$.
This also holds for~$G(A)$ and~$\Xi(A)$, where~$A$ is an arbitrary
commutative unital algebra.
\end{prop}
\begin{proof}
Indeed, let $\gamma$ be in $G(H)$ or $G(A)$; we have:
$$
\gamma \circ \alpha (hh') = \gamma\circ\alpha(h)\,e(h') +
e(h)\,\gamma\circ\alpha(h'),
$$
by virtue of Lemma~\ref{lem:AstCirc}, since $\gamma\circ e=e$ for
any character~$\gamma$.
\end{proof}
\begin{cor}
\label{cor:DynkinD}
Right composition with the Dynkin operator $D$ induces a map
from~$G(A)$ to~$\Xi(A)$.
\end{cor}
In general, for~$\gamma$ belonging to~$G(H)$ or~$G(A)$ and any
$f_1,\ldots,f_k\in\End(H)$, we have the distributive property:
$$
\gamma \circ (f_1 \ast \dots \ast f_k) = (\gamma\circ f_1) \ast
\dots \ast (\gamma\circ f_k).
$$
Particularly,
$$
\gamma \circ D = \gamma \circ (S \ast Y)
= \gamma^{-1} \ast Y\gamma.
$$
\begin{thm}
\label{thm:Gamma}
Right composition with $D$ is a bijective map from~$G(H)$ to~$\Xi(H)$.
The inverse map is given by:
\begin{equation}
\Gamma : \alpha\in\Xi (H) \longmapsto \sum\limits_n
\sum_{\substack{k_1, \dots, k_l\in\N^\ast \\ k_1 + \dots +
k_l = n}} \frac{\alpha_{k_1} \ast
\dots\ast \alpha_{k_l}} {k_1(k_1+k_2) \dots (k_1+ \dots +k_l)} \in
G(H).
\label{eq:madre-del-cordero}
\end{equation}
The theorem also holds if $G(H)$ and~$\Xi(H)$ are replaced
by~$G(A)$, respectively~$\Xi(A)$.
\end{thm}
We show first that~$\Gamma$ is a left inverse to right composition
with~$D$. The following lemma has been obtained in~\cite{gelfand1995}
in the setting of noncommutative symmetric functions and
quasi-determinants. We include the proof, which is elementary.
\begin{lem}\label{dynkid}
For~$n\ge1$ we have:
$$
I_n = \sum_{\substack{k_1, \dots, k_l\in\N^\ast \\ k_1 + \dots +
k_l = n}} \frac{D_{k_1}\ast \dots \ast D_{k_l}}{k_1(k_1 + k_2)
\dots (k_1 + \cdots + k_l)}.
$$
\label{lem:lemma-of-lemmata}
\end{lem}
\begin{proof}
As already remarked, the definition of~$D$ implies $I \ast D = I
\ast S \ast Y = Y$. In particular, since $D_0=0$:
$$
Y_n = n I_n = (I \ast D)_n = \sum\limits_{i=1}^{n}I_{n-i}\ast D_i.
$$
Inserting recursively the value of~$I_i$ into the right-hand side
of the identity, we obtain:
\allowdisplaybreaks{
\begin{eqnarray*}
I_n &=& \frac{D_n}{n} + \sum\limits_{i=1}^{n-1}\sum\limits_{1\leq
j\leq n-i} \frac{I_{n-i-j}\ast D_{j} \ast D_i}{(n-i)n}
\\
&=& \frac{D_n}{n} + \sum_{\substack{j+i=n \\ i,j\not=0}}
\frac{D_j\ast D_i}{j\cdot n} + \sum\limits_{i=1}^{n-1}
\sum\limits_{1\leq j\leq n-i-1}\frac{I_{n-i-j}\ast D_{j} \ast
D_i}{(n-i)n}
\\
&=& \sum_{\substack{k_1,\dots,k_l\in \N^\ast \\ k_1 + \dots +
k_l=n}} \frac{D_{k_1}\ast \dots \ast D_{k_l}}{k_1(k_1+k_2) \dots
(k_1+ \dots +k_l)}.
\qquad\qquad\mathqed
\end{eqnarray*}}
\renewcommand{\qed}{}
\end{proof}
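As a sanity check on the coefficients of
Lemma~\ref{lem:lemma-of-lemmata} (an observation of ours, not taken from
the text): replacing every $D_{k_i}$ by the scalar~$1$ and convolution by
ordinary multiplication, the recursion $I_n=\frac1n\sum_i I_{n-i}\ast D_i$
of the proof forces the coefficients, summed over all compositions of~$n$,
to equal~$1$. A quick Python verification:

```python
from fractions import Fraction

def compositions(n):
    # All tuples (k_1,...,k_l) of positive integers with k_1+...+k_l = n.
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

def coeff(K):
    # The coefficient 1/(k_1 (k_1+k_2) ... (k_1+...+k_l)) of the lemma.
    c, s = Fraction(1), 0
    for k in K:
        s += k
        c /= s
    return c

# Scalar shadow of the recursion I_n = (1/n) sum_i I_{n-i} * D_i:
# the coefficients over all compositions of n sum to 1.
for n in range(1, 8):
    assert sum(coeff(K) for K in compositions(n)) == 1
print("coefficient sums equal 1 up to n = 7")
```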
Now we compute $\gamma=\gamma\circ I$, where~$I$ is expanded
according to the previous lemma, yielding:
\allowdisplaybreaks{
\begin{eqnarray*}
\gamma &=& e + \gamma \circ\bigg\{\sum\limits_{n\in\N^\ast}\,
\sum_{\substack{k_1, \dots ,k_l\in \N^\ast \\ k_1 + \dots + k_l=n}}\,
\frac{D_{k_1}\ast \dots \ast D_{k_l}}{k_1(k_1 + k_2)\dots(k_1 +
\dots + k_l)}\bigg\}
\\
&=& e + \sum\limits_{n\in\N^\ast}\,
\sum_{\substack{k_1,\dots ,k_l\in \N^\ast \\ k_1 + \dots + k_l = n}}\,
\frac{\gamma \circ D_{k_1}\ast \dots \ast \gamma\circ D_{k_l}}{k_1
(k_1 + k_2) \dots (k_1 + \dots + k_l)}.
\end{eqnarray*}}
As $D$ preserves the grading, it follows that $\Gamma$ is a left
inverse to the right composition with~$D$.
Similar calculations prove that $\Gamma$ is character-valued, that
is, that it actually maps~$\Xi(H)$ to~$G(H)$. Indeed,
let~$\alpha$ be any infinitesimal character in~$\Xi(H)$. Then we have
in $\Char(H)$:
\allowdisplaybreaks{
\begin{eqnarray*}
\lefteqn{\Delta (\Gamma (\alpha )) = e \otimes e + \sum_{n >
0} \sum_{\substack{k_1, \dots, k_l\in\N^\ast \\ k_1 + \dots
+ k_l = n}}\, \frac{\Delta (\alpha_{k_1}\barast \cdots
\barast \alpha_{k_l})} {k_1(k_1+k_2) \dots (k_1 + \dots +
k_l)}} \\
&=& e \otimes e + \sum_{n > 0} \sum_{\substack{k_1, \dots,
k_l\in\N^\ast \\ k_1 + \dots + k_l = n}}
\,\sum_{\substack{I\sqcup J = \{k_1,\dots,k_l\} \\
|I| = m,|J| = p}} \frac{(\alpha_{i_1}\barast \cdots
\barast \alpha_{i_m}) \otimes(\alpha_{j_1}\barast
\cdots \barast \alpha_{j_p})} {k_1(k_1 + k_2) \dots
(k_1 + \dots + k_l)},
\end{eqnarray*}}
where $I=\{i_1,\dots,i_m\},\,J=\{j_1,\dots,j_p\}$ and we have used
that the $\alpha_{i_m}$ are all infinitesimal characters.
Particularly, the assertion we intend to prove, that is $\Delta
(\Gamma(\alpha))= \Gamma(\alpha)\otimes\Gamma(\alpha)$, amounts to
the equality:
\allowdisplaybreaks{
\begin{eqnarray*}
\lefteqn{\bigg( e + \sum_{n > 0} \sum_{\substack{k_1, \dots,
k_l\in\N^\ast \\ k_1 + \dots + k_l =
n}}\,\frac{\alpha_{k_1}\barast \cdots \barast
\alpha_{k_l}}{k_1(k_1 + k_2) \dots (k_1 + \dots +
k_l)}\bigg)^{\otimes 2}} \\
&=& e \otimes e + \sum_{n > 0} \sum_{\substack{k_1, \dots,
k_l \in \N^\ast \\ k_1 + \dots + k_l = n}}
\,\sum_{\substack{I\sqcup J=\{k_1,...,k_l\} \\ |I| = m,|J| = p}}
\frac{(\alpha_{i_1}\barast \cdots \barast
\alpha_{i_m})\otimes (\alpha_{j_1}\barast \cdots
\barast\alpha_{j_p})}{k_1(k_1 + k_2) \dots (k_1 + \dots + k_l)}.
\end{eqnarray*}}
This follows from the identity:
\allowdisplaybreaks{
\begin{eqnarray}
\lefteqn{\sum\limits_{I\sqcup J=K}\frac{1}{k_1(k_1 + k_2)
\dots (k_1 + \dots + k_{p+m})}} \nonumber \\
&=& \sum_{\substack{i_1,\dots,i_m \in \N^\ast \\ j_1,\dots,j_p \in
\N^\ast}} \, \frac{1}{i_1(i_1 + i_2)\dots (i_1 + \dots
+ i_m)}\cdot\frac{1}{j_1(j_1 + j_2)\dots (j_1 + \dots + j_p)},
\label{eq:estamos-fritos}
\end{eqnarray}}
where~$K$ runs over all sequences $(k_1,\dots,k_{p+m})$ obtained by
shuffling the sequences $I$ and $J$. In turn, the identity follows if
the equation $\Delta (\Gamma (\alpha ))=\Gamma (\alpha )\otimes \Gamma
(\alpha )$ ---that is, $\Gamma (\alpha )\in G(H)$--- holds for a
particular choice of $H$ and $\alpha$ such that the $\alpha_i$ form a
family of algebraically independent generators of a subalgebra of the
convolution algebra $\End(H)$ ---which are therefore also
algebraically independent as elements of~$\Char(H)$.
So, let us consider the particular case $H=T^\ast(X)$ where $H$ is the
graded dual of the enveloping algebra of the FLA over an infinite
alphabet $X$ and $\alpha =D$ is the Dynkin operator. Then, we already
know that $\Gamma (D)=I$, due to the previous lemma, so that $\Gamma
(D)$ is group-like in $\Char(T^\ast (X))$. On the other hand, as is
well known, the graded components of the Dynkin operator are
algebraically independent in the convolution algebra
$\End(T^\ast(X))$, and the identity follows. (For details on the
algebraic independence of the graded components of the Dynkin
operator, consult~\cite{gelfand1995,reutenauer1993}.)
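Identity~\eqref{eq:estamos-fritos} is also easy to test numerically. The
following sketch (our illustration; the chosen sequences are arbitrary) sums
the coefficient $1/(k_1(k_1+k_2)\cdots)$ over all interleavings of the two
sequences, counted with multiplicity:

```python
from fractions import Fraction
from itertools import combinations

def shuffles(I, J):
    # All interleavings of I and J preserving the internal order of each,
    # counted with multiplicity.
    m, p = len(I), len(J)
    for pos in combinations(range(m + p), m):
        slots, it_i, it_j = set(pos), iter(I), iter(J)
        yield tuple(next(it_i) if s in slots else next(it_j)
                    for s in range(m + p))

def coeff(K):
    # 1/(k_1 (k_1+k_2) ... (k_1+...+k_l)), as in the identity.
    c, s = Fraction(1), 0
    for k in K:
        s += k
        c /= s
    return c

I, J = (2, 1, 3), (1, 4)      # arbitrary test sequences
lhs = sum(coeff(K) for K in shuffles(I, J))
assert lhs == coeff(I) * coeff(J)
print(lhs)                    # equals coeff(I) * coeff(J)
```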
Although we have preferred to give a conceptual proof based on FLAs,
let us mention that identity~\eqref{eq:estamos-fritos} is elementary,
and well known in boson Fock space theory. It also follows e.g. from
the shuffle identity for the product of two iterated
integrals~\cite{shst1993}:
\allowdisplaybreaks{
\begin{eqnarray*}
\lefteqn{\sum\limits_{I\sqcup J=K}\int\limits_0^1x_{p+m}^{k_{p+m}-1}
\dots \int\limits_0^{x_2}x_1^{k_1-1}dx_1\dots dx_{p+m}}\\
&=& \int\limits_0^1x_m^{i_m-1}\dots
\int\limits_0^{x_2}x_1^{i_1-1}dx_1\dots dx_m\cdot
\int\limits_0^1y_p^{j_p-1}\dots
\int\limits_0^{y_2}y_1^{j_1-1}dy_1\dots dy_p,
\end{eqnarray*}}
which generalizes the integration by parts rule.
\smallskip
To conclude the proof of Theorem~\ref{thm:Gamma} we show that $\Gamma$
is also a right inverse to the composition with~$D$. We contend that,
for any~$h$ in the augmentation ideal of~$H$ and arbitrary $\alpha\in
\Xi(H)$, the following relation holds:
$$
\alpha (h) = \Gamma(\alpha)^{-1} \ast Y\Gamma(\alpha)\,(h)
\sepword{or, equivalently,}
Y \Gamma(\alpha)(h) = \Gamma(\alpha) \ast \alpha\,(h).
$$
Indeed, we have:
\allowdisplaybreaks{
\begin{eqnarray*}
Y\Gamma(\alpha)(h) &:=& |h| \sum_{\substack{k_1,\dots,k_l\in \N^\ast
\\ k_1+ \dots + k_l=|h|}}\, \frac{\alpha_{k_1}\ast \dots \ast
\alpha_{k_l}}{k_1(k_1+k_2) \dots (k_1+ \dots +k_l)}\,(h)
\\
&=& \sum_{\substack{k_1,\dots ,k_l\in \N^\ast \\ k_1+ \dots +
k_l=|h|}}\, \frac{\alpha_{k_1}\ast \dots \ast
\alpha_{k_{l-1}}}{k_1(k_1+k_2) \dots (k_1 + \dots + k_{l-1})}\ast
\alpha_{k_l}\,(h)
\\
&=& \Gamma(\alpha) \ast \alpha\,(h).
\end{eqnarray*}}
This together with the fact that $\Gamma(\alpha)\in G(H)$ for
$\alpha \in\Xi(H)$ implies:
\begin{equation*}
\Gamma(\alpha) \circ D = \Gamma(\alpha)^{-1}\ast Y\Gamma(\alpha) =
\alpha.
\end{equation*}
Our task is over.\qed
\smallskip
When $H$ is both commutative and cocommutative the convolution product
is commutative and $\gamma\circ D=Y\log\gamma:=\log(\gamma)\circ Y$.
In particular in this case $D=Y\log I$. This was put to good use
in~\cite{bk2000}. Thus clearly $D$, in the general case, is a
noncommutative logarithmic derivative, and the inverse Dynkin
operator~$\Gamma$ an extremely powerful tool. We finally remark that $Y
\ast S$, corresponding, in the free cocommutative case and as an
operator from the tensor algebra to the FLA, to the right-to-left
iteration of bracketings, is another possible form for the
noncommutative logarithmic derivative, leading in particular to
$\gamma \circ (Y\ast S) = Y \gamma \ast \gamma^{-1}$. More generally,
any interpolation of the form $S^a \ast Y \ast S^b$, with $a+b=1$,
yields a notion of noncommutative logarithmic derivative.
\section{Algebraic BWH decomposition of characters}
\label{sect:birkhoffdecomp}
In this section we summarize previous work on Rota--Baxter
operators relevant for our purpose. Let~$H$ still be graded,
connected and commutative, and let again $A$ stand for a commutative
unital algebra. Assume the algebra~$A$ to split directly, $A = A_+
\oplus A_-$, into the subalgebras $A_+,A_-$ with $1_A \in A_+$. We
denote the projectors to~$A_\pm$ by~$R_\pm$, respectively. The pair
$(A,R_-)$ is a special case of a (weight one) Rota--Baxter
algebra~\cite{egk2005} since $R_-$, and similarly $R_+$, satisfies the
relation:
\begin{equation}
R_-(x)\cdot R_-(y) + R_-(x\cdot y) = R_-\big(R_-(x)\cdot y +
x\cdot R_-(y)\big), \qquad x,y \in A.
\label{eq:RBR}
\end{equation}
The reader may verify that the integration by parts rule is just
the weight zero Rota--Baxter identity, in which the second term on
the left-hand side of~\eqref{eq:RBR} is absent. One easily shows
that $\Lin(H,A)$ with the idempotent operator $\mathcal{R}_-$,
defined by $\mathcal{R}_-(f)=R_- \circ f$ for $f\in \Lin(H,A)$, is
a (in general noncommutative) unital Rota--Baxter
algebra~\cite{egk2005}.
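For a concrete instance (ours, not from the text), take for $A$ the Laurent
polynomials in $\varepsilon$ and for $R_-$ the projection onto the pole
part, as in minimal subtraction; the weight one identity~\eqref{eq:RBR} can
then be verified on random elements:

```python
import random

# Laurent polynomials in eps, stored as {exponent: coefficient}; R_- keeps
# the pole part (strictly negative exponents), as in minimal subtraction.
def mul(x, y):
    out = {}
    for n, a in x.items():
        for m, b in y.items():
            out[n + m] = out.get(n + m, 0) + a * b
    return {n: c for n, c in out.items() if c}

def add(x, y):
    out = dict(x)
    for n, c in y.items():
        out[n] = out.get(n, 0) + c
    return {n: c for n, c in out.items() if c}

def R_minus(x):
    return {n: c for n, c in x.items() if n < 0}

random.seed(1)
for _ in range(20):
    x = {n: random.randint(-5, 5) for n in range(-3, 4)}
    y = {n: random.randint(-5, 5) for n in range(-3, 4)}
    lhs = add(mul(R_minus(x), R_minus(y)), R_minus(mul(x, y)))
    rhs = R_minus(add(mul(R_minus(x), y), mul(x, R_minus(y))))
    assert lhs == rhs
print("weight one Rota-Baxter identity verified on random samples")
```

The check works because both the pole part and the regular part are closed
under multiplication, so the projector along this subalgebra splitting is
Rota--Baxter.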
The subspace $L^{(1)}$ of~$\Lin(H,A)$ made of linear maps that
send the Hopf algebra unit to zero forms a Lie algebra with
$\Xi(A)\subset L^{(1)}$ as a Lie subalgebra. To $L^{(1)}$
corresponds the group $G_0 = e + L^{(1)}=\exp(L^{(1)})$ of linear
maps sending the Hopf algebra unit to the algebra unit. It
contains the group of characters~$G(A)$ as the subgroup~$\exp(\Xi
(A))$. Due to the characterization of infinitesimal characters as
maps that vanish on the square of the augmentation ideal of~$H$,
both $\mathcal{R}_+(\Xi (A))$ and $\mathcal{R}_-(\Xi (A))$ embed
into~$\Xi(A)$. In particular, both are Lie subalgebras
of~$\Xi(A)$.
The Lie algebra decomposition, $\Xi(A)=\mathcal{R}_+(\Xi (A))\oplus
\mathcal{R}_-(\Xi (A))$ lifts to the group of characters $G(A)$ as
follows. Recall the Baker--Campbell--Hausdorff~(BCH)
formula~\cite{reutenauer1993} for the product of two exponentials,
that we write:
$$
\exp(x)\exp(y) = \exp\big(x + y + \BCH(x,y)\big).
$$
In~\cite{egk2005,egk2004} the following non-linear map was defined,
whose properties were further explored in~\cite{egm2006}; see
also~\cite{manchon2001}. For $f\in L^{(1)}$, define
$\chi^{\mathcal{R}_-}(f)=\lim_{n \to \infty}
\chi^{\mathcal{R}_-}_n(f)$ where $\chi^{\mathcal{R}_-}_n(f)$ is given
by what we call the BCH recursion:
\allowdisplaybreaks{
\begin{eqnarray}
\chi^{\mathcal{R}_-}_0(f) &=& f, \qquad
\chi^{\mathcal{R}_-}_1(f) = f - \BCH\big(\mathcal{R}_-(f),
\mathcal{R}_+(f)\big),\ \ldots\ ,
\nonumber \\
\chi^{\mathcal{R}_-}_{n+1}(f) &=& f - \BCH\Big(\mathcal{R}_-
\big(\chi^{\mathcal{R}_-}_n(f)\big),
\mathcal{R}_+\big(\chi^{\mathcal{R}_-}_n(f)\big) \Big).
\nonumber
\end{eqnarray}}
Then the fixed-point map $\chi^{\mathcal{R}_-}: L^{(1)} \to L^{(1)}$
satisfies:
\begin{equation}
\label{eq:BCHrecursion1}
\chi^{\mathcal{R}_-}(f) = f -
\BCH\Big(\mathcal{R}_-\big(\chi^{\mathcal{R}_-}(f)\big),
\mathcal{R}_+\big(\chi^{\mathcal{R}_-}(f)\big)\Big).
\end{equation}
The superscript ${\mathcal{R}_-}$ is justified by the dependency of
the limit on the Rota--Baxter operator, and the following result.
\begin{lem}
\label{simpleCHI} The map $\chi^{\mathcal{R}_-}$
in~\eqref{eq:BCHrecursion1} solves the simpler recursion:
\begin{equation*}
\chi^{\mathcal{R}_-}(f) = f + \BCH\Big(-\mathcal{R}_-
\big(\chi^{\mathcal{R}_-}(f)\big),f\Big), \quad f \in L^{(1)}.
\end{equation*}
\end{lem}
Following~\cite{egm2006}, we have the factorization theorem:
\begin{thm}
\label{thm:bch} For any $f \in L^{(1)}$, we have the unique
factorization:
\begin{equation*}
\exp(f) = \exp\Big(\mathcal{R}_ -
\big(\chi^{\mathcal{R}_-}(f)\big)\Big)
\ast
\exp\Big(\mathcal{R}_+ \big(\chi^{\mathcal{R}_-}(f)\big)\Big).
\end{equation*}
\end{thm}
Uniqueness of the factorization follows from $R_{-}$ being
idempotent. In the particular case that $f\in \Xi (A)$, the BCH
recursion~\eqref{eq:BCHrecursion1} takes place inside the Lie
algebra~$\Xi(A)$, and the decomposition of $\exp(f)$ holds
therefore inside the group of characters $G(A)$. In particular, it
follows ultimately from Theorem~\ref{thm:bch} that $G(A)$
decomposes as a set as the product of two subgroups:
$$
G(A) = G_-(A)\ast G_+(A), \sepword{where} G_-(A) =
\exp(\mathcal{R}_-(\Xi(A))), \; G_+(A) =\exp(\mathcal{R}_+(\Xi (A))).
$$
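A finite-dimensional toy analogue (entirely ours; the matrix size, the
truncation order of the BCH series and the tolerance are ad hoc choices)
illustrates Theorem~\ref{thm:bch}: split $\mathfrak{gl}_3$ into the strictly
lower and the upper-plus-diagonal triangular subalgebras, run the BCH
recursion, and compare the two sides of the factorization:

```python
import numpy as np

def comm(a, b):
    return a @ b - b @ a

def bch_tail(x, y):
    # BCH(x, y) = log(e^x e^y) - x - y, truncated at order 4;
    # adequate for the small matrices used below (an ad hoc choice).
    c = comm(x, y)
    return c / 2 + (comm(x, c) - comm(y, c)) / 12 - comm(y, comm(x, c)) / 24

def expm(a, terms=40):
    # Matrix exponential via its Taylor series (fine for small norms).
    out, p = np.eye(len(a)), np.eye(len(a))
    for k in range(1, terms):
        p = p @ a / k
        out = out + p
    return out

# Splitting of gl_3 into strictly lower (A_-) and upper-plus-diagonal
# (A_+) triangular matrices; R_-, R_+ are the corresponding projectors.
R_minus = lambda a: np.tril(a, -1)
R_plus = lambda a: np.triu(a)

rng = np.random.default_rng(0)
f = 0.02 * rng.standard_normal((3, 3))

chi = f.copy()
for _ in range(10):  # BCH recursion: chi = f - BCH(R_-(chi), R_+(chi))
    chi = f - bch_tail(R_minus(chi), R_plus(chi))

left, right = expm(R_minus(chi)), expm(R_plus(chi))
assert np.allclose(expm(f), left @ right, atol=1e-5)
print("exp(f) factors through exp(R_-(chi)) * exp(R_+(chi))")
```

Both projector images are Lie subalgebras here, so the two exponential
factors stay in the corresponding triangular subgroups, mirroring
$G_\pm(A)$.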
\begin{cor} \label{groupfact}
For any $\gamma=\exp(\alpha)\in G(A)$, with $\alpha\in\Xi(A)$, we
have unique $\alpha_{\pm}:=\mathcal{R}_{\pm}
(\chi^{\mathcal{R}_{-}}(\alpha))\in\mathcal{R}_{\pm}(\Xi(A))$, and
unique characters $\gamma_\pm:=\exp(\pm\alpha_{\pm})\in
G_{\pm}(A)$ such that:
\begin{equation}
\gamma = \gamma_-^{-1} \ast \gamma_+.
\label{eq:BCHbirkhoff}
\end{equation}
\end{cor}
A remark is in order. The
factorization in Theorem~\ref{thm:bch} is true for any (filtration
preserving) linear map $P$ on $\Lin(H,A)$, that is, for
$\chi^{P}(f)$, see~\cite{egm2006}. Uniqueness follows from $P$
being idempotent. The Rota--Baxter property~\eqref{eq:RBR} implies
that both $G_\pm(A)$ are subgroups. We may reformulate the last
statement about the nature of $G_\pm(A)$ in the next lemma, saying
that the BWH decomposition of Connes and Kreimer, originally found
by a more geometrical method~\cite{ck2000}, is recovered from
Theorem~\ref{thm:bch} by using the Rota--Baxter
relation~\cite{egk2005,Echo,egm2006}:
\begin{lem} \label{ck-Birkhoff}
{}For any $\gamma=\exp(\alpha)$ the unique characters
$\gamma_\pm:=\exp(\pm\alpha_{\pm})\in G_{\pm}(A)$ in the previous
corollary solve the equations:
\begin{equation}
\gamma_{\pm} = e \pm \mathcal{R}_{\pm}(\gamma_{-} \ast (\gamma - e)).
\label{eq:BogoliubovFormulae}
\end{equation}
\end{lem}
\begin{proof}
There is a sharpened version~\cite{egk2004} of Atkinson's
theorem~\cite{atk1963}, stating that the characters
$\gamma,\gamma_{\pm}$ of~\eqref{eq:BCHbirkhoff} verify
$\gamma_-=e- \mathcal{R}_-(\gamma_{-}\ast(\gamma-e))$ and
$\gamma_+=e- \mathcal{R}_+(\gamma_{+}\ast(\gamma^{-1}-e))$. Now:
$$
\gamma_{+} \ast (\gamma^{-1}-e) = \gamma_- \ast \gamma \ast
(\gamma^{-1}-e) = \gamma_-\ast (e - \gamma)
$$
gives~\eqref{eq:BogoliubovFormulae}.
\end{proof}
\section{On renormalization procedures}
\label{sect:reminder}
Prima facie in pQFT, to a Feynman graph~$F$ there corresponds, by
the Feynman rules, a multiple $d$-dimensional momentum space
integral. Each independent loop in a diagram yields one
integration:
\begin{equation}
F \mapsto J_F(p) = \bigg[\int\prod_{l=1}^{|F|}\,d^dk_l \bigg]I_F(p,k).
\label{eq:stone-of-contention}
\end{equation}
Here $|F|$ is the number of loops, $k=(k_1,\ldots,k_{|F|})$ are
the $|F|$ independent internal (loop) momenta and
$p=(p_1,\ldots,p_N)$, with $\sum_{k=1}^N p_k=0$, denote the~$N$
external momenta. In the most relevant kind of field theories
($d=4$, renormalizable with dimensionless couplings), these
integrals suffer from ultraviolet (UV) divergences.
Concretely, under a scale transformation, the integrand behaves as
$$
\bigg[\prod_{l=1}^{|F|}d^d(\lambda k_l)\bigg] I_F(\lambda p,\lambda k)
\sim \lambda^{s(F)},
$$
with~$s(F)$ the superficial UV degree of divergence of the
graph~$F$. Power-counting renormalizable theories are such that
all interaction terms in the Lagrangian are of dimension~$\le d$;
then $s(F)$ is bounded by a number independent of the order of the
graph. For instance in the $\vf^4_4$~model the superficial UV
degree of divergence of a graph with~$N$ external legs is:
$$
s(F) = 4 - N.
$$
The Weinberg--Zimmermann theorem says: ``provided all free propagators
have nonzero masses, the integral associated to the Feynman graph~$F$
is absolutely convergent if its superficial UV degree of divergence
and that of each of its one-particle irreducible (1PI) subgraphs is
strictly negative''. The BPHZ momentum space subtraction method is
rooted in this assertion: the idea is to redefine the integrand
$I_F(p,k)$ of a divergent integral by subtracting from it the first
terms of its Taylor expansion in the external momenta~$p$, after these
subtractions have been performed on the integrands corresponding to
the 1PI subgraphs of~$F$ that are renormalization parts; in such a way
the UV degrees of the integral and its subintegrals are lowered until
they become all negative. The combinatorics of those subgraph
subtractions leads to Bogoliubov's recursive $\bar R$-operation and
Zimmermann's forest formula; we assume the reader acquainted with the
former at least~\cite{CaswellK1982,collins1984,smirnov1991,Vasilev04}.
\smallskip
Less straightforward conceptually, but far more practical, is the
DR~method~\cite{h73}. This belongs to the class of regularization
prescriptions, which parameterize the divergences appearing in $J_F$
by introducing non-physical parameters, thereby rendering the
integrals formally finite. In DR one introduces a complex parameter
$\varepsilon \in \mathbb{C}$ by changing the integral measure, that
is, the space-time dimension, to $\mathrm{D}\in\CC$:
$$
d^dk \xrightarrow{\rm dim\,reg} \mu^{\varepsilon}\,d^\mathrm{D}k,
$$
where $\varepsilon=(d-\mathrm{D})$. Henceforth always $d=4$. The
arbitrary parameter $\mu\neq0$ ('t~Hooft's `unit-mass' parameter) is
introduced for dimensional reasons. Take the~$\vf^4_4$~model: if we
wrote the interaction term simply in the form $g\vf^4/4!$, then the
(naive) dimension of~$g$ would be $[g]=\mu^\eps$. The redefinition
$\tilde{g}\mu^\eps\vf^4/4!$ of the original vertex in the Lagrangian
includes the mass parameter~$\mu$, introduced to make~$\tilde{g}$
dimensionless. Now, in any given theory there is a rigid relation
between the numbers of loops and vertices, for each given $N$-point
function. For instance in the $\vf^4_4$~model, for graphs associated
to the 2-point function the number of vertices is just equal to the
number of loops. For graphs associated to the 4-point function the
number of vertices is equal to the number of loops plus one, and so an
extra $\tilde{g}\mu^\eps$ factor comes in; but, because we are
correcting the vertex, this extra factor becomes the coupling
constant, and is not expanded in powers of the regularization
parameter~$\eps$; only the expression multiplying it contributes to
the renormalization constant $Z_g$ ---with a pole term independent of
the mass scale~$\mu$. The outcome is that in practice one computes as
many $\mu^\eps$ factors as loops:
\begin{equation}
\label{mugamma}
F \longrightarrow J_F^{(\varepsilon,\mu)}(p) =
\mu^{|F|\varepsilon} \bigg[\int \prod_{l=1}^{|F|} d^\mathrm{D}k_l
\bigg] \ I_F(p,k).
\end{equation}
See the discussions in~\cite{ck2000,ck2001}
and~\cite[Sections~7~and~8]{greendanger2001} as well. The point is
important from the algebraic viewpoint, since it makes the grading
of Feynman graphs by the number of loops immediately relevant to
the renormalization process.
The next step in~DR consists of a specific subtraction rule for
those $\varepsilon$-parameterized expressions, which allows one to
take the limit $\varepsilon \downarrow 0$. Now, Connes and Kreimer's
BWH decomposition of Feynman rules~\cite{ck2000} is
extraordinarily well adapted to~DR in~pQFT. In the Connes--Kreimer
paradigm, any renormalizable quantum field theory gives rise to a
Hopf algebra~$H$ of Feynman graphs, polynomially generated by 1PI
Feynman graphs and graded by graph loop number. The coproduct
operation of~$H$ mirrors the combinatorics of the subgraphs.
Looking back to~\eqref{eq:stone-of-contention}, the unrenormalized
Feynman integral does define a character because:
$$
I_{F_1 \cup F_2}(p_1,p_2,k_1,k_2) =
I_{F_1}(p_1,k_1)I_{F_2}(p_2,k_2)
$$
for disjoint graphs $F_1,F_2$. On the Hopf algebra~$H$ the
Feynman rules single out a distinguished character~$\gamma$ with
values in a suitable target algebra of regularized amplitudes.
Precisely, Connes and Kreimer establish the above decomposition
$G(A)=G_{-}(A)\ast G_{+}(A)$, for~$A$ the algebra of
amplitude-valued Laurent series in the dimension
defect~$\varepsilon$, using the MS scheme in DR on momentum space.
The characters $\gamma_{\pm}$ in the
decomposition~\eqref{eq:BCHbirkhoff} solve Bogoliubov's
renormalization formulae ---see Corollary~\ref{BogoliubovMap}
below--- and may be called the renormalized and counterterm
character, respectively. The sought after result is given by the
`positive' part in~\eqref{eq:BCHbirkhoff}. To wit,
$\gamma_+^{(\varepsilon,\mu)}= \gamma_-^{(\varepsilon,\mu)}\ast
\gamma^{(\varepsilon,\mu)}$, and the limit $\gamma_+^{(\varepsilon
\downarrow 0,\mu)}$ exists, giving the renormalized Feynman
amplitude. In what follows, when dealing with dimensionally
regularized characters we drop the superscript~$\varepsilon$. We
also forget about the other involved variables, that do not enter
our considerations, and just write
$\CC[[\varepsilon,\varepsilon^{-1}]$ for~$A$. Thus $R_-$ will be
the projection onto the subalgebra $A_-:= \varepsilon^{-1}
\mathbb{C}[\varepsilon^{-1}]$. In summary, $A$ encodes the
Feynman rules within the chosen regularization procedure, whereas
the splitting of~$A$, hence the projector~$R_-$, reflects a
renormalization scheme within that choice.
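The Rota--Baxter flavour of this splitting can be checked by machine on a toy model of ours, with~$A$ modelled by Laurent polynomials rather than series: since $A_-$ and $A_+$ are both subalgebras, the minimal subtraction projector $R_-$ satisfies the (weight one) Rota--Baxter identity $R_-(a)R_-(b) = R_-(aR_-(b)) + R_-(R_-(a)b) - R_-(ab)$:

```python
import random

# A Laurent polynomial in eps is a dict {power: coefficient}.
def mul(a, b):
    out = {}
    for i, ai in a.items():
        for j, bj in b.items():
            out[i + j] = out.get(i + j, 0) + ai * bj
    return out

def R_minus(a):
    """Minimal subtraction: keep the pole part only."""
    return {i: c for i, c in a.items() if i < 0}

def combine(x, y, z):        # x + y - z, dropping zero coefficients
    out = {}
    for f, s in ((x, 1), (y, 1), (z, -1)):
        for k, c in f.items():
            out[k] = out.get(k, 0) + s * c
    return {k: c for k, c in out.items() if c != 0}

random.seed(0)
for _ in range(100):
    a = {i: random.randint(-3, 3) for i in range(-2, 3)}
    b = {i: random.randint(-3, 3) for i in range(-2, 3)}
    lhs = {k: c for k, c in mul(R_minus(a), R_minus(b)).items() if c != 0}
    rhs = combine(R_minus(mul(a, R_minus(b))),
                  R_minus(mul(R_minus(a), b)),
                  R_minus(mul(a, b)))
    assert lhs == rhs
print("R_- is a Rota-Baxter operator on Laurent polynomials")
```

Both sides reduce to the product of the pole parts, which is again a pure pole term; this is precisely why the BWH factorization below takes the recursive Bogoliubov form.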
\begin{cor} \label{BogoliubovMap}
The map $\bar{\gamma}:=\gamma_{-} \ast (\gamma - e) = \gamma_+ -
\gamma_-$ in~\eqref{eq:BogoliubovFormulae} gives Bogoliubov's
preparation map~$\bar R$.
\end{cor}
\noindent Indeed with the Connes--Kreimer definition of the Hopf
algebra of Feynman graphs, equations~\eqref{eq:BogoliubovFormulae}
coincide with Bogoliubov's recursive formulas for the counterterm
and renormalized parts.
Recalling the remark after
Corollary~\ref{groupfact} we may characterize the Rota--Baxter
structure on $A$ as follows. Theorem~\ref{thm:bch} implies that
regularized, i.e. $A$-valued, Feynman rules on $H$ factorize
uniquely upon the choice of an idempotent linear map on $A$
respectively $\Lin(H,A)$. The extra demand for a Rota--Baxter
structure on $A$, and hence on $\Lin(H,A)$ essentially implies the
particular recursive nature of the process of perturbative
renormalization as encoded in Bogoliubov's preparation map~$\bar
R$ respectively the equations~(\ref{eq:BogoliubovFormulae}).
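The recursive nature just alluded to can be made concrete on a standard toy model (our sketch, not the full Hopf algebra of Feynman graphs): the `ladder' Hopf algebra with free polynomial generators $l_n$, $l_0=1$, and coproduct $\Delta(l_n)=\sum_{k=0}^n l_k\otimes l_{n-k}$. Running Bogoliubov's recursion for a character with Laurent-polynomial values, one checks the factorization $\gamma_-\ast\gamma=\gamma_+$ with $\gamma_+$ pole-free:

```python
import random

# Laurent polynomials in eps as dicts {power: coeff}.
def mul(a, b):
    out = {}
    for i, ai in a.items():
        for j, bj in b.items():
            out[i + j] = out.get(i + j, 0) + ai * bj
    return out

def add(a, b):
    return {k: a.get(k, 0) + b.get(k, 0) for k in set(a) | set(b)}

def pole(a):          # R_-: projection onto the pole part
    return {i: c for i, c in a.items() if i < 0}

def neg(a):
    return {i: -c for i, c in a.items()}

def strip(a):
    return {i: c for i, c in a.items() if c != 0}

random.seed(2)
N = 6
# gamma(l_n): a random Laurent polynomial with pole order <= n
gamma = {n: {i: random.randint(-2, 2) for i in range(-n, 2)}
         for n in range(1, N + 1)}

# Bogoliubov recursion for Delta(l_n) = sum_{k=0}^{n} l_k (x) l_{n-k}:
gm, gp = {}, {}                  # counterterm gamma_-, renormalized gamma_+
for n in range(1, N + 1):
    bar = dict(gamma[n])         # preparation map: gamma-bar(l_n)
    for k in range(1, n):
        bar = add(bar, mul(gm[k], gamma[n - k]))
    gm[n] = neg(pole(bar))       # gamma_-(l_n) = -R_-(gamma-bar(l_n))
    gp[n] = add(bar, gm[n])      # gamma_+(l_n) = (id - R_-)(gamma-bar(l_n))

# Check: gamma_+ has no poles, and (gamma_- * gamma)(l_n) = gamma_+(l_n).
for n in range(1, N + 1):
    assert all(i >= 0 for i, c in gp[n].items() if c != 0)
    conv = add(gamma[n], gm[n])  # k = 0 and k = n terms of the convolution
    for k in range(1, n):
        conv = add(conv, mul(gm[k], gamma[n - k]))
    assert strip(conv) == strip(gp[n])
print("BWH factorization verified on the ladder Hopf algebra")
```

Note also that $\gamma_+ - \gamma_- = \bar\gamma$ holds by construction, which is the content of Corollary~\ref{BogoliubovMap} in this toy setting.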
\section{Locality and the Connes--Kreimer beta function}
\label{sect:locality}
The results in Sections~\ref{sect:HopfCharacters}
to~\ref{sect:birkhoffdecomp} apply to any graded connected commutative
bialgebra $H$ and any commutative unital algebra $A$ with a direct
splitting. In this section we restrict our consideration to the class
of Hopf algebra characters possessing a \textit{locality} property,
with~$H$ as before. This will correspond to the example given by the
Feynman rules for a renormalizable pQFT in DR, using the framework of
the~MS and the~$\overline{\rm MS}$~\cite[Section~5.11.2]{collins1984}
schemes. There locality comes from the dependency of DR on the
arbitrary `mass parameter'~$\mu$. It is handy to provisionally fix the
value of~$\mu$ at some convenient reference point~$\mu_0$. The
difference between both schemes boils down to:
\begin{equation}
\mu_0 = {\overline\mu}_0\frac{e^{\gamma_E/2}}{2\sqrt\pi},
\label{eq:tapa-del-perol}
\end{equation}
with $\mu_0,{\overline\mu}_0$ respectively the MS, $\overline{\rm MS}$
values and $\gamma_E$ the Euler--Mascheroni constant. Our aim is to
recover by our methods the results in~\cite{ck2001,cm22004}; the latter
constitute a stronger, algebraic version of a famous theorem
by~'t~Hooft~\cite{h73}. Pride of place goes to
the Connes--Kreimer abstract beta function. This is a conceptually very
powerful beast, giving rise to the ordinary beta function through the
tangent morphism to the morphism (of unipotent groups) from~$G(\CC)$
to the group of transformations of the coupling
constants~\cite{ck2001}.
Any $f \in \Lin(H,A)$ is now of the form:
$$
f(h) = \sum_{k=-U}^\infty f_{:k}(h)\varepsilon^k =: f(h;\varepsilon)
$$
for $h \in H$. Here every $f_{:k}\in \Lin(H,\CC )$, the dual of $H$;
and the last notation is used when we want to emphasize the dependency
on the~DR parameter~$\varepsilon$. If~$h$ is a $|h|$-loop 1PI Feynman
graph, a general theorem~\cite{smirnov1991} in DR says that $U = |h|$
at non-exceptional momenta.
We define on the group of $A$-valued characters $G(A)$ a one-parameter
action of~$\CC^* \owns t$ given for~$h$ homogeneous in~$H$ by:
\begin{equation}
\psi^t(h;\varepsilon) := t^{\varepsilon|h|}\psi(h;\varepsilon).
\label{eq:pesar-de-los-pesares}
\end{equation}
Physically this amounts to neatly replacing the $\mu_0^{\eps|h|}$
factor present in each dimensionally regularized Feynman
diagram~\eqref{mugamma} by $(\mu_0 t)^{\eps|h|}$; that is, the mass
scale is changed from~$\mu_0$ to~$t\mu_0$ ---or
from~${\overline\mu}_0$ to~$t{\overline\mu}_0$, as the case might be.
As~$\eps$ is a complex variable, there is no harm in taking~$t$
complex, too.
It is clear that $\psi^t$ in~\eqref{eq:pesar-de-los-pesares} is
still a character, and that $(\psi_1\ast
\psi_2)^t=\psi_1^t\ast\psi_2^t$. This last property also holds if
$\psi_1,\psi_2$ in~\eqref{eq:pesar-de-los-pesares} belong more
generally to~$\Lin(H,A)$. Besides, for future use we store:
\begin{equation}
t\frac{\partial}{\partial t}\psi^t(h) =
\varepsilon |h|\,\psi^t(h;\varepsilon)
= \varepsilon\,(Y\psi^t)(h) \quad {\rm{such\ that}} \quad
t\frac{\partial}{\partial t}\Big|_{t=1}\,\psi^t = \varepsilon\,Y\psi.
\label{eq:hour-of-reckoning}
\end{equation}
For any~$t$ and any homogeneous~$h \in H$ we have
$t^{\varepsilon|h|}\in\mathcal{R}_+(A)=A_+:=
\mathbb{C}[[\varepsilon]]$, so that the one-parameter action on~$G(A)$
restricts to a one-parameter action on the group~$G_+(A)$~:
$$
\psi \in G_+(A) \mapsto \psi^t \in G_+(A).
$$
We now have for the regularized, but unrenormalized character
$\gamma^t \in G(A)$ a BWH decomposition:
$$
\gamma^t = {(\gamma^t)}_-^{-1} \ast {(\gamma^t)}_+.
$$
Notice that we write instead $(\gamma_-)^t$ and~$(\gamma_+)^t$ for the
image of~$\gamma_-$ and~$\gamma_+$ under the one-parameter group
action. The locality property simply states that for the counterterm
the following holds.
\begin{thm}
\label{loctheom}
Let $\gamma$ be a dimensionally regularized Feynman rule
character. The counterterm character in the BWH decomposition
$\gamma^{t}= (\gamma^{t})^{-1}_-\ast(\gamma^{t})_+$ satisfies:
\begin{equation}
\label{locality}
t\,\frac{\partial{(\gamma^t)}_-}{\partial t} = 0 \sepword{or\
${(\gamma^t)}_-$ is equal to~$\gamma_-$, i.e. independent of~$t$.}
\end{equation}
We say the $A$-valued characters $\gamma \in G(A)$ with this property
are \textit{local} characters.
\end{thm}
The physical reason for this is that the counterterms can be taken
mass-independent; this goes back at least to~\cite{ArcheoCollins}. For
this fact, and more details on the physical significance of the
locality property in pQFT, we refer the reader
to~\cite{bergkrei2005,collins1984,ck2000,cm22004}.
In the sequel, albeit mustering Dynkin technology, we partly
follow the plan in reference~\cite{manchon2001}. Denote by $G^{\rm
loc}(A)$ the subset of local elements of~$G(A)$ and $G^{\rm
loc}_-(A)$ the subset of elements in~$G^{\rm loc}(A)$ for which
$\psi(h) \in A_-$ when~$h$ has no scalar part.
\begin{prop}\label{prop:BWHloc}
The set $G^{\rm loc}(A)$ decomposes into the product $G^{\rm
loc}_-(A)\ast G_+(A)$.
\end{prop}
\begin{proof}
Notice first that $G_+(A)$ embeds in $G^{\rm loc}(A)$, since
$\psi^t\in G_+(A)$ for any $\psi\in G_+(A)$. Next, if~$\phi$ is
local and $\rho \in G_+(A)$, then $\phi \ast \rho$ is local.
Indeed, we have:
$$
\phi^t \ast \rho^t = {(\phi^t)}_-^{-1} \ast {(\phi^t)}_+ \ast
\rho^t,
$$
with polar part the one of $\phi^t$, which is constant since $\phi$ is
local. Let~$\phi$ still be local. Then $\phi \ast \phi_+^{-1} =
\phi_-^{-1}$, and the proposition follows if we can show $\phi_-^{-1}
\in G^{\rm loc}_-(A)$. Now, if~$\phi$ is local, we have:
$$
(\phi_-^{-1})^t \ast (\phi_+)^t = \phi^t = {(\phi^t)}_-^{-1} \ast
{(\phi^t)}_+;
$$
so that the BWH decomposition of~$(\phi_-^{-1})^t$ is given by:
\begin{equation}
(\phi_-^{-1})^t = {(\phi^t)}_-^{-1} \ast {(\phi^t)}_+ \ast
((\phi_+)^t)^{-1},
\label{eq:RG1}
\end{equation}
with polar part ${(\phi^t)}_-$, the one of~$\phi$, constant and
equal to~$\phi_-$.
\end{proof}
Now we wish to complete the study of locality by understanding how the
decomposition of $G^{\rm loc}(A)$, which is a refinement of the
decomposition $G(A)=G_-(A)\ast G_+(A)$, is reflected at the Lie
algebra level. More precisely, we would like to know if there is a
subspace of~$\mathcal{R}_-(\Xi(A))$ naturally associated to~$G_-^{\rm
loc}(A)$. The answer (quite surprising at first sight) is simple and
well known to physicists: beta functions are enough. As shown below,
an excellent tool to understand this is the Dynkin operator pair. Let
now $\beta\in\Xi(\CC)$ be a \textit{scalar}-valued infinitesimal
character. Notice that $\beta/\varepsilon$ can be regarded as an
element of~$\mathcal{R}_-(\Xi(A))$.
\begin{prop}\label{prop:GammaBeta}
With $\Gamma$ as defined in Eq.~(\ref{eq:madre-del-cordero}) of
Theorem~\ref{thm:Gamma}, we find:
\begin{eqnarray*}
\psi_\beta &:=& \Gamma(\beta/\varepsilon) \in G^{\rm loc}_-(A).
\end{eqnarray*}
\end{prop}
\begin{proof}
{}From Eq.~\eqref{eq:madre-del-cordero} it is clear that~:
\begin{equation}
\label{eq:GammaBeta}
\psi_\beta = \Gamma(\beta/\varepsilon) = \sum\limits_n
\bigg(\,\sum\limits_{k_1, \dots,k_n\in \N^\ast}
\frac{\beta_{k_1} \ast \dots\ast \beta_{k_n}} {k_1(k_1 + k_2)
\dots (k_1 + \dots + k_n)}\bigg) \frac{1}{\varepsilon^{n}}
\end{equation}
implying $\psi_\beta \in G_-(A)$. Next we observe that $\psi_\beta^t =
\Gamma(\beta^t/\varepsilon)$, which follows simply
from~\eqref{eq:GammaBeta}; then, on use
of~\eqref{eq:hour-of-reckoning}:
$$
(\psi_\beta^t)^{-1} \ast t\frac{\partial}{\partial t}\psi_\beta^t
= \varepsilon {(\psi_\beta^t)}^{-1} \ast Y \psi_\beta^t
= \varepsilon \psi_\beta^t\circ D
= \varepsilon \Gamma(\beta^t/\varepsilon) \circ D
= \beta^t.
$$
Now, the BWH decomposition $\psi_\beta^t = (\psi_\beta^t)^{-1}_-
\ast(\psi_\beta^t)_+$ is such that~:
\allowdisplaybreaks{
\begin{eqnarray*}
(\psi_\beta^t)^{-1} \ast t\frac{\partial}{\partial t} \psi_\beta^t
&=& (\psi_\beta^t)^{-1} \ast
t\frac{\partial}{\partial t} {(\psi_\beta^t)}^{-1}_- \ast
{(\psi_\beta^t)}_+ + (\psi_\beta^t)^{-1} \ast
{(\psi_\beta^t)}^{-1}_- \ast
t\frac{\partial}{\partial t}{(\psi_\beta^t)}_+ \\
&=& {(\psi_\beta^t)}_+^{-1} \ast {(\psi_\beta^t)}_- \ast
t\frac{\partial}{\partial t}{(\psi_\beta^t)}^{-1}_-
\ast{(\psi_\beta^t)}_+ + {(\psi_\beta^t)}^{-1}_+
\ast t\frac{\partial}{\partial t}{(\psi_\beta^t)}_+;
\end{eqnarray*}}
hence we find:
\allowdisplaybreaks{
\begin{eqnarray*}
{(\psi_\beta^t)}_+ \ast \beta^t \ast {(\psi_\beta^t)}_+^{-1} &=&
{(\psi_\beta^t)}_- \ast t\frac{\partial}{\partial
t}{(\psi_\beta^t)}^{-1}_- + t\frac{\partial}{\partial
t}{(\psi_\beta^t)}_+ \ast {(\psi_\beta^t)}^{-1}_+.
\end{eqnarray*}}
Using $\psi_\beta \in G_-(A)$ and that $\beta^t$ takes values
in~$A_+$, we find by applying the projector~$\mathcal{R}_-$ on
both sides of the last equation~:
\begin{equation*}
\mathcal{R}_-\Big( (\psi_\beta^t)_+ \ast \beta^t \ast
(\psi_\beta^t)_+^{-1} \Big) = 0 = (\psi_\beta^t)_- \ast
t\frac{\partial}{\partial t} (\psi_\beta^t)^{-1}_- = -
t\frac{\partial}{\partial t}(\psi_\beta^t)_- \ast
(\psi_\beta^t)^{-1}_-
\end{equation*}
implying that ${(\psi_\beta^t)}_-$ is independent of~$t$; thus
$\Gamma(\beta/\varepsilon)$ is a local character.
\end{proof}
The last proposition is suggestive of the fact that local $A$-valued
characters are essentially determined by \textit{ordinary}
(scalar-valued) infinitesimal characters (beta functions). This is
indeed the case. Before proving the main result of this section, we
remark that, for any $\phi\in G^{\rm loc}(A)$, we can write~:
\begin{equation}
\phi^t = \phi_-^{-1} \ast (\phi^t)_+ = \phi \ast h_{\phi}^t,
\label{eq:localBHW}
\end{equation}
where $h_{\phi}^t := (\phi_+)^{-1} \ast (\phi^t)_+ \in G_+(A)$.
Also, for $\phi\in G_-^{\rm loc}(A)$ we denote $\phi_{:-1}$
by~$\Res\phi$.
\begin{thm}
\label{thm:residue} The map $\phi \mapsto \varepsilon(\phi\circ
D)$, with~$D$ the Dynkin operator, sends $G^{\rm loc}(A)$ to
$\Xi(A_+)$ and $G^{\rm loc}_-(A)$ to $\Xi(\CC)$; explicitly, in
the second case:
$$
G^{\rm loc}_-(A) \ni \phi \mapsto
\varepsilon(\phi \circ D) = Y\Res\phi \in \Xi(\CC).
$$
\end{thm}
\begin{proof}
First, recall from Corollary~\ref{cor:DynkinD} that $D$ sends
characters to infinitesimal characters and that by
Proposition~\ref{prop:BWHloc} any local $\phi \in G^{\rm loc}(A)$
decomposes as $\phi^a \ast \phi^b$ with $\phi^a \in G_-^{\rm
loc}(A), \phi^b\in G_+(A)$. Therefore, we see that:
$$
\phi \circ D = \phi^{-1}\ast Y\phi
= (\phi^b)^{-1}\ast (\phi^a)^{-1}\ast Y\phi^a\ast \phi^b
+ (\phi^b)^{-1}\ast Y\phi^b;
$$
since $(\phi^b)^{-1}$ and $Y\phi^b$ belong to $\Lin(H,A_+)$, the
theorem follows if we can prove that:
$$
\varepsilon(\phi\circ D) = \varepsilon(\phi^{-1}\ast Y\phi)\in \Xi(\CC)
$$
when $\phi\in G_-^{\rm loc}(A)$. Assume the latter is the case and
recall the decomposition $\phi^t = \phi \ast h_{\phi}^t$
in~\eqref{eq:localBHW}, with now simply $h_{\phi}^t=(\phi^t)_+$. So
that on the one hand~\eqref{eq:hour-of-reckoning} implies~:
$$
t\frac{\partial}{\partial t}\Big|_{t=1}h^t_\phi
= \phi^{-1} \ast t\frac{\partial}{\partial t}\Big|_{t=1}\phi^t
= \phi^{-1} \ast \varepsilon\,Y\phi
= \varepsilon(\phi \circ D).
$$
On the other hand, observe that by using the Bogoliubov
formula~\eqref{eq:BogoliubovFormulae} one finds:
\allowdisplaybreaks{
\begin{eqnarray}
t\frac{\partial}{\partial t}\Big|_{t=1}h^t_\phi &=&
t\frac{\partial}{\partial t}\Big|_{t=1}(\phi^t)_+ =
t\frac{\partial}{\partial t}\Big|_{t=1} \mathcal{R}_+\big(\phi_-
\ast (\phi^t - e)\big)\\
&=& t\frac{\partial}{\partial t}\Big|_{t=1}
\mathcal{R}_+\big(\phi^{-1} \ast \phi^t - \phi^{-1}\big) =
t\frac{\partial}{\partial t}\Big|_{t=1}
\mathcal{R}_+\big(\phi^{-1} \ast t^{\varepsilon|\cdot|}\phi \big)
\nonumber \\
&=& \mathcal{R}_+\big(\varepsilon\,Y\phi) = Y\Res\phi.
\end{eqnarray}}
In the step before the last we took into account that $Y(1_H)=0$
and that $\phi_- \in G_-^{\rm loc}(A)$ which implies for $h \in
H^+$:
$$
\phi^{-1} \ast t^{\varepsilon|\cdot|}\phi(h) = \phi^{-1}(h) +
t^{\varepsilon|h|}\phi(h) + \phi^{-1}(\overline{h}^{(1)})
t^{\varepsilon|\overline{h}^{(2)}|}\phi(\overline{h}^{(2)}),
$$
where $h^{(1)}\otimes h^{(2)}=h\otimes 1+1\otimes h+
\overline{h}^{(1)}\otimes\overline{h}^{(2)}$. Here $\phi^{-1}(h)\in
A_-$ and $\phi^{-1}(\overline{h}^{(1)})
t^{\varepsilon|\overline{h}^{(2)}|}\phi(\overline{h}^{(2)})$ are both
mapped to zero by $t\frac{\partial}{\partial
t}\Big|_{t=1}\mathcal{R}_+$.
\end{proof}
A glance back to Theorem~\ref{thm:Gamma} and
Proposition~\ref{prop:GammaBeta} allows us to conclude that for
$\phi\in G^{\rm loc}_-(A)$ one has:
\begin{equation}
\Gamma\bigg(\frac{Y\Res\phi}{\varepsilon}\bigg) = \phi,
\label{eq:dies-illa}
\end{equation}
so indeed the polar part of a local character $\phi$ can be
retrieved from its beta function, $\beta(\phi):= Y\Res\phi \in
\Xi(\mathbb{C})$, by the universal
formula~\eqref{eq:madre-del-cordero}.
Equation~\eqref{eq:dies-illa} is equivalent to the `scattering
type formula' of~\cite{ck2001}; we can refer to~\cite{manchon2001}
for this. Now it is an easy task to recover the other results
of~\cite{ck2001,cm22004}. We ought to remark that, for~$\gamma$ a
general local character, one defines $\Res\gamma=-\Res\gamma_-$
---see in this respect formulae~\cite[Equation~(11)]{ck2001}
or~\cite[Equation~(2.111)]{cm22004}.
\begin{thm}
\label{thm:prueba-de-fuego} For the renormalized character
$\gamma_{\rm ren}(t):= (\gamma^t)_+(\eps=0)$ it holds:
\begin{equation}
\label{eq:Trojan-gift}
t\frac{\partial}{\partial t} \gamma_{\rm ren}(t) = (Y \Res\gamma)
\ast \gamma_{\rm ren}(t),
\end{equation}
the abstract RG~equation.
\end{thm}
\begin{proof}
First, in the proof of Theorem~7.2 we saw already that~$D$ verifies a
cocycle condition~\cite{em2006}: for $\phi,\psi\in G(A)$:
\begin{equation*}
(\phi \ast \psi) \circ D = \psi^{-1} \ast (\phi \circ D) \ast \psi
+ \psi \circ D.
\end{equation*}
This together with Theorem~\ref{thm:residue} implies for $\phi\in
G_-^{\rm loc}(A)$ that $\Res \phi =-\Res \phi^{-1}$. Indeed, this
follows by taking the residue~$\Res$ on both sides of the equation:
\begin{equation*}
0 = (\phi^{-1} \ast \phi) \circ D
= \phi^{-1} \ast (\phi^{-1} \circ D) \ast \phi + \phi \circ D
= \phi^{-1} \ast \frac{\Res\phi^{-1}}{\varepsilon} \ast \phi +
\frac{\Res\phi} {\varepsilon}.
\end{equation*}
Now, let $\gamma \in G^{\rm loc}(A)$ with BWH decomposition $\gamma^t
= \gamma^{-1}_- \ast {(\gamma^t)}_+$. Recall that ${(\gamma^t)}_+ =
\mathcal{R}_+\big(\gamma_- \ast t^{\varepsilon|\cdot|}\gamma \big)$
maps~$H^+$ into~$A_+\otimes\mathbb{C}[[\log(t)]]$ such that:
$$
t\frac{\partial{(\gamma^t)}_+}{\partial t}\bigg|_{\varepsilon=0} =
t\frac{\partial}{\partial t}{\gamma}_{\rm ren}(t).
$$
As $\gamma_- \in G_-^{\rm loc}(A)$, we then find:
\allowdisplaybreaks{
\begin{eqnarray*}
t\frac{\partial(\gamma^t)_+}{\partial t}
&=& \gamma_- \ast t\frac{\partial}{\partial t}\gamma^t
= \gamma_- \ast \varepsilon Y \gamma^t
= \gamma_- \ast \varepsilon Y(\gamma^{-1}_- \ast (\gamma^t)_+)
\\
&=& \gamma_- \ast \varepsilon Y(\gamma^{-1}_-) \ast {(\gamma^t)}_+
+ \varepsilon Y{(\gamma^t)}_+ = \varepsilon (\gamma^{-1}_-\circ D)
\ast {(\gamma^t)}_+ + \varepsilon Y{(\gamma^t)}_+ \\
&=& (Y\Res\gamma^{-1}_-) \ast {(\gamma^t)}_+ + \varepsilon
Y{(\gamma^t)}_+ = - (Y\Res\gamma_-) \ast {(\gamma^t)}_+ +
\varepsilon Y(\gamma^t)_+ \\
&=& (Y\Res\gamma) \ast {(\gamma^t)}_+ + \varepsilon Y(\gamma^t)_+.
\end{eqnarray*}}
Therefore both sides have a limit as~$\varepsilon\downarrow0$,
yielding the sought after RG equation~\eqref{eq:Trojan-gift}.
\end{proof}
Equation~\eqref{eq:Trojan-gift} is solved using the beta function
$\beta(\gamma):= Y\Res\gamma \in \Xi(\mathbb{C})$:
$$
\gamma_{\rm ren}(t) = \exp(\ln(t)\beta(\gamma)) \ast
\gamma_{\rm ren}(1).
$$
The last statement and equation~\eqref{eq:RG1} tell us that:
$$
\lim_{\varepsilon \to 0} \gamma_{-} \ast (\gamma_-^{-1})^t =
\lim_{\varepsilon \to 0} (\gamma^{t})_+ \ast ((\gamma_+)^t)^{-1} =
\gamma_{\rm ren}(t) \ast \gamma_{\rm ren}^{-1}(1) =
\exp(\ln(t)\beta(\gamma)).
$$
The scalar-valued characters
$$
\Omega_t(\gamma):= \exp(\ln(t)\beta(\gamma)) \in G(\mathbb{C})
$$
obviously form a one-parameter subgroup in~$G(A)$: $\Omega_{t_1}
(\gamma)\ast\Omega_{t_2}(\gamma)=\Omega_{t_1t_2}(\gamma)$, generated
by the beta function and controlling the flow of the renormalized
Feynman rule character with respect to the mass scale.
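As a closing illustration (a toy check of ours): in each finite degree an infinitesimal character acts nilpotently, so $\Omega_t(\gamma)=\exp(\ln(t)\beta(\gamma))$ is a polynomial in~$\ln t$. Modelling $\beta$ by a strictly upper-triangular matrix, the group law $\Omega_{t_1}\ast\Omega_{t_2}=\Omega_{t_1t_2}$ reduces to additivity of $\ln t$ in the exponential:

```python
import math

# Model the (nilpotent) beta function by a strictly upper-triangular
# matrix; exp(s*beta) is then an exact finite sum.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp_nilpotent(A):
    n = len(A)
    term = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    out = [row[:] for row in term]
    for k in range(1, n):            # A^n = 0 for strictly triangular A
        term = [[x / k for x in row] for row in mat_mul(term, A)]
        out = [[o + t for o, t in zip(ro, rt)] for ro, rt in zip(out, term)]
    return out

beta = [[0.0, 1.0, 2.0],
        [0.0, 0.0, 3.0],
        [0.0, 0.0, 0.0]]

def omega(t):
    s = math.log(t)
    return mat_exp_nilpotent([[s * x for x in row] for row in beta])

t1, t2 = 2.0, 5.0
lhs = mat_mul(omega(t1), omega(t2))
rhs = omega(t1 * t2)
assert all(abs(l - r) < 1e-12 for rl, rr in zip(lhs, rhs)
           for l, r in zip(rl, rr))
print("Omega_{t1} * Omega_{t2} = Omega_{t1 t2}")
```

Since a fixed $\beta$ commutes with itself, $\exp(s_1\beta)\exp(s_2\beta)=\exp((s_1+s_2)\beta)$, and $\ln t_1+\ln t_2=\ln(t_1t_2)$ does the rest.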
\section{Through the prism of other renormalization schemes I}
\label{sect:DSlocality}
We plan now to prospect the usefulness of our approach in other
schemes of renormalization. Doubtless DR provides the cleanest
formulation of locality in the BWH decomposition for
renormalization. However, it is physically clear that in any
scheme one has still to parameterize the arbitrariness in
separating divergent from finite parts; and that the physical
irrelevance of the choices made still gives rise to the
RG~equation. On the mathematical side, it is worth recalling that
the algebraic BWH decomposition of
Section~\ref{sect:birkhoffdecomp} is not necessarily linked to
loops on the Riemann sphere. It is thus legitimate to ponder the
question in schemes other than those based on~DR. We plan to
exemplify with the BPHZ~scheme in the next section, but, before
dwelling on that, let us briefly indicate other pieces of
evidence.
A first one concerns old research. In the early seventies, Callan
set out to prove that broken scale invariance~\cite{callan1970} is
all there is to renormalization. He was eventually able to give
a treatment of the RG, and proofs of renormalizability of field
theories based on the former, relying entirely on the BPHZ
formalism. To derive the beta function, he just set up
RG~equations by studying the dependency of the $N$-point functions
on the renormalized mass. See in this
respect~\cite{by1974,Blue1976}. In a renormalization method
without regularization, information about the~RG must be stored
somehow in the \textit{renormalized} quantities. Concretely, as
hinted at by our last theorem, one finds it in the scaling
properties of the renormalized integral. This was noted in the
context of Epstein--Glaser renormalization in~\cite{jmgb2003}. In
DR this shows in the RG equation~\eqref{eq:Trojan-gift}.
A second piece of evidence is furnished by more recent work by
Kreimer and
collaborators~\cite{bergkrei2005,bk2001,kreimer2006,kreiyea2006}.
Indeed, Kreimer has long argued that locality (and
renormalizability) is determined by the Hochschild cohomology of
renormalization Hopf algebras. This cohomology is trivial in
degree greater than one. The coproduct on~$H$ can be written
recursively in terms of closed Hochschild 1-cochains. Those are
grafting maps indexed by primitive 1PI diagrams, that govern the
structure of Feynman graphs and give rise through the Feynman
rules to integral Dyson--Schwinger equations. Here is not the
place for details and we refer the reader
to~\cite{bergkrei2005,bk2001,kreimer2005,kreimer2006,kreiyea2006},
and especially Kreimer's recent review~\cite{kreimer2006f}. The
interaction between the Hopf algebra of characteristic functions
of~$H$ of this paper and the Hochschild 1-cocycles on~$H$ is a
promising field of interest.
In the indicated references the Dynkin operator~$D$ (and its close
cousins $S \ast Y^n$) appears, again defining the residue, in
renormalization schemes without regularization. There Green's
functions, $\Sigma=\Sigma(g,p)$, are defined in terms of their
(combinatorial) Dyson--Schwinger equations using the Hochschild
1-cocycles; $g,p$ denote the coupling constant and external momenta,
respectively. Those Green's functions are expanded as power series
in~$g$:
$$
\Sigma = 1 + \sum_{k>0}\phi(c_k)g^k,
$$
for Feynman rules $\phi \in G(\mathbb{C})$ and with order by order
divergent Feynman amplitudes $\phi(c_k)$ as coefficients. Here the
$c_k$'s are particular linear combinations of graphs of loop order~$k$
in~$H$~\cite{kreimer2005}. Renormalization of~$\Sigma$ is achieved by
using a renormalized character $\phi_{\rm ren}\in G(\mathbb{C})$
defined by subtraction at a specific point $p^2=\lambda^2$
---corresponding to Taylor expansion up to zeroth order.
Here~$\lambda$ plays the role of the mass-parameter~$\mu$.
Locality is automatically fulfilled in this approach. The
renormalized Green's functions $\Sigma_{\rm ren}=\Sigma_{\rm
ren}(g,p,\lambda)$ can be developed in terms of the parameter
$L:=\log(p^2/\lambda^2)$, hence $\Sigma_{\rm ren} = 1 +
\sum_{k>0}\alpha_k(g)L^k$, with $\alpha_1(g) \in
\Xi(\CC)$~\cite{kreiyea2006}. Following the above references and
adapting partially to our notation, the residue is found to be:
$$
\Xi(\CC) \ni \sigma_1 := \frac{\partial}{\partial L}(\phi_{\rm
ren} \circ D) \bigg|_{L=0} = \alpha_1(g).
$$
In~\cite{kreiyea2006} Kreimer and Yeats outline how to derive
$\alpha_k(g)$, $k>1$ recursively from $\alpha_1(g)$. This confirms
that, in a deep sense, the beta function \textit{is} composition with
the Dynkin operator.
\section{Through the prism of other renormalization schemes II}
\label{sect:BPHZlocality}
Let us now explore the classical BPHZ scheme in the context of the
former sections. With~$I_F$ the integrand
of~\eqref{eq:stone-of-contention} corresponding to the graph $F$,
let us write for the Taylor subtraction employed in BPHZ
renormalization:
\begin{equation*}
I_F(p,k) \mapsto I_F(p,k) - t^{s(F)}_pI_F(p,k) := I_F(p,k) -
\sum_{|\alpha|\le s(F)}\,\frac{p^{\alpha}}{\alpha!}
\partial_{\alpha}I_F(0,k).
\end{equation*}
We borrowed here the standard multi-index notation
$$
\alpha = \{\alpha_1,\dots,\alpha_n\}\in\N^n,
\quad
|\alpha|:= \sum_{i=1}^n\alpha_i,
\quad
\alpha!=\prod_{i=1}^n\alpha_i!\,;
$$
each $\alpha_i$ takes values between~$0$ and~$3$, say. We are
assuming that only massive particles are present, otherwise the
subtraction at zero momentum gives rise to infrared divergences;
the expression of~$t^{s(F)}_pI_F$ in practice simplifies because
of Lorentz invariance.
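A one-dimensional caricature (ours, not from the literature) shows how the jet subtraction tames the UV behaviour: the `integrand' $1/(k+p+m)$ gives a logarithmically divergent $\int_0^\Lambda dk$, while the once-subtracted integrand $1/(k+p+m)-1/(k+m)$ falls off like $k^{-2}$ and its integral converges as $\Lambda\to\infty$:

```python
import math

m, p = 1.0, 1.0   # a mass and an 'external momentum' (toy values)

def unsubtracted(cutoff):
    """int_0^L dk / (k + p + m), evaluated in closed form."""
    return math.log((cutoff + p + m) / (p + m))

def subtracted(cutoff):
    """int_0^L dk [1/(k+p+m) - 1/(k+m)]: the integrand after the 0-jet
    subtraction t^0_p, now O(k^{-2}) at large k."""
    return unsubtracted(cutoff) - math.log((cutoff + m) / m)

# The unsubtracted integral diverges logarithmically with the cutoff...
assert unsubtracted(1e9) - unsubtracted(1e6) > 6.5      # ~ log(1000)
# ...while the subtracted one converges to log(m/(p+m)):
limit = math.log(m / (p + m))
assert abs(subtracted(1e9) - limit) < 1e-6
assert abs(subtracted(1e6) - limit) < 1e-3
print("subtracted integral ->", round(subtracted(1e9), 6))
```

The massive propagator matters here exactly as in the text: with $m=0$ the subtraction term $1/k$ would be non-integrable at $k=0$, the toy version of the infrared problem of subtraction at zero momentum.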
Notice that the integral~$J_F(p)$
in~\eqref{eq:stone-of-contention} \textit{does} originally have a
meaning: it is a well defined functional on the linear subspace
$\cS_{s(F)}(\R^{4N})$ of Schwartz functions~$\phi$ on the external
momenta, whose first moments $\int p^\alpha\phi(p)\,d^{4N}p$ up to
order~$|\alpha|\le s(F)$ happen to vanish. The ``divergent'' loop
integrals inside~$J_F(p)$ become harmless when coupled exclusively
with Schwartz functions of this type. The Taylor `jet' projector
map $t^l_p$ subtracts polynomials, that are functionals of the
same type, in such a way that the result (eventually) becomes
moreover a tempered distribution.
The first question is whether we have a Rota--Baxter algebra in the
BPHZ context. Actually, we do have the Rota--Baxter property
for~$t^l_p$. Indeed, the following is obtained by a simple
calculation from the definitions.
\begin{prop}
Let $I_{F_i},\,i=1,2$ have associated degrees of divergence
$l_i,\,i=1,2$. Then
\begin{equation*}
t^{l_1}_{p_1}\big(I_{F_1}\big)\,
t^{l_2}_{p_2}\big(I_{F_2}\big) =
t^{l_1+l_2}_{p_1,p_2}\big(I_{F_1}\,t^{l_2}_{p_2}(I_{F_2})\big)
+
t^{l_1+l_2}_{p_1,p_2}\big(t^{l_1}_{p_1}(I_{F_1})\,I_{F_2}\big)
-
t^{l_1+l_2}_{p_1,p_2}\big(I_{F_1}I_{F_2}\big).
\end{equation*}
\end{prop}
We leave the verification of this to the care of the reader. In
general, if $U$ is a multiplicative semigroup, a family of linear
operators $R_u,\,u\in U$ on the algebra~$A$ is called a Rota--Baxter
family if for any $u,v\in U$ and $a, b\in A$, we have
\begin{equation*}
R_u(a)R_v(b) = R_{uv}(aR_v(b)) + R_{uv}(R_u(a)b) - R_{uv}(ab),
\quad \sepword{for all} a,b \in A.
\end{equation*}
Thus the $l$-jets define a Rota--Baxter family. Now, a Rota--Baxter
family is almost the same thing as a Rota--Baxter operator.
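The jet identity of the proposition above also lends itself to a machine check on a polynomial model (a sketch of ours): take $I_{F_1}$ a polynomial in~$p_1$, $I_{F_2}$ a polynomial in~$p_2$, $t^{l}_{p_i}$ truncation in the degree of~$p_i$, and $t^{l_1+l_2}_{p_1,p_2}$ truncation in total degree. The identity then holds coefficient by coefficient:

```python
import random

def jet(poly, l):
    """Taylor jet t^l: keep monomials of degree <= l (one variable)."""
    return {i: c for i, c in poly.items() if i <= l}

def jet2(f, l):
    """Joint jet t^l_{p1,p2}: keep total degree i + j <= l."""
    return {(i, j): c for (i, j), c in f.items() if i + j <= l}

def times(a, b):
    """Product of a p1-polynomial {i: c} with a p2-polynomial {j: c}."""
    return {(i, j): ai * bj for i, ai in a.items() for j, bj in b.items()}

def combine(x, y, z):
    """x + y - z on coefficient dicts, dropping zeros."""
    out = {}
    for f, s in ((x, 1), (y, 1), (z, -1)):
        for k, c in f.items():
            out[k] = out.get(k, 0) + s * c
    return {k: c for k, c in out.items() if c != 0}

random.seed(3)
for _ in range(50):
    I1 = {i: random.randint(-4, 4) for i in range(7)}   # integrand of F1
    I2 = {j: random.randint(-4, 4) for j in range(7)}   # integrand of F2
    l1, l2 = random.randint(0, 4), random.randint(0, 4)
    lhs = {k: c for k, c in times(jet(I1, l1), jet(I2, l2)).items() if c != 0}
    rhs = combine(jet2(times(I1, jet(I2, l2)), l1 + l2),
                  jet2(times(jet(I1, l1), I2), l1 + l2),
                  jet2(times(I1, I2), l1 + l2))
    assert lhs == rhs
print("t^l jets satisfy the Rota-Baxter family identity")
```

The mechanism is visible on monomials $p_1^i p_2^j$: for $i+j\le l_1+l_2$ one cannot have both $i>l_1$ and $j>l_2$, so the three right-hand terms contribute $[j\le l_2]+[i\le l_1]-1$, which equals $[i\le l_1][j\le l_2]$.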
\begin{prop} \label{pp:RBF}
Let $\mathcal{A}=A[U]$ be the semigroup algebra associated to~$A$. Let
$R_u: A \to A,\,u\in U$ be a Rota--Baxter family. Define
$$
R: \mathcal{A} \to \mathcal{A},
\sepword{by}
R\Big(\sum_u a_u u\Big) :=\sum_u R_u(a_u) u.
$$
Then $R$ is a Rota--Baxter operator on~$\mathcal{A}$ such that
$R(au)=a'u$ with~$a'$ in~$A$. Conversely, if $R:\mathcal{A}\to
\mathcal{A}$ is a Rota--Baxter operator such that $R(au)=a'u$
with~$a'$ in~$A$, then we obtain a Rota--Baxter family $R_u,u\in
U$, by defining $R_u(a)=a'$ where $R(a\,u)=a'\,u$.
\end{prop}
The proof is immediate\footnote{We thank L.~Guo for suggesting the
notion of Rota--Baxter family.}. On the strength of the previous
result, we may refer to a Rota--Baxter family as a Rota--Baxter
operator. Now, pQFT in practice furnishes an even more radical answer
to the question of whether one has here the Rota--Baxter framework.
For this is obviously the case when one deals only with logarithmic
divergences; and indeed most often only that case is required. In
general, differentiation of an amplitude with respect to an external
momentum lowers the overall degree of divergence of a diagram. In~DR,
the Caswell--Kennedy theorem~\cite{CaswellK1982} states that the pole
part of any diagram, besides being independent of the scale, is a
polynomial in the external momentum. This follows easily from the fact
that differentiation and the projector $R_-$ commute in~DR. But even in
the BPHZ scheme $\partial_p t^l=t^{l-1}\partial_p$, and this is enough
for the differentiation trick to work.
Let us then consider the $J_F(p)\in \cS'_{s(F)}(\R^{4N})$
of~\eqref{eq:stone-of-contention}. Suppose moreover the multi-loop
divergent graph~$F$ has all its 1PI subgraphs~$\gamma$ made
convergent by application of Bogoliubov's preparation map~$\bar
R$. Then the renormalized integral $J^{\rm ren,BPHZ}_F(p)$ can be
defined as
\begin{equation}
\label{eq:first-things-first}
J^{\rm ren,BPHZ}_F(p) = \bigg[\int\prod_{l=1}^{|F|} d^4k_l\bigg]
\big(I_F(p,k) - t^{s(F)}_p{\bar R}I_F(p,k)\big)
=: \bigg[\int\prod_{l=1}^{|F|} d^4k_l\bigg]R_F(p,k).
\end{equation}
This recipe is however not unique. We can write as well
\begin{equation}
J^{\rm ren,BPHZ}_F(p) = P^{s(F)}(p) + \bigg[\int\prod_{l=1}^{|F|}
d^4k_l\bigg]R_F(p,k),
\label{eq:hammer-and-anvil}
\end{equation}
with $P^{s(F)}$ a polynomial of order~$s(F)$ in the external
momenta. This effects a `finite renormalization', in principle
undetermined, that might be put to use to fulfil renormalization
prescriptions (again, the form of that polynomial is severely
restricted by Lorentz invariance).
We now come to the key point. The coefficients of~$P^{s(F)}$
in~\eqref{eq:hammer-and-anvil} exhibit the ambiguity of
renormalization in the context of the BPHZ scheme. On the face of
it, the `pole terms' $t^{s(F)}_p I_F(p,k)$ do not depend at all on
the mentioned coefficients, and thus locality of the BWH
decomposition is guaranteed, in a trivial sense. On the other
hand, the Galois group approach to renormalization
theory~\cite{cm2004,cm22004} stems originally from the idea that
ambiguities should be, insofar as possible, handled from a
group-theoretic point of view, much as classical Galois theory
handles the multiple solutions of polynomial equations. Here
however the mentioned form of the ambiguity does not apparently
lend itself to RG~analysis. We contend, however, that the
ambiguity is expressed essentially in the same form as before. The
Caswell--Kennedy theorem is suggestive of a direct link between
the DR and BPHZ formalisms, and next we endeavour to prove the
pertinence of the RG to BPHZ renormalization by the most direct
possible route: introducing a mass scale in the latter formalism
in direct analogy with the former.
To express the ambiguity implicit in the~$P^{s(F)}$
of~\eqref{eq:hammer-and-anvil} in terms of a mass scale, we use
the modified BPHZ scheme proposed in~\cite{ScheuBlume}. For
instance, it is well known that the famous
graph~$\scalebox{0.85}{\fish}$ (`fish'~graph) giving the first
nontrivial contribution to the vertex correction in the~$\vf^4_4$
model in the Euclidean yields the amplitude
$$
J^{\rm DR}_{\rm fish}(p) = \tilde{g}^2\mu^{2\eps} \int
\frac{d^{\mathrm{D}}k}{(2\pi)^4}\,
\frac{1}{k^2+m^2}\,\frac{1}{(p+k)^2+m^2},
$$
where $p=p_1+p_2$, say, and that, by use of Feynman's parametrization
(see below) and relation~\eqref{eq:tapa-del-perol} one obtains
\begin{equation*}
J^{\rm DR}_{\rm fish}(p) = g\frac{\tilde{g}}{(4\pi)^2}
\biggl[\frac{2}{\eps} + \int_0^1 dz\,
\log\frac{{\overline\mu}^2}{p^2z(1-z) + m^2}\, + O(\eps)\biggr].
\end{equation*}
Now, the natural `zero point' for the mass scale in this problem is
clearly~$m$, and we note
$$
R_+\big(J^{\rm DR}_{\rm fish}(p=0;\overline\mu=m)\big) = 0,
$$
as $\eps\downarrow0$. This, together with the mentioned
Caswell--Kennedy theorem, feeds the suspicion that the last expression
is just the $J^{\rm ren,BPHZ}_{\rm fish}(p)$
of~\eqref{eq:first-things-first}. The suspicion is correct. The
computation required for the renormalized fish~graph in the BPHZ
scheme is
\begin{equation}
g^2\int\frac{d^4k}{(2\pi)^4}(1 - t^0_p) \bigg(\frac{1}{k^2 + m^2}
\,\frac{1}{(p + k)^2 + m^2}\bigg).
\label{eq:hic-Rhodas}
\end{equation}
Introduce the Feynman trick, prior to the Taylor subtraction,
\begin{align}
&g^2\int_0^1 dz\int\frac{d^4k}{(2\pi)^4}(1 - t^0_p)\,
\frac{1}{\big[((p + k)^2 + m^2)z + (1 - z)(k^2 + m^2)\big]^2}
\nonumber \\
&= g^2\int_0^1 dz\int\frac{d^4k}{(2\pi)^4}(1 - t^0_p)
\frac{1}{[k^2 + p^2z(1 - z) + m^2]^2}.
\label{eq:fishandchips}
\end{align}
The translation $k\to k-zp$, depending on the Feynman parameter, has
been made in order to obtain here the same denominator as in DR
calculations. With~$\Omega_4$ the area of the unit sphere in~$\R^4$,
the integral~\eqref{eq:fishandchips} now becomes
\begin{align*}
&\frac{\Omega_4\,g^2}{(2\pi)^4}\int_0^1 dz\int_0^\infty dk\,
\bigg[\frac{k^3}{[k^2 + p^2z(1 - z) + m^2]^2} - \frac{k^3}{[k^2 +
m^2]^2}\bigg] \\
&= \frac{g^2}{16\pi^2}\int_0^1 dz\,\log\frac{m^2}{p^2z(1 - z) + m^2}.
\end{align*}
The last step is to convert the $p$-independent part in the argument
of the logarithm into a mass scale: $m\to\overline\mu$. With this, we
recover on the nose the DR result, in the ${\rm\overline{MS}}$~scheme
as it turns out. Incidentally, as remarked in~\cite{Rio1995}, this
procedure allows us to give the exact value of the BPHZ
integral~\eqref{eq:hic-Rhodas}: the expression $\int_0^1
dz\,\log\big(1 + \frac{p^2}{m^2}z(1 - z)\big)$ is actually well known
in statistical physics, and leads by elementary manipulations
involving the golden ratio to
\begin{equation*}
J^{\rm ren,BPHZ}_{\rm fish}(p) = -\frac{g^2}{16\pi^2}
\bigg(\sqrt{1 + \frac{4m^2}{p^2}}\log\frac{\sqrt{1 + 4m^2/p^2} + 1}
{\sqrt{1 + 4m^2/p^2} - 1} - 2\bigg).
\end{equation*}
Thus, what we have done above amounts to \textit{identify the
constant} term ---recall $P^{s(F)}(p)$
in~\eqref{eq:hammer-and-anvil}. We have the right to add to the
previous expression the term $g^2/16\pi^2$ times
$\log\big({\overline\mu}^2/m^2\big)$. We note also that one can
recover the residue~$g^2/8\pi^2$ here from~$J^{\rm ren,BPHZ}_{\rm
fish}$, as the coefficient of the term logarithmic in the scaling
factor,
$$
J^{\rm ren,BPHZ}_{\rm fish}(\lambda p) \sim J^{\rm ren,BPHZ}_{\rm
fish}(p) - \frac{g^2}{8\pi^2}\log\lambda,
$$
as~$\lambda\uparrow\infty$.
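As a cross-check on the closed form, the parametric integral and the right-hand side can be compared numerically. The sketch below (ours, plain Python with Simpson's rule, writing $r$ for $p^2/m^2$) verifies the identity
$\int_0^1 dz\,\log\bigl(1+r\,z(1-z)\bigr)=\sqrt{1+4/r}\,\log\frac{\sqrt{1+4/r}+1}{\sqrt{1+4/r}-1}-2$,
so that $J^{\rm ren,BPHZ}_{\rm fish}(p)=-\frac{g^2}{16\pi^2}\,I(p^2/m^2)$ with $I$ either side of the identity.

```python
import math

def parametric_integral(r, n=4001):
    """Simpson's rule for I(r) = integral over [0,1] of log(1 + r z(1-z)) dz,
    with r = p^2/m^2 (n must be odd: an even number of subintervals)."""
    h = 1.0 / (n - 1)
    total = 0.0
    for i in range(n):
        z = i * h
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
        total += w * math.log(1.0 + r * z * (1.0 - z))
    return total * h / 3.0

def closed_form(r):
    """sqrt(1 + 4/r) * log((sqrt(1 + 4/r) + 1) / (sqrt(1 + 4/r) - 1)) - 2."""
    a = math.sqrt(1.0 + 4.0 / r)
    return a * math.log((a + 1.0) / (a - 1.0)) - 2.0
```

At $r=1$ (that is, $p^2=m^2$) one has $\sqrt{1+4/r}=\sqrt5$ and $(\sqrt5+1)/(\sqrt5-1)=\varphi^2$, which is where the golden ratio mentioned above enters.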
The steps of the modified BPHZ procedure are: (i)~Introduction of the
Feynman parame\-trization in~$J^{\rm BPHZ}_F(k,p)$. (ii)~Exchange of
the integrations. (iii)~Translation of the integration variables
by~$\lambda p$, with~$\lambda$ dependent on the Feynman parameter.
(iv)~Taylor subtraction. (v)~Integration over loops and replacement
of the mass~$m$ in the $p$-constant part of the resulting logarithm by
a mass scale. There is nothing to forbid the same operations being
performed on any primitive logarithmically divergent graph of any
field theory, and we are optimistic that, by use of skeletal
expansions and the integral equations, we would be led to a procedure
largely parallel to~DR, and so to a brute-force proof that the
coefficients of the higher powers of the scaling logarithms in BPHZ
renormalization are determined by the residues. To verify this with
full particulars, however, would take us too far afield.
Recapitulating, the Lie-theoretic method shows promise in dealing
with renormalization schemes other than~DR with~MS. The presence
of a Rota--Baxter structure is a requisite for the validity of
such a framework; it obviously holds for the
${\rm\overline{MS}}$~prescription in~DR as well. What of other
renormalization methods? For massive fields, the BPHZ scheme does
verify the required conditions. We have learned, however, that
the details are very idiosyncratic: as stated above, locality is
moot; and the beta function enters the picture through
Theorem~\ref{thm:prueba-de-fuego}, referring to renormalized
quantities, rather than to counterterms. For massless fields, the
price of the Rota--Baxter property is relinquishing Lorentz
invariance, and this is too heavy to pay. The Taylor subtraction
in Epstein--Glaser renormalization has roughly the same properties
as the BPHZ scheme, for both massive and massless fields;
nevertheless, there one is confronted with the problems of good
definition that plague the attempts~\cite{BK,Etoile}. Procedures
based on analytic renormalization or Hadamard regularization
\cite{jmgb2003} have not been investigated yet from the
Rota--Baxter viewpoint. Thus it is too early in the game to draw
a list of known schemes that would definitely fit in our approach;
we plan to come to this in future work. It is intriguing that the
case study of BPHZ renormalization points to the pertinence of
the ${\rm\overline{MS}}$~prescription in~DR.
\section{On Connes--Marcolli's motivic Galois theory}
\label{sect:cosmic}
In the Connes--Kreimer picture of renormalization, group theory
and the scheme-theoretic equivalent theory of commutative Hopf
algebras have become a fundamental tool of pQFT. Connes and
Marcolli identified recently~\cite{cm2004} a new level at which
Hopf algebra structures enter pQFT. Namely, they constructed an
affine group scheme $U^\ast$, universal with respect to physical
theories, and pointed out its analogies with number theory, e.g.
with the motivic Galois group of the scheme of 4-cyclotomic
integers $\Z[i][{\frac{1}{2}}]$.
In their work the initial physical problem attacked through the
Connes--Kreimer paradigm translates into the classification of
equisingular $G$-valued flat connections on the total space of a
principal bundle over an infinitesimal punctured disk (with $G$ the
group scheme represented by~$H$). From the representation theoretic
point of view, the classification is provided by representations
$U^\ast\longrightarrow G^\ast$, where $U^\ast$ is the semi-direct
product with the grading of the pro-unipotent group~$U$, the Lie
algebra of which is the free graded Lie algebra with one generator
$e_n$ in each degree $n>0$, and similarly for~$G^\ast$. Returning to the
geometrical side of the correspondence and featuring the DR setting
that leads to the Riemann--Hilbert picture of renormalization, Connes
and Marcolli construct a universal singular frame on principal
$U$-bundles over~$B$. A formal expression for it is given by:
$$
\gamma (\varepsilon,v)=\sum\limits_{n\geq 0}\sum\limits_{k_j}
\frac{e(k_1)\cdots e(k_n)}{k_1(k_1 + k_2)\cdots (k_1 + \cdots +
k_n)}v^{\sum k_j}\varepsilon^{-n}.
$$
As already remarked in~\cite{cm2004} and our introduction, it is
interesting that the coefficients of the frame are essentially those
appearing in the index formula of Connes--Moscovici; this would hint
at the rooting of noncommutative geometry in quantum field theory,
which has been Connes' contention for a long while.
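For concreteness, the rational coefficients $1/\bigl(k_1(k_1+k_2)\cdots(k_1+\cdots+k_n)\bigr)$ appearing in the universal singular frame are easy to tabulate exactly; the following small sketch (ours, illustrative only) does so with exact rational arithmetic.

```python
from fractions import Fraction

def frame_coeff(ks):
    """Exact coefficient 1/(k1 (k1+k2) ... (k1+...+kn)) multiplying
    e(k1)...e(kn) in the universal singular frame."""
    partial_sum, denom = 0, 1
    for k in ks:
        partial_sum += k
        denom *= partial_sum   # accumulate k1, k1+k2, ..., k1+...+kn
    return Fraction(1, denom)
```

For instance, the word $e(1)e(2)$ carries the coefficient $1/(1\cdot3)=1/3$, while $e(2)e(1)$ carries $1/(2\cdot3)=1/6$.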
We have already shown that other Hopf algebra structures (or, from the
scheme-theoretic point of view, pro-unipotent groups) do appear
naturally in pQFT, namely the Hopf algebras $\Char(A)$ of
characteristic functions associated to commutative target algebras,
e.g., although not exclusively, of quantum amplitudes. These Hopf
algebra structures arise naturally from algebraic-combinatorial
constructions on Hopf algebras, and therefore do not immediately
relate to the geometrical-arithmetical constructions underlying the
definition of the motivic Galois group in~\cite{cm2004}.
Nevertheless, the formula rendering the universal singular frame in
the motivic understanding of renormalization also essentially
coincides with our map~$\Gamma$ ---the inverse of the Dynkin map.
This indicates that the practical consequences for renormalization of
the Riemann-Hilbert and/or motivic point of view can be translated to
the setting of FLA theory ---which, besides being more familiar to
many, is independent of the geometry embedded in the DR~scheme. As it
turns out, the pro-unipotent groups/Hopf algebras $\Char(H)$ and
$\Char(A)$ are related naturally to the group~$U$. In the remainder
of the present section, we would like to make explicit how both
viewpoints connect ---although the reasons behind this connection
certainly ought to be deepened.
Let us recall a few general facts from the theory of Solomon algebras
---see~\cite{patras1994,reutenauer1993} for details. Let~$\sigma$ be
a permutation in the symmetric group~$S_n$ of order~$n$. The
descent set $D(\sigma)$ of~$\sigma$ is the set $D(\sigma):=
\{i : \sigma (i) > \sigma (i+1)\}$. Note $n \notin D(\sigma)$. The
descent composition $C(\sigma)$ of~$\sigma$ is the composition
of~$n$ (that is, the sequence $(c_1,\ldots,c_k)$ of strictly
positive integers of total sum~$n$) such that, when viewed as a
word, $\sigma=\sigma(1)\ldots\sigma (n)$ can be written $u_1\ldots
u_k$, where each word~$u_i$ is increasing and of length~$c_i$, and
where~$k$ is minimal. For example, $D(21534)=\{1,3\}$ and
$C(21534)=(1,2,2)$. The notions of descent set and descent
composition are obviously equivalent, the equivalence being
induced by the map:
$$
(c_1,\ldots,c_k) \longmapsto \{c_1,c_1 + c_2,\ldots,c_1 + \cdots +
c_{k-1}\}.
$$
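Descent sets, descent compositions, and the bijection just described are easy to experiment with; the sketch below (ours, in one-line notation) computes all three.

```python
def descent_set(sigma):
    """D(sigma) = {i : sigma(i) > sigma(i+1)}, sigma in one-line notation."""
    return {i + 1 for i in range(len(sigma) - 1) if sigma[i] > sigma[i + 1]}

def descent_composition(sigma):
    """Lengths of the maximal increasing runs of the word sigma(1)...sigma(n)."""
    comp, run = [], 1
    for i in range(1, len(sigma)):
        if sigma[i] > sigma[i - 1]:
            run += 1
        else:
            comp.append(run)
            run = 1
    comp.append(run)
    return tuple(comp)

def composition_to_set(c):
    """The bijection (c1,...,ck) -> {c1, c1+c2, ..., c1+...+c_{k-1}}."""
    out, s = set(), 0
    for ci in c[:-1]:
        s += ci
        out.add(s)
    return out
```

For $\sigma=21534$ these functions return $D(\sigma)=\{1,3\}$ and $C(\sigma)=(1,2,2)$, matching the example above.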
The Solomon algebra~$\Sigma_n$ of type $A_n$ was first introduced
as a \textit{noncommutative lift} to the group algebra of the
representation ring of $S_n$~\cite{solomon1976}. As a vector
space, $\Sigma_n$ is the linear span of the elements $D_{\subseteq
S}$ in~$\Q[S_n]$, where~$S$ runs over subsets of~$[n-1]$ and
$$
D_{\subseteq S} := \sum_{\substack{\sigma\in S_n \\
D(\sigma)\subseteq S}}\sigma.
$$
Then Solomon's fundamental theorem states that $\Sigma_n$ is closed
under the composition product in~$S_n$.
Now, let $X$ be an infinite alphabet. The dual graded Hopf
algebra $T^\ast(X)=\bigoplus_{n\in\N}T_n^\ast (X)$ of~$T(X)$ is
graded connected commutative, with the shuffle product as the
algebra product and the deconcatenation coproduct:
$$
x_{i_1}\ldots x_{i_n} \longmapsto \sum\limits_{k=0}^n x_{i_1}\ldots
x_{i_k}\otimes x_{i_{k+1}}\ldots x_{i_n},
$$
where we view $x_{i_1} \ldots x_{i_n}$ as an element of $T^\ast
(X)$ using the usual pairing $\langle x_{i_1} \ldots
x_{i_n}|x_{j_1}\ldots x_{j_k} \rangle = \delta_{x_{i_1}\ldots
x_{i_n}}^{x_{j_1}\ldots x_{j_k}}$. The symmetric group of order
$n$ embeds into $\End(T_n^{\ast }(X))\subset\End(T^\ast(X))$:
$$
\sigma (x_{i_1}\ldots x_{i_n}) := x_{i_{\sigma^{-1}(1)}}\ldots
x_{i_{\sigma^{-1}(n)}}.
$$
This map induces an embedding of algebras of $\Sigma_n$ into
$\End(T^\ast(X))$, where the product on the latter algebra is the
composition of maps.
Let us write now $\mathcal D$ for the descent algebra of $T^\ast
(X)$, that is the convolution subalgebra of $\End(T^\ast (X))$
generated by the projections $p_n:T^\ast (X)\longrightarrow
T_n^\ast(X)$ on the graded components of $T^\ast (X)$. The algebra
$\mathcal D$ is naturally graded and provided with a Hopf algebra
structure for which the $p_n$ form a sequence of divided powers:
$$
\Delta (p_n) = \sum\limits_{i+j=n}p_i\otimes p_j.
$$
We write ${\mathcal D}_n$ for the component of degree $n$.
\begin{lem}
The convolution algebra $\mathcal D$ is also closed under the
composition of maps $\circ$ in $\End(T^\ast(X))$.
\end{lem}
The result follows from Corollary~9.4 in \cite{reutenauer1993}
(where the dual setting is considered, that is, the convolution
subalgebra $\Gamma$ of $\End(T(X))$ generated by the graded
projections in $T(X)$) and also follows directly from~\cite[Thm
II.7]{patras1994}.
\begin{prop}
The embedding of $\Sigma_n$ into $(\End(T^\ast(X)),\circ )$ induces an
isomorphism of algebras
$$
\Sigma_n \longrightarrow {\mathcal D}_n.
$$
\end{prop}
The proof follows from Corollary~9.2 in \cite{reutenauer1993} by
duality. It basically amounts to observe that, if $C(\sigma)=
(c_1,\ldots,c_k)$, then $\sigma^{-1}$ is a
$(c_1,\ldots,c_k)$-shuffle. For example, if $\sigma=(21534)$ then
$C(\sigma )=(1,2,2)$ and $\sigma^{-1}=(21453)$, so that the word
$\sigma (x_1\ldots x_5)=x_2x_1x_4x_5x_3$ is a shuffle of $x_1$,
$x_2x_3$ and $x_4x_5$, and appears therefore in the expansion of
$p_1\ast p_2\ast p_2 (x_1\ldots x_5)$.
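The shuffle condition invoked in this argument can also be checked mechanically. In the sketch below (ours; letters are assumed distinct, as for words $x_{i_1}\ldots x_{i_n}$ with distinct indices), a word is a shuffle of the given blocks precisely when it uses the same letters and contains each block as a subsequence.

```python
def is_shuffle(word, blocks):
    """True iff `word` is a shuffle of the given blocks: same multiset of
    (distinct) letters, and each block occurs in order as a subsequence."""
    pos = {x: i for i, x in enumerate(word)}
    same_letters = sorted(word) == sorted(x for b in blocks for x in b)
    return same_letters and all(
        all(pos[b[j]] < pos[b[j + 1]] for j in range(len(b) - 1))
        for b in blocks)
```

In the example above, the word $x_2x_1x_4x_5x_3$ (index sequence $21453$) is indeed a shuffle of the blocks $(1)$, $(2,3)$ and $(4,5)$.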
\begin{prop}
The algebra $\mathcal D$ is freely generated as an associative
algebra by the graded projections $p_n$. Equivalently, it is
freely generated by the graded components $p_n\circ D$ of the
Dynkin idempotent $D=S\ast Y$, regarded as an element of
$\End(T^\ast(X))$.
\end{prop}
The first part of the proof of this proposition is Corollary~9.14
of~\cite{reutenauer1993} (stated for $\Gamma$, that is, in the
dual setting). The second assertion is found e.g.
in~\cite[Sect.~5.2]{gelfand1995}.
\begin{cor}
Regarded as a pro-unipotent group scheme, the graded dual Hopf
algebra $\mathcal D^\ast$ is canonically isomorphic to the ring of
coordinates of the Connes--Marcolli group~$U$ of renormalization
theory.
\end{cor}
Through this correspondence, by our Lemma~\ref{dynkid}, the
coefficients of the universal singular frame are reflected in the
coefficients of the expansion of the identity of $T^\ast (X)$ on
the natural linear basis of $\mathcal D$ viewed as the free
associative algebra generated by the graded components of the
Dynkin operator. Now, the universal properties of the Galois
group $U$ for renormalization, when the group is understood by
means of $\mathcal D$, follow from the constructions
in~\cite{patras1994}, where it is shown that the descent algebra
is an algebra of natural (endo)transformations of the forgetful
functor from graded connected commutative Hopf algebras to graded
vector spaces ---that is, a universal endomorphism algebra for
graded connected commutative Hopf algebras. In other terms, there
is a natural map from $\mathcal D$ to $\End(H)$, where $H$ is an
arbitrary graded connected commutative Hopf algebra. Using the
arguments developed in the first sections of this article, one
shows easily that this map factorizes through a Hopf algebra map
from $\mathcal D$ to~$\Char(H)$; this follows e.g. from the fact
that the graded projections generate $\mathcal D$ as a convolution
algebra and form a sequence of divided powers both in ${\mathcal
D}\subset\End(T^\ast (X))$ and in~$\Char(H)$. In summary,
\begin{cor}
The descent algebra $\mathcal D$ acts naturally by right composition
on $\Lin(H,A)$. Moreover, the group of group-like elements in
$\mathcal D$ acts naturally on the group $G(A)$ of Feynman rules.
\end{cor}
The second part of the corollary follows from the third identity in
Lemma~\ref{lem:AstCirc}.
Besides providing a natural link between the Galoisian approach to
renormalization and the noncommutative representation theory of
the symmetric groups~\cite{BlessenohlS}, the combinatorial
approach implies moreover that the Connes--Marcolli universal
Galois group~$U$ inherits the very rich structure of the descent
algebra. The appearance of the descent algebra (or equivalently of
the Hopf algebras of noncommutative symmetric functions and
quasi-symmetric functions, see \cite{gelfand1995}), beyond
broadening the scope of the mathematical theory of
renormalization, should result in new developments in the field,
possibly complementary with the arithmetic ones.
\vspace{0.4cm}
\textbf{Acknowledgements}
\smallskip
The first named author\footnote{currently at: Max Planck Institute for
Mathematics, Vivatsgasse 7, D-53111 Bonn, Germany} acknowledges
greatly the support by the European Post-Doctoral Institute and the
Institut des Hautes \'Etudes Scientifiques (I.H.\'E.S.). He is also
indebted to Laboratoire J.~A.~Dieudonn\'e at Universit\'e de Nice
Sophia-Antipolis for warm hospitality. He is very grateful to
C.~Bergbauer for useful discussions. The second named author
acknowledges partial support from CICyT, Spain, through
grant~FIS2005-02309. He is also grateful for the hospitality of
Laboratoire J.~A.~Dieudonn\'e. The present work received support from
the ANR grant AHBE~05-42234. We are pleased to thank the anonymous
referee, whose comments prompted us to clarify several aspects of the
paper.
\section{Introduction}
\label{Sec:Intro}
Wireless sensor networks (WSNs) are typically formed by spatially distributed sensors with limited communications and processing capabilities that cooperate with each other to achieve a common goal. One of the most important applications of such networks is {\em distributed estimation}, which is a key enabling technology for a wider range of applications such as event classification and object tracking.
In a WSN performing distributed estimation, sensors make noisy observations that are correlated with an unknown parameter to be estimated, process their local observations, and send their processed data to a fusion center (FC), which then combines all of the locally processed samples to perform the ultimate global estimation.
Recently, the problem of distributed estimation in WSNs has extensively been studied in the literature~\cite{Xiao06,Luo05,Xiao08,Banavar10,Cui07Diversity}.
The type of local processing that is performed on each sensor's noisy observation before it is transmitted to the FC differentiates these works and can be either a local quantization~\cite{Xiao06,Luo05} or an amplify-and-forward strategy~\cite{Banavar10,Cui07Diversity,Xiao08}.
We consider the second approach in this paper due to its simplicity and practical feasibility. One of the main issues to be addressed in the case of analog amplify-and-forward local processing is finding the optimal local amplification gains.
The values of these gains set the instantaneous transmit power of sensors; therefore, we refer to their determination as the {\em power allocation} to sensors.
Cui~et~al.~\cite{Cui07Diversity} have proposed an optimal power-allocation scheme to minimize the variance of the best linear unbiased estimator (BLUE) for a random scalar parameter at the FC of a WSN, given a total transmission-power constraint in the network. In their proposed approach, the optimal local amplification gains depend on the instantaneous fading coefficients of the channels between the sensors and FC. Therefore, in order for the FC to achieve the minimum estimation variance of the BLUE estimator, it must feed the exact channel fading gains back to sensors through infinite-rate, error-free links. This requirement is not practical in most WSN applications, especially when the number of sensors is large. In this paper, we investigate the application of a {\em limited-feedback strategy} for the optimal power-allocation scheme proposed in~\cite{Cui07Diversity}. We use the generalized Lloyd algorithm with modified distortion functions to design an optimal codebook, which is then used to
quantize the space of the optimal power-allocation vectors used by the sensors to set their amplification gains.
Note that the approach proposed in this paper is different from other works that have applied limited feedback to distributed estimation. In particular, the phrase ``limited feedback'' has a different meaning in this paper compared to other works in the field of distributed estimation. For example, Banavar~et~al.~\cite{Banavar10} have investigated the effects of feedback error and the impact of the availability of different amounts of full, partial, or no channel state information at local sensors on the estimation variance of the BLUE estimator of a scalar random parameter at the FC.
The problem of distributed estimation using amplify-and-forward local processing over {\em unknown} parallel fading channels is studied in~\cite{Senol08}. A two-phase approach is proposed based on pilot-based channel estimation followed by source parameter estimation at the FC, where the estimated channel between each sensor and the FC is used at the sensor for local power optimization.
The rest of this paper is organized as follows: In Section~\ref{Sec:SystemModel}, the system model of the WSN under study is introduced.
Section~\ref{Sec:ProbStatement} summarizes the analysis of the power-allocation scheme proposed in~\cite{Cui07Diversity} and introduces the main concepts of our proposed limited-feedback strategy. Details of the implementation of the proposed approach are discussed in Section~\ref{Sec:CodeDesign}. The numerical results are presented in Section~\ref{Sec:NumResults}, and the paper is concluded in Section~\ref{Sec:Conclusions}.
\section{System Model}
\label{Sec:SystemModel}
Consider a wireless sensor network (WSN) composed of $K$ spatially distributed sensors, as depicted in Fig.~\ref{Fig:SystemModel}. The goal of the WSN is to estimate an unknown random parameter $\theta$ at its fusion center (FC) using amplified versions of local noisy observations received through orthogonal coherent channels corrupted by fading and additive Gaussian noise.
Assume that $\theta$ has zero mean and \ifbool{HomogeneousWSN}{unit power}{variance $\sigma^2_{\theta}$}, and is otherwise unknown.
\begin{figure}[!t]
\centering
\includegraphics[width=0.83\linewidth]{Figures/SystemModel}
\caption{System model of a WSN in which the FC finds an estimate of $\theta$.}
\label{Fig:SystemModel}
\end{figure}
Suppose that the local noisy observation at each sensor is a linear function of the unknown random parameter as
\begin{eqnarray}
x_i
& = &
h_i \theta + n_i,
\qquad
i = 1, 2, \dotsc, K,
\end{eqnarray}
where $x_i$ is the local observation at the $i$th sensor, $h_i$ is the fixed local observation gain known at the sensor and FC, and $n_i$ is the spatially independent \ifbool{HomogeneousWSN}{and identically distributed}{} additive observation noise with zero mean and known variance $\sigma_{\text{o}i}^2$. Note that no further assumption is made on the distribution of the random parameter to be estimated and the observation noise. Without loss of generality, we define the {\em observation signal-to-noise ratio} (OSNR) at sensor $i$ as $\beta_i = \frac{\left| h_i \right|^2 \ifbool{HomogeneousWSN}{}{\sigma^2_{\theta}}}{\sigma_{\text{o}i}^2}$, $i = 1, 2, \dotsc, K$,
where $\left| \cdot \right|$ denotes the absolute-value operation.
We assume that there is no inter-sensor communication and/or collaboration among spatially distributed sensors. Each sensor uses an {\em amplify-and-forward} scheme to amplify its local noisy observation before sending it to the FC as
\begin{eqnarray}
z_i
\ = \
a_i x_i
\ = \
a_i h_i \theta + a_i n_i,
\qquad
i = 1, 2, \dotsc, K,
\end{eqnarray}
where $z_i$ is the signal transmitted from sensor $i$ to the FC and $a_i$ is the local amplification gain at sensor $i$. Note that the instantaneous transmit power of sensor $i$ can be found as
\begin{eqnarray} \label{Eq:PowerDef}
P_i
\ = \
a_i^2 \left( \left| h_i \right|^2
\ifbool{HomogeneousWSN}{}{\sigma^2_{\theta}} + \sigma_{\text{o}i}^2 \right)
\ = \
a_i^2 \sigma_{\text{o}i}^2 \left( 1 + \beta_i \right).
\end{eqnarray}
As it can be seen in \eqref{Eq:PowerDef}, the value of the local amplification gain at each sensor determines the instantaneous transmit power allocated to that sensor. Therefore, we will call any strategy that assigns a set of local amplification gains to sensors a {\em power-allocation scheme}.
All locally processed observations are transmitted to the FC through orthogonal channels corrupted by fading and
additive Gaussian noise. The received signal from sensor $i$ at the FC can be described as
\begin{eqnarray}
y_i
& = &
g_i z_i + w_i,
\qquad
i = 1, 2, \dotsc, K,
\end{eqnarray}
where $g_i$ is the multiplicative fading coefficient of the channel between sensor $i$ and the FC, and $w_i$ is the spatially independent \ifbool{HomogeneousWSN}{and identically distributed}{} additive Gaussian channel noise with zero mean and variance $\sigma_{\text{c}i}^2}$. In this paper, we assume that the FC can reliably estimate the fading coefficient of the channel between each sensor and itself. This can be achieved by the transmission of pilot sequences from local sensors to FC. Note that in the above model, we have also assumed that each sensor is synchronized with the FC. Without loss of generality, we define the {\em channel signal-to-noise ratio} (CSNR) at sensor $i$ as $\gamma_i = \frac{\left| g_i \right|^2 }{\sigma_{\text{c}i}^2}}$,
$i = 1, 2, \dotsc, K$.
\section{Problem Statement}
\label{Sec:ProbStatement}
Suppose that given a power-allocation scheme and a realization of the fading gains, the FC combines the set of received signals from sensors to find the {\em best linear unbiased estimator} (BLUE) for the unknown parameter $\theta$ as~\cite[Chapter 6]{Kay93}
\begin{eqnarray}
\widehat{\theta}
\ = \
\left(
\sum_{i=1}^K
\frac{h_i^2 a_i^2 g_i^2}{a_i^2 g_i^2 \sigma_{\text{o}i}^2 + \sigma_{\text{c}i}^2}
\right)^{-1}
\sum_{i=1}^K
\frac{h_i a_i g_i y_i}{a_i^2 g_i^2 \sigma_{\text{o}i}^2 + \sigma_{\text{c}i}^2},
\end{eqnarray}
where the corresponding estimator variance can be found as
\begin{eqnarray} \label{Eq:ThetaVariance}
\text{Var}
\left(
\widehat{\theta}
\big|
\vc{a},\vc{g}
\right)
& = &
\ifbool{HomogeneousWSN}{}{\sigma_\theta^2}
\left(
\sum_{i=1}^K
\frac{\beta_i \gamma_i a_i^2 \sigma_{\text{o}i}^2}{1+\gamma_i a_i^2 \sigma_{\text{o}i}^2}
\right)^{-1},
\end{eqnarray}
in which $\vc{a} \triangleq \left[ a_1, a_2, \dotsc, a_K \right]^T$ and $\vc{g} \triangleq \left[ g_1, g_2, \dotsc, g_K \right]^T$ are column vectors containing the set of local amplification gains $a_i$ and fading coefficients of the channels $g_i$, respectively.
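For concreteness, the estimator and its conditional variance can be transcribed directly. The sketch below (ours; variable names are hypothetical, and we take $\sigma_\theta^2=1$ as in the homogeneous branch of the text) also makes the unbiasedness of the linear combination visible: if the realized noise samples happen to be zero, the BLUE output equals $\theta$ exactly.

```python
def blue_estimate(y, h, a, g, so2, sc2):
    """BLUE combination of the received samples y_i, given observation
    gains h, amplification gains a, channel gains g, and the
    observation/channel noise variances so2/sc2."""
    den = [ai ** 2 * gi ** 2 * s + c
           for ai, gi, s, c in zip(a, g, so2, sc2)]
    norm = sum(hi ** 2 * ai ** 2 * gi ** 2 / d
               for hi, ai, gi, d in zip(h, a, g, den))
    num = sum(hi * ai * gi * yi / d
              for hi, ai, gi, yi, d in zip(h, a, g, y, den))
    return num / norm

def blue_variance(beta, gamma, a, so2):
    """Var(theta_hat | a, g) with sigma_theta^2 = 1, from the OSNRs beta
    and CSNRs gamma."""
    return 1.0 / sum(b * gm * ai ** 2 * s / (1.0 + gm * ai ** 2 * s)
                     for b, gm, ai, s in zip(beta, gamma, a, so2))
```

Writing $\beta_i=h_i^2/\sigma_{\text{o}i}^2$ and $\gamma_i=g_i^2/\sigma_{\text{c}i}^2$, the SNR form of the variance agrees term by term with the reciprocal of the normalization in the estimator, which makes a convenient consistency check.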
Cui~et~al.~\cite{Cui07Diversity} have derived the optimal local amplification gains or equivalently, the optimal power-allocation scheme to minimize the BLUE-estimator variance, defined in~\eqref{Eq:ThetaVariance}, given a total transmission-power constraint in the network as
\begin{eqnarray} \label{Eq:OptimalGain}
a_i
& = &
\begin{cases}
\sqrt{ \frac{1}{\gamma_i \sigma_{\text{o}i}^2} \left( \sqrt{\delta_i} \, \rho(K_1) - 1 \right) }, & i \leq K_1 \\
0, & i > K_1
\end{cases},
\end{eqnarray}
where $\delta_i \triangleq \frac{\beta_i \gamma_i}{1 + \beta_i}$, $i = 1, 2, \dotsc, K$,
the sensors are sorted
so that $\delta_1 \geq \delta_2 \geq \cdots \geq \delta_K$, the function $\rho(\cdot)$ is defined for any integer argument $n$ as
\begin{eqnarray}
\rho(n)
& = &
\frac{P_{\mathsf{Total}} + \sum_{i=1}^n \frac{\beta_i}{\delta_i}}{\sum_{i=1}^n \frac{\beta_i}{\sqrt{\delta_i}}},
\end{eqnarray}
$K_1$ is the {\em unique} largest integer for which $\sqrt{\delta_{K_1}} \, \rho(K_1) > 1$ and $\sqrt{\delta_{K_1+1}} \, \rho(K_1+1) \leq 1$, and $P_{\mathsf{Total}}$ is the constraint on the total power consumed in the entire network so that $\sum_{i=1}^K P_i \leq P_{\mathsf{Total}}$.
The above power-allocation strategy assigns a zero amplification gain or equivalently, zero transmit power to the sensors for which $\delta_i \leq \left[ \rho(K_1) \right]^{-2}$, because either the sensor's observation SNR or its channel SNR is too low. The assigned instantaneous transmit power to other sensors is non-zero and based on the value of $\delta_i$ for each sensor. Note that based on the above power-allocation scheme, there is a unique one-to-one mapping between $\vc{g}$ and $\vc{a}$ that could be denoted as $\vc{a} = f \left( \vc{g} \right)$.
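The scheme in~\eqref{Eq:OptimalGain} is straightforward to implement: sort the sensors by $\delta_i$, find $K_1$, and compute the gains. The sketch below (ours; function and variable names are hypothetical) does exactly this. A useful sanity check, obtained by substituting~\eqref{Eq:OptimalGain} into~\eqref{Eq:PowerDef}, is that the active sensors exhaust the budget: $\sum_{i\le K_1}P_i=P_{\mathsf{Total}}$.

```python
import math

def optimal_gains(beta, gamma, so2, p_total):
    """Optimal amplification gains a_i (sketch of the scheme of Cui et al.):
    beta = observation SNRs, gamma = channel SNRs, so2 = observation-noise
    variances, p_total = total transmit-power budget."""
    K = len(beta)
    delta = [b * g / (1.0 + b) for b, g in zip(beta, gamma)]
    order = sorted(range(K), key=lambda i: -delta[i])   # delta descending

    def rho(n):
        idx = order[:n]
        return ((p_total + sum(beta[i] / delta[i] for i in idx)) /
                sum(beta[i] / math.sqrt(delta[i]) for i in idx))

    # largest K1 with sqrt(delta_{(K1)}) * rho(K1) > 1 (K1 >= 1 if p_total > 0)
    K1 = max(n for n in range(1, K + 1)
             if math.sqrt(delta[order[n - 1]]) * rho(n) > 1.0)

    a, r = [0.0] * K, rho(K1)
    for i in order[:K1]:
        a[i] = math.sqrt((math.sqrt(delta[i]) * r - 1.0) /
                         (gamma[i] * so2[i]))
    return a
```

Sensors whose $\delta_i$ falls below the threshold receive a gain of exactly zero, reflecting the censoring behavior described above.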
It can be observed that the optimal power-allocation scheme proposed in~\cite{Cui07Diversity} is based on the assumption that the complete forward channel state information (CSI) is available at local sensors. In other words, Equation~\eqref{Eq:OptimalGain} shows that the optimal value of the local amplification gain at sensor $i$ is a function of its channel SNR $\gamma_i$, which in itself is a function of the instantaneous fading coefficient of the channel between sensor $i$ and the FC.
Therefore, in order for the FC to achieve the estimator variance given by~\eqref{Eq:ThetaVariance}, it must feed the exact instantaneous amplification gain $a_i$ back to each sensor.\footnote{Note that instead of feeding $a_i$ back to each sensor, the FC could send back the fading coefficient of the channel between each sensor and the FC. However, the knowledge of $g_i$ alone is not enough for sensor $i$ to compute the optimal value of its local amplification gain $a_i$. The sensor must also know whether it needs to transmit (i.e., $i \leq K_1$) or stay silent (i.e., $i > K_1$). There are two ways that the extra data can be fed back to the sensors: This information could be encoded in an extra one-bit command instructing the sensor to transmit or stay silent, or the sensor could listen for the entire vector of $\vc{g}$ sent by the FC over a broadcast channel. Sending back each value of $a_i$ avoids the problems of either having to send each sensor an additional bit, or requiring the sensor to listen to the entire vector of $\vc{g}$.} This requirement is not practical in most applications, especially in large-scale WSNs, since the feedback information is typically transmitted through finite-rate {\em digital} feedback links.
In this paper, we propose a limited-feedback strategy to alleviate the above-mentioned requirement for infinite-rate digital feedback links from the FC to the local sensors.
In the proposed limited-feedback strategy, the FC and local sensors must agree on a {\em codebook} of the local amplification gains or equivalently, a codebook of possible power-allocation schemes. Before the sensors transmit their amplified observations, the FC reliably estimates the channels between them and itself (i.e., $\vc{g}$), and finds the optimal power allocation to the sensors for the given channel realization, using the approach proposed in~\cite{Cui07Diversity}. Note that the FC has access to the perfect {\em backward} CSI; i.e., the instantaneous fading gain of the channel between each sensor and itself. Therefore, it can find the {\em exact} power-allocation strategy of the entire network based on~\eqref{Eq:OptimalGain}, given any channel realization. The FC will then identify the closest codeword to the optimal power-allocation vector, and broadcasts the {\em index} of that codeword to all sensors over an error-free digital feedback channel.\footnote{Since the rate of the feedback link is very low,
an error-free channel can be realized by using capacity-approaching channel codes.} The optimal codebook can be designed offline by quantizing the space of the optimized power-allocation vectors using the {\em generalized Lloyd algorithm}~\cite{GershoGray92} with modified distortion functions.
\section{Codebook Design Using Lloyd Algorithm}
\label{Sec:CodeDesign}
Let $L$ be the number of feedback bits that the FC uses to quantize the space of the optimal local power-allocation vectors into $2^L$ disjoint regions. Note that $L$ is the total number of feedback bits broadcast by the FC, and {\em not} the number of bits fed back to each sensor.
A codeword is chosen in each quantization region. The length of each codeword is $K$, and its $i$th entry is a {\em real} number representing a quantized version of the optimal local amplification gain for sensor $i$. The proposed quantization scheme could then be considered as a mapping from the space of channel state information to a discrete set of $2^L$ length-$K$ real-valued power-allocation vectors.
Let $\mx{C} = \left[ \vc{a}_1 \; \vc{a}_2 \; \cdots \; \vc{a}_{2^L} \right]^T$ be a $2^L \times K$ codebook matrix of the optimal local amplification gains, where $\left[ \mx{C} \right]_{\ell,i}$ denotes its element in row $\ell$ and column $i$ as the optimal gain of sensor $i$ in codeword $\ell$. Note that each $\vc{a}_\ell$, $\ell=1,2,\dotsc, 2^L$ is associated with a realization of the fading coefficients of the channels between local sensors and the FC, denoted by $\vc{g}_\ell$. We apply the generalized Lloyd algorithm with modified distortion functions to solve the problem of vector quantization in the space of the optimal local amplification gains and to design the optimal codebook $\mx{C}$ in an iterative process.
In order to implement the generalized Lloyd algorithm, a distortion metric must be defined for the codebook and for each codeword. Let $D_\text{B} \left( \mx{C} \right)$ denote the average distortion for codebook $\mx{C}$ defined as
\begin{eqnarray} \label{Eq:BookDistortion}
D_\text{B} \left( \mx{C} \right)
& \triangleq &
\mathbb{E}_{\vc{g}}
\left[
\underset{ \ell \in \left\{ 1,2,\dotsc,2^L \right\} }{\min}
D_\text{W} \left( \vc{a}_\ell | \vc{g} \right)
\right],
\end{eqnarray}
where $\mathbb{E}_{\vc{g}} \left[ \cdot \right]$ denotes the expectation operation with respect to the fading coefficients of the channel and $D_\text{W} \left( \vc{a}_\ell | \vc{g} \right)$ represents the conditional codeword distortion resulting from assigning a suboptimal quantized power-allocation vector $\vc{a}_\ell$ (instead of the optimal one, denoted by $\vc{a}^{\mathsf{OPT}}$ and found using~\eqref{Eq:OptimalGain}), given a realization of the vector of channel fading coefficients $\vc{g}$. We define this conditional codeword distortion as
\begin{eqnarray} \label{Eq:WordDistortion}
D_\text{W} \left( \vc{a}_\ell | \vc{g} \right)
\; \triangleq \;
\left|
\text{Var}
\left(
\widehat{\theta}
\big|
\vc{a}_\ell,\vc{g}
\right)
-
\text{Var}
\left(
\widehat{\theta}
\big|
\vc{a}^{\mathsf{OPT}},\vc{g}
\right)
\right|,
\end{eqnarray}
where the estimation variance $\text{Var} \left( \widehat{\theta} \big| {}\cdot{}, \vc{g} \right)$
is found using~\eqref{Eq:ThetaVariance}.
Let $\mathcal{G} \subseteq \mathbb{R}^{K+}$ and $\mathcal{A} \subseteq \mathbb{R}^{K+}$ be the $K$-dimensional vector spaces of fading coefficients of the channel and optimal local amplification gains, respectively, whose entries are chosen from the set of real-valued non-negative numbers. Note that each vector of channel fading coefficients $\vc{g} \in \mathcal{G}$ is {\em uniquely} mapped into an optimal power-allocation vector $\vc{a}^{\mathsf{OPT}} \in \mathcal{A}$ by applying~\eqref{Eq:OptimalGain} to find each element of $\vc{a}^{\mathsf{OPT}}$. We denote this mapping by $f: \mathcal{G} \to \mathcal{A}$. Given the distortion functions for the codebook $\mx{C}$ and for each one of its codewords defined in Equations~\eqref{Eq:BookDistortion} and~\eqref{Eq:WordDistortion}, respectively, the two main conditions of the generalized Lloyd algorithm could be reformulated for our vector-quantization problem as follows~\cite[Chapter 11]{GershoGray92}.
\noindent {\bf Nearest--Neighbor Condition:} This condition finds the optimal quantization regions (or Voronoi cells) for the vector space to be quantized, given a fixed codebook. Based on this condition, each point $\vc{a} \in \mathcal{A}$ in the vector space of the optimal local amplification gains is assigned to partition $\ell$ represented by codeword $\vc{a}_\ell \in \mx{C}$ if and only if its distance to codeword $\vc{a}_\ell$, with respect to the conditional codeword distortion function defined in~\eqref{Eq:WordDistortion}, is less than its distance to any other codeword in the codebook. In this paper, given a codebook $\mx{C}$, the space $\mathcal{G}$ of channel fading coefficients is divided into $2^L$ disjoint quantization regions with the $\ell$th region defined as
\begin{eqnarray} \label{Eq:PartitionDef}
\mathcal{G}_\ell
\ = \
\left\{
\vc{g} \in \mathcal{G} {}:{}
D_\text{W} \left( \vc{a}_\ell | \vc{g} \right)
\leq
D_\text{W} \left( \vc{a}_k | \vc{g} \right)
, \forall k \neq \ell
\right\}.
\end{eqnarray}
Since for every $\vc{g} \in \mathcal{G}$, there is a {\em unique} $\vc{a} \in \mathcal{A}$ that can be found using~\eqref{Eq:OptimalGain}, we can define the Voronoi cell $\mathcal{A}_\ell$ as the image of $\mathcal{G}_\ell$ under the mapping $f$, i.e., $\mathcal{A}_\ell = f \left( \mathcal{G}_\ell \right)$. In other words, to find the optimal partition for each $\vc{a} \in \mathcal{A}$, its corresponding vector of channel fading coefficients $\vc{g} \in \mathcal{G}$ is considered. The distortion of using any codeword $\vc{a}_\ell \in \mx{C}$ instead of the optimal power-allocation vector for that channel realization is found using~\eqref{Eq:WordDistortion}, and $\vc{a}$ is assigned to the region with the lowest conditional codeword distortion, given $\vc{g}$.
\noindent {\bf Centroid Condition:} This condition finds the optimal codebook, given a specific partitioning of the vector space of the optimized power-allocation vectors $\left\{ \mathcal{A}_1, \mathcal{A}_2, \dotsc, \mathcal{A}_{2^L} \right\}$. Based on this condition, the optimal codeword associated with each Voronoi cell $\mathcal{A}_\ell \subseteq \mathcal{A}$ is the {\em centroid} of that cell with respect to the conditional codeword-distortion function introduced in~\eqref{Eq:WordDistortion}, and is defined as
\begin{eqnarray} \label{Eq:CodewordDef}
\vc{a}_\ell
& = &
\argmin{\vc{a} \in \mathcal{A}_\ell} \;
\mathbb{E}_{ \vc{g} \in \mathcal{G}_\ell }
\left[
D_\text{W} \left( \vc{a} | \vc{g} \right)
\right],
\end{eqnarray}
where the expectation operation is performed over the set of realizations of the channel fading coefficients, whose associated optimal power-allocation vectors are members of partition $\mathcal{A}_\ell$. This set is denoted by $\mathcal{G}_\ell$.
The optimal codebook is designed offline by the FC using the above two conditions. It can be shown that the average codebook distortion is monotonically non-increasing through the iterative use of the Centroid Condition and the Nearest-Neighbor Condition~\cite[Chapter 11]{GershoGray92}. Details of the codebook-design process are summarized in Algorithm~I. The optimal codebook is stored in the FC and all sensors.
\begin{table}[!t]
\centering
\newlength{\MyColumnTextWidth}
\setlength{\MyColumnTextWidth}{1\linewidth}
\addtolength{\MyColumnTextWidth}{-1\columnsep}
\begin{tabular}{m{1\MyColumnTextWidth}}
\toprule
\begin{minipage}[c]{1\linewidth}
\centering
ALGORITHM I: The process of optimal codebook design based on the generalized Lloyd algorithm with modified distortion functions.
\end{minipage} \\
\midrule
\alglanguage{pseudocode}
\renewcommand{\alglinenumber}[1]{{\footnotesize #1}.}
\begin{algorithmic}[1]
\algsetblock[Init]{Initialization}{EndInitialization}{6}{0cm}
\Require $K$, $L$, and channel-fading model.
\Require $M$.\Comment{$M$ is the number of {\em training vectors} in space $\mathcal{A}$.}
\Require $\epsilon$.\Comment{$\epsilon$ is the distortion threshold to stop the iterations.}
\Initialization
\State\hspace{\algorithmicindent} \AlgParBox{$\mathcal{G}_s \longleftarrow \ $ A set of $M$ length-$K$ vectors of channel-fading realizations based on the given fading model of the channels between local sensors and the FC.\Comment{$M \gg 2^L$ and $\mathcal{G}_s \subseteq \mathcal{G}$.}}
\State\hspace{\algorithmicindent} \AlgParBox{$\mathcal{A}_s \longleftarrow \ $ The set of optimal local power-allocation vectors associated with the channel fading vectors in $\mathcal{G}_s$, found by applying Eq.~\eqref{Eq:OptimalGain}.\Comment{$\mathcal{A}_s$ is the set of training vectors, and $\mathcal{A}_s \subseteq \mathcal{A}$.}}
\State\hspace{\algorithmicindent} \AlgParBox{{\em Randomly} select $2^L$ optimal power-allocation vectors from the set $\mathcal{A}_s$ as the initial set of codewords. Denote the codewords by $\vc{a}_\ell^0$.
}
\State\hspace{\algorithmicindent} \AlgParBox{$\mx{C}^0 \longleftarrow \left[ \vc{a}_1^0 \;\; \vc{a}_2^0 \;\; \cdots \;\; \vc{a}_{2^L}^0 \right]^T$\Comment{$\mx{C}^0$ is the initial codebook.}}
\State\hspace{\algorithmicindent} \AlgParBox{$j \longleftarrow 0$ and $\text{NewCost} \longleftarrow D_\text{B} \left( \mx{C}^0 \right)$}
\parbox[t]{\dimexpr\linewidth}{\raggedleft \Comment{The average distortion of codebook is found using Eq.~\eqref{Eq:BookDistortion}.}\strut}
\EndInitialization
\Repeat
\State \AlgParBox{$j \longleftarrow j+1$ and $\text{OldCost} \longleftarrow \text{NewCost}$}
\State \AlgParBox{Given codebook $\mx{C}^{j-1}$, optimally partition the set $\mathcal{A}_s$ into $2^L$ disjoint subsets based on the {\em Nearest-Neighbor Condition} using Eq.~\eqref{Eq:PartitionDef}. Denote the resulting optimal partitions by $\mathcal{A}_\ell^{j-1}$.
}
\ForAll{$\mathcal{A}_\ell^{j-1}$, $\ell=1,2,\dotsc,2^L$}
\State \parbox[t]{\dimexpr(\linewidth-\algorithmicindent)-\algorithmicindent}{Find the optimal codeword associated with partition $\mathcal{A}_\ell^{j-1}$ based on the {\em Centroid Condition} using Eq.~\eqref{Eq:CodewordDef}. Denote this new optimal codeword as $\vc{a}_\ell^j$.\strut}
\EndFor
\State \AlgParBox{$\mx{C}^j \longleftarrow \left[ \vc{a}_1^j \;\; \vc{a}_2^j \;\; \cdots \;\; \vc{a}_{2^L}^j \right]^T$\Comment{$\mx{C}^j$ is the new codebook.}}
\State \AlgParBox{$\text{NewCost} \longleftarrow D_\text{B} \left( \mx{C}^j \right)$}
\Until{$\text{OldCost} - \text{NewCost} \leq \epsilon$}
\State \Return $\mx{C}^\mathsf{OPT} \longleftarrow \mx{C}^j$
\end{algorithmic}
\\
\bottomrule
\end{tabular}
\end{table}
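To make the offline design in Algorithm~I concrete, the following sketch runs the same iteration on a toy problem. It is illustrative only: the functions `variance` and `optimal_gain` are hypothetical stand-ins for the paper's BLUE-variance and optimal-gain expressions (which are not reproduced here), and the centroid of each cell is searched over the training set rather than over the whole cell.

```python
import numpy as np

rng = np.random.default_rng(0)
K, L, M = 4, 2, 200                     # sensors, feedback bits, training size

def variance(a, g):
    # Hypothetical stand-in for the BLUE-estimator variance for gains a
    # under channel realization g (not the paper's exact expression).
    return 1.0 / (1.0 + np.sum((a * g) ** 2))

def optimal_gain(g):
    # Hypothetical stand-in for the optimal power-allocation map f(g).
    return g / np.linalg.norm(g)

def d_w(a, g):
    # Conditional codeword distortion |Var(a|g) - Var(a_opt|g)|.
    return abs(variance(a, g) - variance(optimal_gain(g), g))

G = rng.rayleigh(size=(M, K))                    # training channel realizations
A = np.array([optimal_gain(g) for g in G])       # training power-allocation vectors
C = A[rng.choice(M, size=2 ** L, replace=False)].copy()  # initial codebook

old_cost = np.inf
for _ in range(50):                              # Lloyd iterations
    # Nearest-neighbor condition: assign each training vector to the
    # codeword with the smallest conditional distortion.
    dist = np.array([[d_w(c, g) for c in C] for g in G])
    cells = dist.argmin(axis=1)
    new_cost = dist.min(axis=1).mean()           # average codebook distortion
    if old_cost - new_cost <= 1e-6:
        break
    old_cost = new_cost
    # Centroid condition: in each cell, keep the training vector that
    # minimizes the average distortion over the cell's channel realizations.
    for l in range(2 ** L):
        idx = np.flatnonzero(cells == l)
        if idx.size:
            costs = [np.mean([d_w(A[j], G[i]) for i in idx]) for j in idx]
            C[l] = A[idx[np.argmin(costs)]]
```

With the codebook fixed, the FC only ever transmits a codeword index, so the feedback rate is exactly $L$ bits per channel realization.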
Upon observing a realization of the channel fading vector $\vc{g}$, the FC finds its associated optimal power-allocation vector $\vc{a}^{\mathsf{OPT}}$. It then identifies the closest codeword in the optimal codebook $\mx{C}$ to $\vc{a}^{\mathsf{OPT}}$ with respect to the conditional codeword distortion defined in~\eqref{Eq:WordDistortion}. Finally, the FC broadcasts the {\em index} of that codeword to all sensors as
\begin{eqnarray}
\ell
& = &
\argmin{ k \in \left\{ 1,2,\dotsc, 2^L \right\}, \vc{a}_k \in \mx{C} }
D_\text{W} \left( \vc{a}_k | \vc{g} \right).
\end{eqnarray}
Upon reception of the index $\ell$, each sensor $i$ knows its local amplification gain
as $\left[ \mx{C} \right]_{\ell,i}$, where $\ell$ and $i$ are the row and column indexes of the codebook $\mx{C}$, respectively.
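At run time the interaction reduces to a nearest-codeword search at the FC and a table lookup at each sensor. A minimal sketch of this step (using a hypothetical codebook and the same hypothetical distortion stand-in as above, since the paper's variance expression is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
K, L = 4, 2
C = np.abs(rng.standard_normal((2 ** L, K)))   # stored codebook (hypothetical)

def d_w(a, g):
    # Hypothetical stand-in for the conditional codeword distortion.
    a_opt = g / np.linalg.norm(g)
    var = lambda x: 1.0 / (1.0 + np.sum((x * g) ** 2))
    return abs(var(a) - var(a_opt))

g = rng.rayleigh(size=K)                       # backward CSI observed at the FC
ell = int(np.argmin([d_w(C[l], g) for l in range(2 ** L)]))
# The FC broadcasts the L-bit index ell; sensor i then reads its own gain:
gain_of_sensor_0 = C[ell, 0]
```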
\section{Numerical Results}
\label{Sec:NumResults}
In this section, numerical results are provided to verify the effectiveness of the proposed limited-feedback strategy in achieving a BLUE-estimator variance close to that of a WSN with full feedback. In our simulations,
\ifbool{HomogeneousWSN}{}{we have set $\sigma^2_{\theta} = 1$ and }the local observation gains are randomly chosen from a Gaussian distribution with unit mean and variance 0.09, i.e., $h_i \sim \mathcal{N} \left( 1, 0.09 \right)$. In all simulations, the average power of $h_i$ across all sensors is set to be 1.2. \ifbool{HomogeneousWSN}{The observation and channel noise variances are set to $\sigma_{\text{o}i}^2 = 10 \text{ dBm}$ and $\sigma_{\text{c}i}^2 = -90 \text{ dBm}$, respectively.}{The observation noise variances $\sigma_{\text{o}i}^2$ are uniformly selected from the interval $(0.05,0.15)$ such that the average power of the noise variances across all sensors is kept at 0.01. The channel noise variance for all sensors is set to $\sigma_{\text{c}i}^2 = -90 \text{ dBm}$, $i = 1, 2, \dotsc, K$.} The following fading model is considered for the channels between local sensors and the FC:
\begin{eqnarray}
g_i
& = &
\eta_0 \left( \frac{d_i}{d_0} \right)^{-\frac{\alpha}{2}} f_i,
\qquad
i = 1, 2, \dotsc, K,
\end{eqnarray}
where $\eta_0 = -30 \text{ dB}$ is the nominal path gain at the reference distance $d_0 = 1$ meter, $d_i$ is the distance between sensor $i$ and the FC, which is uniformly distributed between 50 and 150 meters,
$\alpha=2$ is the path-loss exponent, and the $f_i$ are independent and identically distributed (i.i.d.) Rayleigh-fading random variables with unit variance.
The size of the training set in the optimal codebook-design process described in Algorithm~I is set to $M =$ 5,000. The codebook-distortion threshold for stopping the iterative algorithm is assumed to be $\epsilon = 10^{-6}$. The results are averaged over 50,000 Monte-Carlo simulations.
\begin{figure}[!t]
\centering
\subfloat[The number of sensors in the network is $K = 5$.]{\label{Fig:K5_L}\includegraphics[width=0.92\linewidth,page=1]{Figures/K5_L}} \\
~
\subfloat[The number of sensors in the network is $K = 10$.]{\label{Fig:K10_L}\includegraphics[width=0.92\linewidth,page=1]{Figures/K10_L}}
\caption{\label{Fig:BlueVar_K} Average BLUE-estimator variance versus the total transmit power $P_{\mathsf{Total}}$ for different values of the number of feedback bits $L$.
}
\end{figure}
Figure~\ref{Fig:BlueVar_K} illustrates the effect of $L$ as the number of feedback bits from the FC to local sensors on the performance of the BLUE estimator. It should be emphasized that $L$ is the total number of feedback bits broadcast by the FC, and not the number of bits fed back to each sensor. This figure depicts the average BLUE-estimator variance
versus the total transmit power $P_{\mathsf{Total}}$ for different values of the number of feedback bits $L$, when there are $K=5$ or $K=10$ sensors in the network, shown in Figs.~\ref{Fig:K5_L} and~\ref{Fig:K10_L}, respectively. The results for the case of full feedback from the FC to local sensors proposed in~\cite{Cui07Diversity} are shown with solid lines as a benchmark. As can be seen in this figure, the performance of the BLUE estimator with limited feedback is close to that with full feedback, and gets closer to it as the number of feedback bits is increased from 2 to 4.
In Figure~\ref{Fig:L2_K}, similar results are shown to illustrate the effect of the number of sensors in the network on the performance of the BLUE estimator.
The number of feedback bits for the limited-feedback case is $L=2$. As expected, the average BLUE-estimator variance decreases substantially as the number of sensors increases.
\begin{figure}[t]
\centering
\includegraphics[width=0.93\linewidth]{Figures/L2_K}
\caption{Average BLUE-estimator variance versus the total transmit power $P_{\mathsf{Total}}$ for different values of the number of sensors $K$. The number of feedback bits
is $L=2$.}
\label{Fig:L2_K}
\end{figure}
\section{Conclusions}
\label{Sec:Conclusions}
In this paper, a limited-feedback strategy was proposed to be applied in an adaptive power-allocation scheme for distributed BLUE estimation of a random scalar parameter at the FC of a WSN. The proposed approach eliminates the requirement for infinite-rate feedback of the instantaneous forward CSI from the FC to local sensors in order for them to find their optimal local amplification gains. The generalized Lloyd algorithm with modified distortion functions was used to quantize the vector space of the optimal local amplification gains and to design an optimal codebook for this space. Upon observing the CSI, the FC broadcasts the index of the closest codeword to the corresponding optimal power-allocation vector, rather than feeding back the exact instantaneous CSI. Numerical results showed that even with a small number of feedback bits, the average estimation variance of the BLUE estimator with adaptive power allocation based on the proposed limited-feedback strategy is close to that with perfect CSI feedback.
\appendices
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\fontsize{9.4}{12}
\selectfont
\bibliographystyle{IEEEtran}
\section{Introduction}
In \cite{QuantumWielandt}, Sanz, Perez-Garcia, Wolf and Cirac proved a quantum version of the Wielandt inequality. This theorem was motivated by the study of Matrix Product States and conjectures stated in \cite{PerezGarciaVerstraete}. The quantum Wielandt theorem gives upper bounds on the support of parent Hamiltonians for injective matrix product states, which was the final piece missing for proving that the manifold of MPS is in one-to-one correspondence with ground states of local Hamiltonians \cite{fannes1992finitely}.\\
In mathematical terms, the quantum Wielandt theorem can be stated as follows:\\
Let $\mathcal{A}=(A_1,\ldots,A_d)$ be a $d$-tuple of $D \times D$-matrices, and assume that there is an $N$ such that the linear span of $\{A_{i_1}\cdots A_{i_N} | 1 \leq i_j \leq d\}$ equals the space of $D \times D$-matrices. Then already for $N=C(D,d):=(D^2-d+1)D^2$, the linear span of $\{A_{i_1}\cdots A_{i_N} | 1 \leq i_j \leq d\}$ equals the space of $D \times D$-matrices.
The bound $C(D,d)$ was recently improved to $O(D^2\log D)$ in \cite{MichalekShitov} and is conjectured to be $O(D^2)$ \cite{PerezGarciaVerstraete}. \\
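The statement is easy to probe numerically: for a generic tuple, the span of the length-$N$ words already saturates far below the bound. The following check (illustrative only, on random instances) computes the span dimension for small $N$:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
D, d = 2, 2
A = [rng.standard_normal((D, D)) for _ in range(d)]

def span_dim(N):
    # Dimension of the linear span of all d**N words A_{i_1} ... A_{i_N}.
    words = [w[0] if N == 1 else np.linalg.multi_dot(list(w))
             for w in product(A, repeat=N)]
    return int(np.linalg.matrix_rank(np.array([w.ravel() for w in words])))

dims = [span_dim(N) for N in range(1, 4)]
# For a generic tuple the span reaches all D x D matrices well before
# the bound C(D, d) = (D**2 - d + 1) * D**2 (= 12 in this instance).
```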
The 2-dimensional generalizations of MPS are called projected entangled pair states (PEPS), and play a central role in the classification of the different quantum phases of spin systems defined on two-dimensional grids. PEPS are much more complex than MPS: just as MPS can be understood in terms of completely positive maps on matrices, PEPS deal with completely positive maps on tensors, for which no analogues of eigenvalue and singular value decompositions exist. It has been a long standing open question in the field of quantum tensor networks whether an analogue of the quantum Wielandt theorem exists for PEPS, which is the missing piece in proving that every PEPS has a parent Hamiltonian with finite support. This paper proves the existence of such a theorem, albeit in a weaker form than for MPS as the upper bound in nonconstructive. In physics terms, it is proven that the notion of injectivity for PEPS is well defined, in the sense that there is only a finite amount of blocking needed for the map from the virtual to the physical indices to become injective.\\
The bounds for quantum Wielandt theorem in \cite{QuantumWielandt, MichalekShitov, rahaman2018new} were obtained using explicit methods from linear algebra.
Our main new insight is the application of nonconstructive Noetherian arguments from non-linear algebra.
For matrix product states, this gives an easy proof of Conjecture 1 in \cite{PerezGarciaVerstraete}.
\section{PEPS and injective regions}
By a \emph{grid} we mean a triple $G=(\mathcal{V},E_I,E_O)$ where $\mathcal{V},E_I,E_O$ are finite sets, respectively called vertices, inner edges and outgoing edges. We will write $E=E_I \cup E_O$. One can think of a grid as (a part of) a system of particles, where the vertices are the particles and the edges indicate interaction between the particles. Every inner edge $e$ may be identified with a two element subset $\{v_1,v_2\}$ of $\mathcal{V}$. Every outgoing edge distinguishes one element of $\mathcal{V}$, which we call its endpoint. For $v$ a vertex, $e(v)$ will denote the set of edges incident with that vertex.
Let $G$ be a grid. To every edge $e$ of $G$ we associate a finite-dimensional $\mathbb{C}$-vector space $V_e$, equipped with an inner product.
To every vertex $v$ of $G$, we associate two $\mathbb{C}$-vector spaces: the \emph{virtual space} $V_v:=\bigotimes_{e \in e(v)} V_e$, and the \emph{physical space} $W_v$, which represents the physical state of the particle.\\
Let $(v \mapsto A_v)_{v \in G}$ be a function that assigns to every vertex $v$ of $G$ a tensor $A_v \in V_v$. We then obtain a new tensor in $\bigotimes_{e\in E_O}V_e$ by contracting along all inner edges of $G$. This tensor will be denoted by $\mathcal{C}[(v \mapsto A_v)_{v \in G}]$, or simply by $\mathcal{C}[v \mapsto A_v]$.
\begin{example}
Multiplication of matrices.
Let $G$ be the following grid: \vskip-1cm
\[ \picH \]
with vertices $1$ and $2$.
Then $\mathcal{C}[(v \mapsto A_v)_{v \in G}]$ is simply the matrix product $A_1A_2$.
\end{example}
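In code, the contraction map $\mathcal{C}[\cdot]$ is just a sum over the inner-edge indices; for the two-vertex grid above it reduces to matrix multiplication, and the same pattern extends to higher-order tensors. A small sketch (not tied to any specific model):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 3
A1, A2 = rng.standard_normal((D, D)), rng.standard_normal((D, D))

# One inner edge between the two vertices: contraction = matrix product.
assert np.allclose(np.einsum('ij,jk->ik', A1, A2), A1 @ A2)

# A 2D analogue: two order-4 tensors (legs left, right, up, down),
# contracted along the shared horizontal edge; six legs stay open.
T1 = rng.standard_normal((D, D, D, D))
T2 = rng.standard_normal((D, D, D, D))
pair = np.einsum('lrud,rRUD->ludRUD', T1, T2)
```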
\noindent If we assign to every vertex $v$ a linear map $\phi_v: V_v \to W_v$, we obtain a linear map
\[
\bigotimes_{e \in E_O}{V_e} \to \bigotimes_{e \in E_O}{V_e} \otimes \bigotimes_{e \in E_I}{(V_e \otimes V_e)} \cong \bigotimes_{v\in \mathcal{V}}{V_v} \to \bigotimes_{v\in \mathcal{V}}{W_v}
\]
where the first map is induced by the inner product on $V_e$. If our grid has no outgoing edges, this corresponds to a physical state of our system. Any state that arises in this way is called a \emph{projected entangled pair state (PEPS)}.\\
We now give a description in coordinates: write $d_v= \dim W_v$, $D_e = \dim V_e$, and suppose $\phi_v$ is given by
\[
\sum_{i=1}^{d_v}\sum_{\boldsymbol{j}}{A^{(v)}_{i,\boldsymbol{j}} \ket{i}\bra{\boldsymbol{j}}}
\]
where the second sum is over all tuples $\boldsymbol{j}=(j_e)_{e \in e(v)}$, $1 \leq j_e \leq D_e$. Then our map is
\[
\sum_{i_1,i_2,\ldots =1}^{d}{\mathcal{C}[v \mapsto A^{(v)}_{i_v}]}^*\ket{i_1,i_2,\ldots} \text{.}
\]
\begin{definition}
If the above map is injective, we say that $(G,\{\phi_v\}_{v \in \mathcal{V}})$ is an \emph{injective region}. Equivalently, $(G,\{\phi_v\}_{v \in \mathcal{V}})$ is an injective region if and only if the tensors ${\mathcal{C}[v \mapsto A^{(v)}_{i_v}]}$ span the whole space $\bigotimes_{e \in E_O}{V_e}$.
\end{definition}
\section{Main theorem}
We fix natural numbers $n$ (grid dimension), $D$ (bond dimension), $d$ (physical dimension). \\
For $N_1,\ldots,N_n \in \mathbb{N}$, we let $G=G(N_1,\ldots,N_n)$ be the $n$-dimensional square grid of size $N_1 \times \ldots \times N_n$. In particular, every vertex has degree $2n$.
The grid $G(3,5)$ is presented below:
$$\picG .$$
We will denote the outgoing edges of $G$ by $(\boldsymbol{j},\pm \mathbf{e}_i)$, where $\boldsymbol{j}$ is a vertex on the boundary of the grid, and $\pm \mathbf{e}_i$ indicates the direction of the outgoing edge.\\
To every edge $e$ we associate the vector space $V_e = V = \mathbb{C}^D$. We stress that $D$ is now the same for every edge.
Now we can identify all virtual spaces $V_v=\bigotimes_{e\in e(v)} V_e=(\mathbb{C}^D)^{\otimes \deg(v)}=(\mathbb{C}^D)^{\otimes 2n}$ in the obvious way: the tensor factor of $(\mathbb{C}^D)^{\otimes \deg(v)}$ associated to an edge out of $v$ will be identified with the tensor factor of $(\mathbb{C}^D)^{\otimes \deg(w)}$ associated to the edge out of $w$ pointing in the same direction.
We also identify all physical spaces $W_v$ with a fixed $d$-dimensional vector space.\\
Having done these identifications, we can now associate to every vertex the same linear map $\phi_{\mathcal{A}} = \sum_{i=1}^{d}\sum_{\boldsymbol{j}}{A_{i,\boldsymbol{j}} \ket{i}\bra{\boldsymbol{j}}}$, where $\mathcal{A} = (A_1,\ldots,A_d)$ is a collection of $d$ tensors $A_i \in (\mathbb{C}^D)^{\otimes2n}$. From now on, we will assume we have a fixed $\mathcal{A}$, and let the size of the grid $G=G(N_1,\ldots,N_n)$ vary.
\begin{definition}
We say that \emph{$G$ is an injective region for $\mathcal{A}$} if $(G,\{\phi_{\mathcal{A}}\}_{v \in \mathcal{V}})$ is an injective region.
Explicitly, $G$ is an injective region for $\mathcal{A}$, if the tensors $\mathcal{C}[(v \mapsto A_{i_v})_{v \in G}]$ linearly span the whole space $(\mathbb{C}^D)^{\otimes E_O(G)}$, when we consider all possible ways of placing a tensor from $\mathcal{A}$ at every vertex of $G$.\\
If $\mathcal{A}$ has an injective region, we say that $\mathcal{A}$ is \emph{injective}.
\end{definition}
\begin{remark}\label{rem:subspace}
We note that being an injective region for $G$ and being injective are properties of the linear span of $\mathcal{A}$, not a particular choice of tensors $A_i$.
\end{remark}
In the following lemma we prove that being an injective region is stable under extension of the grid.
\begin{lemma} \label{lemma:biggergrid}
Let $G_1 \subseteq G_2$ be $n$-dimensional square grids. If $G_1$ is an injective region for $\mathcal{A}$, then so is $G_2$.
\end{lemma}
\begin{proof}
By induction, we may assume that $G_1=G(N_1-1,N_2,\ldots,N_n)$ and $G_2=G(N_1,N_2,\ldots,N_n)$. If $N_1=2$ the statement is true, because $G_2$ is the union of two injective regions, cf.~\cite[Lemma 1]{PEPS}. Thus we assume $N_1>2$.
The vertices of $G_2$ will be identified with vectors $\boldsymbol{j} = (j_1,\ldots,j_n) \in \mathbb{N}^n$, with $1 \leq j_i \leq N_i$. Such a vertex is in $G_1$ if additionally $j_1 \leq N_1-1$.
We need to show that every tensor $T \in V^{\otimes E_O(G_2)}$ can be written as a linear combination of tensors of the form $\mathcal{C}[(\boldsymbol{j} \mapsto A_{i_{\boldsymbol{j}}})_{\boldsymbol{j}\in G_2}]$. In fact it is enough to show this for rank one tensors $T$, since every tensor is a linear combination of rank one tensors.\\
We can identify $E_O(G_1)$ with a subset of $E_O(G_2)$ as follows: to an outgoing edge $(\boldsymbol{j},\pm \mathbf{e}_i)$ of $E_O(G_1)$, we associate $(\boldsymbol{j},\pm \mathbf{e}_i)$ if $\pm \mathbf{e}_i \neq +\mathbf{e}_1$, and $(\boldsymbol{j}+\mathbf{e}_1,+\mathbf{e}_1)$ if $\pm \mathbf{e}_i= +\mathbf{e}_1$. Assuming $T$ has rank one, we have $T=T_1 \otimes T_2 \in V^{\otimes E_O(G_1)} \otimes V^{\otimes r}$, where $r$ equals the cardinality of $E_O(G(N_2,\dots,N_n))$.\\
By assumption we can write $T_1$ as a linear combination of tensors of the form $\mathcal{C}[(\boldsymbol{j} \mapsto A_{i_{\boldsymbol{j}}})_{\boldsymbol{j}\in G_1}]$.
Let $G_1'$ be the grid obtained from $G_1$ by contracting all inner edges among vertices $\boldsymbol{j}$ for which $j_1 >0$. In particular, all vertices with $j_1>0$ get identified to a single vertex $v_1$.
Then $T_1$ is in particular a linear combination of tensors of the form $\mathcal{C}[(v \mapsto B_v)_{v \in G'_1}]$, where $B_v=A_{i_v}$ if $v$ is one of the vertices that did not get contracted, and $B_{v_1}$ is some tensor. \\
Consider the tensors $B_{v_1}\otimes T_2\in V^{\otimes |E_O(G_1)|}$. By assumption each one is a linear combination of tensors of the form $\mathcal{C}[(\boldsymbol{j} \mapsto A_{k_{\boldsymbol{j}}})_{\boldsymbol{j}\in G_1}]$, where now we identified $G_1$ with the subgrid of $G_2$ consisting of all vertices $\boldsymbol{j}$ with $j_1 > 0$.
Thus, we see that $T$ is a combination $\mathcal{C}[(\boldsymbol{j} \mapsto A_{s_{\boldsymbol{j}}})_{\boldsymbol{j}\in G_2}]$
where $s$ may be identified with $i$ above for $\boldsymbol{j}$ such that $j_1=0$ and with $k$ when $j_1>0$. The pictorial representation of the proof can be found below, where to each small box some element of $\mathcal{A}$ is associated.
\end{proof}
\[
\picA = \picB \otimes \picC
\]
\[
\picB=\picD=\picE
\]
\[
\picA = \picF = \picG
\]
Our main theorem says that if $\mathcal{A}$ is injective, then there exists an injective region of bounded size (where the bound only depends on our parameters $D,d,n$). More precisely:
\begin{theorem} \label{mainthm}
Let $G_1\subset G_2 \subset \cdots \subset G_k \subset \cdots$ be a chain of $n$-dimensional grids. Then there exists a constant $C$ (depending on $D$ and $d$) such that the following holds:
If $\mathcal{A} \in ((\mathbb{C}^D)^{\otimes2n})^d$ is chosen so that for some $k$, $G_k$ is an injective region for $\mathcal{A}$, then already $G_C$ is an injective region for $\mathcal{A}$.
\end{theorem}
\begin{proof}
For any grid $G$ and $\mathcal{A} \in ((\mathbb{C}^D)^{\otimes2n})^d$ , we write $S_G(\mathcal{A}) := \{\mathcal{C}[(v \mapsto A_{i_v})_{v \in G}]\} \subseteq (\mathbb{C}^D)^{\otimes E_O(G)}$, and $V_G := \{\mathcal{A} \in ((\mathbb{C}^D)^{\otimes2n})^d | \Span(S_G(\mathcal{A})) \subsetneq (\mathbb{C}^D)^{\otimes E_O(G)} \} $. Thus, $G$ is an injective region for $\mathcal{A}$ if and only if $\Span(S_G(\mathcal{A})) = (\mathbb{C}^D)^{\otimes E_O(G)}$ if and only if $\mathcal{A} \notin V_G$.\\
By Lemma \ref{lemma:biggergrid}, it holds that $V_{G_1} \supseteq V_{G_2} \supseteq \cdots \supseteq V_{G_k} \supseteq \cdots$
We need to show that this chain eventually stabilizes. We will show that every $V_{G_k}$ is a Zariski closed subset of $((\mathbb{C}^D)^{\otimes2n})^d$, i.e.\ that it is the zero locus of a system of polynomials. This will finish the proof by the Hilbert Basis Theorem. \\
Fix a grid $G=G_k$. For every $\mathcal{A} \in ((\mathbb{C}^D)^{\otimes2n})^d$, we can build a $D^{E_O(G)} \times d^{|\mathcal{V}(G)|}$ matrix $M_\mathcal{A}$ whose entries are the coefficients of the elements of $S_G(\mathcal{A})$. The condition $\Span(S_G(\mathcal{A})) \subsetneq (\mathbb{C}^D)^{\otimes E_O(G)}$ is equivalent to $M_{\mathcal{A}}$ having rank smaller than $D^{E_O(G)}$. The entries of $M_{\mathcal{A}}$ are polynomials in the entries of $\mathcal{A}$. Hence, the condition $\mathcal{A} \in V_G$ can be expressed as the vanishing of certain polynomials ($D^{E_O(G)}$-minors of $M_\mathcal{A}$) in the entries of $\mathcal{A}$. Hence, $V_G$ is a Zariski closed subset of $((\mathbb{C}^D)^{\otimes2n})^d$.
\end{proof}
Theorem \ref{mainthm} can be reformulated as follows:
\begin{theorem} \label{mainthm2}
There exists a finite collection of grids $G_1,\ldots,G_M$ (depending on $n,D,d$) such that the following holds: \\
If $\mathcal{A} \in ((\mathbb{C}^D)^{\otimes2n})^d$ is injective, then one of the $G_i$ is an injective region for $\mathcal{A}$.
\end{theorem}
The equivalence of Theorem \ref{mainthm} and Theorem \ref{mainthm2} follows from the following general lemma:
\begin{lemma} \label{lemma:eq}
Let $\mathcal{P}$ be a partially ordered set. We consider $\mathbb{N}^n$ with the coordinatewise partial order. Let $f: \mathbb{N}^n \to \mathcal{P}$ be a map such that
\begin{enumerate}
\item $\boldsymbol{a}_1 \leq \boldsymbol{a}_2 \implies f(\boldsymbol{a}_1) \geq f(\boldsymbol{a}_2)$.
\item For every chain $\boldsymbol{a}_1 < \boldsymbol{a}_2 < \ldots$ in $\mathbb{N}^n$, the chain $f(\boldsymbol{a}_1) \geq f(\boldsymbol{a}_2) \geq \ldots$ stabilizes after finitely many steps.
\end{enumerate}
Then there is a finite subset $B$ of $\mathbb{N}^n$ such that for any $\boldsymbol{a} \in \mathbb{N}^n$, there is a $\boldsymbol{b} \in B$ with $\boldsymbol{a} \geq \boldsymbol{b}$ and $f(\boldsymbol{a})=f(\boldsymbol{b})$.
\end{lemma}
\begin{proof}
We first claim that there is a $\boldsymbol{b}_0 \in \mathbb{N}^n$ such that $f(\boldsymbol{a})=f(\boldsymbol{b_0})$ for every $\boldsymbol{a} \geq \boldsymbol{b}_0$. Indeed, if there was no such $\boldsymbol{b}_0$ we could build an infinite chain $\boldsymbol{a}_1 < \boldsymbol{a}_2 < \ldots$ in $\mathbb{N}^n$ with $f(\boldsymbol{a}_1) > f(\boldsymbol{a}_2) > \ldots$\\
Now we can proceed by induction on $n$: the subset $\{\boldsymbol{a} \in \mathbb{N}^n | \boldsymbol{a} \ngeq \boldsymbol{b}_0 \}$ can be written as a finite union of hyperplanes, each of which can be identified with $\mathbb{N}^{n-1}$. By the induction hypothesis, in each such hyperplane $H \subset \mathbb{N}^n$ there is a finite subset $B_H \subset H$ such that for any $\boldsymbol{a} \in H$, there is a $\boldsymbol{b} \in B_H$ with $\boldsymbol{a} \geq \boldsymbol{b}$ and $f(\boldsymbol{a})=f(\boldsymbol{b})$.\\
We define $B$ as $b_0$ together with the union of all $B_H$.
\end{proof}
To deduce Theorem \ref{mainthm2} from Theorem \ref{mainthm}, we apply Lemma \ref{lemma:eq} by identifying $\mathbb{N}^n$ with the collection of $n$-dimensional grids and taking $\mathcal{P}$ to be the poset of subsets of $((\mathbb{C}^D)^{\otimes2n})^d$ ordered by inclusion.
We consider $f: G \mapsto V_G$, where $V_G$ was defined in the proof of Theorem \ref{mainthm}.
We note that the constants in Theorem \ref{mainthm} and \ref{mainthm2} can be chosen independent of $d$.
\begin{corollary}
For any $n$ and $D$ there exists a finite collection of grids $G_1,\ldots,G_M$ such that the following holds: \\
For any $d$, if $\mathcal{A} \in ((\mathbb{C}^D)^{\otimes2n})^d$ is injective, then one of the $G_i$ is an injective region for $\mathcal{A}$.
\end{corollary}
\begin{proof}
By Remark \ref{rem:subspace} it is enough to consider the subspaces $\langle \mathcal{A}\rangle\subset((\mathbb{C}^D)^{\otimes2n})$. In particular the dimension of the subspaces is bounded by $D^{2n}$ and for each fixed dimension we obtain a finite number of grids by Theorem \ref{mainthm2}.
\end{proof}
Further we have the following computational implication.
\begin{corollary}
There exists an algorithm to decide if $\mathcal{A}$ is injective.
\end{corollary}
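To make the decision procedure concrete in the familiar $n=1$ (matrix product state) case, the following is a minimal numerical sketch (ours, not from the text): a region of length $L$ is injective precisely when the matrix of the blocked tensor, mapping the $D^2$-dimensional boundary space into the $d^L$-dimensional physical space, has full column rank.

```python
import numpy as np

def blocked_map(A, L):
    """Matrix of the length-L blocked region of an MPS tensor A of shape (d, D, D):
    rows index physical strings (i_1, ..., i_L), columns index boundary pairs (a, b)."""
    d, D, _ = A.shape
    B = np.eye(D).reshape(1, D, D)
    for _ in range(L):
        # Extend the block by one site: B[(..., i), a, c] = sum_b B[..., a, b] * A[i, b, c]
        B = np.einsum('pab,ibc->piac', B, A).reshape(-1, D, D)
    return B.reshape(-1, D * D)

def is_injective_region(A, L):
    """A length-L region is injective iff the blocked map has full rank D**2."""
    D = A.shape[1]
    return np.linalg.matrix_rank(blocked_map(A, L)) == D * D
```

For instance, the tensor with $A^0$ upper triangular and $A^1$ lower triangular (used in the test below) is not injective on a single site, since $d < D^2$, but becomes injective on a region of two sites.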
For the case of dimension $n=1$ our result proves \cite[Conjecture 1]{PerezGarciaVerstraete}; an effective version of this conjecture was proved in \cite{QuantumWielandt}.
\titlespacing\section{0pt}{4pt plus 0pt minus 2pt}{0pt plus 2pt minus 2pt}
\usepackage{layouts}
\setlength{\textfloatsep}{5pt}
\setlength{\floatsep}{5pt}
\usepackage[font=small,skip=5pt]{caption}
\newcommand{\IEEEauthorrefmark}{\IEEEauthorrefmark}
\newcommand\blfootnote[1]{%
\begingroup
\renewcommand\thefootnote{}\footnote{#1}%
\addtocounter{footnote}{-1}%
\endgroup
}
\begin{document}
\title{GPU Acceleration for Synthetic Aperture Sonar Image Reconstruction}
\makeatletter
\patchcmd{\@maketitle}
{\addvspace{0.5\baselineskip}\egroup}
{\addvspace{-1\baselineskip}\egroup}
{}
{}
\makeatother
\makeatletter
\newcommand{\linebreakand}{%
\end{@IEEEauthorhalign}
\hfill\mbox{}\par
\mbox{}\hfill\begin{@IEEEauthorhalign}
}
\makeatother
\author{
\IEEEauthorblockN{Isaac D. Gerg\IEEEauthorrefmark{1}, Daniel C. Brown\IEEEauthorrefmark{1}, Stephen G. Wagner\IEEEauthorrefmark{1}, Daniel Cook\IEEEauthorrefmark{2}, Brian N. O'Donnell\IEEEauthorrefmark{2}, \\Thomas Benson\IEEEauthorrefmark{3}$^{1}$, Thomas C. Montgomery\IEEEauthorrefmark{1}}\hspace{0.3cm}
\linebreakand
\IEEEauthorblockA{\textit{\IEEEauthorrefmark{1}Applied Research Laboratory} \\
\textit{Pennsylvania State University}\\
State College, PA USA}
\and
\IEEEauthorblockA{\textit{\IEEEauthorrefmark{2}Georgia Tech Research Institute} \\
Atlanta, GA USA}
\and
\IEEEauthorblockA{\textit{\IEEEauthorrefmark{3}Smiths Digital Forge} \\
Fremont, CA USA}
}
\maketitle
\begin{abstract}
Synthetic aperture sonar (SAS) image reconstruction, or beamforming as it is often referred to within the SAS community, comprises a class of computationally intensive algorithms for creating coherent high-resolution imagery from successive spatially varying sonar pings. Image reconstruction is usually performed topside because of the large compute burden necessitated by the procedure. Historically, image reconstruction required significant assumptions in order to produce real-time imagery within an unmanned underwater vehicle's (UUV's) size, weight, and power (SWaP) constraints. However, these assumptions result in reduced image quality. In this work, we describe ASASIN, the Advanced Synthetic Aperture Sonar Imaging eNgine. ASASIN is a time domain backprojection image reconstruction suite utilizing graphics processing units (GPUs), allowing real-time operation on UUVs without sacrificing image quality. We describe several speedups employed in ASASIN allowing us to achieve this objective. Furthermore, ASASIN's signal processing chain is capable of producing 2D and 3D SAS imagery, as we will demonstrate. Finally, we measure ASASIN's performance on a variety of GPUs and create a model capable of predicting performance. We demonstrate our model's usefulness in predicting run-time performance on desktop and embedded GPU hardware.
\end{abstract}
\vspace{-1\baselineskip}
\blfootnote{$^{1}$Work performed while at the Georgia Tech Research Institute.}
\section{Brief Introduction to Synthetic Aperture Sonar}
In echo-based imaging techniques, aperture (or antenna) size is inversely proportional to image resolution. However, there are practical limits to the physical aperture size. From a historical perspective, the radar community addressed these physical limits first, overcoming them through the advent of synthetic aperture processing. In this technique, the platform transmits while moving, and the aperture is synthetically formed by coherently combining the reflected signals over several transmissions. This results in a synthetically lengthened aperture which is much larger than the physical aperture, yielding improved resolution. This type of processing forms the basis of what we understand today as synthetic aperture radar (SAR).
Synthetic aperture sonar (SAS) image formation techniques grew out of traditional SAR methods. However, two key differences between SAS and SAR drive the algorithmic divergence:
\begin{enumerate}
\item Significant differences in the speed of the medium (i.e. $1.5\times10^{3}$ m/s versus $3\times10^{8}$ m/s). This often has to be accounted for in SAS but can be generally ignored in SAR.
\item Platform motion between transmissions can be quite severe in underwater environments. With no global position system (GPS) reference available, platform motion must be estimated from the collected sonar data. Inertial measurement units (IMUs) embedded in UUVs do not provide sufficient accuracy to form the synthetic aperture \cite{bellettini2002theoretical}. It's worth mentioning though that high precision satellite positioning has been used with surface-based SAS systems (i.e. downward looking, surface ship mounted sonars) to provide sufficient positional accuracy \cite{brown2019simulation}.
\end{enumerate}
Operational image formation techniques for SAS and SAR have been developed with computational efficiency in mind, since the time domain backprojection algorithm, though simple to understand and capable of producing quality imagery in high-motion environments, can be quite expensive to compute. Many image reconstruction systems implement either $\omega$-k or fast factorized backprojection based methods due to their computational efficiency, but often at the expense of numerical approximations resulting in reduced image quality in some environments. With the advent of GPUs, time domain backprojection algorithms can now operate in real-time \cite{baralli2013gpu}. For those looking for an in-depth description of challenges specific to SAS image reconstruction, \cite{callow} and \cite{cook2007synthetic} provide good overviews.
This work describes a GPU-accelerated time domain backprojection image reconstruction engine we call ASASIN. ASASIN is capable of creating 2D and 3D SAS imagery.
\textbf{Specifically, this work provides the following technical contributions:}
\begin{enumerate}
\item We describe a processing chain capable of producing high-fidelity 2D and 3D SAS imagery using a time domain backprojection algorithm and outline speedups used to improve the run-time performance.
\item We propose a robust platform motion estimation algorithm capable of generating high-quality imagery even when severe ping-to-ping motion is present.
\item We demonstrate ASASIN's ability to run real-time embedded in a UUV by benchmarking it on a variety of contemporary GPUs including two embedded models.
\end{enumerate}
The sections following provide an overview of the time domain backprojection method for image reconstruction, the ASASIN implementation of our image reconstruction pipeline for efficient computation on GPUs, and experimental results comparing ASASIN compute performance on a variety of commodity GPU hardware.
\section{Proposed Method: ASASIN}
ASASIN consists of a framework that executes a modular sequence of algorithms for SAS image reconstruction. The input to ASASIN is a configuration file containing all of the processing parameters and time-series hydrophone array recordings. The outputs are image data products, debug products, and processing logs. \figurename \ref{high_level_diagram_asasin} shows this schematically. The configuration file guarantees reproducibility in the processing as well as the ability to quickly compare results from different parameter settings. Input hydrophone time-series data can be in a variety of formats as long as an appropriate reader, dictated by the configuration file, is present. All data readers are derived from a base class and implement the necessary methods to read data and navigation information. ASASIN outputs several types of data products including GeoTIFF (multi-layer GeoTIFF for the 3D imaging case).
\begin{figure}[t]
\includegraphics[width=0.75\linewidth]{block_diagram.png}
\centering
\caption{High-level diagram of the ASASIN processor. ASASIN ingests the raw sonar data along with a configuration file containing the processing parameters. During processing, ASASIN outputs a processing log containing performance metrics, processing path choices, and data issues. Also output are several debug products used to assess the processing quality. At the end of processing, the final output is usually an image data product like GeoTIFF.}
\label{high_level_diagram_asasin}
\end{figure}
\subsection{Time Domain Backprojection Algorithm}
Image reconstruction with a backprojection algorithm requires inverting the acoustic signals received by the sonar to estimate the seafloor acoustic reflectivity. The time-series that is inverted is modeled as Eq. \ref{eqn:backprojection} and depicted for a single pixel in \figurename \ref{fig:scene_forward_model}. The backprojection algorithm seeks to find $\sigma(\mathbf{x})$ from $e(t, \mathbf{x}_{RX}(t))$ through inversion. Often the sonar system is not calibrated so estimating $\sigma (\cdot)$ exactly is not possible. However, one can still form a suitable image by making very simplistic assumptions on the form of $\sigma(\cdot)$.
Any pair of pixel locations in Eq. \ref{eqn:backprojection} can be computed independently, allowing the operation to be highly parallelized, and thus it is suitable for GPU acceleration.
\begin{table*}
\centering
\begin{minipage}{\textwidth}
\begin{equation}
e(t,\mathbf{x}_{RX}(t)) \approx \int \frac{\sigma(\mathbf{x})}{\norm{\mathbf{x}_{TX}-\mathbf{x}} \norm{\mathbf{x}_{RX}(t)-\mathbf{x}}} \cdot q\left(t - \frac{\norm{\mathbf{x}_{TX}-\mathbf{x}} + \norm{\mathbf{x}_{RX}(t)-\mathbf{x}} }{c}\right) d\mathbf{x}
\label{eqn:backprojection}
\end{equation}
where $e(t, \mathbf{x}_{RX}(t))$ represents the time-series of the array, $t$ is time, $\mathbf{x}_{TX}$ is the transmitter location (we assume it's stationary during transmit), $\mathbf{x}_{RX}(t)$ is the receiver location, $\mathbf{x}$ is the scattering location, $\sigma$ is the acoustic scattering cross-section function \cite{hunter2003simulation}, $c$ is the speed of sound, and $q$ is the transmitted signal waveform. \figurename \ref{fig:scene_forward_model} gives a depiction of this process for one pixel of the integration.
\medskip
\hrule
\end{minipage}
\end{table*}
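As a concrete CPU illustration of inverting Eq. \ref{eqn:backprojection} by delay-and-sum, the sketch below accumulates one ping into an image using nearest-sample lookup and a crude inversion of the spherical-spreading term. It ignores intra-pulse receiver motion, matched filtering, and sub-sample interpolation; all names are illustrative rather than ASASIN's.

```python
import numpy as np

def backproject_ping(ts, fs, t0, tx, rx, pixels, c=1500.0):
    """Accumulate one ping into an image: for each pixel, look up the sample at
    the two-way travel time and undo the 1/(r_tx * r_rx) spreading of Eq. (1).

    ts: complex time series; fs: sample rate (Hz); t0: time of first sample;
    tx, rx: (3,) transmitter/receiver positions; pixels: (N, 3) scene positions."""
    r_tx = np.linalg.norm(pixels - tx, axis=1)
    r_rx = np.linalg.norm(pixels - rx, axis=1)
    idx = np.rint(((r_tx + r_rx) / c - t0) * fs).astype(int)
    ok = (idx >= 0) & (idx < len(ts))
    out = np.zeros(len(pixels), dtype=complex)
    out[ok] = ts[idx[ok]] * (r_tx * r_rx)[ok]  # invert spherical spreading loss
    return out
```

Summing the returned contributions over all pings yields the (uncalibrated) estimate of $\sigma(\mathbf{x})$; each pixel is independent, which is exactly the parallelism the GPU kernel exploits.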
\begin{figure}[t]
\includegraphics[width=0.95\linewidth]{sonar_forward_model.png}
\centering
\caption{Depiction of the forward model described by Eq. \ref{eqn:backprojection}. ASASIN uses a time domain backprojection algorithm for image reconstruction where the forward model is inverted to solve for $\sigma(\mathbf{x})$ for each pixel. Since each pixel in the integration can be processed independently, this formulation lends itself to being straightforward to parallelize on a GPU.}
\label{fig:scene_forward_model}
\end{figure}
\subsection{Coordinate System Description of ASASIN}
Image reconstruction is formally a navigation problem linking several coordinate systems. ASASIN uses a north-east-down (NED) coordinate system denoted by $x, y, \text{and } z$ respectively, shown in \figurename \ref{fig:scene_geometry}. The transducer elevation beamwidth, measured at full-width half-maximum (FWHM) power, is denoted by $\phi$; the azimuthal beamwidth, also FWHM, is denoted by $\theta$. Attitude manipulation of points in this space is given by the transformation of Eq. \ref{eqn:rollpitchyaw}. The imaging grid for a stripmap collection is shown in \figurename \ref{fig:scene_geometry}. Notice that the first ping and last ping positions are outside the imaging grid. This is done intentionally so that the far range corners of the image have full angular support from the beam during the backprojection process, making the image resolution constant throughout the scene.
We place the origin of the scene directly below the vehicle and on the seafloor of the very first ping position. This convention yields two noteworthy attributes:
\begin{enumerate}
\item The position of the first ping is directly above the origin and thus has a negative $z$ position due to the definition of the origin and our NED coordinate system. (The sonar platform operates at -$z$ altitude.)
\item The start of the imaging grid is not that origin but an x-position which is a function of the sensor beamwidth as shown in \figurename \ref{fig:top_down}.
\end{enumerate}
\begin{table*}
\centering
\begin{minipage}{\textwidth}
\begin{equation}
\boldsymbol{\widetilde{p}} = \begin{bmatrix}
\cos(\theta) \cos(\psi) & \cos(\psi) \sin(\theta) \sin(\phi) - \cos(\phi) \sin(\psi) & \cos(\phi) \cos(\psi) \sin(\theta) + \sin(\phi)\sin(\psi) \\
\cos(\theta) \sin(\psi) & \cos(\phi) \cos(\psi) + \sin(\theta) \sin(\phi) \sin(\psi) & \cos(\phi) \sin(\theta) \sin(\psi) - \cos(\psi)\sin(\phi) \\
-\sin(\theta) & \cos(\theta) \sin(\phi) & \cos(\theta) \cos(\phi)
\end{bmatrix}
\boldsymbol{p}
\label{eqn:rollpitchyaw}
\end{equation}
where $\phi$, $\theta$, and $\psi$ are the platform roll, pitch, and yaw respectively in radians. We use the following conventions of angles: positive roll lowers the starboard side, positive pitch raises the bow, and positive yaw rotates to starboard.
\medskip
\hrule
\end{minipage}
\end{table*}
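Eq. \ref{eqn:rollpitchyaw} transcribes directly into code. The sketch below (names ours) builds the NED rotation matrix from roll, pitch, and yaw; a quick check is that the result is orthonormal and that a positive $90^{\circ}$ yaw rotates north into east.

```python
import numpy as np

def ned_rotation(roll, pitch, yaw):
    """Rotation matrix of Eq. (rollpitchyaw): yaw-pitch-roll (Z-Y-X) sequence
    in the NED convention; angles in radians."""
    cph, sph = np.cos(roll), np.sin(roll)    # phi
    cth, sth = np.cos(pitch), np.sin(pitch)  # theta
    cps, sps = np.cos(yaw), np.sin(yaw)      # psi
    return np.array([
        [cth * cps, cps * sth * sph - cph * sps, cph * cps * sth + sph * sps],
        [cth * sps, cph * cps + sth * sph * sps, cph * sth * sps - cps * sph],
        [-sth,      cth * sph,                   cth * cph],
    ])
```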
\begin{figure}[t]
\includegraphics[width=0.95\linewidth]{sonar_geometry.png}
\centering
\caption{A well-defined imaging geometry aids in data and debug output interpretation. Shown here is the imaging geometry and coordinate system used in ASASIN for stripmap imaging. The origin is defined outside the imaging grid so the image corners have full angular support to deliver constant resolution throughout the scene. $\phi$ and $\theta$ denote the FWHM elevation and azimuth beamwidths respectively.}
\label{fig:scene_geometry}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.95\linewidth]{top_down.png}
\centering
\caption{Top-down view of the imaging geometry for stripmap imaging. Placing the origin outside the imaging grid gives the image corners appropriate angular support and consequently uniform image resolution across the scene. $\theta$ denotes the FWHM azimuthal beamwidth.}
\label{fig:top_down}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.85\linewidth]{ray_culling.png}
\centering
\caption{Illustration of the ray-culling algorithm used to prune the processing space of the backprojection algorithm. The scene is divided into subblocks, and each corner of each block is tested to determine if it is in the sonar beam. Green indicates a corner pixel within the sonar's FOV; red indicates otherwise. Any block containing at least one green corner is further processed, while blocks containing only red corners are skipped.}
\label{fig:rayculling}
\end{figure}
\subsection{Computational Considerations to Improve Run-Time Performance}
We were able to increase run-time performance over the initial parallelization of Eq. \ref{eqn:backprojection} by exploiting the problem geometry and particulars of the GPU hardware. This was accomplished through three primary means. First, we implemented our own memory allocator for the GPU. This prevents memory fragmentation when ASASIN and other GPU processes (e.g. a display/GIS tool) run simultaneously. We implemented this by structuring our algorithms to derive from a generic base class which requires each derived class to implement a method returning the amount of memory needed for execution. One large memory slab is allocated to accommodate all the requests as algorithms are instantiated by ASASIN. Machine learning practitioners will recognize that our memory allocation scheme is similar to that of the TensorFlow library \cite{tensorflow}.
Second, we utilized GPU hardware texture units to perform fast sensor position lookup. The texture units are capable of performing linear interpolation in hardware so we upsample the sensor positions and store them there. Upon querying the sensor position for a specific time, the texture units quickly interpolate (in hardware) from our upsampled table and return the position.
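The texture-unit trick can be emulated on the CPU: pre-upsample the navigation samples once, then answer position queries by cheap linear interpolation. In the sketch below (names and the upsample factor are illustrative), `np.interp` stands in for the hardware interpolator.

```python
import numpy as np

def make_position_lut(t, pos, upsample=8):
    """Pre-upsample sensor positions onto a dense uniform time grid,
    mimicking a texture upload. t: (N,) times; pos: (N, 3) positions."""
    tq = np.linspace(t[0], t[-1], upsample * len(t))
    lut = np.stack([np.interp(tq, t, pos[:, k]) for k in range(3)], axis=1)
    return tq, lut

def lookup(tq, lut, query):
    """Query the sensor position at an arbitrary time via linear interpolation
    (the role played by the GPU texture units in hardware)."""
    return np.stack([np.interp(query, tq, lut[:, k]) for k in range(3)], axis=-1)
```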
Finally, we utilize ray culling to speed up the integration of a single ping. For each ping in the sonar collection, we must integrate the time-series into the appropriate complex pixels. This procedure requires every pixel to query the sonar geometry and determine if it is in the sonar field of view (FOV); such a search has compute complexity $O(n^2)$ for an $n \times n$ imaging grid. For the majority of SAS systems, the beamwidth is sufficiently small that each ping insonifies only a fraction of the total imaging grid. To speed up this computation, we take advantage of the fact that the receive and transmit beams are conically shaped and thus convex. We exploit the convexity by performing a coarse-to-fine block search to quickly prune areas requiring no computation. This is done by first dividing the imaging grid into square blocks of size $b$ pixels. Next, for each block, $B_{i,j}$, we test its four corners for transmitter/receiver visibility. Finally, if none of the tests pass, we discard the block from further processing. When any corner of the block passes the test, we further process all the pixels in the block.
\figurename \ref{fig:rayculling} depicts an example of the ray culling procedure for a single ping. The imaging grid is composed of $m\times n$ blocks each of size 5 pixels x 5 pixels. The gray area shows the FOV of the transmitter and receiver. The four corners of each block, $B_{i,j}$, are tested to see if they are in the FOV of the sonar. Pixels that are in the field of view are marked in green and those that are not are marked in red. Blocks with no green corners are pruned from the FOV search and an explicit FOV test for each pixel is performed for all those remaining. We find the ray culling procedure accelerates our processing by a factor of two.
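The coarse culling pass above can be sketched as follows; the per-pixel visibility predicate `in_fov` is a hypothetical stand-in for the actual beam test.

```python
def cull_blocks(in_fov, nx, ny, b):
    """Coarse-to-fine block culling: keep only b x b pixel blocks where at least
    one of the four corners passes the FOV test (valid pruning when the beam
    footprint is convex, as argued in the text).

    in_fov(ix, iy) -> bool is a hypothetical per-pixel visibility predicate."""
    keep = []
    for i0 in range(0, nx, b):
        for j0 in range(0, ny, b):
            i1 = min(i0 + b, nx) - 1  # clamp the last block to the grid edge
            j1 = min(j0 + b, ny) - 1
            corners = [(i0, j0), (i0, j1), (i1, j0), (i1, j1)]
            if any(in_fov(i, j) for i, j in corners):
                keep.append((i0, j0))  # block survives; test its pixels exactly
    return keep
```

Only the surviving blocks then receive the exact per-pixel FOV test and backprojection accumulation.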
\subsection{Motion Estimation of Platform Heave, Sway, and Surge}
Inverting Eq. \ref{eqn:backprojection} for $\sigma$ requires precise knowledge of the positions $\mathbf{x}_{TX}$ and $\mathbf{x}_{RX}(t)$ for each ping to produce focused imagery. The inversion is straightforward once these quantities are known. Hence, the motion estimation of the sensor platform deserves much attention, and several methods have been developed to estimate it \cite{cook2007synthetic,marston2016volumetric,bellettini2002theoretical}.
ASASIN estimates sensor position by a derivative work of the displaced phase center (DPC) method described in \cite{cook2007synthetic}. In this work, time-delays are measured from overlapped coherent looks of the seafloor of consecutive pings. The amount of time-delay measured is a function of the translation of the second ping to the first mainly in the sway and heave directions. Applying this principle as a function of range can unambiguously determine the sway and heave translation between the two pings.
Four degrees of freedom (surge, roll, pitch, and yaw) remain to be addressed. For surge, ASASIN can select from two sources: Doppler velocity logger (DVL) measurement or the along track estimation (ATE) algorithm of \cite{ATE}. For vehicle attitude (roll, pitch, and yaw), ASASIN uses the measurements provided by an on-board, high-fidelity inertial navigation system (INS).
Estimation of the remaining degrees of freedom, platform sway and heave, will be the primary focus of this section. At its simplest, ASASIN's motion model is described by Eq. \ref{eqn:g} and is composed of three primary parts. The first part describes the time-of-flight from the first ping transmission, its reflection off the seafloor, and its reception by a receive element. The second part is identical to the first but now concerning the second ping. Finally, the third part is the measured time-delay between the two pings. This model makes the assumption that sonar transmission occurs instantly and that the sonar platform is continually moving during reception. Forgoing the latter assumption is commonly called the \emph{stop-and-hop} approximation.
In Eq. \ref{eqn:g}, a single time-delay estimate is measured from a common, overlapped patch of seafloor illuminated by consecutive pings. Assuming the measured time-delay is noiseless, possible platform heave and sway estimates are those which equate $g$ to zero. Thus, for a single time-delay estimate, many solutions exist. Therefore, several time-delay measurements as a function of range are required to arrive at a unique solution. Such a function ingesting hypothesized parameters of a model and returning a scalar error is referred to as a \emph{residual}. Since the time-delay estimates are noisy, we opt to minimize the sum of all squared residuals, $g^2$, from all the measurements. Assuming $k$ time-delay estimates measured as a function of range / reference time $t_k$ for a given ping pair, $i$, our loss for estimating heave and sway among all ping pairs, which we call $\mathcal{L_{\text{DPC}}}$, is given in Eq. \ref{eqn:motion}. Since we assume continual motion during reception, Eq. \ref{eqn:motion} models the position of the platform as a first order kinematic equation (i.e. $\mathbf{p} = \mathbf{v}t$). Thus, the minimization reduces to determining the platform velocity (specifically the $v_y$ and $v_z$ components) of each ping through minimization of $g$ for all time-delay measurements for all ping pairs. We have experimented with higher order kinematic models but found they provide little benefit. One last item of information is necessary to compute the specific transmitter and receiver locations of each ping in Eq. \ref{eqn:g}: the vehicle attitude and lever arm information, which we define as $\mathbf{\Omega}$. This information translates the vehicle position, $\mathbf{p}$, for each ping to the components of $\mathbf{p}_{\text{RX}}$ and $\mathbf{p}_{\text{TX}}$ necessary for computation of $g$.
It is worth explicitly mentioning four important attributes resulting from Eq. \ref{eqn:motion}. The first attribute is the linkage between the velocity estimates of all the pings. The estimate $\mathbf{v}_{i+1}$ depends on the estimates of $\mathbf{v}_{i}$ and $\mathbf{v}_{i+2}$. Therefore, all the ping velocities are estimated jointly. The second attribute is the ease with which we can weight each residual inside the sum of Eq. \ref{eqn:motion}. A common weighting scheme is to use the correlation coefficient associated with $\delta_{i,k}[t_k]$, which is proportional to the measurement's signal-to-noise ratio. The third attribute is the ease with which the residuals can be weighted to mitigate outliers through proper choice of $h$. In Eq. \ref{eqn:motion}, each residual is squared before accumulation by the sum. However, squared-loss can be sensitive to outliers and assign them more weight than desired. This can be mitigated by replacing the squared-loss with another loss which de-weights the contribution of large loss terms (i.e. probable outliers) in the sum. This is easily accomplished by replacing $h$ in Eq. \ref{eqn:motion} with the appropriate convex function such as the Huber loss given in Eq. \ref{eqn:huber}.
\begin{equation}
\label{eqn:huber}
h(a) = \begin{cases}
\frac{1}{2}a^2 & \text{for } \vert a \vert \leq 1 \\
\vert a \vert - \frac{1}{2} & \text{otherwise}
\end{cases}
\end{equation}
Finally, the fourth attribute is that any bathymetry information known about $\mathbf{p}_s$ can be easily integrated into the model as the $z$-component (for a flat seafloor assumption, $\mathbf{p}_{s,z} = 0$).
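The piecewise Huber loss of Eq. \ref{eqn:huber} transcribes directly; note that the two branches agree at $|a| = 1$, so the loss is continuous where it switches from quadratic to linear.

```python
def huber(a):
    """Huber loss of Eq. (huber): quadratic near zero, linear in the tails,
    so large residuals (probable outliers) are down-weighted."""
    return 0.5 * a * a if abs(a) <= 1 else abs(a) - 0.5
```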
\begin{table*}
\centering
\begin{minipage}{\textwidth}
\begin{equation}
\label{eqn:g}
g(\mathbf{p}_{\text{TX}_1}, \mathbf{p}_{\text{RX}_1}(t), \mathbf{p}_{\text{TX}_2}, \mathbf{p}_{\text{RX}_2}(t), \mathbf{p}_{s}, \delta(t), c) = c^{-1}
\bigl(
\underbrace{(\| \mathbf{p}_{\text{TX}_1} - \mathbf{p}_{s} \| + \| \mathbf{p}_{s} -
\mathbf{p}_{\text{RX}_1}(t)\|)}_{\text{Path of TX1 to Seafloor to RX1}}
- \underbrace{(\|\mathbf{p}_{\text{TX}_2} - \mathbf{p}_{s}\| + \| \mathbf{p}_{s} - \mathbf{p}_{\text{RX}_2}(t)\|)}_{\text{Path of TX2 to Seafloor to RX2}} \bigr) + \underbrace{\delta(t)}_{\text{Time-Delay Between Acoustic Returns}}
\end{equation}
Our fundamental residual equation for our motion model. $\delta(t)$ is the measured time difference of arrival between ping one and ping two to a common acoustic footprint location on the seafloor, $\mathbf{p}_{s}$, at a reference time $t$; $\mathbf{p}_{\text{TX}_1}$ is the position of ping one during transmit; $\mathbf{p}_{\text{RX}_1}(t)$ is the position of the receive element of ping one after time $t$; $\mathbf{p}_{\text{TX}_2}$ is the position of ping two during transmit; $\mathbf{p}_{\text{RX}_2}(t)$ is the position of the receive element of ping two after time $t$; and $c$ is the speed of sound in the medium. We make the approximation that the ping transmission occurs instantaneously.
\medskip
\hrule
\end{minipage}
\end{table*}
\begin{table*}
\centering
\begin{minipage}{\textwidth}
\begin{equation}
\mathcal{L_{\text{DPC}}} = \sum_{i,k} h(g(\mathbf{p}_{\text{TX}_i},\mathbf{p}_{\text{RX}_i}(t_k), \mathbf{p}_{\text{TX}_{i+1}}, \mathbf{p}_{\text{RX}_{i+1}}(t_k), \mathbf{p}_{s}, \delta_{i,k}[t_k], c \vert \mathbf{v}_{i}, \mathbf{v}_{i+1}, \mathbf{\Omega}_{i}, \mathbf{\Omega}_{i+1}))
\label{eqn:motion}
\end{equation}
The DPC-related loss estimating the $y$- and $z$- components of $\mathbf{v}$. $\mathbf{\Omega}_i$ contains the associated lever arms and vehicle attitude measurements for ping $i$ needed to compute the transmitter and receiver positions at time $t_k$ using the proposed heave and sway velocity components of $\mathbf{v}_i, \mathbf{v}_{i+1}$. $h$ is the loss function which we define as $h(a) = a^2$ yielding a sum-of-squared errors/residuals expression.
\medskip
\hrule
\end{minipage}
\end{table*}
We add two additional regularization terms to Eq. \ref{eqn:motion} to smooth the solution and have it not drift too far from the Doppler velocity logger (DVL) measurements. Equations \ref{eqn:smooth} and \ref{eqn:dvl} describe these regularization terms respectively,
\begin{equation}
\mathcal{L_{\text{smooth}}} = \sum_{i} ( {v_y}''[i] )^2 + \sum_{i} ( {v_z}''[i] )^2
\label{eqn:smooth}
\end{equation}
\begin{equation}
\mathcal{L_{\text{DVL}}} = \sum_i \left[ \left( p_{z,1}(0) + \sum_{n=1}^{i} v_z[n] \Delta_{n} \right) - z_{\text{DVL}}[i] \right ] ^2
\label{eqn:dvl}
\end{equation}
where $p_{z,i}(t)$ is the platform z-position of ping $i$ at time $t$ since transmit (e.g. $p_{z,1}(0)$ is the z-position of the first ping at transmission) and $\Delta_{n}$ is the time difference between transmissions of ping $n$ and ping $n+1$. Velocities are converted to positions through integration. The final loss used for optimization is a combination of Eq. \ref{eqn:motion}, Eq. \ref{eqn:smooth}, and Eq. \ref{eqn:dvl} and is given in Eq. \ref{eqn:motionWhole},
\begin{equation}
\label{eqn:motionWhole}
\begin{split}
\hat{v}_{y,1}, \ldots, \hat{v}_{y,n}, \hat{v}_{z,1}, \ldots, \hat{v}_{z,n} &= \argmin_{{v}_{y,1}, \ldots, {v}_{y,n}, {v}_{z,1}, \ldots, {v}_{z,n}} \mathcal{L}({v}_{y,1}, \ldots, {v}_{y,n}, {v}_{z,1}, \ldots, {v}_{z,n}), \\
\mathcal{L}({v}_{y,1}, \ldots, {v}_{y,n}, {v}_{z,1}, \ldots, {v}_{z,n}) &= \mathcal{L_{\text{DPC}}} + \lambda_1 \mathcal{L}_{\text{smooth}} + \lambda_2 \mathcal{L}_{\text{DVL}}
\end{split}
\end{equation}
where $\lambda_1$ and $\lambda_2$ are coefficients controlling the regularization strengths of the solution smoothness and fidelity to the DVL respectively.
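Assuming the DPC residuals $g$ of Eq. \ref{eqn:g} have already been evaluated, the combined objective of Eq. \ref{eqn:motionWhole} can be sketched as below (function and argument names are ours; in practice the squared DPC terms would pass through $h$, e.g. the Huber loss, and the minimization is delegated to a solver such as Ceres):

```python
import numpy as np

def total_loss(vy, vz, dvl_z, pz0, dt, residuals_dpc, lam1=1.0, lam2=1.0):
    """Combined loss of Eq. (motionWhole).

    vy, vz: per-ping sway/heave velocities; dvl_z: DVL altitude measurements;
    pz0: z-position of the first ping at transmit; dt: inter-ping intervals;
    residuals_dpc: precomputed g values from Eq. (g)."""
    # L_DPC: sum of squared DPC residuals (h(a) = a**2 here)
    l_dpc = np.sum(np.square(residuals_dpc))
    # L_smooth: squared second differences of each velocity series (Eq. smooth)
    l_smooth = np.sum(np.diff(vy, 2) ** 2) + np.sum(np.diff(vz, 2) ** 2)
    # L_DVL: integrate heave velocity to z-positions, compare to DVL (Eq. dvl)
    pz = pz0 + np.cumsum(vz * dt)
    l_dvl = np.sum((pz - dvl_z) ** 2)
    return l_dpc + lam1 * l_smooth + lam2 * l_dvl
```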
\begin{figure*}[t]
\includegraphics[width=0.95\linewidth]{sonar_geometry_micronav.png}
\centering
\caption{Four consecutive pings from a single UUV moving across the seafloor depicting our time-of-flight model, $g$, used in the motion estimation algorithm. $\mathbf{p}_i$ is the position of ping $i$ represented as a filled dot on each vehicle, $\mathbf{v}_i$ is the velocity of ping $i$ represented by an arrow on each vehicle, and $z_{DVL}[i]$ is the altitude above the seafloor as measured by the Doppler velocity logger (DVL) on ping $i$. The origin of the coordinate system is directly below the position of the first ping. The dotted-blue line represents the true vehicle trajectory. Each ping's receiver and transmitter positions are determined by its initial position $\mathbf{p}_i$, velocity $\mathbf{v}_i$, and attitude / lever arms $\mathbf{\Omega}_i$. The figure shows an example time-of-flight pair for one common location between pings two and three. The time-delay estimate between the time-of-flight of ping two and the seafloor and ping three and the seafloor at an associated time reference (similarly ground range also) of $t_k$ is denoted as $\delta_{2,k}(t_k)$; a dotted circle encompasses the transmission paths compared in the time-delay estimate. The motion solution is determined by finding the $\mathbf{v}_1, \ldots, \mathbf{v}_n$ which minimizes Eq. \ref{eqn:motionWhole} given the set of time-delay estimates measured between coherent seafloor looks, $\delta_{i,k}$.}
\label{fig:rpc_micronav_diagram}
\end{figure*}
\figurename \ref{fig:rpc_micronav_diagram} depicts the relationship of the quantities utilized in each loss term of Eq. \ref{eqn:motionWhole}. The figure depicts four pings resulting in three overlapped ping-pairs. We focus on pings two and three to demonstrate how a single residual $g$ is computed in Eq. \ref{eqn:motion}. The position of pings two and three results in a geometry yielding a common acoustic footprint. The time-delay estimate between the transmission paths of ping two and ping three at reference time $t_k$ is given by $\delta_{2,k}(t_k)$. Using the given vehicle attitude and lever arm information, ($\mathbf{\Omega}_2, \mathbf{\Omega}_3$), and the surge velocities $v_{x,2}$ and $v_{x,3}$, Eq. \ref{eqn:motionWhole} estimates $v_y, v_z$ for pings two and three (recall $\mathbf{v} = [v_x, v_y, v_z]$). Each ping position is represented by $\mathbf{p}_i$ in the diagram and then converted to the associated transmitter and receiver positions, $\mathbf{p}_{\text{RX}}$ and $\mathbf{p}_{\text{TX}}$, through the information provided by $\mathbf{v}_i$ and $\mathbf{\Omega}_i$. Through integration of the velocities, the absolute platform positions, $\mathbf{p}_i$, are determined relative to the origin. The smoothness constraint described by Eq. \ref{eqn:smooth} is applied to the $y$- and $z$-velocities, smoothing the final vehicle trajectory given in blue. Finally, the DVL constraint in Eq. \ref{eqn:dvl} is applied to the $z$-components of $\mathbf{p}_i$ using the DVL measurements $z_{\text{DVL}}[i]$.
Eq. \ref{eqn:motionWhole} is non-trivial to minimize in closed form. We mitigate this by using the Ceres solver library \cite{ceres-solver}. Ceres is a nonlinear least squares solver with built-in auto-differentiation capability (auto-diff) coded in C++. Auto-diff provides the capability to automatically and analytically evaluate the derivatives of a specified cost function, in this case Eq. \ref{eqn:motionWhole}. The auto-diff functionality prevents derivation and implementation errors associated with computing derivatives when minimizing the cost function. Auto-diff accomplishes this feat by computing derivatives via the chain rule. Ceres provides several optimization algorithms, including gradient descent, which leverage the derivatives of the cost function (derived via auto-diff) in order to find a critical point.
To minimize Eq. \ref{eqn:motionWhole}, time-delay estimates between consecutive pings must be computed. We compute these estimates using the method of \cite{saebo} whereby we estimate a coarse time, $t_{coarse}$, by using quadratic interpolation of the magnitude correlation function and then refine the estimate, $t_{fine}$, by analyzing the phase of the interpolated point of the corresponding complex correlation function. The estimate of $t_{fine}$ is usually subject to phase wrapping errors due to insufficient fractional bandwidth and insufficient signal-to-noise ratio. We overcome this hurdle by minimizing $\mathcal{L_{\text{DPC}}}$ using a two-step process utilizing a priori knowledge that $t_{coarse}$ is normally distributed around the true solution and $t_{fine}$ is normally distributed around $t_{coarse}$. First, $\mathcal{L_{\text{DPC}}}$ is minimized using only $t_{coarse}$. Second, we unwrap the phase around this solution and estimate $t_{fine}$. We then re-minimize $\mathcal{L_{\text{DPC}}}$ using this unwrapped version of $t_{fine}$. Once the time-delays are estimated and unwrapped, we minimize Eq. \ref{eqn:motionWhole} by finding the set of ping-to-ping $v_y$ and $v_z$ velocities in ${\mathbf{v}}_1, ..., {\mathbf{v}}_{n}$ which minimize the total loss. All steps are minimized using the Levenberg–Marquardt algorithm implemented in Ceres.
\subsection{Pre- and Post-Processing Algorithms}
The equations modeling SAS described thus far assume ideal collection conditions; this is rarely the case. The realized echoes are corrupted by a variety of sources including source-receiver spectral shaping and analog-to-digital conversion noise. The data must be conditioned for the motion estimation and image reconstruction steps to perform well. We perform a variety of data conditioning steps prior to these operations and outline them here.
\subsubsection{Spectral Whitening}
Most sonar systems have a non-flat frequency spectrum whereby some frequencies are emphasized over others; this is undesirable when forming the synthetic aperture and can result in image artifacts during the reconstruction process. Modeling this phenomenon can be difficult so we adopt a simple data-adaptive technique called \emph{whitening} to flatten the spectrum. The spectrum is flattened by applying a gain attenuation given by Eq. \ref{eqn:whitening},
\begin{equation}
G(f) = h \left( \frac{1}{\gamma\overline{\hat{P}(f)} + \hat{P}(f)} \right)
\label{eqn:whitening}
\end{equation}
where $\hat{P}(f)$ is the power estimate of frequency $f$, $\overline{\hat{P}(f)}$ is the average power over all frequencies, $\gamma$ is a system-dependent calibration factor, and $h$ is a normalization function ensuring the minimum attenuation is $0$ dB.
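As a concrete (and hedged) sketch of Eq. \ref{eqn:whitening}: the snippet below assumes $h$ simply rescales the gain so that its peak is unity, i.e. the minimum attenuation is exactly 0 dB, and uses a made-up value for $\gamma$.

```python
import numpy as np

def whitening_gain(power_est, gamma=0.1):
    """Sketch of the whitening gain in Eq. (whitening).

    power_est -- per-frequency power estimate P^(f)
    gamma     -- system-dependent calibration factor (hypothetical value)

    h() is interpreted here as dividing by the peak gain, so the minimum
    attenuation over frequency is exactly 0 dB (gain <= 1 everywhere).
    """
    p_bar = power_est.mean()                # average power over all frequencies
    g = 1.0 / (gamma * p_bar + power_est)   # raw inverse-power gain
    return g / g.max()                      # h(): normalize peak gain to 1

# A peaked spectrum gets more attenuation where it is strong,
# flattening the whitened spectrum P * G relative to P itself.
P = np.array([1.0, 4.0, 16.0, 4.0, 1.0])
G = whitening_gain(P)
```

Applying $G$ multiplicatively flattens the spectrum, while the $\gamma\overline{\hat{P}(f)}$ term keeps the gain bounded in low-power bins.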
\subsubsection{Time Varying Gain}
The received sonar echo is attenuated as a function of time/range by spreading and absorption losses from the medium. We compensate for this non-linear loss of signal by applying a time varying gain (TVG) over the time-series. Estimating an accurate TVG curve can be difficult especially if there are bright scatterers in the scene. We mitigate this using a simple statistical method. The TVG correction is given by Eq. \ref{eqn:tvg},
\begin{equation}
G(t) = \left( \alpha \overline{\tilde{P}(t)} + \tilde{P}(t) \right) ^ {-1}
\label{eqn:tvg}
\end{equation}
where $\tilde{P}(t)$ is the population median power for each time sample $t$ in a batch of sonar pings, $G(t)$ is the gain correction to apply at each sample time $t$, $\overline{\tilde{P}(t)}$ is the average of the median powers at each $t$, and $\alpha$ is a system-dependent calibration factor.
\subsubsection{Dynamic Range Compression of Imagery}
SAS imagery has a dynamic range often exceeding 80 dB, which is beyond the range of typical displays and the human visual system in daylight. Trivial display of SAS imagery yields an almost empty image with a few bright pixels from specular scatterers.
ASASIN produces dynamic range compressed SAS imagery suitable for human consumption and display. The algorithm used is inspired by the rational mapping function of \cite{schlick1995quantization} given by Eq. \ref{eqn:drc},
\begin{equation}
I_{\text{DRC}}[k,kk] = \frac{q I[k,kk]}{q I[k,kk] - I[k,kk] + 1}
\label{eqn:drc}
\end{equation}
where $I_{\text{DRC}}[k,kk]$ is the dynamic range compressed output at pixel location $[k,kk]$, $I[k,kk]$ is the input high dynamic range image normalized to $[0,1]$, and $q$ is a tunable brightness parameter.
Equation \ref{eqn:drc} has the advantage of having only one free parameter which is used to control the overall image brightness. For SAS imagery, the median pixel of a scene is often representative of the overall image brightness. Since Eq. \ref{eqn:drc} is bijective, it allows for a closed form solution of parameter $q$. Given the desired brightness of the output image $I_{\text{DRC}}$, for which we use the median output pixel value as a proxy, the free parameter $q$ is computed by Eq. \ref{eqn:p},
\begin{equation}
q = \frac{b - b \tilde{I}}{\tilde{I} - b \tilde{I}}
\label{eqn:p}
\end{equation}
where $\tilde{I}$ is the median pixel value of the high dynamic range input image $I$ normalized to $[0,1]$, $q$ is the free parameter controlling image brightness, and $b\in(0,1)$ is the desired dynamic range compressed output image brightness.
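The closed form for $q$ in Eq. \ref{eqn:p} is the one obtained from Schlick's original rational mapping $f(I)=qI/(qI-I+1)$, under which the median input pixel $\tilde{I}$ maps exactly to the desired brightness $b$. A minimal numpy sketch (image and brightness value are made up):

```python
import numpy as np

def schlick_drc(img, b=0.2):
    """Dynamic range compression per Eqs. (drc)/(p), using Schlick's
    rational mapping f(I) = q I / (q I - I + 1).

    img -- high dynamic range image normalized to [0, 1]
    b   -- desired output brightness in (0, 1) (illustrative value)
    """
    med = np.median(img)                     # median pixel, proxy for brightness
    q = (b - b * med) / (med - b * med)      # Eq. (p)
    return q * img / (q * img - img + 1.0)   # Schlick's mapping

rng = np.random.default_rng(0)
hdr = rng.random((63, 63)) ** 4              # mimic heavy-tailed SAS amplitudes
ldr = schlick_drc(hdr, b=0.2)
```

Because the mapping is strictly increasing and fixes $0\mapsto 0$, $1\mapsto 1$, the median of the output lands exactly on $b$.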
\section{Experiments and Results}
\begin{figure*}[ht]
\includegraphics[width=\linewidth]{2.png}
\caption{An example stripmap SAS image generated using ASASIN. This image was collected while the vehicle experienced significant inter-ping motion but is still able to form a well focused image due to our proposed motion estimation algorithm.}
\label{fig:stripmap_sas}
\end{figure*}
\subsection{Stripmap Imaging}
An example image generated by ASASIN from a stripmap collection geometry is shown in \figurename \ref{fig:stripmap_sas}. The sonar flies from left-to-right along the bottom of the image (the image is collected from the port side sonar) and depicts a sunken vessel at a range far from the sonar. This image exhibits significant inter-ping motion in the form of attitude and altitude variation but still focuses well demonstrating the efficacy of our motion estimation approach.
\subsection{Near-Field Volumetric Imaging}
ASASIN was originally implemented to generate high-resolution SAS imagery from both high-frequency ($>$100 kHz) and mid-frequency ($\sim$10 kHz) imaging sonar systems. Recently, ASASIN's image reconstruction algorithms were generalized to support generation of 3D near-field imagery of the seabed sub-bottom. The sensor used to collect this data forms a downward-looking, two-dimensional synthetic aperture from an array mounted to a surface craft \cite{brown2019simulation} that operates from 1-3 meters above the bottom. The navigation used during the image reconstruction processing is from a real-time kinematic global positioning system (RTK-GPS) aboard the craft; traditional DPC methods are not used. Adapting ASASIN to process this sensor's data required several modifications. The primary changes were to:
\begin{enumerate}
\item implement a bistatic ray-culling algorithm,
\item include a sediment-water interface refraction model in determining propagation time, and
\item enable 3D data output and provide 3D image viewers.
\end{enumerate}
First, the ray-culling algorithm shown in \figurename \ref{fig:rayculling} creates a binary mask assuming a transmit/receive pair is monostatic. This reduces both the beamforming complexity and the image reconstruction time. This monostatic approximation is valid in the standard imaging domain because the backprojection point is frequently in the far-field of the physical transmit and receive aperture. This approximation is invalid for the near-field sub-bottom sensor, and the bistatic condition must be considered.
Next, traditional SAS image formation assumes an isovelocity (constant sound speed) propagation path between the sensor and the imaging point. While this isovelocity approximation is rarely true, small deviations in sound speed have a minor effect on image quality. In the case of larger deviations, autofocusing techniques may be applied to recover some loss of focus quality \cite{callow}. Imaging within the seafloor may present the backprojection algorithm with a discontinuity in sound speed much greater than that ever observed for propagation in water. The effect of refraction must be included in the image reconstruction algorithm to produce high-quality imagery. Fortunately, the backprojection algorithm is well suited to this type of modification.
Finally, the output of ASASIN was modified to produce a 3D image. This was accomplished by an iterative process where a two-dimensional output ``layer'' was generated across a range of focus depths to build up the full 3D volume. Visualization of the 3D imagery is accomplished by generating two-dimensional representations for interpretation. Planar ``slices'' through the volume are used to show a two-dimensional image within the volume. Additionally, maximum intensity projections (MIPs) are also formed by collapsing the imagery along one of its principal axes and taking the highest intensity voxel \cite{Wallis:1991a}. For example, a slice at a depth of 11 cm is shown in \figurename \ref{fig:sliceExample}. Two targets placed in the field (solid aluminum cylinders) and two clutter objects (rocks) are identified in the depth slice.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.999\linewidth]{rocksAndSlice.eps}\\
\caption{ASASIN was generalized to the problem of 3D image reconstruction for the detection of objects buried within the seabed. Here, a two-dimensional slice through a three dimensional image is shown at a depth of 11 cm into the sediment bed. A 2D synthetic aperture was used to create the middle image.}\label{fig:sliceExample}
\end{figure}
\subsection{Modeling ASASIN's Compute Performance}
\begin{figure}[!ht]
\includegraphics[width=0.999\linewidth]{performance.png}
\centering
\caption{ASASIN runtime performance on various GPU models; lower time is better. The thick horizontal line represents the throughput necessary to achieve real-time processing. As shown, ASASIN performs faster than real-time on the NVIDIA Xavier embedded GPU.}
\label{fig:performance}
\end{figure}
The creation of a SAS image requires computation after collection of the sensor data. The specification of these computation resources must balance processing speed, compute hardware cost, and compute hardware power requirements. In conducting post-mission analysis, one typically weighs cost against speed. For in-situ processing embedded in a UUV, the processing speed need only occur at a real-time rate, but the power consumption should be minimized to not significantly reduce the UUV's survey duration. To support predicting the computation burden, we modeled the computational performance of ASASIN to determine the minimum hardware needed to deploy on a UUV. ASASIN is able to run on a variety of NVIDIA GPU architectures because of the flexible CUDA compiler. This affords us the ability to run the same source code on both desktop and embedded GPUs significantly reducing the amount of testing needed to reliably deploy ASASIN on board unmanned platforms.
We model the computational performance of ASASIN using a small SAS dataset representative of typical compute loads and several GPU architectures. We process our dataset consecutively three times and report performance of the last run. This was done to ensure the memory cache was sufficiently flushed so its effects do not influence the results. The processed data was much larger than any available cache on the system.
\figurename \ref{fig:performance} shows the results of our performance measurements. We observe that performance versus hardware capability (reported as single-precision GFLOPS derived from \cite{list_of_nvidia_gpus}) approximately obeys a power law. We achieve a fit shown by the gray-dashed curve in \figurename \ref{fig:performance} of $y \approx 485307 \cdot x^{-0.755}$. The thick horizontal line represents the performance needed to achieve real-time image reconstruction. The power law fit is no surprise as the image reconstruction process is highly parallelizable and its run-time is inversely proportional to the number of floating point operations per second (FLOPS) available. Additionally, we measure ASASIN performance on a set of eight NVIDIA V100 GPUs contained within a single computer. In this configuration, performance scales linearly until the limit of the hard-disk throughput is reached.
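The fitted power law can be used in both directions: predicting runtime from available compute, or estimating the compute needed for a runtime budget. The sketch below hard-codes the coefficients from the fit above, which are empirical and specific to this dataset and code version:

```python
def predicted_runtime(gflops, a=485307.0, b=-0.755):
    """Runtime predicted by the fitted power law y ~= a * x^b,
    with x in single-precision GFLOPS (coefficients from the text's fit)."""
    return a * gflops ** b

def gflops_for_runtime(target, a=485307.0, b=-0.755):
    """Invert the power law: GFLOPS needed to reach a target runtime."""
    return (target / a) ** (1.0 / b)

t = predicted_runtime(1000.0)   # predicted runtime on a 1 TFLOPS device
```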
\section{Conclusion}
In this work, we introduced a GPU-accelerated image reconstruction suite for SAS called ASASIN which uses a time domain backprojection image reconstruction algorithm. We developed the design motivation of ASASIN as well as described its algorithmic components. In particular, we gave examples demonstrating ASASIN's ability to reconstruct imagery in 2D and 3D geometries. Furthermore, we benchmarked ASASIN compute performance and developed a regression model capable of accurately predicting image reconstruction performance over a variety of GPU models for both desktop and embedded environments. Consequently, the total of our work demonstrates the feasibility of obtaining excellent image reconstruction using the time domain backprojection algorithm while simultaneously obtaining real-time performance for use on board unmanned platforms.
\section{Acknowledgments}
This research was supported in part by the U.S. Department of Defense, through the Strategic Environmental Research and Development Program (SERDP). The SERDP support was provided under the munitions response portfolio of Dr.~David Bradley. This material is based, in part, upon work supported by the Humphreys Engineer Center Support Activity under Contract No. W912HQ-16-C-0006. This work was additionally supported by the U.S. Navy. The authors would like to thank Benjamin William Thomas of the University of Bath and Thomas E. Blanford of Penn State University for their constructive remarks in improving the manuscript.
\bibliographystyle{IEEEtran}
\section{Introduction}
\noindent
The interest in statistical systems close to criticality is
shared by a large community including condensed
matter physicists and particle theorists. An important tool in the
study of such systems is numerical experimentation in the form of Monte
Carlo simulations which complement analytical results that are
available for special systems and limiting cases. One of the
central problems in such simulations is the degradation
of most known simulation algorithms as criticality --- the cutoff or
continuum limit in quantum field theories describing particles ---
is approached. The interdisciplinary search of improved techniques
is an on-going effort, but it already has yielded some very positive
results in recent years, some of which we want to briefly review here.
Our focus will be on spin systems with variables belonging
to continuous manifolds ($ \sigma $-models) and on pure lattice gauge theory.
This selection thus leaves out the vast field of discrete Ising-like
systems$^1$
as well as the enormous problems (and potential gains from algorithmic
improvement)
faced in QCD simulations with fermions.$^2$
The problem of critical slowing down (CSD)
near criticality is schematically described by the dynamical
scaling law
\begin{equation} \label{z}
\tau = c \; \xi^z.
\end{equation}
Here $\xi$ is some physical scale or correlation length in lattice units
and $\tau$ is a time scale in number of iterations.
A $\tau$ and corresponding $z$ can characterize
the time scale for equilibration or
the rate at which
statistically independent estimates for observables of interest are
produced. Then for the expectation value of $A$,
\begin{equation} \label{avA}
\langle A[s] \rangle = \frac{1}{Z}\int \mbox{\rm D} s \; \mbox{\rm e} ^{-\beta H[s(x)]} \; A[s],
\end{equation}
the accuracy in estimating $A$ improves as
\begin{equation}
\delta_A = \sqrt{\mbox{variance}_A \times 2\tau/\#\mbox{ of iterations}}\;.
\end{equation}
In (\ref{avA}) $ \mbox{\rm D} s$ = $\prod_x \mbox{\rm d} \mu(s(x))$ means integration
over all fields or spins $s(x)$ at lattice sites $x$
with some measure, $Z$ is the partition function and
$\beta H[s]$ is the inverse bare coupling or temperature times
some action or Hamiltonian.
The algorithms to be described here have been designed to lower $z$
from its ``traditional'' value of about two for standard local Metropolis
methods to $z \simeq 1$ or even $z \simeq 0$ in some cases.
They thus improve the efficiency
of simulations by {\it one to two powers} of $\xi$.
Beyond the introduction this article will continue with a section on
the technique of embedded variables (common to many algorithms),
and short descriptions of cluster algorithms, multigrid methods and
overrelaxation followed by some conclusions.
\section{Updating Embedded Variables}
\noindent
Let us imagine some group of transformations $T\in G$ that
act on the configurations $s \hat{=} \{s(x)\}$,
\begin{equation}
s \rightarrow T s,
\end{equation}
such that the measure is left invariant, $ \mbox{\rm D} s = \mbox{\rm D} T s$.
Such transformations can be local, $T s = \{ t(x) s(x) \}$,
and then $t(x)$ is a field similar to $s$ itself with values
in local group factors that make up $G$. One may however also
consider more global changes of $s$, where $T$ acts for instance on some
cluster of spins from $s$ with one and the same rotation.
For a given configuration $s$ we can now think of a statistical system
with configurations $T\in G$ and an effective or induced Boltzmann
factor $\exp(-\beta H[T s])$ whose simulation can also be considered.
Assume now that we have a Monte Carlo algorithm for this system
which is characterized by transition probabilities $w(s;T_1 \to T_2)$
that preserve $\exp(-\beta H[Ts])$ and
depend on s only through the effective Boltzmann factor such that
$w(Ts;T_1\to T_2)$ = $w(s;T_1 T \to T_2 T)$ holds.
Then the following steps define a valid algorithm for the original field $s(x)$:
\begin{itemize}
\item for momentarily fixed $s=s_1$ put $T_1=Id$ as initial configuration,
\item update $T_1\to T_2$ with the $w$--algorithm,
\item take $s_2=T_2s_1$ as a new $s$.
\end{itemize}
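Purely as an illustration of these three steps (anticipating the reflection embedding of the next section), the sketch below embeds $Z_2$ reflections into an $O(3)$ chain and uses a single-site Metropolis update as a stand-in for the $w$-algorithm; lattice size, coupling, and seeds are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def embedded_update(s, beta=1.0):
    """One embedded-variable update following the three steps in the text:
    start from T1 = Id (all sigma = +1), update the sigmas with a plain
    single-site Metropolis sweep (stand-in for the 'w-algorithm'), then
    apply the resulting reflections to s.  Purely illustrative."""
    r = rng.normal(size=3)
    r /= np.linalg.norm(r)                    # random embedding direction
    proj = s @ r                              # r.s(x) at every site
    n = len(s)
    sigma = np.ones(n)                        # T1 = Id
    for x in range(n):                        # Metropolis sweep on the Ising field
        xl, xr = (x - 1) % n, (x + 1) % n
        # flipping sigma[x] changes -H by twice the bond terms with both neighbours
        dE = 2.0 * sigma[x] * proj[x] * (sigma[xl] * proj[xl] + sigma[xr] * proj[xr])
        if rng.random() < np.exp(-beta * dE):
            sigma[x] = -sigma[x]
    s_par = np.outer(proj, r)                 # component of each spin along r
    return (s - s_par) + sigma[:, None] * s_par   # s2 = T2 s1

s = rng.normal(size=(16, 3))
s /= np.linalg.norm(s, axis=1, keepdims=True)
s_new = embedded_update(s)
```

Note that the reflections leave the measure invariant: every spin keeps unit length after the update.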
To prove this assertion we first note that the overall transition
probability is given by
\footnote[2]{For simplicity we take a discrete group $G$ here;
in the continuous case the sum over all elements has to be replaced by
an invariant group integration.}
\begin{equation}
W(s_1 \to s_2) = \sum_T w(s;Id \to T) \; \delta(s_2-Ts_1).
\end{equation}
We have to show that $W$ preserves the Boltzmann weight $\exp(-\beta H[s])$.
To this end we transform
\begin{eqnarray}
\int \mbox{\rm D} s_1 \; \mbox{\rm e} ^{-\beta H[s_1]} \; W(s_1 \to s_2) &=& \nonumber\\
\int \mbox{\rm D} s_1 \sum_{T'} \mbox{\rm e} ^{-\beta H[s_1]} \;
w(s_1;Id \to T') \; \delta(s_2-T's_1) \; &=& \nonumber\\
\frac{1}{|G|}\int \mbox{\rm D} s_1 \sum_{T,T'} \mbox{\rm e} ^{-\beta H[Ts_1]} \;
w(s_1;T \to T') \; \delta(s_2-T's_1) \; &=& \nonumber\\
\frac{1}{|G|}\int \mbox{\rm D} s_1 \sum_{T'} \mbox{\rm e} ^{-\beta H[T's_1]} \;
\delta(s_2-T's_1) \; &=&
\mbox{\rm e} ^{-\beta H[s_2]}.
\end{eqnarray}
To arrive at the third line changes of variables $s_1\to Ts_1$
and $T' \to T'T^{-1}$ are carried out and $T$ is averaged over.
In the last line we absorbed $T'$ into $s_1$ and the $\delta$-function
is used.
In many cases,
in particular if $G$ is a lower dimensional manifold than the original
configuration space, the moves described are not ergodic. This can often
be improved by using families of different embeddings between which
one switches
in a random or deterministic order. In cases where this is not sufficient
one can always blend in conventional update steps to achieve ergodicity.
\section{Cluster Algorithms}
\noindent
Cluster algorithms$^3$ for continuous spins
are an example of the successful use of embedded variables. Their drawback has been up to
now that
the $O(n)$ invariant
$n$-vector models are the only continuous variable $ \sigma $-models
where they are powerful.\footnote[2]{There are principal
reasons for this limitation which
come close to a no-go theorem.$^4$}
Here however, according to accumulated
numerical evidence$^5$, they really achieve $z\simeq 0$ with even
a very small coefficient $c$ in (\ref{z}). With the additional advantage
of variance reduced estimators for Green functions, these models have
become an ideal testing ground for nonperturbative physics$^6$,
in particular
in two dimensions, where they are asymptotically free for $n\ge 3$.
Also the xy-model ($n=2$) is of great interest$^7$ as it allows for
checks of the Kosterlitz Thouless scenario.
We therefore now specialize our general setup, Eq.(\ref{avA}), to
the $O(n)$ models, where
\begin{equation}
s(x)\in R^n, \quad \mbox{\rm D} s= \prod_x \mbox{\rm d} ^n s(x) \; \delta(s(x)^2-1), \quad
-H[s] = \sum_{<xy>} s(x) \cdot s(y).
\end{equation}
The key to efficient cluster algorithms is the embedding of Ising
spins and the use of global update techniques for them.
A family of embeddings is labelled by an $n$-component unit vector
$r$ of the same type as the local spins. The group involved
is $G_r \simeq Z_2^{\# \mbox{sites}}$ corresponding to one
Ising spin $ \sigma (x)=\pm 1$ on each site, $T \leftrightarrow \{ \sigma (x)\}$.
They act as reflections,
\begin{equation} \label{r}
s'=Ts, \quad s'(x)=s_{\perp}(x) + \sigma (x) s_{\|}(x),
\end{equation}
where $\perp$ and $\|$ refer to the $r$-direction.
The induced Hamiltonian
\begin{equation}
-H[Ts] = \sum_{<xy>} r\cdot s(x) \; r\cdot s(y) \; \sigma (x) \sigma (y)
+\mbox{terms independent of } \sigma
\end{equation}
is recognized to describe a {\it ferromagnetic} Ising model with random
bond strengths $J_{<xy>} = r\cdot s(x) \; r\cdot s(y)$. When they are
multiplied along closed loops the result is always positive or zero
due to their factorized form, which shows the absence of frustration.
We also note that there is no magnetic field coupling to $ \sigma $.
This can be regarded as being due to the reflections in (\ref{r})
being part of the $O(n)$ global symmetry which leads to a global
$Z_2$ invariance for $ \sigma $.
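The absence of frustration can also be checked numerically: because each $J_{<xy>}$ factorizes into a product of site factors, the product of couplings around any plaquette is a perfect square and hence non-negative. A small sketch (lattice size and seeds are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random O(3) spins on a small 2-d lattice and the induced Ising couplings
# J_<xy> = (r.s(x))(r.s(y)) for a random reflection direction r.
Lx = Ly = 4
s = rng.normal(size=(Lx, Ly, 3))
s /= np.linalg.norm(s, axis=-1, keepdims=True)
r = rng.normal(size=3)
r /= np.linalg.norm(r)
proj = s @ r                                   # r.s(x) at every site

def loop_product(x, y):
    """Product of the four J's around the plaquette based at (x, y)."""
    a = proj[x, y]
    b = proj[(x + 1) % Lx, y]
    c = proj[(x + 1) % Lx, (y + 1) % Ly]
    d = proj[x, (y + 1) % Ly]
    return (a * b) * (b * c) * (c * d) * (d * a)

# Each factorized J enters twice, so the loop product is (abcd)^2 >= 0:
# the embedded random-bond Ising model is unfrustrated.
products = [loop_product(x, y) for x in range(Lx) for y in range(Ly)]
```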
For these embedded random systems the Swendsen-Wang algorithm$^8$
or its single cluster variant$^3$ work very well, actually better
than in the standard Ising model where some remaining CSD is still
detectable.
The algorithm just described is ergodic if $r$ is chosen at random
such that all directions can appear. In practical realizations it is
usually convenient to always take $r=(1,0,\ldots,0)$. Then
the embedding requires fewer operations, and ergodicity is restored
by globally $O(n)$-transforming the whole configuration with a random
rotation or reflection after a certain number of updates.
\section{Multigrid Techniques}
\noindent
Multigrid Monte Carlo (MGMC) techniques have been proposed$^9$ to
eliminate CSD by allowing for efficient moves of a critical
system on all scales. They work well on nearly gaussian systems$^{10}$ as
they do in the related problem of linear difference equations
where the method has made its first appearance.
Here we will not review the truly recursive MGMC$^{11}$
approach but a simpler unigrid variant that was proposed recently.$^{12}$
It has become the optimal method for $ \sigma $-models other than the
$O(n)$ family.
We now present the method for the $O(3)$-model.$^{12}$
The actual updates are performed here (and in realizations for
other $ \sigma $-models) on embedded $U(1)$ spins of the xy-model type.
To an $O(3)$ generator corresponding to some rotation, as for
example
\begin{equation}
i\lambda = \left( \begin{array}{ccc} 0 & 1 & 0 \\ -1~ & 0 & 0 \\
0 & 0 & 0 \end{array} \right) \; ,
\end{equation}
we couple local angles $\alpha(x)$ by
\begin{equation}
Ts \hat{=} \left\{ \mbox{\rm e} ^{-i\alpha(x) \lambda} s(x) \right\}.
\end{equation}
This induces a Hamiltonian for $\alpha(x)$,
\begin{equation}
-H[Ts]= \sum_{<xy>} \mbox{Re} \left( J_{<xy>} \mbox{\rm e} ^{i(\alpha(x)-\alpha(y))}
\right) +\mbox{terms independent of } \alpha
\end{equation}
with complex bond strengths
\begin{equation}
J_{<xy>} = (s^1(x)+i s^2(x)) (s^1(y)-i s^2(y))=J_{<yx>}^{\ast},
\end{equation}
where $s^a, a=1,2,3,$ denotes the components of $s$.
The random bonds generated for the embedded xy-model are again
ferromagnetic due to their factorized form. As for $U(1)$ gauge fields
their orientation has to be properly taken into account, of course.
We now need an algorithm for $\alpha(x)$. For the version of MGMC
of Ref.~12 elementary moves are performed on
$B\times B \times \ldots \times B$
subblocks of the lattice. For such blocks one has a priory fixed
profiles of kernels $K(x)$ that vanish outside the block and are smooth.
Possible choices
in arbitrary dimensions are pyramids or the lowest mode
sine waves with the block as a cavity. The kernels appear in the
nonlocal Metropolis proposals
\begin{equation}
\{\alpha(x)\} \to \{ \alpha(x) + \Psi K(x) \}.
\end{equation}
These are accepted or rejected in the usual way, and $\Psi$ is drawn
from a symmetric distribution with a width adjusted for reasonable
acceptance. For a lattice of length $L$, which should be a power of two,
one has to hierarchically cover the lattice with blocks of sizes
$B = L/2, L/4, \ldots , 1$ with the last choice producing just local
updates. It has turned out to be important that either the superimposed
block lattice or equivalently
the field configuration is randomly translated between
updates, such that the block boundaries are not at fixed positions.
Furthermore the generator $\lambda$ is randomized to achieve ergodicity.
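The nonlocal proposal on a single block can be sketched as follows, with a pyramid kernel that vanishes outside the block; block placement, proposal width, and lattice size are arbitrary illustrative choices, and the Metropolis accept/reject step against the induced action is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

def pyramid_kernel(B):
    """B x B pyramid profile: smooth, peaked in the middle,
    and (conceptually) vanishing just outside the block."""
    t = 1.0 - np.abs(np.linspace(-1.0, 1.0, B + 2)[1:-1])
    return np.outer(t, t)

def block_proposal(alpha, x0, y0, B, width=0.5):
    """Nonlocal Metropolis proposal alpha -> alpha + Psi * K on one block;
    Psi is drawn from a symmetric distribution of tunable width."""
    psi = rng.uniform(-width, width)
    prop = alpha.copy()
    prop[x0:x0 + B, y0:y0 + B] += psi * pyramid_kernel(B)
    return prop

alpha = np.zeros((8, 8))          # angles of the embedded xy-model
new = block_proposal(alpha, 2, 2, 4)
```

In the full scheme this is repeated hierarchically for $B = L/2, L/4, \ldots, 1$, with random translations of the block lattice between updates.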
For $O(n)$ models this MGMC algorithm is presumably
inferior to cluster methods, although detailed comparisons would
be somewhat hardware dependent.
The importance of the technique derives however from the fact,
that it can also be used for $CP^n$-models and for $SU(n)$ valued
spins. The main change is that appropriate generators
(from $U(n)$ and $SU(n)$ respectively) have to be substituted for $\lambda$.
The resulting embedded $U(1)$ system now {\it can have frustration}
depending on the configuration of the ``background'' spins $s$.
Practical tests for the $SU(3) \times SU(3)$ chiral model and for
the $CP^3$ system have shown that these frustrations do not
seriously slow down the evolution.$^{13}$ In all three cases $z = 0.2(1)$
has been found.
\section{Overrelaxation}
\noindent
In contrast to the two previous algorithms overrelaxation (OR) achieves
an improvement (down to $z \approx 1$) with still local updates
only, and hence it is as fast or even somewhat faster per sweep
than standard algorithms.
It also immediately carries over to gauge fields.
We now present OR in its ``hybrid'' form that found many
applications recently rather than the original ``tunable'' version$^{14}$.
We consider the local update problem at site $x$ with local Boltzmann
weight
\begin{equation} \label{loBo}
\mbox{\rm e} ^{\beta s(x) \cdot M(x)} \quad
\mbox{where } \quad M(x) = \sum_{y=
\mbox{\scriptsize n.n. of } x} s(y),
\end{equation}
again for an $O(n)$ model spin for illustration.
A local heatbath procedure amounts to choosing a new $s(x)$ independently
of the old one with the weight (\ref{loBo}). For OR we need additional
microcanonical steps producing transitions $s_1(x) \to s_2(x)$ such that
\begin{itemize}
\item $s_1(x), s_2(x)$ have the same local weight (\ref{loBo}) and thus
the energy is unchanged,
\item $s_1(x), s_2(x)$ are as far from each other as possible.
\end{itemize}
For our example this principle leads to the change
\begin{equation}
s_2(x) = -s_1(x) + 2 \frac{M \; M\cdot s_1(x)}{M^2}.
\end{equation}
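This reflection is a Householder-type map about the direction of $M$, so one can verify directly that it preserves the local energy $s\cdot M$ and the spin norm, and that it is an involution. A small numpy check (random vectors are illustrative; for $SU(2)$ gauge fields $s$ would be the $O(4)$ representation of a link and $M$ the staple sum):

```python
import numpy as np

def or_reflect(s, M):
    """Microcanonical overrelaxation step: reflect the spin about the
    direction of the local field M.  Leaves s.M (the local energy) and
    |s| unchanged while moving s as far as possible on that energy shell."""
    return -s + 2.0 * M * (M @ s) / (M @ M)

rng = np.random.default_rng(4)
s1 = rng.normal(size=4)
s1 /= np.linalg.norm(s1)      # an O(4) spin
M = rng.normal(size=4)        # sum of neighbouring spins
s2 = or_reflect(s1, M)
```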
Experience has shown that, both for vectorization and for the nonuniversal coefficient $c$ in (\ref{z}),
it is best to group the local updates so as to perform a maximum number
of independent ones in parallel, which usually amounts to checkerboard
ordering.
Now an OR iteration consists of $N$ microcanonical sweeps followed
by one heatbath or other standard ergodic step.
{}From exact gaussian analysis$^{15}$ and from numerical experiments$^{16}$
it is known that to achieve $z\approx 1$ it is necessary to let $N$ grow
proportional to $\xi$ as criticality is approached. Often $N=\xi$
is a reasonably good trial value. The goal is to achieve a roughly constant
autocorrelation time when measured in iterations which
implies $z\approx 1$ when referring to sweeps.
In particular, this kind of OR is the present method of choice
for SU(2) lattice gauge fields
(either fundamental or embedded to move SU(3) fields).
The local problem for a link variable
$U_{\mu}(x) \in SU(2)$ coincides with the $O(4)$ case when $SU(2)$
matrices are expanded in terms of the unit- and the Pauli-matrices.
\begin{figure}
\vspace{9.5cm}
\caption{Autocorrelation time in sweeps for
four dimensional $SU(2)$ lattice gauge theory
in a finite size scaling limit.}
\end{figure}
We close with the example of
a recent simulation of the $SU(2)$ gauge theory$^{17}$
where it was possible to
determine the relation between $\tau$ and a scale in lattice units
for a whole range of scales as shown in Fig.~1.
In this study a renormalized coupling
constant was held fixed which is equivalent to a finite size scaling limit
at fixed $L\sqrt{K}$ with the
string tension $K$ assuming the r\^ole
of a correlation length.
The time $\tau$ refers to independently estimating the
renormalized coupling.
The line in the plot represents a fit with the
form (\ref{z}) giving $z=1.0(1)$ and $c=0.5(1)$.
For further details on algorithmic and physical aspects of this work
we have to refer to Ref.~17.
\section{Conclusions}
\noindent
We have presented some of the accelerated algorithms for the Monte
Carlo simulation of spin
and gauge theories that have been discovered and tested in recent years.
As a result, critical continuum behavior can be studied now much more
accurately, especially in two and three dimensions. In four dimensions,
which is the most interesting case for particle physics, the situation
will become similar as larger systems are studied on future computers.
In the presence of Goldstone modes, the new techniques are crucial
already now.
If algorithms of the multigrid type with $z < 1$ are found for
gauge theories, the system size at which they really become profitable
will strongly depend on the overhead they inflict.
In simulations of the $CP^3$ model, for instance, it has been found
that on vector machines the crossover between OR and MGMC occurs only
for correlation lengths $\xi = 20 \ldots 30$.$^{13,18}$
\vspace{1ex}
\noindent
{\large\bf Acknowledgements} \newline
I would like to thank Martin Hasenbusch and Steffen Meyer for
correspondence and discussions on their multigrid technique.
\newpage
\section{Introduction}
Analyzing QCD and QCD-like theories on $R_3\times S_1$
provides new insights in gauge dynamics at strong coupling
and offers a new framework for discussing various ideas on confinement.
The radius of the compact dimension $r(S_1)$ plays a role of an adjustable
parameter, an obvious bonus and a welcome addition to a rather scarce
theoretical toolkit available in strongly coupled gauge theories.
As the circumference $L$ of the circle $S_1$ varies, so does the
dynamical pattern of the theory. In some instances, at $L\ll \Lambda^{-1}$
the theory becomes weakly coupled.
On the other hand, in the decompactification limit, $L\gg \Lambda^{-1}$,
we recover conventional four-dimensional QCD, with its most salient feature,
non-Abelian confinement.
A qualitative picture of confinement in terms of the Polyakov line
was suggested by Polyakov and Susskind long ago \cite{Polyakov:1978vu, Susskind:1979up}.
Assume that the compactified dimension is $z$.
The Polyakov line (sometimes called the Polyakov loop)
is defined as a path-ordered holonomy of the Wilson line
in the compactified dimension,
\beq
{\cal U} = P\exp\left\{i\int_0^L a_z dz \right\} \equiv V U V^\dagger
\label{onem}
\eeq
where $L$ is the size of the compact dimension while
$V$ is a matrix diagonalizing ${\cal U}$,
\beq
U = {\rm diag}\{ v_1, v_2, ..., v_N\} \,.
\label{twom}
\eeq
According to Polyakov, non-Abelian confinement implies that the eigenvalues
$v_i$ are randomized: the phases of $v_i$ wildly fluctuate over the entire
interval $[0,2\pi]$ so that
\beq
\langle {\rm Tr} U \rangle =0\,.
\label{threem}
\eeq
The exact vanishing of $\langle {\rm Tr} U\rangle$
in pure Yang--Mills
is the consequence of the unbroken $Z_N$ center symmetry in
the non-Abelian confinement regime. Introduction of
dynamical fermions (quarks)
generally speaking breaks the $Z_N$ center
symmetry at the Lagrangian level.\footnote{It is still an emergent dynamical symmetry
in the multicolor limit \cite{Shifman:2007kt,Armoni:2007kd}; however, we limit ourselves to small $N$. In this paper parametrically $N$ is of order one.} However, the picture of wild fluctuations of the phases of $v_i$'s remains intact. Therefore, it is generally expected
that $\langle \frac{1}{N} {\rm Tr} U\rangle $ is strongly suppressed even
with the dynamical fermion fields that respect no center symmetry,
$\langle \frac{1}{N} {\rm Tr} U\rangle \sim 0$.
This expectation is supported by lattice simulations at finite
temperatures \cite{L1} demonstrating
that $\langle {\rm Tr} U\rangle$ is very close to zero at large $L$
(low temperatures).
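The randomization picture behind Eq.~(\ref{threem}) is easy to illustrate numerically (an illustrative Monte Carlo sketch added here for the reader, not part of the original argument): averaging ${\rm Tr}\, U$ over eigenvalue phases drawn uniformly from $[0,2\pi]$ gives a result compatible with zero.

```python
import cmath
import random

# Illustrative Monte Carlo: if the phases of the N eigenvalues v_i fluctuate
# uniformly over [0, 2*pi], the ensemble average of Tr U vanishes.
def average_tr_u(n_colors, n_samples, rng):
    total = 0j
    for _ in range(n_samples):
        total += sum(cmath.exp(1j * rng.uniform(0.0, 2.0 * cmath.pi))
                     for _ in range(n_colors))
    return total / n_samples

avg = average_tr_u(n_colors=3, n_samples=100_000, rng=random.Random(0))
print(abs(avg))  # statistical noise of order sqrt(N/n_samples)
```

The residual value scales as $\sqrt{N/n_{\rm samples}}$, consistent with pure Debye-like statistical noise around $\langle {\rm Tr}\, U\rangle = 0$.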
On the other hand, in QCD and QCD-like theories\,\footnote{
By QCD-like theories we mean non-Abelian gauge theories without elementary scalars,
e.g., Yang--Mills with fermions in the two-index symmetric or antisymmetric representation, to be referred to as S/AS, see below.} at
small $L$ (high temperatures) the center-symmetric field configuration
is dynamically disfavored. In many instances the vacuum is attained at
$\langle\frac{1}{N} {\rm Tr} U \rangle =1$. In this case, the effective low-energy theory is at strong coupling, and it is as hard to deal with it as with
QCD on $R_4$. Typically, the small and large-$L$ domains are separated
by a phase transition (or phase transitions). For instance, for S/AS
with even $N$ this is a $Z_2$ phase transition.
Numerical studies show that for $N\geq 3$ there is a thermal
phase transition between confinement and deconfinement phases.
Similar numerical studies detect a temperature $T_\chi$ at which
the broken chiral symmetry of $T=0$ QCD gives place to restored chiral symmetry
of high-$T$ QCD. The phase transition at $T_\chi$ is that of the chiral symmetry restoration (the lower plot in Fig.~\ref{fig:surgery}).
\begin{figure}[t]
\begin{center}
\includegraphics[width=3in]{surgery}
\caption
{\small
Quantum chromodynamics as a function of compactified direction
circumference before and after surgery (QCD and QCD$^*$, respectively).
$L_c$ is the point of a phase transition.}
\end{center}
\label{fig:surgery}
\end{figure}
In this case small-$L$ physics says little, if anything, about
large-$L$ physics, our desired goal. We would like to create a different situation.
We would like to design a theory which (i) in the decompactification
large-$L$ limit tends to conventional QCD and its QCD-like sisters;
(ii) at small $L$ is analytically tractable and has both confinement and chiral symmetry breaking;
and (iii) has as smooth a transition between the small and large-$L$
domains as possible (the upper plot in Fig.~\ref{fig:surgery}). If this endeavor
--- rendering small and large-$L$ physics continuously connected ---
is successful, we could try to
use small-$L$ laboratory to
extract lessons about QCD and QCD-like theories on $R_4$.
We will argue below that the goal can be achieved
by performing a so-called double-trace deformation of QCD and QCD-like
theories.\footnote{The double trace deformations were previously discussed in the context of gauge/string theory dualities in \cite{Aharony:2001pa, Witten:2001ua, Berkooz:2002ug,Barbon:2002xk}, as well as
in field theory \cite{Schaden:2004ah, Pisarski:2006hz, Myers:2007vc}.}
To this end we add a non-local operator
\begin{equation}
P[U({\bf x}) ]= \frac{2}{\pi^2 L^4} \,\sum_{n=1}^{\left[\frac{N}{2}\right]} d_n |
\, {\rm Tr}\, U^n({\bf x} )|^2 \qquad {\rm for} \; {\rm SU}(N),
\label{fourma}
\end{equation}
to the QCD action,
\beq
\Delta S =
\int_{R_3} d^3x\, L \, P[U({\bf x}) ]\,,
\label{fivem}
\eeq
where $d_n$ are numerical parameters to be judiciously chosen.
The theories obtained in this way will be labeled by asterisk.
In minimizing $S+\Delta S$ the effect due to deformation
(\ref{fourma}) is two-fold. First, it tends to minimize $ |{\rm Tr}\, U({\bf x} )|$.
Second, it tends to maximize the distance between the
eigenvalues of $U$.
It is necessary to have a polynomial
of order $[N/2]$ to force the eigenvalues of the Polyakov line to be maximally apart from one another, i.e. to push the theory towards the center-symmetric point
depicted in Fig.~\ref{zns}.
Here $[x]$ stands for the integer part of $x$. To stabilize the vacuum sufficiently close to the center-symmetric configuration the coefficients $d_n$ must be large enough,
presumably, $d_n\sim 1$. Some technical details are discussed in the Appendix.
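To see why a single $|{\rm Tr}\, U|^2$ term would not suffice, consider the following toy check (an illustrative Python sketch with hand-picked eigenvalue sets, added here for concreteness): for SU(4), a "clumped" configuration already has ${\rm Tr}\, U = 0$ yet is far from center-symmetric; only the $n=2$ term in (\ref{fourma}) penalizes it.

```python
import cmath

def tr_power(eigs, n):
    """Tr U^n for a diagonal holonomy with the given eigenvalues."""
    return sum(v**n for v in eigs)

# SU(4): a 'clumped' eigenvalue set with det U = 1 that fools the n = 1 term...
clumped = [1, 1, -1, -1]
# ...versus the center-symmetric set v_k = exp(2*pi*i*k/4) of Eq. (10).
center_symmetric = [cmath.exp(2j * cmath.pi * k / 4) for k in range(1, 5)]

assert abs(tr_power(clumped, 1)) < 1e-12          # Tr U = 0: n = 1 term is blind
assert abs(tr_power(clumped, 2) - 4) < 1e-12      # |Tr U^2| = 4: n = 2 term acts
assert abs(tr_power(center_symmetric, 1)) < 1e-9  # Tr U = 0
assert abs(tr_power(center_symmetric, 2)) < 1e-9  # Tr U^2 = 0 as well
print("terms up to order [N/2] = 2 are needed for SU(4)")
```

This is precisely why the deformation must be a polynomial of order $[N/2]$ in the winding number.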
At large $L$, the deformation switches off and has
no impact on the theory, i.e. QCD$^*\approx$ QCD. However, at small $L$
the impact is drastic. Given an appropriate choice
of $d_n$'s the deformation (\ref{fivem}) forces the theory to pick up
the following set\,\footnote{
More exactly, the set of VEVs will be very close to (\ref{10}).}
of the vacuum expectation values (VEVs):
\beq
v_k = e^{\frac{2\pi i k}{N}},\qquad k=1,... , N,
\label{10}
\eeq
(or permutations), see Fig.~\ref{zns}.
\begin{figure}[h]
\centerline{\includegraphics[width=2in]{znsym}}
\caption{\small $Z_N$ symmetric vacuum fields $v_k$. }
\label{zns}
\end{figure}
If we define
\beq
e^{iaL} \equiv U,
\eeq
\beq
a =\sum_{\rm Cartan\,\, gen} a_c T^c \equiv {\rm diag}\{ a_1, a_2, ..., a_N\}\,,\qquad \sum_{k=1}^N a_k =0\,,
\label{4}
\eeq
it is obvious that Eq.~(\ref{10}) implies
\beqn
&& \{La_i\} = \{ -i L\, \ln v_i \,\, ({\rm mod}\,\, 2\pi)\}
\nonumber\\[3mm]
&&= \left\{-\frac{2\pi [N/2]}{N},\,\, -\frac{2\pi ([N/2]-1)}{N}, ....,\,
\frac{2\pi [N/2]}{N}\right\}\,.
\label{12}
\eeqn
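The VEV set (\ref{10}) and the holonomy phases (\ref{12}) can be verified directly (an illustrative Python sketch, not part of the derivation): the phases come out equally spaced by $2\pi/N$, and ${\rm Tr}\, U^n = 0$ for all $n = 1, \ldots, N-1$.

```python
import cmath

N = 5
# Center-symmetric VEVs of Eq. (10): v_k = exp(2*pi*i*k/N), k = 1..N.
vevs = [cmath.exp(2j * cmath.pi * k / N) for k in range(1, N + 1)]

# Holonomy phases L*a_i (mod 2*pi), mapped to (-pi, pi] as in Eq. (12).
phases = sorted(cmath.phase(v) for v in vevs)
gaps = [b - a for a, b in zip(phases, phases[1:])]
assert all(abs(gap - 2 * cmath.pi / N) < 1e-12 for gap in gaps)  # maximally apart

# At the Z_N-symmetric point the traces of all powers up to N-1 vanish.
for n in range(1, N):
    assert abs(sum(v**n for v in vevs)) < 1e-9
print([round(p, 3) for p in phases])
```

The equal spacing of the eigenvalue phases is exactly the statement that the theory sits at the center-symmetric point of Fig.~\ref{zns}.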
This means, in turn, that
the theory is maximally Higgsed,
\beq
{\rm SU}(N)\to {\rm U}(1)^{N-1}
\label{higgsed}
\eeq
and weakly coupled at $L\ll \Lambda^{-1}$. The gauge bosons from the Cartan subalgebra
(to be referred to as photons) remain classically massless,
while the off-diagonal gauge bosons (to be referred to as $W$ bosons)
acquire large masses. The effective low-energy dynamics is that of compact QED. (See footnote~\ref{f6}, though.)
It is not trivial. Dual photons acquire exponentially small masses nonperturbatively
through the instanton-monopole mechanism \cite{P2,Unsal:2007vu}.
The mass gap generation in the dual description amounts to
linear Abelian confinement (at exponentially large distances).
Chiral bifermion condensates are generated too \cite{Davies:2000nw,Unsal:2007jx}.
Thus, the dynamical patterns in the small and large-$L$ domains
do not seem to be that different from each other. Details are different (e.g. Abelian vs. non-Abelian confinement), but gross features appear to be similar.
It is not unreasonable to expect that there is no phase transition in $L$.
What is meant when we speak of Abelian/non-Abelian confinement
\cite{SY,Armoni:2007kd}?
In the former case the gauge group acting in the infrared (IR) and responsible for the flux tube formation is Abelian (i.e. U(1)$\times$U(1) ...). In the latter case
we deal with a non-Abelian group in the infrared.
The best-known example exhibiting both regimes is the Seiberg--Witten solution
\cite{Seiberg:1994rs} of a deformed
${\mathcal N}=2$ super-Yang--Mills theory. If the deformation parameter $\mu$ is small,
$$
|\mu |\ll \Lambda \,,
$$
the SU$(N)$ gauge group is spontaneously broken down to U(1)$^{N-1}$, and the confining string is a generalization of
the Abrikosov vortex \cite{ANO}.
In the opposite limit
$$
|\mu |\gg\Lambda \,,
$$
the breaking of SU$(N)$ down to U(1)$^{N-1}$ does not occur.
The infrared dynamics is determined by SU$(N)$; the corresponding flux tubes
should be non-Abelian. Since the theory is holomorphic in $\mu$,
the Abelian and non-Abelian confinement regimes are
expected to be smoothly connected.
Another example which should be mentioned
(and which is close in formulation
to what will be presented below) where it is believed that no phase
transition in $L$ takes place is ${\cal N}=1$ supersymmetric Yang--Mills
(SYM) theory on $R_3\times S_1$ \cite{Cohen:1983fd, Katz:1996th, Davies:1999uw, Davies:2000nw, Unsal:2007jx}.
We expect that QCD$^*$ and QCD$^*$-like theories are of this type ---
there is no phase transition between the Abelian confinement small-$L$ and
non-Abelian confinement large-$L$ domains.
\vspace{2mm}
{\bf Conjecture:} \label{claim} The deformed {\bf one}-flavor QCD-like
theories interpolate from small $r(S_1)$ to large $r(S_1)$ without phase transitions.
\vspace{2mm}
Since the theories under consideration are non-supersymmetric we cannot back up this statement by holomorphy.
Thus, the smoothness conjecture is on a somewhat weaker basis than
in the Seiberg--Witten problem. However, arguments to be presented below
can be viewed as at least some evidence in favor of the absence of the phase transition
in $L$. More evidence can (and should) be provided by lattice studies.
In QCD-like theories with more than one flavor, chiral symmetry breaking
($\chi$SB) occurring on $R_4$ at strong coupling produces
$N_f^2-1$ Goldstone mesons. Needless to say, it is impossible to get such Goldstones
at weak coupling at small $L$. However, if one
considers theories with {\em one} fermion flavor in the center-symmetric
regime, there are no obvious reasons for a chiral phase transition.
The chiral symmetry in such theories is discrete, and its spontaneous breaking results
in domain walls rather than Goldstones. This phenomenon can show up both at strong and weak couplings. In this paper we will limit ourselves to QCD-like theories with a single flavor.
To be more exact, we will discuss in some detail SU$(N)$ Yang--Mills theory
with one fermion in the fundamental and two-index AS representations. Analysis of the
two-index S fermion essentially runs parallel to that of the AS case.
We will also dwell on SU$(N)\times$SU$(N)$ Yang--Mills with
the bifundamental fermion. The number of colors $N$ is
{\em not} assumed to be large. The large-$N$ limit and the case of fermions in the
adjoint representation were treated elsewhere
\cite{Armoni:2007kd,Unsal:2007vu}.
Among other results, we will,
in particular,
argue
that many dynamical features of SU$(N) \times$SU$(N)$ orbifold QCD
are remarkably close to those of SYM
theory. The pattern of the chiral symmetry breaking, the mass gap, the nonperturbative spectrum, the $k$-string tensions --- all of the above are demonstrated to
coincide in these two theories.
The paper is organized as follows. In Sect.~\ref{s2} we outline our formulation of the problem and briefly review general aspects of one-flavor QCD-like theories
on $R_4$ and $R_3\times S_1$. We also review dual description
of three-dimensional Yang--Mills (the Georgi--Glashow model),
and Polyakov's confinement. In Sect.~\ref{FF} we consider the case
of one fermion in the fundamental representation and solve the
theory at small $r(S_1)$. In Sect.~\ref{s4} we carry out the same analysis
in the SU$(N)\times$SU$(N)$ theory with one bifundamental fermion (orbifold theory).
In Sect.~\ref{s5} we consider Yang--Mills theory with
one fermion in the two-index antisymmetric representation of SU$(N)$.
Section~\ref{s6} is devoted to $\theta$ dependence.
In Sect.~\ref{plan} we discuss how our results are
related to planar equivalence.
Finally, Section~\ref{s7} summarizes our results and outlines some problems for future investigation.
\section{QCD and QCD-like theories on \boldmath{$R_4$} and \\
\boldmath{$R_3\times S_1$}: general aspects}
\label{s2}
We will consider one-flavor QCD-like theories with the SU$(N)$ gauge group and
fermions in the following representations:
\begin{equation}
{\cal R} = \{\rm F, AS, S, Adj, BF \} \,,
\label{Eq:classes}
\end{equation}
where F stands for fundamental, AS and S are two-index antisymmetric and symmetric representations,
Adj stands for adjoint, while BF for bifundamental. In all cases except Adj we deal with
the Dirac fermion field, while in the adjoint case with the Majorana (Weyl) spinor.
This is nothing but supersymmetric Yang--Mills (SYM) theory.
In the BF case the gauge group is SU$(N)\times$SU$(N)$, with the fermion field
being fundamental with respect to the first SU$(N)$ and antifundamental with respect to the second SU$(N)$.
For the adjoint fermions we will use the following nomenclature.
The theory with one Majorana flavor will be referred to as SYM,
while in the case of two or more flavors we will speak of QCD(Adj).
The boundary conditions for
fermions can be either periodic (${\cal S}^{+}$) or antiperiodic (${\cal S}^{-}$)
in the compactified dimension.
Yang--Mills theories with two-index fermions received
much attention lately in connection with planar equivalence between such theories and
SYM theory (see \cite{ Armoni:2007vb} and references therein). At $N=3$ the AS theory is equivalent to F.
Theoretically the most informative is ${\cal N}=1$ SYM theory.
For periodic spin connection ${\cal S}^{+}$ this theory has unbroken center symmetry and broken discrete chiral symmetry for any $r(S_1)$.
In fact, the chiral condensate $\langle {\rm Tr} \lambda\lambda\rangle$
was exactly calculated long ago
\cite{SVC,Davies:2000nw}, both on $R_4$ and $R_3\times S_1$,
and was shown to be totally independent of the value of $r(S_1)$.
More recently, this theory was demonstrated \cite{Unsal:2007jx}
to possess Abelian confinement
at small $L$. Therefore, there is no {\em obvious}
obstruction for the $L$ evolution to be smooth. We know that at $L$ larger than
the strong scale $\Lambda^{-1}$, the neutral sector observables in
${\cal N}=1$ SYM theory and QCD(AS/S/BF) are remarkably
close and only differ by mild $O(1/N)$ effects. However, the
complex representation fermions break center symmetry at small $r(S_1)$
implying that these theories become drastically different from ${\cal N}=1$ SYM
theory. The double-trace deformation (\ref{fivem}) is designed
to maintain this similarity at small $r(S_1)$ too. One of the most intriguing
findings of this paper is that the analytical tractability of ${\cal N}=1$ SYM
theory in the small-$r(S_1)$ limit is not necessarily a consequence of supersymmetry.
The unbroken center symmetry is equally important.
Briefly summarizing our knowledge of other one-flavor QCD-like
theories\,\footnote{A part of this knowledge is folklore.} on $R_4$
we can say the following. All these theories are expected to exhibit:
\vspace{1mm}
(i) Mass gap: there are no massless particles in the physical spectrum;
\vspace{1mm}
(ii) Non-Abelian confinement:
the gauge group is not Higgsed, chromoelectric flux tubes are formed between
quarks and antiquarks,
these flux tubes are not stable, generally speaking, since the dynamical quark
pair production can break them. No color-charged objects are present in the physical spectrum;
\vspace{1mm}
(iii) Discrete chiral symmetry breaking\,\footnote{ For F representation, the
anomaly-free $Z_2$ is the fermion number and cannot be spontaneously
broken. The theory has a unique vacuum.} for $ {\cal R} = \{\rm AS, S, BF, SYM \}$:
The one-flavor QCD-like theories on $R_4$ possess an axial U(1) symmetry at the classical level. Only a discrete subgroup of it, $Z_{2h}$, is the symmetry of the quantum theory,
\begin{equation}
Z_{2h}= \{ Z_2, Z_{2N-4}, Z_{2N+4}, Z_{2N}, Z_{2N} \} \;\; {\rm for} \;\;
{\cal R} = \{\rm F, AS, S,BF,SYM \} ,
\end{equation}
respectively. Here $2h$ is the number of the fermion
zero modes in the instanton background.
In all cases but F the axial $Z_{2h}$ is spontaneously broken down to $Z_2$.
Discrete symmetry breaking, unlike that of the continuous symmetries, does not lead to Goldstone bosons.
Instead, the
theory must possess $h$ isolated vacua.
The above picture follows from multiple lattice calculations, and
supersym\-metry-based and large-$N$ methods.
In this work the double-trace deformation of QCD(${\cal R}$) on
$S_1 \times R_3$ with small $r(S_1)$ is used to stabilize the
theories under consideration at (or, more exactly, very close to)
a center-symmetric point.
At small $r(S_1)$ the non-Abelian gauge group
is Higgsed down to the maximal Abelian subgroup,
but neither confinement nor the above chiral properties
are lost. We will explicitly demonstrate confinement,
the discrete chiral symmetry breaking, and mass gap generation.
On $S_1 \times R_3$ the Yang--Mills Lagrangian
with one fermion flavor in
the representation ${\cal R}$
takes the form
\begin{equation}
S= \int_{R_3 \times S_1} \frac{1}{g^2} \left[ \frac{1}{2}\, {\rm Tr} \,
F_{MN}^2 (x) + i
\bar \Psi {\rlap{\raise 1pt \hbox{$\>/$}}D}
\Psi \right]
\label{eq:cont}
\end{equation}
where $\Psi$ is the four-dimensional Dirac spinor in the
representation ${\cal R}= \{{\rm F, AS,S } \}$ of the gauge group SU$(N)$, $F_{MN}$ is the non-Abelian gauge field strength,\footnote{Throughout the paper we use
the following notation: $M,\, N=1,\, \ldots,\, 4$ are
four-dimensional Lorentz indices while $\mu,\, \nu=1, 2, 3$ are
three-dimensional indices. We normalize the Lie algebra generators as ${\rm Tr} \; t^A t^B = \frac{1}{2} \delta^{AB}$. } and ${\rlap{\raise 1pt \hbox{$\>/$}}D}=\gamma_M D_M= \gamma_M( \partial_M + i A_M)$ is the covariant derivative acting on representation ${\cal R}$. For QCD(BF), the gauge group is SU$(N) \times {\rm SU}(N)$ and
gauge field part of the action must be replaced by
$$F_{MN}^2 (x) \rightarrow F_{1, MN}^2 (x) +
F_{2, MN}^2 (x)\,.$$
In this theory the fermion is in the bifundamental representation.
In terms of its Weyl components, the Dirac fermions are decomposed as
\begin{equation}
\Psi= \left(\begin{array}{l}
\lambda \cr
\bar \psi
\end{array}
\right)
\end{equation}
where $\lambda,\,\, \psi$ are two-component (complex) Weyl
spinors. In three dimensions $\lambda,\,\, \psi$ represent two
Dirac spinors.
We must use the Kaluza--Klein (KK) mode decomposition
for all fields in the Lagrangian. If we discard all modes other than zero we will arrive at
a three-dimensional theory with a gauge field, a scalar field in the adjoint
and two three-dimensional spinors.
The $S_1 \times R_3$ reduction of $R_4$ Yang--Mills does not quite lead to
three-dimensional Yang--Mills, but at first, we will ignore this nuance, to be discussed
in detail later, and will briefly review the phenomena that
occur in three-dimensional Yang--Mills with a scalar field in the adjoint (discarding fermions for the time being).
Long ago Polyakov considered three-dimensional SU(2) Georgi--Glashow mo\-del
(a Yang-Mills + adjoint Higgs system)
in the Higgs regime \cite{P2}. In this regime SU(2) is broken down to U(1),
so that at low energies the theory reduces to compact electrodynamics.
The dual photon is a scalar field $\sigma$ of the phase type
(i.e. it is defined on the interval $[0, 2\pi ]$):
\beq
F_{\mu\nu}
=\frac{g_3^2}{4\pi} \, \varepsilon_{\mu\nu\rho}\left( \partial^\rho\,\sigma\right)\,,
\label{14.8}
\eeq
where $g_3^2$ is the three-dimensional gauge coupling with mass dimension
$ [g_3^2]=+1$.
In perturbation theory the dual photon $\sigma$
is massless. However, it acquires a mass
due to instantons (technically, the latter are identical to the 't Hooft--Polyakov
monopoles, after the substitution of one spatial dimension by imaginary time;
that is why below we will refer to them as instanton-monopoles).
In the vacuum of the theory, one deals with a gas of instantons interacting
according to the Coulomb law.
The dual photon mass is due to the Debye screening.
In fact, the dual photon mass is determined by the one-instanton vertex,
\beq
m_\sigma \sim m_W^{5/2} g_3^{-3}e^{-S_0/2}
\eeq
where $S_0$ is the one-instanton action,
\beq
S_{0} = 4\pi \, \frac{m_W}{g^2_3}\,,
\label{14.5}
\eeq
$m_W$ is the lightest $W$ boson mass, see below. In terms of four-dimensional quantities
$S_0 = 8\pi^2/(Ng^2)$.
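The four-dimensional form of the action follows from (\ref{14.5}) with the lightest $W$-boson mass $m_W = 2\pi/(NL)$ quoted below and $g_3^2 = g^2 L^{-1}$; a quick numerical cross-check (an illustrative sketch with arbitrary sample couplings):

```python
import math
import random

# Cross-check S_0 = 4*pi*m_W/g_3^2 = 8*pi^2/(N*g^2) for random couplings,
# using m_W = 2*pi/(N*L) and the 3d coupling g_3^2 = g^2/L.
rng = random.Random(1)
for _ in range(5):
    g = rng.uniform(0.1, 2.0)
    L = rng.uniform(0.1, 3.0)
    N = rng.randint(2, 9)
    m_W = 2 * math.pi / (N * L)          # lightest W mass in the Z_N vacuum
    g3_sq = g**2 / L                     # three-dimensional gauge coupling
    S0 = 4 * math.pi * m_W / g3_sq       # Eq. (14.5)
    assert math.isclose(S0, 8 * math.pi**2 / (N * g**2), rel_tol=1e-12)
print("S_0 = 8*pi^2/(N*g^2) holds")
```

Note that the $L$ dependence cancels, as it must for a quantity expressible in purely four-dimensional terms.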
As a result, the low-energy theory
is described by a three-dimensional sine-Gordon model,
\beq
{\cal L}_\sigma = \frac{g_3^2}{32\pi^2} (\partial_\mu\sigma )^2 + c_1
m_W^5g_3^{-4}e^{-S_0}
\, \cos\sigma\,,
\label{ptpm}
\eeq
where $c_1$ is an undetermined prefactor.
The coefficient in front of $ e^{-S_0}
\, \cos\sigma$,
$$
\mu \equiv c_1
m_W^5g_3^{-4} \,,
$$
has mass dimension $[\mu ]= +3$. The combination $\mu e^{-S_0}$ is the monopole
fugacity.
This model supports a domain line\,\footnote{Similar to the axion domain wall.} (with
$\sigma$ field vortices at the endpoints)
which in 1+2 dimensions must be interpreted as a string.
Since the $\sigma$ field dualizes the three-dimensional photon, the $\sigma$ field vortices
in fact represent electric probe charges in the original formulation, connected by the
electric flux tubes which look like domain lines in the dual formulation.
Now, if we switch on massless adjoint fermions, as in \cite{Affleck:1982as},
the mass gap generation does not occur in the Polyakov model {\em per se}.
This is due to the fact that the instanton-monopoles acquire fermion zero modes
which preclude the potential term as in Eq.~(\ref{ptpm}).
Correspondingly, the dual photons remain massless
and the model no longer supports domain lines.
The linear confinement is gone.
This situation changes, however, if three-dimensional Yang--Mills theory is
obtained as a low-energy reduction of a four-dimensional gauge theory
on $S_1 \times R_3$ with small $r(S_1)$.
When the adjoint Higgs field is compact, as in Fig.~\ref{zns}, in addition to $N-1$
't Hooft--Polyakov monopole-instantons there is one extra monopole
(whose existence is tied to $\pi_1 (S_1) \neq 0$). It can be referred to as
the Kaluza--Klein (KK) monopole-instanton.\footnote{The eigenvalues
shown in Fig.~\ref{zns} may be viewed as Euclidean D2-branes. $N$ split branes support a spontaneously broken U(1)$^{N}$ gauge theory, whose U(1)
center of mass decouples, and the resulting theory is U(1)$^{N-1}$. The $N-1$
't Hooft--Polyakov monopoles may be viewed as Euclidean D0 branes
connecting the eigenvalues $(a_1 \rightarrow a_2), \,
(a_2 \rightarrow a_3),\, \ldots, \, (a_{N-1} \rightarrow a_{N})$. Clearly,
we can also have a monopole which connects $(a_N \rightarrow a_1)$ which owes its existence to the periodicity of the adjoint Higgs field, or equivalently, to the fact that the underlying theory is on $S_1 \times R_3$. Usually it is called the KK monopole. The
Euclidean D0 branes with the
opposite orientation, connecting $(a_{j} \leftarrow a_{j+1}),\,\, j=1, \ldots
N $, are the antimonopoles. This viewpoint makes manifest
the fact that the KK and 't Hooft--Polyakov monopoles are all on the same footing. The magnetic and topological charges of the monopoles
connecting $(a_{j} \leftrightarrow a_{j+1})$ are
$\pm \Big( (4\pi/g)\mbox{ \boldmath $\alpha$}_j, \frac{1}{N} \Big)$
where the direction of the arrow is correlated with the sign of the charges.
}
Each of these monopoles carries fermion zero modes, hence they cannot contribute to the bosonic potential at the level $e^{-S_0}$. They can and do
contribute at the level $e^{-2S_0}$.
Indeed,
the bound state of the 't Hooft--Polyakov monopole-instanton with magnetic charge
$\mbox{\boldmath $\alpha$}_i$ and anti-monopole with charge $-\mbox{\boldmath $\alpha$}_{i+1}$ has no fermion zero modes: its topological charge
coincides with that of the perturbative vacuum. Hence, such a bound state
can contribute to the bosonic potential. Let
\beq
\Delta^{0}_{\rm aff}= \{
\mbox{\boldmath $\alpha$}_1, \mbox{\boldmath $\alpha$}_2, \ldots, \mbox{\boldmath $\alpha$}_N \}
\end{equation}
denote the extended (affine) root system of SU(N) Lie algebra.
If we normalize the magnetic and topological charges of the monopoles as
\begin{equation}
\left( \int_{S^2} F, \int
\frac{g^2}{32 \pi^2} \, F_{\mu \nu}^a {\widetilde F}^{\mu \nu\,a} \right) = \left( \pm \frac{4 \pi}{g}
\mbox{\boldmath $\alpha$}_i, \pm \frac{1}{N} \right), \quad {\rm for} \; \; \mbox{\boldmath $\alpha$}_i \in \pm \Delta^{0}_{\rm aff}
\label{38pp}
\end{equation}
where $ \mbox{\boldmath $\alpha$}_i$ stands for the simple roots of the affine Lie algebra
then the following bound states are relevant:
\beq
\left[ \frac{4 \pi}{g} \mbox{\boldmath $\alpha$}_i,\,\, \frac{1}{N} \right] + \left[ -\frac{4 \pi}{g} \mbox{\boldmath $\alpha$}_{i+1}, \,\,- \frac{1}{N} \right] =\left[\frac{4 \pi}{g} \left( \mbox{\boldmath $\alpha$}_i - \mbox{\boldmath $\alpha$}_{i+1}
\right), \,\, 0 \right].
\label{38}
\eeq
This pair is stable, as was shown in Ref.~\cite{Unsal:2007vu}, where it is referred
to as a magnetic bion. Thus, we can borrow Polyakov's discussion of magnetic monopoles and apply it directly to these objects. The magnetic bions will induce a mass term for the dual photons via the Debye screening, the essence of Polyakov's mechanism.
The vacuum field (\ref{12}) of the deformed SU$(N)$ theory respects the (approximate) center symmetry $Z_N$. This field configuration breaks the gauge symmetry as indicated in
(\ref{higgsed}).
Due to the gauge symmetry breaking, electrically charged particles acquire masses.
(By electric charges we mean charges with regards to $N-1$
``photons" of the low-energy theory.)
The set of $N-1$ electric charges and masses of $N$ lightest $W$ bosons are
\begin{eqnarray}
\mbox{\boldmath $q$}_{W_{\mbox{\boldmath $\alpha$}}} = g \mbox{\boldmath $\alpha$}\,, \qquad m_{W_{\mbox{\boldmath $\alpha$}}}=
\frac{2 \pi }{N L} \,,
\end{eqnarray}
where $\mbox{\boldmath $\alpha$}_i$ ($ i=1, ... , N$) are the simple and affine roots of the SU$(N)$ Lie algebra (see Eq.~(\ref{dop6})). Note that
$N$ lightest $W$ bosons are degenerate in the center-symmetric vacuum. The remaining
$N^2 -N$ charged $W$ bosons can be viewed as composites of the above.
The stabilizing double-trace term (\ref{fourma}) contributes to the self-interaction
of the physical (neutral) Higgs fields. Assuming that all coefficients $d_n$
are of order one, the masses of these fields are ${\mathcal O}(g/L)$.
For instance, for SU(2) and SU(3) the physical Higgs masses are $(g\sqrt{d_1})/L$.
These masses are much lighter than those of the $W$ bosons but much heavier
than those of the fields in the effective low-energy
Lagrangian (dual photons, see Eq.~(\ref{40}) below).
The stabilizing double-trace term (\ref{fourma}) also contributes
to corrections to the $W$ boson masses. They are expandable in $g^2$,
i.e.
$$
m_{W_{\mbox{\boldmath $\alpha$}}}=
\frac{2 \pi }{N L} \left(1 +{\mathcal O}(g^2)
\right).
$$
In the SU$(N)$ gauge theory with an adjoint fermion on $R_3\times S_1$, which
is Higgsed according to (\ref{higgsed}), the bosonic part of the effective low-energy Lagrangian is generated by the pairs (\ref{38}), and hence the potential is proportional to $e^{-2S_0}$, rather than $e^{-S_0}$ of the Polyakov problem.
If we introduce an $(N-1)$-component vector $\mbox{\boldmath $\sigma$}$,
\beq
\mbox{\boldmath $\sigma$} \equiv \left(\sigma_1, ...., \sigma_{N-1}\right),
\eeq
representing $N-1$ dual photons
of the $U(1)^{N-1}$ theory, the bosonic part of the effective Lagrangian can be written as
\beq
{\cal L}(\sigma_1, ...., \sigma_{N-1}) = \frac{g_3^2}{32\pi^2} (\partial_\mu\mbox{\boldmath $\sigma$} )^2 +
c m_W^6 g_3^{-6}e^{-2S_0}
\, \sum_{i=1}^{N} \cos \left( \mbox{\boldmath $\alpha$}_i - \mbox{\boldmath $\alpha$}_{i+1}\right)\mbox{\boldmath $\sigma$}\,,
\label{40}
\eeq
where $c$ is an undetermined coefficient
and $g_3$ is the three-dimensional coupling constant,
\beq
g_3^2 = g^2\, L^{-1}\,.
\eeq
In terms of four-dimensional variables, the magnetic bion fugacity is
\beq
m_W^6 g_3^{-6}e^{-2S_0} \sim m_W^3 g^{-6} e^{-2S_0}\,.
\eeq
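The conversion between the three- and four-dimensional forms is fixed by $g_3^2 = g^2 L^{-1}$ and $m_W L = 2\pi/N$; a quick check (an illustrative sketch with arbitrary numerical values):

```python
import math

# m_W^6 g_3^{-6} = m_W^3 g^{-6} * (m_W L)^3, and m_W L = 2*pi/N is a pure
# number absorbed into the ~ sign; check for arbitrary sample values.
g, L, N = 1.3, 0.7, 4
m_W = 2 * math.pi / (N * L)
g3 = g / math.sqrt(L)
lhs = m_W**6 / g3**6
rhs = (2 * math.pi / N)**3 * m_W**3 / g**6
assert math.isclose(lhs, rhs, rel_tol=1e-12)
print("3d and 4d forms of the bion fugacity agree")
```

Both sides have mass dimension $+3$, matching the dimension of the fugacity $\mu$ introduced earlier.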
We recall that $\mbox{\boldmath $\alpha$}_i$ ($ i=1, ... , N-1$) represent the magnetic charges of $(N-1)$ types of the
't Hooft--Polyakov monopoles while the affine root
\beq
\mbox{\boldmath $\alpha$}_N= -\sum_{i=1}^{N-1} \mbox{\boldmath $\alpha$}_i
\label{dop6}
\eeq
is the magnetic charge
of the KK monopole.
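The root bookkeeping behind Eqs.~(\ref{38}) and (\ref{dop6}) is easy to verify explicitly (an illustrative Python sketch, assuming the standard embedding $\mbox{\boldmath $\alpha$}_i = e_i - e_{i+1}$ in the weight basis): the KK root closes the circle, and a bion carries magnetic charge $\mbox{\boldmath $\alpha$}_i - \mbox{\boldmath $\alpha$}_{i+1}$ with vanishing topological charge.

```python
# SU(N) root bookkeeping in the weight basis e_1..e_N (standard embedding,
# alpha_i = e_i - e_{i+1}; the affine root alpha_N = e_N - e_1 closes the circle).
N = 6

def alpha(i):
    """Magnetic-charge vector of the i-th monopole, i = 1..N (cyclic)."""
    q = [0] * N
    q[(i - 1) % N] += 1
    q[i % N] -= 1
    return q

roots = [alpha(i) for i in range(1, N + 1)]

# Eq. (dop6): the KK root equals minus the sum of the simple roots.
sum_simple = [sum(c) for c in zip(*roots[:-1])]
assert roots[-1] == [-c for c in sum_simple]

# A magnetic bion, Eq. (38): magnetic charge alpha_i - alpha_{i+1},
# topological charge 1/N + (-1/N) = 0, hence no fermion zero modes.
bion = [a - b for a, b in zip(roots[0], roots[1])]
assert bion == [1, -2, 1] + [0] * (N - 3)
print("affine root identity and bion charges verified for SU(%d)" % N)
```

This makes manifest the footnote's point that the KK and 't~Hooft--Polyakov monopoles stand on the same footing.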
Note that the bion configurations that contribute to the effective Lagrangian
have magnetic charges $\mbox{\boldmath $\alpha$}_i - \mbox{\boldmath $\alpha$}_{i+1}$ and vertices $e^{i(\mbox{\boldmath $\alpha$}_i - \mbox{\boldmath $\alpha$}_{i+1}) \mbox{\boldmath $\sigma$}}$, corresponding to a product of a monopole vertex
$e^{i\mbox{\boldmath $\alpha$}_i \mbox{\boldmath $\sigma$}}$
with charge $\mbox{\boldmath $\alpha$}_i$, and antimonopole
vertex $e^{-i\mbox{\boldmath $\alpha$}_{i+1} \mbox{\boldmath $\sigma$}}$
with charge $-\mbox{\boldmath $\alpha$}_{i+1}$ (without the zero mode insertions). With the $Z_N$-symmetric vacuum
field (\ref{12}) all fugacities are equal.
Equation (\ref{40}) implies that nonvanishing masses
proportional to $e^{-S_0}$ are generated for all $\sigma$'s. They are much smaller
than the masses in
the Polyakov model in which they are $\sim e^{-S_0/2}$.
There are
$N-1$ types of Abelian strings (domain lines). Their tensions are equal to each
other and proportional to $e^{-S_0}$. Linear confinement develops
at distances larger than $e^{S_0}$.
Needless to say, the physical spectrum
in the Higgs/Abelian confinement regime is richer than that
in the non-Abelian confinement regime. If in the latter case
only color singlets act as asymptotic states, in the Abelian confinement regime
all systems that have vanishing $N-1$ electric charges have finite mass and represent
asymptotic states.
{\bf Note 1:} For SU(2) and SU(3) Yang--Mills theories, the double-trace deformation is a particularly simple monomial
\begin{equation}
P[U({\bf x}) ]= \frac{2}{\pi^2 L^4} \ d_1 |{\rm Tr}\, U({\bf x} )|^2 \quad {\rm for} \; {\rm SU}(2),\,\,\, {\rm SU}(3) \,.
\end{equation}
{\bf Note 2:}
One can be concerned that the deformation potential is
given in terms of multi-winding line operators,
and looks nonlocal.
In the $L \Lambda\ll1 $ region where the deformation is crucial, there is no harm in viewing the
deforming operator as ``almost local" since we are concerned with physics at scales much larger than the compactification scale.
In the decompactification limit where the deformation is indeed nonlocal, it is not needed since its dynamical role is negligible. If one wants to be absolutely certain, one
can insert a
filter function
as the coefficient of the double-trace operator which shuts it off exponentially
$\sim e^{- L^2 \Lambda^2}$ at large $L$ in order not to deal with a
non-local theory.
\section{ QCD with one fundamental fermion}
\label{FF}
QCD(F) on $R_4$ possesses a U(1)$_V \times {\rm U}(1)_A $ symmetry, at the classical level acting as
$$
\Psi \rightarrow e^{i \alpha } \Psi,\quad \Psi \rightarrow e^{i \beta \gamma_5 } \Psi\,.
$$
Due to nonperturbative effects, only the anomaly-free $Z_2$ subgroup of the U(1)$_A$ is the genuine axial symmetry of the theory, the fermion number mod
two. This symmetry is already a part of the vector U(1)$_V$ symmetry, and, hence,
cannot be spontaneously broken. However, a bifermion condensate (which does not break any chiral symmetry) is believed to exist on $R_4$ as well as on $S_1 \times R_3 $ with sufficiently large $r(S_1)$.
The microscopic QCD Lagrangian also possesses the discrete symmetries $C, P, T $, and
continuous three-dimensional Euclidean Lorentz symmetry SO(3). Thus, the symmetries of the original theory are
\begin{eqnarray}
{\rm U}(1)_V \times C \times P \times T \,.
\label{Eq:allsymF}
\end{eqnarray}
The double-trace deformation respects all these symmetries. (Otherwise this would
explicitly contradict the claim made in Sect.~\ref{claim}.)
Below, we will construct a low-energy effective theory QCD(F)* assuming
that the double-trace terms stabilize the theory
in the center-symmetric vacuum. As usual, the set of all possible
operators that can appear in the effective
low-energy theory is restricted by the underlying symmetries (\ref{Eq:allsymF}).
Integrating out weakly coupled KK modes with nonvanishing frequencies
$$
|\omega_n| \geq \frac{2 \pi n}{L}\,, \quad n \neq 0\,,
$$
and adding the stabilizing
deformation term (\ref{fourma})
to the QCD(F) Lagrangian, we obtain the QCD(F)* theory.
This is the Yang--Mills + {\it compact} adjoint Higgs system
with fundamental fermions on $R_3$.
The action is\,\footnote{
Our four-dimensional Dirac $\gamma $ matrix conventions
are
$$
\gamma_{M}= \{ \gamma_{\mu} ,\,\,
\gamma_{4} \}\,,\quad \gamma_{\mu} = \sigma_1 \otimes \sigma_{\mu}\,, \quad \gamma_{4} = \sigma_2 \otimes I\,.$$
With this choice, the Dirac algebras in four and three dimensions are
$\{\gamma_M, \gamma_N \} = 2 \delta_{MN}$ and $\{ \sigma_{\mu} , \sigma_{\nu} \}= 2 \delta_{\mu \nu}$. It will be convenient to define $\bar \sigma_M= (\sigma_{\mu}, -i I) \equiv (\sigma_{\mu}, \sigma_4) $ and
$\sigma_M= (\sigma_{\mu}, i I) \equiv (\sigma_{\mu}, - \sigma_4) $\,. }
\begin{eqnarray}
S=&& \int_{R_3} \; \frac{L}{g^2} \Big[ {\rm Tr} \Big( \frac{1}{2} F_{\mu \nu}^2 +
(D_{\mu} \Phi)^2 + g^2 V [\Phi] \Big) \nonumber\\[3mm]
&& + i \bar \lambda
\left( \sigma_{\mu} (\partial_{\mu} + i A_{\mu}) + i \sigma_4 \Phi \right) \lambda
\nonumber\\[3mm]
&& +
i \bar \psi
\left( \sigma_{\mu} (\partial_{\mu} - i A_{\mu}) - i \sigma_4 \Phi \right) \psi
\Big] \,,
\end{eqnarray}
where $\psi$ and $\lambda$ are the two-component three-dimensional
Dirac spinors which arise upon
reduction of the four-dimensional Dirac spinor $\Psi$. Note that $\lambda$ and $\psi$ have opposite gauge charges: $\lambda$ and $\bar \psi $ are fundamental, while
$\bar \lambda$ and $\psi $ are antifundamental. As usual in Euclidean space, the barred and unbarred variables are independent; they are not related to each other by conjugation.
The potential $V [\Phi] $ which is the sum of the one-loop potential and deformation potential has its minimum located at (\ref{10}) (or (\ref{12})).
The fermion contribution to the effective one-loop potential involves terms such as
${\rm Tr} \, U + {\rm Tr}\, U^{*}$. These terms explicitly break the $Z_N$ center symmetry and slightly shift the position of the eigenvalues of $\langle U \rangle $
from the minimum (\ref{10}).
However, this is a negligible ${\cal O}(g/d_n)$ effect suppressed by a judicious choice of the deformation parameters. Hence, we neglect this effect below.\footnote{If the eigenvalues are separated not equidistantly, yet the separations are nonvanishing for any pair, the gauge symmetry breaking SU$(N) \rightarrow$ U(1)$^{N-1}$ still takes place. In the nonperturbative analysis below, this fact manifests itself as an unequal action (or fugacity) for different types of monopoles. The analysis in this latter case will not be qualitatively different.}
There are $N-1$ distinct U(1)'s in this model, corresponding to $N-1$ distinct electric charges. If we introduce a quark $\Psi $ in the fundamental representation of
SU$(N)$ each component $\Psi_i$ ($i=1, ... , N$) will be characterized by a set
of $N-1$ charges, which we will denote by $\mbox{\boldmath $q$}_{\Psi_i}$,
\beq
\mbox{\boldmath $q$}_{\Psi_i} = g\, \mbox{\boldmath $H$}_{ii}\,,\quad
i=1, ... , N\,,
\eeq
where $\mbox{\boldmath $H$}$ is the set of $N-1$ Cartan generators.
All fundamental fermions but two (one of each type, $\psi$ and $\lambda$) acquire masses due to the gauge symmetry breaking. These masses
are of order $2\pi/L$ and depend on
whether periodic or antiperiodic boundary conditions are imposed.
The fermions that remain massless in perturbation theory are the ones
corresponding to the vanishing (mod $2 \pi$) eigenvalue of the algebra-valued compact
Higgs field $\Phi$, see Eq.~(\ref{12}) (equivalently, $v=1$, see Eq.~(\ref{10})).
Thus, the low-energy effective Lagrangian includes $N-1$ photons and two fermions.
Their interactions (in particular, an induced mass gap)
must arise due to nonperturbative effects.\footnote{\label{f6} It is important to distinguish this theory from the case of the noncompact adjoint Higgs field, which is
the Polyakov model with massless (complex representation) fermions.
Both theories have identical gauge symmetry breaking patterns:
SU$(N) \rightarrow {\rm U}(1)^{N-1}$. In perturbation theory,
both theories reduce (by necessity) to compact QED$_3$ with fermions.
However, it is possible to prove that the latter theory lacks confinement
since photons remain massless nonperturbatively.
This implies that if the symmetries at the cut-off scale
are not specified, the question of confinement in compact QED$_3$
with massless fermions is ambiguous. The issue will be further discussed in a separate publication.}
\subsection{Nonperturbative effects and the low-energy \\
Lagrangian}
Nonperturbatively, there exist topologically stable, semiclassical
field configurations --- instantons-monopoles.
If the adjoint Higgs field were noncompact, there would be $(N-1)$ types
of fundamental monopoles. There is, however, an extra KK monopole which
arises due to the fact that the underlying theory is formulated on a cylinder,
$R_3 \times S_1$, or simply speaking, $\Phi ({\bf x})$ is compact.
The magnetic and topological charges of the (anti)monopoles associated with root
\mbox{\boldmath $\alpha$}_i$ are given in Eq.~(\ref{38pp}).
As follows from the explicit zero mode constructions of Jackiw and Rebbi
\cite{Jackiw:1975fn} and the Callias index theorem \cite{Callias:1977kg}, there are
two fermion zero modes localized on one of the $N$ constituent monopoles.
Van Baal et al. demonstrated \cite{Bruckmann:2003ag, Chernodub:1999wg, GarciaPerez:1999ux, Bruckmann:2007ru} that as the boundary conditions of the fermions vary
in the background with nontrivial holonomy, the zero modes hop from one monopole to the next. With fixed boundary conditions, they are
localized, generally speaking, on a particular monopole.\footnote{
More precisely, the Callias index applies to $R_3$. We need an
index theorem for the Dirac operators in the background of monopoles on $R_3 \times S_1$. Such a generalization of the Callias index theorem was carried out in the work of Nye and Singer \cite{Nye:2000eg}. For a clear-cut lattice realization of the fermion zero modes explicitly showing on which monopole they are localized, see
Ref.~\cite{Bruckmann:2003ag}. }
The above implies that
one of the monopole-induced vertices has two fermion insertions (the one on which
the fermion zero modes are localized), while the other $N-1$ elementary monopoles have no
fermion insertions (at the level $e^{-S_0}$).
The set of the instanton-monopole induced vertices can be summarized as
follows:
\begin{equation}
\left\{ e^{-S_0} e^{i \, \mbox{\boldmath $\alpha$}_1 \mbox{\boldmath $\sigma$}} \,\lambda \psi, \qquad e^{-S_0}e^{i \, \mbox{\boldmath $\alpha$}_j \mbox{\boldmath $\sigma$}} \, , \;\;
j=2, \ldots, N \right\}\,,
\end{equation}
plus complex conjugate for antimonopoles. Thus, the leading
nonperturbatively induced interaction terms in the effective Lagrangian are
\begin{eqnarray}
&& S^{\rm {QCD(F)}^*} = \int_{R_3} \; \Big[\,
\,
\frac{g_3^2}{32\pi^2} (\partial_\mu\mbox{\boldmath $\sigma$} )^2 + \frac{1}{g_3^2}
i \bar \Psi \gamma^{\mu}( \partial_{\mu} + i \mbox{\boldmath $q$}_\Psi \mbox{\boldmath $A$}_{\mu} ) \Psi
\nonumber\\[3mm]
&& + e^{-S_0} \, \Big( \tilde{\mu}\, e^{i \, \mbox{\boldmath $\alpha$}_1 \mbox{\boldmath $\sigma$}} \,\lambda \psi +
\mu\, \sum_{\mbox{\boldmath $\alpha$}_j\in (\Delta^{0}_{\rm aff} - \mbox{\boldmath $\alpha$}_1 )}
e^{i \mbox{\boldmath $\alpha$}_j\mbox{\boldmath $\sigma$} }
+ {\rm H.c.}
\Big) \Big] \,,
\label{Eq:dQCD(F)}
\end{eqnarray}
where $\tilde{\mu}$ is a dimensionless constant. Note the noncanonical normalization of the
bosonic and fermionic terms. This choice for the fermions will ease the derivation of certain four-dimensional physical quantities. It is clearly seen that
in the infrared description of QCD(F)*, we must deal not only with the dual photons,
but also with electrically charged fermions.
The three-dimensional effective Lagrangian respects the symmetries (\ref{Eq:allsymF})
of the microscopic (four-dimensional) theory. In particular, the fermion bilinears such as
$\bar \lambda \lambda$ (allowed by U(1)$_V$ and the Lorentz symmetry of the three-dimensional theory) are noninvariant under parity (see Appendix in
Ref.~\cite{Affleck:1982as})
and, hence, cannot be generated.
On the other hand, $\langle \lambda \psi\rangle \neq 0$
can be and is generated.
One can check that up to order $e^{-2S_0}$, the Lagrangian
(\ref{Eq:dQCD(F)}) includes all possible operators allowed by the symmetries (\ref{Eq:allsymF}).
In the above Lagrangian, all operators are relevant in the renormalization-group sense.
The fugacity has mass dimension $+3$. If the kinetic term for the fermions is canonically
normalized, the covariant photon-fermion interaction and the
instanton-monopole-induced term with the fermion insertion have dimension $+1$.
Which operators will dominate the IR physics?
The answer to this question requires a full renormalization-group analysis of all
couplings. A preliminary investigation (along the lines of Ref.~\cite{hermele-2004-70})
shows that quantum corrections in the running of the couplings are tame and
do not alter the fact that the instanton-monopole vertex terms are the most relevant
in the IR of QCD(F)*.
The $N-1$ linearly independent instanton-monopole vertices render
all the $N-1$ dual photons massive, with masses proportional to $e^{-S_0/2}$. Thus, the dual scalars are pinned at the bottom of the potential
\beq
\mu\,
e^{-S_0} \sum_{j=2}^{N} \cos \mbox{\boldmath $\alpha$}_j \mbox{\boldmath $\sigma$}\,.
\eeq
As a result, the would-be massless fermions will also acquire a mass term
of the type
\begin{equation}
\tilde\mu\, e^{-S_0} \; \lambda \psi \, .
\end{equation}
The fermion mass is proportional to $e^{-S_0}$. Hence it is exponentially smaller than
the dual photon mass $\sim e^{-S_0/2}$.
Note that the fermion mass term is not associated with the spontaneous breaking of chiral symmetry. This
circumstance, as well as the mass hierarchy between the photon and the fermion, is specific to one {\em fundamental} fermion and will change in the case of
the two-index fermions.
Since all $N-1$ dual photons become massive,
a probe quark $Q_i$ of every type $(i=1, \ldots, N)$ will be connected to its antiquark
by a domain line/string with the tension\footnote{This is also similar to the axion domain wall.}
\beq
T\sim g_3\, \mu^{1/2}\, e^{-S_0/2}\,.
\eeq
The string between $Q_1$ and $\overline{Q_1}$ is easily breakable due to
pair production of $\lambda$'s and $\psi$'s.
In other words, the external charge $Q_1$ will be screened by the
dynamical fermions with charge $ \mbox{\boldmath $q$}_{\Psi_1}$.
The strings between $Q_i$ and $\overline{Q_i}$ (with $i=2, \,...\, N$)
can break with an exponentially small probability due to pair creation
of the KK modes of $\Psi_i$.
This amounts, of course, to the conventional statement about
large Wilson loops $C$,
\beqn
&&
\Big \langle \frac{1}{N} {\rm Tr}\, W(C) \Big \rangle \sim \frac{1}{N}
\sum_{i=1}^{N} \Big \langle e^{i \int_C \mbox{\boldmath $H$}_{ii} \mbox{\boldmath $A$} }
\Big \rangle
\nonumber\\[3mm]
&&
= \frac{1}{N} e^{- \kappa P(C) } + \left(1-\frac{1}{N}\right) e^{-T\, {\rm Area} (\Sigma)}
\,,
\eeqn
where $\kappa$ is the coefficient of the perimeter law, $P(C)$ is the perimeter of the loop $C$, and $\Sigma$ is a surface whose boundary is $C$.
\vspace{2mm}
{\bf Remark:} The product of the instanton-monopole-induced vertices is proportional to the Belyavin--Polyakov--Schwarz--Tyupkin (BPST)
four-dimensi\-onal instanton vertex
\cite{BPST},
\begin{eqnarray}
&&
\left(e^{-S_0} e^{i \,\mbox{\boldmath $\alpha$}_1 \mbox{\boldmath $\sigma$}} \lambda \psi \right) \,\,
\prod_{j=2}^{N} \left( e^{-S_0} e^{i \,\mbox{\boldmath $\alpha$}_j \mbox{\boldmath $\sigma$}}
\right)
\nonumber\\[4mm]
&&\sim
e^{- \frac{8 \pi^2}{g^2}}\,\, \bar \Psi( 1+ \gamma_{5} ) \Psi
\,\, \exp \left( {i\sum_{i=1}^{N} \,\mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}} \right)=
e^{- \frac{8 \pi^2}{g^2}} \,\,\bar \Psi( 1+ \gamma_{5} ) \Psi \,.
\label{36}
\end{eqnarray}
This is consistent with the fact that the instanton-monopoles can be viewed as
the BPST instanton constituents.
In Eq.~(\ref{36}) we used the fact that the sum of the $N$ constituent
instanton-monopole actions is in fact
the BPST instanton action, and the sum of the magnetic and topological charges
of the constituent monopoles gives the correct quantum numbers of the BPST $R_4$ instanton,
\begin{eqnarray}
\sum_{i=1}^{N} \left( \int F \,, \,\,\frac{g^2}{32 \pi^2} \, F_{\mu \nu}^a {\widetilde F}^{\mu \nu \, a}\right)_{i} \,\,= (0, 1) \,,
\label{fracins}
\end{eqnarray}
see Eq.~(\ref{38pp}).
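The charge bookkeeping in Eq.~(\ref{fracins}) can be checked numerically: the magnetic charges of the $N$ constituents (proportional to the affine simple roots) sum to zero, while the $N$ topological charges $1/N$ add up to the unit BPST instanton number. A minimal sketch, where the explicit root basis is an assumption of the sketch rather than anything fixed by the text:

```python
import numpy as np

def constituent_charges(N):
    """Magnetic charges (along the Cartan) and topological charges of the
    N instanton-monopole constituents of the BPST instanton in SU(N)."""
    e = np.eye(N)
    roots = [e[i] - e[i + 1] for i in range(N - 1)]  # simple roots alpha_1..alpha_{N-1}
    roots.append(e[N - 1] - e[0])                    # affine (KK) root alpha_N
    topo = [1.0 / N] * N                             # topological charge of each constituent
    return roots, topo

for N in range(2, 8):
    roots, topo = constituent_charges(N)
    assert np.allclose(np.sum(roots, axis=0), 0.0)   # magnetic charges sum to zero
    assert abs(sum(topo) - 1.0) < 1e-12              # topological charges sum to 1
```

The affine (KK) root is minus the sum of the simple roots, which is what forces the total magnetic charge of the $N$ constituents to vanish.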
\subsection{Bifermion condensate}
As stated earlier, one-flavor QCD formulated on $R_4$ has no chiral symmetry whatsoever. The axial anomaly reduces the classical U(1)$_A$ symmetry to $Z_2$. A bifermion condensate exists and breaks no chiral symmetry.
We can evaluate the bifermion condensate in QCD(F)* in the small $r(S_1)$ regime. At large $r(S_1)$ (strong coupling) we know, from volume independence, that the condensate must approach a value independent of the radius.
Let $b_0$ denote the leading coefficient of the $\beta$ function divided by $N$,
\begin{equation}
b_0= \frac{1}{N} \left.\left(\frac{11N}{3} - \frac{2N_f}{3}\right)\right|_{N_f=1} = \frac{11}{3} - \frac{2}{3N}\,.
\end{equation}
At weak coupling, $ L \Lambda \ll 1$, the bifermion
condensate in QCD(F)* receives its dominant contribution from the
instanton-monopole with the fermion zero modes insertion,
the first term in the second line in Eq.~(\ref{Eq:dQCD(F)}). The condensate
is proportional to
\beq
\langle \lambda \psi\rangle\sim
e^{-S_0} \sim e^{- \frac{8 \pi^2 }{g^2N}}\,.
\eeq
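The $L$ dependence of this estimate follows from the one-loop running of the coupling. As a cross-check (a standard matching at the scale $\mu \sim 1/L$, spelled out here for completeness):

```latex
% One-loop: 8\pi^2/g^2(\mu) = N b_0 \ln(\mu/\Lambda), evaluated at \mu \sim 1/L
e^{-S_0} \,=\, e^{-\frac{8 \pi^2}{g^2 N}} \,=\, (\Lambda L)^{b_0}\,,
\qquad
\langle \lambda \psi \rangle \,\sim\, \frac{1}{L^3}\, (\Lambda L)^{b_0}
\,=\, \Lambda^3\, (\Lambda L)^{b_0 - 3}\,,
```

with $b_0 - 3 = \frac{2}{3}\left(1 - \frac{1}{N}\right)$, which is precisely the exponent in the weak-coupling line of the formula for $\langle \bar\Psi \Psi \rangle$.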
Above the scale $L \Lambda \sim 1$
we expect the bifermion condensate to be $L$-independent and to saturate its value on $R_4$,
\begin{eqnarray}
\langle \bar \Psi \Psi \rangle \sim \left\{ \begin{array}{ll}
\Lambda^3 (\Lambda L)^{b_0-3} = \Lambda^3 (\Lambda L )^{(2/3) (1-N^{-1})} \,,& \quad
L \Lambda \ll 1 \,, \\[3mm]
\Lambda^3 \Big (1+ {\cal O} (\frac{1}{\Lambda L} ) \Big)\,, & \quad
L \Lambda \gsim 1\,.
\end{array} \right.
\end{eqnarray}
The above formula is testable on lattices.
It is natural to believe the saturation scale is associated with the transition from weak to strong coupling and restoration of the spontaneously broken gauge symmetry
U$(1)^{N-1}\rightarrow {\rm SU}(N)$. This is the regime where
the theory passes from the Abelian to non-Abelian confinement. The effective
theory (\ref{Eq:dQCD(F)}), which is only valid at $L\Lambda\ll 1$, loses
its validity when this parameter becomes of order one. Nonetheless, we do
not expect phase transitions (or rapid crossovers)
in the parameter $L \Lambda$. We expect physics of the two regimes to be continuously connected.
It would be immensely useful to study this passage on lattices.
In the strong coupling regime, the volume dependent factors enter in
observables only via subleading ${\cal O}(1/(L \Lambda))$ terms.
\section{QCD with one bifundamental fermion}
\label{s4}
Consider orbifold QCD, a gauge theory with
the SU$(N)_1\times {\rm SU}(N)_2$ gauge group, and one bifundamental Dirac fermion, defined on $R_3 \times S_1$,
\begin{equation}
S^{\rm QCD(BF)}= \int_{R_3 \times S_1} \frac{1}{g^2}\, {\rm Tr}\, \left[ \frac{1}{2} F_{1, MN}^2 (x) +
\frac{1}{2} F_{2, MN}^2 (x) +
i \bar \Psi {\rlap{\raise 1pt \hbox{$\>/$}}D} \Psi \right]\,,
\label{eq:QCDBF}
\end{equation}
where $$ D_M \Psi= \partial_M \Psi + i A_{1,M} \Psi - i \Psi A_{2, M}\,.$$
The theory possesses a U(1$)_V \times (Z_{2N})_A \times (Z_2)_I$ symmetry
which acts on the elementary fields as
\begin{eqnarray}
&&U(1)_V: \; \; \; \lambda \rightarrow e^{i \alpha} \lambda, \;\; \psi \rightarrow e^{-i \alpha} \psi,
\nonumber\\[2mm]
&&(Z_{2N})_A: \; \; \lambda \rightarrow e^{i \frac{2 \pi}{2N}} \lambda, \;\;
\psi \rightarrow e^{i \frac{2 \pi}{2N}} \psi,
\nonumber\\[2mm]
&&(Z_2)_I: \;\; \; \;\; \lambda \leftrightarrow \psi, \; \; A_{\mu, 1} \ \leftrightarrow A_{\mu, 2} \,.
\label{Eq:symorb}
\end{eqnarray}
The $(Z_{2N})_A$ symmetry is the anomaly-free subgroup of the axial
U(1)$_A $. It is a folklore statement that with sufficiently large $r(S_1)$,
the chiral symmetry is broken down to $Z_2 $ by the formation of the bifermion condensate,
\beq
\langle \bar \Psi \Psi \rangle = 4N \Lambda^3 \cos\left({\frac{2 \pi k}{N}} \right)\,,
\qquad k=0,\,1,\, ...\, N-1\,,
\eeq
marking $N$ isolated vacua in the same manner as in ${\cal N}=1$ SYM theory.
QCD(BF) on $R_4$ is believed to confine in the same way as ${\cal N}=1$ SYM
theory, to possess a mass gap, and to have $N$ isolated vacua. We would like to shed some light on these issues by studying
QCD(BF)* with small $r(S_1)$.
\subsection{Deformed orbifold QCD}
On $S_1 \times R_3$ we can deform the original QCD(BF) theory,
\begin{eqnarray}
S&=& \int_{R_3} \; \frac{L}{g^2} {\rm Tr}\, \Big[ \frac{1}{2} F_{1, \mu \nu}^2 + \frac{1}{2} F_{2, \mu \nu}^2 + (D_{\mu} \Phi_1)^2 + (D_{\mu} \Phi_2)^2
+ g^2 V [\Phi_1,\Phi_2] \qquad
\nonumber\\[3mm]
&& + i \bar \lambda
\Big( \sigma_{\mu} (\partial_{\mu} \lambda + i A_{1, \mu} \lambda - i \lambda A_{2, \mu}) + i \sigma_4 (\Phi_1 \lambda - \lambda \Phi_{2}) \Big)
\nonumber\\[3mm]
&&
+ i \bar \psi
\Big( \sigma_{\mu} (\partial_{\mu} \psi - i A_{1, \mu} \psi + i \psi A_{2, \mu}) - i \sigma_4
(\Phi_1 \psi - \psi \Phi_{2}) \Big)
\Big] \,,
\end{eqnarray}
by adding double-trace terms (\ref{fourma}) in such a way that the center
symmetry is not broken in the vacuum.
The center symmetry stability at weak coupling implies that the vacuum of the theory is located at
\beqn
L \langle \Phi_1\rangle = L \langle \Phi_2 \rangle =&& {\rm diag} \left( -\frac{2\pi [N/2]}{N},\,\, -\frac{2\pi ([N/2]-1)}{N},\, \ldots,\,
\frac{2\pi [N/2]}{N} \right) \,, \nonumber\\[2mm]
&&({\rm mod } \; 2 \pi)\,,
\label{Eq:pattern}
\eeqn
cf. Eq.~(\ref{12}).
Consequently, in the weak coupling regime, the gauge symmetry is broken,
\begin{equation}
[{\rm SU}(N)]_1 \times [{\rm SU}(N)]_2 \longrightarrow [{\rm U}(1)^{N-1}]_1 \times [{\rm U}(1)^{N-1}]_2\,.
\end{equation}
In perturbation theory $2(N-1)$ photons remain massless while
all off-diagonal gauge fields acquire masses in the range $\left[\frac{2 \pi}{NL},
\frac{2 \pi}{L} \right]$. The three-dimensional mass terms
of the bifundamental fermions are determined by
$$
\sum_{i,k=1}^N\, (a_i^1 -a_k^2) \overline{ \Psi^k_i} \gamma_4 \Psi_i^k
$$
where $a_k^1, a_k^2$ are the eigenvalues of
$\Phi_1$ and $\Phi_2$, see Eq.~(\ref{Eq:pattern}).
The diagonal components of the bifundamental fermions
$$
\left(\lambda^i_k\,,\,\,\,\psi^k_i
\right)_{i=k}
$$
remain massless to all orders in perturbation theory;
we will refer to them as $\lambda_i,\,\,\psi_i$ ($i=1, ..., N$).
Other components get masses $\sim 2\pi (i-k)/(NL) $, and decouple in the low-energy limit,
and so do the $W$ bosons.
The bifundamental fermions are electrically charged under the unbroken
$ [{\rm U}(1)^{N-1}]_1 \times [{\rm U}(1)^{N-1}]_2$ in a correlated fashion.
While in Sect.~\ref{FF} the electric charges of each fermion were
characterized by a single $(N-1)$-dimensional vector
$\mbox{\boldmath $q$}_{\Psi_i}$, now they are characterized by the concatenation of two such
$(N-1)$-dimensional electric charge vectors,
\beq
\mbox{\boldmath $q$}_{\lambda_i} = g\, ( + \mbox{\boldmath $H$}_{ii}, - \mbox{\boldmath $H$}_{ii}
) \,,\quad \mbox{\boldmath $q$}_{\psi_i} = g\, ( - \mbox{\boldmath $H$}_{ii}, + \mbox{\boldmath $H$}_{ii}
) \,,\quad
i=1, \ldots , N\,.
\eeq
Thus, the low-energy effective Lagrangian in perturbation theory is
\begin{eqnarray}
&&
S^{\rm pert\,\, th}= \int_{R_3} \; \frac{1}{g_3^2}\, \Big[ \sum_{a=1}^{N-1}\,
\Big( \frac{1}{4} F^{a, 2}_{1, \mu \nu} + \frac{1}{4} F^{a, 2}_{2, \mu \nu} \Big)
\nonumber\\[3mm]
&&+
\sum_{i=1}^{N} i \bar \Psi_i \gamma_{\mu} \Big( \partial_{\mu} + i
\mbox{\boldmath $H$}_{ii} \mbox{\boldmath $A$}_{\mu}^{1} - i
\mbox{\boldmath $H$}_{ii} \mbox{\boldmath $A$}_{\mu}^{2}
\Big)
\Psi_i
\Big] \,.
\end{eqnarray}
The mass gap must arise due to nonperturbative effects, as in Sect.~\ref{FF}. We
will identify and classify nonperturbative effects induced by topologically
nontrivial field configurations momentarily.
\subsection{Nonperturbative low-energy effective Lagrangian}
Nonperturbatively, the gauge symmetry breaking pattern (\ref{Eq:pattern}) implies the
existence of $N$ types of instantons-monopoles associated with each gauge group.
The magnetic and topological charges of these objects are
\begin{eqnarray}
\left( \int_1 F, \int_1 \frac{g^2}{32 \pi^2} \, F^a {\widetilde F}^a\,; \, \int_2 F, \int_2 \frac{g^2}{32 \pi^2} \, F^a {\widetilde F}^{a} \right) =
\left\{ \begin{array}{ll}
\left( \pm \frac{4 \pi}{g}
\mbox{\boldmath $\alpha$}_i\,, \pm \frac{1}{N}, 0, 0\right) \,,\\[4mm]
\left( 0, 0, \pm \frac{4 \pi}{g}
\mbox{\boldmath $\alpha$}_i\,, \pm \frac{1}{N}\right) \,.
\end{array}
\right.
\end{eqnarray}
Consequently, each monopole generates two fermion zero modes, and
the instanton-monopole vertices are
\begin{eqnarray}
&&
{\cal M}_{i}^{1} : ( + \frac{4\pi}{g} \mbox{\boldmath $\alpha$}_i\,, + \frac{1}{N}, 0, 0) : \;\; e^{+i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_1 } ({ \lambda_i \psi_i +
\lambda_{i+1} \psi_{i+1} } )\,, \nonumber\\[3mm]
&&\overline {\cal M}_{i}^{1} : ( -\frac{4\pi}{g} \mbox{\boldmath $\alpha$}_i\,, - \frac{1}{N}, 0, 0) :\; \;\, e^{-i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_1 }
({\bar \lambda_i \bar \psi_i +
\bar \lambda_{i+1} \bar \psi_{i+1} } ) \,,
\nonumber\\[3mm]
&&{\cal M}_{i}^{2} : ( 0, 0, +\frac{4\pi}{g} \mbox{\boldmath $\alpha$}_i\,, + \frac{1}{N} ) :\; \;\,\, e^{+i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_2 }
({ \lambda_{i}
\psi_{i} + \lambda_{i+1} \psi_{i+1} } )\,,
\nonumber\\[3mm]
&&\overline {\cal M}_{i}^{2}: ( 0, 0, -\frac{4\pi}{g} \mbox{\boldmath $\alpha$}_i\,, - \frac{1}{N}) :\; \;\,\,
e^{- i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_2 }
({ \bar \lambda_{i} \bar \psi_{i} +
\bar \lambda_{i+1} \bar \psi_{i+1} } ) \,,
\end{eqnarray}
where $\mbox{\boldmath $\sigma$}_1$ is the set of dual photons for
$[{\rm U}(1)^{N-1}]_1$ while $\mbox{\boldmath $\sigma$}_2$
is the set of dual photons for
$[{\rm U}(1)^{N-1}]_2$.
In full analogy with
the SYM theory, the $2N$ fermion zero modes of the BPST instanton
split into $N$ pairs: each instanton-monopole supports two fermion zero modes. This is a natural consequence of the Callias index theorem. (The same conclusion was also reached by Tong \cite{Tong:2002vp}).
As a result, the instanton-monopole contributions give rise to
the following terms in the effective Lagrangian:
\beqn
&& \Delta L^{\rm QCD(BF)*}=
{\rm const.} \times\; g^{-6} e^{ -S_{0}}
\sum_{\alpha_{i} \in \Delta_{\rm aff}^{0}}
\Big( \left(e^{i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_1 } + e^{i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_2 }\right)
\nonumber\\[3mm]
&&\times
( \lambda_i \psi_i + \lambda_{i+1} \psi_{i+1} )
+ {\rm H.c.}
\Big)\,.
\label{51}
\eeqn
At the level $e^{-S_0}$ the
instanton-monopole effects in QCD(BF)* cannot provide mass terms for the dual photons. This situation is completely analogous to that in QCD(Adj)* where all
instanton-monopoles have fermion zero modes and, hence, are unable to contribute to the bosonic potential for the dual photons \mbox{\boldmath $\sigma$}$_1$
and \mbox{\boldmath $\sigma$}$_2$.
The situation drastically changes at order $e^{-2S_0}$. There are nontrivial effects which render the long-distance three-dimensional
fields massive, implying confinement. An easy way to see that this is the case
is to examine the symmetries of the theory.
Since U(1)$_V \times (Z_{2N})_A \times (Z_2)_I$ is the symmetry
of the microscopic theory,
it must be manifest in the low-energy effective theory in three dimensions.
The invariance of the instanton-monopole vertex under U(1)$_V$ and $(Z_2)_I$ is manifest. At the same time, the $(Z_{2N})_A$
invariance requires combining the axial chiral symmetry
with the discrete shift symmetry of the dual photon,
\beqn
(Z_{2N})_A: \; \;\;\; &&\lambda \psi \rightarrow e^{i \frac{2 \pi}{N}} \lambda \psi, \nonumber\\[3mm]
&&
\mbox{\boldmath $\sigma$}_{1,2} \rightarrow \mbox{\boldmath $\sigma$}_{1,2}
- \frac{2 \pi}{N} \mbox{\boldmath $\rho$} \qquad
\label{Eq:symorb2}
\eeqn
where $ \mbox{\boldmath $\rho$} $ is the Weyl vector defined by
\beq
\mbox{\boldmath $\rho$} = \sum_{k=1}^{N-1} \mbox{\boldmath $\mu$}_k\,,
\label{dop1}
\eeq
and \mbox{\boldmath $\mu$}$_k$ stand for the $N-1$ fundamental weights
of the associated Lie algebra, defined through the reciprocity relation,
\beq
\frac{2 \mbox{\boldmath $\alpha$}_i \mbox{\boldmath $\mu$}_j }
{ \mbox{\boldmath $\alpha$}_i^{2}}= \mbox{\boldmath $\alpha$}_i \mbox{\boldmath $\mu$}_j = \delta_{ij}\,.
\label{dop2}
\eeq
Using the identities
\begin{equation}
\mbox{\boldmath $\alpha$}_N \mbox{\boldmath $\rho$} = -(N-1) \,, \quad \mbox{\boldmath $\alpha$}_i \mbox{\boldmath $\rho$}= 1\,
, \quad i=1,\, \ldots\, N-1\; ,
\label{iden}
\end{equation}
the vertex operator
\begin{equation}
e^{i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_{1,2} }
\rightarrow e^{i \mbox{\boldmath $\alpha$}_i\,
( \mbox{\boldmath $\sigma$}_{1,2} -\frac{2 \pi}{N}
\mbox{\boldmath $\rho$}) } =
e^{-i \frac{2 \pi}{N} } \;
e^{i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_{1,2}}
\,, \quad i=1,\, \ldots, \, N\,,
\end{equation}
rotates in the direction opposite to that of the fermion bilinear,
by the same amount. Hence, the instanton-monopole-induced vertex
$$(e^{i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_1 } + e^{i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_2 })
( \lambda_i \psi_i + \lambda_{i+1} \psi_{i+1} ) $$
is invariant under the discrete chiral symmetry.
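The identities (\ref{iden}), and hence the phases appearing in the $Z_{2N}$ transformation of the vertex operators, are easy to verify numerically. A minimal sketch, where the explicit basis for the simple roots and fundamental weights is an assumption of the sketch, not taken from the text:

```python
import numpy as np

def weyl_data(N):
    """Simple roots, affine root, and Weyl vector of SU(N) (standard basis)."""
    e = np.eye(N)
    alphas = [e[i] - e[i + 1] for i in range(N - 1)]   # simple roots
    # fundamental weights mu_j, normalized so that alpha_i . mu_j = delta_ij
    mus = [e[: j + 1].sum(axis=0) - (j + 1) / N * np.ones(N) for j in range(N - 1)]
    rho = np.sum(mus, axis=0)                          # Weyl vector rho = sum_k mu_k
    alpha_N = -np.sum(alphas, axis=0)                  # affine root
    return alphas, alpha_N, rho

for N in range(2, 8):
    alphas, alpha_N, rho = weyl_data(N)
    assert all(abs(a @ rho - 1.0) < 1e-12 for a in alphas)  # alpha_i . rho = 1
    assert abs(alpha_N @ rho + (N - 1)) < 1e-12             # alpha_N . rho = -(N-1)
```

Each simple root obeys $\mbox{\boldmath $\alpha$}_i \mbox{\boldmath $\rho$} = 1$ while the affine root gives $-(N-1)$, so every vertex $e^{i \mbox{\boldmath $\alpha$}_i \mbox{\boldmath $\sigma$}}$ acquires one and the same phase $e^{-i 2\pi/N}$ under the shift.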
The discrete shift symmetry (\ref{Eq:symorb2}), as opposed to a continuous shift symmetry, cannot prohibit a mass term for the dual photons. At best, it can
postpone its appearance to higher orders in the $e^{-S_0}$ expansion. Hence, such
a mass term must be, and is, generated.
As in SYM theory, at level $e^{-2S_0}$
there exist magnetically charged bound monopole-antimonopole pairs
with no fermion zero modes. These stable pairs were referred to as magnetic bions in
\cite{Unsal:2007jx}.
In QCD(BF)*, the bions come in a wider variety than in
SYM theory. The analogs of the magnetic bions that appear in SYM
theory are the pairs of the type
${\cal M}_i^{1}$ and $\overline {\cal M}_{i\pm1} ^{1}$ (and $1 \leftrightarrow 2$).
Despite the repulsive Coulomb interaction between these two monopoles, they form bound states due to the fermion exchange between them, with the combined potential
$$
\sim \frac{1}{r} + \log r\,.
$$
The corresponding bound state is stable.
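The balance behind this bound state can be illustrated with a toy minimization: a repulsive $1/r$ term plus a logarithmic attraction always has a minimum at a finite separation, $r_* = A/B$ for $V(r) = A/r + B \ln r$. The coefficients below are placeholders, not the values following from the microscopic theory:

```python
import numpy as np

A, B = 5.0, 1.0                       # placeholder strengths of repulsion/attraction
r = np.linspace(0.5, 50.0, 200_000)
V = A / r + B * np.log(r)             # schematic monopole-monopole potential

r_star = r[np.argmin(V)]              # numerical location of the minimum
assert abs(r_star - A / B) < 1e-2     # analytic: V'(r) = 0  =>  r_* = A/B
```

At weak coupling, a Coulomb coefficient $A \propto 1/g^2$ pushes the minimum to separations parametrically larger than the monopole core, yet the bound state remains of finite size.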
Since the fermion zero modes in QCD(BF)* communicate with the mono\-poles
in both gauge groups, the fermion zero mode exchange also generates logarithmic attractive interactions between the monopoles
${\cal M}_i^{1}$ in the first gauge group and the antimonopoles
$\overline {\cal M}_{i} ^{2}$, $\overline {\cal M}_{i\pm1} ^{2}$ in the second. Note that there is
no Coulomb interaction between these two since the first
instanton-monopole is charged under the [U(1) $^{N-1}]_1 $ gauge subgroup of [U(1)$^{N-1}]_1 \times [{\rm U}(1)^{N-1}]_2 $ while the second is charged under [U(1)$^{N-1}]_2$.
Thus, the stable magnetic bions in QCD(BF)*, their magnetic and topological charges,
and the vertices they generate are
\begin{eqnarray}
&&{\cal B}^1_i : \Big( \frac{4\pi}{g} (\mbox{\boldmath $\alpha$}_i\, - \mbox{\boldmath $\alpha$}_{i-1} ), \; 0, 0, 0 \Big) : \qquad c_1 e^{-2S_0} e^{i ( \mbox{\boldmath $\alpha$}_i\, - \mbox{\boldmath $\alpha$}_{i-1}) \mbox{\boldmath $\sigma$}_1} \nonumber\\[3mm]
&&{\cal B}^2_i: \Big(0, 0, \frac{4 \pi}{g} (\mbox{\boldmath $\alpha$}_i\, - \mbox{\boldmath $\alpha$}_{i-1}), 0 \Big ) : \qquad c_1 e^{-2S_0} e^{i ( \mbox{\boldmath $\alpha$}_i\, - \mbox{\boldmath $\alpha$}_{i-1}) \mbox{\boldmath $\sigma$}_2} \nonumber\\[3mm]
& & {\cal B}^{12}_{i,i} : \Big( \frac{4\pi}{g} \mbox{\boldmath $\alpha$}_i\,, \frac{1}{N}, - \frac{4\pi}{g} \mbox{\boldmath $\alpha$}_{i}, - \frac{1}{N} \Big) : \qquad
c_2 e^{-2S_0} e^{i ( \mbox{\boldmath $\alpha$}_i\,
\mbox{\boldmath $\sigma$}_1 -
\mbox{\boldmath $\alpha$}_{i } \mbox{\boldmath $\sigma$}_2) } \nonumber\\[3mm]
& & {\cal B}^{12}_{i,i-1} : \Big ( \frac{4\pi}{g} \mbox{\boldmath $\alpha$}_i\,, \frac{1}{N}, - \frac{4\pi}{g} \mbox{\boldmath $\alpha$}_{i-1}, - \frac{1}{N} \Big) : c_2 e^{-2S_0} e^{i ( \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_1 -
\mbox{\boldmath $\alpha$}_{i-1 }\mbox{\boldmath $\sigma$}_2) } \nonumber\\[3mm]
& & {\cal B}^{12}_{i, i+1} : \Big ( \frac{4\pi}{g} \mbox{\boldmath $\alpha$}_i\,, \frac{1}{N}, - \frac{4\pi}{g} \mbox{\boldmath $\alpha$}_{i+1}, - \frac{1}{N}\Big ) : c_2 e^{-2S_0} e^{i ( \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_1 -
\mbox{\boldmath $\alpha$}_{i+1 } \mbox{\boldmath $\sigma$}_2) }
\end{eqnarray}
The vertices for antibions (such as $\overline {\cal B}^1_i$) are the complex conjugates of the ones given above. The above bions are stable due to the
attractive fermion pair exchange between their constituents.
Note that the constituents of the bions
${\cal B}^1_i $ and ${\cal B}^2_i $, unlike those of
$ {\cal B}^{12}_{i,i} , {\cal B}^{12}_{i,i+1} , {\cal B}^{12}_{i,i-1}$,
must overcome the Coulomb repulsion to form stable bound states.
Thus, in principle,
there is no reason (based on symmetries or the microscopic theory) for the prefactors of the first two vertices
to be equal to those of the latter three. Therefore, we assume they are not.
As a result, we obtain the bion-induced bosonic potential in QCD(BF)* in the form
\begin{eqnarray}
&& V_{\rm bion} ( \mbox{\boldmath $\sigma$}_1, \mbox{\boldmath $\sigma$}_2 ) =
m_W^3 g^{-6}
e^{-2S_0} \sum_{i=1}^{N} \Big[
c_1 \Big(
e^{i ( \mbox{\boldmath $\alpha$}_i \, - \mbox{\boldmath $\alpha$}_{i-1} \,) \mbox{\boldmath $\sigma$}_1} + e^{i ( \mbox{\boldmath $\alpha$}_i
\, - \mbox{\boldmath $\alpha$}_{i-1} \,) \mbox{\boldmath $\sigma$}_2}
\Big)
\nonumber\\[3mm]
&& + c_2 \Big(
2 e^{i ( \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_1 -
\mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_2) } +
e^{i ( \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_1 -
\mbox{\boldmath $\alpha$}_{i-1}\, \mbox{\boldmath $\sigma$}_2) }
+ e^{i ( \mbox{\boldmath $\alpha$}_i \, \mbox{\boldmath $\sigma$}_1 -
\mbox{\boldmath $\alpha$}_{i+1}\, \mbox{\boldmath $\sigma$}_2) }
\Big)
\Big]
+ {\rm H.c.} \nonumber
\\
\end{eqnarray}
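One can check numerically that $V_{\rm bion}$ is invariant under the $Z_{2N}$ shift (\ref{Eq:symorb2}), including the terms where the cyclic index wraps through the affine root. A minimal sketch; the explicit basis for roots and weights, and the unit coefficients, are illustrative assumptions rather than the paper's conventions:

```python
import numpy as np

def vbion(s1, s2, alphas, c1=1.0, c2=1.0):
    """Schematic bion-induced potential, summed cyclically over the
    affine root system (alphas[-1] is the affine root)."""
    N = len(alphas)
    tot = 0.0 + 0.0j
    for i in range(N):
        a = alphas[i]
        am = alphas[i - 1]            # alpha_{i-1}, cyclic
        ap = alphas[(i + 1) % N]      # alpha_{i+1}, cyclic
        tot += c1 * (np.exp(1j * (a - am) @ s1) + np.exp(1j * (a - am) @ s2))
        tot += c2 * (2 * np.exp(1j * (a @ s1 - a @ s2))
                     + np.exp(1j * (a @ s1 - am @ s2))
                     + np.exp(1j * (a @ s1 - ap @ s2)))
    return (tot + np.conj(tot)).real  # "+ H.c."

# Affine root system of SU(N) and the Weyl vector (standard embedding).
N = 5
e = np.eye(N)
alphas = [e[i] - e[i + 1] for i in range(N - 1)] + [e[N - 1] - e[0]]
mus = [e[: j + 1].sum(axis=0) - (j + 1) / N * np.ones(N) for j in range(N - 1)]
rho = np.sum(mus, axis=0)

rng = np.random.default_rng(0)
s1 = rng.uniform(0.0, 2.0 * np.pi, N)
s2 = rng.uniform(0.0, 2.0 * np.pi, N)
shift = 2.0 * np.pi / N * rho
# Z_{2N} shift of both dual photons leaves the bosonic potential invariant.
assert abs(vbion(s1, s2, alphas) - vbion(s1 - shift, s2 - shift, alphas)) < 1e-9
```

The invariance is exact term by term: every phase acquired under $\mbox{\boldmath $\sigma$}_{1,2} \rightarrow \mbox{\boldmath $\sigma$}_{1,2} - \frac{2\pi}{N} \mbox{\boldmath $\rho$}$ is a multiple of $2\pi$, by virtue of the identities (\ref{iden}).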
In full analogy with the superpotential in SYM* theory,
it is convenient to
define a prepotential in QCD(BF)*. To this end we introduce the function
\begin{equation}
{\mathcal W}( \mbox{\boldmath $\sigma$}_1,
\mbox{\boldmath $\sigma$}_2) = m_W g^{-4} e^{-S_0} \sum_{ \mbox{\boldmath $\alpha$}_i \, \in \Delta_0^{\rm aff}} \left( e^{i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_1 }
+ e^{i \mbox{\boldmath $\alpha$}_i \, \mbox{\boldmath $\sigma$}_2 } \right) \,,
\label{Eq:prepotential}
\end{equation}
which will be referred to as the prepotential.
Note that the prepotential, as well as its derivatives, transform homogeneously under the
$Z_{2N}$ shift symmetry (\ref{Eq:symorb2}),
$$
Z_{2N}: \quad {\mathcal W}( \mbox{\boldmath $\sigma$}_1, \mbox{\boldmath
$\sigma$}_2)
\longrightarrow e^{-i \frac{2 \pi}{N}} {\cal W}( \mbox{\boldmath $\sigma$}_1, \mbox{\boldmath $\sigma$}_2) \,.
$$
Now, it is easy to express the bion-induced potential in terms of the
prepotential in the form which is
manifestly invariant under the $Z_{2N}$ shift and $(Z_2)_I$ interchange symmetries,
\begin{equation}
V( \mbox{\boldmath $\sigma$}_1, \mbox{\boldmath $\sigma$}_2) = g_3^2 \sum_{a=1}^{N-1} \left(\; c_+ \left |\frac{\partial {\cal W}}{\partial \sigma_{1,a}} +
\frac{\partial {\cal W}}{\partial \sigma_{2,a}}
\right|^2 + c_{-} \left|\frac{\partial {\cal W}}{\partial \sigma_{1,a}} -
\frac{\partial {\cal W}}{\partial \sigma_{2,a}}
\right|^2 \; \right)\,.
\label{Eq:potentialBF}
\end{equation}
We are finally ready to present the low-energy effective theory for QCD(BF)*,
\begin{eqnarray}
&&L^{\rm QCD(BF)^*} =
\frac{g_3^2}{32 \pi^2} \left[ (\partial \mbox{\boldmath $\sigma$}_1)^2
+ (\partial \mbox{\boldmath $\sigma$}_2)^2
\right] +
V_{\rm bion} ( \mbox{\boldmath $\sigma$}_1, \mbox{\boldmath $\sigma$}_2 )
\nonumber\\[4mm]
&& + \frac{1}{g_3^2} \sum_{i=1}^{N} i \overline \Psi_i \gamma_{\mu} \Big( \partial_{\mu} +
i \mbox{\boldmath $H$}_{ii} \mbox{\boldmath $A$}_{\mu}^{1} -
i \mbox{\boldmath $H$}_{ii} \mbox{\boldmath $A$}_{\mu}^{2}
\Big)
\Psi_i
\nonumber\\[4mm]
&&
+ c \; g^{-6} e^{ -S_{0}}
\sum_{\alpha_{i} \in \Delta_{\rm aff}^{0}}
\Big(
( e^{+i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_1 } +
e^{+i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_2 })
({ \lambda_i \psi_i +
\lambda_{i+1} \psi_{i+1} } )\, +
{ \rm H.c.}
\Big) \,.
\nonumber\\
\label{Eq:dQCDBF2}
\end{eqnarray}
Like in other QCD-like theories with complex-representation fermions (such as
QCD(F/AS/S)*), but unlike the ones with real-representation fer\-mions (such as
SYM theory or QCD(adj)), we have both the electric and magnetic couplings.
The Lagrangian (\ref{Eq:dQCDBF2}) includes all relevant terms allowed by symmetries up to ${\cal O}(e^{-3S_0})$.
The important question at this stage is which operators
in our effective Lagrangian (\ref{Eq:dQCDBF2}) are most important at large distances in the renormalization-group sense. The fugacity (the coefficient in front of the bion vertices) has dimension $+3$ and is dominant in the IR; the quantum-mechanical corrections are negligible. This suggests that
in the IR the effects produced by magnetically charged bions are the most relevant.
\subsection{Vacuum structure and chiral symmetry realization}
The low-energy effective theory respects all symmetries of the underlying
gauge theory ${\rm U}(1)_V \times (Z_{2N})_A \times (Z_2)_I$ and $C, P, T$.
These symmetries may be spontaneously broken. By studying dynamics of the effective theory we demonstrate that the breaking pattern is
\begin{eqnarray}
{\rm U}(1)_V \times (Z_{2N})_A \times (Z_2)_I \rightarrow {\rm U}(1)_V \times (Z_{2})_A \times (Z_2)_I
\end{eqnarray}
leading to the occurrence of $N$ isolated vacua.
In Eq.~(\ref{Eq:dQCDBF2}) the $Z_{2N}$ chiral symmetry is entangled with the shift symmetry of the dual photon (\ref{Eq:symorb2}), just like in SYM theory. There are $N$ isolated vacua in the $(Z_2)_I$ invariant subspace related to each other by the
action of the $Z_N$ shift symmetry. These vacua are located at
\begin{equation}
\mbox{\boldmath $\sigma$}_1= \mbox{\boldmath $\sigma$}_2 = \left\{ 0, \,\frac{2 \pi}{N}, \, \frac{4 \pi}{N},\, \ldots, \, \frac{2 (N-1)\pi}{N} \right\} \mbox{\boldmath $\rho$}
\end{equation}
in the field space. The choice of a given vacuum spontaneously breaks
the $Z_{N}$ shift symmetry, and, hence, the chiral symmetry.
Let $|\Omega_k\rangle$ denote one of the $N$ vacuum states ($k=1,\, \ldots ,\, N$). Following the techniques of \cite{Davies:1999uw, Davies:2000nw},
we observe that the chiral condensate is proportional to
the monopole-induced term $e^{-S_0}$. The renormalization-group $\beta$ function of QCD(BF)* is identical to that of SYM theory up to ${\cal O}(1/N^2)$ corrections; the leading coefficients coincide exactly. Thus,
\beq
e^{-S_0} \equiv e^{-\frac{8 \pi^2}{g^2N}} = \Lambda^3 L^3 (\Lambda L)^{b_0-3}
\label{eso}
\eeq
where $b_0$ denotes the leading coefficient of the $\beta$ function divided by $N$.
At one-loop order in QCD(BF)*
$$b_0= 3\,.$$
Thus, the chiral condensate in QCD(BF)* is
\begin{eqnarray}
\langle \Omega_k| {\rm Tr} \bar \Psi \Psi | \Omega_k \rangle = 2 N \Lambda^3
e^{i \frac { 2 \pi k}{N}}+ {\rm H.c.}\,.
\label{Eq:condensate}
\end{eqnarray}
There is no $L$ dependence in the condensate in QCD(BF)* at one-loop level, just like in
SYM theory.
\subsection{Mass gap and confinement}
\label{sec:mas}
A small-fluctuation analysis around any of the $N$ minima is sufficient to see that there are no massless modes in the infrared description of QCD(BF)*. The choice of the vacuum breaks the discrete chiral symmetry, rendering all fermions massive.
The bion-induced potential makes all $2(N-1)$ photons massive. This shows that every particle-like excitation must have a finite mass $m \sim e^{-S_0}$. There are no physical states in the mass range $[0, m) $ in the physical Hilbert space of the theory. Since the global $Z_N$ center group symmetry and $(Z_2)_I$ interchange symmetry are unbroken, the physical states can be expressed as the mutual eigenstates of these symmetries. The Fourier transform
\begin{equation}
{\sigma}_{\pm, k} = ( \sigma_{1,k} \pm \sigma_{2,k})
\, \equiv \,
\frac{1}{\sqrt N} \sum_{j=1}^{N} e^{i \frac{2 \pi j k}{N}} \mbox{\boldmath $H$}_{jj} (\mbox{\boldmath $\sigma$}_{1} \pm \mbox{\boldmath $\sigma$}_{2})
\end{equation}
diagonalizes the mass matrix.
The masses of the dual photons are proportional to $\exp (-S_0)$.
More
exactly,\footnote{Powers of $g$ and numerical factors are omitted
here and in similar expressions below.}
\begin{eqnarray}
m_{\sigma_{\pm,k}} = \sqrt {c_{\pm}}\,\, \Lambda (\Lambda L )^2 \left(2 \sin \frac{\pi k }{N}\right)^2,
\qquad \Lambda L \ll 1\,.
\end{eqnarray}
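As a numerical sanity check of this pattern, one can diagonalize the nearest-neighbor (circulant) quadratic form that vertices of the type $e^{i(\mbox{\boldmath \scriptsize$\alpha$}_i - \mbox{\boldmath \scriptsize$\alpha$}_{i-1})\mbox{\boldmath \scriptsize$\sigma$}}$ generate around a minimum. The sketch below assumes a unit overall prefactor and a cyclic lattice of $N$ sites; the $k=0$ zero eigenvalue corresponds to the overall mode that decouples.

```python
import numpy as np

N = 7
# Quadratic fluctuation matrix of a potential built from nearest-neighbor
# vertices on the cyclic root lattice: M = 2*I - P - P^T, with P the cyclic shift.
P = np.roll(np.eye(N), 1, axis=0)
M = 2 * np.eye(N) - P - P.T
eigs = np.sort(np.linalg.eigvalsh(M))
# Expected dual-photon spectrum, up to the overall Lambda*(Lambda*L)^2 prefactor:
expected = np.sort([(2 * np.sin(np.pi * k / N)) ** 2 for k in range(N)])
assert np.allclose(eigs, expected)
```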
Any probe charge one might consider is coupled to a number of $\sigma$
fields. The thickness of the
domain line (string)
attached to the probe charge
is determined by the inverse mass of the lightest $\sigma$ field ($k=1$).
It is worth noting that the string has a substructure corresponding
to the contribution of the next-to-lightest, next-to-next-to-lightest
and so on $\sigma$'s.
The fermion masses are of the same order of magnitude in the same regime, as seen from Eq.~(\ref{51}),
\beq
m_{\Psi_{i}} = c \Lambda (\Lambda L )^2\,.
\eeq
Now we are ready to discuss strings in QCD(BF)* at small $L$.
Let us consider a heavy probe quark $Q^{i_1 ... i_m}_{j_1 ... j_n}$ and its antiquark
$\overline{Q^{i_1 ... i_m}_{j_1 ... j_n}}$ in a color-singlet state
at an exponentially large distance from each other. If $m\neq n$ the string (domain line)
forming between these probe objects is unbreakable.
Light dynamical fermions of the low-energy theory
cannot screen the electric charges of the probe quarks.
However,
if $m=n$ some strings (i.e. those attached to the
probes for which every index $i$ is equal to some $j$)
will break through pair creation of light dynamical fermions.
Assume $|n-m|\equiv k \neq 0$. Then the tensions of these unbreakable
$k$-strings can be found by calculating the tensions of the domain lines
supported by the theory (\ref{Eq:dQCDBF2}). These tensions are of the order of $
\Lambda^2 (\Lambda L)$ in the $\Lambda L \ll 1$ Abelian confinement regime
while at $\Lambda L \gsim 1$, in the non-Abelian confinement regime,
they tend to $\Lambda^2$ times a numerical coefficient.
To the best of our knowledge, this is the first analytic demonstration of $\chi$SB, mass gap generation and linear confinement in QCD(BF)*. This theory exhibits all expected nontrivial features of QCD(BF) on $R_4$.
\section{QCD with one AS fermion}
\label{s5}
Now we will discuss QCD with one antisymmetric Dirac fermion\,\footnote{Discussion of QCD with the symmetric representation fermion is parallel.} on $R_3 \times S_1$.
The theory possesses a U(1)$_V \times Z_{2N-4} $ symmetry, $Z_{2N-4} $
being the anomaly-free subgroup of the axial U(1)$_A $.
The action of the symmetry on the elementary fields is as follows:
\begin{eqnarray}
&&
U(1)_V: \; \; \;\;\;\;\;\; \; \lambda \rightarrow e^{i \alpha} \lambda, \;\; \qquad
\psi \rightarrow e^{-i \alpha} \psi\,, \nonumber\\[3mm]
&&
(Z_{2N-4})_A: \; \;\;\; \lambda \rightarrow e^{i \frac{2 \pi}{2N-4}} \lambda\,,
\;\; \qquad
\psi \rightarrow e^{i \frac{2 \pi}{2N-4}} \psi\, .
\label{Eq:symori}
\end{eqnarray}
It is believed that for sufficiently large $r(S_1)$,
the chiral symmetry is broken down to $Z_2 $ by the
bifermion condensate $\langle \psi \lambda \rangle\neq 0$,
$$
\langle \bar \Psi \Psi \rangle \sim N \Lambda^3 e^{i\frac{2 \pi k}{N-2}}
+{\rm H.c.}
$$
resulting in $N-2$ isolated vacua.
The QCD(AS) theory on $R_4$ must confine the same way as ${\cal N}=1$ SYM
theory and possess a mass gap. Since the discussion is quite similar to the
case of QCD(BF)*, we will be brief.
\subsection{Deformed orientifold QCD}
In the small $r(S_1)$ regime, the gauge symmetry is broken,
SU$(N) \rightarrow {\rm U}(1)^{N-1} $.
Without loss of generality we can take $N=2m+1$.
The case $N=2m$ can be dealt with in a similar manner.
In perturbation theory the massless fields are
$N-1$ diagonal photons and $N-2$ charged fermions. The $N^2-N$
off-diagonal $W$ bosons and
$N^2 - 2N +2$ fermions acquire masses in the range $[\frac{2\pi}{LN}, \frac{2\pi}{L}) $ and decouple from infrared physics.
The AS fermions $\Psi_{ij}$ acquire three-dimensional mass terms given by
$$
\sum_{i,j=1}^N\, (a_i + a_j) \bar \Psi^{[ij]} \gamma_4 \Psi_{[ij]}
$$
where the $a_k$'s are given in Eq.~(\ref{12}). Hence,
$$
m_{ij}= \frac{2 \pi}{LN} \,\big( [i+j ]\; \, {\rm mod}\,\,\, N\big)
\,.
$$
Thus,
the fermion components $\Psi_{i, N-i}$ remain massless to all orders in perturbation theory. Let us label
$$
\Psi_{i, N-i} \equiv \Psi_{i}\,,\quad i=1, \ldots , N-1\,.
$$
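The counting above is easy to verify mechanically: with $m_{ij} \propto (i+j) \bmod N$ and the diagonal excluded by antisymmetry, the massless components are exactly the $\Psi_{i,N-i}$. A minimal check (a sketch; the loop over index pairs is ours, not the text's):

```python
N = 9  # N = 2m + 1, odd, as assumed in the text
# m_ij is proportional to (i + j) mod N; antisymmetry excludes i == j.
massless = [(i, j) for i in range(1, N + 1) for j in range(1, N + 1)
            if i != j and (i + j) % N == 0]
# Exactly the components Psi_{i, N-i}, i = 1, ..., N-1, survive:
assert massless == [(i, N - i) for i in range(1, N) if i != N - i]
```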
The electric charges of these
degrees of freedom under the unbroken gauge group are
\beq
\mbox{\boldmath $q$}_{\Psi_i} = g\, ( \mbox{\boldmath $H$}_{ii} +
\mbox{\boldmath $H$}_{N-i, N-i} ) \,,
\quad i=1, \ldots , N-1\,.
\eeq
Since the fermion is antisymmetric in its indices, we may parameterize the set
of the massless fermions as
\begin{eqnarray}
\Psi = && \left\{ \Psi_1, \ldots, \Psi_{m-1}, \Psi_{m}, \; \; \Psi_{m+1}, \Psi_{m+2}, \ldots, \;\;
\Psi_{2m}\right\}
\nonumber\\[2mm]
=&&
\left\{ \Psi_1, \ldots, \Psi_{m-1}, \Psi_{m}, -\Psi_{m}, -\Psi_{m-1}, \ldots , - \Psi_{1}
\right\}
\,.
\end{eqnarray}
The IR action in perturbation theory is
\begin{eqnarray}
S= \int_{R_3} \; \frac{1}{g_3^2} \Big[ \frac{1}{4} \sum_{a=1}^{N-1} (F^{a}_{\mu\nu})^2
+ 2 \sum_{i=1}^{m} i \bar \Psi_i \gamma_{\mu} \Big( \partial_{\mu} +
i ( \mbox{\boldmath $H$}_{ii} + \mbox{\boldmath $H$}_{N-i,N-i} )
\mbox{\boldmath $A$}_{\mu}
\Big)
\Psi_i
\Big] .
\nonumber\\
\end{eqnarray}
\subsection{Nonperturbative effects}
In QCD(AS)* on small $S_1 \times R_3$ there are $N$ types of
instanton-monopoles because of the pattern of the
gauge symmetry breaking SU$(N) \rightarrow {\rm U}(1)^{N-1} $ via a
compact adjoint Higgs field.
The $2N-4$ fermion zero modes of the BPST $R_4$ instanton split
into $N-2$ pairs of the instanton-monopole zero modes
in a slightly different way than that in SYM* theory and QCD(BF)*. The $N-2$
instanton-monopoles have
two fermion zero modes each, while the remaining two monopoles have no zero
modes. It is useful to present the monopole-instanton vertices in QCD(AS)* due to
a nontrivial structure of their zero modes,
\begin{eqnarray}
&& {\cal M}_{1}= e^{ -S_{0}} e^{i \mbox{\boldmath $\alpha$} _1 \mbox{\boldmath
$\sigma$} } \;
( \lambda_1 \psi_1 + \lambda_2 \psi_2) \,,\nonumber\\[3mm]
&& {\cal M}_{2}= e^{ -S_{0}} e^{i \mbox{\boldmath $\alpha$}_2 \mbox{\boldmath
$\sigma$} } \; ( \lambda_2 \psi_2 + \lambda_3 \psi_3) \,,\nonumber\\[3mm]
&&\ldots \,,\nonumber\\[3mm]
&&{\cal M}_{m-1}= e^{ -S_{0}} e^{i \mbox{\boldmath $\alpha_{m-1}$} \mbox{\boldmath
$\sigma$} } \; ( \lambda_{m-1} \psi_{m-1} + \lambda_{m} \psi_{m} )
\,,\nonumber\\[3mm]
&& {\cal M}_{m}= e^{ -S_{0}} e^{i \mbox{\boldmath $\alpha$}_{m} \mbox{\boldmath
$\sigma$} } \;
( 2 \lambda_{m} \psi_{m} )
\,,\nonumber\\[3mm]
&& {\cal M}_{m+1}=
e^{ -S_{0}} e^{i \mbox{\boldmath $\alpha$}_{m+1} \mbox{\boldmath
$\sigma$} } \;
( \lambda_{m} \psi_{m} + \lambda_{m-1} \psi_{m-1} )
\,,\nonumber\\[3mm]
&&\ldots \,,\nonumber\\[3mm]
&& {\cal M}_{2m-2}= e^{ -S_{0}} e^{i \mbox{\boldmath $\alpha$}_{2m-2} \mbox{\boldmath
$\sigma$} } \;
( \lambda_3 \psi_3 + \lambda_2 \psi_2)
\,,\nonumber\\[3mm]
&& {\cal M}_{2m-1}= e^{ -S_{0}} e^{i \mbox{\boldmath $\alpha$}_{2m-1} \mbox{\boldmath
$\sigma$} } \;
( \lambda_2 \psi_2 + \lambda_1 \psi_1)
\,,\nonumber\\[3mm]
&& {\cal M}_{2m}= e^{ -S_{0}} e^{i \mbox{\boldmath $\alpha$}_{2m} \mbox{\boldmath
$\sigma$} } \; \,,
\nonumber\\[3mm]
&& {\cal M}_{2m +1}= e^{ -S_{0}} e^{i \mbox{\boldmath $\alpha$}_{2m+1} \mbox{\boldmath
$\sigma$} } \,.
\end{eqnarray}
Consequently, the contribution to the QCD(AS)* Lagrangian induced by
monopole-instantons takes the form
\begin{equation}
\Delta L \sim \sum_{i=1}^{2m+1} \left( {\cal M}_{i } + \overline {\cal M}_{i }
\right) .
\label{Eq:orientivertex}
\end{equation}
Since $N-2$ of the monopoles carry compulsory fermionic
zero mode insertions, they cannot induce mass terms for all the dual photons
if $N \geq 4$.
As seen from Eq.~(\ref{Eq:orientivertex}), two of the monopole-instantons do contribute
to the bosonic potential, but this is insufficient to render all photons massive for
$N \geq 4$. (At $N=3$, QCD(AS)* and QCD(F)* are the same theories.)
Thus, in order to render all the photons massive, we need to incorporate
effects of order $e^{-2S_0}$, and introduce the magnetic bions.
Before doing so let us show that the underlying symmetries of QCD(AS)* allow mass terms
for all dual photons to be generated.
Since U(1)$_V \times (Z_{2N-4})_A$ is the symmetry of the microscopic theory,
it must be a symmetry of the long distance theory.
The invariance under $U(1)_V$ is manifest.
The invariance under $(Z_{2N-4})_A$ necessitates intertwining the axial chiral symmetry
with a discrete shift symmetry of the dual photon,
\begin{eqnarray}
(Z_{2N-4})_A: \; \;\;\; &&
\lambda \psi \rightarrow e^{i \frac{2 \pi}{N-2}} \lambda \psi
\,,\nonumber\\[3mm]
&& \mbox{\boldmath $\sigma$} \rightarrow \mbox{\boldmath $\sigma$}
- \frac{2 \pi}{N-2} \mbox{\boldmath $\rho$}_{AS} \,,
\label{Eq:AS}
\end{eqnarray}
where
\beq
\mbox{\boldmath $\rho$}_{AS}\equiv \sum_{k=1}^{N-2}
\mbox{\boldmath $ \mu $}_{k}
\label{dop3}
\eeq
and $\mu_{k}$ are the $N-1$ fundamental weights
of the associated Lie algebra. Note that the parameter $\mbox{\boldmath $\rho$}_{AS}$ is not exactly the Weyl vector, which appears
in SYM* theory and QCD(BF)*. Rather, it can be represented as
\begin{equation}
\mbox{\boldmath $\rho$}_{AS} = \mbox{\boldmath $\rho$} - \mbox{\boldmath $ \mu$}_{N-1} \; .
\end{equation}
Using the identities
\begin{equation}
\mbox{\boldmath $\alpha$}_{N-1} \mbox{\boldmath $\rho$}_{AS} = 0\,, \quad \mbox{\boldmath $\alpha$}_N \mbox{\boldmath $\rho$}_{AS} = -(N-2)\,, \quad \mbox{\boldmath $\alpha$}_i \mbox{\boldmath $\rho$}_{AS} = 1\,
, \quad i=1,\, \ldots\, N-2
\end{equation}
we observe that the vertex operators $ e^{i \mbox{\boldmath $\alpha$}_i\,
\mbox{\boldmath $\sigma$} }$ transform under the discrete shift
symmetry
$$
\mbox{\boldmath $\sigma$} \rightarrow
\mbox{\boldmath $\sigma$} - \frac{2 \pi}{N-2} \mbox{\boldmath $\rho$}_{AS} \
$$
as
\beqn
Z_{N-2}: && e^{i \mbox{\boldmath $\alpha$}_{2m} \mbox{\boldmath
$\sigma$} }
\rightarrow e^{i \mbox{\boldmath $\alpha$}_{2m} \mbox{\boldmath
$\sigma$} }, \qquad e^{i \mbox{\boldmath $\alpha$}_{2m+1} \mbox{\boldmath
$\sigma$} }
\rightarrow e^{i \mbox{\boldmath $\alpha$}_{2m+1} \mbox{\boldmath
$\sigma$} } \,, \nonumber\\[3mm]
&& e^{i \mbox{\boldmath $\alpha$}_{i} \mbox{\boldmath
$\sigma$} }
\rightarrow e^{- i \frac{2 \pi}{N-2}} \; e^{i \mbox{\boldmath $\alpha$}_{i}
\mbox{\boldmath
$\sigma$} } \,,
\nonumber\\[3mm]
&&
i=1, \, \ldots \, 2m-1\,.
\eeqn
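The identities above, and hence these transformation properties, can be verified with the explicit $\mathbb{R}^N$ realization of the $su(N)$ roots and weights ($\mbox{\boldmath \scriptsize$\alpha$}_i = e_i - e_{i+1}$, affine root $\mbox{\boldmath \scriptsize$\alpha$}_N = e_N - e_1$). The short check below is a sketch under that standard convention, which the text leaves implicit:

```python
from fractions import Fraction

N = 9
def e(i):  # standard basis vector of R^N, 1-indexed
    return [Fraction(int(j == i)) for j in range(1, N + 1)]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

alpha = [sub(e(i), e(i + 1)) for i in range(1, N)]  # simple roots alpha_1..alpha_{N-1}
alpha.append(sub(e(N), e(1)))                       # affine root alpha_N
# fundamental weights mu_k, k = 1..N-1, in the same embedding:
mu = [[Fraction(int(j <= k)) - Fraction(k, N) for j in range(1, N + 1)]
      for k in range(1, N)]
rho_AS = [sum(col) for col in zip(*mu[:N - 2])]     # mu_1 + ... + mu_{N-2}

assert dot(alpha[N - 2], rho_AS) == 0          # alpha_{N-1} . rho_AS = 0
assert dot(alpha[N - 1], rho_AS) == -(N - 2)   # alpha_N . rho_AS = -(N-2)
assert all(dot(alpha[i], rho_AS) == 1 for i in range(N - 2))
```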
Hence, the monopole-induced interactions (\ref{Eq:orientivertex}) are invariant
under $(Z_{2N-4})_A$ given in (\ref{Eq:AS}). The discrete shift symmetry allows mass terms for all dual photons at order $e^{-2S_0}$.
In QCD(AS)*, there are novel topological excitations as is the case in QCD(BF)*.
The zero mode structure of monopole-instantons suggests that other than the magnetic bions common with SYM* theory, there are magnetic bions of a more exotic variety,
\begin{eqnarray}
&&{\cal B}^1_i : \Big( \frac{4\pi}{g}(\mbox{\boldmath $\alpha$}_i\, - \mbox{\boldmath $\alpha$}_{i-1})
\,, \; 0 \Big) : \qquad \;\; \;\; c_1 e^{-2S_0} e^{i ( \mbox{\boldmath $\alpha$}_i \, - \mbox{\boldmath $\alpha$}_{i-1} \,) \mbox{\boldmath $\sigma$}} \,, \nonumber\\[3mm]
&& {\cal B}^{12}_{i,i} : \Big( \frac{4\pi}{g}(\mbox{\boldmath $\alpha$}_i\, - \mbox{\boldmath $\alpha$}_{2m-i})\,, 0 \Big) : \qquad \;\;
c_2 e^{-2S_0} e^{i ( \mbox{\boldmath $\alpha$}_i -
\mbox{\boldmath $\alpha$}_{2m-i} ) \mbox{\boldmath $\sigma$}
} \, ,
\nonumber\\[3mm]
&& {\cal B}^{12}_{i,i-1} : \Big( \frac{4\pi}{g}( \mbox{\boldmath $\alpha$}_i\, - \mbox{\boldmath $\alpha$}_{2m-i+1})\,, 0 \Big) : \quad
c_2 e^{-2S_0} e^{i ( \mbox{\boldmath $\alpha$}_i -
\mbox{\boldmath $\alpha$}_{2m-i+1} ) \mbox{\boldmath $\sigma$}
} \, ,
\nonumber\\[3mm]
&& {\cal B}^{12}_{i, i+1} : \Big( \frac{4\pi}{g}(\mbox{\boldmath $\alpha$}_i\,
- \mbox{\boldmath $\alpha$}_{2m-i-1})\,, 0 \Big) : \quad
c_2 e^{-2S_0} e^{i ( \mbox{\boldmath $\alpha$}_i -
\mbox{\boldmath $\alpha$}_{2m-i-1} ) \mbox{\boldmath $\sigma$}
} \, .
\end{eqnarray}
Here in the first line summation runs over $ i=1,\,\ldots, \, 2m-1$
while in the second, third and fourth lines over $i= 1,\, \ldots,\, m-1$.
The pairing of the constituent monopoles follows from the structure of the fermion zero modes.
The magnetic bion ${\cal B}^1_i$ is held together by the attractive fermion-pair exchange, which overcomes
the Coulomb repulsion between its constituents. The constituents of the latter bions
${\cal B}^{12}_{i,i}$ and ${\cal B}^{12}_{i,i\pm1}$
do not interact via the Coulomb law; rather,
they experience only the fermion-pair exchange. Consequently, the combined effect of the magnetic bions (which is of order $e^{-2S_0}$),
\begin{equation}
V_{\rm bion} ( \mbox{\boldmath $\sigma$} ) = m_W^3 g^{-6} \left[ \sum_{i=1}^{2m-1} {\cal B}^1_i +
\sum_{i=1}^{m-1} ( {\cal B}^{12}_{i,i} + {\cal B}^{12}_{i,i+1} + {\cal B}^{12}_{i,i-1} ) \right] + {\rm H.c.}
\end{equation}
and two monopole-instantons
${\mathcal M}_{2m}, {\mathcal M}_{2m+1}$ gives rise to the bosonic potential which renders all $N-1$ dual photons massive, which, in turn, leads to string
(domain line) formation.
Assembling perturbative and nonperturbative effects we get
\begin{eqnarray}
&&L^{\rm QCD(AS)^*} = \frac{g_3^2}{32 \pi^2} (\partial \mbox{\boldmath $\sigma$})^2 +
V_{\rm bion} ( \mbox{\boldmath $\sigma$} ) + \; \sum_{i=2m}^{2m+1} \; ( {\cal M}_{i } + \overline {\cal M}_{i } )
\nonumber\\[3mm]
&&
+ \frac{2}{g_3^2} \sum_{i=1}^{m} \bar \Psi_i \gamma_{\mu} \Big( i \partial_{\mu} +
( \mbox{\boldmath $H$}_{ii} + \mbox{\boldmath $H$}_{N-i,N-i} )
\mbox{\boldmath $A$}_{\mu}
\Big)
\Psi_i
+ \; \sum_{i=1}^{2m-1} \; ( {\cal M}_{i } + \overline {\cal M}_{i } ) \,.
\nonumber\\
\label{Eq:dQCD2}
\end{eqnarray}
In QCD(F/BF)* we had both the electric couplings and the monopole- and bion-induced
magnetic interactions. By the same token, in QCD(AS)* interactions of both the
electric and magnetic type are present. (This is unlike SYM* theory.)
The monopole- and bion-induced effects are dominant.
In the effective low-energy theory (\ref{Eq:dQCD2}), the $(Z_{2N-4})_A$ chiral symmetry
is entangled with the shift symmetry of the dual photon. Examination of the
bosonic potential in QCD(AS)* reveals $N-2$ gauge inequivalent isolated vacua located at
\begin{equation}
\mbox{\boldmath $\sigma$}
= \left\{ 0, \, \frac{2 \pi}{N-2}, \, \frac{4 \pi}{N-2}, \,\ldots, \, \frac{2 (N-3)\pi}{N-2} \right\}
\mbox{\boldmath $\rho$}_{AS}\,.
\end{equation}
As usual, we label these $N-2$ vacuum states by $|\Omega_k\rangle$ $(k=1, \ldots, N-2)$. Choosing a vacuum, we spontaneously break the $Z_{N-2}$ symmetry.
The chiral condensate in the vacuum $|\Omega_k\rangle $ can be calculated along the
same lines as in QCD(BF)*,
\begin{eqnarray}
\langle \Omega_k | {\rm Tr} \bar \Psi \Psi | \Omega_k \rangle
=2(N-2) \left\{ \begin{array}{ll}
\Lambda^3
(\Lambda L)^{\frac{4}{3N}} \,, & \; \Lambda L \ll1\,, \\[3mm]
\Lambda^3 \,, & \; \Lambda L \gsim 1\,, \\
\end{array}
\!\right\}
\cos\!\left({\frac{2 \pi k}{N-2}} \right),
\nonumber\\
\end{eqnarray}
where there is a weak $L$ dependence at small $L$. This follows from the
${\cal O}(1/N)$ difference in $b_0$, the first $\beta$-function coefficient of QCD(AS) and SYM
theories divided by $N$. In QCD(AS)
$$
b_0= 3+ \frac{4}{3N}\,.
$$
{\bf Remark on the Callias and Atiyah--Singer index theorems:}
On $R_4$, the global aspect of the chiral anomaly is
expressed by the Atiyah--Singer index theorem. The BPST instanton is associated with $2h$ fermionic zero modes, where
$2h=\{ 2, 2N, 2N, 2N-4, 2N+4 \} $ for QCD(F/adj/BF/AS/S), respectively.
In QCD(${\mathcal R}$)* at small $r(S_1)$, due to the gauge symmetry breaking, the four-dimensional instanton splits into $N$ monopoles.
In the small $r(S_1)$ (weak coupling) regime, the instanton should be viewed as a composite object, with the magnetic and topological charges as
in Eq.~(\ref{fracins}),
built of $N$ types of elementary monopoles with charges $\frac{4 \pi}{g} (\mbox{\boldmath $\alpha$}_1, \mbox{\boldmath $\alpha$}_2,\, \ldots ,\,
\mbox{\boldmath $\alpha$}_N)$. The $2h$ fermion zero modes split into groups which associate themselves
with the above $N$ monopoles as follows:
\begin{eqnarray}
&&
{\rm QCD(F)}: 2 \qquad \quad\rightarrow \{2, 0, \ldots, 0,0,0\}\, ,
\nonumber\\[2mm]
&&
{\rm SYM }: 2N \qquad\quad \; \; \rightarrow \{2, 2, \ldots, 2, 2,2\}\, ,
\nonumber\\[2mm]
&&
{\rm QCD(BF)}: 2N \quad \;\; \rightarrow \{2, 2, \ldots, 2,2,2\} \, ,
\nonumber\\[2mm]
&&
{\rm QCD(AS)}: 2N-4 \rightarrow \{2, 2, \ldots, 2, 0, 0\} \, ,
\nonumber\\[2mm]
&&
{\rm QCD(S)}: 2N +4 \;\;\; \rightarrow \{2, 2, \ldots, 2, 4,4\} \,.
\end{eqnarray}
The numbers on the right-hand side are the Callias indices for the corresponding monopoles. Strictly speaking, the Callias index theorem is formulated for
the Yang--Mills + noncompact adjoint Higgs system
on ${\mathbb R}^3$ \cite{Callias:1977kg}. Its generalization to ${\mathbb R}^3 \times S^1$ is
carried out by Nye and Singer \cite{Nye:2000eg}.
To study the index theorems we need to find the kernels
of the Dirac operators ${\rlap{\raise 1pt \hbox{$\>/$}}D}$ and ${\rlap{\raise 1pt \hbox{$\>/$}}D}^{\dagger}$ in the background of the appropriate topological excitation.
The kernel is the set of zero eigenstates of the Dirac operator.
The difference of the dimensions of the kernels gives the number of zero modes attached to a given topological excitation. Thus, we observe the following
relation between the Atiyah--Singer index ${\cal I}_{\rm inst}$ and the Callias
index ${\cal I}_{\mbox{\boldmath $\alpha$}_i }$,
\beq
{\mathcal I}_{\rm inst}=\sum_{\mbox{\boldmath $\alpha$}_i \in
\Delta_{\rm aff}^{0}} {\mathcal I}_{\rm \mbox{\boldmath $\alpha$}_i}\, ,
\eeq
or
\beq
\rule{0mm}{6mm}
\dim \ker {\rlap{\raise 1pt \hbox{$\>/$}}D}_{\rm inst} - \dim \ker {\rlap{\raise 1pt \hbox{$\>/$}}D}^{\dagger}_{\rm inst} =
\sum_{\mbox{\boldmath $\alpha$}_i \in
\Delta_{\rm aff}^{0}} \left( \dim \ker {\rlap{\raise 1pt \hbox{$\>/$}}D}_{ \mbox{\boldmath $\alpha$}_i } - \dim \ker {\rlap{\raise 1pt \hbox{$\>/$}}D}^{\dagger}_{ \mbox{\boldmath $\alpha$}_i} \right) \,.
\eeq
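This counting can be verified mechanically. The snippet below transcribes the splittings listed above and checks, for each representation, that the Callias indices of the $N$ constituent monopoles add up to the Atiyah--Singer index $2h$ (a bookkeeping check only; the dictionary layout is ours):

```python
def check(N):
    # {representation: (Atiyah-Singer index 2h, Callias indices of the N monopoles)}
    splittings = {
        "F":   (2,         [2] + [0] * (N - 1)),
        "adj": (2 * N,     [2] * N),
        "BF":  (2 * N,     [2] * N),
        "AS":  (2 * N - 4, [2] * (N - 2) + [0, 0]),
        "S":   (2 * N + 4, [2] * (N - 2) + [4, 4]),
    }
    for rep, (two_h, callias) in splittings.items():
        assert len(callias) == N and sum(callias) == two_h, rep

for N in range(3, 12):
    check(N)
```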
\section{\boldmath{$\theta$} dependence}
\label{s6}
There is one more interesting aspect of the theory which has not yet been discussed,
namely, {$\theta$} dependence. It is well-known that
in pure Yang--Mills theory on $R_4$ physical quantities, e.g. string tensions,
do depend on $\theta$, and physical periodicity in $\theta$ is $2\pi$.
Introduction of one massless quark in representation ${\mathcal R}$
eliminates {$\theta$} dependence of physical quantities
since one can eliminate the $\theta$ term
through an appropriate chiral rotation of the fermion field, as a result of the chiral anomaly.
This does not mean that various order parameters, e.g. the bifermion condensate,
are $\theta$ independent. If a small fermion mass term is added,
physical quantities acquire {$\theta$} dependence; all {$\theta$}-dependent
effects are proportional to the fermion mass $m$.
Let us ask ourselves what happens on $R_3\times S_1$, in deformed
theories. First, let us consider pure Yang--Mills, assuming that
$\theta\neq 0$. Then the instanton-monopole induced vertices at level $e^{-S_0}$
are
\beq
{\mathcal L} = e^{-S_0}\sum_{j=1}^N
\,\mu_j\,
e^{i \, \mbox{\boldmath $\alpha$}_j \mbox{\boldmath $\sigma$}+i\theta/N}
+{\rm H.c.}\,.
\label{thone1}
\eeq
By globally shifting
\begin{equation}
\mbox{\boldmath $\sigma$} \rightarrow \mbox{\boldmath $\sigma$} - \frac{\theta}{N}
\mbox{\boldmath $\rho$}
\label{shift}
\end{equation}
where $ \mbox{\boldmath $\rho$}$ is the Weyl vector, and using the identities (\ref{iden}), we can rewrite the instanton-monopole vertices in the form
\beq
{\mathcal L} = e^{-S_0}\sum_{j=1}^{N-1}
\,\mu_j\,
e^{i \, \mbox{\boldmath $\alpha$}_j \mbox{\boldmath $\sigma$}}
+ \mu_N e^{-S_0} e^{i \, \mbox{\boldmath $\alpha$}_N \mbox{\boldmath $\sigma$} + i \theta}
+{\rm H.c.}\,,
\label{thone2}
\eeq
where the $2 \pi$ periodicity is more transparent. In both Eqs.~(\ref{thone1}) and (\ref{thone2}) the vacuum angle dependence is explicit.
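The bookkeeping behind the shift (\ref{shift}) can be checked numerically. The sketch below uses the standard $\mathbb{R}^N$ embedding of the $su(N)$ simple roots, affine root and fundamental weights (a convention assumed by us, not spelled out in the text) and confirms that $\theta$ disappears from the first $N-1$ vertices and reappears, undivided, on the $\mbox{\boldmath \scriptsize$\alpha$}_N$ vertex:

```python
from fractions import Fraction

N = 6
def e(i):  # standard basis vector of R^N, 1-indexed
    return [Fraction(int(j == i)) for j in range(1, N + 1)]
def sub(u, v):
    return [a - b for a, b in zip(u, v)]
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

alpha = [sub(e(i), e(i + 1)) for i in range(1, N)] + [sub(e(N), e(1))]
mu = [[Fraction(int(j <= k)) - Fraction(k, N) for j in range(1, N + 1)]
      for k in range(1, N)]
rho = [sum(col) for col in zip(*mu)]  # Weyl vector: sum of all fundamental weights

# Phase of the j-th vertex after sigma -> sigma - (theta/N) rho, in units of theta:
phases = [Fraction(1, N) * (1 - dot(a, rho)) for a in alpha]
assert phases[:N - 1] == [0] * (N - 1)  # theta/N removed from alpha_1..alpha_{N-1}
assert phases[N - 1] == 1               # full theta sits on the alpha_N vertex
```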
Introducing one fundamental fermion and localizing (without loss of generality) its zero mode
on the monopole with charge $\mbox{\boldmath $\alpha$}_N$,
we get, instead of (\ref{thone1}) and (\ref{thone2}),
\beqn
{\mathcal L}
&=&
\tilde \mu_{N}\, e^{-S_0} e^{i \, \mbox{\boldmath $\alpha$}_{N} \mbox{\boldmath $\sigma$}
+i\theta/N } \,\lambda \psi + e^{-S_0}\sum_{j=1}^{N-1}
\,\mu_j\,
e^{i \, \mbox{\boldmath $\alpha$}_j \mbox{\boldmath $\sigma$}+i\theta/N}
+{\rm H.c.}
\nonumber\\[3mm]
&=&
\tilde \mu_N\, e^{-S_0} e^{i \, \mbox{\boldmath $\alpha$}_{N} \mbox{\boldmath $\sigma$}
+i\theta } \,\lambda \psi + e^{-S_0}\sum_{j=1}^{N-1}
\,\mu_j\,
e^{i \, \mbox{\boldmath $\alpha$}_j \mbox{\boldmath $\sigma$} }
+{\rm H.c.}\,,
\eeqn
where we used (\ref{shift}) in passing to the second step.
It is clear in the latter form that
the $\theta$ dependence can be completely absorbed
in the fermion fields,
\beq
\left\{ \psi\,,\,\lambda \right\} \to \left\{ \psi e^{-i\theta/2} \,,\,\lambda e^{-i\theta/2}\right\}\,.
\label{redef}
\eeq
If the fermion mass term $m \psi\lambda$ is added,
the $\theta$ dependence can no longer be absorbed
in the definition of the fermion field. Performing (\ref{redef})
we change the phase of the mass parameter. Correspondingly,
one can expect physical $\theta$ dependent effects
proportional to $m$, such as the vacuum energy density
\beq
{\cal E} (\theta) \sim m \langle \bar \Psi \Psi \rangle \;
\cos\theta \; ,
\eeq
in parallel with
the behavior of the undeformed theory on $R_4$.
Analysis of the $\theta$ dependence in QCD(BF)* is even easier technically.
The magnetic bion vertices have no $\theta$ dependence because
each of them represents the product of a monopole and an antimonopole vertex in
which the $\theta$ dependence cancels. Moreover, the monopole-induced vertices are
\beqn
&& \Delta L^{\rm QCD(BF)^*}=
e^{ -S_{0}}
\sum_{\mbox{\boldmath $\alpha$}_{i} \in \Delta_{\rm aff}^{0}}
\Big( \left(e^{i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_1
+ i \theta/N } +
e^{i \mbox{\boldmath $\alpha$}_i\, \mbox{\boldmath $\sigma$}_2
+ i \theta/N }\right)
\nonumber\\[3mm]
&&\times
( \lambda_i \psi_i + \lambda_{i+1} \psi_{i+1} )
+ {\rm H.c.}
\Big)\,.
\eeqn
The $\theta$ dependence can be readily absorbed
in the fermion fields with the following redefinition:
\beq
\left\{ \psi_i\,,\,\lambda_i \right\} \to
e^{-i\theta/(2N)} \
\left\{ \psi_i \,,\,\lambda_i \right\}\,.
\label{redef2}
\eeq
If we introduce very small mass terms for the fermion fields,
$m \ll \Lambda (\Lambda L)$, then it is obvious that the $\theta$ dependence reappears in the vacuum energy density,
\beq
{\mathcal E} (\theta) = \min_k {\cal E}_k (\theta) \equiv \min_k \left[ m \Lambda^3 \cos \left( \frac{\theta}{N} + \frac{2 \pi k}{N} \right) \right],
\quad k=1,\, \ldots, \, N\,.
\eeq
Turning on a nonvanishing mass term lifts the $N$-fold degeneracy of the vacua $|\Omega_k \rangle$. The vacuum labeled by the integer $k$ turns into a state with energy ${\cal E}_k (\theta)$. Each one of the
$N$ branches is $2 \pi N$ periodic in $\theta$. Consequently, the vacuum energy density
is physically $2 \pi$ periodic,
$$
{\mathcal E}_{\rm vac} (\theta + 2\pi)= {\mathcal E}_{\rm vac} (\theta)
\,.
$$
This is precisely the expected behavior of undeformed QCD(BF) on $R_4$.
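Numerically this is immediate: each branch $\cos((\theta + 2\pi k)/N)$ has period $2\pi N$, while the minimum over the $N$ branches has period $2\pi$. A minimal check (the overall factor $m\Lambda^3$ is dropped):

```python
import math

N = 5
def E(theta):
    # vacuum energy density, up to the overall m * Lambda^3 factor
    return min(math.cos((theta + 2 * math.pi * k) / N) for k in range(N))

for t in (0.0, 0.7, 1.9, 3.3, 5.1):
    assert abs(E(t + 2 * math.pi) - E(t)) < 1e-12   # 2*pi periodic
# while a single branch (fixed k) is not 2*pi periodic:
assert abs(math.cos((0.7 + 2 * math.pi) / N) - math.cos(0.7 / N)) > 1e-3
```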
In the case of QCD(AS)* the overall
picture emerging from our analysis is quite similar
(albeit there are some minor differences subleading in $1/N$)
and also matches the known $\theta$ dependence of QCD(AS) on $R_4$.
\section{Remarks on planar equivalence}
\label{plan}
Similarity of the dynamical aspects of QCD(BF/AS/S)*
(with fermions in the two-index representation) and ${\cal N}=1$ SYM* theory
is evident. Given that they are quantum theories with distinct matter content and distinct microscopic symmetries, this similarity is remarkable.
We explicitly showed that in the small $r(S_1)$ regime, QCD(BF/AS/S)*
confine through the magnetic bion mechanism in the same way as ${\cal N}=1$ SYM* theory.
Moreover, spontaneous breaking of the discrete chiral symmetries
is similar in these two cases too. The bifermion condensate is saturated by
a monopole-instanton with appropriate fermion zero mode structure.
The calculated mass gaps are quite alike in both cases.
Clearly, our analysis makes it manifest that solvability of ${\cal N}=1$ SYM*
theory at weak coupling is due to the unbroken center symmetry.
Supersymmetry is secondary in this regime.
In fact, an intimate relation between SYM theory and its orientifold-orbifold
daughters exists not only at small $r(S_1)$ but also in the decompactification limit
of large $r(S_1)$. If the number of colors $N\to \infty$,
there is a well defined equivalence between ${\cal N}=1$ SYM and QCD(BF/AS/S)
which goes under the name of planar equivalence
\cite{Armoni:2003gp, Armoni:2004uu, Armoni:2004ub, Kovtun:2003hr, Kovtun:2004bz}.
The necessary conditions for planar equivalence to be valid nonperturbatively
are: (i) the interchange $(Z_2)_I$ symmetry is unbroken in QCD(BF); (ii) the $C$ conjugation symmetry is unbroken in QCD(AS/S). It is generally believed
that these conditions are met \cite{Shifman:2007kt}.
The large $N$ equivalence is a useful tool to translate nonperturbative
data of SYM theory to its daughters (and vice versa) on $R_4$. Planar equivalence
is valid also on $R_3 \times S_1$. The equivalence establishes
an isomorphism on a subspace of the Hilbert space of these theories.
Let us grade the Hilbert space of SYM theory with respect to
$(-1)^F$ where $F$ is the fermion number, as
\beq
{\cal H}^{\rm SYM } = {\cal H}^{\rm SYM +} \oplus {\cal H}^{\rm SYM -}\,.
\eeq
Similarly, the Hilbert spaces of QCD(BF) and QCD(AS/S)
can be graded with respect to the $1\leftrightarrow 2$ interchange symmetry
in the first case and
charge conjugation in the second.
Planar equivalence is an isomorphism between the even subspaces of the Hilbert spaces
\begin{equation}
{\cal H}^{\rm SYM +} \equiv {\cal H}^{\rm {QCD(BF)}+} \equiv
{\cal H}^{\rm {QCD(AS)} +} \,.
\end{equation}
(The full Hilbert spaces are by no means isomorphic.)
If one performs periodic compactifications\,\footnote{In thermal compactification,
only the center symmetry breaks
spontaneously; the interchange symmetry and $C$
invariance remain unbroken \cite{Unsal:2006pj}. Thus, planar equivalence
for orbifold and orientifold daughters remains valid in the high temperature deconfined phase.} of QCD(BF/AS/S) on $R_3 \times S_1$, with small $r(S_1)$, the $1\leftrightarrow 2$ interchange symmetry of QCD(BF)* and $C$ invariance of QCD(AS/S)* do break spontaneously, along with
the spatial center symmetry \cite{Tong:2002vp, Unsal:2006pj}. (For related lattice studies
showing the breaking and restoration of $C$ see \cite{DeGrand:2006qb, Lucini:2007as}.)
Certain order parameters which probe the interchange symmetry and $C$ invariance are topologically nontrivial \cite{Unsal:2007fb}, e.g.
\beq
{\rm Tr} (U_1^k) - {\rm Tr} (U_2^k), \,\,\,\, {\rm QCD(BF)^*} \,\,\,\, \mbox{and}\,\,\,\,
{\rm Tr} (U^k) - {\rm Tr} ( U^{*\,k}) \,\,\,\,{\rm QCD(AS)^*} \,.
\label{order}
\eeq
These operators are charged under the center symmetry and odd under $(Z_2)_I$ and $C$.
In QCD(BF/AS/S)* stabilization of the center symmetry automatically implies vanishing of
the expectation values of the order parameters (\ref{order}).
There are also order parameters which are neutral under the center symmetry, yet charged under $(Z_2)_I$ and $C$. For example, the odd combinations of
the Wilson loops, $W_1(C) -W_2(C)$ or
${\rm Tr}\, F_1^2 - {\rm Tr}\, F_2^2$ in QCD(BF)* and $W(C) -W^*(C)$ in QCD(AS)*, are of this type. The unbroken center symmetry does not restrict the expectation values of such operators. Our dynamical analysis in Sects.~\ref{s4} and \ref{s5} shows that spontaneous breaking of the $(Z_2)_I$ and $C$ symmetries definitely
does not take place at small $r(S_1)$. Arguments why this must be the case also on
$R_4$ are summarized in Ref.~\cite{Shifman:2007kt}.
\section{Conclusions and prospects: \\ Abelian vs.
non-Abelian confinement}
\label{s7}
The aspects of QCD* theories that we studied are valid in the
limit $L \Lambda \ll 1$, where the weak coupling regime sets in.
We presented arguments that one-flavor QCD($\mathcal R$)* theories
are continuously connected to one-flavor QCD($\mathcal R$) on $R_4$.
We demonstrated, through an explicit calculation
at small $r(S_1)$, the existence of a mass gap, linear confinement,
and discrete $\chi$SB. These are indeed the most salient features of QCD-like
theories on $R_4$.
In the small $r(S_1)$ domain, the QCD* theories are characterized
by the fact that the gauge symmetry is Higgsed down to
a maximal Abelian subgroup $U(1)^{N-1}$.
Thus, at small $r(S_1)$ we deal with Abelian confinement, which is expected to give way to non-Abelian confinement in the decompactification
limit.
What happens as we increase
$ L \Lambda $ gradually, all the way to $L\to\infty$? At a scale of order
$ L \Lambda \sim 1$, we lose the separation of scales between the $W$ bosons
and the nonperturbatively gapped photons. Thus, our effective low-energy
description (which includes only light bosonic and fermionic degrees of freedom)
ceases to be valid. At and above $\Lambda\sim 1/ L $ the theory is strongly coupled in the IR, and the full non-Abelian gauge group is operative. Thus, the confinement mechanism in this regime must be non-Abelian.
This situation is completely analogous to the Seiberg--Witten solution \cite{Seiberg:1994rs}
of four-dimensional ${\cal N}=2$ SYM theory exhibiting mass gap and linear confinement upon a $\mu$ deformation breaking
${\cal N}=2$ down to ${\cal N}=1$. If $\mu/\Lambda \ll 1$, the Seiberg--Witten theory
in the IR is in the regime of broken gauge symmetry, i.e.
SU$(N)\to {\rm U}(1)^{N-1}$, where it is solvable. For $\mu/\Lambda \gsim 1$, one loses the separation of scales between the $W$ bosons and nonperturbatively gapped photons.
The full gauge symmetry is restored. In this regime, the low-energy theory approaches
pure ${\cal N}=1$ SYM theory. The confining strings must be non-Abelian. Currently no
controllable analytical approaches allowing one to continue the
Seiberg--Witten solution to the domain $\mu/\Lambda \gg 1$ are known,
and yet there are good reasons to believe that this continuation is smooth.
Conceptually the relation between $\mu$-deformed ${\cal N}=2$ and ${\cal N}=1$ SYM
theories on $R_4$ is parallel to that between one-flavor QCD* on $R_3 \times S_1$ and
QCD on $R_4$. Both theories realize confinement via the following pattern
\begin{eqnarray}
{\rm SU}(N) \; \; \; \stackrel{\rm Higgsing}{\longrightarrow} \; \; \; [{\rm U}(1)]^{N-1}
\; \; \; \stackrel{\rm nonperturbative}{\longrightarrow} {\rm no\,\, massless \,\, modes} \,.
\label{pattern1}
\end{eqnarray}
Existence of an intermediate Abelian gauge theory in the IR is the key to analytical calculability in both cases.
In both cases by tuning the relevant parameter, $\mu/\Lambda$ or $L\Lambda$, respectively,
from small to large values, we can remove the intermediate step of ``Abelianization.''
In this paper we presented a number of arguments in favor
of no phase transitions separating the Abelian and non-Abelian confinement regimes. It is desirable to develop a special technique allowing one to perform ``integrating in'' of the
$W$ bosons (and their partners) gradually. If this task can be achieved, it could provide a direct route to QCD and QCD-like theories on $R_4$.
If we are right and the transition from QCD* to QCD-like theories is smooth,
this smoothness could explain a long-standing puzzle.
The point is that a rather vaguely defined method which goes under the name of the
maximal Abelian projection seems to give sensible results in the lattice calculations.
The reason might be the proximity of the Abelian confinement regime
we discussed in the body of this paper.
The status of QCD-like theories with massless or very light fermions with exact
or approximate chiral symmetry has significantly improved in recent
years~\cite{Kaplan:1992bt,Narayanan:1994gw}. It is highly
desirable to implement QCD* theories on lattices, and then carry out an in-depth
study of the transition from Abelian to non-Abelian confinement.
\section*{Acknowledgments}
M.\"U. thanks E. Silverstein for discussions on double trace deformations, and
D. T. Son for discussions on Polyakov's model. M.S. is grateful to A. Yung for endless discussions of Abelian vs. non-Abelian confinement. We would like
to thank Adi Armoni for stimulating questions and fruitful correspondence.
The work of M.S. is supported in part by DOE grant DE-FG02-94ER408.
The work of M.\"U. is supported by the U.S.\ Department of Energy Grant DE-AC02-76SF00515.
\newpage
\section*{Appendix: Center stabilization}
\label{app}
\addcontentsline{toc}{section}{Appendix: Center stabilization}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
Let $U({\bf x})$ be the path-ordered
holonomy of the Wilson line wrapping $S_1$ at the point
${\bf x} \in R_3$. It is known that for complex representation fermions (F/AS/S/BF), the center symmetry is broken at sufficiently small $r(S_1)$ regardless of the fermion spin connections. For adjoint fermions with periodic spin connection,
the spatial center symmetry is not broken at small $r(S_1)$, whereas for antiperiodic (thermal) boundary conditions the temporal center symmetry is broken at sufficiently high temperatures.
An easy way to see this is to evaluate the one-loop Coleman--Weinberg effective potential induced by quantum fluctuations by using the background field method
(e.g. \cite{Gross:1980br,Unsal:2006pj}).
The minimum of the {\em classical} action is achieved at the vanishing value of
the gauge field strength, and constant but arbitrary values of $U({\bf x})$.
Quantum corrections lift the degeneracy.
One can evaluate the one-loop potentials for one-flavor QCD-like theories. In the gauge in which the Polyakov line is represented by a constant and diagonal
matrix one obtains\,\footnote{In the multiflavor generalization (with $N_f$ fermions) one must replace
$a_n \rightarrow a_n N_f.$}
\beq
V_{\rm eff}[U]=
\frac{2}{\pi^2 L^4} \sum_{n=1}^{\infty} \frac{1}{n^4} \, T_n\,,
\eeq
where
\beqn
&& T_n= - |{\rm Tr} \, U^n|^2 + a_n ({\rm Tr}\, U^n + {\rm Tr} \,U^{*\, n}) \,, \qquad {\rm (F) } \,,
\nonumber\\[3mm]
&& T_n=
(-1 + a_n) |{\rm Tr}\, U^n|^2 \,, \qquad \qquad\qquad \qquad{\rm (adj) } \,,
\eeqn
\beqn
&& T_n= \frac{1}{2} (-1 + a_n) |{\rm Tr} \,U_1^n + {\rm Tr} \,U_2^n|^2
\nonumber\\[3mm]
&& + \frac{1}{2}
(-1 - a_n) |{\rm Tr} \,U_1^n - {\rm Tr} \,U_2^n|^2 \,,\qquad\,\,\,\,\,
\qquad {\rm (BF) } \,,
\eeqn
\beqn
&& T_n= \frac{1}{4} (-1 + a_n) |{\rm Tr} \, U^n + {\rm Tr} \,U^{*n}|^2 +
\frac{1}{4} (-1 - a_n) |{\rm Tr} \,U^n - {\rm Tr} \,U^{*n}|^2
\nonumber\\[3mm]
&& \mp \frac{1}{2} a_n \left( {\rm Tr} \, U^{2n} + {\rm Tr} \,U^{*2n}\right) , \qquad
\qquad \qquad\qquad \qquad \qquad {\rm (AS/S) }\,.
\label{eq:potential}
\eeqn
Here $a_n$ are prefactors which depend on the fermion boundary conditions,
\begin{equation}
a_n= \left\{ \begin{array}{cl}
(-1)^n & {\rm for}\; {\cal S}^{-}\,,\nonumber\\[3mm]
1 & {\rm for} \; {\cal S}^{+}\,.
\end{array}
\right.
\end{equation}
Note that
\begin{eqnarray}
&&
C\,\left( {\rm Tr}\, U^{n} \pm {\rm Tr}\, (U^{ *})^n \right) = \pm \left( {\rm Tr}\, U^{n} \pm {\rm Tr}\, (U^{ *})^n\right) \,,\nonumber\\[3mm]
&&
{\mathcal I}\, \left( {\rm Tr} \, U_{1}^{n} \pm {\rm Tr}\, (U_{2})^n\right) = \pm \left( {\rm Tr} \, U_{1}^{n} \pm {\rm Tr} \, (U_{2})^n\right).
\end{eqnarray}
The minimum
of the effective potential presented above is located at
\begin{eqnarray}
&&U \sim {\rm Diag}(1, 1, \ldots\,, 1)
\quad {\rm all}\; {\cal R} \; {\rm with} \; { \cal S^-} \;\; {\rm and}
\; {\rm F/BF/AS/S}
\; {\rm with} \; {\cal S^+} ,
\nonumber\\[3mm]
&&
U = {\rm Diag} \left( 1, e^{i \frac{2\pi}{N}}, \ldots ,
e^{i \frac{2\pi (N-1)} {N}} \right)
\quad {\rm adj \; with} \; \; { \cal S^+} \, .
\end{eqnarray}
Thus, the (spatial or temporal) center symmetry is broken in all theories, except QCD(adj) with the periodic spin connection ${\cal S}^{+}$.
In the cases of broken center symmetry the small and large radius physics on $S_1 \times R_3$ are separated by a phase transition. In all these cases
the fermions essentially decouple from the infrared physics, and the theory
at small $r(S_1)$ has little in common with the theory at large $r(S_1)$.
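As an illustrative numerical check of the statements above (our own, not part of the original analysis), one may truncate the $n$-sum of the one-loop potential in Eq.(\ref{eq:potential}) and scan it over diagonal SU(3) holonomies for a single fundamental fermion with periodic spin connection ($a_n = 1$); the minimum indeed sits at $U$ proportional to the identity, up to a center phase, where $|{\rm Tr}\, U| = N$ and the center symmetry is broken:

```python
import numpy as np

def V_fund(t1, t2, n_max=12):
    """Truncated n-sum of the one-loop potential for a single fundamental
    fermion with a_n = 1 (periodic spin connection), evaluated on the
    diagonal SU(3) holonomy U = diag(e^{i t1}, e^{i t2}, e^{-i(t1+t2)});
    the overall factor 2/(pi^2 L^4) is dropped."""
    V = 0.0
    for n in range(1, n_max + 1):
        s = np.exp(1j*n*t1) + np.exp(1j*n*t2) + np.exp(-1j*n*(t1 + t2))
        V = V + (-np.abs(s)**2 + 2*s.real) / n**4   # T_n for (F), a_n = 1
    return V

# scan the holonomy torus on a grid and locate the minimum
theta = np.linspace(0.0, 2.0*np.pi, 61)
T1, T2 = np.meshgrid(theta, theta)
V = V_fund(T1, T2)
i, j = np.unravel_index(np.argmin(V), V.shape)
t1, t2 = T1[i, j], T2[i, j]
trU = abs(np.exp(1j*t1) + np.exp(1j*t2) + np.exp(-1j*(t1 + t2)))
# the minimum sits at U = (center phase) x identity, with |Tr U| = N = 3:
# the center symmetry is broken, while the center-symmetric clock
# configuration diag(1, w, w^2) lies far above the minimum
```

The center-symmetric clock configuration, by contrast, yields a value of the potential close to zero, far above the minimum, in agreement with the breaking pattern listed above.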
The center symmetry breaking is induced by destabilizing double trace operators
such as
$- |{\rm Tr}\, U|^2$ and their multiwinding counterparts.
One can stabilize the center symmetry while respecting the underlying symmetries of
the theories at hand by adding a stabilizing polynomial in the appropriate variable
up to the winding number $[N/2]$ with judiciously chosen coefficients. This will overwhelm the one-loop effect, and make the center-symmetric point a
stable vacuum in the small $r(S_1)$ regime.
\newpage
\addcontentsline{toc}{section}{References}
\section{Introduction}
Motivated by recent studies on immunity duration against seasonal coronaviruses\cite{Fontanet-2021,Edridge-2021} and the surge of new variants\cite{Callaway-2021}, we consider the contact process\cite{Harris-1974, Ferreira-2016} coupled to Barabasi-Albert\cite{Cohen-2010, Barabasi-2016} networks. As Covid-19 advances worldwide, we observe that it spreads mainly over scale-free structures such as airline connection networks. The most fundamental scale-free model is the one reported by Barabasi and Albert\cite{Cohen-2010, Barabasi-2016}. In this work, we revisit this network structure.
Running on top of the network structure, we use the dynamics of the well-known contact process, the simplest epidemic-like model allowing absorbing configurations\cite{Boguna-2013, Pastorsatorras-2015}, to describe epidemic spreading without permanent immunity\cite{Dickman-1999, Hinrichsen-2000, Hinrichsen-2006, Odor-2004, Henkel-2008}. The contact process presents an absorbing phase transition in the directed percolation universality class\cite{Broadbent-1957, Odor-2004, Hinrichsen-2006, Henkel-2008}. In addition, the contact process is a stochastic epidemic process, derived from the master equation by the constant-rate Poissonian process approximation\cite{Dorogovtsev-2008, Pastorsatorras-2015}.
Coupling epidemic processes to networks adds realism to the description of epidemic spreading because networks are ubiquitous in human relationships, represented by bonds between the network nodes\cite{Dorogovtsev-2003, Barabasi-2016}. There are many examples of real scale-free networks, among which we can mention the human sexual contacts\cite{Liljeros-2001}, the world wide web\cite{Barabasi-1999, Dorogovtsev-2003, Barabasi-2016}, the transport network\cite{Barrat-2004}, the scientific article citations\cite{Price-1965, Redner-1998}, and the scientific collaborations\cite{Newman-2001, Barabasi-2002}. Possibly, the best-known example of a scale-free network model is the Barabasi-Albert (BA) model\cite{Barabasi-1999, Albert-2002, Newman-2002, Dorogovtsev-2003, Barrat-2004, Boccaletti-2006, Barrat-2008, Cohen-2010, Barabasi-2016}.
We can expect a change in the critical behavior of the contact process when coupled to scale-free networks\cite{Dorogovtsev-2008, Cohen-2010, Odor-2012, Barabasi-2016}. A possible change is a system starting to display non-universal scaling where the critical exponents are functions of the degree distribution exponent\cite{Hong-2007, Wittmann-2014, Ferreira-2011, Lima-2006, Krawiecki-2018, Krawiecki-2018-2, Krawiecki-2019}. In this way, our objective is to characterize the stationary critical behavior of the contact process on Barabasi-Albert networks. Also important is the effect of the network connectivity $z$, defined as the mean number of connections. Decreasing the network connectivity can be interpreted as the individuals adopting a social distancing behavior.
This paper is organized as follows. We describe the model in section \ref{sec:model}. We show details on estimating the critical threshold, and the critical exponents in section \ref{sec:results}. Finally, we present our conclusions in section \ref{sec:conclusions}.
\section{Model and Implementation\label{sec:model}}
In the contact process, a network with $N$ nodes represents a group of $N$ interconnected individuals, each one placed at its respective node. The connections between the individuals are represented by the network edges. The epidemics are spread by contamination of a susceptible individual by one of its infected neighbors, placed in another node connected by an edge.
To grow Barabasi-Albert networks with size $N$ and connectivity $z$, where $z \ll N$, one should start from a complete graph with $z+1$ nodes and add one node at a time, where every newly added node $j$ will connect with $z$ randomly chosen previously added nodes in the growing process. Connections are chosen by preferential attachment, i.e., the newly added node $j$ will connect with a previously added node $i$ with a probability $W_i$ proportional to its degree $k_i$. We forbid multiple bonds between the same pair of nodes. The Barabasi-Albert model is a random scale-free model in the sense of unbounded fluctuations of the node degree, whose mean value defines the connectivity $z$\cite{Cohen-2010, Barabasi-2016}.
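The growth rule above can be sketched as follows (an illustrative Python implementation of our own; the list `targets`, in which every node appears once per unit of degree, realizes the preferential attachment without explicitly computing the probabilities $W_i$):

```python
import random

def barabasi_albert(N, z, seed=0):
    """Grow a Barabasi-Albert network: start from a complete graph on
    z+1 nodes, then attach each new node to z distinct existing nodes
    chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(N)}
    targets = []                       # node repeated once per unit of degree
    for i in range(z + 1):             # complete graph seed
        for j in range(i + 1, z + 1):
            adj[i].add(j); adj[j].add(i)
            targets += [i, j]
    for new in range(z + 1, N):
        chosen = set()
        while len(chosen) < z:         # forbid multiple bonds to the same node
            chosen.add(rng.choice(targets))
        for old in chosen:
            adj[new].add(old); adj[old].add(new)
            targets += [new, old]
    return adj
```

Every node then has degree at least $z$, and the total number of edges is $z(z+1)/2 + z\,(N-z-1)$.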
We define the contact process as usual\cite{Harris-1974}. The kinetic Monte-Carlo dynamic rules for the synchronous version of the contact process are summarized as follows:
\begin{enumerate}
\item \textbf{Initialization:} The network state is given by $N$ stochastic variables $(\psi_1,\psi_2,...,\psi_N)$ where $\psi_i = 0$ if the node $i$ is susceptible, and $\psi_i = 1$ if the node $i$ is infected. In the time $t=0$, half of the nodes are randomly chosen to be infected.
We initialize another variable with a null value to count the number of visits $N_{\mathrm{r}}$ to the absorbing state, in which all nodes are susceptible.
\item \textbf{Evolution step:} One network node is randomly selected and
\begin{itemize}
\item If the node is susceptible, we randomly select one of its neighbors and if the selected neighbor is infected, the node becomes infected with a contamination rate $\mu=1$;
\item However, if the node is infected, it can recover with a recovering rate $\lambda$.
\end{itemize}
The time increment of each update is $\Delta t = \alpha/N$. For simplicity, we can choose $\alpha=1$.
\item \textbf{Reactivation of the dynamics:} The contact process has an absorbing state, where all individuals are susceptible. In finite lattices, the only true stationary state is the absorbing one\cite{Dickman-2005}. To prevent the system from visiting the absorbing state, we should reactivate the dynamics, generating quasi-stationary states. This can be done in two distinct and equivalent ways\cite{Pruessner-2007}:
\begin{itemize}
\item \textbf{Reactivation method\cite{Alves-2018, Mota-2018}:} If there is no infected node in the entire network, we increase $N_{\mathrm{r}}$ by one unit, and we randomly infect one node of the network to continue the simulation. Effects of the reactivation on the stationary state can be measured by the reactivating field $h_{\mathrm{r}} = \frac{N_{\mathrm{r}}}{Nt}$,
which is the average rate of inserted particles (i.e., spontaneously infected individuals); it scales as $1/N$ in the absorbing phase, vanishing in the thermodynamic limit;
\item \textbf{Quasi-stationary (QS) method\cite{Oliveira-2005, Dickman-2005}:} We store a list of $N_\text{list}$ active states and update the list at a predefined number of steps $N_\text{steps}$ by replacing one randomly selected state in the list with the actual system state if it is an active one. However, if the system falls into the absorbing state, we randomly select one stored active state in the list to replace the actual absorbing state and continue the dynamics. In order to reproduce simulation results obtained with the QS method, one should report the relevant $N_\text{list}$ and $N_\text{steps}$ parameters. In our simulations, we used $N_\text{steps}=N$ (the list of active states is updated at each time unit), and $N_\text{list} = N$;
\end{itemize}
\item \textbf{Iteration:} Steps (2) and (3) should be repeated a predefined number of MC steps to let the system evolve to the stationary state. After that, we can continue to iterate steps (2) and (3) to collect the time series of the relevant observables presented in the section \ref{sec:results}.
\end{enumerate}
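The dynamical rules above, together with the reactivation method, can be sketched as follows (an illustrative Python implementation of our own; `adj` maps every node to its set of neighbors, and the rates are implemented as acceptance probabilities per attempted update, which assumes $\lambda \le 1$, as in the range studied here):

```python
import random

def contact_process(adj, lam, t_max, seed=0):
    """Kinetic Monte Carlo for the contact process on a network `adj`
    (dict node -> set of neighbours), with contamination rate mu = 1,
    recovering rate lam, and reactivation of the dynamics whenever the
    absorbing state is reached. Returns the time series of the infected
    fraction and the number of reactivations."""
    rng = random.Random(seed)
    nodes = list(adj)
    N = len(nodes)
    infected = set(rng.sample(nodes, N // 2))   # half infected at t = 0
    n_react, series = 0, []
    for _ in range(t_max):                      # one time unit = N updates
        for _ in range(N):
            i = rng.choice(nodes)
            if i in infected:                   # recovery attempt, rate lam
                if rng.random() < lam:
                    infected.discard(i)
            else:                               # contamination from a randomly
                j = rng.choice(tuple(adj[i]))   # selected neighbour, rate mu = 1
                if j in infected:
                    infected.add(i)
            if not infected:                    # reactivation method
                n_react += 1
                infected.add(rng.choice(nodes))
        series.append(len(infected) / N)
    return series, n_react
```

On a complete graph, for instance, the infected fraction relaxes to a stationary plateau near the mean-field value $1-\lambda$ deep in the active phase.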
Note that we used the recovering rate $\lambda$ as the control parameter. However, if one decides to use the contamination rate as the control parameter, they should change the contamination rate to $1/\lambda$, and the recovering rate to $1$ to compare the results with the ones presented here. The basic reproduction number $R_0$ is defined as the ratio of the contamination and recovering rates and is given in terms of the control parameter as $R_0=1/\lambda$.
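For orientation (a standard mean-field estimate, not derived in this paper), note that on the complete graph the probability that the randomly selected neighbor of a susceptible node is infected equals $\rho$, so that

```latex
\frac{d\rho}{dt} = \mu\,\rho\,(1-\rho) - \lambda\,\rho
\quad\Longrightarrow\quad
\rho_{\rm st} = 1 - \frac{\lambda}{\mu}\,, \qquad
\lambda_c = \mu = 1\,, \qquad R_0 = \frac{1}{\lambda_c} = 1\,,
```

and the active phase requires $R_0 > 1$. This is the $z\to\infty$ value recovered by the extrapolation in Sec.~\ref{sec:results}.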
\section{Results\label{sec:results}}
The main observable is the fraction of infected nodes
\begin{equation}
\rho = \frac{1}{N}\sum_i^N \psi_i,
\label{infection-concentration}
\end{equation}
which vanishes in the absorbing phase. From the fraction of infected nodes, we can calculate the following averages from the time series of $\rho$ given in Eq.(\ref{infection-concentration})
\begin{eqnarray}
U &=&\left[\frac{\left< \rho^{2} \right>\left< \rho^{3} \right> - \left< \rho \right>\left< \rho^{2} \right>^{2}}
{\left< \rho \right>\left< \rho^{4} \right> - \left< \rho \right>\left< \rho^{2} \right>^{2}} \right],
\nonumber \\
P &=& \left[\left< \rho \right>\right], \nonumber \\
\Delta &=& N \left[ \left< \rho^{2} \right> - \left< \rho \right>^{2} \right],
\label{cp-averages}
\end{eqnarray}
where $\left[\cdots\right]$ means a quenched average over the random network realizations, and $\left< \cdots \right>$ means a time series average. $U$ is the $5$-order cumulant for directed percolation\cite{Lubeck-2002, Jansen-2007, Henkel-2008}, $P$ is the order parameter of the active-absorbing transition, and $\Delta$ is the order parameter fluctuation. All averages are given as functions of the recovering rate $\lambda$. The cumulant $U$ should be universal at the critical threshold in the presence of an external field, meaning that the cumulant curves for different sizes should cross at the critical threshold when using the reactivation method. For the QS method, we can estimate the critical threshold from the QS moment ratio, given by\cite{Oliveira-2005}
\begin{equation}
m = \left[\frac{\left< \rho^{2} \right>}{\left< \rho \right>^{2}}\right].
\label{qs-cumulant}
\end{equation}
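For a single network replica, the time averages entering Eqs.(\ref{cp-averages}) and (\ref{qs-cumulant}) can be computed from the stored time series of $\rho$ as follows (Python sketch; the quenched average over replicas is taken afterwards):

```python
import numpy as np

def observables(rho, N):
    """Single-replica time averages: 5-order cumulant U, order
    parameter P, fluctuation Delta, and the QS moment ratio m."""
    r = np.asarray(rho, dtype=float)
    m1, m2, m3, m4 = (np.mean(r**k) for k in (1, 2, 3, 4))
    U = (m2 * m3 - m1 * m2**2) / (m1 * m4 - m1 * m2**2)
    P = m1
    Delta = N * (m2 - m1**2)
    m_qs = m2 / m1**2
    return U, P, Delta, m_qs
```

For example, a time series alternating between $\rho = 0.2$ and $\rho = 0.6$ yields $U = 0.625$, $P = 0.4$, $\Delta = 0.04\,N$, and $m = 1.25$.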
Close to the critical threshold $\lambda_c$, we conjecture that the $5$-order cumulant, the order parameter, and its fluctuation should obey the following finite-size scaling (FSS) relations\cite{kenna-2012, kenna-2006-1, kenna-2006-2, palchykov-2010}
\begin{eqnarray}
U &\approx& f_{U}\left[N^{1/\nu}\left( \ln N \right)^{-\widetilde{\lambda}}\left(\lambda-\lambda_{c}\right)\right], \nonumber \\
P &\approx& N^{-\beta/\nu}\left( \ln N \right)^{-\widetilde{\beta}}f_P\left[N^{1/\nu}\left( \ln N \right)^{-\widetilde{\lambda}}\left(\lambda-\lambda_{c}\right)\right], \nonumber \\
\Delta &\approx& N^{\gamma'/\nu} \left( \ln N \right)^{\widetilde{\gamma}'}f_{\Delta}\left[N^{1/\nu}\left( \ln N \right)^{-\widetilde{\lambda}}\left(\lambda-\lambda_{c}\right)\right],
\label{fss-relations}
\end{eqnarray}
where $\beta/\nu = 1/2$ and $\gamma'/\nu = 0$ are the Mean Field critical exponent ratios. The exponent $\nu$ is the shift exponent\cite{Hong-2007, Wittmann-2014}, which obeys $\nu = d_c \nu_\perp$, where $\nu_\perp$ is the spatial correlation exponent and $d_c=4$ is the upper critical dimension, yielding $1/\nu = 1/2$. Expressions (\ref{fss-relations}) account for logarithmic corrections, and our simulation results with the reactivating field are compatible with the pseudo-exponents $\widetilde{\lambda}=1/4$, $\widetilde{\beta}=1$, and $\widetilde{\gamma}'=1/8$. Results for the pseudo-exponent corrections can change with the quasi-stationary state, and for the QS method, we conjecture $\widetilde{\lambda}=1/2$, $\widetilde{\beta}=-1/2$, and $\widetilde{\gamma}'=5/4$. The QS moment ratio $m$ obeys the same FSS behavior as $U$, however, with a distinct correction pseudo-exponent.
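In practice, a data collapse amounts to applying the following transformation to every measured curve (Python sketch of our own, with the mean-field exponents and the reactivation-method pseudo-exponents as defaults; curves for different $N$ fall onto a single function only when $\lambda_c$ and the exponents are correct):

```python
import numpy as np

def fss_rescale(lam, P, N, lam_c,
                inv_nu=0.5, beta_nu=0.5, lam_t=0.25, beta_t=1.0):
    """Map a raw curve P(lambda) at size N onto the scaling variables:
    x = N^{1/nu} (ln N)^{-lam_t} (lambda - lam_c),
    y = P N^{beta/nu} (ln N)^{beta_t}."""
    lam = np.asarray(lam, dtype=float)
    x = N**inv_nu * np.log(N)**(-lam_t) * (lam - lam_c)
    y = np.asarray(P, dtype=float) * N**beta_nu * np.log(N)**beta_t
    return x, y
```

Applying this map to curves generated from an assumed scaling function reproduces that function for every size, which is the operational meaning of a collapse.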
We simulated the dynamics with the reactivation method on Barabasi-Albert networks with different connectivities $z$ to investigate how the critical thresholds depend on them. We performed Monte Carlo simulations on networks with sizes: $N=2500$, $N=3600$, $N=4900$, $N=6400$, $N=8100$, and $N=10000$ in order to obtain the averages given in Eq.(\ref{cp-averages}). For each size $N$ and connectivity $z$, we simulated $128$ random network realizations to make quenched averages. For each network replica, we considered $10^5$ Markov chain Monte-Carlo (MCMC) steps to let the system evolve to a stationary state and another $10^5$ MCMC steps to collect a time series of $10^5$ values of the fraction of infected nodes. From the time series, we calculated the averages written in Eq.(\ref{cp-averages}), and their respective error bars\cite{Tukey-1958}.
Regarding the CP dynamics with the QS method, we considered only the case $z=4$, where we simulated networks with sizes: $N=4900$, $N=6400$, $N=8100$, $N=10000$, $N=12100$, and $N=14400$ in order to obtain the order parameter $P$ and its fluctuations $\Delta$, given in Eq.(\ref{cp-averages}). We also calculated the QS moment ratio $m$ given in Eq.(\ref{qs-cumulant}). We simulated $160$ random network realizations to make quenched averages with $2 \cdot 10^5$ MCMC steps, where we discarded the first $10^5$ steps for each network replica.
We show results for the Barabasi-Albert networks with connectivity $z=4$ in Fig.(\ref{cp-z=4-results}). The 5-order cumulant should be independent of the system size at the critical point. By inspecting panel (a), we can obtain the collective critical threshold $\lambda_c = 0.8729(5)$. We did the same for several network connectivities to collect the critical-threshold data summarized in Tab.(\ref{criticalbehaviortable}). The critical thresholds were refined by inspection of data collapses according to expressions (\ref{fss-relations}), and we obtained a measurement error of $\pm 0.0005$ in all cases. In panel (b) we show the data collapse of the 5-order cumulant; the critical behavior is compatible with the same Mean Field exponents found for complex networks and scale-free networks with degree distribution exponent $\gamma>3$\cite{Hong-2007, Ferreira-2011}, and presents logarithmic corrections with the pseudo-exponent $\widetilde{\lambda} = 1/4$.
\begin{table}[h]
\begin{center}
\begin{tabularx}{0.4\textwidth}{YY}
\hline
$z$ & $\lambda_c$ \\
\hline
4 & 0.8729(5) \\
5 & 0.8985(5) \\
6 & 0.9160(5) \\
7 & 0.9282(5) \\
8 & 0.9372(5) \\
9 & 0.9445(5) \\
10 & 0.9502(5) \\
15 & 0.9671(5) \\
20 & 0.9755(5) \\
\hline
\end{tabularx}
\end{center}
\caption{Summary of critical thresholds $\lambda_c$ on Barabasi-Albert networks for some connectivities $z$. The system will be in the active phase for contamination rates $\mu=1$ and recovering rates $\lambda$ smaller than the critical threshold $\lambda_c$. They were obtained by inspection of the data collapses through finite-size scaling relations presented in Eq.(\ref{fss-relations}).}
\label{criticalbehaviortable}
\end{table}
\begin{figure}[p]
\begin{center}
\includegraphics[scale=0.16]{figures/cp-barabasi-4-0.8729.png}
\end{center}
\caption{(Color Online) Results for the Contact Process on Barabasi-Albert networks with connectivity $z=4$ of different sizes $N$, where we used the reactivation method. In panels (a), (c) and (e), we show the 5-order cumulant $U$, the infection concentration $P$, and its fluctuation $\Delta$ written in Eq. (\ref{cp-averages}). In panels (b), (d) and (f), we show the scaled plots of $U$, $P$, and $\Delta$ according to the FSS relations written in Eq.(\ref{fss-relations}). The cumulants for different lattice sizes cross on the collective threshold, estimated at $\lambda_c = 0.8729(5)$. The data collapses are compatible with the Mean Field critical exponent ratios and pseudo-correction exponents presented in the section \ref{sec:results}. Statistical errors are smaller than the symbols.}
\label{cp-z=4-results}
\end{figure}
We present the order parameter in panel (c) of Fig.(\ref{cp-z=4-results}). From the curves, we can identify the active phase for recovering rates smaller than the critical threshold $\lambda_c$ and the absorbing phase otherwise. Note that the reactivation procedure destroys the absorbing phase by introducing tails in the curves of the order parameter, and the inflection points separate the active and absorbing phases. The reactivating field $h_r$ in the simulations scales as $1/N$ deep in the absorbing phase, in a way that the tails are just a perturbation to the order parameter. In panel (d), we show the respective data collapse of the order parameter, which is compatible with the Mean Field critical exponents and logarithmic corrections with pseudo-exponent $\widetilde{\beta}=1$.
We display the order parameter fluctuation $\Delta$ in panel (e). It presents increasing peaks at the inflection point of the order parameter. However, the critical behavior predicted by the Mean Field exponents should be a finite jump corresponding to $\gamma'=0$. The respective data collapse is shown in panel (f), and the order parameter fluctuation at the critical threshold scale as $(\ln N)^{1/8}$, yielding a pseudo-exponent $\widetilde{\gamma'}=1/8$. We can conclude that the peaks are due to logarithm corrections on the order parameter fluctuations.
Now, we examine the effects of distinct QS states by comparing the results obtained with the reactivation procedure with the corresponding results obtained by the QS method, shown in Fig.(\ref{cp-z=4-results-QS}). We estimated the critical threshold at $\lambda_c=0.8723(5)$, where we refined the critical point by inspecting the data collapses. The critical behavior is the same, obeying Mean Field critical exponents. However, the QS state has distinct pseudo-exponent corrections. Data collapses of Fig.(\ref{cp-z=4-results-QS}) are compatible with the values $\widetilde{\lambda}=1/2$, $\widetilde{\beta}=-1/2$, and $\widetilde{\gamma}'=5/4$.
\begin{figure}[p]
\begin{center}
\includegraphics[scale=0.16]{figures/cp-barabasi-z=4-0.8723-QS.png}
\end{center}
\caption{(Color Online) Results for the Contact Process on Barabasi-Albert networks with connectivity $z=4$ of different sizes $N$, where we used the quasi-stationary method. In panels (a), (c) and (e), we show the quasi-stationary moment ratio $m$, the infection concentration $P$, and its fluctuation $\Delta$ written in Eq. (\ref{cp-averages}). In panels (b), (d) and (f), we show the scaled plots of $m$, $P$, and $\Delta$ according to the FSS relations written in Eq.(\ref{fss-relations}). We estimated the critical threshold at $\lambda_c = 0.8723(5)$. The data collapses are compatible with the Mean Field critical exponent ratios and pseudo-correction exponents presented in the section \ref{sec:results}. Error bars are smaller than the symbols.}
\label{cp-z=4-results-QS}
\end{figure}
Finally, we discuss the critical thresholds $\lambda_c$ as functions of the network connectivity $z$ in Fig.(\ref{phasediagram}). A regression of $\lambda_c$ in terms of $1/z$ reveals a straight line that separates the active and absorbing phases. An analogous result was obtained for a kinetic consensus formation model, where the critical thresholds have the same linear dependence on the inverse of the network connectivity\cite{Alves-2020}. Particularly interesting is the limit of the fully connected graph $z\to\infty$ where the basic reproduction number $R_0$ assumes the critical value $R_0=1$ which separates the active phase for $R_0>1$ and the absorbing phase for $R_0\leq 1$. In this way, the control parameter $\lambda = 1/R_0$ has a critical value for the fully connected graph given by $\lim_{z\to\infty}\lambda_c = 1$. Indeed, this is compatible with the extrapolation of the linear regression shown in Fig.(\ref{phasediagram}).
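The linear fit can be reproduced directly from the thresholds of Tab.(\ref{criticalbehaviortable}) (Python sketch):

```python
import numpy as np

# critical thresholds lambda_c(z) from Tab. 1
z = np.array([4, 5, 6, 7, 8, 9, 10, 15, 20], dtype=float)
lam_c = np.array([0.8729, 0.8985, 0.9160, 0.9282, 0.9372,
                  0.9445, 0.9502, 0.9671, 0.9755])

# least-squares fit of lambda_c against 1/z
slope, intercept = np.polyfit(1.0 / z, lam_c, 1)
# the intercept estimates lim_{z -> infinity} lambda_c,
# i.e., the complete-graph value lambda_c = 1 (critical R_0 = 1)
print(f"lambda_c(z) ~ {intercept:.3f} {slope:+.3f}/z")
```

The fitted intercept is close to $1.00$, consistent with the complete-graph value $\lim_{z\to\infty}\lambda_c = 1$.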
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.3]{figures/phase-diagram.png}
\end{center}
\caption{(Color Online) Phase diagram of the contact process on the Barabasi-Albert networks: we show the critical thresholds $\lambda_c$ in Tab.(\ref{criticalbehaviortable}) as functions of the Barabasi-Albert network connectivity $z$ in gray circles. The critical thresholds are a linear function of $1/z$ where $\lim_{z\to\infty}\lambda_c = 1$, in a way that the limit of the critical threshold is the same as for the complete graph. The linear function defines a phase diagram that separates the active and absorbing regions, shown in cyan and yellow, respectively. Error bars are smaller than the symbols.}
\label{phasediagram}
\end{figure}
\section{Conclusions\label{sec:conclusions}}
We presented the stationary critical behavior of the contact process on Barabasi-Albert networks. Our data collapses are compatible with the Mean Field critical exponents $\beta=1$, $\gamma'=0$, and $\nu=2$, and with the existence of logarithmic corrections to the shift scaling when using the reactivation method, i.e., corrections on the $5$-order cumulant proportional to $(\ln N)^{-1/4}$. Logarithmic corrections are also present in the order parameter scaling, proportional to $(\ln N)^{-1}$. Finally, the fluctuation of the order parameter at the critical threshold scales as $(\ln N)^{1/8}$, instead of presenting a finite jump.
By simulating the same dynamics by using quasi-stationary states generated by the QS method, we found the same critical behavior in the Mean Field regime. However, the pseudo-exponents are distinct in this case. We found a correction in the shift scaling proportional to $(\ln N)^{-1/2}$. Also, there is a logarithmic correction in the order parameter at the critical threshold, proportional to $(\ln N)^{1/2}$. In addition, the order parameter fluctuations at the critical threshold scales as $(\ln N)^{5/4}$.
We obtained the phase diagram of the system, i.e., the critical thresholds as functions of the network connectivity. Linear regression of the critical-threshold data reveals a linear function of $1/z$, separating the active and absorbing regions. The extrapolation of the critical threshold in the limit $z\to \infty$ yields the critical basic reproduction number of the complete graph, $R_0 = 1/\lambda_c = 1$, as expected. An analogous result was found for a kinetic consensus formation model, where the critical thresholds have the same linear dependence on the inverse of the network connectivity\cite{Alves-2020}. Note that the phase diagram allows us to assess the effectiveness of social distancing in epidemic control: reducing the connectivity of the network can be interpreted as avoiding social contacts, and the direct consequence is to increase the minimal $R_0=1/\lambda_c$ that allows the epidemics to survive.
\section{Acknowledgments}
We would like to thank CAPES (Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{\i}vel Superior), CNPq (Conselho Nacional de Desenvolvimento Cient\'{\i}fico e tecnol\'{o}gico), FUNCAP (Funda\c{c}\~{a}o Cearense de Apoio ao Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico) and FAPEPI (Funda\c{c}\~{a}o de Amparo a Pesquisa do Estado do Piau\'{\i}) for the financial support. We acknowledge the use of Dietrich Stauffer Computational Physics Lab, Teresina, Brazil, and Laborat\'{o}rio de F\'{\i}sica Te\'{o}rica e Modelagem Computacional - LFTMC, Teresina, Brazil, where the numerical simulations were performed.
\section{Introduction}
Owing to the recent advances in heavy-ion accelerator and trap
facilities as well as in detection techniques, new possibilities
arise to study the \textit{electronic structure} of simple atomic
systems in strong (nuclear) Coulomb fields. Relativistic, quantum
electrodynamics (QED), or even parity non--conservation (PNC)
effects, which are difficult to isolate in neutral atoms, often
become enhanced in high--$Z$, few--electron ions. In order to
improve our understanding of these fundamental interactions, a
number of experiments have been recently carried out on the
characteristic photon emission from heavy ions
\cite{FrI05,GuS05,TrK09}. Apart from the one--photon bound--bound
transitions, the two--photon decays of metastable ionic states have
also attracted much interest since the analysis of its properties
may reveal unique information about the \textit{complete} spectrum
of the ion, including negative energy (positron) solutions of
Dirac's equation. Until now, however, most two--photon studies were
focused on measuring the total and energy--differential decay rates
\cite{DuB93,AlA97,ScM99,MoD04,IlU06,KuT09,TrS09} which were found in
good agreement with theoretical predictions, based on relativistic
calculations
\cite{DrG81,GoD81,Dra86,ToL90,DeJ97,SaP98,KoF03,LaS04,LaS05,SuS09}.
In contrast, much less attention has been previously paid to the
angular and polarization correlations between the emitted photons.
The first two--photon correlation studies with heavy ions are likely
to be carried out at the GSI facility in Darmstadt, where
significant progress has recently been made in the development of
solid--state, position--sensitive x--ray detectors \cite{TaS06,StS07}. By
means of these detectors, a detailed analysis of the angular and
polarization properties of two--photon emission will become possible
and will provide more insights into the electronic structure of
heavy, few--electron ions.
\medskip
Despite the recent interest in two--photon coincidence studies,
not much theoretical work has been done so far to explore the
photon--photon correlations in the decay of heavy atomic systems.
While some predictions are available for the hydrogen--like
\cite{Au76,SuK05} and neutral atoms \cite{ToL90}, no systematic
angular (and polarization) analysis was performed for the
\textit{helium--like} ions which are the most suitable candidates
for two--photon investigations in high--$Z$ domain. In the present
work, therefore, we apply the second--order perturbation theory
based on Dirac's equation to investigate the $\gamma - \gamma$
angular correlations in the decay of two--electron systems. The
basic relations of such a (relativistic) perturbation approach will
be summarized in the following Section. In particular, here we
introduce the second--order transition amplitude that describes a
bound--state transition under the simultaneous emission of two
photons. The evaluation of these (many--body) amplitudes within the
framework of the independent particle model (IPM) is thereafter
discussed in Section \ref{sect_ipm}. Within this approximation, which
is particularly justified for the high--$Z$ regime
\cite{Dun96,Dra89,Dun04}, the photon--photon correlation function
from Section \ref{sect_angular_correlation} can be traced back to
the one--electron matrix elements. This reduction enables us to
implement the well--established Green's function as well as finite
basis set methods \cite{Ind95,SaJ96,ShT04} and to calculate the
correlation functions for the $1s 2s \, ^1S_0 \to 1s^2 \, ^1S_0$,
$1s 2s \, ^3S_1 \to 1s^2 \, ^1S_0$ and $1s 2p \, ^3P_0 \to 1s^2 \,
^1S_0$ transitions in helium--like xenon Xe$^{52+}$, gold Au$^{77+}$
and uranium U$^{90+}$ ions. Results from these computations are
displayed in Section \ref{sect_results} and indicate a strong
dependence of the photon emission pattern on the symmetry and parity
of initial and final ionic states. Moreover, the significant effects
that arise due to the higher--multipole terms in the expansion of the
electron--photon interaction are also discussed in the context of
angular correlation studies. Finally, a brief summary is given in
Section \ref{sect_summary}.
\section{Theoretical background}
\subsection{Second--order transition amplitude}
Since the second--order perturbation theory has been frequently
applied in studying two--photon decay, here we may restrict
ourselves to a short compilation of the basic formulas relevant for
our analysis and refer for all further details to the literature
\cite{DrG81,GoD81,ToL90,SaP98,LaS04,LaS05,SuK05,Dra89,Dun04}. Within
the \textit{relativistic} framework, the second--order transition
amplitude for the emission of two photons with wave vectors
$\bm{k}_i$ ($i$ = 1, 2) and polarization vectors
$\bm{u}_{\lambda_i}$ ($\lambda_i = \pm 1$) is given by
\begin{widetext}
\begin{eqnarray}
\label{matrix_general}
{\cal M}_{fi}(M_f, M_i, \lambda_1, \lambda_2) &=& \sum\limits_{\gamma_\nu J_\nu M_\nu}
\frac{
\mem{\gamma_f J_f M_f}{\hat{\mathcal{R}}^\dag(\bm{k}_1, \bm{u}_{\lambda_1})}{\gamma_\nu J_\nu M_\nu}
\mem{\gamma_\nu J_\nu M_\nu}{\hat{\mathcal{R}}^\dag(\bm{k}_2, \bm{u}_{\lambda_2})}{\gamma_i J_i M_i}
}{E_\nu - E_i + \omega_2} \nonumber \\
&+&
\sum\limits_{\gamma_\nu J_\nu M_\nu}
\frac{
\mem{\gamma_f J_f M_f}{\hat{\mathcal{R}}^\dag(\bm{k}_2, \bm{u}_{\lambda_2})}{\gamma_\nu J_\nu M_\nu}
\mem{\gamma_\nu J_\nu M_\nu}{\hat{\mathcal{R}}^\dag(\bm{k}_1, \bm{u}_{\lambda_1})}{\gamma_i J_i M_i}
}{E_\nu - E_i + \omega_1} \, ,
\end{eqnarray}
\end{widetext}
where $\ketm{\gamma_i J_i M_i}$ and $\ketm{\gamma_f J_f M_f}$ denote
the (many--electron) states with well--defined total angular momenta
$J_{i,f}$ and their projections $M_{i,f}$ of the ions just before
and after their decay, and $\gamma_{i,f}$ all the additional quantum
numbers as necessary for a unique specification. The energies of
these states, $E_i$ and $E_f$, are related to the energies
$\omega_{1,2} = c k_{1,2}$ of the emitted photons by the energy
conservation condition:
\begin{equation}
\label{energy_conservation}
E_i - E_f = \hbar \omega_1 + \hbar \omega_2 \, .
\end{equation}
Using this relation, it is convenient to define the so--called
energy sharing parameter $y = \omega_1/(\omega_1 + \omega_2)$, i.e.,
the fraction of the energy which is carried away by the first
photon.
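As a concrete numerical illustration of the kinematics fixed by Eq.~(\ref{energy_conservation}) and the sharing parameter $y$, consider the following sketch; the 96 keV transition energy is an illustrative number of the order of high-$Z$ two-photon transition energies, not a computed value.

```python
# Illustrative two-photon kinematics: given the total transition energy
# E_i - E_f and the sharing parameter y = w1/(w1 + w2), both photon
# energies follow from energy conservation. The 96 keV value is a
# placeholder of the right order of magnitude, not a computed result.

def photon_energies(transition_energy_kev, y):
    """Return the photon energies (w1, w2) in keV for energy sharing y."""
    if not 0.0 < y < 1.0:
        raise ValueError("y must lie strictly between 0 and 1")
    w1 = y * transition_energy_kev
    w2 = (1.0 - y) * transition_energy_kev
    return w1, w2

w1, w2 = photon_energies(96.0, 0.1)
assert abs((w1 + w2) - 96.0) < 1e-9   # Eq. (energy_conservation)
```

Equal energy sharing, $y = 0.5$, corresponds to $\omega_1 = \omega_2$, the case discussed repeatedly below.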
\medskip
In Eq.~(\ref{matrix_general}), moreover, $\hat{\mathcal{R}}$ is the
transition operator that describes the interaction of the electrons
with the electromagnetic radiation. In velocity (Coulomb) gauge for
the coupling of the radiation field, this operator can be written as
a sum of one--particle operators:
\begin{equation}
\label{A_operator_general}
\hat{\mathcal{R}}(\bm{k}, \bm{u}_{\lambda}) =
\sum_{m} {\bm \alpha_m} \, \mathcal{A}_{\lambda, m} =
\sum_{m} {\bm \alpha_m} \, {\bm u}_{\lambda} \,
{\rm e}^{i {\bm k} \cdot {\bm r}_m} \, ,
\end{equation}
where ${\bm \alpha}_m = \left( \alpha_{x,m}, \alpha_{y,m},
\alpha_{z,m} \right)$ denotes the vector of the Dirac matrices for
the $m$--th particle and $\mathcal{A}_{\lambda,m}$ the vector
potential of the radiation field. To further simplify the
second--order transition amplitude (\ref{matrix_general}) for
practical computations, it is convenient to decompose the potential
$\mathcal{A}_{\lambda,m}$ into spherical tensors, i.e. into its
electric and magnetic \textit{multipole} components. For the
emission of the photon in the direction $\hat{\bm k} = (\theta,
\phi)$ with respect to the quantization ($z$--) axis such a
decomposition reads \cite{Ros53}:
\begin{eqnarray}
\label{multipole_expansion}
\bm{u}_{\lambda} {\rm e}^{i \bm{k} \cdot \bm{r}} =
\sqrt{2 \pi} \sum\limits_{L =1}^{\infty} \, \sum\limits_{M=-L}^{L}
\, \sum\limits_{p \, = 0, 1}i^L \, [L]^{1/2} \,
(i \lambda)^p \, \hat{a}^{p}_{LM}(k) \, D^L_{M \lambda}(\hat{\bm k})
\, ,
\end{eqnarray}
where $[L] \equiv 2L + 1$, $D^L_{M \lambda}$ is the Wigner rotation
matrix of rank $L$ and $\hat{a}^{p \, = 0,1}_{LM}(k)$ refer to
magnetic ($p$=0) and electric ($p$=1) multipoles, respectively.
\medskip
The multipole decomposition of the photon field in terms of its
irreducible components with well--defined transformation properties
enables us to simplify the second--order amplitude by employing the
techniques from Racah's algebra. Inserting
Eqs.~(\ref{A_operator_general})--(\ref{multipole_expansion}) into
the matrix element (\ref{matrix_general}) and by making use of the
Wigner--Eckart theorem, we obtain:
\begin{eqnarray}
\label{M_amplitude_new}
{\cal M}_{fi}(M_f, M_i, \lambda_1, \lambda_2) &=& 2 \pi \,
\sum\limits_{L_1 M_1 p_1} \sum\limits_{L_2 M_2 p_2}
(-i)^{L_1 + L_2} \, [L_1, L_2]^{1/2} \, (-i \lambda_1)^{p_1} \,
(-i \lambda_2)^{p_2} \, D^{L_1 *}_{M_1 \lambda_1}(\hat{k}_1) \,
D^{L_2 *}_{M_2 \lambda_2}(\hat{k}_2) \nonumber \\
&\times& \sum\limits_{J_\nu M_\nu}
\frac{1}{[J_i, J_\nu]^{1/2}}
\Bigg[
\sprm{J_f M_f \, L_1 M_1}{J_\nu M_\nu} \, \sprm{J_\nu M_\nu \, L_2 M_2}{J_i
M_i} \,
S_{L_1 p_1, \, L_2 p_2}^{J_\nu}(\omega_2) \nonumber \\
&+&
\sprm{J_f M_f \, L_2 M_2}{J_\nu M_\nu} \, \sprm{J_\nu M_\nu \, L_1 M_1}{J_i M_i} \,
S_{L_2 p_2, \, L_1 p_1}^{J_\nu}(\omega_1) \Bigg] \,
,
\end{eqnarray}
where the second--order \textit{reduced} transition amplitude is
given by:
\begin{eqnarray}
\label{S_function}
S^{J_\nu}_{L_1 p_1, \, L_2 p_2}(\omega_2)
&=&
\sum\limits_{\gamma_\nu} \frac{\rmem{\gamma_f J_f}{\sum\limits_{m} \bm{\alpha}_m \,
\hat{a}^{p_1 \dag}_{L_1, m}(k_1)}{\gamma_\nu J_\nu}
\rmem{\gamma_\nu J_\nu}{\sum\limits_{m} \bm{\alpha}_m \,
\hat{a}^{p_2 \dag}_{L_2, m}(k_2)}{\gamma_i J_i}
}{E_\nu - E_i + \omega_2} \, .
\end{eqnarray}
Here, the summation over the intermediate states formally runs over
the complete spectrum of the ions, including a summation over the
discrete part of the spectrum as well as the integration over the
positive-- and negative--energy continua. In practice, such a
summation is not a simple task especially when performed over the
\textit{many--electron} states $\ketm{\gamma_\nu J_\nu}$. In the
next section, therefore, we shall employ the independent particle
model in order to express the reduced matrix elements
(\ref{S_function}) for many--electron ions in terms of their
one--electron analogs.
\subsection{Evaluation of the reduced transition amplitudes}
\label{sect_ipm}
As seen from Eqs.~(\ref{M_amplitude_new})--(\ref{S_function}), one
has first to generate a \textit{complete} set of many--electron
states $\ketm{\gamma J}$ in order to calculate the second order
transition amplitude $M_{fi}$. A number of approximate methods, such
as multi--configuration Dirac--Fock (MCDF) \cite{Gra89,Fri02} and
configuration interaction (CI) \cite{DeJ97}, are known in atomic
structure theory for constructing these states. Moreover, the
systematic perturbative QED approach in combination with the CI
method turned out to be most appropriate for describing both
transition probabilities \cite{TuV05,InS04} and transition energies
\cite{ArS05} in highly charged ions. In the high--$Z$ domain,
however, the structure of few--electron ions can already be
reasonably well understood within the independent particle model
(IPM). This model is well justified for heavy species especially,
since the interelectronic effects scale with $1/Z$ and, hence, are
much weaker than the electron--nucleus interaction
\cite{Dra89,Dun04,SuJ08}. Within the IPM, which takes the Pauli
principle into account, the many--electron wave functions are
approximated by means of Slater determinants, built from
one--particle orbitals. For this particular choice of the
many--electron function, all the (first-- and the second--order)
matrix elements can be easily decomposed into the corresponding
single--electron amplitudes.
\medskip
For a helium--like system, the decomposition of the reduced
amplitude (\ref{S_function}) reads:
\begin{eqnarray}
\label{S_function_decomposition}
S^{J_\nu}_{L_1 p_1, \, L_2 p_2}(\omega_2)
&=& -\delta_{J_\nu L_1} \, [J_i, J_\nu]^{1/2}
\sum\limits_{j_\nu} (-1)^{J_i + J_\nu + L_2}
\, \sixjm{j_\nu}{j_0}{J_\nu}{J_i}{L_2}{j_i}
\, S^{j_\nu}_{L_1 p_1, \, L_2 p_2}(\omega_2) \, ,
\end{eqnarray}
where the \textit{one--electron} matrix elements of the (electric
and magnetic) multipole field operators are given by
\begin{eqnarray}
\label{S_one_electron}
S^{j_\nu}_{L_1 p_1, \, L_2 p_2}(\omega_2) =
\sum\limits_{n_\nu}
\frac{\rmem{n_0 j_0}{\bm{\alpha} \,
\hat{a}^{p_1 \dag}_{L_1}(k_1)}{n_\nu j_\nu}
\rmem{n_\nu j_\nu}{\bm{\alpha} \, \hat{a}^{p_2 \dag}_{L_2}(k_2)}{n_i j_i}
}{E_\nu - E_i + \omega_2} \, .
\end{eqnarray}
We assume here that the ``spectator'' electron, being in hydrogenic
state $\ketm{n_0 j_0}$, stays passive in the decay process.
Moreover, $\ketm{n_i j_i}$, $\ketm{n_\nu j_\nu}$ and $\ketm{n_f j_f}
= \ketm{n_0 j_0}$ denote the initial, intermediate and final states
of the ``active'' electron, correspondingly. The great advantage of
formula (\ref{S_function_decomposition}) is that it allows us to
evaluate directly the many--electron transition amplitude
(\ref{S_function}) in terms of the (one--particle) functions
$S^{j_\nu}_{L_1 p_1, \, L_2 p_2}(\omega_2)$. The summation over the
\textit{complete} one--particle spectrum that occurs in these
functions can be performed by means of various methods. In the
present work, we make use of (i) the relativistic Coulomb--Green's
function \cite{SuK05,JeS08,SuKCPC05} and (ii) a B--spline discrete
basis set \cite{SaP98,Ind95,SaJ96,ShT04,SuS09} to evaluate all the
second--order transition amplitudes. Indeed, both approaches yield
almost identical results for the angular correlation functions in
the two--photon decay of heavy helium--like ions.
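The structure of such a finite-basis evaluation of the one-electron amplitude (\ref{S_one_electron}), namely a sum over a discrete pseudo-spectrum of products of matrix elements divided by energy denominators, can be sketched as follows; all matrix elements and energies below are mock placeholders rather than relativistic quantities.

```python
# Sketch of a finite-basis evaluation of the second-order amplitude
#   S = sum_nu  <f|R1|nu> <nu|R2|i> / (E_nu - E_i + w2),
# as done in practice with a B-spline pseudo-spectrum. All numbers are
# mock placeholders standing in for the relativistic one-electron data.

def second_order_amplitude(m1, m2, e_nu, e_i, w2):
    """m1[n] = <f|R1|nu_n>, m2[n] = <nu_n|R2|i>, e_nu[n] = pseudo-energies."""
    return sum(a * b / (e - e_i + w2) for a, b, e in zip(m1, m2, e_nu))

# Mock 4-state pseudo-spectrum (arbitrary units):
m1 = [0.8, 0.3, -0.1, 0.05]
m2 = [0.7, -0.2, 0.4, 0.1]
e_nu = [1.0, 2.5, 4.0, 7.0]
S = second_order_amplitude(m1, m2, e_nu, e_i=0.5, w2=0.3)
```

In an actual calculation the pseudo-spectrum contains both positive- and negative-energy solutions of the Dirac equation, and the convergence of the truncated sum has to be monitored.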
\begin{figure}
\includegraphics[height=8.5cm,angle=00,clip=]{Fig1.eps}
\vspace*{0.0cm} \caption{(Color online) Angular correlation function
(\ref{function_definition}) for the $1s 2s \, ^1S_0 \to 1s^2 \,
^1S_0$ two--photon decay of helium--like xenon, gold and uranium
ions. Calculations obtained within the electric dipole 2E1
approximation (dashed line) are compared with those including all
the allowed multipoles (solid line). Results are presented for the
relative photon energies $y$ = 0.1 (upper panel) and 0.5 (lower
panel).} \label{Fig1}
\end{figure}
\subsection{Differential decay rate}
\label{sect_angular_correlation}
Equation (\ref{M_amplitude_new}) displays the general form of the
relativistic transition amplitude for the two--photon decay of
many--electron ions. Such an amplitude represents the ``building
block'' for studying various properties of the emitted radiation.
For instance, the differential two--photon decay rate can be written
in terms of (squared) transition amplitudes as:
\begin{equation}
\label{diff_rate}
\frac{{\rm d}w}{{\rm d}\omega_1 {\rm d}\Omega_1 {\rm d}\Omega_2} =
\frac{\omega_1 \omega_2}{(2 \pi)^3 c^3} \,
\frac{1}{2J_i + 1} \, \sum\limits_{M_i M_f} \, \sum\limits_{\lambda_1 \lambda_2}
\left|{\cal M}_{fi}(M_f, M_i, \lambda_1, \lambda_2) \right|^2 \, ,
\end{equation}
if we assume that the excited ions are initially unpolarized and
that the spin states of the emitted photons remain unobserved in a
particular measurement. As seen from expression (\ref{diff_rate}),
the two--photon rate is \textit{single} differential---owing to the
conservation law (\ref{energy_conservation})---in the energy of one
of the photons but \textit{double} differential in the emission
angles. Accordingly, its further evaluation requires determining
the \textit{geometry} under which the photon emission is considered.
Since no particular direction is preferred for the decay of an
unpolarized (as well as unaligned) ion, it is convenient to adopt
the quantization ($z$--) axis along the momentum of the ``first''
photon: $\bm{k}_1 || z$. Such a choice of the quantization axis
allows us to simplify the rate (\ref{diff_rate}) and to define the
\textit{angular correlation function}:
\begin{equation}
\label{function_definition}
W_{2\gamma}(\theta, y) = 8 \pi^2 \, (E_i - E_f) \,
\frac{{\rm d}w}{{\rm d}\omega_1 {\rm d}\Omega_1 {\rm d}\Omega_2} (\theta_1 = 0, \phi_1 = 0, \phi_2 = 0)
\, ,
\end{equation}
that is characterized (apart from the relative energy $y$) by the
single polar angle $\theta = \theta_2$ of the ``second'' photon
momentum with respect to this axis. In this expression, moreover,
the factor $8\pi^2$ arises from the integration over the solid angle
$d\Omega_1 = \sin\theta_1 {\rm d}\theta_1 {\rm d}\phi_1$ of the
first photon as well as the integration over the azimuthal angle
${\rm d}\phi_2$ of the second photon. In the next Section, we shall
investigate the dependence of the function $W_{2\gamma}(\theta, y)$
on this \textit{opening angle} $\theta$ for various bound--bound
transitions and for a range of (relative) photon energies.
\section{Results and discussion}
\label{sect_results}
With the formalism developed above, we are ready now to analyze the
angular correlations in the two--photon decay of helium--like heavy
ions. In present--day experiments, the excited states of these ions can
be efficiently populated in relativistic ion--atom collisions. For
example, the formation of the metastable $1s 2s \, ^1S_0$ state
during the inner--shell impact ionization of (initially)
lithium--like heavy ions has been studied recently at the GSI
storage ring in Darmstadt \cite{RzS06}. The radiative deexcitation
of this state can proceed only via the two--photon transition $1s 2s
\, ^1S_0 \to 1s^2 \, ^1S_0$ since a single--photon decay to the
$1s^2\;\, ^1S_0$ ground state is strictly forbidden by the
conservation of angular momentum. Fig.~1 displays the photon--photon
angular correlation function for this experimentally easily
accessible decay of helium--like xenon Xe$^{52+}$, gold Au$^{77+}$
and uranium U$^{90+}$ ions and for the two energy sharing parameters
$y$ = 0.1 (upper panel) and $y$ = 0.5 (lower panel). Moreover,
because the radiative transitions in high--$Z$ ions are known to be
affected by the higher terms of the electron--photon interaction
(\ref{A_operator_general}), calculations were performed within both,
the exact relativistic theory (solid line) to include all allowed
multipole components ($p_1 L_1, \, p_2 L_2$) in the amplitude
(\ref{M_amplitude_new}) as well as the electric dipole approximation
(dashed line), if only a single term with $L_1 = L_2 = 1$ and $p_1 =
p_2 = 1$ is taken into account. In the dipole 2E1 approach, as
expected, the angular distribution is well described by the formula
$1 + \cos^2\theta$ that predicts a \textit{symmetric}---with respect
to the opening angle $\theta = 90^\circ$---emission pattern of two
photons. Within the exact relativistic theory, in contrast, an
asymmetric shift in the angular correlation function is obtained. As
can be deduced from
Eqs.~(\ref{S_function_decomposition})--(\ref{function_definition}),
this shift arises from the interference between the leading 2E1
decay channel and higher multipole terms in the electron--photon
interaction:
\begin{equation}
\label{W_1}
W_{2\gamma}(\theta, y) \propto (1 + \cos^2\theta)
+ 4 \, \frac{\mathcal{S}_{M1}}{\mathcal{S}_{E1}} \, \cos\theta
- \frac{20}{3} \, \frac{\mathcal{S}_{E2}}{\mathcal{S}_{E1}} \, \cos^3\theta
+ ... \, ,
\end{equation}
where, for the sake of brevity, we have introduced the notation
$\mathcal{S}_{Lp} = S^{J_\nu = L}_{L p, \, L p}(\omega_1) + S^{J_\nu
=L}_{L p, \, L p}(\omega_2)$. In the high--$Z$ domain, the photon
emission occurs predominantly in the backward directions if the
nondipole terms are taken into account; an effect which becomes more
pronounced for the equal energy sharing (cf. bottom panel of
Fig.~1). Including the higher multipoles into the photon--photon
correlation function, a similar asymmetry was found in the past for
the $2s_{1/2} \to 1s_{1/2}$ decay in hydrogen--like heavy ions, both
within the nonrelativistic \cite{Au76} and relativistic \cite{SuK05}
theory.
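The shape of Eq.~(\ref{W_1}) is easy to explore numerically; the multipole-amplitude ratios used below are illustrative placeholders, chosen only to reproduce the sign of the backward asymmetry, not computed relativistic values.

```python
import math

# Angular correlation of Eq. (W_1): the leading 1 + cos^2(theta) dipole
# shape plus odd-in-cos(theta) interference corrections from M1 and E2
# admixtures. The default ratios r_m1 = S_M1/S_E1 and r_e2 = S_E2/S_E1
# are ILLUSTRATIVE placeholders, not relativistic results.

def w_2gamma(theta, r_m1=-0.05, r_e2=0.01):
    c = math.cos(theta)
    return (1.0 + c * c) + 4.0 * r_m1 * c - (20.0 / 3.0) * r_e2 * c ** 3

# With these signs the emission is enhanced in the backward hemisphere,
# qualitatively as in Fig. 1:
assert w_2gamma(math.pi) > w_2gamma(0.0)
```

Setting both ratios to zero recovers the symmetric $1 + \cos^2\theta$ pattern of the 2E1 approximation.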
\medskip
\begin{figure}
\includegraphics[height=8.5cm,angle=0,clip=]{Fig2.eps}
\vspace*{0.0cm} \caption{(Color online) Angular correlation function
(\ref{function_definition}) for the $1s 2s \, ^3S_1 \to 1s^2 \,
^1S_0$ two--photon decay of helium--like xenon, gold and uranium
ions. Calculations obtained within the electric dipole 2E1
approximation (dashed line) are compared with those including all of
the allowed multipoles (solid line). Results are presented for the
relative photon energies $y$ = 0.1 (upper panel) and 0.5 (lower
panel).} \label{Fig2}
\end{figure}
Apart from the singlet $1s 2s \, ^1S_0$, the formation of the
triplet $1s 2s \, ^3S_1$ state has also been observed in recent
collision experiments at the GSI storage ring \cite{RzS06,KuT09}.
Although much weaker in intensity (owing to the dominant M1
transition), the two--photon decay of this $1s2s \, ^3S_1$ state has
attracted recent interest and might provide an important testing
ground for symmetry violations of Bose particles \cite{Dun04,DeB99}.
The angular correlation between the photons emitted in this $1s 2s
\, ^3S_1 \to 1s^2 \, ^1S_0$ (two--photon) decay is displayed in
Fig.~2, by comparing again the results from the exact relativistic
theory with the 2E1 dipole approximation. As seen from the figure,
the photon--photon correlation function for the $2 ^3S_1 \to 1
^1S_0$ transition is much more sensitive to higher
multipoles in the electron--photon interaction than that obtained for the
$2 ^1S_0 \to 1 ^1S_0$ decay. The strongest non--dipole effect can be
observed for the equal energy sharing ($y$ = 0.5), where the
two--photon emission is strictly \textit{forbidden} within the
electric dipole approximation. This suppression of the 2E1 decay is
a direct consequence of the exchange symmetry of photons as required
by the Bose--Einstein statistics and, hence, a particular case of
the Landau--Yang theorem that forbids the decay of vector particles
into two photons (cf. Refs.~\cite{Dun04,DeB99,Lan48,Yan50} for
further details). In contrast to the 2E1 channel, the E1M2 $2 ^3S_1
\to 1 ^1S_0$ transition can proceed even if the energies of the two
photons are equal. This transition as well as higher multipole terms
give rise to a strongly anisotropic correlation function that
vanishes for the parallel and back--to--back photon emission and has
a maximum at $\theta = 90^\circ$.
\medskip
Large effects due to the higher multipole contributions to the $1s
2s \, ^3S_1 \to 1s^2 \, ^1S_0$ two--photon transition can be
observed not only for the case of equal energy sharing ($y$ = 0.5).
For the relative energy $y$ = 0.1, for example, the photon--photon
angular correlation function is found symmetric with regard to
$\theta = 90^\circ$ in the electric dipole (2E1) approximation but
becomes asymmetric in an exact relativistic theory. In contrast to
the decay of $2 ^1S_0$ state, however, a predominant
\textit{parallel emission} of both photons appears to be more likely
if the higher multipoles are taken into account. For the $2 ^3S_1
\to 1 ^1S_0$ two--photon decay of helium--like uranium U$^{90+}$,
for example, the intensity ratio $W_{2\gamma}(\theta = 0^\circ,
y=0.1)/W_{2\gamma}(\theta = 180^\circ, y=0.1)$ increases from
\textit{unity} within the electric dipole approximation to almost
1.6 in the exact relativistic treatment.
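A rough back-of-envelope consistency check: if the even part of the correlation function is approximated by the $1 + \cos^2\theta$ form of Eq.~(\ref{W_1}) and all odd-in-$\cos\theta$ interference terms are collected into a single strength $x$, the quoted forward/backward ratio translates into a sizeable non-dipole admixture. This is an illustrative parametrisation, not the full relativistic result.

```python
# Rough estimate: write W(theta) ~ (1 + cos^2 theta) + x * (odd terms),
# so that the forward/backward ratio is R = W(0)/W(180) = (2 + x)/(2 - x),
# with x collecting the odd-in-cos(theta) interference contributions at
# theta = 0. Illustrative parametrisation only.

def interference_strength(ratio):
    """Invert R = (2 + x)/(2 - x) for the odd-term strength x."""
    return 2.0 * (ratio - 1.0) / (ratio + 1.0)

x = interference_strength(1.6)
assert 0.4 < x < 0.5   # the quoted R = 1.6 implies x ~ 0.46
```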
\medskip
Until now we have discussed the photon--photon correlations in the
decay of $1s 2s$ (singlet and triplet) helium--like states. Besides
these---well studied---transitions, recent theoretical interest has
been focused also on the $1s 2p \, ^3P_0 \to 1s^2 \, ^1S_0$
two--photon decay whose properties are expected to be sensitive to
(parity violating) PNC phenomena in atomic systems \cite{Dun96}.
Future investigations of such subtle parity non--conservation
effects will first require detailed knowledge of the angular (and
polarization) properties of two--photon emission as well as the role
of non--dipole contributions. The angular correlation function
(\ref{function_definition}) for the $2 ^3P_0 \to 1 ^1S_0$ transition
is displayed in Fig.~3, again, for two relative photon energies $y$
= 0.1 and 0.5 and for the nuclear charges $Z$ = 54, 79 and 92.
Calculations have been performed both within the exact theory and
the (``electric and magnetic'') dipole approximation which accounts
for the leading E1M1--M1E1 decay channel. As seen from the figure,
the emission pattern strongly depends on the energy sharing between
the photons. If, for example, one of the photons is more energetic
than the second one, their parallel emission becomes dominant (cf.
upper panel of Fig.~3). In contrast, photons with equal energies,
i.e. when $y$ = 0.5, are more likely to be emitted back--to--back
while the differential rate (\ref{diff_rate}) vanishes identically
for $\theta = 0^\circ$. Such a behaviour of the photon--photon
angular correlation function is caused by the interference between
\textit{two} pathways which appear for each multipole component of
the $2 ^3P_0 \to 1 ^1S_0$ transition. For instance, the leading
E1M1--M1E1 decay may proceed either via intermediate $n \, ^3S_1$ or
$n \, ^3P_1$ states, thus giving rise to a ``double--slit'' picture
that becomes most pronounced for the equal energy sharing. A simple
analytical expression for the angular correlation function which
accounts for such a Young--type interference effect can be obtained
from Eqs.~(\ref{S_function_decomposition})--(\ref{diff_rate}) as:
\begin{eqnarray}
\label{W_2}
W_{2\gamma}(\theta, y) &\propto& \sin^4\theta/2 \, \left| \mathcal{S}_{E1M1}
\right|^2 \,
\left( 1+ 2(1 + 2 \cos\theta) \frac{\mathcal{S}_{E2M2}}{\mathcal{S}_{E1M1}}
\right) \nonumber \\
&+& \cos^4\theta/2 \, \left| \mathcal{D}_{E1M1}
\right|^2 \,
\left( 1 - 2(1 - 2 \cos\theta) \frac{\mathcal{D}_{E2M2}}{\mathcal{D}_{E1M1}}
\right) \, + ... ,
\end{eqnarray}
where, similar as before, we denote $\mathcal{S}_{L p_1, L p_2} =
S^{J_\nu = L}_{L p_1, \, L p_2}(\omega_1) + S^{J_\nu =L}_{L p_1, \,
L p_2}(\omega_2) + S^{J_\nu = L}_{L p_2, \, L p_1}(\omega_2) +
S^{J_\nu =L}_{L p_2, \, L p_1}(\omega_1)$ and $\mathcal{D}_{L p_1, L
p_2} = S^{J_\nu = L}_{L p_1, \, L p_2}(\omega_1) - S^{J_\nu =L}_{L
p_1, \, L p_2}(\omega_2) + S^{J_\nu = L}_{L p_2, \, L p_1}(\omega_2)
- S^{J_\nu =L}_{L p_2, \, L p_1}(\omega_1)$. Obviously, if the
energies of the two photons are equal, $\omega_1 = \omega_2$, the
second term in Eq.~(\ref{W_2}) turns out to be \textit{zero} and the
photon emission is described by the $\sin^4 \theta/2$ angular
distribution modified by the non--dipole terms in the expansion of
electron--photon interaction. As seen from the lower panel of
Fig.~3, the contribution from these terms becomes more pronounced
for the back--to--back photon emission ($\theta$ = 180$^\circ$)
where they lead to about a 30 \% enhancement of the correlation
function. It is interesting to note that such an enhancement remains
almost constant along the helium isoelectronic sequence for $Z \ge$
54 due to similar ($\propto Z^{12}$) scaling of the E1M1 and E2M2
transition probabilities. Therefore, our calculations clearly
indicate the importance of higher multipoles for analyzing the
photon--photon correlations not only in the high--$Z$ domain but also
for medium--$Z$ ions.
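The vanishing of the second term of Eq.~(\ref{W_2}) at equal energy sharing follows directly from the antisymmetric combination $\mathcal{D}$; the cancellation can be checked with mock amplitudes (arbitrary placeholder functions, not computed matrix elements):

```python
# Check that the antisymmetric combination
#   D = S_{p1 p2}(w1) - S_{p1 p2}(w2) + S_{p2 p1}(w2) - S_{p2 p1}(w1)
# vanishes identically when w1 = w2, so that only the sin^4(theta/2)
# term of Eq. (W_2) survives at equal energy sharing. The S functions
# below are smooth mock placeholders.

def S_p1p2(w):    # mock amplitude for the (p1, p2) multipole ordering
    return 0.3 + 0.7 * w

def S_p2p1(w):    # mock amplitude for the reversed ordering
    return -0.2 + 1.1 * w

def D_term(w1, w2):
    """Antisymmetric combination entering the second line of Eq. (W_2)."""
    return S_p1p2(w1) - S_p1p2(w2) + S_p2p1(w2) - S_p2p1(w1)

assert D_term(0.5, 0.5) == 0.0   # equal sharing: cos^4(theta/2) term drops
assert D_term(0.1, 0.9) != 0.0   # unequal sharing: interference survives
```

The cancellation holds for any choice of the mock functions, since the four terms pair off identically at $\omega_1 = \omega_2$.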
\begin{figure}
\includegraphics[height=8.5cm,angle=0,clip=]{Fig3.eps}
\vspace*{0.0cm} \caption{(Color online) Angular correlation function
(\ref{function_definition}) for the $1s 2p \, ^3P_0 \to 1s^2 \,
^1S_0$ two--photon decay of helium--like xenon, gold and uranium
ions. Calculations obtained within the dipole E1M1 approximation
(dashed line) are compared with those including all of the allowed
multipoles (solid line). Results are presented for the relative
photon energies $y$ = 0.1 (upper panel) and 0.5 (lower panel).}
\label{Fig3}
\end{figure}
\section{Summary and outlook}
\label{sect_summary}
In summary, the two--photon decay of heavy, helium--like ions has
been investigated within the framework of the relativistic
second--order perturbation theory and the independent particle
model. In this study, special emphasis was placed on the angular
correlations between the emitted photons. A general expression for
the photon--photon correlation function was derived that accounts
for the complete expansion of the radiation field in terms of its
multipole components. Based on solutions of Dirac's equation, this
function has been calculated for the two--photon decay of the $1s2s
\, ^1S_0$, $1s2s \, ^3S_1$ and $1s2p \, ^3P_0$ states of
helium--like xenon Xe$^{52+}$, gold Au$^{77+}$ and uranium U$^{90+}$
ions. As seen from the results obtained, the photon emission pattern
appears to be sensitive to the symmetry and parity of the particular
excited state as well as to the higher multipole contributions to
the electron--photon interaction. The strongest non--dipole effects
have been identified for the $1s2s \, ^3S_1 \to 1 \, ^1S_0$
two--photon transition for which the 2E1 decay channel is forbidden
owing to symmetrization properties of the system. For the other two
transitions, $1s2s \, ^1S_0 \to 1 \, ^1S_0$ and $1s2p \, ^3P_0 \to 1
\, ^1S_0$, the higher multipoles of the radiation field typically
result in a 10--30 \% deviation of the photon--photon correlation
function from the (analytical) predictions obtained within the
dipole 2E1 approximation. This deviation becomes most apparent for
the parallel and back--to--back photon emission and may be observed
not only for high--$Z$ but also for medium--$Z$ ions.
\medskip
The second--order perturbation approach based on the independent
particle model, used in the present calculations, is appropriate for
the analysis of forthcoming experimental studies on the two--photon
transitions between the $^{2S+1}L_J$ excited and the ground states
of helium--like, heavy ions. Besides these spontaneous decays, whose
energies usually reach 100 keV, \textit{induced} $J=0 \to J=0 + 2
\gamma$ transitions between excited states are also likely to be
explored at the GSI ion storage ring \cite{ScS89}. Having energies
in the optical range (2--3 eV), these transitions may provide an
alternative and very promising tool for studying the parity
violation phenomena. Their theoretical analysis, however, requires a
more systematic treatment of the electron--electron interaction
effects. Based on the multi--configuration Dirac--Fock approach and
B--spline basis set method, investigations along this line are
currently underway and will be reported elsewhere.
\section*{Acknowledgements}
A.S. and F. F. acknowledge support from the Helmholtz Gemeinschaft
and GSI under the project VH--NG--421. S.F. acknowledges the support
by the DFG. This research was supported in part by FCT Project No.
POCTI/0303/2003 (Portugal), financed by the European Community Fund
FEDER and by the Ac\c{c}\~oes Integradas Luso-Alem\~as (Contract No.
A-19/09). A.V. and G.P. acknowledge support from the DFG and GSI.
Laboratoire Kastler Brossel is "Unit\'e Mixte de Recherche du CNRS,
de l' ENS et de l' UPMC No. 8552". This work is supported by
Helmholtz Alliance HA216/EMMI. P.I. acknowledges support from the PHC
program PESSOA 2009 number 20022VB.
\section*{References}}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{mathrsfs}
\usepackage{graphicx}
\usepackage{epstopdf}
\usepackage{float}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{bm}
\usepackage{bbm}
\usepackage{mathrsfs}
\usepackage{cleveref}
\usepackage{soul}
\usepackage{color,soul}
\usepackage{color}
\usepackage{siunitx}
\usepackage[margin=2.5cm]{geometry}
\biboptions{sort&compress}
\journal{Composites Science and Technology}
\makeatletter
\def\@author#1{\g@addto@macro\elsauthors{\normalsize%
\def\baselinestretch{1}%
\upshape\authorsep#1\unskip\textsuperscript{%
\ifx\@fnmark\@empty\else\unskip\sep\@fnmark\let\sep=,\fi
\ifx\@corref\@empty\else\unskip\sep\@corref\let\sep=,\fi
}%
\def\authorsep{\unskip,\space}%
\global\let\@fnmark\@empty
\global\let\@corref\@empty
\global\let\sep\@empty}%
\@eadauthor={#1}
}
\makeatother
\begin{document}
\begin{frontmatter}
\title{Phase field predictions of microscopic fracture and R-curve behaviour of fibre-reinforced composites}
\author{Wei Tan\fnref{QMUL}}
\author{Emilio Mart\'{\i}nez-Pa\~neda\corref{cor1}\fnref{IC}}
\ead{[email protected]}
\address[QMUL]{School of Engineering and Materials Science, Queen Mary University London, Mile End Road, London, E1 4NS, UK}
\address[IC]{Department of Civil and Environmental Engineering, Imperial College London, London SW7 2AZ, UK}
\cortext[cor1]{Corresponding author.}
\begin{abstract}
We present a computational framework to explore the effect of microstructure and constituent properties upon the fracture toughness of fibre-reinforced polymer composites. To capture microscopic matrix cracking and fibre-matrix debonding, the framework couples the phase field fracture method and a cohesive zone model in the context of the finite element method. Virtual single-notched three-point bending tests are conducted. The actual microstructure of the composite is simulated by an embedded cell in the fracture process zone, while the remaining area is homogenised to be an anisotropic elastic solid. A detailed comparison of the predicted results with experimental observations reveals that it is possible to accurately capture the crack path, interface debonding and load versus displacement response. The sensitivity of the crack growth resistance curve (R-curve) to the matrix fracture toughness and the fibre-matrix interface properties is determined. The influence of porosity upon the R-curve of fibre-reinforced composites is also explored, revealing a more stable response with increasing void volume fraction. These results shed light on microscopic fracture mechanisms and set the basis for the efficient design of high fracture toughness composites. \end{abstract}
\begin{keyword}
Composite Materials\sep Fracture Toughness\sep Phase Field Model \sep Cohesive Zone Model
\end{keyword}
\end{frontmatter}
\section{Introduction}
\label{Introduction}
Lightweight fibre-reinforced polymer (FRP) composites are widely used in aeronautical and automotive applications due to their high specific stiffness and strength. To meet the structural integrity requirements of composite structures used in transportation vehicles, FRPs need to be sufficiently damage-tolerant to sustain defects safely until they can be repaired \cite{Tan2015}. This requires composite structures of high fracture toughness, a property that depends on the mechanical properties of fibre, matrix, and fibre-matrix interfaces, as well as on their spatial distribution within the material.
The fracture of FRPs can generally be classified into two categories: interlaminar fracture (delamination) and intralaminar fracture. Interlaminar fracture toughness values are controlled by the matrix toughness, which typically ranges from 0.1 to 3 kJ/m\textsuperscript{2} \cite{Cowley1997,Tan2016}. Intralaminar fracture can be classified into two categories, namely fibre-dominated fracture and matrix-dominated fracture. The reported intralaminar fracture toughness values of FRPs are in the range of 1 to 634 kJ/m\textsuperscript{2} \cite{Tan2016,Laffan2012,Marin2016}. The matrix-dominated fracture toughness is comparable to the interlaminar fracture toughness ($\sim$1 kJ/m\textsuperscript{2}), while the fibre-dominated fracture toughness is two orders of magnitude higher. This is primarily due to the fibre-bridging effect, where a significant amount of fracture energy is absorbed by fibre-matrix debonding, fibre pull-out and fibre breakage.
Avenues for improving composite fracture toughness include matrix modification, thermoplastic particles, nanomaterial veils, stitching, Z-pinning and 3D fibre architectures. These methods generally take advantage of well-known toughening mechanisms such as crack deflection, microcrack toughening and fibre/grain bridging. While trial-and-error experimental techniques are available to improve the fracture toughness, another emerging approach is the application of computational micromechanics. This approach is based on the finite element simulation of the mechanical response of a representative volume element (RVE) or an embedded cell of the composite microstructure. This makes it possible to (virtually) optimise the material properties by changing the properties of the constituents. It can also provide the homogenised constitutive behaviour of the composite material, which can then be transferred to simulations at a larger length scale \cite{Llorca2011,Tan2018,Herraez2018}.
Cohesive Zone Models (CZM) \cite{Camanho2002,Canal2012} and Continuum Damage Mechanics (CDM) models \cite{Chaboche1988a, Tan2015} are being extensively used in computational micromechanics. However, one source of mesh-dependence in CDM or CZM models is the mesh-induced directional bias. The misalignment between crack band direction and mesh lines induces stress locking because of the displacement continuity condition. A practical solution to mitigate mesh-induced directional bias is to align a refined mesh with the fibre direction \cite{Falco2018}, requiring complex mesh generation and a high computational cost. To overcome this issue, different enriched element formulations have been proposed, such as the eXtended FEM (X-FEM) \cite{Belytschko2009} and the Floating Point Method \cite{Chen2014a}. Despite their effectiveness, these techniques can also fail to track the actual crack path topology, particularly when crack coalescence and branching occur. A promising alternative for modelling the progressive failure of materials is the Phase Field (PF) fracture model \cite{Bourdin2000,Miehe2010a,TAFM2020}, which is gaining growing interest in the scientific community \cite{Wu2020}. In particular, this approach enables accurate simulation of complex crack paths, including crack branching and coalescence in arbitrary geometries and dimensions. The PF method is a variational approach to fracture that exploits the classical Griffith energy balance \cite{Griffith1920}; cracking takes place when the energy released by the solid reaches a critical value, the material toughness $G_c$. Recently, Quintanas-Corominas et al. \cite{Quintanas-Corominas2018,Quintanas-Corominas2019,Quintanas-Corominas2019a, Quintanas-Corominas2020a} and Espadas-Escalante et al. \cite{Espadas-Escalante2019} have successfully used the PF model to capture the intralaminar and interlaminar damage behaviours at the mesoscale level.
However, important phenomena governing the crack path topology and macroscopic fracture toughness remain unaddressed; these include the influence of fibre, matrix, and fibre-matrix interface, as well as other toughening or embrittlement mechanisms (i.e. fibre bridging, crack branching, voids, defects, etc).
In this work, a coupled PF-CZM framework is presented to model the matrix cracking, fibre-matrix interface debonding, and homogenised fracture toughness. Finite element simulations of single-edge notched three-point bending tests are conducted. The predicted results are validated against the measured crack path and load-displacement curves. The main novel aspects herein are: (i) For the first time, a combined PF-CZM model is used to predict the microscale crack propagation and investigate the debonding and matrix bridging behaviour. (ii) The effect of matrix toughness, interface strength and toughness on the crack trajectory and the R-curve is quantified for the first time. (iii) We explore the influence of microstructures with varying degrees of porosity on the fracture toughness. Our model opens new opportunities for the efficient and cost-effective design of energy-absorbing materials and structures.
\section{Numerical model}
\label{Sec:NumModel}
The formulation combines two fracture models. The phase field fracture method, capable of capturing arbitrary crack trajectories, is used to model crack initiation and growth along the matrix and the fibres. Furthermore, fibre-matrix debonding is simulated using a cohesive zone model. Both models are described below and implemented in the commercial finite element package ABAQUS by means of user element subroutines.
\subsection{Phase field fracture model}
The phase field fracture method builds upon Griffith's thermodynamics \cite{Griffith1920}; crack advance is driven by the competition between the work required to create a new surface and the strain energy released by the solid as the crack grows. Griffith's energy-based failure criterion can be expressed in variational form \cite{Francfort1998}. Thus, consider an arbitrary body $\Omega \subset {\rm I\!R}^n$ $(n \in[1,2,3])$ with internal discontinuity boundary $\Gamma$. The total potential energy of the body will be a sum of the contributions associated with the strain energy density $\psi$ and the fracture energy $G_c$ as,
\begin{equation}\label{eq:Egriffith}
\mathcal{E} \left( \bm{u} \right) = \int_\Omega \psi \left( \bm{\varepsilon} \left( \bm{u} \right) \right) \, \text{d}V + \int_\Gamma G_c \, \text{d}S \, ,
\end{equation}
\noindent where $\bm{u}$ and $\bm{\varepsilon}=\left( \nabla \bm{u}^T + \nabla \bm{u} \right)/2$ denote the displacement and strain fields, respectively. Minimisation of the Griffith energy functional (\ref{eq:Egriffith}) is hindered by the complexities associated with tracking the propagating fracture surface $\Gamma$. However, an auxiliary variable, the phase field $\phi$, can be used to track the crack interface; $\phi$ is a damage-like variable that goes from 0 in intact regions to 1 inside of the crack - see Fig. \ref{Fig:PFM}.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{PFMintro-eps-converted-to.pdf}
\caption{Schematic representation of a solid body with (a) internal discontinuity boundaries, and (b) a phase field approximation of the discrete discontinuities.}
\label{Fig:PFM}
\end{figure}
Following continuum damage mechanics arguments, a degradation function $g=(1-\phi)^2$ is defined that diminishes the stiffness of the material with evolving damage. Accordingly, the total potential energy functional can be re-formulated as
\begin{equation}
\mathcal{E}_\ell \left( \bm{u}, \phi \right) = \int_\Omega \left( 1 - \phi \right)^2 \psi \left( \bm{\varepsilon} \left( \bm{u} \right) \right) \, \text{d}V + \int_\Omega G_c \left( \frac{\phi^2}{2 \ell} + \frac{\ell}{2} |\nabla \phi|^2 \right) \, \text{d}V \, ,
\end{equation}
\noindent where $\ell$ is a length scale parameter that governs the size of the fracture process zone; the non-local character of the phase field method guarantees mesh objectivity. As rigorously proven using Gamma-convergence, the $(\bm{u}, \phi)$ sequence that constitutes a global minimum for the regularised functional $\mathcal{E}_\ell$ converges to that of $\mathcal{E}$ for a fixed $\ell \to 0^+$. Thus, $\ell$ can be interpreted as a regularising parameter in its vanishing limit. However, for $\ell>0^+$ a finite material strength is introduced and $\ell$ becomes a material property governing the strength \cite{Tanne2018}; e.g., for plane stress:
\begin{equation}
\sigma_f \propto \sqrt{\frac{G_c E}{\ell}} = \frac{K_{Ic}}{\sqrt{\ell}}
\end{equation}
\noindent where $K_{Ic}$ is the material fracture toughness.
Finally, the strong form can be readily derived by taking the first variation of $\mathcal{E}_\ell$ with respect to the primal kinematic variables and making use of Gauss' divergence theorem. Thus, the coupled field equations read,
\begin{align}\label{eqn:strongForm}
(1-\phi)^2 \, \, \nabla \cdot \boldsymbol{\sigma} &= \boldsymbol{0} \hspace{3mm} \rm{in} \hspace{3mm} \Omega \nonumber \\
G_{c} \left( \dfrac{\phi}{\ell} - \ell \Delta \phi \right) - 2(1-\phi) \, \psi &= 0 \hspace{3mm} \rm{in} \hspace{3mm} \Omega
\end{align}
\noindent The discretised forms of the field equations are solved by using a staggered solution scheme \cite{Miehe2010a,CPB2019}.
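To illustrate how the two field equations couple, the following Python sketch evaluates the phase field equilibrium at a single material point, neglecting the gradient term $\ell \Delta \phi$. This is a hypothetical 0-D simplification for illustration only, not the authors' finite element implementation; the function name and the numerical value of $\ell$ are our assumptions.

```python
def point_phase_field(eps, E, Gc, ell):
    """Homogeneous (0-D) solution of the phase field equation
    Gc*(phi/ell) - 2*(1 - phi)*psi = 0 (gradient term neglected),
    together with the degraded stress sigma = (1 - phi)^2 * E * eps."""
    psi = 0.5 * E * eps**2                    # elastic strain energy density
    phi = 2.0 * psi / (Gc / ell + 2.0 * psi)  # closed-form damage at this point
    sigma = (1.0 - phi) ** 2 * E * eps        # degraded stress
    return phi, sigma

# Epoxy-like properties from Section 3.1 (E = 3.5 GPa, Gc = 10 J/m^2);
# ell = 10 micrometres is an assumed value for illustration.
phi, sigma = point_phase_field(eps=5e-3, E=3.5e9, Gc=10.0, ell=10e-6)
```

At low strain $\phi \approx 0$ and the response is linear elastic; as $\psi$ grows, $\phi \to 1$ and the stress is progressively degraded, which is the interplay the staggered scheme resolves iteratively at the structural level.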
\subsection{Cohesive zone model}
Debonding between the matrix and the fibre is captured by means of a cohesive zone model with a bi-linear traction-separation law, as shown in Fig. \ref{Fig:CZM}. For both normal and shear tractions, the constitutive behaviour of the cohesive zone interface is governed by the initial interface modulus $K$, the interface strength $\sigma_I$ and the fracture energy $G_I$.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{CohesiveZoneSketch-eps-converted-to.pdf}
\caption{Sketch of the cohesive zone formulation employed for predicting fibre-matrix debonding.}
\label{Fig:CZM}
\end{figure}
Following Camanho and Davila \cite{Camanho2002}, an effective separation is introduced to describe the evolution of damage under a combination of normal and shear deformation
\begin{equation}
\delta_m = \sqrt{\langle \delta_n \rangle^2 + \delta_s^2 }
\end{equation}
The onset of damage is predicted in terms of the normal $t_n$ and shear $t_s$ tractions using a quadratic nominal stress criterion,
\begin{equation}
\left( \frac{\langle t_n \rangle}{\sigma_I^N} \right)^2 +\left( \frac{ t_s}{\sigma_I^S} \right)^2 =1
\end{equation}
\noindent Finally, damage evolution is governed by the energetic Benzeggagh-Kenane fracture criterion. Thus, the mixed-mode critical energy release rate $G_C$ will be attained when,
\begin{equation}
G_I^N+ \left( G_I^S- G_I^N \right) \left( \frac{G^S}{G^N + G^S} \right)^\eta = G_C
\end{equation}
\noindent where $\eta$ is a material parameter, and $G_I^N$ and $G_I^S$ respectively denote the fracture energies required to cause failure in the normal and shear directions.
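The three ingredients above can be sketched in a few lines of Python; the function names are ours and the snippet is a material-point illustration, not the authors' interface element.

```python
import math

def effective_separation(delta_n, delta_s):
    """delta_m = sqrt(<delta_n>^2 + delta_s^2); the Macaulay bracket
    removes the normal contribution under interpenetration."""
    return math.sqrt(max(delta_n, 0.0) ** 2 + delta_s ** 2)

def damage_initiated(t_n, t_s, sigma_N, sigma_S):
    """Quadratic nominal stress criterion for the onset of damage."""
    return (max(t_n, 0.0) / sigma_N) ** 2 + (t_s / sigma_S) ** 2 >= 1.0

def bk_critical_energy(G_N, G_S, GIN, GIS, eta):
    """Benzeggagh-Kenane mixed-mode critical energy release rate G_C."""
    return GIN + (GIS - GIN) * (G_S / (G_N + G_S)) ** eta

# Interface properties of Table 1: GIN = 125, GIS = 150 J/m^2, eta = 1.2;
# the normal/shear energy split (50/50) is an assumed loading state.
Gc_mixed = bk_critical_energy(G_N=50.0, G_S=50.0, GIN=125.0, GIS=150.0, eta=1.2)
```

For equal normal and shear contributions the criterion returns a mixed-mode toughness between the pure-mode values $G_I^N$ and $G_I^S$, recovering each of them in the pure-mode limits.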
\section{Results}
\label{Sec:FEMresults}
\subsection{Single-edge cracked plate subjected to tension}
To verify our PF model on the bulk matrix, we model a single-edge cracked plate with the geometric setup, dimensions and boundary conditions given in Fig. \ref{Fig.1}a. The square plate of width $H=$ 1 mm and height $W=$ 1 mm has an initial crack length of $a_0=$ 0.25 mm. We load the plate by prescribing the vertical displacement in the upper edge, and fix both vertical and horizontal displacements in the bottom boundary. We adopt the following epoxy material properties for the cracked plate, Young's modulus $E=$ 3.5 GPa, Poisson's ratio $\nu=$ 0.35, tensile strength $\sigma_{N}=$ 20 MPa and critical energy release rate $G_m=$ 10 J/m\textsuperscript{2}.
To assess the effect of fibre reinforcement on the crack propagation of composite material, we use the same geometry, dimensions and boundary conditions as above, except for the additional $f_g=$ 37.2 \% fibre reinforcements, see Fig. \ref{Fig.1}b. We use E-glass fibre of Young's modulus $E=$74 GPa, Poisson's ratio $\nu=$ 0.35 and critical energy release rate of $G_f=$ 13.5 J/m\textsuperscript{2}. Both glass fibre and epoxy matrix are assumed to be linear elastic, isotropic solids. A cohesive surface contact between fibre and matrix is defined and follows a traction-separation law with the properties given in Table \ref{table.1}, where the interfacial tensile strength is assumed to be two-thirds of the shear strength, $\sigma_I^N=2 \sigma_I^S/3$ \cite{Herraez2018}. To compare the effect of matrix cracking and interface debonding more directly, we choose two sets of material parameters: namely $\sigma_I^N \leq \sigma_m^N $ and $\sigma_I^N > \sigma_m^N $.
\begin{table}[H]
\caption{ Properties of fibre-matrix interface \cite{Herraez2018}}
\centering
\begin{tabular}{c c c c c c c}
\hline\hline
$\sigma_I^N$ (MPa) & $\sigma_I^S$ (MPa) & $K^N$ (GPa) & $K^S$ (GPa) & $G_I^N $ (J/m\textsuperscript{2}) & $G_I^S $ (J/m\textsuperscript{2}) & $\eta$\\ [0.5ex]
\hline
40 & 60 & 1000 & 1000 & 125 & 150 & 1.2\\ [1ex]
\hline
\end{tabular}
\label{table.1}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{Figure1.pdf}
\caption{Single-edge cracked plate subjected to tension: (a) Model setup for matrix, and (b) composite. (c) Stress-strain response of single-edge cracked plates made from bulk matrix or fibre-reinforced composite. (d) Crack propagation of matrix. Crack propagation of composite where (e) $\sigma_m^N \leq \sigma_I^N$ and (f) $\sigma_m^N > \sigma_I^N$.}
\label{Fig.1}
\end{figure}
Four-node quadrilateral plane strain elements were used. After a mesh sensitivity study, a fine mesh with a characteristic element size $h=$0.001 mm is used, eight times smaller than the phase field length scale \cite{CMAME2018}. When conducting the mesh sensitivity analysis, attention is paid to ensure that the fracture process zones associated with both the phase field and the cohesive zone model are resolved. In total, 11,221 and 42,361 elements are used for the matrix and composite models, respectively.
The predicted stress-strain responses of single-edge cracked plates made from bulk matrix and fibre-reinforced composites are summarised in Fig. \ref{Fig.1}c. The stress-strain curve of the matrix model shows a linear elastic behaviour until reaching the peak load. This is followed by a load drop, corresponding to the crack evolution, see Fig. \ref{Fig.1}d. If fibres are added to the matrix, a stiffening and toughening effect is observed on the overall material behaviour. If the fibre-matrix interface debonding initiates first ($\sigma_I^N \leq \sigma_m^N$), there is a notable non-linear behaviour prior to the load drop. Before reaching the peak load, a large number of fibre-matrix interfaces have experienced decohesion, which contributes to this non-linear response. The multi-step load drop in the softening regime is attributed to the coalescence of interface debonding and matrix cracking, see Fig. \ref{Fig.1}e. However, if the matrix cracking initiates first ($\sigma_I^N > \sigma_m^N$), no interfacial decohesion is observed. A linear elastic behaviour is predicted before the maximum load, followed by a zig-zag softening behaviour. This is mainly due to the crack deflection effect in the fibre-reinforced composites. Instead of a straight cracking trajectory, the crack propagating through the matrix will deflect upon encountering the fibres (Fig. \ref{Fig.1}f), hence increasing the fracture surface area and the macroscopic fracture toughness. To quantify the role of the fibres, we estimate an equivalent work of fracture as the area under the resulting stress-strain curve divided by the ligament crack surface area of 0.75 mm\textsuperscript{2}. We find that the composite with interface debonding has the highest work of fracture, 21.7 J/m\textsuperscript{2}, followed by the composite without interface debonding (20.4 J/m\textsuperscript{2}), with the bulk matrix giving 14.3 J/m\textsuperscript{2}.
Therefore, to improve the macroscopic fracture toughness of fibre-reinforced composites, the fibre-matrix interface strength should be reduced, consistent with most toughening approaches used in ceramic fibre-reinforced composites \cite{Jiang2018}. However, to improve the strength of fibre-reinforced composites, a high fibre-matrix interface strength is required.
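The work-of-fracture estimate above can be reproduced with a simple trapezoidal integration. In the sketch below (ours, not the authors' post-processing script), the stress-strain samples are hypothetical and merely tuned so the result lands near the bulk-matrix value of 14.3 J/m\textsuperscript{2} reported in the text.

```python
def work_of_fracture(strain, stress, volume, ligament_area):
    """Area under the stress-strain curve (trapezoidal rule, J/m^3),
    scaled by the specimen volume and divided by the ligament crack area."""
    u = 0.0
    for i in range(1, len(strain)):
        u += 0.5 * (stress[i] + stress[i - 1]) * (strain[i] - strain[i - 1])
    return u * volume / ligament_area

# Hypothetical triangular response; plate volume 1 mm^3, ligament 0.75 mm^2.
eps = [0.0, 0.0005, 0.0014]
sig = [0.0, 15e6, 0.0]            # Pa
Wf = work_of_fracture(eps, sig, volume=1e-9, ligament_area=0.75e-6)  # J/m^2
```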
\subsection{Single-edge notched three-point bending test}
We proceed to simulate three-point bending (TPB) experiments on a notched beam to predict the microscale crack topology and the matrix-dominated toughness of the composite lamina. This is achieved by means of an embedded cell model, following the approach developed in \cite{Herraez2018,Canal2012}. As shown in Fig. \ref{Fig.2}, the complete composite microstructure is resolved in the fracture process zone as an embedded cell, while the remaining ply material is represented as a homogeneous, transversely-isotropic elastic solid. The two regions share nodes at their interface, implying a continuous displacement field between the homogenised region and the embedded cell. We calculated the material constants of the homogenised region based on the Mori–Tanaka method \cite{Canal2012}. The Young's modulus is $E_h=$ 11 GPa and the Poisson's ratio is $\nu_h=$ 0.3. The sample dimensions and experimental setup are given in Fig. \ref{Fig.2}. A single edge-notched beam with a support span $L=11.2$ mm, equal to four times the width $W$, is loaded in three-point bending. The thickness of the beam is $t=$ 2 mm. The initial crack length is $a_0=$ 1.4 mm. Inside the embedded cell, the randomly distributed glass fibres of volume fraction $f_g=$ 54 \% are surrounded by epoxy matrix. Fibre diameter ranges from 13 $\si{\micro\metre}$ to 17 $\si{\micro\metre}$. The characteristic element size is set to 1 $\si{\micro\metre}$ in the embedded region and gradually grows to 0.2 mm at the outer edges. The whole model is formed by 152,364 four-node plane strain elements. The fibre, matrix and fibre-matrix interface properties used in the previous section were taken as baseline input parameters. The applied load $P$, the loading point displacement $\delta$ and the crack mouth opening displacement (CMOD), $\Delta$, were continuously recorded during the virtual tests.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{Figure2.pdf}
\caption{Model set-up of the single-edge notched three-point bending tests. All dimensions are in mm.}
\label{Fig.2}
\end{figure}
The predictions of the virtual three-point bending test are shown in Fig. \ref{Fig.3}. First, the measured \cite{Canal2012} and simulated load-CMOD curves are plotted in Fig. \ref{Fig.3}a using the embedded cell model presented above. The numerical model accurately captures the measured behaviour including the linear-elastic response of the beam before the peak load, the CMOD at the maximum load and the softening regime of the curve. The maximum load is slightly underestimated (around 10\%), within the experimental scatter. In addition to the load-CMOD response, the model is able to reproduce the microscopic deformation and failure mechanisms, see Fig. \ref{Fig.3}b. In agreement with what is observed in the scanning electron micrographs, damage began by interface debonding at the outer surface of the fibres. Cracks propagated along the fibre–matrix interface and voids grew by distinct interface separation. A continuous crack path was finally developed by the coalescence of matrix cracking and interface decohesion, while a significant amount of matrix ligaments were bridging the crack.
The numerical simulations also precisely capture the crack evolution with increasing remote load. This is shown in Fig. \ref{Fig.3}c, where snapshots of scanning electron micrographs for different values of the CMOD are plotted together with the predicted results.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{Figure3.pdf}
\caption{Measured \cite{Canal2012} and predicted (a) load-CMOD curve, (b) crack propagation at high magnification and (c) crack propagation at low magnification.}
\label{Fig.3}
\end{figure}
Cyclic loading was also applied to the TPB specimen to investigate the unloading and reloading behaviour and to calculate the unloading compliance $C=\delta/P$. This is facilitated by the linear elastic fracture response of composite materials, as confirmed by the unloading response to the origin shown in Fig. \ref{Fig.4}a - no plastic effects have been considered. Thus, we follow the ASTM standard \cite{ASTM1820} to calculate the R-curve. In brief, the elastic compliance is used to calculate the effective crack size $a_e$, which is then used to calculate the geometrical correction factor $f(a_e/W)$. The stress intensity factor is then given by $K=P S(BW^{3/2})^{-1}f(a_e/W)$, where $S$ is the support span and $B$ the specimen thickness. Finally, the $J$-integral is estimated by substituting $K$ into the plane strain equation below,
\begin{equation}\label{eqn.1}
J=\frac{K^2(1-\nu^2)}{E} \, .
\end{equation}
\noindent The change in $J$ with crack extension determines the R-curve.
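Under the stated assumptions, this compliance-based procedure can be sketched as follows. The geometry function used here is the standard ASTM expression for a single-edge notched bend specimen with $S/W = 4$, and the applied load value is an assumed illustration rather than a measured one.

```python
import math

def f_seb(x):
    """Standard ASTM geometry factor f(a/W) for an SE(B) specimen, S/W = 4."""
    num = 3.0 * math.sqrt(x) * (1.99 - x * (1.0 - x)
                                * (2.15 - 3.93 * x + 2.7 * x ** 2))
    return num / (2.0 * (1.0 + 2.0 * x) * (1.0 - x) ** 1.5)

def j_from_load(P, S, B, W, a_e, E, nu):
    """K = P*S*(B*W^{3/2})^{-1} * f(a_e/W);  J = K^2*(1 - nu^2)/E."""
    K = P * S / (B * W ** 1.5) * f_seb(a_e / W)
    return K ** 2 * (1.0 - nu ** 2) / E

# Beam of Fig. 2: S = 11.2 mm, W = 2.8 mm, B = 2 mm, a0 = 1.4 mm;
# homogenised E = 11 GPa, nu = 0.3; P = 50 N is an assumed load.
J = j_from_load(P=50.0, S=11.2e-3, B=2e-3, W=2.8e-3, a_e=1.4e-3, E=11e9, nu=0.3)
```

Repeating this evaluation at each unloading point, with $a_e$ updated from the measured compliance, traces out the R-curve.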
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{Figure4.pdf}
\caption{(a) Measured load-CMOD response \cite{Canal2012} is compared with the predicted loading-unloading response (b) Measured \cite{Canal2012} and predicted R-curves.}
\label{Fig.4}
\end{figure}
The measured and predicted R-curves are plotted in Fig. \ref{Fig.4}b. Predictions for the R-curve response of the TPB test agree closely with those measured in the experiments. The rising R-curves observed both in the experiments and in the numerical predictions are attributed to the bridging effect from the matrix ligaments and the softening behaviour of fibre-matrix interface decohesion.
\subsubsection{Sensitivity study}
The fibre, matrix, and interface properties used in the previous section were taken as baseline values and a parametric study was carried out by simulating the mechanical response of the single-notched beam bending test for different values of the phase field length scale $\ell$, matrix fracture energy release rate $G_m$, interface mode I fracture energy release rate $G_I^N$ and interface normal strength $\sigma_I^N$. The load-CMOD responses of these parametric analyses are plotted in Fig. \ref{Fig.5}. Fig. \ref{Fig.5}a shows that reducing the value of $\ell$ elevates the force-displacement response; in all cases, a constant ratio $\ell/h=8$ is adopted to ensure mesh-independent results. This can be rationalised by recalling the relation between the phase field length scale and the material strength: $\sigma_c=\sqrt{27EG_c/(256 \ell)}$ (see, e.g., \cite{CMAME2018}). However, the influence of $\ell$ appears to be small. In agreement with fracture mechanics, phase field predicts a strength-dominated behaviour (i.e., sensitive to the choice of $\ell$) when the initial defect is smaller than the transition flaw size, and a fracture-dominated response (i.e., governed by $G_c$) for larger cracks \cite{Tanne2018}. In elastic-plastic materials, cracking always takes place at $G=G_c$ if the initial flaw is sufficiently large but the dissipation (R-curve) is influenced by $\ell$ \cite{JMPS2020}. As expected, both the peak load and the CMOD at the maximum load increase with increasing $G_m$, see Fig. \ref{Fig.5}b. However, the interface fracture toughness has a relatively small effect on the load-CMOD responses, see Fig. \ref{Fig.5}c. Therefore, to enhance the overall fracture toughness, increasing the matrix toughness is more effective than increasing the interface fracture toughness. This supports the trend of using thermoplastic materials for high fracture toughness applications \cite{Tan2016}.
The interface normal strength $\sigma_I^N$ has a significant impact on the maximum load, which correlates to the initiation of fibre-matrix interface debonding. From the above analysis, we can conclude that interface strength $\sigma_I^N$ determines the peak load, while both the matrix and the interface fracture toughness contribute to the softening behaviour of the overall mechanical response.
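As a worked example of the strength-length scale relation quoted above, inverting $\sigma_c=\sqrt{27EG_c/(256 \ell)}$ for the epoxy properties of Section 3.1 gives a length scale of roughly 9 $\si{\micro\metre}$. The short snippet below (ours, for illustration) performs this arithmetic.

```python
def at2_length_scale(E, Gc, sigma_c):
    """Invert sigma_c = sqrt(27*E*Gc/(256*ell)) for the length scale ell."""
    return 27.0 * E * Gc / (256.0 * sigma_c ** 2)

# Epoxy matrix of Section 3.1: E = 3.5 GPa, Gc = 10 J/m^2, sigma_c = 20 MPa.
ell = at2_length_scale(E=3.5e9, Gc=10.0, sigma_c=20e6)   # about 9.2e-6 m
```

Note how the length scale shrinks quadratically with the target strength, which is why resolving strong, brittle constituents demands very fine meshes.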
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{Figure5.pdf}
\caption{Sensitivity study of the parameters used in the simulations: (a) phase field length scale $\ell$, (b) matrix fracture toughness $G_m$, (c) interface fracture toughness $G_I^N$ and (d) interface strength $\sigma_I^N$. }
\label{Fig.5}
\end{figure}
\subsubsection{The effect of porosity}
After validating our model against the experimental results, we shall now proceed to explore the effect on the fracture behaviour of other microstructures, such as those arising from an increase in porosity or voids. In a composite material, a void is a pore that remains unfilled with polymer and fibres. Voids are typically the result of poor manufacturing of the material and are generally treated as defects as they can degrade matrix-dominated properties such as interlaminar shear strength, transverse tensile strength and longitudinal compressive strength, hence affecting the overall mechanical properties. The effect of porosity on strength has been assessed by Vajari \textit{et al.} \cite{Vajari2014}. Here, we quantify the influence of voids on both strength and fracture toughness. To achieve this, we introduce pores into the baseline model, with the porosity ranging from $f_p=$ 2\% to $f_p=$ 10\%. The porosity is represented by circular voids within the matrix and all the other conditions are kept the same. 2D models can provide quantitative insight into the role of porosity as pores in unidirectional plies have a tubular shape \cite{Hernandez2011}. The resulting crack trajectories for selected porosity levels are shown in Fig. \ref{Fig.6}a. Crack blunting was observed during the fracture process. The crack paths appear to be very sensitive to the porosity level. In addition, as shown in Fig. \ref{Fig.6}b, both modulus and strength decrease with increasing volume fraction of porosity. The strength is reduced by approximately 17\% in the presence of 10\% porosity. A similar degradation was measured by Olivier \textit{et al.} \cite{Olivier1995} and predicted by Vajari \textit{et al.} \cite{Vajari2014}. Figure \ref{Fig.6}c shows how the R-curve of fibre-reinforced composites changes from `flat'-type to `rising'-type with increasing porosity.
For the sample with higher porosity (10\%), the fracture toughness rises continuously with crack advance, exhibiting a more stable crack growth. The sample with 10\% porosity has a 37\% higher fracture toughness compared to the sample without porosity for $\Delta a=0.8$ mm. This toughening effect can be attributed to the circular holes that blunt the crack-tip and increase the fracture toughness \cite{Liu2020}. This finding differs from the effect associated with manufacturing induced defects, where voids degrade the mechanical behaviour \cite{Tan2018}. It should be noted that manufacturing-induced voids are commonly not regular and are more likely to be located close to fibre-matrix interface; hence reducing the interface and the macroscopic fracture toughnesses. For this virtual test case, all the voids have a regular circular shape and are located at the matrix pocket. Therefore, crack blunting effects are enabled.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\textwidth]{Figure6.pdf}
\caption{The role of porosity on: (a) the crack trajectory, (b) the load-CMOD response, and (c) the fracture resistance R-curves.}
\label{Fig.6}
\end{figure}
\section{Conclusions}
\label{Sec:ConcludingRemarks}
In this work, we present a novel coupled phase field and cohesive zone model to explore the effect of microstructure and constituent properties on the macroscopic fracture toughness. Several boundary value problems of particular interest are modelled to showcase its capabilities and gain physical insight.
First, an analysis of simple single-edge cracked plate tension tests on fibre-reinforced composites suggests that a weak fibre-matrix interface strength will raise the fracture toughness but reduce the material strength. Secondly, the model is validated against single-edge notched beam bending experiments. Our predictions exhibit an excellent correlation with the experimental results both qualitatively and quantitatively. Subsequent parametric analyses suggest that increasing the matrix toughness is a more effective toughening mechanism than enhancing the interface fracture toughness. Finally, the influence of different microstructures with varying porosity levels is investigated to determine optimal toughening strategies. We show that introducing a volume fraction of void inclusions in the matrix-resin regions can enhance the composite fracture toughness due to crack blunting effects.
This embedded cell-based, combined phase field and cohesive zone computational framework provides a compelling multiscale virtual tool to investigate the role of the microstructure and material properties. This will lead to more efficient and rapid designs for enhancing the fracture toughness of energy-absorbing materials and structures.
\section{Acknowledgements}
\label{Sec:Acknowledgeoffunding}
W. Tan acknowledges financial support from the European Commission Graphene Flagship Core Project 3 (GrapheneCore3) under grant No. 881603. E. Mart\'{\i}nez-Pa\~neda acknowledges financial support from the EPSRC (grants EP/R010161/1 and EP/R017727/1) and from the Royal Commission for the 1851 Exhibition (RF496/2018).
\bibliographystyle{elsarticle-num}
This paper is concerned with the rigidity of Dolbeault-type
operators on compact, almost-complex manifolds and its relationship
to symplectic circle actions.
Let $(M^{2n},\omega)$ be a compact, connected symplectic manifold of
dimension $2n$. A circle action is called \emph{symplectic} if it
preserves the symplectic form $\omega$ or, equivalently, if the one
form $\omega(X,\cdot)$ is closed, where $X$ is the generating vector
field of this circle action. This symplectic circle action is called
\emph{Hamiltonian} if $\omega(X,\cdot)$ is exact, i.e.,
$\omega(X,\cdot)=df$ for some smooth function $f$ on $M$. We call
$f$ the \emph{moment map} of this action, which is unique up to a
constant.
An obvious necessary condition for a symplectic circle action to be
Hamiltonian is to have \emph{non-empty} fixed points corresponding
to the critical points of $f$ (the minimum and the maximum of $f$
must be fixed points). In the case of K\"{a}hler manifolds \cite{Fr}
and of four-dimensional symplectic manifolds \cite{Mc}, this
condition is also sufficient. However, this is not true for general
higher dimensional symplectic manifolds. In fact McDuff \cite{Mc}
constructed a six-dimensional symplectic manifold with a symplectic
circle action which has fixed points, but which is not Hamiltonian.
Note that the fixed point sets of the counterexamples in \cite{Mc}
are tori. Hence one possible conjecture is that a symplectic action
with \emph{isolated} fixed points must be Hamiltonian. Some
partially affirmative results have been obtained: Tolman-Weitsman
\cite{TW} showed that this conjecture holds for semifree circle
actions (an action is called \emph{semifree} if it is free outside
the fixed points); Godinho \cite{Go} showed that this holds on
certain circle actions on six-dimensional symplectic manifolds with
specified weights on the fixed points.
McDuff \cite{Mc} showed that a symplectic circle action is
Hamiltonian \emph{if and only if} there is a connected component of
the fixed point set such that all the weights of the representation
of the circle action on the normal bundle of the component are
positive. Therefore in order to show that a symplectic circle action
is actually Hamiltonian, it suffices to show that there exists a
connected component of the fixed point set satisfying this
condition. The main ideas of the proof in \cite{TW} and \cite{Go}
are both based on this argument. In the isolated fixed points case,
Fel'dman \cite{Fe} refined McDuff's observation to show that the
Todd genus of a manifold admitting a symplectic circle action with
isolated fixed points is either 0, in which case the action is
non-Hamiltonian, or 1, in which case the action is Hamiltonian.
In a recent paper of Pelayo and Tolman \cite{PT}, the authors showed
that if a compact symplectic manifold admits a symplectic (not
necessarily Hamiltonian) circle action with isolated fixed points,
then the weights on the fixed points must satisfy some restrictions
(\cite{PT}, Theorem 2). We would like to point out that Theorems 1
and 3 in \cite{PT} are closely related to some much earlier work of
Kosniowski (\cite{Ko2}, \cite{Ko3}). Theorem 1 in \cite{PT} is
related to a conjecture of Kosniowski (\cite{Ko3}, p.338, Conjecture
A). Theorem 3 in \cite{PT} has been obtained in (\cite{Ko2}, Theorem
2) for complex manifolds and in (\cite{Ko3}, p.337) in the more
general case. The forms of the weights on the two fixed points are
hidden in the last paragraph of \cite{Ko2}.
Our paper is inspired by the above-mentioned results. This paper is
organized as follows. In Section 2, we consider some Dolbeault-type
elliptic operators on compact, almost-complex manifolds. If an
almost-complex manifold admits a circle action compatible with the
almost-complex structure, we can define the equivariant indices of
these operators under this circle action. Then following an idea of
Lusztig, Atiyah-Hirzebruch and Kosniowski, we will prove the
invariance of the relevant equivariant indices of these
Dolbeault-type operators under circle actions having isolated fixed
points. This is the meaning of the word \emph{rigidity} in our
title. When an almost-complex manifold admits a compatible circle
action with isolated fixed points, this rigidity result immediately
produces many identities concerning the weights on the fixed points.
In particular, combining with Fel'dman's result \cite{Fe}, we give a
criterion to determine whether or not a symplectic circle action is
Hamiltonian.
In Section 3, as the first application of our result, we give a
simple and unified new proof of Godinho's result \cite{Go}, of which
the original proof is very complicated (see the whole Section 3 of
\cite{Go}). In fact our conclusion is more general than that of
Godinho. As the second main application, we also generalize Pelayo
and Tolman's above-mentioned result to circle actions on
almost-complex manifolds.
\begin{remark}
Given an elliptic operator, we say it is \emph{rigid} if, under
\emph{any} circle action, the corresponding equivariant index of
this operator is invariant. A survey of the results concerning the
rigidity of some important elliptic operators can be found in
Section $1$ of \cite{BT}.
\end{remark}
\section{The rigidity of Dolbeault-type operators and main results}
Let $(M^{2n},J)$ be a compact, almost-complex manifold of real
dimension $2n$ with an almost-complex structure $J$. The choice of
an almost Hermitian metric on $M$ enables us to define the Hodge
star operator $\ast$ and the formal adjoint
$\bar{\partial}^{\ast}=-\ast\bar{\partial}~\ast$ of the
$\bar{\partial}$-operator (\cite{GH}, p.80). Then for each $0\leq
p\leq n$, there is an elliptic differential operator (cf. \cite{LM},
p.258, Example 13.14)
\be\label{GDC}\bigoplus_{q~\textrm{even}}\Omega^{p,q}(M)\xrightarrow{\bar{\partial}+\bar{\partial}^{\ast}}\bigoplus_{q
~\textrm{odd}}\Omega^{p,q}(M),\ee where
$\Omega^{p,q}(M):=\Gamma(\Lambda^{p}T^{\ast}M\otimes\Lambda^{q}\overline{T^{\ast}M}).$
Here $T^{\ast}M$ is the dual of the holomorphic tangent bundle $TM$ in
the sense of $J$. The index of this operator is denoted by
$\chi^{p}(M)$
($=\textrm{dim}_{\mathbb{C}}\textrm{ker}(\bar{\partial}+\bar{\partial}^{\ast})-\textrm{dim}_{\mathbb{C}}\textrm{coker}(\bar{\partial}+\bar{\partial}^{\ast})$)
in the notation of Hirzebruch \cite{Hi}. We define the Hirzebruch
$\chi_{y}$-genus $\chi_{y}(M)$ by
$$\chi_{y}(M)=\sum_{p=0}^{n}\chi^{p}(M)\cdot y^{p}.$$
\begin{remark}\begin{enumerate}
\item When
$J$ is integrable, i.e., $M$ is an $n$-dimensional complex manifold,
$\chi^{p}(M)$ equals the index of the following well-known
Dolbeault complex
\be\label{DC}0\rightarrow\Omega^{p,0}(M)\xrightarrow{\bar{\partial}}\Omega^{p,1}(M)\xrightarrow{\bar{\partial}}\cdots\xrightarrow{\bar{\partial}}\Omega^{p,n}(M)\rightarrow
0\ee and hence $\chi^{p}(M)=\sum_{q=0}^{n}(-1)^{q}h^{p,q}(M),$ where
the $h^{p,q}(M)$ are the corresponding Hodge numbers of $M$.
\item
For a general almost-complex manifold, $\bar{\partial}^{2}$ is not
identically zero (it is a well-known fact that
$\bar{\partial}^{2}\equiv 0$ is equivalent to the integrability of
$J$). So we cannot define the Dolbeault complex (\ref{DC}).
Therefore (\ref{GDC}) may be taken as the Dolbeault-type complex in
the almost-complex case.
\item
Using the general form of the Riemann-Roch-Hirzebruch theorem (first
proved by Hirzebruch for projective manifolds \cite{Hi}, and in the
general case by Atiyah and Singer \cite{AS}), we have
$$\chi^{p}(M)=<\textrm{ch}(\Lambda^{p}T^{\ast}M)\textrm{td}(TM),[M]>,$$
where $\textrm{ch}(\cdot)$ is the Chern character and
$\textrm{td}(\cdot)$ is the Todd class. $[M]$ is the fundamental
class of $M$ induced from $J$ and $<\cdot,\cdot>$ is the Kronecker
pairing.
\end{enumerate}
\end{remark}
Now suppose $M$ admits a circle action ($S^{1}$-action) preserving
the almost-complex structure $J$. Then for any $g\in S^{1}$, we can
define the equivariant index $\chi^{p}(g,M)$ (resp. equivariant
Hirzebruch $\chi_{y}$-genus
$\chi_{y}(g,M):=\sum_{p=0}^{n}\chi^{p}(g,M)y^{p}$) of the elliptic
operators in (\ref{GDC}) by choosing an invariant almost Hermitian
metric under this circle action. Note that $\chi^{p}(g,M)$ is a
\emph{finite} Laurent series in $g$ as both
$\textrm{ker}(\bar{\partial}+\bar{\partial}^{\ast})$ and
$\textrm{coker}(\bar{\partial}+\bar{\partial}^{\ast})$ are
finite-dimensional.
Moreover, we assume the fixed points of this action are non-empty
and isolated, say $P_{1},\cdots,P_{m}$. At each $P_{i}$, there are
$n$ well-defined integer weights $k^{(i)}_{1},\cdots,k^{(i)}_{n}$
(not necessarily distinct) induced from the isotropy representation
of this $S^{1}$-action on $T_{P_{i}}M$. Note that these
$k^{(i)}_{1},\cdots,k^{(i)}_{n}$ are \emph{nonzero} as the fixed
points are isolated. We use $e_{i}(x_{1},\cdots,x_{n})$ ($1\leq
i\leq n$) to denote the $i$-th elementary symmetric polynomial of
$x_{1},\cdots,x_{n}$ and $e_{0}(x_{1},\cdots,x_{n}):=1$. With these
notations understood, the main observation in this paper is the
following
\begin{theorem}\label{main theorem}
Let $(M^{2n},J)$ be a compact, connected, almost-complex manifold
with a circle action preserving the almost-complex structure $J$.
Suppose the fixed points of this action are non-empty and isolated.
Let the notations be as above. Then for each $0\leq p\leq n$, the
expression
$$\sum_{i=1}^{m}\frac{e_{p}(g^{k^{(i)}_{1}},\cdots,g^{k^{(i)}_{n}})}{\prod_{j=1}^{n}(1-g^{k_{j}^{(i)}})}$$
is a constant equal to $\chi^{p}(M)$. Here $g$ is an
indeterminate.
\end{theorem}
Note that $\chi^{0}(M)$ is nothing else but the Todd genus of $M$.
Hence combining with Fel'dman's observation (\cite{Fe}, Theorems 1
and 2) mentioned in the Introduction, we have
\begin{theorem}
If $M$ is a symplectic manifold and the circle action is symplectic,
then the
expression
$$\sum_{i=1}^{m}\frac{1}{\prod_{j=1}^{n}(1-g^{k_{j}^{(i)}})}$$
is either $0$, in which case the action is non-Hamiltonian, or $1$,
in which case the action is Hamiltonian.
\end{theorem}
Our proof follows the exposition of Section 5.7 in \cite{HBJ}.
Although the considerations of Sections 5.6 and 5.7 in \cite{HBJ}
are for compact complex manifolds, we will see that, when replacing
the Dolbeault complexes (\ref{DC}) by the elliptic complexes
(\ref{GDC}), the proof can also be applied to the situation of
almost-complex manifolds with no more difficulties.
\emph{Proof of Theorem \ref{main theorem}.}
Let $g\in S^{1}$ be a topological generator, i.e., the closure of
$\{g^{r}~|~r\in\mathbb{Z}\}$ is the whole $S^{1}$. Then the fixed
point set of the action $g$ is exactly $\{P_{1}, \cdots,P_{m}\}$. We
note that the characteristic power series corresponding to the
Hirzebruch $\chi_{y}$-genus is $x\frac{1+ye^{-x}}{1-e^{-x}}$. Then
the Lefschetz fixed point formula of Atiyah-Bott-Segal-Singer
(\cite{AS}, p.256) tells us that
\be\label{LOC}\chi_{y}(g,M)=\sum_{i=1}^{m}\prod_{j=1}^{n}\frac{1+yg^{k^{(i)}_{j}}}{1-g^{k^{(i)}_{j}}}.\ee
Note that the left hand side (LHS) of (\ref{LOC}) is a \emph{finite}
Laurent series in $g$. Hence the only possible singularities are $0$
and $\infty$. While \be\label{lim}\lim_{g\rightarrow
0}\prod_{j=1}^{n}\frac{1+yg^{k^{(i)}_{j}}}{1-g^{k^{(i)}_{j}}}=(-y)^{d_{i}},~\lim_{g\rightarrow\infty}\prod_{j=1}^{n}\frac{1+yg^{k^{(i)}_{j}}}{1-g^{k^{(i)}_{j}}}=(-y)^{n-d_{i}}.\ee
Here $d_{i}$ is the number of negative integers among $k_{1}^{(i)},
\cdots,k_{n}^{(i)}$. By (\ref{lim}), the RHS of (\ref{LOC}), and
hence the LHS of (\ref{LOC}), has well-defined limits at
$g=0,\infty$. Therefore $\chi_{y}(g,M)$ must be constant in $g$ and
$$\chi_{y}(g,M)\equiv\chi_{y}(id,M)=\chi_{y}(M),$$
which means
\be\label{identity}\chi_{y}(M)=\sum_{i=1}^{m}\prod_{j=1}^{n}\frac{1+yg^{k^{(i)}_{j}}}{1-g^{k^{(i)}_{j}}}\ee
holds for a dense subset of $S^{1}$ (the topological generators in
$S^{1}$ are dense). So this must be an identity in the indeterminate
$g$. Comparing the corresponding coefficients of (\ref{identity}),
we have
$$\chi^{p}(M)=\sum_{i=1}^{m}\frac{e_{p}(g^{k^{(i)}_{1}},\cdots,g^{k^{(i)}_{n}})}{\prod_{j=1}^{n}(1-g^{k_{j}^{(i)}})},~~0\leq p\leq n.$$
This completes the proof of Theorem \ref{main theorem}.
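As a concrete check of the identity (\ref{identity}), one can evaluate the fixed-point sum for the standard coordinate circle actions on $\mathbb{CP}^{1}$ and $\mathbb{CP}^{2}$, whose weight data are classical: $\pm 1$ at the two fixed points of $\mathbb{CP}^{1}$, and $(1,2)$, $(-1,1)$, $(-2,-1)$ at the three coordinate fixed points of $\mathbb{CP}^{2}$ for the action $t\cdot[z_{0}:z_{1}:z_{2}]=[z_{0}:tz_{1}:t^{2}z_{2}]$. The following Python sketch uses exact rational arithmetic to confirm that the sum is independent of $g$ and equals $\chi_{y}$:

```python
from fractions import Fraction as F

def chi_y_sum(weights, g, y):
    """Fixed-point sum  sum_i prod_j (1 + y g^{k_j}) / (1 - g^{k_j})."""
    total = F(0)
    for wts in weights:              # one weight list per fixed point
        term = F(1)
        for k in wts:
            gk = g ** k
            term *= (1 + y * gk) / (1 - gk)
        total += term
    return total

cp1 = [[1], [-1]]                    # CP^1, standard rotation
cp2 = [[1, 2], [-1, 1], [-2, -1]]    # CP^2, t.[z0:z1:z2]=[z0:t z1:t^2 z2]

y = F(3)
for g in (F(1, 2), F(2, 5), F(7, 3)):
    assert chi_y_sum(cp1, g, y) == 1 - y           # chi_y(CP^1)
    assert chi_y_sum(cp2, g, y) == 1 - y + y * y   # chi_y(CP^2)
print("fixed-point sums are g-independent and equal chi_y")
```

The values agree with (\ref{LOC2}): $\chi_{y}(\mathbb{CP}^{1})=1-y$ and $\chi_{y}(\mathbb{CP}^{2})=1-y+y^{2}$, i.e. $\sum_{i}(-y)^{d_{i}}$ with $d_{i}$ the number of negative weights at $P_{i}$.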
\begin{remark}
\begin{enumerate}
\item
Let $N_{p}$ denote the number of fixed points of the circle action
with exactly $p$ negative weights. From the proof we know that
\be\label{LOC2}
\begin{split}
\chi_{y}(M)&
=\sum_{i=1}^{m}(-y)^{d_{i}}=\sum_{i=1}^{m}(-y)^{n-d_{i}}\\
& =\sum_{p=0}^{n}N_{p}(-y)^{p}=\sum_{p=0}^{n}N_{p}(-y)^{n-p}.
\end{split}\ee
Hence\be\label{relation}\chi^{p}(M)=(-1)^{p}N_{p}=(-1)^{p}N_{n-p}=(-1)^{n}\chi^{n-p}(M).\ee
\item
As pointed out in Section 5.7 of \cite{HBJ}, this idea is
essentially due to Lusztig \cite{Lu}, Kosniowski \cite{Ko} and
Atiyah-Hirzebruch \cite{AH}. The former two used it to derive the
localization formula (\ref{LOC2}) for complex manifolds. Atiyah and
Hirzebruch used this idea in \cite{AH} to get their famous vanishing
theorem of $\hat{A}$-genus on spin manifolds.
\item
The localization formula (\ref{LOC2}) has been generalized to more
general cases (cf. \cite{HT} and \cite{KY}) since its first
appearance in \cite{Ko} on complex manifolds. To the author's
knowledge, in the case of almost-complex manifolds, the relations
between $\chi^{p}(M)$ and the weights as in Theorem \ref{main
theorem} have not been explicitly pointed out before. In fact, we
will see in the next section that, with these relations set up, we
can give many applications in related areas.
\end{enumerate}
\end{remark}
\section{Applications}
\subsection{}
Since the appearance of \cite{Mc}, symplectic circle actions on
six-dimensional symplectic manifolds have received a great deal of
attention due to the rich structures and possibilities in this
dimension. In \cite{Go}, Godinho showed that certain symplectic
circle actions on six-dimensional manifolds must be Hamiltonian.
More precisely, let the circle act symplectically on a
six-dimensional compact, connected, symplectic manifold with
non-empty isolated fixed points whose isotropy weights are of the
form $(\pm k_{1},\pm k_{2},\pm k_{3})$ for fixed integers $k_{1}\geq
k_{2}\geq k_{3}\geq 1$. Let $N_{0}$ and $N_{3}$ be the numbers of
the fixed points with the weights $(k_{1},k_{2},k_{3})$ and
$(-k_{1},-k_{2},-k_{3})$ respectively. Let $s_{1}, s_{2}, s_{3},
t_{1}, t_{2}$ and $t_{3}$ be the numbers of fixed points with weights
$(-k_{1},k_{2},k_{3}),$ $(k_{1},-k_{2},k_{3})$,
$(k_{1},k_{2},-k_{3})$, $(k_{1},-k_{2},-k_{3})$,
$(-k_{1},k_{2},-k_{3})$ and $(-k_{1},-k_{2},k_{3})$ respectively.
The following result is an extension of Godinho's results (cf.
\cite{Go}, Theorems 1.1, 1.2, 3.2 and 3.4):
\begin{theorem}
Let the circle act symplectically on a six-dimensional compact,
connected, symplectic manifold with non-empty isolated fixed points
and let the notations be as above, then there are exactly two
possibilities:
\begin{enumerate}
\item
$N_{0}=N_{3}=s_{2}=s_{3}=t_{2}=t_{3}=0$, $s_{1}=t_{1}\geq 1$ and
$k_{1}=k_{2}+k_{3}$,
in which case the action is non-Hamiltonian;
\item
$N_{0}=N_{3}=1$ and \be\begin{split} &
(g^{k_{3}}+g^{k_{2}}+g^{k_{1}})+(t_{1}g^{k_{2}+k_{3}}+t_{2}g^{k_{1}+k_{3}}+t_{3}g^{k_{1}+k_{2}})\\
=&(s_{3}g^{k_{3}}+s_{2}g^{k_{2}}+s_{1}g^{k_{1}})+(g^{k_{2}+k_{3}}+g^{k_{1}+k_{3}}+g^{k_{1}+k_{2}}),
\end{split}\nonumber\ee
in which case the action is Hamiltonian. Here of
course $g$ is an indeterminate.
\end{enumerate}
\end{theorem}
\begin{remark}
When $k_{1}\neq k_{2}+k_{3}$, case (1) will
lead to (\cite{Go}, Theorems 1.1 and 3.2) and case (2) will lead to
(\cite{Go}, Theorems 1.2 and 3.4).
\end{remark}
\begin{proof}
Note that we have the following expression for the Todd genus
$\chi^{0}(M)$:
$$\chi^{0}(M)=\frac{N_{0}-(s_{1}g^{k_{1}}+s_{2}g^{k_{2}}+s_{3}g^{k_{3}})+(t_{1}g^{k_{2}+k_{3}}+t_{2}g^{k_{1}+k_{3}}+t_{3}g^{k_{1}+k_{2}})-N_{3}g^{k_{1}+k_{2}+k_{3}}}{(1-g^{k_{1}})(1-g^{k_{2}})(1-g^{k_{3}})}.$$
From Theorem 2.2 we have either
$$N_{0}-(s_{1}g^{k_{1}}+s_{2}g^{k_{2}}+s_{3}g^{k_{3}})+(t_{1}g^{k_{2}+k_{3}}+t_{2}g^{k_{1}+k_{3}}+t_{3}g^{k_{1}+k_{2}})-N_{3}g^{k_{1}+k_{2}+k_{3}}=0,$$
in which case the action is non-Hamiltonian, or \be\begin{split}
& N_{0}-(s_{1}g^{k_{1}}+s_{2}g^{k_{2}}+s_{3}g^{k_{3}})+(t_{1}g^{k_{2}+k_{3}}+t_{2}g^{k_{1}+k_{3}}+t_{3}g^{k_{1}+k_{2}})-N_{3}g^{k_{1}+k_{2}+k_{3}}\\
=& (1-g^{k_{1}})(1-g^{k_{2}})(1-g^{k_{3}}),
\end{split}\nonumber\ee
in which case the action is Hamiltonian.
\end{proof}
We can also reproduce Tolman-Weitsman's following result (\cite{TW},
Theorem 1).
\begin{theorem}[Tolman-Weitsman]
Let $M^{2n}$ be a compact, connected symplectic manifold, equipped
with a semifree symplectic circle action with non-empty isolated
fixed points. Then this circle action must be Hamiltonian.
\end{theorem}
\begin{proof}
Since the action is semifree, all the integer weights $k_{j}^{(i)}$
are $\pm 1$. We still use $N_{p}$ to denote the number of fixed
points with exactly $p$ negative weights as in Section 2. Hence
$$\chi^{0}(M)=\frac{\sum_{p=0}^{n}N_{p}(-g)^{p}}{(1-g)^{n}}.$$
By assumption, the fixed points are non-empty, hence at least one of
$N_{0},\cdots,N_{n}$ is nonzero, which means $\chi^{0}(M)\neq 0$.
According to Theorem 2.2, $\chi^{0}(M)=1$ and the action is
Hamiltonian. Moreover, $N_{p}={n\choose p},~0\leq p\leq n.$ (Compare
to Lemma 3.1 in \cite{TW}.)
\end{proof}
\begin{remark}
This result was also reproved by Fel'dman
(\cite{Fe}, Corollary 1), whose main tools are the Conner-Floyd
equations for the Todd class.
\end{remark}
\subsection{}
We still use the notations in Section 2. In this subsection we will
prove the following result.
\begin{theorem}\label{main theorem in section 3}
Suppose the almost-complex manifold $(M^{2n},J)$ admits a circle
action preserving the almost-complex structure $J$ and having
isolated fixed points. Then for any $k\in\mathbb{Z}-\{0\},$
$$\sharp\{k_{j}^{(i)}=k~|~1\leq i\leq m,~1\leq j\leq n\}=\sharp\{k_{j}^{(i)}=-k~|~1\leq i\leq m,~1\leq j\leq n\}.$$
Here $\sharp$ denotes the cardinality of a set.
\end{theorem}
The following corollary is a generalization of (\cite{PT}, Theorem
2) from the case of symplectic circle actions on symplectic
manifolds to that of circle actions on almost-complex manifolds.
\begin{corollary}
Suppose the almost-complex manifold $(M^{2n},J)$ admits a circle
action preserving the almost-complex structure $J$ and having
isolated fixed points. Then
$$\sum_{i=1}^{m}\sum_{j=1}^{n}k_{j}^{(i)}=0.$$
\end{corollary}
In order to derive Theorem \ref{main theorem in section 3}, we need
the following lemma.
\begin{lemma}\label{lemma}
For all $0\leq k\leq n$, we have
$$e_{k}(x_{1}+1,x_{2}+1,\cdots,x_{n}+1)=\sum_{i=0}^{k}{n-i\choose n-k}e_{i}(x_{1},x_{2},\cdots,x_{n}).$$
\end{lemma}
\begin{proof}
\be\begin{split} e_{k}(x_{1}+1,x_{2}+1,\cdots,x_{n}+1)&=\sum_{1\leq
i_{1}<\cdots<i_{k}\leq
n}(x_{i_{1}}+1)(x_{i_{2}}+1)\cdots(x_{i_{k}}+1)\\
& :=\sum_{i=0}^{k}a_{i}\cdot e_{i}(x_{1},x_{2},\cdots,x_{n}).
\end{split}\nonumber\ee It suffices to determine the coefficients $a_{i}$. Note
that there are ${n\choose k}{k\choose i}$ monomials of degree $i$ in
the expression $\sum_{1\leq i_{1}<\cdots<i_{k}\leq
n}(x_{i_{1}}+1)(x_{i_{2}}+1)\cdots(x_{i_{k}}+1)$. Hence,
$$a_{i}=\frac{{n\choose
k}{k\choose i}}{{n\choose i}}={n-i\choose n-k}$$ as
$e_{i}(x_{1},\cdots,x_{n})$ has ${n\choose i}$ monomials.
\end{proof}
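Since the lemma is purely combinatorial, it is also easy to confirm numerically. A short Python sketch (the integer test values are arbitrary):

```python
from itertools import combinations
from math import comb, prod

def e(k, xs):
    # k-th elementary symmetric polynomial of xs, with e_0 = 1
    return sum(prod(c) for c in combinations(xs, k))

xs = [2, -3, 5, 7, -11]              # arbitrary test values, n = 5
n = len(xs)
shifted = [x + 1 for x in xs]
for k in range(n + 1):
    lhs = e(k, shifted)
    rhs = sum(comb(n - i, n - k) * e(i, xs) for i in range(k + 1))
    assert lhs == rhs
print("lemma verified for n =", n)
```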
\emph{Proof of Theorem \ref{main theorem in section 3}.}
\be\begin{split}
\sum_{i=1}^{m}\sum_{j=1}^{n}\frac{1}{1-g^{k_{j}^{(i)}}}&=\sum_{i=1}^{m}\frac{e_{n-1}(1-g^{k_{1}^{(i)}},\cdots,1-g^{k_{n}^{(i)}})}{\prod_{j=1}^{n}(1-g^{k_{j}^{(i)}})}\\
&=\sum_{l=0}^{n-1}(n-l)(-1)^{l}\sum_{i=1}^{m}\frac{e_{l}(g^{k_{1}^{(i)}},\cdots,g^{k_{n}^{(i)}})}{\prod_{j=1}^{n}(1-g^{k_{j}^{(i)}})}~~(\textrm{by
Lemma \ref{lemma}})\\
&=\sum_{l=0}^{n-1}(n-l)(-1)^{l}\chi^{l}(M)\\
&=\sum_{l=0}^{n-1}(n-l)N_{l}\\
&=\frac{n}{2}\sum^{n}_{l=0}N_{l}~~(\textrm{by}~N_{l}=N_{n-l})\\
&=\frac{mn}{2}.
\end{split}\nonumber\ee
Note that for any $k\in\mathbb{Z}-\{0\},$
$\frac{1}{1-g^{k}}+\frac{1}{1-g^{-k}}=1$. So what we have showed
implies that, for any $k\in\mathbb{Z}-\{0\},$
$$\sharp\{k_{j}^{(i)}=k~|~1\leq i\leq m,~1\leq j\leq n\}=\sharp\{k_{j}^{(i)}=-k~|~1\leq i\leq m,~1\leq j\leq n\}.$$
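Both steps used at the end of the proof — the pairing identity $\frac{1}{1-g^{k}}+\frac{1}{1-g^{-k}}=1$ and the value $mn/2$ of the weight sum for a balanced weight configuration — can be checked numerically. A small Python sketch in exact rational arithmetic (the sample weight multiset below is hypothetical, but balanced as the theorem requires):

```python
from fractions import Fraction as F

g = F(3, 4)                      # any rational g other than 0, 1 will do
# elementary pairing identity used in the last step of the proof
for k in (1, 2, 5):
    assert 1 / (1 - g**k) + 1 / (1 - g**(-k)) == 1

# a balanced weight multiset: every value k occurs as often as -k
weights = [1, 2, -1, 1, -2, -1]  # m = 3 fixed points, n = 2
m, n = 3, 2
assert sum(1 / (1 - g**k) for k in weights) == F(m * n, 2)
print("weight sum equals mn/2 for a balanced configuration")
```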
\bibliographystyle{amsplain}
{\bf Acknowledgements.}~The author thanks the referee for his/her
very careful reading of the earlier version of this paper and many
fruitful comments and suggestions, which improved the quality of this
paper.
The general setup of the magnetic SET
is given in Fig.~\ref{fig1}a.
The magnetic excitation spectrum of an itinerant ferromagnet consists of
the Stoner continuum, i.~e.~triplet particle-hole excitations, and spin
waves.
Given the Zeeman-splitting of the bands, it might at first be surprising
that ferromagnetic leads can screen the local moment.
The important point is that the local moment is
coupled to all possible particle-hole combinations
of both the source and drain leads.
The resulting exchange coupling matrix is such that
the anti-symmetric combination of the two leads decouple from
the local moment~\cite{Glazman.88,Ng.88}:
\begin{equation}
{\mathbf J} \sim \left( \begin{array}{cc}
V_{L}^{*} V_{L}^{} & V_{L}^{*} V_{R}^{} \\
V_{L}^{} V_{R}^{*} & V_{R}^{*} V_{R}^{} \end{array} \right)
= {\mathcal{U}}\left( \begin{array}{cc}
V_{L}^{*} V_{L}^{} + V_{R}^{*} V_{R}^{} & ~~0 \\
0 & ~~0 \end{array} \right) {\mathcal{U}}^{\dagger}~,~
\label{canonical_two_leads}
\end{equation}
where $V_i$ is the hybridization strength to the left/right ($i=L/R$)
lead and
the proportionality factor depends on the charging energy of the dot
and the chemical potential of source and drain. The local moment hence
couples to the sum of the densities of states (DOS) of both leads. If the magnetization in
the source and drain are anti-aligned and the SET setup is otherwise
symmetric w.r.t. the two leads,
the local moment couples to an effective band of unpolarized electrons
and complete
Kondo screening is
recovered for arbitrary spin polarization in the
leads~\cite{Martinek.03}.
This
was experimentally
verified by Pasupathy et al.~\cite{Pasupathy.04}.
Here, to illustrate the basic physics, we will focus on
such an anti-parallel case.
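The decoupling in Eq.~(\ref{canonical_two_leads}) simply expresses that the exchange matrix ${\mathbf J}\propto V_{i}^{*}V_{j}^{}$ is Hermitian and of rank one: one linear combination of the leads carries the full coupling $|V_{L}|^{2}+|V_{R}|^{2}$, while the orthogonal combination decouples. A minimal numerical sketch (the hybridization values are hypothetical):

```python
# The 2x2 exchange matrix J_{ij} ~ V_i^* V_j is Hermitian and rank one:
# its eigenvalues are |V_L|^2 + |V_R|^2 and 0, so the orthogonal lead
# combination decouples.  Hypothetical complex hybridizations:
VL, VR = 0.8 + 0.3j, -0.5 + 0.6j
J = [[VL.conjugate() * VL, VL.conjugate() * VR],
     [VL * VR.conjugate(), VR.conjugate() * VR]]

# eigenvalues of a 2x2 matrix from its trace and determinant
tr = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
assert abs(det) < 1e-12                          # rank one: det = 0
assert abs(tr - (abs(VL)**2 + abs(VR)**2)) < 1e-12
print("eigenvalues:", round(tr.real, 6), "and 0.0")
```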
The new observation we introduced in Ref.~\cite{Kirchner.05}
is
not that Stoner excitations can screen
the local moment
but
that the spin waves in the ferromagnetic leads will also couple
to it.
The derivation of the effective low-energy model, given
in Ref.~\cite{Kirchner.05}, confirms this symmetry argument.
A generalized Schrieffer-Wolff transformation yields the
following effective low-energy Hamiltonian~\cite{Kirchner.05}:
\begin{eqnarray}
{\mathcal H}_{\mbox{bfk}}&=&
J \sum_{i}{\bf S} \cdot {\bf s}_{i} +
\sum_{{\bf k},i,\sigma}
\tilde{\epsilon}_{{\bf k}\sigma i}^{}~
c_{{\bf k}\sigma i}^{\dagger} c_{{\bf k} \sigma i} +
h_{\mbox{\tiny loc}}
S_{z}
\nonumber\\
&+&
g
\sum_{\beta,{\bf q},i} S_{\beta}
(\phi_{\beta,{\bf q},i} +
\phi^{\dagger}_{\beta,{\bf q},i} )
+ \sum_{\beta,{\bf q},i}
\omega_{\bf q}^{}\,
\phi_{\beta,{\bf q},i}^{\;\dagger} \phi_{\beta,{\bf q},i}.
\label{hamiltonian-bfk-n=2}
\end{eqnarray}
where the local magnetic field
$h_{\mbox{\tiny loc}}
= g \sum_i m_i$, with
$m_i$, for $i=L,R$, being the ordered moment of the left/right
leads, $\tilde{\epsilon}_{{\bf k}\sigma i}$ is the
Zeeman-shifted conduction electron dispersion,
and ${\phi}_{\beta,i}$, with $\beta = x,y$, describes
the magnon excitations.
With the canonical transformation,
Eq.~(\ref{canonical_two_leads}),
for the fermionic bath and a similar one for the
bosonic bath, the effective
fermionic dispersion, labeled $E_{\bf k}$,
becomes spin-independent; moreover,
the antisymmetric combinations of each bath decouple.
Hence, the low-energy properties of the ferromagnetic
SET are governed by a
Bose-Fermi Kondo model (BFKM)
with an easy-plane anisotropy.
For the anti-parallel alignment,
$m_L=-m_R$, so that
$h_{\mbox{\tiny loc}}$
vanishes.
Magnons are gapless bosons with
a quadratic dispersion.
The spectral density of the local dissipation they
generate is sub-Ohmic,
\begin{equation}
\int \, dq^3 \,\delta(\omega-\omega_q) \sim\sqrt{\omega}.
\label{subOhmic}
\end{equation}
This
feature
turns out to be essential for the existence of a
quantum critical point (QCP)~\cite{Zhu.02}.
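For a quadratic magnon dispersion $\omega_{q}=q^{2}$ (spin stiffness set to unity), Eq.~(\ref{subOhmic}) follows from $\int d^{3}q\,\delta(\omega-q^{2})=2\pi\sqrt{\omega}$. A short Python sketch confirming the $\sqrt{\omega}$ form by binning the phase-space weight over a radial $q$-grid (units and cutoffs are arbitrary):

```python
import math

# magnon dispersion w(q) = q^2 (spin stiffness set to 1); binning the
# phase-space weight 4*pi*q^2*dq by frequency recovers the spectral
# density  int d^3q delta(w - q^2) = 2*pi*sqrt(w)
def magnon_dos(nbins=20, nq=400_000, qmax=1.0):
    dq, dw = qmax / nq, qmax**2 / nbins
    dos = [0.0] * nbins
    for i in range(nq):
        q = (i + 0.5) * dq
        b = int(q * q / dw)
        if b < nbins:
            dos[b] += 4 * math.pi * q * q * dq / dw
    return dos, dw

dos, dw = magnon_dos()
for b in range(1, len(dos)):          # skip the lowest bin (edge effects)
    w = (b + 0.5) * dw
    assert abs(dos[b] / (2 * math.pi * math.sqrt(w)) - 1) < 0.01
print("magnon spectral density scales as sqrt(w)")
```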
Fig.~\ref{fig1}b shows the corresponding phase diagram of the
ferromagnetic SET. There are three renormalization-group
fixed points: ``Kondo'' and ``LM'' refer to the Kondo-screened
Fermi-liquid fixed point and to the critical local-moment fixed
point, the latter describing a quantum-critical phase. ``QC'' refers to the
quantum-critical fixed point, characterizing the critical Kondo
screening on the entire separatrix (red line, corresponding to the
critical coupling $g_c$ as a function of $J$).
Most dissipation channels will
not lead to sub-Ohmic fluctuation spectra; coupling to phonons, photons,
or antiferromagnetic magnons
will not lead to critical Kondo screening.
The generalized Schrieffer-Wolff transformation relates
the coupling constants $J$ (Kondo coupling) and $g$ (magnon coupling) of
Eq.~(\ref{hamiltonian-bfk-n=2}) to the coupling constants of the
original
model: $J\sim \Gamma/(\rho \Delta)$ and $g \sim \Gamma/(\rho \Delta)^2$,
where $\Gamma=\pi \rho V^2$ is the
hybridization width, and $\rho$ is the lead density of states at the Fermi
energy.
$\Delta$ is the
charge fluctuation energy and is linearly dependent on the
gate voltage $V_g$ of the SET. The gate voltage is therefore able to
tune the
competition between
the Kondo coupling and the coupling to the fluctuating magnon field.
Since the Kondo
screening occurs on the scale of the bare ($g=0$, no magnons) Kondo
temperature $T_K^0=(1/\rho) \exp{(-1/\rho J)}$, the
control parameter is $g/T_K^0$. $T_K^0$ depends
exponentially on $J$, whereas $g\sim J^2$. This implies that $g/T_K^0$
is exponentially large deep in the Kondo regime and becomes of order unity in
the mixed valence regime.
This situation is reminiscent of the so-called Doniach-Varma picture for
the Kondo lattice where the
RKKY interaction ($\sim J^2$)
competes with the Kondo singlet formation ($\sim T_K^0$)~\cite{Doniach.77}.
This analogy is not accidental.
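The competition can be made quantitative in a back-of-the-envelope fashion: with $T_K^0=(1/\rho)\exp(-1/\rho J)$ and $g\sim J^{2}$, the ratio $g/T_K^0$ grows exponentially as $J$ decreases. A Python sketch of this trend (schematic units; all prefactors of order unity are dropped and $\rho=1$):

```python
import math

rho = 1.0
def ratio(J):
    # schematic g / T_K^0 with g ~ J^2 and T_K^0 = (1/rho) exp(-1/(rho J));
    # prefactors of order unity are dropped
    TK0 = (1 / rho) * math.exp(-1 / (rho * J))
    return J**2 / TK0

# deep Kondo regime (small J): ratio exponentially large;
# toward mixed valence (J of order 1): ratio of order unity
assert ratio(0.1) > 100
assert 0.1 < ratio(1.0) < 10
print("g/T_K^0:", {J: round(ratio(J), 3) for J in (0.1, 0.3, 1.0)})
```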
The quantum phase transition as $g$ is tuned through $g_c$
is reflected in
the narrowing of the Kondo resonance, as seen in Fig.~\ref{fig1}c.
The transport properties in the quantum critical regime have been worked
out in Ref.~\cite{Kirchner.05}. In the Kondo phase the conductance
has the well-known Fermi-liquid form, $G(T)=a-bT^2$, where $a=2e^2/h$
follows from Friedel's sum rule.
In the critical local moment phase ($g>g_c$)
at $T=0$,
the
electrons are completely decoupled from the local moment
and the conductance vanishes. At finite temperatures,
we find $G(T)=cT^{1/2}$.
The conductance versus temperature at the critical gate voltage
shows fractional power-law behavior,
$G(T)\,=\, A \,+\, BT^{1/4}$,
where $A$ is smaller than $a$.
The experimental feasibility of these measurements has been
extensively discussed in Ref.~\cite{Kirchner.05}.
We now make the connection between our results
and the physics of quantum critical heavy fermion
systems. The BFKM has been put forth as an effective model
for a Kondo-destroying QCP in heavy fermion
systems~\cite{Si.01}. In this approach, the self-consistency
relation between the lattice system and the effective impurity
model gives rise to a sub-Ohmic spectrum.
The inference about the destruction of Kondo effect at the
antiferromagnetic QCP
of heavy fermion systems have come from the collapse of
a large Fermi surface and the vanishing of multiple energy
scales~\cite{Paschen.04,Gegenwart.07}.
The ferromagnetic SET structure discussed here provides a tunable model
system to study the physics of a critical destruction of Kondo effect.
\section{The Case of Critical Paramagnons}
If the leads contain critical paramagnons instead of spin waves,
the dynamical spin susceptibility of the leads
will have an over-damped form:
\begin{eqnarray}
\chi_{\mbox{\tiny leads}}
({\bf q},\omega ) \sim \frac{1}{q^2 - i \omega/\gamma q}
\label{paramagnons}
\end{eqnarray}
where $\gamma$ is a constant.
The dissipative spectrum becomes
\begin{equation}
\int \, dq^3 \,{\rm Im} \chi_{\mbox{\tiny leads}}
({\bf q},\omega )
\sim |\omega|^{1/3} {\rm sgn}( \omega) .
\label{subOhmic-paramagnons}
\end{equation}
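The exponent $1/3$ follows from power counting: with $\mathrm{Im}\,\chi\propto(\omega/\gamma q)/[q^{4}+(\omega/\gamma q)^{2}]$, rescaling $q\to(\omega/\gamma)^{1/3}q$ gives $\int d^{3}q\,\mathrm{Im}\,\chi\propto(\omega/\gamma)^{1/3}$, with prefactor $4\pi^{2}/(3\sqrt{3})$ in these units. A Python sketch verifying this numerically (the cutoffs and $\gamma=1$ are illustrative):

```python
import math

def im_chi(q, w, gamma=1.0):
    # Im of 1/(q^2 - i*w/(gamma*q)), the over-damped paramagnon form
    a = w / (gamma * q)
    return a / (q**4 + a * a)

def spectrum(w, qmax=60.0, n=100_000):
    # rectangle-rule integral of 4*pi*q^2 * Im chi over 0 < q < qmax
    dq = qmax / n
    return sum(4 * math.pi * (i * dq)**2 * im_chi(i * dq, w)
               for i in range(1, n + 1)) * dq

def exact(w):
    # closed form 4*pi^2/(3*sqrt(3)) * w^{1/3}  (gamma = 1)
    return 4 * math.pi**2 / (3 * math.sqrt(3)) * w ** (1 / 3)

for w in (0.5, 1.0, 8.0):
    assert abs(spectrum(w) / exact(w) - 1) < 5e-3
assert abs(spectrum(8.0) / spectrum(1.0) - 2) < 1e-2   # |w|^{1/3} scaling
print("dissipative spectrum scales as w^(1/3)")
```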
Since in this case the spin-rotational invariance in the leads
is not broken, the issue of anti-parallel alignment does not arise.
Palladium, for instance, has a Stoner
enhancement factor of around 10; there will be a large
frequency window over which Eq.~(\ref{subOhmic-paramagnons}) applies.
Furthermore, contact properties of palladium leads are well studied
and seem to be characterized
by a relatively small contact resistance~\cite{Babic.04}.
It has been argued~\cite{Kirchner.05}
that temperature/frequency dependences of the
critical electronic properties of the BFKM
with easy-plane anisotropy are similar to
those of the same model with SU(2) invariance. For the
Kondo-destroying QCP and the critical local-moment phase,
it was further argued that they are similar to those
of a large-N limit of an SU(N)$\times$SU(N/2) generalization
of the BFKM:
\begin{eqnarray}
{\mathcal H}_{\mbox{\tiny BFK}}
&=&
({J}/{N}) \sum_{\alpha}{\bf S} \cdot {\bf s}_{\alpha} +
\sum_{{\bf k},\alpha,\sigma} E_{\bf k}~c_{{\bf k} \alpha \sigma}^{\dagger}
c_{{\bf k} \alpha \sigma}^{} \nonumber \\
&+&
({g}/{\sqrt{N}})
{\bf S} \cdot {\bf \Phi} + \sum_{\bf q} \omega_{\bf q}^{}\,{\bf
\Phi}_{\bf q}^{\;\dagger}\cdot {\bf \Phi}_{\bf q}^{}.
\label{hamiltonian-bfk}
\end{eqnarray}
The large-N limit leads to a set of dynamical saddle-point
equations~\cite{Zhu.04}, which can be solved analytically at zero
temperature and numerically at finite temperatures.
Alternatively, the dynamical equations, exact in the large-N limit,
can be used as an approximation for the $N=2$ case. Ref.~\cite{Kirchner.05}
considered the $N=2$ version of the Bose-Fermi Anderson model,
\begin{eqnarray}
H_{\mbox{\tiny bfam}}&=& \sum_{{\bf k},\sigma} E_{\bf k}~c_{{\bf k}
\sigma}^{\dagger}
c_{{\bf k} \sigma}^{}
+
t \sum_{{\bf k},\sigma} \biggl (c_{{\bf k}
\sigma}^{\dagger} d^{}_{\sigma} + \mbox{h.c.} \biggr)
+ \varepsilon_d \sum_{\sigma}
d^{\dagger}_{\sigma}d^{}_{\sigma}
\nonumber \\
&+& U n_{d\uparrow} n_{d\downarrow}
+ g
{\bf S}_d \cdot {\bf \Phi}
+
\sum_{\bf q} \omega_{\bf q}\,{\bf
\Phi}_{\bf q}^{\;\dagger}\cdot {\bf \Phi}_{\bf q} ,
\label{bfam}
\end{eqnarray}
at $U=\infty$ (and, hence, particle-hole asymmetric).
The numerical results presented in Ref.~\cite{Kirchner.05}
are all for this $N=2$ case. At zero field, they have
the same behavior as the exact results in the large-N
limit of Eq.~(\ref{hamiltonian-bfk}).
We observe that the dissipative spectrum associated with
the critical paramagnons, Eq.~(\ref{subOhmic-paramagnons}),
can be cast into the general form considered in Ref.~\cite{Zhu.04},
\begin{eqnarray}
A_{\Phi}(\omega)
\sim
|\omega|^{1-\epsilon} \mbox{sgn}(\omega) ,
\label{Aphi}
\end{eqnarray}
with $\epsilon=2/3$.
For general $\epsilon$, the large-N results at zero
temperature~\cite{Zhu.04}
imply that, for the critical point ($g=g_c$),
\begin{eqnarray}
T''(\omega>0) = const_1 + const_2 ~\cdot ~\omega^{\epsilon/2} .
\label{T-epsilon-criticala}
\end{eqnarray}
Likewise, for the critical local-moment phase ($g>g_c$),
\begin{eqnarray}
T''(\omega>0) = const~\cdot~\omega^{\epsilon} .
\label{T-epsilon-criticalb}
\end{eqnarray}
\begin{figure}[t!]
\begin{center}
\includegraphics[angle=0,width=0.49\textwidth]{4687_Kirchner_2.eps}
\end{center}
\caption{(a) DC conductance for different coupling strengths $g$,
for $\epsilon=2/3$, the case of critical paramagnons.
The zero temperature value of the conductance in the
Fermi liquid case is fixed through the Friedel-Langreth sum rule.
(b) $\omega/T$-scaling at the QCP
($g=g_c$). The universal scaling curve of the T-matrix can be probed
via the AC conductance and Johnson noise measurements~\cite{Kirchner.05}.
} \label{fig2}
\end{figure}
For the case appropriate to critical paramagnons,
$\epsilon=2/3$, we have carried out more detailed studies
based on the large-N limit of Eq.~(\ref{hamiltonian-bfk}).
Fig.~\ref{fig1}c demonstrates the destruction of Kondo resonance
as the dissipative coupling $g$ reaches $g_c$ and beyond.
The DC conductance as a function of temperature is given
in Fig.~\ref{fig2}a. The temperature exponents at $g=g_c$ and
$g>g_c$ are compatible with $1/3$ and $2/3$, respectively.
The equality of these exponents with their counterparts
in the $T=0$ frequency dependence is consistent
with $\omega/T$ scaling. The latter is further illustrated
in Fig.~\ref{fig2}b, which demonstrates the $\omega/T$
scaling collapse of the dynamical T-matrix at $g=g_c$.
This $\omega/T$ scaling provides evidence for the
interacting nature of the QCP. Because $\epsilon>1/2$,
the latter in turn is an indication for an unconventional
quantum criticality~\cite{Zhu.04,Vojta.05,Glossop.05}.
\section{Issues on NCA in a finite field}
In the case of ferromagnetic leads, a local magnetic field will arise
if the ordered moments of the two leads are parallel, or if
the couplings to the leads are asymmetric in the anti-parallel
configuration. This refers to $h_{\mbox{\tiny loc}}$
of Eq.~(\ref{hamiltonian-bfk-n=2}), along the direction
of magnetic ordering. The effect of this field goes beyond
Eqs.~(\ref{hamiltonian-bfk},\ref{bfam}).
In the following, we briefly discuss what would happen if we were to
incorporate a local field in Eqs.~(\ref{hamiltonian-bfk},\ref{bfam}).
This effect is relevant if an external local field is applied along any
of the spin-wave directions in the ferromagnetic case,
or along any direction in the case of critical paramagnons.
We further restrict to the case of Eq.~(\ref{bfam}), where for $g=0$
the large-N equations reduce to the commonly applied NCA formalism.
Our purpose is to illustrate some delicate aspects in the theoretical
treatment of such a local field, $h$.
The Kondo effect ($g=0$) in the presence of
a magnetic field is a well-studied subject~\cite{Costi.00}. The poor
performance of the NCA for this problem has, however, not been
extensively discussed
in the literature.
\begin{figure}[t!]
\begin{center}
\includegraphics[angle=0,width=0.49\textwidth]{4687_Kirchner_3.eps}
\end{center}
\caption{(a) Kondo resonance in zero (dashed line) and finite
local field (continuous line).
The NCA, while capturing the Zeeman-split peaks,
incorrectly produces a sharp resonance
that is pinned to the Fermi energy ($\omega=0$).
This reflects its failure to capture the marginally irrelevant
character of the potential scattering term.
(b) Local magnetization at the critical coupling $g_c$.
The results are consistent with the expectation based on hyperscaling.
The parameters
adopted are:
$\epsilon_d=-0.3D$, $U=\infty$, $t=0.1D$, corresponding to
$T_K^0=4.2 \times 10^{-3}D$;
the cut-off energy
for the bosonic bath
$\Lambda = 0.32D$.
} \label{fig3}
\end{figure}
It was shown in
Ref.~\cite{Kirchner.02} that within the NCA the potential scattering
term of the Anderson model {\it incorrectly} scales in the same manner
as the spin exchange coupling. In a magnetic field, the up and down
fermions will be Zeeman-split. This gives rise to the splitting
of the Kondo resonance which is reproduced by the NCA, see
Fig.~\ref{fig3}a. The NCA does however overestimate the asymmetry of the
two peaks and, more significantly, it incorrectly predicts a sharp
feature at the Fermi energy
($\omega=0$). This sharp resonance is due to
the NCA's incorrect treatment of
the potential scattering term. Since this term is not
affected by the local field,
the `Kondo resonance' due to this term
remains at $\omega=0$.
At the QCP, on the other hand,
the Kondo effect has been destroyed.
One might therefore expect
that the NCA can still be used to obtain universal properties
at a finite local field.
Following a hyperscaling analysis similar to
that given in
Ref.~\cite{Ingersent.02}, and using the fact that
$\chi_{stat}\sim T^{\epsilon-1}$,
we find that, for $\epsilon=1/2$,
\begin{equation}
M(h,T=0)\,\sim\, |h|^{\epsilon/(2-\epsilon)}=|h|^{1/3},
\end{equation}
and we expect $|h|/T^{(2-\epsilon)/2}=|h|/T^{3/4}$-scaling.
For $h \ll T$ the magnetization should therefore behave as
$M(h,T)\sim |h|$, whereas for $h \gg T$
it will be $M(h,T)\sim |h|^{1/3}$.
(We have set $g\mu_B =1$.)
This behavior is correctly
reproduced by the NCA, see Fig.~\ref{fig3}b. We conclude that
the NCA, generalized to incorporate the coupling to the bosonic bath,
correctly captures certain universal properties of the
quantum critical BFKM in a finite local field.
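For completeness, the exponents quoted above follow from a standard scaling bookkeeping (a sketch of the scaling ansatz, consistent with the hyperscaling analysis of Ref.~\cite{Ingersent.02}; not an independent derivation):

```latex
M(h,T) \;=\; T^{a}\, F\!\Bigl( \frac{h}{T^{(2-\epsilon)/2}} \Bigr),
\qquad
a \;=\; (\epsilon - 1) + \frac{2-\epsilon}{2} \;=\; \frac{\epsilon}{2},
```

where $a$ is fixed by matching the linear-response limit $M \approx \chi_{stat}\, h$ with $\chi_{stat} \sim T^{\epsilon-1}$ for $h \ll T$. Requiring $M$ to become independent of $T$ for $h \gg T$ forces $F(u) \sim u^{\epsilon/(2-\epsilon)}$, reproducing $M(h,T=0) \sim |h|^{\epsilon/(2-\epsilon)}$; for $\epsilon=1/2$ this yields the exponents $1/3$ and $(2-\epsilon)/2=3/4$ used above.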
In conclusion, a SET with ferromagnetic electrodes constitutes a tunable
spintronic system that allows one to experimentally access a quantum
critical Kondo state. Nonequilibrium properties of this boundary quantum phase
transition are readily obtained by having $\mu_1\neq\mu_2$
[see Fig.~\ref{fig1}a].
The ferromagnetic SET therefore seems to be an ideal system to
address out-of-equilibrium aspects of quantum criticality both
theoretically and experimentally.
This work was supported in part by NSF, the Robert A. Welch Foundation,
the W. M. Keck Foundation, and the Rice Computational Research
Cluster funded by NSF,
and a partnership between Rice University, AMD and Cray.
\vspace*{-0.0cm}
\section{Introduction}
{Over the past five years, it has become possible to simultaneously record the activity of thousands of neurons at single-cell resolution \cite{AORLK13,PYH14,PFEO14,HGPS15}.} The high spatial and temporal resolution permitted by these new methods allows us to examine whether previously unexamined regions of the brain might dynamically map sensory information across space in unappreciated ways. However, the high dimensionality of these data also poses new computational challenges for statistical neuroscientists. Therefore scalable and efficient methods for extracting as much information as possible from these recordings must be developed; {in turn, improved analytical approaches that can extract information from e.g. shorter experiments may enable new dynamic closed-loop experimental designs}.
In many experimental settings, a key quantity of interest is the tuning function, a filter that relates known information about sensory input or behavioral state to the activity of a neuron. For example, tuning functions permit measurement of orientation
selectivity in visual cortex \cite{HW68}, allow us to relate movement direction to activity in primary motor cortex
\cite{S00,GCK86}, and let us measure the grid-like spatial sensitivity of neurons within entorhinal cortex \cite{HFMMM05}. This paper focuses on data-efficient methods for tuning function estimation.
To be more concrete, let us first consider example experimental data where the activity of $n$ neurons is measured across $d$ trials of identical lengths, with different stimuli presented during each trial. We can then model the response $\bm{y_i} \in \mathbb{R}^d$ of neuron $i$ as a function of a stimulus matrix $\bm{X_i} \in \mathbb{R}^{d \times m}$. Each row of $\bm{X_i}$ corresponds to the stimulus projected onto neuron $i$, at each of the $d$ trials. In the simplest case, the relationship between the unobserved tuning function $\bm{\beta_i} \in \mathbb{R}^m$ and the observed activity $\bm{y_i}$ at neuron $i$ in response to stimulus $\bm{X_i}$ can be modeled as\footnote{ {Empirical findings, to some degree, challenge the linear neural response to the stimulus, the conditionally independent neural activity, and the Gaussian noise assumptions. Nevertheless, numerous studies have successfully used these simplifying assumptions to analyze neural data (see \cite{RWRB97, DS12} and references therein). In the concluding section \ref{sec:cr}, we discuss directions for future work that allow the approach presented here to be extended to more general settings, e.g. correlated point process observations.}}:
\begin{eqnarray}
\bm{y_i = X_i \beta_i + \epsilon_i} \text{ where } \bm{\epsilon_i} \sim \mathcal{N}(0,\nu_i^2\sigma^2 \bm{I}). \label{eq:model}
\end{eqnarray}
The efficient statistical analysis and estimation of the unobserved tuning functions $\{ \bm{\beta_i} \}$ given the noisy observations $\{ \bm{y_i}\}$ and the stimulus set $\{\bm{X_i}\}$ is the tuning function estimation problem. In this setting, one standard approach is to use, for example, maximum-likelihood estimation to estimate tuning functions one neuron at a time (e.g., $\bm{\beta_{i,\text{ml}}} := \bm{(X_i'X_i)^{-1} X_i' y_i}$).
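As a point of reference for the Bayesian approach developed below, the per-neuron maximum-likelihood baseline can be computed independently for each cell. A minimal sketch on simulated data (all sizes and variable names here are illustrative, not taken from any experiment):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 50, 8, 4                   # trials, stimulus dimension, neurons (toy sizes)

beta_true = rng.normal(size=(n, m))  # unobserved tuning functions beta_i
X = rng.normal(size=(n, d, m))       # stimulus matrix X_i for each neuron
y = np.einsum('idm,im->id', X, beta_true) + 0.1 * rng.normal(size=(n, d))

# Per-neuron ML estimate beta_i = (X_i' X_i)^{-1} X_i' y_i, computed via
# least squares for numerical stability; neurons are treated independently,
# ignoring any spatial structure.
beta_ml = np.stack([np.linalg.lstsq(X[i], y[i], rcond=None)[0] for i in range(n)])
```

Because each $\bm{\beta_{i,\text{ml}}}$ uses only neuron $i$'s data, its variance does not benefit from the spatial smoothness that the prior introduced below is designed to exploit.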
However, this model neglects a common feature of many neural circuits: the spatial clustering of neurons sharing a similar information processing function. {For example, there are maps of tone frequency across the cortical surface in the auditory system \cite{issa2014multiscale}, visual orientation maps in both cortical \cite{HW62,HW68,OHKI05} and subcortical brain regions \cite{feinberg2014orientation}, and maps respecting the spatial organization of the body (somatotopy) in the motor system \cite{LS17,penfield1950cerebral,romanes1964motor,bouchard2013functional,MPPJM15}. As a consequence, neurons in close proximity often have similar tuning functions (see \cite{S08,WM15}, for recent reviews).} In each of these cases, there are typically regions where this rule is violated and largely smooth tuning maps are punctuated by jumps or discontinuities. Therefore simply smoothing in all cases will erode the precision of any sharp borders that might exist. Ideally, we would use an approach to estimate $\{ \bm{\beta_i} \}$ that would smooth out the tuning map more in areas where there is evidence from the data that nearby tuning functions are similar, while letting the data `speak for itself' and applying minimal smoothing in regions where adjacent neurons have tuning functions that are very dissimilar.
In this paper, we propose a multivariate Bayesian extension of group lasso \cite{YL06}, generalized lasso \cite{TT11}, and total-variation (TV) regularization \cite{ROF92}. Specifically, we use the following { improper} prior:
\begin{eqnarray}
{\bm{\beta}| \lambda,\sigma} &\propto& \prod_{i \sim j}\bigl ( \frac{\lambda}{2 \sigma} \bigr)^m \exp \Bigl( -\frac{\lambda}{\sigma} \Bigl \| \bm{ \beta_i - \beta_j }\Bigr\|_2 \Bigr)\label{eq:beta_prior},
\end{eqnarray}
where $\|u\|_2 = \sqrt{\sum_{i=1}^m u_i^2} $ and $i \sim j$ if two cells $i$ and $j$ are spatially nearby\footnote{ We will clearly define the notion of proximity $i \sim j$, at the end of section \ref{sec:ldn}.}. This prior allows for a flexible level of similarity between nearby tuning functions. For clarity, we contrast against a $\| \bm{ \beta_i - \beta_j}\|_2^2$ based prior:
\begin{eqnarray*}
\prod_{i \sim j}\bigl ( \frac{\lambda^2}{2 \pi \sigma^2} \bigr)^{m/2} \exp \Bigl( -\frac{\lambda^2}{2\sigma^2} \Bigl \| \bm{ \beta_i - \beta_j }\Bigr\|_2^2 \Bigr),
\end{eqnarray*}
which penalizes large local differences quadratically. The prior defined in (\ref{eq:beta_prior}), on the other hand, penalizes large differences linearly; intuitively, this prior encourages nearby tuning functions to be similar while allowing for large occasional breaks or outliers in the spatial map of the inferred tuning functions. This makes the estimates much more robust to these occasional breaks.
The paper is organized as follows. Section 2 presents the full description of our statistical model, including likelihood, priors and hyper-priors. Section 3 presents an efficient block Gibbs sampler with discussions about its statistical and computational properties. Finally, section 4 illustrates our robust and scalable Bayesian analysis of simulated data from the visual cortex and real neural data obtained from the spinal cord. We conclude in Section 5 with a discussion of related work and possible extensions to our approach.
\section{Bayesian Inference\label{sec:ldn}}
To complete the model introduced above, we place an inverse Gamma prior on $\sigma$ and $\{ \nu_i\}_{i=1,\cdots,n}$, and we place a Gamma prior on $\lambda^2$, both of which are fairly common choices in Bayesian inference \cite{PG08}. These choices lead to the likelihood, priors, and hyper-priors presented below:
\begin{eqnarray}
\text{likelihood,}\hspace{0.6cm} \bm{y_i | \beta_i},\sigma,\nu_i &\sim& \bigl( \frac{1}{2 \pi \nu_i^2 \sigma^2} \bigr)^{d/2} \exp \Bigl( -\frac{1}{2\nu_i^2 \sigma^2} \Bigl \| \bm{ y_i - X_i \beta_i }\Bigr\|_2^2 \Bigr)
\nonumber
\\
\text{prior,}\hspace{1.35cm} \bm{\beta} | \lambda,\sigma &\sim& \prod_{i \sim j}\bigl ( \frac{\lambda}{2 \sigma} \bigr)^m \exp \Bigl( -\frac{\lambda}{\sigma} \Bigl \| \bm{\beta_i - \beta_j} \Bigr\|_2 \Bigr) \nonumber\end{eqnarray}
and hyper-priors,
\begin{eqnarray}
\sigma^2 &\sim& \text{inverse-Gamma}(\kappa,\epsilon)=\frac{\epsilon^{\kappa} }{\Gamma(\kappa)} (\sigma^2)^{-\kappa-1} e^{-\epsilon/\sigma^2} \label{eq:hpriors} \\
\lambda^2 &\sim& \text{Gamma}(r,\delta)= \frac{\delta^r }{\Gamma(r)} (\lambda^2)^{r-1} e^{-\delta \lambda^2}\nonumber\\
\nu_i^2 &\sim& \text{inverse-Gamma}(\varkappa,\varepsilon)=\frac{\varepsilon^{\varkappa} }{\Gamma(\varkappa)} (\nu_i^2)^{-\varkappa-1} e^{-\varepsilon/\nu_i^2}.\nonumber
\end{eqnarray}
The well known representation \cite{AM74,W87, ETL06,CGGK10} of the Laplace prior as a scale mixture of Normals:
\begin{eqnarray*}
&&\bigl ( \frac{\lambda}{2 \sigma} \bigr)^m \exp \Bigl( -\frac{\lambda}{\sigma} \| \bm{\beta_i - \beta_j }\|_2 \Bigr)=\\ && C\int_0^{\infty} \Bigl (\frac{1}{2 \pi \sigma^2 \tau_{ij}^2} \Bigr)^{m/2} \exp \Bigl( -\frac{\| \bm{ \beta_i - \beta_j} \|_2^2}{2\sigma^2 \tau_{ij}^2} \Bigr) \underbrace{ \frac{(\frac{\lambda^2}{2})^{\frac{m+1}{2}} }{\Gamma(\frac{m+1}{2})} (\tau_{ij}^2)^{\frac{m+1}{2}-1 } e^{- \frac{\lambda^2}{2} \tau_{ij}^2} d\tau_{ij}^2}_{\tau_{ij}^2 \sim \text{Gamma}(\frac{m+1}{2}, \frac{\lambda^2}{2}) },
\end{eqnarray*}
(where $C=\pi^{\frac{m-1}{2}} \Gamma(\frac{m+1}{2})$) allows us to formulate our prior (\ref{eq:beta_prior}), in a hierarchical manner:
\begin{eqnarray}
\tau_{ij}^2 | \lambda^2 &\sim& \frac{(\frac{\lambda^2}{2})^{\frac{m+1}{2}} }{\Gamma(\frac{m+1}{2})} (\tau_{ij}^2)^{\frac{m+1}{2}-1 } e^{- \frac{\lambda^2}{2} \tau_{ij}^2} \quad \text{for all $i\sim j$} \label{eq:t}\\
\bm{\beta}| \{\tau_{ij}^2\},\sigma^2 &\sim& \exp \bigl(-\frac{\bm{\beta' D' \Gamma D \beta}}{2\sigma^2} \bigr) \label{eq:hie}
\end{eqnarray}
where {(using $\otimes$ as the Kronecker product)}
\begin{eqnarray*}
\bm{D}&=& \bm{D_s \otimes I_m } \text{ and } \bm{\Gamma }=\bm{\Gamma_s \otimes I_m} \\
\bm{\Gamma_s}&=&\text{diag}(\cdots,\frac{1}{\tau_{ij}^2},\cdots) \in \mathbb{R}^{p \times p}
\end{eqnarray*}
and $\bm{D_s} \in \mathbb{R}^{p \times n}$ is a sparse matrix such that each row accommodates a $+1$ and $-1$, corresponding to $i \sim j$. We let $p$ denote the number of edges in the proximity network. Note that
\begin{eqnarray*}
\bm{\beta' D' \Gamma D \beta }&=& \sum_{i \sim j} \frac{\| \bm{\beta_i - \beta_j}\|_2^2}{\tau_{ij}^2}.
\end{eqnarray*}
{
In light of the hierarchical representation, illustrated in equations (\ref{eq:t}, \ref{eq:hie}), the prior defined in (\ref{eq:beta_prior}) can be viewed as an improper Gaussian mixture model; $\bm{\beta}$ is Gaussian given $\{\cdots,\tau_{ij}^2,\cdots\}$, and the $\tau_{ij}^2$s come from a common ensemble. This prior favors spatial smoothness while allowing the amount of smoothness to be variable and adapt to the data.} As we will discuss in section \ref{sec:gibbs}, posterior samples of $\tau_{ij}^2$ tend to be smaller in smooth areas than in regions with discontinuities or outliers.
For each edge in the proximity network, and each corresponding row in $\bm{D_s}$, there is a unique pair of nodes $i$ and $j$ that are spatially ``nearby," i.e. $i \sim j$. We found that considering the four horizontally and vertically nearby nodes as neighbors, for nodes that lie on a two dimensional regular lattice, allows us to efficiently estimate tuning functions without contamination from measurement noise or bias from oversmoothing. See section \ref{sec:opm} for an illustrative example. As for nodes that lie on an irregular grid, we compute the sample mean $\bm{\mu}$ and sample covariance $\bm{C}$ of the locations, and then whiten the location vectors $\bm{v_i}$; that is, $\bm{v_{i,\text{whitened}}} = \bm{C^{-1/2} (v_{i} - \mu)}$. We found that connecting each node to its $k$-nearest-neighbors (within a maximum distance $r$) in the whitened space works well in practice. See section \ref{sec:motor} for an illustrative example with $k=1$ and $r=5$.
Extending the robust prior presented in equation (\ref{eq:beta_prior}), which is based on the simple local difference $\|\bm{\beta_i - \beta_j}\|_2$ for $i \sim j$, to a robust prior based on any generic $\| . \|_2$ measure of local roughness is easy; we only need to appropriately modify $\bm{D_s}$. For example, if $\bm{y_1,\cdots,y_n}$ are equidistant temporal samples, then the following robust prior
\begin{eqnarray*}
\bm{\beta}| \lambda,\sigma &\sim& \prod_{i=1}^{n-2}\bigl ( \frac{\lambda}{2 \sigma} \bigr)^m \exp \Bigl( -\frac{\lambda}{\sigma} \Bigl \| \bm{ 2\beta_i - \beta_{i+1} - \beta_{i-1} }\Bigr\|_2 \Bigr)
\end{eqnarray*}
reflects our a priori belief that $\bm{\beta_1,\cdots,\beta_n}$ are (approximately) piecewise linear \cite{KKBG09}. In this case, $\bm{D_s}$ is a tridiagonal matrix with 2 on the diagonal and -1 on the off diagonals. As another example, let the matrix $\bm{D_s}$ be equal to the discrete Laplacian operator; $[\bm{D_s}]_{ii}$ equals the number of edges attached to node $i$, and if $i \sim j$, then $[\bm{D_s}]_{ij}=-1$, otherwise its zero. The discrete Laplacian operator (Laplacian matrix), which is an approximation to the continuous Laplace operator, is commonly used in the spatial smoothing literature to impose a roughness penalty \cite{Wahba90}. Our robust prior based on the discrete Laplacian operator is as follows\begin{eqnarray*}
\bm{\beta} | \lambda,\sigma &\sim& \prod_{i=1}^n\bigl ( \frac{\lambda}{2 \sigma} \bigr)^m \exp \Bigl( -\frac{\lambda}{\sigma} \Bigl \| \sum_{j \sim i } (\bm{\beta_i - \beta_j}) \Bigr\|_2 \Bigr),
\end{eqnarray*}
which given the appropriate matrix $\bm{D_s}$ can easily be formulated in the hierarchical manner of equation \ref{eq:hie}. On regular grids, this prior is based only on the four (horizontal and vertical) neighbors but better approximations to the continuous Laplace operator based on more neighbors are straightforward and within the scope of our scalable block Gibbs sampler presented in section \ref{sec:gibbs}.
{ Finally note that the prior defined in (\ref{eq:beta_prior}) is not a proper probability distribution because it can not be normalized to one. However, in most cases the posterior distribution will still be integrable even if we use such an improper prior \cite{Gelman03}. As we see later in section \ref{sec:gibbs}, all the conditional distributions needed for block-Gibbs sampling are proper. Furthermore, the joint posterior inherits the unimodality in $\bm{\beta}$ and $\sigma$ given $\{ \nu_i \}_{i=1,\cdots,n}$ and $\lambda$ from the Bayesian Lasso \cite{PG08}, aiding in the mixing of the Markov chain sampling methods employed here. (See appendix \ref{sec:app}.)}
{
\subsection{Relationship to Network Lasso\label{sec:net_las}}
In related recent independent work, \cite{HLB15} present an algorithm based on the alternating direction method of multipliers \cite{ADMM11} to solve the network lasso convex optimization problem,
\begin{eqnarray}
\underset{\bm{\beta_i} \in \mathbb{R}^m \text{ for } i=1,\cdots,n}{\text{minimize}} \hspace{1cm} \sum_{i=1}^n \|\bm{y_i - X_i \beta_i} \|_2^2 + \gamma \sum_{i \sim j} \|\bm{\beta_i -\beta_j} \|_2,\label{eq:netlasso}
\end{eqnarray}
in a distributed and scalable manner. The parameter $\gamma$ scales the edge penalty relative to the node objectives (and can be tuned using cross-validation). Similar to our formulation (section \ref{sec:ldn}), network lasso uses an edge cost that is a sum of norms of differences of the adjacent node variables, leading to a setting that allows for robust smoothing within clusters on graphs. The optimization approach of \cite{HLB15} leads to fast computation but sacrifices the quantification of posterior uncertainty (which is in turn critical for closed-loop experimental design - e.g., deciding which neurons should be sampled more frequently to reduce posterior uncertainty) provided by the method proposed here. A Bayesian version of the network lasso is a special case of our robust Bayesian formulation, by setting the variable variance parameters equal to one, that is $\nu_i^2=1$ for $i=1,\cdots,n$. As we will see in the next example, heteroscedastic noise challenges the posterior mean estimate's robustness.
\subsection{Model Illustration}
In this section, we show that posterior means based on the prior of equation (\ref{eq:hpriors}) on $\{\nu_i\}_{i=1,\cdots,n}$ are robust to neuron-dependent noise variance. Our numerical experiments for heterogenous noise power show that a model with a homogeneous noise assumption will misinterpret noise as signal, depicted in figure \ref{fig:heteroscedastic}. Comparisons with network lasso are presented as well. We postpone the details concerning the block-Gibbs sampler presented in this paper to section \ref{sec:gibbs}.
The signal and heterogeneous noise models are as follows:
\begin{eqnarray*}
y_i &=& \beta_i + \epsilon_i, \hspace{2.4cm}
\text{ where } \hspace{1cm} \beta_i = \sqrt{\frac{i}{n}(1-\frac{i}{n})}\sin(11\pi \frac{i^4}{n^4}),\\
\epsilon_i &\sim& \mathcal{\mathcal{N}}(0,\sigma_i^2), \hspace{2.2cm} \text{ with } \hspace{1.1cm}
\sigma_i = \begin{cases}
0.1 & \quad \text{if } \frac{i}{n} \in [0,0.5) \cup (0.6,1] \\
1 & \quad \text{if } \frac{i}{n} \in [0.5,0.6]\\
\end{cases}.
\end{eqnarray*}
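This test signal and heterogeneous noise model can be generated directly; a short sketch (the value of $n$ is an arbitrary choice here):

```python
import numpy as np

n = 1000
rng = np.random.default_rng(1)
t = np.arange(1, n + 1) / n                       # t_i = i/n
beta = np.sqrt(t * (1 - t)) * np.sin(11 * np.pi * t**4)

# Neuron-dependent noise: a ten-fold larger standard deviation on t in [0.5, 0.6].
sigma_i = np.where((t >= 0.5) & (t <= 0.6), 1.0, 0.1)
y = beta + sigma_i * rng.normal(size=n)
```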
The following hyperpriors were used for the posterior means of the robust Bayesian model:
\begin{eqnarray*}
\sigma^2 &\sim& \text{inverse-Gamma}(\kappa=0,\epsilon=0),\\
\lambda^2 &\sim& \text{Gamma}(r=0.0001,\delta=0.001), \\
\nu_i^2 &\sim& \text{inverse-Gamma}(\varkappa=3,\varepsilon=2).
\end{eqnarray*}
The hyperpriors of $\lambda^2$ and $\sigma^2$ are relatively flat. For $\nu_i^2$, we set the hyperparameters such that we have unit prior mean and prior variance. Bayesian network lasso is only different from the robust Bayesian formulation in that it assumes a constant noise variance, i.e. $\nu_i^2=1$ for $i=1,\cdots,n$.
Bayesian network lasso and robust Bayesian posterior mean estimates are based on 10,000 consecutive iterations of the Gibbs sampler (after 5,000 burn-in iterations), discussed in section \ref{sec:gibbs}. The network lasso estimate is the solution to the convex optimization problem equation (\ref{eq:netlasso}) where the tuning parameter $\gamma$ is set using 10-fold cross-validation. Note that the network lasso estimate corresponds to the mode of the posterior distribution of Bayesian network lasso conditioned on $\sigma$ and $\lambda$.
For the sake of comparison, we also present numerical results for a homogeneous noise model. Here, the signal $\bm{\beta}$ is the same but the noise variance is $\sigma_i=0.33$ for $i=1,\cdots,n$. This particular choice of $\sigma_i$ was made to guarantee that the signal-to-noise ratio is equal to that of the heterogeneous noise model. As for the priors, they remain the same. As expected, Bayesian network lasso and robust Bayesian posterior means are similar, depicted in figure \ref{fig:homogeneous}.
Figures \ref{fig:heteroscedastic} and \ref{fig:homogeneous} illustrate that if the noise power is constant, robust Bayesian and Bayesian network lasso posterior means are similar. On the other hand, if noise power is not constant, robust Bayesian posterior mean detects the nonuniform noise power and adapts to it while Bayesian network lasso posterior mean will misinterpret noise as signal and overfit. Overall, network lasso estimates tend to over-smooth high frequency variations into piecewise-constant estimates which is undesirable. Repeated simulations presented in figure \ref{fig:comparison} further confirm these observations.}
\begin{figure}
\hspace{-0.4cm}
\centering
\includegraphics[width=16cm]{./figure/comparison_signal_nonuniform}
\caption{{Heterogenous noise example. The Bayesian network lasso posterior mean estimate overfits in the region of higher observation noise. The robust Bayesian formulation is less prone to misidentifying heterogenous noise as signal. The network lasso tends to cluster high frequency variations into piecewise-constant estimates.}}
\label{fig:heteroscedastic}
\end{figure}
\begin{figure}
\hspace{-0.4cm}
\centering
\includegraphics[width=16cm]{./figure/comparison_signal_uniform}
\caption{{Homogeneous noise example. The posterior means of Bayesian network lasso and our robust Bayesian are very similar. This is expected given the homogeneity of noise power. The network lasso suffers from the staircase effect, that is, the denoised signal is not
smooth but piecewise constant.}}
\label{fig:homogeneous}
\end{figure}
\begin{figure}
\hspace{-1.5cm}
\centering
\includegraphics[width=18cm]{./figure/comparison}
\caption{{Tukey boxplots comparing model $\sqrt{MSE}:=n^{-1/2}\| \bm{\beta} - \bm{\hat{\beta}}_{\text{model}} \|_2$ under homogeneous and heterogeneous noise. To make this comparison meaningful, signal-to-noise ratios are the same for both noise models. The boxplots are generated by simulating 100 replications of each model. For homogeneous noise, Bayesian network lasso and robust Bayesian perform similarly. However, when noise is heterogeneous, Bayesian network lasso tends to overfit, as illustrated in figure \ref{fig:heteroscedastic}. In terms of $MSE$, network lasso is more robust to noise variations than its Bayesian counterpart but robust Bayesian performs slightly better.}}
\label{fig:comparison}
\end{figure}
\section{Scalable Block Gibbs Sampling\label{sec:gibbs}}
We will now introduce some vector and matrix notation before we describe our Gibbs sampling approach to inference. First, we introduce the following variables:
\begin{eqnarray}
\bm{\underline y_i} &:=& \frac{\bm{y_i}}{\nu_i}, \quad \bm{\underline X_i} := \frac{\bm{X_i}}{\nu_i}. \label{eq:u}
\end{eqnarray}
{We also let $\bm{\underline X} \in \mathbb{R}^{nd \times nm}$ stand for the rectangular blockwise-diagonal matrix diag$\Bigl (\cdots,\bm{ \underline X_i},\cdots \Bigr)$.} Moreover, we let $\bm{\beta}$, $\bm{\underline y}$ and $\bm{\underline X' \underline y} $ stand for the column-wise concatenation (for $i=1,\cdots,n$) of $\bm{\beta_i}$, $\bm{\underline y_i}$, and $\bm{\underline X_i' \underline y_i}$, respectively. $\bm{\underline X' \underline X}$ is then the blockwise-diagonal matrix diag$\Bigl (\cdots,\bm{\underline X_i' \underline X_i},\cdots \Bigr) \in \mathbb{R}^{nm \times nm}$. Finally, recall that $p$ stands for the number of edges in the proximity network.
Our efficient Gibbs sampler and the full conditional distributions of $\bm{\beta}$, $\sigma^2$, $\{\nu_i^2 \}$, $\lambda$ and $\{ \tau_{ij}^2 \}$ can then be formulated as follows:
\textbf{Step 1.} The local smoothing parameters $\{ \tau_{ij}\}_{i \sim j}$ are conditionally independent, with {
\begin{equation*}
\tau_{ij}^2 | \bm{\beta},\sigma^2,\lambda^2 \sim \bigl ( \frac{ 1}{ \tau_{ij}^2} \bigr)^{1/2} \exp \Bigl ( - \frac{ \|\bm{\beta_i - \beta_j} \|_2^2}{2\sigma^2 \tau_{ij}^2} -\frac{\lambda^2}{2} \tau_{ij}^2 \Bigr) \end{equation*}
}
\textbf{Step 2.} The full conditional for $\bm{\beta}$ is multivariate normal with mean $\bm{P^{-1} \underline X'\underline y}$ and covariance $\sigma^2 \bm{P^{-1}}$, where $$\bm{P = \underline X'\underline X + D' \Gamma D}.$$
\textbf{Step 3.} $\sigma^2 \sim$ inverse-Gamma($\kappa'$,$\epsilon'$) with
\begin{equation*}
\kappa'=\kappa+ \frac{(pm+nd)}{2}, \quad\text{and} \quad \quad \epsilon' = \epsilon + \frac{1}{2} \| \bm{\underline y-\underline X\beta}\|^2 + \frac{1}{2} \|\bm{\Gamma^{1/2} D \beta} \|^2.
\end{equation*}
\textbf{Step 4.} $\lambda^2 \sim $ Gamma($r'$,$\delta'$) with
\begin{equation*}
r' = r + p(m+1)/2, \quad\text{and} \quad \quad \delta'=\delta+\frac{1}{2}\sum_{i \sim j} \tau_{ij}^2 .
\end{equation*}
\textbf{Step 5.} $\nu_i^2 \sim$ inverse-Gamma($\varkappa'$,$\varepsilon'$) with
\begin{equation*}
\varkappa'=\varkappa+ \frac{d}{2}, \quad\text{and} \quad \quad \varepsilon' = \varepsilon + \frac{1}{2\sigma^2} \| \bm{y_i-X_i\beta_i}\|^2 .
\end{equation*}
Note that in step 1, the conditional distribution can be rewritten as
\begin{eqnarray}
\frac{1}{\tau_{ij}^2} \Big | \bm{\beta},\sigma^2,\lambda \sim \text{inverse-Gaussian}(\mu',\lambda') \label{eq:localsmooth}
\end{eqnarray}
with
\begin{equation*}
\mu'=\frac{\lambda \sigma}{\|\bm{\beta_i - \beta_j}\|_2}, \quad \quad \quad \lambda' = \lambda^2,
\end{equation*}
in the parametrization of the inverse-Gaussian density given by
\begin{equation*}
\text{inverse-Gaussian}(\mu',\lambda') \quad \sim \quad f(x)=\sqrt{\frac{\lambda'}{2 \pi}} x^{-3/2} \exp \Bigl \{ - \frac{\lambda' (x-\mu')^2}{2 (\mu')^2 x} \Bigr \}.
\end{equation*}
Moreover, the conditional expectation of $\frac{1}{\tau_{ij}^2}$ (using its inverse-Gaussian density in \ref{eq:localsmooth}) is equal to $\frac{\lambda \sigma}{\|\bm{ \beta_i - \beta_j}\|_2} $. This makes the iterative Gibbs sampler above intuitively appealing; if the local difference is significantly larger than typical noise (i.e., $\|\bm{\beta_i - \beta_j}\|_2 \gg \lambda \sigma$) then there is information in the difference, and therefore, minimal smoothing is applied in order to preserve that difference. On the other hand, if the local difference is small, this difference is likely to be due to noise, and therefore, local smoothing will reduce the noise. In other words, the robust Bayesian formulation presented in this paper functions as an adaptive smoother where samples will be less smooth in regions marked with statistically significant local differences, and vice versa.
Furthermore in step 2, the conditional distribution of $\bm{\beta}$ depends on the observation $\bm{y}$ and the local smoothing parameters $\tau$. A large $1/\tau_{ij}^2$ causes the samples of $\bm{\beta_i}$ and $\bm{\beta_j}$ to be more similar to each other than their respective ML estimates $\bm{\beta_{i,\text{ml}}}$ and $\bm{\beta_{j,\text{ml}}}$ (where $\bm{\beta_{i,\text{ml}}} := \bm{(X_i'X_i)^{-1} X_i' y_i}$). In contrast, if $1/\tau^2_{ij}$ is small, then the conditional samples of $\bm{\beta_i}$ and $\bm{\beta_j}$ typically revert to their respective ML estimates, plus block-independent noise.
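The five steps above can be sketched compactly for the scalar denoising special case of the illustration in section 2.2 ($m = d = 1$, $\bm{X_i} = 1$, with $\bm{D}$ the first-difference matrix of a chain graph). This is our illustrative implementation, not the code used for the figures; dense linear algebra is used for clarity, whereas a sparse factorization would be used at scale (see the discussion of computational cost below):

```python
import numpy as np

def gibbs_sweep(y, beta, sig2, lam2, nu2, D, rng,
                kappa=0.0, eps=0.0, r=1e-4, delta=1e-3, vk=3.0, ve=2.0):
    """One sweep of the block Gibbs sampler for y_i = beta_i + eps_i
    (m = d = 1); D is the (n-1) x n first-difference matrix.
    Hyperparameter defaults mirror those used in the text."""
    n, p = len(y), D.shape[0]
    # Step 1: 1/tau_ij^2 | rest is inverse-Gaussian (numpy's "wald"),
    # with mean lam*sig/|beta_i - beta_j| and shape lam^2.
    diffs = np.maximum(np.abs(D @ beta), 1e-12)
    inv_tau2 = rng.wald(np.sqrt(lam2 * sig2) / diffs, lam2)
    # Step 2: beta | rest ~ N(P^{-1} X'y, sig2 * P^{-1}), P = X'X + D'Gamma D.
    P = np.diag(1.0 / nu2) + D.T @ (inv_tau2[:, None] * D)
    L = np.linalg.cholesky(P)
    mean = np.linalg.solve(P, y / nu2)
    beta = mean + np.sqrt(sig2) * np.linalg.solve(L.T, rng.normal(size=n))
    # Step 3: sig2 ~ inverse-Gamma(kappa + (p*m + n*d)/2, eps + ...).
    resid2 = np.sum((y - beta) ** 2 / nu2)
    rough2 = np.sum(inv_tau2 * (D @ beta) ** 2)
    sig2 = 1.0 / rng.gamma(kappa + 0.5 * (p + n),
                           1.0 / (eps + 0.5 * (resid2 + rough2)))
    # Step 4: lam2 ~ Gamma(r + p(m+1)/2, delta + sum(tau^2)/2), with m = 1.
    lam2 = rng.gamma(r + p, 1.0 / (delta + 0.5 * np.sum(1.0 / inv_tau2)))
    # Step 5: nu_i^2 ~ inverse-Gamma, independently across neurons.
    nu2 = 1.0 / rng.gamma(vk + 0.5, 1.0 / (ve + (y - beta) ** 2 / (2.0 * sig2)))
    return beta, sig2, lam2, nu2
```

Posterior means are then obtained by averaging the $\bm{\beta}$ draws across post-burn-in sweeps.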
Finally, although unnecessary in our approach, the fully Bayesian sampling of $\lambda$ in step 4 can be replaced with an empirical Bayes method. The difficulty in computing the marginal likelihood of $\lambda$, which requires a high-dimensional integration, can be avoided with the aid of the EM/Gibbs algorithm \cite{C01}. Specifically, iteration $k$ of the EM algorithm
\begin{eqnarray*}
\lambda^{(k+1)} = \operatornamewithlimits{argmax}_{\lambda} \text{E} \Bigl [ \log p(\bm{ \beta}, \tau^2,\lambda | \bm{y}) \Big | \bm{y} ,\lambda^{(k)} \Bigr],
\end{eqnarray*}
simplifies to
\begin{eqnarray}
\lambda^{(k+1)} &=& \sqrt{ \frac{p(m+1)}{\sum_{i \sim j} \text{E}[\tau_{ij}^2| \bm{y},\lambda^{(k)} ] }} \label{eq:em},
\end{eqnarray}
which can be approximated by replacing conditional expectations with sample averages from step 1. The empirical Bayes approach gives results consistent with the fully Bayesian setting. The expectation of the conditional Gamma distribution of $\lambda^2$ in step 4:
\begin{eqnarray*}
\text{E}[\lambda^2| \bm{y}, \tau] &=& \frac{2r+p(m+1)}{2\delta + \sum_{i \sim j} \tau_{ij}^2},
\end{eqnarray*}
is similar to the EM/Gibbs update (\ref{eq:em}). In our experience, both approaches give similar results on high dimensional data.
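As a sketch (in Python, with hypothetical dimensions), the EM/Gibbs update for $\lambda$ with conditional expectations replaced by Gibbs sample averages is a one-liner:

```python
import numpy as np

def em_lambda_update(tau_sq_samples, p, m):
    """One EM/Gibbs update for lambda: the conditional expectations
    E[tau_ij^2 | y] are approximated by averages over Gibbs samples.
    tau_sq_samples has shape (S, n_edges): S samples of {tau_ij^2}."""
    expected_sum = tau_sq_samples.mean(axis=0).sum()  # approx. sum_{i~j} E[tau_ij^2 | y]
    return np.sqrt(p * (m + 1) / expected_sum)

# Hypothetical check: if every tau_ij^2 averages to 1 over 5 edges with
# p = 2 and m = 4, the update gives sqrt(2 * 5 / 5) = sqrt(2).
lam_next = em_lambda_update(np.ones((10, 5)), p=2, m=4)
```

In a full run this update would be interleaved with the Gibbs sweeps, feeding the current $\lambda^{(k)}$ back into step 1.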
\subsection{Computational cost\label{sec:comp}}
The conditional independence of the local smoothing parameters $\{ \tau_{ij}\}_{i \sim j}$ given $\bm{\beta}$ and $\sigma$ amounts to a computational cost of sampling these variables that scales linearly with their size: $O(pm)$. Similarly, the cost of sampling $\sigma^2$ given $\bm{\beta}$ and $\{ \tau_{ij}\}_{i \sim j}$ is due to computing $\sum_{i=1}^n\| \bm{y_i-X_i\beta_i}\|^2$, $\sum_{i=1}^n\| \bm{\underline y_i-\underline X_i\beta_i}\|^2$ and $\|\bm{\Gamma^{1/2} D \beta }\|^2$, which are, respectively, $O(ndm)$, $O(ndm)$ and $O(pm)$, amounting to a total cost of $O((nd+p)m)$.
The conditional distribution of $\bm{\beta}$ given $\{ \tau_{ij}\}_{i \sim j}$ is multivariate Gaussian with mean $\bm{P^{-1} \underline X' \underline y}$ and covariance $\sigma^2 \bm{P^{-1}}$, whose computational feasibility rests primarily on the ability to solve the equation
\begin{eqnarray}
\bm{P w} &=& \bm{b} \label{eq:lin}
\end{eqnarray}
as a function of the unknown vector $\bm{w}$, for $\bm{P} = \bm{\underline X' \underline X + D' \Gamma D}$. This is because if $\bm{\epsilon_1, \epsilon_2} \sim \mathcal{N}(0,\bm{I})$, then
\begin{eqnarray}
\bm{P^{-1} \underline X' \underline y }+ \sigma \bm{ P^{-1} }\Bigl [\bm{\underline X' \epsilon_1} + \bm{D' \Gamma^{1/2} \epsilon_2 }\Bigr]\label{eq:samp},
\end{eqnarray}
is a Gaussian random vector with mean $\bm{P^{-1} \underline X' \underline y}$ and covariance $\sigma^2\bm{ P^{-1}}$. { Similar approaches for the efficient realization of Gaussian fields based on optimizing a randomly perturbed cost function (log-posterior) were studied in \cite{HR91,H09,PaYu10,BSHL14,GMI14}. In our case, the randomly perturbed cost function is
\begin{eqnarray*}
f_{\bm{\epsilon_1,\epsilon_2}} (\bm{\theta}) &:=& \bigl (\bm{D \theta} - \sigma \bm{\Gamma ^{-1/2} \epsilon_2} \bigr)' \bm{\Gamma} \bigl (\bm{D \theta} - \sigma \bm{\Gamma ^{-1/2} \epsilon_2} \bigr) \\&+&
\bigl ( \bm{\underline y } +\sigma \bm{\epsilon_1} - \bm{\underline X \theta} \bigr)'\bigl ( \bm{\underline y } +\sigma \bm{\epsilon_1} - \bm{\underline X \theta} \bigr),
\end{eqnarray*}
in which case it is easy to see that $\arg \min_{\bm{\theta}}f_{\bm{\epsilon_1,\epsilon_2}} (\bm{\theta})$ is given by equation (\ref{eq:samp}).
}
Standard methods for computing $\bm{P^{-1}b}$ require cubic time and quadratic space, rendering them impractical for high-dimensional applications. A natural idea for reducing the computational burden involves exploiting the fact that $\bm{P}$ is composed of a block-diagonal matrix $\bm{\underline X' \underline X}$ and a sparse matrix $\bm{D'\Gamma D}$. { For instance, matrices based on discrete Laplace operators on regular grids lend themselves well to multigrid algorithms which have linear time complexity (see \cite{B77,GS89,PaYu10} and section 19.6 of \cite{PRE92}).} Even standard methods for solving linear equations involving sparse matrices (as implemented, e.g., in MATLAB's $ \bm{P \backslash b} $ call) are quite efficient here, requiring sub-quadratic time \cite{RH05}. This sub-quadratic scaling requires that a good ordering is found to minimize fill-in during the forward sweep of the Gaussian elimination algorithm; code to find such a good ordering (via `approximate minimum degree' algorithms \cite{Davis06}) is built into the MATLAB call $\bm{P\backslash b}$ when $\bm{P}$ is represented as a sparse matrix. As we will see in section \ref{sec:opm}, exploiting these efficient linear algebra techniques permits sampling from a high dimensional ($> 10^6$) surface defined on a regular lattice in just a few seconds using MATLAB on a 2.53 GHz MacBook Pro.
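This perturb-and-solve recipe can be sketched with sparse linear algebra; the SciPy calls below stand in for MATLAB's $\bm{P \backslash b}$, and the small one-dimensional chain example (identity design, first-difference $\bm{D}$) is hypothetical:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def sample_gaussian(X, D, gamma_diag, y, sigma, rng):
    """Draw one sample from N(P^{-1} X'y, sigma^2 P^{-1}) for
    P = X'X + D' Gamma D by solving a single randomly perturbed
    sparse system -- no dense inverse or factorization of P is formed."""
    Gamma = sp.diags(gamma_diag)
    P = (X.T @ X + D.T @ Gamma @ D).tocsc()
    eps1 = rng.standard_normal(X.shape[0])
    eps2 = rng.standard_normal(D.shape[0])
    b = X.T @ y + sigma * (X.T @ eps1 + D.T @ (np.sqrt(gamma_diag) * eps2))
    return spsolve(P, b)

# Hypothetical 1D chain: X = I, D takes first differences between neighbors.
rng = np.random.default_rng(0)
n = 50
X = sp.eye(n, format="csr")
D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n)).tocsr()
gamma_diag = np.ones(n - 1)
y = rng.standard_normal(n)
samples = np.array([sample_gaussian(X, D, gamma_diag, y, 0.5, rng)
                    for _ in range(2000)])
```

Averaging many such draws recovers the posterior mean $\bm{P^{-1} \underline X' \underline y}$, and each draw costs only one sparse solve.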
\section{Motivating Neuroscience Applications\label{sec:results}}
\begin{figure}
\includegraphics[scale=0.75]{figure/opm_data_curve}
\caption{Electrophysiological recordings from a single neuron in the primary visual cortex of a monkey. A moving bar of light was projected onto the receptive field of the cell at different angles. In the diagrams on the left, the receptive field is shown as a dashed rectangle and the light source as a superimposed black bar. The angle of the dashed rectangle indicates the preferred orientation. For each bar (stimulus) orientation, the neural response was recorded. The voltage traces in the middle column show the electrophysiological recordings corresponding to the stimulus orientation of that row. Note that the neural response depends on the stimulus orientation; it increases as the bar and the preferred orientation become more aligned. Clearly, the bar orientation of the middle row evoked the largest number of action potentials. The graph on the right shows the average number of action potentials per second (neural response) versus the angle of the bar. This graph indicates how the neural response depends on the orientation of the light bar. The data have been fit by a Gaussian function. (Data are from \cite{HW68,HDB74} and figures are adapted from \cite{WAND95,DA01}.) }
\label{fig:opm_data}
\end{figure}
\begin{figure}
\hspace{0cm}
\includegraphics[width=0.95\textwidth]{figure/theta}
\caption{ Analysis of a synthetic orientation tuning map. $\theta$ is a synthetic $710 \times 710$ orientation preference map (see section 2.4 of the Supplement of \cite{K10} for details). Each pixel is a neuron, and $\theta_i \in (-90^\circ,+90^\circ]$ (the preferred orientation of neuron $i$) is given by $\arctan \bigl (\beta_{2,i} /\beta_{1,i} \bigr)$. Likewise, the robust Bayesian $\hat \theta$, smoothed $\theta_{sm}$, and maximum-likelihood $\theta_{ml}$ estimates of preferred orientations are inverse trigonometric functions of $\bm{\hat \beta}$, $\bm{\beta_{sm}}$, and $\bm{\beta_{ml}}$, respectively. The Bayesian estimate $\hat \theta$ of preferred orientations is less noisy than $\theta_{ml}$ and more robust than $\theta_{sm}$; see also Fig.~\ref{fig:r_theta} for a zoomed-in view. The $\bm{\hat \beta}$ estimate of posterior expectations is based on 10000 consecutive iterations of the Gibbs sampler (after 500 burn-in iterations).
}
\label{fig:orientation_preference}
\end{figure}
\begin{figure}
\hspace{0cm}
\includegraphics[width=1\textwidth]{figure/r_tau}
\caption{True tuning strengths $\{r_i\}$, the estimated tuning strengths $\{ r_{i,ml},r_{i,sm},\hat r_i \}$, and posterior means of local smoothing parameters $\{\tau_{ij}\}_{i \sim j}$. Each pixel is a neuron, and its (estimated) tuning strength is given by the length of its (estimated) $\bm{\beta_i}$, e.g. $r_i = \| \bm{ \beta_i }\|_2$, $\hat r_i = \| \bm{\hat \beta_{i} }\|_2$, etc. The proximity network is a $710 \times 710$ regular grid with edges between a node and its four (horizontal and vertical) neighbors. The local smoothing parameters defined on vertical and horizontal edges are designated by $\{\tau_y\}$ and $\{\tau_x\}$, respectively. The $r_{sm}$ (smoothed) and $\hat r$ (robust Bayesian) tuning strength maps underestimate the true value at points where posterior means of local smoothing parameters $\{\tau_x,\tau_y \}$ take significant values. These points correspond to sharp breaks in the orientation preference map $\theta$ (as illustrated in figure \ref{fig:orientation_preference}) where local averaging of significantly differently oriented tuning functions leads to a downward bias in estimated tuning strengths.}
\label{fig:r_tau}
\end{figure}
\begin{figure}
\hspace{0cm}
\includegraphics[width=1\textwidth]{figure/r_theta}
\caption{ A $40 \times 40$ zoomed-in view of preferred orientations $\{\theta_i\}$ and tuning strengths $\{r_i\}$, and their estimates. (The center of this map is pixel (241, 60) in figure \ref{fig:orientation_preference} and figure \ref{fig:r_tau}.) The smoothed $r_{sm}$ tuning strength map underestimates the true tuning strength at sharp breaks in the orientation preference map $\theta$. This bias is less severe for the Bayesian estimate $\hat r$ because the robust prior applies less local smoothing at sharp breaks (as illustrated in figure \ref{fig:r_tau}). Similarly, $\hat \theta$ provides much more accurate angular estimates than $\theta_{sm}$.
}
\label{fig:r_theta}
\end{figure}
\begin{figure}
\hspace{0cm}
\includegraphics[width=1\textwidth]{figure/burnin_samples}
\caption{ { The sample path of 3 randomly selected pixels (top), $\sigma$ (middle), and $\lambda$ (bottom). The last 10000 (after 500 burn-in) samples (left) and the first 50 samples (right).}
}
\label{fig:samples}
\end{figure}
\begin{figure}
\hspace{0cm}
\includegraphics[width=1\textwidth]{figure/r_theta_mouse}
\caption{ { A $40 \times 40$ zoomed-in view of the $710 \times 710$ (not shown) randomly arranged preferred orientations $\{\theta_i\}$ and tuning strengths $\{r_i\}$, and their estimates. The orientation at each pixel was randomly drawn from a uniform distribution on $(-90^\circ,+90^\circ]$. Since the preferred orientations lack spatial organization, the Bayesian estimate $\hat \theta$ of preferred orientations reverts to its respective $\theta_{ml}$. The posterior estimates are based on 10000 consecutive iterations of the Gibbs sampler (after 500 burn-in iterations).}}
\label{fig:orientation_preference_mouse}
\end{figure}
Here we will discuss the application of our robust Bayesian approach to the analysis of both synthetic and real neural tuning maps. In both cases, our new algorithm permits the robust estimation of neural tuning with higher fidelity and less data than alternative approaches.
\subsection{Synthetic data}
\subsubsection{Estimating Orientation Preference Maps\label{sec:opm}}
We will first apply our algorithm to synthetic data modeled after experiments where an animal is presented with a visual stimulus and the neural activity in primary visual cortex (also known as V1) is simultaneously recorded. V1 is the first stage of cortical visual information processing and includes neurons that selectively respond to sinusoidal grating stimuli that are oriented in specific directions \cite{HW62}. Neurons with such response properties are called \textit{simple cells}. See figure \ref{fig:opm_data} for an illustrative example of the recorded neural activity while a bar of light is moved at different angles \cite{HW68,HDB74,WAND95,DA01}. As can be seen in the figure, action potential firing of the simple cell depends on the angle of orientation of the stimulus.
To capture the essential characteristics of simple cells in the visual cortex, we will use the following model. The response of cell $i \in \{1,2,\cdots,n \}$ to a grating stimulus with orientation $\phi_{\ell}$ depends on the preferred orientation $\theta_i \in (-90^\circ,+90^\circ]$, and the tuning strength $r_i \in \mathbb{R}^{+}$ of that cell. The number of cells is $n$, and the number of trials (with differently oriented stimuli) is $d$. Formally speaking, in the simplest linear model, during the $\ell$th trial, the noisy measurement $y_{i,\ell} \in \mathbb{R}$ at neuron $i$ in response to a stimulus with orientation $\phi_{\ell}$ can be written as \cite{S98,MGWKB10,MGWKB11}
\begin{eqnarray*}
y_{i,\ell} \big | \bm{\beta_i,x_{\ell}},\sigma^2 &\sim& \mathcal{N}(\bm{\beta_i' x_{\ell}} ,\sigma^2 ), \quad i=1,\cdots, n \quad \text{and} \quad \ell=1,\cdots,d,
\end{eqnarray*}
where {$\bm{\beta_i}:= r_i[\cos\theta_i \ \sin\theta_i]'$} is related to $\theta_i$ (preferred orientation) and $r_i$ (tuning strength) as follows
$$
\theta_i:=\arctan \Bigl[ \frac{\beta_{2,i}}{\beta_{1,i}}\Bigr],
\quad \quad
r_i := \sqrt{\beta_{2,i}^2+\beta_{1,i}^2},
$$
and {$\bm{x_{\ell}}=[\cos\phi_{\ell} \ \sin\phi_{\ell}]'$} stands for the grating stimulus with orientation $\phi_{\ell}$. Writing the stimulus set $\{ \bm{x_{\ell}} \}_{\ell=1,\cdots,d}$ in matrix notation
\begin{eqnarray*}
\bm{X_{\o}} &:=& \begin{bmatrix}
\vdots \\
\bm{ x_{\ell}'} \\
\vdots \\
\end{bmatrix}_{d \times 2},
\end{eqnarray*}
allows us to compactly rewrite the neural response $\bm{y_i }\in \mathbb{R}^d$ as
\begin{equation}
\bm{y_i} \big | \bm{X_{\o}, \beta_i}, \sigma^2 \sim \mathcal{N} (\bm{X_{\o} \beta_i}, \sigma^2 \bm{I}) \quad i=1,\cdots, n \label{eq:1}.
\end{equation}
Note that all neurons respond to the same particular grating stimulus, namely $\bm{X_{\o}}$, though due to different preferred orientations, not
all neurons respond similarly.
In this example, the noise variances are set to be equal, that is $\nu_i=1$ for $i=1,\cdots,n$. As for the Gibbs sampler, we skip step 5, and substitute $\nu_i=1$ in all other steps. In the next section, we present a real data example, where $\{\nu_i\}$ is estimated using step 5 of our Gibbs sampler.
Drawing conclusions regarding the cortical circuitry underlying orientation maps, and their formation during visual development and across evolution, has recently been the subject of numerous studies \cite{SKLW07,RLW09,K10,KKS12}. For instance, \cite{K10} argued that evolutionary history (instead of ecological or developmental constraints) underlies the formation of qualitatively similar pinwheel distributions observed in the visual cortex of disparate mammalian taxa. Consequently, the estimation of orientation maps without contamination from measurement noise or bias from over-smoothing will help to clarify important questions about evolution and information processing in the visual cortex.
We therefore generated synthetic tuning maps by extracting the phase of superpositions of complex plane waves (see section 2.4 of the Supplement of \cite{K10} for details). In our simulations, for clarity we assume $\bm{\beta_i}=(\cos \theta_i, \sin \theta_i)'$, and therefore $r_i=1$, which means tuning strengths are constant across all neurons. The top left panels of figure \ref{fig:orientation_preference} and figure \ref{fig:r_tau} show the angular components $ \{ \theta_i\}$ and tuning strengths $r_i=1$ of the resulting map. It is well known that in some species the preferred orientations $ \{ \theta_i\}$ are arranged around singularities, called pinwheel centers \cite{OHKI05,OhkiReid06}. Around each singularity, the preferred orientations $ \{ \theta_i\}$ are circularly arranged, resembling a spiral staircase. If we closely examine the top left panel of figure \ref{fig:orientation_preference}, it is evident that around pinwheel centers the preferred orientations $ \{ \theta_i\}$ are descending, either clockwise or counterclockwise from
$-90^\circ$ to $+90^\circ$. Experimentally measured maps obtained from cats, primates, \cite{K10} and our synthetically generated data all share this important feature.
{ We simulated the neural responses of each cell to twenty differently oriented grating stimuli by sampling responses according to equation (\ref{eq:1}) with $\sigma=0.4$. The orientations $\phi_{\ell}$ (for $\ell=1,\cdots,20$) were randomly and uniformly sampled from $(-90^\circ,+90^\circ]$.} Our main objective is to estimate (from neural responses $\{\bm{y_i }\}$ and stimuli $\bm{X_{\o}}$) the preferred orientations $\{ \theta_i \}$ and tuning strengths $\{ r_i \}$. Ordinary linear regression yields maximum-likelihood estimates
\begin{eqnarray}
\bm{\beta_{i,\text{ml}}} &=& \bm{(X_{\o}'X_{\o})^{-1}X_{\o}' y_i }\label{eq:ml}\\
\theta_{i,\text{ml}} &=& \arctan \Bigl( \frac{ \beta_{2,i,\text{ml}}} {\beta_{1,i,\text{ml}} } \Bigr),\nonumber \\
r_{i,\text{ml}} &=& \| \bm{\beta_{i,\text{ml}}} \|_2. \nonumber
\end{eqnarray}
The maximum likelihood estimates $\theta_{i,\text{ml}}$ and $r_{i,\text{ml}}$ are depicted in figure \ref{fig:orientation_preference}, \ref{fig:r_tau} and \ref{fig:r_theta}. The fine structure around pinwheel centers and the border between clustered preferred orientations is disordered.
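As a hypothetical miniature of this simulation (one neuron, twenty randomly oriented stimuli, $\sigma=0.4$), the linear tuning model of equation (\ref{eq:1}) and the maximum-likelihood estimate (\ref{eq:ml}) can be sketched in Python as:

```python
import numpy as np

rng = np.random.default_rng(1)

d = 20                                             # number of stimulus trials
phi = rng.uniform(-np.pi / 2, np.pi / 2, size=d)   # stimulus orientations (radians)
X = np.column_stack([np.cos(phi), np.sin(phi)])    # X_o, a d x 2 stimulus matrix

theta_true, r_true, sigma = 0.7, 1.0, 0.4          # hypothetical ground truth
beta_true = r_true * np.array([np.cos(theta_true), np.sin(theta_true)])
y = X @ beta_true + sigma * rng.standard_normal(d)  # noisy responses, eq. (1)

beta_ml = np.linalg.solve(X.T @ X, X.T @ y)        # (X_o'X_o)^{-1} X_o' y
theta_ml = np.arctan(beta_ml[1] / beta_ml[0])      # preferred orientation estimate
r_ml = np.linalg.norm(beta_ml)                     # tuning strength estimate
```

Repeating this least-squares fit independently at every pixel reproduces the noisy $\theta_{ml}$ and $r_{ml}$ maps in the figures, which motivates the information sharing introduced next.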
We also computed the smoothed estimate $\bm{\beta_{sm}}$ based on the following smoothing prior
\begin{eqnarray*}
p(\bm{\beta} | \gamma) &\propto& \exp \Bigl ( -\frac{\gamma}{2} \sum_{i \sim j }\| \bm{\beta_i - \beta_j} \|_2^2 \Bigr)\\
&\propto& \exp \Bigl ( -\frac{\gamma}{2} \bm{\beta' D' D \beta} \Bigr),
\end{eqnarray*}
and the likelihood in (\ref{eq:1})
\begin{eqnarray*}
p(\bm{y} | \bm{\beta}) &\propto& \exp \Bigl ( -\frac{1}{2\sigma^2} \| \bm{y - X\beta} \|_2^2 \Bigr),
\end{eqnarray*}
leading to the following posterior expectation of $\bm{\beta}$:
\begin{eqnarray}
\bm{\beta_{sm}}(\gamma) := (\bm{X'X} + \gamma \bm{D' D})^{-1} \bm{X'y} \label{eq:sm},
\end{eqnarray}
where $\bm{X'X = I_{n \times n} \otimes X_{\o}'X_{\o} }$ and $\bm{X'y}=(\cdots, \bm{X_{\o}'y_i},\cdots)$. The smoothed estimate $\bm{\beta_{sm}}$ is based on a Gaussian prior that penalizes large local differences quadratically. (In contrast, the robust prior defined in equation \ref{eq:beta_prior} penalizes large differences linearly.) The amount of smoothing is dictated by $\gamma$; large values of $\gamma$ lead to over-smoothing and small values of $\gamma$ lead to under-smoothing. In this example, the true $\beta$ is known; therefore, for the sake of finding the best achievable smoothing performance, { we selected $\gamma=2.15$ }(using a grid search) which minimizes $\|\bm{\beta_{sm}}(\gamma)-\bm{\beta} \|_2$.
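A sketch of equation (\ref{eq:sm}) that exploits the Kronecker structure $\bm{X'X = I \otimes X_{\o}'X_{\o}}$ together with sparse solves; the tiny three-neuron setup and the helper name are hypothetical:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def beta_smoothed(Xo, Y, D, gamma):
    """Quadratic-smoothing estimate beta_sm(gamma) = (X'X + gamma D'D)^{-1} X'y,
    where X'X = I_n kron Xo'Xo and row i of Y holds neuron i's responses y_i."""
    n = Y.shape[0]
    XtX = sp.kron(sp.eye(n), Xo.T @ Xo)   # block-diagonal Gram matrix
    Xty = (Y @ Xo).ravel()                # stacks Xo'y_i for i = 1..n
    P = (XtX + gamma * (D.T @ D)).tocsc()
    return spsolve(P, Xty).reshape(n, 2)

# With gamma = 0 the estimate reduces to per-neuron least squares.
rng = np.random.default_rng(2)
Xo = rng.standard_normal((20, 2))
Y = rng.standard_normal((3, 20))
D = sp.csr_matrix((2, 6))                 # no smoothing edges in this toy case
beta0 = beta_smoothed(Xo, Y, D, gamma=0.0)
```

Increasing $\gamma$ pulls neighboring rows of the estimate toward each other, reproducing the over-/under-smoothing trade-off described above.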
The proximity network that we used in this example was defined using the edges between every node and its four nearest (horizontal and vertical) neighbors. The smoothed estimates $\theta_{i,\text{sm}}:= \arctan \Bigl ( \frac{ \hat \beta_{2,i,sm}} {\hat \beta_{1,i,sm} } \Bigr)$ and $r_{i,\text{sm}}:=\|\bm{\beta_{i,sm}}\|_2$ are depicted in figures \ref{fig:orientation_preference}, \ref{fig:r_tau} and \ref{fig:r_theta}. In spite of the observation that $\theta_{sm}$ is less noisy than $\theta_{ml}$, there are still areas where the fine structure around pinwheel centers and the border between clustered preferred orientations is disordered.
Figure \ref{fig:r_tau} shows that $r_{sm}$ is typically close to the true value of one, except in neurons that lie at the border between regions with different orientation preferences. This is because tuning functions (and their noisy observations) in border regions point in significantly different directions, and therefore local averaging decreases the length of the average vector. On the other hand, in smooth regions where vectors point in roughly the same direction, local averaging preserves vector length.
The ability of our method to recover orientation preference maps from noisy recordings is shown in figures \ref{fig:orientation_preference}, \ref{fig:r_tau} and \ref{fig:r_theta}. To use the Bayesian formulation of equation \ref{eq:beta_prior}, we substituted a fixed $\bm{X_{\o}}$ for all $\bm{X_i}$. {For $\lambda^2$, a Gamma($r=1,\delta=1$) was used based on the understanding that a priori $\frac{1}{p}\sum_{i \sim j}\|\beta_i -\beta_j\|_2 $ should be $O(1)$. As for $\sigma^2$, the improper inverse-Gamma($\kappa'=0$,$\epsilon'=0$), i.e. $\pi(\sigma^2) \propto 1/\sigma^2$, was used. $\{\bm{ \hat \beta_i}\}$, namely the posterior expectation of $\{\bm{\beta_i}\}$, is based on 10000 samples from our efficient Gibbs sampler (after 500 burn-in iterations). The estimates $\hat \sigma=0.4066 \pm 0.0001$ and $\hat \lambda=11.13 \pm 0.01 $ (i.e., the mean $\pm$ standard deviation) are based on the 10000 samples.} The following estimates of the preferred orientations and tuning strengths
\begin{eqnarray*}
\hat \theta_i &:=& \arctan \Bigl ( \frac{ \hat \beta_{2,i}} {\hat \beta_{1,i} } \Bigr),\\
\hat r_i &:=& \| \bm{\hat \beta_i} \|_2,
\end{eqnarray*}
are depicted in figures \ref{fig:orientation_preference} and \ref{fig:r_tau}. The posterior mean estimates of $\tau_x$ and $\tau_y$ (depicted in figure \ref{fig:r_tau}) tend to be larger for neurons on the borders between regions of similar preferred orientations $\{ \theta_i \}$ (and less so around pinwheel centers), leading to minimal local smoothing for those pixels. Figure \ref{fig:r_tau} shows that the Bayesian estimate $\hat r$ (like $r_{sm}$) underestimates the tuning strength for points that mark the border between different orientation preferences. In comparison to $r_{sm}$, as illustrated in the zoomed-in maps of figure \ref{fig:r_theta}, this problem is less severe for the Bayesian estimate $\hat r$ because the robust prior decreases the strength of local averaging by increasing the local smoothing parameters $\{ \tau_{ij} \}$ in regions marked with discontinuities.
As we can see in figure \ref{fig:r_theta}, the sharp border between similar orientation preferences is not over-smoothed while the noise among nearby neurons with similar orientation preferences is reduced. As a consequence of robustness, information is shared less among cells that lie at the border, but for cells that lie inside regions with smoothly varying preferred orientation, local smoothing is stronger. Moreover, in this example the chain appears to mix well (see figure \ref{fig:samples}), and the Gibbs sampler is computationally efficient, requiring just a few seconds on a laptop (per iteration) to sample a surface described by $> 10^6$ parameters.
{Finally, let us add that it is well known that the semiregular, smoothly varying arrangement (with local discontinuities) of orientation preference maps is not a general feature of cortical architecture \cite{VHCNT05}. In fact, numerous electrophysiological and imaging studies \cite{TB76,MB79,MGI88,GSL99} have found that orientation selective neurons in the visual cortex of many rodents are randomly arranged. A question that arises is whether the model would over-smooth if the neurons are not arranged smoothly in terms of their maps. In order to answer this question, we generated a randomly arranged orientation preference map, and applied our algorithm to the simulated neural activity in response to the same grating stimuli $\bm{X_{\o}}$ used above. We also used the same noise variance ($\sigma=0.4$) and the same priors for $\lambda,\sigma$ and $\{\bm{\beta} \}_{i=1,\cdots,n}$. Results are depicted in figure \ref{fig:orientation_preference_mouse}. Since the preferred orientations lack spatial organization, the Bayesian estimate $\hat \theta$ of preferred orientations reverts to its respective $\theta_{ml}$. }
\subsection{Real data}
\subsubsection{Phasic tuning in motor neurons\label{sec:motor}}
We next tested the method's performance on real neural imaging data obtained from an isolated mouse spinal cord preparation (schematized in figure \ref{fig:spinal-data}A). In these data, the fluorescent activity sensor GCaMP3 was expressed in motor neurons that innervate leg muscles. After application of a cocktail of rhythmogenic drugs, all motor neurons in the preparation fire in a periodic bursting pattern mimicking that seen during walking \cite{MPPJM15}. Under these conditions, we acquired sequences of fluorescent images and then applied a model-based constrained deconvolution algorithm to infer the timing of neuronal firing underlying each fluorescent activity time series extracted from the pixels corresponding to individual neurons \cite{Eftychios_2014}.
Each mouse leg is controlled by $\sim50$ different muscles, each of which is innervated by motor neurons that fire in distinct patterns during locomotor behavior \cite{Krouchev_2006,Akay_2014}. Furthermore, all motor neurons that share common muscle targets are spatially clustered together into ``pools" within the spinal cord \cite{romanes1964motor}. Therefore, during the locomotor-like network state monitored in these data, different spatially-distinct groups of motor neurons are recruited to fire at each moment in time (figure \ref{fig:spinal-data}B-D). When the activity of each motor neuron is summarized as a single mean phase tuning value (representing the average phase angle of the $\sim70$ firing events detected per neuron, as seen in figure \ref{fig:raw_single}), a clear spatial map can be derived (figure \ref{fig:spinal-data}D). Such maps appear smooth within pools, and sharply discontinuous between pools.
While phase tuning can be reliably inferred one neuron at a time in these data, fluorescent measurements from each neuron are not always of high quality. As a result, activity events cannot be reliably inferred from all neurons \cite{MPPJM15}. Additionally, more neurons could have been observed with less data per neuron if phase tuning was estimated more efficiently. Therefore we applied our robust and scalable Bayesian information sharing algorithm to these data in an attempt to reduce measurement noise, and decrease the required data necessary to attain precision tuning map measurements.
\begin{figure}
\includegraphics[width=1\textwidth]{./figure/mn-map2.pdf}
\caption{Isolated spinal cord imaging preparation. (a) Schematic of isolated spinal cord imaging preparation. (b) Activity inferred from fluorescence measurements obtained from four motor neurons. Height of black bars indicates intensity of neuronal activity at each time point. Vertical blue bars indicate the onset of each locomotor cycle (i.e. $0^{\circ}$). (c) Example fluorescent imaging field with the position of the four neurons shown in (b) indicated. (d) Each motor neuron shown in (c) is represented in color (legend in inset) corresponding to its estimated tuning value.}\label{fig:spinal-data}
\end{figure}
In this setting, let us introduce some simplifying notation. We use $\ell_i$ to denote the total number of spikes that neuron $i$ has fired. As mentioned earlier, $\ell_i \sim 70$ here. Furthermore, we use $\theta_{i,\ell}$ to denote the $\ell$th phase at which neuron $i$ has fired a spike. Then we convert this phase $\theta_{i,\ell}$ to $\bm{y_{i,\ell}}:=[\cos(\theta_{i,\ell}) \sin(\theta_{i,\ell})]'$, a point on the unit circle.
We model the neuron's tendency to spike at phases that are concentrated around a certain angle using a two dimensional vector $\bm{\beta_i}$. The direction of $\bm{\beta_i}$ is the preferred phase $\theta_i$ and the length of $\bm{\beta_i}$ is the tuning strength $r_i$. If the neuron is highly tuned, that is, there is no variability among the phases at which this neuron fires a spike, then $r_i=1$ and $\bm{\beta_i}$ lies on the unit circle. On the other hand, if the neuron is weakly tuned, that is, there is large variability among the phases at which this neuron fires a spike, then $r_i \sim 0$. We relate observation $\bm{y_{i,\ell}}$ to the unknown {$\bm{\beta_i}:= r_i[\cos\theta_i \ \sin\theta_i]'$} as follows
\begin{eqnarray*}
\bm{y_{i,\ell}} \big | \bm{\beta_i},\sigma,\nu_i &\sim& \mathcal{N}(\bm{\beta_i} ,\nu_i^2 \sigma^2 \bm{I}), \quad \text{for } \ \ell=1,\cdots,\ell_i
\end{eqnarray*}
where $\bm{\beta_i}$ is related to $\theta_i$ (preferred phase) and $r_i$ (tuning strength) as follows
\begin{eqnarray*}
\theta_i&:=&\arctan \Bigl[ \frac{\beta_{2,i}}{\beta_{1,i}}\Bigr], \\
r_i &:=& \sqrt{\beta_{2,i}^2+\beta_{1,i}^2}.
\end{eqnarray*}
{
There are two points worth mentioning. First, the Gaussian noise model clearly violates the fact that $\{\bm{y_{i,\ell}}\}$ lie on the unit circle, and should therefore be considered a rather crude approximation. Nevertheless, as demonstrated below, this Gaussian likelihood, combined with our prior in (\ref{eq:beta_prior}), is remarkably effective in estimating the preferred phases $\{ \theta_i \}$ with as little as one observed phase per neuron. Second, the vector representation of the $\ell_i$ spikes that neuron $i$ has fired
$$
\bm{y_i}=
\begin{bmatrix}
\bm{y_{i,1}} \\
\vdots \\
\bm{y_{i,\ell_i}}
\end{bmatrix}_{2\ell_i \times 1}
$$ can be related to the unknown $\bm{\beta_i}$ using the formulation presented in equation (\ref{eq:model}) where
$$
\bm{X_i}=
\begin{bmatrix}
\vdots \\
\bm{I_{2\times 2}} \\
\vdots \\
\end{bmatrix}_{2\ell_i \times 2}.
$$
}
The ML estimate of $\bm{\beta_i}$, given the Gaussian additive noise model, is the sample mean of the observations $\{\bm{y_{i,\ell}}\}_{\ell=1,\cdots,\ell_i}$, $\bm{\beta_{i,\text{ml}}} = \frac{1}{\ell_i} \sum_{\ell=1}^{\ell_i} \bm{y_{i,\ell}} $. The ML estimate of the preferred phase $\theta_{i,\text{ml}} = \arctan \Bigl[ \frac{\beta_{2,i,\text{ml}}}{\beta_{1,i,\text{ml}}}\Bigr]$ is the circular mean of the observed phases, as depicted in figure~\ref{fig:raw_single}. The resulting radius $\|\bm{ \beta_{i,\text{ml}}} \|_2$, the ML estimate of $r_i$, will be 1 if all angles are equal. If the angles are uniformly distributed on the circle, then the resulting radius will be 0, and there is no circular mean. The radius measures the concentration of the angles and can be used to estimate confidence intervals.
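A minimal Python sketch of this ML computation (the `arctan2` call recovers angles on the full circle, whereas the text's $\arctan$ convention covers the half circle):

```python
import numpy as np

def circular_ml(phases):
    """ML estimate under the Gaussian model: beta_ml is the mean of the unit
    vectors; its angle is the circular mean of the phases and its length
    r_ml measures their concentration (1 = identical, ~0 = uniform)."""
    y = np.column_stack([np.cos(phases), np.sin(phases)])
    beta_ml = y.mean(axis=0)
    theta_ml = np.arctan2(beta_ml[1], beta_ml[0])  # full-circle arctan(b2/b1)
    r_ml = np.linalg.norm(beta_ml)
    return theta_ml, r_ml
```

For identical phases `circular_ml` returns that phase with radius one; for phases spread uniformly around the circle the radius collapses toward zero, reflecting weak tuning.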
In addition to the observed phases, we also have the three-dimensional physical location of all cells. As an illustrative example, the spatial distribution of $\{\theta_{i,\text{ml}} \}$ and $\{r_{i,\text{ml}} \}$ is depicted in figures \ref{fig:raw_pp} and \ref{fig:raw_ts}. The three-dimensional location is projected into the two-dimensional x-y plane. Each dot is a cell, and its color in panels \ref{fig:raw_pp} and \ref{fig:raw_ts} corresponds to $\theta_{i,\text{ml}}$ and $r_{i,\text{ml}}$, respectively. Clearly, nearby cells tend to have similar preferred phases and tuning strengths --- but there are many exceptions to this trend. A mixture prior is required to avoid oversmoothing the border between clusters of cells with similar properties while allowing cells within a cluster to share information and reduce noise.
In order to include the physical location of the cells into our Bayesian formulation, we formed a proximity network based on nearest spatially-whitened neighbors, as described in section \ref{sec:ldn}. $\{\bm{\hat \beta_i}\}$, the posterior expectation of $\{\bm{\beta_i}\}$, is based on 10000 samples from our efficient Gibbs sampler (after 500 burn-in iterations). For illustration purposes, we experimented with holding the hyperparameter $\lambda$ fixed in the simulations; the effects of this hyperparameter on the estimates of the preferred phases and tuning strengths
\begin{eqnarray}
\hat \theta_i &:=& \arctan \Bigl ( \frac{ \hat \beta_{2,i}} {\hat \beta_{1,i} } \Bigr),\label{eq:hat_mle}\\
\hat r_{i} &:=& \| \bm{ \hat \beta_i} \|_2, \nonumber
\end{eqnarray}
are depicted in figure \ref{fig:animals}. It is clear that large $\lambda$ forces nearby neurons to have more similar preferred phases whereas for small $\lambda$ the preferred phases revert to their respective ML estimates.
The ability of our method to recover the preferred phases from as little as one noisy phase $\theta_{i,\ell}$ per neuron is illustrated below. We divide the data into two parts. For each cell, there are roughly 70 phases recorded (at which the corresponding neuron fired). For each neuron $i$, we randomly selected one of the phases $\{\theta_{i,\ell} \}_{\ell=1,\cdots,\ell_i}$ for the training set, and let the rest of the phases constitute the testing set:
\begin{eqnarray*}
\bm{y_{i,\text{train}} }&:=& \begin{pmatrix}
\cos(\theta_{i,\ell_{\text{train}}})\\
\sin(\theta_{i,\ell_{\text{train}}})
\end{pmatrix} ,
\quad
\bm{y_{i,\text{test}}} := \frac{1}{\ell_i-1} \sum_{\substack{\ell=1,\cdots,\ell_i \\ \ell \neq \ell_{\text{train}} }} \begin{pmatrix}
\cos(\theta_{i,\ell})\\
\sin(\theta_{i,\ell})
\end{pmatrix}. \label{eq:testphase}
\end{eqnarray*}
The raw estimates of preferred phases and tuning strengths, using training data, are computed as follows{
\begin{eqnarray*}
\theta_{i,\text{train}} &:=& \arctan \Bigl ( \frac{ y_{2,i,\text{train}}} {y_{1,i,\text{train}} } \Bigr),
\quad r_{i,\text{train}} := \| \bm{y_{i,\text{train}}} \|_2,
\end{eqnarray*}}
and raw estimates of preferred phases and tuning strengths, using testing data, are computed likewise. For $\{\bm{y_i}\}$ in our Gibbs sampler, we use the training data $\{\bm{y_{i,\text{train}}} \}$. Posterior estimates $\{\hat \theta_i,\hat r_i, \hat \nu_i \}$, for four distinct datasets, are depicted and compared against testing data in figures
\ref{fig:ttph1}-\ref{fig:ttph26}. For $\lambda^2$, we use a Gamma($r=1, \delta=1$) prior and for $\sigma^2$ an improper inverse-Gamma($\kappa'=0$, $\epsilon'=0$) prior. Generally speaking, $\sigma$ and $\lambda$ are not identifiable, and the joint posterior distribution of $\bm{\beta}$ and $\sigma$ is only unimodal given $\{\nu_i^2 \}$. We address both challenges by placing a relatively tight prior on $\{\nu_i^2 \}$: independent $\text{inverse-Gamma}(\varkappa=3, \varepsilon=2)$ priors, making the prior means and variances equal to one. This tight prior constrains the $\nu_i$s so that the posterior distribution stays nearly unimodal and $\lambda$ and $\sigma$ remain nearly identifiable.
The raw training estimates of preferred phases and tuning strengths are very noisy, which is expected given that only one phase per neuron is used. This is an extremely low signal-to-noise limit. In contrast, roughly 70 phases per neuron are used to compute the raw testing estimates. The Bayesian estimates $\{\hat \theta_i,\hat r_i \}$ are also based on one phase per neuron, but they exploit the a priori knowledge that the activity of a neuron carries information about its nearby neurons. As mentioned earlier, this is done by incorporating the proximity network into the Bayesian formulation.
Moreover, as illustrated in the middle panels of figures \ref{fig:ttph1}-\ref{fig:ttph26}, the Bayesian estimates respect the border of clustered cells with similar phasic preferences and tuning strengths. Information is not invariably shared among nearby cells; instead, it is based on how locally similar the samples of $\{\bm{\beta_i}\}$ are. If the estimated typical noise is much less than the local difference, then, intuitively speaking, local smoothing should be avoided because the difference seems statistically significant.
In contrast, the raw test estimates $\{\theta_{i,\text{test}}, r_{i,\text{test}} \}$ are computed in isolation (one neuron at a time) but use roughly 70 phases per neuron (high signal-to-noise). The Bayesian estimates are less noisy than the raw training estimates (low signal-to-noise) and qualitatively resemble the raw test estimates (high signal-to-noise). Unlike the previous synthetic data example, here the true parameters are unknown. In order to quantify the noise reduction, we treat the high signal-to-noise test estimates as the unknown true parameters and compare them against the Bayesian estimates. Recall that the Bayesian estimates are based on the low signal-to-noise raw training data. We quantify the noise reduction by comparing the test error $\frac{1}{n} \sum |\hat \theta_i - \theta_{i,\text{test}}|$ with the raw error $\frac{1}{n} \sum | \theta_{i,\text{train}} - \theta_{i,\text{test}}|$. The test error is $10^\circ-16^\circ$ less than the raw error. For more details see the captions of figures \ref{fig:ttph1}-\ref{fig:ttph26}. Lastly, the boxplots in figure \ref{fig:summary} summarize and quantify the noise reduction due to our robust Bayesian information sharing approach. In each case, the new Bayesian approach provides significant improvements in estimation accuracy.
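Because phases live on the circle, the absolute differences in these error measures should be computed modulo $2\pi$; a sketch with a hypothetical helper that wraps each difference onto $(-\pi,\pi]$ before averaging in degrees:

```python
import numpy as np

def mean_angular_error_deg(theta_hat, theta_ref):
    """Mean absolute angular difference in degrees, wrapping each
    difference onto (-pi, pi] before taking the absolute value."""
    d = np.angle(np.exp(1j * (np.asarray(theta_hat) - np.asarray(theta_ref))))
    return float(np.degrees(np.abs(d)).mean())

# pairs straddling the +/- pi cut are still counted as nearby
err = mean_angular_error_deg([0.1, 3.1, -3.1], [0.0, -3.1, 3.1])
```

Without the wrap, the second and third pairs above would register as roughly $355^\circ$ apart instead of about $5^\circ$.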
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{figure/raw_26_single}
\caption{ Phasic preference of one cell}
\label{fig:raw_single}
\end{subfigure}%
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{figure/raw_26_preferred_phase}
\caption{ Preferred phases of $n=854$ cells.}
\label{fig:raw_pp}
\end{subfigure}%
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{figure/raw_26_tuning_strength}
\caption{ Tuning strengths of $n=854$ cells.}
\label{fig:raw_ts}
\end{subfigure}
\caption{(a) Noisy observations of phases at which this cell has fired. The phase of each red dot on the unit circle is a phase at which this cell has fired, and the angular histogram depicts its distribution. The blue dot is the circular mean of all red dots, and its phase and length are the ML estimates of the preferred phase and tuning strength, respectively. (b,c) The three-dimensional spatial cell positions are projected onto the two-dimensional x-y plane. Each dot indicates one cell; each cell is color coded with the phase $\theta_{i,\text{ml}}$ or tuning strength $r_{i,\text{ml}}$. Preferred phases (and tuning strengths) tend to be similar among nearby cells, but not all nearby cells have similar preferred phases (and tuning strengths).}\label{fig:raw}
\end{figure}
\begin{figure}
\vspace{-1cm}
\centering
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figure/raw_26_colormap}
\caption{$\lambda=0$}
\label{fig:raww}
\end{subfigure}%
~
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figure/processed_lambda_1_26_colormap}
\caption{$\lambda=1$}
\label{fig:lambda_0_0_1}
\end{subfigure}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figure/processed_lambda_10_26_colormap}
\caption{$\lambda = 10$}
\label{fig:lambda_0_1}
\end{subfigure}
~
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{figure/processed_lambda_100_26_colormap}
\caption{$\lambda=100$}
\label{fig:lambda_1}
\end{subfigure}
\caption{Preferred phase estimates for different values of the hyperparameter $\lambda$. Each dot corresponds to the estimated preferred angle $\hat \theta_i$ for one cell. For $\lambda=0$, the estimates are equal to the ML estimates. For $\lambda=1$, information sharing is not large enough and the estimates are not very different from the ML estimates. For $\lambda=10$, nearby neurons are forced to have similar preferred phases; nonetheless, the sharp border between functionally different clusters of neurons is not oversmoothed. The posterior mean and standard deviation of $\lambda$, based on 10000 iterations (after 500 burn-ins), are $5.26$ and $0.52$, respectively. For $\lambda=100$, smoothing within clusters is stronger and borders are not violated. However, tuning estimates within each cluster suffer from oversmoothing.
}\label{fig:animals}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=\textwidth]{figure/Atrain_test_thetai_1}
\end{subfigure}
\vspace{1cm}
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=\textwidth]{figure/Atrain_test_Ai_nui_1}
\end{subfigure}
\caption{Dataset 1 with $n=584$. The posterior estimates $\hat \sigma=0.62 \pm 0.02$ and $\hat \lambda = 6.43 \pm 0.38$ (i.e., the mean $\pm$ standard deviation) are based on 10000 samples (after 500 burn-ins). The test set is made of $59$ observed phases per neuron. The test error is $\frac{1}{n} \sum |\hat \theta_i - \theta_{i,\text{test}}|=27.6^\circ$ and the raw error is $\frac{1}{n} \sum |\hat \theta_{i,\text{train}} - \theta_{i,\text{test}}|=36.9^\circ$. }\label{fig:ttph1}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=\textwidth]{figure/Atrain_test_thetai_16}
\end{subfigure}
\vspace{1cm}
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=\textwidth]{figure/Atrain_test_Ai_nui_16}
\end{subfigure}
\caption{Dataset 16 with $n=676$. The posterior estimates $\hat \sigma=0.62 \pm 0.01 $ and $\hat \lambda = 7.04 \pm 0.38$ (i.e., the mean $\pm$ standard deviation) are based on 10000 samples (after 500 burn-ins). The test set is made of $60$ phases per neuron. The test error is $\frac{1}{n} \sum |\hat \theta_i - \theta_{i,\text{test}}|=28.4^\circ$ and the raw error is $\frac{1}{n} \sum |\hat \theta_{i,\text{train}} - \theta_{i,\text{test}}|=41.5^\circ$. }\label{fig:ttph16}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=\textwidth]{figure/Atrain_test_thetai_23}
\end{subfigure}
\vspace{1cm}
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=\textwidth]{figure/Atrain_test_Ai_nui_23}
\end{subfigure}
\caption{Dataset 23 with $n=695$. The posterior estimates $\hat \sigma=0.69 \pm 0.02$ and $\hat \lambda = 7.39 \pm 0.39$ (i.e., the mean $\pm$ standard deviation) are based on 10000 samples (after 500 burn-ins). The test set is made of $73$ phases per neuron. The test error is $\frac{1}{n} \sum |\hat \theta_i - \theta_{i,\text{test}}|=39.45^\circ$ and the raw error is $\frac{1}{n} \sum |\hat \theta_{i,\text{train}} - \theta_{i,\text{test}}|=51.55^\circ$. }\label{fig:ttph23}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=\textwidth]{figure/Atrain_test_thetai_26}
\end{subfigure}
\vspace{1cm}
\begin{subfigure}[b]{1\textwidth}
\includegraphics[width=\textwidth]{figure/Atrain_test_Ai_nui_26}
\end{subfigure}
\caption{Dataset 26 with $n=854$. The posterior estimates $\hat \sigma= 0.62 \pm 0.01$ and $\hat \lambda =7.52 \pm 0.40$ (i.e., the mean $\pm$ standard deviation) are based on 10000 samples (after 500 burn-ins). The test set is made of $82$ phases per neuron. The test error is $\frac{1}{n} \sum |\hat \theta_i - \theta_{i,\text{test}}|=27.1^\circ$ and the raw error is $\frac{1}{n} \sum |\hat \theta_{i,\text{train}} - \theta_{i,\text{test}}|=42.9^\circ$. }\label{fig:ttph26}
\end{figure}
\begin{figure}
\includegraphics[width=1\textwidth]{./figure/Asummary}
\caption{ Tukey boxplots comparing the raw error and test error for the four datasets illustrated in figures \ref{fig:ttph1}-\ref{fig:ttph26}. The raw and test error (for cell $i$) are defined as $\|\bm{ y_{i,\text{train}} -y_{i,\text{test}}} \|_2$ and $\| \bm{\hat \beta_{i} -y_{i,\text{test}} }\|_2$, respectively.
} \label{fig:summary}
\end{figure}
\clearpage
\section{Concluding Remarks\label{sec:cr}}
We developed a robust and scalable Bayesian smoothing approach for inferring tuning functions from large-scale, high-resolution spatial neural activity, and illustrated its application in a variety of neural coding settings. A large body of work has addressed the problem of estimating a smooth spatial process from noisy observations \cite{B74,Wahba90,BK95,RH05,GPBook06}. These ideas have found many of their applications in problems involving tuning function estimation \cite{GAO02,Cunningham07,CZANNER05,CGRS09, PAFKRVVW10,Kamiar08,MGWKB10,MGWKB11,P10,PRHP14}. { There has also been some work on parametric Bayesian tuning function estimation (see \cite{CSSK10} and references therein).} The main challenges in the present work were the large scale (due to the high spatial resolution) of the data and the functional discontinuities present in neuronal tuning maps, e.g. \cite{SG96,OHKI05,MPPJM15}.
In order to address these challenges, we proposed a robust prior as part of a computationally efficient block Gibbs sampler that employs fast Gaussian sampling techniques \cite{HR91,H09,PaYu10} and the Bayesian formulation of the Lasso problem \cite{PG08,CGGK10}. This work focused especially on the conceptual simplicity and computational efficiency of the block Gibbs sampler: we emphasized the robustness properties of the Bayesian Lasso, the unimodality of the posterior and the use of efficient linear algebra methods for sampling, which avoid the Cholesky decomposition or other expensive matrix decompositions. Using in vitro recordings from the spinal cord, we illustrated that this approach can effectively infer tuning functions from noisy observations, given a negligible portion of the data and reasonable computational time.
\comment{
In a related line of work, relevant vector machines allow the degree of smoothness to be uneven and infer it from local statistics \cite{TIP01,Bishop06}. This is typically achieved using the following prior,
\begin{eqnarray*}
p(\beta| \{ w_{ij} \}) &\propto& \prod_{i \sim j} \exp \bigl ( - \frac{w_{ij}}{2 } \| \beta_i - \beta_j\|_2^2 \bigr)
\end{eqnarray*}
The values of the local smoothing parameters $\{w_{ij} \}$ can be determined using evidence approximation, in which the marginal likelihood is maximized. Since optimization of the marginal likelihood is intractable, coordinate descent algorithms have been proposed \cite{FT01}. The computational cost per spatial point scales quadratically with total number of spatial points, amounting to a computational cost that scales cubically with the total number of spatial points, making it computationally infeasible for high dimensional datasets. Finally, local maxima in the model's marginal likelihood function surface can be a significant concern in some cases.
}
It is worth mentioning that in another line of work, smoothness-inducing priors were used to fit spatio-temporal models to fMRI data \cite{PTF05, GCW09,HG10,QDG10, W12}. Although these priors handle spatial correlation in the data, they do not always successfully account for spatial discontinuities and the large scale of the data. \cite{WJBS04} used automatic relevance determination (ARD) \cite{M95} to allow for spatially non-stationary noise where the level of smoothness at each voxel was estimated from the data. It is known \cite{WN08} that ARD can converge slowly to suboptimal local minima. On the other hand, wavelet bases with a sparse prior, defined by a mixture of two Gaussian components, enabled \cite{GP07} to present a statistical framework for modeling transient, non-stationary, or spatially varying phenomena. They used variational Bayes approximations together with fast orthogonal wavelet transforms to efficiently compute the posterior distributions. As mentioned in their paper, a main drawback is that wavelet denoising with an orthogonal transform exhibits Gibbs phenomena around discontinuities, leading to inefficient modeling of singularities, such as edges. In \cite{VCDH10,SZT10, S12,GKKKT13, HWRGBJS15} smoothness (and matrix factorization) approaches were combined with various global sparsity-inducing priors (or regularizers) to smooth (or factorize) the spatio-temporal activity of voxels that present significant effects, and to shrink to zero voxels with insignificant effects. In \cite{HPATF07}, non-stationary Gaussian Processes were used as adaptive filters with the computational disadvantage of inverting large covariance matrices. Finally, in a recent work, \cite{SEBV16} designed an efficient Monte Carlo sampler to perform spatial whole-brain Bayesian smoothing. Costly Cholesky decompositions are avoided by efficiently employing the sparsity of precision matrices and preconditioned conjugate gradient methods.
The prior in \cite{SEBV16} assigns a spatially homogenous level of smoothness which performs less favorably in situations involving outliers and sharp breaks in the functional map.
{
There is also a vast literature addressing the recovery of images from noisy observations (see \cite{MGMH04,BCM05} and references therein). Most of these techniques use some sort of regularizer or prior to successfully retain image discontinuities and remove noise.
Early examples include the auxiliary line process based quadratic penalty in \cite{GG84}, and the $\sum_i \frac{1}{1+|\nabla_i \bm{\beta}|}$ log-prior in \cite{GR92}, where the gradient at $i$ is denoted by $\nabla_i$. The line process indicates sharp edges and suspends or activates the smoothness penalty associated with each edge. The log-prior $\sum_i \frac{1}{1+|\nabla_i \bm{\beta}|}$ encourages the recovery of discontinuities while rendering the auxiliary variables of the line process unnecessary. These log-priors are non-concave, and the resulting maximum a posteriori optimization problems are generally impractical to solve exactly. Different techniques were designed based on simulated annealing \cite{GG84,GR92,GY95}, coarse-to-fine optimization \cite{BL88} and alternate maximization between image and auxiliary contour variables \cite{CBAB95} to compute (nearly) global optima, at the expense of a prohibitively large amount of computation. Moreover, due to this non-concavity, it is known that a small perturbation in the data can lead to abrupt changes in the de-noised image \cite{BS93}.
Image de-noising methods based on concave log-priors (or regularizers) that enjoy edge-preserving properties were designed in \cite{SD90,G90,SD91,ROF92,BS93}. These log-priors typically take the form of $-\sum_i \phi(\nabla_i \bm{\beta})$ for some concave function $\phi(.)$. Examples of $\phi(x)$ include the Huber function \cite{H64} in \cite{SD90,SD91}, $\log \cosh (\frac{ x }{T})$ in \cite{G90}, $ |x |^p$ where $1 \leq p < 2$ \cite{BS93} and $ |x|$ in the TV penalty \cite{ROF92,B93}. Various methods have been proposed for computing optimal or nearly optimal solutions to these image recovery problems, e.g. \cite{SD90,G90,SD91,ROF92,BS93,VO95,VO98,C04,WYYZ08,OBF09,ABF10,BS11,DVL11}.
In addition to these approaches, another significant contribution has been to consider wavelet, ridgelet and curvelet based priors/regularizers, e.g. \cite{DJ94,C99,EC99,SCD02,PSWS03}, which present noticeable improvements in image reconstruction problems. More recently, the non-local means method \cite{BCM05,DFKE07,LBM13} presents a further improvement. However, the favorable performance of most of these methods relies heavily on parameters that have been fine-tuned specifically for additive noisy observations of two-dimensional pixel arrays of real-world images. In other words, they are specifically tailored for images. It is not clear if and how these approaches can be modified to retain their efficiency when applied to a broader class of spatial observations lying on generic graphs. Moreover, the denoised images rarely come equipped with confidence intervals. Our sampling-based approach, in contrast, allows for proper quantification of uncertainty, which could in turn be used to guide online experimental design; for example, in the spinal cord example analyzed here, we could choose to record more data from neurons with the largest posterior uncertainty about their tuning functions.
In principle, many of the above-mentioned approaches can be formulated as Bayesian, with the aid of the Metropolis-Hastings (MH) algorithm, to compute posterior means and standard deviations \cite{LS04,LM13}. However, generic MH approaches can lead to unnecessarily high computational cost. For example, in \cite{LS04} a TV prior and a Gaussian noise model were used to denoise a one-dimensional pulse; it was reported that the chain resulting from the MH algorithm suffers from very slow convergence. One contribution of the present paper is to show that, by using a hierarchical representation of our prior in equation (\ref{eq:hie}), costly MH iterations can be avoided in all steps of our block Gibbs sampler. Additionally, we show how our model can take into account nonuniform noise variance (quite common in neuroscience applications) without increasing the computational complexity. Finally, we emphasize the importance of conditioning $\bm{\beta}$ on $\sigma$ in equation (\ref{eq:beta_prior}), which has been neglected in previous Bayesian formulations of the TV prior \cite{LS04,LM13}. This is important because it guarantees a unimodal posterior of $\bm{\beta}$ and $\sigma$ given $\{ \nu_i \}_{i=1,\cdots,n}$ and $\lambda$.
}
We should also note that a number of fully-Bayesian methods have been developed that present adaptive smoothing approaches for modeling non-stationary spatial data. These methods are predicated on the idea that to enhance spatial adaptivity, local smoothness parameters should a priori be viewed as a sample from a common ensemble. Conditional on these local smoothing parameters, the prior is a Gaussian Markov random field (GMRF) with a rank deficient precision matrix \cite{LFF02,LB04,FKL04,RH05,YS10,YLL10,YSS12}. The hyperprior for the local smoothing parameters can be specified in two ways. The simpler formulation assumes the local smoothing parameters to be independent \cite{LFF02,LB04,FKL04,BFH07}. For example, \cite{LFF02} presented a nonparametric prior for fitting unsmooth and highly oscillating functions, based on a hierarchical extension of state space models where the noise variance of the unobserved states is locally adaptive. The main computational burden lies in the Cholesky decomposition \cite{BFH07} or other expensive matrix decompositions of the precision matrix. In a more complex formulation, the log-smoothing parameters follow another GMRF on the graph defined by edges $i \sim j$ \cite{YS10,YLL10,YSS12}.
In both formulations, local smoothing parameters are conditionally dependent, rendering Metropolis-within-Gibbs sampling necessary. { These methods often provide superior estimation accuracy for functions with high spatial variability on regular one-dimensional and two-dimensional lattices, but at a prohibitively high computational cost, which makes them less attractive for the high-dimensional datasets considered in this paper.} One interesting direction for future work would be to combine the favorable properties of these approaches with those enjoyed by our scalable and robust Bayesian method.
{
Finally, important directions for future work involve extensions that allow the treatment of point processes, or other non-Gaussian data, and correlated neural activities. Since our prior can be formulated in a hierarchical manner, when dealing with non-Gaussian likelihoods, it is only step 2 of our Gibbs sampler that needs modification. In step 2, all MCMC algorithms suited for Gaussian priors and non-Gaussian likelihoods can be integrated into our efficient Gibbs sampler. For example, the elliptical slice sampler \cite{MAM10} or Hamiltonian
Monte Carlo methods \cite{DKPR87,RS03,RC05,APP11,GCC11} are well-suited for sampling from posteriors arising from a Gaussian prior and likelihoods from the exponential family. With regard to correlated neural activities, it would be interesting to see how tools developed in \cite{VASPKLCSP12,BMS12} can be incorporated into our Gibbs sampler to make inference about models which can account for correlated observations.
}
\section{Introduction}
In this paper, we address the problem of robust estimation of multivariate location and scatter matrix under cellwise and casewise contamination.
Traditional robust estimators assume a casewise contamination model for the data where the majority of the cases are assumed to be free of contamination. Any case that deviates from the model distribution is then flagged as an outlier. In situations where only a small number of cases are contaminated this approach works well. However, if a small fraction of cells in a data table are contaminated but in such a way that a large fraction of cases are affected, then traditional robust estimators may fail. This problem, referred to as propagation of cellwise outliers, has been discussed by \citet{alqallaf:2009}. Moreover, as pointed out by \citet{agostinelli:2014} both types of data contamination, casewise and cellwise, may occur together.
Naturally, when data contain both cellwise and casewise outliers, the problem becomes more difficult. To address this problem, \citet{agostinelli:2014} proposed a two-step procedure: first, apply a univariate filter (UF) to the data matrix $\mathbb X$ and set the flagged cells to missing values, NA's; and second, apply the generalized S-estimator (GSE) of \citet{danilov:2012} to the incomplete data set. Here, we call this two-step procedure UF-GSE. It was shown in \citet{agostinelli:2014} that UF-GSE is simultaneously robust against cellwise and casewise outliers. However, this procedure has three limitations, which are addressed in this paper:
\begin{itemize}
\item The univariate filter does not handle well moderate-size cellwise outliers.
\item The GSE procedure used in the second step loses robustness against casewise outliers for $p > 10$.
\item The initial estimator EMVE used in the second step does not scale well to higher dimensions ($p > 10$).
\end{itemize}
\citet{rousseeuw:2015} pointed out that to filter the variables based solely on their value may be too limiting as no correlation with other variables is taken into account. A not-so-large contaminated cell that passes the univariate filter could be flagged when viewed together with other correlated components, especially for highly correlated data. To overcome this deficiency, we introduce a consistent bivariate filter and use it in combination with UF and a new filter developed by \citet{rousseeuw:2016} in the first step of the two-step procedure.
\citet{maronna:2015a} remarked that UF-GSE, which uses a fixed loss function $\rho$ in the second step, cannot handle well high-dimensional casewise outliers. S-estimators with a fixed loss function exhibit an increased Gaussian efficiency when $p$ increases, but at the same time lose their robustness \citep[see ][]{rocke:1996}. Such a curse of dimensionality has also been observed for UF-GSE in our simulation study. To overcome this deficiency, we constructed a new robust estimator called {\it Generalized Rocke S-estimator} or {\it GRE} to replace GSE in the second step.
The first step of filtering is generally fast, but the second step is slow due to the computation of the extended minimum volume ellipsoid (EMVE), used as the initial estimate by the generalized S-estimator. The standard way to compute EMVE is by subsampling, which requires an impractically large number of subsamples when $p$ is large, making the computation extremely slow.
To reduce the high computational cost of the two-step approach in high dimension, we introduce a new subsampling procedure based on clustering. The initial estimator computed in this way is called EMVE-C.
The rest of the paper is organized as follows. In Section \ref{sec:GSE-bivariate-filtering}, we describe some existing filters and introduce a new consistent bivariate filter. By consistency, we mean that, when $n$ tends to infinity and the data do not contain outliers, the proportion of data points flagged by the filter tends to zero. We also show in Section \ref{sec:GSE-bivariate-filtering} how the bivariate filter can be used in combination with the other filters in the first step. In Section \ref{sec:GSE-GRE}, we introduce the GRE to be used in place of GSE in the second step. In Section \ref{sec:GSE-computing-issue}, we discuss the computational issues faced by the initial estimator, EMVE, and introduce a new cluster-based subsampling procedure called EMVE-C. In Sections \ref{sec:GSE-MCresults} and \ref{sec:GSE-example}, we compare the original and modified two-step approaches with several state-of-the-art robust procedures in an extensive simulation study. We also give there a real data example. Finally, we conclude in Section \ref{sec:GSE-conclusions}. The Appendix contains all the proofs. We also give a separate document called ``Supplementary Material", which contains further details, simulation results, and other related material.
\section{Univariate and Bivariate Filters}\label{sec:GSE-bivariate-filtering}
Consider a random sample of $\mathbb X = (\pmb X_1,\dots,\pmb X_n)^t$, where $\pmb{X}_{i}$ are first generated from a central parametric distribution, $H_0$, and then some cells, that is, some entries in $\pmb{X}_{i}=(X_{i1},\dots,X_{ip})^{t}$, may be independently contaminated. A {\it filter} $\mathcal{ F}$ is a procedure that flags cells in a data table and replaces them by NA's. Let $f_n$ be the fraction of cells in the data table flagged by the filter. A {\it consistent filter} for a given distribution $H_0$ is one that asymptotically will not flag any cell if the data come from $H_0$. That is, $\lim_{n \rightarrow \infty} f_n =0$ a.s. $[H_0]$.
\begin{remark}\label{remark:filter-definition}
Given a collection of filters $\mathcal{ F}_1,...,\mathcal{ F}_k$ they can be combined in several ways: (i) they can be {\it united} to form a new filter, $ \mathcal{ F}_U = \mathcal{F}_1\cup \cdots \cup \mathcal{ F}_k$, so that the resulting filter, $ \mathcal{ F}_U $, will flag all the cells flagged by at least one of them; (ii) they can be {\it intersected}, so that the resulting filter, $ \mathcal{ F}_I = \mathcal{F}_1 \cap \cdots \cap \mathcal{ F}_k$, will only flag the cells identified by all of them; and (iii)
a filter, $\mathcal F$, can be conditioned to yield a new filter, $\mathcal F_C$,
so that $\mathcal F_C$ will only filter the cells filtered by $\mathcal F$ which satisfy a given condition $C$.
\end{remark}
\begin{remark}\label{remark:filter-consistency}
It is clear that $ \mathcal{ F}_U$ is a consistent filter provided all the filters $ \mathcal{ F}_i$, $i=1,\dots,k$ are consistent filters. On the other hand, $ \mathcal{ F}_I$ is a consistent filter provided at least one of the filters $ \mathcal{ F}_i$, $i=1,\dots,k$ is a consistent filter. Finally, it is also clear that if $\mathcal{ F}$ is a consistent filter, so is $\mathcal{ F}_C$.
\end{remark}
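As a toy illustration of Remark \ref{remark:filter-definition} (not the paper's implementation), each filter's output can be represented as a set of flagged $(i,j)$ cells and combined by set operations:

```python
def combine_filters(flag_sets, mode="union"):
    """Union flags cells caught by at least one filter; intersection
    keeps only cells caught by all of them."""
    sets = [set(f) for f in flag_sets]
    if mode == "union":
        return set().union(*sets)
    return set.intersection(*sets)

uf_flags = {(0, 1), (2, 3)}   # cells flagged by one filter (toy data)
bf_flags = {(2, 3), (4, 0)}   # cells flagged by another filter (toy data)
```

By Remark \ref{remark:filter-consistency}, the union is consistent when every component filter is, while the intersection needs only one consistent component.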
We describe now three basic filters, which will be later combined to obtain a powerful consistent filter for use in the first step of our two-step procedure.
\subsection{A Consistent Univariate Filter (UF) }
This is the initial filter introduced in \citet{agostinelli:2014}. Let $X_1, \dots, X_n$ be a random (univariate) sample of observations. Consider a pair of initial location and dispersion estimators, $T_{0n}$ and $S_{0n}$, such as the median and median absolute deviation (MAD) as adopted in this paper. Denote the standardized sample by $Z_{i} = (X_{i} - T_{0n})/S_{0n}$. Let $F$ be a chosen reference distribution for $Z_{i}$. Here, we use the standard normal distribution, $F = \Phi$.
Let ${F}^+_{n}$ be the empirical distribution function for the absolute standardized value, that is,
\begin{linenomath}
\[
{F}^+_{n}(t) = \frac{1}{n}\sum_{i=1}^n I( |Z_{i}| \le t).
\]
The proportion of flagged outliers is defined by
\begin{equation}\label{eq:2SGS-GY-d}
\begin{aligned}
d_{n} &=\sup_{t\ge \eta} \left\{ F^+
(t)-{F}^+_{n}(t) \right\}^{+},
\end{aligned}
\end{equation}
\end{linenomath}
where $\{a \}^+$ represents the positive part of $a$, $F^+$ is the distribution of $|Z|$ when $Z\sim F$, and $\eta = (F^+)^{-1}(\alpha)$ is a large quantile of $F^+$. We use $\alpha = 0.95$ for univariate filtering as the aim is to detect large outliers, but other choices could be considered. Then, we flag $\lfloor nd_{n} \rfloor$ observations with the largest absolute standardized value, $|Z_i|$, as cellwise outliers and replace them by NA's.
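A sketch of the whole univariate filter under the stated choices (median/MAD standardization, $F=\Phi$, $\alpha=0.95$); the supremum defining $d_n$ is evaluated at the order statistics of $|Z_i|$, where it is attained:

```python
import numpy as np
from statistics import NormalDist

def univariate_filter(x, alpha=0.95):
    """Sketch of the consistent univariate filter: flag floor(n * d_n)
    cells with the largest |z|, where d_n is the largest positive gap
    between the half-normal reference cdf F+ and the empirical cdf of
    |z|, restricted to t >= eta = (F+)^{-1}(alpha)."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    s = 1.4826 * np.median(np.abs(x - med))       # MAD, normalized for the normal
    absz = np.abs((x - med) / s)
    order = np.argsort(absz)
    a = absz[order]                               # order statistics of |z|
    n = len(a)
    Fplus = np.array([2.0 * NormalDist().cdf(t) - 1.0 for t in a])
    eta = NormalDist().inv_cdf((1.0 + alpha) / 2.0)   # (F^+)^{-1}(alpha)
    # the empirical cdf just below the i-th order statistic is (i-1)/n
    gaps = np.where(a >= eta, Fplus - np.arange(n) / n, -np.inf)
    d_n = max(float(gaps.max()), 0.0)
    k = int(np.floor(n * d_n))
    return order[n - k:] if k > 0 else np.array([], dtype=int)
```

The flagged cells would then be set to NA before applying the estimator of the second step.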
The following proposition states this is a consistent filter. That is, even when the actual distribution is unknown, asymptotically, the univariate filter will not flag outliers when the tail of the chosen reference distribution is heavier than (or equal to) the tail of the actual distribution.
\begin{proposition}[\citealp{agostinelli:2014}]\label{prop:2SGS-GY-asymptotic}
Consider a random variable $X \sim F_{0}$ with $F_0$ continuous.
Also, consider a pair of location and dispersion estimators $T_{0n}$ and $S_{0n}$ such that $T_{0n}\rightarrow\mu_0 \in \mathbb R$ and $S_{0n}\rightarrow\sigma_0 > 0$ a.s. [$F_0$]. Let $F_0^+(t) = P_{F_0}( |\frac{X - \mu_0}{\sigma_0}| \le t)$.
If the reference distribution $F^+$ satisfies the inequality
\begin{linenomath}
\begin{equation}\label{eq:2SGS-reference}
\max_{t\ge \eta}\left\{F^+(t)-F_{0}^+(t) \right\}\leq 0,
\end{equation}
\end{linenomath}
then
\begin{linenomath}
\[
\frac{n_{0}}{n}\rightarrow0\text{ a.s.,}%
\]
\end{linenomath}
where
\begin{linenomath}
\[
n_{0}= \lfloor nd_{n} \rfloor.
\]
\end{linenomath}
\end{proposition}
We define the global univariate filter, UF, as the union of all the consistent filters described above, applied to each variable in $\mathbb X$. By Remarks \ref{remark:filter-definition} and \ref{remark:filter-consistency}, it is clear that UF is a consistent filter.
\subsection{A Consistent Bivariate Filter (BF)}
Let $(\pmb X_{1}, \dots, \pmb X_{n})$, with $\pmb X_{i} = (X_{i1}, X_{i2})^t$, be a random sample of bivariate observations. Consider also a pair of initial location and scatter estimators,
\begin{linenomath}
\[
\pmb T_{0n} = \left( \begin{array}{c}
T_{0n,1} \\
T_{0n,2}
\end{array}
\right)
\quad \text{and} \quad
\pmb C_{0n} = \left( \begin{array}{cc}
C_{0n,11} & C_{0n,12} \\
C_{0n,21} & C_{0n,22}
\end{array}
\right).
\]
\end{linenomath}
Similar to the univariate case, we use the coordinate-wise median and the bivariate Gnanadesikan-Kettenring estimator with MAD scale \citep{gnanadesikan:1972} for $\pmb T_{0n}$ and $\pmb C_{0n}$, respectively. More precisely, the initial scatter estimators are defined by
\begin{linenomath}
\[
C_{0n,jk} = \frac{1}{4} \left( \text{MAD}(\{ X_{ij} + X_{ik}\})^2 - \text{MAD}(\{ X_{ij} - X_{ik}\})^2 \right),
\]
\end{linenomath}
where $\text{MAD}(\{Y_i\})$ denotes the MAD of $Y_1,\dots,Y_n$. Note that $C_{0n,jj} = \text{MAD}(\{X_j\})^2$, which agrees with our choice of the coordinate-wise dispersion estimators.
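A sketch of one pairwise scatter entry, using the unscaled MAD so that it matches the displayed formula; setting $x_k=x_j$ recovers the diagonal identity $C_{0n,jj}=\text{MAD}(\{X_j\})^2$ noted above:

```python
import numpy as np

def mad(y):
    """Unscaled median absolute deviation, matching the paper's MAD."""
    y = np.asarray(y, dtype=float)
    return np.median(np.abs(y - np.median(y)))

def gk_entry(xj, xk):
    """Gnanadesikan-Kettenring pairwise scatter entry with MAD scale:
    ( MAD(xj + xk)^2 - MAD(xj - xk)^2 ) / 4."""
    xj, xk = np.asarray(xj, float), np.asarray(xk, float)
    return 0.25 * (mad(xj + xk) ** 2 - mad(xj - xk) ** 2)
```

For $x_k = x_j$, the difference term vanishes and $\text{MAD}(2x_j)^2/4 = \text{MAD}(x_j)^2$, so the diagonal agrees with the coordinate-wise dispersion.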
Now, denote the pairwise (squared) Mahalanobis distances by $D_i = (\pmb X_{i} - \pmb T_{0n})^t \pmb C_{0n}^{-1} (\pmb X_i - \pmb T_{0n})$.
Let $G_n$ be the empirical distribution for pairwise Mahalanobis distances,
\begin{linenomath}
\[
G_n(t) = \frac{1}{n}\sum_{i=1}^n I( D_i \le t).
\]
\end{linenomath}
Finally, we filter outlying points $\pmb X_i$ by comparing $G_n(t)$ with $G(t)$, where $G$ is a chosen reference distribution. In this paper, we use the chi-squared distribution with two degrees of freedom, $G = \chi^2_2$.
The proportion of flagged bivariate outliers is defined by
\begin{linenomath}
\begin{equation}\label{eq:2SGS-GY-d-bivariate}
\begin{aligned}
d_{n} &=\sup_{t\ge \eta} \left\{ G
(t)-{G}_{n}(t) \right\}^{+}.
\end{aligned}
\end{equation}
\end{linenomath}
Here, $\eta = G^{-1}(\alpha)$, and we use $\alpha = 0.85$ for bivariate filtering since we now aim for moderate outliers, but other choices of $\alpha$ can be considered. Then, we flag $\lfloor nd_n \rfloor$ observations with the largest pairwise Mahalanobis distances as outlying bivariate points. Finally, the following proposition states the consistency property of the bivariate filter.
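The flagging rule just described can be sketched in Python as follows (a minimal illustration assuming given initial estimates $\pmb T_{0n}$ and $\pmb C_{0n}$, with $G = \chi^2_2$ and $\alpha = 0.85$ as in the text):

```python
import numpy as np
from scipy import stats

def bivariate_filter(X, T0, C0, alpha=0.85):
    """Flag the floor(n*d_n) bivariate points with the largest pairwise
    squared Mahalanobis distances, d_n = sup_{t>=eta} {G(t) - G_n(t)}^+."""
    n = X.shape[0]
    diff = X - T0
    D = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(C0), diff)
    eta = stats.chi2.ppf(alpha, df=2)
    Ds = np.sort(D)
    gaps = stats.chi2.cdf(Ds, df=2) - np.arange(n) / n  # G(t) - G_n(t^-)
    gaps[Ds < eta] = 0.0
    d_n = max(gaps.max(), 0.0)
    n0 = int(np.floor(n * d_n))
    flagged = np.zeros(n, dtype=bool)
    if n0 > 0:
        flagged[np.argsort(D)[-n0:]] = True
    return flagged
```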
\begin{proposition}\label{prop:2SGS-GY-asymptotic-bivariate}
Consider a random vector $\pmb X = (X_1, X_2)^t \sim H_0$.
Also, consider a pair of bivariate location and scatter estimators $\pmb T_{0n}$ and $\pmb C_{0n}$ such that $\pmb T_{0n}\rightarrow\pmb\mu_0 \in \mathbb R^2$ and $\pmb C_{0n}\rightarrow\pmb\Sigma_0 \in \mathrm{PDS}(2)$ a.s. [$H_0$] $\mathrm{(}\mathrm{PDS}(q)$ is the set of all positive definite symmetric matrices of size $q\mathrm{)}$. Let $G_0(t) = P_{H_0}( (\pmb X - \pmb \mu_0)^t \pmb\Sigma_0^{-1} (\pmb X - \pmb \mu_0) \le t)$ and suppose that $G_0$ is continuous.
If the reference distribution $G$ satisfies:
\begin{linenomath}
\begin{equation}\label{eq:2SGS-reference-bivariate}
\max_{t\ge \eta}\left\{G(t)-G_{0}( t) \right\}\leq 0,
\end{equation}
\end{linenomath}
then
\begin{linenomath}
\[
\frac{n_{0}}{n}\rightarrow0\text{ a.s.,}%
\]
\end{linenomath}
where
\begin{linenomath}
\[
n_{0}= \lfloor nd_{n} \rfloor.
\]
\end{linenomath}
\end{proposition}
In the next section, we will define the global univariate-and-bivariate filter, UBF, using UF and BF as building blocks.
\subsection{A Consistent Univariate and Bivariate Filter (UBF)}
We first apply the univariate filter from \citet{agostinelli:2014} to each variable in $\mathbb X$ separately using the initial location and dispersion estimators, $\pmb T_{0n} = (T_{0n,1}, \dots, T_{0n,p})$ and $\pmb S_{0n} = (S_{0n,1}, \dots, S_{0n,p})$.
Let $\mathbb U$ be the resulting auxiliary matrix of zeros and ones, with zeros indicating the filtered entries in $\mathbb X$. We next iterate over all pairs of variables in $\mathbb X$ to identify outlying bivariate points, which helps filter moderately contaminated cells.
Fix a pair of variables, $ (X_{ij}, X_{ik})$ and set $\pmb X_i^{(jk)} = (X_{ij}, X_{ik})$.
Let $\pmb C_{0n}^{(jk)}$ be an initial pairwise scatter matrix estimator for this pair of variables,
for example, the Gnanadesikan-Kettenring estimator. Note that pairwise scatter matrices do not ensure positive definiteness of $\pmb C_{0n}$, but this is not necessary here because only the bivariate scatter matrix, $\pmb C_{0n}^{(jk)}$, is required
in each bivariate filtering step.
We calculate the pairwise Mahalanobis distances $D_i^{(jk)} = (\pmb X_i^{(jk)} - \pmb T_{0n}^{(jk)})^t (\pmb C_{0n}^{(jk)})^{-1} (\pmb X_i^{(jk)} - \pmb T_{0n}^{(jk)})$ and perform the bivariate filtering on the pairwise distances whose components were not flagged by the univariate filter: $\{D_i^{(jk)} : U_{ij} = 1, U_{ik}=1\}$.
We apply this procedure to all pairs of variables $1\le j < k \le p$.
Let
\begin{linenomath}
\[
J=\left\{ (i,j,k): D_i^{(jk)} \text{ is flagged as bivariate outlier}\right\},
\]
\end{linenomath}
be the set of triplets which identify the pairs of cells flagged by the bivariate filter in rows $i=1,\dots,n$.
It remains to determine which cells $(i,j)$ in row $i$ are to be flagged as cellwise outliers.
For each cell $(i,j)$ in the data table, $i=1,\dots,n$ and $j=1,\dots,p,$ we count the number of
flagged pairs in the $i$-th row where cell $(i,j)$ is involved:
\begin{linenomath}
\[
m_{ij}=\#\left\{ k: (i,j,k) \in J\right\}.
\]
\end{linenomath}
Cells with large $m_{ij}$ are likely to correspond to cellwise outliers.
Suppose that the cell $X_{ij}$ is not affected by cellwise contamination. Then, under the ICM, $m_{ij}$ approximately follows a binomial distribution, $Bin( \sum_{k\ne j} U_{ik}, \delta)$, where $\delta$ is the overall proportion of cellwise outliers that were not detected by the univariate filter.
We flag observation $X_{ij}$ if
\begin{linenomath}
\begin{equation}\label{eq:UBF-condition}
m_{ij} > c_{ij},
\end{equation}
\end{linenomath}
where $c_{ij}$ is the 0.99-quantile of $Bin( \sum_{k\ne j} U_{ik}, \delta)$.
In practice, we obtained good results (in both the simulations and the real-data example) using the conservative choice $\delta = 0.10$, which is adopted in this paper.
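The threshold $c_{ij}$ is computed directly from the binomial quantile; a small Python sketch of the rule $m_{ij} > c_{ij}$:

```python
from scipy import stats

def flag_cell(m_ij, n_partners, delta=0.10):
    # c_ij is the 0.99-quantile of Bin(n_partners, delta), where
    # n_partners = sum_{k != j} U_ik is the number of cells in row i
    # that survived the univariate filter; delta = 0.10 as in the text.
    c_ij = stats.binom.ppf(0.99, n_partners, delta)
    return m_ij > c_ij
```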
The filter obtained as the combination of all the univariate and the bivariate filters described above is called UBF. The following argument shows that UBF is a consistent filter.
By Remarks \ref{remark:filter-definition} and \ref{remark:filter-consistency}, the union of all the bivariate consistent filters (from Proposition \ref{prop:2SGS-GY-asymptotic-bivariate}) is a consistent filter. Next, applying the condition described in (\ref{eq:UBF-condition}) to the union of these bivariate consistent filters yields another consistent filter. Finally, the union of this with UF results in the consistent filter, UBF.
\subsection{The DDC Filter}
Recently, \citet{rousseeuw:2016} proposed a new procedure to filter and impute cellwise outliers, called {\it DetectDeviatingCells} (DDC). DDC is a sophisticated procedure that
uses correlations between variables to estimate the expected value of each cell, and then flags those cells whose observed value deviates greatly from this expected value. In our simulations, the DDC filter performed
very well when used as the first step of our two-step procedure. However, the DDC filter has not been shown to be consistent, as needed to ensure the overall consistency of our two-step estimation procedure.
In view of that, we propose a new filter made by intersecting UBF and DDC (denoted here as UBF-DDC). By Remarks \ref{remark:filter-definition} and \ref{remark:filter-consistency}, UBF-DDC
is consistent. Moreover, we will show in Section \ref{sec:GSE-MCresults} and in \ref{sec:appendix-filter-comparison} that
UBF-DDC is very effective, yielding the best overall performances when used as the first step in our two-step estimation procedure.
\section{Generalized Rocke S-estimators}\label{sec:GSE-GRE}
The second step of the procedure introduces robustness against casewise outliers that went undetected in the first step.
The data emerging from the first step contain missing values that correspond to
potentially contaminated cells. To estimate the multivariate location and
scatter matrix from such data, we use a recently developed estimator called GSE, briefly reviewed below.
\subsection{Review of Generalized S-estimators}\label{sec:GSE-GSE}
Associated with $\mathbb X$, let $\mathbb U$ denote the auxiliary matrix of zeros and ones, with zeros indicating the corresponding missing entries.
Let $p_{i} = p(\pmb{U}_{i})=\sum_{j=1}^{p}U_{ij}$ be the actual dimension of the observed part of $\pmb{X}_{i}$.
Given a $p$-dimensional vector of zeros and ones $\pmb u$, a $p$-dimensional vector $\pmb{m}$ and a $p\times p$ matrix $\pmb{A}$,
we denote by $\pmb{m}^{(\pmb{u})}$ and $\pmb{A}^{(\pmb{u})}$ the sub-vector of $\pmb{m}$ and the sub-matrix of $\pmb{A}$, respectively, with columns and rows corresponding to the positive entries in $\pmb{u}$.
Define
\begin{linenomath}
\[
D(\pmb{x},\pmb{m},\pmb{C})=(\pmb{x}-\pmb{m})^{t}%
\pmb{C}^{-1}(\pmb{x}-\pmb{m})
\]
\end{linenomath}
the squared Mahalanobis distance and
\begin{linenomath}
\[
D^{\ast}(\pmb{x},\pmb{m},\pmb{C})=D(\pmb{x},\pmb{m},\pmb{C}^{\ast})
\]
\end{linenomath}
the normalized squared Mahalanobis distance, where $\pmb{C}^{\ast}=\pmb{C}/|\pmb{C}|^{1/p}$, so that $|\pmb{C}^{\ast}|=1$, and where $|\pmb{A}|$ denotes the determinant of $\pmb{A}$.
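In code, the determinant-normalized distance reads as follows (a small Python illustration of ours):

```python
import numpy as np

def d2(x, m, C):
    # Squared Mahalanobis distance D(x, m, C) = (x - m)' C^{-1} (x - m).
    diff = x - m
    return diff @ np.linalg.solve(C, diff)

def d2_star(x, m, C):
    # Normalized distance D*(x, m, C) = D(x, m, C*), with C* = C/|C|^{1/p},
    # so that |C*| = 1 and D* is invariant to rescaling of C.
    p = len(x)
    C_star = C / np.linalg.det(C) ** (1.0 / p)
    return d2(x, m, C_star)
```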
Let ${\pmb{\Omega}}_{0n}$ be a $p \times p$ positive definite initial estimator.
Given the location vector $\pmb{\mu}\in\mathbb{R}^{p}$ and a $p\times p$ positive definite matrix $\pmb{\Sigma}$, we define the generalized M-scale, $s_{GS}(\pmb{\mu},\pmb{\Sigma},{\pmb{\Omega}}_{0n}, \mathbb X, \mathbb U)$, as the solution
in $s$ to the following equation:
\begin{linenomath}
\begin{equation}\label{eq:GSE-GSE-scale}
\sum_{i=1}^{n}c_{p(\pmb{U}_{i})}\rho\left( \frac{D^{\ast}\left(
\pmb{X}_{i}^{(\pmb{U}_{i})},\pmb{\mu}^{(\pmb{U}_{i})},\pmb{\Sigma}^{(\pmb{U}_{i})}\right) }{s \, c_{p(\pmb{U}_{i}%
)}\,\left\vert \pmb{\Omega}_{0n}^{(\pmb{U}_{i})}\right\vert
^{1/p(\pmb{U}_{i})}}\right) =b\sum_{i=1}^{n}c_{p(\pmb{U}_{i})}
\end{equation}
\end{linenomath}
where $\rho(t)$ is an even, bounded loss function that is non-decreasing in $|t|$. The tuning constants $c_{k}$, $1\leq k\leq p$, are chosen such that
\begin{linenomath}
\begin{equation}\label{eq:GSE-GSE-tuning-constant}
E_{\Phi}\left( \rho\left( \dfrac{||\pmb{X}||^{2}}{c_{k}}\right) \right)
=b,\quad\pmb{X}\sim N_{k}(\pmb{0},\pmb{I}),
\end{equation}
\end{linenomath}
to ensure consistency under the multivariate normal distribution. A common choice of $\rho$, also used in this paper, is Tukey's bisquare function, $\rho(u) =\min(1, 1 - (1 - u)^{3})$, together with $b = 0.5$.
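Numerically, the constants $c_k$ can be obtained by combining quadrature with root-finding; a minimal Python sketch (our illustration, for Tukey's bisquare $\rho$ and $b = 0.5$):

```python
import numpy as np
from scipy import stats, optimize, integrate

def rho_bisquare(u):
    # Tukey's bisquare rho: rho(u) = min(1, 1 - (1 - u)^3) for u >= 0.
    u = np.minimum(u, 1.0)
    return 1.0 - (1.0 - u) ** 3

def tuning_constant(k, b=0.5):
    # Solve E[rho(||X||^2 / c_k)] = b with ||X||^2 ~ chi^2_k, X ~ N_k(0, I).
    def avg_rho_minus_b(c):
        integrand = lambda z: rho_bisquare(z / c) * stats.chi2.pdf(z, df=k)
        return integrate.quad(integrand, 0.0, np.inf, limit=200)[0] - b
    # E[rho] decreases from 1 to 0 as c grows, so a root is bracketed.
    return optimize.brentq(avg_rho_minus_b, 1e-3, 100.0 * k)
```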
A generalized S-estimator is then defined by
\begin{linenomath}
\begin{equation}\label{eq:GSE-GSE}
(\pmb T_{GS}, {\pmb{C}
}_{GS}) = \arg\min_{\pmb{\mu}, \pmb{\Sigma}} s_{GS}(\pmb{\mu
}, \pmb{\Sigma}, {\pmb{\Omega}}_{0n}, \mathbb X, \mathbb U)
\end{equation}
\end{linenomath}
subject to the constraint
\begin{linenomath}
\begin{equation}\label{eq:GSE-GSE-constraint}
s_{GS}(\pmb{\mu}, \pmb{\Sigma},
\pmb{\Sigma}, \mathbb X, \mathbb U) = 1.
\end{equation}
\end{linenomath}
\subsection{Generalized Rocke S-estimators}\label{sec:GSE-RockeGSE}
\citet{rocke:1996} showed that if the weight function $W(x) = \rho'(x)/x$ in S-estimators is non-increasing, the efficiency of the estimators tends to one as $p \to \infty$. However, this gain in efficiency is paid for by a decrease in robustness. Not surprisingly,
the same phenomenon has been observed for generalized S-estimators in simulation studies. Therefore, there is a need for new generalized S-estimators with a controllable efficiency/robustness trade-off.
\citet{rocke:1996} proposed that the $\rho$ function used to compute S-estimators should change with the dimension to prevent loss of robustness in higher dimensions.
The Rocke-$\rho$ function is constructed based on the fact that, for large $p$, the scaled squared Mahalanobis distances for normal data satisfy
\begin{linenomath}
\[
\frac{D(\pmb X, \pmb \mu, \pmb\Sigma)}{\sigma} \approx \frac{Z}{p} \quad \text{with} \quad Z \sim \chi^2_p,
\]
\end{linenomath}
and hence $D/\sigma$ is increasingly concentrated around one. So, to obtain a high enough, but not too high, efficiency, we should give high weight to values of $D/\sigma$ near one and downweight cases where $D/\sigma$ is far from one.
Let
\begin{linenomath}
\begin{equation}\label{eq:GSE-Rocke-rho-gamma}
\gamma = \min\left(\frac{\chi^2(1 - \alpha)}{p} - 1, 1 \right),
\end{equation}
\end{linenomath}
where $\chi^2(\beta)$ is the $\beta$-quantile of $\chi^2_p$.
In this paper, we use the conventional choice $\alpha = 0.05$, which gives an acceptable efficiency for the estimator. We have also explored smaller values of $\alpha$, following \citet{maronna:2015b}, but observed some trade-off between efficiency and casewise robustness (see the supplementary material).
\citet{maronna:2006} proposed a modification of the Rocke-$\rho$ function, namely
\begin{linenomath}
\begin{equation}\label{eq:GSE-Rocke-rho}
\rho(u) = \begin{cases}
0 & \text{for}\quad 0 \le u \le 1 - \gamma \\
\left(\frac{u-1}{4\gamma}\right)\left[3 - \left(\frac{u-1}{\gamma}\right)^2\right] + \frac{1}{2} & \text{for}\quad 1 - \gamma < u <1 + \gamma\\
1 & \text{for}\quad u \ge 1 + \gamma
\end{cases}
\end{equation}
\end{linenomath}
which has as its derivative the desired weight function, vanishing for $u \not\in [1 - \gamma, 1 + \gamma]$:
\begin{linenomath}
\[
W(u) = \frac{3}{4\gamma}\left[1 - \left(\frac{u - 1}{\gamma} \right)^2 \right] I( 1 - \gamma \le u \le 1 + \gamma).
\]
\end{linenomath}
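The constant $\gamma$, the modified Rocke-$\rho$, and its weight function translate directly into code; the following Python sketch also verifies numerically that $W = \rho'$:

```python
import numpy as np
from scipy import stats

def rocke_gamma(p, alpha=0.05):
    # gamma = min( chi^2_p(1 - alpha) / p - 1, 1 ).
    return min(stats.chi2.ppf(1.0 - alpha, df=p) / p - 1.0, 1.0)

def rho_rocke(u, gamma):
    # Piecewise Rocke rho: 0 below 1-gamma, 1 above 1+gamma, cubic between.
    u = np.asarray(u, dtype=float)
    z = (u - 1.0) / gamma
    mid = (u - 1.0) / (4.0 * gamma) * (3.0 - z**2) + 0.5
    return np.where(u <= 1.0 - gamma, 0.0, np.where(u >= 1.0 + gamma, 1.0, mid))

def w_rocke(u, gamma):
    # W(u) = rho'(u), vanishing outside [1 - gamma, 1 + gamma].
    u = np.asarray(u, dtype=float)
    z = (u - 1.0) / gamma
    return 3.0 / (4.0 * gamma) * (1.0 - z**2) * ((u >= 1.0 - gamma) & (u <= 1.0 + gamma))
```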
\begin{figure}[t!]
\centering
\includegraphics[scale=0.65]{Ch3_Section2_GRE_weights_comparison_p=40.pdf}
\caption{Weight functions of the Tukey-bisquare and the Rocke for $p=40$. Chi-square density functions are also plotted in blue for comparison. All the functions are scaled so that their maximum is 1 to facilitate comparison.}\label{fig:GSE-example-weights}
\end{figure}
Figure \ref{fig:GSE-example-weights} compares the Rocke weight function, $W_{Rocke}(z/c_p)$, and the Tukey-bisquare weight function, $W_{Tukey}(z/c_p)$, for $p=40$, where $c_p$ is as defined in (\ref{eq:GSE-GSE-tuning-constant}). The chi-square density function is also plotted in blue for comparison. When $p$ is large, the tail of the Tukey-bisquare weight function greatly deviates from the tail of the chi-square density function and inappropriately assigns high weights to large distances. On the other hand, the Rocke weight function resembles the shape of the chi-square density function and is capable of assigning low weights to large distances.
Finally, we define the generalized Rocke S-estimators, or GRE, by (\ref{eq:GSE-GSE}) and (\ref{eq:GSE-GSE-constraint}) with the $\rho$-function in (\ref{eq:GSE-GSE-scale}) replaced by the modified Rocke-$\rho$ function in (\ref{eq:GSE-Rocke-rho}). We compared GRE with GSE via simulation and found that GRE performs substantially better in dealing with casewise outliers when $p$ is large (e.g., $p > 10$). Results from this simulation study are provided in the supplementary material.
\section{Computational Issues}\label{sec:GSE-computing-issue}
The generalized S-estimators described above are computed via iterative re-weighted means and covariances, starting from an initial estimate. We now discuss some computing issues associated with this iterative procedure.
\subsection{Computation of the Initial Estimator}
For the initial estimate, the extended minimum volume ellipsoid (EMVE) has been used, as suggested by \citet[][]{danilov:2012}. The EMVE is computed with a large number of subsamples ($>500$) to increase the chance that at least one clean subsample is obtained.
Let $\varepsilon$ be the proportion of contamination in the data and $m$ be the subsample size. The probability of having at least one clean subsample of size $m$ out of $M$ subsamples is
\begin{linenomath}
\begin{equation}\label{eq:GSE-nresample-calculation}
q = 1 - \left[1 - \left(\begin{array}{c} n \cdot(1 - \varepsilon) \\ m \end{array}\right)/\left(\begin{array}{c} n \\ m \end{array}\right) \right]^M.
\end{equation}
\end{linenomath}
For large $p$, the number of subsamples $M$ required for a large $q$, say $q = 0.99$, can be impractically large, dramatically slowing down the computation. For example, suppose $m = p$, $n = 10p$, and $\varepsilon = 0.50$. If $p = 10$, then $M = 7758$; if $p = 30$, then $M = 2.48 \times 10^{10}$; and if $p = 50$, then $M = 4.15 \times 10^{16}$. Therefore, there is a need for a faster and more reliable starting point for large $p$.
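Equation (\ref{eq:GSE-nresample-calculation}) can be inverted for $M$; the following Python sketch reproduces the figures quoted above (up to rounding in $M$):

```python
from math import comb, log, ceil

def subsamples_needed(n, m, eps, q=0.99):
    # Probability that a single subsample of size m is clean, and the
    # number M of subsamples needed so that at least one is clean with
    # probability q, obtained from q = 1 - (1 - p_clean)^M.
    n_clean = round(n * (1 - eps))
    p_clean = comb(n_clean, m) / comb(n, m)
    return ceil(log(1.0 - q) / log(1.0 - p_clean))
```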
Alternatively, pairwise scatter estimators could be used as fast initial estimators \citep[e.g.,][]{alqallaf:2002}. Previous simulation studies have shown that pairwise scatter estimators are robust against cellwise outliers, but they do not perform as well in the presence of casewise outliers and finely shaped multivariate data \citep{danilov:2012, agostinelli:2014}.
\subsubsection{Cluster-Based Subsampling}
Next, we introduce a cluster-based algorithm for faster and more reliable subsampling in the computation of the EMVE. The EMVE computed with cluster-based subsampling is called EMVE-C throughout the paper.
High-dimensional data have several interesting geometrical properties, as described in \citet{hall:2005}. One such property, which motivated the Rocke-$\rho$ function as well as the following algorithm, is that for large $p$ the $p$-variate standard normal distribution $N_p(\pmb 0, \pmb I)$ is concentrated ``near'' the spherical shell with radius $\sqrt p$. So, if outliers have a slightly different covariance structure from the clean data, they will appear geometrically different. Therefore, we can apply a clustering algorithm to first separate the outliers from the clean data. Subsampling from a big cluster, which in principle is composed mostly of clean cases, should be more reliable and require fewer subsamples.
Given $\mathbb X$ and $\mathbb U$, the following steps describe our cluster-based subsampling:
\begin{enumerate}\itemsep0pt
\item Standardize the data $\mathbb X$ with some initial location and dispersion estimator $T_{0j}$ and $S_{0j}$. Common choices for $T_{0j}$ and $S_{0j}$ that are also adopted in this paper are the coordinate-wise median and MAD. Denote the standardized data by $\mathbb Z = (\pmb Z_1, \dots, \pmb Z_n)^t$, where $\pmb Z_i = (Z_{i1}, \dots, Z_{ip})^t$ and $Z_{ij} = (X_{ij} - T_{0j})/S_{0j}$.
\item Compute a simple robust correlation matrix estimate $\pmb R = (R_{jk})$. Here, we use the Gnanadesikan-Kettenring estimator \citep{gnanadesikan:1972}, where
\begin{linenomath}
\[
R_{jk} = \frac{1}{4} (S_{0jk+}^2 - S_{0jk-}^2),
\]
\end{linenomath}
and where $S_{0jk+}$ is the dispersion estimate for $\{ Z_{ij} + Z_{ik} | U_{ij} = 1, U_{ik} = 1 \}$ and $S_{0jk-}$ the estimate for $\{ Z_{ij} - Z_{ik} | U_{ij} = 1, U_{ik} = 1\}$. We use $Q_n$ \citep{rousseeuw:1993} for the dispersion estimate.
\item Compute the eigenvalues $\lambda_1 \ge \dots \ge \lambda_p$ and eigenvectors $\pmb e_1, \dots, \pmb e_p$ of the correlation matrix estimate
\begin{linenomath}
\[
\pmb R = \pmb E \pmb \Lambda \pmb E^t,
\]
\end{linenomath}
where $\pmb\Lambda = \text{diag}(\lambda_1, \dots, \lambda_p)$ and $\pmb E = (\pmb e_1, \dots, \pmb e_p)$. Let $p_+$ be the largest index such that $\lambda_{j} > 0$ for $j =1, \dots, p_+$. Retain only the eigenvectors with positive eigenvalues, $\pmb E_0 = (\pmb e_1, \dots, \pmb e_{p_+})$.
\item Complete the standardized data $\mathbb Z$ by replacing each missing entry, as indicated by $\mathbb U$, by zero. Then, project the data onto the basis eigenvectors, $\tilde{\pmb Z} = \pmb Z \pmb E_0$, and standardize the columns of $\tilde{\pmb Z}$, the so-called principal components, using the coordinate-wise median and MAD of $\tilde{\pmb Z}$.
\item Search for a ``clean'' cluster $C$ in the standardized $\tilde{\pmb Z}$ using a hierarchical clustering framework, as follows. First, compute the dissimilarity matrix for the principal components using the Euclidean metric. Then, apply classical hierarchical clustering (with any linkage of choice; we adopt Ward's linkage in this paper). Finally, define the ``clean'' cluster as the smallest sub-cluster $C$ with size at least $n/2$. This can be obtained by cutting the clustering tree at various heights from the top until all clusters have size less than $n/2$.
\item Take a subsample of size $n_0$ from $C$.
\end{enumerate}
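Steps 1--5 above can be sketched for complete data as follows (our Python illustration; for simplicity it uses the MAD in place of $Q_n$ in Step 2 and returns the indices of the ``clean'' cluster from which subsamples would then be drawn):

```python
import numpy as np
from scipy import stats
from scipy.cluster.hierarchy import linkage, cut_tree

def clean_cluster_indices(X):
    """Sketch of Steps 1-5 for complete data: returns the indices of the
    smallest Ward sub-cluster of size >= n/2."""
    n, p = X.shape
    # Step 1: coordinate-wise robust standardization (median / MAD).
    Z = (X - np.median(X, axis=0)) / stats.median_abs_deviation(X, axis=0, scale="normal")
    # Step 2: Gnanadesikan-Kettenring correlation matrix (MAD in place of Qn).
    R = np.empty((p, p))
    for j in range(p):
        for k in range(p):
            s_plus = stats.median_abs_deviation(Z[:, j] + Z[:, k], scale="normal")
            s_minus = stats.median_abs_deviation(Z[:, j] - Z[:, k], scale="normal")
            R[j, k] = 0.25 * (s_plus**2 - s_minus**2)
    # Step 3: keep eigenvectors with positive eigenvalues.
    lam, E = np.linalg.eigh(R)
    E0 = E[:, lam > 0]
    # Step 4: project and re-standardize the principal components.
    Zt = Z @ E0
    Zt = (Zt - np.median(Zt, axis=0)) / stats.median_abs_deviation(Zt, axis=0, scale="normal")
    # Step 5: Ward clustering; cut the tree deeper and deeper and keep the
    # smallest cluster of size >= n/2 seen before all fall below n/2.
    link = linkage(Zt, method="ward")
    best = np.arange(n)
    for ncl in range(2, n):
        labels = cut_tree(link, n_clusters=ncl).ravel()
        sizes = np.bincount(labels)
        if sizes.max() < n / 2:
            break
        best = np.where(labels == np.argmax(sizes))[0]
    return best
```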
With good clustering results, we can draw fewer subsamples, and equally important, we can use a larger subsample size. The current default choices in GSE are $M=500$ subsamples of size $n_0 = (p + 1)/(1 - \alpha_{mis})$ as suggested in \citet{danilov:2012}, where $\alpha_{mis}$ is the fraction of missing data ($\alpha_{mis} =$ number of missing entries $/(np)$). For the new clustering-based subsampling, we choose $M=50$ and $n_0 = 2(p + 1)/(1 - \alpha_{mis})$ in view of their overall good performance in our simulation study. However, using equation (\ref{eq:GSE-nresample-calculation}), a more formal procedure for the choice of $M$ and $n_0$ could be considered.
$M$ and $n_0$ could be chosen as functions of the size of the cluster $C$, the expected remaining fraction of contamination $\delta$, and a desired level of confidence. In that case, $n$ and $\varepsilon$ in equation (\ref{eq:GSE-nresample-calculation}) should be replaced by the size of the cluster $C$ and the value of $\delta$, respectively.
Without clustering, $\varepsilon$ would be chosen fairly large (e.g. $\varepsilon =0.50$) for conservative reasons. However, with clustering, $\varepsilon$ can be made smaller (e.g., $\varepsilon \le 0.10$).
In general, $p$ is the primary driver of computational time, but the procedure could also be time-consuming for large $n$ because the number of operations required by hierarchical clustering is of order $n^3$. As an alternative, one may bypass the hierarchical clustering step and sample directly from the data points with the smallest Euclidean distances to the origin calculated from $\tilde{\pmb Z}$. This is because the Euclidean distances, in principle, should approximate the Mahalanobis distances to the mean of the original data. However, our simulations show that the hierarchical clustering step is essential for the excellent performance of the estimates, and that this step entails only a small increase in real computational time, even for large $n$.
A recent simulation study \citep{maronna:2015b} has shown that the Rocke estimator starting from the ``kurtosis plus specific direction'' (KSD) estimator \citep{pena:2001} can attain high efficiency and high robustness for large $p$. The KSD estimator uses a multivariate outlier detection procedure based on finding directions that maximize or minimize the kurtosis coefficient of the respective projections. The ``clean'' cases that are not flagged as outliers are then used to estimate the multivariate location and scatter matrix. Unfortunately, KSD is not implemented for incomplete data. The adaptation of KSD to incomplete data would be of interest and worthy of future research.
\subsection{Other Computational Issues}
There is no formal proof that the recursive algorithm decreases the objective function at each iteration for generalized S-estimators with a monotonic weight function \citep{danilov:2012}. The same is true for generalized S-estimators with a non-monotonic weight function. For Rocke estimators with complete data, \citet[][see Section 9.6.3]{maronna:2006} described an algorithm that ensures attaining a local minimum. We have adapted this algorithm to the generalized counterparts. Although we cannot provide a formal proof, in our experiments the descending property of the recursive algorithm has always held.
\section{Two-Step Estimation and Simulation Results}\label{sec:GSE-MCresults}
The original two-step approach for globally robust estimation under cellwise and casewise contamination first flags outlying cells in the data table using only a univariate filter (shortened to UF) and replaces them by NA's. In the second step, the generalized S-estimator is applied to the resulting incomplete data. Our new version replaces UF in the first step by the proposed combination of the univariate-and-bivariate filter and DDC (shortened to UBF-DDC), and replaces GSE in the second step by GRE-C (i.e., GRE starting from EMVE-C). We call the new two-step procedure UBF-DDC-GRE-C. The new procedure will be made available in the \texttt{TSGS} function
in the \texttt{R} package \texttt{GSE} \citep{leung:2015}.
We now conduct a simulation study similar to that in \citet{agostinelli:2014} to compare the two-step procedures, UF-GSE as introduced in \citet{agostinelli:2014} and UBF-DDC-GRE-C, as well as
the classical correlation estimator (MLE) and several other robust estimators that showed competitive performance under:
\begin{itemize}
\item Cellwise contamination: SnipEM (shortened to Snip), introduced in \citet{farcomeni:2014a};
\item Casewise contamination: the Rocke S-estimator, as recently revisited by \citet{maronna:2015b}, and HSD, introduced by \citet{vanAelst:2012};
\item Cellwise and casewise contamination: DetMCDScore (shortened to DMCDSc), introduced by \citet{rousseeuw:2015}.
\end{itemize}
We also considered other variations of the two-step procedure with different first steps, including UBF-GRE-C and DDC-GRE-C. However, UBF-DDC-GRE-C generally performs better in simulations than UBF-GRE-C and DDC-GRE-C; therefore, we present only the results for UBF-DDC-GRE-C here. The complete results for UBF-GRE-C and DDC-GRE-C can be found in \ref{sec:appendix-filter-comparison}.
We consider clean and contaminated samples from a $N_p(\pmb{\mu_0},\pmb{\Sigma_0})$ distribution with dimension $p=10,20,30,40,50$ and sample size $n= 10p$. The simulation mechanisms are briefly described below.
Since the contamination models and the estimators considered in our simulation study
are location and scale equivariant, we can assume without loss of
generality that the mean, $\pmb\mu_0$, is equal to $\pmb 0$ and that the variances in $\text{diag}(\pmb\Sigma_0)$ are all equal to $1$; that is, $\pmb\Sigma_0$ is a correlation matrix.
Since the cellwise contamination model and the estimators are not affine-equivariant, we consider two different approaches to introducing correlation structures:
\begin{itemize}
\item Random correlation as described in \citet{agostinelli:2014} and
\item First order autoregressive correlation.
\end{itemize}
The random correlation structure generally has small correlations, especially with increasing $p$. For example, for $p=10$, the maximum correlation values have an average of $0.49$, and for $p=50$, the average maximum is $0.28$.
Hence, we also consider a first-order autoregressive (AR1) correlation structure with higher correlations, in which the correlation matrix has entries
\begin{linenomath}
\[
\Sigma_{0,jk} = \rho^{|j-k|},
\]
\end{linenomath}
with $\rho=0.9$.
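The AR1 correlation matrix is straightforward to generate; a short Python construction:

```python
import numpy as np

def ar1_corr(p, rho=0.9):
    # Correlation matrix with entries Sigma_{0,jk} = rho^{|j-k|}.
    idx = np.arange(p)
    return rho ** np.abs(idx[:, None] - idx[None, :])
```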
We then consider the following scenarios:
\begin{itemize}
\item Clean data: No further changes are done to the data.
\item Cellwise contamination: We randomly replace a proportion $\epsilon$ of the cells in the data matrix by $X_{ij}^{cont} \sim N(k, 0.1^2)$, where $k=1,2,\dots,10$.
\item Casewise contamination: We randomly replace a proportion $\epsilon$ of the cases in the data matrix by $\pmb X_i^{cont} \sim 0.5 N( c \pmb v, 0.1^2 \pmb I) + 0.5 N(- c \pmb v, 0.1^2 \pmb I)$, where $c=\sqrt{k (\chi^2)^{-1}_p(0.99)}$ with $k = 1,2,\dots, 20$, and $\pmb{v}$ is the eigenvector corresponding to the smallest eigenvalue of $\pmb{\Sigma}_{0}$, scaled so that $\left(\pmb{v} -\pmb\mu_0\right)^{t}\pmb{\Sigma}_{0}^{-1}
\left(\pmb{v}-\pmb\mu_0\right)=1$. Experiments show that this placement of outliers is the least favorable for the proposed estimator.
\end{itemize}
We consider $\epsilon =0.02, 0.05$ for cellwise contamination, and $\epsilon = 0.10, 0.20$ for casewise contamination.
The number of replicates in our simulation study is $N=500$.
The performance of a given scatter estimator $\pmb{\Sigma}_n$ is
measured by the Kullback--Leibler divergence between two Gaussian distributions with the same mean and
covariance matrices $\pmb{\Sigma}$ and $\pmb{\Sigma}_{0}$:
\begin{linenomath}
\[
D(\pmb{\Sigma}, \pmb{\Sigma}_{0}) = \mbox{trace}(\pmb{\Sigma}\pmb{\Sigma}_{0}^{-1}) - \log(|\pmb{\Sigma}\pmb{\Sigma}_{0}^{-1}|) - p.
\]
\end{linenomath}
This divergence also appears in the likelihood ratio test statistic for testing
the null hypothesis that a multivariate normal distribution has covariance
matrix $\pmb{\Sigma}= \pmb{\Sigma}_{0}$. We call this divergence measure the likelihood ratio test (LRT) distance. Then, the performance of an
estimator $\pmb{\Sigma}_n$ is summarized by
\begin{linenomath}
\[
\overline{D}({\pmb{\Sigma}}_n, \pmb{\Sigma}_{0}) =\frac{1}{N} \sum_{i=1}^{N}
D(\hat{\pmb{\Sigma}}_{n,i}, \pmb{\Sigma}_{0})
\]
\end{linenomath}
where $\hat{\pmb{\Sigma}}_{n,i}$ is the estimate obtained in the $i$-th replication. Finally, the maximum average LRT distance over all considered contamination values $k$ is also calculated.
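For reference, the LRT distance translates into a few lines of Python:

```python
import numpy as np

def lrt_distance(S, S0):
    # D(Sigma, Sigma_0) = trace(Sigma Sigma_0^{-1}) - log|Sigma Sigma_0^{-1}| - p,
    # i.e. twice the Kullback-Leibler divergence between N(mu, Sigma)
    # and N(mu, Sigma_0).
    p = S.shape[0]
    M = S @ np.linalg.inv(S0)
    _, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - p
```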
\begin{table}[t!]
\centering
\footnotesize
\caption{Maximum average LRT distances under cellwise contamination. The sample size is $n=10p$. }\label{tab:GSE-2SGRE-Cellwise}
\begin{tabular}{llcccccccHHHHc}
\hline
Corr. & $p$ & $\epsilon$ & MLE & Rocke & HSD & Snip & DMCDSc & UF- & UBF- & UF- & UBF- & DDC- & UBF-DDC- \\
& & & & & & & & GSE & GSE & GRE-C & GRE-C & GRE-C & GRE-C\\
\hline
Random & 10 & 0 & 0.6 & 1.2 & 0.8 & 5.0 & 1.5 & 0.8 & 0.9 & 1.2 & 1.3 & 1.0 & 1.0\\
& & 0.02 & 114.8 & 1.2 & 2.3 & 6.9 & 1.6 & 1.2 & 1.4 & 1.3 & 1.4 & 1.1 & 1.1\\
& & 0.05 & 285.4 & 3.6 & 11.2 & 7.5 & 3.2 & 4.5 & 4.4 & 2.2 & 2.5 & 2.6 & 2.5\\
& 20 & 0 & 1.1 & 2.0 & 1.2 & 11.5 & 2.0 & 1.3 & 1.5 & 1.9 & 2.0 & 1.8 & 1.8 \\
& & 0.02 & 146.1 & 2.7 & 10.6 & 13.9 & 2.6 & 4.0 & 4.4 & 2.9 & 3.0 & 2.5 & 2.5 \\
& & 0.05 & 375.9 & 187.2 & 57.1 & 15.5 & 9.3 & 11.0 & 11.1 & 8.0 & 8.2 & 7.7 & 7.3\\
& 30 & 0 & 1.6 & 2.8 & 1.7 & 16.7 & 2.6 & 1.9 & 2.0 & 3.4 & 3.9 & 3.5 & 3.3\\
& & 0.02 & 179.0 & 23.1 & 22.6 & 18.5 & 4.4 & 5.8 & 6.3 & 5.4 & 5.9 & 5.3 & 5.0 \\
& & 0.05 & 475 & 380.5 & 123.1 & 20.8 & 13.7 & 14.2 & 14.8 & 12.3 & 13.4 & 14.2 & 13.3 \\
& 40 & 0 & 2.1 & 3.6 & 2.3 & 20.7 & 3.2 & 2.4 & 2.6 & 5.9 & 6.2 & 5.8 & 5.8 \\
& & 0.02 & 215.1 & 121.3 & 38.9 & 22.6 & 6.0 & 7.3 & 8.0 & 9.4 & 10.9 & 9.5 & 8.8 \\
& & 0.05 & $>$500 & $>$500 & 212.4 & 25.8 & 17.9 & 16.6 & 17.4 & 18.4 & 19.9 & 18.8 & 18.6 \\
& 50 & 0 & 2.7 & 4.4 & 2.8 & 25.4 & 3.8 & 2.9 & 3.2 & 5.2 & 5.3 & 4.9 & 4.9 \\
& & 0.02 & 249.0 & 192.8 & 58.7 & 27.1 & 8.1 & 9.1 & 10.0 & 12.5 & 12.9 & 12.5 & 12.1 \\
& & 0.05 & $>$500 & $>$500 & 298.7 & 29.7 & 20.7 & 19.6 & 20.6 & 22.7 & 23.6 & 24.4 & 23.8 \\
\hline
AR1(0.9) & 10 & 0 & 0.6 & 1.1 & 0.8 & 4.3 & 1.4 & 0.7 & 0.8 & 1.1 & 1.2 & 1.1 & 1.0 \\
& & 0.02 & 149.8 & 1.2 & 0.9 & 4.9 & 1.5 & 0.9 & 0.9 & 1.2 & 1.3 & 1.1 & 1.0 \\
& & 0.05 & 383.8 & 2.6 & 2.8 & 7.0 & 3.1 & 2.1 & 1.1 & 1.7 & 1.4 & 1.3 & 1.3 \\
& 20 & 0 & 1.1 & 1.9 & 1.2 & 7.8 & 2.1 & 1.2 & 1.3 & 1.8 & 1.9 & 1.8 & 1.7 \\
& & 0.02 & 311.3 & 2.5 & 3.9 & 10.5 & 2.6 & 2.1 & 1.5 & 2.2 & 2.1 & 2.0 & 1.9 \\
& & 0.05 & $>$500 & $>$500 & 31.3 & 14.3 & 12.3 & 9.3 & 2.7 & 7.6 & 2.8 & 2.1 & 2.5 \\
& 30 & 0 & 1.6 & 2.8 & 1.8 & 9.4 & 2.7 & 1.7 & 1.8 & 3.2 & 3.4 & 3.6 & 3.2 \\
& & 0.02 & 475.9 & 71.1 & 10.7 & 13.9 & 5.4 & 4.0 & 2.3 & 3.9 & 3.4 & 3.5 & 3.3 \\
& & 0.05 & $>$500 & $>$500 & 103.3 & 19.8 & 22.6 & 20.3 & 6.2 & 18.1 & 5.5 & 3.4 & 3.6 \\
& 40 & 0 & 2.1 & 3.6 & 2.2 & 10.9 & 3.4 & 2.3 & 2.3 & 5.5 & 5.7 & 5.8 & 5.5 \\
& & 0.02 & $>$500 & 222.1 & 22.7 & 16.2 & 8.9 & 6.7 & 3.5 & 6.5 & 5.7 & 6.0 & 5.6 \\
& & 0.05 & $>$500 & $>$500 & 259.9 & 23.7 & 34.8 & 31.4 & 14.0 & 29.7 & 12.4 & 6.1 & 5.9 \\
& 50 & 0 & 2.7 & 4.4 & 2.8 & 13.0 & 4.0 & 2.8 & 2.9 & 5.5 & 5.2 & 4.6 & 5.0 \\
& & 0.02 & $>$500 & $>$500 & 43.3 & 18.9 & 12.8 & 9.7 & 4.9 & 9.7 & 6.4 & 6.4 & 7.8 \\
& & 0.05 & $>$500 & $>$500 & $>$500 & 28.9 & 46.5 & 42.8 & 22.6 & 40.8 & 20.4 & 7.9 & 8.9 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.5]{Ch3_Section4_MCStudy_LRTcurve_cellwise_p=30_paper_version.pdf}
\caption{Average LRT distance behaviors for various contamination values, $k$, of UF-GSE and UBF-DDC-GSE for random and AR$1(0.9)$ correlations under 5\% cellwise contamination. The dimension is $p=30$ and the sample size is $n=10p$. The results remain the same for larger values of $k$; thus, they are not included in the figure. }\label{fig:2SGR-LRT-cellwise-curve}
\end{figure}
Table \ref{tab:GSE-2SGRE-Cellwise} shows the maximum average LRT distances under cellwise contamination.
UBF-DDC-GRE-C and UF-GSE perform similarly under random correlation, but UBF-DDC-GRE-C outperforms UF-GSE under AR$1(0.9)$. When correlations are small, as in the random correlation case, the bivariate filter fails to detect moderate cellwise outliers (e.g., $k=2$) because the data do not carry enough information about the bivariate correlation structure; the bivariate filter then gives results similar to those of the univariate filter. However, when correlations are large, as in AR$1(0.9)$, the bivariate filter can detect moderate cellwise outliers and therefore outperforms the univariate filter. This is demonstrated, for example, in Figure \ref{fig:2SGR-LRT-cellwise-curve}, which shows the behavior of the average LRT distance for various cellwise contamination values $k$.
\begin{table}[t!]
\centering
\footnotesize
\caption{Maximum average LRT distances under casewise contamination. The sample size is $n=10p$. }\label{tab:GSE-2SGRE-Casewise}
\begin{tabular}{llcccccccHHHHc}
\hline
Corr. & $p$ & $\epsilon$ & MLE & Rocke & HSD & Snip & DMCDSc & UF- & UBF- & UF- & UBF- & DDC- & UBF-DDC- \\
& & & & & & & & GSE & GSE & GRE-C & GRE-C & GRE-C & GRE-C\\
\hline
Random & 10 & 0 & 0.6 & 1.2 & 0.8 & 5.0 & 1.5 & 0.8 & 0.9 & 1.2 & 1.3 & 1.0 & 1.0 \\
& & 0.10 & 43.1 & 2.8 & 3.9 & 44.4 & 4.9 & 9.7 & 18.5 & 11.0 & 19.1 & 9.4 & 7.7 \\
& & 0.20 & 89.0 & 4.7 & 21.8 & 110.3 & 123.6 & 91.8 & 146.8 & 30.1 & 53.0 & 25.3 & 23.7 \\
& 20 & 0 & 1.1 & 2.0 & 1.2 & 11.5 & 2.0 & 1.3 & 1.5 & 1.9 & 2.0 & 1.8 & 1.8 \\
& & 0.10 & 77.0 & 3.4 & 13.4 & 76.9 & 37.8 & 29.7 & 50.1 & 11.5 & 20.9 & 9.5 & 9.1 \\
& & 0.20 & 146.7 & 5.6 & 95.9 & 166.5 & 187.6 & 291.8 & 311.4 & 22.0 & 49.3 & 18.0 & 17.4 \\
& 30 & 0 & 1.6 & 2.8 & 1.7 & 16.7 & 2.6 & 1.9 & 2.0 & 3.4 & 3.9 & 3.5 & 3.3 \\
& & 0.10 & 100.0 & 4.3 & 26.1 & 82.3 & 118.6 & 75.3 & 101.3 & 12.8 & 21.8 & 10.6 & 9.9 \\
& & 0.20 & 200.7 & 7.4 & 297.7 & 220.9 & 268.4 & 415.5 & 445.2 & 21.7 & 47.6 & 18.7 & 16.9 \\
& 40 & 0 & 2.1 & 3.6 & 2.3 & 20.7 & 3.2 & 2.4 & 2.6 & 5.9 & 6.2 & 5.8 & 5.8 \\
& & 0.10 & 125.9 & 5.2 & 46.3 & 101.6 & 130.6 & 140.2 & 168.8 & 18.6 & 29.5 & 17.7 & 16.2 \\
& & 0.20 & 252.4 & 9.1 & $>$500 & 186.2 & 340.1 & $>$500 & 579.9 & 22.7 & 52.3 & 21.2 & 19.5 \\
& 50 & 0 & 2.7 & 4.4 & 2.8 & 25.4 & 3.8 & 2.9 & 3.2 & 5.2 & 5.3 & 4.9 & 4.9 \\
& & 0.10 & 150.3 & 5.9 & 80.0 & 121.9 & 139.5 & 258.1 & 228.8 & 27.5 & 43.4 & 21.2 & 17.6 \\
& & 0.20 & 303.1 & 10.0 & $>$500 & 224.3 & 407.7 & $>$500 & $>$500 & 24.2 & 64.8 & 23.7 & 23.0 \\
\hline
AR1(0.9) & 10 & 0 & 0.6 & 1.1 & 0.8 & 4.3 & 1.4 & 0.7 & 0.8 & 1.1 & 1.2 & 1.1 & 1.0 \\
& & 0.10 & 43.1 & 2.8 & 1.7 & 20.2 & 2.9 & 3.7 & 4.3 & 3.1 & 3.6 & 3.0 & 2.9 \\
& & 0.20 & 88.9 & 4.8 & 8.7 & 49.7 & 29.7 & 50.8 & 50.1 & 7.2 & 8.4 & 6.8 & 6.9 \\
& 20 & 0 & 1.1 & 1.9 & 1.2 & 7.8 & 2.1 & 1.2 & 1.3 & 1.8 & 1.9 & 1.8 & 1.7 \\
& & 0.10 & 77.0 & 2.8 & 4.7 & 43.8 & 14.8 & 12.9 & 14.9 & 3.5 & 4.3 & 3.3 & 3.3 \\
& & 0.20 & 146.6 & 5.3 & 35.3 & 113.0 & 87.6 & 260.5 & 193.9 & 7.3 & 10.5 & 6.0 & 6.0\\
& 30 & 0 & 1.6 & 2.8 & 1.8 & 9.4 & 2.7 & 1.7 & 1.8 & 3.2 & 3.4 & 3.6 & 3.2 \\
& & 0.10 & 98.9 & 3.4 & 8.9 & 66.1 & 32.2 & 31.3 & 37.7 & 4.1 & 5.1 & 4.2 & 4.1 \\
& & 0.20 & 200.5 & 8.2 & 155.5 & 144.8 & 122.9 & 372.7 & 365.1 & 8.4 & 13.3 & 6.9 & 6.8 \\
& 40 & 0 & 2.1 & 3.6 & 2.2 & 10.9 & 3.4 & 2.3 & 2.3 & 5.5 & 5.7 & 5.8 & 5.5 \\
& & 0.10 & 124.9 & 4.3 & 15.6 & 83.7 & 49.2 & 69.1 & 75.5 & 6.4 & 7.3 & 5.8 & 6.4 \\
& & 0.20 & 253.0 & 9.2 & 430.3 & 151.9 & 209.3 & 477.6 & 479.7 & 10.0 & 17.4 & 8.9 & 8.7 \\
& 50 & 0 & 2.7 & 4.4 & 2.8 & 13.0 & 4.0 & 2.8 & 2.9 & 5.5 & 5.2 & 4.6 & 5.0 \\
& & 0.10 & 150.2 & 5.1 & 26.5 & 103.3 & 64.4 & 148.2 & 160.1 & 7.6 & 8.1 & 7.5 & 7.9 \\
& & 0.20 & 302.6 & 10.1 & $>$500 & 188.5 & 276.0 & $>$500 & $>$500 & 11.0 & 21.2 & 10.0 & 8.8 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.5]{Ch3_Section4_MCStudy_LRTcurve_casewise_p=30_paper_version.pdf}
\caption{Average LRT distance behaviors for various contamination values, $k$, of UF-GSE and UBF-DDC-GRE-C for random correlations under 10\% casewise contamination. The dimension is $p=30$ and the sample size is $n=10p$. }\label{fig:GSE-2SGR-LRT-casewise-curve}
\end{figure}
Table \ref{tab:GSE-2SGRE-Casewise} shows the maximum average LRT distances under casewise contamination. Overall, UBF-DDC-GRE-C outperforms UF-GSE. This is because the Rocke $\rho$ function in GRE in UBF-DDC-GRE-C is more capable of downweighting moderate casewise outliers (e.g., $10 < k < 20$) than the Tukey-bisquare $\rho$ function in GSE in UF-GSE. Therefore, UBF-DDC-GRE-C outperforms UF-GSE under moderate casewise contamination and gives overall better results.
This is demonstrated, for example, in Figure \ref{fig:GSE-2SGR-LRT-casewise-curve} which shows the average LRT distance behaviors for various casewise contamination values, $k$.
\afterpage{
\begin{table}[t!]
\centering
\footnotesize
\caption{Finite sample efficiency for random correlations. The sample size is $n=10p$. }\label{tab:GSE-2SGRE-efficiency}
\begin{tabular}{lccccccHHHHc}
\hline
$p$ & MLE & Rocke & HSD & Snip & DMCDSc & UF- & UBF- & UF- & UBF- & DDC- & UBF-DDC- \\
& & & & & & GSE & GSE & GRE-C & GRE-C & GRE-C & GRE-C \\
\hline
10 & 1.00 & 0.50 & 0.73 & 0.12 & 0.41 & 0.75 & 0.66 & 0.53 & 0.48 & 0.56 & 0.57 \\
20 & 1.00 & 0.57 & 0.92 & 0.09 & 0.56 & 0.83 & 0.73 & 0.59 & 0.55 & 0.61 & 0.61 \\
30 & 1.00 & 0.58 & 0.93 & 0.10 & 0.63 & 0.87 & 0.79 & 0.49 & 0.44 & 0.48 & 0.50 \\
40 & 1.00 & 0.60 & 0.94 & 0.10 & 0.68 & 0.89 & 0.83 & 0.39 & 0.36 & 0.40 & 0.40 \\
50 & 1.00 & 0.60 & 0.94 & 0.11 & 0.70 & 0.91 & 0.84 & 0.48 & 0.49 & 0.56 & 0.58 \\
\hline
\end{tabular}
\end{table}
}
Table \ref{tab:GSE-2SGRE-efficiency} shows the finite sample relative efficiency under clean samples with random correlation for the considered robust estimates, taking the MLE average LRT distances as the baseline. The results for the AR$1(0.9)$ correlation are very similar and not shown here. As expected, UF-GSE shows increasing efficiency as $p$ increases, while UBF-DDC-GRE-C has lower efficiency. Improvements can be achieved by using a smaller $\alpha$ in the Rocke $\rho$ function, with some trade-off in robustness. Results from this experiment are provided in the supplementary material.
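The relative efficiencies in Table \ref{tab:GSE-2SGRE-efficiency} can be recovered directly from the clean-sample ($\epsilon = 0$) rows of Table \ref{tab:GSE-2SGRE-Casewise}. A minimal sketch, using the $p=10$ random-correlation values reported above; small discrepancies with the efficiency table (e.g.\ HSD $0.75$ vs.\ $0.73$) are due to rounding of the reported LRT distances:

```python
# Clean-sample (epsilon = 0) average LRT distances for p = 10 under
# random correlation, read from Table tab:GSE-2SGRE-Casewise.
clean_lrt = {'MLE': 0.6, 'Rocke': 1.2, 'HSD': 0.8, 'Snip': 5.0,
             'DMCDSc': 1.5, 'UF-GSE': 0.8, 'UBF-DDC-GRE-C': 1.0}

# Finite sample relative efficiency: MLE baseline divided by each
# estimator's average clean-data LRT distance.
eff = {name: clean_lrt['MLE'] / d for name, d in clean_lrt.items()}
```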
\afterpage{
\begin{table}[t!]
\centering
\footnotesize
\caption{Average ``CPU time'' -- in seconds of a 2.8 GHz Intel Xeon -- evaluated using the \texttt{R} command \texttt{system.time}. The sample size is $n=10p$.} \label{tab:GSE-2SGRE-computing-times}
\begin{tabular}{lcHHHc}
\hline
$p$ & UF- & UBF- & UF- & UBF- & UBF-DDC- \\
& GSE & GSE & GRE-C & GRE-C & GRE-C \\
\hline
10 & 0.7 & 1.1 & 0.1 & 0.2 & 0.2 \\
20 & 7.7 & 11.0 & 1.2 & 1.7 & 1.7 \\
30 & 34.5 & 45.6 & 5.4 & 6.3 & 6.4 \\
40 & 120.5 & 144.9 & 14.5 & 16.9 & 17.1 \\
50 & 278.4 & 338.0 & 33.0 & 37.0 & 37.8\\
\hline
\end{tabular}
\end{table}
}
Finally, we compare the computing times of the two-step procedures. Table \ref{tab:GSE-2SGRE-computing-times} shows the average computing times over all contamination settings for various dimensions and for $n =10p$. The computing times for the two-step procedure have been substantially improved with the implementation of the faster initial estimator, EMVE-C.
\section{Real data example: small-cap stock returns data}\label{sec:GSE-example}
In this section, we consider the weekly returns from 01/08/2008 to 12/28/2010 for a portfolio of 20 small-cap stocks from \citet{martin:2013}.
The purpose of this example is fourfold: first, to show that the classical MLE and traditional robust procedures perform poorly on data affected by propagation of cellwise outliers; second, to show that the two-step procedures (e.g., UF-GSE) can provide better estimates by filtering large outliers; third, to show that the bivariate-filter version of the two-step procedure (e.g., UBF-GSE) provides even better estimates by flagging additional moderate cellwise outliers; and fourth, to show that, for this 20-dimensional dataset, the two-step procedures that use GRE-C (e.g., UBF-GRE-C) can more effectively downweight some high-dimensional casewise outliers than those that use GSE (e.g., UBF-GSE). Therefore, UBF-GRE-C provides the best results for this dataset.
\begin{figure}[thb]
\centering
\includegraphics[scale=0.44]{Ch3_Section5_Example_SmallCapStock_QQ.pdf}
\caption{Normal quantile--quantile plots of weekly returns. Weekly returns that are three MAD's away from the coordinatewise-median are shown in green. }\label{fig:GSE-stock-qq}
\end{figure}
Figure \ref{fig:GSE-stock-qq} shows the normal QQ-plots of the 20 small-cap stocks returns in the portfolio. The bulk of the returns in all stocks seem roughly normal, but large outliers are clearly present for most of these stocks. Stocks with returns lying more than three MAD's away from the coordinatewise-median (i.e., the large outliers) are shown in green in the figure. There is a total of 4.8\% large cellwise outliers that propagate to 40.1\% of the cases. Over 75\% of these weeks correspond to the 2008 financial crisis.
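The ``three MAD's from the coordinatewise-median'' flagging rule used throughout this section can be sketched as follows. This is an illustration on synthetic returns, not the authors' code; the $1.4826$ consistency factor for the MAD is a standard choice under normality and an assumption here.

```python
import numpy as np

def flag_cellwise(X, cutoff=3.0):
    """Flag cells more than `cutoff` MADs from the column-wise median.

    X : (n, p) array of returns. Returns a boolean (n, p) mask.
    """
    med = np.median(X, axis=0)
    # 1.4826 rescales the MAD to be consistent with the standard
    # deviation under normality.
    mad = 1.4826 * np.median(np.abs(X - med), axis=0)
    return np.abs(X - med) > cutoff * mad

rng = np.random.default_rng(0)
X = rng.normal(size=(157, 20))          # 157 weeks x 20 stocks
X[0, 3] = 10.0                          # one large cellwise outlier
mask = flag_cellwise(X)
cell_rate = mask.mean()                 # fraction of flagged cells
case_rate = mask.any(axis=1).mean()     # fraction of cases they propagate to
```

Because a single flagged cell contaminates its entire row, the case rate is always at least as large as the cell rate, which is the propagation effect discussed above.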
\begin{figure}[t!]
\centering
\includegraphics[scale=0.65]{Ch3_Section5_Example_SmallCapStock_MD_MLE_Rocke_2SGS.pdf}
\caption{
Squared Mahalanobis distances of the weekly observations in the small-cap asset returns data based on the MLE, the Rocke, the UF-GSE, and the UBF-GSE estimates. Weeks that contain one or more asset returns with values three MAD's away from the coordinatewise-median are in green. Large distances are truncated for better visualization.}\label{fig:GSE-stock-distances-Rocke-2SGS}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.65]{Ch3_Section5_Example_SmallCapStock_filtering_results.pdf}
\caption{Pairwise scatterplots of the asset returns data for WTS versus HTLD, HTLD versus WSBC, and WSBC versus SUR. Points with components flagged by the univariate filter are in blue. Points with components additionally flagged by the bivariate filter are in
orange.}\label{fig:GSE-stock-filtering-results}
\end{figure}
Figure \ref{fig:GSE-stock-distances-Rocke-2SGS} shows the squared Mahalanobis distances of the 157 weekly observations based on four estimates: the MLE, the Rocke-S estimates, the UF-GSE, and the UBF-GSE.
Weeks that contain large cellwise outliers (asset returns with values three MAD's away from the coordinatewise-median) are in green. From the figure, we see that the MLE and the Rocke-S estimates have failed to identify many of those weeks as MD outliers (i.e., failed to flag these weeks as having estimated full Mahalanobis distance exceeding the 99.99\% quantile of the chi-squared distribution with 20 degrees of freedom). The MLE misses all but seven of the 59 green cases. The Rocke-S estimate does slightly better but still misses one third of the green cases. This is because it is severely affected by the large cellwise outliers that propagate to 40.1\% of the cases. The UF-GSE estimate also does a relatively poor job. This may be due to the presence of several moderate cellwise outliers. In fact,
Figure \ref{fig:GSE-stock-filtering-results} shows the pairwise scatterplots for WTS versus HTLD, HTLD versus WSBC, and WSBC versus SUR with the results from the univariate and the bivariate filter. The points flagged by the univariate filter are in blue, and those flagged by the bivariate filter are in orange. We see that the bivariate filter has identified some additional cellwise outliers that are not-so-large marginally but become more visible when viewed together with other correlated components. These moderate cellwise outliers account for 6.9\% of the cells in the data and propagate to 56.7\% of the cases. The final median weights assigned to these cases by UF-GSE and UBF-GSE are 0.50 and 0.65, respectively. By filtering the moderate cellwise outliers, UBF-GSE makes a more effective use of the clean part of these partly contaminated data points (i.e., the 56.7\% of the cases). As a result, UBF-GSE successfully flags all but five of the 59 green cases.
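The MD-outlier flagging rule used above (squared Mahalanobis distances compared against the 99.99\% chi-squared quantile with $p=20$ degrees of freedom) can be sketched as follows; the synthetic data and the identity location/scatter inputs are illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2

def md_outliers(X, mu, Sigma, level=0.9999):
    """Squared Mahalanobis distances and the chi-squared cutoff flag."""
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
    return d2, d2 > chi2.ppf(level, df=X.shape[1])

rng = np.random.default_rng(1)
p = 20
X = rng.normal(size=(157, p))   # 157 weekly observations
X[0] += 8.0                     # one casewise outlier
d2, flagged = md_outliers(X, np.zeros(p), np.eye(p))
```

In practice $\mu$ and $\Sigma$ are the estimates under comparison (MLE, Rocke-S, UF-GSE, UBF-GSE), which is exactly what makes the flagged sets differ between panels of the figure.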
Figure \ref{fig:GSE-stock-distances-2SGR} shows the squared Mahalanobis distances produced by UBF-GRE-C and UBF-GSE, for comparison. Here, we see that UBF-GRE-C has missed only 3 of the 59 green cases, while UBF-GSE has missed 6 of the 59. UBF-GRE-C has also clearly flagged weeks 36, 59, and 66 (with final weights 0.6, 0.0, and 0.0, respectively) as casewise outliers. In contrast, UBF-GSE gives final weights 0.8, 0.5, and 0.5 to these cases. Consistent with our simulation results, UBF-GSE has difficulty downweighting some high-dimensional outlying cases on datasets of high dimension.
In this example, UBF-GRE-C makes the most effective use of the clean part of the data and has the best outlier detecting performance among the considered estimates.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.65]{Ch3_Section5_Example_SmallCapStock_MD_2SGR.pdf}
\caption{
Squared Mahalanobis distances of the weekly observations in the small-cap asset returns data based on the UBF-GSE and the UBF-GRE-C estimates. Weeks that contain one or more asset returns with values three MAD's away from the coordinatewise-median are in green. }\label{fig:GSE-stock-distances-2SGR}
\end{figure}
\section{Conclusions}\label{sec:GSE-conclusions}
In this paper, we overcome three serious limitations of UF-GSE. First, the estimator cannot deal with moderate cellwise outliers. Second, the estimator shows an uncontrollable increase in Gaussian efficiency, which is paid for by a serious decrease in robustness, for larger $p$. Third, the initial estimator (extended minimum volume ellipsoids, EMVE) used by GSE and UF-GSE does not scale well in higher dimensions because it requires an impractically large number of subsamples to achieve a high breakdown point in larger dimensions.
To also deal with moderate cellwise outliers, we complement the univariate filter with a combination of bivariate filters (UBF-DDC).
To achieve a controllable efficiency/robustness trade-off in higher dimensions, we replace the GSE in the second step with a Rocke-type GSE, which we call GRE. Finally,
to overcome the high computational cost of the EMVE, we introduce a clustering-based subsampling procedure. The proposed procedure is called UBF-DDC-GRE-C.
As shown by our simulation, UBF-DDC-GRE-C provides reliable results for cellwise contamination when $\epsilon \le 0.05$ and $p \le 50$. For larger dimensions ($p > 50$), in our experience, the proposed estimator still performs well unless there is a large fraction of small size cellwise outliers that evade the filter and propagate.
Furthermore, UBF-DDC-GRE-C exhibits high robustness against moderate and large cellwise outliers, as well as casewise outliers in higher dimensions (e.g., $p > 10$).
We also show via simulation studies that, in higher dimensions, estimators using the proposed subsampling with only $50$ subsamples can achieve performance equivalent to that of the usual uniform subsampling with $500$ subsamples.
The proposed two-step procedure still has some limitations.
As pointed out in the rejoinder in \citet{agostinelli:2014b}, the GSE in the second step does not handle well flat data sets, i.e., $n \approx 2p$. In fact, when $n \le 2p$, these estimators fail to exist (cannot be computed). This is also the case for GRE-C, and for all the casewise robust estimators with breakdown point $1/2$. Our numerical experiments show that the proposed two-step procedure works well when $n \ge 5p$ but not as well when $2p < n < 5p$, depending on the amount of data filtered in the first step. In this situation, if much data are filtered leaving a small fraction of complete data cases, GSE and GRE may fail to converge \citep{danilov:2012, agostinelli:2014b}. This problem could be remedied by using graphical lasso \citep[GLASSO,][]{friedman:2008} to improve the conditioning of the estimates.
\section{Proof of Proposition 1}
The proof is available in \citet{agostinelli:2014}, but we provide a more detailed proof here for completeness.
\begin{proof}
Without loss of generality, set $\mu_0 = 0$ and $\sigma_0 = 1$.
Let $Z_{0i} = \frac{X_i - \mu_0}{\sigma_0} = X_i$ and $Z_{i} = \frac{X_i - T_{0n}}{S_{0n}}$.
Denote the empirical distributions of $Z_{01}, \dots, Z_{0n}$ and $Z_{1}, \dots, Z_{n}$ by
\begin{linenomath}
\[
F_{0n}^+(t) = \frac{1}{n}\sum_{i=1}^n I\left( |Z_{0i}| \le t \right) \quad \text{and} \quad
F_{n}^+(t) = \frac{1}{n}\sum_{i=1}^n I\left( |Z_{i}| \le t \right).
\]
\end{linenomath}
By assumption, with probability one, there exists $n_1$ such that $n \ge n_1$ implies $0 < 1 - \delta \le S_{0n} \le 1 + \delta$ and $-\delta \le T_{0n} \le \delta$, and we have
\begin{linenomath}
\[
\begin{aligned}
F_{n}^+(t) &= \frac{1}{n}\sum_{i=1}^n I\left(- t \le Z_{i} \le t \right) = \frac{1}{n}\sum_{i=1}^n I\left(- t \le \frac{X_i - T_{0n}}{S_{0n}} \le t \right) \\
&= \frac{1}{n}\sum_{i=1}^n I\left(- t S_{0n} + T_{0n} \le X_{i} \le t S_{0n} + T_{0n} \right) \\
&\ge \frac{1}{n}\sum_{i=1}^n I\left(- t (1 - \delta) + T_{0n} \le X_{i} \le t (1 - \delta) + T_{0n} \right) \\
&\ge \frac{1}{n}\sum_{i=1}^n I\left(- t (1 - \delta) + \delta \le X_{i} \le t (1 - \delta) - \delta \right) \\
&= \frac{1}{n}\sum_{i=1}^n I\left( |X_{i}| \le t(1 - \delta) - \delta \right) = F_{0n}^+(t(1 - \delta) - \delta).
\end{aligned}
\]
\end{linenomath}
Now, by the Glivenko--Cantelli Theorem, with probability one there
exists $n_2$ such that $n\geq n_2$ implies that $\sup_{t}| F_{0n}^+(t) -F_0^+(t)|\leq\varepsilon/2$. Also, by the uniform continuity of $F_0^+$, given $\varepsilon>0,$ there
exists $\delta> 0$ such that $|F_0^+(t(1-\delta) - \delta)-F_0^+(t)|\leq\varepsilon/2$.
Finally, note that
\begin{linenomath}
\begin{align*}
{F}_{n}^+(t) & \geq F_{0n}^+(t(1-\delta) -\delta)\\
& =\left( F_{0n}^+(t(1-\delta) -\delta)-F_0^+(t(1-\delta
)-\delta)\right) \\
& \qquad+(F_0^+(t(1-\delta) -\delta)-F_0^{+}(t))+(F_0^{+}(t)- F^+(t))+F^{+}(t).
\end{align*}
\end{linenomath}
Let $n_3=\max(n_1,n_2)$; then $n\geq n_3$ implies
\begin{linenomath}
\begin{align*}
\sup_{t> \eta}( F^+(t)-{F}_{n}^+(t)) &
\leq\sup_{t> \eta}\left| F_0^+(t(1-\delta) -\delta)- F_{0n}^{+}(t(1-\delta)
-\delta)\right| \\
& \qquad+\sup_{t> \eta}\left| F_0^{+}(t) - F_0^{+}(t(1-\delta)-\delta) \right|
+\sup_{t> \eta}(F^{+}(t)-F_0^{+}(t))\\
& \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2} + 0 = \varepsilon.
\end{align*}
\end{linenomath}
This implies that $n_{0}/n\rightarrow0$ a.s.
\end{proof}
\section{Performance comparison between GSE and GRE}
We conduct a simulation study to compare the standalone performances of the second steps (i.e., the estimation step) in the two-step S-estimators: GRE-C starting from EMVE-C versus GSE starting from EMVE.
We consider clean and casewise contaminated samples from a $N_p(\pmb{\mu_0},\pmb{\Sigma_0})$ distribution with $p=10,20,\dots,50$ and $n=10p$. The simulation mechanisms are the same as that of Section 5, but in addition,
5\% of the cells in the generated samples are randomly selected and assigned a missing value. The number of replicates is $N=500$.
Table \ref{tab:GSE-robustness-all} shows the maximum average LRT distances from the true correlation matrices among the considered contamination sizes and, for brevity, shows only the values for random correlations. EMVE is capable of dealing with a small fraction of outliers using 500 subsamples, but breaks down when the fraction gets larger, dragging down the performance of GSE. EMVE-C, with its more refined subsampling procedure and larger subsample sizes, performs better than EMVE, even for relatively large fractions of outliers. Overall, GRE performs better than GSE. The Rocke $\rho$ function used in GRE is capable of giving smaller weights to points that are at moderate-to-large distances from the main mass of points \citep{rocke:1996}; see, for example,
Figure \ref{fig:GSE-LRT-curve-GRE}, which shows the average LRT distance behavior under $10\%$ casewise contamination for dimension $p=30$ and sample size $n=300$. In the figure, we see that GRE outperforms GSE for moderate-size contamination points, as expected.
\begin{table}[t!]
\centering
\footnotesize
\caption{Maximum average LRT distances. The sample size is $n=10p$.}\label{tab:GSE-robustness-all}
\begin{tabular}{llcccc}
\hline
$p$ & $\epsilon$ & EMVE & GSE & EMVE-C & GRE-C \\
\hline
10 & 0.10 & 8.7 & 4.6 & 17.3 & 10.9 \\
& 0.20 & 81.4 & 84.8 & 43.4 & 36.1 \\
20 & 0.10 & 20.8 & 24.1 & 9.2 & 8.1 \\
& 0.20 & 123.0 & 156.8 & 13.1 & 14.9 \\
30 & 0.10 & 31.2 & 54.8 & 13.4 & 9.4 \\
& 0.20 & 299.1 & 223.2 & 24.3 & 16.0 \\
40 & 0.10 & 77.5 & 80.7 & 21.9 & 12.2 \\
& 0.20 & 511.8 & 287.9 & 43.2 & 17.1 \\
50 & 0.10 & 172.5 & 125.1 & 29.4 & 16.5 \\
& 0.20 & 644.3 & 349.8 & 60.2 & 26.3 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.5]{Ch3_Appendix_MCStudy_GRE_Casewise_p=30.pdf}
\caption{Average LRT distances for various contamination sizes, $k$, for random correlations under 10\% casewise contamination. The dimension is $p=30$ and the sample size is $n=10p$.}\label{fig:GSE-LRT-curve-GRE}
\end{figure}
Table \ref{tab:GSE-efficiency} shows the finite sample relative efficiency under clean samples, taking the classical EM estimator as the baseline. As expected, GSE shows an increasing efficiency as $p$ increases. GRE, overall, has lower efficiency.
\afterpage{
\begin{table}[t!]
\centering
\footnotesize
\caption{Finite sample efficiency. The sample size is $n=10p$.}\label{tab:GSE-efficiency}
\begin{tabular}{lcccc}
\hline
$p$ & EMVE & GSE & EMVE-C & GRE-C \\
\hline
10 & 0.24 & 0.89 & 0.26 & 0.54 \\
20 & 0.30 & 0.95 & 0.30 & 0.59 \\
30 & 0.34 & 0.98 & 0.33 & 0.58 \\
40 & 0.35 & 0.98 & 0.34 & 0.47 \\
50 & 0.37 & 0.99 & 0.35 & 0.48 \\
\hline
\end{tabular}
\end{table}
}
\section{Efficiency of GRE and tuning parameter $\alpha$ in Rocke-$\rho$ function}
The tuning parameter $\alpha$ in the Rocke-$\rho$ function in $\gamma$ in
(10) is chosen to be small to control the efficiency. In this paper, we used the conventional choice $\alpha = 0.05$, which was seen to achieve reasonable efficiency while maintaining high robustness. Here, we explore the performance of GRE-C with smaller values of $\alpha$. We repeat the simulation study of Section 5 for $p=10,30,50$ and $n=10p$. The number of replicates is $N=30$. Table \ref{tab:GSE-GRE-alpha-efficiency} reports the finite sample efficiency and maximum average LRT distances under 20\% casewise contamination. In general, higher efficiency can be achieved using smaller values of $\alpha$, at the cost of some loss in robustness.
\begin{table}[h!]
\centering
\footnotesize
\caption{Finite sample efficiency and maximum average LRT distances for GRE-C with various values of $\alpha$. The sample size is $n=10p$.}\label{tab:GSE-GRE-alpha-efficiency}
\begin{tabular}{lcccccc}
\hline
$p$ & \multicolumn{3}{c}{Efficiency, clean data} & \multicolumn{3}{c}{Max LRT, 20\% casewise} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7}
& $\alpha=0.05$ & $\alpha=0.01$ & $\alpha=0.001$ & $\alpha=0.05$ & $\alpha=0.01$ & $\alpha=0.001$ \\
\cmidrule(lr){1-4}\cmidrule(lr){5-7}
10 & 0.54 & 0.67 & 0.67 & 33.1 & 32.1 & 32.1 \\
30 & 0.58 & 0.85 & 0.95 & 16.0 & 20.2 & 28.7 \\
50 & 0.55 & 0.58 & 0.93 & 27.1 & 28.1 & 47.7 \\
\hline
\end{tabular}
\end{table}
\bibliographystyle{elsarticle-harv}
\section{Introduction}
Finely-labelled data are crucial to the success of supervised and semi-supervised approaches to classification algorithms. In particular, common deep learning approaches \cite{NueralNet, NN} typically require a great number of data samples to train effectively \cite{DNNSmallData1, DNNSmallData2}. In this work, a \emph{sample} refers to a section taken from a larger time-series dataset of audio. Often, these datasets lack large quantities of fine labels as producing them is extremely costly (requiring exact marking of start and stop times). The common distribution of data in these domains is such that there are small quantities of expertly-labelled (finely) data, large quantities of \textit{weakly} (coarsely) labelled data, and a large volume of unlabelled data. Here, \textit{weak} labels refer to labels that indicate one or more events are present in the sample, although do not contain the information as to the event frequency nor the exact location of occurrence(s) (illustrated in Section \ref{FeatureExtract}, Fig. \ref{fig:LogMel}). Our goal therefore is to improve classification performance in domains with variable quality datasets.
Our key contribution is as follows. We propose a framework that combines the strengths of both traditional algorithms and deep learning methods, to perform multi-resolution Bayesian bootstrapping. We obtain probabilistic labels for pseudo-fine labels, generated from weak labels, which can then be used to train a neural network. For the label refinement from weak to fine we use a Kernel Density Estimator (KDE).
The remainder of the paper is organised as follows. Section \ref{Methodology} discusses the structure of the framework as well as the baseline classifiers we test against. Section \ref{Experiments} describes the datasets we use and the details of the experiments carried out. Section \ref{Classification Performance} presents the experimental results, while Section \ref{Conclusion} concludes.
\section{Methodology}
\label{Methodology}
\subsection{Framework Overview}
Our framework is separated into an inner and outer classifier in cascade as in Figure \ref{fig:Framework}. For the inner classifier we extract features from the finely and weakly-labelled audio data using the two-sample Kolgomogrov-Smirnov test for features of a log-mel spectrogram (Section \ref{FeatureExtract}). We train our inner classifier, the Gaussian KDE (Section \ref{GaussianKDE}), on the finely-labelled data and predict on the weakly-labelled data.
For the outer classifier we extract the feature vectors from an unlabelled audio dataset using the log-mel spectrogram. We then train our outer classifier, a CNN, (Section \ref{CNN}) on the finely-labelled data and the resulting pseudo-finely labelled data output by the Gaussian KDE. The details of our data and problem are found in Section \ref{Experiments}. Code will be made available on \url{https://github.com/HumBug-Mosquito/weak_labelling}.
\begin{figure}[]
\begin{center}
\includegraphics[width=7cm]{FrameworkEventDetection.pdf}
\vspace{-0.1in}
\caption{Framework comprising a feature extraction \& selection layer, an inner classifier and an outer classifier. The arrows represent data flows.}
\label{fig:Framework}
\end{center}
\end{figure}
\subsection{Feature Extraction and Selection}
\label{FeatureExtract}
The CNN uses the log-mel spectrogram (as in Fig.~\ref{fig:LogMel}) as it has recently become the gold standard in feature representation for audio data \cite{LogMel,LogMel4}.
\begin{figure}
\begin{center}
\includegraphics[width=8.2cm]{Log_Mel3.pdf}
\vspace{-0.1in}
\caption{From top to bottom: Log-mel spectrogram of $100$ seconds of audio data at a signal-to-noise ratio of $-15$ dB. The KS-selected features are shown as green dashed lines; The corresponding fine and weak labels for the above log-mel spectrogram.}
\label{fig:LogMel}
\end{center}
\end{figure}
The input signal is divided into $0.1$ second windows and we compute $128$ log-mel filterbank features. Thus, for a given $100$ seconds of audio input, the feature extraction method produces a $1000\times128$ output.
The two-sample Kolmogorov-Smirnov (KS) test \cite{KS1} is a non-parametric test for the equality of continuous, one-dimensional probability distributions that can be used to compare two samples. This measure of similarity is provided by the Kolmogorov-Smirnov statistic which quantifies a distance between the empirical distribution functions of the two samples. We use the KS test to select a subset of the $128$ log-mel features, that are maximally different between the two classes to feed into the classifiers. We choose $N$ features with the largest KS statistics. Fig. \ref{fig:LogMel} illustrates that the process to find maximally different feature pairs, correctly chooses frequencies of interest. For example, if the noise file is concentrated in high frequencies (as in Fig. \ref{fig:LogMel}), the KS feature selection process chooses lower harmonics of the training signal (a mosquito flying tone) as features to feed to the algorithms. Conversely, for low-frequency dominated noise, higher audible harmonics of the event signal are identified.
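A minimal sketch of the KS-based feature selection described above, using \texttt{scipy.stats.ks\_2samp} to rank the $128$ log-mel bands and keep the $N$ most discriminative ones. The synthetic event/noise features and the choice $N=10$ are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def select_ks_features(X_event, X_noise, n_keep=10):
    """Rank log-mel bands by the two-sample KS statistic; keep the top N.

    X_event, X_noise : (frames, 128) log-mel features for each class.
    """
    stats = np.array([ks_2samp(X_event[:, j], X_noise[:, j]).statistic
                      for j in range(X_event.shape[1])])
    return np.argsort(stats)[::-1][:n_keep]

rng = np.random.default_rng(0)
noise = rng.normal(size=(500, 128))
event = rng.normal(size=(100, 128))
event[:, 40] += 3.0                 # a band where the event stands out
selected = select_ks_features(event, noise, n_keep=10)
```

The shifted band ends up among the selected features, mirroring how the procedure picks out event harmonics away from the noise floor in Fig. \ref{fig:LogMel}.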
\subsection{Gaussian Kernel Density Estimation}
\label{GaussianKDE}
Kernel density estimation (KDE) or Parzen estimation \cite{KDE1, KDE2} is a non-parametric method for estimating a $d$-dimensional probability density function $f_{\mathbf{X}}(\bm{x})$ from a finite sample $\mathcal{D}=\{\bm{x}_{i}\}^{N}_{i=1}$, $\bm{x}_{i}\in\mathbb{R}^{d}$, by convolving the empirical density function with a kernel function.
We then use Bayes' theorem to calculate the posterior over class $1$
\begin{equation}\label{eq:Bayes}
p(C_{1}\mid\bm{x})=\frac{p(\bm{x}\mid C_{1})p(C_{1})}{p(\bm{x}\mid C_{0})p(C_{0})+p(\bm{x}\mid C_{1})p(C_{1})},
\end{equation}
where $ p(\bm{x}\mid C_{k})$ is the KDE per class $C_k$, with $C_1$ representing the event class and $C_0$ the noise class (i.e. non-event).
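A minimal sketch of the class-conditional KDEs combined through Eq.~(\ref{eq:Bayes}), using \texttt{scipy.stats.gaussian\_kde} with its default (Scott) bandwidth. The two-dimensional synthetic features are an illustrative assumption, and we use the flat priors $p(C_{0})=p(C_{1})=0.5$ adopted in our experiments.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_posterior(x, X_event, X_noise, prior_event=0.5):
    """Posterior p(C1 | x) from class-conditional Gaussian KDEs.

    X_event, X_noise : (n, d) training samples; x : (d, m) query points.
    """
    like1 = gaussian_kde(X_event.T)(x)   # p(x | C1); KDE expects (d, n)
    like0 = gaussian_kde(X_noise.T)(x)   # p(x | C0)
    num = like1 * prior_event
    return num / (num + like0 * (1.0 - prior_event))

rng = np.random.default_rng(0)
X_noise = rng.normal(0.0, 1.0, size=(500, 2))
X_event = rng.normal(4.0, 1.0, size=(100, 2))
post = kde_posterior(np.array([[4.0], [4.0]]), X_event, X_noise)
```

A query point in the heart of the event cluster receives a posterior close to one, which is the probabilistic pseudo-fine label passed to the outer classifier.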
\subsection{Convolutional Neural Network}
\label{CNN}
Given our scarce data environment, we use a CNN with dropout with probability $p=0.2$ \cite{Dropout}. Our proposed architecture, given in Fig. \ref{fig:CNN}, consists of an input layer connected sequentially to a single convolutional layer and a fully connected layer. The CNN is trained for $10$ epochs with SGD \cite{SGD}, and all activations are ReLUs. We use this particular architecture due to constraints in data size \cite{Wavelet}, and therefore have few layers and fewer parameters to learn.
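The feature-map sizes quoted in Fig.~\ref{fig:CNN} follow from standard convolution/pooling shape arithmetic. Assuming each input sample is a $128\times10$ log-mel patch ($128$ bands by ten $0.1$\,s frames; the $10$-frame width is our assumption, consistent with the reported $127\times9$ feature maps):

```python
def conv_out(size, kernel, stride=1):
    # 'valid' convolution output size
    return (size - kernel) // stride + 1

def pool_out(size, pool):
    # non-overlapping max-pooling output size
    return size // pool

h, w = 128, 10                           # log-mel bands x 0.1 s frames
h, w = conv_out(h, 2), conv_out(w, 2)    # 2x2 convolution -> 127 x 9
h, w = pool_out(h, 2), pool_out(w, 2)    # 2x2 max-pooling  -> 63 x 4
flat = h * w * 32                        # 32 filters feed the dense layer
```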
\begin{figure}[]
\begin{center}
\includegraphics[width=8.2cm]{CNN.pdf}
\vspace{-0.1in}
\caption{The CNN architecture. Spectrogram of mosquito recording fed as input to convolutional layer with $32$ filters and kernel $2\times 2 \times 1$. Generated feature maps are down-sampled by a max-pooling layer from $127\times 9$ to $63\times 4$. It is then connected to a dense layer with $256$ neurons and finally the output with $2$ neurons.}
\label{fig:CNN}
\end{center}
\end{figure}
\subsection{Traditional Classifier Baselines}
\label{BaseLine}
We compare our inner classifier, the Gaussian KDE, with
more traditional classifiers that are widely used in machine learning: Linear Discriminant Analysis (LDA), Gaussian Na\"ive Bayes (GNB), support vector machines using a radial basis function kernel (RBF-SVM), random forests (RF) and a multilayer perceptron (MLP).
\section{Experiments}
\label{Experiments}
\subsection{Description of Datasets}
\label{Data}
The most common scenario where mixed quality labels can be found is in crowd-sourcing tasks \cite{CrowdSourcing1, CrowdSourcing2, CrowdSourcing3}, or any challenge where data collection is expensive. The HumBug \cite{Zooniverse}, \cite{HumBug} project utilises crowd-sourcing, forming the main motivation for this research, as well as the basis for our signal\footnote{The overall goal of HumBug is real-time mosquito species recognition to identify potential malaria vectors in order to deliver intervention solutions effectively.}. The event signal consists of a stationary real mosquito audio recording with a duration of $1$ second. The noise file is a non-stationary section of ambient forest sound. The event signal is placed randomly throughout the noise file at varying signal-to-noise ratios (SNRs), to create quantifiable data for prototyping algorithms. There is a class imbalance of $1$ second of event to $9$ seconds of noise in the finely-labelled data and this is propagated to the weakly-labelled and unlabelled datasets. We include $100$ seconds of expert, finely-labelled data, $1000$ seconds of weakly-labelled data, and a further $1000$ seconds of unlabelled test data. To report performance metrics, we create synthetic labels at a resolution of $0.1$ seconds for the finely-labelled data, and on the order of $5$ seconds for the weakly-labelled data. We choose $5$ seconds as to allow the labeller to have temporal context when classifying audio as an event or non-event. As the listener is presented randomly sampled (or actively sampled \cite{BayesActiveSample1,BayesActiveSample2}) sections of audio data, a section much shorter than $5$ seconds would make the task of tuning into each new example very difficult due to the absence of a reference signal.
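The synthetic-data construction described above (a short event placed at a random offset within a longer noise recording, scaled to a target SNR) can be sketched as follows; the sampling rate and the sine-tone stand-in for the mosquito recording are illustrative assumptions.

```python
import numpy as np

def inject_event(noise, event, snr_db, rng):
    """Place `event` at a random offset in `noise`, scaled to a target SNR.

    SNR (dB) = 10 log10(P_event / P_noise), with powers measured over the
    event's footprint. Returns the mixture and fine labels per sample.
    """
    start = rng.integers(0, len(noise) - len(event))
    seg = noise[start:start + len(event)]
    scale = np.sqrt(np.mean(seg**2) / np.mean(event**2) * 10**(snr_db / 10))
    out = noise.copy()
    out[start:start + len(event)] += scale * event
    labels = np.zeros(len(noise), dtype=int)
    labels[start:start + len(event)] = 1
    return out, labels

rng = np.random.default_rng(0)
fs = 8000
noise = rng.normal(size=100 * fs)                      # 100 s of noise
event = np.sin(2 * np.pi * 600 * np.arange(fs) / fs)   # 1 s stand-in tone
mix, labels = inject_event(noise, event, snr_db=-15, rng=rng)
```

Repeating the injection with one event per ten seconds of noise reproduces the $1{:}9$ event-to-noise class imbalance used in our datasets.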
\subsection{Experimental Design}
We evaluate our inner model against the baseline classifiers with two experiments and finally test the overall performance of the framework utilising the outputs of the various inner classifiers. We make the assumption that the accuracy of the weak labels is $100\%$.
Therefore, all the classifiers predict over the coarse class $1$ labelled data only. Additionally, the priors we use in Eq. \ref{eq:Bayes} for our Gaussian KDE model are set such that $p(C_{0})=p(C_{1})=0.5$. This is to reflect that, since the audio sample is weakly labelled $1$, each data point is equally likely to be in fine class $0$ or $1$.
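A minimal sketch of such a classifier, with one kernel density estimate per fine class combined through Bayes' rule with the equal priors above (scikit-learn's `KernelDensity` is used as a stand-in; the class name, bandwidth, and toy features are illustrative):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

class KDEBayes:
    """Generative classifier: p(C1|x) proportional to p(x|C1) p(C1)."""

    def __init__(self, bandwidth=0.5, priors=(0.5, 0.5)):
        self.bandwidth = bandwidth
        self.priors = priors

    def fit(self, X0, X1):
        # one density estimate per fine class
        self.kde0 = KernelDensity(bandwidth=self.bandwidth).fit(X0)
        self.kde1 = KernelDensity(bandwidth=self.bandwidth).fit(X1)
        return self

    def predict_proba(self, X):
        # score_samples returns log p(x|C); apply Bayes' rule with the priors
        l0 = np.exp(self.kde0.score_samples(X)) * self.priors[0]
        l1 = np.exp(self.kde1.score_samples(X)) * self.priors[1]
        return l1 / (l0 + l1)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(200, 2))   # stand-in fine class 0 features
X1 = rng.normal(3.0, 1.0, size=(200, 2))   # stand-in fine class 1 features
probs = KDEBayes().fit(X0, X1).predict_proba(np.array([[0.0, 0.0], [3.0, 3.0]]))
```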
Generative models such as the Gaussian KDE obtain a performance boost from the additional information provided by the coarse class $0$ data, as this allows them to better model the class $0$ distribution. Conversely, discriminative models such as the SVM, RF and MLP suffer a drop in performance, because the decision boundary they create over-fits to the class $0$ data points due to the increased class imbalance. We therefore train the LDA, GNB, SVM, RF and MLP on the finely-labelled data only, whereas the Gaussian KDE is trained on both the finely-labelled data and the coarse class $0$ data.
\section{Classification Performance}
\label{Classification Performance}
For each SNR we run $40$ iterations, varying the time location of the injected signals, as well as the random seed of the algorithms. After applying median filtering, with a filter window of $500$ ms, we see the results in Fig. \ref{fig:PostProcresults}.
The F$1$-score gradually increases as expected from the threshold of detection to more audible SNRs. The decay of performance at the lower SNRs can be partially accounted for by the two-sample KS test used for feature selection failing to choose features of interest due to the increased noise floor. We observe a significant benefit to using the Gaussian KDE, which when combined with temporal averaging helps recover the dynamic nature of the signal (namely that there is correlation between neighbouring values in the time-series).
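With per-frame probabilities at the $0.1$ s label resolution, the $500$ ms median filter corresponds to a kernel of $5$ frames. A sketch using `scipy.signal.medfilt` (the toy probability sequence is illustrative):

```python
import numpy as np
from scipy.signal import medfilt

frame_len = 0.1           # seconds per probability estimate
window = 0.5              # 500 ms median-filter window
kernel = int(round(window / frame_len))
kernel += 1 - kernel % 2  # medfilt requires an odd kernel size

# noisy per-frame p(C1|x) with one isolated false positive at index 2
p = np.array([0.1, 0.1, 0.9, 0.1, 0.1, 0.8, 0.9, 0.9, 0.8, 0.9])
p_smooth = medfilt(p, kernel_size=kernel)
```

The isolated spike is suppressed while the sustained run of high probabilities survives, which is exactly the temporal-correlation prior mentioned above.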
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{color2.pdf} \caption{From left to right: Boxplots showing results of first two experiments, grouped by SNR. LDA, SVM, RF and MLP are trained on finely-labelled data only whilst the Gaussian KDE is trained on the finely-labelled data and coarse class $0$ data.}
\label{fig:PostProcresults}
\end{figure*}
Fig. \ref{fig:PostProcresults} shows that the Gaussian KDE predicts better calibrated probabilities than the other baseline classifiers. This is shown by applying rejection \cite{NueralNet, rejection} in addition to the median filtering. The
rejection window for the output probabilities is $0.1<p(C_{1}| \bm{x})<0.9$. The Gaussian KDE improves significantly in performance, especially at the lower SNRs; however it should be noted that the F$1$-score is evaluated on the remaining data after rejection.
The Gaussian KDE rejects a large proportion of the data at lower SNRs, showing that the probabilities sit at either extreme only when the model is confident in its predictions.
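The rejection step itself is a simple mask over the output probabilities; a sketch (the window matches the $0.1<p(C_{1}| \bm{x})<0.9$ interval above, and the helper name is ours):

```python
import numpy as np

def reject(probs, low=0.1, high=0.9):
    """Discard frames inside the uncertainty window; return hard labels
    for the confident frames and the boolean keep-mask."""
    keep = (probs <= low) | (probs >= high)
    labels = (probs[keep] >= high).astype(int)
    return labels, keep

probs = np.array([0.05, 0.50, 0.95, 0.30, 0.92])
labels, keep = reject(probs)
```

Any F$1$-score computed downstream must be read with the kept fraction `keep.mean()` in mind, as noted above.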
The final experiment tests the overall framework, with input to a CNN from pseudo-finely labelled data with median filtering and rejection applied. Table \ref{table:CNNResults} shows that using the framework in conjunction with any of the inner classifiers' outputs outperforms a regular CNN trained on the coarse data. Furthermore, training the CNN on the output of the Gaussian KDE significantly improves detection of events, by $22.1\%$ over the best baseline system, the CNN(GNB). We also show that using the strongest inner classifier (KDE) alone results in vastly lower precision and recall scores than the bootstrapping approach advocated here, which gains an improvement of $0.22$ in F$1$-score by incorporating the CNN into the pipeline with the KDE.
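Schematically, the full bootstrap reads: fit the inner classifier on the finely-labelled data, pseudo-label the weak data after filtering and rejection, then train the outer classifier on the result. The sketch below substitutes a logistic regression for the CNN and Gaussian blobs for audio features (all names and numbers are illustrative, not our actual pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
X_fine0 = rng.normal(0.0, 1.0, (100, 2))   # fine class 0 (noise)
X_fine1 = rng.normal(3.0, 1.0, (100, 2))   # fine class 1 (event)
X_weak = np.vstack([rng.normal(0.0, 1.0, (450, 2)),
                    rng.normal(3.0, 1.0, (50, 2))])   # 1:9 imbalance
X_test = np.vstack([rng.normal(0.0, 1.0, (90, 2)),
                    rng.normal(3.0, 1.0, (10, 2))])
y_test = np.array([0] * 90 + [1] * 10)

# 1) inner classifier: class-conditional KDEs fitted on the fine labels
log_p0 = KernelDensity(bandwidth=0.5).fit(X_fine0).score_samples(X_weak)
log_p1 = KernelDensity(bandwidth=0.5).fit(X_fine1).score_samples(X_weak)
probs = np.exp(log_p1) / (np.exp(log_p0) + np.exp(log_p1))

# 2) rejection: keep only confident pseudo-labels
keep = (probs <= 0.1) | (probs >= 0.9)
pseudo_y = (probs[keep] >= 0.9).astype(int)

# 3) outer classifier trained on the pseudo-finely-labelled data
outer = LogisticRegression().fit(X_weak[keep], pseudo_y)
acc = outer.score(X_test, y_test)
```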
\setlength{\tabcolsep}{4.5pt}
\begin{table}[!h]
\begin{center}
\caption{CNN outer classifier: Metrics reported as means $\pm$ one standard deviation at an SNR of $-19.8$ dB for $40$ iterations}
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{cccc}
\hline
Classifier & F$1$-score & Precision & Recall \\
\hline
\textbf{CNN(KDE)} & \textbf{0.729 $\pm$ 0.034} & \textbf{0.719 $\pm$ 0.029} & \textbf{0.744 $\pm$ 0.031}\\
CNN(MLP) & $0.435 \pm 0.022$ & $0.667 \pm 0.026$ & $0.322 \pm 0.029$\\
CNN(RF) & $0.320 \pm 0.031$ & $0.419 \pm 0.035$ & $0.259 \pm 0.034$\\
CNN(SVM) & $0.338 \pm 0.024$ & $0.484 \pm 0.024$ & $0.259 \pm 0.022$\\
CNN(GNB) & $0.597 \pm 0.023$ & $0.654 \pm 0.028$ & $0.549 \pm 0.023$\\
CNN(LDA) & $0.307 \pm 0.027$ & $0.571 \pm 0.026$ & $0.210 \pm 0.023$\\
CNN(Coarse) & $0.174 \pm 0.036$ & $0.095 \pm 0.031$ & $0.923 \pm 0.039$\\
\hline
KDE & $0.506 \pm 0.021$ & $0.518 \pm 0.021$ & $0.502 \pm 0.024$\\
\hline
\label{table:CNNResults}
\end{tabular}}
\end{center}
\vspace{-9mm}
\end{table}
\FloatBarrier
\section{Conclusions \& Further Work}
\label{Conclusion}
\subsection{Conclusions}
This paper proposes a novel framework utilising a Gaussian KDE for super-resolving weakly-labelled data, which is then fed into a CNN to predict over unlabelled data. Our framework is evaluated on synthetic data and achieves an improvement of $22.1\%$ in F$1$-score over the best baseline system. We thus highlight the value that label super-resolution provides in domains with only small quantities of finely-labelled data, a problem that is only sparsely addressed in the literature to date.
\subsection{Further Work}
To leverage the probabilistic labels output by the inner classifier, a suitable candidate for the outer classifier is a loss-calibrated Bayesian neural network (LC-BNN), which combines the benefits of deep learning with principled Bayesian probability theory \cite{LC-BNN}.
Due to computational limitations, optimisation of the hyper-parameters was infeasible. Future work plans to use Bayesian Optimisation \cite{BaysOpt} for this tuning.
Finally, following the promising results of this paper, the next step is application to real datasets.
\FloatBarrier
\section{\bf Super Poisson-Lie symmetry in sigma models on supermanifolds }
Consider two-dimensional sigma models on supermanifolds
\footnote{Here we use the notation of DeWitt presented in
\cite{D}. For example, the transformation properties of upper and
lower right indices to the left ones are as follows:
$$
^iT_{jl...}^{\;k}=T_{jl...}^{ik},\qquad
_jT^{ik}_{l...}=(-1)^j\;T_{jl...}^{ik},
$$
where indices on the exponent of $(-1)$ show the Grassmann degrees
of variables. } $M$ as the target space, with background matrix
${\hspace{-0.5mm}_\mu {\cal E}}\hspace{-0.5mm}_{ \nu}(x)
={\hspace{-0.5mm}_\mu G }\hspace{-0.5mm}_{
\nu}(x)+{\hspace{-0.5mm}_\mu B }\hspace{-0.5mm}_{ \nu}(x) $ as a
function of the coordinates $x^\mu$ \footnote {Here we consider a
bosonic worldsheet with light-cone coordinates
$\xi^\pm=\frac{1}{2}(\tau \pm \sigma)$.}
\begin{equation}
S\;=\;\frac{1}{2}\int\!d\xi^{+}\wedge
d\xi^{-}\;\partial_{+}x^\mu\; {\hspace{-0.5mm}_\mu {\cal
E}}\hspace{-0.5mm}_{ \nu}\;
\partial_{-}x^\nu = \frac{1}{2}\int\!d\xi^{+}\wedge d\xi^{-} L.
\end{equation}
Suppose that a supergroup $G$ acts freely on $M$ from the right. Then
with the use of left invariant supervector field
${{_iv}^\mu}^{(L,\;l)}$ (defined with left derivative)
\begin{equation}
{_iv}^{(L,\;l)} \;=\;{{_iv}^\mu}^{(L,\;l)}\;
\frac{{\overrightarrow{\partial}}}{\partial x^\mu}\;\qquad,
(i=1,...,dimG)
\end{equation}
corresponding to this action, one can compute the variation of $S$
under the transformation $x^\mu \longrightarrow x^\mu +
\epsilon^i(\xi^{+}, \xi^{-}) {_iv}^\mu$ as follows\footnote{From
now on we will omit the superscripts $(L,\;l)$ on ${{_iv}^\mu}$.
}:
\begin{equation}
\delta S \;=\;\frac{1}{2}\int\!d\xi^{+} \wedge
d\xi^{-}\;\varepsilon^i\;(-1)^{\lambda+i\lambda}\;\partial_{+}
x^\lambda\; {\cal L}_{_iv}\;{\cal E}_{\lambda \nu}\;\partial_{-}
x^\nu-\frac{1}{2}\int\!d\varepsilon^i \wedge \star {_iJ},
\end{equation}
where the Lie superderivative ${\cal L}_{_iv}\;{\cal E}_{\lambda
\nu}$ and the Hodge star of the Noether current have the following
forms, respectively:
\begin{equation}
{\cal L}_{_iv}\;{\cal E}_{\mu
\nu}\;=\;(-1)^{i\mu+\mu+\lambda}\;{\overrightarrow {\partial_{\mu}
}}\;{_iv}^\lambda\; {\cal E}_{\lambda
\nu}+{_iv}^\lambda\;{\overrightarrow {\partial_{\lambda} }}\;{\cal
E}_{\mu \nu} +(-1)^{\mu \nu+\mu
\lambda+i\nu+\lambda+\nu}\;{\overrightarrow {\partial_{\nu}
}}\;{_iv}^\lambda\; {\cal E}_{\mu \lambda },
\end{equation}
\begin{equation}
\star {_iJ}\; =\;(-1)^{\mu+\lambda}\;{_iv}^\mu \;\partial_{+}
x^\lambda\;{\cal E}_{\lambda \mu}\;d\xi^{+}
-(-1)^{\mu}\;{_iv}^\mu\;{\cal E}_{\mu \nu}\;\partial_{-} x^\nu
d\xi^{-}.
\end{equation}
By direct calculation, one can show that
\begin{equation}
d \star {_iJ}\; =\; [(-1)^{\lambda+i\lambda}\;\partial_{+}
x^\lambda\;{\cal L}_{_iv}\;{\cal E}_{\lambda \nu}\;\partial_{-}
x^\nu +{_iv}^\mu\; (equations\; of\; motion)]\; d\xi^{-} \wedge
d\xi^{+},
\end{equation}
where the equations of motion have the following form
\bigskip
$$
-(-1)^{\lambda+\mu\lambda}\;\partial_{+} x^\lambda\;{\overrightarrow
{\partial_{\mu} }}\;{\cal E}_{\lambda \nu}\;
\partial_{-}x^\nu+(-1)^{\mu}\;\partial_{+} x^\lambda\;{\overrightarrow {\partial_{\lambda} }}\;{\cal E}_{\mu \nu}\;
\partial_{-} x^\nu+(-1)^{\mu}\;{\cal E}_{\mu \nu}\;\partial_{+}\partial_{-} x^\nu
$$
\begin{equation}
+(-1)^{\lambda+\mu}\;
\partial_{-}\partial_{+} x^\lambda\;{\cal E}_{\lambda \mu}
+(-1)^{\lambda+\mu}\;\partial_{+} x^\lambda\;
\partial_{-}x^\nu\;{\overrightarrow {\partial_{\nu} }}\;{\cal E}_{\lambda
\mu}=0.
\end{equation}
Thus, on the extremal surface we have $\delta S =0$ and
\begin{equation}
d \star{_iJ}\; =\; -[(-1)^{\lambda+i\lambda}\;\partial_{+}
x^\lambda\;{\cal L}_{_iv}\;{\cal E}_{\lambda \nu}\;\partial_{-}
x^\nu ]\; d\xi^{+} \wedge d\xi^{-}.
\end{equation}
If ${\cal L}_{_iv}\;{\cal E}_{\lambda \nu}=0$, then $G$ is a
superisometry group of $M$ and we have conserved currents. On the
other hand, if $\star{_iJ}$ on extremal surfaces satisfies the
Maurer-Cartan equation \cite{D}
\begin{equation}
d \star{_iJ} \;= \;-(-1)^{jk}\;\frac{1}{2} {\tilde{f}^{jk}}_{\; \;
\; \; \; i} \star{_jJ} \wedge \star {_kJ},
\end{equation}
where ${\tilde{f}^{jk}}_{\; \; \; \; \; i}$ are the structure
constants of a Lie superalgebra $\tilde {\bf g}$ (of the same
dimension as ${\bf g}$, the Lie superalgebra of $G$), then the
condition of {\em super Poisson-Lie symmetry} reads
\begin{equation}
{\cal L}_{v_i}({\cal E}_{\lambda \nu}) =(-1)^{\mu+\mu{'}+\lambda
\mu{'}+jk+\mu k+ \mu \mu{'}+\nu \lambda+ \mu{'} \nu}\;
{\tilde{f}^{jk}}_{\; \; \; \; i}\;{_jv}^\mu\; {_kv}^\mu{'}
\;{\cal E}_{\mu \nu}\;{\cal E}_{\lambda \mu{'}},
\end{equation}
where this formula is a generalization of the usual Poisson-Lie
symmetry \cite{K.S1} to sigma models on supermanifolds.\\
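Indeed, setting all Grassmann degrees to zero reduces every sign
factor $(-1)^{(\cdot)}$ in (2.10) to unity, so that the condition
collapses to the familiar bosonic form
$$
{\cal L}_{v_i}({\cal E}_{\lambda \nu}) \;=\; {\tilde{f}^{jk}}_{\; \; \; \; i}\;{_jv}^\mu\; {_kv}^{\mu{'}}\;{\cal E}_{\mu \nu}\;{\cal E}_{\lambda \mu{'}},
$$
which is precisely the Poisson-Lie symmetry condition of \cite{K.S1}.\\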
Now, using the integrability condition on the Lie superderivative
\begin{eqnarray}
{\cal L}_{[{_iv} , {_jv}]}({\cal E}_{\lambda \nu})\;=\;[{\cal
L}_{_iv} , {\cal L}_{_jv}]{\cal E}_{\lambda \nu}= {\cal
L}_{_iv}\;{\cal L}_{_jv}\;{\cal E}_{\lambda \nu}-(-1)^{ij}\;{\cal
L}_{_jv}{\cal L}_{_iv}\; {\cal E}_{\lambda \nu},
\end{eqnarray}
and after some computations, we see that the structure constants
of the Lie superalgebras ${\bf g}$ and $\tilde {\bf g}$ must
satisfy the following relations:
\begin{equation}
{f^k}_{ij}\;{\tilde{f}^{ml}}_{\; \; \; \;
k}\;=\;(-1)^{il}\;{f^m}_{ik}\;{\tilde{f}^{kl}}_{\; \; \; \; j}
+{f^l}_{ik}\;{\tilde{f}^{mk}}_{\; \; \; \;
j}+{f^m}_{kj}\;{\tilde{f}^{kl}}_{\; \; \; \; i}
+(-1)^{mj}\;{f^l}_{kj}\;{\tilde{f}^{mk}}_{\; \; \; \; i},
\end{equation}
which are the mixed super-Jacobi identities of the Lie
super-bialgebra $({\bf g},\tilde {\bf g})$ \cite{{R},{J.z}}.\\
In the same way, one can consider the dual sigma models with
background matrix ${\hspace{-0.5mm}_\mu {\tilde{\cal
E}}}\hspace{-0.5mm}_{ \nu}$; where the supergroup ${\tilde G}$
acts freely on $M$ and the roles of ${\bf g}$ and ${\tilde {\bf g}
}$ are exchanged.\\
\section{\bf Super Poisson-Lie T-dual sigma models on supergroups}
When the supergroup $G$ acts transitively and freely on $M$, the
target can be identified with the supergroup $G$. In this case,
in order to obtain T-dual sigma models, one can consider the
equation of motion for the action on the Drinfeld superdouble $D$
\cite{K.S2}
\begin{equation}
<\partial_{\pm}l l^{-1} , \varepsilon^{\mp}> \;= \;0,
\end{equation}
where $l(\xi^+,\xi^-)$ is a map from the worldsheet to the Drinfeld
superdouble $D$, $<.,.>$ is the invariant bilinear form on the
double, and $\varepsilon^\mp$ are $n$-dimensional orthogonal super
vector spaces such that $\varepsilon^+ + \varepsilon^-$ spans the Lie
superalgebra ${\cal D}= {\bf g} \bigoplus \tilde {\bf g}$. Now,
using the decomposition of $l$ in the vicinity of the unit element of
$D$ \cite{N.A}
\begin{equation}
l(\xi^{+} , \xi^{-})\; =\; g(\xi^{+} , \xi^{-}) \tilde{h}(\xi^{+}
, \xi^{-}),\qquad (\; g\in G , \quad \tilde{h}\in \tilde{G}\; )
\end{equation}
we obtain from (3.1)
\begin{equation}
<g^{-1} \partial_{\pm}g + \partial_{\pm}\tilde{h}\hspace{0.5mm}
\tilde{h}^{-1} ,g^{-1} \varepsilon^{\mp} g> \;=\; 0 .\hspace{3cm}
\end{equation}
On the other hand, for the super vector spaces $\varepsilon^\mp$
we have
\begin{equation}
g^{-1} \varepsilon^{\pm} g\; = \;Span\{X_i \pm E^{\pm}_{ij}(g)
\tilde{X}^j\},
\end{equation}
where the supermatrices $E^{\pm}$ are supertransposes of each other
($E^{-}_{ij}=(-1)^{ij}E^{+}_{ji}$), and $\{X_i\}$ and
$\{\tilde{X}^i\}$ are bases of the Lie superalgebras ${\bf g}$ and
$\tilde {\bf g}$ such that
$$
<X_i , X_j> \;= \;<\tilde{X}^i , \tilde{X}^j>\; = \;0,
$$
\begin{equation}
\qquad < X_i , {\tilde X}^j >\; =\; {\delta_ i} \hspace{1mm}^j
\;=\; (-1)^{ij}\; {\delta ^j} \hspace{1mm}_i \; =\; (-1)^{ij} <
{\tilde X}^j , X_i >,
\end{equation}
and we have\footnote{Here one must use the superdeterminant and
superinverse formulas \cite{D}.}
\begin{equation}
E^{+}(g) = \Big(a(g) + E^{+}(e)\hspace{0.5mm}b(g)\Big)^{-1}
E^{+}(e)\hspace{0.5mm}d(g),
\end{equation}
such that
\begin{equation}
g^{-1} X_i\; g\; =\;{a(g)_i}\;^k\;{_k X}=(-1)^k\;{a(g)_i}\;^k
X_k,
\end{equation}
\begin{equation}
g^{-1} \tilde{X}^j g\; =\; {b(g)}^{jk}\;{_k X} + {d(g)^j}\;_k
\tilde{X}^k=(-1)^k\;{b(g)}^{jk}\;{X_k} + {d(g)^j}\;_k \tilde{X}^k.
\end{equation}
Now by using (3.3)-(3.5) we have
$$
{A_{+}}_i(g)\; :=\; (\partial_{+}\tilde{h}\hspace{0.5mm}
\tilde{h}^{-1})_i\; =\;(-1)^l\; (g^{-1} \partial_{+}g )^l
E^{+}_{li}(g), \\
$$
\begin{equation}
{A_{-}}_i(g) \;:= \;(\partial_{-}\tilde{h}\hspace{0.5mm}
\tilde{h}^{-1})_i\; =\; -E^{+}_{il}(g) (g^{-1} \partial_{-}g )^l,
\hspace{7mm}
\end{equation}
where $A_\pm$ are right invariant one-forms on $\tilde{G}$ and
satisfy the following flat connection (zero-curvature) relation:
\begin{equation}
\partial_{+} A_{- i}(g) - \partial_{-} A_{+ i}(g) -(-1)^{j\hspace{0.5mm}l}\;
{\tilde{f}^{j\hspace{0.5mm}l}}_{\; \; \; \; \; i}\; A_{- j}(g) A_{+ l}(g) =
0.
\end{equation}
Indeed, one can observe that the above equation yields the
equations of motion and the super Poisson-Lie symmetry of the
following action:
\begin{equation}
S\;=\;\frac{1}{2}\int\! \;(g^{-1} \partial_{+} g)^i\;{_i{
E^+_j}}(g) \; (g^{-1} \partial_{-} g)^j\; d\xi^{+} \; d\xi^{-}.
\end{equation}
To see this, it is convenient to use the following definitions for
the left invariant one-forms with left derivative
\cite{{N.A},{egh}}
\begin{equation}
(g^{-1} \partial_{+} g)^i\;=\;{
L}\hspace{-0.5mm}^{(l)^i}_{+}\;=\;\partial_{+} x^\mu \; {_\mu
L}^{(l)^i},
\end{equation}
\begin{equation}
(g^{-1} \partial_{-} g)^j\;=\; {
L}\hspace{-0.5mm}^{(l)^j}_{-}\;=\;{{^jL}^{(l)^t}}_\nu \;
\partial_{-} x^\nu,
\end{equation}
where the superscript $t$ stands for supertransposition. On the
other hand, by use of $\;\;$ $< {_iv}\;,\; {
L}\hspace{-0.5mm}^{(l)j}
>\;=\; {_i\delta}^j $ with ${
L}\hspace{-0.5mm}^{(l)j}={\overrightarrow dx^\nu}\;{_\nu
L}\hspace{-0.5mm}^{(l)j}$ we have
\begin{equation}
{_iv}^\mu={_iL}\hspace{-0.5mm}^{(l){\mu^{\;{-1}}}}\hspace{-0.5mm}
\qquad, \qquad (-1)^{i+i\nu}\;{{^\nu
L}\hspace{-0.5mm}^{(l)}\hspace{-1mm}_i}\hspace{-0.55mm}^{\;{-t}}
={_iv}^\nu.
\end{equation}
Then from (3.9) we have
$$
{A_{+}}_i(g) \; =\; (-1)^{i+i\nu}\; \partial_{+} x^\mu
\;{\hspace{-0.5mm}_\mu {\cal E}}\hspace{-0.5mm}_{ \nu}\;
{_iv}^\nu,\hspace{1.75cm}
$$
\begin{equation}
{A_{-}}_i(g) \;=\; -(-1)^i\;{_iv}^\mu\;{\hspace{-0.5mm}_\mu {\cal
E}}\hspace{-0.5mm}_{ \nu} \;\partial_{-} x^\nu,\hspace{2cm}
\end{equation}
where
\begin{equation}
{\hspace{-0.5mm}_\mu {\cal E}}\hspace{-0.5mm}_{ \nu}\;=\;{_\mu
L}^{(l)^i}\;{_i{ E^+_j}}(g) \; {{^jL}^{(l)^t}}_\nu.
\end{equation}
Now, using the above relations in (3.10), we obtain the desired
result. Note that by use of
\begin{equation}
(g^{-1} \partial_{\pm}g )\;=\;{
R}\hspace{-0.5mm}^{(l)^i}_{\pm}\;g^{-1}{_iX} g\;=\;{
R}\hspace{-0.5mm}^{(l)^i}_{\pm}\;{_i a}^{(l)^j}(g)\;{_jX} ,
\end{equation}
where
\begin{equation}
(\partial_{\pm}g g^{-1})^i\;=\;{ R}\hspace{-0.5mm}^{(l)^i}_{\pm},
\end{equation}
we have
\begin{equation}
{ L}\hspace{-0.5mm}^{(l)^i}_{\pm}\;=\;{
R}\hspace{-0.5mm}^{(l)^j}_{\pm}\;{_j a}^{(l)^i}(g),
\end{equation}
and one can rewrite the action of (3.11) in the following form:
\begin{equation}
S\;=\;\frac{1}{2}\int\!\; (\partial_{+}g g^{-1})^i\;{_i{
\mathbb{E}^+_j}}(g) \; (
\partial_{-} g g^{-1})^j\; d\xi^{+} \; d\xi^{-},
\end{equation}
where
\begin{equation}
{_i{\mathbb{E}^+_j}}(g)\;=\;{{_i\Big({{E}^{+^{-1}}}\hspace{-2mm}(e)+\Pi(g)\Big)}_j}^{-1},
\end{equation}
and the super Poisson structure has the following form:
\begin{equation}
\Pi\;=\;b{a}^{-1}.
\end{equation}
In the same way one can obtain the following super Poisson-Lie
symmetric dual sigma model
\begin{equation}
\tilde{S}\; =\; \frac{1}{2}\int\!\; (\partial_{+}
\tilde{g}\tilde{g}^{-1} )_i \;
\tilde{\mathbb{E}}^{ij}(\tilde{g})\; {_j(\partial_{-} \tilde{g}
\tilde{g}^{-1} )}\; d\xi^{+} \; d\xi^{-},
\end{equation}
with
\begin{equation}
\tilde{\mathbb{E}}^{ij}(\tilde{g})\; =\;{\Big( {\tilde
E}^{+^{-1}}\hspace{-2mm}(\tilde e)+ \tilde {\Pi}(\tilde
g)\Big)^{ij}}^{-1}\;,
\end{equation}
where
\begin{equation}
E^{\pm}(e)\; \tilde{E}^{\pm}(\tilde{e})\; =\;
\tilde{E}^{\pm}(\tilde{e}) \;E^{\pm}(e) = I.
\end{equation}
\section{ Examples }
In this section we consider Poisson-Lie T-dual sigma models
related to the four-dimensional Lie super-bialgebras ${\it ((2A_{1,1}+
2A)^1, {D^{10}}_{\hspace{-3mm}p=\frac{1}{2}}})$ and ${\it
((2A_{1,1}+ 2A)^1, I})$ \footnote {These Lie super-bialgebras
are obtained in \cite{RE} in the same way as in \cite{R}. Note that
for Lie superalgebras with an odd number of fermionic
coordinates the metric tensor is singular \cite{D}; hence we
consider four-dimensional Lie superalgebras with two fermionic
coordinates as examples.}.
\subsection { \bf Case A }
For the Lie super-bialgebra ${\it ((2A_{1,1}+ 2A)^1,
{D^{10}}_{\hspace{-3mm}p=\frac{1}{2}}})$, we have the following
nonzero (anti)commutation relations in the basis \footnote {Here
$\{X_1,X_2,{\tilde{X}}^1,{\tilde{X}}^2 \}$ and
$\{X_3,X_4,{\tilde{X}}^3 ,{\tilde{X}}^4\}$ are bosonic and
fermionic bases, respectively \cite{B}.} $\{X_1,X_2,X_3,X_4\}$ of
$(2A_{1,1}+ 2A)^1$ and
$\{{\tilde{X}}^1,{\tilde{X}}^2,{\tilde{X}}^3,{\tilde{X}}^4 \}$
of ${D^{10}}_{\hspace{-3mm}p=\frac{1}{2}}$:
$$
\{X_3,X_3\}=X_1, \qquad \{X_4,X_4\}=X_2,\hspace{4cm}
$$
$$
[\tilde{X}^1,\tilde{X}^2]=\tilde{X}^2, \qquad
[\tilde{X}^1,\tilde{X}^3]=\frac{3}{2}\tilde{X}^3,\hspace{4cm}
$$
$$
[\tilde{X}^1,\tilde{X}^4]=\frac{1}{2}\tilde{X}^4, \qquad
[\tilde{X}^2,\tilde{X}^4]=\tilde{X}^3,\hspace{4cm}
$$
$$
\hspace{0.5cm}[X_2 , \tilde{X}^1] =X_2, \qquad [X_2 ,
\tilde{X}^2] =-X_1, \qquad [X_3 , \tilde{X}^1]
=\frac{3}{2}X_3-\tilde{X}^3,
$$
$$
[X_3 , \tilde{X}^2] =X_4, \qquad [X_4 , \tilde{X}^1]
=\frac{1}{2}X_4, \qquad [X_4 , \tilde{X}^2]
=-\tilde{X}^4,\hspace{0.5cm}
$$
\begin{equation}
\{X_3 , \tilde{X}^3\} =\frac{3}{2}X_1, \qquad \{X_3 ,
\tilde{X}^4\} =X_2, \qquad \{X_4 , \tilde{X}^4\} =\frac{1}{2}X_1.
\end{equation}
Now using (3.17) with the following representation for the Lie
supergroups ${\bf (2A_{1,1}+ 2A)^1}$ and ${\bf
D^{10}_{P=\frac{1}{2}}}$:
\begin{equation}
g = e^{x X_1}\;e^{y X_2}\;e^{\psi X_3}\;e^{\chi X_4}, \qquad
\tilde{g} = e^{\tilde{x} \tilde{X}^1}\;e^{\tilde{y}
\tilde{X}^2}\;e^{\tilde{\psi} \tilde{X}^3}e^{\tilde{\chi}
\tilde{X}^4},
\end{equation}
where $\{x,y,\tilde{x},\tilde{y}\}$ and
$\{\psi,\chi,\tilde{\psi},\tilde{\chi}\}$ are bosonic and
fermionic coordinates of the Lie supergroups ${\bf (2A_{1,1}+
2A)^1}$ and ${\bf D^{10}_{P=\frac{1}{2}}}$; we have
$$
{R_{\pm}}\hspace{-2mm}^{(l)^i}=\pmatrix{ \partial_{\pm}x
-\frac{\psi}{2}\;\partial_{\pm}\psi& \partial_{\pm}y
-\frac{\chi}{2}\;\partial_{\pm}\chi& -\partial_{\pm}\psi &
-\partial_{\pm}\chi },
$$
\vspace{-3mm}
\begin{equation}
\Pi^{ij}(g)=\pmatrix{0 & -y &
\frac{3\psi}{2} & \frac{\chi}{2} \cr y & 0 & 0 & \psi \cr
-\frac{3\psi}{2} & 0 & 0 & 0 \cr -\frac{\chi}{2} & -\psi & 0 & 0}.
\end{equation}
Then using (3.19) and choosing ${_i{ E^+_j}}(e)$ as
\begin{equation}
{_i{ E^+_j}}(e)=\pmatrix{ 1 & 0 & 0 & 0 \cr 0 & 1 & 0 & 0 \cr 0 &
0 & 0 & 1 \cr 0 & 0 & -1 & 0 },
\end{equation}
the following model is derived
\[ \begin{array}{lcl}
\vspace{3mm} S&=&\frac{1}{2}\int\;\frac{1}{2(1+y^2)}
\Big\{(2-\frac{3\psi
\chi}{1+y^2})(\partial_{+}x\;\partial_{-}x+y\;\partial_{+}x\;\partial_{-}y-y\;
\partial_{+}y\;\partial_{-}x)\hspace{5cm}\\
\vspace{3mm}
&+&(2+\frac{3y^2\psi
\chi}{1+y^2})\partial_{+}y\;\partial_{-}y-[\chi+(1+2y)\psi]\partial_{+}x\;\partial_{-}\psi
\hspace{1cm}\\
\vspace{3mm}
&-&[-\chi+(1+2y)\psi]\partial_{+}\psi\;\partial_{-}x
+(3\psi-y\chi)(\partial_{+}x\;
\partial_{-}\chi-\partial_{+}\chi\;\partial_{-}x)\\
\vspace{3mm}
&+&[y\chi+(y-2)\psi]\partial_{+}y\;
\partial_{-}\psi+[y\chi-(y-2)\psi]\partial_{+}\psi\;\partial_{-}y\\
\vspace{3mm}
&-&(\chi+3y\psi)(\partial_{+}y\;\partial_{-}\chi+\partial_{+}\chi\;\partial_{-}y)
+[2(1+y^2)+(5-y)\frac{\psi
\chi}{2}]\partial_{+}\psi\;\partial_{-}\chi
\end{array}
\]
\vspace{-4mm}
\begin{equation}
-\;(1+2y)\psi\chi\;\partial_{+}\psi\;\partial_{-}\psi-[2(1+y^2)+(1+y)\frac{\psi
\chi}{2}]\partial_{+}\chi\;\partial_{-}\psi \Big\}d\xi^{+}\;
d\xi^{-},\hspace{1.5cm}
\end{equation}
where the action has the following background matrices:
\begin{equation}
{\hspace{-0.5mm}_\mu { G} }\hspace{-0.5mm}_{
\nu}=\frac{1}{2(1+y^2)}\pmatrix{ 2-\frac{3\psi \chi}{1+y^2} & 0 &
-\chi & 3\psi-y\chi \cr 0 & 2+\frac{3y^2\psi \chi}{1+y^2} &
(y-2)\psi & 0 \cr -\chi & (y-2)\psi & 0 &
2(1+y^2)+\frac{3}{2}\psi\chi \cr 3\psi-y\chi & 0 &
-2(1+y^2)-\frac{3}{2}\psi\chi & 0 },
\end{equation}
\begin{equation}
{\hspace{-0.5mm}_\mu { B} }\hspace{-0.5mm}_{
\nu}=\frac{1}{2(1+y^2)}\pmatrix{ 0 & (2-\frac{3\psi \chi}{1+y^2})y
& -(1+2y)\psi & 0 \cr -(2-\frac{3\psi \chi}{1+y^2})y & 0 & y\chi
& -(\chi+3y\psi)\cr (1+2y)\psi & -y\chi & -(1+2y)\psi \chi &
\frac{1}{2}(2-y)\psi\chi \cr 0 & (\chi+3y\psi) &
\frac{1}{2}(2-y)\psi\chi& 0 }.
\end{equation}
For the dual model with
$$
{{\tilde {R}^{(l)}} }\hspace{-2mm}_{\pm
i}=\pmatrix{\partial_{\pm}{\tilde x} & e^{\tilde
x}\;\partial_{\pm}{\tilde y} & e^\frac{3{\tilde
x}}{2}\;(\partial_{\pm}{\tilde \psi}+{\tilde
y}\partial_{\pm}{\tilde \chi}) &e^\frac{{\tilde x}}{2}\;
\partial_{\pm}{\tilde \chi} },
$$
\vspace{-3mm}
\begin{equation}
{\tilde \Pi}_{ij}(\tilde g)=\pmatrix{0 & 0 & 0 & 0 \cr 0 & 0 & 0
& 0 \cr 0 & 0 & -\frac{1}{3}(\tilde y^3 e^{3\tilde x}+e^{3\tilde
x}-1) & -\frac{\tilde y^2}{2} e^{2\tilde x} \cr 0 & 0 &
-\frac{\tilde y^2}{2} e^{2\tilde x} & -\tilde y e^{\tilde
x}},\hspace{2cm}
\end{equation}
and
\begin{equation}
\tilde{E}^{ij}(\tilde{e})=\pmatrix{ 1 & 0 & 0 & 0 \cr 0 & 1 & 0
& 0 \cr 0 & 0 & 0 & -1 \cr 0 & 0 & 1 & 0 },
\end{equation}
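As a quick numerical sanity check (ours; ordinary matrix multiplication suffices here since both matrices are purely numerical), the constant matrices (4.4) and (4.9) indeed satisfy the duality relation $E^{\pm}(e)\,\tilde{E}^{\pm}(\tilde{e})=I$ quoted at the end of section 3:

```python
import numpy as np

# E^+(e) from (4.4) and \tilde{E}(\tilde{e}) from (4.9)
E = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, -1, 0]])
E_dual = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, -1],
                   [0, 0, 1, 0]])
check = E @ E_dual          # should equal the 4x4 identity
```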
we have
\[ \begin{array}{lcl}
\vspace{3mm} {\tilde S}&=&\frac{1}{2}\int \Big
\{\partial_{+}{\tilde x}\;\partial_{-}{\tilde x}+e^{2\tilde
x}\;\partial_{+}{\tilde y}\;\partial_{-}{\tilde
y}-\frac{1}{\lambda}[\tilde y e^{4\tilde x}\;\partial_{+} {\tilde
\psi}\;\partial_{-} {\tilde \psi}\\
\vspace{3mm} &+&(\frac{{\tilde y^2}}{2}e^{4\tilde x}-e^{2\tilde
x})\;\partial_{+} {\tilde \psi}\;\partial_{-} {\tilde \chi}+
(\frac{{\tilde y^2}}{2}e^{4\tilde x}+e^{2\tilde x})\;\partial_{+}
{\tilde \chi}\;\partial_{-} {\tilde \psi}
\end{array}
\]
\vspace{-4mm}
\begin{equation}
+\;\frac{e^{\tilde x}}{3}(\tilde y^3 e^{3\tilde x}+e^{3\tilde
x}-3)\;\partial_{+} {\tilde \chi}\;\partial_{-} {\tilde \chi}]
\Big\}d\xi^{+}\; d\xi^{-},\hspace{1cm}
\end{equation}
such that for the background matrices we have
\begin{equation}
{\hspace{-0.5mm}_\mu {\tilde G} }\hspace{-0.5mm}_{ \nu}=\pmatrix{
1 & 0 & 0 & 0 \cr 0 & e^{2\tilde x} & 0 & 0 \cr 0 & 0 & 0 &
\frac{e^{2\tilde x}}{\lambda}\cr 0 & 0 & -\frac{e^{2\tilde
x}}{\lambda} & 0 }, \quad {\hspace{-0.5mm}_\mu {\tilde B}
}\hspace{-0.5mm}_{ \nu}=\pmatrix{0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0
\cr 0 & 0 & \frac{-\tilde{y}e^{4\tilde x}}{\lambda} &
\frac{-{\tilde y}^2 e^{4\tilde x}}{2\lambda} \cr 0 & 0 &
\frac{-{\tilde y}^2 e^{4\tilde x}}{2\lambda}& \frac{-e^{\tilde
x}}{3\lambda}(\tilde y^3 e^{3\tilde x}+e^{3\tilde
x}-3)},\hspace{1cm}
\end{equation}
where
$$
\lambda=\frac{{\tilde y^4}}{12}e^{4\tilde x}+\frac{{\tilde
y}}{3}e^{4\tilde x}-\frac{{\tilde y}}{3}e^{\tilde x}+1.
$$
\bigskip
\subsection { \bf Case B }
For the Lie super-bialgebra ${\it ((2A_{1,1}+ 2A)^1, I})$, where $I$
is the $(2,2)$ Abelian superalgebra (with two bosonic and two
fermionic generators), we have the following nonzero
(anti)commutation relations in the basis $\{X_1,X_2,X_3,X_4\}$ of
$(2A_{1,1}+ 2A)^1$ and
$\{{\tilde{X}}^1,{\tilde{X}}^2,{\tilde{X}}^3,{\tilde{X}}^4 \}$
of $I$:
$$
\{X_3,X_3\}=X_1, \qquad \{X_4,X_4\}=X_2,\hspace{3cm}
$$
\begin{equation}
[X_3 , \tilde{X}^1] =-\tilde{X}^3, \qquad [X_4 , \tilde{X}^2]
=-\tilde{X}^4.\hspace{2.5cm}
\end{equation}
Using the representation in (4.2) we have $\Pi^{ij}(g)=0$, and if
we choose ${_i{ E^+_j}}(e)$ as in (4.4) the action of the model
takes the following form:
$$
S\;=\;\frac{1}{2}\int\!d\xi^{+}\; d\xi^{-}\; \Big\{
\partial_{+}x\;\partial_{-}x+\partial_{+}y\;\partial_{-}y+
\partial_{+}\psi\;\partial_{-}\chi-\partial_{+}\chi\;\partial_{-}\psi
\hspace{3cm}
$$
\begin{equation}
-\;\frac{\psi}{2}\;(\partial_{+}x\;\partial_{-}\psi
+\partial_{+}\psi
\;\partial_{-}x)-\frac{\chi}{2}\;(\partial_{+}y\;
\partial_{-}\chi+ \partial_{+}\chi\;\partial_{-}y)
\Big\},\hspace{2.25cm}
\end{equation}
with the background matrices as
\begin{equation}
{\hspace{-0.5mm}_\mu { G} }\hspace{-0.5mm}_{ \nu}=\pmatrix{ 1 & 0
& 0 & 0 \cr 0 & 1 & 0 & 0 \cr 0 & 0 & 0 & 1 \cr 0 & 0 & -1 & 0
}\quad , \quad {\hspace{-0.5mm}_\mu { B} }\hspace{-0.5mm}_{
\nu}=\pmatrix{0 & 0 & -\frac{\psi}{2} & 0 \cr 0 & 0 & 0 &
-\frac{\chi}{2} \cr \frac{\psi}{2} & 0 & 0 & 0 \cr 0 &
\frac{\chi}{2} & 0 & 0 }.
\end{equation}
For the dual model we find
\begin{equation}
{\tilde {R_{\pm}} \hspace{-1mm}^{(l)}} \hspace{-2mm}_{i}=\pmatrix{
\partial_{\pm}{\tilde x} & \partial_{\pm}{\tilde y} & \partial_{\pm}{\tilde \psi} & \partial_{\pm}{\tilde \chi}}
\qquad, \qquad {\tilde
\Pi}_{ij}(\tilde g)=\pmatrix{ 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0
\cr 0 & 0 & -\tilde x & 0 \cr 0 & 0 & 0 & -\tilde y },
\end{equation}
then we have
$$
\tilde S=\frac{1}{2}\int\!d\xi^{+}\; d\xi^{-}\; \Big \{
\partial_{+}{\tilde x}\;\partial_{-}{\tilde x}+\partial_{+}{\tilde y}\;\partial_{-}{\tilde
y}\hspace{6.24cm}
$$
\begin{equation}
+\;\frac{1}{\tilde{x} \tilde{y}+1}(\partial_{+} {\tilde
\psi}\;\partial_{-} {\tilde \chi}-\partial_{+} {\tilde
\chi}\;\partial_{-} {\tilde \psi}-\tilde y\;\partial_{+} {\tilde
\psi}\;\partial_{-} {\tilde \psi}-\tilde x\;\partial_{+} {\tilde
\chi}\;\partial_{-} {\tilde \chi})\Big \},\hspace{0.83cm}
\end{equation}
with the following background matrices
\begin{equation}
{\hspace{-0.5mm}_\mu {\tilde G} }\hspace{-0.5mm}_{ \nu}=\pmatrix{
1 & 0 & 0 & 0 \cr 0 & 1 & 0 & 0 \cr 0 & 0 & 0 &
\frac{1}{\tilde{x} \tilde{y}+1} \cr 0 & 0 & \frac{-1}{\tilde{x}
\tilde{y}+1} & 0 } \quad , \quad {\hspace{-0.5mm}_\mu {\tilde B}
}\hspace{-0.5mm}_{ \nu}=\pmatrix{0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0
\cr 0 & 0 & \frac{-\tilde{y}}{\tilde{x} \tilde{y}+1} & 0 \cr 0 & 0
& 0 & \frac{-\tilde{x}}{\tilde{x} \tilde{y}+1} }.
\end{equation}
Note that the background matrices of the model depend only on the
fermionic fields $\{\psi,\chi\}$, whereas those of the dual model
depend only on the bosonic fields $\{\tilde x,\tilde y\}$. Thus, in
this case the super Poisson-Lie T-duality (super non-Abelian
duality) exchanges the role of the fermionic fields of the model
with that of the bosonic fields of the dual model. In the next
section we generalize this feature to a general Abelian Lie
super-bialgebra $(\bf g , I)$. But before that, let us investigate
the physical equivalence of the model and its dual in case B by
use of canonical transformations. The generating functional for
this equivalence is
$$
F[x , \tilde x]\;=\;-\frac{1}{2}\int\;d\sigma\;{\tilde
x}_i\;{{R}^{(l)^i}}\hspace{-3mm}_\sigma \hspace{8cm}
$$
\vspace{-3mm}
\begin{equation}
\hspace{2.5cm}=\;-\frac{1}{2}\int\;d\sigma\;\{{\tilde
x}\;\partial_{\sigma}x-{\tilde
x}\;\frac{\psi}{2}\;\partial_{\sigma}\psi+{\tilde
y}\;\partial_{\sigma}y-{\tilde
y}\;\frac{\chi}{2}\;\partial_{\sigma}\chi-{\tilde \psi}\;
\partial_{\sigma}\psi-{\tilde \chi}\;\partial_{\sigma}\chi\}.
\end{equation}
This generating functional produces the following canonical
transformations
\begin{equation}
{p}_{i}\;=\;\frac{\overleftarrow{\delta}F}{\delta { x}^{i}},\qquad
\qquad {\tilde p}^{i}\;=\;-\frac{\overleftarrow{\delta}F}{\delta
{\tilde x}_{i}},
\end{equation}
i.e.
$$
{p}_{1}\;=\;\frac{1}{2}\partial_{\sigma}{\tilde
x},\hspace{3.8cm}\qquad \qquad {\tilde
p}^{1}\;=\;\frac{1}{2}\partial_{\sigma}{ x}-\frac{1}{4} \psi\;
\partial_{\sigma}\psi,
$$
\vspace{-4mm}
$$
{p}_{2}\;=\;\frac{1}{2}\partial_{\sigma}{\tilde
y},\hspace{3.8cm}\qquad \qquad {\tilde
p}^{2}\;=\;\frac{1}{2}\partial_{\sigma}{ y}-\frac{1}{4} \chi\;
\partial_{\sigma}\chi,
$$
\vspace{-4mm}
$$
{p}_{3}\;=\;-\frac{1}{2}{\tilde x}\;\partial_{\sigma}{\tilde
\psi}-\frac{1}{4} \psi\;\partial_{\sigma}{\tilde
x}-\frac{1}{2}\partial_{\sigma}{\tilde \psi},\qquad \qquad {\tilde
p}^{3}\;=\;\frac{1}{2}\partial_{\sigma}{\psi},\hspace{1.65cm}
$$
\vspace{-4mm}
\begin {equation}
{p}_{4}\;=\;-\frac{1}{2}{\tilde y}\;\partial_{\sigma}{\tilde
\chi}-\frac{1}{4} \chi\;\partial_{\sigma}{\tilde
y}-\frac{1}{2}\partial_{\sigma}{\tilde \chi},\qquad \qquad {\tilde
p}^{4}\;=\;\frac{1}{2}\partial_{\sigma}{\chi},\hspace{1.65cm}
\end {equation}
With these canonical transformations, the Hamiltonian of the model
$$
{\cal H}\;=\;{p_1}^2+{p_2}^2+ \psi\; p_1\; p_4+\frac{1}{2}\psi \;
\chi\; p_1 \;p_2 + 2p_3\; p_4-\chi\; p_2\;
p_3+\frac{1}{4}\partial_{\sigma}x\;
\partial_{\sigma}x
$$
\vspace{-5mm}
\begin {equation}
+\;\frac{1}{4}\partial_{\sigma}y\; \partial_{\sigma}y+
\frac{1}{2}\partial_{\sigma} \psi\;\partial_{\sigma}
\chi-\frac{1}{4}\psi\;\partial_{\sigma}x\;\partial_{\sigma}
\psi-\frac{1}{4}
\chi\;\partial_{\sigma}y\;\partial_{\sigma}\chi,\hspace{1.5cm}
\end {equation}
with
\begin {equation}
{p}_{i}\;=\;\frac{\overleftarrow {\partial} L} {\partial
(\partial_{\tau}x^i)},
\end {equation}
is equal to the Hamiltonian of the dual model
$$
{\tilde{\cal H}}\;=\;({\tilde p^1})^2+({\tilde p^2})^2+ 2({\tilde
x} \;{\tilde y}+1) {\tilde p^3}\; {\tilde p^4} + {\tilde x}
\;{\tilde p^3}\; \partial_{\sigma}{\tilde \chi}-{\tilde y}
\;{\tilde p^4}\;
\partial_{\sigma}{\tilde \psi}
$$
\vspace{-5mm}
\begin {equation}
+ \;\frac{1}{2}\partial_{\sigma}\tilde{ \psi}\;\partial_{\sigma}
\tilde{ \chi}+\frac{1}{4}(\partial_{\sigma}{\tilde x}\;
\partial_{\sigma}{\tilde x}+\partial_{\sigma}{\tilde y}\;\partial_{\sigma}{\tilde y} ),\hspace{1.5cm}
\end {equation}
with
\begin {equation}
{\tilde p}^{i}\;=\;\frac{\overleftarrow {\partial} {\tilde L}}
{\partial (\partial_{\tau}{\tilde x}_i)}.
\end {equation}
Therefore the two models (4.13) and (4.16) are physically
equivalent.
\section{\bf Super non-Abelian duality }
Here we consider Abelian Lie super-bialgebras $({\bf g} , I)$,
where ${\bf g}$ and $I$ each have $(m , 2n)$ grading, i.e., $m$
bosonic and $2n$ fermionic generators. We consider the following
three cases in terms of the commutation relations of the Lie
superalgebra ${\bf g}$:
\smallskip
a)$\;$ The commutation relations for the generators
$\{X_A\}=\{X_1,\cdots ,X_m,X_{m+1},\cdots ,X_{m+2n} \}$ of $\bf
g$ have the following form:
\begin{equation}
[X_i ,
X_{m+a}]\;=\;\sum_{b=1}^{2n}\;{f^{m+b}}\hspace{-2mm}_{i,m+a}\;X_{m+b},
\qquad i=1,\cdots , m, \quad a =1,\cdots, 2n,
\end{equation}
i.e., we have only commutation relations of bosons with fermions.
Now, using the following parametrization of the Lie supergroup
$G$,
\begin{equation}
g\;=\;e^{x^1 X_1} \cdots e^{x^m X_m}\;e^{\psi^1 X_{m+1}}\cdots
e^{\psi^{2n} X_{m+2n}},
\end{equation}
we have
\begin{equation}
\partial _{\pm} g g^{-1}=\sum_{i=1}^{m} \partial _{\pm}x^i X_i +\sum
_{a=1}^{2n}\; \sum _{k_1, k_2,\cdots k_{m+a}=1}^{m+2n}\partial
_{\pm}{\psi}^a{{(e^{-x^m \chi_m})}_{m+a}}^{\;k_1}\cdots {{(e^{-x^1
\chi_1})}_{k_m}}^{\;k_{m+1}}\; X_{k_{m+a}},
\end{equation}
where $\chi_m$ is the adjoint representation of the bosonic basis
element $X_m$. Comparing this relation with (3.18), we see that the
${ R}\hspace{-0.5mm}^{(l)^i}_{\pm}$ are functions of the bosonic
coordinates $\{x^i\}$ of the Lie supergroup $G$. On the other
hand, as in case B of section 4, we have
\begin{equation}
b=\Pi= 0, \qquad \qquad E^+_{ij}(g)\;=\; E^+_{ij}(e).
\end{equation}
In this way, by use of (3.19), we see that the background matrix
depends only on the bosonic coordinates of the Lie supergroup $G$.
Furthermore, by using the commutation relations of the Lie
super-bialgebra $({\bf g} , I)$, we have \cite{R}
\begin{equation}
[X_i , {\tilde
X}^{m+a}]\;=\;\sum_{b=1}^{2n}\;{f^{m+a}}\hspace{-2mm}_{m+b,i}\;{\tilde
X}^{m+b},
\end{equation}
\begin{equation}
[X_{m+a} , {\tilde
X}^{m+b}]\;=-\;\sum_{i=1}^{m}\;{f^{m+b}}\hspace{-2mm}_{i,m+a}\;{\tilde
X}^{i},
\end{equation}
then one can obtain
\begin{equation}
{\tilde g}^{-1} X_i\; {\tilde g}\; =\;X_i+\sum _{a=1}^{2n}\;\sum
_{b=1}^{2n}\;{\tilde
\psi}_a\;{f^{m+a}}\hspace{-2mm}_{m+b,i}\;{\tilde X}^{m+b},
\end{equation}
\begin{equation}
{\tilde g}^{-1} X_{m+a}\; {\tilde g}\; =\;X_{m+a}-\sum
_{i=1}^{m}\;\sum _{b=1}^{2n}\;{\tilde
\psi}_b\;{f^{m+b}}\hspace{-2mm}_{m+a,i}\;{\tilde X}^{i},
\end{equation}
and, by comparison with the dual version of (3.8), we see that the
matrix $\tilde b$ depends only on the fermionic coordinates of
$\tilde G$; hence ${\tilde \Pi}\;=\;\tilde b\; {\tilde a}^{-1}$ is a
function of the fermionic coordinates of $\tilde G$ only, and the
background matrix of the dual model depends only on the fermionic
coordinates. In this case, super Poisson-Lie T-duality exchanges
the role of the bosonic fields of the model with that of the
fermionic fields of the dual model.
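Let us indicate schematically why conjugation by $\tilde g$ truncates at first order (up to the graded sign conventions, which we do not track here): writing $\tilde g = \exp(\sum_a {\tilde \psi}_a {\tilde X}^{m+a})$ and using the abelianity of $I$, we have
\[
{\tilde g}^{-1} X_A\, {\tilde g}\;=\;e^{-{\rm ad}_{\sum_a {\tilde \psi}_a {\tilde X}^{m+a}}}\, X_A\;=\;X_A - \Big[\sum_a {\tilde \psi}_a {\tilde X}^{m+a}\, ,\, X_A\Big],
\]
since the first bracket already lies in the span of the ${\tilde X}$'s, which commute among themselves; this is why $\tilde b$ is linear in the dual coordinates.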
\smallskip
b)$\;$ The commutation relations for the generators $\{X_A\}$ of
$\bf g$ have the following form
\begin{equation}
[X_{m+a} ,
X_{m+b}]\;=\sum_{i=1}^{m}\;{f^{i}}\hspace{-1mm}_{m+a,m+b}\;X_{i},
\end{equation}
i.e., we have only commutation relations of fermions with fermions.
In this case, we have
\begin{equation}
\partial _{\pm} g g^{-1}\;=\;\sum_{i=1}^{m} \partial _{\pm}x^i X_i +\sum
_{a=1}^{2n} \partial _{\pm}{\psi}^a\;
X_{m+a}-\sum_{a>b=1}^{2n}\;\sum_{i=1}^{m}\;\partial
_{\pm}{\psi}^a\;{\psi}^b\;{f^{i}}\hspace{-1mm}_{m+a,m+b}\;X_{i},
\end{equation}
then the background matrix of the model depends only on the
fermionic fields. On the other hand, for the dual model we have
\begin{equation}
[X_{m+a} , {\tilde X}^i]\;=\;-\sum_{b=1}^{2n}\;
{f^{i}}\hspace{-1mm}_{m+b,m+a}\;{\tilde X}^{m+b},
\end{equation}
\begin{equation}
{\tilde g}^{-1} X_i\; {\tilde g}\; =\;X_i,\qquad \qquad {\tilde
g}^{-1} X_{m+a}\; {\tilde g}\;
=X_{m+a}+\sum_{i=1}^{m}\;\sum_{b=1}^{2n}\; {\tilde
x}_i\;{f^{i}}\hspace{-1mm}_{m+b,m+a}\; {\tilde X}^{m+b},
\end{equation}
then the matrix $\tilde b$ and the background matrix of the dual
model depend only on the bosonic fields. In this case, super
Poisson-Lie T-duality exchanges the role of the fermionic fields of
the model with that of the bosonic fields of the dual model (as in
case B of section 4).
\smallskip
c)$\;$For the case
\begin{equation}
[X_i , X_j]\;=\sum_{k=1}^{m}\;\; {f^{k}}\hspace{-1mm}_{ij}\;X_{k},
\end{equation}
we have only commutation relations of bosons with bosons; the
background matrices of the model and its dual are functions of the
bosonic fields, and super Poisson-Lie T-duality maps bosonic
fields to bosonic ones (as in ordinary Poisson-Lie T-duality).
\section{\bf Conclusion }
We investigated Poisson-Lie $T$-duality for sigma models on
supermanifolds, especially on Lie supergroups. We showed that, in
the Abelian case $({\bf g} , I)$, super Poisson-Lie T-duality
exchanges the role of fermionic (bosonic) fields in the model with
that of bosonic (fermionic) fields in the dual model. We hope that
the relationship between $T$-duality and mirror symmetry will be
better understood in this way. Furthermore, one can investigate
Poisson-Lie $T$-dual sigma models on supergroups having conformal
symmetry, as well as their relations to superstring theories on
$AdS$ backgrounds. The study of Poisson-Lie $T$-dual sigma models
on low-dimensional supergroups \cite{{R},{RE}} with spectator
fields, and their relation to models such as $2+1$-dimensional
string cosmology coupled to fermionic matter, are other open
problems, some of which are under investigation.
\acknowledgments We would like to thank Sh. Moghadassi for
carefully reading the manuscript and useful comments.
\section{Introduction}
Let $\H^n$ denote the hyperbolic space of dimension $n$ with $n \geq 2$. In this paper, we shall use the Poincar\'e ball model for the hyperbolic spaces. This is the unit ball
\[
\B^n =\{x = (x_1,\ldots,x_n) \in \R^n\, :\, |x| =(x_1^2 + \cdots + x_n^2)^{\frac12} < 1\},
\]
equipped with the usual Poincar\'e metric
\[
g(x) = \lt(\frac 2{1 -|x|^2}\rt)^2 \sum_{i=1}^n d x_i^2.
\]
The hyperbolic volume element with respect to $g$ is $dV_g = (\frac 2{1 -|x|^2})^n dx$. Let $d$ denote the geodesic distance with respect to $g$ in $\H^n$, then the distance from the origin to $x$ is $\rho(x):=d(0,x) = \ln \frac{1+ |x|}{1 -|x|}$. Let $\nabla_g$ denote the hyperbolic gradient with respect to $g$ in $\H^n$. We have $\na_g = (\frac{1 -|x|^2}2)^2 \na$, where $\na$ is the usual gradient in the Euclidean space $\R^n$. The Laplace-Beltrami operator with respect to $g$ in $\H^n$ is given by
\[
\Delta_g = \lt(\frac{1 -|x|^2}{2}\rt)^2 \Delta + (n-2) \frac{1 -|x|^2}2 \la x, \na \ra,
\]
where $\Delta$ and $\la \cdot, \cdot \ra$ denote the usual Laplace operator and the usual scalar product in $\R^n$ respectively. The spectral gap of $-\De_g$ on $L^2(\H^n)$ is $\frac{(n-1)^2}4$ (see, e.g., \cite{MS,KS}), i.e.,
\begin{equation}\label{eq:Poincare1}
\int_{\H^n} |\na_g u|_g^2 dV_g \geq \frac{(n-1)^2}4 \int_{\H^n} u^2 dV_g,\quad u \in C_0^\infty(\H^n),
\end{equation}
where we write $|\na_g u|_g = \sqrt{g(\na_g u, \na_g u)}$ to simplify notation. The inequality \eqref{eq:Poincare1} is sharp and the sharp constant $\frac{(n-1)^2}4$ is never attained. The non-attainability of \eqref{eq:Poincare1}, together with the sub-criticality of the operator $-\Delta_g - \frac{(n-1)^2}4$, leaves room for several improvements of the inequality \eqref{eq:Poincare1}. For example, the reader may consult the papers \cite{BDGG,BGG,BG,BGGP} for improvements of \eqref{eq:Poincare1} obtained by adding remainder terms involving a Hardy weight, i.e.,
\[
\int_{\H^n} |\na_g u|_g^2 dV_g -\frac{(n-1)^2}4 \int_{\H^n} u^2 dV_g \geq C \int_{\H^n} W u^2 dV_g,
\]
for some constant $C >0$ and a weight $W$ satisfying appropriate conditions. Another improvement of \eqref{eq:Poincare1} was obtained by Mancini and Sandeep \cite{MS} for dimension $n \geq 3$,
\[
\int_{\H^n} |\na_g u|_g^2 dV_g -\frac{(n-1)^2}4 \int_{\H^n} u^2 dV_g \geq C \lt(\int_{\H^n} |u|^q dV_g\rt)^{\frac 2q},
\]
where $2 < q \leq \frac{2n}{n-2}$ and $C$ is a positive constant. The preceding inequality is equivalent to the Hardy--Sobolev--Maz'ya inequality in the half space (see \cite[Section $2.1.6$]{Maz'ya}). In the critical case $q = \frac{2n}{n-2}$, we have
\begin{equation}\label{eq:HSM1}
\int_{\H^n} |\na_g u|_g^2 dV_g - \frac{(n-1)^2}4 \int_{\H^n} u^2 dV_g \geq C_n\lt(\int_{\H^n} |u|^{\frac{2n}{n-2}} dV_g\rt)^{\frac{n-2}n},\quad u\in C_0^\infty(\H^n),
\end{equation}
where $C_n$ denotes the sharp constant for which \eqref{eq:HSM1} holds true. It was proved by Tertikas and Tintarev \cite{TT} for $n \geq 4$ that $C_n < S_1(n,2)$, where $S_1(n,2)$ is the sharp constant in the first order $L^2$-Sobolev inequality in $\R^n$ (see \cite{Aubin,Talenti}), and that $C_n$ is attained. In the case $n=3$, it was shown by Benguria, Frank and Loss \cite{BFL} that $C_3 = S_1(3,2)$ and is not attained. Another proof of the non-attainability of $C_3$ was given by Mancini and Sandeep \cite{MS}. For the $L^p-$version of \eqref{eq:HSM1}, we refer the reader to \cite{VHN2018a}. In that paper, the author proved the following inequality
\begin{equation}\label{eq:Nguyen}
\int_{\H^n} |\na_g u|_g^p dV_g - \frac{(n-1)^p}{p^p} \int_{\H^n} |u|^p dV_g \geq S_1(n,p) \lt(\int_{\H^n} |u|^{\frac{np}{n-p}} dV_g\rt)^{\frac{n-p}n},\, u\in C_0^\infty(\H^n),
\end{equation}
for $n\geq 4$ and $\frac{2n}{n-1}\leq p < n$, where $S_1(n,p)$ is the best constant in the first order $L^p-$Sobolev inequality in $\R^n$ (see \cite{Aubin,Talenti}). Notice that $(\frac{n-1}p)^p$ is the spectral gap of the $p-$Laplacian in the hyperbolic space (see \cite{NgoNguyen2017}).
In this paper, we extend the inequality \eqref{eq:HSM1} to the second order derivative (i.e., the Laplace-Beltrami operator $\Delta_g$ with respect to $g$ in $\H^n$). Let us recall the following second order Poincar\'e inequality in $\H^n$ (see \cite{KS,NgoNguyen2017})
\begin{equation}\label{eq:Poincare2}
\int_{\H^n} (\Delta_g u)^2 dV_g \geq \frac{(n-1)^4}{16} \int_{\H^n} u^2 dV_g, \quad u\in C_0^\infty(\H^n).
\end{equation}
The constant $\frac{(n-1)^4}{16}$ is sharp and is never attained. The first main result of this paper is the following improvement of \eqref{eq:Poincare2} with a remainder term of Rellich type. Let us recall the sharp Rellich inequality in $\H^n$ for $n \geq 5$ (see \cite{KO})
\begin{equation}\label{eq:RellichHn}
\int_{\H^n} (\Delta_g u)^2 dV_g \geq \frac{n^2(n-4)^2}{16} \int_{\H^n} \frac{u(x)^2}{\rho(x)^4} dV_g,\quad u\in C_0^\infty(\H^n),
\end{equation}
with the constant $\frac{n^2(n-4)^2}{16}$ being sharp. We shall prove the following theorem.
\begin{theorem}\label{Rellich}
Let $n \geq 5$. Then the following inequality holds
\begin{equation}\label{eq:improveRellich}
\int_{\H^n} (\Delta_g u)^2 dV_g - \frac{(n-1)^4}{16} \int_{\H^n} u^2 dV_g \geq \frac{n^2(n-4)^2}{16} \int_{\H^n} \frac{u(x)^2}{(\frac{V_g(B_g(0,\rho(x)))}{\si_n})^{\frac4n}} dV_g,
\end{equation}
for any $u\in C_0^\infty(\H^n)$, where $B_g(0,r)$ denotes the geodesic ball with center at $0$ and radius $r >0$ in $\H^n$, and $\si_n$ is the volume of the unit ball in $\R^n$. Moreover, the constant $\frac{n^2(n-4)^2}{16}$ in the right hand side of \eqref{eq:improveRellich} is sharp.
\end{theorem}
It is easy to check that the weight $W(x) =(\frac{V_g(B_g(0,\rho(x)))}{\si_n})^{-\frac4n}$ satisfies the following asymptotic estimates $W(x) \sim \rho(x)^{-4}$ as $x \to 0$ and
\[
W(x) \sim \lt(\frac{n}{n-1}\rt)^{-\frac 4n} (\sinh \rho(x))^{-\frac{4(n-1)}n}\quad \text{as}\quad |x| \to 1.
\]
Thus, in a neighborhood of the origin, the right hand side of \eqref{eq:improveRellich} is similar to that of \eqref{eq:RellichHn}. However, one can check that $W(x) < \rho(x)^{-4}$ for any $x\not=0$. Hence, it is an interesting problem whether the right hand side of \eqref{eq:improveRellich} can be replaced by that of \eqref{eq:RellichHn}.
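For the reader's convenience, let us sketch how these asymptotic estimates follow from the volume growth of geodesic balls, $V_g(B_g(0,\rho)) = n\si_n \int_0^\rho \sinh(s)^{n-1}\, ds$:
\[
V_g(B_g(0,\rho)) \sim \si_n \rho^n \quad \text{as}\quad \rho \to 0, \qquad V_g(B_g(0,\rho)) \sim \frac{n \si_n}{n-1}\, \sinh(\rho)^{n-1}\quad \text{as}\quad \rho \to \infty,
\]
whence $W(x) \sim \rho(x)^{-4}$ near the origin, while $W(x) \sim (\frac n{n-1})^{-\frac 4n} \sinh(\rho(x))^{-\frac{4(n-1)}n}$ at infinity.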
The second main result of this paper is an improvement of \eqref{eq:Poincare2} with a remainder term involving the sharp Sobolev inequality in $\H^n$. Let $S_2(n,2)$ denote the sharp constant in the second order $L^2-$Sobolev inequality in $\R^n$ with $n\geq 5$, i.e.,
\begin{equation}\label{eq:Sobolevorder2}
S_2(n,2)\lt(\int_{\R^n} |u|^{\frac{2n}{n-4}} dx\rt)^{\frac{n-4}n} \leq \int_{\R^n} (\De u)^2 dx.
\end{equation}
The sharp constant $S_2(n,2)$ was computed by Lieb \cite{Lieb} by proving the sharp Hardy--Littlewood--Sobolev inequality, which is the dual version of \eqref{eq:Sobolevorder2}. The following theorem gives an extension of \eqref{eq:HSM1} to the Laplace--Beltrami operator $\Delta_g$ on $\H^n$. It is interesting that we reach the sharp constant $S_2(n,2)$ of \eqref{eq:Sobolevorder2} in the right hand side of the obtained inequality.
\begin{theorem}\label{Sobolev}
Let $n\geq 5$. Then the following inequality holds
\begin{equation}\label{eq:improveSobolev}
\int_{\H^n} (\Delta_g u)^2 dV_g - \frac{(n-1)^4}{16} \int_{\H^n} u^2 dV_g \geq S_2(n,2) \lt(\int_{\H^n} |u|^{\frac{2n}{n-4}} dV_g\rt)^{\frac{n-4}n},
\end{equation}
for any function $u\in C_c^\infty(\H^n)$. The constant $S_2(n,2)$ in the right hand side of \eqref{eq:improveSobolev} is sharp.
\end{theorem}
In a recent paper \cite{LuYanga}, Lu and Yang extended the inequality \eqref{eq:HSM1} to higher order derivatives. Their result was established for the GJMS operator $P_k$, $k\geq 1$, in $\H^n$ (see \cite{GJMS,Juhl})
\[
P_k = P_1(P_1+2) \cdots (P_1 +k(k-1)), \quad k \in \N,
\]
where $P_1 = -\De_g - \frac{n(n-2)}4$ is the conformal Laplacian in $\H^n$. The sharp Sobolev inequality for $P_k$ (see \cite{Liu} and \cite{LuYanga} for another proof) reads as
\begin{equation}\label{eq:SobolevPk}
\int_{\H^n} (P_ku) u dV_g \geq S_k(n,2) \lt(\int_{\H^n} |u|^{\frac{2n}{n-2k}} dV_g\rt)^{\frac{n-2k}n}, \, u\in C_0^\infty(\H^n),\, 1 \leq k < \frac n2,
\end{equation}
where $S_k(n,2)$ is the sharp constant in the $k-$th order Sobolev inequality in $\R^n$. On the other hand, we have the following Poincar\'e inequality for $P_k$
\begin{equation}\label{eq:PoincarePk}
\int_{\H^n} (P_ku) u dV_g \geq \prod_{i=1}^k \frac{(2i-1)^2}4 \int_{\H^n} u^2 dV_g,\quad \quad u\in C_0^\infty(\H^n).
\end{equation}
The following inequality was proved by Lu and Yang for $n > 2k$ and $u\in C_0^\infty(\H^n)$
\begin{equation}\label{eq:LuYangineq}
\int_{\H^n} (P_ku) u dV_g - \prod_{i=1}^k \frac{(2i-1)^2}4 \int_{\H^n} u^2 dV_g \geq C_n \lt(\int_{\H^n} |u|^{\frac{2n}{n-2k}} dV_g\rt)^{\frac{n-2k}n},
\end{equation}
for some constant $C_n >0$. The proof of Lu and Yang is based on the sharp Hardy--Littlewood--Sobolev inequality in $\H^n$ which is different with the one in this paper. Moreover the constant $C_n$ is not explicit. As a consequence of our Theorem \ref{Sobolev}, we obtain the inequality of Lu and Yang for $k=2$ with an explicit constant $\frac{5S_2(n,2)}{(n-1)^2}$ (see Corollary \ref{cor} below).
In dimension four, i.e., $n =4$, we have the following Adams type inequalities.
\begin{theorem}\label{Adams}
For any $\lam < \frac{81}{16}$, there exists a constant $C_\lam >0$ such that
\begin{equation}\label{eq:Adams}
\sup_{\int_{\H^4} |\De_g u|^2 dV_g - \lam \int_{\H^4} u^2 dV_g \leq 1} \int_{\H^4} \lt(e^{32\pi^2 |u|^2} -1\rt) dV_g \leq C_\lam.
\end{equation}
Furthermore, there exists a constant $C >0$ such that
\begin{equation}\label{eq:Adamsexact}
\sup_{\substack{\int_{\H^4} |\De_g u|^2 dV_g - \frac{81}{16} \int_{\H^4} u^2 dV_g \leq 1,\\ u \, \, \text{\rm is radial}}} \frac1{\int_{\H^4} u^2 dV_g} \int_{\H^4} \frac{e^{32\pi^2 u^2} -1}{(1+ |u|)^2}dV_g \leq C.
\end{equation}
Moreover, the power $2$ in the denominator is sharp in the sense that the supremum will be infinite if we replace it by any power $p < 2$.
\end{theorem}
The Adams inequality is the limiting case of the Sobolev embedding. The sharp Adams inequality was first established for bounded domains in Euclidean space by Adams \cite{Adams}. The Adams inequality is the higher order version of the well-known Moser--Trudinger inequality involving the first order derivative (see \cite{M1970,P1965,T1967,Y1961}). The Adams inequality was extended to unbounded domains and the whole space by Ruf and Sani \cite{RufSani}, Lam and Lu \cite{LamLu}, and Fontana and Morpurgo \cite{FM}. In particular, we have the following inequality in $\R^4$:
\begin{equation}\label{eq:AdamsR4}
\sup_{\int_{\R^4} (\De u)^2 dx + \int_{\R^4} u^2 dx \leq 1} \int_{\R^4} \lt(e^{32\pi^2 u^2} -1\rt) dx < \infty.
\end{equation}
The Adams inequality was established in the hyperbolic space by Karmakar and Sandeep \cite{KS}. Their result in $\H^4$ is the following:
\begin{equation}\label{eq:KarSan}
\sup_{\int_{\H^4} (P_2 u) u dV_g \leq 1} \int_{\H^4} \lt(e^{32\pi^2 u^2} -1\rt) dV_g < \infty.
\end{equation}
Notice that the condition $\int_{\H^4} (P_2 u) u dV_g \leq 1$ and ours in \eqref{eq:Adams} cannot be compared in general. We refer the readers to the paper of Lu and Yang \cite{LuYang} for several Hardy--Adams inequalities in $\H^4$ which extend the Hardy--Moser--Trudinger inequality of Wang and Ye \cite{WY} to the second order derivative. The inequality \eqref{eq:Adams} can be derived from the results in \cite{LuYang}; however, our approach here is different from theirs, relying on techniques from Fourier analysis on hyperbolic spaces and Adams' lemma in \cite{Adams}. The inequality \eqref{eq:Adamsexact} is a kind of Adams inequality with exact growth in $\H^4$; however, it is stated only for radial functions. In \cite{Karmakar}, an Adams inequality with exact growth in $\H^4$ was proved for any function $u$ on $\H^4$ under the condition $\int_{\H^4} (P_2 u) u dV_g \leq 1$. Notice that $\int_{\H^4} (P_2 u) u dV_g$ is equivalent to a full Sobolev norm in $\H^4$, which is different from our functional $\int_{\H^4} (\Delta_g u)^2 dV_g -\frac{81}{16} \int_{\H^4} u^2 dV_g$. So it would be interesting to prove \eqref{eq:Adamsexact} for arbitrary functions on $\H^4$. The Adams inequality with exact growth in $\R^4$ was proved by Masmoudi and Sani \cite{MS} in the following form:
\begin{equation}\label{eq:MasmoudiSani}
\sup_{\int_{\R^4} (\De u)^2 dx \leq 1 } \frac{1}{\int_{\R^4} u^2 dx} \int_{\R^4} \frac{e^{32 \pi^2 u^2} -1}{(1 + |u|)^2} dx < \infty.
\end{equation}
The inequality \eqref{eq:MasmoudiSani} implies \eqref{eq:AdamsR4} and plays an important role in the proof of \eqref{eq:Adamsexact}. It was extended to any dimension $n\geq 3$ in \cite{LTZ}. We refer the readers to \cite{MS2017} for the Adams inequality with exact growth for higher order derivatives.
We conclude this introduction with some comments on the proof of our main theorems. Our proof is based on the decreasing spherical symmetric rearrangement applied in the setting of hyperbolic spaces. By this technique, we reduce the proof of our theorems to radial functions in $\H^n$. The next step is the novelty of our approach. Given a radial function $u\in W^{2,2}(\H^n)$, let $v$ be the function on $[0,\infty)$ such that $u(x) = v(V_g(B_g(0,\rho(x))))$, $x\in \H^n$. Using this function $v$, we define a function $u_e$ in $\R^n$ by $u_e(y) = v(\si_n |y|^n)$ with $y\in \R^n$. By the definition of $u_e$, we can easily check that
\begin{equation}\label{eq:dangthucnorm}
\int_{\H^n} \Phi(u) dV_g = \int_0^\infty \Phi(v(s)) ds = \int_{\R^n} \Phi(u_e(y)) dy,
\end{equation}
for any function $\Phi$ on $\R$ such that one of the integrals above is well defined. Our strategy is to establish a deviation between $\int_{\H^n} (\Delta_g u)^2 dV_g$ and $\int_{\R^n} (\Delta u_e)^2 dy$. The following result is the key tool in the proof of our main theorems.
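The identity \eqref{eq:dangthucnorm} itself is a direct change of variables: for radial $u$, setting $t = V_g(B_g(0,r))$, so that $dt = n\si_n \sinh(r)^{n-1}\, dr$ is the area element of the geodesic sphere of radius $r$, we get
\[
\int_{\H^n} \Phi(u)\, dV_g = \int_0^\infty \Phi\big(v(V_g(B_g(0,r)))\big)\, n\si_n \sinh(r)^{n-1}\, dr = \int_0^\infty \Phi(v(t))\, dt,
\]
and similarly for $u_e$ with $s = \si_n |y|^n$, $ds = n\si_n |y|^{n-1}\, d|y|$.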
\begin{theorem}\label{keytool}
Let $n \geq 4$ and let $u\in W^{2,2}(\H^n)$ be a radial function. Suppose $u(x) = v(V_g(B_g(0,\rho(x))))$ for some function $v$ on $[0,\infty)$, and define the function $u_e$ in $\R^n$ by $u_e(y) = v(\si_n |y|^n)$. Then there holds
\begin{equation}\label{eq:keytool}
\int_{\H^n} (\Delta_g u)^2 dV_g -\frac{(n-1)^4}{16} \int_{\H^n} u^2 dV_g \geq \int_{\R^n} (\De u_e)^2 dy.
\end{equation}
\end{theorem}
Our main Theorems \ref{Rellich}, \ref{Sobolev} and \ref{Adams} follow from Theorem \ref{keytool}, the equality \eqref{eq:dangthucnorm} and the analogous inequalities in the Euclidean space $\R^n$. This approach was already used by the author in \cite{VHN2018,VHN2018a} to establish several improvements of the Moser--Trudinger and Poincar\'e--Sobolev inequalities in the hyperbolic spaces for the first order derivative (e.g., the inequality \eqref{eq:Nguyen} above).
The organization of this paper is as follows. In Section $2$, we recall some facts about the hyperbolic spaces and the decreasing spherical symmetric rearrangement technique in the hyperbolic spaces, which enables us to reduce the proof of the main theorems to radial functions. Section $3$ is devoted to proving Theorem \ref{keytool}, which is the key tool in our proof. Theorems \ref{Rellich}, \ref{Sobolev} and \ref{Adams} will be proved in Section $4$.
\section{Preliminaries}
In this section we shall list some properties of the hyperbolic spaces and of the symmetrization technique in the hyperbolic spaces. Let $\H^n$ denote the hyperbolic space of dimension $n\geq 2$. The hyperbolic space is a complete and simply connected Riemannian manifold having constant sectional curvature equal to $-1$. There are several models for $\H^n$, for example, the half space model, the Poincar\'e ball model, the hyperboloid or Lorentz model, etc. All these models are equivalent. As mentioned in the introduction, we shall use throughout this paper the Poincar\'e ball model for the hyperbolic space, which is very useful for problems involving rotational symmetry. It is known that the decreasing spherical symmetric rearrangement technique works well in the setting of the hyperbolic space (see, e.g., \cite{Ba}). It is not only the key tool in establishing several important inequalities in the hyperbolic spaces, such as the first order Sobolev inequality and the sharp Moser--Trudinger inequality, but it also plays a crucial role in proving the results of this paper. Let us recall some facts about this technique. A measurable function $u:\H^n \to \R$ is said to vanish at infinity if for any $t >0$ the set $\{|u| > t\}$ has finite $V_g-$measure, i.e.,
\[
V_g(\{|u|> t\}) = \int_{\{|u|> t\}} dV_g < \infty.
\]
For such a function $u$, its distribution function is defined by
\[
\mu_u(t) = V_g( \{|u|> t\}).
\]
Notice that $t \mapsto \mu_u(t)$ is non-increasing and right continuous on $(0,\infty)$. The decreasing rearrangement function $u^*$ of $u$ is defined by
\[
u^*(t) = \sup\{s > 0\, :\, \mu_u(s) > t\}.
\]
The decreasing, spherical symmetry, rearrangement function $u^\sharp$ of $u$ is defined by
\[
u^\sharp(x) = u^*(V_g(B(0,d(0,x)))),\quad x \in \H^n.
\]
It is well-known that $u$ and $u^\sharp$ have the same decreasing rearrangement function (which is $u^*$). As a consequence of this equi-distribution, we have
\begin{equation}\label{eq:dongphanbo}
\int_{\H^n} \Phi(|u|) dV_g = \int_{\H^n} \Phi(u^\sharp) dV_g = \int_0^\infty \Phi(u^*(t)) dt,
\end{equation}
for any function $\Phi: [0,\infty) \to [0,\infty)$ which is increasing, continuous and satisfies $\Phi(0) =0$. Since $u^*$ is a non-increasing function, the maximal function $u^{**}$ of $u^*$ is defined by
\[
u^{**}(t) = \frac1t \int_0^t u^*(s) ds.
\]
Evidently, $u^*(t) \leq u^{**}(t)$. Moreover, as a consequence of the Hardy inequality, we have
\begin{proposition}\label{hardy}
Let $u \in L^{p}(\H^n)$ with $1 < p < \infty$, then
\[
\lt(\int_0^\infty u^{**}(t)^p dt\rt)^{\frac1p} \leq \frac p{p-1} \lt(\int_0^\infty u^*(t)^p dt\rt)^{\frac1p} = \frac p{p-1} \|u\|_{L^p(\H^n)}.
\]
\end{proposition}
For $1 < p < \infty$, let $W^{1,p}(\H^n)$ denote the first order $L^p-$Sobolev space in $\H^n$, which is the completion of $C_0^\infty(\H^n)$ under the norm $\|u\|_{W^{1,p}(\H^n)} = \lt(\int_{\H^n} |\na_g u|_g^p dV_g\rt)^{\frac1p}$. The P\'olya--Szeg\"o principle in $\H^n$ asserts that if $u\in W^{1,p}(\H^n)$ then $u^\sharp \in W^{1,p}(\H^n)$ and $\|u^\sharp\|_{W^{1,p}(\H^n)} \leq \|u\|_{W^{1,p}(\H^n)}$. Furthermore, the following Poincar\'e--Sobolev inequality holds (see \cite{KS} for $p =2$ and \cite{NgoNguyen2017} for general $p\in (1,\infty)$)
\[
\int_{\H^n} |\na_g u|_g^p dV_g \geq \lt(\frac{n-1}p\rt)^p \int_{\H^n} |u|^p dV_g.
\]
Similarly, let $W^{2,p}(\H^n)$ denote the second order $L^p-$Sobolev space in $\H^n$, which is the completion of $C_0^\infty(\H^n)$ under the norm $\|u\|_{W^{2,p}(\H^n)} = \lt(\int_{\H^n} |\Delta_g u|^p dV_g\rt)^{\frac1p}$. It was proved by Ngo and the author in \cite{NgoNguyen2017} (see also \cite{KS} for $p=2$) that
\[
\int_{\H^n} |\De_g u|^p dV_g \geq \lt(\frac{(n-1)^2}{p p'}\rt)^p \int_{\H^n} |u|^p dV_g, \quad p' = \frac{p}{p-1}.
\]
Moreover, we have
\[
\int_{\H^n} |\Delta_g u|^p dV_g \geq \lt(\frac{n-1}{p'}\rt)^p \int_{\H^n} |\na_g u|_g^p dV_g.
\]
Notice that, unlike in the space $W^{1,p}(\H^n)$, we do not have an analogue of the P\'olya--Szeg\"o principle in $W^{2,p}(\H^n)$. Hence, in order to prove our main results, we will establish the following Talenti type comparison principle in the hyperbolic space $\H^n$ instead of the P\'olya--Szeg\"o principle. We refer the readers to \cite{Talenti} for the Talenti comparison principle in the Euclidean space $\R^n$. We next consider the case $p =2$. Suppose that $u\in W^{2,2}(\H^n)$, and set $f = -\Delta_g u \in L^2(\H^n)$. Then we have from Proposition \ref{hardy} that
\[
\int_0^\infty f^{**}(t)^2 dt \leq 4 \int_0^\infty f^*(t)^2 dt = 4 \|\Delta_g u\|_{L^2(\H^n)}^2.
\]
Obviously, $u \in L^2(\H^n)$ by the Poincar\'e--Sobolev inequality. For each $t >0$, let $F_n(t)$ denote the unique solution of the equation
\begin{equation}\label{eq:Fnfunction}
n \si_n \int_0^{F_n(t)} \sinh^{n-1}(s) ds =t.
\end{equation}
The function $t\mapsto F_n(t)$ is strictly increasing and smooth on $(0,\infty)$ and $F_n(0) =0$. It was proved by Ngo and the author in \cite{NgoNguyen2016} that
\begin{equation}\label{eq:NgoNguyen}
u^*(t) \leq v(t):= \int_t^\infty \frac{s f^{**}(s)}{(n\si_n \sinh(F_n(s))^{n-1})^2} ds, \quad t >0.
\end{equation}
Note that $\sinh(F_n(s))^{n-1} \sim \frac{n-1}{n\si_n} s$ as $s \to \infty$; hence the integral in the right hand side of \eqref{eq:NgoNguyen} is well defined for any $t >0$. For $x\in \H^n$, we set $\bar u(x) = v(V_g(B_g(0,\rho(x))))$. It is easy to check that $-\Delta_g \bar u(x) = f^\sharp(x)$ for $x \in \H^n$ and that $u^\sharp(x) \leq \bar u(x)$. Consequently, we have
\[
\int_{\H^n} (\Delta_g u)^2 dV_g - \frac{(n-1)^4}{16} \int_{\H^n} u^2 dV_g \geq \int_{\H^n} (\Delta_g \bar u)^2 dV_g - \frac{(n-1)^4}{16} \int_{\H^n} \bar u^2 dV_g,
\]
and by Hardy--Littlewood inequality
\[
\int_{\H^n} \frac{u(x)^2}{V_g(B_g(0,\rho(x)))^a} dV_g \leq \int_{\H^n} \frac{u^\sharp(x)^2}{V_g(B_g(0,\rho(x)))^a} dV_g \leq \int_{\H^n} \frac{\bar u(x)^2}{V_g(B_g(0,\rho(x)))^a} dV_g,
\]
for any $a \geq 0$, and
\[
\int_{\H^n} \lt(e^{32\pi^2 u^2} -1\rt) dV_g \leq \int_{\H^n}\lt(e^{32 \pi^2 \bar u^2} -1\rt) dV_g.
\]
Hence, it is enough to prove our main Theorems \ref{Rellich}, \ref{Sobolev} and \ref{Adams} for a function $u$ which is nonnegative, decreasing and radially symmetric, and such that $-\De_g u$ is a nonnegative, decreasing, radially symmetric function as well. In particular, it suffices to prove these theorems for radial functions in $W^{2,2}(\H^n)$. So, in the rest of this paper, we only work with radial functions in $W^{2,2}(\H^n)$.
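Let us also sketch the verification of the identity $-\Delta_g \bar u = f^\sharp$ used above. For a radial function $w(x) = v(V_g(B_g(0,\rho(x))))$, a direct computation with the radial form of $\Delta_g$ shows that $\Delta_g w(x) = (A v')'(t)$ at $t = V_g(B_g(0,\rho(x)))$, where $A(t) = (n\si_n \sinh(F_n(t))^{n-1})^2$. Since $v'(t) = -t f^{**}(t)/A(t)$ by the definition of $v$ in \eqref{eq:NgoNguyen}, we obtain
\[
-\Delta_g \bar u(x) = -\big(A(t) v'(t)\big)' = \big(t f^{**}(t)\big)' = \Big(\int_0^t f^*(s)\, ds\Big)' = f^*(t) = f^\sharp(x).
\]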
\section{Proof of Theorem \ref{keytool}}
In this section, we provide the proof of Theorem \ref{keytool}. We first give some preparations for our proof. Let $u\in W^{2,2}(\H^n)$ be a radial function, and $v:[0,\infty) \to \R$ be a function such that $u(x) = v(V_g(B_g(0,\rho(x))))$. Let us define the function $u_e$ in $\R^n$ by $u_e(y) = v(\si_n |y|^n)$ with $y \in \R^n$. By a direct computation, we have
\begin{align}\label{eq:Hnlaplace}
-\De_g u(x) &= -\lt(v''(V_g(B_g(0,\rho(x)))) + \frac{2(n-1)}n \frac{\cosh(\rho(x))}{\sinh(\rho(x))} \frac{v'(V_g(B_g(0,\rho(x))))}{\si_n\sinh(\rho(x))^{n-1}}\rt)\notag\\
&\qquad\qquad \times (n\si_n)^2 \sinh(\rho(x))^{2(n-1)},
\end{align}
and
\begin{equation}\label{eq:Elaplace}
-\De u_e(y) = -\lt(v''(\si_n |y|^n) + \frac{2(n-1)}n \frac{v'(\si_n |y|^n)}{\si_n |y|^n}\rt) (n \si_n)^2 |y|^{2(n-1)},\quad y\in \R^n.
\end{equation}
Hence, by making the change of variables and using the definition of function $F_n$ (see \eqref{eq:Fnfunction}), we have
\begin{align}\label{eq:L2normDeltaHn}
\int_{\H^n} (\Delta_g u)^2 dV_g &= (n\si_n)^4 \int_0^\infty \lt(v''(s) + \frac{2(n-1)}{n} \frac{\cosh(F_n(s))}{\sinh(F_n(s))} \frac{v'(s)}{\si_n \sinh(F_n(s))^{n-1}}\rt)^2\notag\\
&\qquad\qquad\qquad\qquad \times \sinh(F_n(s))^{4(n-1)} ds,
\end{align}
and
\begin{equation}\label{eq:L2normDeltaE}
\int_{\R^n} (\De u_e)^2 dy = (n\si_n)^4 \int_0^\infty \lt(v''(s) + \frac{2(n-1)}n \frac{v'(s)}{s}\rt)^2 \lt(\frac{s}{\si_n}\rt)^{\frac{4(n-1)}n} ds.
\end{equation}
In order to prove Theorem \ref{keytool}, we need to estimate the quantity
\[
\int_{\H^n} (\Delta_g u)^2 dV_g -\int_{\R^n} (\De u_e)^2 dy.
\]
Let us define
\begin{equation}\label{eq:Phinfunction}
\Phi_n(t) = n \int_0^t \sinh(s)^{n-1} ds, \quad t \geq 0.
\end{equation}
From the definition of $\Phi_n$ and $F_n$, we have $\Phi_n(F_n(t)) = \frac t{\si_n}$. It is easy to see that
\begin{equation}\label{eq:tiemcanPhin}
\Phi_n(t) \sim \sinh(t)^n\quad \text{\rm as}\,\, t \downarrow 0,\quad\text{\rm and}\quad \Phi_n(t) \sim \frac{n}{n-1} \sinh(t)^{n-1}\quad \text{\rm as}\,\, t \to \infty,
\end{equation}
hence it holds
\begin{equation}\label{eq:tiemcanFn}
\sinh(F_n(t))^n \sim
\begin{cases}
\frac{t}{\si_n} &\mbox{as $t \to 0$,}\\
\lt(\frac{n-1} n \frac{t}{\si_n}\rt)^{\frac{n}{n-1}} &\mbox{as $t\to \infty$.}
\end{cases}
\end{equation}
Suppose $-\Delta_g u(x) = f(V_g(B_g(0,\rho(x))))$ for some function $f$ on $[0,\infty)$. Then, by a simple change of variables, $\int_0^\infty f(s)^2 ds < \infty$. For $t >0$, define
\[
\bar f(t) = \frac1 t\int_0^t f(s) ds.
\]
It is straightforward that
\begin{equation}\label{eq:vformula}
v(t) = \int_t^\infty \frac{s \bar f(s)}{(n \si_n \sinh(F_n(s))^{n-1})^2} ds, \quad t >0.
\end{equation}
We have the following expression for $\int_{\H^n} (\Delta_g u)^2 dV_g -\int_{\R^n} (\De u_e)^2 dy$.
\begin{lemma}\label{expressionbyv'}
It holds
\begin{align}\label{eq:expressionbyv'}
&\frac{\int_{\H^n} (\De_g u)^2 dV_g -\int_{\R^n} (\De u_e)^2 dx}{(n\si_n)^{4} } \notag\\
&= \int_0^\infty\lt(v''(s) + \frac{2(n-1)}n \frac{v'(s)}s\rt)^2 \lt(\sinh(F_n(s))^{4(n-1)} -\lt(\frac{s}{\si_n}\rt)^{\frac{4(n-1)}n} \rt)ds\notag\\
&\quad + \frac{4(n-1)}{n^2} \int_0^\infty v'(s)^2\Bigg(2(n-1) \frac{\cosh(F_n(s))}{s \si_n \sinh(F_n(s))^n} -\frac{n-1}{2 \si_n^2 \sinh(F_n(s))^{2n-2}}\notag\\
&\qquad\qquad\qquad\qquad\qquad \qquad- \frac{n-2}{2 \si_n^2\sinh(F_n(s))^{2n}} -\frac{3n-2}{2 s^2} \Bigg)\sinh(F_n(s))^{4(n-1)}ds.
\end{align}
\end{lemma}
\begin{proof}
This lemma is proved by integration by parts. Denoting $B(s) = \frac{s \cosh(F_n(s))}{\si_n \sinh(F_n(s))^n}$, we have
\begin{align}\label{eq:1ststep}
&\frac{\int_{\H^n} |\De_g u|^2 dV_g}{(n\si_n)^4}\notag\\
&\qquad = \int_0^\infty\lt(v''(s) + \frac{2(n-1)}n \frac{v'(s)}s + \frac{2(n-1)}n (B(s) -1) \frac{v'(s)}{s}\rt)^2 \sinh(F_n(s))^{4(n-1)} ds\notag\\
&\qquad =\int_0^\infty\lt(v''(s) + \frac{2(n-1)}n \frac{v'(s)}s\rt)^2 \sinh(F_n(s))^{4(n-1)}ds\notag\\
&\qquad\quad + \frac{4(n-1)}n \int_0^\infty \lt(v''(s) + \frac{2(n-1)}n \frac{v'(s)}s\rt)(B(s) -1) \frac{v'(s)}{s} \sinh(F_n(s))^{4(n-1)} ds\notag\\
&\qquad\quad + \frac{4(n-1)^2}{n^2} \int_0^\infty v'(s)^2 \lt(\frac{B(s)-1}s\rt)^2 \sinh(F_n(s))^{4(n-1)} ds\notag\\
&\qquad =\int_0^\infty\lt(v''(s) + \frac{2(n-1)}n \frac{v'(s)}s\rt)^2 \sinh(F_n(s))^{4(n-1)}ds\notag\\
&\qquad\quad + \frac{4(n-1)^2}{n^2} \int_0^\infty v'(s)^2 \frac{B(s)^2-1}{s^2} \sinh(F_n(s))^{4(n-1)} ds\notag\\
&\qquad\quad + \frac{2(n-1)}n \int_0^\infty ((v'(s))^2)' \frac{(B(s) -1) \sinh(F_n(s))^{4(n-1)}}s ds.
\end{align}
We claim that
\begin{equation}\label{eq:claimv'0}
\lim_{s\to 0}v'(s)^2\frac{(B(s) -1) \sinh(F_n(s))^{4(n-1)}}s = 0,
\end{equation}
and
\begin{equation}\label{eq:claimv'0a}
\lim_{s\to \infty}v'(s)^2\frac{(B(s) -1) \sinh(F_n(s))^{4(n-1)}}s =0.
\end{equation}
Indeed, we have
\[
v'(s) = -\frac{s \bar f(s)}{(n\si_n \sinh(F_n(s))^{n-1})^2},
\]
hence
\[
v'(s)^2\frac{(B(s) -1) \sinh(F_n(s))^{4(n-1)}}s = \frac{1}{(n\si_n)^4} (B(s) -1) s \bar f(s)^2.
\]
It is easy to check by using \eqref{eq:tiemcanFn} that
\[
\lim_{s\to 0} B(s) =1,\quad \text{\rm and}\quad \lim_{s\to \infty} B(s) = \frac{n}{n-1}.
\]
On the other hand, by the H\"older inequality we have
\[
|\bar f(s)| \leq \frac{1}{s} \int_0^s |f(t)| dt \leq \frac1{\sqrt{s}} \lt(\int_0^s f(t)^2 dt\rt)^{\frac12},
\]
which implies $\lim_{s\to 0} s \bar f(s)^2 = 0$. The claim \eqref{eq:claimv'0} then follows. We next consider \eqref{eq:claimv'0a}. For any $\ep >0$, we can take $t_0 >0$ such that $\int_{t_0}^\infty f(t)^2 dt < \ep^2$. For any $s > t_0$, by the H\"older inequality we have
\[
|\bar f(s)| \leq \frac{t_0}{s} |\bar f(t_0)| + \frac{\sqrt{s-t_0}}{s} \lt(\int_{t_0}^s f(t)^2 dt\rt)^{\frac12} \leq \frac{t_0}{s} |\bar f(t_0)| + \frac{\sqrt{s-t_0}}{s} \ep.
\]
Whence, it holds
\[
\limsup_{s\to \infty} \sqrt{s} |\bar f(s)| \leq \ep,
\]
for any $\ep >0$. Then, we get $\lim_{s\to \infty} s \bar f(s)^2 =0$. This limit and the fact that $B(s) \to \frac{n}{n-1}$ as $s \to \infty$ imply the claim \eqref{eq:claimv'0a}.
Applying integration by parts together with the claims \eqref{eq:claimv'0} and \eqref{eq:claimv'0a} to the third term on the right-hand side of \eqref{eq:1ststep}, we get
\begin{align*}
\int_0^\infty (v'(s)^2)' &\frac{(B(s) -1) \sinh(F_n(s))^{4(n-1)}}s ds\\
&\qquad\qquad = -\int_0^\infty v'(s)^2 \lt(\frac{(B(s) -1) \sinh(F_n(s))^{4(n-1)}}s\rt)' ds.
\end{align*}
Furthermore, from the definition \eqref{eq:Fnfunction} of $F_n$, we have $n \si_n \sinh(F_n(s))^{n-1} F_n'(s) = 1$. Hence, it holds
\begin{align*}
&\lt(\frac{(B(s) -1) \sinh(F_n(s))^{4(n-1)}}s\rt)' \\
&\qquad\qquad\qquad\qquad =\lt(\lt(\frac{B(s)-1}{s}\rt)' + \frac{4(n-1)}n \frac{B(s)^2 -B(s)}{s^2}\rt) \sinh(F_n(s))^{4(n-1)},
\end{align*}
and
\[
\lt(\frac{B(s)}s - \frac1{s}\rt)' = \frac1{n \si_n^2 \sinh(F_n(s))^{2n -2}} - \frac{\cosh(F_n(s))^2}{\si_n^2 \sinh(F_n(s))^{2n}} + \frac1{s^2}.
\]
Plugging the preceding equalities into \eqref{eq:1ststep}, we get
\begin{align*}
&\frac{\int_{\H^n} |\De_g u|^2 dV_g}{(n\si_n)^4} \\
&=\int_0^\infty\lt(v''(s) + \frac{2(n-1)}n \frac{v'(s)}s\rt)^2 \sinh(F_n(s))^{4(n-1)}ds\\
&\quad - \frac{4(n-1)^2}{n^2} \int_0^\infty v'(s)^2\Bigg(\frac{n}{2(n-1)}\lt(\frac{B(s)-1}{s}\rt)' +\frac{B(s)^2 -2B(s) +1}{s^2} \Bigg)\\
&\qquad\qquad\qquad\qquad\qquad\qquad \times \sinh(F_n(s))^{4(n-1)} ds\\
&=\int_0^\infty\lt(v''(s) + \frac{2(n-1)}n \frac{v'(s)}s\rt)^2 \sinh(F_n(s))^{4(n-1)}ds\\
&\quad + \frac{4(n-1)}{n^2} \int_0^\infty v'(s)^2\Bigg(2(n-1) \frac{\cosh(F_n(s))}{s \si_n \sinh(F_n(s))^n} -\frac{n-1}{2 \si_n^2 \sinh(F_n(s))^{2n-2}}\\
&\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad - \frac{n-2}{2 \si_n^2\sinh(F_n(s))^{2n}} -\frac{3n-2}{2 s^2} \Bigg)\sinh(F_n(s))^{4(n-1)}ds.
\end{align*}
The desired equality \eqref{eq:expressionbyv'} now follows from \eqref{eq:L2normDeltaE} and the previous equality.
\end{proof}
To proceed, we shall need the following estimate, which is the key ingredient in the proof of Theorem \ref{keytool}.
\begin{lemma}\label{keyestimate}
Let $n \geq 4$. There holds
\begin{align}\label{eq:keyestimate}
&\Bigg(2(n-1) \frac{\cosh(F_n(s))}{s \si_n \sinh(F_n(s))^n} -\frac{n-1}{2 \si_n^2 \sinh(F_n(s))^{2n-2}} \notag\\
&\qquad\qquad - \frac{n-2}{2 \si_n^2\sinh(F_n(s))^{2n}} -\frac{3n-2}{2 s^2} \Bigg)\sinh(F_n(s))^{4(n-1)} > \frac{(n-1)^3(n-2)}{2 n^4} \frac{s^2}{\si_n^4},
\end{align}
for any $s >0$.
\end{lemma}
\begin{proof}
We start the proof by remarking that the inequality \eqref{eq:keyestimate} is equivalent to
\begin{align}\label{eq:arbitraryn}
\frac{(n-1)^3(n-2)}{2 n^4}& \Phi_n(t)^4+\lt(\frac{n-1}2 \sinh(t)^{2(n-1)} + \frac{n-2}2\sinh(t)^{2n-4}\rt)\Phi_n(t)^2 \notag\\
&\quad -2(n-1) \sinh(t)^{3n-4} \cosh(t) \Phi_n(t) + \frac{3n-2}2 \sinh(t)^{4(n-1)} < 0,
\end{align}
for any $t>0$, where $\Phi_n$ is defined by \eqref{eq:Phinfunction}. Let us define
\begin{align*}
G_n(t) &= \frac{(n-1)^3(n-2)}{2 n^4} \Phi_n(t)^4+\lt(\frac{n-1}2 \sinh(t)^{2(n-1)} + \frac{n-2}2\sinh(t)^{2n-4}\rt)\Phi_n(t)^2 \notag\\
&\quad -2(n-1) \sinh(t)^{3n-4} \cosh(t) \Phi_n(t) + \frac{3n-2}2 \sinh(t)^{4(n-1)}.
\end{align*}
We need to check that $G_n(t) < 0$ for $t > 0$. Notice that $G_n(0) = 0$. Differentiating the function $G_n$ gives
\begin{align*}
G_n'(t)&= \frac{2(n-1)^3(n-2)}{n^3} \Phi_n(t)^3 \sinh(t)^{n-1}\\
&\quad + \lt((n-1)^2 \sinh(t)^{2n-3} \cosh(t)+ (n-2)^2\sinh(t)^{2n-5} \cosh(t)\rt)\Phi_n(t)^2\\
&\quad + n\lt((n-1) \sinh(t)^{2(n-1)} + (n-2)\sinh(t)^{2n-4}\rt)\Phi_n(t) \sinh(t)^{n-1}\\
&\quad -2(n-1) ((3n -3) \sinh(t)^{3n-3} + (3n -4)\sinh(t)^{3n-5})\Phi_n(t)\\
&\quad -2n(n-1) \sinh(t)^{4n-5} \cosh(t) + 2(n-1)(3n -2) \sinh(t)^{4n-5} \cosh(t)\\
&= \frac{2(n-1)^3(n-2)}{n^3} \Phi_n(t)^3 \sinh(t)^{n-1}\\
&\quad + \lt((n-1)^2 \sinh(t)^{2n-3} \cosh(t)+ (n-2)^2\sinh(t)^{2n-5} \cosh(t)\rt)\Phi_n(t)^2\\
&\quad -((n-1)(5n -6) \sinh(t)^{3n -3} + (5n^2 -12n +8) \sinh(t)^{3n-5}) \Phi_n(t)\\
&\quad + 4(n-1)^2 \sinh(t)^{4n-5} \cosh(t).
\end{align*}
Denoting
\begin{align*}
H_n(t)&= \frac{2(n-1)^3(n-2)}{n^3} \Phi_n(t)^3\\
&\quad + \lt((n-1)^2 \sinh(t)^{n-2} \cosh(t)+ (n-2)^2\sinh(t)^{n-4} \cosh(t)\rt)\Phi_n(t)^2\\
&\quad -((n-1)(5n -6) \sinh(t)^{2n -2} + (5n^2 -12n +8) \sinh(t)^{2n-4}) \Phi_n(t)\\
&\quad + 4(n-1)^2 \sinh(t)^{3n-4} \cosh(t),
\end{align*}
we then have $G_n'(t) = H_n(t) \sinh(t)^{n-1}$ and $H_n(0) = 0$. Differentiating $H_n$ once, we get
\begin{align*}
&H_n'(t) \\
& = \frac{6(n-1)^3(n-2)}{n^2} \Phi_n(t)^2 \sinh(t)^{n-1}\\
& + \lt((n-1)^3 + \frac{(n-2)(2n^2 -7n +7)}{\sinh(t)^2} + \frac{(n-2)^2(n-4)}{\sinh(t)^4}\rt) \Phi_n(t)^2 \sinh(t)^{n-1}\\
& + 2n \lt((n-1)^2 \sinh(t)^{n-2} \cosh(t)+ (n-2)^2\sinh(t)^{n-4} \cosh(t)\rt)\Phi_n(t) \sinh(t)^{n-1}\\
& -(2(n-1)^2(5n -6) \sinh(t)^{2n -3} + (2n-4)(5n^2 -12n +8) \sinh(t)^{2n-5}) \cosh(t) \Phi_n(t)\\
& - n((n-1)(5n -6) \sinh(t)^{2n -2} + (5n^2 -12n +8) \sinh(t)^{2n-4}) \sinh(t)^{n-1}\\
& + 4(n-1)^2 (3(n-1)\sinh(t)^{3n-3} + (3n -4) \sinh(t)^{3n -5})\\
&= \lt(\frac{(n-1)^3 (n^2 +6n -12)}{n^2}+ \frac{(n-2)(2n^2 -7n +7)}{\sinh(t)^2} + \frac{(n-2)^2(n-4)}{\sinh(t)^4}\rt) \Phi_n(t)^2 \sinh(t)^{n-1}\\
& -(4(n-1)^2(2n-3) \sinh(t)^{2} + 4(n-2)(2n^2 -5n +4)) \cosh(t) \Phi_n(t) \sinh(t)^{2n-5}\\
& + ((n-1)(7n^2 -18 n+12) \sinh(t)^{2} + (7n^3 -28n^2 + 36n -16)) \sinh(t)^{3n-5}.
\end{align*}
Denote
\begin{align*}
J_n(t)&= \Phi_n(t)^2 + \frac{((n-1)(7n^2 -18n + 12)\sinh(t)^2 + 7n^3 -28 n^2 + 36 n -16)\sinh(t)^{2n}}{\frac{(n-1)^3(n^2 + 6n -12)}{n^2} \sinh(t)^4 + (n-2)(2n^2 -7n + 7)\sinh(t)^2 + (n-2)^2(n-4)}\\
& -\frac{(4(n-1)^2(2n-3) \sinh(t)^{2} + 4(n-2)(2n^2-5n +4))\sinh(t)^{n}\cosh(t)}{\frac{(n-1)^3(n^2 + 6n -12)}{n^2} \sinh(t)^4 + (n-2)(2n^2 -7n + 7)\sinh(t)^2 + (n-2)^2(n-4)} \Phi_n(t).
\end{align*}
We have
\begin{align*}
H_n'(t) &= \Bigg(\frac{(n-1)^3 (n^2 +6n -12)}{n^2}+ \frac{(n-2)(2n^2 -7n +7)}{\sinh(t)^2} + \frac{(n-2)^2(n-4)}{\sinh(t)^4}\Bigg)\\
&\qquad\qquad \times J_n(t)\, \sinh(t)^{n-1},
\end{align*}
and $J_n(0) =0$. Denoting $s =\sinh(t)^2$ and differentiating $J_n$, we get
\begin{align*}
&\frac{J_n'(t)}{\sinh(t)^{n-1}}\notag\\
& = 2n\Phi_n(t) -n\frac{(4(n-1)^2(2n-3) s + 4(n-2)(2n^2-5n +4))\sinh(t)^{n}\cosh(t)}{\frac{(n-1)^3(n^2 + 6n -12)}{n^2} s^2 + (n-2)(2n^2 -7n + 7)s + (n-2)^2(n-4)} \notag\\
& - \frac{4(n-1)^2(2n -3)(n+3)s^2 + 4(4n^4 -10 n^3-n^2 + 19n -14)s + 4(n-2)n(2n^2 -5n +4)}{\frac{(n-1)^3(n^2 + 6n -12)}{n^2} s^2 + (n-2)(2n^2 -7n + 7)s + (n-2)^2(n-4)}\Phi_n(t)\notag\\
& +\frac{(4(n-1)^2(2n-3)s + 4(n-2)(2n^2 -5n +4))(4 \frac{(n-1)^3(n^2 + 6n -12)}{n^2} s + 2(n-2)(2n^2 -7n +7))}{(\frac{(n-1)^3(n^2 + 6n -12)}{n^2} s^2 + (n-2)(2n^2 -7n + 7)s + (n-2)^2(n-4))^2}\notag\\
&\qquad \times s(s+1) \Phi_n(t)\notag\\
& + \frac{(2(n^2 -1)(7n^2 -18n + 12)s + 2n(7n^3 -28n^2 + 36n -16))\sinh(t)^n\cosh(t)}{\frac{(n-1)^3(n^2 + 6n -12)}{n^2} s^2 + (n-2)(2n^2 -7n + 7)s + (n-2)^2(n-4)}\notag\\
& -\frac{((n -1)(7n^2 -18n + 12)s + 7n^3 -28n^2 + 36n -16)(4 \frac{(n-1)^3(n^2 + 6n -12)}{n^2} s + 2(n-2)(2n^2 -7n +7))}{(\frac{(n-1)^3(n^2 + 6n -12)}{n^2} s^2 + (n-2)(2n^2 -7n + 7)s + (n-2)^2(n-4))^2}\notag\\
&\qquad \times s \sinh(t)^n \cosh(t).
\end{align*}
By direct computations, we simplify $\frac{J_n'(t)}{\sinh(t)^{n-1}}$ as
\begin{align}\label{eq:daohamJn}
&\frac{J_n'(t)}{\sinh(t)^{n-1}}\notag\\
&=-\frac{\frac{2(n-1)^2(3n^3+ n^2 -12)}{n}s ^2 + (12 n^4 -18n^3 -46 n^2+ 104 n -56)s + 2n^2(n-2)(3n-4)}{\frac{(n-1)^3(n^2 + 6n -12)}{n^2} s^2 + (n-2)(2n^2 -7n + 7)s + (n-2)^2(n-4)}\Phi_n(t)\notag\\
&+\frac{(4(n-1)^2(2n-3)s + 4(n-2)(2n^2 -5n +4))(4 \frac{(n-1)^3(n^2 + 6n -12)}{n^2} s + 2(n-2)(2n^2 -7n +7))}{(\frac{(n-1)^3(n^2 + 6n -12)}{n^2} s^2 + (n-2)(2n^2 -7n + 7)s + (n-2)^2(n-4))^2}\notag\\
&\qquad \times s(s+1) \Phi_n(t)\notag\\
& + \frac{2(n-1)(3n^3 -n^2 -12n + 12)s + 2n^2(n-2)(3n-4)}{\frac{(n-1)^3(n^2 + 6n -12)}{n^2} s^2 + (n-2)(2n^2 -7n + 7)s + (n-2)^2(n-4)}\sinh(t)^n \cosh(t)\notag\\
&-\frac{((n -1)(7n^2 -18n + 12)s + 7n^3 -28n^2 + 36n -16)(4 \frac{(n-1)^3(n^2 + 6n -12)}{n^2} s + 2(n-2)(2n^2 -7n +7))}{(\frac{(n-1)^3(n^2 + 6n -12)}{n^2} s^2 + (n-2)(2n^2 -7n + 7)s + (n-2)^2(n-4))^2}\notag\\
&\qquad \times s \sinh(t)^n \cosh(t)\notag\\
&=-\frac{A(s) \Phi_n(t) - B(s)\sinh(t)^n \cosh(t)}{(\frac{(n-1)^3(n^2 + 6n -12)}{n^2} s^2 + (n-2)(2n^2 -7n + 7)s + (n-2)^2(n-4))^2},
\end{align}
where
\begin{align*}
A(s)& =\frac{6(n-1)^6(n-2)^2(n^2 + 6n-12)}{n^3}s^4 \\
&\quad + \frac{(n-1)^2}{n^2}(24 n^7 -116 n^6-88 n^5 + 1732 n^4 - 4920 n^3 +6744 n^2 -4800 n +1440) s^3\\
&\quad + \frac{n-2}{n^2} (36 n^8 -252 n^7 + 506 n^6 + 392 n^5-3386 n^4 + 6504 n^3 -6480 n^2 + 3456n -768) s^2 \\
&\quad + (n-2)^2(24 n^5 -156 n^4 + 316 n^3 -224 n^2 + 32 n) s\\
&\quad + 2n^2(n-2)^3(3n-4)(n-4),
\end{align*}
and
\begin{align*}
B(s) &= \frac{6(n-1)^5(n^2 + 6n -12)(n-2)^2}{n^2} s^3\\
&\quad + \frac{2(n-1)(n-2)}{n^2} (9 n^7 -43 n^6 -24 n^5 + 502 n^4 -1242 n^3 +1424 n^2 - 816 n +192) s^2\\
&\quad + 2(n-2)^2(9 n^5 -59 n^4 + 131 n^3 -123 n^2 + 46n -8) s\\
&\quad + 2 n^2(n-2)^3(n-4)(3n -4).
\end{align*}
We will need the following lower bound for $\Phi_n(t)$
\begin{equation}\label{eq:lowerboundPhin}
\int_0^t \sinh(s)^{n-1} ds \geq \frac{\sinh(t)^n \cosh(t)((n-3)\sinh(t)^2 + n +2)}{(n-1)(n-3) \sinh(t)^4 + 2n(n-1)\sinh(t)^2 + n(n+2)},
\end{equation}
for any $n \geq 4$. Indeed, let us define the function $K_n$ by
\[
K_n(t) = \int_0^t \sinh(s)^{n-1} ds - \frac{\sinh(t)^n \cosh(t)((n-3)\sinh(t)^2 + n +2)}{(n-1)(n-3) \sinh(t)^4 + 2n(n-1)\sinh(t)^2 + n(n+2)}.
\]
We have $K_n(0) =0$. Differentiating $K_n$ gives
\begin{align*}
K_n'(t) &= \sinh(t)^{n-1} -\frac{(n\sinh(t)^{n-1}\cosh(t)^2 + \sinh(t)^{n+1})((n-3)\sinh(t)^2 + (n+2))}{(n-1)(n-3) \sinh(t)^4 + 2n(n-1)\sinh(t)^2 + n(n+2)}\\
&\quad -\frac{2(n-3)\sinh(t)^{n+1} \cosh(t)^2}{(n-1)(n-3) \sinh(t)^4 + 2n(n-1)\sinh(t)^2 + n(n+2)}\\
&\quad + \frac{4(n-1)\sinh(t)^{n+1} \cosh(t)^2 ((n-3)\sinh(t)^2 + n+2)((n-3)\sinh(t)^2 + n)}{((n-1)(n-3) \sinh(t)^4 + 2n(n-1)\sinh(t)^2 + n(n+2))^2}.
\end{align*}
Denoting again $s = \sinh^2(t)$, we have
\begin{align*}
\frac{K_n'(t)}{\sinh(t)^{n-1}}& = 1 -\frac{((n+1)s + n)((n-3)s + n+2) + 2(n-3)s(s+1)}{(n-1)(n-3)s^2 + 2n(n-1)s + n(n+2)}\\
&\quad + 4(n-1) \frac{s(s+1)((n-3)s + n+2)((n-3)s +n)}{((n-1)(n-3)s^2 + 2n(n-1)s + n(n+2))^2}\\
&= \frac{24 s^2}{((n-1)(n-3)s^2 + 2n(n-1)s + n(n+2))^2}\\
&\geq 0.
\end{align*}
Hence $K_n(t) \geq K_n(0) = 0$ for $t\geq 0$ as desired.
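The simplification of $K_n'(t)/\sinh(t)^{n-1}$ above can be double-checked independently; as a supplementary sanity check (not part of the original argument), here is a Python sketch that clears the common denominator and verifies the resulting polynomial identity on an integer grid. Since the difference is a polynomial of degree at most $4$ in each of $n$ and $s$, vanishing on such a grid certifies the identity.

```python
from itertools import product

def D(n, s):
    # common denominator of K_n'(t)/sinh(t)^(n-1), written in s = sinh(t)^2
    return (n - 1)*(n - 3)*s**2 + 2*n*(n - 1)*s + n*(n + 2)

def numerator(n, s):
    # numerator of K_n'(t)/sinh(t)^(n-1) after putting the three terms over D^2
    d = D(n, s)
    return (d**2
            - d*(((n + 1)*s + n)*((n - 3)*s + n + 2) + 2*(n - 3)*s*(s + 1))
            + 4*(n - 1)*s*(s + 1)*((n - 3)*s + n + 2)*((n - 3)*s + n))

# degree <= 4 in n and in s, so a 6 x 6 grid of distinct points is enough
identity_holds = all(numerator(n, s) == 24*s**2
                     for n, s in product(range(4, 10), range(6)))
```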
We next claim that
\begin{equation}\label{eq:claimkey}
n((n-3)s + n+2) A(s) > ((n-1)(n-3)s^2 + 2n(n-1)s + n(n+2))B(s),\, s >0.
\end{equation}
The claim \eqref{eq:claimkey}, together with \eqref{eq:lowerboundPhin} and \eqref{eq:daohamJn}, implies $J_n'(t) < 0$ for any $t >0$. Therefore, $J_n$ is strictly decreasing on $[0,\infty)$ and $J_n(t) < J_n(0) = 0$ for any $t > 0$. The relation between $J_n$ and $H_n'$ yields $H_n'(t) < 0$ for any $t >0$, which gives $H_n(t) < H_n(0) =0$ for any $t >0$. Whence $G_n'(t) = \sinh^{n-1}(t) \, H_n(t) < 0$ for any $t > 0$, which forces $G_n(t) < G_n(0) =0$ for any $t >0$, as desired.
It remains to verify the claim \eqref{eq:claimkey}; the proof amounts to long but elementary computations. Define
\[
P(s) = n((n-3)s + n+2) A(s)- ((n-1)(n-3)s^2 + 2n(n-1)s + n(n+2))B(s).
\]
$P$ is a polynomial in $s$ of degree $5$, hence we can write
\[
P(s) = a_0(n) + a_1(n) s + a_2(n) s^2 + a_3(n) s^3 + a_4(n) s^4 + a_5(n) s^5,
\]
with coefficients $a_i(n)$, $i= 0, \ldots,5$, depending on $n$. Our strategy is to show that $a_i(n) \geq 0$ for $n\geq 4$. Indeed, $a_0(n) = a_5(n) = 0$, while direct computations give
\begin{align*}
a_4(n)& = \frac{2 (n - 1)^2 (n - 2)}{n^2}(2 n - 3) (3 n - 5) (n^2 + 6 n - 12) (n^3 - 5 n^2 + 6 n - 4)\\
&\quad >0,
\end{align*}
\begin{align*}
a_3(n)& =\frac{2(n-2)}n (18 n^8 - 135 n^7 + 485 n^6 - 1087 n^5 + 1677 n^4 - 1926 n^3 + 1656 n^2 - 888 n + 192)\\
&\quad > 0,
\end{align*}
\begin{align*}
a_2(n) &= \frac{2(n-2)^2}n (18 n^7 - 123 n^6 + 370 n^5 -551 n^4 + 258 n^3 + 360 n^2 -528 n + 192) > 0,
\end{align*}
and
\begin{align*}
a_1(n) & =2 n(n - 2)^3 (n-4)(6 n^3 -n^2 -5 n + 2) \geq 0.
\end{align*}
Since moreover $a_4(n) > 0$, it follows that $P(s) > 0$ for any $s > 0$. Hence the claim \eqref{eq:claimkey} is proved, and the proof of the lemma is complete.
\end{proof}
We are now ready to prove Theorem \ref{keytool}.
\begin{proof}[Proof of Theorem \ref{keytool}]
It follows from Lemmas \ref{expressionbyv'} and \ref{keyestimate} that
\begin{align}\label{eq:proofTh2.2}
\int_{\H^n} &(\De_g u)^2 dV_g -\int_{\R^n} (\De u_e)^2 dx\notag\\
&\geq (n\si_n)^{4}\int_0^\infty\lt(v''(s) + \frac{2(n-1)}n \frac{v'(s)}s\rt)^2 \lt(\sinh(F_n(s))^{4(n-1)} -\lt(\frac{s}{\si_n}\rt)^{\frac{4(n-1)}n} \rt)ds\notag\\
&\quad + \frac{2(n-1)^4(n-2)}{ n^4} \int_0^\infty v'(s)^2 (n\si_n)^2 \lt(\frac{s}{\si_n}\rt)^2ds.
\end{align}
It was proved in \cite{VHN2018,VHN2018a} that
\[
\sinh(F_n(s))^{4(n-1)} -\lt(\frac{s}{\si_n}\rt)^{\frac{4(n-1)}n} \geq \lt(\frac{n-1}n\rt)^4 \lt(\frac{s}{\si_n}\rt)^4.
\]
Plugging the preceding estimate into \eqref{eq:proofTh2.2} gives
\begin{align}\label{eq:tofinish}
\int_{\H^n} (\De_g u)^2 dV_g & -\int_{\R^n} (\De u_e)^2 dx\notag\\
&\geq \frac{(n-1)^4}{n^4}\int_0^\infty\lt(v''(s) + \frac{2(n-1)}n \frac{v'(s)}s\rt)^2 (n\si_n)^4 \lt(\frac s{\si_n}\rt)^4ds\notag\\
&\quad + \frac{2(n-1)^4(n-2)}{ n^4} \int_0^\infty v'(s)^2 (n\si_n)^2 \lt(\frac{s}{\si_n}\rt)^2ds\notag\\
&=\frac{(n-1)^4}{n^4}\lt( \int_{\R^n} (\De u_e)^2 |x|^4 dx + 2(n-2) \int_{\R^n} |\na u_e|^2 |x|^2 dx\rt).
\end{align}
Recall the weighted Rellich and Hardy inequalities in $\R^n$ (see, e.g., \cite[Theorems $12$ and $13$]{DH}):
\[
\int_{\R^n} (\De u_e)^2 |x|^4 dx \geq \frac{n^2(n-4)^2}{16} \int_{\R^n} |u_e|^2 dx,
\]
and
\[
\int_{\R^n} |\na u_e|^2 |x|^2 dx \geq \frac{n^2}{4} \int_{\R^n} |u_e|^2 dx.
\]
Applying these inequalities to the right hand side of \eqref{eq:tofinish}, we get
\begin{align*}
\int_{\H^n} (\De_g u)^2 dV_g -\int_{\R^n} (\De u_e)^2 dx &\geq \frac{(n-1)^4}{n^4} \lt(\frac{n^2(n-4)^2}{16} + 2(n-2)\frac{n^2}{4}\rt) \int_{\R^n} |u_e|^2 dx\\
&= \frac{(n-1)^4}{16} \int_{\R^n} |u_e|^2 dx\\
&= \frac{(n-1)^4}{16} \int_{\H^n} u^2 dV_g,
\end{align*}
as desired; here we used \eqref{eq:dangthucnorm} for the last equality.
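As a supplementary check (not part of the original argument), the arithmetic combining the weighted Rellich and Hardy constants into $\frac{(n-1)^4}{16}$ can be verified exactly; a short Python sketch using rational arithmetic (the identity is rational of low degree in $n$, so agreement on several integer points certifies it):

```python
from fractions import Fraction as F

def combo(n):
    # (n-1)^4/n^4 * ( weighted Rellich constant + 2(n-2) * weighted Hardy constant )
    return F((n - 1)**4, n**4) * (F(n**2*(n - 4)**2, 16) + 2*(n - 2)*F(n**2, 4))

# should equal (n-1)^4/16 identically
identity_holds = all(combo(n) == F((n - 1)**4, 16) for n in range(5, 20))
```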
\end{proof}
\section{Proofs of the main theorems}
In this section, we provide the proofs of Theorems \ref{Rellich}, \ref{Sobolev} and \ref{Adams}. We begin by proving Theorem \ref{Rellich}.
\subsection{Proof of Theorem \ref{Rellich}}
This subsection is devoted to the proof of Theorem \ref{Rellich}, which follows from Theorem \ref{keytool} and the sharp Rellich inequality in $\R^n$.
\begin{proof}[Proof of Theorem \ref{Rellich}]
As discussed in Section $2$, it is enough to prove Theorem \ref{Rellich} for radial functions $u \in W^{2,2}(\H^n)$. Suppose $u \in W^{2,2}(\H^n)$ is a radial function, and let $v$ be a function on $[0,\infty)$ such that $u(x) = v(V_g(B_g(0,\rho(x))))$. Define the function $u_e$ in $\R^n$ by $u_e(y)= v(\si_n |y|^n)$. By Theorem \ref{keytool}, we have
\[
\int_{\H^n} (\De_g u)^2 dV_g - \frac{(n-1)^4}{16} \int_{\H^n} u^2 dV_g \geq \int_{\R^n} (\De u_e)^2 dy.
\]
Since $n\geq 5$, by the Rellich inequality (see \cite{DH}), we have
\begin{align*}
\int_{\R^n} (\De u_e)^2 dy &\geq \frac{n^2(n-4)^2}{16} \int_{\R^n} \frac{u_e(y)^2}{|y|^4} dy\\
& = \frac{n^2(n-4)^2}{16} \int_0^\infty \frac{v(s)^2}{(\frac s{\si_n})^{\frac4n}} ds\\
&= \frac{n^2(n-4)^2}{16} \int_{\H^n} \frac{u(x)^2}{(\frac{V_g(B_g(0,\rho(x)))}{\si_n})^{\frac4n}} dV_g.
\end{align*}
Combining these inequalities finishes the proof of Theorem \ref{Rellich}.
\end{proof}
\subsection{Proof of Theorem \ref{Sobolev}}
The proof of Theorem \ref{Sobolev} is similar to that of Theorem \ref{Rellich}. Instead of the Rellich inequality, we use the sharp Sobolev inequality \eqref{eq:Sobolevorder2} as follows.
\begin{proof}[Proof of Theorem \ref{Sobolev}]
As discussed in Section $2$, it is enough to prove Theorem \ref{Sobolev} for radial functions $u \in W^{2,2}(\H^n)$. Suppose $u \in W^{2,2}(\H^n)$ is a radial function, and let $v$ be a function on $[0,\infty)$ such that $u(x) = v(V_g(B_g(0,\rho(x))))$. Define the function $u_e$ in $\R^n$ by $u_e(y)= v(\si_n |y|^n)$. By Theorem \ref{keytool}, we have
\[
\int_{\H^n} (\De_g u)^2 dV_g - \frac{(n-1)^4}{16} \int_{\H^n} u^2 dV_g \geq \int_{\R^n} (\De u_e)^2 dy.
\]
Since $n\geq 5$, by the sharp Sobolev inequality \eqref{eq:Sobolevorder2}, we have
\begin{align*}
\int_{\R^n} (\De u_e)^2 dy &\geq S_2(n,2) \lt(\int_{\R^n} |u_e(y)|^{\frac{2n}{n-4}} dy\rt)^{\frac{n-4}n}\\
& = S_2(n,2) \lt(\int_0^\infty |v(s)|^{\frac{2n}{n-4}} ds\rt)^{\frac{n-4}n}\\
&= S_2(n,2) \lt(\int_{\H^n} |u|^{\frac{2n}{n-4}} dV_g\rt)^{\frac{n-4}n}.
\end{align*}
Combining these inequalities finishes the proof of Theorem \ref{Sobolev}.
\end{proof}
As mentioned in the introduction, from Theorem \ref{Sobolev} we obtain the following Poincar\'e--Sobolev inequality, with an explicit constant, for the GJMS operator $P_2$.
\begin{corollary}\label{cor}
Let $n \geq 5$. It holds
\begin{equation}\label{eq:PS2th}
\int_{\H^n} (P_2 u) u\, dV_g - \frac{9}{16} \int_{\H^n} u^2 dV_g \geq \frac{5S_2(n,2)}{(n-1)^2} \lt(\int_{\H^n} |u|^{\frac{2n}{n-4}} dV_g\rt)^{\frac{n-4}n}, \quad u\in C_0^\infty(\H^n).
\end{equation}
\end{corollary}
The inequality \eqref{eq:PS2th} is exactly the inequality \eqref{eq:LuYangineq} for $k =2$, which improves the Poincar\'e inequality \eqref{eq:PoincarePk} for $k=2$. Our proof of \eqref{eq:PS2th} in this paper is completely different from the one of Lu and Yang \cite{LuYanga}. Moreover, we obtain an explicit constant on the right-hand side of \eqref{eq:PS2th}.
\begin{proof}[Proof of Corollary \ref{cor}]
For $u \in C_0^\infty(\H^n)$, we have
\begin{align*}
\int_{\H^n} (P_2 u) u\, dV_g - \frac{9}{16} \int_{\H^n} u^2 dV_g&= \int_{\H^n} (\Delta_g u)^2 dV_g + \lt(\frac{n(n-2)}2 -2\rt) \int_{\H^n} (\Delta_g u) u\, dV_g \\
&\quad + \frac{n(n-2) (n(n-2) -8) -9}{16} \int_{\H^n} u^2 dV_g.
\end{align*}
Hence, it holds
\begin{align*}
\int_{\H^n} (P_2 u) u\, dV_g &- \frac{9}{16} \int_{\H^n} u^2 dV_g - \frac{5}{(n-1)^2} \lt(\int_{\H^n} (\De_g u)^2 dV_g - \frac{(n-1)^4}{16} \int_{\H^n} u^2 dV_g\rt)\\
&= \frac{n(n-2) -4}{(n-1)^2} \int_{\H^n} (\Delta_g u)^2 dV_g + \frac{n(n-2)-4}2 \int_{\H^n} (\Delta_g u) u\, dV_g \\
&\quad + \frac{(n(n-2) -4)(n-1)^2}{16} \int_{\H^n} u^2 dV_g\\
&= \frac{n(n-2) -4}{(n-1)^2} \int_{\H^n} \lt(\Delta_g u + \frac{(n-1)^2}{4} u\rt)^2 dV_g\\
&\geq 0.
\end{align*}
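The completion of the square above is a purely algebraic identity in the three integrals; as a supplementary check (ours, not part of the original argument), the following Python sketch verifies it with exact rational arithmetic, abbreviating $\int_{\H^n} (\Delta_g u)^2 dV_g$, $\int_{\H^n} (\Delta_g u) u\, dV_g$ and $\int_{\H^n} u^2 dV_g$ by $X$, $Y$ and $Z$:

```python
from fractions import Fraction as F

def coeffs_lhs(n):
    # coefficients of X, Y, Z on the left-hand side of the identity
    return (1 - F(5, (n - 1)**2),
            F(n*(n - 2), 2) - 2,
            F(n*(n - 2)*(n*(n - 2) - 8) - 9, 16) + F(5*(n - 1)**2, 16))

def coeffs_rhs(n):
    # coefficients of X, Y, Z in (n(n-2)-4)/(n-1)^2 * (X + (n-1)^2/2 Y + (n-1)^4/16 Z),
    # i.e. the expanded square
    c = F(n*(n - 2) - 4, (n - 1)**2)
    return (c, c*F((n - 1)**2, 2), c*F((n - 1)**4, 16))

# both sides are rational in n of low degree; agreement on several
# integer points certifies the identity
identity_holds = all(coeffs_lhs(n) == coeffs_rhs(n) for n in range(2, 20))
```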
Consequently, we get
\[
\int_{\H^n} (P_2 u) u\, dV_g - \frac{9}{16} \int_{\H^n} u^2 dV_g \geq \frac{5}{(n-1)^2} \lt(\int_{\H^n} (\De_g u)^2 dV_g - \frac{(n-1)^4}{16} \int_{\H^n} u^2 dV_g\rt).
\]
This corollary follows from the preceding inequality and Theorem \ref{Sobolev}.
\end{proof}
It is well known that the sharp Sobolev and Rellich inequalities are special cases of a family of Rellich--Sobolev inequalities in $\R^n$ (see, e.g., \cite{Musina,MusinaS}):
\[
\int_{\R^n} (\De w)^2 dy \geq S_{2,s}(n,2) \lt(\int_{\R^n} \frac{|w(y)|^{\frac{2(n-s)}{n-4}}}{|y|^s} dy\rt)^{\frac{n-4}{n-s}}, \quad w \in C_0^\infty(\R^n), \, 0 \leq s \leq 4,
\]
where $S_{2,s}(n,2)$ denotes the sharp constant in the inequality above. Obviously, we have $S_{2,0}(n,2) = S_2(n,2)$ and $S_{2,4}(n,2) = \frac{n^2(n-4)^2}{16}$. Using the Rellich--Sobolev inequality above and the arguments in the proofs of Theorems \ref{Rellich} and \ref{Sobolev}, we easily obtain the following inequality, which combines the sharp Poincar\'e inequality and the sharp Rellich--Sobolev inequality in $\H^n$:
\[
\int_{\H^n} (\De_g u)^2 dV_g - \frac{(n-1)^4}{16} \int_{\H^n} u^2 dV_g \geq S_{2,s}(n,2) \lt(\int_{\H^n} \frac{|u(x)|^{\frac{2(n-s)}{n-4}}}{(\frac{V_g(B_g(0,\rho(x)))}{\si_n})^{\frac{s}n}} dV_g\rt)^{\frac{n-4}{n-s}},
\]
for any $u\in C_0^\infty(\H^n)$, $0 \leq s \leq 4$.
\subsection{Proof of Theorem \ref{Adams}}
In this subsection, we provide the proof of Theorem \ref{Adams}. As in the proofs of Theorems \ref{Rellich} and \ref{Sobolev}, the proof of Theorem \ref{Adams} relies on Theorem \ref{keytool} and the sharp Adams inequality \eqref{eq:AdamsR4} in $\R^4$. By scaling, it is easy to derive from \eqref{eq:AdamsR4} the following inequality
\begin{equation}\label{eq:Adamstau}
\sup_{\int_{\R^4} (\De u)^2 dx + \tau \int_{\R^4} u^2 dx \leq 1} \int_{\R^4} \lt(e^{32\pi^2 u^2} -1\rt) dx \leq \frac1{\tau}\, \sup_{\int_{\R^4} (\De u)^2 dx + \int_{\R^4} u^2 dx \leq 1} \int_{\R^4} \lt(e^{32\pi^2 u^2} -1\rt) dx.
\end{equation}
\begin{proof}[Proof of Theorem \ref{Adams}]
Again, as discussed in Section $2$, we only need to prove \eqref{eq:Adams} for radial functions $u \in W^{2,2}(\H^4)$. Let $\lam < \frac{81}{16}$ and let $u \in W^{2,2}(\H^4)$ be a radial function such that
\[
\int_{\H^4} (\Delta_g u)^2 dV_g - \lam \int_{\H^4} u^2 dV_g \leq 1.
\]
Suppose $u(x) = v(V_g(B_g(0,\rho(x))))$ for a function $v$ on $[0,\infty)$, and define the function $u_e$ in $\R^4$ by $u_e(y) = v(\si_4 |y|^4)$ for $y\in \R^4$. By Theorem \ref{keytool}, we have
\[
\int_{\H^4} (\De_g u)^2 dV_g - \frac{81}{16} \int_{\H^4} u^2 dV_g \geq \int_{\R^4} (\De u_e)^2 dy,
\]
which then implies
\[
\int_{\R^4} (\De u_e)^2 dy + \lt(\frac{81}{16} -\lam\rt) \int_{\R^4} u_e^2 dy \leq \int_{\H^4} (\Delta_g u)^2 dV_g - \lam \int_{\H^4} u^2 dV_g \leq 1.
\]
Since $\lam < \frac{81}{16}$, we have $\tau := \frac{81}{16} -\lam >0$. By the sharp Adams inequality \eqref{eq:Adamstau} for this $\tau$, we have
\[
\int_{\R^4} \lt(e^{32 \pi^2 u_e^2} -1\rt) dy \leq \frac{16}{81 -16 \lam}\,\sup_{\int_{\R^4} (\De u)^2 dx + \int_{\R^4} u^2 dx \leq 1} \int_{\R^4} \lt(e^{32\pi^2 u^2} -1\rt) dx=: C_\lam.
\]
On the other hand, we have
\[
\int_{\R^4} \lt(e^{32 \pi^2 u_e^2} -1\rt) dy = \int_0^\infty \lt(e^{32 \pi^2 v(s)^2} -1\rt) ds = \int_{\H^4} \lt(e^{32\pi^2 u^2} -1\rt) dV_g.
\]
The proof of \eqref{eq:Adams} is then completed.
The proof of \eqref{eq:Adamsexact} is completely similar to that of \eqref{eq:Adams}, using Theorem \ref{keytool} and the Adams inequality with exact growth in $\R^4$ due to Masmoudi and Sani \cite{MS2014}, so we omit it. It remains to check the optimality of the power $2$ in the denominator of the integral in \eqref{eq:Adamsexact}. To do this, we construct a sequence of test functions as follows. For each $m \in \N$, consider
\[
u_m(x) =
\begin{cases}
\sqrt{\frac1{32 \pi^2} \ln m} + \sqrt{\frac{1}{8 \pi^2 \ln m}}(1 - \sqrt{m} |x|^2)&\mbox{if $|x|\leq m^{-\frac14}$,}\\
-\sqrt{\frac1{2\pi^2 \ln m}} \ln |x| &\mbox{if $m^{-\frac14} \leq |x| \leq 1$,}\\
\xi_m(x) &\mbox{if $|x| \geq 1$,}
\end{cases}
\]
where $\xi_m \in C_0^\infty(\R^4)$ is a radial function such that $\xi_m(x) =0$ for $|x|\geq 2$,
\[
\xi_m\Big{|}_{\{|x| =1\}} =0,\qquad \frac{\pa \xi_m}{\pa r}\Big{|}_{\{|x| =1\}} = -\sqrt{\frac{1}{2\pi^2 \ln m}},
\]
and $\xi_m$, $|\na \xi_m|$ and $\De \xi_m$ are all $O((\ln m)^{-\frac12})$. The choice of the sequence $\{\xi_m\}_m$ is inspired by the one of Masmoudi and Sani \cite{MS2014}. Following the idea of Karmakar \cite{Karmakar}, we set $\bar u_m(x) = u_m(3 x)$; then the support of $\bar u_m$ is contained in $\{|x|\leq \frac23\}$. One can easily check that
\[
\int_{\H^4} \bar u_m^2 dV_g = O((\ln m)^{-1}),\quad \text{and}\quad \int_{\H^4} (\De_g \bar u_m)^2 dV_g = 1 + O((\ln m)^{-1}).
\]
Let $w_m = c_m \bar u_m$, where $c_m$ is chosen such that $\int_{\H^4} (\De_g w_m)^2 dV_g - \frac{81}{16} \int_{\H^4} w_m^2 dV_g =1$. Then $c_m = 1 + O((\ln m)^{-1})$. Notice that $w_m$ is a radial function. Moreover, we have
\begin{align*}
\int_{\H^4} \frac{e^{32\pi^2 w_m^2} -1}{(1 + w_m^2)^p} dV_g &\geq \int_{\{|x|\leq 3^{-1} m^{-\frac14}\}}\frac{e^{32\pi^2 w_m^2} -1}{(1 + w_m^2)^p} dV_g\\
&\geq C (\ln m)^{-\frac p2} (1 + O((\ln m)^{-1})) \int_{\{|x|\leq 3^{-1} m^{-\frac14}\}} (e^{c_m^2 \ln m} -1) dx\\
&\geq C (\ln m)^{-\frac p2}(1 + O((\ln m)^{-1})) m^{-1}(e^{c_m^2 \ln m} -1)\\
&\geq C (\ln m)^{-\frac p2}(1 + O((\ln m)^{-1}))e^{(c_m^2-1)\ln m},
\end{align*}
here we denote by $C$ any constant independent of $m$, possibly changing from line to line. Notice that $c_m^2 = 1 + O((\ln m)^{-1})$ and $\int_{\H^4} w_m^2 dV_g = O((\ln m)^{-1})$. Hence it holds
\[
\frac{1}{\int_{\H^4} w_m^2 dV_g} \int_{\H^4} \frac{e^{32\pi^2 w_m^2} -1}{(1 + w_m^2)^p} dV_g \geq C (\ln m)^{1-\frac p2} (1 + O((\ln m)^{-1})) e^{O(1)},
\]
and the right-hand side of the latter inequality tends to infinity as $m \to \infty$ whenever $p < 2$. Consequently,
\[
\lim_{m\to \infty}\frac{1}{\int_{\H^4} w_m^2 dV_g} \int_{\H^4} \frac{e^{32\pi^2 w_m^2} -1}{(1 + w_m^2)^p} dV_g = \infty
\]
for $p < 2$. This proves the sharpness of the power $2$ in the denominator of the integral in \eqref{eq:Adamsexact}.
\end{proof}
\section*{Acknowledgments}
The author would like to thank Quoc Hung Phan for useful discussions in the proof of Theorem \ref{keytool}.
\section{Introduction}\label{sec:intro}
Relativistic flows \cite{cattaneo-book-2011,lichnerowicz-book-1967,eckart-prl-1940,muller-zphys-1967,
landau-book-1987,israel-anp-1976,israel-prsl-1979} are of great relevance to
several research fields, including astrophysics and cosmology
\cite{degroot-book-1980,rezzolla-book-2013} and
high energy physics, in particular in connection with the study of the quark
gluon plasma (QGP) \cite{florkowski-rpp-2018}. Relativistic hydrodynamics has
also found application in the context of condensed matter physics, particularly
for the study of strongly correlated electronic fluids in exotic (mostly 2-d)
materials, such as graphene sheets and Weyl semi-metals
\cite{lucas-jopcm-2018}.
The mounting importance of the relativistic hydrodynamic approach for several
physics application areas commands the availability of efficient and versatile
simulation tools. In the last decade, the Relativistic Lattice Boltzmann method
(RLBM) has gained considerable interest in this context. To date, RLBM has been
derived and applied in the limit of vanishingly small Knudsen numbers $\rm Kn$,
defined as the ratio between the particles mean free path and a typical
macroscopic scale of the flow; available methods are increasingly inaccurate as
one increases the value of $\rm Kn$, moving towards beyond-hydrodynamic regimes.
On the other hand, beyond-hydro regimes are very relevant for QGP, especially
with regard to their long-time evolution after the hydrodynamic epoch.
Furthermore, electron conduction in pure enough materials is almost ballistic,
and therefore more attuned to beyond-hydrodynamic descriptions.
The study of these systems has been performed in the past via expansions around the purely ballistic regime \cite{romatschke-epjc-2018, borghini-epjc-2018}.
The extension of the RLBM to the study of rarefied gases has also been considered in the work by Ambru\c{s} and Blaga \cite{ambrus-prc-2018}; based on off-lattice product-based quadrature rules, their model allows for an accurate description of one-dimensional flows beyond hydrodynamic regimes.
In this work, we propose instead an extension of the RLBM that builds on the hydrodynamic regime to further enhance its efficiency in the rarefied gas regime.
For simplicity, in this paper we consider gases of massless particles in a $(2+1)$-dimensional space-time, but the same methodologies can be extended to more general equations of state, suitable for fluids consisting of non-zero mass particles in three space dimensions.
This paper is organised as follows: in the first part of
Sec.~\ref{sec:model-description} we review the main concepts of relativistic
kinetic theory, which are instrumental for the subsequent description of the
Relativistic Lattice Boltzmann Method. In Sec.~\ref{sec:mom.space-disc}, we dig
deeper into the definition of the model, by describing in more detail a
momentum space discretization procedure which enables the beyond-hydro
capabilities of the scheme. Finally, in Sec.~\ref{sec:num-results}, we present
numerical evidence of the capabilities of the scheme, while Sec.~\ref{sec:conclusions}
presents our conclusions and prospects for further development.
\section{Model Description}\label{sec:model-description}
In this work we consider a two-dimensional gas of massless particles; we use
a $(2+1)$-dimensional Minkowski space-time, with metric signature $\eta^{\alpha\beta}=\mathrm{diag}(+,-,-)$.
We adopt Einstein's summation convention over repeated indices; Greek indices denote $(2+1)$
space-time coordinates and Latin indices two-dimensional spatial coordinates.
All physical quantities are expressed in natural units, $c = k_{\rm B} = 1$.
\subsection{Relativistic Kinetic Theory}\label{subsec:relativistic-kinetic-theory}
The relativistic Boltzmann equation, here taken in the relaxation time approximation (RTA)
\cite{anderson-witting-ph-1974b,anderson-witting-ph-1974a}, governs the time evolution of
the single particle distribution function $f(x^{\alpha}, p^{\alpha})$, depending on space-time coordinates
$x^{\alpha}=(t, \mathbf{x})$ and momenta $p^{\alpha}=(p^{0}, \mathbf{p})$, with $\mathbf{x}, \mathbf{p} \in \mathbb{R}^{2}$:
\begin{align}\label{eq:boltz_eq}
p^{\alpha}\partial_\alpha f = - \frac{U^\alpha p_\alpha}{\tau} \left( f - f^{\rm eq} \right) \quad ;
\end{align}
$U^\alpha$ is the macroscopic fluid velocity, $\tau$ is the (proper)-relaxation time
and $f^{\rm eq}$ is the equilibrium distribution function, which we write in a general form as
\begin{align}\label{eq:feq_dist}
f^{\rm eq} \propto \frac{1}{ z^{-1} \exp{\left(\frac{U_\alpha p^\alpha}{T}\right)} + \varepsilon } , \quad z = \exp{\left( \frac{\mu}{T} \right)} ,
\end{align}
with $T$ the temperature, $\mu$ the chemical potential, and $\varepsilon$ distinguishing between the Maxwell-J{\"u}ttner ($\varepsilon = 0$),
Fermi-Dirac ($\varepsilon = 1$) and Bose-Einstein ($\varepsilon = -1$) distributions.
In what follows we will restrict ourselves to the Maxwell-J{\"u}ttner statistics; however, we remark that the quadrature rules
introduced in the coming section are general and apply to Fermi or Bose statistics as well.
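As a side illustration (ours, not part of the original formulation), Eq.~\ref{eq:feq_dist} translates directly into code; a minimal Python sketch, with the overall normalization constant omitted (hence the proportionality sign above) and the function name and argument conventions our own:

```python
import numpy as np

def f_eq(px, py, Ux, Uy, T, mu=0.0, eps=0):
    """Unnormalized equilibrium distribution, c = kB = 1.

    eps = 0, 1, -1 selects Maxwell-Juettner, Fermi-Dirac, Bose-Einstein.
    Metric signature (+,-,-); massless particles, so p^0 = |p|.
    """
    p0 = np.sqrt(px**2 + py**2)           # on-shell energy of a massless particle
    U0 = np.sqrt(1.0 + Ux**2 + Uy**2)     # from the normalization U^a U_a = 1
    Up = U0*p0 - Ux*px - Uy*py            # U_alpha p^alpha
    z = np.exp(mu/T)                      # fugacity
    return 1.0 / (np.exp(Up/T)/z + eps)
```

In the fluid rest frame ($U_x = U_y = 0$, $\mu = 0$, $\varepsilon = 0$) this reduces to $e^{-p^0/T}$, as expected.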
The particle flow $N^\alpha$ and the energy-momentum tensor $T^{\alpha\beta}$,
respectively the first- and second-order moments of the distribution function
\begin{align}\label{eq:moments}
N^\alpha = \int f p^\alpha \frac{\diff^2 p}{p_0} \quad , \quad\quad
T^{\alpha\beta} = \int f p^\alpha p^\beta \frac{\diff^2 p}{p_0} \quad ;
\end{align}
can be put in direct relation with a hydrodynamic description of the system.
The RTA in Eq.~\ref{eq:boltz_eq} is in fact compatible with the Landau-Lifshitz \cite{landau-book-1987} decomposition:
\begin{align}\label{eq:ll-decomp}
N^\alpha &= n U^{\alpha} - \frac{n}{P+\epsilon} q^\alpha \quad , \\
T^{\alpha\beta} &= (\epsilon+P)U^\alpha U^\beta - P \eta^{\alpha\beta} + \pi^{<\alpha\beta>} \quad .
\end{align}
$n$ is the particle number density, $P$ the pressure field, $\epsilon$ the energy density,
$q^{\alpha}$ the heat flux, and $\pi^{<\alpha\beta>}$ the pressure deviator.
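As an illustrative numerical cross-check (ours, not part of the original text), the zeroth components of the moments in Eq.~\ref{eq:moments} can be computed by direct integration in the fluid rest frame for the (unit-normalized) massless Maxwell-J{\"u}ttner distribution $f = e^{-p^0/T}$: in two spatial dimensions one expects $N^0 = 2\pi T^2$ and $T^{00} = 4\pi T^3$. A minimal Python sketch:

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

T = 1.5
p = np.linspace(0.0, 60.0, 200001)      # radial momentum grid; p^0 = |p|
f = np.exp(-p/T)                        # rest-frame Maxwell-Juettner (unit normalization)

# angular integration contributes 2*pi; d^2p = 2*pi*p dp, hence d^2p/p^0 = 2*pi dp
N0  = 2*np.pi*trap(f*p,    p)           # N^0  = int f p^0 d^2p / p^0 = int f d^2p
T00 = 2*np.pi*trap(f*p**2, p)           # T^00 = int f (p^0)^2 d^2p / p^0
```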
\subsection{Relativistic Lattice Boltzmann Method}\label{subsec:RLBM}
In this section, we briefly summarise the derivation of the relativistic Lattice Boltzmann method,
referring the reader to a recent review \cite{gabbana-pr-2020} for full details.
The starting point in the development of the scheme is a polynomial expansion
of the equilibrium distribution function (Eq.~\ref{eq:feq_dist}):
\begin{equation}\label{eq:mj-feq-expansion}
f^{\rm eq}(p^{\mu}, U^{\mu}, T)
=
\omega( p^0) \sum_{k = 0}^{\infty} a^{(k)}_{i_1\dots i_k}( U^{\mu}, T) J^{(k)}_{i_1\dots i_k} ( p^{\mu} ) \quad ,
\end{equation}
with $\{ J^{(k)}_{i_1\dots i_k}, k = 0,1,2,\dots \}$ a suitable set of polynomials, orthogonal with respect
to the weighting function $\omega(p^0)$, and the expansion coefficients are:
\begin{equation}\label{eq:mj-projection-coefficients}
a^{(k)}_{i_1\dots i_k}( U^{\mu}, T)
=
\int f^{\rm eq}( p^{\mu}, U^{\mu}, T) J_{i_1\dots i_k}^{(k)}( p^{\mu} ) \frac{\diff^2 p}{p^0} \quad .
\end{equation}
The choice of the polynomials to be used in Eq.~\ref{eq:mj-feq-expansion}
is directly related to the specific form of the equilibrium distribution function taken into consideration.
A convenient choice for the weighting function $\omega(p^0)$ is given by
the equilibrium distribution in the fluid rest frame; this choice delivers
the desirable property that the first $N$ coefficients of the truncated version
of Eq.~\ref{eq:mj-feq-expansion} coincide with the first $N$ moments of the distribution.
The next step consists of defining a Gaussian-type quadrature, able to recover
exactly all the moments of the distribution up to a desired order $N$.
The definition of the quadrature
is a crucial aspect, which will be covered in detail in the next section. For the moment
we assume that we can define a set $\{(w_i, p_i^{\mu}), i = 1,2, \dots\} $ of weights and quadrature
nodes, which allows us to formulate the discrete version of the equilibrium distribution function:
\begin{equation}\label{eq:mj-feq-expansion-truncated}
f^{\rm eq}_i = f^{\rm eq}(p^{\mu}_i, U^{\mu}, T)
=
w_i \sum_{k = 0}^{N} a_{i_1\dots i_k}^{(k)}( U^{\mu}, T) J_{i_1\dots i_k}^{(k)}( p^{\mu}_i ) \quad .
\end{equation}
Consequently, Eq.~\ref{eq:boltz_eq} is discretized in a set of differential equations for the distributions $f_i$:
\begin{align}
\partial_t f_i + \bm{v}^{i} \cdot \nabla f_i = - \frac{p_i^{\alpha} U_{\alpha}}{p^0_i \tau} \left( f_i - f_i^{\rm eq} \right) \quad ,
\end{align}
where $\bm{v}^{i} = \bm{p}^i / p_0^i $.
Next, by employing an upwind Euler discretization in time with step $\Delta t$ we derive the relativistic Lattice Boltzmann equation:
\begin{equation}\label{eq:discrete-rbe}
f_i(\bm{x} + \bm{v}^{i} \Delta t, t + \Delta t)
=
f_i(\bm{x}, t) + \Delta t~ \frac{p_i^{\alpha} U_{\alpha}}{p^0_i \tau} (f_i^{\rm eq} - f_i(\bm{x}, t) ) \quad .
\end{equation}
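As an illustration, the relaxation-time update of Eq.~\ref{eq:discrete-rbe} can be sketched in a few lines of Python; the function name, the array layout and the $(+,-,-)$ metric convention are our own choices for this sketch, not part of the scheme itself.

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0])      # Minkowski metric, signature (+,-,-)

def rlbm_collide(f, f_eq, p, U, dt, tau):
    """One relaxation-time step of the discrete relativistic Boltzmann
    equation: f <- f + dt * (p.U)/(p^0 tau) * (f_eq - f).

    f, f_eq : (Npop,) populations and local equilibria at one grid point
    p       : (Npop, 3) discrete momenta p_i^mu = (p^0, p^x, p^y)
    U       : (3,) macroscopic Landau velocity U^mu
    """
    pU = p @ ETA @ U                   # contraction p_i^alpha U_alpha
    return f + dt * pU / (p[:, 0] * tau) * (f_eq - f)
```

At equilibrium ($f_i = f_i^{\rm eq}$) the populations are left unchanged, and in the fluid rest frame $p^\alpha U_\alpha / p^0 = 1$, so the update reduces to the familiar non-relativistic BGK form.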
From an algorithmic point of view the time evolution of the above equation can be split into
two parts, respectively the \textit{streaming} step, in which information is propagated to the neighboring sites,
and the \textit{collision} step, in which the collisional operator is locally applied to each grid point.
More formally, the streaming step can be defined as
\begin{align}\label{eq:streaming}
f_i^*(\bm{x}, t) = f_i(\bm{x} - \bm{v}^{i} \Delta t, t) \quad ,
\end{align}
where information is moved at a distance $\Delta x = \bm{v}^{i} \Delta t$, which in general might not
define a position on the Cartesian grid.
In such cases it is therefore necessary to implement an interpolation scheme.
In this work we adopt a simple bilinear interpolation scheme:
\begin{align}\label{eq:interpolation}
f_i (\bm{x} - \bm{v}^{i} \Delta t, t) &= \frac{1}{\Delta x \Delta y} \Bigg\{ \notag \\
& \phantom{+~} f_i(\bm{x} - \bm{r_x} - \bm{r_y}, t)
\Big( \Delta t \big| v^i_x \big| \Big)
\Big( \Delta t \big| v^i_y \big| \Big) \notag \\
%
& + f_i(\bm{x} - \bm{r_y}, t)
\Big( 1 - \Delta t \big| v^i_x \big| \Big)
\Big( \Delta t \big| v^i_y \big| \Big) \notag \\
%
& + f_i(\bm{x} - \bm{r_x}, t)
\Big( \Delta t \big| v^i_x \big| \Big)
\Big( 1 - \Delta t \big| v^i_y \big| \Big) \notag \\
%
& + f_i(\bm{x}, t)
\Big( 1 - \Delta t \big| v^i_x \big| \Big)
\Big( 1 - \Delta t \big| v^i_y \big| \Big) \Bigg\} \quad ,
\end{align}
with
\begin{align}
\bm{r_x} &= \operatorname{sgn}(v^i_x)\, \Delta x\, \bm{\hat{x}} \quad , \\
\bm{r_y} &= \operatorname{sgn}(v^i_y)\, \Delta y\, \bm{\hat{y}} \quad .
\end{align}
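A minimal sketch of this bilinear streaming step on a periodic grid, assuming $\Delta x = \Delta y = 1$ so that the prefactor $1/(\Delta x \Delta y)$ drops out; the function name and the use of `np.roll` to realize the $\bm{r_x}$, $\bm{r_y}$ shifts are our own choices.

```python
import numpy as np

def stream_bilinear(f, vx, vy, dt):
    """Semi-Lagrangian streaming with bilinear interpolation:
    returns f*(x, t) = f(x - v dt, t) on a periodic, unit-spaced grid.

    f : 2D array holding one population f_i on the grid.
    """
    ax, ay = dt * abs(vx), dt * abs(vy)          # fractional displacements <= 1
    sx, sy = int(np.sign(vx)), int(np.sign(vy))
    f_xy = np.roll(f, (sx, sy), axis=(0, 1))     # f(x - r_x, y - r_y)
    f_y  = np.roll(f, sy, axis=1)                # f(x,       y - r_y)
    f_x  = np.roll(f, sx, axis=0)                # f(x - r_x, y      )
    return (f_xy * ax * ay
            + f_y * (1 - ax) * ay
            + f_x * ax * (1 - ay)
            + f * (1 - ax) * (1 - ay))
```

Since the four bilinear weights sum to one, the step conserves the total population on a periodic domain; when $\Delta t |v^i_x|$ is an integer the interpolation degenerates to an exact (perfect-streaming) shift.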
Next, one needs to compute the macroscopic fields, starting from the moments of the particle distribution functions, which
thanks to a Gaussian quadrature can be defined as discrete summations:
\begin{align}\label{eq:discrete_sum_moments}
N^\alpha = \sum_i^{N_{\rm pop}} p^\alpha_i f_i \quad , \quad T^{\alpha\beta} = \sum_i^{N_{\rm pop}} p^\alpha_i p^\beta_i f_i \quad .
\end{align}
From the definition of the energy-momentum tensor in the Landau frame, we compute the energy density $\epsilon$
and the velocity vector $U^{\alpha}$, by solving the eigenvalue problem:
\begin{equation}
\epsilon U^{\alpha} = T^{\alpha \beta} U_{\beta} \quad .
\end{equation}
The particle density $n$ can then be calculated using the definition of the first order moment, while pressure and
temperature are obtained via a suitable equation of state; in this work we use the ideal equation of state (consistent with the J\"uttner distribution):
\begin{equation}
\epsilon = 2 P = 2 n T \quad .
\end{equation}
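The moment summation of Eq.~\ref{eq:discrete_sum_moments}, the Landau-frame eigenvalue problem and the ideal equation of state can be combined into a short routine; the following is a sketch in $(2+1)$ dimensions, with helper names and the $(+,-,-)$ signature chosen for illustration.

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0])           # Minkowski metric, signature (+,-,-)

def moments(f, p):
    """Discrete moments: N^a = sum_i p_i^a f_i, T^{ab} = sum_i p_i^a p_i^b f_i."""
    N = (f[:, None] * p).sum(axis=0)
    Tab = np.einsum('i,ia,ib->ab', f, p, p)
    return N, Tab

def landau_frame(N, Tab):
    """Solve T^a_b U^b = eps U^a for the Landau energy density and velocity,
    then apply the ideal EOS eps = 2P = 2nT."""
    w, V = np.linalg.eig(Tab @ ETA)        # mixed tensor T^a_b
    k = np.argmax(w.real)                  # timelike (largest) eigenvalue
    U = V[:, k].real
    U = U / np.sqrt(U @ ETA @ U)           # normalize so U^a U_a = 1
    if U[0] < 0:                           # fix the future-pointing branch
        U = -U
    eps = w[k].real
    n = N @ ETA @ U                        # n = N^a U_a in the Landau frame
    return eps, U, n, eps / 2.0, eps / (2.0 * n)
```

For an ideal stress tensor $T^{ab} = (\epsilon + P) U^a U^b - P \eta^{ab}$ the routine recovers $\epsilon$ and $U^a$ exactly, since $U^a$ is the unique timelike eigenvector.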
The macroscopic fields in turn allow the computation of the local equilibrium distribution; finally, one applies the relaxation time collisional operator:
\begin{align}\label{eq:collision}
f_i(\bm{x}, t + \Delta t) = f_i^*(\bm{x}, t) + \Delta t~ \frac{p_i^{\alpha} U_{\alpha}}{p^0_i \tau} (f_i^{\rm eq} - f_i^*(\bm{x}, t)) \quad .
\end{align}
\subsection{ Momentum space discretization }\label{sec:mom.space-disc}
As discussed in the previous section, the definition of a Gaussian-type
quadrature represents the cornerstone in the definition of a Lattice Boltzmann
method, since it allows the {\it exact} calculation of integrals of the form of
Eq.~\ref{eq:mj-projection-coefficients} as discrete sums over the discrete
nodes of the quadrature.
In the framework of RLBM, one distinguishes between two approaches in the
definition of quadrature rules, each with advantages and
disadvantages:
i) On-lattice Lebedev-type quadrature rules
ii) Off-lattice product-based quadrature rules.
On-lattice quadrature rules \cite{mendoza-prd-2013,gabbana-pre-2017,gabbana-cf-2018}
allow retaining one of the main LBM features, namely perfect streaming. Indeed, by requiring
that all quadrature points lie on a Cartesian grid, it follows directly that at each time step
information is propagated from one grid cell to a neighbouring one, with two desirable side-effects:
i) super luminal propagation is ruled out by construction, and ii) no artificial
dissipative effects emerge, since there is no need of interpolation.
On the other hand, off-lattice quadratures, typically developed by means of product rules of Gauss-Legendre
and/or Gauss-Laguerre quadratures \cite{romatschke-prc-2011,coelho-cf-2018,ambrus-prc-2018}, offer
the possibility of handling more complex equilibrium distribution functions and to extend the
applicability of the method to regimes that go beyond the hydrodynamic one.
Conversely, the price to pay when going off-lattice is the requirement of an interpolation scheme,
which forfeits the advantages of on-lattice schemes listed above.
\begin{figure}[tbh]
\centering
\includegraphics[width=0.99\textwidth]{./fig1.pdf}
\caption{\small Two examples of stencils compatible with a third order
quadrature. The arrows represent the discrete velocities
$\vec{n}_i$, while the different
colors stand for different energy values $p^0_j$. For an on-lattice quadrature (left panel)
the velocity vectors of all energy shells lie
at the intersection between the Cartesian grid and a circle of radius
$5$. In an off-lattice example (right panel) the
different energy shells are displaced in such a way that the vectors
forming the stencil span uniformly the unit circle. In both cases,
the total number of discrete components is $N_{\rm pop}=48$.
}
\label{fig:1}
\end{figure}
For the definition of on-lattice quadratures, one can follow the so-called method of quadrature
with prescribed abscissas \cite{philippi-pre-2006}.
In practice, one needs to find the weights and the abscissae of a quadrature able
to satisfy the orthonormal conditions, up to the desired order:
\begin{align}\label{eq:mj-orthogonal-conditions}
\int \omega(p^0) J_{i_1\dots i_m}^{(m)}( p^{\mu} ) J_{j_1\dots j_n}^{(n)}( p^{\mu} ) \frac{\diff^2 p}{p^0}
&=
\sum_{i=1}^{N_{\rm pop}} w_i J_{i_1\dots i_m}^{(m)}( p^{\mu}_{i} )J_{j_1\dots j_n}^{(n)}( p^{\mu}_{i} ) \nonumber\\
&=
\delta_{mn} \delta_{i_1 j_1} \dots \delta_{i_n j_m} \quad ;
\end{align}
where $p^{\mu}_{i}$ are the discrete momentum vectors. A convenient parametrization of
the discrete momentum vectors in the ultra-relativistic limit writes as follows:
\begin{equation}\label{eq:discrete-momenta}
p^{\mu}_{i,j} = p^0_j \left(1, \frac{\vec{n}_i}{|| \vec{n}_i ||} \right),
\end{equation}
where $\vec{n}_i \in \mathbb{Z}^2$ are the vectors forming the stencil, which are to be found at the intersection
between the Cartesian grid and a circle (or a sphere in (3+1) dimensions).
Massless
particles travel at the speed of light irrespective of their energy, so this set of vectors can be assigned to different energy shells,
each labeled via the index $j$, which are properly chosen in such a
way that Eq.~\ref{eq:mj-orthogonal-conditions} returns valid solutions for the weights $w_i$.
Note that vectors $\vec{n}_i$ must all have the same length $|| \vec{n}_i ||$, so information is correctly propagated at the speed of light.
Finally, an $N$-th order quadrature rule needs a minimum of $N + 1$ energy shells to recover the moments exactly.
Therefore, following the procedures adopted in \cite{gabbana-pr-2020,mendoza-prd-2013}, we
select such shells as the zeros of the orthogonal polynomial $J_{0 \dots 0}^{(N+1)}(p^0)$.
In order to define a quadrature rule on a Cartesian grid it is expedient to use
the same set of velocity vectors for the different energy shells; this allows one to
achieve enough degrees of freedom to satisfy the orthonormal conditions in
Eq.~\ref{eq:mj-orthogonal-conditions} using vectors of a relatively small
length.
In the left panel of Fig.~\ref{fig:1} we show an example of a quadrature recovering up to the third order moments
of the distribution function, using vectors of length $5$. Extending the procedure to higher orders
leads to stencils unviable for practical computational purposes, since already going to the fourth order would require
using vectors of length $5 \sqrt{13}$.
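The on-lattice velocity sets can be enumerated directly; the following sketch (function name ours) lists all integer vectors on a circle of given squared radius. For $||\vec{n}||^2 = 25$ one finds the $12$ directions of the left panel of Fig.~\ref{fig:1}, which with $N+1 = 4$ energy shells gives $N_{\rm pop} = 4 \times 12 = 48$.

```python
import numpy as np

def lattice_velocities(r2):
    """All integer vectors n in Z^2 with ||n||^2 == r2, i.e. the intersection
    of the Cartesian grid with a circle of radius sqrt(r2)."""
    r = int(np.ceil(np.sqrt(r2)))
    return [(nx, ny) for nx in range(-r, r + 1)
                     for ny in range(-r, r + 1)
                     if nx * nx + ny * ny == r2]
```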
It is then clear that in order to recover the higher orders of the distribution it is necessary
to relax the condition of on-lattice streaming. Furthermore, when moving off-lattice it becomes
convenient to assign different subsets to the different energy shells. For example, the stencil in the right panel
of Fig.~\ref{fig:1} allows the definition of a quadrature rule that has the same number of discrete components, and
the same accuracy order of its on-lattice counterpart, but with a higher level of isotropy.
In order to define these off-lattice quadratures, one starts from
the observation that the orthonormal conditions in
Eq.~\ref{eq:mj-orthogonal-conditions} are equivalent to requiring the exact calculation of integrals of the form
\begin{align}\label{eq:quad-integrals}
I^{\alpha_1\dots\alpha_k} = \int \omega(p^0) p^{\alpha_1} \dots p^{\alpha_k} \frac{\diff^2 p}{p^0} \quad ,
\end{align}
for all $k \leq 2 N$.
For an ultra-relativistic gas one has $p^0 = |\bm{p}|$, so it
is useful to adopt polar coordinates and break down
the integrals of Eq.~\ref{eq:quad-integrals} into a radial part and an angular one:
\begin{align}\label{eq:integrals-pol-coord-ultra}
I^{\alpha_1 \dots \alpha_k} \propto
\left( \int_0^{\infty} e^{-\frac{p}{T}} p^{k} dp \right)
\left( \int_0^{ 2 \pi} (\cos \theta)^{k_1} (\sin \theta)^{k_2} d\theta \right) \quad ,
\end{align}
with $ 0 \leq k_1 + k_2 \leq k $.
We form the quadrature rule as a product rule: the Gauss-Laguerre rule is the most natural choice for
the radial component of Eq.~\ref{eq:integrals-pol-coord-ultra}, while for the angular part
we consider a simple mid-point rule (since the angular integral can be reworked using basic
trigonometry as a sum of integrals of circular functions of maximum degree $2N$):
\begin{align}
p_{ij}^{\mu} =
\begin{pmatrix}
p_i \\
p_i \cos \theta_j \\
p_i \sin \theta_j
\end{pmatrix} \quad ,
&&
w_{ij} = w^{(p)}_i w^{(\theta)}_j \quad ,
&&
\forall 0 \leq i \leq N, 0 \leq j \leq 2N \quad .
\end{align}
where $\{p_i,\; i = 0, 1, \dots, N\}$ are the $N+1$ roots of $L_{N+1}(p)$, the Laguerre polynomial of order $N+1$, and
\begin{align}
\theta_j &= j \frac{2\pi}{2N+1} \quad , \\
w^{(\theta)}_j &= \frac{2 \pi}{2N + 1} \quad , \\
w^{(p)}_i &= \frac{p_i}{(N+2)^2 [L_{N+2}(p_i)]^2} \quad .
\end{align}
The total number of points in the quadrature is $N_{\rm pop} = (N+1)(2N+1)$.
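The radial nodes and weights can be verified numerically; a sketch using NumPy's Gauss-Laguerre routine, which reproduces the closed-form weights quoted above and integrates the radial moments $\int_0^\infty e^{-p} p^k \, dp = k!$ exactly up to degree $2N+1$ (variable names are ours).

```python
import math
import numpy as np
from numpy.polynomial.laguerre import laggauss, Laguerre

N = 3                                    # target quadrature order
p_nodes, w_nodes = laggauss(N + 1)       # roots of L_{N+1} with Gauss weights

# The Gauss-Laguerre weights coincide with the closed form quoted in the text:
# w_i = p_i / ((N+2)^2 [L_{N+2}(p_i)]^2)
w_formula = p_nodes / ((N + 2) ** 2 * Laguerre.basis(N + 2)(p_nodes) ** 2)
```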
At high Knudsen numbers, however, this level of discretization is
not sufficient to properly describe the dynamics of the system: larger and more
evenly distributed sets of discrete velocities are needed to cover
the velocity space.
A possible solution is to increase the order of the angular quadrature, i.e.\
to raise the number of velocities per energy shell, at the price of an
increased computational cost. A further measure, which improves the quality of
the solution without increasing the number of discrete velocities, is the
decoupling of radial and angular abscissae.
In fact, once the required quadrature orders needed to recover the requested hydrodynamic moments are met, the restriction
of using the same sub-stencils $\theta_j$ for every energy shell $p_i$ can be lifted, and the isotropy
of the model can be enhanced with no need of increasing the overall quadrature order.
In $(2+1)$ dimensions this is easily achieved by rotating the sub-stencils related to different energy shells, in
such a way that the discrete velocities cover the velocity space in the most homogeneous possible way.
With these two recipes in mind, our quadrature becomes:
\begin{align}\label{eq:off-lattice-quad}
p_{ij}^{\mu} =
\begin{pmatrix}
p_i \\
p_i \cos \theta_{ij} \\
p_i \sin \theta_{ij}
\end{pmatrix} \quad ,
&&
w_{ij} = w^{(p)}_i w^{(\theta)}_j \quad ,
&&
\forall 0 \leq i \leq N, 0 \leq j \leq K - 1\quad .
\end{align}
where $\rm K$ can be chosen freely as long as $K \geq 2N+1$ and
\begin{align}
\theta_{ij} &= \left(j + \frac{i}{N+1} \right) \frac{2\pi}{K} \quad , \\
w^{(\theta)}_j &= \frac{2 \pi}{K} \quad , \\
w^{(p)}_i &= \frac{p_i}{(N+2)^2 [L_{N+2}(p_i)]^2} \quad .
\end{align}
All together, there are $N_{\rm pop}=K(N+1)$ points.
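Putting the pieces together, the rotated stencil of Eq.~\ref{eq:off-lattice-quad} can be built as follows; the function name is ours, and the rest-frame temperature is set to one so that the Gauss-Laguerre weights absorb the $e^{-p}$ factor.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def off_lattice_stencil(N, K):
    """Momenta p_{ij}^mu and weights w_ij of the rotated product quadrature:
    N+1 Gauss-Laguerre energy shells, K angles per shell, with sub-stencils
    rotated against each other by i/(N+1) of the angular spacing."""
    assert K >= 2 * N + 1
    p_r, w_r = laggauss(N + 1)
    P, W = [], []
    for i, (pi, wi) in enumerate(zip(p_r, w_r)):
        for j in range(K):
            th = (j + i / (N + 1)) * 2 * np.pi / K
            P.append([pi, pi * np.cos(th), pi * np.sin(th)])
            W.append(wi * 2 * np.pi / K)
    return np.array(P), np.array(W)
```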
In the right panel of Fig.~\ref{fig:1} we show an example of a quadrature obtained
with this new method, to be compared with the more traditional on-lattice one in the left panel.
\section{Numerical results}\label{sec:num-results}
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\textwidth]{./fig2.pdf}
\caption{\small Mono-dimensional Sod Shock tube benchmark in the free-streaming regime
($\rm{Kn} \to + \infty $, reached by setting $\tau \to +\infty$ in Eq.~\ref{eq:discrete-rbe})
at time $t/t_{max} = 0.9$ with grid size $2000 \times 1$.
The analytic velocity and pressure fields given in \ref{appendix-1} (orange line)
are compared with numerical results produced using
different off-lattice stencils. In the top panels, the analytic solution is compared with
a third order stencil with $K = 12$ (green line) and a fifth order quadrature with the same value
of $\rm K$ (blue line). These panels give no evidence of an increase in the quality of the solution
when increasing the order of the quadrature. In the bottom panels, the analytic solution is compared
with a third order stencil with $K=120$, which accurately reproduces the analytic results.
}
\label{fig:2}
\end{figure}
\subsection{Mono-dimensional Shock Waves}\label{subsec:mono-sod}
We test the ability of our new numerical scheme to simulate beyond-hydrodynamic regimes, considering
as a first benchmark the Sod shock tube problem, which has an analytic solution in the free streaming regime,
derived in \ref{appendix-1}.
In our numerical simulations we consider a tube defined on a grid of $L \times 1$ points. The tube is filled
with a fluid at rest, with a discontinuity in the values of the thermodynamic quantities in the middle
of the domain (that is, considering a $[-L/2,L/2]$ domain, at $x = 0$).
Normalizing all quantities to appropriate reference values, we take
\begin{align}\label{eq:initial-cond-macro}
\left(~ \frac{P}{P_0},~ \frac{n}{n_0},~ \frac{T}{T_0},~ \beta~ \right) =
\begin{cases}
(2.25,~ 1.5,~ 1.5,~ 0.0) \quad\quad x < 0 \\
(0.05,~ 0.1,~ 0.5,~ 0.0) \quad\quad x > 0
\end{cases}
\end{align}
Once the division between the two domains is removed, pressure and temperature differences
develop into a mono-dimensional dynamics of shock - rarefaction waves traveling along the tube.
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\textwidth]{./fig3.pdf}
\caption{\small Comparison of the L2 difference between stencils with
different values of $N_{\rm pop}$ on a grid size $2000 \times 1$ and
a very high resolution instance with $N_{\rm pop} = 2400$ and grid
size $4000 \times 1$ for different values of $\rm Kn$. For values of
$\rm Kn$ in the hydrodynamic regime, the quality of the solution does
not depend significantly on the number of populations $N_{\rm pop}$
and $\epsilon$ only depends on spatial resolution. As $\rm Kn$
increases, $\epsilon$ depends on $N_{\rm pop}$ until the saturation point is
reached, and the residual error depends again on spatial resolution.
The saturation point grows with $\rm Kn$, from $\sim 100$ for
$\rm{Kn}=0.05$ to $\sim 350$ for $\rm{Kn} \sim 10$. All $\epsilon$
values are normalized with respect to their asymptotic value
$\epsilon_0$, which is of order $10^{-3}$.
}
\label{fig:3}
\end{figure}
Fig. \ref{fig:2} shows a subset of the results of our simulation at time $t/t_{max} = 0.9$
($t_{max}$ being the time needed by the shock to reach the edge of the box),
for two different quadrature orders and for several choices of $\rm K$. Since
higher order quadratures naturally imply larger values of $\rm K$, it is in principle
debatable which of the two is the main factor leading to accurate results in the
beyond-hydrodynamic regime. Fig.~\ref{fig:2} provides an answer to this question.
We first show that, for a comparable (and low) value of $\rm K$, quadratures of
different order lead to similar (and unsatisfactory) results; on the other hand,
limiting the quadrature order to $3$ but substantially increasing $\rm K$ (and
consequently $N_{\rm pop}$) yields results in very good agreement with the
analytic solution. This provides strong evidence that a proper representation of these kinetic
regimes can be achieved by employing sufficiently dense velocity sets, even with
quadratures recovering only the minimum number of moments of the particle
distribution needed for a correct representation of the thermodynamic
evolution of the system.
Starting from the initial conditions defined in Eq.~\ref{eq:initial-cond-macro} we now extend our analysis to intermediate regimes
characterized by finite values of the Knudsen number, with the aim of establishing a relation between $\rm{Kn}$ and the optimal
choice for $\rm K$. In our simulations, we use a fixed value of the relaxation time $\tau$, and assume the following expression
for the Knudsen number
\begin{equation}
\rm{Kn} = \frac{c ~ \tau}{L} \quad ;
\end{equation}
the value of $\tau$ is properly rescaled whenever one compares simulations with different $L$ but equal $\rm Kn$.
We use quadrature rules of order $N=3$, with different values of $\rm K$, and compare
against a reference solution obtained solving the RTA with a highly refined discretization
both in terms of momentum space and grid resolution. For the reference solution we use
$K = 600$ and a grid size $4000 \times 1$.
In order to quantify the accuracy of the result we introduce the parameter $\epsilon$,
the relative error computed in L2-norm of the macroscopic velocity profile
\begin{align}\label{eq:rel_err}
\epsilon = \frac{|| \beta - \beta_{\rm ref} ||_2}{|| \beta_{\rm ref} ||_2} \quad .
\end{align}
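For reference, the error measure of Eq.~\ref{eq:rel_err} in code form (a one-line sketch; the function name is ours):

```python
import numpy as np

def rel_l2_error(beta, beta_ref):
    """Relative L2 error: ||beta - beta_ref||_2 / ||beta_ref||_2."""
    return np.linalg.norm(beta - beta_ref) / np.linalg.norm(beta_ref)
```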
As expected, at low $\rm Kn$ values, $\epsilon$ stays constant as one increases the value of $\rm K$, and the differences between the two solutions are only
due to the finer spatial resolution of $\beta_{\rm ref}$. When transitioning to beyond-hydro regimes, $\epsilon$ starts to exhibit a power
law dependency with respect to $\rm K$ (and therefore $N_{\rm pop}$) since the low order momentum space discretization comes into play.
This power-law decay bottoms out once the size of the artifacts in the macroscopic profiles
(i.e. the ``staircase'' effect visible in Fig.~\ref{fig:2}) becomes comparable with the grid spacing.
From that point on, $\epsilon$ stays constant, as the spatial resolution error becomes preponderant over the velocity resolution one.
As expected, the optimal choice for $\rm K$ grows as one transitions from the hydrodynamic to the ballistic regime.
In any case, from Fig.~\ref{fig:3} it is possible to appreciate that the minimum number of $N_{\rm pop}$ never exceeds $\sim 350$ populations.
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\textwidth]{./fig4.pdf}
\caption{\small Convergence analysis of the numerical scheme in three
different kinetic regimes, using stencils with $N_{\rm pop}$ $48$, $240$,
and $480$. The comparison is performed with respect to a high resolution
simulation using $N_{\rm pop} = 2400$ (grid size $12000 \times 1$).
While in the hydrodynamic regime ($\rm{Kn} = 0.002$)
the results do not depend on the number of discrete components employed,
the dependence of the relative error $\epsilon$ on $N_{\rm pop}$ becomes
evident as we transition towards the ballistic regime. The figures also clearly highlight that,
for the larger $\rm Kn$ values, the error almost reaches a ($N_{\rm pop}$ dependent)
plateau as the grid size becomes finer and finer.
}
\label{fig:4}
\end{figure}
We conclude this section with a convergence analysis of the method.
In Fig.~\ref{fig:4} we analyze the scaling of the relative error Eq.~\ref{eq:rel_err} as a function of
the grid size in three different kinetic regime, comparing stencils with $N_{\rm pop}$ $48$, $240$, and $480$.
While in the hydrodynamic regime ($\rm{Kn} = 0.002$) the results do not depend on the number
of discrete components employed, the dependence of the relative error $\epsilon$ on $N_{\rm pop}$
becomes evident as we transition towards the ballistic regime.
In this case, the constraint of moving particles along a finite number of directions introduces
a systematic error, which becomes dominant as one increases the grid resolutions. As a result,
when the momentum space is not adequately discretized the error stops scaling, eventually reaching almost a plateau value.
\subsection{Two-dimensional Shock Waves}\label{subsec:bi-sod}
Purely bi-dimensional shock waves are commonly used as validation benchmarks in relativistic and non-relativistic
\cite{suzuki-mnras-2016,delzanna-aa-2003,chen-jcp-2017,marti-lrca-2015} CFD solvers, since they provide a useful
test bench to evaluate dynamics in the presence of sharp gradients.
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\textwidth]{./fig5.pdf}
\caption{ \small Color plot of the particle density for the bi-dimensional Sod problem, with the initial conditions given in
Sec.~\ref{subsec:bi-sod} and at time $t/t_{max} = 0.9$. The top panels show solutions for a spatial grid
with $1000 \times 1000$ lattice points, and $N_{\rm pop} = 2400$. The bottom panels have instead a spatial grid of
$500 \times 500$ lattice points, with $N_{\rm pop} = 48$. From left to right different kinematic regimes (different $\rm Kn$)
are explored. As $\rm Kn$ grows and the dynamics enters the beyond-hydrodynamic regime, $N_{\rm pop} = 48$
performs poorly while sensible results are obtained with the large value of $N_{\rm pop}$. Contour lines are logarithmically spaced.
}
\label{fig:5}
\end{figure}
In a box domain of extension $[-L/2, L/2] \times [-L/2, L/2]$, we impose the initial conditions
\begin{align}\label{eq:initial-cond-macro-2d}
\left(\frac{P}{P_0}, \frac{n}{n_0}, \frac{T}{T_0}, \beta_x, \beta_y\right) =
\begin{cases}
(0.5, ~~ 0.5, ~~ 1.0, ~~ 0.0, ~~ 0.0) \quad, \quad x < 0 \quad y < 0 \quad, \\
(1.0, ~~ 0.5, ~~ 2.0, ~~ 0.0, ~~ 0.1) \quad, \quad x > 0 \quad y < 0 \quad, \\
(1.0, ~~ 0.5, ~~ 2.0, ~~ 0.1, ~~ 0.0) \quad, \quad x < 0 \quad y > 0 \quad, \\
(1.0, ~~ 1.0, ~~ 1.0, ~~ 0.0, ~~ 0.0) \quad, \quad x > 0 \quad y > 0 \quad.
\end{cases}
\end{align}
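The quadrant initialization above can be sketched as follows; the grid layout (cell centers, with $x=0$ falling between the two halves) and the field ordering $(P, n, T, \beta_x, \beta_y)$ are our own conventions for this sketch.

```python
import numpy as np

def riemann2d_init(L):
    """Quadrant initial data for the 2D Sod problem on an L x L grid:
    each quadrant carries constant (P, n, T, beta_x, beta_y)."""
    quads = {(0, 0): (0.5, 0.5, 1.0, 0.0, 0.0),   # x < 0, y < 0
             (1, 0): (1.0, 0.5, 2.0, 0.0, 0.1),   # x > 0, y < 0
             (0, 1): (1.0, 0.5, 2.0, 0.1, 0.0),   # x < 0, y > 0
             (1, 1): (1.0, 1.0, 1.0, 0.0, 0.0)}   # x > 0, y > 0
    x = np.arange(L) - (L - 1) / 2.0              # cell centers around x = 0
    X, Y = np.meshgrid(x, x, indexing='ij')
    fields = np.zeros((5, L, L))
    for (qx, qy), vals in quads.items():
        mask = ((X > 0) == bool(qx)) & ((Y > 0) == bool(qy))
        fields[:, mask] = np.array(vals)[:, None]
    return fields
```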
Under these settings, the system develops into a square shock wave that travels toward the top-right part of the box.
In Fig.~\ref{fig:5} we show a snapshot of the density field at time $t/t_{max} = 0.9$,
for three different values of the Knudsen number, corresponding to a hydrodynamic regime ($\rm{Kn} = 0.002$),
a transition regime ($\rm{Kn} = 0.1$), and an almost ballistic regime ($\rm{Kn} = 100$).
The top panels are obtained using a model employing a third order quadrature with $K = 600$, while the bottom panels
use $K = 12$. The two solvers are in excellent agreement when working in the hydrodynamic regime,
with artificial patterns emerging for the case $K = 12$ as we transition to beyond-hydro regimes.
Similarly to what has been done in the previous section for the case of the mono-dimensional shock wave,
we have investigated once again the dependency of the optimal choice for $\rm K$ with respect to $\rm{Kn}$.
This time the reference solutions have been calculated using a quadrature with $K=600$ and a grid of size
$1000 \times 1000$. All other simulations employ a grid of size $250 \times 250$.
The results are presented in Fig.~\ref{fig:6}, and closely resemble those of the mono-dimensional case:
the optimal choice for $\rm K$ is consistent with that of a mono-dimensional flow,
and even at large values of $\rm{Kn}$ the minimum number of velocities required to obtain a correct
solution is in line with the figures obtained for the Sod tube.
\begin{figure}[h!]
\centering
\includegraphics[width=0.99\textwidth]{./fig6.pdf}
\caption{\small Comparison of the L2 difference between stencils at
different values of $N_{\rm pop}$ (grid size $250 \times 250$) and a very
high resolution instance with $N_{\rm pop} = 2400$ and grid size $1000
\times 1000$ for different values of $\rm Kn$. For values of $\rm Kn$ in
the hydrodynamic regime, the quality of the solution does not depend
significantly on $N_{\rm pop}$. As $\rm Kn$ increases, $\epsilon$ starts
to depend on $N_{\rm pop}$ until the saturation point is reached. Beyond this value, the
stepping error falls below the grid resolution and can no longer be resolved.
The saturation value moves with $\rm Kn$, from $<100$ in the
case $\rm{Kn}=0.05$ to $\sim 250$ in the case $\rm{Kn} \sim 10$.
$\epsilon$ values are normalized with respect to their asymptotic value
$\epsilon_0$, which is of order $10^{-3}$.
}
\label{fig:6}
\end{figure}
\section{Conclusion}\label{sec:conclusions}
In this paper, we have presented a Relativistic Lattice Boltzmann Method for the simulation
of gases of ultra-relativistic particles in two spatial dimensions.
The method is able to describe free-streaming dynamics ($\rm{Kn} \gg 1 $) as well as hydrodynamics
($\rm{Kn} \ll 1$). The simulation of beyond-hydro regimes is enabled by an off-lattice discretization technique of
the momentum space, which comes at the price of introducing some amount of numerical diffusivity.
The procedure consists in adopting a product rule for the quadratures (a strategy already adopted in the past,
for example in \cite{ambrus-prc-2018}) and in the additional step of employing different velocity subsets
for the different energy shells.
In this way, a finer discretization of the two-dimensional velocity space is achieved, which is instrumental
for simulations at high values of $\rm Kn$.
The method has been validated on two different realizations of the Sod shock tube problem, a
popular benchmark in fluid dynamics. We have considered both mono- and bi-dimensional flows,
also providing analytical solutions for the limiting ballistic case.
Our results show that it is possible to extend RLBM to beyond-hydro regimes,
provided that a sufficient number of populations is used, independently of the quadrature order.
We have also analyzed the minimum number of stencil components needed
to provide accurate solutions, finding that $N_{\rm pop} \sim 350$
is sufficient to reproduce the correct dynamics in every regime considered.
The numerical method developed in this paper is instrumental for the simulation of
relativistic problems that transition toward beyond hydrodynamic regimes.
Relevant examples in point are Quark Gluon Plasmas produced in heavy ion collisions, and
electron transport in exotic materials such as graphene.
Much is left for the future.
To start, it would be important to evaluate the computational performance
of the method against those of standard Monte Carlo approaches when working at finite Knudsen numbers.
Furthermore, a direct application of the method to the study
of beyond-hydro regimes in graphene will require the definition of appropriate boundary condition
schemes capable of reproducing experimental results \cite{guo-pnas-2017, kumar-np-2017, kiselev-prb-2019}.
Finally, the extension of the method to three spatial dimensions,
as well as to gases of massive particles, will be reported in an extended version of the present paper.
\section*{Acknowledgments}
The authors would like to thank Luciano Rezzolla and Lukas Weih for useful discussions.
DS has been supported by the European Union's Horizon 2020 research and
innovation programme under the Marie Sklodowska-Curie grant agreement No. 765048.
SS acknowledges funding from the European Research Council under the European
Union's Horizon 2020 framework programme (No. P/2014-2020)/ERC Grant Agreement No. 739964 (COPMAT).
AG would like to thank professor Michael G\"unther and professor Matthias Ehrhardt for their kind hospitality at Wuppertal University.
All numerical work has been performed on the COKA computing cluster at Universit\`a di Ferrara.
\section*{Data availability}
The data underlying this article will be shared on reasonable request to
the corresponding author.
Phase field models are a common framework to describe the
mesoscale kinetics of phase separation and pattern-forming processes
\cite{provatas2010phase, chen2002phase}. Since phase field models
replace a sharp interface by a diffuse order parameter profile, they
avoid numerical interface tracking, and are versatile enough to capture
topological changes. Although such models can be constructed starting
from a systematic coarse-graining of the microscopic Hamiltonian
\cite{giacomin2000macroscopic,giacomin1998phase,giacomin1997phase,
giacomin1996exact}, the use as a numerical tool to approximate
a specific free boundary problem requires in the first instance careful
consideration of their asymptotic long-time sharp interface limits.
In this paper, we will mainly focus on the Cahn-Hilliard equation
for a single conserved order parameter $u=u(\mathbf{x},t)$,
\begin{subequations}\label{main}
\begin{equation}
u_t = - \nabla \cdot \mathbf{j},
\qquad
\mathbf{j} = - M(u) \nabla \mu
\qquad
\mu = - \varepsilon^2 \nabla^2 u + f'(u).
\label{CHE}
\end{equation}
with
a double well potential
\relax
\begin{align}
f(u) &=(1-u^2)^2/2
\label{mainf}
\intertext{%
and the degenerate, quadratic mobility
}
M(u)&=(1-u^2)_+,
\label{mainm}
\intertext{on a bounded two-dimensional domain $\Omega$ with boundary conditions}
\nabla u\cdot\mathbf{n}
= 0, &\qquad \mathbf{j}\cdot\mathbf{n} = 0
\label{BC_usual}
\end{align}
\relax
\end{subequations}
at $\partial \Omega$. Here, $(\cdot)_+$ is the positive part of the
quantity in the brackets, $\mathbf{x}$ represents the two-dimensional
spatial coordinates, $t$ is the time, $\mu$ the chemical potential,
$\mathbf{j}$ the flux, and $\mathbf{n}$ the outward pointing normal
to $\partial \Omega$. Boldface characters generally represent
two-dimensional vectors. Both the potential and the mobility are
defined for all $u$.
The mobility is continuous but not differentiable
at $u=\pm 1$.
The case of a Cahn-Hilliard equation with a constant mobility
has been intensively discussed in the literature.
In particular, the sharp interface limit $\varepsilon\to0$ was determined
by Pego \cite{pego1989front}, and subsequently proven rigorously
by Alikakos et al.~\cite{alikakos1994convergence}. On a long
time scale, $t=O(\varepsilon^{-1})$, the result is the
Mullins--Sekerka problem \cite{mullins1963morphological}.
In particular, the motion of the interface
between the two phases is driven by flux from bulk diffusion.
In contrast, Cahn-Hilliard equations with degenerate mobility are commonly
expected to approximate interface motion by surface diffusion~\cite{Mulli57} on the time
scale $t=O(\varepsilon^{-2})$, where the interface velocity $v_n$ is proportional to the surface Laplacian $\Delta_s$
of the interface curvature $\kappa$,
\begin{equation}\label{vnsl}
v_n\propto\Delta_s\kappa.
\end{equation}
We note that the surface Laplacian is equal to $\partial_{ss}\kappa$ in two space dimensions, where $s$
is the arclength.
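Relation \eqref{vnsl} is easy to probe numerically for simple closed curves. The following Python sketch (illustrative only; the helper name and discretisation are ours, not part of the computations reported later) evaluates $\partial_{ss}\kappa$ spectrally for a uniformly parametrised closed curve: for a circle the curvature is constant along the curve, so the surface Laplacian vanishes and the curve is stationary under \eqref{vnsl}, whereas for an ellipse it is not.

```python
import numpy as np

def surface_laplacian_of_curvature(x, y):
    """Return (Delta_s kappa, kappa) for a smooth closed curve sampled
    uniformly in a parameter t on [0, 2*pi)."""
    n = len(x)
    ik = 1j * np.fft.fftfreq(n, d=1.0 / n)       # spectral d/dt
    d = lambda f: np.real(np.fft.ifft(ik * np.fft.fft(f)))
    xt, yt = d(x), d(y)
    xtt, ytt = d(xt), d(yt)
    g = np.sqrt(xt**2 + yt**2)                   # ds/dt
    kappa = (xt * ytt - yt * xtt) / g**3         # signed curvature
    # arclength derivative d/ds = (1/g) d/dt, applied twice
    return d(d(kappa) / g) / g, kappa

t = 2 * np.pi * np.arange(256) / 256
# circle of radius 2: kappa = 1/2 everywhere, so Delta_s kappa = 0
lap_circle, kap_circle = surface_laplacian_of_curvature(2*np.cos(t), 2*np.sin(t))
# ellipse: curvature varies along the curve, so Delta_s kappa is nonzero
lap_ellipse, _ = surface_laplacian_of_curvature(2*np.cos(t), np.sin(t))
```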
In fact, for the case of the degenerate mobility
$M(u)=1-u^2$ and either the logarithmic free energy
\relax
\begin{align*}
f(u) &=
\frac12\theta \left[(1+u)\ln(1+u) + (1-u)\ln(1-u)\right]
+\frac 12 (1-u^2),
\end{align*}
\relax
with temperature $\theta=O(\varepsilon^\alpha)$, or the
double obstacle potential
\[
f(u)=1-u^2 \quad \text{for } |u|\leq 1,
\quad f(u)=\infty \quad \text{otherwise},
\]
Cahn \emph{et al.}~\cite{cahn1996cahn} showed via asymptotic expansions that the sharp interface limit is indeed
interface motion by surface diffusion \eqref{vnsl}.
Although the logarithmic potential and the double obstacle
potential as its deep quench limit are well motivated, in particular for
binary alloys, \cite{CahnH71,CahnT94,TayloC94,CahnH58,giacomin1996exact,
kltahara1978kinetic, PuriBL97, bhate2000diffuse}, other combinations of
potentials and mobility have been used in the literature as a basis for
numerical approaches to surface diffusion~\cite{CenicG13}. Those models are often
employed in more complex situations with additional physical effects, such as the
electromigration in metals \cite{mahadevan1999phase}, heteroepitaxial
growth \cite{ratz2006surface}, anisotropic fields \cite{TorabL12,TorabLVW09}, phase separation
of polymer mixtures \cite{WolteBP06, GemmeBP05} and
more recently in solid-solid dewetting \cite{jiang2012phase} and coupled fluid
flows \cite{AbelsR09, SibleNK13,AbelsGG12}. In those models, a smooth polynomial double-well
free energy is used in combination with the mobility $M(u)=1-u^2$ or the degenerate biquadratic
mobility
$M(u)=(1-u^2)^2$ for $|u|\leq 1$. A smooth free energy is numerically more convenient to implement, especially in a multiphysical model, as it avoids
the singularity present in either the logarithmic or double obstacle potential. Authors typically justify their
choice of mobility and free energy by adapting the asymptotic analysis by Pego~\cite{pego1989front}
and Cahn et al.~\cite{cahn1996cahn} to obtain the interface motion
\eqref{vnsl} in the sharp interface limit.
Interestingly, Gugenberger et al.~\cite{gugenberger2008comparison}
recently revisited some of these models and pointed out an apparent inconsistency that
appears in the asymptotic derivations except when the interface is flat.
Other evidence suggests that the inconsistency may not be a mere technicality
but that some bulk diffusion is present and enters the interfacial mass
flux at the same order as surface diffusion. This was observed for
example by Bray and Emmott \cite{BrayE95} when considering the coarsening
rates for dilute mixtures, and by Dai and Du \cite{DaiD12}, where the
mobility is degenerate on one side of the interface but constant on the other; the papers by Glasner \cite{Glasn03} and Lu et al.~\cite{LuGBK07} also
use a one-sided degenerate mobility but consider a time regime where
all contributions from the side with the degeneracy are dominated by
bulk diffusion from the other. In fact,
an early publication by Cahn and Taylor \cite{CahnT94}
remarked that using a biquadratic potential might not drive the order
parameter close enough towards $\pm 1$ to sufficiently suppress
bulk diffusion, citing unpublished numerical results.
Diffuse interface models for binary fluids with a double well
potential and a quadratic
mobility $M(u)=1-u^2$ or $M(u)=(1-u^2)_+$ are investigated in
\cite{AbelsGG12,SibleNK13}. However, in both studies, the leading order
expressions for the interface motion do not contain bulk
diffusion contributions.
In this paper, we aim to resolve the apparent conundrum in the literature, and revisit the sharp interface limit for
\eqref{main}. We will obtain a sharp interface model where the interface
motion is driven by surface diffusion, \emph{i.e.}\ the surface Laplacian,
\emph{and} a flux contribution due to nonlinear bulk diffusion either
from one or both sides of the interface, depending on
the nature of the solutions for $u$ in the outer regime.
The matched asymptotic analysis is rather subtle, and
involves the matching of exponentially
large and small terms and multiple inner layers.
The paper is organised as follows: Section~2 approximates
solutions of \eqref{main} which satisfy $|u|\leq 1$;
Section~3 considers the asymptotic structure of the radially symmetric
stationary state, which demonstrates the matched asymptotic expansion
and exponential matching technique in a simpler setting; Section~4
returns to the general 2D time dependent problem;
Section~5 briefly discusses the sharp interface limit for a class of
solutions with the mobility $M(u)=|1-u^2|$ where $|u|\leq 1$ is not
satisfied, and for the Cahn-Hilliard model with a biquadratic degenerate
mobility $M(u)=((1-u^2)_+)^2$; Section~6 summarises and concludes the work.
\section{Preliminaries}\label{sec:prelim}
In this paper, we are interested in the behaviour of solutions to \eqref{CHE}
describing a system that has separated into regions where $u$ is close to $\pm 1$,
except for inner layers of width $\varepsilon$ between them, and
evolve on the typical time for surface diffusion, $t=O(\varepsilon^{-2})$.
We thus rescale time via $\tau = \varepsilon^2 t $, so that the
Cahn--Hilliard equation reads
\begin{subequations}\label{chefb}
\relax
\begin{equation}
\varepsilon^2 \partial_\tau u = \nabla \cdot \mathbf{j},
\qquad \mathbf{j}= M(u) \nabla \mu,
\qquad
\mu = - \varepsilon^2 \nabla^2 u + f'(u), \label{chereb}
\end{equation}
\relax
and we keep the boundary conditions on $\partial \Omega$,
\relax
\begin{equation}
\nabla u\cdot\mathbf{n}
= 0, \qquad \mathbf{j}\cdot\mathbf{n} = 0
\qquad \text{at } \partial\Omega.
\end{equation}
\relax
We will denote the subsets where $u>0$ and $u<0$ by $\Omega_+$ and
$\Omega_-$, respectively, and identify the location of the interface
with $u=0$. Moreover, we assume that $\Omega_+$
is convex unless otherwise stated, and that its boundary has $O(1)$ curvature everywhere.
We will focus on solutions of (\ref{chefb}a,b){} that satisfy $|u|\leq 1$.
The existence of such solutions has been shown by Elliott and Garcke \cite{EllioG96}.
The general procedure to obtain a description of the interface evolution is then to consider and match expansions of (\ref{chefb}a,b),
the so-called outer expansions, with inner expansions using
appropriate scaled coordinates local to the interface. The approach
assumes that the solution of (\ref{chefb}a,b){} is quasi-stationary, \emph{i.e.}\
close to an equilibrium state. Unfortunately, it is not obvious what
the appropriate nearby equilibrium state could be in the situation
we consider here. The problem arises because the equilibrium solution
to (\ref{chefb}a,b){} with constant $\mu$ does not generally satisfy the bound $|u|<1$
inside of $\Omega_+$~\cite{pego1989front}.
It is helpful to revisit the standard matched
asymptotics procedure for (\ref{chefb}a,b){} to understand the implications of
this observation. Notice that the time derivatives drop out of the
lower order outer and inner problems. The leading order inner solution
for the double well potential is simply a tanh-profile, which matches
with $\pm 1$ in the outer solution; the corresponding leading order chemical potential is zero. To
next order, the inner chemical potential is proportional to $\kappa$,
and this supplies boundary conditions for the chemical potential in
the outer problem via matching to be $\mu_1=c_1\kappa$. Here, $\mu_1$
denotes the first non-trivial contribution to the chemical potential in
the outer expansion, $\mu=\varepsilon\mu_1+O(\varepsilon^2)$, and $c_1$ represents
a fixed numerical value. It is obtained from a detailed calculation
along the lines of section~\ref{sec:axi},
which in fact shows that $c_1>0$.
It is easy to see from the third equation in \eqref{chereb} that the outer correction $u_1$
for $u=\pm 1 + \varepsilon u_1$ is given by $u_1=\mu_1/f''(\pm 1)$, thus
$u=\pm 1 + c_1 \kappa \varepsilon/4 + O(\varepsilon^2)$ near the interface. Inside
$\Omega_+$, we therefore have that the outer solution $u>1$. Notice
that we have used that $f$ is smooth at $u=\pm 1$ --- for
the double obstacle potential, there is no correction to $u=\pm 1$
in the outer problem, see \cite{cahn1996cahn}.
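The constant $f''(\pm 1)=4$ used here follows directly from \eqref{mainf}; a minimal symbolic check (illustrative only):

```python
import sympy as sp

u = sp.symbols('u')
f = (1 - u**2)**2 / 2                  # double well potential f(u) = (1-u^2)^2/2
fpp = sp.diff(f, u, 2)                 # f''(u) = 6*u**2 - 2
fpp_at_pm1 = [fpp.subs(u, v) for v in (1, -1)]   # both equal 4
```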
\begin{figure}[tp]
\centering
\includegraphics[width=0.78\textwidth]{comparing_initial_condition}
\caption{The long-time solution $u$ for the
radially symmetric degenerate Cahn--Hilliard
equation \eqref{main} for different
initial data and different mobilities. In (a, left panel), the
mobility is \eqref{mainm} and the initial data is bounded within
$[-1,1]$, while in (b, right panel) the initial data exceeds $1$ on the left
and $-1$ on the right, and the mobility is replaced by $M(u)=|1-u^2|$.
In both panels, the initial data is shown by dashed lines
while the long-time solutions for $\varepsilon=0.05$
are given by solid lines and have converged close to a stationary state.
In (a), this stationary profile is bounded between $[-1,1]$, where we
emphasize that $u$ in the left inset is
still below 1 (dashed line in the inset), while in (b), the upper bound 1 is exceeded for
$r$ less than about 0.4 (see left inset in (b)). Notice that in both (a) and (b),
the value for $u$ for $r>0.7$ is close to but visibly larger than $-1$, by an
amount that is consistent with the $O(\varepsilon)$ correction predicted by the
asymptotic analysis (for (a) in \eqref{ostat}).}
\label{comparing_initial_condition}
\end{figure}
The resolution to the above conundrum comes from the observation that
for a degenerate mobility, slowly evolving solutions can arise from
situations other than constant $\mu$ once
$|u|$ gets close to 1. To obtain an indication of how such
solutions evolve, we look at numerical solutions of the radially
symmetric version of (\ref{chefb}a,b){} on the domain $\Omega=\{(x,y); \,
r<1\}$, where $r=(x^2+y^2)^{1/2}$, starting with a tanh as initial
profile such that $ u_{\mathrm{init}}(r)<1$. The spectral method we
used is briefly described in the appendix. The numerical solution
at a later stage as shown in Fig.~\ref{comparing_initial_condition}
is positive for $r<0.5$ and negative for $r>0.5$. Notice that while
for $r>0.6$ the solution for $u$ levels out into a flat state that
is larger than $-1$ by an amount of $O(\varepsilon)$, for $r<0.4$
the solution is much closer to $u=1$. Closer inspection shows that $u$ has a
maximum which approaches $u=1$, say at $r=r^*$.
The maximum of $u$
may touch $u=1$ in either finite or infinite time. In either case,
the solution in $\Omega_+$ splits into two parts to the left and right of
$r^*$.
The flux between the two parts is very small, and this suggests
that they are nearly isolated from each other. In particular, they
do not have to be at the same chemical potential. Since we are only
interested in the phase field where it determines the evolution of
the interface, we cut off the part with $r<r^*$, and consider the
remaining part $r>r^*$ as a free boundary problem.
Returning to the general case of not necessarily radially symmetric
solutions, we introduce a free boundary $\Gamma$
near the interface inside $\Omega_+$, and cut off the parts of the
solution further inside of $\Omega_+$. At $\Gamma$, we impose
\begin{equation}\label{chefb_freebcs}
u=1, \qquad \mathbf{n}_\Gamma \cdot \mathbf{j}=0,
\qquad
\mathbf{n}_\Gamma \cdot \nabla u=0.
\end{equation}
\end{subequations}
Notice that in addition to $u=1$ and vanishing normal flux, a third
condition has been introduced at $\Gamma$. This is expected for
non-degenerate fourth order problems and permits a local expansion
satisfying \eqref{chefb_freebcs} that has the required two
degrees of freedom~\cite{KingB01}. Indeed, expanding the solution
to \eqref{chefb} in a travelling wave frame local to $\Gamma$
with respect to the coordinate $\eta$ normal to $\Gamma$ gives
$u=1-a\eta^2+O(\eta^3)$, where $a$ and the position of the free boundary
implicit in the travelling wave transformation represent the two degrees of freedom.
Also observe that if $u$ exceeds $-1$ by $O(\varepsilon)$, as suggested by the numerical
solution in Fig.~\ref{comparing_initial_condition}(a), then
$M(u)=O(\varepsilon)$. Since $\mu=O(\varepsilon)$, we expect a nonlinear
bulk flux of order $O(\varepsilon^2)$ at the interface arising from
$\Omega_-$. This is the same order as the expected flux from surface
diffusion. Indeed, as shown below, both contributions are present in the leading
order sharp interface model~\eqref{one_side_porous_mediumd}.
Another scenario is conceivable if the mobility is changed to $|1-u^2|$.
Then, with an appropriate initial condition, we obtained numerical
results for the radially symmetric case which suggest a solution
that is not confined to $|u|<1$ and which in fact converges to the usual
stationary Cahn-Hilliard solution (considered, for example,
in \cite{Nieth95})
for which $\mu$ is
constant in $\Omega$, and $u$ is larger than one in most of $\Omega_+$.
These results are shown in Fig.~\ref{comparing_initial_condition}(b).
In this case, bulk fluxes from both $\Omega_+$ and $\Omega_-$ contribute to the leading order interface dynamics,
see section~\ref{init_con_sil}.
\section{Radially symmetric stationary solution}\label{sec:axi}
By setting $u_\tau=0$ in \eqref{chefb} for a radially symmetric domain
$\Omega=\{(x,y); r<1\}$ and radially symmetric
$u=u(r)$, where $r=(x^2+y^2)^{1/2}$
and then integrating we obtain
\begin{subequations}\label{eqche}
\relax
\begin{align}
\frac{\varepsilon^2}{r} \frac{\mathrm{d}}{\mathrm{d} r} \left( r \frac{\mathrm{d} u}{\mathrm{d} r} \right)+ \eta-2u(u^2-1) &= 0,
\label{ode_bvp_freebdyproblem}
\\
u'(1) &= 0,\label{newradbc} \\
u(r^*) = 1, \qquad u'(r^*) &= 0.
\label{free_bdy_condition}
\end{align}
\relax
The point $r^*$ represents the location of the free boundary $\Gamma$ that needs to
be determined as part of the problem. The chemical potential $\eta$ is a constant that needs to be determined by fixing the size of
$\Omega_+$. This can be done by specifying
$\int_\Omega u$, or, more simply, the
position $r_0$ of the interface,
\relax
\begin{equation}
u(r_0) = 0.
\label{free_bdy_interface}
\end{equation}
\relax
\end{subequations}
Note that if we do not consider a free boundary $\Gamma$ and
impose $u'(0)=0$ instead of \eqref{free_bdy_condition},
then there exist exactly two solutions
(which can be discerned by the sign of $u(0)$) as was shown
in \cite{Nieth95}.
We will now investigate \eqref{eqche}
in the sharp interface limit $\varepsilon\to 0$ using matched asymptotics.
There is one outer region away from the interface, and
two inner layers, one located at the interface $r_0$ and one
located at $r^*$.
\subsection*{Outer region}
Inserting the ansatz
\relax
\begin{align*}
u &= u_0 + \varepsilon u_1 + \cdots, \qquad
\eta = \eta_0+ \varepsilon \eta_1 + \cdots,
\end{align*}
\relax
into \eqref{ode_bvp_freebdyproblem} and \eqref{newradbc} and
taking into account that the chemical potential $\eta$ is a constant quickly
reveals that $u_0$, $u_1$ and $u_2$ are also constants. Their values
are fixed by standard matching, that is, they are equal
to the limits of the inner solutions as $\rho\to\infty$,
which therefore have to be bounded in this limit.
\subsection*{Inner layer about the interface}
To elucidate the asymptotic structure of the interface,
we strain the coordinates about $r_0$ and write
\begin{equation}
\rho = \frac{r - r_0}{\varepsilon},
\end{equation}
so that for $U(\rho)=u(r)$, and with the interface curvature $\kappa=1/r_0$,
we have
\begin{equation}
\ti{U}'' + \varepsilon \frac{\ti{U}'}{\kappa^{-1} + \varepsilon \rho} + \eta -2 (\ti{U}^3-\ti{U}) = 0,
\qquad U(0)=0.
\end{equation}
Expanding
\relax
$
\ti{U} = \ti{U}_0 + \varepsilon \ti{U}_1 + \cdots,
$
\relax
we have, to leading order,
\begin{equation}\label{u_0_sol}
\ti{U}_0'' -2 (\ti{U}_0^3-\ti{U}_0) = -\eta_0,
\qquad U_0(0)=0.
\end{equation}
To match with the outer and the solution near $\Gamma$, $\ti{U}_0$ needs
to be bounded for $\rho\to\pm \infty$, which gives
\begin{equation}
\ti{U}_0 = - \tanh \rho, \quad \eta_0=0.
\end{equation}
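That $U_0=-\tanh\rho$ with $\eta_0=0$ satisfies \eqref{u_0_sol} follows from $(\tanh\rho)''=-2\tanh\rho\,(1-\tanh^2\rho)$; a one-line symbolic verification (illustrative only):

```python
import sympy as sp

rho = sp.symbols('rho')
U0 = -sp.tanh(rho)
# U0'' - 2*(U0**3 - U0) vanishes identically, and U0(0) = 0
residual = sp.simplify(sp.diff(U0, rho, 2) - 2*(U0**3 - U0))
```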
To $O(\varepsilon)$ we have
\begin{equation}
\ti{U}_1'' -2 (3 \ti{U}_0^2-1)\ti{U}_1 = -\eta_1 - \kappa \ti{U}_0',
\quad
\ti{U}_1(0) =0,
\label{firsto_inner}
\end{equation}
for which the solution that is bounded as $\rho \rightarrow \infty$ is given by
\begin{eqnarray}
\ti{U}_1 &=&
-\frac{1}{16} (\eta_1 + 2\kappa )
\text{sech}^2 \rho +
\frac{1}{3} (3\eta_1 - 2\kappa)
\text{sech}^2 \rho \left( \frac{3 \rho}{8} + \frac{1}{4} \sinh 2\rho + \frac{1}{32}\sinh 4 \rho \right) \nonumber \\
&& + \frac{1}{8} (2 \kappa - \eta_1) + \frac{1}{48} (2\kappa - 3 \eta_1) (2 \cosh 2\rho - 5 \; \text{sech}^2 \rho).
\label{u_1_soln}
\end{eqnarray}
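Since \eqref{u_1_soln} is somewhat unwieldy, it is worth verifying that it satisfies \eqref{firsto_inner}; it does so for arbitrary $\kappa$ and $\eta_1$, boundedness as $\rho\to\infty$ holding because the growing parts of the $\cosh 2\rho$ and $\mathrm{sech}^2\rho\,\sinh 4\rho$ terms cancel. A sympy sketch evaluating the residual at a few sample points (illustrative only):

```python
import sympy as sp

rho, kappa, eta1 = sp.symbols('rho kappa eta1')
U0 = -sp.tanh(rho)
s2 = sp.sech(rho)**2
# U_1 as given in (3.8)
U1 = (-sp.Rational(1, 16)*(eta1 + 2*kappa)*s2
      + sp.Rational(1, 3)*(3*eta1 - 2*kappa)*s2
        * (3*rho/8 + sp.sinh(2*rho)/4 + sp.sinh(4*rho)/32)
      + sp.Rational(1, 8)*(2*kappa - eta1)
      + sp.Rational(1, 48)*(2*kappa - 3*eta1)*(2*sp.cosh(2*rho) - 5*s2))
# residual of U1'' - 2*(3*U0**2 - 1)*U1 = -eta1 - kappa*U0'
residual = (sp.diff(U1, rho, 2) - 2*(3*U0**2 - 1)*U1
            + eta1 + kappa*sp.diff(U0, rho))
test_point = {kappa: sp.Rational(13, 10), eta1: sp.Rational(7, 10)}
vals = [abs(residual.subs({rho: rv, **test_point}).evalf(30))
        for rv in (-sp.Rational(6, 5), sp.Rational(2, 5), 2)]
```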
\subsection*{Inner layer about $\Gamma$}
We centre the coordinates about the free boundary $r=r^*$ and write
\begin{equation}
z = \rho + \sigma,
\qquad
\sigma\equiv(r_0-r^*)/\varepsilon.
\end{equation}
Substituting in the ansatz
$
\bar{U} = 1 + \varepsilon \bar{U}_1 + \varepsilon^2 \bar{U}_2+\ldots,
$
we obtain, to $O(\varepsilon)$, the problem
\begin{subequations}
\relax
\begin{align}
\bar{U}_1'' - 4 \bar{U}_1 &= -\eta_1,
\label{fistoder_freebdy}
\\
\bar{U}_1(0) &= 0, \quad \bar{U}_1'(0) =0,
\label{init_con_interior_firstord}
\end{align}
\relax
\end{subequations}
with the solution
\begin{equation}
\bar{U}_1 = \frac{\eta_1}{4} \left( 1 - \cosh 2 z \right).
\label{v1}
\end{equation}
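The solution \eqref{v1} is readily checked: $\bar{U}_1''=-\eta_1\cosh 2z$ and $-4\bar{U}_1=-\eta_1(1-\cosh 2z)$, so \eqref{fistoder_freebdy} and \eqref{init_con_interior_firstord} hold. In sympy (illustrative only):

```python
import sympy as sp

z, eta1 = sp.symbols('z eta1')
U1bar = (eta1/4)*(1 - sp.cosh(2*z))
# residual of U1bar'' - 4*U1bar = -eta1, plus both conditions at z = 0
residual = sp.simplify(sp.diff(U1bar, z, 2) - 4*U1bar + eta1)
initial_values = (U1bar.subs(z, 0), sp.diff(U1bar, z).subs(z, 0))
```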
\subsection*{Matching}
\label{exponential_matching}
We first observe from \eqref{free_bdy_condition} that
the location of the free boundary $\Gamma$ in the inner coordinate
$\rho= -\sigma$ satisfies $U(-\sigma)=1$, $U'(-\sigma)=0$.
However, for $\varepsilon\to 0$, we also have $U\to U_0=-\tanh(\rho)<1$.
To reconcile these conditions, we need to assume $\sigma\to \infty$
as $\varepsilon \to 0$.
Matching of the inner expansions therefore involves
exponential terms with large negative arguments $\rho$, or conversely for
large positive $z$, which we deal with in the spirit of Langer \cite{langer1983}, see also
\cite{korzec2008stationary}. The solution centred at the interface is
expanded at $\rho\to-\infty$ and the result written and re-expanded in
terms of $z=\rho+\sigma$. Notice that this change of variables can lead
to terms changing their order in $\varepsilon$ if $\sigma$ has the appropriate magnitude.
The solution for the layer around the free boundary $\Gamma$
is directly expanded
in terms of $z\to\infty$ and then the terms are matched between the two expansions.
Expanding $U_0$ and $U_1$ for $\rho\to -\infty$ and substituting $\rho=z-\sigma$ gives
\begin{eqnarray}
\ti{U} &=& \Big( 1 - \underbrace{2 \rm e^{-2\sigma} \rm e^{2z}}_{\mytag{A}{termA}} + O(\rm e^{4 z}) \Big) + \varepsilon \left\{ \underbrace{\frac{1}{24} (2\kappa - 3\eta_1)\rm e^{2\sigma} \rm e^{-2z}}_{\mytag{B}{termB}} + \underbrace{\frac{1}{2} (\kappa - \eta_1)}_{\mytag{C}{termC}} \right. \nonumber \\
&& \left.+\underbrace{ \left[ \left( \frac{7\eta_1}{4} - \frac{11 \kappa}{6}\right) +\left( \frac{3 \eta_1}{2} -\kappa \right)(z-\sigma) \right] \rm e^{-2\sigma} \rm e^{2z}}_{\mytag{D}{termD}} + O(\rm e^{4z} ) \right\} \nonumber \\
&& + O(\varepsilon^2).
\label{asym_series_1}
\end{eqnarray}
The inner expansion for $\bar U$
at $z\to\infty$ is
\begin{equation}
\bar{U} = 1 + \underbrace{ \frac{\varepsilon \eta_1}{4}}_{\mytag{E}{termE}} - \underbrace{\frac{\varepsilon \eta_1}{8} \rm e^{2z}}_{\mytag{F}{termF}} - \underbrace{\frac{\varepsilon \eta_1}{8} \rm e^{-2z}}_{\mytag{G}{termG}} + O(\varepsilon^2).
\label{asym_series_2}
\end{equation}
Comparing terms in (\ref{asym_series_1}) and (\ref{asym_series_2})
of the same order in $\varepsilon$ and with the same functional dependence on $z$,
we notice first that the constant terms at $O(1)$ are already matched. Matching
$\varepsilon$\ref{termC} and \ref{termE}
yields
\begin{equation}
\eta_1 = \frac{2}{3} \kappa.
\label{eta_1_eq}
\end{equation}
As a result, the term \ref{termB} is zero.
Matching term \ref{termA} and \ref{termF}, we arrive at
the condition
$2 \rm e^{-2\sigma} = {\varepsilon\kappa}/{12}$,
which we solve for $\sigma$, giving
\relax
\begin{equation}
\sigma = \frac{1}{2}\log\left( \frac{24}{\varepsilon\kappa}\right).
\end{equation}
\relax
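The two matching relations above are elementary to solve; a short sympy sketch recovering \eqref{eta_1_eq} and the expression for $\sigma$ (illustrative only):

```python
import sympy as sp

kappa, eps = sp.symbols('kappa varepsilon', positive=True)
eta1, sigma = sp.symbols('eta1 sigma')
# matching C/E: (1/2)*(kappa - eta1) = eta1/4  =>  eta1 = 2*kappa/3
eta1_sol = sp.solve(sp.Eq((kappa - eta1)/2, eta1/4), eta1)[0]
# matching A/F: 2*exp(-2*sigma) = eps*eta1/8   =>  sigma = log(24/(eps*kappa))/2
sigma_sol = sp.solve(sp.Eq(2*sp.exp(-2*sigma), eps*eta1_sol/8), sigma)[0]
```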
We can now determine the outer solutions.
We note that in the more general, time-dependent situation,
the presence of a non-zero correction $u_1$ will give rise to a flux
at $O(\varepsilon^2)$.
Using the limits of $U_0$ and
$U_1$ as $\rho\to\infty$, we obtain
\begin{equation}\label{ostat}
u_0 = - 1 , \qquad
u_1 = \frac{\kappa}{6}.
\end{equation}
\subsection*{Higher corrections}
At this stage, it is obvious that the matching is not yet complete to $O(\varepsilon)$,
as the terms in
\eqref{asym_series_2} and
\eqref{asym_series_1}, respectively,
$\varepsilon$\ref{termD} and \ref{termG}
are non-zero and lack counterparts in the other expansion.
This can be resolved by considering the next higher order solutions
$\bar{U}_2 $ and $U_2$, which, in fact, will also be useful in section~\ref{sec:sid}.
We include $\varepsilon^2\eta_2$ in the expansion for $\eta$,
and allow for corrections to $\sigma$
via the expansion
\begin{equation}\label{sigexp}
\sigma = \frac{1}{2}\log\left( \frac{24}{\varepsilon\kappa}\right)+\varepsilon \sigma_1 + \cdots.
\end{equation}
The $O(\varepsilon^2)$ problem at the interface is given by
\relax
\begin{align}
\ti{U}_2'' -2 (3 \ti{U}_0^2-1)\ti{U}_2 &= -\eta_2 - \kappa \ti{U}_1' + \rho \kappa^2 \ti{U}_0' + 6 \ti{U}_0 \ti{U}_1^2 \nonumber \\
&= -\eta_2 - \frac{\kappa^2}{6} \tanh^5 \rho - \rho \kappa^2 \text{sech}^2 \rho - \frac{\kappa^2}{3} \tanh \rho \; \text{sech}^2 \rho,
\label{second_ord_inner}
\end{align} together with
\relax
$U_2(0)=0$ and boundedness for $U_2$ as $\rho\to\infty$.
The solution is
\relax
\begin{align}
\ti{U}_2 &= - \frac{\eta_2}{8} - \frac{\rho \kappa^2}{4} - \frac{1}{8} \cosh 2 \rho \left( \eta_2 + \frac{2}{3} \rho \kappa^2 \right) + \frac{1}{16} \text{sech}^2 \rho \left( 5 \eta_2 + \frac{23}{6} \rho \kappa^2 - 2 \rho^2 \kappa^2 \right) \nonumber \\
&\quad+ \frac{1}{4} \rho \kappa^2 \log \left( \frac{1}{2} \rm e^{\rho} \right) \text{sech}^2 \rho + \frac{\kappa^2}{8} \text{sech}^2 \rho \; \mathrm{Li}_2 (-\rm e^{2\rho})
\nonumber \\
&\quad - \frac{\kappa^2}{288} \sinh 2\rho \left( 1- 24 \log \cosh \rho \right)
\nonumber \\
&\quad - \frac{\kappa^2}{96} \tanh \rho \left( 1- 24 \log \cosh \rho - \frac{8}{3} \text{sech}^2 \rho \right)
+ \frac{1}{16}\left(\frac{\pi^2}{6} \kappa^2 - \eta_2 \right)
\text{sech}^2 \rho
\notag\\
&\quad
+\left( \frac{\kappa^2}{36} ( 1+ 24 \log 2) + \eta_2\right)
\text{sech}^2 \rho \left( \frac{3 \rho}{8} + \frac{1}{4} \sinh 2\rho +\frac{1}{32} \sinh 4 \rho \right),
\label{fullsol_u2}
\end{align}
\relax
where $\mathrm{Li}_2 (x)$ is the dilogarithm function.
For $\bar{U}_2(z)$ we have
\begin{subequations}
\relax
\begin{align}
\bar{U}_2 '' - 4 \bar{U}_2 + \kappa \bar{U}_1' - 6 \bar{U}_1^2 + \eta_2&= 0,\\
\bar{U}_2(0) = 0, \quad \bar{U}_2'(0) &=0,
\end{align}
\relax
\end{subequations}
which has the solution
\relax
\begin{align}
\bar{U}_2 &=\left( \frac{\kappa}{12}\right)^2 (\cosh 4z + 3\rm e^{-2z} (1+ 4z) - 9 ) + \left(\frac{\kappa}{12}\right)^2 \rm e^{2z}
\notag
\\ & \qquad + \left(\frac{\kappa}{6} \right)^2 \rm e^{-2z} + \frac{\eta_2}{4} (1- \cosh 2z).
\label{second_ord_free_bdy}
\end{align}
\relax
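As for $\bar{U}_1$, one can confirm that \eqref{second_ord_free_bdy} solves the $O(\varepsilon^2)$ problem above (with $\bar{U}_1$ evaluated at $\eta_1=2\kappa/3$) and satisfies both conditions at $z=0$; a sympy sketch (illustrative only):

```python
import sympy as sp

z, kappa, eta2 = sp.symbols('z kappa eta2')
U1b = (kappa/6)*(1 - sp.cosh(2*z))        # bar{U}_1 with eta_1 = 2*kappa/3
U2b = ((kappa/12)**2*(sp.cosh(4*z) + 3*sp.exp(-2*z)*(1 + 4*z) - 9)
       + (kappa/12)**2*sp.exp(2*z)
       + (kappa/6)**2*sp.exp(-2*z)
       + (eta2/4)*(1 - sp.cosh(2*z)))
# residual of U2b'' - 4*U2b + kappa*U1b' - 6*U1b**2 + eta2 = 0,
# verified exactly after rewriting hyperbolics as exponentials
residual = sp.expand((sp.diff(U2b, z, 2) - 4*U2b
                      + kappa*sp.diff(U1b, z) - 6*U1b**2 + eta2).rewrite(sp.exp))
ics = (sp.expand(U2b.subs(z, 0)), sp.expand(sp.diff(U2b, z).subs(z, 0)))
```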
Expanding $U=\ti{U}_0+\varepsilon\ti{U}_1+\varepsilon^2\ti{U}_2+\cdots$ for $\rho \rightarrow -\infty$,
substituting in $\rho=z-\sigma$ and using \eqref{sigexp} leads to
\begin{eqnarray}
\ti{U} &=& 1 - \frac{\varepsilon \kappa}{12} \rm e^{2 z} (1 - 2 \varepsilon \sigma_1 )
+ \frac{1}{2} \left( \frac{\varepsilon \kappa}{12}\right)^2 \rm e^{4z}
+ \varepsilon
\left(\frac{\kappa}{6} - \frac{\varepsilon \kappa^2}{36} \rm e^{2 z} \right) \nonumber \\
&&+ \varepsilon^2 \left[ -\frac{1}{8} \eta_2 \left( \frac{24}{\varepsilon \kappa} \right) (1+2 \varepsilon \sigma_1) \rm e^{-2z} + \left( \frac{\eta_2}{4} - \frac{\kappa^2}{16} \right) \right]
+ O(\varepsilon^{3}).
\label{com_u1_u2}
\end{eqnarray}
Similarly, the expansion for $\bar{U}=\bar{U}_0+\varepsilon\bar{U}_1+\varepsilon^2\bar{U}_2+\cdots$
as $z\to\infty$ is
\relax
\begin{align}
\bar{U} &= 1 + \varepsilon \frac{\kappa}{6} \left( 1- \cosh 2z \right) \nonumber \\
&\quad + \varepsilon^2 \left[ \frac{1}{2}
\left(\frac{\kappa}{12}\right)^2 \rm e^{4z} + \frac{1}{2}
\left(\frac{\kappa}{12}\right)^2 \rm e^{-4z}
+ \left(\frac{\kappa}{12}\right)^2 ( 3\rm e^{-2z} (1+ 4z) - 9 ) \right. \nonumber \\
&\left. \;\;\;\ + \left(\frac{\kappa}{12}\right)^2 \rm e^{2z} + \left(\frac{\kappa}{6}\right)^2 \rm e^{-2z} + \frac{\eta_2}{4} (1-\cosh 2z) \right].
\label{second_order_freebdy}
\end{align}
\relax
Now, we can match the $\rm e^{-2z}$ at $O(\varepsilon)$ and the
$\rm e^{2z}$ at $O(\varepsilon^2)$ terms,
and arrive at, respectively,
\relax
\begin{align}
\label{val_eta_2_sig_1}
\eta_2 &= \frac{\kappa^2}{36},
\qquad
\sigma_1 = \frac{3 \kappa}{16}.
\end{align}
\relax
For completeness
we note that the next order outer correction $u_2$ is again a constant equal
to the limit of $U_2$ as $\rho\to\infty$, with the value
$u_2 = {7\kappa^2}/{144}$.
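For the configuration of Fig.~\ref{num_asyp_expo} ($r_0=1/2$, hence $\kappa=2$), the two-term asymptotic predictions $r^*=r_0-\varepsilon\sigma$ and $\eta=\varepsilon\eta_1+\varepsilon^2\eta_2$ are easily evaluated; the following Python sketch tabulates the asymptotic values only (the helper name is ours, and no comparison data from the ODE solver is reproduced here):

```python
import math

def asymptotic_predictions(eps, r0=0.5):
    """Two-term asymptotics for the free boundary position and the
    chemical potential of the radially symmetric stationary state."""
    kappa = 1.0 / r0
    # sigma = (1/2) log(24/(eps*kappa)) + eps * 3*kappa/16 + ...
    sigma = 0.5 * math.log(24.0 / (eps * kappa)) + eps * 3.0 * kappa / 16.0
    r_star = r0 - eps * sigma
    # eta = eps * 2*kappa/3 + eps**2 * kappa**2/36 + ...
    eta = eps * 2.0 * kappa / 3.0 + eps**2 * kappa**2 / 36.0
    return r_star, eta

r_star, eta = asymptotic_predictions(0.05)
# for eps = 0.05: r_star ≈ 0.3620, eta ≈ 0.06694
```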
\begin{figure}
\centering
\subfloat{
\includegraphics[width=0.50 \textwidth]{freebdy_comparison_corrected}
}
\subfloat{
\includegraphics[width=0.455 \textwidth]{excess_pot_asym}
}
\caption{Comparing the asymptotic and numerical results for (left) the position of the free boundary and (right) the chemical potential, for a range of $\varepsilon$ and $r_0=1/2$.}
\label{num_asyp_expo}
\end{figure}
Figure \ref{num_asyp_expo} shows that the asymptotic results agree well
with the position of $\Gamma$ and the chemical potential obtained from
numerical solutions of the ODE free boundary problem (\ref{eqche}),
confirming the validity of the matched asymptotic results. The solutions
were obtained by a shooting method with fixed $\eta$ using the Matlab
package \textit{ode15s}, with $u(1)$ and \eqref{free_bdy_condition} as
the shooting parameter and condition. The value of $\eta$ is adjusted
in an outer loop via the bisection method until $r_0=1/2$ is achieved to an
accuracy of $10^{-10}$.
\section{Sharp Interface Dynamics}\label{sec:sid}
\subsection{Outer variables}
Motivated by the stationary state, we now consider the asymptotic
structure of the dynamical problem that arises for non-radially symmetric
interface geometries. For the outer expansions, we will use
\relax
\begin{align*}
u & = u_0 + \varepsilon u_1 + \varepsilon^2 u_2 + \cdots, \quad
\mu = \mu_0 + \varepsilon \mu_1 + \varepsilon^2 \mu_2 + \cdots, \quad
\mathbf{j}=\mathbf{j}_0 + \varepsilon \mathbf{j}_1 + \varepsilon^2 \mathbf{j}_2 + \cdots.
\end{align*}
\relax
\subsection{Inner variables}
As in \cite{pego1989front, gugenberger2008comparison}, we define
the local coordinates relative to the position of the interface
(parametrised by $s$), and write
\begin{equation}
\mathbf{r}(s,r,\tau) = \mathbf{R}(s,\tau) + r \mathbf{n}(s,\tau),
\end{equation}
where $\mathbf{R}$, the position of the interface $\zeta$, is defined by
\begin{equation}\label{zetapos}
u(\mathbf{R},t)=0,
\end{equation}
and $\mathbf{t} = \partial \mathbf{R}/\partial s$ is the unit tangent vector and $\mathbf{n}$ the unit outward normal. From
the Serret-Frenet formulae in 2D we have that
$\kappa \mathbf{t} = \partial \mathbf{n}/\partial s$, thus
\begin{equation}
\frac{\partial \mathbf{r}}{\partial r} = \mathbf{n}(s),
\qquad
\frac{\partial \mathbf{r}}{\partial s} = (1+r\kappa) \mathbf{t}(s),
\end{equation}
where $\mathbf{t}(s)$ is the unit tangent vector to the interface and $\kappa$ is the curvature. We adopt the convention that the curvature is positive if the osculating circle lies inside $\Omega_+$. The gradient operator in these curvilinear coordinates reads
\begin{equation}
\nabla = \mathbf{n} \partial_r + \frac{1}{1+r\kappa} \mathbf{t} \; \partial_s,
\label{grad_in_local}
\end{equation}
and the divergence operator of a vector field $\mathbf{A} \equiv A_r \mathbf{n}+ A_s\mathbf{t}$ reads
\begin{equation}
\nabla \cdot \mathbf{A} = \frac{1}{1+r\kappa} \left[ \partial_{r}\Big( (1+r \kappa) A_{n} \Big) + \partial_{s} \left(\frac{1}{1+r\kappa} A_{s} \right) \right].
\label{div_in_local}
\end{equation}
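As a consistency check on \eqref{grad_in_local} and \eqref{div_in_local}: for constant $\kappa$ the composition $\nabla\cdot\nabla$ must reduce to the polar Laplacian with radius $\kappa^{-1}+r$ and angle $\theta=\kappa s$; a sympy sketch (illustrative only):

```python
import sympy as sp

r, s, kappa = sp.symbols('r s kappa', positive=True)
f = sp.Function('f')(r, s)
J = 1 + r * kappa                           # metric factor 1 + r*kappa
# Laplacian from (4.6)-(4.7) with kappa independent of s
lap_curvilinear = (sp.diff(J * sp.diff(f, r), r)
                   + sp.diff(sp.diff(f, s) / J, s)) / J
# polar Laplacian with radius rho = 1/kappa + r and angle theta = kappa*s,
# so that d/dtheta = (1/kappa) d/ds
rho = 1 / kappa + r
lap_polar = (sp.diff(rho * sp.diff(f, r), r) / rho
             + sp.diff(f, s, 2) / (kappa**2 * rho**2))
difference = sp.simplify(lap_curvilinear - lap_polar)
```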
We let $s$ and $\rho=r/\varepsilon$ be the inner coordinates at the interface,
and let $U(\rho,s,\tau)$, $\eta(\rho,s,\tau)$ and $\mathbf{J}(\rho,s,\tau)$
denote the order parameter, chemical potential
and flux written in these coordinates, respectively.
In inner coordinates, the first two equations of \eqref{chereb},
combined with \eqref{zetapos}, become
\begin{subequations}\label{chin}
\begin{align} \label{china}
\varepsilon^2 \partial_\tau \ti{U} - \varepsilon v_n \partial_\rho \ti{U}
&= \nabla \cdot \left( M(\ti{U}) \nabla \ti{\eta} \right), \\
\eta &= - \varepsilon^2 \nabla^2 U + f'(U),\\
U(0) &=0,
\end{align}
with $v_n=\mathbf{R}_\tau\cdot\mathbf{n}$. Using equations (\ref{grad_in_local}) and (\ref{div_in_local}), we obtain
\begin{eqnarray}
\nabla \cdot \left( M(\ti{U}) \nabla \right) &=& \varepsilon^{-2} \partial_{\rho} M(\ti{U}_0) \partial_{\rho} \nonumber\\
&&+ \varepsilon^{-1} \Bigg\{ \partial_{\rho} \Big(\kappa \rho M(\ti{U}_0) + M'(\ti{U}_0) \ti{U}_1 \Big) \partial_{\rho} - \kappa \rho \; \partial_\rho M(\ti{U}_0) \partial_\rho \Bigg\} \nonumber \\
&&+ \bigg\{ \kappa^2 \rho^2 \partial_\rho M(\ti{U}_0) \partial_{\rho} - \kappa \rho \partial_\rho \Big(\kappa \rho M(\ti{U}_0) + M'(\ti{U}_0) \ti{U}_1 \Big) \partial_\rho \nonumber \\
&& + \partial_\rho \Big(\kappa \rho M'(\ti{U}_0) \ti{U}_1 + \frac{1}{2} M''(\ti{U}_0) \ti{U}_1^2 + M'(\ti{U}_0) \ti{U}_2 \Big) \partial_\rho \nonumber \\
&&+ \partial_s M(\ti{U}_0) \partial_s \bigg\}+O(\varepsilon).
\label{chinb}
\end{eqnarray}
\end{subequations}
Notice that the corresponding expression for $\nabla^2$ can be easily obtained from this
by setting $M\equiv 1$.
Taking only the first equation in \eqref{chereb} we have
\begin{equation}
\varepsilon^2 \partial_\tau \ti{U} - \varepsilon v_n \partial_\rho \ti{U} =
\frac{1}{1+\varepsilon\rho\kappa} \left[\varepsilon^{-1} \partial_{\rho}\Big( (1+\varepsilon\rho\kappa) J_{n} \Big) + \partial_{s} \left(\frac{1}{1+\varepsilon\rho\kappa} J_{s} \right) \right].
\end{equation}
In inner coordinates, we will only need to know the normal component
$J_n=\mathbf{n}\cdot \mathbf{J}$ of the flux explicitly in terms of
the order parameter and chemical potential. It is given by
\relax
\begin{align}
J_n
&= \frac{M(\ti{U})}{\varepsilon} \partial_\rho \ti{\eta} \nonumber \\
&=
\varepsilon^{-1} M(\ti{U}_0) \partial_{\rho} \ti{\eta}_0
+M'(U_0) U_1 \partial_\rho \eta_0+M(\ti{U}_0) \partial_{\rho} \ti{\eta}_1
\notag\\
&\quad+ \varepsilon \left( M(\ti{U}_0) \partial_{\rho}\ti{\eta}_2
+ M'(\ti{U}_0) \ti{U}_1 \partial_\rho \ti{\eta}_1
+ M'(\ti{U}_{0}) \ti{U}_2 \partial_{\rho} \ti{\eta}_0
+\frac{1}{2} M''(\ti{U}_0) \ti{U}_1^2 \partial_{\rho} \eta_0 \right) \nonumber \\
&\quad+
\varepsilon^2 \left[ M(\ti{U}_0) \partial_{\rho} \ti{\eta}_3 + M'(\ti{U}_0) \ti{U}_1 \partial_{\rho}
\ti{\eta}_2 + \left( M'(\ti{U}_0) \ti{U}_2 + \frac{1}{2} M''(\ti{U}_0) \ti{U}_1^2
\right)\partial_{\rho} \ti{\eta}_1 \right. \nonumber \\
& \qquad\qquad + \left. \left( M'(\ti{U}_0) \ti{U}_3 + M''(\ti{U}_0) \ti{U}_1\ti{U}_2
+ \frac{1}{6} M'''(\ti{U}_0) \ti{U}_1^3 \right)\partial_{\rho}
\ti{\eta}_0 \right]
+ O(\varepsilon^3),
\label{inner-flux}
\end{align}
\relax
which also motivates our ansatz for the expansion for $\mathbf{J}$,
given the obvious ansatz for the other variables,
\relax
\begin{align*}
U & = U_0 + \varepsilon U_1 + \varepsilon^2 U_2 + \cdots, \quad
\eta = \eta_0 + \varepsilon \eta_1 + \varepsilon^2 \eta_2 + \cdots, \\
\mathbf{J}&=\varepsilon^{-1}\mathbf{J}_{-1}
+\mathbf{J}_0 + \varepsilon \mathbf{J}_1 + \varepsilon^2 \mathbf{J}_2 + \cdots.
\end{align*}
\relax
Moreover, we introduce $z = \rho + \sigma(s,\tau)$ as the coordinate for the inner layer
about the free boundary $\Gamma$, so that the order parameter, chemical potential and flux
in these variables are given by $\bar{U}(z,s,\tau)$,
$\bar{\eta}(z,s,\tau)$ and $\bar{\mathbf{J}}(z,s,\tau)$, respectively, with expansions
\relax
\begin{align*}
\bar{U} & = \bar{U}_0 + \varepsilon \bar{U}_1 + \varepsilon^2 \bar{U}_2 + \cdots, \quad
\bar{\eta} = \bar{\eta}_0 + \varepsilon \bar{\eta}_1 + \varepsilon^2 \bar{\eta}_2 + \cdots,\\
\bar{\mathbf{J}}&=
\varepsilon^{-1}\bar{\mathbf{J}}_{-1}+
\bar{\mathbf{J}}_0 + \varepsilon \bar{\mathbf{J}}_1 + \varepsilon^2 \bar{\mathbf{J}}_2 + \cdots.
\end{align*}
\relax
Notice that the locations where the two inner layers are centred
depend on $\varepsilon$; therefore, in principle, $\sigma$ and also $R$
need to be expanded in terms of $\varepsilon$ as well. However, we are only
interested in the leading order interface motion, so to keep the notation
simple, we do not distinguish between $\sigma$ and $R$ and their leading
order contributions. We now solve and match the outer and inner problems
order by order.
\subsection{Matching}\label{subsec:ord_sol}
\subsubsection*{Leading order}
For the outer problem, we obtain to leading order
\begin{equation}
\nabla \cdot \mathbf{j}_0 = 0, \quad
\mathbf{j}_0 = M(u_0) \nabla \mu_0 , \quad
\mu_0 = f'(u_0).
\label{eliptic_pde}
\end{equation}
The requisite boundary conditions are $\nabla_n u_0 = 0 $, and
$\mathbf{n}\cdot \mathbf{j}_0= 0 $ on $\partial \Omega$.
We have
\begin{equation}
u_0 = - 1, \qquad \mu_0 = 0.
\end{equation}
The leading order expansion about the interface reads,
\begin{equation}
M(\ti{U}_0 )\partial_{\rho} \ti{\eta}_0 = a_1(s,\tau),
\quad
f'(\ti{U}_0) - \partial_{\rho\rho} \ti{U}_0 = \ti{\eta}_0.\label{U0}
\end{equation}
From the matching conditions, we require $\ti{U}_0$ to be bounded for
$\rho\to\pm \infty$. In fact, $\ti{U}_0\to -1$ as $\rho\to\infty$, giving $\ti{\eta}_0\to
0$ there. This implies $a_1=0$, and therefore also $\ti{\eta}_0=0$, which we note
matches with $\mu_0$. Moreover, from \eqref{U0}$_2$
and from \eqref{inner-flux} we have
\begin{equation}
\ti{U}_0 = -\tanh \rho,
\qquad
J_{n,-1} = 0.
\end{equation}
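This profile can be verified directly with a short numerical check. The double well $f(u)=\tfrac{1}{2}(1-u^2)^2$ assumed below is our choice for this sketch, consistent with $f''(\pm 1)=4$ as used later in the text:

```python
import math

# Direct numerical check of the leading-order profile: assuming the
# double well f(u) = (1 - u^2)^2 / 2 (so that f''(+-1) = 4, as used
# later in the text), U0 = -tanh(rho) satisfies f'(U0) - U0'' = 0,
# i.e. the leading-order equation with eta0 = 0.
def U0(rho):
    return -math.tanh(rho)

def fprime(u):
    return 2.0 * (u ** 3 - u)

h = 1e-4                                 # central-difference step
max_residual = 0.0
for i in range(-50, 51):
    rho = 0.1 * i
    second = (U0(rho + h) - 2.0 * U0(rho) + U0(rho - h)) / h ** 2
    max_residual = max(max_residual, abs(fprime(U0(rho)) - second))

print(max_residual)   # small: limited only by finite-difference error
```

The residual vanishes to within the accuracy of the finite-difference approximation of $\partial_{\rho\rho}\ti{U}_0$.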
The leading order approximation of the order parameter
in the coordinates of the inner layer at $\Gamma$
is easily found to be $\bar{U}_0=1$,
and also for the chemical potential $\bar{\eta}_0=0$,
and the normal component of the flux $\bar{J}_{n,-1} = 0$.
\subsubsection*{O($\varepsilon$) correction}
The first two parts of the outer correction problem for \eqref{chereb}
are automatically satisfied, since $\mu_0 = 0$ and $M(u_0) = 0$, by
\begin{equation}\label{j1sol}
\mathbf{j}_1 = 0.
\end{equation}
The last part requires
\begin{equation}
\mu_1 = f''(u_0) u_1 = 4 u_1.
\label{eq_chem_pot}
\end{equation}
From \eqref{chin}, and noting that $\ti{\eta}_0 = 0$, we have
\begin{equation}
\partial_{\rho} \left( M(\ti{U}_0) \partial_{\rho} \ti{\eta}_1 \right) = 0,
\quad
\ti{\eta}_1 = - \partial_{\rho\rho} \ti{U}_{1} - \kappa \partial_\rho \ti{U}_0
+ f''(\ti{U}_0)\ti{U}_1,
\quad
U_1(0)=0,
\label{mu_const_o_eps}
\end{equation}
thus $M(\ti{U}_0) \partial_{\rho} \ti{\eta}_1 = J_{n,0}$ is
constant in $\rho$. Since $J_{n,0}$ has to match with $\mathbf{n}\cdot\mathbf{j}_0=0$, it
is zero. Therefore, $\ti{\eta}_1=\ti{\eta}_1(s,\tau)$ does not depend on $\rho$.
Now (\ref{mu_const_o_eps})$_2$ and (\ref{mu_const_o_eps})$_3$ represent
the same problem as (\ref{firsto_inner}). As such, the solution
$\ti{U}_1(\rho,s,\tau)$ that is bounded as $\rho \rightarrow \infty$ can be read
off (\ref{u_1_soln}).
The $O(\varepsilon)$ problem for the inner layer at $\Gamma$ becomes
\begin{equation}
\bar{\eta}_1 = - \partial_{zz} \bar{U}_{1} + 4 \bar{U}_1,
\end{equation}
with $\bar{\eta}_1$ that does not depend on $z$,
supplemented with the conditions $\bar{U}_1(z,0,\tau) = 1$,
$\bar{U}_{1z}(z,0,\tau) = 0$. This equation is the same as the $O(\varepsilon)$
equation for the stationary state about the free boundary, and the
solution is given by (\ref{v1}).
The inner layers about $\Gamma$ and about the
interface can be matched, as outlined in
section~\ref{exponential_matching}, to obtain
\begin{equation}\label{eta1sol}
\bar\eta_1=\eta_1 = \frac{2}{3} \kappa.
\end{equation}
\subsubsection*{O($\varepsilon^2$) correction}
Combining the first two equations in \eqref{chereb}
and expanding to $O(\varepsilon^2)$ yields
\begin{equation}
\nabla \cdot \left( M'(u_0) u_1 \nabla \mu_1 \right)= 0.
\end{equation}
In view of the discontinuous derivative of $M$ at $u=u_0=-1$, we
remark that here and in the following we will use the convention
that $M'(\pm 1)$ denotes the one-sided limit for $|u|\to 1^-$, in particular that
$M'(-1)=2$, and likewise for higher derivatives.
Equation (\ref{eq_chem_pot}) provides a relation between $\mu_1$ and $u_1$.
Thus, we have
\begin{equation}
\nabla \cdot \left(\mu_1 \nabla \mu_1 \right) =0
\label{porous_medium}
\end{equation}
with the boundary condition
$\nabla_n \mu_1 = 0$ on $\partial \Omega$,
and, from matching $\mu_1$ with $\eta_1$ (given in \eqref{eta1sol})
at the interface,
\begin{equation}\label{mu1bc}
\mu_1 = \frac{2}{3} \kappa.
\end{equation}
Expanding the second equation in \eqref{chereb} to $O(\varepsilon^2)$
also gives us an
expression for the normal flux
\begin{equation}\label{j2sol}
\mathbf{n}\cdot \mathbf{j}_2 = u_1 M'(u_0) \nabla_n \mu_1 = \frac{1}{2} \mu_1 \nabla_n \mu_1,
\end{equation}
which is not in general zero.
\subsubsection*{Inner expansion about the interface}
From the $O(1)$ terms in \eqref{chin}, we obtain
\begin{equation}
\partial_\rho
\left( M(\ti{U}_0) \partial_{\rho} \ti{\eta}_2 \right)=0.
\label{eq_order_2}
\end{equation}
Thus, $M(\ti{U}_0) \partial_{\rho} \ti{\eta}_2$ is constant
in $\rho$; via \eqref{inner-flux} we can identify this expression
as $J_{n,1}$, which has to match with $\mathbf{n}\cdot \mathbf{j}_1=0$.
Therefore we can deduce that
\begin{equation}
J_{n,1}=M(\ti{U}_0) \partial_{\rho} \ti{\eta}_2=0,
\end{equation}
and $\eta_2$ is independent
of $\rho$. The solution for $\eta_2$ is found in essentially the
same way as in Section~\ref{sec:axi}, see
\eqref{sigexp} -- \eqref{val_eta_2_sig_1}, thus
\begin{equation}\label{eta2sol3}
\eta_2(s,\tau)=\frac{\kappa^2}{36}.
\end{equation}
\subsubsection*{O($\varepsilon^3$) correction}
Noting that $\ti{\eta}_0$, $\ti{\eta}_1$ and $\ti{\eta}_2$ are
independent of $\rho$, the $O(\varepsilon)$ terms in
\eqref{chin} yield
\begin{eqnarray}
-v_n \partial_\rho \ti{U}_0
&=& \partial_\rho M(\ti{U}_0) \partial_\rho \ti{\eta}_3 + \frac{2}{3}M(\ti{U}_0) \partial_{ss} \kappa.
\label{third_order}
\end{eqnarray}
Integrating equation (\ref{third_order}) from $-\infty$ to $\infty$,
we arrive at
\begin{equation}\label{vnjump}
v_n =\frac{1}{2} [M(\ti{U}_0) \partial_\rho \ti{\eta}_3 ]_{-\infty}^{\infty} + \frac{2}{3} \partial_{ss} \kappa.
\end{equation}
From \eqref{inner-flux}, we can identify the term in the bracket
as
\begin{equation}\label{Jn2}
J_{n,2}=M(\ti{U}_0) \partial_\rho \ti{\eta}_3.
\end{equation}
At $\rho\to-\infty$, we need to match $\eta_3$ and $J_{n,2}$
with the solution
for $\bar{\eta}_3$ and $\mathbf{n}\cdot\bar{\mathbf{J}}_2$
in the inner layer at $\Gamma$, which in the former case
is a function independent of $z$, and in the latter is just
zero. Thus, $\eta_3$ is matched to
a constant for $\rho\to-\infty$, and $J_{n,2}$ is matched to zero, thus
\begin{equation}\label{J2minf}
\lim_{\rho \rightarrow -\infty} M(\ti{U}_0) \partial_{\rho} \ti{\eta}_3=
\lim_{\rho \rightarrow -\infty}J_{n,2}=0.
\end{equation}
We next consider the contribution from $J_{n,2}$ as $\rho\to\infty$. It
is tempting to use \eqref{Jn2} to argue that, since $M(U_0)\to0$
exponentially fast, $J_{n,2}$ also has to tend to zero.
Then, however, $J_{n,2}$ cannot be matched with $\mathbf{n} \cdot
\mathbf{j}_2$, as we cannot simply set the latter to zero: the bulk
equation \eqref{porous_medium} already carries a boundary condition
at $\zeta$, namely \eqref{mu1bc}, and setting $\mathbf{n} \cdot
\mathbf{j}_2=0$ would impose too many conditions there.
We therefore drop the idea that $J_{n,2}\to 0$ as $\rho\to\infty$ and
match the normal fluxes,
\relax
\begin{align}
\lim_{\rho \rightarrow \infty} J_{n,2} &=
\left. \mathbf{n} \cdot \mathbf{j}_2\right|_\zeta. \label{Jj}
\end{align}
\relax
Keeping in mind that non-trivial solutions for $\mu_1$ will arise from
\eqref{porous_medium}, \eqref{mu1bc} and $\nabla_n \mu_1 = 0$ at $\partial
\Omega$, we expect that $J_{n,2}$ will not, in general,
be zero because of \eqref{j2sol} and \eqref{Jj}.
Substituting \eqref{Jn2}
and \eqref{j2sol} into the left and right hand sides of \eqref{Jj}, respectively, we obtain
\relax
\begin{align}
\lim_{\rho \rightarrow \infty} M(\ti{U}_0) \partial_{\rho} \ti{\eta}_3&=
\frac{1}{2} \mu_1 \nabla_n \mu_1 |_{\zeta}, \label{J2pinf}
\end{align}
\relax
so that now the boundary terms in \eqref{vnjump} have been determined
in terms of $\mu_1$.
Now, however, we have to accept that in general there will be
exponential growth in $\eta_3$ as $\rho\to\infty$: if the left hand side of \eqref{J2pinf}
is nonzero, and $M(U_0)\to 0$ exponentially fast as $\rho\to\infty$, then
$\eta_3$ has to grow exponentially.
In fact, if we solve
\eqref{Jn2} for $\eta_3$, and eliminate $J_{n,2}$ via \eqref{Jj} and \eqref{j2sol},
we obtain the solution
\begin{equation}\label{eta3}
\eta_3=\frac{\mu_1 \nabla_n \mu_1 |_{\zeta}}{16}
\,\left(\mathrm{e}^{2\rho} + 2\rho\right) + \eta_3^0,
\end{equation}
where $\eta_3^0$ is an integration constant.
The term proportional to $\mathrm{e}^{2\rho}$ grows exponentially
and does not appear to be matchable to the
outer solution. We will resolve this issue in a separate section, by
introducing another inner layer, and for now continue with analysing
the sharp interface model, which in summary is given by
\begin{subequations}
\label{one_side_porous_medium}
\relax
\begin{align}
\nabla \cdot (\mu_1 \nabla \mu_1) &= 0, \quad \text{in } \Omega_+, \\
\mu_1 &= \frac{2}{3} \kappa, \quad \text{on } \zeta, \\
\nabla_n \mu_1 &= 0 , \quad \text{on } \partial \Omega_{\mathrm{ext}},\\
v_n &= \frac{2}{3} \partial_{ss} \kappa + \frac{1}{4} \mu_1 \nabla_n \mu_1\quad \text{on } \zeta. \label{one_side_porous_mediumd}
\end{align}
\relax
\end{subequations}
\subsection{Additional inner layer}
The exponential growth of $\eta_3$ at $\rho\to\infty$ is a direct
consequence of the exponential decay of $M(U_0)$ to 0 as $U_0$
approaches $-1$ exponentially fast. Notice, however, that the inner
solution including the correction terms does not decay to $-1$,
because $U_1(\rho\to\infty)>0$,
so that
\[
M(U_0+\varepsilon U_1+\cdots)=
M(U_0)+\varepsilon M'(U_0) U_1+\cdots
\]
approaches a non-zero $O(\varepsilon)$ value as $\rho\to\infty$. We
need to ensure that the correction $\varepsilon M'(U_0) U_1$ to
$M(U_0)$ enters into the calculation of
the chemical potential as soon as $\rho$ is in the range where $M(U_0)$
and $ \varepsilon M'(U_0) U_1$ have the same order of magnitude. This happens
when $U_0+1=O(\varepsilon)$, i.e.\ when $\rho\sim -(1/2)\ln\varepsilon$. We therefore
introduce another layer via
\relax
\begin{align*}
\rho&=\frac{1}{2}\ln\left(\frac1\varepsilon\right)+y,
\quad
\hat{U}(y)=U(\rho),\quad
\hat{\eta}(y)=\eta(\rho),\quad
\hat {\mathbf{J}}(y)=\mathbf{J}(\rho).
\end{align*}
\relax
Notice the similarity with the change of variables at $\Gamma$. Indeed,
the solution in the new layer will have exponential terms in the
expansion at $y\to-\infty$ that need to be matched with the
expansion at the interface
$\rho\to\infty$.
In terms of the new variables, the Cahn--Hilliard equation becomes
\relax
\begin{align}
\varepsilon^2 \partial_\tau \hat{U} - \varepsilon v_n \partial_y \hat{U}
&= \nabla \cdot \left( M(\hat{U}) \nabla \hat{\eta} \right),\\
\hat \eta &=
-\partial_{yy} \hat U
-\frac{\varepsilon\kappa}{1+\varepsilon\kappa\left(y-\frac12\ln\varepsilon\right)}\partial_y \hat U
\notag\\ &\quad
-\frac{\varepsilon^2}{1+\varepsilon\kappa\left(y-\frac12\ln\varepsilon\right)}
\partial_s\left(\frac{\partial_s \hat{U}}{1+\varepsilon\kappa\left(y-
\frac12\ln\varepsilon\right)}\right) + f'(\hat U).
\end{align}
\relax
We expand
\relax
\begin{align*}
\hat{U} & = -1 + \varepsilon \hat{U}_1 + \varepsilon^2 \hat{U}_2 + \cdots, \quad
\hat{\eta} = \varepsilon \hat{\eta}_1 + \varepsilon^2 \hat{\eta}_2 + \cdots,\nonumber\\
\hat{\mathbf{J}}&=
\hat{\mathbf{J}}_0 + \varepsilon \hat{\mathbf{J}}_1 + \varepsilon^2 \hat{\mathbf{J}}_2 + \cdots,
\end{align*}
\relax
where we have tacitly anticipated that $\hat\eta_0=0$, $\hat{\mathbf{J}}_{-1}=0$.
Inserting these gives
\relax
\begin{align}
\nabla \cdot \left( M(\hat{U}) \nabla \hat{\eta} \right) &=
\partial_{y} \left[M'(-1) \hat{U}_1\partial_{y}\hat{\eta}_1\right]
+\varepsilon \partial_{y} \left[M'(-1) \hat{U}_1\partial_{y}\hat{\eta}_2\right]
+O(\varepsilon^2).
\end{align}
\relax
The normal flux $\hat{J}_n=\mathbf{n}\cdot\hat{\mathbf{J}}$ is given by
\relax
\begin{align}
\hat{J}_n
&= \frac{M(\hat{U})}{\varepsilon} \partial_y \hat{\eta}
= \left[M'(-1)\hat{U}_1+
\varepsilon \left(\left(M''(-1)/2\right) \hat{U}_1^2 + M'(-1)\hat{U}_2 \right) +
O(\varepsilon^2)\right]
\nonumber\\
&\hspace*{30ex}
\times \left[\varepsilon \partial_y
\hat{\eta}_1+\varepsilon^2
\partial_y\hat{\eta}_2+O(\varepsilon^3)\right].\label{hatJn}
\end{align}
\relax
Comparison with the ansatz for the expansion of $\hat{\mathbf J}$ immediately implies
$\hat{J}_{n,0}=0$.
\subsection*{Leading order problem}
To leading order, we have
\relax
\begin{align}
-\partial_y\left[M'(-1) \hat{U}_1\partial_y \hat{\eta}_1 \right] &=0,
\qquad
-\partial_{yy} \hat{U}_1+f''(-1) \hat{U}_1 = \hat{\eta}_1.
\end{align}
\relax
Integrating the first of these once, we obtain that the expression in square brackets has
to be constant in $y$. From \eqref{hatJn}, we see that this is the term
$\hat{J}_{n,1}$ in the normal flux, which has to match to ${J}_{n,1}$
and $\mathbf{n}\cdot{\mathbf{j}}_1$ in the interface layer and the
outer problem, respectively. Thus
$\hat{J}_{n,1}=0$.
Therefore, the contribution $\hat{\eta}_1$ is also a constant that needs to match
to the same value $2\kappa/3$ towards the outer and the interface layer, \emph{i.e.}\ for
$y\to \pm \infty$, so that we have
\relax
\begin{align}
\hat \eta_1 &= \frac23 \kappa,
\qquad
\hat{U}_1 = c_1 \mathrm{e}^{-2y}+ c_2 \mathrm{e}^{2y}+\frac16 \kappa.
\end{align}
\relax
Matching this to the constant outer $u_1=\kappa/6$, obtained from
\eqref{eq_chem_pot} and \eqref{eta1sol}, forces $c_2=0$. We next expand
$U_0$ at $\rho\to\infty$,
\begin{equation}
U_0 = -1 + 2 \mathrm{e}^{-2\rho} + O(\mathrm{e}^{-4\rho}).
\end{equation}
The second term accrues a factor of $\varepsilon$ upon passing to $y$-variables,
and thus has to match with the exponential term in
$\varepsilon \hat U_1 $,
giving $c_1=2$ and
\begin{equation}
\hat U_1 = 2 \mathrm{e}^{-2y} + \frac16\kappa.
\end{equation}
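The tail expansion of $U_0$ used in this matching step can be confirmed numerically; the following sketch (with illustrative values of $\rho$) checks that the remainder after the two-term expansion is of size $2\mathrm{e}^{-4\rho}$:

```python
import math

# Sanity check on the far-field expansion used in the matching:
# U0 = -tanh(rho) = -1 + 2 e^{-2 rho} + O(e^{-4 rho}) as rho -> infinity.
rhos = [3.0, 4.0, 5.0]
remainders = [abs(-math.tanh(rho) - (-1.0 + 2.0 * math.exp(-2.0 * rho)))
              for rho in rhos]
bounds = [2.0 * math.exp(-4.0 * rho) for rho in rhos]   # expected remainder size

for rho, rem, b in zip(rhos, remainders, bounds):
    print(rho, rem, b)    # the remainder tracks 2 e^{-4 rho}
```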
\subsection*{First correction problem}
To next order, we obtain
\begin{subequations}
\relax
\begin{align}
-\partial_y\left[M'(-1) \hat{U}_1\partial_y \hat{\eta}_2 \right] &=0,\label{nohata} \\
-\partial_{yy} \hat{U}_2-\kappa\partial_y\hat{U}_1
+f''(-1) \hat{U}_2+\tfrac{1}{2}f'''(-1)\hat{U}_1^2 &= \hat{\eta}_2,\label{nohatb}\\
\hat{J}_{n,2}&= M'(-1) \hat{U}_1\partial_y\hat \eta_2.
\label{nohatc}
\end{align}
\relax
\end{subequations}
From \eqref{nohata} and \eqref{nohatc}, and matching the flux contribution
$\hat{J}_{n,2}$ to the outer $\mathbf{n}\cdot\mathbf{j}_2$, we obtain
\relax
\begin{align}
M'(-1) \hat{U}_1 \partial_y \hat{\eta}_2
= \frac{1}{2}\left.\mu_1\nabla_n\mu_1\right|_\zeta,
\end{align}
\relax
which in turn has the solution
\begin{equation}\label{heta2}
\hat \eta_2=\frac{3\left.\mu_1\nabla_n\mu_1\right|_\zeta}{2\kappa M'(-1)}\ln\left(\frac\kappa{12} \mathrm{e}^{2y}+1\right)+\frac{\kappa^2}{36}.
\end{equation}
The integration constant has been fixed by matching $\hat\eta_2$
for $y\to-\infty$ with the interface solution $\eta_2$, see \eqref{eta2sol3}. We now need
to check if the exponential term in \eqref{heta2} matches with the
exponential term in \eqref{eta3}. Expanding at $y\to-\infty$ is
trivial, and then substituting in $y=\rho+\ln\varepsilon/2$ gives
\begin{equation}
\hat \eta_2=\frac{\varepsilon}{8M'(-1)}\left.\mu_1\nabla_n\mu_1\right|_\zeta
\mathrm{e}^{2\rho}+\frac{\kappa^2}{36}.
\end{equation}
Hence $\varepsilon^2\hat\eta_2$ contains a term proportional to $\varepsilon^3 \mathrm{e}^{2\rho}$ that is identical to the $\varepsilon^3 \mathrm{e}^{2\rho}$ term
appearing in $\varepsilon^3 \eta_3$, see \eqref{eta3}. Thus, we have
resolved the issue with the exponentially
growing term (for $\rho\to\infty$) in the correction to
the chemical potential in the interface layer expansion.
\subsection{Linear stability analysis}
\begin{table}
\centering
\begin{tabular}{| l | l | l | l | l | l | l | l |}
\hline
$\varepsilon$ & 0.01 & 0.005 & 0.003 & 0.002 & 0.001 & \bf Eq (\ref{solid_diffusion_one_side_decay_rate}) & \bf Eq (\ref{pure_solid_diffusion_decay_rate}) \\ \hline
$\lambda_{m=2}$ & $-$133.2 & $-$133.8 & $-$136.0 & $-$136.3 & $-$137.0 & \bf $-$137.4
& \bf $-$128 \\
\hline
\end{tabular}
\caption{\label{diffuse_sharp_deg1}
Relaxation rates obtained from the linearised phase field model
(\ref{lin_stab_pde_deg_mobility}) are shown for different values of $\varepsilon$
in the first five columns, and compared to the eigenvalues obtained
for linearised sharp interface models for the porous medium type
model \eqref{solid_diffusion_one_side_decay_rate} and for pure surface
diffusion \eqref{pure_solid_diffusion_decay_rate} in the next-to-last
and the last column, respectively, with $\mathfrak{M} = 2/3$. }
\end{table}
Besides the usual surface diffusion term, the sharp interface model
(\ref{one_side_porous_medium}) contains an additional normal flux
term which is nonlocal. In cases where there are multiple regions
in which $u$ is close to 1, the
nonlocal term couples the interfaces of these regions with each other
and drives coarsening, where the larger regions grow at the expense
of smaller ones.
This is not expected for pure surface diffusion.
Even for a single convex domain that is slightly perturbed
from its radially symmetric state, the effect on the relaxation dynamics
is noticeable, as we now explore.
To compare the sharp interface model with the phase field model, we consider the relaxation of an azimuthal perturbation to a radially symmetric stationary state with curvature $\kappa = 1/r_0$. For azimuthal perturbations proportional to $\cos m\theta$, the pure surface diffusion model $v_n = \mathfrak{M} \partial_{ss} \kappa$ predicts an exponential decay rate
\begin{equation}
\lambda = - \mathfrak{M} \frac{m^2(m^2-1)}{r_0^4}.
\label{pure_solid_diffusion_decay_rate}
\end{equation}
In contrast, the decay rate in the porous medium model, Equation (\ref{one_side_porous_medium}), is given by
\begin{equation}
\lambda = - \frac{2}{3} \frac{m^2(m^2-1)}{r_0^4} - \frac{1}{9} \frac{m(m^2-1)}{r_0^4} \tanh (m \log r_0^{-1}).
\label{solid_diffusion_one_side_decay_rate}
\end{equation}
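For concreteness, both rates can be evaluated for the parameters used below ($m=2$, $r_0=0.5$, $\mathfrak{M}=2/3$); the variable names in this sketch are ours:

```python
import math

# Hedged numerical evaluation of the two decay-rate formulas above for
# the m = 2 mode about the base state with r0 = 0.5, using the
# surface-diffusion coefficient 2/3 of the one-sided model.
m, r0, Mfrak = 2, 0.5, 2.0 / 3.0

lam_surface = -Mfrak * m ** 2 * (m ** 2 - 1) / r0 ** 4     # pure surface diffusion
lam_one_sided = (lam_surface
                 - (1.0 / 9.0) * m * (m ** 2 - 1) / r0 ** 4
                 * math.tanh(m * math.log(1.0 / r0)))      # one-sided bulk term

print(lam_surface)      # -128.0, the pure-surface-diffusion value in the table
print(lam_one_sided)    # strictly more negative: bulk diffusion speeds up decay
```

The pure-surface-diffusion value $-128$ reproduces the last column of Table \ref{diffuse_sharp_deg1}; the one-sided rate is more negative, consistent with the faster relaxation observed in the phase-field computations.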
In the diffuse interface model, the perturbation $v_1(r,t) \cos m\theta$ satisfies
\begin{eqnarray}
v_{1t} &=& \frac{1}{r} \frac{\partial}{\partial r} \left( r M(v_0) \frac{\partial \mathfrak{m}_1}{\partial r} \right) - \frac{m^2}{r^2} M(v_0) \mathfrak{m}_1, \nonumber \\
\mathfrak{m}_1 &=& -\frac{\varepsilon^2}{r} \frac{\partial}{\partial r}\left( r \frac{\partial v_1 }{\partial r} \right) + \left( \frac{m \varepsilon}{r} \right)^2 v_1+ f''(v_0) v_1,
\label{lin_stab_pde_deg_mobility}
\end{eqnarray}
where $v_0(r)$ is the radially symmetric stationary state. We solve this
system numerically, using the Chebyshev spectral collocation method
(see Appendix) with $\Delta t = 10^{-3}$ and 400 mesh points until
$t = 1/\varepsilon^2$. The decay rate of the eigenfunction is tracked by
monitoring its maximum. The diffuse interface decay rates are scaled
with $1/\varepsilon^2$ to compare with the sharp interface model. The base
state that is needed for this calculation is determined \emph{a priori}
with the interface, \emph{i.e.}\ the zero contour, positioned at $r_0=0.5$.
The initial condition for the perturbation,
\begin{equation}
v_1(0,r) = \exp{\left[ -{1}/({a^2 - (r_0 - r)^2 }) \right]},
\end{equation}
acts approximately as a shift to the leading order shape of the inner
layer. The constant $a$ is chosen so that the support of $v_1(0,r)$
lies in the range $r>r^*$.
The results are compared in Table \ref{diffuse_sharp_deg1}.
They show that the decay rate of the azimuthal perturbation to the
radially symmetric base state obtained for $m=2$ tends to the eigenvalue
for the linearised sharp interface model \emph{with} the contribution from
nonlinear bulk diffusion, rather than to the one for pure surface
diffusion. This confirms that \eqref{one_side_porous_medium} describes
the leading order sharp interface evolution for the Cahn--Hilliard model
\eqref{main} correctly, and that the sharp interface motion is distinct
from the one induced by pure surface diffusion.
\section{Modifications}
\subsection{Solutions with $u>1$ for the mobility $M(u)=|1-u^2|$}
\label{init_con_sil}
As pointed out in Section~\ref{sec:axi}, solutions that have a modulus
$|u|>1$ and converge to the usual stationary Cahn--Hilliard solutions
are conceivable for the mobility $M(u)=|1-u^2|$ and are seen to arise
in numerical solutions with this mobility for appropriate initial conditions.
For this case, we can carry out the asymptotic derivations to obtain
the sharp interface limit and match the inner problem to outer solutions
on both sides of the interface, accepting thereby that the outer solution
for $u$ in $\Omega_+$ is larger than one. Otherwise the detailed derivations follow
the same pattern as in section~\ref{subsec:ord_sol} and can be found
in~\cite{mscalphalee2013}.
\begin{table}
\centering
\begin{tabular}{| l | l | l | l | l | l |}
\hline
$\varepsilon$ & 0.01 & 0.005 & 0.002 & 0.001 & \bf Eq (\ref{solid_diffusion_two_side_decay_rate}) \\ \hline
$\lambda_{m=2}$ & $-$144.7 & $-$146.3 & $-$147.5 & $-$147.8 & \bf $-$148.1 \\
\hline
\end{tabular}
\caption{The decay rates of an azimuthal perturbation obtained by the diffuse and sharp interface models show good agreement for general initial conditions not bounded between $\pm1$ and mobility $M(u) = |1-u^2|$. The numerical method and discretisation parameters
carry over from Table \ref{diffuse_sharp_deg1}.}
\label{diffuse_sharp_deg2}
\end{table}
The upshot is that the sharp interface model now
has contributions from nonlinear bulk diffusion
on both sides of the interface, in addition to surface diffusion, \emph{viz.}
\begin{subequations}\label{sharp_interface_porousm}
\relax
\begin{align}
\nabla \cdot (\mu_1^\pm \nabla \mu_1^\pm) &= 0, \; \text{on} \; \Omega_{\pm}, \\
\mu_1^{\pm} &= \frac{2}{3} \kappa, \; \text{on} \; \zeta, \\
\nabla_n \mu_1^{+} &= 0, \; \text{on} \; \partial \Omega,\\
v_n &= \frac{2}{3} \partial_{ss} \kappa + \frac{1}{4} ( \mu^+_1 \nabla_n \mu^+_1 + \mu^-_1 \nabla_n \mu^-_1), \; \text{on} \; \zeta.
\label{sipvn}
\end{align}
\relax
\end{subequations}
This sharp interface model predicts an exponential decay rate of
\begin{equation}
\lambda = - \frac{2}{3} \frac{m^2(m^2-1)}{r_0^4} - \frac{1}{9} \frac{m(m^2-1)}{r_0^4} ( \tanh (m \log r_0^{-1}) +1) \label{solid_diffusion_two_side_decay_rate}
\end{equation}
for the evolution of the perturbation to the radially symmetric stationary
state with wave number $m$. Table \ref{diffuse_sharp_deg2} shows that
equation (\ref{solid_diffusion_two_side_decay_rate}) is indeed consistent
with numerical results for the diffuse model.
As a cautionary remark, we note that we are dealing here
with a sign-changing solution of a degenerate fourth order
problem, in the sense that $1-u$ changes sign
and the mobility degenerates. The theory
for this type of problems is still being developed
\cite{Galak10,EvansGK07,AlvarG13,BowenW06,Berni96,Galak13}.
\subsection{Degenerate biquadratic mobility}
\label{higher_order_deg}
For the mobilities investigated so far, nonlinear bulk diffusion
enters at the same order as surface diffusion.
If we employ $\tilde M(u) = ((1-u^2)_+)^2$, then
\begin{equation}
\mathbf{n} \cdot \mathbf{j}_2 = u_1 \tilde M'(u_0) \nabla_n \mu_1 = 0.
\end{equation}
The contribution of the bulk diffusion flux to the normal velocity
of the interface is subdominant to surface diffusion and therefore
\begin{eqnarray}\label{vnsd}
v_n &=&
\frac{1}{3} \int_{-\infty}^{\infty}
\text{sech}^4 \rho \; \mathrm{d} \rho \; \partial_{ss} \kappa =
\frac{4}{9} \partial_{ss} \kappa.
\end{eqnarray}
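The prefactor can be checked independently: the integral of $\operatorname{sech}^4$ over the real line equals $4/3$, so the coefficient is $(1/3)(4/3)=4/9$. A minimal trapezoidal-rule verification:

```python
import math

# Numerical check of the prefactor in the velocity law above: the
# integral of sech^4(rho) over the real line is 4/3, so the
# surface-diffusion coefficient is (1/3) * (4/3) = 4/9.
def sech4(rho):
    return math.cosh(rho) ** -4

R, n = 20.0, 40000                        # trapezoidal rule on [-R, R]
h = 2.0 * R / n
total = sum(sech4(-R + i * h) for i in range(n + 1))
total -= 0.5 * (sech4(-R) + sech4(R))
integral = h * total

print(integral)          # ~ 1.333333 = 4/3
print(integral / 3.0)    # ~ 0.444444 = 4/9
```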
Table \ref{diffuse_sharp_deg3} shows that the decay rate
obtained from the numerical solution of the diffuse interface model for
the degenerate biquadratic mobility
is indeed consistent with
the predictions obtained for the sharp interface model~\eqref{vnsd}
with pure surface diffusion.
\begin{table}
\centering
\begin{tabular}{| l | l | l | l | l |}
\hline
$\varepsilon$ & 0.01 & 0.005 & 0.001 & \bf Eq (\ref{pure_solid_diffusion_decay_rate}) \\ \hline
$\lambda_{m=2}$ & $-$84.6 & $-$84.7 & $-$85.2 & \bf $-$85.$\dot{3}$ \\
\hline
\end{tabular}
\caption{The decay rates obtained by the diffuse interface model
for the mobility $M(u) = ((1-u^2)_+)^2$ and $|u|<1$ show good agreement with the
surface diffusion model in \eqref{pure_solid_diffusion_decay_rate}, with
$\mathfrak{M}=4/9$, as $\varepsilon \rightarrow 0$.
The description of the numerical approach and parameters
carries over from Table~\ref{diffuse_sharp_deg1}.}
\label{diffuse_sharp_deg3}
\end{table}
\section{Conclusions}
In this paper, we have derived the sharp interface limit for a
Cahn--Hilliard model in two space dimensions with a nonlinear mobility
$M(u)=(1-u^2)_+$, and a double-well potential with minima at
$\pm 1$ for the homogeneous part of the free energy. We found that
in addition to surface diffusion, there is also a contribution from
bulk diffusion to the interface motion which enters at the same order.
This contribution enters only from one side of the interface, whereas
for the mobility $M(u)=|1-u^2|$, solutions have also been considered for which
bulk diffusion in the sharp interface limit enters from both sides
at the same order as surface diffusion.
The situation studied here was focused on the case of
convex $\Omega_+=\{\mathbf{x}\in\Omega;\, u>0\}$
with an $O(1)$ curvature for the interface $u=0$,
though the asymptotic analysis also remains
valid if $\Omega_+$ is the union of well-separated convex
domains. The dynamics for concentric circles of different phases has
also been looked into \cite{mscalphalee2013}. For the case where the
interface has turning points, the derivation needs to be revisited,
since the location of the free boundary $\Gamma$, given by
$\rho=\sigma$ in inner coordinates about the interface, depends
on the curvature so that $|\sigma|\to\infty$ if $\kappa$ tends
to zero. Moreover, as the curvature changes sign, $\Gamma$ changes
the side of the interface. On a different plane, it would also be
interesting to investigate the coarsening behaviour \cite{BrayE95}
for the sharp interface model \eqref{one_side_porous_medium}.
For ensembles of two or more disconnected spheres, pure surface
diffusion does not give rise to coarsening, but coarsening is expected for the
mixed surface/bulk diffusion flux in \eqref{one_side_porous_medium}.
While the Cahn--Hilliard equation \eqref{main} plays a role in some
biological models, see for example~\cite{KlappD06}, and may have
significance in modelling spinodal decomposition in porous media,
possibly with different combinations of mobilities, e.g. $M(u) = |1-u^2|
+ \alpha (1-u^2)^2$, see \cite{mscalphalee2013}, the main motivation for
our investigation stems from the role degenerate Cahn-Hilliard models
play as a basis for numerical simulations for surface diffusion with
interface motion driven by \eqref{vnsl}. The upshot is that the specific
combination of mobility and double well potential used in \eqref{main} is
not useful for this purpose, since a contribution from bulk diffusion
enters at the same order. For mobilities with higher degeneracy,
such as $M(u)=((1-u^2)_+)^2$,
this undesired effect is of higher order
and can be made arbitrarily small, at least
in principle, by reducing $\varepsilon$.
Nevertheless, for finite $\varepsilon$, it is still present and a
cumulative effect may arise for example through a small but persistent
coarsening of phase-separated domains.
A range of alternatives can be found in the
literature, in particular using the combination of $M=(1-u^2)_+$ or $M=|1-u^2|$ with the
logarithmic or with the double obstacle potential \cite{cahn1996cahn}.
These combinations force the order
parameter $u$ to be equal to or much closer to $\pm 1$ away from the interface,
thus shutting out the bulk diffusion more effectively.
Numerical methods have been developed for these
combinations and investigated in
the literature, see for example~\cite{BarreB02, BarreBG02, BarreBG98,
BarreBG99, BarreGN07, GarckNS07, BanasNN13}. Other approaches that have
been suggested include a dependence of the mobility on the gradients
of the order parameter \cite{mahadevan1999phase}, tensorial mobilities
\cite{gugenberger2008comparison}, or singular expressions for the chemical
potential \cite{RaRV06}.
As a final remark, we note that many analytical questions remain open.
For example, the existence of solutions that preserve the property that $|u|>1$
in some parts of $\Omega$ has not been shown.
Also, the approximation of \eqref{main} by a free boundary
problem \eqref{chefb} should be investigated systematically using
$b=\min (1-|u|)>0$ as a small parameter, in the spirit of what
was done, for example, in \cite{KingB01} for the precursor model of a
spreading droplet. The conditions at the free boundary $\Gamma$ could
then be recovered from matching to an inner solution. If $b\to 0$
in finite time,
the effect of the ``precursor'' regularisation is lost and either
the regularising effect implicit in the numerical discretisation or
any explicit regularisation that is used (e.g., the one
suggested in \cite{EllioG96}) has to be taken into account. It
would be interesting to see for which regularisations the conditions in \eqref{chefb_freebcs}
are recovered. We note, however, that the evolution of the leading order
sharp interface model in $\Omega_-$ is usually insensitive to the conditions
imposed at $\Gamma$.
\section{Appendix: Numerical Methods}
We numerically solved the radially symmetric counterpart to \eqref{main} in polar coordinates
without an explicit regularisation (such as the one used in \cite{EllioG96})
via a Chebyshev spectral
collocation method in space and semi-implicit
time-stepping, using a linearised convex splitting scheme to treat $f$.
For details on spectral methods in general, we refer the reader
to the references \cite{trefethen2000spectral,trefethen2013approximation}.
We also split the mobility as $M(u) \equiv (M(u) - \theta) + \theta$, evaluating $M(u) - \theta$ at the previous time step whilst treating the remaining $\theta$ portion implicitly at the next time step,
which improved the stability. We chose $\theta = 0.01 \varepsilon$ in our simulations.
Varying $\theta$ confirmed that the results did not sensitively depend on its
value provided it was $O(\varepsilon)$.
As the Chebyshev--Lobatto points are scarcest in the middle of the
domain, we resolve the interior layer by introducing a non-linear map
$x \in [-1,1] \mapsto r \in
[0,1]$, as suggested in \cite{boyd1992arctan},
$
r =({1}/{2}) +\mathrm{arctan} \left(\delta \tan \pi x/2 \right)/\pi,
$
where $0<\delta<1$ is a parameter that determines the degree of stretching of the interior domain, with a smaller value of $\delta$ corresponding to a greater degree of localisation of mesh points about the centre of the domain. In this paper, we generally set $\delta = 10 \varepsilon$.
This choice of $\delta$ is guided by numerical experiments, which show that further increase in the number of mesh points does not alter the stationary solution.
Moreover, since $r=0$ is a regular singular point,
we additionally map the domain $r\in[0,1]$ linearly onto a truncated domain $[10^{-10},1]$. Again, we verified that varying the truncation parameter did not
affect the numerical results.
Unless otherwise stated, the numerical simulations reported in the paper
are done with 400 collocation points and timestep $\Delta t = 10^{-3}$.
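As an illustration, the stretched grid is a few lines of code; the sketch below (Python, with hypothetical names — it is not the authors' actual code) builds Chebyshev--Lobatto nodes and pushes them through the arctan map of \cite{boyd1992arctan}.

```python
import numpy as np

def mapped_chebyshev_points(n, delta):
    """Chebyshev-Lobatto nodes x_j = cos(j*pi/n) on [-1, 1], pushed through
    the arctan map r = 1/2 + arctan(delta * tan(pi*x/2)) / pi, which clusters
    mesh points around the centre r = 1/2 for small delta."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    return 0.5 + np.arctan(delta * np.tan(np.pi * x / 2)) / np.pi

r = mapped_chebyshev_points(400, delta=0.1)
# r runs monotonically from 1 down to 0, with the midpoint node mapped to 1/2.
```

Smaller `delta` concentrates more of the collocation points near $r=1/2$, where the interior layer sits.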
The linearised phase-field models were solved using the same method, with a
base state that was obtained from a preceding run and then ``frozen'' in time, \emph{i.e.}\
not co-evolved with the perturbation.
\bibliographystyle{abbrv}
\section{Introduction}
One of the most important problems in sensor networks is the cost of sensor movement.
We can analyse this cost measured either as the sum or the maximum movement of sensors from their initial locations towards target positions.
This issue was discussed in several papers (see, e.g., \cite{ajtai_84, kranakis_shaikhet,
KK_2016_cube, dam_2014, spa_2013}) and in the related book \cite{talagrand_2005}.
In \cite{talagrand_2005} the author investigates matching theorems for $N$ random variables independently uniformly distributed in the $d-$dimensional unit cube
$[0,1]^d,$ where $d\ge 2.$
The paper \cite{spa_2013} addresses the expected sum of movements of $n$ identical sensors displaced uniformly and independently at random in the unit interval
to attain coverage of the unit interval. Further, in \cite{KK_2016_cube} the authors studied the movement of $n$ sensors with identical $d-$dimensional cube sensing
radius in $d$ dimensions when the cost of moving a sensor is proportional to some (fixed) power $a>0$ of the distance moved.
Another approach is the problem of displacing random sensors in the half-line $[0,\infty)$ to avoid interference (cf.\ \cite{kranakis_shaikhet}).
In this paper, we restrict our study to the sensors which are placed at random on a line according to a Poisson process.
More importantly, our work is closely related to \cite{dam_2014} where
the authors studied the event distance between two i.i.d. Poisson processes with respective arrival times
$X_1,X_2,\dots$ and $Y_1,Y_2,\dots$ on a line.
In \cite{dam_2014} a closed formula for the event distances $\E{|X_{k+r}-Y_k|},$ for
any $k\ge 1, r\ge 0,$ was derived as a combination of Pochhammer polynomials.
The following open problem was proposed in \cite{dam_2014}: to study the more general moments $\E{|X_{i}-Y_j|^a},$ where $a$ is fixed.
We derive a closed formula for the moments $\E{|X_{k+r}-Y_k|^a},$ for
any $k\ge 1, r\ge 0,$ when $a$ is a natural number, and provide asymptotics for real-valued exponents.
\subsection{Preliminaries}
In this subsection we introduce some basic concepts and recall some useful identities involving
indefinite and definite integrals, binomial coefficients and special functions which will be useful in the analysis in the next section.\\
The Euler Gamma function $\Gamma(z)=\int_0^{\infty}t^{z-1}e^{-t}dt$ is defined for $z>0.$ Moreover, we have
$\Gamma(n+1)=n!,$ when $n$ is a natural number. We will use the Legendre duplication formula (see \cite[Identity 5.5.5]{NIST})
\begin{equation}
\label{eq:legendre}
\Gamma(2z)=(2\pi)^{-1/2}2^{2z-\frac{1}{2}}\Gamma(z)\Gamma\left(z+\frac{1}{2}\right).
\end{equation}
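As a quick numerical sanity check (not part of the original text), the duplication formula (\ref{eq:legendre}) can be verified at a few points:

```python
from math import gamma, pi

# Gamma(2z) = (2*pi)**(-1/2) * 2**(2z - 1/2) * Gamma(z) * Gamma(z + 1/2)
for z in (0.5, 1.0, 2.5, 7.0):
    lhs = gamma(2 * z)
    rhs = (2 * pi) ** -0.5 * 2 ** (2 * z - 0.5) * gamma(z) * gamma(z + 0.5)
    assert abs(lhs - rhs) < 1e-9 * lhs
```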
Applying the basic identity $\Gamma(z+1)=z\Gamma(z)$ with $z=k+1+\frac{a}{2}$ and $z=k+2,$ and taking the difference of consecutive terms of the sequence
$\frac{2k}{2+a}\frac{\Gamma\left(\frac{a}{2}+k+1\right)}{\Gamma(k+1)},$ we have
$$\frac{2(k+1)}{2+a}\frac{\Gamma\left(\frac{a}{2}+k+2\right)}{\Gamma(k+2)}-
\frac{2k}{2+a}\frac{\Gamma\left(\frac{a}{2}+k+1\right)}{\Gamma(k+1)}=\frac{\Gamma(k+1+\frac{a}{2})}{\Gamma(k+1)}.$$
Applying this formula for $k=0$ to $n-1$ we easily derive
\begin{equation}
\label{eq:binomial2}
\sum_{k=1}^{n}\frac{\Gamma\left(\frac{a}{2}+k\right)}{\Gamma(k)}=\frac{2n}{2+a}\frac{\Gamma\left(n+1+\frac{a}{2}\right)}{\Gamma(n+1)},\,\,\,\text{when}\,\,\,a\in N.
\end{equation}
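Identity (\ref{eq:binomial2}) is easy to confirm numerically (an illustrative Python check, not part of the paper):

```python
from math import gamma

def lhs(n, a):
    # sum_{k=1}^{n} Gamma(a/2 + k) / Gamma(k)
    return sum(gamma(a / 2 + k) / gamma(k) for k in range(1, n + 1))

def rhs(n, a):
    # (2n / (2 + a)) * Gamma(n + 1 + a/2) / Gamma(n + 1)
    return 2 * n / (2 + a) * gamma(n + 1 + a / 2) / gamma(n + 1)

for a in range(7):              # natural exponents a = 0, 1, ..., 6
    for n in (1, 5, 20):
        assert abs(lhs(n, a) - rhs(n, a)) < 1e-9 * rhs(n, a)
```

For $a=2$ the identity reduces to $\sum_{k=1}^n k = n(n+1)/2$.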
Let $X_i$ be the arrival time of the $i$th event in a Poisson process with arrival rate $\lambda$.
We know that the random variable $X_i$ obeys the Gamma distribution with parameters $i, \lambda.$
Its probability density function is given by $f_{i,\lambda}(t)=\lambda e^{-\lambda t}\frac{(\lambda t)^{i-1}}{(i-1)!}$
and $\Pr\left[X_i\ge t\right]=\int_t^{\infty}f_{i,\lambda}(s)\,ds$
(see \cite{kingman,dam_2014,ross_2002}).
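The Gamma law of the arrival times is also easy to confirm by simulation, since $X_i$ is a sum of $i$ independent $\mathrm{Exp}(\lambda)$ interarrival gaps (an illustrative Python sketch with arbitrary parameter values):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, i, trials = 3.0, 4, 100_000

# X_i is a sum of i i.i.d. Exp(lam) gaps, hence Gamma(i, lam) distributed,
# with mean i/lam and variance i/lam**2.
arrivals = rng.exponential(1 / lam, size=(trials, i)).sum(axis=1)
assert abs(arrivals.mean() - i / lam) < 0.02
assert abs(arrivals.var() - i / lam ** 2) < 0.02
```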
Using integration by parts we can derive the following identities
\begin{equation}
\label{integral_a}
\int_0^{x}f_{m,\lambda}(t)dt=1-e^{-\lambda x}\sum_{l=0}^{m-1}\frac{(\lambda x)^l}{l!},
\end{equation}
\begin{equation}
\label{integral_1}
\int_0^{\infty}f_{m,\lambda}(t)dt=1,
\end{equation}
where $m$ is a positive integer and $\lambda,x>0.$
We will use the following binomial identity
\begin{equation}
\label{eq:binomial1}
\sum_{j=0}^{a}(-1)^{a-j}\binom{j+k-1}{k-1}\binom{a-j+k-1}{k-1}=
\begin{cases} \binom{\frac{a}{2}+k-1}{k-1} &\mbox{if } a \equiv 0 \\
0 & \mbox{if } a \equiv 1 \end{cases} \pmod{2}.
\end{equation}
This identity can be easily checked using generating functions. Notice that \\
$\frac{1}{(1-z)^k}=\sum_{j\ge 0}\binom{j+k-1}{k-1}z^j.$ Multiplying together
$\frac{1}{(1-z)^k}\frac{1}{(1+z)^k}=\frac{1}{(1-z^2)^k}$ and equating coefficients of $z^a$ on both sides
of this equation gives us (\ref{eq:binomial1}).
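The same identity can also be checked by brute force over small parameter ranges (illustrative Python, not part of the paper):

```python
from math import comb

def alternating_sum(a, k):
    # Left-hand side of the binomial identity above.
    return sum((-1) ** (a - j) * comb(j + k - 1, k - 1) * comb(a - j + k - 1, k - 1)
               for j in range(a + 1))

for k in range(1, 6):
    for a in range(9):
        expected = comb(a // 2 + k - 1, k - 1) if a % 2 == 0 else 0
        assert alternating_sum(a, k) == expected
```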
We recall the definition of the Pochhammer polynomial \cite{concrete_1994}
\begin{equation}
\label{eq:rising}
x^{(k)} = \begin{cases}
x(x+1)\dots(x+k-1) & \mbox{for } k\ge 1\\
1 &\mbox{for } k=0.
\end{cases}
\end{equation}
\subsection{Outline and results of the paper}
We consider the event distance to the power $a$ between two i.i.d. Poisson processes with arrival rate $\lambda$ and respective arrival times $X_1,X_2,\dots$ and
$Y_1,Y_2,\dots$ on a line.
We give a closed-form formula for the moment distance
$\E{|X_{k+r}-Y_k|^a},$ for any integer $k\ge 1, r\ge 0,$ when $a$
is a natural number, as a combination of Pochhammer polynomials (see Theorem \ref{thm:mainclosedbe} and Theorem \ref{thm:mainclosedbeolul}).
In particular, for $r=0,$ a closed analytical formula
for $\E{|X_{k}-Y_k|^a},$ when $k\ge 1$ and $a\in N,$
is obtained in terms of Gamma functions
(see Theorem \ref{thm:mainclosedbe} and Theorem \ref{thm:mainclosedbeoa1}).
As a consequence we derive that the expected cost to the power $b>0$ of a minimum weight matching with edges $\{X_k,Y_k\}$ between
two i.i.d.\ Poisson processes with arrival rate $\lambda=n$ and respective arrival times $X_1,X_2,\dots X_n$ and $Y_1,Y_2,\dots Y_n$ is in $\Theta\left(n^{1-\frac{b}{2}}\right),$
when $b\ge 1,$ and in $O\left(n^{1-\frac{b}{2}}\right),$
when $0 <b< 1.$
Here is an outline of the paper.
In Section \ref{tight:sec} we obtain a closed formula for the event distances to the power $a$ of two i.i.d. Poisson processes, when $a\in N.$
In Section \ref{sec:appl} we derive the asymptotic results with application to sensor networks. Finally, Section \ref{sec:concl} provides conclusions.
\section{Closed formula for the moments}
\label{tight:sec}
Consider two i.i.d. Poisson processes with arrival rate $\lambda$ and respective arrival times $X_1,X_2,\dots$ and
$Y_1,Y_2,\dots$ on a line. We give a closed analytical formula for the moment distance
$\E{|X_{k+r}-Y_k|^a},$ for any integer $k\ge 1, r\ge 0,$ when $a$
is a natural number.
\subsection{Closed formula when $a$ is an even natural number}
\label{tight:even}
We begin with the following lemma which is helpful in the proof of Theorem \ref{thm:mainclosedbe}.
\begin{Lemma}
\label{lem:mainclosed}
Assume that $a$ is an even natural number. Let $i\ge 1, k\ge 1.$
Then
$$\E{|X_i-Y_k|^a}=\frac{1}{{\lambda}^a}\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)}.$$
\end{Lemma}
\begin{proof}
As a first step, observe the following formula
\begin{equation}
\label{eq:firster}
\E{|X_i-Y_k|^a}=\int_{0}^{\infty}f_{k,\lambda}(y_k)\E{|X_i-y_k|^a}dy_k.
\end{equation}
Hence, computing the moment $\E{|X_i-Y_k|^a}$ is reduced to computing the moment $\E{|X_i-y_k|^a}.$
Observe that
$$
\E{|X_i-y_k|^a}=\int_{0}^{\infty}(t-y_k)^af_{i,\lambda}(t)dt=
\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}y_k^{a-j}\int_{0}^{\infty}t^jf_{i,\lambda}(t)dt.
$$
Applying Identity (\ref{integral_1}) and Definition (\ref{eq:rising}) we have
\begin{equation}
\label{eq:mid}
\E{|X_i-y_k|^a}=\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}\frac {i^{(j)}}{{\lambda}^j}y_k^{a-j}.
\end{equation}
Putting together (\ref{eq:mid}) and (\ref{eq:firster}) we get
$$
\E{|X_i-Y_k|^a}=\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}\frac {i^{(j)}}{{\lambda}^j}
\int_{0}^{\infty}y_k^{a-j}f_{k,\lambda}(y_k)dy_k.
$$
Hence, from Identity (\ref{integral_1}) and Definition (\ref{eq:rising}) we deduce that
$$
\E{|X_i-Y_k|^a}
=\frac{1}{{\lambda}^a}\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)}.
$$
This completes the proof of Lemma \ref{lem:mainclosed}.
\end{proof}
\begin{theorem}
\label{thm:mainclosedbe} Let $a$ be an even natural number.
Consider two i.i.d.\ Poisson processes having identical arrival rate $\lambda >0$ and let $X_1,X_2,\dots$ and $Y_1,Y_2,\dots$
be their arrival times, respectively. The following identities are valid for all $k\ge 1, r\ge 0$:
\begin{align*}
\E{|X_{k+r}-Y_k|^a}&=\frac{1}{{\lambda}^a}\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}(k+r)^{(j)}k^{(a-j)},\\
\E{|X_k-Y_k|^a}&=\frac{a!}{{\lambda}^a}\frac{\Gamma\left(\frac{a}{2}+k\right)}{\Gamma(k)\Gamma\left(\frac{a}{2}+1\right)}.
\end{align*}
\end{theorem}
\begin{proof}
The first part of the theorem follows immediately from Lemma \ref{lem:mainclosed} with $i=k+r.$
Putting together the first part of the theorem with $r=0,$ Definition (\ref{eq:rising}) and Identity (\ref{eq:binomial1}) we get
$$
\E{|X_k-Y_k|^a}=\frac{a!}{{\lambda}^a}\binom{\frac{a}{2}+k-1}{k-1}.
$$
Note that, if $\frac{a}{2}\in N,$ then $\binom{\frac{a}{2}+k-1}{k-1}=\frac{\Gamma\left(\frac{a}{2}+k\right)}{\Gamma(k)\Gamma\left(\frac{a}{2}+1\right)}.$
This is enough to prove Theorem \ref{thm:mainclosedbe}.
\end{proof}
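The even-$a$ formula lends itself to a Monte Carlo check, since $X_k$ and $Y_k$ are independent $\mathrm{Gamma}(k,\lambda)$ variables (an illustrative Python sketch; the parameter values, sample size and tolerance are arbitrary):

```python
import numpy as np
from math import gamma, factorial

rng = np.random.default_rng(0)
lam, k, a, trials = 2.0, 3, 2, 200_000

# Arrival time X_k of a rate-lam Poisson process is Gamma(k, lam) distributed.
X = rng.gamma(k, 1 / lam, trials)
Y = rng.gamma(k, 1 / lam, trials)
mc = np.mean(np.abs(X - Y) ** a)

# a! / lam**a * Gamma(a/2 + k) / (Gamma(k) * Gamma(a/2 + 1))
closed = factorial(a) / lam ** a * gamma(a / 2 + k) / (gamma(k) * gamma(a / 2 + 1))
assert abs(mc - closed) / closed < 0.03
```

For $a=2$, $k=3$, $\lambda=2$ the closed form gives $3/2$, which is simply $\mathrm{Var}(X_3-Y_3)=2\cdot 3/\lambda^2$.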
\subsection{Closed formula when $a$ is an odd natural number}
\label{tight:odd}
Our analysis of the moment distance, when $a$ is an odd natural number, proceeds along the following steps.
Firstly, we give Lemma \ref{thm:mainclosedb} and Lemma \ref{thm:mainclosedbeo} which are helpful in the proof of Theorem \ref{thm:mainclosedbeolul}.
Then, Theorem \ref{thm:mainclosedbeoa1} follows from Theorem \ref{thm:mainclosedbeolul} and Lemma \ref{lem:gamma}.
\begin{Lemma}
\label{thm:mainclosedb}
The following identity is valid for all $i\ge 1, k\ge 1,$ when $a$ is an odd natural number:
\begin{align*}
& \E{| X_i- Y_k|^a}=
(-1)\frac{1}{{\lambda}^a}\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)}\\
&+
\frac{1}{{\lambda}^a}\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)}
\sum_{l=0}^{i+j-1}\binom{k+l-1+a-j}{l}\frac{1}{2^{k+l-1+a-j}}.
\end{align*}
\end{Lemma}
\begin{proof}
As a first step, observe the following formula
\begin{equation}
\label{eq:firstera}
\E{|X_i-Y_k|^a}=\int_{0}^{\infty}f_{k,\lambda}(y_k)\E{|X_i-y_k|^a}dy_k.
\end{equation}
Hence, computing the moment $\E{|X_i-Y_k|^a}$ is reduced to computing the moment $\E{|X_i-y_k|^a}.$
Observe that
$$
\E{|X_i-y_k|^a}=
\int_{0}^{\infty}(t-y_k)^af_{i,\lambda}(t)dt
-2\int_{0}^{y_k}(t-y_k)^af_{i,\lambda}(t)dt.
$$
Therefore, $\E{|X_i-Y_k|^a}$ is equal to the sum of the following two integrals which we evaluate separately.
\begin{align}
& \label{eq:integral_first1}
\int_{0}^{\infty}f_{k,\lambda}(y_k)
\int_{0}^{\infty}(t-y_k)^af_{i,\lambda}(t)dtdy_k,\\
& \label{eq:integral_first2}
(-2)\int_{0}^{\infty}f_{k,\lambda}(y_k)
\int_{0}^{y_k}(t-y_k)^af_{i,\lambda}(t)dt
dy_k.
\end{align}
Case of integral (\ref{eq:integral_first1}). The calculations are almost exactly the same as in the proof of Lemma \ref{lem:mainclosed}.
Applying Identity (\ref{integral_1}) and Definition (\ref{eq:rising}) we have
\begin{equation}
\label{eq:last1}
\int_{0}^{\infty} f_{k,\lambda}(y_k)
\int_{0}^{\infty}(t-y_k)^a f_{i,\lambda}(t) dtdy_k
=
\frac{1}{{\lambda}^a}\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)}.
\end{equation}
Case of integral (\ref{eq:integral_first2}).
$$
(-2) \int_{0}^{\infty}f_{k,\lambda}(y_k)
\int_{0}^{y_k}(t-y_k)^af_{i,\lambda}(t)dtdy_k
=\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j} A(j),
$$
where
$$A(j)=(-2)
\int_{0}^{\infty}y_k^{a-j}f_{k,\lambda}(y_k)
\int_{0}^{y_k}t^jf_{i,\lambda}(t)dtdy_k.
$$
Applying Identities (\ref{integral_a}) and (\ref{integral_1}) we have
$$A(j)=A_1(j)+A_2(j),$$
where
$$
A_1(j)=(-2)\int_{0}^{\infty}y_k^{a-j}f_{k,\lambda}(y_k)\frac{i^{(j)}}{\lambda^j} dy_k
=(-2)\frac{1}{\lambda^a}k^{(a-j)}i^{(j)},
$$
\begin{align*}
A_2(j)&=\int_{0}^{\infty}2y_k^{a-j}f_{k,\lambda}(y_k)\frac{i^{(j)}}{\lambda^j}
\sum_{l=0}^{i+j-1}e^{-\lambda y_k}\frac{(\lambda y_k)^l}{l!}dy_k\\
&=
\frac{i^{(j)}}{\lambda^a}\sum_{l=0}^{i+j-1}\frac{(a-j+k-1+l)!}{l!(k-1)!}\frac{1}{2^{k+l-1+a-j}}.
\end{align*}
Therefore, we deduce that
\begin{equation}
\label{eq:last2}
\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j} A_1(j)=(-2)\frac{1}{{\lambda}^a}\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)},
\end{equation}
\begin{align}
\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j} A_2(j) = &
\frac{1}{{\lambda}^a}\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)}\nonumber \\
\label{eq:last4}
& \times \sum_{l=0}^{i+j-1}\binom{k+l-1+a-j}{l}\frac{1}{2^{k+l-1+a-j}}.
\end{align}
Adding Formulas (\ref{eq:last1}), (\ref{eq:last2}) and (\ref{eq:last4}) we derive the desired formula for $\E{|X_i-Y_k|^a},$
when $a$ is an odd natural number.
This completes the proof of Lemma \ref{thm:mainclosedb}.
\end{proof}
Now we give a simpler expression for the moment distance of two i.i.d. Poisson processes in the following lemma.
\begin{Lemma}
\label{thm:mainclosedbeo}
Assume that $a$ is an odd natural number. Let $i\ge 1, k\ge 1.$
Then
\begin{align*}
&\E{|X_i-Y_k|^a}\\
&\,\,\,\,\,=
\left(\sum_{l=k}^{i+a-1}\binom{l+k-1}{l}\frac{1}{2^{l+k-1}}\right)\frac{1}{\lambda^a}\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)}\\
&\,\,\,\,\,+
\frac{1}{\lambda^a 2^{i+k-2+a}}
\sum_{l=0}^{a-1}\left(\sum_{j=0}^{l}\binom{a}{j}(-1)^{j}i^{(j)}k^{(a-j)}\right) \binom{i+k+a-1}{i+l}.
\end{align*}
\end{Lemma}
\begin{proof}
Applying Lemma \ref{thm:mainclosedb} we deduce that
\begin{equation}
\label{eq:expection}
\E{|X_i-Y_k|^a}=
(-1)\frac{1}{\lambda^a}\sum_{j=0}^{a}B_1(j)+\frac{1}{\lambda^a}\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)}B_2(j),
\end{equation}
where
$
B_1(j)=\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)},\,\,\,
B_2(j)=\sum_{l=0}^{i-1+j}\binom{l+k-1+a-j}{l}\frac{1}{2^{l+k-1+a-j}}.
$
Using summation by parts
$$\sum_{l=0}^{i-1+j}g(l+1)(f(l+1)-f(l))=
\sum_{l=0}^{i+j}(g(l+1)-g(l))f(l)+g(i+j+1)f(i+j)-g(0)f(0)$$
for $\,\,f(l)=\frac{-2}{2^{l+k-1+a-j}}\,\,$ and
$\,\,g(l) = \begin{cases}
\binom{l-1+k-1+a-j}{l-1} & \mbox{for } l\ge 1\\
0 &\mbox{for } l=0
\end{cases}\,\,$
we have
\begin{align*}
&\sum_{l=0}^{i-1+j} \binom{l+k-1+a-j}{l}\frac{1}{2^{l+k-1+a-j}}\\
&\,\,\,\,=
\sum_{l=0}^{i+j}\binom{l+k-1+a-(j+1)}{l}\frac{1}{2^{l+k-1+a-(j+1)}}
-\frac{1}{2^{i+k-2+a}}\binom{i+k+a-1}{i+j}.
\end{align*}
Therefore
$$B_2(j)=B_2(j+1)+B_3(j),\,\,\,
\text{where} \,\,\,
B_3(j)=-\frac{1}{2^{i+k-2+a}}\binom{i+k+a-1}{i+j}.$$
Hence, we deduce that
\begin{equation}
\label{eq:final1}
B_2(j)=B_2(a)+\sum_{l=j}^{a-1}B_3(l)\,\,\,
\text{for} \,\,\, j\in \{0,1,\dots ,a-1\}.
\end{equation}
Applying Identity (\ref{eq:final1}) to Formula (\ref{eq:expection}) we have
\begin{align}
\nonumber\E{|X_i-Y_k|^a}&=
(B_2(a)-1)\frac{1}{\lambda^a}\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)}\\
&+\label{eq:as200}
\frac{1}{\lambda^a}\sum_{j=0}^{a-1}\binom{a}{j}(-1)^{a-j}i^{(j)}k^{(a-j)}\sum_{l=j}^{a-1}B_3(l).
\end{align}
Using the identity
$\sum_{j=0}^{m}\binom{m+j}{m}2^{-j}=2^m$
(see \cite[Identity 5.20, p. 167]{concrete_1994}) for $m=k-1$ we get
\begin{equation}
\label{eq:as100}
B_2(a)-1=\sum_{l=k}^{i+a-1}\binom{l+k-1}{l}\frac{1}{2^{l+k-1}}.
\end{equation}
Combining together (\ref{eq:as200}), (\ref{eq:as100}) and changing summation in the second sum in (\ref{eq:as200}) we get the desired result.
\end{proof}
\begin{theorem}
\label{thm:mainclosedbeolul}
Let $a$ be an odd natural number.
Consider two i.i.d.\ Poisson processes having identical arrival rate $\lambda >0$ and let $X_1,X_2,\dots$ and $Y_1,Y_2,\dots$
be their arrival times, respectively. The following identity is valid for all $r\ge 0,$ $k\ge 1$:
\begin{align*}
&\E{|X_{k+r}-Y_k|^a}\\
&\,\,\,\,\,=
\frac{1}{\lambda^a}\frac{\Gamma\left(k+\frac{1}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma(k+1)}
\sum_{l=0}^{r+a-1}\frac{(2k)^{(l)}}{(k+1)^{(l)}2^l}
\sum_{j=0}^{a}\binom{a}{j}(-1)^{a-j}(k+r)^{(j)}k^{(a-j)}\\
&\,\,\,\,\,+
\frac{1}{\lambda^a 2^{r-1}}\frac{\Gamma\left(\frac{a}{2}+k\right)}{\Gamma(1/2)\Gamma(k)}
\sum_{l=0}^{a-1}\left(\sum_{j=0}^{l}\binom{a}{j}(-1)^{j}(k+r)^{(j)}k^{(a-j)}\right)\frac{k^{\left(\frac{a+1}{2}\right)}(2k+a)^{(r)}}{k^{(r+l+1)}k^{(a-l)}}.
\end{align*}
\end{theorem}
\begin{proof}
Applying Lemma \ref{thm:mainclosedbeo} for $i=k+r$ we deduce that
\begin{align*}
\E{|X_{k+r}-Y_k|^a}&=
-\frac{1}{\lambda^a}\sum_{l=k}^{k+r+a-1}\binom{l+k-1}{l}\frac{1}{2^{l+k-1}}C(k,r,a)\\
&+
\frac{1}{\lambda^a 2^{2k+r-2+a}}
\sum_{l=0}^{a-1}C(k,r,l) \binom{2k+r+a-1}{k+r+l},
\end{align*}
where $C(k,r,l)=\sum_{j=0}^{l}\binom{a}{j}(-1)^{j}(k+r)^{(j)}k^{(a-j)}.$
Using the Legendre duplication formula (\ref{eq:legendre}) for $z=\frac{a-1}{2}+k$ we get
\begin{equation}
\label{eq:twopoints}
\Gamma(2k+a-1)=\pi^{-1/2}2^{2k+a-2}\Gamma\left(\frac{a-1}{2}+k\right)\Gamma\left(\frac{a}{2}+k\right).
\end{equation}
Applying Formula (\ref{eq:twopoints}) for $a=1$ and the identity $\Gamma(1/2)=\sqrt{\pi}$ we derive\\
$
2^{-2k+1}\frac{(2k-1)!}{(k-1)!k!}=\frac{\Gamma\left(k+\frac{1}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma(k+1)}.
$
Therefore
\begin{align}
\sum_{l=k}^{k+r+a-1}\binom{l+k-1}{l}\frac{1}{2^{l+k-1}}&=
\frac{1}{2^{2k-1}}\sum_{l=0}^{r+a-1}\binom{2k-1+l}{k+l}\frac{1}{2^l}\nonumber \\
&= \label{eq:twopointsa}
\frac{\Gamma\left(k+\frac{1}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma(k+1)}\sum_{l=0}^{r+a-1}\frac{(2k)^{(l)}}{(k+1)^{(l)}2^l}.
\end{align}
Combining Formula (\ref{eq:twopoints}) and the identity $\Gamma(1/2)=\sqrt{\pi}$ we get
$$
\frac{\Gamma\left(\frac{a}{2}+k\right)}{\Gamma(k)}
=\frac{\Gamma(1/2)}{2^{2k+a-2}}
\frac{\Gamma(2k+a-1)}
{\Gamma(k)\Gamma\left(\frac{a-1}{2}+k\right)}
=\frac{\Gamma(1/2)}{2^{2k+a-2}}
\frac{(2k-1)!}{(k-1)!(k-1)!}
\frac{(2k)^{(a-1)}}{k^{(\frac{a-1}{2})}}.
$$
Therefore
\begin{equation}
\label{eq:twopointsb}
\frac{1}{2^{2k+r-2+a}}\binom{2k+r+a-1}{k+r+l}=\frac{\Gamma\left(\frac{a}{2}+k\right)}{\Gamma(1/2)\Gamma(k)}
\frac{k^{\left(\frac{a+1}{2}\right)}}{2^{r-1}}\frac{(2k+a)^{(r)}}{k^{(r+l+1)}k^{(a-l)}}.
\end{equation}
Putting together (\ref{eq:twopointsa}) and (\ref{eq:twopointsb}) completes the proof of Theorem \ref{thm:mainclosedbeolul}.
\end{proof}
The following lemma will be helpful in the proof of Theorem \ref{thm:mainclosedbeoa1}.
\begin{Lemma}
\label{lem:gamma}
Assume that $a$ is an odd natural number. Let $k\ge 1.$ Then
\begin{equation}
\label{eq:lasttech}
\sum_{l=0}^{a-1}\left(\sum_{j=0}^{l}(-1)^jk^{(a-j)}k^{(j)}\binom{a}{j}\right)
\frac{k^{(\frac{a+1}{2})}}{k^{(1+l)}k^{(a-l)}}
=
\frac{a!\sqrt{\pi}}{2\Gamma\left(\frac{a}{2}+1\right)}.
\end{equation}
\end{Lemma}
\begin{proof}
From Identities (\ref{eq:binomial1}) and $k^{(a-j)}k^{(j)}\binom{a}{j}=k^{(j)}k^{(a-j)}\binom{a}{a-j}$
we deduce that
\begin{align}
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\sum_{j=0}^{l}(-1)^jk^{(a-j)}k^{(j)}\binom{a}{j}=-\sum_{j=l+1}^{a}(-1)^{j}k^{(a-j)}k^{(j)}\binom{a}{j}\nonumber\\
=&\label{eq:med}\sum_{j=0}^{a-l-1}k^{(j)}k^{(a-j)}(-1)^{j}\binom{a}{a-j}=\sum_{j=0}^{a-l-1}(-1)^jk^{(a-j)}k^{(j)}\binom{a}{j}.
\end{align}
Let
$$D(k,a)=\sum_{l=0}^{a-1}\left(\sum_{j=0}^{l}(-1)^jk^{(a-j)}k^{(j)}\binom{a}{j}\right)\frac{k^{(\frac{a+1}{2})}}{k^{(1+l)}k^{(a-l)}}
$$
Applying Equation (\ref{eq:med}) we deduce that
$$
D(k,a)=D_1(k,a)+D_2(k,a),
$$
where
\begin{align*}
D_1(k,a)&=\sum_{l=0}^{\frac{a-1}{2}}\left(\sum_{j=0}^{l}(-1)^jk^{(a-j)}k^{(j)}\binom{a}{j}\right)
\frac{k^{(\frac{a+1}{2})}}{k^{(1+l)}k^{(a-l)}},\\
D_2(k,a)&=\sum^{a-1}_{l=\frac{a-1}{2}+1}\left(\sum_{j=0}^{a-l-1}(-1)^jk^{(a-j)}k^{(j)}\binom{a}{j}\right)
\frac{k^{(\frac{a+1}{2})}}{k^{(1+l)}k^{(a-l)}}.
\end{align*}
Therefore, we have
\begin{align*}
D(k,a)&=
\sum_{l=0}^{\frac{a-1}{2}}\sum_{j=0}^{l}\binom{a}{j}(-1)^jk^{(j)}(k+a-l)^{(l-j)}(k+l+1)^{\left(\frac{a-1}{2}-l\right)}\\
&+
\sum_{l=\frac{a-1}{2}+1}^{a-1}\sum_{j=0}^{a-l-1}\binom{a}{j}(-1)^jk^{(j)}(k+l+1)^{(a-1-j-l)}(k+a-l)^{\left(l-\frac{a-1}{2}\right)}.
\end{align*}
Observe that $(k+a-l)^{(l-j)},$ $(k+l+1)^{\left(\frac{a-1}{2}-l\right)}$ are polynomials of variable $k$ for each $j\in\{0,1,\dots ,l\},$
$l\in\{0,1,\dots, \frac{a-1}{2}\}$
and
$(k+l+1)^{(a-1-j-l)},$ $(k+a-l)^{\left(l-\frac{a-1}{2}\right)}$ are polynomials of variable $k$ for each $j\in\{0,1,\dots ,a-1-l\},$
$l\in\{\frac{a-1}{2}+1,\dots,a-1\}.$
Therefore,
$D(k,a)$ is a polynomial in the variable $k$ of degree at most $\frac{a-1}{2}.$
Hence, to prove Equality (\ref{eq:lasttech}) it suffices to verify the following equality at $\frac{a+1}{2}$ points:
\begin{align*}
D(k,a)&=
\sum_{l=0}^{\frac{a-1}{2}}\sum_{j=0}^{l}\binom{a}{j}(-1)^jk^{(j)}(k+a-l)^{(l-j)}(k+l+1)^{\left(\frac{a-1}{2}-l\right)}\\
&+
\sum_{l=\frac{a-1}{2}+1}^{a-1}\sum_{j=0}^{a-l-1}\binom{a}{j}(-1)^jk^{(j)}(k+l+1)^{(a-1-j-l)}(k+a-l)^{\left(l-\frac{a-1}{2}\right)}\\
&=
\frac{a!\sqrt{\pi}}{2\Gamma\left(\frac{a}{2}+1\right)}\,\,\, \text{for each} \,\,\, k=0,-1,-2,\dots,-\frac{a-1}{2}.
\end{align*}
Let $b\in\{0,\dots,\frac{a-1}{2}\}.$ Observe that
\begin{align*}
(-b+a-l)^{(l-j)}(-b+l+1)^{\left(\frac{a-1}{2}-l\right)}&=0\,\, \text{for}\,\, 0\le l\le b-1, \,\,0\le j\le l,\\
(-b+l+1)^{(a-1-j-l)}(-b+a-l)^{\left(l-\frac{a-1}{2}\right)}&=0\,\, \text{for}\,\, a-1-(b-1)\le l\le a-1,\\
&\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, 0\le j\le a-l-1,\\
(-b)^{(j)}&=0\,\,\text{for}\,\, b+1\le j.
\end{align*}
Applying this
we have
\begin{align*}
D(-b,a)&=\sum_{l=b}^{\frac{a-1}{2}}\sum_{j=0}^{b}\binom{a}{j}(-1)^j(-b)^j(-b+a-l)^{(l-j)}(-b+l+1)^{\left(\frac{a-1}{2}-l\right)}\\
+&
\sum_{l=\frac{a-1}{2}+1}^{a-1-b}\sum_{j=0}^{b}\binom{a}{j}(-1)^j(-b)^j(-b+l+1)^{(a-1-j-l)}(-b+a-l)^{\left(l-\frac{a-1}{2}\right)}\\
=&\sum_{l=b}^{\frac{a-1}{2}}\sum_{j=0}^{b}\binom{a}{j}\frac{b!}{(b-j)!}\frac{(a-j-1-b)!}{(a-l-1-b)!}\frac{\left(\frac{a-1}{2}-b\right)!}{(l-b)!}\\
+&
\sum_{l=\frac{a-1}{2}+1}^{a-1-b}\sum_{j=0}^{b}\binom{a}{j}\frac{b!}{(b-j)!}\frac{(a-j-1-b)!}{(l-b)!}\frac{\left(\frac{a-1}{2}-b\right)!}{(a-l-1-b)!}\\
=&\left(\frac{a-1}{2}-b\right)!b!\left(\sum_{j=0}^{b}\binom{a}{j}\binom{a-1-b-j}{b-j}\right)\sum_{l=b}^{a-1-b}\binom{a-1-2b}{a-l-1-b}.
\end{align*}
Notice that
$$\sum_{l=b}^{a-1-b}\binom{a-1-2b}{a-l-1-b}=\sum_{l=0}^{a-1-2b}\binom{a-1-2b}{l}=2^{a-2b-1}.$$
Applying this and the identity
\begin{equation}
\label{eq:difficultt}
\sum_{j=0}^{b}\binom{a}{j}\binom{a-1-b-j}{b-j}=
\begin{cases} \frac{2^b}{b!}\prod_{j=1}^{b}(a-(2j-1)) &\mbox{if } b \neq 0 \\
1 & \mbox{if } b=0 \end{cases},
\end{equation}
(see \cite[Identity 7.17, p. 36]{Gould}) we get
\begin{align*}
D(-b,a)
&=2^{a-2b-1}\,b!\left(\frac{a-1}{2}-b\right)!\begin{cases} \frac{2^b}{b!}\prod_{j=1}^{b}(a-(2j-1)) &\mbox{if } b \neq 0 \\
1 & \mbox{if } b=0 \end{cases}\\
&=2^{a-1}\left(\frac{a-1}{2}\right)!.
\end{align*}
Finally, from the Legendre duplication formula (\ref{eq:legendre}) for $z=\frac{a+1}{2}$
we deduce that
$$
D(-b,a)=\frac{a!\sqrt{\pi}}{2\Gamma\left(\frac{a}{2}+1\right)}\,\,\,\, \text{for all}\,\,\,\, b\in\left\{0,\dots,\frac{a-1}{2}\right\}.
$$
This is enough to prove Lemma \ref{lem:gamma}.
\end{proof}
\begin{theorem}
\label{thm:mainclosedbeoa1} Let $a$ be an odd natural number.
Consider two i.i.d.\ Poisson processes having identical arrival rate $\lambda >0$ and let $X_1,X_2,\dots$ and $Y_1,Y_2,\dots$
be their arrival times, respectively. The following identity is valid for all $k\ge 1$:
$$
\E{|X_k-Y_k|^a}=\frac{a!}{\lambda^a}\frac{\Gamma\left(\frac{a}{2}+k\right)}{\Gamma(k)\Gamma\left(\frac{a}{2}+1\right)}.
$$
\end{theorem}
\begin{proof}
First, we substitute Identity (\ref{eq:binomial1}) into Theorem \ref{thm:mainclosedbeolul} and observe that
\begin{align*}
& \E{|X_{k}-Y_{k}|^a}\\
&\,\,\,\,\,=\frac{1}{\lambda^a \sqrt{\pi}2^{-1}}\frac{\Gamma\left(\frac{a}{2}+k\right)}{\Gamma(k)}
\sum_{l=0}^{a-1}\left(\sum_{j=0}^{l}\binom{a}{j}(-1)^{j}(k)^{(j)}k^{(a-j)}\right)\frac{k^{\left(\frac{a+1}{2}\right)}}{k^{(l+1)}k^{(a-l)}}.
\end{align*}
Then, the result of Theorem \ref{thm:mainclosedbeoa1} follows from Lemma \ref{lem:gamma} and the identity $\Gamma(1/2)=\sqrt{\pi}.$
\end{proof}
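The same closed form for odd $a$ can likewise be checked by Monte Carlo (illustrative Python; the parameter values are arbitrary):

```python
import numpy as np
from math import gamma, factorial

rng = np.random.default_rng(1)
lam, k, a, trials = 1.0, 2, 3, 400_000

X = rng.gamma(k, 1 / lam, trials)   # X_k ~ Gamma(k, lam)
Y = rng.gamma(k, 1 / lam, trials)
mc = np.mean(np.abs(X - Y) ** a)

closed = factorial(a) / lam ** a * gamma(a / 2 + k) / (gamma(k) * gamma(a / 2 + 1))
# Here closed = 6 * Gamma(3.5) / (Gamma(2) * Gamma(2.5)) = 15.
assert abs(mc - closed) / closed < 0.03
```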
\section{Application to sensor networks}
\label{sec:appl}
Consider sensors placed at random according to a Poisson process with arrival rate $\lambda$ on the half-line $[0,\infty).$
We assume that the $i$th event in this Poisson process represents the position of the $i$th sensor.
\begin{theorem}
\label{thm:asym}
Fix $a\in N.$
Let $X_1,X_2,\dots$ and $Y_1,Y_2,\dots$ be the arrival times of two i.i.d. Poisson processes, respectively, with arrival rate $\lambda.$ Then
\begin{equation}
\label{eq:exact}
\sum_{k=1}^{n} \E{|X_k-Y_k|^a}=\frac{a!}{\lambda^a}\frac{2n}{2+a}\frac{\Gamma\left(n+1+\frac{a}{2}\right)}{\Gamma\left(\frac{a}{2}+1\right)\Gamma(n+1)}.
\end{equation}
\end{theorem}
\begin{proof}
The result of the theorem follows immediately by summing the corresponding identities from the second part of Theorem \ref{thm:mainclosedbe} and Theorem \ref{thm:mainclosedbeoa1}
as well as Identity (\ref{eq:binomial2}).
\end{proof}
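Summing the per-$k$ closed form from Theorem \ref{thm:mainclosedbe} and Theorem \ref{thm:mainclosedbeoa1} reproduces (\ref{eq:exact}) exactly, which the following illustrative Python check confirms:

```python
from math import gamma, factorial

lam = 1.0

def moment(k, a):
    # Closed form for E|X_k - Y_k|^a.
    return factorial(a) / lam ** a * gamma(a / 2 + k) / (gamma(k) * gamma(a / 2 + 1))

def total(n, a):
    # Right-hand side of the summed identity.
    return (factorial(a) / lam ** a) * (2 * n / (2 + a)) \
        * gamma(n + 1 + a / 2) / (gamma(a / 2 + 1) * gamma(n + 1))

for a in (1, 2, 3, 4):
    for n in (1, 5, 15):
        s = sum(moment(k, a) for k in range(1, n + 1))
        assert abs(s - total(n, a)) < 1e-9 * total(n, a)
```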
The next theorem provides asymptotic results for $\E{|X_k-Y_k|^b}$ and \\ $\sum_{k=1}^n\E{|X_k-Y_k|^b},$ when $b\in R.$
\begin{theorem}
\label{thm:asympt}
Fix $b\in R.$
Let $X_1,X_2,\dots$ and $Y_1,Y_2,\dots$ be the arrival times of two i.i.d. Poisson processes, respectively, with arrival rate $\lambda.$ Then
\begin{equation}
\label{eq:asympt01}
\E{|X_k-Y_k|^b}=\begin{cases} \Theta\left(\frac{k^{\frac{b}{2}}}{{\lambda}^b}\right)\,\,\, &\mbox{if}\,\,\, b \ge 1 \\
O\left(\frac{k^{\frac{b}{2}}}{{\lambda}^b}\right)\,\,\, & \mbox{if }\,\,\, 0< b < 1, \end{cases}
\end{equation}
\begin{equation}
\label{eq:asympt02}
\sum_{k=1}^{n} \E{|X_k-Y_k|^b}=\begin{cases} \Theta\left(\frac{n\cdot n^{\frac{b}{2}}}{{\lambda}^b}\right)\,\,\, &\mbox{if}\,\,\, b \ge 1 \\
O\left(\frac{n\cdot n^{\frac{b}{2}}}{{\lambda}^b}\right)\,\,\, & \mbox{if }\,\,\, 0< b < 1. \end{cases}
\end{equation}
\end{theorem}
\begin{proof}
First of all, we discuss the proof of Equation (\ref{eq:asympt01}).
Observe that the result for $a\in N$ follows from the second part of
Theorem \ref{thm:mainclosedbe} and Theorem \ref{thm:mainclosedbeoa1}
as well as the standard asymptotic expansion for the Gamma function
\begin{equation}
\label{eq:gamma}
z^{c_1-b_1} \frac{\Gamma(z+b_1)}{\Gamma(z+c_1)}=1+\frac{(c_1-b_1)(c_1+b_1-1)}{2z}+O\left(1/z^2\right),\,\,\,\,\text{as}\,\, z\rightarrow\infty
\end{equation}
(see \cite[Identity 2.36, p. 40]{Szpankowski}) for $z=k,$ $b_1=\frac{a}{2}$ and $c_1=0.$
Hence, we get
\begin{equation}
\label{eq:estim001}
\E{|X_k-Y_k|^a}=\Theta\left(\frac{k^{\frac{a}{2}}}{{\lambda}^a}\right),\,\,\,\text{when}\,\, a\in N.
\end{equation}
Therefore, we may assume that $b>0$ and $b\notin N.$ We use H\"{o}lder's inequality for integrals with parameters
$\frac{\lceil b\rceil}{b},$ $\frac{\lceil b\rceil}{\lceil b\rceil -b}$ and get
\begin{equation}
\label{holder01}
\E{|X_k-Y_k|^b}\le\left( \E{|X_k-Y_k|^{\lceil b\rceil}}\right)^{\frac{b}{\lceil b \rceil}}.
\end{equation}
Putting together Equation (\ref{holder01}) and Equation (\ref{eq:estim001}) with $a:=\lceil b\rceil$ we deduce that
$$
\E{|X_k-Y_k|^b}=O\left(\frac{k^{\frac{b}{2}}}{{\lambda}^b}\right),\,\,\,\text{when}\,\,\, b> 0\,\,\,\text{and}\,\,\,b\notin N.
$$
This is enough to prove the upper bound.
To prove the lower bound assume that $b>1$ and $b\notin N.$ We use H\"{o}lder's inequality for integrals with parameters
$b,$ $\frac{b}{b-1}$ and get
\begin{equation}
\label{holder02}
\E{|X_k-Y_k|}\le \left(\E{|X_k-Y_k|^{b}}\right)^{\frac{1}{b}}.
\end{equation}
Putting together Equation (\ref{holder02}) and Equation (\ref{eq:estim001}) with $a:=1$ we deduce that
$$
\E{|X_k-Y_k|^b}=\Omega\left(\frac{k^{\frac{b}{2}}}{{\lambda}^b}\right),\,\,\,\text{when}\,\,\, b\ge 1\,\,\,\text{and}\,\,\,b\notin N.
$$
This finishes the proof of the first part of the theorem.
The second part of the theorem (Equation (\ref{eq:asympt02})) follows immediately from the identity
$\sum_{k=1}^{n}k^{\frac{b}{2}}=\Theta\left(n^{\frac{b}{2}+1}\right)$
and the first part of the theorem (Equation (\ref{eq:asympt01})).
\end{proof}
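The $\Theta(k^{b/2})$ rate is visible directly in the closed form: $\Gamma(a/2+k)/\Gamma(k)\sim k^{a/2}$, so $\lambda^a\,\E{|X_k-Y_k|^a}/k^{a/2}$ tends to $a!/\Gamma(a/2+1)$. An illustrative Python check (log-Gamma form to avoid overflow at large $k$):

```python
from math import lgamma, exp, gamma, factorial

a, lam = 3, 1.0

def moment(k):
    # E|X_k - Y_k|^a in log-Gamma form, numerically stable for large k.
    return factorial(a) / lam ** a * exp(lgamma(a / 2 + k) - lgamma(k)) / gamma(a / 2 + 1)

prefactor = factorial(a) / gamma(a / 2 + 1)
# moment(k) / k**(a/2) -> prefactor as k -> infinity, i.e. Theta(k**(a/2) / lam**a).
assert abs(moment(100_000) / 100_000 ** (a / 2) - prefactor) < 1e-4 * prefactor
```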
Fix $a>0.$
Let $X_1,X_2,\dots X_n$ and $Y_1,Y_2,\dots Y_n$ be the first $n$ arrival times of two i.i.d. Poisson processes, respectively, with arrival rate $\lambda=n.$
The sensors in $X_1,X_2,\dots X_n$ are colored $0$ and the sensors in $Y_1,Y_2,\dots Y_n$ are colored $1.$
Then, as a direct application of Theorem \ref{thm:asympt}, we conclude that the expected cost to the power $b$ of a minimum-weight matching between two bicolored random point-sets on a line
(see \cite{dam_2014} for details)
is in $\Theta\left(n^{1-\frac{b}{2}}\right),$
when $b\ge 1,$ and in $O\left(n^{1-\frac{b}{2}}\right),$
when $0 <b< 1.$
\section{Conclusion}
\label{sec:concl}
In this paper, we studied the moment distance between events of two i.i.d. Poisson processes
with arrival rate $\lambda$ and respective arrival times $X_1,X_2,\dots$ and
$Y_1,Y_2,\dots$ on a line. We obtained a closed-form formula for $\E{|X_{k+r}-Y_k|^a},$ where $k\ge1, r\ge0$ and $a\in N,$
and provided asymptotics for real-valued exponents.
\bibliographystyle{plain}
\section{Introduction}
Charge symmetry, the equivalence of the $u$ quark in the proton and
the $d$ in the neutron, and vice versa, is an excellent approximation
in nuclear and hadronic systems --- typically respected at $\sim 1\%$
precision~\cite{Londergan:2009kj,Londergan:1998ai,Miller:2006tv}. Current deep inelastic
scattering measurements are such that this level of precision has not yet been
reached, with existing bounds on charge symmetry violation (CSV) in
parton distributions in the range
5-10\%~\cite{Martin:2003sk}.
Such possibly large CSV effects are of particular interest in the
context of a new program at Jefferson Laboratory~\cite{JLAB} which
aims to measure the electron-deuteron parity-violating deep inelastic
scattering (PVDIS) asymmetry to better than 1\% precision. This would
offer an improvement of roughly an order of magnitude over early SLAC
measurements\cite{Prescott:1979dh}, with the potential to constitute
an important new test of the Standard Model. Reaching this goal will
rely on a precise control of strong interaction processes. CSV is
likely to be the most significant hadronic uncertainty at the kinematics typical
of the JLab
program\cite{Hobbs:2008mm,Hobbs:2011dy,Mantry:2010ki}.
Phenomenological studies suggest that CSV could cause $\sim 1.5-2\%$
variations in the PVDIS asymmetry~\cite{Martin:2003sk}. This is
sufficient to disguise any signature of new physics, such as
supersymmetry, expected to appear at the $1\%$
level~\cite{Kurylov:2003xa}.
Here we review our recent work\cite{Shanahan:2013vla} which has
determined the CSV moments of parton distributions from lattice QCD.
Our results, based on $2+1$-flavor lattice QCD
simulations~\cite{Horsley:2010th,Cloet:2012db}, reveal $\sim 0.20 \pm
0.06\%$ CSV in the quark momentum fractions.
This corresponds to a $\sim 0.4-0.6\%$ correction to the PVDIS asymmetry.
This precision represents an order of magnitude improvement over the
phenomenological bounds reported in Ref.~\cite{Martin:2003sk}.
This result also constitutes an important step towards resolving the
famous NuTeV anomaly~\cite{Zeller:2001hh,Bentz:2009yy}. Whereas the
original report of a 3-sigma discrepancy with the Standard Model was
based on the assumption of negligible CSV, effects of the magnitude
and sign reported here act to reduce this discrepancy by one sigma.
Similar results for spin-dependent parton CSV suggest corrections to
the Bjorken sum rule\cite{Alekseev:2010hc} at the half-percent level
which could possibly be seen at a future electron collider\cite{Deshpande:2005wd}.
In Section~\ref{sec:BaryonSplittings} we introduce the techniques used
for our calculation in the context of the octet baryon mass
splittings\cite{Shanahan:2013apa}.
Section~\ref{sec:CSVParton} summarizes our parton CSV results,
presented in full in Ref.~\cite{Shanahan:2013vla}.
Related work which reveals that the fraction of baryon spin which is
carried by the quarks is in fact structure-dependent rather than
universal across the baryon octet\cite{Shanahan:2012wa} is highlighted
in Section~\ref{sec:SpinFrac}.
\section{Baryon mass splittings}
\label{sec:BaryonSplittings}
Charge symmetry refers to the invariance of the strong interaction
under a $180^\circ$ rotation about the `2' axis in isospin space. At
the parton level this invariance implies the equivalence of the $u$
quark in the proton and the $d$ quark in the neutron, and
vice-versa. The symmetry would be exact if
\begin{itemize}
\item the up and down quarks were mass degenerate: $m_u=m_d$
\item the quark electromagnetic charges were equal: $Q_u=Q_d$.
\end{itemize}
Of course, both of these conditions are broken in nature. This
breaking manifests itself, for example, as mass differences between
members of baryon isospin multiplets.
While these differences have been measured extremely precisely
experimentally\cite{Beringer:1900zz},
the decomposition of these quantities into strong (from $m_u \ne m_d$)
and electromagnetic (EM) contributions is much less well known.
Phenomenological best estimates come from an application of the
Cottingham sum rule\cite{Gasser:1982ap} which relates the
electromagnetic baryon self-energy to electron scattering observables.
Walker-Loud, Carlson \& Miller (WLCM) have recently revised the
standard Cottingham formula\cite{WalkerLoud:2012bg}; noting that two
Lorentz equivalent decompositions of the $\gamma N \rightarrow \gamma
N$ Compton amplitude produce inequivalent self-energies,
WLCM use a subtracted dispersion relation to remove the
ambiguity. This revision modifies traditional values of the EM part of
the baryon mass splittings.
It is clearly valuable to independently determine either the strong
or EM contribution to the proton-neutron mass difference. In principle
this is achievable with lattice QCD. At this time, however, most
lattice simulations for the octet baryon masses are performed with 2+1
quark flavours, that is, with mass-degenerate light quarks: $m_u=m_d$.
Our analysis uses isospin-averaged lattice simulation
results\cite{Aoki:2008sm,Bietenholz:2011qq} to constrain chiral
perturbation theory expressions for the baryon masses. Because of the
symmetries of chiral perturbation theory, the only additional input
required to determine the strong contribution to the baryon mass
splittings is the up-down quark mass ratio $m_u/m_d$. The remainder of
this section is devoted to an illustration of this method.
The usual meson-baryon Lagrangian can be written
\begin{align*} \nonumber
\mathcal{L}^B=&i\,\mathrm{Tr}\, \overline{\bf B}(v\cdot \mathcal{D}){\bf B} +2D \,\mathrm{Tr}\, \overline{\bf B}S^\mu \{A_\mu, {\bf B}\}+2F \,\mathrm{Tr}\, \overline{\bf B}S^\mu \left[ A_\mu, {\bf B} \right] \\ \nonumber
& {+ 2b_D\,\mathrm{Tr}\, \overline{\bf B}\{\mathcal{M}_q, {\bf B}\} +2b_F\,\mathrm{Tr}\, \overline{\bf B}\left[ \mathcal{M}_q, {\bf B} \right] }\\
& {+ 2\sigma_0 \,\mathrm{Tr}\, \mathcal{M}_q\,\,\mathrm{Tr}\, \overline{\bf B}{\bf B}}.
\end{align*}
The $D$ and $F$ terms denote the meson--baryon interactions and
generate the nonanalytic quark mass dependence associated with quantum
fluctuations of the pseudo-Goldstone modes.
The explicit quark mass dependence is carried by the mass matrix
$\mathcal{M}_q$, which is related to only three undetermined
low-energy constants: $b_D$, $b_F$ and $\sigma_0$ (at this order).
With these constants determined by a fit to isospin-averaged
(2+1-flavour) lattice data, there are no new parameters in the
effective field theory relevant to CSV.
Combined with appropriate treatment of the CSV loop
corrections, our analysis of two independent
lattice simulations yields the charge symmetry-breaking derivative\cite{Shanahan:2012wa}
\begin{align*}
m_\pi^2\frac{d}{d\omega}(M_n-M_p)&=(20.3\pm 1.2)\,\mathrm{MeV}\quad \textrm{[PACS-CS]}\\
m_\pi^2\frac{d}{d\omega}(M_n-M_p)&=(16.6\pm 1.2)\,\mathrm{MeV}\quad \textrm{[QCDSF]}.
\end{align*}
Here the quark mass splitting is denoted by $\omega$, which is related to the quark mass ratio ($R=m_u/m_d$) by
\begin{equation}
\omega=\frac{1}{2}\frac{(1-R)}{(1+R)}m_{\pi\mathrm{(phys)}}^2.
\end{equation}
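Because $m_\pi^2$ cancels between the quoted derivative and $\omega/m_\pi^2$, the leading-order strong splitting follows directly from the ratio $R$. The snippet below is an illustrative sketch (not the analysis code); the ratio $R\simeq0.55$ is an assumed input close to the Leutwyler value, and the linearisation in $\omega$ is a leading-order approximation.

```python
def omega(R, m_pi_sq=1.0):
    """Quark-mass-splitting parameter omega = (1/2)(1-R)/(1+R) m_pi^2,
    with R = m_u/m_d; returned in units of m_pi^2 by default."""
    return 0.5 * (1.0 - R) / (1.0 + R) * m_pi_sq

def strong_splitting(dMdomega_mpi2, R):
    """Leading-order strong M_n - M_p in MeV, given the quoted derivative
    m_pi^2 d(M_n-M_p)/domega (in MeV) and the quark mass ratio R.
    m_pi^2 cancels in the product with omega/m_pi^2."""
    return dMdomega_mpi2 * omega(R)

# Illustrative ratio R = m_u/m_d ~ 0.55 (close to the Leutwyler value):
# the PACS-CS derivative of 20.3 MeV then gives roughly 2.9 MeV.
dM = strong_splitting(20.3, 0.55)
```

In the charge-symmetric limit $R\to1$ the splitting vanishes, as it must.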
The dependence of our determination of $(M_p-M_n)^\text{Strong}$ on
the input quark mass ratio is indicated in Fig.~\ref{fig:StrongQ}.
\begin{figure}[!htbpf]
\begin{center}
\includegraphics[width=0.65\textwidth]{dMstrongFigADL.pdf}
\caption{Strong nucleon mass splitting from our analysis of two
independent lattice simulations (QCDSF\cite{Bietenholz:2011qq} and
PACS-CS\cite{Aoki:2008sm}), plotted against the quark mass ratio
$m_u/m_d$. Phenomenological (Leutwyler\cite{Leutwyler:1996qg}) and
lattice (FLAG\cite{Colangelo:2010et}) values for this ratio are
shown.}
\label{fig:StrongQ}
\end{center}
\end{figure}
In Figure~\ref{fig:StrongEM} this analysis, where we consider both the PACS-CS and QCDSF lattice results and allow for both the Leutwyler and FLAG values of the ratio $m_u/m_d$, is compared against a
recent strong mass splitting calculation of the BMW
Collaboration\cite{Borsanyi:2013lga} and the phenomenological
estimates of the electromagnetic self
energy\cite{Gasser:1982ap,WalkerLoud:2012bg}. Only for the purpose of
simplifying the graphic have we not shown other recent lattice QCD
estimates of the strong contribution to the mass splitting~\cite{Horsley:2012fw,deDivitiis:2011eh,Blum:2010ym,Beane:2006fk}.
\begin{figure}[!htbpf]
\begin{center}
\includegraphics[width=0.65\textwidth]{pmnFig4.pdf}
\caption{Status of the nucleon mass splitting
decomposition. Gasser-Leutwyler\cite{Gasser:1982ap} and
WLCM\cite{WalkerLoud:2012bg} calculations of the electromagnetic
contribution are compared with the strong contribution determined in
this work\cite{Shanahan:2012wa} and by the BMW lattice
collaboration\cite{Borsanyi:2013lga}. The black line indicates the
experimental determination of the total mass
difference\cite{Beringer:1900zz}.}
\label{fig:StrongEM}
\end{center}
\end{figure}
\section{CSV parton distribution moments}
\label{sec:CSVParton}
The spin-independent CSV Mellin moments are defined as
\begin{align*}
\delta u^{m\pm} &= \int_0^1 dx x^m (u^{p\pm}(x)-d^{n\pm}(x)) \\
& = \langle x^m \rangle_u^{p\pm} - \langle x^m \rangle_d^{n\pm}, \\[8pt]
\delta d^{m\pm} &= \int_0^1 dx x^m (d^{p\pm}(x) - u^{n\pm}(x)) \\
&= \langle x^m \rangle_d^{p\pm}- \langle x^m \rangle_u^{n\pm},
\end{align*}
with similar expressions for the analogous spin-dependent terms $\delta\Delta q^\pm$. Here, the plus (minus) superscripts indicate C-even (C-odd) distributions $q^{\pm}(x)=q(x)\pm \overline{q}(x)$.
The first two spin-dependent and first spin-independent
lattice-accessible moments have recently been determined from
$2+1-$flavor lattice QCD by the QCDSF/UKQCD
Collaboration\cite{Cloet:2012db,Horsley:2010th}.
These original papers made first estimates for the amount of CSV in the parton moments by considering the leading flavour expansion about the SU(3) symmetric point\cite{Cloet:2012db,Horsley:2010th}.
In Ref.\cite{Shanahan:2013vla} we applied an SU(3) chiral expansion in the same fashion as the baryon mass expansion described above. This enabled us to extrapolate the results away from the SU(3) symmetric point to determine the CSV contribution at the physical quark masses.
Although this work only determines the lowest nontrivial spin-independent
moment, we can infer the CSV distribution as
shown in Fig.~\ref{fig:csvpdf} by using the same parameterisation of the $x$ dependence
as Ref.~\cite{Martin:2003sk}.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.68\textwidth]{CSVPDF}
\caption{Charge symmetry violating momentum fraction using simple phenomenological parameterisation $\delta q(x)=\kappa x^{-1/2}(1-x)^4(x-1/11)$ with normalisation determined from the lattice moment\cite{Shanahan:2013vla}.
\label{fig:csvpdf}}
\end{center}
\end{figure}
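A useful property of the parameterisation in Fig.~\ref{fig:csvpdf} is that its zeroth moment vanishes: the $(x-1/11)$ factor is arranged so that $\delta q(x)$ redistributes momentum without changing quark number. The short numerical check below (plain Python; the normalisation $\kappa$ is left as a free parameter, since its value is fixed by the lattice moment) verifies this and evaluates the first-moment integral that the lattice result normalises.

```python
def delta_q(x, kappa=1.0):
    """Phenomenological CSV shape: kappa * x^{-1/2} (1-x)^4 (x - 1/11)."""
    return kappa * x ** -0.5 * (1.0 - x) ** 4 * (x - 1.0 / 11.0)

def moment(m, kappa=1.0, n=200_000):
    """Numerically integrate x^m * delta_q(x) over [0, 1].
    The substitution x = t^2 removes the integrable x^{-1/2} singularity,
    leaving a smooth integrand for a midpoint rule."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h            # midpoint in t
        x = t * t
        total += 2.0 * t * x ** m * delta_q(x, kappa) * h
    return total

# Zeroth moment vanishes: the shape adds no net quark number,
# only a momentum asymmetry; the first moment carries the CSV signal.
zeroth = moment(0)   # ~ 0 up to quadrature error
first = moment(1)    # proportional to kappa; fixed by the lattice moment
```

Analytically the zeroth moment is a combination of Beta functions that cancels exactly, which the quadrature reproduces to high accuracy.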
This magnitude of charge symmetry breaking is found to be in agreement
with phenomenological MIT bag model
estimates\cite{Rodionov:1994cg,Londergan:2003ij}. This result is of particular
significance in the context of a new program to measure the PVDIS
asymmetry to high precision at Jefferson
Laboratory\cite{JLAB,Wang:2013kkc}. Further, the sign and magnitude of these results
suggest a 1-$\sigma$ reduction of the NuTeV
anomaly\cite{Bentz:2009yy}.
\section{Octet baryon spin fractions}
\label{sec:SpinFrac}
In addition to using the chiral extrapolation of the previous section
to extract CSV effects, we have also determined the relative quark
spin fractions in the octet baryons\cite{Shanahan:2013apa}.
Figure~\ref{fig:SpinFraction}, taken from Ref.\cite{Shanahan:2013apa},
illustrates that the quark spin fraction is environment dependent. The
figure highlights that this effect is already evident in the bare lattice
results, with considerable enhancement seen in the extrapolation to
the physical point.
Clearly, any candidate explanation of the proton spin problem must
allow for the fraction of spin carried by the quarks to be dependent
on baryon structure.
\begin{figure}
\begin{center}
\includegraphics[width=0.68\textwidth]{DoublyRepSpinFracLonger.pdf}
\end{center}
\caption{Ratio of doubly-represented quark spin fractions in the octet baryons, taken from Ref.~\cite{Shanahan:2013apa}. $X_\pi$ is the singlet quark mass.}
\label{fig:SpinFraction}
\end{figure}
This finding is supported by a Cloudy Bag Model calculation, which
includes relativistic and one-gluon-exchange
corrections~\cite{Myhrer:1988ap,Myhrer:2007cf,Schreiber:1988uw}.
Within this model, the observed variation in quark spin arises from
the meson cloud correction being considerably smaller in the $\Xi$
than in the nucleon. That, combined with the less relativistic motion
of the heavier strange quark, results in the total spin fraction in
the $\Xi$ being significantly larger than in the nucleon.
\section{Conclusion}
The effects of charge symmetry violation (CSV) are becoming
increasingly significant in precision studies of the Standard
Model. Recent results, based on $2+1-$flavor lattice QCD simulations,
unambiguously resolve CSV in the quark Mellin moments. These results
reduce the NuTeV anomaly from $3\sigma$ to $2\sigma$ and could improve
the sensitivity of Standard Model tests such as the PVDIS program at
Jefferson Laboratory. The same lattice QCD studies show that the
fraction of baryon spin carried by the quarks is structure-dependent,
rather than universal across the baryon octet.
\section{Acknowledgements}
This work was supported by the University of Adelaide and the
Australian Research Council through the ARC Centre of
Excellence for Particle Physics at the Terascale and grants FL0992247
(AWT), DP110101265 (RDY) and FT120100821 (RDY).
\bibliographystyle{ws-procs9x6}
\section*{Acknowledgements}
MB was supported by the South African Radio Astronomy Observatory, which is a
facility of the National Research Foundation, an agency of the Department of Science
and Technology and he was also supported by the Claude Leon Foundation.
DS acknowledges financial support from the Fondecyt project number 11140496.
DS would like to thank INFN for supporting a visit in Bologna during which this work was carried on.
This work has made use of the Horizon Cluster hosted by Institut d'Astrophysique de Paris.
We thank Stephane Rouberol for smoothly running this cluster. UC was partially supported
within the Labex ILP (reference ANR-10-LABX-63) part of the Idex SUPER, and received
financial state aid managed by the Agence Nationale de la Recherche, as part of the
programme Investissements d'avenir under the reference ANR-11-IDEX-0004-02.
MB, DP and FF acknowledge financial support by ASI n.I/023/12/0 "Attivit\`a relative alla
fase B2/C per la missione Euclid", ASI Grant 2016-24-H.0 and partial financial
support by the ASI/INAF Agreement I/072/09/0 for the Planck LFI Activity of Phase E2.
\section{Conclusion}
\label{sec:conclusion}
eJBD theories represent the simplest scalar-tensor theory of gravity, in which
Newton's constant is allowed to vary, becoming a dynamical field that is a function
of space and time.
This class of theories has already been severely constrained by Solar System experiments,
leading to $\gamma_\mathrm{PN} - 1 = (2.1 \pm 2.3) \times 10^{-5}$ at 68\% CL \cite{Bertotti:2003rm}.
These Solar System tests constrain the weak-field behaviour of
gravity, and the strong-field behaviour that this family of theories can still
exhibit is contrained by the binary pulsar \cite{Zhu:2015mdo,Archibald:2018oxs}.
However, it is conceivable that gravity differed considerably from GR in the early Universe.
Even if GR seems to work well today on Solar System scales,
in several scalar-tensor theories there is generally an attractor mechanism that
drives the theory towards an effective cosmological constant at late times.
BBN \cite{Copi:2003xd,Bambi:2005fi} provides a test of gravity at
early times based on the impact of the effective gravitational constant on the expansion rate
and on the cosmological abundances of the light elements produced during BBN.
Cosmological observations, such as CMB anisotropies and LSS matter distribution,
probe different epochs and scales of the Universe.
The redshift of matter-radiation equality is modified in eJBD theories by the motion
of the scalar field driven by pressureless matter and this results in a shift
of the CMB acoustic peaks \cite{Liddle:1998ij,Chen:1999qh}.
$Planck$ data have been already used to constrain this eJBD models
\cite{Avilez:2013dxa,Li:2013nwa,Umilta:2015cta,Ooba:2016slp}
(see \cite{Nagata:2003qn,Acquaviva:2004ti,Wu:2009zb} for analysis with pre-$Planck$ data).
The latest $Planck$ 2015 public data release constrains $1 - \gamma_\mathrm{PN} < 0.007$ at 95\% CL,
and the combination of CMB and LSS data through the addition of BAO information
has shown a promising way to further constrain this class of models in light of upcoming
LSS experiments, leading to $1 - \gamma_\mathrm{PN} < 0.003$ at 95\% CL \cite{Ballardini:2016cvy}.
In this paper we investigated how well some future CMB experiments and LSS surveys will be able
to improve current cosmological constraints on this simple class of scalar-tensor theories.
We consider an eJBD theory of gravity in which a potential term is included in order to embed
the current accelerated expansion phase of the Universe in the original JBD theory.
Our results have been enlightening and we can summarise them as follows:
\begin{itemize}
\item Future CMB experiments, such as AdvACT, CORE and Stage-4 CMB, will improve current
constraints from $Planck$ 2015 alone by a factor of 3-5, thanks to a better measurement of the
small-scale CMB anisotropies. We find that in the best case $\sigma\left(1 - \gamma_\mathrm{PN}\right) \simeq 0.0005$ at 68\% CL
for the proposed space- and ground-based CMB experiments CORE and S4.
\item We forecast the combination of CMB and spectroscopic surveys using the 3-dimensional
observed galaxy power spectrum. We consider a Euclid-like spectroscopic survey and,
to complete the redshift coverage of the Euclid-like selection function $0.9<z<1.8$
\cite{Pozzetti:2016cch,Merson:2017efv},
we include optical spectroscopic observations from BOSS in the range $0.2<z<0.8$
\cite{Dawson:2012va,Alam:2016hwk}.
The combination of quasi-linear information up to $k_{\rm max}=0.1\,h/\text{Mpc}$ for the
Euclid-like and BOSS GC with the CMB reduces the uncertainties by roughly a factor of three to ten
with respect to the CMB alone, with a best-case bound of $\sigma\left(1 - \gamma_\mathrm{PN}\right) \simeq 0.0002$ at 68\% CL.
\item We find that the inclusion of mildly non-linear scales in the galaxy power spectrum
is crucial to bring the constraints from cosmological observations to the same order as
current Solar System constraints.
\item WL surveys will improve the sensitivity to $\gamma$ by approximately a factor of 2.
\item The best bound that we obtain combining all three cosmological probes and including
non-linear scales in the GC up to $k_{\rm max} = 0.25\,h/\text{Mpc}$ and in the WL up to
$\ell_{\rm max} = 5000$ is $\sigma\left(1 - \gamma_\mathrm{PN}\right) \simeq 0.000062$ at 68\% CL.
This forecast is only approximately a factor of three worse than the current Solar System constraint.
\end{itemize}
Although consistent with \cite{Acquaviva:2007mm,Alonso:2016suf} our estimate of $\gamma_{\rm PN}$ is
based on different assumptions.
It is difficult to compare our results with the pioneering work \cite{Acquaviva:2007mm}:
theoretical predictions, forecast methodology and experimental specifications in \cite{Acquaviva:2007mm}
are different from our analysis. Overall, we can say that our forecasted uncertainty on $\gamma_{\rm PN}$ is
more optimistic than those quoted in \cite{Acquaviva:2007mm} because we combine expected constraints from different probes.
\cite{Alonso:2016suf} use LSST photometric specifications for galaxy clustering and weak lensing,
and SKA1-MID intensity mapping, whereas we use Euclid-like spectroscopic survey for galaxy clustering
and photometric specifications for weak lensing; we do not consider any screening.
We close with three final remarks. Our forecasted sensitivity on $\gamma_{\rm PN}$ is smaller
than the one obtained from models with a non-universal coupling between dark
matter and dark energy motivated by eJBD theories \cite{Amendola:2011ie}. As a second remark,
our work shows the importance of developing non-linear approximation schemes for eJBD theories
\cite{Perrotta:2003rh,Li:2010zw,Taruya:2014faa,Winther:2015wla} to reach the accuracy required
by future cosmological observations. As a third and conclusive point, it would be interesting to
further add complementary probes at low redshift: indeed, we have been quite inclusive with the
forecasts from the next CMB polarisation experiments, whereas other measurements at lower redshift,
complementary to Euclid and BOSS, might be crucial to strengthen our predictions.
\section{Statistical errors forecasts}
\label{sec:five}
In this section we estimate marginalised statistical errors for the cosmological
parameters of our model, using the Fisher matrix calculation.
The probes are assumed to be independent, hence the total Fisher matrix is simply
given by the sum of the single Fisher matrices:
\begin{equation}
F_{\alpha\beta} = F_{\alpha\beta}^\text{CMB}+F_{\alpha\beta}^\text{GC}+F_{\alpha\beta}^\text{WL} \,.
\end{equation}
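The gain from summing Fisher matrices is easiest to see in a toy two-parameter example with complementary degeneracy directions, as in Fig.~\ref{fig:2D}. The sketch below uses illustrative numbers only (think of ${\bf \theta}=(h_0,\gamma)$, not actual survey values) and computes the marginalised error $\sigma(\theta_i)=\sqrt{(F^{-1})_{ii}}$ before and after combination.

```python
def inv2(F):
    """Inverse of a symmetric 2x2 Fisher matrix [[a, b], [b, c]]."""
    (a, b), (_, c) = F
    det = a * c - b * b
    return [[c / det, -b / det], [-b / det, a / det]]

def marginalised_sigma(F, i):
    """1-sigma marginalised error on parameter i: sqrt((F^-1)_{ii})."""
    return inv2(F)[i][i] ** 0.5

# Toy example: two probes with opposite degeneracy directions.
# Individually each constrains parameter 1 poorly; summing the
# Fisher matrices breaks the degeneracy.
F_cmb = [[100.0, 90.0], [90.0, 100.0]]
F_gc = [[100.0, -90.0], [-90.0, 100.0]]
F_tot = [[F_cmb[i][j] + F_gc[i][j] for j in range(2)] for i in range(2)]

sigma_single = marginalised_sigma(F_cmb, 1)    # ~0.23
sigma_combined = marginalised_sigma(F_tot, 1)  # ~0.071
```

The combined error shrinks far more than the naive $1/\sqrt{2}$ because the off-diagonal correlations cancel, which is exactly the behaviour of the differently oriented contours in Fig.~\ref{fig:2D}.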
We perform the Fisher forecast analysis for the set of parameters
${\bf \theta} = \{\omega_{\rm c},\omega_{\rm b},h_0,n_{\rm s},\ln(10^{10}A_{\rm s}),\gamma\}$.
For the CMB we also consider the reionization optical depth $\tau$, and we marginalize over
it before combining the CMB Fisher matrix with the other two.
We assume as fiducial model a flat cosmology with best-fit parameters corresponding
to $\omega_\textrm{c}\equiv\Omega_\textrm{c}h^2=0.1205$,
$\omega_\textrm{b}\equiv\Omega_\textrm{b}h^2=0.02218$, $h_0\equiv H_0/100=0.6693$, $\tau=0.0596$,
$n_\textrm{s}=0.9619$, and $\ln\left(10^{10}\ A_\textrm{s}\right)=3.056$
consistent with the recent results of $Planck$ \cite{Aghanim:2016yuo}.
As a fiducial value for the coupling to the Ricci curvature we choose $\gamma=10^{-5}$,
this value lies within the 95\% CL upper bound from current cosmological data \cite{Ballardini:2016cvy}
and is compatible at the 3$\sigma$ level with the Solar System data \cite{Bertotti:2003rm}.
We have considered two fiducial potentials, a quartic potential and a constant one.
The CMB angular power spectra, the matter power spectra, together with the Hubble parameter $H(z)$,
the angular diameter distance $D_A(z)$, and growth rate $f(z,k)$ have been computed with CLASSig,
a modified version of the Einstein-Boltzmann code CLASS
\footnote{\href{http://github.com/lesgourg/class_public}{http://github.com/lesgourg/class\_public}}
\cite{Lesgourgues:2011re,Blas:2011rf} dedicated to eJBD theory \cite{Umilta:2015cta}.
This code has been successfully validated against other codes in \cite{Bellini:2017avd}.
Non-linear scales have been included in the matter power spectrum assuming
the halofit model from \cite{Takahashi:2012em}.
\begin{figure}
\includegraphics[width=.45\textwidth]{SingleProbe_n4g1e5_v2.pdf}
\includegraphics[width=.45\textwidth]{CMBGC_n4g1e5_v2.pdf}
\caption{Left: joint marginalized constraints (68\%-95\% CL) on $h_0$ and $\gamma$ from single
probe alone. Dashed lines correspond to the 68\%-95\% CL using the GC information from Euclid-like
in combination with BOSS.
Right: joint marginalized constraints (68\%-95\% CL) on $h_0$ and $\gamma$ from the combination
CMB+Euclid-GC for the four CMB surveys. Dashed lines correspond to the 68\% CL using the GC information
up to $k_{\rm max}=0.25\,h/\text{Mpc}$.}
\label{fig:2D}
\end{figure}
Fig.~\ref{fig:2D} shows the constraints from the single observational probes. The different
orientation of the 2-dimensional contours shows that the most efficient way to reduce the
constraint error on $\gamma$ is to combine different cosmological probes.\\
With our Fisher approach, we find that the uncertainty from $Planck$ simulated data alone,
$\sigma(\gamma)\simeq 0.00064$ at 68\% CL (consistent with our finding with $Planck$ 2015 real
data \cite{Ballardini:2016cvy}), will improve by a factor of three using AdvACT+$Planck$, a factor of
four with CORE, and a factor of five with the combination S4+LiteBIRD, i.e.
$\sigma(\gamma)\simeq0.00022,\,0.00016,\,0.00013$ at 68\% CL, respectively.
Adding quasi-linear information up to $k_{\rm max}=0.1\,h/\text{Mpc}$ from the galaxy
power spectrum of Euclid-like and BOSS to the CMB leads to a significant improvement of the
uncertainty on $\gamma$, approximately a factor of three to ten with respect to the constraints
obtained with the CMB alone.
In order to understand the improvement carried by mildly non-linear scales, we also include
the case of $k_{\rm max}=0.25\,h/\text{Mpc}$ which further improves the uncertainty on $\gamma$.
In this case, we find errors five to twenty times smaller
than with the CMB alone. We show in Fig.~\ref{fig:2D} the 2-dimensional
marginal errors for the combination of CMB+Euclid-GC Fisher matrices for $Planck$,
AdvACT+$Planck$, CORE, S4+LiteBIRD which correspond to the uncertainties of
$\sigma(\gamma)\simeq0.000058\, (0.000032)$,
$\sigma(\gamma)\simeq0.000054\, (0.000027)$,
$\sigma(\gamma)\simeq0.000052\, (0.000025)$,
$\sigma(\gamma)\simeq0.000051\, (0.000025)$ at 68\% CL for $k_{\rm max}=0.1\, (0.25)\,h/\text{Mpc}$;
including also BOSS-GC information we obtain respectively
$0.000057\, (0.000031)$,
$0.000052\, (0.000026)$,
$0.000050\, (0.000024)$,
$0.000049\, (0.000023)$ at 68\% CL for $k_{\rm max}=0.1\, (0.25)\,h/\text{Mpc}$.
Finally, we considered the combination of our three cosmological probes (CMB, GC, WL) to identify
the tightest constraint on $\gamma$ by including non-linear scales through WL.
The sensitivity on $\gamma$ combining the CMB with GC information up to
$k_{\rm max}=0.1\,h/\text{Mpc}$ and WL assuming $\ell_{\rm max}=1500$ corresponds to
$\sigma(\gamma)\simeq0.000045$,
$\sigma(\gamma)\simeq0.000037$,
$\sigma(\gamma)\simeq0.000029$,
$\sigma(\gamma)\simeq0.000023$,
at 68\% CL for $Planck$, AdvACT+$Planck$, CORE, and S4+LiteBIRD. These uncertainties
improve by another factor of 1.5 if we push the GC information up to $k_{\rm max}=0.25\,h/\text{Mpc}$.
We show in Fig.~\ref{fig:wl-lmax} the impact of non-linear scales pushing the WL up to our optimistic
case of $\ell_{\rm max}=5000$. We find only a small improvement in the error on $\gamma$ when
pushing $\ell_{\rm max}$ for the WL from 1500 to 5000.
\begin{figure}
\includegraphics[width=.45\textwidth]{gamma_lmax_v2.pdf}
\includegraphics[width=.45\textwidth]{nvar_n0g1e5_v2.pdf}
\caption{Left: forecast marginalized constraint on $\gamma$ (68\% CL) as a function
of the maximum multipole $\ell_{\rm max}$ included in the WL. Dashed lines
correspond to the combination with GC using the Euclid-GC information
up to $k_{\rm max}=0.25\,h/\text{Mpc}$, and dot-dashed lines also include BOSS
up to $k_{\rm max}=0.25\,h/\text{Mpc}$. The gray shaded
region represents the 68\% CL constraint from Solar System data \cite{Bertotti:2003rm}.
Right: joint marginalized constraints (68\%-95\% CL) on $n_{\rm IG}$ and $\gamma$ from the
combination CMB+Euclid-GC for the four CMB surveys. Dashed lines correspond to the 68\% CL
using the Euclid-GC information up to $k_{\rm max}=0.25\,h/\text{Mpc}$.}
\label{fig:wl-lmax}
\end{figure}
The most conservative forecast, combining Euclid-like plus BOSS restricted to quasi-linear
scales with $Planck$, improves the current uncertainties based on $Planck$ 2015 and BAO data
by a factor between three and approximately twenty.
In Tab.~\ref{tab:errors}, we show the marginalized uncertainties on all the
cosmological parameters from the combination of the CMB surveys with Euclid-like and
BOSS using the conservative range of scales.
\begin{table}[!h]
\centering
\caption{Marginalized uncertainties (68\% CL) on the cosmological parameters
for $\gamma=10^{-5}$ and $n_{\rm IG}=4$.
We consider the combination of three CMB surveys with the information up
to quasi-linear scales from the GC ($k_{\rm max}=0.1\ h/$Mpc) using Euclid-like
plus BOSS and WL ($\ell_{\rm max}=1500$) from Euclid-like.
Numbers in round brackets refer to the uncertainties for our optimistic case
with GC up to $k_{\rm max}=0.25\ h/$Mpc and WL up to $\ell_{\rm max}=5000$.}
\label{tab:errors}
\vspace*{0.2cm}
\begin{tabular}{|c|ccc|}
\hline
\rule[-1mm]{0mm}{.4cm}
& $Planck$ & AdvACT+$Planck$ & S4+LiteBIRD \\
& + BOSS-GC & + BOSS-GC & + BOSS-GC \\
& + Euclid-GC+WL & + Euclid-GC+WL & + Euclid-GC+WL \\
\hline
\rule[-1mm]{0mm}{.4cm}
$10^4~\sigma(\omega_c)$ & 1.3 (1.1) & 1.0 (0.78) & 0.83 (0.67) \\
\rule[-1mm]{0mm}{.4cm}
$10^5~\sigma(\omega_b)$ & 9.8 (8.8) & 4.5 (3.8) & 3.2 (2.8) \\
\rule[-1mm]{0mm}{.4cm}
$10^3~\sigma(h)$ & 1.7 (0.99) & 1.4 (0.84) & 1.0 (0.64) \\
\rule[-1mm]{0mm}{.4cm}
$10^3~\sigma(n_s)$ & 1.8 (1.2) & 1.5 (0.96) & 1.2 (0.88) \\
\rule[-1mm]{0mm}{.4cm}
$10^3~\sigma\left(\ln\left(10^{10}A_s\right)\right)$ & 1.3 (0.75) & 1.1 (0.71) & 0.98 (0.56) \\
\rule[-1mm]{0mm}{.4cm}
$10^5~\sigma(\gamma)$ & 4.5 (2.7) & 3.7 (2.2) & 2.3 (1.5) \\
\hline
\end{tabular}
\end{table}
We study the impact of a larger fiducial value for $\gamma=10^{-4}$
(still compatible with current $Planck$ 2015 + BAO constraints) on the forecasted uncertainties.
We find that the effect on CMB and GC is around $\sim 10\%$ on the uncertainties; this implies
that we will be able to detect a value of $\gamma=10^{-4}$ at the 2-5~$\sigma$ level with the combination
of the CMB with GC information from a Euclid-like experiment.
Regarding the WL, the uncertainties halve, leading to a clearer detection of such a $\gamma$ at more
than 5~$\sigma$ when WL from Euclid-like is added.
We repeat our series of forecasts with a different potential for the scalar field with an index
equal to zero, i.e. $n_{\rm IG}=0$, namely a cosmological constant.
For this fiducial cosmology we find only a small degradation of the uncertainties on $\gamma$,
pointing to a weak correlation between the two parameters.
Finally, we also test the possibility of constraining the index of the scalar potential
around $n_{\rm IG}=0$ with future cosmological data, see Fig.~\ref{fig:wl-lmax}.
The tightest uncertainty that we obtain combining all the three cosmological probes and
including non-linear scales in the GC up to $k_{\rm max}=0.25\,h/\text{Mpc}$ and in the
WL up to $\ell_{\rm max}=5000$ is $\sigma\left(n_{\rm IG}\right) \simeq 5$ at 68\% CL.
In order to compare our findings on cosmological scales with the constraints obtained within
the Solar System, we quote the constraint on the post-Newtonian parameter defined for this
class of eJBD theories as:
\begin{equation}
\gamma_{\rm PN} = \frac{1+4\gamma}{1+8\gamma} \,.
\end{equation}
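For small $\gamma$ the deviation from GR is $1-\gamma_{\rm PN}=4\gamma/(1+8\gamma)\simeq4\gamma$, so uncertainties on $\gamma$ propagate to $\gamma_{\rm PN}$ with a factor of about four. A minimal numerical sketch of this propagation (plain Python, using the fiducial $\gamma=10^{-5}$ and an optimistic $\sigma(\gamma)$ of the order quoted in Tab.~\ref{tab:errors}):

```python
def gamma_pn(g):
    """Post-Newtonian parameter for this eJBD class: (1+4g)/(1+8g)."""
    return (1.0 + 4.0 * g) / (1.0 + 8.0 * g)

def sigma_gamma_pn(g, sigma_g):
    """Linear error propagation: |d gamma_PN / d gamma| * sigma(gamma)
    = 4 sigma(gamma) / (1+8g)^2, i.e. ~ 4 sigma(gamma) for small gamma."""
    return 4.0 * sigma_g / (1.0 + 8.0 * g) ** 2

dev = 1.0 - gamma_pn(1e-5)          # ~4e-5 deviation from GR at the fiducial
sig = sigma_gamma_pn(1e-5, 1.5e-5)  # ~6e-5, the scale of the optimistic bound
```

In the GR limit $\gamma\to0$ one recovers $\gamma_{\rm PN}=1$ exactly.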
Our derived forecasted uncertainties span from
$\sigma\left(|\gamma_{\rm PN}-1|\right) \simeq 2.5\times10^{-3}$ at 68\% CL for a CMB experiment
with $Planck$ sensitivity down to $\simeq 5.3\times10^{-4}$ for a future CMB experiment
able to perform a cosmic-variance-limited measurement of the E-mode polarization at small scales.
Combining CMB information with GC and WL, we find
$\sigma\left(|\gamma_{\rm PN}-1|\right) \simeq 9.2\times10^{-5}$ and including non-linear
scales a minimum error on the deviation from GR in the weak field limit corresponding
to $\sigma\left(|\gamma_{\rm PN}-1|\right) \simeq 6.2\times10^{-5}$ at 68\% CL.
\section{Fisher approach for LSS data}
\label{sec:four}
We now give the details for the Fisher forecasts with future LSS data.
We consider Euclid-like specifications as a representative case for future galaxy surveys.
Euclid is a mission of the ESA Cosmic Vision program that is expected to be launched in 2022.
It will perform both a spectroscopic and a photometric survey: the first aims mainly at measuring the
galaxy power spectrum of $\sim 30,000,000$ galaxies while the second at measuring
the weak lensing signal by imaging $\sim 1.5$ billion galaxies.
Both surveys will be able to constrain both the expansion and growth history of the universe and
will cover a total area of $15,000$ square degrees.
\subsection{Spectroscopic galaxy power spectrum}
Following \cite{Seo:2003pu}, we write the linear observed galaxy power spectrum as:
\begin{equation}
P_{gal}(z;\,k_r,\mu_r) =
\frac{D_{Ar}^{2}(z)H(z)}{D_{A}^{2}(z)H_{r}(z)} \left[b(z)\sigma_8(z)+f(z,\,k)\sigma_8(z)\mu^{2}\right]^{2} \frac{P_{r}(z,\,k)}{\sigma_8^2(z)}+ P_\mathrm{shot}(z) \,,
\label{eq:pk}
\end{equation}
where the subscript $r$ refers to the reference (or fiducial) cosmological model.
Here $P_\mathrm{shot}(z)$ is a scale-independent offset due to imperfect removal of shot-noise,
$\mu \equiv \vec{k}\cdot\hat{r}/k$ is the cosine of the angle of the wave mode with respect to
the line of sight pointing into the direction $\hat{r}$, $P_{r}(z,\,k)$ is the fiducial matter
power spectrum evaluated at different redshifts, $b(z)$ is the bias factor, $f(z,\,k)$
is the growth rate, $H(z)$ is the Hubble parameter and $D_{A}(z)$ is the angular diameter distance.
The wavenumber $k$ and the cosine $\mu$ also have to be expressed in terms of the fiducial cosmology
(see \cite{Seo:2003pu,Amendola:2004be,Sapone:2007jn} for more details).
The fiducial bias used in this paper is $b(z)=0.72 z+ 0.7$ according to \cite{Merson:2019vfr}.
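As a concrete illustration of Eq.~\eqref{eq:pk}, the sketch below evaluates the Kaiser anisotropy between line-of-sight and transverse modes, which is independent of $P_r(k)$. This is plain Python, not survey code; the growth rate $f\simeq0.87$ at $z=1$ is an illustrative value (roughly $\Omega_{\rm m}(z)^{0.55}$), not a number taken from this paper.

```python
def bias(z):
    """Fiducial Euclid-like bias, b(z) = 0.72 z + 0.7."""
    return 0.72 * z + 0.7

def p_gal(k, mu, z, f, p_r, s8=1.0, ap=1.0, p_shot=0.0):
    """Observed galaxy power spectrum of Eq. (pk): AP volume prefactor
    'ap' times (b*s8 + f*s8*mu^2)^2 * P_r/s8^2, plus shot noise.
    In practice p_r = P_r(z, k); here it is passed in directly."""
    return ap * (bias(z) * s8 + f * s8 * mu * mu) ** 2 * p_r / s8 ** 2 + p_shot

# Kaiser anisotropy at z = 1: line-of-sight (mu=1) vs transverse (mu=0)
# power differ by ((b+f)/b)^2, whatever the shape of P_r(k).
z, f = 1.0, 0.87  # f ~ Omega_m(z)^0.55, illustrative value
ratio = p_gal(0.1, 1.0, z, f, p_r=1.0) / p_gal(0.1, 0.0, z, f, p_r=1.0)
```

It is this $\mu$ dependence that lets the Fisher analysis separate $b\sigma_8$ from $f\sigma_8$ in each redshift bin.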
The Fisher matrix for the galaxy power spectrum is given by \cite{Seo:2003pu}:
\begin{equation}
F_{\alpha\beta} = \int_{k_\text{min}}^{k_\text{max}}\frac{k^2\mathrm{d} k}{4\pi^2}\frac{\partial \ln P_{gal}(z;\,k,\mu)}{\partial \theta_\alpha}\frac{\partial \ln P_{gal}(z;\,k,\mu)}{\partial \theta_\beta}\times V_\text{eff}\,.
\end{equation}
The observed galaxy power spectrum is given by Eq.~\eqref{eq:pk} and the derivatives are evaluated
numerically at the fiducial cosmology. We set $k_\text{min} = 0.001\,h/\text{Mpc}$, a value that depends
on the survey size, whereas $k_\text{max}$ is chosen such that the root-mean-square amplitude of the density
fluctuations at the scale $R_\text{max} = 2\pi/k_\text{max}\,\text{Mpc}/h$ is
$\sigma^2(R_\text{max}) = 0.25$. However, in order not to depend strongly on non-linear information,
we consider two cases, imposing an additional cut at $k_\text{max} = 0.1\,h/\text{Mpc}$ and at
$k_\text{max} = 0.25\,h/\text{Mpc}$.
The effective volume of the survey in each bin is given by:
\begin{equation}
V_\text{eff} = \left(\frac{\bar{n}\,P_{gal}(z;\,k, \mu)}{1+\bar{n}\,P_{gal}(z;\,k, \mu)}\right)^2V_\text{survey}\,,
\end{equation}
where $\bar{n}$ is the average comoving number density in each bin; the values of $\bar{n}$ and the fiducial Euclid-like specifications can be found in \cite{Pozzetti:2016cch,Merson:2017efv}.
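As a rough numerical illustration (not the pipeline actually used for the forecasts), the Fisher integral above can be evaluated for the parameters $(b\sigma_8,\,f\sigma_8,\,P_\mathrm{shot})$ in a single redshift bin. The fiducial power spectrum and the values of $\bar{n}$ and $V_\text{survey}$ below are illustrative placeholders, not the Euclid specifications:

```python
import numpy as np

def p_gal(k, mu, b_s8=1.2, f_s8=0.5, p_shot=300.0):
    """Toy observed galaxy power spectrum (unit Alcock-Paczynski factors):
    (b s8 + f s8 mu^2)^2 P_r(k)/s8^2 + P_shot, with a placeholder P_r(k)."""
    p_r = 2.0e4 * (k / 0.05) ** (-1.5)   # illustrative fiducial spectrum
    s8 = 0.8
    return (b_s8 + f_s8 * mu**2) ** 2 * p_r / s8**2 + p_shot

def fisher_gc(params, nbar=4e-4, v_survey=1e9,
              kmin=1e-3, kmax=0.25, nk=200, nmu=50, eps=1e-4):
    """F_ab = int_{-1}^{1} dmu int k^2 dk/(8 pi^2) dlnP_a dlnP_b V_eff."""
    k = np.linspace(kmin, kmax, nk)
    mu = np.linspace(-1.0, 1.0, nmu)
    kk, mm = np.meshgrid(k, mu, indexing="ij")
    p0 = p_gal(kk, mm, **params)
    v_eff = (nbar * p0 / (1.0 + nbar * p0)) ** 2 * v_survey
    # two-sided numerical derivatives of ln P with respect to each parameter
    dlnp = []
    for name in params:
        up, dn = dict(params), dict(params)
        up[name] += eps
        dn[name] -= eps
        dlnp.append((np.log(p_gal(kk, mm, **up))
                     - np.log(p_gal(kk, mm, **dn))) / (2 * eps))
    n = len(params)
    fisher = np.empty((n, n))
    dk, dmu = k[1] - k[0], mu[1] - mu[0]
    for a in range(n):
        for b in range(n):
            integrand = dlnp[a] * dlnp[b] * v_eff * kk**2 / (8 * np.pi**2)
            fisher[a, b] = np.sum(integrand) * dk * dmu
    return fisher

F = fisher_gc({"b_s8": 1.2, "f_s8": 0.5, "p_shot": 300.0})
```

Marginalized errors then follow from the inverse, $\sigma_\alpha = \sqrt{(F^{-1})_{\alpha\alpha}}$.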
To complete the GC information, we include low-redshift spectroscopic information
from BOSS \cite{Dawson:2012va,Alam:2016hwk} on the redshift range $0.2 < z < 0.8$ over 10,000
square degrees.
\subsection{Weak Lensing}
\label{subsec:wlps}
The weak lensing convergence power spectrum is given by
\cite{Hu:1999ek,Hu:2002rm,Heavens:2003jx,Jain:2003tba,Amendola:2007rr}:
\begin{equation}
P_{ij}(\ell) = H_0^3
\int_0^{\infty}\frac{\mathrm{d} z}{E(z)^2}\:W_{i}(z)W_{j}(z)\: P_\mathrm{NL}\left(k=\frac{H_0\,\ell}{r(z)},z\right) \,,
\label{eq:convergence-wl}
\end{equation}
where the subscript ${ij}$ refers to the redshift bins around $z_i$ and $z_j$, and $W_i(z)$ is
the window function (see \cite{Majerotto:2015bra} for more details).
The tomographic overall radial distribution function of galaxies for a Euclid-like
photometric survey is \cite{Amendola:2012ys}:
\begin{equation}
D(z) = z^2\exp\left[-\left(z/z_0\right)^{1.5}\right]\,,
\end{equation}
with $z_0 = z_\text{mean}/1.412$ and mean redshift $z_\text{mean} = 0.9$; the number density
is $d = 35$ galaxies per arcmin$^2$.
Moreover we consider a survey up to $z_\text{max}=3$ divided into 10 bins each containing the same
number of galaxies.
While tomography in general greatly reduces statistical errors, the actual choice of binning
does not seriously affect the results, although in principle there is room for
optimisation \cite{Schaefer:2011dx}.
The Fisher matrix for weak lensing is defined as:
\begin{equation}
F_{\alpha\beta} = f_\text{sky}\sum_\ell\frac{(2\ell+1)\Delta \ell}{2}\frac{\partial P_{i\,j}}{\partial\theta_\alpha}C_{j\,k}^{-1}\frac{\partial P_{k\,m}}{\partial\theta_\beta}C_{m\,i}^{-1} \,,
\end{equation}
where $\Delta \ell$ is the step in multipoles, for which we choose 100 steps equally spaced in logarithmic scale, $\theta_\alpha$ are the cosmological parameters, and:
\begin{equation}
C_{j\,k} = P_{j\,k}+\delta_{j\,k}\langle\gamma_\text{int}^2\rangle\,n_j^{-1} \,,
\end{equation}
where $\gamma_\text{int}$ is the rms intrinsic shear, which is assumed to be
$\langle\gamma_\text{int}^2\rangle^{1/2} = 0.22$.
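The Fisher sum above can be implemented schematically as follows; this is a sketch in which the fiducial spectra $P_{ij}(\ell)$ and their parameter derivatives are illustrative arrays rather than outputs of a Boltzmann code:

```python
import numpy as np

def wl_fisher(ells, dell, p_fid, dp, n_gal, f_sky=0.36, gamma_int=0.22):
    """F_ab = f_sky sum_l (2l+1) dl/2 Tr[dP_a C^-1 dP_b C^-1].
    p_fid: (nl, nbin, nbin) fiducial convergence spectra;
    dp:    (npar, nl, nbin, nbin) derivatives w.r.t. the parameters;
    n_gal: (nbin,) galaxies per steradian in each bin."""
    noise = np.diag(gamma_int**2 / n_gal)        # shot noise, diagonal in bins
    npar = dp.shape[0]
    fisher = np.zeros((npar, npar))
    for il, ell in enumerate(ells):
        cinv = np.linalg.inv(p_fid[il] + noise)  # C_jk = P_jk + d_jk g^2/n_j
        w = f_sky * (2 * ell + 1) * dell[il] / 2.0
        for a in range(npar):
            ma = dp[a, il] @ cinv
            for b in range(npar):
                fisher[a, b] += w * np.trace(ma @ dp[b, il] @ cinv)
    return fisher

# illustrative input: 2 bins, 100 logarithmic multipole steps, toy spectra
ells = np.logspace(1, 3.5, 100)
dell = np.gradient(ells)
base = np.array([[2.0, 0.5], [0.5, 1.0]])
p_fid = 1e-9 * base[None, :, :] * (100.0 / ells)[:, None, None] ** 2
dp = np.stack([0.1 * p_fid, 0.05 * p_fid * np.eye(2)])
n_gal = 3600 * 35 * (180 / np.pi) ** 2 * np.array([0.5, 0.5])
F = wl_fisher(ells, dell, p_fid, dp, n_gal)
```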
The number of galaxies per steradian in each bin is defined as:
\begin{equation}
n_j = 3600\,d\left(\frac{180}{\pi}\right)^2\hat{n}_j \,,
\end{equation}
where $d$ is the number of galaxies per square arcminute and $\hat{n}_j$ is the fraction of sources
that belong to the $j$-th bin.
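The construction of the equi-populated redshift bins and of $n_j$ can be sketched as follows (a minimal numerical illustration of the recipe above):

```python
import numpy as np

z_mean, z_max, n_bins = 0.9, 3.0, 10
d_arcmin2 = 35.0                       # galaxies per square arcminute
z0 = z_mean / 1.412

z = np.linspace(1e-4, z_max, 5000)
dist = z**2 * np.exp(-(z / z0) ** 1.5)  # unnormalized D(z)

# cumulative fraction of galaxies up to z, normalized on [0, z_max]
cdf = np.cumsum(dist)
cdf /= cdf[-1]
# bin edges such that each of the 10 bins contains the same number of galaxies
edges = np.interp(np.linspace(0.0, 1.0, n_bins + 1), cdf, z)

# fraction of sources per bin and galaxies per steradian, n_j = 3600 d (180/pi)^2 n_hat_j
n_hat = np.full(n_bins, 1.0 / n_bins)
n_j = 3600 * d_arcmin2 * (180 / np.pi) ** 2 * n_hat
```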
\section{Introduction}
\label{sec:intro}
Jordan-Brans-Dicke (JBD) theory of gravity \cite{Jordan:1949zz,Brans:1961sx} is among the
simplest extensions of general relativity (GR), in which the gravitational field is mediated by
a scalar field whose inverse plays the role of an effective gravitational constant which varies
in space and time.
JBD theory depends on just one additional parameter $\omega_\mathrm{BD}$, connected to the
post-Newtonian parameter $\gamma_\mathrm{PN} = (1+\omega_\mathrm{BD})/(2+\omega_\mathrm{BD})$
measuring the deviations from Einstein GR, which is recovered in the limit of
$\gamma_\mathrm{PN} \rightarrow 1$, i.e. $\omega_\mathrm{BD} \rightarrow + \infty$.
Observations on a wide range of scales constrain JBD theory around GR: the tightest limits,
$\gamma_\mathrm{PN} - 1 = (2.1 \pm 2.3) \times 10^{-5}$ (68\% CL) are obtained
from radar timing data by the Cassini spacecraft within our Solar System \cite{Bertotti:2003rm}.
Extended JBD (eJBD) theory of gravity with a potential term for the scalar field:
\begin{equation}
{\cal S} = \int d^4x \sqrt{-g}\, \left[\frac{1}{16 \pi}
\left( \phi R - \frac{\omega_\mathrm{BD}}{\phi} g^{\mu \nu} \partial_{\mu} \phi \partial_{\nu} \phi \right) - V(\phi)
+ \mathcal{L}_{\rm m} \right] \,,
\label{eJBD}
\end{equation}
include the simplest scalar-tensor models of dark energy in which the current acceleration
of the Universe is connected to a variation of the effective gravitational constant
\cite{Wetterich:1987fk,Uzan:1999ch,Perrotta:1999am,Bartolo:1999sq,Amendola:1999qq,Chiba:1999wt,Boisseau:2000pr}
(see also Ref.~\cite{Cooper:1982du}).
These models are also known as extended quintessence \cite{Perrotta:1999am,Chiba:1999wt}.
The phenomenology in the eJBD theory of gravity is much richer than in Einstein Gravity (EG), since cosmological
variation of the effective gravitational constant could lead to different predictions not only
for cosmic microwave background (CMB) anisotropy \cite{Chen:1999qh} and the growth of structures, but also for
Big Bang Nucleosynthesis (BBN) \cite{Copi:2003xd,Bambi:2005fi}.
Testing the viability of the cosmology in eJBD theory is fully complementary to the Solar System
constraints just presented.
For models described by Eq.~\eqref{eJBD} with a quadratic potential \cite{Cooper:1982du,Wetterich:1987fk,Finelli:2007wb},
the recent $Planck$ 2015 \cite{Aghanim:2015xee,Ade:2015zua} and baryonic acoustic oscillations
(BAO) data \cite{Beutler:2011hx,Ross:2014qpa,Anderson:2013zyy} constrain
$1 - \gamma_\mathrm{PN} < 0.003$ (95\% CL) \cite{Ballardini:2016cvy}
(see also \cite{Ooba:2016slp} for constraints obtained by relaxing the hypothesis of flat spatial
sections and \cite{Avilez:2013dxa,Li:2013nwa,Umilta:2015cta} for the constraints based on the
$Planck$ 2013 data).
These cosmological constraints on $\gamma_{\rm PN}$ are approximately two orders of magnitude looser
than the Solar System constraints.
In this paper we investigate the capabilities of future CMB and large scale structures (LSS)
observations to further improve the cosmological constraints on the post-Newtonian parameter
$\gamma_{\rm PN}$ within the eJBD theory, as also forecasted in \cite{Acquaviva:2007mm,Alonso:2016suf}.
We expect that upcoming galaxy surveys such as
DESI \footnote{\href{http://desi.lbl.gov/}{http://desi.lbl.gov/}} \cite{Levi:2013gra},
Euclid \footnote{\href{http://sci.esa.int/euclid/}{http://sci.esa.int/euclid/}}
\cite{Laureijs:2011gra,Amendola:2012ys},
LSST \footnote{\href{http://www.lsst.org/}{http://www.lsst.org/}} \cite{Abell:2009aa},
SKA \footnote{\href{http://www.skatelescope.org/}{http://www.skatelescope.org/}}
\cite{Maartens:2015mra,Bacon:2018dui}, will help in improving the
constraints of structure formation on $\gamma_{\rm PN}$ for the eJBD theory.
As a representative example of what we will gain from upcoming galaxy surveys, we consider the
two main probes of Euclid, galaxy clustering (GC), and weak lensing (WL).
In addition, we will consider the role of possible future CMB polarization anisotropy observations,
as AdvACT \cite{Calabrese:2014gwa}, CORE \cite{Delabrouille:2017rct,DiValentino:2016foa,Finelli:2016cyd},
LiteBIRD \cite{Matsumura:2013aja,Errard:2015cxa}, and S4 \cite{Abazajian:2016yjj}, in further improving
on the $Planck$ measurements.
Our paper is organized as follows. After this introduction, we give a lightning review of
eJBD theory recast as Induced Gravity (IG) (by a redefinition of the scalar field with standard units and
a standard kinetic term) in Section~\ref{sec:two}.
In Sections~\ref{sec:three} and \ref{sec:four} we present the Fisher methodology for CMB and LSS
used for our science forecasts.
In Section~\ref{sec:five} we present our results and in Section~\ref{sec:conclusion} we draw our conclusions.
\section{Fisher approach for CMB anisotropy data}
\label{sec:three}
In this section, we start describing the formalism for our science forecasts.
Under the Gaussian assumption for signal and noise, the Fisher matrix for CMB
anisotropies in temperature and polarization
\cite{Knox:1995dq,Jungman:1995bz,Seljak:1996ti,Zaldarriaga:1996xe,Kamionkowski:1996ks} is:
\begin{equation}
\label{eqn:fisherCMB}
{\bf{\rm F}}_{\alpha\beta}^{\rm CMB} = f_{\rm sky}\sum_{\ell} \frac{(2 \ell+1) }{2}\sum_{\rm X,Y} \frac{\partial C_\ell^{\rm X}}{\partial\theta_\alpha}
\left({\rm Cov}_\ell^{-1}\right)_{\rm XY} \frac{\partial C_\ell^{\rm Y}}{\partial\theta_\beta} \,,
\end{equation}
where $C_\ell^{\rm X}$ is the CMB angular power spectrum in the $\ell^\text{th}$ multipole
for X,Y $\in ($TT$,\,$EE$,\,$TE$,\,\phi\phi,\,$T$\phi)$
\footnote{E$\phi$ has a negligible effect on the constraints, we do not consider its contribution.},
and $\theta_\alpha$ refers to the set of parameters considered in the analysis,
which are specified in Sec.~\ref{sec:five} together with their best-fit values.
The elements of the symmetric angular power spectrum covariance matrix $\text{Cov}_\ell$
at the $\ell^\text{th}$ multipole are:
\begin{align}
\left({\rm Cov}_\ell\right)_{\rm TTTT}
&= \left(\bar{C}_\ell^{\rm TT}\right)^2-2\frac{\left(\bar{C}_\ell^{\rm TE}\bar{C}_\ell^{\rm T\phi}\right)^2}{\bar{C}_\ell^{\rm EE}\bar{C}_\ell^{\rm \phi\phi}} \,,\\
\left({\rm Cov}_\ell\right)_{\rm EEEE}
&= \left(\bar{C}_\ell^{\rm EE}\right)^2 \,,\\
\left({\rm Cov}_\ell\right)_{\rm TETE}
&= \frac{\left(\bar{C}_\ell^{\rm TE}\right)^2+\bar{C}_\ell^{\rm TT}\bar{C}_\ell^{\rm EE}}{2}
-\frac{\bar{C}_\ell^{\rm EE}\left(\bar{C}_\ell^{\rm T\phi}\right)^2}{2\bar{C}_\ell^{\rm \phi\phi}} \,,\\
\left({\rm Cov}_\ell\right)_{\rm \phi\phi\phi\phi}
&= \left(\bar{C}_\ell^{\rm \phi\phi}\right)^2 \,,\\
\left({\rm Cov}_\ell\right)_{\rm T\phi T\phi}
&= \frac{\left(\bar{C}_\ell^{\rm T\phi}\right)^2+\bar{C}_\ell^{\rm TT}\bar{C}_\ell^{\phi\phi}}{2}
-\frac{\bar{C}_\ell^{\phi\phi}\left(\bar{C}_\ell^{\rm TE}\right)^2}{2\bar{C}_\ell^{\rm EE}} \,,\\
\left({\rm Cov}_\ell\right)_{\rm TTEE}
&= \left(\bar{C}_\ell^{\rm TE}\right)^2 \,,\\
\left({\rm Cov}_\ell\right)_{\rm TTTE}
&= \bar{C}_\ell^{\rm TT}\bar{C}_\ell^{\rm TE}-\frac{\bar{C}_\ell^{\rm TE}\left(\bar{C}_\ell^{\rm T\phi}\right)^2}{\bar{C}_\ell^{\phi\phi}} \,,\\
\left({\rm Cov}_\ell\right)_{\rm TT\phi\phi}
&= \left(\bar{C}_\ell^{\rm T\phi}\right)^2 \,,\\
\left({\rm Cov}_\ell\right)_{\rm TTT\phi}
&= \bar{C}_\ell^{\rm TT}\bar{C}_\ell^{\rm T\phi}-\frac{\bar{C}_\ell^{\rm T\phi}\left(\bar{C}_\ell^{\rm TE}\right)^2}{\bar{C}_\ell^{\rm EE}} \,,\\
\left({\rm Cov}_\ell\right)_{\rm EETE}
&= \bar{C}_\ell^{\rm EE}\bar{C}_\ell^{\rm TE} \,,\\
\left({\rm Cov}_\ell\right)_{\rm \phi\phi T\phi}
&= \bar{C}_\ell^{\phi\phi}\bar{C}_\ell^{\rm T\phi} \,,
\end{align}
where $\bar{C}_\ell^{\rm X} = C^{\rm X}_\ell + N^{\rm X}_\ell$ is the sum of the signal and the noise,
with $N^{\rm TE}_\ell,N^{\rm T\phi}_\ell=0$.
For the temperature and polarization angular power spectra, $N^{\rm X}_\ell=\sigma_{\rm X}\, b^{-2}_\ell$
is the isotropic noise deconvolved with the instrument beam, where $b_\ell^2$ is the beam window function,
assumed Gaussian, with $b_\ell = e^{-\ell (\ell + 1) \theta_{\rm FWHM}^2/16\ln 2}$ and $\theta_{\rm FWHM}$
the full width at half maximum (FWHM) of the beam in radians; $\sigma_{\rm X} = w_{\rm X}^{-1}$, where
$w_{\rm TT}$ and $w_{\rm EE}$ are the inverse squares of the detector noise levels on a steradian patch
for temperature and polarization, respectively. For multiple frequency channels, the noise is obtained
by inverse-variance combination of the single-channel noises \cite{Knox:1995dq}:
\begin{equation}
N^{\rm X}_\ell = \left[ \sum_{\rm channels} \frac{1}{N^{\rm X}_{\rm \ell,i}} \right]^{-1} \,.
\end{equation}
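A minimal sketch of this noise model (beam-deconvolved single-channel noise and the inverse-variance combination over channels); the channel sensitivities below are illustrative values, not a specific experiment:

```python
import numpy as np

ARCMIN = np.pi / (180.0 * 60.0)  # one arcmin in radians

def noise_cl(s_arcmin, fwhm_arcmin, ells):
    """Beam-deconvolved noise N_ell = sigma / b_ell^2 for one channel,
    with sigma = (s * arcmin)^2 in muK^2 sr and a Gaussian beam."""
    sigma = (s_arcmin * ARCMIN) ** 2
    theta = fwhm_arcmin * ARCMIN
    bl2 = np.exp(-ells * (ells + 1) * theta**2 / (8.0 * np.log(2.0)))
    return sigma / bl2

def combined_noise(channels, ells):
    """Inverse-variance combination: N_ell = [sum_i 1/N_ell,i]^-1."""
    inv = sum(1.0 / noise_cl(s, f, ells) for s, f in channels)
    return 1.0 / inv

ells = np.arange(2, 3001)
# illustrative channels: (sensitivity in muK-arcmin, beam FWHM in arcmin)
channels_T = [(1.5, 5.5), (2.0, 5.0)]
nl = combined_noise(channels_T, ells)
```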
We consider the minimum variance estimator for the noise of the
lensing potential by combining the TT, EE, BB, TE, TB, EB CMB estimators
calculated according to \cite{Hu:2001kj}.\\
In this paper, we consider four different cases as representative of current CMB
measurements and future concepts.
We study the predictions for a $Planck$-like experiment considering the specifications
of $f_{\rm sky} = 0.7$ and a multipole range from $\ell_{\rm min}=2$ up to $\ell_{\rm max} = 2500$
in Eq.~\eqref{eqn:fisherCMB}.
We use a single cosmological frequency channel at 143 GHz, assuming in-flight performance corresponding to
a sensitivity of $33\,\mu$K-arcmin in temperature and $70.2\,\mu$K-arcmin
in polarization, with a Gaussian beam width of 7.3 arcmin \cite{Adam:2015rua},
see CMB-1 in \cite{Ballardini:2016hpi}.
Since small-scale CMB anisotropy measurements will improve thanks to the Stage-3 generation of
ground-based CMB experiments, we consider AdvACT \cite{Calabrese:2014gwa,Allison:2015fac} with a
noise level of $1.4\,\mu$K-arcmin in temperature and $8\,\mu$K-arcmin in polarization, with a
Gaussian beam width of 1.4 arcmin and $f_{\rm sky} = 0.4$, over a multipole range
$30 \leq \ell \leq 3000$.
As concept for the next generation of CMB polarization experiments, we consider CORE and Stage-4
(hereafter S4).
For CORE, we consider six frequency channels between 130 and 220 GHz with noise sensitivities
of $1.5\,\mu$K-arcmin in temperature and $2\,\mu$K-arcmin in polarization, with a Gaussian beam
width of 5.5 arcmin \cite{Delabrouille:2017rct,DiValentino:2016foa,Finelli:2016cyd}.
We consider $\ell_{\rm max}=3000$ for the CORE configuration with a sky coverage of $f_{\rm sky} = 0.7$.
The ground-based S4 proposal will be able to map modes up to $\ell\sim 5000$.
Following \cite{Abazajian:2016yjj}, we consider for S4 a sensitivity
$\sigma_{\rm T}=\sigma_{\rm P}/\sqrt{2}=1\,\mu$K-arcmin with a resolution of $\theta_{\rm FWHM}=3$ arcmin
over $\sim40\%$ of the sky. Ground-based facilities are limited on large scales due to galactic
foreground contamination and, in addition, a contamination is expected on small scales in temperature.
For these reasons, we assume for S4 $\ell_{\rm min}=30$ and a different cut at high-$\ell$ of
$\ell_{\rm max}^{\rm T}=3000$ in temperature and $\ell_{\rm max}^{\rm P}=5000$ in polarization.
To complement at low multipoles, i.e. $2\leq \ell < 30$, we combine AdvACT with $Planck$, and
S4 with the Japanese CMB polarization space mission proposal LiteBIRD \cite{Matsumura:2013aja,Errard:2015cxa}.
For the estimate of the noise of the lensing potential we use the multipole range
$30 \leq \ell \leq 3000$.
\section{Dark Energy within the extended Jordan-Brans-Dicke theories}
\label{sec:two}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{background_IG.pdf}
\caption{Evolution of $\sigma/\sigma_0$ (top panel), $w_{\rm DE}$ (middle panel),
and $\Omega_i$ (bottom panel) as function of $z$ for different choices of $\gamma$
from $\gamma=10^{-4}$ to $\gamma=10^{-3}$ for $n_{\rm IG}=4$.
The value $\sigma_0$ of the scalar field at present is fixed consistently with the
Cavendish-type measurement of the gravitational constant $G = 6.67\times10^{-8}$ cm$^3$ g$^{-1}$ s$^{-2}$.}
\label{fig:back}
\end{figure}
In this section we review some general considerations of the late-time cosmology within the
eJBD theories.
We consider a field redefinition to recast the eJBD action in Eq.~\eqref{eJBD} into an
action for induced gravity (IG) with a standard kinetic term for a scalar field $\sigma$:
\begin{equation}
\label{eqn:action}
\mathcal{S} = \int d^4x \sqrt{-g}\left[ \frac{\gamma\sigma^2R}{2}
- \frac{g^{\mu\nu}}{2}\partial_\mu\sigma\partial_\nu\sigma - V(\sigma) + \mathcal{L}_\mathrm{m} \right] \,,
\end{equation}
where $\gamma = (4 \omega_\mathrm{BD})^{-1}$ and $\gamma\sigma^2=\phi$.
The cosmological evolution after inflation can be divided roughly into three stages and is summarized
in Fig.~\ref{fig:back}.
In the first stage relevant for our study, i.e. deep in the radiation era, $\sigma$ is almost frozen,
since it is effectively massless and non-relativistic matter is subdominant.
During the subsequent matter dominated era, $\sigma$ is driven by non-relativistic matter to higher
values, leading to an effective gravitational constant $G_{\rm N}(a)=1/\left(8\pi\gamma\sigma^2\right)$
which decreases in time.
The potential $V(\sigma)$ kicks in only at recent times determining the rate of the accelerated
expansion.
For a simple monomial potential $V(\sigma) \propto \sigma^{n_{\rm IG}}$ and
in absence of matter, exact power-law solutions for the scale factor $a (t) \sim t^p$ describing
an accelerated expansion exist for the class of monomial potentials with
$p = 2\frac{1+ (n_{\rm IG}+2) \gamma}{(n_{\rm IG}-4)(n_{\rm IG}-2) \gamma}$
\cite{Barrow:1990nv,Cerioni:2009kn}. A de Sitter solution for
the scale factor is found instead for $n_{\rm IG}=2 \,, 4$.
In Fig.~\ref{fig:back} we display different quantities as a function of redshift:
the scalar field normalized to its value at present (top panel), the equation-of-state parameter $w_\mathrm{DE}$
of the effective dark energy component (middle panel), and the critical densities corresponding
to EG with a gravitational constant given by the current value of the scalar field, i.e.
$8 \pi G_{\rm N} (z=0)=1/(\gamma \sigma_0^2)$ (bottom panel).
It is interesting to note from the $w_\mathrm{DE}$ displayed in Fig.~\ref{fig:back} that the effective
equation-of-state parameter for dark energy in these extended JBD models is similar to that of the so-called old
\cite{Doran:2006kp} and new \cite{Poulin:2018cxd} early dark energy models.
From now on we will restrict ourselves to two cases of monomial potentials,
i.e. $V(\sigma) \propto \sigma^{n_{\rm IG}}$ with $n_{\rm IG}=4$ or $n_{\rm IG}=0$,
suitable to reproduce a background cosmology in agreement with observations. We consider a scalar
field $\sigma=\sigma_i$ nearly at rest deep in the radiation era, since an initial non-vanishing time derivative
would be otherwise rapidly dissipated \cite{Finelli:2007wb}. The initial time derivative of the scalar field
is taken as $d \sigma/\mathrm{d} \tau = 3 \gamma \omega \sigma_i/2$ -
with $\omega = \frac{\rho_{m \, 0}}{\sqrt{3 \gamma \rho_{r \, 0}} (1+6 \gamma) \sigma_i}$ - satisfying the
equation of motion. We choose $\sigma_i$ by fixing the value $\sigma_0$ of the scalar field at present
consistently with the Cavendish-type measurement of the gravitational constant $G = 6.67\times10^{-8}$ cm$^3$ g$^{-1}$ s$^{-2}$,
i.e. $\gamma \sigma_0^2 = \frac{1}{8\pi G}\frac{1 + 8\gamma}{1 + 6\gamma}$.
We also consider adiabatic initial conditions for fluctuations \cite{Paoletti:2018xet}.
In this way, for a given potential, the models we study have just one parameter in addition to those of the $\Lambda$CDM model,
i.e. the coupling to the Ricci curvature $\gamma$.
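The normalization of the scalar field described above is purely algebraic and can be made explicit with a short numerical check (cgs units, $\gamma$ value illustrative):

```python
import numpy as np

G = 6.67e-8  # cm^3 g^-1 s^-2, Cavendish-type value used in the text

def sigma_0(gamma):
    """Present scalar-field value from gamma sigma_0^2 = (1+8g)/(8 pi G (1+6g))."""
    return np.sqrt((1.0 + 8.0 * gamma)
                   / (8.0 * np.pi * G * gamma * (1.0 + 6.0 * gamma)))

def g_newton(gamma, sigma):
    """Effective gravitational constant G_N = 1/(8 pi gamma sigma^2)."""
    return 1.0 / (8.0 * np.pi * gamma * sigma**2)

s0 = sigma_0(1e-3)
```

Note that with this normalization $G_{\rm N}(z=0) = G\,(1+6\gamma)/(1+8\gamma)$, slightly below the Cavendish value.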
The evolution of linear perturbations in this class of eJBD can be described with a set of dimensionless
functions $\alpha_{\rm M} = \mathrm{d} \ln \phi / \mathrm{d} \ln a$, $\alpha_{\rm B} = -\alpha_{\rm M}$,
$\alpha_{\rm K} = \omega_{\rm BD} \alpha_{\rm M}^2$, and $\alpha_{\rm T} = 0$
according to the parametrisation introduced in Ref.~\cite{Bellini:2014fua}.
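Given a background history $\phi(a)$, these functions follow directly; below is a small sketch with a toy (not solved) $\phi(a)$, assuming $\omega_{\rm BD}=1/(4\gamma)$ as in the IG action:

```python
import numpy as np

gamma = 1e-3
omega_bd = 1.0 / (4.0 * gamma)

# toy scalar-field history: phi = gamma sigma^2 slowly growing with a
# (illustrative only, not a solved background evolution)
lna = np.linspace(-6.0, 0.0, 600)
phi = 1.0 + 0.02 * np.exp(lna)

alpha_M = np.gradient(np.log(phi), lna)   # d ln phi / d ln a
alpha_B = -alpha_M
alpha_K = omega_bd * alpha_M**2
alpha_T = np.zeros_like(alpha_M)
```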
$Planck$ 2015 temperature, polarization and lensing \cite{Aghanim:2015xee,Ade:2015zua}
constrain $\gamma < 0.0017$ at 95\% CL
and by combining with BAO data the 95\% CL upper bound tightens to $0.00075$
\cite{Ballardini:2016cvy}.
The cosmological variation of the effective gravitational strength between today and the
relativistic era is constrained to $|\delta G_N/G_N| < 0.039$ at 95\% CL
\cite{Ballardini:2016cvy}.
Such eJBD models predict a value for the Hubble parameter larger than in $\Lambda$CDM, because of
a degeneracy between $\gamma$ and $H_0$. This effect can be easily understood by interpreting the
larger value of the effective gravitational constant in the past as a larger
number of relativistic degrees of freedom.
Constraints on $\gamma$ and $\delta G_N/G_N$ based on current CMB and BAO data do not depend significantly on the index of the monomial potential, but cosmological bounds on $\dot G_N/G_N$
do \cite{Ballardini:2016cvy}.
There is still cosmological information on eJBD models to extract from the CMB pattern
beyond $Planck$.
Fig.~\ref{fig:Cl-relative} shows the residuals of the lensed TT and EE CMB angular power
spectra as functions of the multipole $\ell$, compared with the sample variance for
a sky fraction of $f_{\rm sky}=0.7$ and $f_{\rm sky}=0.4$.
Note in particular the promise of the E-mode polarization spectrum to constrain $\gamma$:
these residuals show the room for improvement on $\gamma$ expected from CMB temperature
and polarization power spectra.
\begin{figure}
\includegraphics[width=.8\textwidth]{cl_table_variance.png}
\caption{Relative change in the CMB angular power spectra induced by different values
of the coupling parameter from $\gamma=10^{-4}$ to $\gamma=10^{-3}$.
The black dashed (dotted) line refers to the noise spectrum of a cosmic-variance
limited experiment with a sky fraction of $f_{\rm sky}=0.7\,(0.4)$.}
\label{fig:Cl-relative}
\end{figure}
\section{Introduction}
Finding the gravitational solutions describing branes intersecting other branes, or branes ending
on other branes, is a challenging problem, which only has a solution in some very special cases.
In addition to its intrinsic interest, this problem is also interesting in the context of the
duality between gravitational theories and quantum field theories, since many quantum field
theories have interesting descriptions using branes ending on other branes (following \cite{Ganor,Hanany:1996ie}), and finding the corresponding gravitational solutions would enable (when they are
weakly coupled and curved) studying the corresponding quantum field theories at strong coupling.
The gravitational solutions for D3-branes intersecting 5-branes along $2+1$ dimensions, in the near-horizon limit of the D3-branes, and in configurations that break half of the supersymmetry of the D3-branes, were found a few years ago in \cite{D'Hoker:2007xy,D'Hoker:2007xz}. In fact, these authors constructed
all solutions that have the same symmetries as this near-horizon limit, whose symmetry algebra is the $2+1$ dimensional ${\cal N}=4$ superconformal algebra $OSp(4|4)$ (with sixteen supercharges). They did this by
finding the general local solution to the BPS equations, and then analyzing an ansatz for the global structure of the solutions. Some of the solutions found in \cite{D'Hoker:2007xy,D'Hoker:2007xz} were conjectured to describe D3-branes intersecting 5-branes, and thus to be dual to the $3+1$ dimensional ${\cal N}=4$ supersymmetric Yang-Mills (SYM) theory with a $2+1$ dimensional defect corresponding to the intersection region (the precise description of this defect depends on the identity of the 5-branes involved). We review the general solutions of \cite{D'Hoker:2007xy,D'Hoker:2007xz} in section \ref{dhetalreview},
and the specific solutions corresponding to D3-branes intersecting 5-branes in section \ref{sec:5brane} (these solutions were also analyzed in detail very recently in \cite{Bachas:2011xa}, which has some overlap with our results in this section).
Configurations of D3-branes ending on 5-branes have the same symmetries as D3-branes intersecting 5-branes, so it is interesting to ask whether they are also included in the solutions classified by \cite{D'Hoker:2007xy,D'Hoker:2007xz}. In section \ref{sec:endingbranes} we answer this question in the affirmative, and show that we can obtain these solutions by a limit of the solutions of D3-branes intersecting 5-branes, in which the number of D3-branes on one side of the 5-branes is taken to zero. From the field theory point of view, the possible boundary conditions for the ${\cal N}=4$ SYM theory that preserve half of the supersymmetry were classified (partly using brane configurations) in \cite{Gaiotto:2008sa,Gaiotto:2008ak}. The solutions we find are dual to the ${\cal N}=4$ SYM theory living on ${\mathbb R}^{2,1}$ times a half-line with such boundary conditions, and we precisely identify the parameters of our solutions with the possible boundary conditions for D3-branes ending on 5-branes, classified in \cite{Gaiotto:2008sa,Gaiotto:2008ak}. Our solutions thus enable for the first time the study of the ${\cal N}=4$ SYM theory living on a half-line with such boundary conditions, in the range of parameters where the gravity solutions are weakly coupled and weakly curved; as usual this range corresponds to taking the large $N$ limit of the ${\cal N}=4$ theory with $SU(N)$ gauge group, with large and fixed 't Hooft coupling $\lambda = g_{YM}^2 N$. This is a necessary first step towards trying to find gravity solutions for more complicated brane configurations, that would involve either D3-branes ending on 5-branes at both ends (for $2+1$ dimensional gauge theories) or D4-branes ending on NS5-branes (for $3+1$ dimensional gauge theories); in both cases the near-horizon limit of the D-branes would not
have a conformal symmetry, so the number of supercharges is at most eight, making it much harder to find the corresponding solutions.
Since ${\mathbb R}^{2,1}$ times a half-line is conformally related to four dimensional anti-de Sitter space ($AdS_4$), our solutions also provide the gravitational dual for the ${\cal N}=4$ SYM theory on $AdS_4$ with the same boundary conditions (at the boundary of $AdS_4$). This problem was recently discussed in \cite{Aharony:2010ay}, where
the gravitational (string theory) duals were found only for boundary conditions related to orbifold or orientifold planes; our analysis completes the discussion of \cite{Aharony:2010ay} by providing the gravitational duals for the theory with more general boundary conditions, coming from D3-branes ending on 5-branes.
Given the solutions for D3-branes ending on 5-branes, we can now perform
computations in the corresponding field theories at large $N$ and large 't Hooft coupling. One thing
which would be nice to compute is the spectrum of these theories (the spectrum of anomalous
dimensions of various operators associated with the defect), but this is a difficult computation,
that was recently discussed (but not completed) in \cite{Bachas:2011xa} for the case of intersecting branes (our solutions are a special case of this). As discussed in \cite{Bachas:2011xa}, this computation is particularly interesting because of the conjecture in \cite{Karch:2001cw} that solutions of this type could exhibit
a ``locally localized graviton'', but we postpone further discussions of this issue to the future. In
section \ref{sec:one-point} we perform the simplest computation in these theories -- that of one-point
functions of chiral operators of ${\cal N}=4$ SYM in the presence of the boundary. On the half-line
with coordinate $z \geq 0$ such one-point functions go like a negative power of $z$ (depending on the
dimension of the operator). We compute the one-point functions of the three lowest-dimension chiral
operators that have non-zero one-point functions, for the general solutions of D3-branes ending
on 5-branes. For the special case of D3-branes ending on D5-branes, which has a weakly coupled limit,
we can compute the one-point
functions also at small 't Hooft coupling, and we find that they differ from the strong coupling
results (the dependence on the specific choice of boundary condition is not the same at weak coupling
as at strong coupling). We end in section \ref{summary} with a summary of our results and a discussion of
remaining open questions.
\section{Review of type IIB solutions with $OSp(4|4)$ symmetry}
\label{dhetalreview}
\subsection{Type IIB supergravity with $ SO(2,3) \times SO(3) \times SO(3) $ symmetry}
In \cite{D'Hoker:2007xy,D'Hoker:2007xz}, all type IIB supergravity
solutions invariant under the 3d ${\cal N}=4$ superconformal group $OSp(4|4)$
(which contains 16 supercharges and the bosonic symmetry $SO(2,3)\times SO(3)\times SO(3)$)
were obtained by D'Hoker, Estes and Gutperle. The main motivation for this
work was to find AdS/CFT duals of 4d $\mathcal{N}=4$ super Yang-Mills theory
with a 3d defect and maximal supersymmetry, but these are also precisely the symmetries
of D3-branes ending on 5-branes, in the near-horizon limit of the D3-branes.
The symmetries above suggest a space-time manifold which is a warped product
\begin{equation}
AdS_{4}\times S_{1}^{2}\times S_{2}^{2}\times\Sigma,\label{eq:space-time manifold}
\end{equation}
where $\Sigma$ is an orientable Riemann manifold over which the $AdS_{4}$
and the two spheres are warped.
We adopt the conventions of D'Hoker et al. (see \cite{D'Hoker:2007xy} and references therein).
We join
the real NS-NS 3-form $H_{(3)} = d B_{(2)}$
and the real RR 3-form $F_{(3)} = d C_{(2)}$ into a complex 3-form field strength ${\tilde F}_{(3)}=d{\tilde B}_{(2)}$ according to
\begin{equation}\label{complex 3-form field strength}
{\tilde F}_{(3)}=H_{(3)}+i F_{(3)},\qquad\qquad {\tilde B}_{(2)} = B_{(2)} + i C_{(2)},
\end{equation}
and the self-dual 5-form field strength is related to the real 4-form potential $C_{(4)}$ by
\begin{equation}\label{fiveformdef}
F_{(5)} = dC_{(4)}+\frac{i}{16}({\tilde B}_{(2)}\wedge\bar{\tilde F}_{(3)}-\bar{\tilde B}_{(2)}\wedge {\tilde F}_{(3)}).
\end{equation}
We write a general ansatz for the bosonic fields
that is compatible with the symmetries. For the metric, allowing for
warp factors of the $AdS_{4}$ and $S_{1,2}^{2}$ over the Riemann surface, we have
\begin{equation}
ds^{2}=f_{4}^{2}ds_{AdS_{4}}^{2}+f_{1}^{2}ds_{S_{1}^{2}}^{2}+f_{2}^{2}ds_{S_{2}^{2}}^{2}+ds_{\Sigma}^{2}.
\label{eq:space-time metric ansatz}
\end{equation}
Here $f_{1,2,4}$ are real functions on $\Sigma$.
We can always choose complex coordinates $w$ on $\Sigma$ such that
\begin{equation}
ds_{\Sigma}^{2}=4\rho^{2}\left|dw\right|^{2},
\end{equation}
for some real function $\rho$ on $\Sigma$. As in \cite{D'Hoker:2007xy}, hatted vielbeins $\hat{e}$ will refer
to orthonormal frames in the product space \eqref{eq:space-time manifold},
while unhatted ones will refer to vielbeins of the full space-time geometry
\eqref{eq:space-time metric ansatz}. Thus
\begin{eqnarray}
e^{m} & = & f_{4}\hat{e}^{m},\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, m=0,1,2,3,\nonumber \\
e^{i_{1}} & = & f_{1}\hat{e}^{i_{1}},\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, i_{1}=4,5, \label{vielbeins}\\
e^{i_{2}} &= & f_{2}\hat{e}^{i_{2}},\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, i_{2}=6,7. \nonumber
\end{eqnarray}
The most general non-trivial 2-form potential that is compatible with the symmetries is
\begin{equation}
{\tilde B}_{(2)}=b_{1}\hat{e}^{45}+ib_{2}\hat{e}^{67},
\label{eq:ansatz for 2-form potential}
\end{equation}
where $b_{1,2}$ are complex functions on $\Sigma$. Similarly, for
the self-dual 5-form
\begin{equation}
F_{(5)}=f_{a}(-e^{0123a}+\varepsilon^{ac}\delta_{cb}e^{4567b}),
\end{equation}
where $a,b=8,9$ are directions along $\Sigma$, and $f_{a}$ are real
functions on $\Sigma$. It is convenient to define
\begin{equation}
e^{z}=\frac{1}{2}(e^{8}+ie^{9})=\rho dw,
\end{equation}
and then using \eqref{vielbeins} we can write
\begin{equation}
F_{(5)}=-2{\rm Re}(f_{z}\rho dw)f_{4}^{4}\hat{e}^{0123}+2{\rm Im}(f_{z}\rho dw)f_{1}^{2}f_{2}^{2}\hat{e}^{4567}.
\end{equation}
\subsection{Local solutions of the BPS equations}
Solutions of the supergravity equations of motion preserving 16 supersymmetries and respecting the bosonic symmetry discussed above correspond to configurations for which the BPS equations, obtained by requiring the vanishing of the fermionic field variations, admit 16 independent solutions. Remarkably, D'Hoker et al. were able to reduce the BPS equations to an integrable system and solve it in closed form.
The general solution is given in terms of two real harmonic functions $ h_1 $ and $ h_2 $ on the Riemann surface $ \Sigma $. D'Hoker et al. showed that the $ SL(2,\mathbb{R}) $ symmetry of type IIB supergravity can be used to map any such solution to one where the axion $C_{(0)}$ vanishes, and $b_1$ and $b_2$ are real, so we will assume this from here on for simplicity. That fixes the $SL(2,\mathbb{R})$ symmetry up to the discrete S-duality transformation which reverses the sign of the dilaton and exchanges the two 2-forms; this acts on the solutions by exchanging $h_1$ with $h_2$ and exchanging the two two-spheres.
It is convenient to express the solutions in terms of the following four real functions
\begin{align}
W & \equiv \partial_{w}h_{1}\partial_{\bar{w}}h_{2}+\partial_{w}h_{2}\partial_{\bar{w}}h_{1}, &
X & \equiv i( \partial_{w}h_{1}\partial_{\bar{w}}h_{2}- \partial_{w}h_{2}\partial_{\bar{w}}h_{1}),\\
N_{1} & \equiv 2h_{1}h_{2}\left|\partial_{w}h_{1}\right|^{2}-h_{1}^{2}W, &
N_{2} & \equiv 2h_{1}h_{2}\left|\partial_{w}h_{2}\right|^{2}-h_{2}^{2}W.
\end{align}
The local solutions are as follows \cite{D'Hoker:2007xy,D'Hoker:2007xz}. The dilaton is given by
\begin{equation}
e^{2\Phi} = \frac{N_2}{N_1}.
\label{eq:local solutions - dilaton}
\end{equation}
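As a quick consistency check, exchanging $h_1 \leftrightarrow h_2$ exchanges $N_1 \leftrightarrow N_2$ (since $W$ is symmetric under this exchange), so
\begin{equation*}
e^{2\Phi}=\frac{N_2}{N_1} \;\longrightarrow\; \frac{N_1}{N_2}=e^{-2\Phi},
\end{equation*}
reversing the sign of the dilaton as expected from the discrete S-duality transformation discussed above.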
$W$ obeys $W \leq 0$. The metric factors are
\begin{equation}
\rho^{2} = e^{-\frac{1}{2}\Phi}\frac{\sqrt{N_{2}\left|W\right|}}{h_{1}h_{2}},\quad
f_{1}^{2} = 2e^{\frac{1}{2}\Phi}h_{1}^{2}\sqrt{\frac{\left|W\right|}{N_{1}}},\quad
f_{2}^{2} = 2e^{-\frac{1}{2}\Phi}h_{2}^{2}\sqrt{\frac{\left|W\right|}{N_{2}}},\quad
f_{4}^{2} = 2e^{-\frac{1}{2}\Phi}\sqrt{\frac{N_{2}}{\left|W\right|}},
\label{eq:local solutions - metric factors}
\end{equation}
and the 2-form potentials are
\begin{equation} \label{eq:2form potentials}
b_1= +2\tilde{h}_2 + 2h_1^2 h_2 \frac{X}{N_1},\qquad
b_2 = -2 \tilde{h}_1 + 2h_1h_2^2 \frac{X}{N_2},
\end{equation}
where $ \tilde{h}_1 $ and $ \tilde{h}_2 $ are the harmonic duals of $ h_1 $ and $ h_2 $, respectively\footnote{That
is, each pair combines to a holomorphic function $ \mathcal{A}(w) = \tilde{h}_1 + i h_1$ and $ \mathcal{B}(w) = h_2 - i \tilde{h}_2$.}.
Note that
$\tilde h_{1,2}$ (and therefore $b_{2,1}$, which can be thought of as the integrals of the two 2-form fields over the two 2-cycles on which they are non-vanishing) are defined up to additive real
constants. These constants do not affect the 3-form field strengths, but they affect 5-form computations as
we will discuss in detail below. Such a freedom is to
be expected, since in string theory there are large gauge transformations that shift the integral
of 2-form fields over 2-cycles by an integer, and in supergravity we are not sensitive to this
quantization so we have a freedom to perform any shifts.
Finally, the 5-form field strength $F_{(5)}$, for which we will only need the components along $\hat{e}^{4567}$, is given by
\begin{equation}
2{\rm Im}(f_{z}\rho dw)f_{1}^{2}f_{2}^{2}=2{\rm Im}\left(\left[ 3 i(h_{1}\partial_{w}h_{2}-h_{2}\partial_{w}h_{1}) + \partial_{w}(h_{1}h_{2}\frac{X}{W}) \right]\frac{f_{1}^{2}f_{2}^{2}}{f_{4}^{4}} dw \right).
\end{equation}
\subsection{An ansatz for $ h_{1,2} $ using genus $g$ surfaces} \label{subsec:genus g ansatz}
For the solutions above to be regular solutions of type IIB supergravity, we must impose some additional global restrictions on $ h_{1,2} $ as functions on $ \Sigma $. The conditions for such non-singular solutions, whose boundaries are locally $AdS_5\times S^5$, were investigated in \cite{D'Hoker:2007xz}, and can be solved by constructing $ h_{1,2} $ as functions on a hyper-elliptic Riemann surface of genus $ g $.
The genus $g$ surface can be taken to be the lower half-plane
\begin{equation}
\Sigma=\{u \in \mathbb{C}\ |\ {\rm Im}(u) \le 0 \},
\end{equation}
characterized by the algebraic equation
\begin{equation}
s^2(u)=(u-e_1)\prod_{k=1}^g(u-e_{2k})(u-e_{2k+1}), \quad e_{i} \in \mathbb R.
\label{eq:algebraic equation of the genus g serfuce}
\end{equation}
Here the $SL(2,\mathbb R)$ symmetry of the lower half-plane\footnote{That is, the conformal Killing group of the lower half-plane, which is not to be confused with the $SL(2,\mathbb{R})$ symmetry of type IIB supergravity, also mentioned before.} was used to fix one of the branch points of $s$ to $e_{2g+2}=\infty$.
The boundary of the Riemann surface $ \partial \Sigma $, which is the real line, will not be a boundary of the full 10d geometry, as we review below.
The ansatz for the holomorphic differentials $ \partial h_{1,2} $ on the surface above is given by
\begin{equation} \label{eq:holo.diff}
\partial h_1 = -i \frac{P(u)Q_1(u)}{s^3(u)}du,\qquad\qquad
\partial h_2 = - \frac{P(u)Q_2(u)}{s^3(u)}du,
\end{equation}
where $P(u)$ is a real polynomial of degree $2g$ whose zeros come in $g$ complex-conjugate pairs, and $Q_{1,2}(u)$ are real polynomials of degree $g+1$ with real zeros,
\begin{equation}
\begin{split}
P(u)= & \prod_{a=1}^g(u-u_a)(u-\bar u_a), \quad {\rm Im} (u_a) \le 0,\\
Q_1(u)= & \prod_{b=1}^{g+1} (u-\alpha_b), \quad \alpha_{g+1}<\alpha_{g}<...<\alpha_{2}<\alpha_{1} \in \mathbb R,\\
Q_2(u)= & \prod_{b=1}^{g+1} (u-\beta_b), \quad \beta_{g+1}<\beta_{g}<...<\beta_{2}<\beta_{1} \in \mathbb R.\\
\end{split}
\end{equation}
By construction, the holomorphic differentials $\partial h_1$ and $\partial h_2$ have branch points at $u=e_1,\cdots,e_{2g+1}$ and at $e_{2g+2}=\infty$.
It was shown in \cite{D'Hoker:2007xz} that regularity of the solution requires the following ordering for the real roots and branch points: generically we must have\footnote{Less generic cases, when some points coincide, will be discussed below.}
\begin{equation} \label{eq:parameters}
\alpha_{g+1}<e_{2g+1}<\beta_{g+1}<e_{2g}<...<\alpha_{b}<e_{2b-1}<\beta_b<...<e_2<\alpha_1<e_1<\beta_1.
\end{equation}
In the next subsection we will see that the $\{u_a\}$ may be determined in terms of $\{\alpha_b\}$, $\{\beta_b\}$ and $\{e_i\}$. The generic
genus $g$ solution is thus parameterized by $4g+6$ real parameters: $(2g+1)-2$ moduli of $\Sigma$ (we can parameterize them by the values of the $e_i$, after fixing one of them to infinity and two more to some other fixed values by conformal transformations of the half-plane), $2g+2$ real zeros $\{\alpha_b\}$ and $\{\beta_b\}$ of the holomorphic differentials $\partial h_1$ and $\partial h_2$, one overall scale of the dilaton (above we arbitrarily fixed the string coupling to be one at $u=\infty$, but we can always shift the dilaton by a constant), one for the overall scale of the 10-dimensional metric (which again was arbitrarily fixed by our normalization of the various polynomials above), and three coordinates of the global $SL(2,\mathbb{R})$ group of type IIB supergravity that rotate the solutions we wrote to general solutions with a non-vanishing axion field.
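Explicitly, the counting of parameters is
\begin{equation*}
\underbrace{(2g+1)-2}_{\text{moduli of } \Sigma} \;+\; \underbrace{(2g+2)}_{\{\alpha_b\},\,\{\beta_b\}} \;+\; \underbrace{1}_{\text{dilaton shift}} \;+\; \underbrace{1}_{\text{overall scale}} \;+\; \underbrace{3}_{SL(2,\mathbb{R})} \;=\; 4g+6.
\end{equation*}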
\subsection{Asymptotic $ AdS_5 \times S^5 $ regions}
We may understand the topology and geometry of this general solution in three steps. First, we explain how $ S^2_1 $ and $ S^2_2 $ are fibred over $ \partial \Sigma $, and why the points of $ \partial \Sigma $ (except the branch points $\{e_i\}$) are actually regular interior points of the full 10d geometry. The second step is to understand what the solution looks like near each of the branch points. Finally, we will describe the boundary geometry and its interpretation.
For the first step, note that for generic points on $ \partial \Sigma $ not to be a boundary of the full 10d geometry, we must demand that the radius of one (and only one) of the spheres shrinks to zero there. That is, generically $ f_1^2=0 $ or $ f_2^2=0 $ (but not both) on $ \partial \Sigma $. This in turn can be shown to imply that either $ h_1 $ or $ h_2 $ should vanish at every point on $ \partial \Sigma $.
It can be seen from \eqref{eq:holo.diff} and \eqref{eq:algebraic equation of the genus g serfuce} that $ \partial h_{1,2} $ alternate at each branch point $e_i$ between taking real and imaginary values on $ \partial \Sigma $. This means that $ h_{1,2} $ satisfy alternating Neumann and Dirichlet boundary conditions along the real line\footnote{This fact can be made transparent by choosing real coordinates $u={\tilde x}+i{\tilde y}$, such that $\partial_u h_j = \frac{1}{2}(\partial_{\tilde x} h_j - i \partial_{\tilde y} h_j).$ }; whenever a branch point $e_i$ on $ \partial \Sigma $ is crossed, the type of boundary condition changes. Moreover, $ h_1 $ and $ h_2 $ have opposite boundary conditions; $ h_1 $ has Neumann boundary conditions whenever $ h_2 $ has Dirichlet, and vice versa (see figure \ref{fig:u-plane for the general solution}). The discussion in the previous paragraph implies in addition that whenever $h_1$ ($h_2$) has a Dirichlet boundary condition, this must take the form $h_1=0$ ($h_2=0$). This is not directly obvious from the ansatz \eqref{eq:holo.diff}; requiring that the values of $h_i$ are the same on each interval where they are supposed to vanish gives $2g$ constraints on the solution, of the form
\begin{equation} \label{eq:constr}
\begin{split}
\int_{e_{2i}}^{e_{2i-1}} (\partial_u h_1 du + \partial_{\bar u} h_1 d{\bar u})=0, \quad\quad \int_{e_{2i+1}}^{e_{2i}} (\partial_u h_2 du + \partial_{\bar u} h_2 d{\bar u})=0; \quad i=1,\cdots,g,
\end{split}
\end{equation}
which can be used to fix the values of $\{u_a\}$.
\begin{figure}[h!]
\centering
\includegraphics[scale=1]{u-plane2.pdf}
\caption{In the hyper-elliptic ansatz, $ \Sigma $ is the lower half of the complex plane. The dots on $ \partial \Sigma $ represent the branch points. On each segment connecting two branch points one (and only one) of the spheres has a vanishing radius. Note that generic points on the real axis $ \partial \Sigma$ are not a boundary of the 10d space-time. }
\label{fig:u-plane for the general solution}
\end{figure}
The discussion above tells us what cycles we have in the full 10d geometry. Curves whose end points are on $ \partial \Sigma $ and are separated by at least one branch point lead to a non-trivial cycle. For example, in figure \ref{fig:u-plane for the general solution}, consider the curve joining the points $ B $ and $ C $, which we may take to be parameterized by $ y \in [0,\pi/2] $. At the point $ B $ ($ y=\pi/2 $) $ S_1^2 $ has a vanishing radius, while at the point $ C $ ($ y=0 $) $ S_2^2 $ does. Hence, this gives a 5-cycle in the full geometry. We will indeed see below that near each branch point the compact part of the geometry will have the topology of $ S^5 $. By similar arguments, the curve joining $ A $ and $ D $, jumping over two branch points, gives a 3-cycle, since the same sphere ($ S_1^2 $) vanishes on both ends of the curve.
Next we want to describe how the solution looks near the branch points $\{e_i\}$. We will show that the (non-compact)
Riemann surface $\Sigma$ develops a semi-infinite spike there, corresponding to an
asymptotically $AdS_5\times S^5$ region. Choosing coordinates $v=u-e_i$ ($v=-1/u$ for the branch point at $\infty$), the real harmonic functions $h_{1,2}$ near a branch point $v=0$ assume the form
\begin{equation}\label{leadingh12}
\begin{split}
h_1 = & 2 i \left( \gamma_1^i \frac{1}{\sqrt{v}} - \delta_1^i \sqrt{v} \right) + O(v^{3/2}) + c.c.,\\
h_2 = & 2 \left( \gamma_2^i \frac{1}{\sqrt{v}} - \delta_2^i \sqrt{v} \right) + O(v^{3/2}) + c.c.,
\end{split}
\end{equation}
where the constants are
\begin{equation}
\label{eq:gamma_1,2 for general genus solution}
\gamma_1^i = \frac{P(e_i)Q_1(e_i)}{\Pi_{j \neq i} (e_i-e_j)^{3/2}},\qquad\qquad
\gamma_2^i = \frac{P(e_i)Q_2(e_i)}{\Pi_{j \neq i} (e_i-e_j)^{3/2}},
\end{equation}
and
\begin{equation}
\begin{split}\label{eq:delta_1,2 for general genus solution}
\delta_1^i = & \gamma_1^i \left [ \sum_{k=1}^g \left( \frac{1}{e_i-u_k}+\frac{1}{e_i-\bar{u}_k} \right) + \sum_{k=1}^{g+1} \frac{1}{e_i-\alpha_k} - \frac{3}{2} \sum_{k \neq i}^{2g+1} \frac{1}{e_i-e_k} \right],\\
\delta_2^i = & \gamma_2^i \left[ \sum_{k=1}^g \left( \frac{1}{e_i-u_k}+\frac{1}{e_i-\bar{u}_k} \right) + \sum_{k=1}^{g+1} \frac{1}{e_i-\beta_k} - \frac{3}{2} \sum_{k \neq i}^{2g+1} \frac{1}{e_i-e_k} \right],\\
\end{split}
\end{equation}
for $i=1,\ldots,2g+1$. Note that $\gamma_a^i$ and $\delta_a^i$ ($a=1,2$) alternate
between real and imaginary values as we go along the boundary. At $ u=\infty $ we have $\gamma_1^\infty=\gamma_2^\infty=i$, and
\begin{equation}
\begin{split}\label{eq:delta_1,2 at inf for general genus solution}
-i \delta_1^\infty = & \sum_{k=1}^g \left( u_k+\bar{u}_k \right) + \sum_{k=1}^{g+1} \alpha_k - \frac{3}{2} \sum_{k =1}^{2g+1} e_k,\\
-i \delta_2^\infty = & \sum_{k=1}^g \left( u_k+\bar{u}_k \right) + \sum_{k=1}^{g+1} \beta_k - \frac{3}{2} \sum_{k =1}^{2g+1} e_k.
\end{split}
\end{equation}
In terms of real coordinates $x$ and $y$ defined by $v=e^{-2(x+iy)}$, with $- \infty \le x \le \infty$ and $ 0 \le y \le \pi/2$, the asymptotic region $v \rightarrow 0$ maps to $x \rightarrow \infty$. In this limit, the dilaton behaves as
\begin{equation}
e^{\Phi}= \left|\frac{\gamma_2^i}{\gamma_1^i}\right|+O(e^{-4x}).
\end{equation}
The metric factors for the half of the branch points for which $\gamma_a^i$ and $\delta_a^i$ are real are, to leading order,
\begin{align}\label{metricads}
\rho^2 = & 2 \sqrt{2 |\Delta_i|} + O (e^{-2x}),&
f_1^2 = & 8 \sqrt{2 |\Delta_i|} \sin^2 (y) + O (e^{-2x}),\nonumber\\
f_2^2 = & 8 \sqrt{2 |\Delta_i|} \cos^2 (y) + O (e^{-2x}),&
f_4^2 = & 8 \frac{|\gamma_1^i| |\gamma_2^i|}{\sqrt{2 |\Delta_i|}} e^{2x} + O(1),
\end{align}
where $\Delta_i\equiv \gamma_1^i \delta_2^i -\gamma_2^i \delta_1^i$. Note that here $\rho^2$ is the coefficient in the metric of $4(dx^2+dy^2)$. The other branch points have similar expressions with $f_1^2$ and $f_2^2$ interchanged. The coordinate $ y $ here is precisely the one which traverses the circular curve from $ B $ to $ C $ depicted in figure \ref{fig:u-plane for the general solution}. We can see that, indeed, $ f_{1,2}^2 $ and the $dy^2$ term in the metric of $\Sigma$ combine in the correct way to give a 5-sphere close to the singular points. Likewise, for $ x \rightarrow \infty $, $AdS_4$ with the warp factor $f_4^2$ joins the $x$ coordinate to give an $ AdS_5 $ (up to corrections of order $e^{-2x}$),
\begin{equation}
ds^2_{AdS_5} = dx^2 + \cosh^2(x) ds^2_{AdS_4}.
\end{equation}
The geometry is thus asymptotically $AdS_5 \times S^5$.
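Explicitly, using the leading-order metric factors \eqref{metricads}, the compact directions combine (up to $O(e^{-2x})$ corrections) as
\begin{equation*}
4\rho^2 dy^2 + f_1^2\, ds^2_{S^2_1} + f_2^2\, ds^2_{S^2_2} = 8\sqrt{2|\Delta_i|}\left[ dy^2 + \sin^2(y)\, ds^2_{S^2_1} + \cos^2(y)\, ds^2_{S^2_2} \right],
\end{equation*}
which is the round metric on $S^5$ with radius squared $8\sqrt{2|\Delta_i|}$.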
The 5-form flux over the 5-sphere near the singular points can be shown to have the expected value for an
$AdS_5\times S^5$ solution with the metric \eqref{metricads} above\footnote{We adopt the conventions of \cite{Grana:2005jc} for the fluxes, with the only difference being a factor of 4 in our definition of the self-dual 5-form field strength \eqref{fiveformdef}. Note that at the classical level the fluxes are defined up to an overall normalization constant in the supergravity action, which is fixed once we fix the normalization of one of the fluxes.},
\begin{equation}
\lim_{x \rightarrow \infty} \int_{S^5} F_{(5)} = 2 (4 \pi)^3 \Delta_i, \quad \textrm{where} \quad N_i = 8 (4 \pi)^3 |\Delta_i| \in \mathbb{Z}
\label{5-form flux in the asymptotic region}
\end{equation}
is (after charge quantization is taken into account) the integer D3-brane charge. Using
the fact that $W \leq 0$, one can show that $\Delta_i$, and thus the
5-form flux, alternates between positive and negative values (depending on
whether the $\gamma_a^i$ and $\delta_a^i$ are real or imaginary). One can
also show that the total 5-form flux summed over all singular points
vanishes, as expected.
All the information extracted above regarding the $ AdS_5 \times S^5 $ geometry is captured in the two leading orders of $ h_{1,2} $ (or analogously of $ \partial h_{1,2} $) in the expansion around the branch points \eqref{leadingh12}. This will be a recurring theme in this paper. Below, we will see other types of singularities, giving rise to different geometries. We note, from \eqref{eq:holo.diff}, that in this generic case the leading singularity in both $ \partial h_1 $ and $ \partial h_2 $ is $ 3/2 $, i.e. $ v^{-3/2} $. We denote this by $ (3/2,3/2) $, where the first entry refers to the singularity in $ \partial h_1 $ and the second to that of $ \partial h_2 $.
Clearly, the degree of the singularity depends on the coordinate system. It is to be understood that when we use the notation above for the singularities in $ \partial h_{1,2} $ we always refer to the original coordinates in \eqref{eq:holo.diff}.
The general genus $g$ solution thus has $(2g+2)$ asymptotic $AdS_5\times S^5$ regions, with 5-form fluxes proportional to $N_i$.
In the dual field theory this seems to describe $(2g+2)$ half-lines, with some 4d ${\cal N}=4$
$SU(N_i)$ SYM theory living on each half-line, joining together and interacting at a point (times ${\mathbb R}^{2,1}$). To see this, consider the boundary of
our space-time (\ref{eq:space-time metric ansatz}). Writing the $AdS_4$ metric in Poincar\'e coordinates as $ds_{AdS_4}^2 = (dx^{\mu} dx_{\mu} + dz^2) / z^2$ ($\mu=0,1,2$, $z > 0$), there is a boundary wherever the coefficient of $dx^{\mu}dx_{\mu}$ diverges. One place where this happens is where $f_4$ diverges. This happens precisely at the branch points $\{e_i\}$, and we saw
that at each such point we have an $AdS_5\times S^5$ geometry, with a radial coordinate $x$, and with a
four dimensional boundary given by the half-line $\{x^{\mu}, z > 0\}$. (We can view this boundary either
as a half-line, or as an $AdS_4$, the two are related by a conformal transformation.) Another component
of the boundary is at $z=0$. This component is three dimensional, parameterized just by $\{x^{\mu}\}$.
The radial coordinate
approaching this boundary component is $z$, and the other coordinates, whose size remains fixed
as we approach the boundary, include the full Riemann surface $\Sigma$, as well as the two
two-spheres. Thus, all the half-lines discussed above end on this 3d boundary component, which can be
viewed as the intersection of $(2g+2)$ half-lines.
The generic solutions with $g > 0$ that have interior zeros $u_a$ are actually singular at
$u=u_a$ (the metric of $\Sigma$ has a conical singularity there), and we will not discuss them further here\footnote{We thank C. Bachas and J. Estes for bringing this point, which was mentioned in a footnote in \cite{Bachas:2011xa}, to our attention.}. Thus, the only generic solutions that
are non-singular are the genus zero ``Janus solutions''. However, we will see in the next
section that the higher genus solutions have a particular degeneration limit in which they
give sensible solutions of type IIB string theory.
\section{Solutions with 5-branes} \label{sec:5brane}
The generic solutions of the previous section have various degeneration limits, where
several branch points and/or zeros of $Q_i$ come together. One interesting limit, discussed in
\cite{D'Hoker:2007xz}, is when two adjacent branch points $e_i$ come
together; this limit describes 5-branes. Between every two branch points
there is one $\alpha_i$ or one $\beta_i$, so either an $\alpha$ point or a $\beta$ point also
joins the two branch points in this degeneration.
We have already emphasized that the information about the asymptotic $ AdS_5 \times S^5 $ geometry is encoded in the singular behavior of the differentials $\partial{h_{1,2}}$ near the branch points. The same is also true for the 5-branes at the degeneration point. For this reason, we may extract the properties of the 5-branes from the simplest possible case of genus one, without loss of generality.
\subsection{The genus one case} \label{subsec:genus1}
The generic genus one solution has four branch points at $u=e_{1,2,3},\infty$. There are four additional parameters: $ \alpha_{1,2} $ which are the two real zeros of $ \partial h_1 $, and $ \beta_{1,2} $ which are those of $\partial{h_2}$. They satisfy the ordering\footnote{Here, and throughout this paper,
we arbitrarily choose the branch point at infinity to come between a $\beta$ point and an $\alpha$ point, in this order along the boundary. Solutions with the opposite order may easily be found from the ones we describe by an S-duality transformation, or by taking $u \to -{\bar u}$.}
\begin{equation}
\alpha_2 < e_3 < \beta_2 < e_2 < \alpha_1 < e_1 < \beta_1.
\end{equation}
Following \cite{D'Hoker:2007xz} we choose the collapse $e_1=e_2 \equiv k^2$ (with $k>0$), which implies $\alpha_1=k^2$. We may also fix $e_3=0$, and relabel $\alpha_2=\alpha$ (see figure \ref{fig:genus 1 in the collapse limit}). The four remaining parameters are
\begin{equation}
\alpha<0<\beta_2<k^2<\beta_1.
\end{equation}
Recall that $ \partial h_{1,2} $ also have a mutual complex zero $ u_1 $, which is not free and should be fixed as a function of the other parameters. It is shown in \cite{D'Hoker:2007xz} that regularity and the negativity of $W$ imply that in our limit $ u_1 = k^2 $. We then have
\begin{equation}\label{genusone}
\partial h_1 = -i \frac{(u-\alpha)}{u^{3/2}}du,\qquad\qquad
\partial h_2 = - \frac{(u-\beta_1)(u-\beta_2)}{(u-k^2)u^{3/2}}du.
\end{equation}
As before, there is a $ (3/2,3/2) $ singularity at $ u=0,\infty $, corresponding to asymptotic $ AdS_5 \times S^5 $ regions. However, the degeneration limit has resulted in a new type of singularity $ (0,1) $ at $ u=k^2 $.
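Indeed, expanding \eqref{genusone} near $u=k^2$, $\partial h_1$ is finite and non-vanishing there (since $\alpha<k^2$), while $\partial h_2$ develops a simple pole,
\begin{equation*}
\partial h_2 \simeq \frac{(\beta_1-k^2)(k^2-\beta_2)}{k^3}\, \frac{du}{u-k^2},
\end{equation*}
whose integral produces the logarithmic term in $h_2$ that appears below.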
\begin{figure}[h!]
\centering
\includegraphics[scale=1.2]{D5.pdf}
\caption{The generic genus one solution has four asymptotic regions. The 5-brane solution is achieved by collapsing two such adjacent regions. The picture shows that in the full geometry there is a 3-cycle surrounding the object at $u=k^2$, as expected of a 5-brane.}
\label{fig:genus 1 in the collapse limit}
\end{figure}
The behavior near the branch points at $ u=0,\infty $ is fully described by $ \gamma^{0,\infty}_{1,2} $ and $ \delta^{0,\infty}_{1,2} $ from \eqref{eq:gamma_1,2 for general genus solution}, \eqref{eq:delta_1,2 for general genus solution}. In terms of the parameters above, they are given by
\begin{align}
\gamma_1^0 & = \alpha, & \gamma_1^{\infty} & = i, \nonumber \\
\gamma_2^0 & = \frac{\beta_1 \beta_2}{k^2}, & \gamma_2^{\infty} & = i, \nonumber \\
\delta_1^0 & = -1, & \delta_1^{\infty} & = i \alpha, \\
\delta_2^0 & = \frac{\beta_1 \beta_2-k^2(\beta_1+\beta_2)}{k^4}, & \delta_2^{\infty} & = i (\beta_1+\beta_2-k^2), \nonumber
\end{align}
such that
\begin{align}\label{eq:genus 1 - 5-flux at the asymptotic regions}
\Delta_0 & = \frac{k^2 \beta_1 \beta_2 + \alpha \beta_1 \beta_2 -\alpha k^2(\beta_1+\beta_2)}{k^4}, & \Delta_{\infty} & = \alpha+k^2-\beta_1-\beta_2.
\end{align}
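For example, $\Delta_\infty \equiv \gamma_1^\infty \delta_2^\infty - \gamma_2^\infty \delta_1^\infty$ follows immediately from the values above:
\begin{equation*}
\Delta_\infty = i \cdot i(\beta_1+\beta_2-k^2) - i \cdot i\alpha = \alpha + k^2 - \beta_1 - \beta_2.
\end{equation*}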
Note that the $SL(2,\mathbb R)$ symmetry of type IIB supergravity was used to set the string coupling near $e_{2g+2}=\infty$ to one. We see that three of the parameters of our solution correspond to the D3-brane charges in the two remaining asymptotic regions, proportional to $\Delta_0$ and $\Delta_{\infty}$, and to the string coupling at the second asymptotic region $e_3=0$.
Choosing a new coordinate $ w^2=u $, such that $ \Sigma = \{ {\rm Re}(w)\le 0, {\rm Im}(w)\ge 0 \} $,
the differentials \eqref{eq:holo.diff} can be integrated in closed form (see \cite{D'Hoker:2007xz})
\begin{equation}
\label{eq:h_1,2 in the collapse limit}
\begin{split}
h_1 = & - 2 i \left( w - \bar{w} \right) \left[1-\frac{\alpha}{|w|^2} \right],\\
h_2 = & - 2 \left( w + \bar{w} \right) \left[1+\frac{\beta_1 \beta_2}{k^2|w|^2} \right] + \frac{(\beta_1-k^2)(k^2-\beta_2)}{k^3} \ln \left( \frac{|w-k|^2}{|w+k|^2}\right).\\
\end{split}
\end{equation}
We now zoom in on the singular region near $ w=-k $ by choosing the coordinates $ w = r e^{i\psi}-k $ and expanding in small $ r $. This gives at leading order
\begin{equation}
h_1 = 4 c \, r \sin (\psi) ,\qquad\qquad
h_2 = 2 b - 2 d \ln (r^2),
\label{eq:h12 5brane}
\end{equation}
where
\begin{equation}
\begin{split}
c = & 1 - \frac{\alpha}{k^2}, \qquad \qquad
2d = \frac{(\beta_1 - k^2)(k^2 - \beta_2)}{k^3}, \\
b = & 2k +2\frac{\beta_1 \beta_2}{k^3}+\frac{(\beta_1 - k^2)(k^2 - \beta_2)\ln (4k^2)}{k^3} .
\end{split}
\end{equation}
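As a check of the leading order, substituting $w=re^{i\psi}-k$ into \eqref{eq:h_1,2 in the collapse limit} gives $w-\bar w = 2ir\sin(\psi)$ and $|w|^2 = k^2 + O(r)$, so that
\begin{equation*}
h_1 = -2i(w-\bar w)\left[1-\frac{\alpha}{|w|^2}\right] = 4\left(1-\frac{\alpha}{k^2}\right) r\sin(\psi) + O(r^2) = 4c\, r\sin(\psi) + O(r^2),
\end{equation*}
in agreement with \eqref{eq:h12 5brane}.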
For the dilaton we have at leading order
\begin{equation}
e^{2 \Phi} = \frac{d^2}{c^2} \frac{|\ln(r)|}{r^2},
\end{equation}
such that it diverges as $r\to 0$.
The metric functions at leading order are
\begin{align}\label{eq:metric factors in the collapse limit}
\rho^2 = & 2 \sqrt{cd} \, \frac{1}{r^{\frac{3}{2}} |\ln (r) |^{\frac{1}{4}}}, & f_1^2 = & 8 \sqrt{cd} \, \frac{r^{\frac{1}{2}}}{|\ln (r) |^{\frac{1}{4}}} \sin^2 (\psi),\nonumber \\
& & \\
f_2^2 = & 8 \sqrt{cd} \, r^{\frac{1}{2}}|\ln (r) |^{\frac{3}{4}}, & f_4^2 = & 8 \sqrt{cd} \, r^{\frac{1}{2}}|\ln (r) |^{\frac{3}{4}}.\nonumber
\end{align}
Note that $f_4^2$
no longer diverges near the branch points, so $w=-k$ does not give an extra boundary component.
We find that, in the vicinity of $ r=0 $, the metric factorizes to $ AdS_4 \times S^2_2 $ and $ S^3 $,
\begin{equation}
ds^2 = f_4^2(r) \left[ ds^2_{AdS_4}+ds^2_{S^2_2} \right]+ 4 r^2 \rho^2(r) \left[ d\psi^2+ \sin^2(\psi) ds^2_{S^2_1}+\frac{1}{r^2}dr^2 \right].
\end{equation}
Together with the behavior of the dilaton, this suggests an interpretation of the supergravity solution as including NS5-branes, whose world-volume consists of the $AdS_4\times S^2$ at $u=k^2$. Thus, this degenerate limit of the generic solution, in which the point $u_1$
joined with two branch points at the boundary, gives a non-singular configuration
in string theory. The transverse 3-cycle indeed carries 3-form flux, as we now show.
Since the 3-sphere is extended in the directions $ 4,5 $ and $ \psi $, we see from \eqref{eq:ansatz for 2-form potential} that there is only a contribution to the flux coming from $ b_1 $. Recalling that $ b_{1,2} $ are real, we learn that this contribution comes from the real part of the 2-form potential, which in light of \eqref{complex 3-form field strength} gives NS-NS flux. This signals the presence of NS5-branes at the point $ r=0 $.
To evaluate the 3-form flux we need the differentials of the harmonic duals $\tilde{h}_{1,2}$ introduced in \eqref{eq:2form potentials}. Using $\partial \tilde{h}_{1,2}=i\partial h_{1,2}$, and expanding around $w=-k$, we have at leading order $ d \tilde{h}_2 = (4d)d \psi$. The NS-NS 3-form flux is therefore
\begin{equation}
\label{nsflux}
\lim_{r\rightarrow 0} \int_{\Sigma_3} H_{(3)} = \lim_{r\rightarrow 0} \int_{S^3} db_1 \wedge \hat e^{45}
= 32 \pi^2 d \equiv n_5, \quad n_5 \in \mathbb{Z};
\end{equation}
while the RR 3-form flux vanishes. Our solution thus includes $n_5$ NS5-branes wrapping $AdS_4\times S^2$.
The 3-form flux is conserved, and the flux sourced by the 5-branes goes off
into the other regions of the geometry.
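The numerical factor in \eqref{nsflux} may be understood as follows (a sketch, ignoring orientation signs): at leading order $db_1 = 2\, d\tilde h_2 = 8d\, d\psi$, the polar angle $\psi$ runs over $[0,\pi]$ on the 3-sphere, and the unit 2-sphere volume is $\int_{S^2_1}\hat e^{45} = 4\pi$, so
\begin{equation*}
\int_{S^3} db_1 \wedge \hat e^{45} = 8d \int_0^{\pi} d\psi \int_{S^2_1} \hat e^{45} = 8d \cdot \pi \cdot 4\pi = 32\pi^2 d.
\end{equation*}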
To find D5-branes, we should take a different collapse limit such that $ f^2_2 $ vanishes on both sides of the singular point, with a $\beta$ point between the two collapsing branch points. Then, $ S^2_2 $ combines with $ \psi $ to form an $ S^3 $, and we get a contribution from the imaginary part of the 2-form potential, which means that the branes source RR 3-form flux and are therefore D5-branes.
This can also be seen from the S-duality transformation mentioned above.
The computation of the 5-form flux near the 5-brane ``throat'' is complicated by the presence of a Chern-Simons type term. The 5-form $F_{(5)}$ \eqref{fiveformdef} is not conserved, but rather satisfies (in our normalizations)
$d F_{(5)} = {1\over 4} H_{(3)} \wedge F_{(3)}$. Thus, if we want a
conserved 5-form that could lead to a conserved charge, we need to take a different 5-form
\begin{equation} \label{tildefive}
{\tilde F}_{(5)} = F_{(5)} + a C_{(2)} \wedge H_{(3)} - \left({1\over 4} -a \right) B_{(2)} \wedge F_{(3)},
\end{equation}
for
some real constant $a$. Generally the Page charge coming from such a 5-form (with $a=0$ or $a=\frac{1}{4}$) is the only conserved and
quantized charge, but it is not gauge-invariant due to the gauge freedom of shifting $B_{(2)}$ and $C_{(2)}$ (see \cite{Marolf:2000cb} for a general discussion). In our solutions we can sometimes fix this freedom by requiring ${\tilde F}_{(5)}$ to be non-singular, even when the two-cycles shrink to zero size.
However, in solutions with NS5-branes like the one we discuss here, the definition \eqref{tildefive} (for generic values of $a$) is not good since there is no globally well-defined non-singular choice of $B_{(2)}$ (and thus of ${\tilde F}_{(5)}$), as we now argue.
The issue is that in solutions with NS5-branes, the $S^2$ on which we have a non-zero $B_{(2)}$ is part of an $S^3$ on which there are $n_5$ units of NS-NS 3-form flux, as we showed above. The $S^2$ shrinks on both poles of the $S^3$, but the value of $B_{(2)}$ (integrated over $S^2$) at these two poles differs by $n_5$. So, if we want $B_{(2)}$ to be globally well-defined, its integral over $S^2$ must be non-zero at least at one
of these poles, but this means that $B_{(2)}$ is singular at that pole. Alternatively, we can define $B_{(2)}$ in different patches (with each patch including a single pole of $S^3$) such that it vanishes at both poles, but then it is not globally well-defined (since we need a non-trivial gauge transformation to relate the different patches; the patches are separated by the generalization of a Dirac string). Similarly, in the presence of D5-branes, there is no globally well-defined and non-singular choice of $C_{(2)}$.
To see this explicitly from our solution, note that to compute $B_{(2)}$ and $C_{(2)}$ we must find the harmonic duals ${\tilde h}_{1,2}$ themselves (rather than just their derivatives). Using again $\partial \tilde{h}_{1,2}=i\partial h_{1,2}$ we have
\begin{equation}\label{eq:harm duals}
\begin{split}
\tilde{h}_1 = & 2 \left( w + \bar{w} \right) \left[1 + \frac{\alpha}{|w|^2} \right],\\
\tilde{h}_2 = & - 2 i \left( w - \bar{w} \right) \left[1+\frac{\beta_1 \beta_2}{k^2|w|^2} \right] - i \frac{(\beta_1-k^2)(k^2-\beta_2)}{k^3} \ln \left(\frac{w^2 - k^2}{\bar{w}^2 - k^2} \right),\\
\end{split}
\end{equation}
up to arbitrary additive constants.
We see that $b_1=2\tilde{h}_2+\ldots$ changes by $8 \pi d=n_5 / 4\pi$ when going from one side of the point $w=-k$ to the other (on the real line). But the two-cycle $S^2_1$ vanishes on both sides of this point, so we
cannot have $b_1$ vanishing everywhere that the $S^2$ shrinks, and there is no non-singular choice of $B_{(2)}$ (if we want it to be globally well-defined).
This implies that for this solution the only globally well-defined non-singular choice for a conserved 5-form is ${\cal F}_1 \equiv F_{(5)} + {1\over 4} C_{(2)} \wedge H_{(3)}$, where we choose the arbitrary
constant in $\tilde{h}_1$ so that $C_{(2)}$ vanishes everywhere that $S^2_2$ shrinks to zero size (this is a specific fixing of the gauge freedom of shifting $C_{(2)}$). Similarly, in solutions with D5-branes we need to choose ${\cal F}_2 \equiv F_{(5)} - {1\over 4} B_{(2)} \wedge F_{(3)}$.
Thus, in order to compute the 5-form flux in our solution, we need to choose
$\tilde{h}_1$ to vanish on $w \in [0,i\infty)$, as it does for the expression we wrote above. Expanding $\tilde{h}_1$ around $w=-k$ we then find to leading order $\tilde{h}_1=-4k(1+\alpha/k^2)$. The conserved 5-form flux coming from the NS5-brane singularity is therefore
\begin{align}
\begin{split}
\label{eq:5-form flux in the throat}
\lim_{r\rightarrow 0} \int_{\Sigma_3\times S^2_2} (F_{(5)}+{1\over 4} C_{(2)}\wedge H_{(3)}) =& (4 \pi)^2 8 \pi \, 2k(2-c)d \\
=& (4 \pi)^2 8 \pi \frac{(k^2+\alpha)(\beta_1 - k^2)(k^2-\beta_2)}{k^4} \\
=& \frac{1}{4} (N_{\infty} - N_0).
\end{split}
\end{align}
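The second line of \eqref{eq:5-form flux in the throat} follows from the first by substituting the definitions of $c$ and $d$:
\begin{equation*}
2k(2-c)d = 2k \cdot \frac{k^2+\alpha}{k^2} \cdot \frac{(\beta_1-k^2)(k^2-\beta_2)}{2k^3} = \frac{(k^2+\alpha)(\beta_1-k^2)(k^2-\beta_2)}{k^4}.
\end{equation*}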
We see that the 5-form flux going into the 5-brane ``throat'' exactly balances the surplus flux coming from the asymptotic $AdS_5\times S^5$ region at $u=0$ relative to the one at infinity, as it should by charge conservation since $d{\cal F}_1=0$ (note that the second terms in ${\cal F}_1$ and ${\cal F}_2$ do not contribute at the $AdS_5\times S^5$ singularities).
We can now interpret the degeneration limit described above of the genus one solution as describing
the near-horizon limit of D3-branes intersecting NS5-branes, such that some of the D3-branes end on
the 5-branes. More precisely, the four parameters of our solution correspond to the number of
NS5-branes \eqref{nsflux}, the number of D3-branes on each side of it \eqref{eq:genus 1 - 5-flux at the asymptotic regions}, and the relative asymptotic string coupling
between the two asymptotic regions; the near-horizon interpretation is only valid when the string
couplings in the two asymptotic regions coincide.
When our solution has a singularity of this type with $n_5$ NS5-branes or $n_5$ D5-branes wrapping $AdS_4\times S^2$, the low-energy theory on these branes includes a $U(n_5)$ gauge symmetry. The 5-branes intersect the boundary of our space-time along the 3d component of the boundary (where all half-lines intersect), and thus the corresponding field theories have a $U(n_5)$ global symmetry, with the currents localized in three dimensions.
\subsection{D3-branes intersecting several stacks of 5-branes} \label{subsec:3intersect5}
Consider the generic genus $g$ solution reviewed in \S\ref{subsec:genus g ansatz}. The parameters of this solution are the $2g+1$ branch points $e_i$, the $g+1$ real zeros $\alpha_b$ of $\partial h_{1}$, and the $g+1$ real zeros $\beta_b$ of $ \partial h_{2}$.\footnote{Recall that the complex zeros $ u_a $ are not free parameters, and are determined by \eqref{eq:constr}. } It is convenient to represent these parameters as a string of consecutive zeros and branch points by writing (the indices are omitted for simplicity, and the ordering is implicit, see \eqref{eq:parameters})
\begin{equation} \label{eq:para.string}
\alpha \quad e \quad \beta \quad e \quad \alpha \quad e \quad \beta \quad e \quad \ldots \quad e \quad \alpha \quad e \quad \beta \quad e \quad \alpha \quad e \quad \beta.
\end{equation}
As explained above, we may introduce 5-branes into the solution by collapsing adjacent pairs of branch points. When there is an $ \alpha $ between the branch points, we get a stack of NS5-branes. Let us denote such a collapse by a triplet $ (e \alpha e) $. Similarly, D5-branes are obtained by a collapse $ (e \beta e) $.
Whenever a pair of branch points is collapsed, one of the complex zeros $ u_a $ gets fixed to the collapse point. This results in a $ (0,1) $ singularity for the $ (e \alpha e) $ collapse, and $ (1,0) $ for the $ (e \beta e) $ collapse. As mentioned above, non-singular solutions are
obtained only when there are no points $u_a$ in the interior. Thus, we need to have $ g $ stacks of 5-branes, such that all the $ u_a$ go to the boundary. We then remain with two asymptotic
$AdS_5\times S^5$ regions, as we expect for solutions corresponding to intersecting branes.
We have fixed one of the two $(3/2,3/2)$ branch points to $u=\infty$, and we may fix the other to $u=0$.
\eqref{eq:para.string} has now turned into
\begin{equation} \label{eq:multiple collapse}
\alpha \quad (e \beta e) \quad \alpha \quad \ldots \quad \alpha \quad (e \beta e) \quad \alpha \quad e \quad \beta \quad (e \alpha e) \quad \beta \quad \ldots \quad \beta \quad (e \alpha e) \quad \beta.
\end{equation}
Clearly, the $ (3/2,3/2) $ branch point at $ u=0 $ has to lie to the left of a $ \beta $ and to the right of an $ \alpha $ (otherwise there are more than two uncollapsed branch points). There are $ g+1 $ options for how to collapse the other branch points, corresponding to having between $0$ and $g$ collapses of type $ (e \alpha e) $ (stacks of NS5-branes). Let $ n $ stand for the number of stacks of NS5-branes and $ m $ for the number of stacks of D5-branes; then $ n + m = g $. For example, the genus one solution considered above has $ n=1 $ and $ m=0 $.
\begin{figure}[h!]
\subfigure[]{
{\begin{overpic}[width=0.55\linewidth]{TwoThroats.PNG}
\put(-5,40){$AdS_5 \times S^5$}
\put(90,42){$AdS_5 \times S^5$}
\put(54,0){D5}
\put(32,0){D5}
\put(75,0){D5}
\put(53,70){NS5}
\put(32,62){NS5}
\put(77,58){NS5}
\end{overpic}}
}\qquad \qquad
\subfigure[]{
\includegraphics[scale=1.2]{intersecting_QFT.pdf}
}
\caption{(a) A schematic picture of the six-dimensional space made from the two
two-spheres and $\Sigma$, for the solutions of this subsection corresponding to D3-branes intersecting D5-branes and NS5-branes. This space is non-compact along
two $AdS_5\times S^5$ ``throats'', and has several D5-brane and NS5-brane ``throats'' coming out of its interior. (b) The dual field theory, which describes two 4d theories on a half-line interacting with a 3d ``defect'' theory. Note that the $z$ coordinate which parameterizes the half-lines in (b) is part of the $AdS_4$ space, and is not visible in (a).}
\label{fig:intersecting branes}
\end{figure}
Note that all the NS5-branes are to the right of the $u=0$ branch point, while all the D5-branes are to the left of it. In accordance with this, let us denote the locations of the NS5-branes by $ k^2_a $ ($ a=1,\cdots,n $), and those of the D5-branes by $ -l^2_b $ ($ b=1,\cdots,m $), with $k_a, l_b > 0$. The ordering \eqref{eq:multiple collapse} is then (reinstating the indices)
\begin{equation}
\alpha_{g+1} < -l^2_{m} < \alpha_g < ... < - l_1^2 < \alpha_{n+1} < 0 < \beta_{n+1} < k^2_{n} < ... < \beta_2<k^2_1<\beta_1,
\end{equation}
and the holomorphic differentials are
\begin{equation}
\partial h_1 = - i \frac{\prod_{b=1}^{m+1}(u-\alpha_{b+n})}{\prod_{b=1}^m(u+l_b^2)}\frac{du}{u^{3/2}},\qquad\qquad
\partial h_2 = - \frac{\prod_{a=1}^{n+1}(u-\beta_a)}{\prod_{a=1}^n(u-k_a^2)}\frac{du}{u^{3/2}}.
\end{equation}
The 3-form fluxes of each 5-brane stack may be computed as in the previous subsection; we will
discuss the computation of 5-form fluxes in the next section.
We interpret these solutions (in the special case where the asymptotic dilaton is the same in both
$AdS_5\times S^5$ regions) as describing D3-branes intersecting (and possibly ending on) multiple stacks of 5-branes.
The brane orientations are the same as in the standard brane constructions yielding theories with
3d ${\cal N}=4$ supersymmetry \cite{Ganor,Hanany:1996ie}; the NS5-branes fill three of the directions orthogonal
to the D3-branes, and the D5-branes fill the other three orthogonal directions. In the conformal
limit that our solutions describe, all these branes intersect at a point
times $\mathbb{R}^{2,1}$ (before any back-reaction is taken into account). We will discuss the
distinction between the different NS5-brane (D5-brane) stacks in the next section.
\section{D3-branes ending on 5-branes}\label{sec:endingbranes}
Our main goal in this paper is to construct the supergravity solutions of D3-branes ending on 5-branes. Several years ago, Gaiotto and Witten \cite{Gaiotto:2008sa,Gaiotto:2008ak} classified all possible supersymmetry-preserving boundary conditions for the ${\cal N}=4$ SYM theory, and in particular the boundary conditions that correspond to D3-branes ending on 5-branes, and we would like to compare the results we find to their analysis. In the previous section we looked at solutions that had two asymptotic $ AdS_5 \times S^5 $ regions. In the brane picture, each one of those regions is interpreted as the near horizon limit of a stack of D3-branes, where the number of branes is controlled by the 5-form flux in that region. When the numbers of D3-branes in the two regions are not equal, some of the D3-branes end on 5-branes. Setting one of the 5-form fluxes to zero means that there are no D3-branes in this region, and that the metric there should be that of flat space (instead of having an
$AdS_5\times S^5$ throat). In such a case all the D3-branes coming from the other asymptotic region end
on 5-branes.
We now examine how the gravity solution behaves in this limit. Once again, we first consider the genus one case in full detail, before proceeding to the more general solutions.
\subsection{The genus one case}
Consider the 5-form fluxes $ \Delta _{0,\infty} $ computed in \eqref{eq:genus 1 - 5-flux at the asymptotic regions}. We want to find a limit such that the flux at the origin vanishes
while the flux at infinity is kept finite. Recalling figure \ref{fig:genus 1 in the collapse limit}, the topology suggests that we
must keep the NS5-branes, which are localized at the point $k^{2}$, at a finite distance from the origin. Therefore we shall keep $k$ fixed in this limit. Then, we need to take
\begin{equation}
\alpha,\beta_{2}\rightarrow0.
\label{eq:genus 1 - limit of vanishing flux}
\end{equation}
With this choice
\begin{equation}
\Delta_{0}= 0,\qquad \qquad \Delta_{\infty}= k^{2}-\beta_{1}.
\label{eq:genus 1 - vanishing flux}
\end{equation}
A priori, the correct way to take the limit could involve keeping the ratio of $\alpha$ to $\beta_{2}$ fixed at some value, but we will see that the limit is independent of this ratio.
In the spirit of section \ref{sec:5brane} we think of this limit as a new kind of collapse $ (\alpha e \beta) $.
In this limit, the holomorphic differentials \eqref{genusone} assume the following form
\begin{equation}
\partial h_1 = -i \frac{du}{u^{1/2}},\qquad\qquad
\partial h_2 = - \frac{(u-\beta_1)}{(u-k^2)}\frac{du}{u^{1/2}}.
\end{equation}
We see that this limit gives a new type of singularity, $ (1/2,1/2) $, at $u=0$. One can prove
that this is the only other possible singularity of $\partial h_1$ and $\partial h_2$ that can occur as
a limit of the solutions we discuss without producing a singularity of the full geometry.
Using the exact form of $ h_{1,2} $ given in \eqref{eq:h_1,2 in the collapse limit}, with coordinates $ w = re^{i \theta} $ ($\theta \in[\frac{\pi}{2},\pi]$), near $r=0$ ($u=0$) the real harmonic functions behave as
\begin{equation}
h_{1} = 4r\sin (\theta),\qquad\qquad
h_{2} = -\frac{4\beta_{1}r\cos (\theta)}{k^{2}}+\frac{4(k^{2}-\beta_{1})r^{3}\cos(3 \theta)}{3k^{4}}+O(r^{5}).
\label{eq:h after limit}
\end{equation}
Note that the singular terms drop out.
We can now plug this into \eqref{eq:local solutions - dilaton} and \eqref{eq:local solutions - metric factors} and find the leading behavior near $r=0$
\begin{align}
e^{2\Phi} & = \frac{\beta_{1}^{2}}{k^{4}},&
\rho^{2} & = \frac{2\sqrt{2|\Delta_{\infty}|}}{k^{2}}, \nonumber \\
f_{1}^{2} & = \frac{8\sqrt{2|\Delta_{\infty}|}}{k^{2}}r^{2}\sin^2 (\theta),&
f_{2}^{2} & = \frac{8\sqrt{2|\Delta_{\infty}|}}{k^{2}}r^{2}\cos^2(\theta),\quad \quad \quad
f_{4}^{2} = \frac{8\beta_{1}}{\sqrt{2|\Delta_{\infty}|}}.
\label{eq:flat metric near the origin after the limit}
\end{align}
Note the $r^{2}$ factor in $f_{1}^{2}$ and $f_{2}^{2}$. This means
that the radius of $S^{5}$ decreases as we approach the branch
point at $w=0$, resulting in a flat metric, as in spherical
coordinates in ${\mathbb R}^6$. Consequently, there is no topologically non-trivial
cycle that could support a 5-form flux. Moreover, the $r^{-2}=e^{2x}$
singularity in $f_{4}^{2}$, responsible for the asymptotic
$AdS_{5}$ structure, is no longer present.
The metric near $w=0$ is therefore
\begin{equation}
ds^{2}=\frac{8\sqrt{2|\Delta_{\infty}|}}{k^{2}}\left(dr^{2}+r^{2} (d\theta^{2}+\sin^2 (\theta) \, ds_{S_{1}^{2}}^{2} + \cos^2(\theta) \, ds_{S_{2}^{2}}^{2})\right)+\frac{8\beta_{1}}{\sqrt{2|\Delta_{\infty}|}}ds_{AdS_{4}}^{2},
\end{equation}
which is precisely the metric of $AdS_{4} \times \mathbb{R}^{6}$.
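To see explicitly that the six-dimensional part is the flat metric on $\mathbb{R}^{6}$, define $r_{1}\equiv r\sin(\theta)$ and $r_{2}\equiv -r\cos(\theta)$ (both non-negative for $\theta\in[\frac{\pi}{2},\pi]$); then $dr_{1}^{2}+dr_{2}^{2}=dr^{2}+r^{2}d\theta^{2}$, and the six-dimensional part of the metric takes the standard form on $\mathbb{R}^{3}\times \mathbb{R}^{3}=\mathbb{R}^{6}$,
\begin{equation}
dr^{2}+r^{2}\left(d\theta^{2}+\sin^2 (\theta) \, ds_{S_{1}^{2}}^{2} + \cos^2(\theta) \, ds_{S_{2}^{2}}^{2}\right) = dr_{1}^{2}+r_{1}^{2}\, ds_{S_{1}^{2}}^{2}+dr_{2}^{2}+r_{2}^{2}\, ds_{S_{2}^{2}}^{2},
\end{equation}
where $r_{1}$ and $r_{2}$ are the radial coordinates of the two $\mathbb{R}^3$ factors.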
The point $w=0$ ($u=0$) is thus just a regular point in the full geometry. We have therefore
obtained a smooth solution describing D3-branes ending on NS5-branes, with a single asymptotic $AdS_5\times S^5$
region at $u=\infty$, a single NS5-brane stack located at $u=k^2$, and no
other singular points. Similarly, one may
obtain from a different degeneration limit of the genus one case the solution for D3-branes
ending on D5-branes.
\subsection{D3-branes ending on multiple stacks of 5-branes}
The main lesson of the genus one case is that it is possible to locally turn off the 5-form flux, emanating from an asymptotic $ AdS_5 \times S^5 $ region at $ u=e $, by letting $ \alpha $ and $ \beta $ coalesce to $ e $ (an $ (\alpha e \beta) $ collapse). This changes the singularity at $ e $, from $ (3/2,3/2) $ to $ (1/2,1/2) $, leading to a smooth $ AdS_4 \times \mathbb{R}^6 $ geometry at that point.
A $(\beta e \alpha)$ collapse gives the same result.
The local nature of this procedure means that it is easily applicable to the more general solution of multiple stacks of 5-branes intersecting D3-branes, introduced in \S\ref{subsec:3intersect5}. Consider the schematic representation of this solution given in \eqref{eq:multiple collapse}. Recall that in this solution there are two asymptotic $ AdS_5 \times S^5 $ regions, at $ u=0 $ and $ u=\infty $, corresponding to D3-branes ending on stacks of 5-branes from both sides. Taking an $ (\alpha e \beta) $ collapse at $ u=0 $ leads to a new solution, with D3-branes ending on the 5-branes from only one side:
\begin{equation}
\alpha \quad (e \beta e) \quad \alpha \quad \ldots \quad \alpha \quad (e \beta e) \quad (\alpha e \beta) \quad (e \alpha e) \quad \beta \quad \ldots \quad \beta \quad (e \alpha e) \quad \beta.
\end{equation}
The remaining $2g$ parameters are
\begin{equation}
\alpha_{g+1}<-l_m^2<\alpha_g<...<\alpha_{n+2}<-l_{1}^2<0<k^2_{n}<\beta_n<...<\beta_2<k^2_1<\beta_1,
\end{equation}
with holomorphic differentials
\begin{equation} \label{eq:nholodiff}
\partial h_1 = - i \frac{1}{\sqrt{u}} \prod_{b=1}^{m} \frac{(u-\alpha_{b+n+1})}{(u+l_b^2)}du ,\qquad\qquad
\partial h_2 = - \frac{1}{\sqrt{u}} \prod_{a=1}^{n} \frac{(u-\beta_a)}{(u-k_a^2)}du .
\end{equation}
It is convenient to substitute $u=w^2$ in $ \partial h_{1,2} $ as in \S\ref{subsec:genus1}. In this coordinate $h_{1,2}$ are given by
\begin{equation}\label{eq:general harm}
\begin{split}
h_1 = & 4 \mathrm{Im}(w) + 2 \sum^{m}_{b=1} \tilde{d}_b \ln \left( \frac{|l_b - i w|^2}{|l_b + i w|^2} \right),\\
h_2 = & -4 \mathrm{Re}(w) - 2 \sum^{n}_{a=1} d_a \ln \left( \frac{|k_a + w|^2}{|k_a - w|^2} \right),\\
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
d_a \equiv & \frac{(\beta_a-k^2_a)}{2 k_a} \prod_{c \ne a}^{n}\frac{(k^2_a - \beta_c)}{(k^2_a - k^2_c)}, \qquad \qquad
\tilde d_b \equiv \frac{(-\alpha_{b+n+1}-l^2_b)}{2l_b} \prod_{c \ne b}^{m}\frac{(l^2_b + \alpha_{c+n+1})}{(l^2_b - l^2_c)}. \\
\end{split}
\end{equation}
Note that $d_a,\tilde{d}_b>0$: given the ordering above (and $k_a,l_b>0$), each factor in the numerators has the same sign as the corresponding factor in the denominators.
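As a consistency check (a short exercise in partial fractions), one may verify that \eqref{eq:general harm} indeed integrates \eqref{eq:nholodiff}. Substituting $u=w^2$ in $\partial h_1$ and decomposing the rational function into partial fractions in $u$,
\begin{equation}
\partial h_1 = -2i \prod_{b=1}^{m} \frac{(w^2-\alpha_{b+n+1})}{(w^2+l_b^2)}\, dw = -2i \left(1+\sum_{b=1}^{m} \frac{2 l_b \tilde{d}_b}{w^2+l_b^2}\right) dw,
\end{equation}
where the residues reproduce precisely the coefficients $\tilde{d}_b$ defined above; this matches the derivative of the first line of \eqref{eq:general harm}. The same computation applies to $\partial h_2$, with $k_a$ and $d_a$ in place of $l_b$ and $\tilde{d}_b$.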
The coordinate $w$ occupies the second quadrant of the complex plane,
$\{\mathrm{Re}(w)<0,\mathrm{Im}(w)>0\}$. The NS5-branes are then located on the negative real line ($\{-k_a\}$), while the D5-branes are located on the positive imaginary line ($\{il_b\}$). Near each of these points, the local supergravity solution is as we have discussed in \S\ref{subsec:genus1}. To demonstrate this, expand \eqref{eq:general harm} near a stack of NS5-branes at $w=-k_a$. Using $w=re^{i\psi}-k_a$ we find
\begin{equation}
\begin{split}
h_1 = 4 c_a r \sin(\psi), \qquad \qquad h_2 = 2 b_a - 2 d_a \ln (r^2),\\
\end{split}
\end{equation}
where $c_a,b_a$ are real constants that depend on the parameters of the solution. This is the same local form of $h_{1,2}$ as we found in \eqref{eq:h12 5brane}. The resulting calculation of the metric and 3-form flux can be carried over without any change. Similar considerations apply to the D5-branes at $w=il_b$.
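For completeness (this explicit form is not needed in what follows), expanding \eqref{eq:general harm} around $w=-k_a$ gives
\begin{equation}
c_a = 1 + 2\sum_{b=1}^{m} \frac{\tilde{d}_b\, l_b}{l_b^2+k_a^2}, \qquad\qquad
b_a = 2 k_a + 2 d_a \ln (2 k_a) + 2 \sum_{c \neq a}^{n} d_c \ln \left( \frac{k_a+k_c}{|k_a-k_c|} \right),
\end{equation}
and in particular $c_a>0$, consistent with the local 5-brane form of the solution.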
Repeating the calculations of \S\ref{subsec:genus1}, we find that the 3-form flux carried by the stack of NS5-branes at $w=-k_a$ is given by
\begin{equation}
\int_{S^3} H_{(3)} = n_a, \qquad n_a\equiv 32 \pi^2 d_a \in \mathbb{Z}, \qquad \qquad \int_{S^3} F_{(3)}=0.
\end{equation}
The number of NS5-branes filling the $AdS_4\times S^2$ at $w=-k_a$ is therefore $n_a$.
A similar analysis near the $b$'th stack of D5-branes ($w = i l_b$) gives
\begin{equation}
\begin{split}
\int_{S^3} H_{(3)} = 0, \qquad\qquad
\int_{S^3} F_{(3)} = -m_b, \qquad m_b \equiv 32 \pi^2 \tilde d_b \in \mathbb{Z},
\end{split}
\end{equation}
such that there are $m_b$ D5-branes in this stack.
\begin{figure}[h!]
\subfigure[]{
{\begin{overpic}[width=0.55\linewidth]{OneThroat.PNG}
\put(50,-1){D5}
\put(27,6){D5}
\put(88,6){D5}
\put(53,72){NS5}
\put(29,71){NS5}
\put(85,63){NS5}
\put(-5,47){$AdS_5 \times S^5$}
\end{overpic}}
}\qquad \qquad
\subfigure[]{
\includegraphics[scale=1.2]{ending_QFT.pdf}
}
\caption{(a) A schematic picture of the six-dimensional space made from the two
two-spheres and $\Sigma$, for the solutions of this subsection corresponding to D3-branes ending on D5-branes and NS5-branes. This space is non-compact along
the $AdS_5\times S^5$ ``throat'', and has several D5-brane and NS5-brane ``throats'' coming out of its interior. (b) The dual field theory, which describes the 4d ${\cal N}=4$ SYM theory on a half-line with some boundary condition (that could include interactions with a 3d SCFT at the boundary). As in the previous figure, the 4d boundary component lives infinitely far away along the ``throat'' in figure (a).
}
\label{fig:ending branes}
\end{figure}
We have already discussed the difficulty in defining a conserved and globally well-defined 5-form flux in solutions where there are both NS5 and D5-branes. Before we show how this difficulty may be circumvented, let us first describe the simpler case where only 5-branes of one type appear. As explained towards the end of \S\ref{subsec:genus1}, for solutions that involve only NS5-branes we may use
\begin{equation}
{\cal F}_1 \equiv F_{(5)} + {1\over 4} C_{(2)} \wedge H_{(3)},
\end{equation}
which is both conserved and globally well-defined. Likewise, for D5-branes we use
\begin{equation}
{\cal F}_2 \equiv F_{(5)} - {1\over 4} B_{(2)} \wedge F_{(3)}.
\end{equation}
Hence, for the solutions with only NS5-branes, we find that the 5-form flux penetrating the 5-cycle $S^3 \times S^2_2$ at $w=-k_a$ is given by
\begin{equation}
\int_{\Sigma_5^a} {\cal F}_1 = \frac{1}{4} K_a n_a, \quad K_a \equiv 32 \pi k_a,
\end{equation}
which we interpret as having $n_a K_a$ D3-branes ending on this 5-brane stack, or $K_a$ D3-branes ending on each NS5-brane. Similarly, for the solutions with only D5-branes we find
\begin{equation}
\int_{\Sigma_5^b} {\cal F}_2 = \frac{1}{4} L_b m_b, \quad L_b \equiv 32 \pi l_b,
\end{equation}
with $L_b$ D3-branes ending on each D5-brane. In both cases the total 5-form flux summing over all 5-brane singularities equals the 5-form flux at the $AdS_5\times S^5$ singularity, as expected (all D3-branes end on 5-branes).
These solutions match nicely with the classification \cite{Gaiotto:2008sa,Gaiotto:2008ak} of the possible half-supersymmetric boundary conditions related to D3-branes ending on D5-branes or NS5-branes. In \cite{Gaiotto:2008sa,Gaiotto:2008ak} the possible boundary conditions for D3-branes ending on D5-branes were discussed by a weak coupling analysis; the direct classification of boundary conditions for D3-branes ending on NS5-branes is more complicated since these involve (in all cases except the case of a single NS5-brane) a non-trivial 3d superconformal field theory (SCFT) on the boundary of the half-line, but it must be the same as that of D5-branes by S-duality. For D3-branes ending on D5-branes,
the boundary conditions are classified (see \cite{Gaiotto:2008sa,Gaiotto:2008ak} and references therein) in terms of the behavior of three of the adjoint scalar fields $X_i$ ($i=1,2,3$) of the ${\cal N}=4$ SYM theory (the ones corresponding to the motion of the D3-branes along the D5-branes) near the boundary of the half-line at $z=0$. The different boundary conditions correspond to choosing an $N$-dimensional representation $\tau_i$ ($i=1,2,3$) of $SU(2)$ ($[\tau_i, \tau_j] = i \epsilon_{ijk} \tau_k$), and the scalar fields then behave near the boundary as $X_i = \tau_i / z$. Each $N$-dimensional representation can be decomposed into irreducible representations, so that it contains $m_b$ copies of the $L_b$-dimensional representation, and the number of irreducible representations that appears, $\sum_b m_b$, is identified with the number of D5-branes. We interpret such boundary conditions as having $L_b$ D3-branes ending on each of the $m_b$ D5-branes, for every value of $b$, and we thus have the same labeling for our solutions above as for the possible boundary conditions. It is easy to show that the global symmetries $\prod_b U(m_b)$ also agree.
Let us recall the difficulty in finding a conserved and globally well-defined 5-form when there are both D5-branes and NS5-branes. The technical issue is that to define a conserved 5-form we need to have either $B_{(2)}$ or $C_{(2)}$ non-singular. However,
whenever we have a D5-brane singularity, ${\tilde h}_1$ (and, thus, also $C_{(2)}$) jumps by the number of D5-branes as we go along the real line from one side of the D5-brane singularity to the other, so it cannot be taken to vanish all along the region where the corresponding 2-cycle vanishes. The same is true for ${\tilde h}_2$ at NS5-brane singularities. The fact that the definition of the D3-brane charge in this case
is problematic is related to the fact that \cite{Hanany:1996ie} configurations of D5-branes intersecting NS5-branes carry D3-brane charge (due to the Chern-Simons term in the type IIB supergravity action); and,
related to this, the number of D3-branes ending on an NS5-brane (D5-brane) changes as this brane is moved past a D5-brane (NS5-brane), so it is not clear how to identify this number.
However, there is a natural way to define a conserved 5-form charge in our solutions for this case as well\footnote{We thank
Don Marolf for this suggestion.}. The 5-form ${\cal F}_1$ is well-defined near all the NS5-brane
singularities at $u=k_a^2$, and the 5-form ${\cal F}_2$ is well-defined near all the D5-brane
singularities at $u=-l_a^2$. We can extend the regions where these two 5-forms are well-defined so
that together they cover all of $\Sigma$. The main constraint is that the region $\Sigma_1$ where
${\cal F}_1$ is well-defined cannot include more than one interval separating different branch points
with negative $u$ (where the $S^2$ on which $C_{(2)} \neq 0$ shrinks to zero size), while the region $\Sigma_2$ where ${\cal F}_2$ is well-defined cannot include more than one
interval separating different branch points with positive $u$ (where the $S^2$ on which $B_{(2)} \neq 0$ shrinks to zero size). This leaves us with two possible choices for
the topology of these regions. We can take $\Sigma_1$ to be a region that intersects the real line
along $[a,b]$, where $-l_{1}^2 < a < 0$ and $k_1^2 < b < \infty$, and $\Sigma_2$ to be the complement
of this region, see figure \ref{fig:flux-definition}; this fulfills the requirements above. There is then a unique non-singular choice for ${\cal F}_1$ in $\Sigma_1$ by choosing $C_{(2)}$ to vanish on $[-l_{1}^2,0]$, and similarly there is a unique
non-singular choice for ${\cal F}_2$ in $\Sigma_2$ by choosing $B_{(2)}$ to vanish on $[k_1^2,\infty]$. The other choice is to take $\Sigma_2$ to be
a region that intersects the real line along $[{\tilde a}, {\tilde b}]$ with $-\infty < {\tilde a} < -l_m^2$ and $0 < {\tilde b} < k_n^2$. The two choices are related by S-duality together with a reflection of
the $u$-plane, so we will focus on the first choice here.
\begin{figure}[h!]
\centering
\includegraphics[scale=1]{flux4.pdf}
\caption{The $u$-plane for solutions of D3-branes ending on NS5-branes and D5-branes, with the $AdS_5\times S^5$ singularity chosen to be at $u=\infty$ and the $AdS_4\times {\mathbb R}^6$ point at $u=0$. We depict the first choice for the surface $\Sigma_1$ on which ${\cal F}_1$ is well-defined, and for the surface $\Sigma_2$ on which ${\cal F}_2$ is well-defined, separated by the curve $\gamma$.}
\label{fig:flux-definition}
\end{figure}
At first sight, using two different 5-forms to cover $\Sigma$ seems to prevent us from
obtaining a conserved charge. However, consider the integral of ${\cal F}_1$ on the boundary $\partial \Sigma_1$ of $\Sigma_1$ (times the two two-spheres). Since $d{\cal F}_1=0$, this integral vanishes. On the other hand, it has two contributions: one from the ``external'' boundary of $\partial \Sigma_1$, which lies along $\partial \Sigma$, where it receives contributions from the 5-brane singularities analogous to the ones we computed before
(and not from any other points on the boundary), and one from the ``internal'' boundary, along the curve $\gamma$ in figure \ref{fig:flux-definition}. Similarly, the integral of ${\cal F}_2$ on $\partial \Sigma_2$ also vanishes, and it is given by the sum of the
contributions from the singularities along the real line (both the 5-brane singularities and the
$AdS_5\times S^5$ point at $u=\infty$), plus the contribution from the ``internal'' boundary. If we add
these two integrals, the
total contribution from the ``internal boundary'' $\gamma$ is the integral of
\begin{equation}
{\cal F}_1-{\cal F}_2 = \frac{1}{4} d(B_{(2)}\wedge C_{(2)})
\end{equation}
along this boundary; but this is just proportional to the difference in the
values of $B_{(2)}\wedge C_{(2)}$ between the two edges of this boundary $\gamma$ at $u=a$ and $u=b$, and this
vanishes since either $B_{(2)}$ or $C_{(2)}$ vanishes at each of these points. Thus, we find that the
sum of the 5-form fluxes ${\cal F}_1$ or ${\cal F}_2$ over the 5-brane and $AdS_5\times S^5$ singularities vanishes, so this
defines a conserved charge\footnote{A similar conserved charge may be defined for the solutions reviewed in \S\ref{subsec:3intersect5}, describing D3-branes intersecting D5-branes and NS5-branes.}.
Let us now apply this condition to fix $\tilde{h}_{1,2}$. We must impose that $\tilde{h}_1$ vanishes on the interval $[-l^2_{1},0]$ and that $\tilde{h}_2$ vanishes on $[k^2_1,\infty]$. Translated to the $w$-plane, this condition means that $\tilde{h}_1$ vanishes on $[0,il_{1}]$
while $\tilde{h}_2$ vanishes on $[\infty,-k_1]$. We get
\begin{equation}\label{eq:general dual harm}
\begin{split}
\tilde{h}_1 = & 4 \mathrm{Re}(w) + 2i \sum^{m}_{b=1} \tilde{d}_b \ln \left[ \frac{(l_b - i w)(l_b - i \bar{w})}{(l_b + i w)(l_b + i \bar{w})}\right] ,\\
\tilde{h}_2 = & 4 \mathrm{Im}(w) - 2i \sum^{n}_{a=1} d_a \ln \left[\frac{(k_a + w)(k_a - \bar{w})}{(k_a - w)(k_a + \bar{w})} \right] - 4 \pi \sum^{n}_{a=1} d_a .\\
\end{split}
\end{equation}
Consider $\tilde{h}_1$ first. Each term in the sum is proportional to $\mathrm{Im}[\ln(l_b-iw)-\ln(l_b+iw)]$ and thus vanishes on the interval $[0,il_b]$, where the two logarithms have the same imaginary part\footnote{We take all the branch cuts to lie outside of the second quadrant, and choose the principal branch for all the logarithms.}. Hence the condition is satisfied. For $\tilde{h}_2$, by the same argument, the sum of logarithms vanishes on $[-k_n,0]$. Going along the negative real axis we jump over $n$ discontinuities, the $a$'th of which contributes $4 \pi d_a$, and thus we obtain the desired result on $[\infty,-k_1]$.
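To exhibit the discontinuities of $\tilde{h}_2$ explicitly, note that for $w$ approaching a real point $x<-k_a$ from within the second quadrant, $k_a+w$ has a small positive imaginary part while $k_a+\bar{w}$ has a small negative one, so with our branch choices
\begin{equation}
\ln(k_a+w)-\ln(k_a+\bar{w}) \rightarrow 2\pi i, \qquad\qquad \ln(k_a-w)-\ln(k_a-\bar{w}) \rightarrow 0.
\end{equation}
Thus the $a$'th term in $\tilde{h}_2$, which vanishes on $(-k_a,0]$, equals $-2i d_a \cdot 2\pi i = 4 \pi d_a$ for $x<-k_a$, which is the jump described above.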
To compute the 5-form flux we need the value of $C_{(2)}$ ($=-2\tilde{h}_1 \hat{e}^{67}+\ldots$) at the position of the NS5-branes. Expanding $\tilde{h}_1$ around $w=-k_a$ we find
\begin{equation}
C_{(2)} = 8(k_a + 2 \sum^{m}_{b=1} \tilde{d}_b \arctan \left( \frac{k_a}{l_b}\right) ) + \cdots.
\end{equation}
Likewise, expanding $B_{(2)}$ ($=2\tilde{h}_2 \hat{e}^{45}+\ldots$) near the D5-branes at $w=il_b$ we find
\begin{equation}
B_{(2)} = 8(l_b - 2 \sum^{n}_{a=1} d_a \arctan \left( \frac{k_a}{l_b}\right)) + \cdots.
\end{equation}
The 5-form flux coming from the $a$'th stack of NS5-branes is given by
\begin{equation}
\int_{\Sigma_5^a} {\cal F}_1 = 8 \pi n_a (k_a + 2 \sum^{m}_{b=1} \tilde{d}_b \arctan \left( \frac{k_a}{l_b}\right)),
\end{equation}
and the flux coming from the $b$'th stack of D5-branes is
\begin{equation}
\int_{\Sigma_5^b} {\cal F}_2 = 8 \pi m_b (l_b - 2 \sum^{n}_{a=1} d_a \arctan \left( \frac{k_a}{l_b}\right)).
\end{equation}
The sum of all these fluxes exactly cancels the 5-form flux from $u=\infty$, as expected.
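To see this cancellation explicitly, substitute $n_a = 32\pi^2 d_a$ and $m_b = 32\pi^2 \tilde{d}_b$ and sum the fluxes over all the 5-brane stacks; the $\arctan$ cross terms cancel pairwise between the NS5-brane and D5-brane contributions,
\begin{equation}
\begin{split}
\sum_a \int_{\Sigma_5^a} {\cal F}_1 + \sum_b \int_{\Sigma_5^b} {\cal F}_2
=&\ 256 \pi^3 \Big( \sum_a d_a k_a + \sum_b \tilde{d}_b l_b \Big) \\
&+ 512 \pi^3 \sum_{a,b} d_a \tilde{d}_b \arctan\Big(\frac{k_a}{l_b}\Big)
- 512 \pi^3 \sum_{a,b} \tilde{d}_b\, d_a \arctan\Big(\frac{k_a}{l_b}\Big) \\
=&\ 256 \pi^3 \Big( \sum_a d_a k_a + \sum_b \tilde{d}_b l_b \Big),
\end{split}
\end{equation}
so the total is independent of the relative positions of the two types of 5-branes, and it is this combination that cancels the flux coming from $u=\infty$.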
Note that the 5-form flux per NS5-brane going into the NS5-brane singularities is bounded from below by zero ($n_a, k_a, {\tilde d}_b, l_b > 0$), with the bound attained in the limit $k_a \to 0$. Similarly, the quantized 5-form flux per D5-brane going into the D5-brane singularities is bounded from below by minus the total number of NS5-branes $\#_{NS5} = \sum_a n_a = 32\pi^2 \sum_a d_a$ (since $m_b>0$ and $\arctan(x)< \pi/2$), with the bound attained in the limit $l_b \to 0$.
If we take the other
choice for the topology of $\Sigma_1$ and $\Sigma_2$, we would obtain a different value for these charges, but one that
would still be conserved; the difference between the two choices is a shift in the 5-form flux coming
from all D5-brane singularities by a constant times the D5-brane charge there times the total number of NS5-branes, and a shift in the opposite direction of the 5-form flux coming from all NS5-brane singularities, by a constant times the NS5-brane charge there times the total number of D5-branes.
The boundary conditions corresponding to configurations of D3-branes ending on both D5-branes and NS5-branes were also classified in \cite{Gaiotto:2008sa,Gaiotto:2008ak}. In this case one has to be careful about
the fact that the number of D3-branes ending on each 5-brane is not well-defined, since this changes when we move an NS5-brane past a D5-brane. However, it was shown in \cite{Gaiotto:2008ak} that if one ``regulates'' a brane configuration for
D3-branes ending on D5-branes and NS5-branes by slightly separating the 5-branes along the $z$ direction, then one can define a ``linking number'' \cite{Hanany:1996ie} associated with each 5-brane, which does not change when the
branes are moved around. The possible boundary conditions are then in one-to-one correspondence with the list of linking numbers associated with the D5-branes and the NS5-branes. The linking number $L_b$ associated with a D5-brane was defined in \cite{Gaiotto:2008ak} as the net number of D3-branes ending on it from the right (namely, the number of D3-branes ending on it from the right, minus the number of D3-branes
ending on it from the left), plus the number of NS5-branes on its left (=at smaller values of $z$). Similarly, the linking number $K_a$ associated with an NS5-brane was defined as the net number of D3-branes ending on it from the right, plus the number of D5-branes on its left. As discussed in \cite{Gaiotto:2008ak}, both linking numbers obey $L_b > 0$, $K_a > 0$.
One may hope that these linking numbers would correspond to the 5-form fluxes that we defined above for each 5-brane stack (divided by the number of 5-branes in that stack), since these should be related to the numbers of D3-branes ending on each 5-brane, but this is clearly not correct. For one thing, we had two different definitions of the 5-form, and it is not clear which one should map to the linking numbers; another issue is that the linking numbers defined in \cite{Gaiotto:2008ak} do not sum to the total number of D3-branes, but rather to that number plus the total number of D5-branes $\#_{D5}$ times the total number of NS5-branes $\#_{NS5}$. However, it is easy to see how to correct both problems. An equally natural definition of the linking number for D5-branes in some brane configuration is by taking ${\tilde L}_b$ to be the net number of D3-branes ending on it from the right, minus the number of NS5-branes on its right. This simply differs from the previous definition $L_b$ by subtracting from it $\#_{NS5}$. Similarly, one can define a different linking number ${\tilde K}_a$ for NS5-branes, to be the net number of D3-branes ending on it from the right, minus the number of D5-branes on its right. This differs from the previous
definition $K_a$ by subtracting from it $\#_{D5}$\footnote{Of course we could also shift the linking
numbers by other multiples of $\#_{D5}$ and $\#_{NS5}$, but the two definitions described here are the
simplest and most natural ones, and they are quantized to integer values, unlike the original definition given in \cite{Hanany:1996ie}.}. Now, if we choose to characterize the D5-branes by the linking number ${\tilde L}_b$, and the NS5-branes by the linking number $K_a$, then these linking numbers (which still uniquely characterize a given boundary condition) sum to the total number of D3-branes, and we claim that they can be identified with the 5-form fluxes that we found above, for the first choice of the topology of $\Sigma_1$ and $\Sigma_2$. Namely, we identify
\begin{equation}
K_a = 32\pi (k_a + 2 \sum^{m}_{b=1} \tilde{d}_b \arctan \left( \frac{k_a}{l_b}\right)),\qquad
{\tilde L}_b = 32 \pi (l_b - 2 \sum^{n}_{a=1} d_a \arctan \left( \frac{k_a}{l_b}\right)).
\end{equation}
As a check of this claim, note that the linking numbers defined in this way obey $K_a > 0$, ${\tilde L}_b > -\#_{NS5}$, which is precisely the same as the lower bounds we found above. We could also choose the linking numbers $L_b$ for the D5-branes, and ${\tilde K}_a$ for the NS5-branes. With this choice the linking numbers also sum to the total number of D3-branes, and they would precisely match with the 5-form fluxes that we would find using the second choice for the topology of $\Sigma_1$ and $\Sigma_2$ above. Thus, we find a precise matching between the classification of our supergravity solutions, and the supersymmetric boundary conditions for D3-branes ending on 5-branes classified in \cite{Gaiotto:2008ak}.
Our general solution for the D3-branes ending on 5-branes is written in terms of $2g$ physical parameters (up to $SL(2,\mathbb{R})$ transformations of type IIB supergravity). These are the $n$ parameters $\{n_a\}$ that count the number of NS5-branes in each stack, the $n$ parameters $\{k_a\}$ that are related to the number of D3-branes ending on each of them, the $m=g-n$ parameters $\{m_b\}$ that count the number of D5-branes in each stack, and the $m$ parameters $\{l_b\}$ that are related to the number of D3-branes ending on them. When the number of D3-branes ending on each 5-brane in the $c$'th stack equals the number ending on each 5-brane in the $(c+1)$'th stack, i.e. $k_c=k_{c+1}$ (or $l_c=l_{c+1}$), the two stacks come together and the solution reduces to the genus $g-1$ case with $n-1$ ($m-1$) stacks of NS5-branes (D5-branes) and $m$ ($n$) stacks of D5-branes (NS5-branes). The fact that the 5-branes are separated in the $u$-plane and ordered along the boundary according to the number of D3-branes ending on them is natural, since this number controls
the bending of the 5-branes once back-reaction is taken into account; see, for example, figure 11 of \cite{Gaiotto:2008sa}.
Of course, the gravity solutions are only weakly curved when the number of D3-branes $N$ is large, and also when the number of 5-branes $m_b,n_a$ in each stack is large, $m_b,n_a \gg 1$; the solutions for small values of $m_b$ or $n_a$ include highly curved 5-brane ``throats''. If we take the large $N$ limit while keeping the ratios of the positions of the singularities in the $u$-plane
fixed, the number of 5-branes scales as $\sqrt{N}$ (and also the number of D3-branes ending on each
5-brane scales as $\sqrt{N}$). This is the natural scaling in gravity, since then both the radius of
the $S^5$ in the asymptotic $AdS_5\times S^5$ region, and the radius of the $S^3$ in the 5-brane throats in units of the string scale, scale as $N^{1/4}$. But we can also take a different large $N$ limit keeping the numbers of 5-branes
fixed, and as long as this number is large, our solutions are still weakly curved.
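As a rough check of this scaling, using the standard near-horizon radius of $AdS_5\times S^5$ and the NS5-brane throat radius $\sqrt{n_a}\, l_s$ (a heuristic estimate, with the string coupling and the shape of the solution held fixed):
\begin{equation*}
\#_5 \sim \sqrt{N}, \qquad
\frac{R_{S^5}}{l_s} \sim (g_s N)^{1/4} \sim N^{1/4}, \qquad
\frac{R_{S^3}}{l_s} \sim \sqrt{n_a} \sim \big(\sqrt{N}\big)^{1/2} = N^{1/4},
\end{equation*}
so the two curvature radii indeed grow at the same rate in this limit.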
\section{One-point functions of chiral operators}\label{sec:one-point}
We next turn to the computation of field theory observables in the backgrounds
described in \S\ref{sec:endingbranes}. The simplest possible observables are one-point functions. In
a conformal field theory without a defect/boundary these have to vanish, but in
a conformal field theory on a half-line $z > 0$ with boundary conditions
preserving the lower-dimensional conformal symmetry, scalar primary operators
${\cal O}$ of dimension $\Delta$ are allowed to have one-point functions
$\langle {\cal O} \rangle = c / z^{\Delta}$ \cite{McAvity:1995zd}. (If we view our solutions as describing the
${\cal N}=4$ SYM theory on $AdS_4$, this corresponds to a constant vacuum expectation value
of ${\cal O}$ on $AdS_4$.)
In our case we have the 4d ${\cal N}=4$ SYM theory living on a half-line.
The boundary conditions break the $SU(4)$ global symmetry of this theory to
$SO(4) \simeq SU(2)\times SU(2)$, so only operators that are singlets of $SO(4)$
are allowed to have one-point functions. What are the lowest dimension
operators that are allowed to have one-point functions? The lowest-dimension
operator related by AdS/CFT to the metric, which corresponds in the bulk to
a combination of the trace of the metric on $AdS_5$, its trace on $S^5$, and
the 5-form field, is a scalar operator of dimension $\Delta=2$ in the ${\bf 20'}$
representation of $SU(4)$ \cite{Kim:1985ez}. This representation contains one singlet
of $SO(4)$. If we denote the three adjoint scalar fields
corresponding to the motion of the D3-branes along the D5-branes by $X_i$ ($i=1,2,3$),
and the three fields corresponding to the motion along the NS5-branes by $Y_i$ ($i=1,2,3$),
then it is given by ${\cal O}_2 = N {\rm tr}(X_1^2+X_2^2+X_3^2-Y_1^2-Y_2^2-Y_3^2)$.
The lowest-dimension scalar operator coming from the 2-form fields is a dimension
$\Delta=3$ complex scalar operator in the ${\bf 10}$ representation of $SU(4)$;
again this contains one singlet of $SO(4)$. Denoting the gauginos of the ${\cal N}=4$
SYM theory by $\lambda_a$ ($a=1,2,3,4$), the form of this operator is schematically
${\cal O}_3 = N {\rm tr}(\lambda_a \lambda_a + X_1 [X_2, X_3] + i Y_1 [Y_2, Y_3])$
(we assume that the kinetic terms of all ${\cal N}=4$ SYM fields are proportional to $1/g_{YM}^2$).
Finally, the lowest-dimension scalar operator coming from the dilaton-axion sector
is a dimension $\Delta=4$ complex singlet operator, whose real part takes the schematic form
${\cal O}_4 = N {\rm tr}(F_{\mu \nu}^2 + {\rm fermions} + \sum_{i < j} [X_i, X_j]^2 +
\sum_{i < j} [Y_i, Y_j]^2 + \sum_{i,j} [X_i, Y_j]^2)$.
Using the gravity solutions, we can compute the one-point functions of these operators
(and any other chiral operators) in the limit of large $N$ and large 't Hooft coupling.
To do this, we need to consider the
behavior of the background fields close to the boundary of our solutions at $u=\infty$,
where the solution is approximately $AdS_5\times S^5$.
In terms of the coordinate $v=-1/u$, the holomorphic differentials \eqref{eq:nholodiff} have the following asymptotic expansion near $v=0$:
\begin{equation}
\begin{split}
\partial h_1 = & - i \left( \gamma_1 \frac{1}{v^{3/2}} + \delta_1 \frac{1}{\sqrt{v}} + \eta_1 \sqrt{v} \right)dv + O(v^{3/2}),\\
\partial h_2 = & - \left( \gamma_2 \frac{1}{v^{3/2}} + \delta_2 \frac{1}{\sqrt{v}} + \eta_2 \sqrt{v} \right) dv+ O(v^{3/2}),\\
\end{split}
\end{equation}
where the values of $\gamma_a, \delta_a, \eta_a$ depend on the specific solution.
In terms of real coordinates
\begin{equation}
v=e^{-2(x+iy)}, \quad - \infty \le x \le \infty, \quad 0 \le y \le \pi/2,
\end{equation}
the asymptotic region $v \rightarrow 0$ maps to $x \rightarrow \infty$. The metric factors up to next-to-leading order are
\begin{equation}
\begin{split}
\rho^2 = & 2 \sqrt{2 |\Delta|} + \frac{\sqrt{2 |\Delta|}}{\gamma_1 \gamma_2} \left\{ (\gamma_1 \delta_2 + \gamma_2 \delta_1) \cos (2y) + 2\frac{\Omega}{\Delta}\cos(2y) \right\} e^{-2x} + O(e^{-4x}),\\
f_1^2 = & 8 \sqrt{2 |\Delta|} \cos^2 (y) + 4 \frac{\sqrt{2 |\Delta|}}{\gamma_1 \gamma_2} \left\{(\gamma_1 \delta_2 + \gamma_2 \delta_1) [-2-\cos (2y)] + 2\frac{\Omega}{\Delta}\cos(2y) \right\}\cos^2 (y) e^{-2x} + O (e^{-4x}),\\
f_2^2 = & 8 \sqrt{2 |\Delta|} \sin^2 (y) + 4 \frac{\sqrt{2 |\Delta|}}{\gamma_1 \gamma_2} \left\{ (\gamma_1 \delta_2 + \gamma_2 \delta_1) [2-\cos (2y)] + 2\frac{\Omega}{\Delta}\cos(2y) \right\} \sin^2 (y) e^{-2x} + O (e^{-4x}),\\
f_4^2 = & 8 \frac{|\gamma_1| |\gamma_2|}{\sqrt{2 |\Delta|}} e^{2x} + 4 \frac{|\gamma_1| |\gamma_2|}{\gamma_1 \gamma_2 \sqrt{2 |\Delta|}} \left\{ [2\Delta+(\gamma_1 \delta_2 +\gamma_2 \delta_1) \cos(2y)] - 2\frac{\Omega}{\Delta}\cos(2y) \right\} + O(e^{-2x}),\\
\end{split}
\end{equation}
where $\rho^2$ is the coefficient of $4(dx^2+dy^2)$ and we introduced the notation
\begin{equation}
\Delta\equiv \gamma_1 \delta_2 - \gamma_2 \delta_1,\qquad\qquad
\Omega\equiv (\gamma_1)^2 \gamma_2 \eta_2-(\gamma_2)^2 \gamma_1 \eta_1.
\end{equation}
So far we have been working in a ``conformal gauge'', in which the residual diffeomorphism invariance of the supergravity solution consists of conformal transformations of the Riemann surface $\Sigma$. In order to easily read off the supergravity prediction for the vacuum expectation value (VEV) of the corresponding operators of the dual CFT, the solution has to be rewritten in the de Donder-Lorentz gauge, in which the contribution from the $SO(6)$ singlet spherical harmonic to the Kaluza-Klein expansion of the metric compactified on $S^5$ vanishes \cite{Kim:1985ez}. This is achieved by the diffeomorphism
\begin{equation} \label{eq:diffeo}
e^{2x} \rightarrow \frac{1}{2}\frac{ |\Delta|}{|\gamma_1| |\gamma_2|} \left(e^{2x} - \frac{1}{2} a_y \cos(2y)\right), \qquad \sin^2(y) \rightarrow \sin^2(y) \left(1+ a_y \cos^2(y) e^{-2x} \right),
\end{equation}
with
\begin{equation}
a_y = - 4 \frac{|\gamma_1| |\gamma_2|}{\gamma_1 \gamma_2} \frac{1}{|\Delta|}(\gamma_1 \delta_2 +\gamma_2 \delta_1).
\end{equation}
The metric then becomes
\begin{equation}\label{asymmetric}
ds^2= 8 \sqrt{2 |\Delta|} (ds^2_{AdS_5}+ds_{S^5}^2)+ 8 \sqrt{2 |\Delta|} \delta \zeta \cos(2y) e^{-2x} \left(ds_{S^5}^2 + dx^2 - \frac{1}{4} e^{2x}ds^2_{AdS_4}\right) + O(e^{-4x}),
\end{equation}
where
%
\begin{equation}
\delta \zeta = \frac{1}{ |\Delta|} \frac{|\gamma_1| |\gamma_2|}{\gamma_1 \gamma_2} \left[ -3 (\gamma_1 \delta_2 + \gamma_2 \delta_1)+ 2\frac{\Omega}{\Delta}\right].
\end{equation}
The dilaton and the functions defining the 2-form potentials up to next-to-leading order are
%
\begin{equation}
\begin{split}
e^{\Phi}= & \left|\frac{\gamma_2}{\gamma_1}\right|+\frac{1}{2}\left|\frac{\gamma_2}{\gamma_1}\right| \frac{\Delta}{ (\gamma_2 \gamma_1)^2 } \left[3 (\gamma_1 \delta_2 + \gamma_2 \delta_1) - 2\frac{\Omega}{\Delta} \right] e^{-4x} + O(e^{-6x}),\\
b_1= & \frac{32}{3} \frac{1}{\sqrt{2|\Delta|}} \frac{\Delta}{|\Delta|}\left|\frac{\gamma_2}{\gamma_1}\right|^{\frac{1}{2}} \left[ 3 (\gamma_1 \delta_2 + \gamma_2 \delta_1) - 2\frac{\Omega}{\Delta}\right] \cos^3(y) e^{-3x} + O(e^{-5x}), \\
b_2= & \frac{32}{3} \frac{1}{\sqrt{2|\Delta|}} \frac{\Delta}{|\Delta|} \left|\frac{\gamma_1}{\gamma_2}\right|^{\frac{1}{2}} \left[ 3 (\gamma_1 \delta_2 + \gamma_2 \delta_1)- 2\frac{\Omega}{\Delta}\right] \sin^3(y) e^{-3x} + O(e^{-5x}). \\
\end{split}
\end{equation}
For the special case of D3-branes ending on $n=g$ stacks of NS5-branes, the constants describing the asymptotic behavior of the real harmonic functions $h_1$ and $h_2$ are
\begin{equation}
\begin{split}
\gamma_1&= i, \quad \quad \gamma_2 = i,\\
\delta_1&= 0, \quad \quad \delta_2 = i \sum_{a=1}^g (\beta_a-k_a^2),\\
\eta_1&= 0, \quad \quad \eta_2 = -i \sum_{c \ne a}^g [\frac{1}{2}(\beta_c \beta_a+k_c^2k_a^2)-\beta_c k_a^2]-i \sum_{a=1}^g (k_a^4-\beta_a k_a^2), \\
\end{split}
\end{equation}
such that $\Delta = i \delta_2$ and $\Omega =-i \eta_2$. The number of D3-branes ending on the 5-branes is thus $N = 8 (4\pi)^3 \sum_{a=1}^g (\beta_a - k_a^2)$. We can then write
\begin{equation}
\begin{split}\label{asymvalues}
\delta \zeta = & \frac{1}{ |\delta_2|^2 }[3 (\delta_2)^2- 2 i \eta_2],\\
e^{\Phi}= & 1 - \frac{1}{2} [3 (\delta_2)^2- 2 i \eta_2] e^{-4x} + O(e^{-6x}),\\
b_1= & \frac{16 \sqrt{2}}{3} \frac{1}{|\delta_2|^{\frac{3}{2}}}[3 (\delta_2)^2- 2i \eta_2] \cos^3(y) e^{-3x} + O (e^{-5x}),\\
b_2= & \frac{16 \sqrt{2}}{3} \frac{1}{|\delta_2|^{\frac{3}{2}}} [3 (\delta_2)^2- 2 i \eta_2] \sin^3(y) e^{-3x} + O (e^{-5x}).
\end{split}
\end{equation}
Expressed (implicitly) in terms of the numbers of 5-branes (through $\{\beta_a\}$) and the numbers of D3-branes ending on
each 5-brane, we can read off from \eqref{asymmetric}, \eqref{asymvalues} the following expectation values for ${\cal O}_{2,3,4}$ (up to an overall normalization of each operator that we do not carefully fix here)\footnote{Note that generally the one-point functions of operators are not simply related to the coefficients of the normalizable modes of the corresponding fields near the boundary of $AdS_5$, but have additional contributions involving the normalizable modes of other fields; see, for instance, \cite{Skenderis:2006uy}. However, for the specific operators that we discuss here, the additional contributions are absent.}:
\begin{equation}
\begin{split}\label{gravityvevs}
\langle {\cal O}_2 \rangle \propto &
\left[ \frac{N^2}{16(4\pi)^6} - \sum_{a=1}^g (\beta_a^2 - k_a^4) \right]
\frac{1}{z^2},\\
\langle {\cal O}_3 \rangle \propto &
\left[ \frac{N^2}{16(4\pi)^6} - \sum_{a=1}^g (\beta_a^2 - k_a^4) \right]
\frac{1}{z^3},\\
\langle {\cal O}_4 \rangle \propto &
\left[ \frac{N^2}{16(4\pi)^6} - \sum_{a=1}^g (\beta_a^2 - k_a^4) \right]
\frac{1}{z^4}.\\
\end{split}
\end{equation}
For the special case of D3-branes ending on $m=g$ stacks of D5-branes, the expectation values are the same up to the replacement $\beta_a \rightarrow -\alpha_b$ and $k_a \rightarrow l_b$.
Note that the simplest large $N$ limit involves scaling all special points on the real axis as $N$,
namely $\beta_a \propto N$ and $k_a \propto \sqrt{N}$. In this limit the number of 5-branes scales
as $\sqrt{N}$, and the expectation values above scale as $N^2$ (which is the standard normalization
of all correlation functions in the large $N$ limit). If we want the number of 5-branes to be of
order $N$, we need to leave $k_a$ fixed in the large $N$ limit, but the one-point functions still scale
as $N^2$. On the other hand, if we want the number of 5-branes to remain of order one, we need to
take $k_a \propto N$, $(\beta_a - k_a^2) \propto N$ in the large $N$ limit; in this limit the one-point
functions \eqref{gravityvevs} scale as $N^3$.
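The power counting behind these statements can be made explicit by factorizing the summand in \eqref{gravityvevs},
\begin{equation*}
\beta_a^2 - k_a^4 = \left(\beta_a - k_a^2\right)\left(\beta_a + k_a^2\right).
\end{equation*}
In the first limit, $\beta_a \propto N$ and $k_a^2 \propto N$, so each factor is of order $N$ and the sum is of order $N^2$, the same order as the $N^2/16(4\pi)^6$ term. In the last limit, $(\beta_a - k_a^2) \propto N$ while $\beta_a + k_a^2 \propto N^2$ (since $k_a \propto N$), so the sum is of order $N^3$ and dominates.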
We can compare \eqref{gravityvevs} to the same expectation values at weak coupling. As reviewed
above, the weak coupling boundary conditions were discussed in \cite{Gaiotto:2008sa,Gaiotto:2008ak}. For D3-branes ending
on NS5-branes these boundary conditions involve a strongly coupled 3d SCFT living at
$z=0$, so we do not know how to compute anything. However, for D3-branes ending purely on
D5-branes, the boundary conditions are given by $X_i = \tau_i / z$, where $\tau_i$
is some $N$-dimensional representation of $SU(2)$ ($[\tau_i, \tau_j] = i \epsilon_{ijk} \tau_k$),
and we can use this to compute the
expectation values of our operators in the weak coupling limit. Our solutions involve $g$
stacks of D5-branes, with $m_b$ D5-branes in each stack, and $L_b$ D3-branes ending on each
5-brane in the $b$'th stack, and we identified them above with the $N$-dimensional
representation of $SU(2)$ that has $m_b$ blocks of size $L_b\times L_b$.
To compute $\langle {\cal O}_2 \rangle$ at weak coupling, we thus need to compute
${\rm tr}(X_1^2+X_2^2+X_3^2)$ in this representation. This is proportional to $\sum_b m_b C_{L_b}$, where $C_{L_b}$
is the second Casimir of the $L_b$-dimensional representation of $SU(2)$, equal to
$C_{L_b} = (L_b^2 - 1) / 2$. Thus, in the large $L_b$ limit in which our solutions are valid we expect
$\langle {\cal O}_2 \rangle = N (\sum_b m_b L_b^2) / z^2$, up to a multiplicative constant that
is independent of $m_b, L_b$. In fact, given the expressions
above for ${\cal O}_{3,4}$, it is easy to see using the $SU(2)$ algebra that they are
also proportional to precisely the same expression, just with a different power of $z$. One can check that
these results do not agree with the strong coupling results \eqref{gravityvevs} computed above, indicating that
the one-point functions of these operators have a non-trivial dependence on the 't Hooft
coupling. In fact, when the number of 5-branes is of order $\sqrt{N}$, we even find a different
power of $N$ at weak and strong coupling; in this case at weak coupling the one-point functions scale as
$N^{5/2}$. On the other hand, when the number of 5-branes is of order one we find weak-coupling
one-point functions of order $N^3$, and when it is of order $N$ we find weak-coupling one-point
functions of order $N^2$, which is similar to the strong coupling behavior (but the precise
dependence on the number of D3-branes ending on each 5-brane stack is different).
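As a check on the representation-theoretic input used here, one can verify numerically that the spin matrices $\tau_i$ in the $L$-dimensional irreducible representation of $SU(2)$ satisfy $[\tau_i,\tau_j]=i\epsilon_{ijk}\tau_k$, and that the quadratic Casimir $\tau_1^2+\tau_2^2+\tau_3^2$ is proportional to the identity with an eigenvalue growing as $L^2$ at large $L$ (the precise normalization of $C_{L_b}$ depends on conventions). A minimal sketch using NumPy:

```python
import numpy as np

def spin_matrices(L):
    """Spin matrices tau_1, tau_2, tau_3 in the L-dimensional irrep of SU(2),
    normalized so that [tau_i, tau_j] = i eps_{ijk} tau_k (spin j = (L-1)/2)."""
    j = (L - 1) / 2
    m = np.arange(j, -j - 1, -1)  # magnetic quantum numbers j, j-1, ..., -j
    # Raising operator: <m+1|S_+|m> = sqrt(j(j+1) - m(m+1)), on the superdiagonal
    sp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)
    sm = sp.conj().T
    t1 = (sp + sm) / 2
    t2 = (sp - sm) / (2 * 1j)
    t3 = np.diag(m).astype(complex)
    return t1, t2, t3

# Check the algebra and the Casimir eigenvalue for a block of size L = 5
L = 5
t1, t2, t3 = spin_matrices(L)
assert np.allclose(t1 @ t2 - t2 @ t1, 1j * t3)        # [tau_1, tau_2] = i tau_3
casimir = t1 @ t1 + t2 @ t2 + t3 @ t3
j = (L - 1) / 2
assert np.allclose(casimir, j * (j + 1) * np.eye(L))  # eigenvalue (L^2 - 1)/4, of order L^2
```

Summing this eigenvalue over the $m_b$ blocks of size $L_b$ reproduces the quadratic growth with $L_b$ quoted above, up to the normalization convention of $C_{L_b}$.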
It is curious that both at weak coupling and at strong coupling, all three operators
have the same expectation value, in the sense that their dependence on the numbers of D3-branes ending on each 5-brane is identical (up to overall constants that we did not fix); perhaps
this indicates some non-renormalization theorem for ratios of expectation values. It would be interesting to try to compute these one-point functions exactly as
a function of the 't Hooft coupling; perhaps this can be done, like similar computations,
using integrability or localization methods.
\section{Summary and conclusions}\label{summary}
In this paper we used the results of \cite{D'Hoker:2007xy,D'Hoker:2007xz} to construct gravitational
duals for the ${\cal N}=4$ SYM theory on ${\mathbb R}^{2,1}$ times a half-line (or on $AdS_4$) with various boundary conditions that preserve half of the supersymmetry, describing the near-horizon limit of D3-branes ending on 5-branes. We obtain an explicit closed form for these solutions, given by plugging \eqref{eq:general harm} into the equations of section \ref{dhetalreview}, and we find a one-to-one mapping between our solutions and the boundary
conditions for D3-branes ending on 5-branes, classified in \cite{Gaiotto:2008sa,Gaiotto:2008ak}. Assuming that the classification of solutions in \cite{D'Hoker:2007xy,D'Hoker:2007xz} is complete, we present the most general solutions of this type.
These should correspond to the most general supersymmetric boundary conditions of ${\cal N}=4$ SYM that have a supergravity (with 5-branes) approximation for some range of their parameters; there can also be other types of boundary conditions that never have a purely supergravity description, such as the orientifold/orbifold boundary conditions discussed in \cite{Aharony:2010ay}.
A simple generalization of the solutions we find (which goes beyond supergravity)
involves adding $M$ D3-branes sitting at the point $u=0$ where the two two-spheres
go to zero size. This gives a generalized boundary condition with an extra $U(M)$
global symmetry, coming from the gauge symmetry on the D3-branes; in the field theory
this comes from $M$ additional charged matter fields living on the
boundary. We can think of the new boundary condition in the language
of the brane construction as starting from a solution with $M$ semi-infinite D3-branes
on the other side of the 5-branes, but taking a limit where the $3+1$ dimensional
gauge theory on these D3-branes decouples, leaving behind a global symmetry. Such a
decoupling limit involves taking the gauge coupling on these D3-branes to zero. In the
brane construction we cannot really do this since the string coupling on both stacks
of semi-infinite D3-branes is the same, but in the solutions of \cite{D'Hoker:2007xy,D'Hoker:2007xz} there are independent string coupling parameters
for the two stacks of semi-infinite D3-branes (as in the ``Janus solutions'') so such a
limit is possible. Naively we would describe such a limit by starting with an extra
$AdS_5$ singularity at $u=0$ and taking the asymptotic string coupling down the $AdS_5$
throat to zero, but we claim that the limiting solution is simply described by putting
$M$ D3-branes at $u=0$. Note that the string scale in our solutions is finite, so we
cannot replace the D3-branes by an $AdS_5\times S^5$ ``throat''. The precise description of the new boundary conditions in gauge
theory can be derived along the lines of \cite{Gaiotto:2008sa,Gaiotto:2008ak}, just
adding $M$ extra semi-infinite D3-branes (with vanishing gauge coupling on their
worldvolume). This gives $M$ extra charged fields under the last gauge group in the
quiver diagram. One can also obtain such fields by adding $M$ extra 5-branes, so the
solutions described in this paragraph are not independent of the general solutions we
described above, but should be thought of as a different way to describe a limit of the general solutions in which the linking number of some 5-branes is very small. This alternative description could be more
useful for some range of parameters.
There are many remaining open questions.
In this paper we only studied the solutions of \cite{D'Hoker:2007xy,D'Hoker:2007xz} that have no $u_a$
points (zeros of $\partial h_{1,2}$) in the middle of the Riemann manifold $\Sigma$,
since solutions with such points appear to have conical singularities. It
would be interesting to study further the solutions with $u_a$ points,
to see if in string theory there is some way to resolve their singularities.
All of our solutions involve regions which look like NS5-branes and/or D5-branes wrapped on $AdS_4\times S^2$. In these regions the dilaton blows up (for NS5-branes) and supergravity breaks down, which is not surprising since there are many light fields hiding there that are not seen in supergravity (in particular, for $m$ 5-branes there are $U(m)$ gauge fields living on $AdS_4\times S^2$). The solutions near $m$ NS5-branes involve a ``throat'' region where the radius of curvature (in the string frame) is $\sqrt{m}$ times the string scale, so for small $m$ stringy corrections to supergravity are important. Note that from the point of view of our solutions the ``natural'' scaling at large $N$ (where $N$ is the number of D3-branes) is to have the number of 5-branes in each stack scale as $\sqrt{N}$, since only in this case the supergravity solution scales uniformly when we take large $N$. However, our solutions are also well-behaved (away from the 5-branes) when $m$ is large and fixed in the large $N$ limit, and only in the fixed $m$ case do we expect to have a standard 't Hooft large $N$ limit (in which the number of gauge-invariant operators remains fixed at large $N$). For NS5-branes in flat space there is a well-known string theory description of the corresponding ``throat'' using an exact worldsheet CFT, and it would be interesting to see if this can be extended to the case of 5-branes on $AdS_4\times S^2$. For 5-branes in flat space one can resolve the strong coupling region by slightly separating the 5-branes in specific ways (as in, for instance, \cite{Giveon:1999px}), and it would be interesting to see if this can be done also in our case, by splitting the 5-branes along the real axis in the $u$-plane. 
A particularly interesting case is that of a single NS5-brane; the general boundary conditions involving NS5-branes include non-trivial 3d SCFTs on the boundary, but the single NS5-brane corresponds just to simple Neumann/Dirichlet boundary conditions for all the fields of the ${\cal N}=4$ SYM theory, so it is the only case with NS5-branes that has a weakly coupled description. From the gravity point of view, we get also in this case a highly curved ``throat'', but since in this case there is no non-Abelian gauge symmetry hidden in the ``throat'', it is plausible that this ``throat'' has a smooth resolution in string theory with no strong coupling region. This issue deserves further study.
There are many computations that can be done using the solutions we find; in this paper we only computed a few one-point functions of chiral operators of the ${\cal N}=4$ SYM theory, and found that they do not agree with the weak coupling results. It would be interesting to analyze the behavior of these one-point functions as a function of the 't Hooft coupling, to see if it can be found exactly. It would also be interesting to compute other observables, and to see if there are any observables in these theories that are protected by supersymmetry (independent of the 't Hooft coupling).
While the one-point functions in such backgrounds are uniquely determined (up to a constant) by the conformal symmetry, two-point functions are not \cite{McAvity:1995zd}, and it would be interesting to compute them and to see what they teach us about these theories. It is particularly interesting to compute the spectrum of our solutions, which maps to the spectrum of anomalous dimensions of 3d ``boundary operators'' in the field theory; this computation was recently discussed in \cite{Bachas:2011xa} for a more general class of solutions, but it is beyond the scope of this paper. One could also analyze the spectrum of states which are not part of supergravity, such as (D)-strings stretched between 5-brane stacks, or branes wrapping non-trivial cycles in our solutions. There are also states coming from the fields living on the wrapped 5-branes; the states coming from the massless fields on the 5-branes wrapping $AdS_4\times S^2$, which are in short representations of $OSp(4|4)$, were classified in \cite{DeWolfe:2001pq}.
There are many possible generalizations of our solutions, but most of the interesting ones involve configurations with less supersymmetry, so they would be harder to construct. This includes in particular the brane configurations of D3-branes stretched between 5-branes, and of D4-branes ending on (or stretched between) 5-branes, whose construction was one of the main motivations for this work. There is one case which involves the same amount of supersymmetry, which is that of M2-branes ending on M5-branes, and it would be interesting to generalize the analysis of our paper to this case using the solutions of \cite{D'Hoker:2008wc,D'Hoker:2009my}. The field theory corresponding to this case was recently discussed in \cite{Berman:2009kj,Chu:2009ms,Berman:2009xd}.
For solutions that have both NS5-brane and D5-brane singularities adjacent to the
$AdS_5\times S^5$ singularity, one can also consider
a limit of our solutions in which the D3-brane flux in the single asymptotic
$AdS_5\times S^5$ region goes to zero. In this limit the $\alpha$ and $\beta$ points
adjacent to the $AdS_5\times S^5$ singularity approach this singularity.
This gives a solution which is a warped
product $AdS_4\times M_6$ with a manifold $M_6$ which is compact (except for 5-brane ``throats''); such a solution is dual to some
3d ${\cal N}=4$ superconformal theory, without any coupling to a 4d theory. Starting from a solution that has an interpretation as D3-branes
ending on D5-branes and NS5-branes, we can interpret this theory as
the low-energy theory on the D3-branes stretched between these
D5-branes and NS5-branes.
Finally, it would be interesting to generalize the solutions we find to finite temperature. Here there is a difference between considering our solutions as describing the ${\cal N}=4$ SYM theory on a half-line or on $AdS_4$, and the finite temperature generalization can be considered in both cases. In the first case it is clear that the asymptotic $AdS_5\times S^5$ region should be replaced by the near-extremal D3-brane solution, and it would be interesting to see how this is completed to the full geometry. The second case has richer dynamics, since (if we use global coordinates for $AdS_4$) it has a dimensionless parameter (the temperature in units set by the $AdS_4$ radius), and one expects (as discussed in \cite{Aharony:2010ay}) phase transitions as a function of this parameter. In this case there is always a trivial solution where we just periodically identify the (Euclidean) time direction of the $AdS_4$ factor in our solutions, and this trivial solution should be the dominant one at low temperatures, but at some point we expect a phase transition to a new solution with a horizon. It would be interesting to find and analyze these new solutions for the various boundary conditions we discuss in this paper.
\subsection*{Acknowledgments}
\label{s:acks}
It is a pleasure to thank Costas Bachas, Francesco Benini, Cyril Closset, Stefano Cremonesi, John Estes, Daniel Jafferis, David Kutasov, Mukund Rangamani, Cobi Sonnenschein, Shimon Yankielowicz, and especially Don Marolf for useful discussions. We thank Nizan Klinghoffer for assistance with the figures.
This work was supported in part by the Israel--U.S.~Binational Science Foundation, by a research center supported by the Israel Science Foundation (grant number 1468/06), by the German-Israeli Foundation (GIF) for Scientific Research and Development, and by the Minerva foundation with funding from the Federal German Ministry for Education and Research.
\section{Introduction}
Bose-Einstein condensates (BECs) with spin degrees of freedom have
attracted growing attention since the first observation of the spin-1
$^{23}$Na BEC by the MIT group~\cite{D.M.Stamper-Kurn et al.,J.Stenger et
al.}.
In contrast to a magnetic trap, in which hyperfine-spin degrees of freedom
are frozen, an optical trap can confine atoms in all magnetic sublevels of
spin, allowing study of the magnetic properties of BECs.
A variety of experiments have been performed to date, on topics such as
spin domains~\cite{H.-J.Miesnr et al.}, interdomain
tunneling~\cite{tunneling}, and realization of a spin-2 $^{23}$Na
BEC~\cite{A.Gorllitz et al.}.
The spin-exchange dynamics of $^{87}$Rb BECs have been investigated
experimentally by Schmaljohann \textit{et al.}~\cite{H.Schmaljohann et
al.}, Chang \textit{et al.}~\cite{M.-S. Chang et al.}, and Kuwamoto
\textit{et al.}~\cite{T.Kuwamoto et al.}.
Theoretical investigations of the spinor BEC have also been carried out
extensively.
Mean field theory (MFT) for a spin-1 BEC was formulated by Ho~\cite{T-L
Ho} and Ohmi and Machida~\cite{T.Ohmi&K.Machida}.
The MFT of a spin-2 BEC was developed by Ciobanu \textit{et
al.}~\cite{Ciobanu et al.} and Ueda and Koashi~\cite{Ueda&Koashi}.
Law \textit{et al.}~\cite{Law et al.} developed a many-body theory of
spin-1 antiferromagnetic BEC.
Koashi and Ueda~\cite{M.Koashi&M.Ueda} and Ho and Yip~\cite{Ho&Yip}
extended it to including the linear Zeeman effect and found that an
antiferromagnetic BEC realize a fragmented BEC for a weak magnetic field.
The Bogoliubov analysis was carried out by Huang and Gou~\cite{Huang&Gou}
and by Ueda~\cite{M.Ueda} in the presence of the linear Zeeman effect.
Their results agree with those obtained using a diagrammatic method by
Sz\'{e}pfalusy and Szirmai~\cite{Peter et al.}.
In these studies, the Zeeman effects are restricted to those up to the
linear order in the magnetic field.
A unique feature of trapped atomic systems is that linear and quadratic
Zeeman effects can be manipulated independently due to spin conservation.
If we take the quadratic Zeeman term into account, the ground-state
phase diagram becomes much richer as shown in Ref.~\cite{J.Stenger et
al.}.
In particular, under a certain range of linear and quadratic Zeeman
effects, there is a special phase in which the magnetization tilts against
the applied magnetic field.
The investigation of some of the unique features of this phase is the
primary purpose of our study.
When a weak magnetic field is applied along the quantization axis, the $m
= 1$ or $-1$ state is favorable for a spin-1 ${}^{87}$Rb atom due to the
linear Zeeman effect and the ferromagnetic interaction, where $m$ refers
to the magnetic sublevel.
On the other hand, the quadratic Zeeman effect raises the energy of the
$m = \pm 1$ states relative to that of the $m = 0$ state.
As a consequence, if the quadratic Zeeman effect is sufficiently large,
the spin vector of the ferromagnetic ground state not only shrinks but
also tilts against the direction of the magnetic field.
Therefore, even if the Hamiltonian is axisymmetric with respect to the
direction of the magnetic field, the ground state spontaneously breaks
the axisymmetry.
This phase, which we shall refer to as a broken-axisymmetry phase, was
predicted in Ref.~\cite{J.Stenger et al.}, but little attention has been
paid to it from the viewpoint of axisymmetry breaking.
In the present study, we investigate the Goldstone modes of this phase by
studying its excitation spectrum.
The BEC with ferromagnetic interactions has three phases: ferromagnetic,
polar, and broken-axisymmetry phases.
In the ferromagnetic and polar phases, only the U(1) (global phase)
symmetry is broken, and the Goldstone mode corresponds to a phonon.
In the broken-axisymmetry phase, the SO(2) symmetry (axisymmetry) of the
spin vector is broken in addition to the U(1) symmetry.
Because of the simultaneous breaking of these two continuous symmetries,
the associated Goldstone modes are expected to involve both phonons and
magnons.
This paper is organized as follows.
Section~\ref{section2} reviews the mean-field ground state of a spin-1 BEC
to make this paper self-contained.
Section~\ref{section3} uses the Bogoliubov theory to derive one gapful
mode and two gapless Goldstone modes.
Section~\ref{sectiond} explores the implications of the present study for
other related studies, and Sec.~\ref{section4} concludes this paper.
Appendix~\ref{app0} derives analytic expressions for the
broken-axisymmetry phase.
Appendix~\ref{app} discusses excitations in the ferromagnetic and polar
phases for comparison with those in the broken-axisymmetry phase.
\section{Ground state with broken axisymmetry}
\label{section2}
\subsection{Formulation of the problem}
We consider a uniform system of $N$ identical bosons with hyperfine spin
1 in which an external magnetic field is applied in the $z$ direction.
The Hamiltonian of the system is written as the sum of the one-body part
$\hat{\mathcal{H}}_\mathrm{I}$ and the two-body interaction part
$\hat{\mathcal{H}}_\mathrm{II}$.
The one-body part is given by
\begin{align}
\hat{\mathcal{H}}_\mathrm{I}
=
\sum_{m=-1}^1
\int \mathrm{d} \mathbf{r}
\hat{\Psi}_m^\dag
\left(
-\frac{\hbar^2}{2M} \nabla^2 - p m + q m^2
\right)
\hat{\Psi}_m,
\label{hamiltonian1}
\end{align}
where subscripts $m=+1,0,-1$ denote the magnetic quantum numbers along the
$z$ axis, $M$ is the mass of the atom, and $p$ and $q$ are the linear and
quadratic Zeeman coefficients, respectively.
In the case of spin-1 ${}^{23}$Na and ${}^{87}$Rb atoms, $q$ is positive.
The two-body part, which is described by a contact-type $s$-wave
interaction at ultralow temperature, takes the form
\begin{align}
\hat{\mathcal{H}}_\mathrm{II}
=
\frac{1}{2}
\sum_{F=0,2}
g_F
\sum_{m,n,m',n'}
\int \mathrm{d} \mathbf{r} \hat{\Psi}_{n'}^\dag \hat{\Psi}_{m'}^\dag
\langle m' ; n' |\mathcal{P}_F | m ; n \rangle
\hat{\Psi}_m \hat{\Psi}_n,
\label{hamiltonian2}
\end{align}
where $g_F = 4 \pi \hbar^2 a_F/M$ with
$a_0$ and $a_2$ being the $s$-wave scattering lengths in the singlet and
quintet channels, respectively, and $\mathcal{P}_F$ projects a two-body
state onto that with total spin $F$.
The absence of the projection onto the $F=1$ channel is due to the Bose
statistics.
Because the system is uniform, it is convenient to expand the field
operators in terms of plane waves as
$
\hat{\Psi}_m = \Omega^{-1/2} \sum_{\mathbf{q}}
e^{i \mathbf{q} \cdot \mathbf{r}}
\hat{a}_{\mathbf{q}, m}
$,
where $\Omega$ is the volume of the system and $\hat{a}_{\mathbf{q}, m}$
represents the annihilation operator of a boson with wavenumber
$\mathbf{q}$ and magnetic sublevel $m$.
Equations (\ref{hamiltonian1}) and (\ref{hamiltonian2}) are then rewritten
as
\begin{align}
\hat{\mathcal{H}}_\mathrm{I}= &
\sum_{\mathbf{k},m}
\left(
\epsilon_{\mathbf{k}} -p m + q m^2
\right)
\hat{a}_{\mathbf{k},m}^\dag \hat{a}_{\mathbf{k},m},
\label{hamiltonian-3}\\
\hat{\mathcal{H}}_\mathrm{II}= &
\frac{c_0}{2 \Omega}
\sum_{\mathbf{k}}
: \hat{\rho}_{\mathbf{k}}^\dag \hat{\rho}_{\mathbf{k}} :
+ \frac{c_1}{2 \Omega}
\sum_{\mathbf{k}}
: \hat{\mathbf{f}}_{\mathbf{k}}^\dag \cdot \hat{\mathbf{f}}_{\mathbf{k}} :,
\label{hamiltonian3}
\end{align}
where $\epsilon_{\mathbf{k}} = \hbar^2 k^2/(2M)$, $c_0 = (g_0 + 2 g_2)/3$,
$c_1 = (g_2 - g_0)/3$,
$
\hat{\rho}_{\mathbf{k}}
= \sum_{\mathbf{q},m} \hat{a}_{\mathbf{q}, m}^\dag
\hat{a}_{\mathbf{q}+\mathbf{k}, m}
$,
and
$
\hat{\mathbf{f}}_{\mathbf{k}}
= \sum_{\mathbf{q},m,n} \mathbf{f}_{m,n} \hat{a}_{\mathbf{q}, m}^\dag
\hat{a}_{\mathbf{q}+\mathbf{k}, n}
$
with $\mathbf{f} = (f_x, f_y, f_z)$ being the spin-1 matrices in vector
notation.
The symbol $: :$ denotes the normal ordering of operators.
The spin-spin interaction is ferromagnetic if $c_1 < 0$ and
antiferromagnetic if $c_1 >0$.
It is known that the interaction between spin-1 $^{23}$Na atoms is
antiferromagnetic and that the interaction between spin-1 $^{87}$Rb atoms
is ferromagnetic~\cite{Klausen et al.,E.G.M. van Kempen et al.}.
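The sign of $c_1$ can be checked with a minimal numerical sketch. The scattering lengths used here, $a_0 = 101.8$ a.u. and $a_2 = 100.4$ a.u., are the van Kempen \textit{et al.} values for ${}^{87}$Rb quoted later in this paper; since $g_F \propto a_F$, the common prefactor $4\pi\hbar^2/M$ drops out of the ratio.

```python
# Sign of the spin-exchange coupling c1 for spin-1 87Rb, using the
# scattering lengths quoted in the text (van Kempen et al.).
# Because g_F is proportional to a_F, the ratio c1/c0 is independent
# of the prefactor 4*pi*hbar^2/M.
a0, a2 = 101.8, 100.4          # s-wave scattering lengths (atomic units)

c0 = (a0 + 2.0 * a2) / 3.0     # density-density coupling (up to prefactor)
c1 = (a2 - a0) / 3.0           # spin-spin coupling (same prefactor)

print(c1 < 0)                  # ferromagnetic interaction
print(abs(c1) / c0)            # ~0.005: the spin energy is a weak scale
```

The smallness of $|c_1|/c_0$ is what makes the spin-dependent Zeeman competition in the phase diagram a low-energy effect.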
Assuming that a macroscopic number of atoms occupy the
$\mathbf{k}=\mathbf{0}$ state, we replace the relevant operators with
c-numbers.
The Hamiltonian for the BEC in the $\textbf{k}=\textbf{0}$ state is given
by
\begin{eqnarray}
\hat{\mathcal{H}}_{\mathrm{BEC}}
& = &
\frac{c_0}{ 2\Omega }
:
\left(
\sum_{m=-1}^{1}
\hat{a}_{\textbf{0},m}^\dag \hat{a}_{\textbf{0},m}
\right)^2
: \nonumber \\
& & +
\sum_{m=-1}^{1}
\left(
-pm +qm^2
\right)
\hat{a}_{\mathbf{0}, m}^\dag \hat{a}_{\mathbf{0}, m}
-\frac{c_1}{2\Omega} \hat{s}^\dag \hat{s}, \nonumber \\
\label{Hbec}
\end{eqnarray}
where
$
\hat{s}=
\left(
\hat{a}_{\mathbf{0}, 0}^2 -2 \hat{a}_{\mathbf{0}, 1} \hat{a}_{\mathbf{0}, -1}
\right)/\sqrt{3}
$
is an annihilation operator for a singlet pair.
In the mean-field theory (MFT), we replace the operator
$\hat{a}_{\textbf{0},m}$ with a c-number $\zeta_m \sqrt{N_0}$.
Here, $N_0$ is the number of condensed atoms and the order parameters
$\zeta_m$'s are complex variational parameters that are determined so as
to minimize the energy functional under the constraint of normalization
$\sum_{m} |\zeta_m|^2 = 1$.
For this purpose, we introduce a Lagrange multiplier $\mu$ and minimize
$
\left \langle \hat{\mathcal{H}}_{\mathrm{BEC}} \right \rangle
-
\mu N_0 \sum_m |\zeta_m|^2
$
with respect to $\zeta_m$.
In the following, we denote the set of the order parameters as
$\bm{\zeta}= {}^{T} (\zeta_1, \zeta_0, \zeta_{-1})$, where the superscript
$T$ stands for transpose.
\subsection{Ground states}
\begin{figure}[t]
\begin{center}
\includegraphics[height=12\baselineskip]{pd1.eps}
\end{center}
\caption{
Ground-state phase diagram for a spin-1 ferromagnetic BEC.
The dashed curves indicate second-order phase boundaries.
In the figure, $\left|+1 \right\rangle$ and $\left|-1\right\rangle$
represent the ferromagnetic phase, and $\left|0\right\rangle$ represents
the polar phase.
The shaded region is the broken-axisymmetry phase, in which the
magnetization tilts against the $z$ axis.
}
\label{fig:eps.eps}
\end{figure}
The ground-state phase diagram for a spin-1 ferromagnetic BEC is shown in
Fig.~\ref{fig:eps.eps}~\cite{J.Stenger et al.}.
The phases are classified as follows:
\begin{enumerate}
\item Ferromagnetic phase ( $|+1\rangle $ and $|-1\rangle $ in
Fig.~\ref{fig:eps.eps}).
The order parameter is given for $p > 0$ by $\bm{\zeta}_{\mathrm{F}}=
{}^{T} (e^{i \chi_1}, 0, 0) $ and for $p < 0$ by
$\bm{\zeta}_{\mathrm{F}}={}^{T}(0, 0, e^{i \chi_{-1}})$, where $\chi_m$
denotes an arbitrary phase of $\zeta_m$, i.e., $\zeta_m = |\zeta_m| e^{i
\chi_m}$.
\item Polar phase ($|0\rangle $ in Fig.~\ref{fig:eps.eps}).
The order parameter is given by $\bm{\zeta}_{\mathrm{P}}= {}^{T} (0, e^{i
\chi_0}, 0)$.
\item Broken-axisymmetry phase (shaded region in Fig.~\ref{fig:eps.eps}).
The order parameter is given by (see Appendix~\ref{app0} for derivation)
\begin{align}
\begin{cases}
\zeta_{\pm 1}
= \left( q \pm p \right)
\sqrt{
\displaystyle
\frac{p^2 + 2 \left| c_1 \right| n q - q^2}
{8 \left| c_1 \right| n q^3}
}
e^{ i \chi_{\pm 1}},\\
\zeta_0
=\sqrt{
\displaystyle
\frac{
\left(
q^2 - p^2
\right)
\left(
p^2 + 2 \left| c_1 \right| n q + q^2
\right)
}
{4 \left| c_1 \right| n q^3}
}
e^{i (\chi_1 + \chi_{-1})/2}.
\end{cases}
\label{mixed}
\end{align}
\end{enumerate}
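As an independent check of the order parameter of Eq.~(\ref{mixed}), the following sketch minimizes the spin-dependent mean-field energy per particle over real, normalized $(\zeta_1,\zeta_0,\zeta_{-1})$ by brute force and compares the minimum with the analytic expressions. The parameter point $p/|c_1|n = 0.9$, $q/|c_1|n = 1.1$ (the value used in the figures of this paper) is an illustrative choice inside the broken-axisymmetry region.

```python
import numpy as np

# Brute-force check of the broken-axisymmetry order parameter, Eq. (mixed):
# minimize the spin-dependent mean-field energy per particle on the unit
# sphere of real (zeta_1, zeta_0, zeta_-1).  Energies in units of |c1|n;
# the c0 term is omitted because it is constant under normalization.
p, q, c1n = 0.9, 1.1, -1.0      # illustrative point in the BA phase
ac = abs(c1n)                   # ac = |c1|n

def energy(z1, z0, zm1):
    fz = z1**2 - zm1**2
    fx = np.sqrt(2.0) * z0 * (z1 + zm1)   # real zeta -> <f_y> = 0
    zeeman = (-p + q) * z1**2 + (p + q) * zm1**2
    return zeeman + 0.5 * c1n * (fx**2 + fz**2)

# spherical-coordinate grid covering all real unit vectors
a, b = np.meshgrid(np.linspace(0.0, np.pi, 400),
                   np.linspace(0.0, 2.0 * np.pi, 400))
e = energy(np.cos(a), np.sin(a) * np.cos(b), np.sin(a) * np.sin(b))

# analytic order parameter, Eq. (mixed), with all phases chi_m = 0
zp = (q + p) * np.sqrt((p**2 + 2*ac*q - q**2) / (8 * ac * q**3))
zm = (q - p) * np.sqrt((p**2 + 2*ac*q - q**2) / (8 * ac * q**3))
z0 = np.sqrt((q**2 - p**2) * (p**2 + 2*ac*q + q**2) / (4 * ac * q**3))

print(abs(zp**2 + z0**2 + zm**2 - 1.0) < 1e-12)   # normalization
print(abs(e.min() - energy(zp, z0, zm)) < 5e-3)   # grid minimum matches
```

The real parametrization suffices because the phases $\chi_m$ only rotate the transverse magnetization and the global phase, leaving the energy unchanged.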
In the broken-axisymmetry phase, the transverse magnetization, which is
perpendicular to the external magnetic field,
\begin{eqnarray}
\label{fperp}
\left \langle F_\perp \right \rangle
& \equiv & \sqrt{\left\langle F_x \right\rangle^2 + \left\langle F_y
\right\rangle^2}
\nonumber \\
& = &
N_0\frac{
\sqrt{q^2 - p^2}
\sqrt{
\left(
p^2 + 2 \left| c_1 \right| n q
\right)^2
-q^4
}
}
{
2 \left| c_1 \right| n q^2
},
\end{eqnarray}
is nonzero as shown in Fig.~\ref{f_perpendicular}.
\begin{figure}[t]
\begin{center}
\includegraphics[height=9\baselineskip]{f_perp.eps}
\includegraphics[height=9\baselineskip]{magn.eps}
\end{center}
\caption{ (Color online)
(a) Transverse magnetization, i.e., magnetization perpendicular to the
direction of the applied magnetic field $\langle f_\perp \rangle \equiv
\langle F_\perp \rangle / N_0$ as a function of linear and quadratic
Zeeman coefficients.
The transverse magnetization is nonzero only in the broken-axisymmetry
phase.
(b) Schematic illustration of the spin vector in the broken-axisymmetry
phase.
}
\label{f_perpendicular}
\end{figure}
(If we choose $\zeta_0$ to be real and positive, we have
$
\left \langle F_x \right \rangle
=
\left \langle F_\perp \right \rangle \cos \phi
$ and
$
\left \langle F_y \right \rangle
=
\left \langle F_\perp \right \rangle \sin \phi
$,
where $\phi \equiv \chi_1 = -\chi_{-1}$.)
The longitudinal magnetization, which is parallel to the external magnetic
field, is given by
\begin{equation}
\left\langle F_z \right\rangle = N_0 \frac{p \left( p^2 + 2 \left| c_1
\right| n q - q^2 \right)}{2 \left| c_1 \right| n q^2}.
\end{equation}
The total magnetization is therefore given by
\begin{equation} \label{fmag}
\left| \left\langle \bm{F} \right\rangle \right| \equiv \sqrt{\left\langle
F_\perp \right\rangle^2 + \left\langle F_z \right\rangle^2} = N_0
\frac{\sqrt{4 c_1^2 n^2 q^2 - \left( p^2 - q^2 \right)^2}}{2
\left| c_1 \right| n q}.
\end{equation}
The magnetization thus tilts against the applied magnetic field with
the polar angle
\begin{align} \label{pangle}
\vartheta = \arctan
\left(
\frac{ \sqrt{q^2 - p^2} \sqrt{p^2 + 2 \left| c_1 \right| n q + q^2} }
{ p \sqrt{p^2 + 2 \left| c_1 \right| n q - q^2} }
\right).
\end{align}
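The closed-form expressions above can be cross-checked numerically: build $\zeta_m$ from Eq.~(\ref{mixed}) with all $\chi_m = 0$, evaluate the magnetization directly from the spin-1 matrices, and compare with Eqs.~(\ref{fperp})--(\ref{pangle}). The point $p/|c_1|n = 0.9$, $q/|c_1|n = 1.1$ is again an illustrative choice.

```python
import numpy as np

# Consistency check of the magnetization formulas in the broken-axisymmetry
# phase.  Per-particle quantities (N0 = 1); units of |c1|n.
p, q, ac = 0.9, 1.1, 1.0          # ac = |c1|n

zp = (q + p) * np.sqrt((p**2 + 2*ac*q - q**2) / (8 * ac * q**3))
zm = (q - p) * np.sqrt((p**2 + 2*ac*q - q**2) / (8 * ac * q**3))
z0 = np.sqrt((q**2 - p**2) * (p**2 + 2*ac*q + q**2) / (4 * ac * q**3))

# direct evaluation with the spin-1 matrices (real zeta -> <F_y> = 0)
fz_dir = zp**2 - zm**2
fp_dir = np.sqrt(2.0) * z0 * (zp + zm)    # <F_x> = <F_perp>

# closed-form expressions quoted in the text
fp = np.sqrt(q**2 - p**2) * np.sqrt((p**2 + 2*ac*q)**2 - q**4) / (2*ac*q**2)
fz = p * (p**2 + 2*ac*q - q**2) / (2 * ac * q**2)
fmag = np.sqrt(4 * ac**2 * q**2 - (p**2 - q**2)**2) / (2 * ac * q)
theta = np.arctan(np.sqrt(q**2 - p**2) * np.sqrt(p**2 + 2*ac*q + q**2)
                  / (p * np.sqrt(p**2 + 2*ac*q - q**2)))

print(np.isclose(fp_dir, fp), np.isclose(fz_dir, fz))
print(np.isclose(np.hypot(fp, fz), fmag))       # F_perp^2 + F_z^2 = |F|^2
print(np.isclose(np.arctan2(fp, fz), theta))    # tilt angle
```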
We note that this ground state breaks the axisymmetry around the $z$ axis
despite the fact that the Hamiltonian including the external magnetic
field is axisymmetric.
Thus, the ground state features spontaneous breaking of axisymmetry or
spontaneous breaking of the SO(2) symmetry.
Such an axisymmetry breaking is due to the competition between the linear
and quadratic Zeeman effects and the ferromagnetic interaction.
The quadratic Zeeman effect decreases the $z$ component of the spin
vector.
However, a decrease in the length of the spin vector costs the
ferromagnetic interaction energy.
To reconcile the quadratic Zeeman effect with the ferromagnetic
interaction, the spin vector tilts against the $z$ axis.
In fact, $\vartheta$ in Eq.~(\ref{pangle}) is a monotonically decreasing
function of $p$ and a monotonically increasing function of $q$, and the
length of the spin vector (\ref{fmag}) attains the maximum value of $N_0$
for $|c_1| n \rightarrow \infty$.
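The monotonicity of the tilt angle can be confirmed on a grid; the parameter ranges below are illustrative scans that stay inside the broken-axisymmetry region ($p < q$ and $p^2 + 2|c_1|nq - q^2 > 0$).

```python
import numpy as np

# Numerical check that the tilt angle of Eq. (pangle) decreases with p
# and increases with q inside the broken-axisymmetry phase.
# Units of |c1|n (set to unity via ac).
def tilt(p, q, ac=1.0):
    return np.arctan(np.sqrt(q**2 - p**2) * np.sqrt(p**2 + 2*ac*q + q**2)
                     / (p * np.sqrt(p**2 + 2*ac*q - q**2)))

th_p = tilt(np.linspace(0.1, 1.0, 50), 1.1)    # scan p at fixed q = 1.1
th_q = tilt(0.9, np.linspace(0.95, 1.5, 50))   # scan q at fixed p = 0.9

print(np.all(np.diff(th_p) < 0))   # monotonically decreasing in p
print(np.all(np.diff(th_q) > 0))   # monotonically increasing in q
```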
\section{Bogoliubov excitations and Goldstone modes}
\label{section3}
According to the Goldstone theorem~\cite{Goldstone}, there exists a
gapless excitation mode when a continuous symmetry is spontaneously
broken.
In the broken-axisymmetry phase, we have shown that the relevant
continuous symmetry is the SO(2) axisymmetry.
Since we treat the system in the MFT, in which the global phase of the
order parameter takes a definite value, the U(1) symmetry is also broken.
Thus, the two continuous symmetries are simultaneously broken in this
phase.
In this section, we examine the corresponding Goldstone modes using the
Bogoliubov theory.
\subsection{Basic theory}
We first formulate a number-conserving Bogoliubov
theory~\cite{Bogoliubov} for a BEC with spin degrees of
freedom~\cite{M.Ueda}.
The advantage of this number-conserving formulation is that we do not need
to introduce the chemical potential as a Lagrange multiplier to adjust the
particle number to a prescribed value.
In this formulation, we replace $\hat{a}_{\mathbf{0}, m}$ with $\zeta_m
\left( N - \sum_{\textbf{k}\neq\textbf{0},m} \hat{a}_{\mathbf{k}, m}^\dag
\hat{a}_{\mathbf{k}, m} \right)^{1/2}$ in Eqs.~(\ref{hamiltonian-3}) and
(\ref{hamiltonian3}) and keep terms up to those of second order in
$\hat{a}_{\mathbf{k}\neq \textbf{0}, m}$ and
$\hat{a}_{\mathbf{k}\neq\textbf{0}, m}^\dag$.
We then obtain an effective Bogoliubov Hamiltonian as
\begin{widetext}
\begin{align}
\hat{\mathcal{H}}_{\mathrm{eff}}
= &
\sum_{\textbf{k} \neq \textbf{0} }
\sum_{m=-1}^{1}
\left(
\epsilon_{\textbf{k}}
- p m
+ q m^2
+ p \langle f_z \rangle
- q \langle f_z^2 \rangle
- c_1 n
+ c_1 n | \zeta_0^2 - 2 \zeta_1 \zeta_{-1} |^2
\right)
\hat{a}_{ \textbf{k} , m }^\dag \hat{a}_{ \textbf{k} , m }
\nonumber \\ &
+
c_1 n \langle \bm{f} \rangle \cdot
\sum_{ \textbf{k} \neq \textbf{0} }\sum_{m,n}
\bm{f}_{m,n}
\hat{a}_{ \textbf{k} , m }^\dag
\hat{a}_{ \textbf{k} , n }
+
\frac{c_0 n}{2}
\sum_{ \textbf{k} \neq \textbf{0} }
\left(
2 \hat{\mathcal{D}}_{\textbf{k}}^\dag \hat{\mathcal{D}}_{\textbf{k}}
+
\hat{\mathcal{D}}_{\textbf{k}} \hat{\mathcal{D}}_{-\textbf{k}}
+
\hat{\mathcal{D}}_{\textbf{k}}^\dag \hat{\mathcal{D}}_{-\textbf{k}}^\dag
\right)
\nonumber \\ &
+
\frac{c_1 n}{2}
\sum_{\textbf{k} \neq \textbf{0} }
\left(
2 \hat{\bm{\mathcal{F}}}_{\textbf{k}}^\dag \cdot
\hat{\bm{\mathcal{F}}}_{\textbf{k}}
+
\hat{\bm{\mathcal{F}}}_{\textbf{k}} \cdot
\hat{\bm{\mathcal{F}}}_{-\textbf{k}}
+
\hat{\bm{\mathcal{F}}}_{\textbf{k}}^\dag \cdot
\hat{\bm{\mathcal{F}}}_{-\textbf{k}}^\dag
\right)
+E_0,
\label{original Hamiltonian}
\end{align}
\end{widetext}
where $\hat{\mathcal{D}}_{\textbf{k}} \equiv \sum_{m} \zeta_{m}^*
\hat{a}_{\textbf{k}, m}$, $\hat{\bm{\mathcal{F}}}_{\textbf{k}} \equiv
\sum_{m,n} \bm{f}_{m,n} \zeta_{m}^* \hat{a}_{\textbf{k}, n}$, and $E_0$
represents a constant term.
In general, for spin-$f$ atoms, we can express quasiparticle operators
$\hat{b}_{\mathbf{k}, \sigma}$'s as linear combinations of the
annihilation and creation operators of the original particles:
\begin{align}
\hat{\mathbf{B}}_{\textbf{k}}
= \mathrm{U}(k) \hat{\mathbf{A}}_{\textbf{k}}
+ \mathrm{V}(k) \hat{\mathbf{A}}_{-\textbf{k}}^*.
\label{defofqp}
\end{align}
Here $\mathrm{U}(k)$ and $\mathrm{V}(k)$ are $(2f+1) \times (2f+1)$ real
matrices and the bold letters represent sets of operators
\begin{align*}
\hat{\mathbf{B}}_{\textbf{k}}&= \,^{T} \! \left(
\hat{b}_{\mathbf{k},\sigma_1} \, , \,
\hat{b}_{\mathbf{k},\sigma_2} \, , \,
\cdots \, , \,
\hat{b}_{\mathbf{k},\sigma_{2f+1}}
\right),\\
\hat{\mathbf{A}}_{\textbf{k}}&= \,^{T} \! \left(
\hat{a}_{\mathbf{k},f} \, ,\,
\hat{a}_{\mathbf{k},f-1} \, , \,
\cdots \, , \,
\hat{a}_{\mathbf{k},-f}
\right),\\
\hat{\mathbf{A}}_{\textbf{k}}^*&= \,^{T} \! \left(
\hat{a}_{\mathbf{k},f}^\dag \, , \,
\hat{a}_{\mathbf{k},f-1}^\dag \, , \,
\cdots \, , \,
\hat{a}_{\mathbf{k},-f}^\dag
\right),
\end{align*}
where $\sigma_j$ is the label for each Bogoliubov mode.
The quasiparticle operators (\ref{defofqp}) should satisfy the Bose
commutation relations,
\begin{align}
\left[
\hat{b}_{\mathbf{k}, \sigma}, \hat{b}_{\mathbf{k}', \sigma '}
\right]
=0
, \quad
\left[
\hat{b}_{\mathbf{k}, \sigma},\hat{b}_{\mathbf{k}', \sigma '}^\dag
\right]
=
\delta_{\mathbf{k},\mathbf{k}'}\delta_{\sigma , \sigma '},
\label{com_rel}
\end{align}
which lead to
\begin{eqnarray}
\sum_i
\left[
U_{\sigma , i}(k) U_{\sigma' , i}(k)
-
V_{\sigma , i}(k) V_{\sigma' , i}(k)
\right] & = & \delta_{\sigma, \sigma'} ,
\nonumber \\
\label{comrelcomp1}
\\
\sum_i
\left[
U_{\sigma , i}(k) V_{\sigma' , i}(k)
-
V_{\sigma , i}(k) U_{\sigma' , i}(k)
\right] & = & 0.
\label{comrelcomp2}
\end{eqnarray}
We can rewrite Eqs.~(\ref{comrelcomp1}) and (\ref{comrelcomp2}) in a
matrix form,
\begin{align}
\,^{T} \!
\left(
\mathrm{U} + \mathrm{V}
\right)
\left(
\mathrm{U} - \mathrm{V}
\right)
= \mathrm{I}.
\label{commutation relation mat}
\end{align}
Thus, U and V are not independent of each other.
For later convenience, we write $\hat{\mathbf{B}}_{\textbf{k}}$ as
\begin{align}
\hat{\mathbf{B}}_{\textbf{k}} =
\frac{1}{2}
\left[
\left(
\mathrm{U} + \mathrm{V}
\right)
\left( \hat{\mathbf{A}}_{\textbf{k}}
+
\hat{\mathbf{A}}_{-\textbf{k}}^* \right)
+
\left(
\mathrm{U} - \mathrm{V}
\right)
\left( \hat{\mathbf{A}}_{\textbf{k}}
-
\hat{\mathbf{A}}_{-\textbf{k}}^* \right)
\right].
\label{Re_Bogoliubov}
\end{align}
We seek the excitation spectrum $E_{\sigma}$ and operators
$\hat{b}_{\mathbf{k}, \sigma}$ such that the quasiparticles behave
independently, i.e.,
\begin{align}
\displaystyle \hat{\mathcal{H}}_{\mathrm{eff}}
=\sum_{\mathbf{k} \neq 0}
\sum_{\sigma = \left\{ \sigma_1,\sigma_2,\cdots,\sigma_{2f+1} \right\}}
E_{\sigma}
\hat{b}_{\mathbf{k}, \sigma}^\dag \hat{b}_{\mathbf{k}, \sigma}
+E_{\mathrm{vac}},
\label{free particle like}
\end{align}
where $\hat{\mathcal{H}}_{\mathrm{eff}}$ is given in Eq.~(\ref{original
Hamiltonian}), and $E_{\mathrm{vac}}$ is the energy of the vacuum state
for the quasiparticles.
From Eq.~(\ref{original Hamiltonian}), the Heisenberg equation of motion
takes the form
\begin{align}
i \hbar \frac{\mathrm{d}}{\mathrm{d}t} \hat{\mathbf{A}}_{\textbf{k}}
= \mathrm{M}(k) \hat{\mathbf{A}}_{\textbf{k}}
+ \mathrm{N}(k) \hat{\mathbf{A}}_{-\textbf{k}}^*,
\label{differential of a}
\end{align}
where $\mathrm{M}(k)$ and $\mathrm{N}(k)$ are real and symmetric
$(2f+1) \times (2f+1)$ matrices.
Using the quasiparticle Hamiltonian (\ref{free particle like}) and the
commutation relations (\ref{com_rel}), on the other hand, we obtain
\begin{align}
i \hbar \frac{\mathrm{d}}{\mathrm{d}t}
\hat{\mathbf{B}}_{\textbf{k}}
= \mathrm{E}(k) \hat{\mathbf{B}}_{\textbf{k}},
\label{differential of b}
\end{align}
where $\mathrm{E}(k)$ is a diagonal $(2f+1) \times (2f+1)$ matrix whose
diagonal elements are the energies of the elementary excitations
$E_{\textbf{k},\sigma_j}$.
Then substituting Eq.~(\ref{differential of a}) into
Eq.~(\ref{differential of b}) and using Eq.~(\ref{commutation relation
mat}), we obtain
\begin{align}
\left(
\mathrm{M} + \mathrm{N}
\right)
\left(
\mathrm{M} - \mathrm{N}
\right)
\,^{T} \!
\left(
\mathrm{U} + \mathrm{V}
\right)
=
\,^{T} \!
\left(
\mathrm{U} + \mathrm{V}
\right)
\mathrm{E}^2 .
\end{align}
Since $\mathrm{E}^2$ is also a diagonal matrix, the Bogoliubov excitation
spectrum can be found as the eigenvalues of the matrix
\begin{align} \label{defG}
\mathrm{G}
\equiv
\left(
\mathrm{M} + \mathrm{N}
\right)
\left(
\mathrm{M} - \mathrm{N}
\right).
\end{align}
We note that G, which is the product of two Hermitian matrices, is not, in
general, Hermitian.
The present approach has the advantage that we can reduce the dimension
of the eigenvalue equation from $2(2f+1)$ to $(2f+1)$ and therefore
the diagonalization is simplified.
That is, instead of the diagonalization of the $2(2f+1) \times 2(2f+1)$
matrix as
\begin{align}
\begin{pmatrix}
\mathrm{M} & \mathrm{N} \\
-\mathrm{N} & -\mathrm{M}
\end{pmatrix}
\to
\begin{pmatrix}
\mathrm{E} & 0 \\
0 & -\mathrm{E}
\end{pmatrix},
\end{align}
a $(2f+1) \times (2f+1)$ matrix G is to be diagonalized.
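This dimensional reduction is easy to verify numerically. The sketch below uses random real symmetric matrices $\mathrm{M}$ and $\mathrm{N}$ (with $\mathrm{M} \pm \mathrm{N}$ positive definite, as for a stable condensate; the construction is illustrative, not the spin-1 matrices of the text) and confirms that the eigenvalues of $\mathrm{G} = (\mathrm{M}+\mathrm{N})(\mathrm{M}-\mathrm{N})$ are the squares of the positive eigenvalues of the full $2(2f+1)$-dimensional problem.

```python
import numpy as np

# Reduced Bogoliubov diagonalization: eig((M+N)(M-N)) versus the squares
# of the quasiparticle energies from the full 2n x 2n dynamical matrix.
rng = np.random.default_rng(0)
n = 3                                    # 2f+1 with f = 1

A = rng.standard_normal((n, n))
M = A @ A.T + 3.0 * np.eye(n)            # symmetric, positive definite
B = rng.standard_normal((n, n))
N = 0.05 * (B + B.T)                     # small symmetric coupling part

G = (M + N) @ (M - N)
E2 = np.sort(np.linalg.eigvals(G).real)  # reduced n-dimensional problem

K = np.block([[M, N], [-N, -M]])         # full 2n-dimensional problem
Ek = np.linalg.eigvals(K).real
Epos = np.sort(Ek[Ek > 0])               # spectrum comes in +/- E pairs

print(np.allclose(np.sort(Epos**2), E2))
```

The agreement follows from the identities $(\mathrm{M}-\mathrm{N})(\mathbf{x}-\mathbf{y}) = E(\mathbf{x}+\mathbf{y})$ and $(\mathrm{M}+\mathrm{N})(\mathbf{x}+\mathbf{y}) = E(\mathbf{x}-\mathbf{y})$ for an eigenvector $(\mathbf{x},\mathbf{y})$ of the full problem.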
\subsection{Low-lying modes in the broken-axisymmetry phase for $\bm{k}
\rightarrow \bm{0}$}
Without loss of generality, we may assume $\zeta_m$ to be real and
positive.
The excitation spectra in the ferromagnetic and polar phases can be
derived analytically as shown in Appendix~\ref{app}.
The analytic solutions can also be obtained for the broken-axisymmetry
phase.
However, since they are very complicated, we here derive the excitation
spectrum for small $k$.
The effective Hamiltonian (\ref{original Hamiltonian}) gives the
coefficient matrices M and N of the Heisenberg equation of motion
(\ref{differential of a}).
Using the explicit form of $\zeta_m$ in Eq.~(\ref{mixed}), the matrix G
can be written in the form,
\begin{equation}
\mathrm{G} = \mathrm{G}_0 + 2(g_2 n \mathrm{G}_1 - c_1 n
\mathrm{G}'_1) \epsilon_{\mathbf{k}} +
I \epsilon_{\mathbf{k}}^2,
\end{equation}
where $I$ is the unit matrix and
\begin{align}
\mathrm{G}_0 =&
\begin{pmatrix}
\Theta_1 \zeta_{-1} \zeta_{0 } &
-2 \Theta_1 \zeta_{ 1} \zeta_{-1} &
\Theta_1 \zeta_{ 1} \zeta_{0} \\
\Theta_0 \zeta_{-1} \zeta_{0 } &
-2 \Theta_0 \zeta_{ 1} \zeta_{-1} &
\Theta_0 \zeta_{ 1} \zeta_{0} \\
\Theta_{-1} \zeta_{-1} \zeta_{0 } &
-2 \Theta_{-1} \zeta_{ 1} \zeta_{-1} &
\Theta_{-1} \zeta_{ 1} \zeta_{0}
\end{pmatrix},
\label{G0}
\\
\mathrm{G}_1 =&
\begin{pmatrix}
\zeta_{ 1} ^2 & \zeta_{ 1} \zeta_{ 0} & \zeta_{ 1} \zeta_{-1} \\
\zeta_{ 1} \zeta_{0 } & \zeta_{ 0} ^2 & \zeta_{-1} \zeta_{0} \\
\zeta_{ 1} \zeta_{-1} & \zeta_{-1} \zeta_{ 0} & \zeta_{-1} ^2
\end{pmatrix},
\label{G1}
\\
\mathrm{G}_1' =&
\begin{pmatrix}
\zeta_{ 0} ^2 \zeta_{-1}/\zeta_{ 1} & -2 \zeta_{-1} \zeta_{ 0} & -2 \zeta_{ 1} \zeta_{-1} \\
-2 \zeta_{-1} \zeta_{0 } & \zeta_{ 0} ^2 + 2 \zeta_{ 1} \zeta_{-1} & -2\zeta_{-1} \zeta_{0} \\
-2\zeta_{ 1} \zeta_{-1} & -2\zeta_{-1} \zeta_{ 0} & \zeta_{ 0} ^2 \zeta_{ 1}/\zeta_{-1}
\end{pmatrix},
\label{eq:3matrices}
\end{align}
with
\begin{eqnarray}
\Theta_m & = & (3m^2 - 2) |c_1| n
\bigl[
p^2 + 2 |c_1| n q + (-1)^{m} q^2
\nonumber \\
& & - 2 m p q \bigr]
\zeta_0 / (q \zeta_m).
\end{eqnarray}
We first consider the limit of $ \epsilon_{\mathbf{k}} \to 0$.
The eigenvalues for $\mathrm{G} = \mathrm{G}_0$ can be obtained easily:
one is
\begin{align}
E_{\mathrm{gap}}^2
=
(3 p^2 - 2 c_1 n q -q^2)
(p^2 - 2 c_1 n q + q^2)/q^2
\end{align}
and the other two are zero.
Thus the system has two gapless excitation modes, which, as we will show
later, arise from the U(1) and SO(2) symmetry breakings.
We label the gapful mode as $\alpha$, and the other two gapless modes as
$\beta$ and $\gamma$.
The eigenvectors of $\mathrm{G}_0$ are given by each row of the following
matrix:
\begin{align}
\mathrm{U} + \mathrm{V} =
\begin{pmatrix}
\lambda \Theta_1 & \lambda \Theta_0 & \lambda \Theta_{-1} \\
( \mu_{\mathrm{p} } + \mu_{\mathrm{m} } ) \zeta_1 &
\mu_{\mathrm{p} } \zeta_0 &
( \mu_{\mathrm{p} } - \mu_{\mathrm{m} } ) \zeta_{-1}\\
( \nu_{\mathrm{p} } + \nu_{\mathrm{m} } ) \zeta_1 &
\nu_{\mathrm{p} } \zeta_0 &
( \nu_{\mathrm{p} } - \nu_{\mathrm{m} } ) \zeta_{-1}
\end{pmatrix},
\label{uplusv}
\end{align}
where $\lambda$, $\mu_{\mathrm{p}}$, $\mu_{\mathrm{m}}$,
$\nu_{\mathrm{p}}$, and $\nu_{\mathrm{m}}$ are arbitrary parameters.
Note that the second and third rows are linear combinations of the two
basis vectors $(\zeta_1, \zeta_0, \zeta_{-1})$ and $(\zeta_1, 0,
-\zeta_{-1})$, both of which are eigenvectors with zero eigenvalue.
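The $k \to 0$ eigenstructure can be checked numerically: $\mathrm{G}_0$ is an outer product of the column $(\Theta_1,\Theta_0,\Theta_{-1})$ with a row built from the order parameter (the singlet-pair weight $-2\zeta_1\zeta_{-1}$ multiplying the middle column is what makes the two vectors quoted above null eigenvectors), so its spectrum is $\{E_{\mathrm{gap}}^2, 0, 0\}$. The point $p/|c_1|n = 0.9$, $q/|c_1|n = 1.1$ is again illustrative.

```python
import numpy as np

# Spectrum of G0 in the k -> 0 limit: one gapful mode at E_gap^2 and a
# doubly degenerate zero eigenvalue.  Units of |c1|n; c1 < 0.
p, q, ac = 0.9, 1.1, 1.0                     # ac = |c1|n

zp = (q + p) * np.sqrt((p**2 + 2*ac*q - q**2) / (8 * ac * q**3))
zm = (q - p) * np.sqrt((p**2 + 2*ac*q - q**2) / (8 * ac * q**3))
z0 = np.sqrt((q**2 - p**2) * (p**2 + 2*ac*q + q**2) / (4 * ac * q**3))

zeta = {1: zp, 0: z0, -1: zm}
Theta = np.array([(3*m**2 - 2) * ac
                  * (p**2 + 2*ac*q + (-1)**m * q**2 - 2*m*p*q)
                  * z0 / (q * zeta[m]) for m in (1, 0, -1)])

# rank-1 structure with the singlet-pair weight -2*zeta_1*zeta_{-1}
G0 = np.outer(Theta, [zm * z0, -2.0 * zp * zm, zp * z0])
ev = np.sort(np.linalg.eigvals(G0).real)

E_gap2 = (3*p**2 + 2*ac*q - q**2) * (p**2 + 2*ac*q + q**2) / q**2
print(np.allclose(ev[:2], 0.0, atol=1e-10))              # two gapless modes
print(np.isclose(ev[-1], E_gap2))                        # gapful mode
print(np.allclose(G0 @ np.array([zp, z0, zm]), 0.0, atol=1e-12))
print(np.allclose(G0 @ np.array([zp, 0.0, -zm]), 0.0, atol=1e-12))
```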
It follows from Eq.~(\ref{commutation relation mat}) that the matrix
$\mathrm{U} - \mathrm{V}$ is given as the transposed
inverse matrix of Eq.~(\ref{uplusv}),
\begin{widetext}
\begin{align}
\mathrm{U} - \mathrm{V} = \frac{1}{A}
\begin{pmatrix}
\frac{ \zeta_{-1} \zeta_0 }{\lambda} &
2 \frac{ \zeta_{1} \zeta_{-1} }{\lambda} &
\frac{ \zeta_{1} \zeta_0 }{\lambda} \\
\frac{
-\zeta_{-1} \Theta_0
( \nu_{\mathrm{p} } - \nu_{\mathrm{m} } )
+ \zeta_0 \Theta_{-1} \nu_{\mathrm{p} }
}{J} &
\frac{
\zeta_{-1} \Theta_1
( \nu_{\mathrm{p} } - \nu_{\mathrm{m} } )
- \zeta_1 \Theta_{-1}
( \nu_{\mathrm{p} } + \nu_{\mathrm{m} } )
}{J} &
\frac{
\zeta_{1} \Theta_0
( \nu_{\mathrm{p} } + \nu_{\mathrm{m} } )
- \zeta_0 \Theta_{1} \nu_{\mathrm{p} }
}{J} \\
\frac{
\zeta_{-1} \Theta_0
( \mu_{\mathrm{p} } - \mu_{\mathrm{m} } )
- \zeta_0 \Theta_{-1} \mu_{\mathrm{p} }
}{J} &
\frac{
- \zeta_{-1} \Theta_1
( \mu_{\mathrm{p} } - \mu_{\mathrm{m} } )
+ \zeta_1 \Theta_{-1}
( \mu_{\mathrm{p} } + \mu_{\mathrm{m} } )
}{J} &
\frac{
- \zeta_{1} \Theta_0
( \mu_{\mathrm{p} } + \mu_{\mathrm{m} } )
+ \zeta_0 \Theta_{1} \mu_{\mathrm{p} }
}{J}
\end{pmatrix},
\label{uminusv}
\end{align}
\end{widetext}
where $A \equiv 2\zeta_1 \zeta_{-1}\Theta_0 -\zeta_0 \zeta_{-1}\Theta_1 -
\zeta_0 \zeta_{1}\Theta_{-1}$ and $J \equiv \mu_{\mathrm{p}
}\nu_{\mathrm{m} }- \mu_{\mathrm{m} }\nu_{\mathrm{p} }$.
\subsection{Low-lying modes in the broken-axisymmetry phase for small
$\bm{k}$}
In the limit of small $\epsilon_{\mathbf{k}}$, the five parameters
$\lambda$, $\mu_{\mathrm{p}}$, $\mu_{\mathrm{m}}$, $\nu_{\mathrm{p}}$, and
$\nu_{\mathrm{m}}$ can be determined by substituting
Eqs.~(\ref{uplusv}) and (\ref{uminusv}) into the definition of the
Bogoliubov operators (\ref{Re_Bogoliubov}) and comparing the
quasiparticle Hamiltonian (\ref{free particle like}) with the effective
Hamiltonian (\ref{original Hamiltonian}).
However, we cannot carry out this procedure using the
$\epsilon_{\textbf{k}} = 0$ expressions alone, because the coefficients of
the two Goldstone modes diverge in the limit of $\epsilon_{\textbf{k}} \to
0$.
It is thus necessary to find the $\epsilon_{\mathbf{k}}$ dependence of
the eigenenergies in order to determine the properties of the low-lying
excitations.
From Eqs.~(\ref{G0})-(\ref{eq:3matrices}), we obtain the eigenenergies up
to the order of $\epsilon_{\mathbf{k}}$ as
\begin{align}
\begin{cases}
E_{\alpha}^2 = E_{\mathrm{gap}}^2 +
4 \left(
\frac{ p^2 -c_1 n q }{q}
\right)
\epsilon_{\mathbf{k}} +
\mathrm{O} \left( \epsilon_{\mathbf{k}}^2 \right),
\\
E_{\beta }^2 = \Lambda_{+} \epsilon_{\mathbf{k}} +
\mathrm{O} \left( \epsilon_{\mathbf{k}}^2 \right),
\\
E_{\gamma}^2 = \Lambda_{-} \epsilon_{\mathbf{k}} +
\mathrm{O} \left( \epsilon_{\mathbf{k}}^2 \right),
\end{cases}
\label{three modes}
\end{align}
where
\[
\Lambda_{\pm} = g_2 n + \frac{\eta}{2}
\pm \frac{1}{2}
\sqrt{
(2 g_2 n - \eta)^2 + \frac{8 g_2 n (q - \eta) \eta^2}
{c_1 n (3 \eta - 2 q + 2 c_1 n)}
}
\]
with $\eta = (q^2 -p^2)/q$.
In Fig.~\ref{fig:graph.eps}, we compare the $\epsilon_{\textbf{k}}$
dependences of the approximate eigenenergies (\ref{three modes}) (dashed
curves) with those of the numerically obtained exact energies (solid
curves).
\begin{figure}[t]
\begin{center}
\includegraphics[width=19\baselineskip]{nmodes.eps}
\end{center}
\caption{
Excitation spectrum in the broken-axisymmetry phase.
The solid curves represent the exact solutions and the dashed ones are
approximate solutions given in Eq.~(\ref{three modes}).
The linear and quadratic Zeeman coefficients are chosen to be $p / |c_1| n
= 9 / 10$ and $q / |c_1| n = 11 / 10$.
The modes $\beta$ and $\gamma$ are the gapless modes associated with the
simultaneous breakings of U(1) and SO(2) symmetries.
The exact and approximate solutions for the $\beta$ mode cannot be
distinguished in this figure.
}
\label{fig:graph.eps}
\end{figure}
It is important to note that the two gapless excitations $E_\beta$ and
$E_\gamma$ in Eq.~(\ref{three modes}) both scale as
$\epsilon_{\mathbf{k}}^{1/2}$ to leading order.
Since the effective Hamiltonian $\hat{\mathcal{H}}_{\mathrm{eff}}$ in
Eq.~(\ref{original Hamiltonian}) contains only the terms that are
proportional to $\epsilon_{\mathbf{k}}$, this
$\epsilon_{\mathbf{k}}^{1/2}$ dependence must be canceled by the operators
$\hat{b}_{\mathbf{k},\sigma}$ so that Eq.~(\ref{free particle like})
reproduces Eq.~(\ref{original Hamiltonian}).
Therefore, we find that the normalization factors $\mu_{\mathrm{p}}$,
$\mu_{\mathrm{m}}$, $\nu_{\mathrm{p}}$, and $\nu_{\mathrm{m}}$ in
Eq.~(\ref{uplusv}), which determine $\hat{b}_{\mathbf{k},\sigma}$ through
Eq.~(\ref{defofqp}), must be proportional to either
$\epsilon_{\mathbf{k}}^{-1/4}$ or $\epsilon_{\mathbf{k}}^{1/4}$.
From numerical analysis, we find that they all have an
$\epsilon_{\mathbf{k}}^{-1/4}$ dependence, as is the case for the gapless
excitation modes in the other phases (see Appendix~\ref{app}).
It follows then from Eqs.~(\ref{uplusv}) and (\ref{uminusv}) that
$(\mathrm{U} + \mathrm{V})_{\sigma,m} \sim \mathrm{O}
(\epsilon_{\mathbf{k}}^{-1/4})$ and $(\mathrm{U} - \mathrm{V})_{\sigma,m}
\sim \mathrm{O} (\epsilon_{\mathbf{k}}^{1/4})$, and
\begin{align}
(\mathrm{U} + \mathrm{V})_{\sigma,m} \gg (\mathrm{U} -
\mathrm{V})_{\sigma,m} \;\;\;\;\; \mbox{($\sigma = \beta$ or $\gamma$)}
\label{hermitian-like}
\end{align}
for $\epsilon_{\mathbf{k}} \to 0$.
Therefore, we can neglect the second term in the square brackets in
Eq.~(\ref{Re_Bogoliubov}), obtaining
\begin{align}
\hat{b}_{\mathbf{k},\sigma}
\simeq
\sum_{m}
(\mathrm{U} + \mathrm{V})_{\sigma,m}
\left(
\hat{a}_{\mathbf{k},m} + \hat{a}_{-\mathbf{k},m}^\dag
\right)
\label{negligible}
\end{align}
for $\sigma = \beta$ and $\gamma$.
The corresponding two Bogoliubov operators are then written as
\begin{widetext}
\begin{align}
\begin{cases}
\hat{b}_{\mathbf{k},\beta}
\simeq
\mu_{\mathrm{p}}
\sum_{m} \zeta_m
\left(
\hat{a}_{\mathbf{k},m} + \hat{a}_{-\mathbf{k},m}^\dag
\right)
+
\mu_{\mathrm{m}}
\sum_{m} m \zeta_m
\left(
\hat{a}_{\mathbf{k},m} + \hat{a}_{-\mathbf{k},m}^\dag
\right),\\
\hat{b}_{\mathbf{k},\gamma}
\simeq
\nu_{\mathrm{p}}
\sum_{m} \zeta_m
\left(
\hat{a}_{\mathbf{k},m} + \hat{a}_{-\mathbf{k},m}^\dag
\right)
+
\nu_{\mathrm{m}}
\sum_{m} m \zeta_m
\left(
\hat{a}_{\mathbf{k},m} + \hat{a}_{-\mathbf{k},m}^\dag
\right).
\end{cases}
\label{beta_gamma}
\end{align}
\end{widetext}
Equation (\ref{beta_gamma}) indicates that the quasiparticle operators for
the two gapless modes consist of the number fluctuation operator
$
\delta \hat{N}
= \sqrt{N_0}
\left[
\sum_{m} \zeta_m
\left(
\hat{a}_{\mathbf{k},m} + \hat{a}_{-\mathbf{k},m}^\dag
\right)
\right]
$
and the spin fluctuation operator
$
\delta \hat{F}_z
= \sqrt{N_0}
\left[
\sum_{m} m \zeta_m
\left(
\hat{a}_{\mathbf{k},m} + \hat{a}_{-\mathbf{k},m}^\dag
\right)
\right]
$.
We recall that the operator $\hat{N}$ is the generator of the global phase
rotation and the operator $\hat{F}_z$ is that of the spin rotation around
the $z$ axis.
The creation of gapless quasiparticles therefore leads to a change in
the global phase of the order parameter and a rotation of the
magnetization around the $z$ axis.
The modes $\beta$ and $\gamma$ are thus the Goldstone modes that restore
the U(1) and SO(2) symmetries.
Since $\delta \hat{N}$ and $\delta \hat{F}_z$ can be regarded as the
phonon and magnon operators, respectively, phonons and magnons are coupled
in the quasiparticles described by Eq.~(\ref{beta_gamma}).
This is in contrast with the cases of the ferromagnetic and polar phases,
in which phonons and magnons are decoupled (see Appendix~\ref{app}).
The numerically obtained coefficients $\mu_{\mathrm{p}}$,
$\mu_{\mathrm{m}}$, $\nu_{\mathrm{p}}$, and $\nu_{\mathrm{m}}$ are shown
in Fig.~\ref{fig:plot_1.eps} as functions of $q / |c_1| n$.
We see that $\hat{b}_{\textbf{k},\beta}$ is mostly the density fluctuation
operator, while $\hat{b}_{\textbf{k},\gamma}$ is the linear combination of
the number and spin fluctuation operators with roughly equal weights for
small $q / |c_1| n$.
In other words, the $\beta$-mode is a phonon-dominant mode and the
$\gamma$-mode is a phonon-magnon coupled mode.
The $\beta$-mode crosses over to a phonon mode across the two neighboring
phase boundaries, while the $\gamma$-mode crosses over to a magnon mode.
\begin{figure}[t]
\begin{center}
\includegraphics[height=14\baselineskip]{lccoefficient.eps}
\end{center}
\caption{
(Color online)
Coefficients in the phonon-magnon modes ($\mu_{\mathrm{p}}$,
$\mu_{\mathrm{m}}$, $\nu_{\mathrm{p}}$, and $\nu_{\mathrm{m}}$ in
Eq.~(\ref{beta_gamma})) as functions of the normalized quadratic Zeeman
energy $q / |c_1| n$.
The linear Zeeman energy is chosen to be $ p /|c_1| n = 9 / 10 $.
The vertical axis is scaled by $(\epsilon_{\mathbf{k}}/|c_1| n)^{1/4} /
\sqrt{N_0}$.
The values of the scattering lengths obtained by van Kempen \textit{et
al.}~\cite{E.G.M. van Kempen et al.} are used: $a_0=101.8\,
\mathrm{a.u.}$ and $a_2=100.4\,\mathrm{a.u.}$
The insets show the enlarged view of the curves for $\mu_\mathrm{p}$ and
$\mu_\mathrm{m}$.
}
\label{fig:plot_1.eps}
\end{figure}
\subsection{Coherent excitations}
We investigate the dynamics of the states in which the quasiparticles are
coherently excited.
The excited state is assumed to be a coherent state
\begin{align}
| \beta_{k,\sigma} \rangle
\equiv
e^{
\beta_{k,\sigma} \hat{b}_{\mathbf{k},\sigma}^\dag
-
\beta_{k,\sigma}^*\hat{b}_{\mathbf{k},\sigma}
}
| 0 \rangle_{\mathrm{B}},
\end{align}
where $| 0 \rangle_{\mathrm{B}}$ is the vacuum of the Bogoliubov
quasiparticles.
The change in the expectation value of an observable $\hat{Q}$ due to the
excitation of quasiparticles is given by
\begin{align}
\langle \delta \hat{Q}_{\textbf{k},\sigma}(t) \rangle
=
\langle \beta_{k,\sigma} | \hat{Q}_{\mathrm{H}}(t) | \beta_{k,\sigma} \rangle
- _{\mathrm{B}} \! \langle 0 | \hat{Q}_{\mathrm{H}}(t) | 0
\rangle_{\mathrm{B}},
\label{fluctuation}
\end{align}
where the subscript H denotes the Heisenberg representation.
Since $\hat{\textbf{A}}_{\textbf{k}} = {}^T \mathrm{U}(k)
\hat{\textbf{B}}_{\textbf{k}} - {}^T \mathrm{V}(k)
\hat{\textbf{B}}_{-\textbf{k}}^*$ from the inverse relation of
Eq.~(\ref{defofqp}), we obtain
\begin{align}
& \langle \delta \hat{\Psi}_{m}(t) \rangle_{\textbf{k},\sigma}
\nonumber \\
& = \frac{ |\beta_{k,\sigma}| }{ \sqrt{\Omega} }
\big[
( \mathrm{U} - \mathrm{V} )_{\sigma,m}
\cos ( \textbf{k} \cdot \textbf{r} - \omega_\sigma t + \phi_{k,\sigma} )
\nonumber \\
& \quad +
i
( \mathrm{U} + \mathrm{V} )_{\sigma,m}
\sin ( \textbf{k} \cdot \textbf{r} - \omega_\sigma t + \phi_{k,\sigma} )
\big],
\end{align}
where $\phi_{k,\sigma}=\mathrm{arg}(\beta_{k,\sigma})$ and $\omega_\sigma
= E_{\textbf{k}, \sigma}/\hbar$.
Since the ratio of the real part to the imaginary part is estimated from
Eq.~(\ref{hermitian-like}) to be of order $(\bar{\epsilon}_{\textbf{k}})^{1/2} \ll
1$, the real part is negligible for the two
gapless modes, $\sigma= \beta$ and $\gamma$, in the long-wavelength
limit.
Therefore, $\langle \delta \hat{\Psi}_{m}(t) \rangle_{\textbf{k},\sigma}$
is almost entirely imaginary, which indicates that the change occurs
mostly in the phase of the real order parameter $\zeta_m$.
Thus, the excitations of $\beta$ and $\gamma$ modes lead to a global phase
rotation and a spin rotation around the $z$ axis.
To study how the quasiparticle excitation rotates the spin, we calculate
$\langle \delta \hat{\textbf{F}}(t) \rangle_{\textbf{k},\sigma}$.
Keeping terms up to those of the first order in $\beta_{k,\sigma}$, we
obtain
\begin{align}
& \langle \delta \hat{F}_\xi(t) \rangle_{\textbf{k},\sigma}
\nonumber \\
& =\frac{ \sqrt{n_0} }{ \sqrt{\Omega} }
\sum_{m,n} \zeta_m (f_\xi)_{m,n}
\left[
( \mathrm{U} \mp \mathrm{V} )_{\sigma,n}
e^{i( \textbf{k} \cdot \textbf{r} - \omega_\sigma t )}
\beta_{k, \sigma} \pm \mathrm{H.c.}
\right],
\end{align}
where $f_\xi$ ($\xi = x, y, z$) are the spin-1 matrices defined in
Eq.~(\ref{fxyz}), and the upper signs refer to $\xi = x$ and $z$ and the
lower signs to $\xi = y$.
Hence, Eq.~(\ref{hermitian-like}) leads to $\langle \delta \hat{F}_y
\rangle_{\textbf{k},\sigma} \gg \langle \delta \hat{F}_x
\rangle_{\textbf{k},\sigma}$ and $\langle \delta \hat{F}_y
\rangle_{\textbf{k},\sigma} \gg \langle \delta \hat{F}_z
\rangle_{\textbf{k},\sigma}$.
This is because $\langle \hat{F}_x \rangle \neq 0$, $\langle \hat{F}_z
\rangle \neq 0$, and $\langle \hat{F}_y \rangle = 0$ from the assumption
of real and positive $\zeta_m$, and the infinitesimal spin rotation around
the $z$ axis changes only $\langle \hat{F}_y \rangle$.
Thus, we have shown that the excitations of the Goldstone modes $\beta$
and $\gamma$ lead to U(1) and SO(2) transformations.
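This statement can be checked directly with the spin-1 matrices: for a real spinor $\zeta$ rotated by $e^{i \varphi \hat{F}_z}$, only $\langle \hat{F}_y \rangle$ changes to first order in $\varphi$, while $\langle \hat{F}_z \rangle$ is exactly invariant. A minimal sketch (Python/NumPy; the spinor components are hypothetical):

```python
import numpy as np

s = 1/np.sqrt(2)
fx = s*np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
fy = s*np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
fz = np.diag([1., 0., -1.]).astype(complex)

# real, positive spinor (hypothetical broken-axisymmetry values, normalized)
zeta = np.array([0.55, 0.70, 0.45]) / np.linalg.norm([0.55, 0.70, 0.45])

expect = lambda op, psi: np.vdot(psi, op @ psi).real

phi = 1e-3                                            # small rotation angle
rot = np.diag(np.exp(1j*phi*np.array([1, 0, -1])))    # e^{i phi F_z} in the m basis
zr = rot @ zeta
dF = [expect(f, zr) - expect(f, zeta) for f in (fx, fy, fz)]
print([f'{x:.2e}' for x in dF])  # only the F_y component changes at first order
```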
Oscillations of the order-parameter phases and those of the azimuthal
angle of the spin vector are shown in Fig.~\ref{fig:angle fluc}.
Figure~\ref{fig:angle fluc}(a) shows that the $\beta$-mode excitation
changes the phases of $\langle \hat\psi_1 \rangle$, $\langle \hat\psi_0
\rangle$, and $\langle \hat\psi_{-1} \rangle$ in the same manner.
This is because as shown in Fig.~\ref{fig:plot_1.eps} the dominant
contribution to the $\beta$ mode is made by phonons which are insensitive
to individual spin components.
On the other hand, the $\gamma$-mode excitation describes not only
fluctuations of the overall phase but also those of the spin vector around
the $z$ axis.
Since the rotation around the $z$ axis is $e^{i \varphi \hat{F}_z}| m
\rangle = e^{i \varphi m}| m \rangle$, $\chi_1$ and $\chi_{-1}$ are out of
phase with respect to $\chi_0$.
\begin{figure}[t]
\begin{center}
\includegraphics[height=8\baselineskip]{wavesbeta.eps}
\\
\includegraphics[height=8\baselineskip]{wavesgamma.eps}
\end{center}
\caption{
Oscillations of the order-parameter phases ($\chi_m \equiv \tan^{-1}
[\mathrm{Im} \langle \hat{\psi}_m \rangle / \mathrm{Re} \langle
\hat{\psi}_m \rangle ]$) and the azimuthal angle of magnetization around
the $z$ axis ($\phi \equiv \tan^{-1} [\langle \hat{F}_y \rangle / \langle
\hat{F}_x \rangle ]$) caused by the excitation of the Bogoliubov
quasiparticles in the long-wavelength limit.
The vertical axis is marked in arbitrary units.
The Zeeman energies are assumed to be $p/|c_1| n = 9/10$ and $q/|c_1| n =
11/10$.
(a) Phonon-dominant mode ($\beta$-mode).
Since rotations of the order-parameter phases are much larger than the
rotation of the magnetization, the latter is not visible in this figure.
(b) Phonon-magnon coupled mode ($\gamma$-mode).
}
\label{fig:angle fluc}
\end{figure}
The gapful mode ($\alpha$-mode) can be interpreted as changing the
magnitude of the magnetization.
As shown in Fig.~\ref{fig:wavealpha.eps}, the fluctuation of $F_x$ is
dominant when the $\alpha$-mode is excited.
The $z$ component $F_z$ cannot vary due to the spin conservation, and
hence the spin fluctuation is restricted to the $x$-$y$ plane as
illustrated in the inset of Fig.~\ref{fig:wavealpha.eps}.
\begin{figure}[t]
\begin{center}
\includegraphics[height=10\baselineskip,width=15\baselineskip]{wavealpha.eps}
\end{center}
\caption{
Oscillation of transverse magnetization $\langle F_x \rangle$ due to
coherent excitation of the gapful mode ($\alpha$-mode).
The inset schematically illustrates the change in the spin vector caused
by the excitation of the $\alpha$-mode.
}
\label{fig:wavealpha.eps}
\end{figure}
\section{Discussion} \label{sectiond}
We have shown that the ground state in the shaded region of
Fig.~\ref{fig:eps.eps} is the broken-axisymmetry phase featuring
transverse magnetization.
Here, we discuss possible experimental consequences of the axisymmetry
breaking and the transverse magnetization.
Let us consider a situation in which atoms are prepared in the $m = 0$
state.
When $p = 0$ and $q < 2 |c_1| n$, the ground state is in the
broken-axisymmetry phase and the transverse magnetization should develop
in time.
However, the total spin angular momentum parallel to the magnetic field
must be conserved, and for small $q$ the magnitude of the transverse
component of the total spin is also nearly conserved.
Consequently, local magnetization varies in space only insofar as the
total spin is conserved~\cite{Saito,SaitoL}.
This constraint leads to formation of various spin textures depending on
the trap geometry.
For example, in an elongated cigar-shaped trap, a staggered domain
structure or a helical structure is formed spontaneously~\cite{Saito}.
In a pancake-shaped trap, on the other hand, a concentric domain structure
is formed~\cite{Saito}.
The formation of a domain structure costs kinetic energy and
ferromagnetic energy at the domain walls.
If the direction of the spin vector changes gradually over space, the
formation of the spin texture costs little energy.
One such texture is the topological spin texture, in which the
orientation of transverse magnetization has a $2\pi$ winding about a
central defect.
When the size of the system is small, the topological spin texture becomes
energetically favorable and develops spontaneously~\cite{SaitoL}.
Recently, formation of the topological spin texture has been observed by
the Berkeley group~\cite{Sadler}; in that experiment, the state of the system was
changed rapidly by a change in the magnetic field from the $| 0 \rangle$
region to the shaded region in Fig.~\ref{fig:eps.eps}.
The spontaneous transverse magnetization in the Berkeley experiment is a
manifestation of the symmetry breaking discussed in the present paper.
If the amount of the change in the magnetic field is small or if the speed
of the change is slow, the system is not markedly disturbed, and
low-energy gapless excitations (the $\beta$ and $\gamma$ modes) should be
observed.
\section{Conclusions} \label{section4}
We have studied a spin-1 ferromagnetic BEC by taking the quadratic Zeeman
effect into account.
The mean field theory predicts that BECs with ferromagnetic interactions
show the broken-axisymmetry phase, in which magnetization tilts against
the direction of an external magnetic field.
Here, the SO(2) symmetry is broken, in addition to the U(1) global phase
symmetry.
Applying the Bogoliubov theory for a BEC with spin degrees of freedom, we
have found one gapful mode and two gapless Goldstone modes for this
phase.
We have analytically shown that the two gapless modes are coupled
phonon-magnon modes that restore the U(1) and SO(2) symmetries
simultaneously.
Numerical analysis has shown that one Goldstone mode is the
phonon-dominant mode and the other is the phonon-magnon coupled mode with
roughly equal weights.
The gapful mode changes the length of the spin by inducing spin fluctuations
in the direction perpendicular to the magnetic field (see the inset of
Fig.~\ref{fig:wavealpha.eps}).
When more than one continuous symmetry is spontaneously broken, multiple
gapless modes emerge and they couple with each other to form Goldstone
modes as shown in this paper.
Such multiple Goldstone modes may therefore be found in spin-2
BECs~\cite{Ueda&Koashi} and higher spin BECs, which merit further
investigation.
\section*{Acknowledgements}
We thank T.~Mori for his participation in an early stage of this work.
This work was supported by Grants-in-Aid for Scientific Research
(Grant No.~17071005 and No.~17740263)
and by the 21st Century COE programs on ``Nanometer-Scale Quantum
Physics'' and ``Coherent Optical Science'' from the Ministry of Education,
Culture, Sports, Science and Technology of Japan.
M.U. acknowledges support by a CREST program of the JST.
\section{Introduction}\label{sec:intro}
Like cuprate superconductors, many of their iron-based cousins have an
antiferromagnetic (AFM) phase in the phase diagram near the superconducting
phase~\cite{pnict_rev_johnston,Paglione:2010p2493}. As phonons are
moreover not believed to be strong enough to explain the relatively
high transition temperatures of pnictides~\cite{phonon0}, the AFM
interactions have attracted considerable interest~\cite{chubukov_review}. Apart from the metallic
character of the spin-density--wave (SDW) phase in pnictides -- as
opposed to the Mott insulator in undoped cuprates -- a second
difference is that the SDW in pnictides breaks the four-fold
symmetry of the iron-arsenic planes down to a two-fold symmetry.
This breaking of the four-fold lattice symmetry is seen in the
conductivity~\cite{rho_anisotropy,Anis_charge} and in angle-resolved
photo-emission spectroscopy
(ARPES)~\cite{Yi:PNAS2011,ARPES_NaFeAs11,He:2010pNaFeAs,ZhangARPES_NaFeAs_2011}
even at temperatures above the onset of magnetic
long-range order. While there is a structural phase transition at
slightly higher temperature and while the in-plane lattice constants
thus break the rotational lattice symmetry~\cite{neutrons1}, the
effects in experiments appear stronger than can be explained by
slightly different lattice constants. Additional symmetry-breaking of
the electronic degrees of freedom has thus been suggested to be
involved, especially (i) a breaking of the \emph{orbital} symmetry, i.e., a
lifting of the degeneracy of the $xz$ and $yz$
orbitals~\cite{kruger:054504,Lv10,oo_nematic_2011}, and (ii) a
\emph{nematic} phase of the spin degree of
freedom~\cite{Fernandes:2011transp,Fernandes_nematic2012}. In the
latter scenario, magnetic correlations already select the preferred ordering
vector, e.g., $(\pi,0)$ over the equivalent $(0,\pi)$, but do not yet
establish long-range magnetic
order~\cite{Fang:2008nematic,Xu2,Fernandes_nematic2012,PhysRevB.82.020408}. A
related picture involves clusters with short-range magnetic order
whose AFM direction is pinned by the lattice anisotropy~\cite{Shuhua_Pnict2011}.
One problem in deciding between the scenarios is precisely that they
all break the same symmetry, which implies that as soon as one of the
breaks rotational symmetry, the symmetry in the other degrees of
freedom will be broken as
well. Lattice~\cite{PhysRevB.79.180504,Paul:2011p2617,PhysRevB.82.020408}
and orbital~\cite{kruger:054504,Fernandes_nematic2012} degrees of
freedom thus strongly interact with the spin. Nevertheless,
identifying the signatures of various modes of symmetry breaking may
help to elucidate the most important effects. We calculate here the
spectral density of the three-orbital model, see
Sec.~\ref{sec:models}, where the four-fold rotational lattice symmetry
is broken by (i) orbital order, (ii) anisotropic hoppings and (iii)
short-range AFM interactions. In order to be able to keep the magnetic
interactions short-range and in order to include onsite Coulomb
interactions, we employ cluster-perturbation theory, extending an
earlier study~\cite{Akw_nematic} of anisotropy driven by short-range AFM and ferromagnetic
(FM) interactions. Anisotropic
magnetic interactions can distort the spectral density in
agreement with ARPES both in otherwise noninteracting three- and
four-band models and for a regime close to the
SDW phase~\cite{Akw_nematic}, orbital order in the non-interacting model was discussed
in~\cite{oo_nematic_2011}. Here, we compare the two scenarios near the
SDW in more detail and also compare them to symmetry breaking via
anisotropic hoppings. While distortions induced by the latter cannot even qualitatively be
reconciled with ARPES for the noninteracting model, we are going to
see that results become more realistic near the SDW transition.
\section{Model and Method}\label{sec:models}
As we want to use exact diagonalization to solve a fully interacting
model on a small four-site cluster, we have to restrict the model to
at most three bands. We use a variant~\cite{Akw_nematic} of the
three-band model proposed in~\cite{Daghofer_3orb} that gives a
better fit of the Fermi surface. The unit cell of the Fe-As plane
contains two iron and two arsenic ions, however, an internal symmetry of
this unit cell allows us to write the Hamiltonian in terms of a
one-iron unit cell~\cite{plee,eschrig_tb} as long as we consider
isolated planes, as is done here. In momentum space, the tight-binding
Hamiltonian can then be written in terms of a pseudo-crystal
momentum ${\bf \tilde{k}}$, which is defined as ${\bf \tilde{k}} = {\bf k}$
for the $xz$/$yz$ orbitals and ${\bf \tilde{k}} = {\bf k}+(\pi,\pi)$ for the
$xy$ orbital and which is taken from the enlarged Brillouin zone
corresponding to a one-iron unit cell. When
translating the spectral density $A({\bf k},\omega)$
from pseudo-crystal momentum ${\bf \tilde{k}}$ to `lab-space' momentum
${\bf k}$, spectral weight of $xy$ character is shifted by $(\pi,\pi)$
with respect to $xz$ and $yz$ weight~\cite{Tranquada_spinorb,oo_nematic_2011,Akw_nematic}.
The ${\bf \tilde{k}}$-dependent Hamiltonian in orbital space can then be written as
\begin{eqnarray}\label{eq:H0k}
H_{\rm TB}(\mathbf{ \tilde{k}}) = \sum_{\mathbf{ \tilde{k}},\sigma,\mu,\nu}
T^{\mu,\nu}(\mathbf{ \tilde{k}})
d^\dagger_{\mathbf{ \tilde{k}},\mu,\sigma} d^{\phantom{\dagger}}_{\mathbf{ \tilde{k}},\nu,\sigma}\;,
\end{eqnarray}
where $d^{\phantom{\dagger}}_{\mathbf{ \tilde{k}},\nu,\sigma}$
($d^{\dagger}_{\mathbf{ \tilde{k}},\nu,\sigma}$) annihilates (creates) an
electron with pseudo-crystal momentum ${\bf \tilde{k}}$ and spin
$\sigma$ in orbital $\nu$. The orbital indices $\nu,\mu=1,2,3$ refer to the $xz$,
$yz$ and $xy$ states of the iron $3d$ manifold, respectively.
The $T^{\mu,\nu}(\mathbf{ \tilde{k}})=T^{\mu,\nu}(k_x,k_y)$ defining the hoppings are given
by
\begin{eqnarray}\label{eq:ekin}
T^{11/22} &= 2t_{2/1}\cos k_x +2t_{1/2}\cos k_y \nonumber\\
&\quad +4t_3 \cos k_x \cos k_y \\
&\quad \pm 2t_{11}(\cos 2k_x-\cos 2k_y)\nonumber\\
&\quad+4t_{12}\cos 2 k_x \cos 2k_y, \nonumber\\%\label{eq:H0k_t} \\
T^{33} &= \Delta_{xy} + 2t_5(\cos k_x+\cos k_y)\\
&\quad +4t_6\cos k_x\cos k_y + 2t_9(\cos 2k_x+\cos 2k_y)\nonumber\\
&\quad + 4t_{10}(\cos 2k_x \cos k_y + \cos k_x \cos 2k_y),\nonumber\\
T^{12} &= T^{21} = 4t_4\sin k_x \sin k_y, \label{eq:3b}\\
T^{13} &= \bar{T}^{31} = 2it_7\sin k_x + 4it_8\sin k_x \cos k_y, \\
T^{23} &= \bar{T}^{32} = 2it_7\sin k_y + 4it_8\sin k_y \cos k_x,
\end{eqnarray}
where a bar denotes the complex conjugate. Hopping parameters are the
same as in~\cite{Akw_nematic}: $t_1=-0.08$, $t_2=0.1825$,
$t_3=0.08375$, $t_4=-0.03$, $t_5=0.15$, $t_6=0.15$, $t_7=-0.12$,
$t_8=-t_7/2=0.06$, $t_{10} = -0.024$, $t_{11} = -0.01$, $t_{12} =
0.0275$, $\Delta_{xy} = 0.75$. The chemical potential $\mu$ depends on
the interaction terms and is chosen to fix the filling at 4 electrons
per site, for non-interacting electrons, we find $\mu= 0.47$. All
energies are given in eV. These bands are only an approximation and a
three-band model may not be detailed enough to capture
material-dependent properties.
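A minimal numerical sketch (Python/NumPy) that assembles $T^{\mu,\nu}(\mathbf{\tilde{k}})$ from the expressions and parameters above and diagonalizes it. Note that $t_9$ enters $T^{33}$ but is not listed among the parameters, so it is set to zero here purely as an assumption:

```python
import numpy as np

# hoppings in eV from the text; t9 is not listed above, so 0 is assumed here
t = dict(t1=-0.08, t2=0.1825, t3=0.08375, t4=-0.03, t5=0.15, t6=0.15,
         t7=-0.12, t8=0.06, t9=0.0, t10=-0.024, t11=-0.01, t12=0.0275)
Dxy, mu = 0.75, 0.47

def T(kx, ky):
    """3x3 tight-binding matrix in the (xz, yz, xy) orbital basis."""
    c, s = np.cos, np.sin
    T11 = (2*t['t2']*c(kx) + 2*t['t1']*c(ky) + 4*t['t3']*c(kx)*c(ky)
           + 2*t['t11']*(c(2*kx) - c(2*ky)) + 4*t['t12']*c(2*kx)*c(2*ky))
    T22 = (2*t['t1']*c(kx) + 2*t['t2']*c(ky) + 4*t['t3']*c(kx)*c(ky)
           - 2*t['t11']*(c(2*kx) - c(2*ky)) + 4*t['t12']*c(2*kx)*c(2*ky))
    T33 = (Dxy + 2*t['t5']*(c(kx) + c(ky)) + 4*t['t6']*c(kx)*c(ky)
           + 2*t['t9']*(c(2*kx) + c(2*ky))
           + 4*t['t10']*(c(2*kx)*c(ky) + c(kx)*c(2*ky)))
    T12 = 4*t['t4']*s(kx)*s(ky)
    T13 = 2j*t['t7']*s(kx) + 4j*t['t8']*s(kx)*c(ky)
    T23 = 2j*t['t7']*s(ky) + 4j*t['t8']*s(ky)*c(kx)
    return np.array([[T11, T12, T13],
                     [T12, T22, T23],
                     [np.conj(T13), np.conj(T23), T33]])

# band energies along Gamma -> X = (pi, 0), measured from the chemical potential
for kx in np.linspace(0, np.pi, 5):
    print(np.round(np.linalg.eigvalsh(T(kx, 0.0)) - mu, 3))
```

The matrix is Hermitian by construction, and exchanging $k_x \leftrightarrow k_y$ leaves the spectrum unchanged (the $xz$ and $yz$ orbitals swap roles), reflecting the four-fold symmetry that the terms discussed below will break.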
We also include the onsite Coulomb interactions including Hund's rule
coupling and pair hopping~\cite{PhysRevB.18.4945,Oles:1983}. While the
couplings for the $xz$/$yz$ doublet can in
principle differ from the ones involving the $xy$ orbital, we choose
them here to be the same and employ the symmetric relation $U=U'+2J$ for simplicity.
\begin{eqnarray} \label{eq:Hcoul}
H_{\rm int}& =
U\sum_{{\bf i},\alpha}n_{{\bf i},\alpha,\uparrow}n_{{\bf i},
\alpha,\downarrow}
+(U'-J/2)\sum_{{\bf i},
\alpha < \beta}n_{{\bf i},\alpha}n_{{\bf i},\beta} \nonumber\\
&\quad -2J\sum_{{\bf i},\alpha < \beta}{\bf S}_{\bf{i}\alpha}\cdot{\bf S}_{\bf{i}\beta}\\
&\quad +J\sum_{{\bf i},\alpha < \beta}(d^{\dagger}_{{\bf i},\alpha,\uparrow}
d^{\dagger}_{{\bf i},\alpha,\downarrow}d^{\phantom{\dagger}}_{{\bf i},\beta,\downarrow}
d^{\phantom{\dagger}}_{{\bf i},\beta,\uparrow}+h.c.), \nonumber
\end{eqnarray}
where $\alpha,\beta$ denote the orbital and ${\bf S}_{{\bf i},\alpha}$
($n_{{\bf i},\alpha}$) is the spin (electronic density) in orbital $\alpha$ at site
${\bf i}$. The electron-spin operators are given as usual by ${\bf S}_{{\bf i}\nu} = \frac{1}{2}\sum_{ss'}
d^{\dagger}_{{\bf i}\nu s}{\boldsymbol \sigma}^{\phantom{\dagger}}_{ss'}d^{\phantom{\dagger}}_{{\bf
i}\nu s'}$, where ${\boldsymbol \sigma} = (\sigma^x,\sigma^y,\sigma^z)$
is the vector of Pauli matrices.
We choose here $U=1.02\;\textrm{eV}$ and $J=U/4$, because
the system is then very close to the SDW transition. The interacting
system has been found to be very susceptible to magnetic interactions
breaking rotational symmetry~\cite{Akw_nematic}, and we thus
concentrate on this regime.
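For a single site, Eq.~(\ref{eq:Hcoul}) can be diagonalized exactly, which is a useful consistency check of the interaction terms. The sketch below (Python/NumPy, Jordan-Wigner construction) builds $H_{\rm int}$ for the parameters above and verifies Hund's first rule in the two-electron sector, where the inter-orbital spin triplet at energy $U'-J = U-3J$ is the ground state:

```python
import numpy as np
from functools import reduce
from itertools import combinations

def fermions(n):
    """Jordan-Wigner annihilation operators for n fermionic modes."""
    I, Z = np.eye(2), np.diag([1., -1.])
    a = np.array([[0., 1.], [0., 0.]])
    return [reduce(np.kron, [Z]*i + [a] + [I]*(n - i - 1)) for i in range(n)]

U = 1.02; J = U/4; Up = U - 2*J            # symmetric relation U = U' + 2J
d = fermions(6)                            # 3 orbitals x 2 spins, mode = 2*orb + spin
c = lambda o, s: d[2*o + s]
n_op = lambda o, s: c(o, s).conj().T @ c(o, s)

# spin operators S_alpha = 1/2 sum_{ss'} d+_{alpha s} sigma_{ss'} d_{alpha s'}
half = [0.5*np.array(m) for m in ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
S = lambda o: [sum(sig[s, sp]*c(o, s).conj().T @ c(o, sp)
                   for s in range(2) for sp in range(2)) for sig in half]

H = np.zeros((64, 64), dtype=complex)
for o in range(3):                                         # intra-orbital Hubbard U
    H += U * n_op(o, 0) @ n_op(o, 1)
for a_, b_ in combinations(range(3), 2):
    na = n_op(a_, 0) + n_op(a_, 1)
    nb = n_op(b_, 0) + n_op(b_, 1)
    H += (Up - J/2) * na @ nb                              # inter-orbital density-density
    H += -2*J * sum(x @ y for x, y in zip(S(a_), S(b_)))   # Hund's rule coupling
    P = c(a_, 0).conj().T @ c(a_, 1).conj().T @ c(b_, 1) @ c(b_, 0)
    H += J * (P + P.conj().T)                              # pair hopping

# two-electron sector: Hund's first rule -> inter-orbital triplet ground state
N = sum(n_op(o, s) for o in range(3) for s in range(2))
idx = np.where(np.isclose(np.diag(N).real, 2))[0]
E2 = np.linalg.eigvalsh(H[np.ix_(idx, idx)])
print(round(E2[0], 4))  # U' - J = U - 3J = 0.255
```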
Similar to the approach chosen in~\cite{Akw_nematic}, we
explicitly break rotational symmetry by Heisenberg couplings that act
only locally within the small cluster directly solved with exact
diagonalization (see method below). Extending the analysis
in~\cite{Akw_nematic}, the coupling $J_{x}$ along the
$x$-direction and $J_{y}$ along $y$ can have different magnitudes and
the same or different signs:
\begin{eqnarray}\label{eq:Heisenberg}
H_{\textrm{Heis}} &= J_{x} \sum_{\stackrel{\langle {\bf i},{\bf
j}\rangle\parallel x}{\mu\nu}}
{\bf S}_{{\bf i},\mu}\cdot
{\bf S}_{{\bf j},\nu}
+ J_{y} \sum_{\stackrel{\langle {\bf i},{\bf
j}\rangle\parallel y}{\mu\nu}}
{\bf S}_{{\bf i},\mu}\cdot
{\bf S}_{{\bf j},\nu},
\end{eqnarray}
where $\mu$, $\nu$ denote orbitals and $\langle {\bf i},{\bf
j}\rangle\parallel x/y$ nearest-neighbour (NN) bonds along the $x$
and $y$ directions. For $J_{x/y}>0$, the coupling is AFM.
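The orientational bias of (\ref{eq:Heisenberg}) can be made explicit with a classical caricature: treating the spins as classical unit vectors on the directly solved $2\times 2$ cluster (and dropping the orbital sums for simplicity), the stripe that is AFM along $x$ and FM along $y$ is favored over its $90^\circ$-rotated partner whenever $J_x > J_y$. A minimal sketch:

```python
import numpy as np

Jx, Jy = 0.04, 0.01  # eV, intra-cluster couplings used in the text

def cluster_energy(spins, Jx, Jy):
    """Classical Heisenberg energy of a 2x2 cluster; spins[i][j] is a unit vector."""
    E = 0.0
    for j in range(2):
        E += Jx * np.dot(spins[0][j], spins[1][j])   # two bonds along x
    for i in range(2):
        E += Jy * np.dot(spins[i][0], spins[i][1])   # two bonds along y
    return E

up, dn = np.array([0, 0, 1.]), np.array([0, 0, -1.])
stripe_x = [[up, up], [dn, dn]]   # AFM along x, FM along y: (pi,0)-type
stripe_y = [[up, dn], [up, dn]]   # AFM along y, FM along x: (0,pi)-type
print(cluster_energy(stripe_x, Jx, Jy))  # -2*Jx + 2*Jy = -0.06
print(cluster_energy(stripe_y, Jx, Jy))  # +2*Jx - 2*Jy = +0.06
```

This is only the intra-cluster bias; in the full calculation the inter-cluster kinetic energy and the onsite interactions decide whether and how this preference shows up in the spectral density.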
We compare this magnetic symmetry breaking addressing the anisotropy
of the magnetic state to a an orbital symmetry breaking addressing the
symmetry between the $xz$ and $yz$ orbitals. The latter is implemented
as a difference in onsite energy
\begin{equation}\label{eq:oo}
H_{\textrm{orb}} = \Delta \sum_{\bf i}(n_{{\bf i},yz}-n_{{\bf i},xz}),
\end{equation}
where $\Delta>0$ favours occupation of the $xz$ orbital, as proposed as
an explanation of the anisotropic spectral
density~\cite{oo_nematic_2011}. Finally, as a third way to induce an
anisotropy, we make hoppings along one lattice direction five to ten
percent larger.
Following~\cite{Akw_nematic}, this Hamiltonian is treated with
the variational cluster approach. This method allows to include
correlations within a small cluster (4 sites for the
three-orbital model discussed here), which is solved almost exactly with
Lanczos exact diagonalization. Hopping between the clusters is then
included as a
perturbation~\cite{Gros:1993p2667,Senechal:2000p2666}. Long-range
order can be treated by optimizing the grand
potential with respect to fictitious ordering
fields~\cite{Aic03,Dahnken:2004p2409}, the SDW state of a two-band
model for pnictides has been studied with this
approach~\cite{Daghofer:2008,Yu:2009p2127}. In fact, all
parameters of the one-particle part of the Hamiltonian can in principle
be optimized, we found here that optimizing an overall fictitious
chemical potential is necessary near the SDW transition to obtain a
stable solution.
\section{Results}\label{sec:results}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Ak_3b_U102_ho01}
\caption{Spectral density $A({\bf k},\omega)$ for onsite interactions
near the SDW transition ($U=1.02\;\textrm{eV}$, $J=U/4$) and a
phenomenological field $\Delta=0.1\;\textrm{eV}$, see
(\ref{eq:oo}), breaking the orbital $xz$/$yz$ symmetry and
favouring the $xz$ orbital. Solid lines give the noninteracting
bands in terms of the pseudo-crystal momentum ${\bf
\tilde{k}}$; shading gives the spectral weight of
the interacting system for `lab-space' momentum ${\bf k}$, $xy$
weight is thus shifted by $(\pi,\pi)$ with respect to ${\bf
\tilde{k}}$. In
the online colour figure, red, blue and green shading illustrate
spectral weight in the $xz$, $yz$, and $xy$ orbitals respectively.\label{fig:Akw_oo}}
\end{figure}
It has been shown that both orbital ordering~\cite{oo_nematic_2011} and anisotropic magnetic
correlations~\cite{Akw_nematic} can reproduce the band distortions in
a manner broadly consistent with ARPES, i.e., the $yz$ states at
$X=(\pi,0)$ move to higher energies than the $xz$ states at
$Y=(0,\pi)$. Differences between these two ways of breaking rotational
lattice symmetry mostly affect states around $\Gamma=(0,0)$, where an
orbital energy splitting induces stronger distortions than the
magnetic fluctuations in the absence of onsite
interactions~\cite{Akw_nematic}. As can be seen by comparing
figures~\ref{fig:Akw_oo} and~\ref{fig:Akw_AF_AF}, this remains
true when onsite interactions push the system closer to the SDW
transition. One may also note that despite the sizable onsite
interactions, one still needs an orbital energy splitting
$\Delta=0.1\;\textrm{eV}$ to induce the band anisotropy between $X$
and $Y$ as seen
in figure~\ref{fig:Akw_oo}. The anisotropy in the final band structure is not even extreme here,
with
the $yz$ states not quite reaching the Fermi level, and $\Delta$ of a
similar order of magnitude induces comparable distortions in non-interacting
bands~\cite{oo_nematic_2011,Akw_nematic}. This is in contrast to short-range magnetic couplings that
become more effective at distorting the bands when onsite interactions
are switched on~\cite{Akw_nematic}. Onsite interactions thus enhance
the tendency to short-range magnetic correlations but do not
appear to strengthen tendencies to a direct orbital energy splitting
in a comparable manner. We did not find spontaneous symmetry breaking
between the $xz$ and $yz$ orbitals, which can be investigated with the
variational cluster approach~\cite{Aic03,Dahnken:2004p2409}.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Ak_3b_U102_AF_AF_xz_yz_xy}
\caption{As figure~\ref{fig:Akw_oo}, but without orbital energy
splitting. Instead, there are anisotropic AFM couplings, see
(\ref{eq:Heisenberg}), acting
within the directly solved four-site cluster, i.e., on a very short
distance only. $J_x = 0.04\;\textrm{eV} \gg J_y = 0.01\;\textrm{eV}$ \label{fig:Akw_AF_AF}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Ak_3b_U102_t2_10_xz_yz_xy}
\caption{As figure~\ref{fig:Akw_oo}, but without orbital energy
splitting. Instead, there is a
phenomenological difference in hopping parameter $t_2$: it is
$10\;\%$ larger along $y$-direction. \label{fig:Akw_t2}}
\end{figure}
Figure~\ref{fig:Akw_t2} illustrates an alternative way to induce an
anisotropy: The hopping parameter $t_2$ is chosen $10\;\%$ larger along
the $y$-direction, a rather large hopping anisotropy. For non-interacting
electrons (not shown), this simply increases the dispersion somewhat, but does not
raise the states at $Y$. As can be seen in figure~\ref{fig:Akw_t2}, the
interplay of the hopping anisotropy with onsite interactions, which favour a $(\pi,0)$ or $(0,\pi)$
SDW, distorts the bands: The
$yz$ band going from $\Gamma$ to $X$ has become very incoherent, and
the only remaining coherent states at $X$ are above the Fermi
level. It should be noted that the effect of an
anisotropy in the various hopping parameters is not uniform: it is
largest for $t_2$, which is the larger NN hopping entering the kinetic
energy of the $xz$ and $yz$ orbitals, see (\ref{eq:ekin}). $10\;\%$
anisotropy of the other NN hoppings $t_1$, $t_5$ and $t_7$ leads to far
smaller effects (not shown).
Finally, we take a closer look at the short-range magnetic
interactions that were shown to induce band anisotropies
in~\cite{Akw_nematic}, where the AFM (along $x$) and FM
(along $y$) couplings were chosen with opposite signs
but equal magnitude. By varying the two
parameters independently, we found that it is not necessary to have
one FM and one AFM direction. For example, figure~\ref{fig:Akw_AF_AF}
shows a spectrum obtained for a case where both couplings are AFM, but
the one along $x$ is much stronger ($J_x=0.04\;\textrm{eV}$) than the
one along $y$ ($J_y=0.01\;\textrm{eV}$). The AFM couplings act only
within the directly solved cluster, i.e., they favour AFM bonds more
along $x$ than along $y$. The clusters are coupled within
cluster-perturbation theory only via the kinetic energy, i.e., there
is no long-range magnetic order. The anisotropic signatures seen in
the spectral density are comparable to those found for
$J_x=-J_y=0.015\;\textrm{eV}$~\cite{Akw_nematic}. As
the order parameter for nematic order is of higher order, its
spontaneous symmetry breaking cannot be studied in
cluster-perturbation theory.
\section{Discussion and Conclusions}\label{sec:conclusions}
We investigated how various mechanisms of breaking the four-fold
lattice symmetry of a three-orbital model for Fe-As planes manifest
themselves in the spectral density. Onsite interactions bring the
system here close to a SDW transition, where
short-range magnetic couplings have previously been found to be more
effective at breaking rotational symmetry than in the non-interacting
system. We find that an orbital energy splitting and anisotropic
hoppings can lead to qualitatively similar features in the interacting
bands as short-range magnetic correlations: the states at $X$ can be
found at higher energies than those at $Y$ and can even move up to the
Fermi level, see also previous
studies~\cite{oo_nematic_2011,Akw_nematic}. The band/orbital
anisotropy near the Fermi surface can be similarly pronounced both in cases with
a strong total orbital polarization $n_{xz}-n_{yz}\approx 0.38$ (found
for $\Delta =0.1\;\textrm{eV}$ as in figure~\ref{fig:Akw_oo}) and with
nearly vanishing polarization $n_{xz}-n_{yz}\approx 0.02$ (found
for the anisotropic AFM couplings as in
figure~\ref{fig:Akw_AF_AF}). A splitting of $\approx
100\;\textrm{meV}$ between the $xz$ at $Y$ and the $yz$ states at
$X$, which moves the latter close to or just above the chemical
potential, can be induced by (i) an orbital energy splitting of
$\Delta
=0.1\;\textrm{eV}$, (ii) $10\;\%$ anisotropy in the hopping
parameter $t_2$ and (iii) a magnetic anisotropy of $30\;\textrm{meV}$
between the $x$ and $y$ directions.
There are differences in the results obtained in the three scenarios: In the case of an
orbital splitting, the changes around $\Gamma=(0,0)$ are more
pronounced than in either the magnetic scenario or the scenario with
anisotropic hopping. In the latter case, the spectra appear less
coherent even near the Fermi level, and the $yz$ band going from
$\Gamma$ to $X$ almost disappears except for states very close to $X$
that are above the Fermi level. Comparing to ARPES~\cite{Yi:PNAS2011,ARPES_NaFeAs11,He:2010pNaFeAs,ZhangARPES_NaFeAs_2011}, this lack of
coherence does not appear to be in good agreement. Distinction between
the other two scenarios is more difficult and has previously been
discussed for an orbital energy difference in the non-interacting
model~\cite{Akw_nematic}. The conclusions drawn there remain valid for
the interacting model: ARPES data look more consistent with slighter
changes around $\Gamma$ than those arising from an orbital energy
difference large enough to raise the states at $X$ to the Fermi level. However, it has to be stressed that
both our model (including only the three most important orbitals) and
our method (where the impact of onsite correlations is only treated
exactly within a very small four-site cluster) imply substantial
approximations. The most robust conclusion to be drawn might thus be
that a decision between the scenarios based on experimental ARPES data
remains difficult. Finally, it is also conceivable that different
mechanisms are driving the breaking of the fourfold lattice symmetry
in various compounds of the pnictide family.
\ack
This research was supported by the DFG
under the Emmy-Noether programme and via the Graduate Training Group
GRK 1621.
\section*{References}
\section{Introduction}
Optimization of curvature and higher-order regularizers, in general, has significant potential in segmentation,
stereo, 3D reconstruction, image restoration, in-painting, and other applications. It is widely known as
a challenging problem with a long history of research in computer vision. For example, when Geman and Geman
introduced MRF models to computer vision \cite{geman-geman-pami-1984} they proposed first- and second-order
regularization based on {\em line process}. The popular {\em active contours} framework \cite{Kass:88} uses elastic (first-order)
and bending (second-order) energies for segmentation. Dynamic programming was used for curvature-based inpainting \cite{MasnouMorel:98}. Curvature was also studied within PDE-based \cite{ChanShen:01}
and {\em level-sets} \cite{DroskeRumpf:04} approaches to image analysis.
Recently there has been a revival of interest in second-order smoothness for discrete MRF settings.
Due to the success of global optimization methods for first-order MRF models \cite{boykov-etal-pami-2001,ishikawa-pami-2003}
researchers now focus on more difficult second-order functionals \cite{woodford2009} including various discrete
approximations of curvature \cite{schoenemann-etal-ijcv-2012,elzehiry-grady-cvpr-2010,strandmark-kahl-emmcvpr-2011}.
Similarly, recent progress on global optimization techniques for first-order continuous geometric functionals \cite{Nikolova:SIAM06,Pock:SIAM10,Lellmann:SIAM11,yuan:eccv10} has lead to extensions for curvature \cite{Pock:JMIV12}.
Our paper proposes new discrete MRF models for approximating curvature regularization terms like $\int_{\partial S} |\kappa| d\sigma$.
We primarily focus on the absolute curvature. Unlike length or squared curvature regularization, this term does not add
shrinking or ballooning bias.
Our technique evaluates curvature using small patches either on a grid or on
a cell complex, as illustrated in Fig.\ref{fig:gridVScomplex}. In case of a grid, our patches use a novel {\em integral geometry}
approach to evaluating curvature. In case of a complex, our patch-based approach can use standard geometry for evaluating
curvature. The relationship to previous discrete MRF models for curvature is discussed in Section \ref{sec:related_curvature}.
We also propose a very simple and efficient optimization technique, {\em partial enumeration}, directly applicable
to curvature regularization and many other complex (e.g. high-order or non-submodular) problems.
Partial enumeration aggregates the graph nodes within some overlapping patches. While the label space of each patch
is larger compared to individual nodes, the interactions between the patches become simpler.
Our approach can reduce high-order discrete energy formulations to pair-wise {\em Constraint Satisfaction Problem}
with unary costs (uCSP). The details of our technique and related work are in Section~\ref{sec:graphconst}.
\begin{figure*}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[height=57mm]{par_en_1} &
\includegraphics[height=57mm]{par_en_2} &
\includegraphics[height=57mm]{par_en_3} \\
(a) active contours & (b) tiered labeling & (c) more general
\end{tabular}
\end{center}
\caption{Examples of {\em partial enumeration}. Some instances of partial enumeration can be found in
prior art: the third-order active contour model and the {\em tiered labeling} energy on a grid
\cite{felzenszwalb-veksler-cvpr-2010} are reduced to pairwise energies on a chain that can be optimized
by dynamic programming (a-b). The {\em junction tree} approach to energy minimization \cite{Koller:09} can also be seen as a specific form of
partial enumeration. In general, we show that partial enumeration can be useful for simplifying
a wide class of complex (non-submodular or high-order) optimization problems without reducing them
to a chain or a tree. For example (c), if the high-order factors on the upper grid are covered by overlapping 2x2 patches, then
the equivalent optimization problem on the lower graph needs only pairwise interactions
enforcing consistency between the super-nodes representing the patches. }
\label{fig:PEexamples}
\end{figure*}
Some specific examples of {\em partial enumeration} can be found in prior art. For example,
to optimize a {\em snake} with a bending (3rd-order) energy it is customary to combine each pair of adjacent control points
into a single super-node, see Fig.\ref{fig:PEexamples}(a). If the number of states for each control point is $m$ then the number
of states for the super-node is $m^2$. Thus, the label space has increased. On the other hand, the 3rd-order bending energy on the original snake is simplified to a pair-wise energy on the chain of super-nodes, which can be
efficiently solved with dynamic programming in $O(nm^3)$. Analogous simplification of a high-order {\em tiered labeling}
energy on a grid graph to a pair-wise energy on a chain was proposed in \cite{felzenszwalb-veksler-cvpr-2010},
see Fig.\ref{fig:PEexamples}(b). Their approach can also be seen as a special case of partial enumeration, even though
non-overlapping ``patches'' are sufficient in their case.
We study {\em partial enumeration} as a more general technique for simplifying complex
(non-submodular or high-order) energies without necessarily reducing the problems to chains/trees,
see Fig.\ref{fig:PEexamples}(c).
Our contributions can be summarized as follows:
\begin{itemize} \vspace{-1ex}
\item simple patch-based models for curvature \vspace{-1.5ex}
\item integral geometry technique for evaluating curvature \vspace{-1.5ex}
\item easy-to-implement partial enumeration technique reducing patch-based MRF models
to a pairwise {\em Constraint Satisfaction Problem} with unary costs directly addressable with
many approximation algorithms \vspace{-1.5ex}
\item our uCSP modification of TRWS outperforms several alternatives
producing near-optimal solutions with smaller optimality gap and shorter running times \vspace{-1ex}
\end{itemize}
The experiments in Sections \ref{sec:graphconst} and \ref{sec:exp} show that our patch-based technique obtains
state-of-the-art results not only for curvature-based segmentation, but also for high-order stereo
and deconvolution problems.
\begin{figure*}[htb]
\begin{center}
\small
\begin{tabular}{c@{\hspace{5ex}}c}
(a) curvature patches on a cell complex (basic geometry) &
(c) curvature patches on a pixel grid (integral geometry) \\[1ex]
\includegraphics[width=50mm]{curvature_complex} &
\includegraphics[width=55mm]{curvature_grid} \\[1ex]
(b) our cell-complex patches (8-connected), & (d) our pixel-grid patches (3x3), \\
up to symmetries, and resulting segmentation. & up to symmetries, and resulting segmentation.
\end{tabular}
\begin{tabular}{cc@{\hspace{8ex}}cc}
\includegraphics[height=17mm]{curvature_cell_patches} &
\includegraphics[width=35mm]{cameraman8complex} &
\includegraphics[height=17mm]{curvature_pixel_patches} &
\includegraphics[width=35mm]{cameraman3x3}
\end{tabular}
\end{center} \vspace{-3ex}
\caption{\small Evaluating curvature of a segment on a complex (a,b) and on a grid (c,d)
using standard and integral geometry.
At sufficiently high resolution, any segment $C$ is a polygon. Thus, to minimize
curvature functionals like $\int_C |\kappa| ds$ we need to evaluate all corners.
We use (overlapping) patches created for each vertex on a complex (a) and for each pixel on a grid (c).
A patch on a complex (a,b) consists of all cells adjacent to a vertex and a grid patch (c,d) is a square window
centered at a pixel. For $\pi/4$ precision, as on the 8-complex (a), we use 3x3 windows on the grid (c).
For finer $\pi/8$ precision as on 16-complexes, we use 5x5 windows.
Note that each corner on a complex (a) can be directly evaluated from a configuration (labeling)
of a single patch using standard geometry. However, each corner on a grid (c) should be evaluated using
integral geometry by summing over multiple patches covering the corner.
Patch configurations in black occur at straight boundaries and should contribute zero weights.
Patch configurations in red correspond to curved boundaries. The weights $A,\dots,H$ for all such configurations (d)
can be pre-computed from a system of linear equations for all types of corners.
The accuracy of integral geometry approach to curvature on a grid is comparable
to the standard basic geometry used on complexes, see (b) and (d).}
\label{fig:gridVScomplex}
\vspace{-3mm}
\end{figure*}
\section{Curvature on patches and related work} \label{sec:related_curvature}
We discuss approximation of curvature in the context of binary segmentation
with regularization energy
\begin{equation} \label{curvatureint}
E(S) = \int_{\text{int}(S)} f(x) \ dx + \int_{\partial S} \lambda|\kappa| d\sigma,
\end{equation}
where $\kappa$ is curvature, $\lambda$ is a weighting parameter, and unary potential $f(x)$ is a data term.
Our grid-patches in Fig.\ref{fig:gridVScomplex}(c) and our complex-patches in Fig.\ref{fig:gridVScomplex}(a)
can be seen as ``dual'' methods for estimating curvature in exactly the same way as {\em geo-cuts} \cite{BK:iccv03}
and complex-based approach in \cite{Sullivan:thesis92} are ``dual'' methods for evaluating geometric length.
Our grid-patch approach to curvature extends ideas in {\em geo-cuts} \cite{BK:iccv03} that showed how discrete MRF-based regularization methods can use {\em integral geometry} to accurately approximate length via Cauchy-Crofton formula.
We show how general integral geometry principles can also be used to evaluate curvature, see Fig.\ref{fig:gridVScomplex}(c).
The complex-patch technique in Fig.\ref{fig:gridVScomplex}(a) uses an alternative method for approximating
curvature based on standard geometry as in \cite{schoenemann-etal-ijcv-2012,elzehiry-grady-cvpr-2010,strandmark-kahl-emmcvpr-2011}.
Our patch-based curvature models could be seen as extensions of {\em functional lifting}
\cite{Pock:JMIV12} or {\em label elevation} \cite{olsson:CVPR12}. Analogously to the {\em line processes} in
\cite{geman-geman-pami-1984}, these second-order regularization methods use variables
describing both location and orientation of the boundary. Thus, their curvature is the first-order (pair-wise) energy.
Our patch variables include enough information about the local boundary to reduce the curvature to unary terms.
Curvature is also reduced to unary terms in \cite{schoenemann-etal-ijcv-2012} using auxiliary variables for
each pair of adjacent {\em line processes}. Their integer LP approach to curvature is formulated over a large number of
binary variables defined on fine geometric primitives (vertexes, faces, edges, pairs of edges, etc), which are tied
by constraints. In contrast, our unary representation of curvature uses larger scale geometric primitives
(overlapping patches) tied by consistency constraints. The number of corresponding variables is significantly smaller, but
they have a much larger label space. Unlike \cite{schoenemann-etal-ijcv-2012} and our approach,
\cite{elzehiry-grady-cvpr-2010,strandmark-kahl-emmcvpr-2011} represent curvature via high-order interactions/factors.
Despite technical differences in the underlying formulations and optimization algorithms, our patch-based approach
for complexes in Fig.\ref{fig:gridVScomplex}(a) and \cite{schoenemann-etal-ijcv-2012,strandmark-kahl-emmcvpr-2011}
use geometrically equivalent models for approximating curvature. That is, all of these models would produce the same solution,
if there were exact global optimization algorithms for them.
The optimization algorithms for these models do, however, vary in quality, memory use, and run-time efficiency.
In practice, grid-patches are easier to implement than complex-patches
because of the grid's regularity and symmetry.
While integral geometry estimates curvature on a pixel grid as accurately
as the standard geometry on a cell complex, see Figs.\ref{fig:gridVScomplex}(b,d), in practice, our proposed optimization
algorithm for the corresponding uCSP problems works better (with near-zero optimality gap) for the grid version of our method.
To keep the paper focused, the rest of the paper primarily concentrates on grid-based patches.
Grid patches were also recently used for curvature evaluation in \cite{shekhovtsov-etal-dagm-2012}.
Unlike our integral geometry in Fig.\ref{fig:gridVScomplex}(c), their method computes a minimum response over
a number of affine filters encoding some learned ``soft'' patterns. The response to each filter combines
deviation from the pattern and the cost of the pattern. The mathematical justification of this approach to curvature estimation
is not fully explained and several presented plots indicate its limited accuracy. As stated in \cite{shekhovtsov-etal-dagm-2012},
``the plots do also reveal the fact that we consistently overestimate the true curvature cost.''
The extreme ``hard'' case of this method may reduce to our technique if the cost of each pattern
is assigned according to our integral geometry equations in Fig.\ref{fig:gridVScomplex}(c). However, this case
makes redundant the filter-response minimization and the pattern-cost learning, which are the key
technical ideas in \cite{shekhovtsov-etal-dagm-2012}.
\section{Simple Patch-based Optimization} \label{sec:graphconst}
One way to optimize our patch-based curvature model is to formulate the optimization problem on
the original image pixel grid $\langle V,{\mathcal{C}}\rangle$ in Figure~\ref{fig:PEexamples}(c, top grid) using
pixel variables ${\bf x}=\{x_i|i\in V\}$, high-order factors $\alpha\in{\mathcal{C}}$, and energy
\begin{equation}
E({\bf x}) = \sum_{\alpha\in{\mathcal{C}}} E_\alpha ({\bf x}_\alpha)
\label{eq:clusterenergy}
\end{equation}
where ${\bf x}_\alpha=\{x_i|i\in\alpha\}$ is the restriction of ${\bf x}$ to $\alpha$.
Optimization of such high-order energies is generally NP-hard, but a number of existing approximate algorithms
for certain high-order MRF energies could be applied. Our experimental section includes the results of some generic
methods \cite{GTRWS:arXiv12,kahl-strandmark-dam-2012} that have publicly available code.
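As a concrete illustration, the factor-sum structure of \eqref{eq:clusterenergy} can be sketched as follows (a minimal Python sketch; the pixel indices and cost tables below are hypothetical, not the actual curvature factors):

```python
def energy(x, factors):
    """Evaluate E(x) = sum_a E_a(x_a) for a labeling x.

    factors: list of (idx, table) pairs, where idx is the tuple of pixel
    indices of factor a and table maps the restricted labeling x_a
    (a tuple) to its cost E_a(x_a)."""
    return sum(table[tuple(x[i] for i in idx)] for idx, table in factors)

# Hypothetical example: two pairwise factors penalizing disagreement.
ising = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}
factors = [((0, 1), ising), ((1, 2), ising)]
print(energy([0, 1, 1], factors))  # one disagreement -> 1.0
```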
We propose a different approach for optimizing our high-order curvature models that equivalently reformulates
the problem on a new graph, see Figure~\ref{fig:PEexamples}(c, bottom grid). The motivation is as follows.
One naive approach applicable to NP-hard high-order energies on small images
is exhaustive search that enumerates all possible labelings of the underlying pixel graph.
On large problems one can use partial enumeration to simplify high-order problems.
If some set of relatively small overlapping patches covers all high-order factors,
we can build a new graph where nodes correspond to patches and their labels enumerate patch states,
as in Figure~\ref{fig:PEexamples}(c, bottom grid). Note that high-order interactions reduce to unary potentials,
but, due to patch overlap, hard pair-wise consistency constraints must be enforced.
Our general approach transforms a high-order optimization problem to a pair-wise Constraint Satisfaction Problem
with unary costs (uCSP).
Formally, the corresponding energy could be defined on graph $\langle {\mathcal{V}},{\mathcal{E}}\rangle$ in Figure~\ref{fig:PEexamples}(c, bottom grid)
where nodes correspond to a set of patches ${\mathcal{V}}$ with the following property: for every factor $\alpha\in{\mathcal{C}}$
there exists patch $\beta\in{\mathcal{V}}$ such that $\alpha\subseteq\beta$. For example, ${\mathcal{V}}={\mathcal{C}}$ works, but, in general,
patches in ${\mathcal{V}}$ can be bigger than factors in ${\mathcal{C}}$. We refer to nodes in ${\mathcal{V}}$ as super nodes.
Clearly, \eqref{eq:clusterenergy} could be equivalently rewritten as an energy with unary and pairwise terms:
\begin{equation}
E_\text{super}({\bf X}) = \sum_{\alpha \in {\mathcal{V}}} U_\alpha (X_\alpha) + \sum_{(\alpha,\beta) \in {\mathcal{E}}} P_{\alpha \beta}(X_\alpha,X_\beta)
\label{eq:superenergy}
\end{equation}
The label $X_\alpha$ of a super node $\alpha$ corresponds to the state of all individual pixels ${\bf x}_\alpha$ within the patch.
By enumerating all possible pixel states within the patch we can now encode the higher order factor
$E_\alpha(x_\alpha)$ into the unary term $U_\alpha(X_\alpha)$ of \eqref{eq:superenergy}.
The pairwise consistency potential $P_{\alpha\beta}(X_\alpha,X_\beta)=0$ if variables
$X_\alpha$ and $X_\beta$ agree on the overlap $\alpha\cap\beta$, and $P_{\alpha\beta}(X_\alpha,X_\beta)=+\infty$ otherwise.
The set of edges ${\mathcal{E}}$ may contain all pairs
$\{\alpha,\beta\}\subset{\mathcal{V}}$ such that $\alpha\cap\beta\ne\varnothing$, but a smaller set can suffice. For example,
the graph in Figure~\ref{fig:PEexamples}(c, bottom grid)
does not need diagonal edges. A formal procedure for selecting the set of edges
is given in Appendix~\ref{app:lprelaxation}.
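To make the construction concrete, the following sketch (Python; the patch layout and factor costs are hypothetical) builds the unary tables $U_\alpha$ by enumerating patch states and implements the 0/$\infty$ consistency test $P_{\alpha\beta}$:

```python
import itertools

def build_ucsp(patches, factor_energy):
    """Build the uCSP: unary costs per super node and a pairwise
    consistency test for overlapping patches.

    patches: dict patch_id -> tuple of pixel ids it covers.
    factor_energy(p, x): cost of patch p in binary state x (a tuple)."""
    # Unary term: enumerate all binary states of each patch.
    unary = {
        p: {x: factor_energy(p, x)
            for x in itertools.product((0, 1), repeat=len(pix))}
        for p, pix in patches.items()
    }

    def consistent(p, q, xp, xq):
        # 0 if the two patch states agree on every shared pixel, else +inf.
        overlap = set(patches[p]) & set(patches[q])
        ok = all(xp[patches[p].index(i)] == xq[patches[q].index(i)]
                 for i in overlap)
        return 0.0 if ok else float('inf')

    return unary, consistent
```

For two $2\times2$ patches sharing a column, each super node has $2^4=16$ labels and the pairwise term only checks agreement on the two shared pixels.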
Optimization of pairwise energy \eqref{eq:superenergy} can be addressed with standard methods
like~\cite{kolmogorov-pami-2006,Globerson:NIPS07} that can be modified for our specific consistency constraints
to gain significant speed-up (see Sec.\ref{sec:speedup}).
\noindent{\bf LP relaxations~~}
When we apply a method like TRW-S~\cite{kolmogorov-pami-2006} to energy~\eqref{eq:superenergy}, we essentially solve a higher-order relaxation of the original energy~\eqref{eq:clusterenergy}.
Many methods have been proposed in the literature for solving higher-order relaxations, e.g.~\cite{sontag-etal-uai-2008,komodakis-etal-cvpr-2009,Meltzer:UAI09,Werner:PAMI10,GTRWS:arXiv12} to name just a few.
To understand the relation to these methods, in Appendix~\ref{app:lprelaxation} we analyze which specific relaxation is solved by our approach.
We then argue that the complexity of message passing in our scheme roughly matches that
of other techniques that solve a similar relaxation.
\footnote{Message
passing techniques require the minimization of expressions of the form
$E_\alpha({\bf x}_\alpha)+\ldots$ where dots denote lower-order factors.
Here we assume that this expression is minimized by going through all possible labellings ${\bf x}_\alpha$.
This would hold if, for example, $E_\alpha(\cdot)$ is represented by a table (which is the case
with curvature). Some terms $E_\alpha(\cdot)$ used in practice have a special structure
that allow more efficient computations; in this case other techniques may have a better
complexity. One example is {\em cardinality-based potentials}~\cite{Tarlow:AISTATS10} which can have a very high-order.}
In practice, the choice of the optimization method is often motivated by the ease of implementation;
we believe that our scheme has an advantage in this respect, and thus may be preferred by practitioners.
\noindent{\bf Other related work~~}
The closest analogue of our approach is perhaps the ``hidden transformation'' approach~\cite{Bacchus:AI02} that converts
an arbitrary CSP into a pairwise CSP (also known as the ``constraint satisfaction dual problem'').
We provide a {\em weighted} version of this transformation; to our knowledge, this has not been studied yet,
and the resulting relaxation has not been analyzed.
Our method bears some resemblance to the work~\cite{komodakis-etal-cvpr-2009} that also uses square patches.
However, we believe that the relaxation solved in~\cite{komodakis-etal-cvpr-2009} is weaker than ours; details
are discussed in the Appendix~\ref{app:lprelaxation}.
Researchers also considered alternative techniques for converting a high-order energy of binary variables into a pairwise one.
We will compare to one such technique, \cite{kahl-strandmark-dam-2012}, which generalizes roof duality to factors of order 3 and 4.
\subsection{Application to $\pi / 2$-precision curvature}
\begin{figure*}[htb]
\small
\vspace{-2mm}
\begin{center}
\includegraphics[width=25mm]{patchsol2x2lambda1}
\includegraphics[width=25mm]{patchsol2x2lambda2}
\includegraphics[width=25mm]{patchsol2x2lambda3}
\includegraphics[width=25mm]{patchsol2x2lambda4}
\includegraphics[width=25mm]{patchsol2x2lambda5}
\includegraphics[width=25mm]{patchsol2x2lambda6}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{lccccc}
\toprule
$\lambda$ & TRW-S Energy & TRW-S Lower bound & Unlabeled by GRD(-heur) & TRW-S running time & GRD(-heur) running time\\
\midrule
$2.5\cdot10^{-5}$ & $4677$ & $4677$ & $0\%$ ($0\%$) & $0.934$s & $10737$s ($7.08$s)\\
$2.5\cdot10^{-4}$ & $4680$ & $4680$ & $0\%$ ($ 0\%$) & $0.961$s & $9287$s ($10.7$s)\\
$2.5\cdot10^{-3}$ & $4707$ & $4707$ & $0.2\%$ ($0.2\%$) & $2.43$s & $10731$s ($7.32$s)\\
$2.5\cdot10^{-2}$ & $4910$ & $4910$ & $6.7\%$ ($6.8\%$) & $3.98$s & $12552$s ($6.96$s)\\
$2.5\cdot10^{-1}$ & $5833$ & $5833$ & $100\%$ ($100\%$) & $14.3$s & $12337$s ($10.9s$)\\
$2.5\cdot10^{0}$ & $7605$ & $ 7605$ & $100\%$ ($100\%$) & $28.8$s & $7027$s ($22.2$s)\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\small Our results for $2\times2$ curvature with different regularization weight $\lambda$ (top row of images).
TRW-S with super nodes gives a tight lower bound. The figures within parentheses are results when using the heuristics proposed for speedup in \cite{kahl-strandmark-dam-2012}.}
\label{GRDcomparison}
\end{figure*}
\setlength{\tabcolsep}{6pt}
In this section we illustrate our approach on a very coarse approximation of curvature where we only allow boundary edges that are either
horizontal or vertical. It is shown in \cite{elzehiry-grady-cvpr-2010} that the resulting energy can be formulated as in \eqref{eq:clusterenergy} where ${\mathcal{C}}$ contains the edges of an 8-connected neighborhood, see Fig.~\ref{fig:orggraph}.
In contrast we formulate the problem as \eqref{eq:superenergy} where ${\mathcal{V}}$ is the set of all $2\times2$ patches.
Consider the patches in Figure~\ref{basepatches} and their curvature estimates.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{cccccc}
\includegraphics[width=8mm]{patch2x2_1} &
\includegraphics[width=8mm]{patch2x2_2} &
\includegraphics[width=8mm]{patch2x2_3} &
\includegraphics[width=8mm]{patch2x2_4}
\end{tabular}
\end{center}
\vspace{-6mm}
\caption{\small Four of the 16 patch states used to encode curvature with penalties $0$, $\pi/2$, $0$ and $2\pi$ respectively.}
\label{basepatches}
\end{figure}
The patches have 4 pixel boundaries that intersect in the middle of the patch. To compute the curvature contribution of a patch we need to determine which of the 4 pixel boundaries also belong to the segmentation boundary.
If two neighboring pixels (sharing a boundary) have different assignments then their common boundary belongs to the segmentation boundary.
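For $2\times2$ patches this classification is simple enough to write out. A minimal sketch (assuming the four patches in Figure~\ref{basepatches} show the uniform, single-corner, straight-split, and diagonal configurations, in that spirit):

```python
import math

def patch_penalty_2x2(tl, tr, bl, br):
    """Curvature penalty of a single 2x2 binary patch (pixel values in
    {0,1}, row-major: top-left, top-right, bottom-left, bottom-right).
    Uniform patches and straight splits cost 0, a single-corner pixel
    costs pi/2, and the diagonal (checkerboard) pattern costs 2*pi."""
    s = tl + tr + bl + br
    if s in (0, 4):
        return 0.0                               # uniform: no boundary
    if s == 2:
        return 2 * math.pi if tl == br else 0.0  # diagonal vs. straight split
    return math.pi / 2                           # one pixel differs: a corner
```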
Figure \ref{fig:newgraph} shows the approach.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=25mm]{slidingwindow3} &
\includegraphics[width=35mm]{supernodes}
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\small Super nodes formed in a sliding window fashion. The red pixel occurs in 4 super nodes. Pairwise interactions ensure that shared pixels are assigned the same value.}
\label{fig:newgraph}
\end{figure}
We start by forming patches of size $2\times2$ into super nodes. This is done in a sliding window fashion, that is, super node $(r,c)$ consists of the nodes $(r,c)$, $(r,c+1)$, $(r+1,c)$ and $(r+1,c+1)$, where $r$ and $c$ are the row and column coordinates of the pixels.
Each super node label can take 16 values corresponding to states of the individual pixels.
The curvature interaction and data terms of the original problem are now transformed into unary potentials. Note that since patches overlap, a pixel can be contained in up to four super nodes.
In order not to change the energy we therefore weight the contribution from the original unary term, $f(x)$ in \eqref{curvatureint}, to each patch such that the total contribution is 1.
For simplicity, we give a pixel that occurs in $k$ super nodes the weight $1/k$ in each of them.
Finally, to ensure that each pixel takes the same value in all the super nodes containing it, we add the ``consistency'' edges ${\mathcal{E}}$ between neighboring super nodes (see Fig.~\ref{fig:newgraph}).
Note it is enough to use a 4-connected neighborhood.
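The $1/k$ weighting can be sketched as follows (Python; counts how many $2\times2$ sliding-window patches cover each pixel of a hypothetical $H\times W$ grid):

```python
def patch_counts_2x2(H, W):
    """For each pixel (r, c), the number k of 2x2 sliding-window patches
    containing it; the data term f(x) then gets weight 1/k in each patch.
    Patch (r, c) covers pixels (r..r+1, c..c+1) for 0 <= r <= H-2."""
    def k(r, c):
        rows = min(r, H - 2) - max(r - 1, 0) + 1   # valid patch rows
        cols = min(c, W - 2) - max(c - 1, 0) + 1   # valid patch cols
        return rows * cols
    return [[k(r, c) for c in range(W)] for r in range(H)]

# Interior pixels lie in 4 patches, edge pixels in 2, corner pixels in 1,
# so the per-patch weights 1/k sum to 1 for every pixel.
```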
Our two approaches from Figure~\ref{fig:gridVScomplex} and \cite{schoenemann-etal-ijcv-2012,elzehiry-grady-cvpr-2010,strandmark-kahl-emmcvpr-2011} all assign the same curvature costs for the patches in Figure~\ref{basepatches}.
Therefore, assuming that the global minimum can be found, they yield the same solution for $\pi/2$-precision curvature.
\subsection{Efficient Message Passing} \label{sec:speedup}
Since the number of labels can be very large when we have higher-order factors, it is essential to compute messages fast.
The messages sent during optimization have the form
\begin{equation}
m_{\alpha\beta}^t (X_\beta) = \min_{X_\alpha} (P_{\alpha\beta}(X_\alpha,X_\beta) + h(X_\alpha)),
\end{equation}
where $h$ is some function of super node label $X_\alpha$.
To compute the message we order the labels of both nodes $\alpha$ and $\beta$ into (disjoint) groups according to the assignments of the shared pixels.
The message values $m_{\alpha\beta}^t(X_\beta)$ for all $X_\beta$ in the same group can then be found by searching for the smallest value of $h(X_\alpha)$ in the group consistent with $X_\beta$.
The label order depends on the direction of the edge between $\alpha$ and $\beta$, however it does not change during optimization and can therefore be precomputed at startup.
The bottleneck is therefore searching the groups for the minimal value which can be done in linear time.
Note that this process does not require that all the possible patch assignments are allowed.
For larger patches (see Section~\ref{superduper}) some of the patch states may not be of interest to us and the corresponding labels can simply be removed.
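A minimal sketch of this grouped message computation (Python; patch states and overlap indices are hypothetical placeholders for the precomputed label order):

```python
from collections import defaultdict

def pass_message(labels_a, labels_b, shared_a, shared_b, h):
    """Compute m(X_b) = min over X_a of P(X_a, X_b) + h(X_a), where P is
    the 0/inf consistency potential. Labels are patch states (tuples);
    shared_a / shared_b index the overlapping pixels within each patch.
    Grouping by the shared assignment makes the whole pass linear."""
    overlap = lambda lab, idx: tuple(lab[i] for i in idx)
    best = defaultdict(lambda: float('inf'))
    for a in labels_a:                      # one linear sweep over alpha
        k = overlap(a, shared_a)
        best[k] = min(best[k], h(a))
    # each beta label just looks up its group's minimum
    return [best[overlap(b, shared_b)] for b in labels_b]
```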
Figure \ref{GRDcomparison} compares our approach for $\pi/2$-precision curvature to Generalized Roof Duality (GRD) \cite{kahl-strandmark-dam-2012}.
We used TRW-S \cite{kolmogorov-pami-2006} with $2\times 2$ patches and our efficient message passing scheme.
Our approach gives no duality gap.
\subsection{Lower Bounds using Trees}
As observed in \cite{elzehiry-grady-cvpr-2010} the $2\times2$ curvature interaction reduces to pairwise interactions between all the pixels in the patch.
In this discrete setting \eqref{curvatureint} reduces to \eqref{eq:clusterenergy}
where ${\mathcal{C}}$ consists of the edges of the underlying (8-connected) graph, see Figure \ref{fig:orggraph}.
Therefore it could in principle be solved using roof duality (RD) \cite{rother-etal-cvpr-2007} or TRW-S \cite{kolmogorov-pami-2006}. (Note that this is only true for this particular neighborhood and the corresponding interaction penalty.)
However, it may still be useful to form super nodes.
Methods such as \cite{kolmogorov-pami-2006} work by decomposing the problem into subproblems on trees and combining the results into a lower bound on the optimal solution.
Sub-trees with super nodes are in general stronger than regular trees.
\begin{figure}[htb]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=20mm]{graph2} &
\includegraphics[width=20mm]{tree}
\end{tabular}
\end{center}
\vspace{-5mm}
\caption{\small {\em Left}: 8-connected graph. {\em Right}: Sub-tree ${\mathcal{T}}$.}
\label{fig:orggraph}
\end{figure}
\vspace{-5mm}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=60mm]{curv_subtrees}
\end{center}\vspace{-3mm}
\caption{\small
${\mathcal{T}}_{2\times2}$ contains two copies (red and green) of ${\mathcal{T}}$.}
\label{fig:newtree}
\end{figure}
\begin{figure*}[htb]
\begin{center}
\begin{tabular}{cccc}
(a) & (b) & (c) & (d) \\
\includegraphics[width=0.2\linewidth,height=0.2\linewidth]{circle} &
\includegraphics[width=0.2\linewidth,height=0.2\linewidth]{circle2x2} &
\includegraphics[width=0.2\linewidth,height=0.2\linewidth]{circle3x3} &
\includegraphics[width=0.2\linewidth,height=0.2\linewidth]{circle5x5}
\end{tabular}
\end{center}\vspace{-5mm}
\caption{\small Segmentation results on a $81\times81$ pixel image using different patch sizes with the same very high regularization weight ($\lambda = 20$). (a) Data term; (b) $2 \times 2$ patches clearly favor horizontal and vertical boundaries; (c) $3 \times 3$ patches favor directions that are multiples of $\pi/4$; (d) $5 \times 5$ patches favor directions that are multiples of $\pi/8$.}
\label{circle}
\end{figure*}
Consider for example the sub-tree ${\mathcal{T}}$ in Figure~\ref{fig:orggraph}. We can form a similar sub-tree ${\mathcal{T}}_{2\times2}$ using the super nodes, see Figure~\ref{fig:newtree}.
Note that the edges that occur twice within the super nodes have half the weight of the corresponding edges in Figure~\ref{fig:orggraph}.
Looking inside the super nodes and considering the consistency edges, we see that we can find two instances of ${\mathcal{T}}$ within ${\mathcal{T}}_{2 \times 2}$
(see Figure~\ref{fig:newtree}), both with weight 1/2 (here the edges that have weight 1 are allowed to belong to both trees).
Hence, if we view these two instances as independent and optimize them, we get the same energy as optimization over ${\mathcal{T}}$ would give.
In addition there are other edges present in ${\mathcal{T}}_{2\times2}$, and therefore this tree gives a stronger bound.
In a similar way, we can construct even stronger trees by increasing the patch size further (even though the interactions might already be contained in the patches).
If we group super nodes in a sliding window approach we obtain a graph with $3\times3$ patches, see Figure~\ref{fig:graph33}. (We refer to the new nodes as super-duper nodes.)
If we keep repeating this process we will eventually end up enumerating the entire graph, so it is clear that the resulting lower bound will eventually approach the optimum.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=40mm]{superdupernode}
\end{center}\vspace{-5mm}
\caption{\small Super-duper nodes containing patches of size $3 \times 3$ are created by grouping super nodes of size $2 \times 2$ in groups of $2 \times 2$ in a sliding window fashion.}
\label{fig:graph33}
\end{figure}
In Table~\ref{grady-approach} the same problem as in Figure~\ref{GRDcomparison} is solved using TRW-S without forming any super nodes.
\setlength{\tabcolsep}{4pt}
\begin{table}[htb]
\small
\begin{center}
\begin{tabular}{lrr}
\toprule
$\lambda$ & Energy & Lower bound\\
\midrule
$2.5\cdot10^{-5}$ & 4677 & 4677\\
$2.5\cdot10^{-4}$ & 4680 & 4680\\
$2.5\cdot10^{-3}$ & 4709 & 4705\\
$2.5\cdot10^{-2}$ & 5441 & 4501\\
$2.5\cdot10^{-1}$ & 16090& -16039\\
$2.5\cdot10^{0}$ & 15940 & -19990\\
\bottomrule
\end{tabular}
\end{center}\vspace{-5mm}
\caption{\small Same as in Figure~\ref{GRDcomparison} without super nodes.}
\label{grady-approach}
\end{table}
\setlength{\tabcolsep}{6pt}
\subsection{Application to $\pi/4$ and $\pi/8$ precision curvature}\label{superduper}
For patches of size $2 \times 2$ it is only possible to encourage horizontal and vertical boundaries.
Indeed, along a diagonal boundary edge all $2\times2$ patches look like the second patch in Figure~\ref{basepatches}.
To make the model more accurate and include directions that are multiples of $\pi/4$ radians we will look at patches of a larger size, see Figure \ref{fig:gridVScomplex}(c).
For multiples of $\pi/4$ radians it is enough to have $3 \times 3$ patches and for $\pi/8$ radians we use patches of size $5 \times 5$.
However, the number of distinct patch labels needed to encode the interactions (transitions between directions) is quite high;
it is not feasible to determine their costs by hand.
To compute the specific label costs we generate representative windows of size slightly larger than the patch
(for $3\times3$ patches we use $5 \times 5$ windows) that contain either a straight line or a transition between two directions
of known angle difference.
From this window we can determine which super node assignments occur in the vicinity of different transitions.
We extract all the assignments and constrain their sum, as shown in Figure \ref{fig:gridVScomplex}, to be the known curvature of the window.
Furthermore, we require that the cost of each label is positive.
If a certain label is not present in any of the windows we do not allow this assignment.
This gives us a set of linear equalities and inequalities for which we can find a solution (using linear programming).
The procedure gives 122 and 2422 labels for the $3 \times 3$ and $5 \times 5$ cases respectively.
More details are given in Appendix~\ref{app:patchcost}.
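The cost-fitting step can be sketched as a small feasibility LP (a toy instance with hypothetical counts and curvatures; the real system has one unknown per patch label and one equation per training window). This sketch assumes SciPy's `linprog` is available:

```python
from scipy.optimize import linprog

# Toy version of the cost-fitting LP (all numbers hypothetical).
# Unknowns: non-negative costs w_j, one per patch-label type.
# Each training window i contributes one equality constraint:
#   sum_j counts[i][j] * w_j = known curvature of window i.
counts = [[2, 0, 1],         # pattern occurrences in window 0
          [0, 1, 1]]         # pattern occurrences in window 1
curv = [1.57, 3.14]          # known window curvatures

res = linprog(c=[0.0, 0.0, 0.0],        # feasibility problem: any w works
              A_eq=counts, b_eq=curv,
              bounds=[(0, None)] * 3)   # costs must be non-negative
w = res.x                               # one consistent cost assignment
```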
Figure~\ref{circle} illustrates the properties of the different patch sizes.
Here we took an image of a circle and segmented it using the 3 types of approximating patches.
Note that there is no noise in the image, so simply truncating the data term would give the correct result.
We segment this image using a very high regularization weight ($\lambda = 20$).
In (b) horizontal and vertical boundaries are favored since these have zero regularization cost.
In (c) and (d) the number of directions with zero cost increases, and the approximation therefore improves with the patch size.
Figure~\ref{cameraman} shows real segmentations with the different patch sizes
(with $\lambda = 1$).
Table~\ref{camtable} shows energies, lower bounds and execution times for a couple of methods.
Note that all methods except GTRW-S use our super node construction, here we use the single separator implementation \cite{GTRWS:arXiv12}.
Thus, GTRW-S solves a weaker relaxation of the energy (this is confirmed by the numbers in Table~\ref{camtable}).
GTRW-S requires specifying all label combinations for each factor. For the patch assignments that we do not use to model curvature we specify a large cost (1000) to ensure that these are not selected.
Furthermore, TRW-S and Loopy belief propagation (LBP) both use our linear time message computation. For comparison TRW-S (g) uses general message computation.
All algorithms have an upper bound of 10,000 iterations. In addition, for TRW-S and GTRW-S the algorithm converges if the lower bound stops increasing.
For MPLP \cite{sontag-etal-uai-2008} we allow 10,000 iterations of clustering and we stop running if the duality gap is less than $10^{-4}$.
Figure~\ref{timecurves} shows convergence plots for the $2\times2$ case.
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.2\linewidth]{cameraman.jpg}
\includegraphics[width=0.2\linewidth]{cameraman_2x2_lambda1.png}
\includegraphics[width=0.2\linewidth]{cameraman_3x3_lambda1.png}
\includegraphics[width=0.2\linewidth]{cameraman_5x5_lambda1.png}
\end{center}\vspace{-4mm}
\caption{\small Segmentation of the camera man using (from left to right) $2 \times 2$, $3 \times 3$ and $5 \times 5$ patches with $\lambda = 1$.}
\label{cameraman}
\vspace{-4mm}
\end{figure*}
\begin{table*}[htb]
\small
\subfloat[$2\times 2$ patches.]{
\begin{tabular}{lrrr}
\toprule
& Energy & Lower bound & Time (s) \\
\midrule
TRW-S &1749.4 & 1749.4 & 21\\
TRW-S (g) & 1749.4 & 1749.4 & 1580 \\
MPLP & 1749.4 & 1749.4 & 6584 \\
LBP & 2397.7 & & 1565 \\
GTRW-S & 1867.9 & 1723.8 & 2189 \\
\bottomrule
\end{tabular}
} %
\hfill%
\subfloat[$3\times 3$ patches.]{
\begin{tabular}{rrrr}
\toprule
Energy & Lower bound & Time (s) \\
\midrule
1505.7 & 1505.7 & 355 \\
1505.7 & 1505.7 & 41503 \\
$\ddagger$ & $\ddagger$ & $\ddagger$ \\
$*$ & & 3148 \\
99840 & 1312.6 & 10785 \\
\bottomrule
\end{tabular}
} %
\subfloat[$5\times 5$ patches.]{
\begin{tabular}{rrr}
\toprule
Energy & Lower bound & Time (s) \\
\midrule
1417.0 & 1416.6 & 8829 \\
$\ddagger$ & $\ddagger$ & $\ddagger$ \\
$\ddagger$ & $\ddagger$ & $\ddagger$ \\
$*$ & & $157532$ \\
$\ddagger$ & $\ddagger$ & $\ddagger$ \\
\bottomrule
\end{tabular}
}
\caption{\small Cameraman ($256 \times 256$ pixels) with $\lambda = 1$, run with different patch sizes. The resulting segmentations can be seen in Figure~\ref{cameraman}.
$(\ddagger)$ Creating the problem instance not feasible due to excessive memory usage. $(*)$ Inconsistent labeling.}
\label{camtable}
\end{table*}
\begin{figure}[htb]
\graphicspath{{.}}
{\scriptsize
\def\svgwidth{1\linewidth}
\import{.}{allcurve_tikz.pdf_tex}
}
\vspace{-5mm}
\caption{\small Logarithmic time plot for energy and lower bound over time for the $2\times2$ experiment in Table~\ref{camtable}. For MPLP and GTRW-S we only show the final energy as a dot.}
\label{timecurves}
\end{figure}
\section{Other Applications}\label{sec:exp}
Our framework not only works for curvature regularization but applies to a general class of problems.
In this section we test our partial enumeration approach on problems other than curvature regularization.
\subsection{Binary Deconvolution}
Figure \ref{deconvolution} (a) shows an image convolved with a $3 \times 3$ mean value kernel with additional noise added to it.
The goal is to recover the original (binary) image. We formulate the energy as outlined in \cite{raj-zabih-iccv-2005}.
The resulting interactions are pairwise, and Figure \ref{deconvolution} (b) shows the solution obtained using RD; here the gray pixels are unlabeled.
For comparison we also plotted the solutions obtained when solving the same problem as RD but with
\cite{sontag-etal-uai-2008} (c) and TRW-S (d).
For these methods there are substantial duality gaps.
In contrast (e) shows the solution obtained when forming super nodes with patches of size $3 \times 3$ and then applying TRW-S.
Here there is no duality gap, so the obtained solution is optimal.
\begin{figure*}[htb]
\vspace{-5mm}
\centering
\def\iccvwidth{0.19\linewidth}%
\hfill
\subfloat[Noisy image]{%
\includegraphics[width=\iccvwidth]{iccv_noise}
}\hfill
\subfloat[RD (379.44)]{%
\includegraphics[width=\iccvwidth]{iccv_rd}
}\hfill
\subfloat[MPLP (324.07)]{%
\includegraphics[width=\iccvwidth]{iccv_mplp}
}
\hfill
\subfloat[TRW-S (36.44)]{%
\includegraphics[width=\iccvwidth]{iccv_trws}
}\hfill
\subfloat[TRW-S~patches (12.11)]{%
\includegraphics[width=\iccvwidth]{iccv_patch}
}\hfill
\caption{\small Binary deconvolution results (with energy).
(a) Noisy image.
(b) Gray pixels are unlabeled. Duality gap: $477.78$ (unlabeled set to 0).
(c) Duality gap: $422.40$, maximum 10,000 clusterings.
(d) Duality gap: $134.77$.
(e) Duality gap: $10^{-14}$.
}
\label{deconvolution}
\vspace{-4mm}
\end{figure*}
\subsection{Stereo}
\setlength{\tabcolsep}{4pt}
\begin{figure*}[htb]
\begin{centering}
\iffalse
\begin{minipage}[c]{30mm}
\subfloat[\texttt{Cones}.]{
\includegraphics[height=0.09\textheight]{figs/cones_left.png}
}
\end{minipage}
\fi
\subfloat[$\ell_1$ regularization.]{
\small
\begin{tabular}{l@{}cccr}
\toprule
& Energy & \hspace{-2mm} Lower bound & Relative gap & Time (s) \\
\midrule
Our & \hspace{-2mm}$1.4558\cdot 10^{10}$ & $1.4558 \cdot 10^{10}$ & $1.4471 \cdot 10^{-14}$ & $315.3342$\\
RD & \hspace{-2mm} $1.4637\cdot 10^{10}$ & $1.4518 \cdot 10^{10}$ & $9.3694 \cdot 10^{-3}$ & $1.9216$\\
Our/RD & $0.9958$ & $1.0019$ & $4.3825\cdot 10^{-13}$ & $180.6859$ \\
\bottomrule
\end{tabular}
}
\subfloat[$\ell_2$ regularization.]{
\small
\begin{tabular}{l@{}cccr}
\toprule
& Energy & \hspace{-2mm} Lower bound & Relative gap & Time (s) \\
\midrule
Our & \hspace{-2mm}$1.3594 \cdot 10^{10}$ & $1.3594 \cdot 10^{10}$ & $2.0394 \cdot 10^{-14}$ &
$428.2557$ \\
RD & \hspace{-2mm}$1.5165 \cdot 10^{10}$ & $1.0484 \cdot 10^{10}$ & $0.5756$ & $4.6479$ \\
Our/RD & $0.9092$ & $1.1652$ & $6.0910\cdot 10^{-15}$ & $111.8597$ \\
\bottomrule
\end{tabular}
}\vspace{-2mm}
\captionof{table}{\small Averaged stereo results on the \texttt{Cones}~sequence. Relative gap is defined as (Energy$-$Lower bound)/Lower bound. (a) For $\ell_1$ regularization RD left 24\% of the variables unlabeled. ``Improve'' lowered the average energy for RD to $1.4609\cdot 10^{10}$. (b) For $\ell_2$ regularization RD left 64\% of the variables unlabeled. ``Improve'' lowered the average energy for RD to $1.4392 \cdot 10^{10}$.}
\label{tab:stereo_result}
\end{centering}
\vspace{-4mm}
\end{figure*}
\setlength{\tabcolsep}{6pt}
In this section we optimize the energies occurring in Woodford \etal \cite{woodford2009}.
The goal is to find a dense disparity map for a given pair of images.
The regularization of this method penalizes second order derivatives of the disparity map, either using a truncated $\ell_1$- or $\ell_2$-penalty.
The second derivative is estimated from three consecutive disparity values (vertically or horizontally), thus resulting in triple interactions.
To solve this problem \cite{woodford2009} uses fusion moves \cite{lempitsky-etal-pami-2010} where proposals are fused together to lower the energy.
To compute the move \cite{woodford2009} first reduces the interactions (using auxiliary nodes)
and applies Roof duality (RD) \cite{rother-etal-cvpr-2007}.
In contrast, we decompose the problem into patches of size $3\times3$ that contain entire triple interactions.
Since each interaction can occur in as many as three super nodes, we weight the interactions so that the energy does not change.
Table \ref{tab:stereo_result} shows the results for the \texttt{Cones}~dataset from \cite{middlebury2} when fusing ``SegPln'' proposals \cite{woodford2009}. Starting from a randomized disparity map we fuse all the proposals.
To ensure that each subproblem is identical for the two approaches, we feed the solution from RD into our approach
before running the next fusion. We also tested the ``improve'' heuristic \cite{rother-etal-cvpr-2007}, which gave a reduction in duality gap for RD.
Running ``probe'' \cite{rother-etal-cvpr-2007} instead of ``improve'' is not feasible due to the large number of unlabeled variables.
We also compared the final energies when we ran the methods independently of each other (not feeding solutions into our approach). For $\ell_1$ regularization our solution has 0.82\% lower energy than that of RD with ``improve'', and for $\ell_2$ regularization our solution is 7.07\% lower than RD with ``improve''.
{\footnotesize
\bibliographystyle{ieee}
}
\section{Introduction}
Weakly self-avoiding walk is a model based on the simple random walk that penalizes self-intersections. In the discrete-time setting, it is also known as the self-repellent walk and as the Domb-Joyce model for soft polymers \cite{DombJoyce1972}. In the model, every self-intersection of the walk is penalized using a factor $1-\alpha$, with parameter $\alpha\in (0,1)$.
The boundary value $\alpha = 0$ corresponds to the simple random walk and $\alpha=1$ corresponds to the strictly self-avoiding walk.
In dimension one, the strictly self-avoiding case $\alpha=1$ has only two walks, going left or right, so it has escape speed equal to 1. Greven and den Hollander \cite{GH1993} proved in 1993 that the weakly self-avoiding walk also escapes linearly from the origin as length goes to infinity, with an asymptotically deterministic speed $\theta^*(\alpha)\in (0, 1)$ satisfying $\lim_{\alpha\to 0}\theta^*(\alpha) = 0$ and $\lim_{\alpha\to 1}\theta^*(\alpha) =1$.
The result was extended to a central limit theorem by K\"onig in 1996 \cite{Konig1996}. Their proofs use large deviation theory and local times of the simple random walk. In 2001, van der Hofstad \cite{Hofstad2001} gave another proof of the central limit theorem for $\alpha$ close to 1, using the lace expansion.
The escape speed $\theta^*(\alpha)$ is conjectured to be strictly increasing in the repelling strength parameter $\alpha$: the walk should escape faster when the self-repulsion is stronger. To our knowledge, however, this has not been proved.
In this paper, we study a continuous-time version of the model. We prove this model also has an asymptotically deterministic speed and the speed is strictly increasing in the repelling strength parameter.
Our proof of the existence of escape speed uses local times of the simple random walk and Tauberian theory. The monotonicity of the speed is proved using stochastic dominance.
The speed is qualitatively similar to the speed in the discrete-time model \cite{GH1993}, in the sense that they are both identified as the inverse of the derivative of the largest eigenvalue of some operator.
The continuous-time weakly self-avoiding walk was studied in \cite{BBS2015-0, BBS2015} on $\Z^4$, and a generalization of the model was studied in \cite{BS2020} on the complete graph,
both using a supersymmetric representation. The supersymmetric representation is a way of expressing certain functionals of the local times of a random walk as an integral of differential forms. We give a brief introduction to the representation in Appendix~\ref{appendix:supersymmetry}. It allows us to write the two-point function (defined below) in a form well suited to the transfer matrix approach.
There is an analogous model, continuous in both time and space and based on Brownian motion, called the Edwards model. This model also has an escape speed, whose existence was first proved by Westwater in 1985 \cite{Westwater1985} using Brownian local times. The speed takes a very simple form due to the Brownian scaling property, and it is strictly increasing in the repelling strength parameter.
\subsection{The continuous-time WSAW model and main results}
We consider the nearest-neighbor simple random walk on $\Z$, \ie, the walk jumps left or right, each with rate 1. Denote the walk by $\{X(t)\}_{t\ge0}$ and denote its expectation by $E_i$ if $X(0)=i$.
The local time of $X$ at $x\in \Z$ up to time $T$ is defined by \begin{equation} \label{def:local-time}
L_{T,x} = \int_0^T \1_{X(s)=x}\ ds. \end{equation}
Notice that \begin{align}
L_{T,x}^2 = \int_0^T \int_0^T \1_{X(t)=x}\1_{X(s)=x}\ dsdt
= \int_0^T \int_0^T \1_{X(t)=X(s)=x}\ dsdt
\end{align} gives the self-intersection local time at $x$ up to time $T$.
As in the discrete-time setting, we penalize self-intersections.
Let $g>0$ be the repelling strength parameter and let $\phi(t) = t^2$. We introduce self-avoidance as follows. The weakly self-avoiding walk measure $\P_i^{g, T}$, for walks with $X(0)=i$, is defined by the expectation \begin{align}
\E_i^{g,T}[ f(X) ]\propto E_i \( e^{-g \sum_{x=-\infty}^\infty \phi(L_{T,x})} f(X) \).
\end{align}
For $i, j\in \Z$, define \begin{equation}
P_{ij}(g, T) = E_i \( e^{-g\sum_{x=-\infty}^\infty\phi(L_{T,x})} \1_{X(T)=j} \). \end{equation}
For $\nu\in\C$, define $p(t) = e^{-g\phi(t) - \nu t}$.
The \emph{two-point function} of the weakly self-avoiding walk from $i$ to $j$ is defined to be the Laplace transform
\begin{align}
G_{ij}(g, \nu) &= \int_0^\infty P_{ij}(g, T) e^{-\nu T}dT
= \int_0^\infty E_i \( \prod_{x=-\infty}^\infty p(L_{T,x}) \1_{X(T)=j} \) dT,
\end{align}
where in the second equality we used $T=\sum_{x=-\infty}^\infty L_{T,x} $.
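For intuition (this plays no role in the analysis), the quantities above are easy to probe by Monte Carlo. The following Python sketch simulates the free walk, records its local times, and estimates $P_{0j}(g,T)$ for the choice $\phi(t)=t^2$; the simulator and all parameter values are our own arbitrary choices.

```python
import math
import random
from collections import defaultdict

def local_times(T, rng):
    """One continuous-time simple random walk on Z started at 0, run to time T.
    The walk jumps left or right each at rate 1, i.e. total jump rate 2.
    Returns (X(T), dict x -> L_{T,x})."""
    L = defaultdict(float)
    x, t = 0, 0.0
    while True:
        hold = rng.expovariate(2.0)        # exponential holding time, rate 2
        if t + hold >= T:
            L[x] += T - t                  # final holding interval is cut at T
            return x, dict(L)
        L[x] += hold
        t += hold
        x += rng.choice((-1, 1))           # unbiased nearest-neighbor jump

def weight(L, g):
    """Self-interaction weight exp(-g sum_x phi(L_{T,x})) with phi(t) = t^2."""
    return math.exp(-g * sum(l * l for l in L.values()))

# Monte Carlo estimate of P_{0j}(g,T) = E_0[exp(-g sum_x phi(L_{T,x})) 1{X(T)=j}].
rng, g, T, n = random.Random(1), 0.5, 5.0, 20000
P = defaultdict(float)
for _ in range(n):
    x, L = local_times(T, rng)
    P[x] += weight(L, g) / n
print(sorted(P.items()))
```

Since the weight is at most $1$, the estimates satisfy $\sum_j P_{0j}(g,T) < 1$, and by the symmetry of the walk the left and right tails agree in distribution.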
Our methods allow us to consider a more general model. From now on, we only assume $\phi:[0, \infty)\to [0, \infty)$ satisfies \begin{align}
&\phi(0)=0, \label{A0}\tag{H0} \\
&\phi(t) \text{ is differentiable for all } t\ge 0, \label{A1} \tag{H1} \\
&\phi(t)/t \text{ is increasing,} \label{A2} \tag{H2} \\
&\lim_{t\to\infty} \phi(t)/t = \infty,\label{A3} \tag{H3} \\
&\phi'(t) e^{-g\phi(t) -\nu t} \text{ is a Schwartz function for all } g>0, \nu\in \R. \label{A4} \tag{H4}
\end{align}
For example, $\phi(t)$ can be a polynomial $\phi(t) = \sum_{k=2}^M a_k t^k$ with $M\ge2$, $a_M>0$, and $a_k \ge 0$ for all $k$.
By translation invariance, we can always start the walk at 0.
The main result of the paper is the following theorem, which asserts that the weakly self-avoiding walk has an escape speed and the speed is strictly increasing in the repelling strength $g$. The theorem is stated for walks going to the right, but walks going to the left have the same speed by symmetry.
\begin{theorem} \label{theorem:WLLN}
There exists an analytic function $\theta: (0, \infty) \to (0, \infty)$ with $\theta' > 0$ everywhere such that for all $g>0$, $\eps>0$, \begin{align}
\lim_{T\to \infty} \P_0^{g, T} \( \left\lvert \frac{X(T)}{T} - \theta(g) \right\rvert \ge \eps
\,\middle\vert\,
X(T) > 0 \) = 0.
\end{align}
\end{theorem}
We also have the following result on the mean end-to-end distance.
The notation $f(T)\sim h(T)$ means $\lim_{T\to\infty} f(T)/h(T)=1$.
\begin{theorem} \label{theorem:moments}
For the same $\theta(\cdot)$ as in Theorem~\ref{theorem:WLLN}, for any $g>0$ and any $k\in \N$, \begin{align} \label{eqn:6}
\E_0^{g,T}\big[ X(T)^k \mid X(T) > 0 \big] =
\frac{ \sum_{j=1}^\infty j^k P_{0j}(g, T) }{ \sum_{j=1}^\infty P_{0j}(g, T) } \sim \( \theta(g) T\)^k, \qquad T\to \infty. \end{align}
\end{theorem}
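The left-hand side of \eqref{eqn:6} with $k=1$ can be estimated directly by reweighting free-walk samples, which gives a crude finite-$T$ impression of the speed. The sketch below is our own illustration with $\phi(t)=t^2$ and arbitrarily chosen $g$ and $T$; it is not part of the proofs, and at such small $T$ the estimate is still far from the limit $\theta(g)$.

```python
import math
import random
from collections import defaultdict

def endpoint_and_weight(T, g, rng):
    """Sample one free walk (jump rate 1 to each neighbor) up to time T;
    return X(T) and the self-interaction weight exp(-g sum_x L_{T,x}^2)."""
    L = defaultdict(float)
    x, t = 0, 0.0
    while True:
        hold = rng.expovariate(2.0)
        if t + hold >= T:
            L[x] += T - t
            break
        L[x] += hold
        t += hold
        x += rng.choice((-1, 1))
    return x, math.exp(-g * sum(l * l for l in L.values()))

# E^{g,T}[X(T)/T | X(T)>0] ~ (sum over samples with X>0 of X*w) / (T * sum of w).
rng, g, T, n = random.Random(3), 1.0, 8.0, 40000
num = den = 0.0
for _ in range(n):
    x, w = endpoint_and_weight(T, g, rng)
    if x > 0:
        num += x * w
        den += w
print(num / (T * den))   # rough finite-T proxy for the speed theta(g)
```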
\begin {figure}
\centering
\includegraphics[scale=0.8]{WSAW_speed}
\caption{
Numerical evaluation of the escape speed $\theta(g)$ of the WSAW model ($\phi(t) = t^2$).
The dots are the numerical results, interpolated by cubic spline.
The computation is based on truncating and discretizing the integral operator $Q$ (defined in~\eqref{def:Q}) at $s=100$, with step size $0.001$.
The numerical results suggest $\theta(g) \sim Cg^{1/3}$ as $g\to 0$.
For the discrete-time model, this asymptotic relation is proved by van der Hofstad and den Hollander in \cite{HH1995}.
}
\label{figure1}
\end{figure}
As a by-product of the proofs, we obtain critical exponents for the two-point function, the susceptibility, and the correlation lengths of integer orders. We collect the critical exponents, which are the same as for the strictly self-avoiding walk, in the following theorem.
For all $g>0$, we will define $\nu_c(g)$ to be the $\nu$ at which an integral operator has operator norm $1$, see Section \ref{section:Qproperty}. We will show this is consistent with the usual definition of $\nu_c$, which is by the property \begin{align} \label{usual-nu_c}
\chi(g, \nu) < \infty
\quad \text{if and only if} \quad
\nu> \nu_c(g), \end{align}
where $\chi(g, \nu) = \sum_{j=-\infty}^\infty G_{0j}(g, \nu)$ is the \emph{susceptibility}.
\begin{theorem} \label{theorem:critical_exponent}
Let $g>0$ and $\nu_c = \nu_c(g)$ (defined in Section \ref{section:Qproperty}).
There exist non-zero constants $A, B, C_1, C_2, \dots$ such that:
(i) The critical two-point function $G_{ij}(g, \nu_c)$ satisfies, for $\abs{ j-i }\to \infty$, \begin{align} \label{exponent:eta}
G_{ij}(g, \nu_c) \sim \frac{ A}{ \abs{ j-i }^{d-2+\eta}}, \qquad d=\eta=1.
\end{align}
(ii) The susceptibility $\chi(g, \nu)$ satisfies, for $\nu\to \nu_c^+$,
\begin{align}
\chi(g, \nu)
\sim B (\nu-\nu_c)^{-\gamma}, \qquad \gamma=1.
\end{align}
(iii) For any $k\in \N$, the correlation length of order $k$ satisfies, for $\nu\to \nu_c^+$, \begin{align} \label{exponent:nu_k}
\xi_k(g, \nu)
= \(\frac{\sum_{j=-\infty}^\infty \abs j^k G_{0j}(g, \nu)}{\chi(g, \nu)}\)^{1/k}
\sim
C_k (\nu-\nu_c)^{-\nu_k}, \qquad \nu_k = 1.
\end{align}
\end{theorem}
The rest of the paper is organized as follows.
In Section~\ref{section:2pt}, we first make a finite-volume approximation and use the supersymmetric representation of the random walk to express the finite-volume two-point function as an inner product (Proposition~\ref{prop:Gij^N}). Then we take the infinite-volume limit and study its asymptotic behavior near the critical $\nu$.
We will also calculate the susceptibility, the correlation lengths, and prove Theorem~\ref{theorem:critical_exponent} in Corollary~\ref{corollary:Gij}, Proposition~\ref{prop:chi}, and Corollary~\ref{corollary:correlation}.
In Section~\ref{section:laplace}, we first prove a general Tauberian theorem, then we use the theorem, with $\nu$ space asymptotics from Section~\ref{section:2pt} as input, to prove Theorem~\ref{theorem:moments} and the convergence part of Theorem~\ref{theorem:WLLN}.
In Section~\ref{section:monotonicity}, we use a separate stochastic dominance argument to prove the derivative of the escape speed to be strictly positive, finishing the proof of Theorem~\ref{theorem:WLLN}.
\section{Two-point function, susceptibility and correlation length} \label{section:2pt}
In this section, we first work on the finite subset $[-N, N]\cap \Z$ with free boundary conditions. We use the transfer matrix approach to derive an expression for the finite-volume two-point function $G_{ij}^N(g, \nu)$ (defined below). Then we will define a critical parameter $\nu_c(g)$, at and above which the infinite-volume limit $\lim_{N\to\infty} G_{ij}^N(g, \nu)$ exists and equals $G_{ij}(g, \nu)$.
We will use this representation of $G_{ij}$ to study one-sided versions of the susceptibility and correlation lengths, obtaining $\nu$-space asymptotics used as input for the Tauberian theorem in Section~\ref{section:laplace}. Two-sided versions of the quantities can also be obtained readily using symmetry.
\subsection{Finite-volume two-point function} \label{section:finite_volume}
Let $N\in \N$. Consider the nearest-neighbor simple random walk on $[-N, N]\cap \Z$. That is, the walk jumps to the left or the right if possible, each with rate $1$. For $i,j\in [-N, N]$, let $E^N_i$ denote the expectation of such a walk starting at $i$. The local times of this walk are defined exactly as in \eqref{def:local-time}. Recall $p(t) = e^{-g\phi(t) - \nu t}$.
Define \begin{align}
P_{ij}^N (g, T) &= E_i^N \( e^{-g \sum_{x=-N}^N \phi(L_{T,x})} \1_{X(T)=j} \), \\
G_{ij}^N (g, \nu) &= \int_0^\infty P_{ij}^N (g, T) e^{-\nu T}dT
= \int_0^\infty E_i^N \( \prod_{x=-N}^{N} p(L_{T,x}) \1_{X(T)=j} \) dT. \label{def:Gij^N}
\end{align}
We need the modified Bessel functions of the first kind of orders 0 and 1, which are the entire functions \begin{align}
I_0(z) &= \sum_{m=0}^\infty \frac{1}{ m! m!} \left(\frac z 2\right)^{2m}, \label{def:I0} \\
I_1(z) &= \sum_{m=0}^\infty \frac{1}{ m! (m+1)!} \left(\frac z 2\right)^{2m+1} \label{def:I1}
\end{align} respectively. For $j=0,1$, define symmetric kernels \begin{align} \label{def:k_j}
k_j(t,s)= \sqrt{p(t)}\sqrt{p(s)} e^{-t}e^{-s} I_j(2\sqrt{st}). \end{align}
Since $p(\cdot)$ has exponential decay by assumption~\eqref{A3}, the kernels are square-integrable.
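The series \eqref{def:I0}--\eqref{def:I1} are straightforward to evaluate, and the bound $I_0(z)\le e^z$ gives $e^{-t}e^{-s}I_0(2\sqrt{st}) \le e^{-(\sqrt t - \sqrt s)^2} \le 1$, so the kernel factor not involving $p$ is bounded. The following sketch (for illustration only) sums the series iteratively and checks the classical identity $I_0'(z)=I_1(z)$ by a centered difference.

```python
import math

def bessel_I(order, z, terms=60):
    """Modified Bessel function I_order(z), order 0 or 1, from the Taylor
    series sum_m (z/2)^{2m+order} / (m! (m+order)!), summed iteratively."""
    x = (z / 2.0) ** 2
    term = (z / 2.0) ** order / math.factorial(order)   # m = 0 term
    total = term
    for m in range(1, terms):
        term *= x / (m * (m + order))                   # ratio of consecutive terms
        total += term
    return total

# Sanity checks: I0(2) = sum_m 1/(m!)^2 = 2.2795853..., and d/dz I0(z) = I1(z).
z, h = 3.0, 1e-5
print(bessel_I(0, 2.0))
print((bessel_I(0, z + h) - bessel_I(0, z - h)) / (2 * h), bessel_I(1, z))

# The nu-independent kernel factor e^{-t} e^{-s} I0(2 sqrt(st)) never exceeds 1.
for t, s in [(0.3, 4.0), (2.0, 2.0), (7.5, 1.2)]:
    assert math.exp(-t - s) * bessel_I(0, 2.0 * math.sqrt(t * s)) <= 1.0
```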
Define two operators $T,Q: L^2[0,\infty) \to L^2[0,\infty)$, by \begin{align}
Tf(t)
&= \sqrt{p(t)} e^{-t} + \int_0^\infty f(s) k_1(t,s) \sqrt{\tfrac{t}{s}} ds, \label{def:T} \\
Qf(t)
&= \int_0^\infty f(s)k_0(t,s) ds. \label{def:Q}
\end{align}
From the definition of $k_1$ and the Taylor series of $I_1$, we have \begin{align} \label{eqn:2.6}
k_1(t,s) \sqrt{\tfrac{t}{s}}
= \sqrt{p(t)}\sqrt{p(s)} e^{-t}e^{-s}
\sum_{m=0}^\infty \frac{s^m t^{m+1}}{ m! (m+1)!},
\end{align}
which is regular at $s=0$ for all $t$. Notice $T$ is affine but non-linear. Later we will also view $T$ as an operator on a different space, defined by the same equation~\eqref{def:T}.
For $Q$, since $k_0(t,s)$ is square-integrable, $Q$ is a Hilbert--Schmidt operator, thus compact. If $\nu\in \R$, then $k_0(t,s)$ is real and symmetric, so $Q$ is a self-adjoint operator for real $\nu$.
With these operators, we prove the following representation of $G_{ij}^N$. The inner product is the usual $\langle f, h \rangle = \int_0^\infty f(t)\overline{ h(t) } dt$.
\begin{proposition} \label{prop:Gij^N}
Let $g>0$, $\nu\in \C$, $N\in \N$, $-N\le i\le j\le N$. Then \begin{align}
G_{ij}^N(g, \nu)
&=\left \langle Q^{j-i}T^{N+i}[\sqrt p] , \overline{ \Big( T^{N-j}[\sqrt p] \Big)}\right \rangle.
\end{align}
\end{proposition}
Note the conjugation on $ T^{N-j}[\sqrt p]$ cancels with the conjugation from the inner product. Since $\sqrt{p(t)} = e^{-\half g \phi(t) - \half \nu t}$ is holomorphic in $\nu$, $G_{ij}^N$ is also holomorphic in $\nu$.
The proposition is proved using the transfer matrix approach and a supersymmetric representation of the random walk. This is the only place we need the representation. The proof of the proposition and an introduction to the supersymmetric theory are included in Appendix~\ref{appendix:supersymmetry}.
\begin{remark}
Let $x>0$. For a continuous-time simple random walk starting at 0 and stopped when the local time at $x$, $L_{T,x}$, reaches some fixed value, the local times between 0 and $x$ form a Markov chain with transition kernel $e^{-t}e^{-s} I_0(2\sqrt{st})$. This is the Ray--Knight Theorem \cite[Theorem 4.1]{BHK2007}. The kernel $k_0(t,s)$ for our operator $Q$ contains this Markov kernel and an extra $\sqrt{p(t)}\sqrt{p(s)}$ to incorporate self-interactions.
The local times outside the interval $[0,x]$ also form Markov chains, and the Markov kernel is similarly related to our operator $T$.
\end{remark}
\subsection{Critical parameter $\nu_c$ and properties of $Q$} \label{section:Qproperty}
In this section, we prove properties of the operator $Q$ and define the critical parameter $\nu_c(g)$.
\begin{lemma} \label{lemma:Q}
Let $\nu\in\R$.
(i) The operator $Q$ defined by \eqref{def:Q} is positive, \ie, $\langle Qf,f \rangle\ge0$ for all $f$. Hence, all eigenvalues of $Q$ are non-negative.
(ii) $\|Q\|$ is a simple eigenvalue of $Q$, and there exists an eigenvector with eigenvalue $\|Q\|$ that is continuous and strictly positive everywhere.
\end{lemma}
\emph{Proof. }
(i) We use the Taylor series $I_0(z) = \sum_{m=0}^\infty \frac{1}{ m! m!} (\frac z 2)^{2m}$.
For any $f$, we have \begin{align}
\langle Qf,f \rangle
&= \int_0^\infty \int_0^\infty \overline{f(t)} f(s) \sqrt{p(t)}\sqrt{p(s)} e^{-t}e^{-s} I_0(2\sqrt{st})\ dsdt \nonumber \\
&= \sum_{m=0}^\infty \int_0^\infty \overline{ f(t)} \sqrt{p(t)} e^{-t} \frac{t^m}{m!}\ dt
\int_0^\infty {f(s)} \sqrt{p(s)} e^{-s} \frac{s^m}{m!}\ ds\nonumber \\
&= \sum_{m=0}^\infty \left\lvert\int_0^\infty f(t) \sqrt{p(t)} e^{-t} \frac{t^m}{m!}\ dt \right\rvert^2
\ge 0,
\end{align} because $\sqrt{p}$ is real for real $\nu$.
(ii) Since $Q$ is compact and self-adjoint, one of $\pm\|Q\|$ is an eigenvalue \cite[Lemma 6.5]{SteinRealAnalysis}. Since eigenvalues must be non-negative by part (i), we know $\|Q \|$ is an eigenvalue. Since any eigenvalue of $Q$ is at most $\| Q\|$ in absolute value, we get the spectral radius $r(Q) = \|Q\|$.
Now we use a Krein--Rutman theorem for irreducible operators \cite[Theorem 1.5]{chang2020spectral} (also see references therein). Krein--Rutman theorems are generalizations of the Perron--Frobenius theorem for entrywise positive matrices.
Once we check the hypotheses of \cite[Theorem 1.5]{chang2020spectral}, we can conclude that the spectral radius $r(Q) = \|Q\|$ is a simple eigenvalue of $Q$, and there exists an eigenvector $h$ that is a so-called quasi-interior point of the (non-negative) cone.
We are in the setting where the Banach lattice is $X=L^2[0, \infty)$ and the cone $P\subset X$ is the set of non-negative functions.
By (ii) in the third paragraph before \cite[Theorem 1.5]{chang2020spectral}, in our setting, a function $h$ is a quasi-interior point of the cone if and only if $\langle f, h \rangle > 0$ for all nonzero, non-negative $f$, which happens if and only if $h$ itself is nonzero and non-negative.
For the eigenvector $h$, we have \begin{align}
\|Q\| \cdot h(t) = Qh(t) = \int_0^\infty h(s) k_0(t,s)ds, \end{align}
so $h$ is continuous. Since the kernel $k_0(t,s)$ is strictly positive for all $t,s$ and $h$ is nonzero and non-negative, the equation implies $h$ is strictly positive everywhere.
It remains to check the hypothesis that $Q$ is a positive\footnote{Positive in the sense that $f\ge 0\implies Qf\ge 0$ for all $f\in X$.}, ideal-irreducible, compact operator. $Q$ is positive in this sense because $k_0\ge 0$, and it is compact because it is a Hilbert--Schmidt operator.
Now we check ideal-irreducibility. Let $ \{0\} \ne I \subset X$ be a closed, $Q$-invariant ideal. We want to prove $I=X$. Take $0\ne u\in I$. By the definition of ideal, we have $\abs u \in I$.
Since $0\ne \abs u \ge 0$, $\abs u$ is a quasi-interior point of the cone, which by definition means the closure of the ideal generated by $\abs u$ is all of $X$. But $I$ is a closed ideal, so it must contain $X$. This proves $I=X$ and concludes the proof.
$\qedsymbol$
We define $\nu_c(g)\in \R$ to be the value of $\nu$ that satisfies $\|Q(g, \nu)\| =1$. The following lemma guarantees $\nu_c(g)$ exists and is unique.
In Proposition~\ref{prop:chi} and the paragraphs after it, we will prove $\chi(g, \nu)<\infty$ for all $\nu>\nu_c(g)$ and $\chi(g, \nu_c(g)) = \infty$, so our definition is consistent with the usual definition~\eqref{usual-nu_c}. Then, by comparing with the susceptibility of the simple random walk, it is easy to get $\nu_c(g) \le 0$. We also present some properties of $\nu_c(g)$ in Proposition~\ref{prop:limit-nu_c}.
\begin{lemma} \label{lemma:nu_c}
Fix $g>0$. Consider $\nu\in \R$.
(i) The map $\nu\mapsto \|Q(g, \nu)\|$ is continuous and strictly decreasing.
(ii) The map $\nu\mapsto \|Q(g, \nu)\|$ is convex.
(iii) $\nu_c(g)$ exists and is unique.
\end{lemma}
\emph{Proof. }
(i) Since $\nu\in \R$, we have $k_0 > 0$, so \begin{equation}
\|Q\| = \sup_{\|f\|_2=1} \abs{ \langle Qf, f \rangle }
= \sup_{\|f\|_2=1, f\ge0} \langle Qf, f \rangle. \label{eqn:26-2}
\end{equation}
Consider $\nu, \hat \nu\in \R$. Denote $\hat Q= Q(g, \hat \nu)$, $\hat k_0(t,s) = k_0(t,s;g, \hat \nu)$, and $ \hat p(t) = p(t; g, \hat \nu)$. For any $f\ge 0$, by the Cauchy--Schwarz inequality, \begin{align} \nonumber
\abs{ \langle Qf, f \rangle - \langle \hat Qf, f \rangle }
&= \left\lvert \int_0^\infty \int_0^\infty f(t)f(s) [ k_0(t,s) - \hat k_0(t,s) ] ds dt \right\rvert \\
&\le \|f\|_2^2 \big \| k_0(t,s) - \hat k_0(t,s)
\big \|_{L^2(dsdt)}. \label{eqn:2.9}
\end{align}
Recall $k_0(t,s) = \sqrt{p(t)}\sqrt{p(s)} e^{-t}e^{-s} I_0(2\sqrt{st})$ has exponential decay by assumption~\eqref{A3}.
Also notice $p(t) = e^{-g\phi(t) - \nu t}$ is monotone in $\nu$.
Hence, as $\hat \nu\to \nu$, we have $\big \| k_0(t,s) - \hat k_0(t,s)
\big \|_{L^2(dsdt)}\to 0$ by the Dominated Convergence Theorem.
Since we are taking supremum over $\|f \|_2=1$ to get $\|Q\|$, we obtain $\| \hat Q \| \to \| Q\|$.
To prove strictly decreasing, consider $\nu > \hat \nu$.
By Lemma~\ref{lemma:Q}(ii), $Q$ has an eigenvector $h$ with eigenvalue $\|Q \|$ that is continuous and strictly positive everywhere. We normalize so that $\|h\|_2 = 1$.
For each $t>0$, $p(t) = e^{-g\phi(t) - \nu t} < e^{-g\phi(t) - \hat\nu t} = \hat p(t)$, so $k_0(t,s) < \hat k_0(t,s)$ for all $t,s>0$.
Since $h$ is positive and continuous, \begin{align}
\| Q\| = \langle Qh, h \rangle \nonumber
&=\int_0^\infty \int_0^\infty h(t)h(s) k_0(t,s) ds dt \\
&< \int_0^\infty \int_0^\infty h(t)h(s) \hat k_0(t,s) ds dt
= \langle \hat Qh, h \rangle \le \| \hat Q\|.
\end{align}
(ii) For each $f\ge0$, direct calculation gives $\frac{\del^2}{\del\nu^2} \langle Qf, f \rangle \ge 0$, as differentiating under the integral sign brings down the non-negative factor $\tfrac14(t+s)^2$,
so the map $\nu\mapsto \langle Qf, f \rangle$ is convex.
By the supremum equation (\ref{eqn:26-2}), the map $\nu\mapsto \|Q\|$ is convex too.
(iii) As $\nu\to \infty$, $p(t) = e^{-g\phi(t) - \nu t} \to 0$ for all $t>0$.
Similar to \eqref{eqn:2.9} in part (i), we have $\langle Qf, f \rangle\to 0$ as $\nu\to\infty$, uniformly in $\|f \|_2 =1$. Therefore, $\| Q\| \to 0$ as $\nu\to \infty$.
Since $\nu\mapsto \|Q\|$ is convex and strictly decreasing, one of its subderivatives must be strictly negative. Since the corresponding subtangent line bounds the function from below, we have $\lim_{\nu\to -\infty} \|Q\| = \infty$. The critical $\nu_c(g)$ then exists by the Intermediate Value Theorem. Uniqueness follows from part (i).
$\qedsymbol$
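The proof of (iii) suggests a simple numerical scheme for $\nu_c(g)$: truncate $[0,\infty)$, discretize $Q$ on a grid (this is how Figure~\ref{figure1} was produced, though with a much finer grid than below), approximate $\|Q\|$ by power iteration, and bisect on the strictly decreasing map $\nu \mapsto \|Q(g,\nu)\|$. The sketch below is our own rough implementation for $\phi(t)=t^2$; the truncation point, grid size, and iteration counts are ad hoc choices.

```python
import math

def bessel_I0(z, terms=80):
    """I0(z) = sum_m (z/2)^{2m} / (m!)^2, summed iteratively."""
    x = (z / 2.0) ** 2
    term, total = 1.0, 1.0
    for m in range(1, terms):
        term *= x / (m * m)
        total += term
    return total

# Midpoint-rule grid on [0, S_MAX]; the nu-independent kernel factor
# e^{-t-s} I0(2 sqrt(ts)) is precomputed once (it is bounded by 1).
S_MAX, N = 15.0, 100
H = S_MAX / N
GRID = [(i + 0.5) * H for i in range(N)]
BASE = [[math.exp(-t - s) * bessel_I0(2.0 * math.sqrt(t * s)) for s in GRID]
        for t in GRID]

def q_norm(g, nu, iters=40):
    """Largest eigenvalue of the discretized Q(g, nu) by power iteration;
    since Q is self-adjoint and positive, this approximates ||Q(g, nu)||."""
    d = [math.exp(-0.5 * (g * x * x + nu * x)) for x in GRID]   # sqrt(p(t))
    K = [[H * d[i] * BASE[i][j] * d[j] for j in range(N)] for i in range(N)]
    v, lam = [1.0] * N, 0.0
    for _ in range(iters):
        w = [sum(Ki[j] * v[j] for j in range(N)) for Ki in K]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

def nu_c(g, lo=-6.0, hi=2.0, steps=20):
    """Bisection on nu -> ||Q(g, nu)||, which is strictly decreasing."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if q_norm(g, mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

vc = nu_c(1.0)
print(vc)   # crude approximation of nu_c(1); the theory says nu_c(g) <= 0
```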
\begin{lemma} \label{lemma:Q_analytic}
Let $g, \nu\in \C$ with $\Re(g)>0$.
(i) For a fixed $g$, the map $\nu\mapsto Q(g, \nu)$ into the space of bounded linear operators on $L^2[0,\infty)$ is strongly analytic, with derivative given by \begin{equation}\label{def:dQdnu}
\(\frac{\del Q}{\del\nu}\)f(t) = \int_0^\infty f(s) (-\half(t+s)) k_0(t,s) ds.
\end{equation}
(ii) For a fixed $\nu$, the map $g\mapsto Q(g, \nu)$ is strongly analytic, with derivative given by \begin{equation}\label{def:dQdg}
\(\frac{\del Q}{\del g}\)f(t) = \int_0^\infty f(s) (-\half(\phi(t)+\phi(s))) k_0(t,s) ds.
\end{equation}
\end{lemma}
\emph{Proof. }
(i) Fix some $g$ and $\nu$. Let $h\in\C$, $0<\abs h \le1$, define the operator \begin{align}
R_h = \frac{Q(g, \nu+h) - Q(g, \nu)}{h}. \end{align}
We will show $\lim_{h\to0} R_h = \frac{\del Q}{\del\nu}$ in operator norm, where $\frac{\del Q}{\del\nu}$ is (for now) defined by the right-hand side of the goal \eqref{def:dQdnu}. This will prove the map $\nu\mapsto Q(g, \nu)$ is strongly analytic at that point.
Let $f_1, f_2\in L^2[0, \infty)$ be such that $\|f_1\|_2, \|f_2\|_2= 1$.
Denote $p^*(t) = p(t \text{; } \Re(g), \Re(\nu))$ and $k_0^*(t,s) = k_0(t,s \text{; } \Re(g), \Re(\nu))$.
Then by the Cauchy--Schwarz inequality, \begin{align}
& \left\lvert\left\langle (R_h-\frac{\del Q}{\del\nu}) f_1, f_2\right\rangle \right\rvert \nonumber \\
\le &
\int_0^\infty \int_0^\infty \abs{ f_2(t) }\abs{ f_1(s) }
\left\lvert \frac{e^{-\half h(t+s)}-1}{h} - (-\half(t+s))\right\rvert
k_0^*(t,s)dsdt \nonumber \\
\le &
\|f_1\|_2\|f_2\|_2 \(\int_0^\infty \int_0^\infty \left(
\left\lvert \frac{e^{-\half h(t+s)}-1}{h} - (-\half(t+s))\right\rvert
k_0^*(t,s)\right)^2dsdt\)^{1/2}.
\end{align}
Taking supremum over $f_1$ and $f_2$ gives
\begin{align}
\left\| R_h-\frac{\del Q}{\del\nu}\right\|
&= \sup_{\|f_1\|_2=\|f_2\|_2=1} \left\lvert\left\langle (R_h-\frac{\del Q}{\del\nu}) f_1, f_2\right\rangle \right\rvert \nonumber \\
&\le \(\int_0^\infty \int_0^\infty \left(
\left\lvert \frac{e^{-\half h(t+s)}-1}{h} - (-\half(t+s))\right\rvert
k_0^*(t,s) \right)^2dsdt\)^{1/2}.
\end{align}
Since $\abs h \le 1$, we have $\abs{ e^{ah} - 1 - ah } \le \abs{h}^2 e^{\abs a}$ for all $a$. We apply the inequality with $a = -\half (t+s)$, then \begin{align}
\left\lvert\frac{e^{-\half h(t+s)}-1}{h} - (-\half(t+s))\right\rvert
\le \abs h e^{\half(t+s)}.
\end{align}
By assumption~\eqref{A3}, $e^{\half(t+s)}k_0^*(t,s)$ still has exponential decay. The claim then follows from the Dominated Convergence Theorem.
(ii) Same proof, with $\abs h <\Re(g)$ instead. This and assumption~\eqref{A3} guarantee enough decay to apply the Dominated Convergence Theorem. $\qedsymbol$
\begin{remark}\label{remark:kato}
For $g>0, \nu\in \R$, by Lemma~\ref{lemma:Q}, the eigenvalue $\|Q(g, \nu)\|$ is simple. It is also isolated because $Q(g, \nu)$ is a compact operator.
We just proved the operators $Q(g, \nu)$ are strongly analytic.
Therefore, by the Kato--Rellich Theorem \cite[Theorem XII.8]{ReedSimon}, there exists a holomorphic function $\lambda_g(\nu)$ that agrees with $\norm{ Q(g, \nu) }$ for $\nu\in \R$.
In particular, the derivative $\del_\nu \|Q(g, \nu)\|$ exists.
Convexity and strict monotonicity of the function (Lemma~\ref{lemma:nu_c}) then imply $ \del_\nu \|Q(g, \nu)\| <0$.
For the same reason, the derivative $\del_g \|Q(g, \nu)\|$ also exists.
By assumptions~\eqref{A0} and \eqref{A3}, we have $\phi(t)>0$ for all $t>0$. Using this, the same proof of Lemma~\ref{lemma:nu_c} shows the map $g\mapsto \norm{ Q(g, \nu) }$ for a fixed $\nu$ is convex and strictly decreasing. It follows that $\del_g \|Q(g, \nu)\|<0$. Then by the Implicit Function Theorem, \begin{align} \label{eqn:2.21}
\frac {d\nu_c}{dg} = - \frac{ \del_g \|Q(g, \nu)\| }{ \del_\nu \|Q(g, \nu)\| }<0,
\end{align}
so $\nu_c(g)$ is strictly decreasing in $g$.
\end{remark}
In the next proposition, we prove limits of $\nu_c(g)$ conditional on $\nu_c(g) \le 0$, which is proved in Proposition~\ref{prop:chi}. The limits of $\nu_c(g)$ will not be used elsewhere.
\begin{proposition} \label{prop:limit-nu_c}
The critical parameter $\nu_c(g)$ is a strictly decreasing function of $g$, with limits \begin{align}
\lim_{g\to 0} \nu_c(g) = 0, \qquad
\lim_{g\to \infty} \nu_c(g) = -\infty.
\end{align}
\end{proposition}
\emph{Proof. }
The function $\nu_c(g)$ is strictly decreasing because it has a strictly negative derivative~\eqref{eqn:2.21}. It follows that limits of $\nu_c(g)$ exist. Since $\nu_c(g)\le 0$ by Proposition~\ref{prop:chi}, $\lim_{g\to 0} \nu_c(g) \le 0$.
Suppose for the sake of contradiction that $\lim_{g\to 0} \nu_c(g) = \nu_0 < 0$. Let $g>0$ and $f\ne 0$. Since the function $\nu \mapsto \norm{ Q(g, \nu) } $ is decreasing (Lemma~\ref{lemma:nu_c}), we have \begin{align} \label{eqn:2.23}
1 = \norm{ Q(g, \nu_c(g)) } \ge \norm { Q(g, \nu_0) }
\ge \frac{ \norm{Q(g, \nu_0) f}_2 } {\norm {f}_2 }.
\end{align}
We will pick $f$ so that the $g\to 0$ limit of the right-hand side is unbounded, giving a contradiction. For this purpose, we can assume $\nu_0 \ge -2$ without loss of generality.
Let $a>0$ be a parameter and let $f(t) = e^{-at}$, so $f$ is square-integrable.
Since $\lim_{g\to 0}\sqrt{p(t; g, \nu_0)} = e^{-\half \nu_0 t}$, by the Monotone Convergence Theorem and the Taylor series $I_0(z) = \sum_{m=0}^\infty \frac{1}{ m! m!} (\frac z 2)^{2m}$,
\begin{align}
\lim_{g\to 0} Q(g, \nu_0)f (t)
&= \int_0^\infty f(s) e^{-\half \nu_0 t} e^{-\half \nu_0 s} e^{-t}e^{-s} I_0(2\sqrt{st})\ ds \nonumber \\
&= \sum_{m=0}^\infty \frac{ t^m} {m!} e^{-(\half \nu_0 + 1 ) t} \int_0^\infty \frac{ s^m} {m!} e^{-(a + \half \nu_0 + 1 ) s} ds \nonumber \\
&= \sum_{m=0}^\infty \frac{ t^m} {m!} e^{-(\half \nu_0 + 1 ) t} \( a+ \half \nu_0 + 1 \)^{-m-1} \nonumber \\
&= \( a+ \half \nu_0 + 1 \)\inv e^{ ( a+ \half \nu_0 + 1 )\inv t - (\half \nu_0 + 1 ) t }.
\end{align}
Observe that if \begin{align} \label{eqn:2.25}
( a+ \half \nu_0 + 1 )\inv - (\half \nu_0 + 1 ) \ge 0,
\end{align}
then $ \lim_{g\to 0} \norm{ Q(g, \nu_0)f }_2 = \infty $ by monotone convergence. At $a=0$, the left-hand side of \eqref{eqn:2.25} equals $\( \half \nu_0 + 1 \)\inv - \(\half \nu_0 + 1\)$, which is strictly positive because $-2\le \nu_0 < 0$ implies $0< \half \nu_0 + 1 <1$. By continuity, \eqref{eqn:2.25} also holds for all small $a>0$. Pick one such $a$, then taking the $g\to 0$ limit of equation~\eqref{eqn:2.23} yields \begin{align}
1 \ge \frac{ \lim_{g\to 0} \norm{ Q(g, \nu_0)f }_2 } { \norm f_2} = \infty,
\end{align} giving a contradiction.
The $g\to \infty$ limit of $\nu_c(g)$ is easier. Suppose for the sake of contradiction that $\lim_{g\to \infty} \nu_c(g) = \nu_0 > -\infty$. Since the function $\nu \mapsto \norm{ Q(g, \nu) } $ is decreasing (Lemma~\ref{lemma:nu_c}), we have
\begin{align}
1 = \norm{ Q(g, \nu_c(g)) } \le \norm { Q(g, \nu_0) }.
\end{align}
As in the proof of Lemma~\ref{lemma:nu_c}(iii) (now with $g\to \infty$ instead of $\nu\to \infty$), the right-hand side converges to 0 as $g\to \infty$, giving a contradiction.
$\qedsymbol$
\subsection{Infinite-volume two-point function}
In this section, we prove the convergence of $G_{ij}^N$ to $G_{ij}$ and give an inner product representation of $G_{ij}$. As a corollary, we will obtain properties of $G_{ij}$ as $\abs{ j-i } \to\infty$. Recall $\nu_c(g)$ satisfies $\|Q(g, \nu_c(g))\|=1$. The following proposition is proved using a contraction argument.
\begin{proposition} \label{prop:Gij}
Let $g>0$. There exists $\eps>0$ such that for $\Re(\nu) > \nu_c(g) - \eps$:
(i) $q = \lim_{N\to\infty} T^N[\sqrt p]$ exists in $L^2[0, \infty)$. Moreover, the convergence is locally uniform in $g$ and $\nu$, and $q(t; g, \nu)$ is continuous in $t, g$ and holomorphic in $\nu$;
(ii) for any integers $i \le j$, the two-point function $G_{ij}(g, \nu)$ is given by \begin{align} \label{eqn:2.28}
G_{ij}(g, \nu) = \lim_{N\to\infty} G_{ij}^N(g, \nu)
= \langle Q^{j-i}(q) , \bar q \rangle.
\end{align}
\end{proposition}
The $Q^{j-i}$ in the inner product representation~\eqref{eqn:2.28} hints that we should see different behaviors of $G_{ij}$ when $\norm Q<1$ and $\norm Q >1$ as $\abs{j-i} \to \infty$.
\begin{corollary} \label{corollary:Gij}
Let $g>0$, $\nu\in \R$, and $\eps$ be given by Proposition~\ref{prop:Gij}. Then
(i) If $\nu > \nu_c(g)$, the two-point function $G_{ij}(g, \nu)$ decays exponentially as $\abs { j-i } \to \infty$.
(ii) If $\nu = \nu_c(g)$, the two-point function $G_{ij}(g, \nu_c(g))$ converges to a non-trivial limit as $\abs { j-i } \to \infty$.
(iii) If $\nu_c(g) - \eps< \nu < \nu_c(g)$, the two-point function $G_{ij}(g, \nu)$ diverges exponentially as $\abs { j-i } \to \infty$.
In particular, we have the critical exponent $\eta=1$ (see \eqref{exponent:eta}).
\end{corollary}
\emph{Proof. }
By Section~\ref{section:Qproperty}, $Q$ is positive and self-adjoint, and it has eigenvalue $\|Q\|$. Let $P_{\|Q\|}$ denote the projection into the eigenspace of eigenvalue $\norm Q$.
If $P_{\|Q\|}(q) \ne 0$, then \eqref{eqn:2.28} and the spectral theorem imply \begin{align}
G_{ij}(g, \nu) \sim \|Q\|^{\abs{ j-i }} \| P_{\|Q\|}(q)\|_2^2, \qquad \abs{ j-i }\to \infty, \end{align}
which gives the desired result. Hence, it is sufficient to prove $P_{\|Q\|}(q) \ne 0$.
Recall the definition of the operator $T$ in \eqref{def:T}. Since $\sqrt p >0$ and the integral kernel of $T$ is non-negative, we have $ T^N[\sqrt p](t) \ge \sqrt{p(t)}e^{-t}$ for all $N$ and $t$. By the $L^2$ convergence, \begin{align}\label{eqn:2.30}
q = \lim_{N\to\infty} T^N[\sqrt p] \ge \sqrt{p(t)}e^{-t} \end{align} for almost every $t$. But $q(t)$ is continuous, so \eqref{eqn:2.30} holds for all $t$.
By Lemma~\ref{lemma:Q}(ii), $Q$ has an eigenvector $h>0$ with eigenvalue $\norm Q$. Now \eqref{eqn:2.30} implies $\langle q, h \rangle >0$, so $P_{\|Q\|}(q) \ne 0$.
$\qedsymbol$
\subsubsection{Proof of Proposition~\ref{prop:Gij}(i)}
\emph{Step 1:}
First consider fixed $g>0, \nu\in\R$.
Define another linear operator $A$ by \begin{align}
Af(t) = \int_0^\infty f(s) k_1(t,s)\sqrt{\tfrac{t}{s}} ds, \end{align}
then by definition $Tf(t) = \sqrt{p(t)}e^{-t} + Af(t)$.
We view $A$ as an operator on a weighted $L^r$ space and will show it is a contraction. For $1\le r<2$, define weight $ w(t) = t^{-r/2}$ and measure $d\mu = w(t) dt$ on $(0, \infty)$. Let $r'$ denote the H\"older conjugate of $r$. Since $\sqrt{p(0)} =1$ and $r<2$, the function $\big(\sqrt{p(t)}\big)^r w(t) $ is integrable on $(0,1)$. Combining with the exponential decay of $\sqrt{p(t)}$, we get $\sqrt{p(t)} \in L^r(\mu)$.
For $h\in L^{r'}(\mu)$, we have \begin{align}
\int_0^\infty \( \abs{ h(t) } t^{ (1-r)/2 } \)^{r'} dt
&= \int_0^\infty \abs{ h(t) }^{r'} \frac{1}{t^{(r-1)r'/2}} dt \nonumber \\
&= \int_0^\infty \abs{ h(t) }^{r'} \frac{1}{t^{r/2}} dt
= \int_0^\infty \abs{ h(t) }^{r'} d\mu(t),
\end{align}
so $h\in L^{r'}(\mu)$ if and only if $ h(t) t^{ (1-r)/2 } \in L^{r'}(dt)$, with $\norm h_{L^{r'}(\mu)} = \norm{ h(t) t^{ (1-r)/2 } }_{L^{r'}(dt)}$.
Similarly, $f\in L^r(\mu)$ if and only if $f(t) t^{-1/2} \in L^r(dt)$, with $\norm f _{L^r(\mu)} = \norm{ f(t) t^{-1/2} }_{ L^r(dt)}$.
Now define another linear operator $B$ by \begin{align}
Bf(t) = \int_0^\infty f(s) k_1(t,s)ds. \end{align}
Then for $f\in L^r(\mu), h\in L^{r'}(\mu)$, we have \begin{align}
\langle Af, h \rangle_\mu
&=
\int_0^\infty \overline{h(t)} \int_0^\infty f(s) k_1(t,s)\sqrt{\tfrac{t}{s}}\ ds d\mu(t) \nonumber \\
&=
\int_0^\infty \frac{ \overline{h(t)}\sqrt{t}}{t^{r/2}} \int_0^\infty \frac{f(s)}{\sqrt{s}} k_1(t,s)\ ds dt \nonumber \\
&=
\left\langle B\( f(t) t^{-1/2} \), h(t) t^{(1-r)/2} \right\rangle_{dt}.
\label{eqn:2.33}
\end{align}
We view $A$ and $B$ as operators $A:L^r(\mu) \to L^r(\mu)$ and $B:L^r(dt) \to L^r(dt)$. They are both bounded operators because the kernel $k_1(\cdot, \cdot)$ has exponential decay by assumption~\eqref{A3}. By \eqref{eqn:2.33}, we have \begin{align}
\|A\|_{L^r(\mu)}
&= \sup \{ \langle Af, h \rangle_\mu : \| f \|_{L^r(\mu)} = \|h\|_{L^{r'}(\mu)} =1 \}\nonumber \\
&= \sup \{ \langle B\hat f, \hat h \rangle_{dt} : \| \hat f \|_{L^r(dt)} = \|\hat h\|_{L^{r'}(dt)} =1 \}\nonumber\\
&= \|B\|_{L^r(dt)}.
\end{align}
Therefore, to prove $A$ is a contraction, it is sufficient to prove $B$ has operator norm $<1$. For simplicity, we write $\|B\|_r = \|B\|_{L^r(dt)}$.
We can also view $B$ as an operator on $L^2(dt)$. As in Section~\ref{section:Qproperty}, $B:L^2(dt) \to L^2(dt)$ is compact, self-adjoint, and has a continuous, strictly positive eigenvector $h$ with eigenvalue $\|B\|_2$, which we normalize to have $\|h\|_2=1$. Since the modified Bessel functions satisfy $I_1<I_0$ pointwise, we have \begin{align}
\|B\|_2 = \langle Bh, h \rangle < \langle Qh, h \rangle \le \|Q\|.
\end{align}
By the Riesz--Thorin interpolation theorem, for $1\le r\le 2$, we can upper bound the operator norm by \begin{align}
\| B\|_r \le \| B\|_1^{1-\theta(r)} \| B\|_2^{\theta(r)}
\end{align}
with $\theta(r) \in [0,1]$ continuous and $\theta(2) = 1$.
Since $\| B\|_2 < \|Q\|$, by continuity, we have $\|B\|_r < \| Q\|$ for $r$ sufficiently close to 2.
Hence, for $\nu\ge \nu_c(g)$ (which satisfies $\|Q(g, \nu)\|\le 1$) and $r$ slightly less than 2, we get $\|A\|_{L^r(\mu)} = \| B\|_r<1$, so
$A$ is a contraction in $L^r(\mu)$.
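For concreteness, the interpolation exponent used above is explicit (a routine consequence of the Riesz--Thorin relation, recorded only for the reader's convenience): interpolating between the endpoint exponents $1$ and $2$ via $\frac1r = \frac{1-\theta}{1} + \frac{\theta}{2}$ gives \begin{align}
\theta(r) = \frac{2(r-1)}{r}, \qquad \theta(1) = 0, \quad \theta(2) = 1,
\end{align}
which is continuous in $r$, so the bound $\| B\|_r \le \| B\|_1^{1-\theta(r)} \| B\|_2^{\theta(r)}$ indeed approaches $\|B\|_2$ as $r\to 2^-$.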
\noindent
\emph{Step 2:} Now we allow $\nu$ to be complex.
Fix $g_0>0, \nu_0\in \R$ for which $\| Q(g_0, \nu_0) \| \le1$.
By Step 1, there exists some $r_0<2$ for which $\| B(g_0, \nu_0) \|_{r_0} <1$.
Since the map $(g, \nu)\mapsto \| B(g, \nu) \|_{r_0}$ is continuous, there exists $\delta>0$ such that $\| B(g, \nu) \|_{r_0}<1$ for any $\abs { g-g_0 }\le\delta, \abs { \nu-\nu_0 } \le \delta$.
The $\eps$ in the statement of the proposition is the $\delta$ corresponding to the choice $(g_0, \nu_0) = (g, \nu_c(g))$.
We will do a contraction argument in a neighborhood of $(g_0, \nu_0)$. Define the tube \begin{align}
S_0= \{ (g, \nu) \in (0, \infty) \times \C \mid \abs{g - g_0} \le \delta, \abs{\Re(\nu) - \nu_0} \le \delta \}. \end{align}
Define \begin{align}
\mathcal X_0 = \{ \phi:(0, \infty)\times S_0 \to \C
\mid \sup_{(g, \nu)\in S_0} \| \phi \|_{L^{r_0}(d\mu(t))}<\infty\},
\end{align}
where $d\mu(t) = t^{-r_0/2} dt$.
We extend $T$ and $A$ to include dependence on $g$ and $\nu$.
Define operator $\hat T:\mathcal X_0\to \mathcal X_0$ and linear operator $\hat A:\mathcal X_0\to \mathcal X_0$ by
\begin{align}
(\hat A\phi)(t, g, \nu) &= \int_0^\infty \phi(s, g, \nu) k_1(t,s;g,\nu)\sqrt{\tfrac{t}{s}} ds, \\
(\hat T\phi)(t, g, \nu) &= \sqrt{p(t; g, \nu)} e^{-t} + (\hat A\phi)(t, g, \nu).
\end{align}
Define $g^* = g_0-\delta$, $\nu^* = \nu_0-\delta$, $p^*(\cdot) = p(\cdot; g^*, \nu^*)$, $k_1^*(t,s) = k_1(t,s; g^*, \nu^*)$, and $A^* = A(g^*, \nu^*)$; these correspond to the worst case in terms of convergence. For all $(g, \nu)\in S_0$, we have \begin{align}
\abs{\sqrt{p(t; g, \nu)} } = \abs{e^{(-g\phi(t)-\nu t)/2} } \le e^{(-g^*\phi(t)-\nu^* t)/2} = \sqrt{p^*(t)},
\end{align} because $\phi\ge 0$ by assumptions \eqref{A0} and \eqref{A2}. It follows that \begin{align}
\abs{(\hat A\phi)(t, g, \nu)} \le \Big(A^*\abs{\phi(\cdot, g, \nu)}\Big)(t),
\end{align}
so \begin{align}
\| \hat A\phi \|_{L^{r_0}(\mu)}(g, \nu)
&\le \Big\| A^*\abs{\phi(\cdot, g, \nu)} \Big\|_{L^{r_0}(\mu)}
\le \| A^* \|_{L^{r_0}(\mu)} \| \phi(\cdot, g, \nu) \|_{L^{r_0}(\mu)}.
\end{align}
Thus, \begin{align}
\sup_{(g, \nu)\in S_0}\| \hat A\phi \|_{L^{r_0}(\mu)}
\le \| A^* \|_{L^{r_0}(\mu)} \sup_{(g, \nu)\in S_0} \| \phi \|_{L^{r_0}(d\mu(t))}.
\end{align}
This proves $\| \hat A \| \le \| A^* \|_{L^{r_0}(\mu)} = \| B(g^*, \nu^*) \|_{r_0}<1$, so $\hat A$ is a contraction on $\mathcal X_0$.
Since \begin{align}
\hat T\phi - \hat T \psi = \hat A\phi - \hat A \psi, \qquad \forall \phi, \psi\in \mathcal X_0,\end{align}
$\hat T$ is also a contraction, so $q = \lim_{N\to\infty} \hat T^N[\sqrt p]$ exists in $\mathcal X_0$ and is a fixed point of $\hat T$.
By the triangle inequality and H\"older's inequality,
\begin{align}
&\left\lvert \( q - \hat T^{N+1}[\sqrt p] \)(t, g, \nu)\right\rvert \nonumber \\
&=\left\lvert \( \hat T q - \hat T\big(\hat T^{N}[\sqrt p]\big) \)(t, g, \nu)\right\rvert \nonumber \\
&= \left\lvert \hat A\( q - \hat T^{N}[\sqrt p] \)(t, g, \nu)\right\rvert \nonumber \\
&\le \int_0^\infty \left\lvert\( q - \hat T^{N}[\sqrt p] \)(s, g, \nu)\right\rvert
k_1^*(t,s) \sqrt{\tfrac{t}{s}}s^{r_0/2} d\mu(s) \nonumber \\
&\le \left\| \(q - \hat T^{N}[\sqrt p] \)(\cdot, g, \nu) \right\|_{L^{r_0}(\mu)}
\Big\| k_1^*(t,s) \sqrt{\tfrac{t}{s}}s^{r_0/2} \Big\|_{L^{r_0'}(d\mu(s))}. \label{eqn:45}
\end{align}
Hence, \begin{align}
&\sup_{t\ge0, (g, \nu)\in S_0} \left\lvert q - \hat T^{N+1}[\sqrt p] \right\rvert \nonumber \\
&\le
\sup_{(g, \nu)\in S_0} \left\| \(q - \hat T^{N}[\sqrt p] \) \right\|_{L^{r_0}(\mu)}
\sup_{t\ge 0} \Big\| k_1^*(t,s)\sqrt{\tfrac{t}{s}}s^{r_0/2} \Big\|_{L^{r_0'}(d\mu(s))} .
\end{align}
The first term converges to 0, by definition of convergence in $\mathcal X_0$. The second term is finite because $k_1^*$ has exponential decay by assumption~\eqref{A3} and $r_0 < 2$. This shows the convergence is uniform in $t$ and locally uniform in $g, \nu$; hence $q(t; g, \nu)$ is continuous.
Also, since $\nu\mapsto \sqrt{p(t; g, \nu)}$ is holomorphic, $\nu \mapsto \hat T^k[\sqrt p]$ is holomorphic for all $k$. Then local uniform convergence implies $\nu \mapsto q(t; g, \nu)$ is holomorphic as well.
To get $L^2(dt)$ convergence, taking the $L^2(dt)$ norm in (\ref{eqn:45}) gives \begin{align}
&\sup_{(g, \nu)\in S_0} \left\| q - \hat T^{N+1}[\sqrt p] \right\|_{L^2(dt)} \nonumber \\
&\le \sup_{(g, \nu)\in S_0} \left\| \(q - \hat T^{N}[\sqrt p] \) \right\|_{L^{r_0}(\mu)}
\cdot \Big\|
\Big\| k_1^*(t,s) \sqrt{\tfrac{t}{s}}s^{r_0/2} \Big\|_{L^{r_0'}(d\mu(s))}
\Big\|_{L^2(dt)}.
\end{align}
The first term again converges to 0, and the second term again is finite because $k_1^*$ has exponential decay and $r_0<2$.
$\qedsymbol$
\subsubsection{Proof of Proposition~\ref{prop:Gij}(ii)}
The second equality follows directly from part (i), the continuity of $Q$, and Proposition~\ref{prop:Gij^N}.
To prove the first equality, we first prove $P_{ij}^N(g, T) \to P_{ij}(g, T)$.
This argument is adapted from \cite{BBS2015}. Notice if a walk never reaches $\pm N$, then it contributes the same to $E_i^N[\cdot]$ and $E_i[\cdot]$. Therefore, a walk starting at $i$ must make at least $\min\{\abs{N-i}, \abs{-N-i}\}$ steps to make a difference. Using $L_{T,x}\ge0$ for all $x$, \begin{align} \nonumber
\abs{ P_{ij}^N(g, T) - P_{ij}(g,T)}
&\le 2 P_i( X \text{ hits one of }\pm N \text{ by time }T) \\
&= 2P(M_T \ge \min\{\abs{N-i}, \abs{-N-i}\}),
\end{align}
where $M_T$ is a Poisson process with rate $2$.
For a fixed $T$, this probability converges to 0 as $N\to\infty$.
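In fact, the convergence is quantitative. A standard Chernoff bound for the rate-$2$ Poisson variable $M_T$ (a supplementary estimate, not needed below) gives, for $n > 2T$, \begin{align}
P(M_T \ge n) \le \inf_{\theta>0} e^{-\theta n} E\big[e^{\theta M_T}\big]
= \inf_{\theta>0} e^{-\theta n + 2T(e^\theta - 1)}
= e^{-2T} \Big(\frac{2eT}{n}\Big)^{n},
\end{align}
the infimum being attained at $\theta = \log\frac{n}{2T}$. With $n = \min\{\abs{N-i}, \abs{-N-i}\}$, the right-hand side decays super-exponentially in $N$ for fixed $T$.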
Let $\nu > \nu_c(g) -\eps$. By Fatou's lemma and part (i), \begin{align}
G_{ij}(g, \nu) &= \int_0^\infty P_{ij}(g, T) e^{-\nu T}dT \nonumber \\
&\le \lim_{N\to\infty} \int_0^\infty P_{ij}^N(g, T) e^{-\nu T}dT
= \lim_{N\to\infty} G_{ij}^N(g, \nu). \label{eqn:51}
\end{align}
Moreover, if $\nu>0$, then $P_{ij}^N\le 1$ and the Dominated Convergence Theorem implies (\ref{eqn:51}) holds with equality.
Since both sides of (\ref{eqn:51}) are holomorphic in $\nu$ when $\Re(\nu)> \nu_c(g)-\eps$, they agree on the whole domain. $\qedsymbol$
\subsection{Susceptibility and correlation length} \label{section:susceptibility}
\begin{proposition} \label{prop:chi}
Fix $g>0$. Write $\nu_c = \nu_c(g)$.
(i) For $\Re(\nu) > \nu_c$, the one-sided susceptibility $\chi_+(g, \nu)$ is given by \begin{align} \label{eqn:chi}
\chi_+(g, \nu) = \sum_{j=1}^\infty G_{0j}(g, \nu)
= \left \langle Q(1-Q)\inv(q) , \bar q \right \rangle,
\end{align}
and there is a constant $\bar u = \bar u(g)>0$ such that \begin{align} \label{eqn:chi_asym}
\chi_+(g, \nu)
\sim \frac{\bar u}{\nu-\nu_c}
\(-\frac{\del \|Q\|}{\del\nu}\Big\lvert_{\nu=\nu_c(g)}\)\inv
, \qquad \nu\to \nu_c^+.
\end{align}
(ii) $\nu_c(g) \le 0$.
\end{proposition}
We factor out the residue of $\chi_+$ in~\eqref{eqn:chi_asym} in this form because $\bar u$ appears unchanged in the asymptotics of $\sum_{j=1}^\infty j^k G_{0j}$ for every $k\in \N$, while the derivative factor appears with a $k$-dependent power; see~\eqref{eqn:j^k_asymp}.
By symmetry, the two-sided susceptibility is given by $\chi(g, \nu) = 2\chi_+(g, \nu) + G_{00}(g, \nu)$. Since $G_{00}$ is regular as $\nu\to \nu_c$ by Proposition~\ref{prop:Gij}(ii), we have the critical exponent $\gamma=1$.
\emph{Proof. }
(i)
By Proposition~\ref{prop:Gij}, $G_{0j}(g, \nu) =\langle Q^{j}(q) , \bar q \rangle$.
For $\Re(\nu) > \nu_c$, we know $\|Q(g, \nu) \| \le \| Q(g, \Re(\nu)) \| < 1$. Hence, \begin{align}
\abs{ \chi_+(g, \nu) } \le \sum_{j=1}^\infty \abs { \langle Q^{j}(q) , \bar q \rangle }
\le \sum_{j=1}^\infty \|Q\|^j \| q\|_2^2 < \infty.
\end{align} This convergence is locally uniform in $\nu$, so $\chi_+$ is holomorphic in $\nu$.
For $\nu\in \R$, the Monotone Convergence Theorem implies \begin{align}
\chi_+(g, \nu) = \sum_{j=1}^\infty \langle Q^{j}(q) , \bar q \rangle
= \left \langle \sum_{j=1}^\infty Q^{j}(q) , \bar q \right\rangle
= \left \langle Q(1-Q)\inv(q) , \bar q \right \rangle.
\end{align} This holds for all $\Re(\nu)>\nu_c$ by the uniqueness of analytic continuation.
To determine the divergence of $\chi_+$ when $\nu\to \nu_c^+$, we restrict to $\nu\in\R$ so that $Q$ is self-adjoint. By the spectral theorem, we can decompose $q= v + w$ where $v = P_{\|Q\|}(q)$ is the projection of $q$ into the eigenspace $E_{\|Q\|}$, and $w\in (E_{\|Q\|})^\perp$.
Since $\| Q\|$ is a simple eigenvalue, by the Kato--Rellich Theorem \cite[Theorem XII.8]{ReedSimon}, this decomposition is analytic around $\nu=\nu_c$ (where $\|Q\|=1$), and there exists $\delta>0$ such that the spectrum of $Q(g, \nu)$ intersects $\{ \lambda\in \C \mid \abs{\lambda-1} < \delta\}$ at exactly one point, for all $\nu$ sufficiently close to $\nu_c$. This means all other eigenvalues of $Q$ are bounded from above by $1-\delta$ as $\nu\to \nu_c$. Hence,
\begin{align}
\chi_+(g, \nu)
&= \left \langle Q(1-Q)\inv(q) , \bar q \right \rangle \nonumber \\
&= \left \langle Q(1-Q)\inv(v) , \bar v \right \rangle + \left \langle Q(1-Q)\inv(w) , \bar w \right \rangle\nonumber \\
&= \frac{\|Q\| }{1-\| Q\|} \left \| P_{\|Q\|}(q) \right \|_2^2 + \left \langle Q (1-Q)\inv(w) , \bar w \right \rangle\nonumber \\
&\sim \frac{1}{1-\| Q\|} \left \| P_{1}(q\lvert_{\nu_c}) \right \|_2^2 + O(1),
\end{align}
as $\nu\to \nu_c^+$ (recall $q$ is continuous at $\nu_c$ by Proposition~\ref{prop:Gij}(i)). The second term is bounded because all other eigenvalues of $Q$ are less than $1-\delta$.
Now, using the definition of derivative, we get \begin{align}
\chi_+(g, \nu) \sim
\frac{1}{\nu-\nu_c}
\(-\frac{\del \|Q\|}{\del\nu}\Big\lvert_{\nu=\nu_c}\)\inv
\left \| P_{1}(q\lvert_{\nu_c}) \right \|_2^2
, \qquad \nu\to \nu_c^+.
\end{align}
We get the desired result by defining $\bar u = \left \| P_{1}(q\lvert_{\nu_c}) \right \|_2^2 $.
(ii) For $\nu >0$, since $\phi\ge 0$, the Monotone Convergence Theorem gives \begin{align}
\chi_+(g, \nu) = \sum_{j=1}^\infty G_{0j}(g, \nu)
&= \int_0^\infty E_0 \( e^{-g\sum_{x=-\infty}^\infty\phi(L_{T,x})} \1_{X(T)> 0} \) e^{-\nu T}dT
\nonumber \\
&\le \int_0^\infty 1\cdot e^{-\nu T}dT
< \infty. \end{align}
But by part (i), $\chi_+(g, \nu_c(g)) = \infty$, so we must have $\nu_c(g)\le 0$.
$\qedsymbol$
Using the same method, we can also study the correlation length. Results are collected in the following corollary.
\begin{corollary} \label{corollary:correlation}
Fix $g>0$. Write $\nu_c = \nu_c(g)$. For $\Re(\nu) > \nu_c$, \begin{align}
\sum_{j=1}^\infty jG_{0j}(g, \nu)
= \left \langle Q(1-Q)^{-2}(q) , \bar q \right \rangle. \label{eqn:2.54}
\end{align}
Hence, \begin{align}
\sum_{j=1}^\infty j G_{0j}(g, \nu)
\sim \frac{\bar u}{(\nu-\nu_c)^2}
\(-\frac{\del \|Q\|}{\del\nu}\Big\lvert_{\nu=\nu_c(g)}\)^{-2}
, \qquad \nu\to \nu_c^+,
\end{align}
where $\bar u$ is the same as in Proposition~\ref{prop:chi}.
In general, for any $k\in \N$, \begin{align} \label{eqn:j^k_asymp}
\sum_{j=1}^\infty j^k G_{0j}(g, \nu)
\sim \frac{\bar u \cdot k!}{(\nu-\nu_c)^{k+1}}
\(-\frac{\del \|Q\|}{\del\nu}\Big\lvert_{\nu=\nu_c(g)}\)^{-(k+1)}
, \qquad \nu\to \nu_c^+.
\end{align}
Thus, using symmetry, the correlation length of order $k$ (defined in \eqref{exponent:nu_k}) satisfies \begin{align}
\xi_k(g, \nu)
\sim \frac{(k!)^{1/k}}{\nu-\nu_c}
\(-\frac{\del \|Q\|}{\del\nu}\Big\lvert_{\nu=\nu_c(g)}\)\inv
, \qquad \nu\to \nu_c^+, \end{align}
which gives the critical exponent $\nu_k=1$.
\end{corollary}
\emph{Proof. }
For $\Re(\nu) > \nu_c$, by the same argument as for Proposition~\ref{prop:chi}, \begin{align}
\sum_{j=1}^\infty j G_{0j}(g, \nu)
= \sum_{j=1}^\infty j \langle Q^{j}(q) , \bar q \rangle
= \left \langle \sum_{j=1}^\infty j Q^{j}(q) , \bar q \right\rangle
= \left \langle Q(1-Q)^{-2}(q) , \bar q \right \rangle.
\end{align}
The proof for the asymptotic formula is analogous.
For $k\ge 2$, we calculate $\sum_{j=1}^\infty j^k Q^j$ by differentiating the geometric series, then we use the same argument.
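For instance, for $k=2$, differentiating $\sum_{j=1}^\infty j x^j = x(1-x)^{-2}$ once more and multiplying by $x$ gives \begin{align}
\sum_{j=1}^\infty j^2 x^j = \frac{x(1+x)}{(1-x)^3}, \qquad \abs x <1,
\end{align}
so $\sum_{j=1}^\infty j^2 G_{0j} = \left\langle Q(1+Q)(1-Q)^{-3}(q), \bar q \right\rangle$. The projection onto the top eigenspace contributes $\frac{\|Q\|(1+\|Q\|)}{(1-\|Q\|)^3} \| P_{\|Q\|}(q)\|_2^2 \sim \frac{2\bar u}{(1-\|Q\|)^3}$ as $\nu\to\nu_c^+$, consistent with \eqref{eqn:j^k_asymp} for $k=2$, where $k!=2$.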
$\qedsymbol$
\section{Time asymptotics} \label{section:laplace}
In this section, we will prove asymptotic results in time $T$. We first prove a general Tauberian theorem, which utilizes analyticity properties of the Laplace transform on the boundary of the region of convergence. Then we apply the Tauberian theorem to our problem to prove Theorem~\ref{theorem:moments} and the convergence part of Theorem~\ref{theorem:WLLN}.
For a function $f:[0, \infty) \to \R$, we define its Laplace transform $\Lcal f$ to be the complex-valued function \begin{align}
\Lcal f(z) = \int_0^\infty f(T) e^{-zT}dT. \end{align}
\subsection{Tauberian theorem}
The following is our Tauberian theorem.
\begin{theorem} \label{theorem:tauber0}
Let $k\in \N_0$. Suppose $f:[0, \infty) \to[0, \infty)$ is differentiable and let its derivative be decomposed as $f' = \alpha_+ - \alpha_-$ with $ \alpha_\pm \ge 0$.
Suppose each of the Laplace transforms $\Lcal f(z), \Lcal \alpha_\pm(z)$ converges for $\Re(z)>0$ and can be extended to a meromorphic function on an open set containing the closed half-plane $\{ z\in \C \mid \Re(z)\ge 0\}$.
Suppose $\Lcal f(z)$ has a unique pole of order $k+1$ at $z=0$, and each of $\Lcal \alpha_\pm(z)$ either has a unique pole of order $\le k+1$ at $z=0$ or is holomorphic.
If $\lim_{z\to 0} z^{k+1}\Lcal f(z) = C >0$, then \begin{align}
\lim_{T\to \infty} \frac{f(T)}{T^k} = \frac{C}{k!}. \end{align}
\end{theorem}
The main tools to prove our Tauberian theorem are the following two theorems. The first is from \cite[Theorem III.9.2]{Korevaar2004} and the second is from \cite[Theorem 4.1]{Korevaar1954}.
\begin{theorem} \label{theorem:tauber1}
Let $\alpha(t)=0$ for $t<0$, let $\alpha$ be bounded from below for $t\ge 0$, and suppose the Laplace transform $G(z) = \Lcal\alpha(z)$ exists for $\Re(z)>0$.
Suppose that $G(\cdot)$ can be analytically continued to an open set containing the closed half-plane $\{ z\in \C \mid \Re(z)\ge0\}$. Then the improper integral $\lim_{T\to\infty}\int_0^{T} \alpha(t)dt$ exists and equals $G(0)$.
\end{theorem}
\begin{theorem} \label{theorem:tauber2}
Let $a(t)$ be integrable over every finite interval $(0, T)$, and let $\Lcal a(z)$ be convergent for $z>0$. Suppose $\Lcal a(z)$ can be extended analytically to a neighborhood of $z=0$. Finally, suppose that \begin{align}
a(t) \ge -\psi(t) \qquad (t>0),\end{align}
where $\psi(t)$ is continuous for $t>0$ and of the form $t^\gamma L(t)$, $L(t)$ slowly oscillating\footnote{$ L:(0, \infty) \to (0, \infty)$ is said to be slowly oscillating if it is continuous and $L(ct)/L(t)\to 1$ as $t\to\infty$ for every $c>0$. }. Then \begin{align}
\left\lvert\int_0^T a(t)dt - \lim_{z\to0} \Lcal a(z) \right\rvert= O(\psi(T)), \qquad T\to \infty. \end{align}
\end{theorem}
Notice Theorem~\ref{theorem:tauber2} does not assume $\psi(T) \to 0$, so the conclusion is different from that of Theorem~\ref{theorem:tauber1}.
Under the hypotheses of Theorem~\ref{theorem:tauber1}, we can take the $\psi(T)$ in Theorem~\ref{theorem:tauber2} to be a constant function; then Theorem~\ref{theorem:tauber2} only gives $ \int_0^T \alpha - G(0) = O(1)$, which is weaker than the conclusion of Theorem~\ref{theorem:tauber1}. The flexibility of Theorem~\ref{theorem:tauber2} lies in the fact that $\psi(T)$ may grow, for example polynomially. The resulting polynomial upper bound is sufficient for our purposes.
\emph{Proof of Theorem~\ref{theorem:tauber0}. }
The strategy of the proof is as follows. We will first use Theorem~\ref{theorem:tauber2} on a modification of $\alpha_\pm$ to prove $f(T) = O(T^k)$. Then we will use Theorem~\ref{theorem:tauber1} on a different modification of $f$ and $\alpha_\pm$ to show $\lim_{T\to\infty} \frac{f(T)}{T^k} = \lim_{T\to\infty} \frac{f(T)}{(T+1)^k}$ exists. Finally, we use the Hardy--Littlewood Tauberian theorem to identify the limit.
By the assumptions on $\Lcal \alpha_\pm$, there are $A_j, B_j\in \R$ such that \begin{align}
\Lcal \alpha_+ (z) = \sum_{j=1}^{k+1} \frac{A_j}{z^j} + O(1), \qquad
\Lcal \alpha_- (z) = \sum_{j=1}^{k+1} \frac{B_j}{z^j} + O(1)
\end{align} as $z\to 0$. We claim $A_{k+1} = B_{k+1}$.
This is because for $z>0$, integration by parts gives\footnote{For the boundary term, the existence of the limit $\lim_{T\to\infty} f(T)e^{-zT}$ follows from the existence of the Laplace transforms $\Lcal f(z)$ and $\Lcal f'(z)$.} $\Lcal[f'](z) = z\Lcal[f](z) - f(0)$.
By assumption, $\Lcal f$ has a pole of order $k+1$ at 0, so $\Lcal f'(z)$ has a pole of order $k$ at 0. The relation $f' = \alpha_+ - \alpha_-$ then forces $A_{k+1} = B_{k+1}$.
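Spelled out, this is a matching of Laurent coefficients at $z=0$: \begin{align}
z\Lcal f(z) - f(0) = \Lcal[f'](z) = \Lcal \alpha_+(z) - \Lcal \alpha_-(z)
= \frac{A_{k+1} - B_{k+1}}{z^{k+1}} + O(z^{-k}),
\end{align}
while the left-hand side equals $\sum_{j=1}^{k+1} C_j z^{1-j} - f(0) + O(1) = O(z^{-k})$, so the coefficient of $z^{-(k+1)}$ must vanish.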
We subtract polynomials from $\alpha_\pm$ so that the Laplace transforms of the resultant functions no longer have poles. Define \begin{align} \label{def:alpha-tilde}
\tilde \alpha_+(T) = \alpha_+(T) - \sum_{j=1}^{k+1} A_j \frac{T^{j-1}}{(j-1)!}, \qquad
\tilde \alpha_-(T) = \alpha_-(T) - \sum_{j=1}^{k+1} B_j \frac{T^{j-1}}{(j-1)!}.
\end{align}
Since $\Lcal[T^{j-1}/(j-1)!](z) = 1/z^j$, $\Lcal \tilde \alpha_\pm(z)$ are regular as $z\to 0$. From the assumptions on $\Lcal \alpha_\pm$, we get that $\Lcal \tilde \alpha_\pm$ extend analytically to an open set containing the full closed half-plane $\{ z\in\C \mid \Re(z) \ge 0\}$.
We focus on $\alpha_+$, the argument for $\alpha_-$ is analogous.
Since $\alpha_+(T) \ge 0$, we have $\tilde \alpha_+(T) \ge - \sum_{j=1}^{k+1} A_j \frac{T^{j-1}}{(j-1)!}$.
To apply Theorem~\ref{theorem:tauber2}, we define \begin{align}
\psi(T) = T^k L(T)
= T^k \max\Big\{ \frac{ \abs{A_{k+1}} }{ k!} +1, \frac{1}{T^k} \Big\lvert \sum_{j=1}^{k+1} A_j \frac{T^{j-1}}{(j-1)!} \Big\rvert \Big\} , \end{align}
so $\tilde \alpha_+(T) \ge -\psi(T)$.
Notice that the second argument of the maximum converges to $\frac{ \abs{A_{k+1}} }{ k!}$ as $T\to\infty$, so for $T$ large enough we have $L(T) = \frac{ \abs { A_{k+1} } }{ k!} +1$; in particular, $L$ is slowly oscillating.
Therefore, Theorem~\ref{theorem:tauber2} yields \begin{align}
\left\lvert \int_0^T \tilde \alpha_+(t)dt + \text{const} \right\rvert = O(T^k), \qquad T\to \infty.
\end{align}
The same equation holds for $\tilde \alpha_-$ in the place of $\tilde \alpha_+$. We subtract the two equations and use the definition of $\tilde \alpha_\pm$, $f' = \alpha_+ - \alpha_-$, and $A_{k+1} = B_{k+1}$, to get \begin{align}
\left\lvert \int_0^T \Big( f'(t) - \sum_{j=1}^k (A_j - B_j) \frac{t^{j-1}}{(j-1)!} \Big)dt + \text{const} \right\rvert = O(T^k), \qquad T\to \infty.
\end{align}
Since the polynomial term integrates to $O(T^k)$, we get
$f(T) = f(0) + \int_0^T f' = O(T^k)$ as $T\to \infty$.
Next, we will prove $\lim_{T\to\infty} \frac{f(T)}{T^k} = \lim_{T\to\infty} \frac{f(T)}{(T+1)^k}$ exists. By the Fundamental Theorem of Calculus, \begin{align} \label{eqn:3.10}
\frac{f(T)}{(T+1)^k}
= f(0) + \int_0^T \frac{ f'(t) }{ (t+1)^k} dt - k \int_0^T \frac{ f(t) }{ (t+1)^{k+1}} dt.
\end{align}
We first calculate the limit of the second integral using Theorem~\ref{theorem:tauber1}.
Since $\Lcal f$ has a pole of order $k+1$, there are $C_j\in \R$ such that \begin{align}
\Lcal f(z) = \sum_{j=1}^{k+1} \frac{C_j}{z^j} + O(1), \qquad z\to 0, \end{align} where $C_{k+1} = C$ in the hypotheses.
Analogous to $\tilde \alpha_\pm$, we define \begin{align}
\tilde f(T) = f(T) - \sum_{j=1}^{k+1} C_j \frac{T^{j-1}}{(j-1)!}, \end{align}
then $\Lcal \tilde f$ extends analytically to an open set containing the closed half-plane, but $\tilde f$ is no longer bounded from below.
To fix this, we apply Theorem~\ref{theorem:tauber1} to $\tilde f(T) / (T+1)^{k+1}$, which is bounded from below.
For $z>0$, the Laplace transform $\Lcal [ f(T) / (T+1)^{k+1} ](z)$ exists because $0\le f(T)/(T+1)^{k+1} \le f(T)$ and $\Lcal f(z)$ converges; then $\Lcal [ \tilde f(T) / (T+1)^{k+1} ](z)$ exists by linearity.
By properties of the Laplace transform, \begin{align} \label{eqn:3.13}
\Lcal \Big[ \frac{\tilde f(T)}{(T+1)^{k+1}} \Big](z)
= e^z \int_z^\infty ds_1 \int_{s_1}^\infty ds_2\, \dots
\int_{s_{k}}^\infty ds_{k+1} e^{-s_{k+1}} \Lcal[\tilde f ](s_{k+1}). \end{align}
This equation also holds for complex $z$ with $\Re(z)>0$ because $\Lcal \tilde f$ is analytic and the open half-plane is simply connected.
Since $\Lcal \tilde f$ can be extended analytically to the closed half-plane, \eqref{eqn:3.13} extends $\Lcal [ \tilde f(T) / (T+1)^{k+1} ]$ analytically to the closed half-plane too.
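The $k=0$ case of \eqref{eqn:3.13} shows where the identity comes from: for $z>0$, writing $\frac{1}{T+1} = \int_0^\infty e^{-u(T+1)}\,du$ and applying Fubini's theorem, \begin{align}
\Lcal \Big[ \frac{\tilde f(T)}{T+1} \Big](z)
= \int_0^\infty e^{-u} \Lcal[\tilde f](z+u)\, du
= e^z \int_z^\infty e^{-s_1} \Lcal[\tilde f](s_1)\, ds_1,
\end{align}
and iterating this identity $k+1$ times telescopes the intermediate factors $e^{\pm s_i}$ into the nested integral of \eqref{eqn:3.13}.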
By Theorem~\ref{theorem:tauber1}, \begin{align}
\lim_{T\to \infty} \int_0^T \frac{\tilde f(t)}{(t+1)^{k+1}} dt
= \lim_{z\to0} \Lcal \Big[ \frac{\tilde f(T)}{(T+1)^{k+1}} \Big](z).
\end{align}
Since $f$ and $\tilde f$ differ by a polynomial, we get \begin{align}\label{eqn:3.15}
\int_0^T \frac{ f(t)}{(t+1)^{k+1}} dt = \frac{C_{k+1}}{k!} \log(T+1)
+ L_1 + o(1)
\end{align} for some finite $L_1$.
Next, we calculate the first integral of~\eqref{eqn:3.10}.
We use the same strategy and apply Theorem~\ref{theorem:tauber1} to $\tilde \alpha_\pm(T) / (T+1)^k$. This gives \begin{align}\label{eqn:3.16}
\lim_{T\to \infty} \int_0^T \frac{\tilde \alpha_\pm(t)}{(t+1)^k} dt
= \lim_{z\to0} \Lcal \Big[ \frac{\tilde \alpha_\pm(T)}{(T+1)^k} \Big](z).
\end{align}
Since $\alpha_\pm$ and $\tilde \alpha_\pm$ differ by polynomials, if $k\ge 1$, we have \begin{align}
\int_0^T \frac{ \alpha_+(t)}{(t+1)^k} dt
&= \int_0^T \frac{ A_{k+1} }{k!} \frac{ t^k}{ (t+1)^k} dt + \frac{ A_k}{(k-1)!} \log(T+1) +L_2 + o(1), \\
\int_0^T \frac{ \alpha_-(t)}{(t+1)^k} dt
&= \int_0^T \frac{ B_{k+1} }{k!} \frac{ t^k}{ (t+1)^k} dt + \frac{ B_k}{(k-1)!} \log(T+1) +L_3 + o(1),
\end{align} for some finite $L_2$, $L_3$.
We subtract the two equations and use $f' = \alpha_+ - \alpha_-$ and $A_{k+1} = B_{k+1}$ to get \begin{align}\label{eqn:3.19}
\int_0^T \frac{ f'(t)}{(t+1)^k} dt
= \frac{ A_k - B_k}{(k-1)!} \log(T+1) +L_2 - L_3 + o(1).
\end{align}
Combining equations \eqref{eqn:3.10}, \eqref{eqn:3.15}, and \eqref{eqn:3.19}, we obtain \begin{align}
\frac{f(T)}{(T+1)^k}
= \frac{ A_k - B_k - C_{k+1} }{(k-1)!} \log(T+1) + L_4 + o(1)
\end{align}
for some finite $L_4$.
Now since $f(T) = O(T^k)$, the left-hand side of the equation is bounded. This implies $A_k - B_k - C_{k+1} = 0$ and $\lim_{T\to\infty} \frac{f(T)}{T^k} = \lim_{T\to\infty} \frac{f(T)}{(T+1)^k} = L_4$.
If $k=0$, since $A_{k+1} = B_{k+1}$, we have $f' = \alpha_+ - \alpha_- = \tilde \alpha_+ - \tilde \alpha_-$. Denoting $\int_0^{\infty-} g = \lim_{T\to \infty} \int_0^T g$, equation~\eqref{eqn:3.16} gives \begin{align}
\int_0^{\infty-} f'
= \int_0^{\infty-} (\tilde \alpha_+ - \tilde \alpha_-)
= \lim_{z\to 0} \Lcal [\tilde\alpha_+ - \tilde\alpha_- ](z)
= \lim_{z\to 0} \Lcal f'(z).
\end{align}
As $z\to 0^+$, we have $\Lcal f'(z) = z\Lcal [f](z) - f(0) \to C-f(0)$. It follows from $\int_0^{\infty-} f' = \lim_{T\to\infty} f(T) - f(0)$ that $\lim_{T\to\infty}f(T) = C$.
It remains to identify $L_4$ for the $k\ge 1$ case. Since $f\ge0$, the Hardy--Littlewood Tauberian theorem \cite[Theorem I.15.1]{Korevaar2004} states that $\Lcal f \sim Cz^{-(k+1)}$ as $z\to 0$ implies $\int_0^T f \sim \frac{C}{(k+1)!} T^{k+1}$ as $T\to \infty$. From this Ces\`aro sum and existence of $\lim_{T\to \infty} \frac{f(T) }{T^k}$, it is elementary to identify $\lim_{T\to \infty} \frac{f(T) }{T^k} = \frac{C}{k!}$.
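Concretely, if $\lim_{T\to\infty} f(T)/T^k = L_4$, then integrating gives \begin{align}
\int_0^T f(t)\, dt \sim \frac{L_4}{k+1}\, T^{k+1}, \qquad T\to\infty,
\end{align}
and comparing with the Hardy--Littlewood asymptotics $\int_0^T f \sim \frac{C}{(k+1)!} T^{k+1}$ forces $L_4 = \frac{C}{k!}$.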
$\qedsymbol$
\subsection{Proof of Theorem~\ref{theorem:moments}}
We prove asymptotics for the numerator and the denominator of (\ref{eqn:6}) separately. The quotient of the two limits then yields the theorem. We begin by proving two lemmas that will be used to verify the hypotheses of Theorem~\ref{theorem:tauber0}.
The first lemma calculates derivatives. It is proved in the same way as for the Kolmogorov forward equations. The only difference is that we get an extra term for staying at site $j$.
\begin{lemma} \label{lemma:T-derivative}
Recall $P_{0j}(g, T) = E_0 [ e^{-g\sum_x \phi(L_{T,x})} \1_{X(T)=j} ]$. Then
(i) \begin{align} \label{eqn:P0j'}
\frac{\del}{\del T} P_{0j}(g, T)
= &-g E_0 \big[ \phi'(L_{T,j})e^{-g\sum_x \phi(L_{T,x})}\1_{X(T)=j} \big]
\nonumber \\
&+ ( P_{0, j-1} -2P_{0j} + P_{0, j+1}) ;
\end{align}
(ii) \begin{align}
\frac{\del}{\del T} \sum_{j=1}^\infty P_{0j}(g, T) =
&-g \sum_{j=1}^\infty E_0 \big[ \phi'(L_{T,j})e^{-g\sum_x \phi(L_{T,x})}\1_{X(T)=j} \big] \nonumber\\
&+( P_{00} - P_{01} ); \end{align}
(iii) For $k\in\N$, \begin{align}\nonumber
\frac{\del}{\del T} \sum_{j=1}^\infty j^k P_{0j}(g, T) =
&-g \sum_{j=1}^\infty j^k E_0 \big[ \phi'(L_{T,j})e^{-g\sum_x \phi(L_{T,x})}\1_{X(T)=j} \big] \\
&+ P_{00} + (2^k-2)P_{01} + 2\sum_{j=2}^\infty \bigg( \sum_{\substack{l=2 \\ l\text{ even} }}^k \binom{k}{l} j^{k-l}\bigg) P_{0j}. \end{align}
\end{lemma}
\emph{Proof. }
(i) Let $T\ge 0$ and $h>0$. Consider \begin{align}
P_{0j}(g, T+h) = E_0 [ e^{-g\sum_x \phi(L_{T+h,x})} \1_{X(T+h)=j} ]. \end{align}
We split the right-hand side into three events, according to the number of jumps the walk makes between times $T$ and $T+h$.
If the walk makes no jumps between time $T$ and $T+h$, then $X(T)=j$, and \begin{align}
L_{T+h,x} =\begin{cases}
L_{T,j}+h, & x=j, \\
L_{T,x}, &x\ne j. \end{cases}
\end{align}
Since the jump rates of the walk are 1 to the left and 1 to the right, the probability for this event is $e^{-2h} = 1 -2h + o(h)$.
If the walk makes exactly one jump between times $T$ and $T+h$, then $X(T) = j - 1$ or $j+1$ with equal probability. From either starting point, the probability of jumping to $j$ by time $T+h$ is $1-e^{-h} = h + o(h)$.
If the walk makes two or more jumps between times $T$ and $T+h$, this happens with probability $O(h^2)$. Combining the three events, we get \begin{align}
P_{0j}(g, T+h) =\ &(1-2h) E_0 [ e^{-g [ \phi(L_{T,j}+h)+\sum_{x\ne j} \phi(L_{T,x}) ]} \1_{X(T)=j} ] \nonumber \\
&+h P_{0,j-1}(g,T) + h P_{0,j+1}(g,T) + o(h).
\end{align}
Thus, \begin{align}
\lim_{h\to 0^+} \frac{P_{0j}(g, T+h) - P_{0j}(g, T)}{h}
=\ &E_0 [ (-g)\phi'(L_{T,j})e^{-g \sum_x \phi(L_{T,x})} \1_{X(T)=j} ] \nonumber \\
&- 2 P_{0j}(g,T) + P_{0,j-1}(g,T) + P_{0,j+1}(g,T).
\end{align}
The left limit $h\to 0^-$ is handled by the same argument.
(ii) This follows by summing (\ref{eqn:P0j'}) over $j\ge 1$: the second-difference terms telescope, leaving the boundary term $P_{00} - P_{01}$.
(iii) This also follows by summing (\ref{eqn:P0j'}), now with the weights $j^k$. For $j\ge2$, the coefficient for $P_{0j}$ is \begin{align}
(j-1)^k - 2j^k + (j+1)^k
= \sum_{l=1}^k \binom k l j^{k-l} [ (-1)^l + 1^l ]
= 2 \sum_{\substack{l=2 \\ l\text{ even} }}^k \binom{k}{l} j^{k-l},
\end{align} which gives the desired result. $\qedsymbol$
The next lemma establishes analyticity properties. The proof is algebraic.
\begin{lemma} \label{lemma:chi_analytic}
Fix $g>0$, write $\nu_c = \nu_c(g)$.
(i) For any $0\ne y\in \R$, $1$ is not in the spectrum of $Q(g, \nu_c + iy)$.
(ii) The map $\nu \mapsto \chi_+(g, \nu) = \sum_{j=1}^\infty G_{0j}(g, \nu)$ can be extended to a meromorphic function on an open set containing the closed half-plane $\{ \nu \in \C \mid \Re(\nu)\ge \nu_c\}$, and it has a unique pole at $\nu=\nu_c$.
(iii) For any $k\in \N$, the map $\nu \mapsto \sum_{j=1}^\infty j^k G_{0j}(g, \nu)$ can be extended to a meromorphic function on an open set containing the closed half-plane $\{ \nu \in \C \mid \Re(\nu)\ge \nu_c\}$, and it has a unique pole at $\nu=\nu_c$.
\end{lemma}
\emph{Proof. }
(i) Since $Q$ is compact, we only need to prove $1$ is not an eigenvalue.
Denote $Q_c = Q(g, \nu_c)$ and $k_c(t, s) = k_0(t, s; g, \nu_c)$ for the kernel of $Q_c$. For $y\ne 0$, define an operator $Uf(t) = f(t)e^{-\half iyt}$. Then $Q = Q(g, \nu_c + iy)$ can be decomposed as \begin{align}
Qf(t)
&= \int_0^\infty f(s) e^{-\half iy t} e^{-\half iys} k_c(t,s)ds
= UQ_cUf(t).
\end{align}
Suppose $Qf = (UQ_cU)f = f$; then by definition of $U$, we have
$ e^{-\half iyt} Q_cUf(t) = f(t)$, so \begin{align}
\langle Q_cUf, Uf \rangle =
\int_0^\infty e^{\half iyt} f(t) \cdot e^{\half iyt} \overline{f(t)} dt
= \int_0^\infty e^{iyt} \abs{ f(t) }^2 dt. \end{align}
Since $Q_c$ is a positive operator, we have \begin{align}
\langle Q_cU f, Q_cU f \rangle
\le \| Q_c\| \langle Q_cU f, U f \rangle
= 1\cdot \int_0^\infty e^{iyt}\abs{ f(t) }^2 dt. \end{align}
But $\langle Q_cU f, Q_cU f \rangle = \int_0^\infty e^{\half iyt} f(t) \cdot e^{ - \half iyt}\overline{ f(t)} dt = \int_0^\infty \abs{ f(t) }^2 dt, $
so \begin{align}
0 \le \int_0^\infty (\cos(yt) - 1) \abs{ f(t) }^2 dt. \end{align}
For $y\ne0$, we have $\cos(yt) - 1<0$ for almost every $t$. This forces $f=0$ almost everywhere, so $1$ cannot be an eigenvalue of $Q$.
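The modulation argument in part (i) can be illustrated numerically. Below, a positive kernel of the same general shape as $k_c$ is discretized; the weight $e^{-gt/2}$ is a hypothetical stand-in for $\sqrt{p(t)}$ (the $\nu$-term is absorbed into the normalization), and the check is that after modulating by $e^{-\half iyt}$ on both sides, $1$ leaves the spectrum:

```python
import numpy as np
from scipy.special import ive

n, y, g = 300, 1.0, 1.0
t = np.linspace(1e-3, 25.0, n)
h = t[1] - t[0]
p_sqrt = np.exp(-0.5*g*t)                      # hypothetical stand-in for sqrt(p(t))
S = 2.0*np.sqrt(np.outer(t, t))
# e^{-t-s} I_0(2 sqrt(st)) = ive(0,S) e^{-(sqrt t - sqrt s)^2}, overflow-safe form
B = ive(0, S)*np.exp(-(np.sqrt(t)[:, None] - np.sqrt(t)[None, :])**2)
K = np.outer(p_sqrt, p_sqrt)*B*h
K /= np.linalg.eigvalsh(K).max()               # normalize: top eigenvalue of "Q_c" is 1
D = np.exp(-0.5j*y*t)
M = (D[:, None]*K)*D[None, :]                  # the modulated operator U Q_c U
ev = np.linalg.eigvals(M)
assert np.abs(ev).max() <= 1 + 1e-8            # modulation cannot increase the norm
assert np.min(np.abs(ev - 1.0)) > 1e-3         # and 1 is no longer in the spectrum
```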
(ii) By Proposition~\ref{prop:chi}(i), for $\Re(\nu) > \nu_c$, \begin{align}
\chi_+(g, \nu)
= \left \langle Q(1-Q)\inv(q) , \bar q \right \rangle. \label{eqn:3.33}
\end{align}
By Proposition~\ref{prop:Gij}(i), $q$ is holomorphic for $\Re(\nu) > \nu_c - \eps$. Notice the conjugation $\bar q$ and the conjugation of the second argument of the inner product cancel each other, leaving a holomorphic function. Therefore, it suffices to prove $(1-Q)\inv$ is well-defined on an open set containing the closed half-plane, except at $\nu=\nu_c$.
By part (i), $(1-Q)\inv$ is well-defined at $\nu = \nu_c+iy$ with $y\ne 0$. By continuity, $(1-Q)\inv$ is well-defined on a small neighborhood around all such $\nu = \nu_c+iy$, $y\ne 0$.
At $\nu=\nu_c$, we know 1 is the largest eigenvalue of $Q(g, \nu_c)$ and it is simple by Lemma~\ref{lemma:Q}(ii). Hence, the Kato--Rellich Theorem identifies the top of the spectrum $\lambda_g(\nu)$ of $Q(g, \nu)$ near $\nu=\nu_c$ (also see Remark~\ref{remark:kato}). Since for $\nu\in \R$ we have $\lambda_g(\nu) = \norm{Q(g, \nu)}$, the function $\lambda_g(\nu)$ must be non-constant. Hence, there exists a punctured neighborhood of $\nu=\nu_c$ in which $\lambda_g(\nu)\ne 1$. In this punctured neighborhood, $(1-Q)\inv$ is well-defined.
Together, this proves that $(1-Q)\inv$ is well-defined on an open set containing the closed half-plane, except at $\nu=\nu_c$.
The right-hand side of \eqref{eqn:3.33} then gives the desired extension.
(iii) For $k=1$, this follows from the same reasoning and equation~\eqref{eqn:2.54}, which says \[
\sum_{j=1}^\infty j G_{0j}(g, \nu)
= \left \langle Q(1-Q)^{-2}(q) , \bar q \right \rangle, \qquad (\Re(\nu)>\nu_c).
\]
For $k>1$, we use \begin{align}
\sum_{j=1}^\infty j^k G_{0j}(g, \nu)
= \left \langle \Big( \sum_{j=1}^\infty j^k Q^j \Big)(q) , \bar q \right \rangle, \qquad (\Re(\nu)>\nu_c). \end{align}
To calculate the sum, we apply the operator $z\frac{d}{dz}$ repeatedly to the geometric series $\sum_{j=0}^\infty z^j = (1-z)\inv$. Notice $\sum_{j=1}^\infty j^k z^j$ belongs to the $\Z$-algebra generated by $z$ and $(1-z)\inv$. Then by holomorphic functional calculus, $ \sum_{j=1}^\infty j^k Q^j$ belongs to the $\Z$-algebra generated by $Q$ and $(1-Q)\inv$, so the same reasoning as in part (ii) applies.
$\qedsymbol$
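The algebra claim used in part (iii) can be checked symbolically: repeatedly applying $z\frac{d}{dz}$ to the geometric series shows $\sum_{j\ge1} j^k z^j = P_k(z)\,(1-z)^{-(k+1)}$ for a polynomial $P_k$, hence membership in the $\Z$-algebra generated by $z$ and $(1-z)^{-1}$:

```python
import sympy as sp

z = sp.symbols('z')
for k in range(1, 6):
    G = z/(1 - z)                              # sum_{j>=1} z^j
    for _ in range(k):
        G = sp.cancel(z*sp.diff(G, z))         # z d/dz maps j^m z^j to j^{m+1} z^j
    # numeric spot check against the truncated series at z = 0.3
    partial = sum(j**k * 0.3**j for j in range(1, 300))
    assert abs(float(G.subs(z, 0.3)) - partial) < 1e-9
    # G (1-z)^{k+1} is a polynomial, so G lies in the stated algebra
    assert sp.cancel(G*(1 - z)**(k + 1)).is_polynomial(z)
```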
\emph{Proof of Theorem~\ref{theorem:moments}.}
We will see that $\theta(g) =
\big(-\frac{\del \|Q\|}{\del\nu}\big\lvert_{\nu=\nu_c(g)}\big)\inv$.
First, we deal with the denominator of~\eqref{eqn:6}.
Fix $g>0$. Define \begin{align}
f(T) = \sum_{j=1}^\infty P_{0j}(g, T) e^{-\nu_c T} \ge 0. \end{align}
Using Lemma~\ref{lemma:T-derivative}(ii), we can differentiate $f$. Define \begin{align}
\alpha_+(T) &= -\nu_c \sum_{j=1}^\infty P_{0j}(g, T) e^{-\nu_c T}
+ P_{00}(g, T)e^{-\nu_c T}, \\
\alpha_-(T) &= g \sum_{j=1}^\infty E_0 \big[ \phi'(L_{T,j})e^{-g\sum_x \phi(L_{T,x})}\1_{X(T)=j} \big] e^{-\nu_c T} + P_{01}(g, T)e^{-\nu_c T},
\end{align}
then $f' = \alpha_+ - \alpha_-$ and $ \alpha_\pm \ge 0$, because $\nu_c\le 0$ by Proposition~\ref{prop:chi}(ii) and $\phi'\ge 0$ by assumption~\eqref{A2}.
Notice $\Lcal f (z) = \chi_+(g, \nu_c + z)$, so $\Lcal f(z)$ converges for $\Re(z)>0$ by Proposition~\ref{prop:chi} and extends to a meromorphic function on an open set containing the closed half-plane by Lemma~\ref{lemma:chi_analytic}.
For $\Lcal \alpha_+(z) = -\nu_c \chi_+(g, \nu_c + z) + G_{00}(g, \nu_c+z)$, the same is true because $G_{00}(g, \nu_c+z)$ is holomorphic for $\Re(z) > -\eps$ by Proposition~\ref{prop:Gij}.
For $\Lcal \alpha_-$, using the same method described in Section~\ref{section:2pt} and assumption~\eqref{A4}, it is easy to prove \begin{align}
\Lcal E_0 \big[ \phi'(L_{T,j})e^{-g\sum_x \phi(L_{T,x})}\1_{X(T)=j} \big] =
\left \langle Q^j [q](t), \phi'(t) \overline{q(t)} \right\rangle. \end{align}
The required properties follow analogously.
We also know $\Lcal f$ has a unique simple pole at $z=0$, and each of
$\Lcal \alpha_\pm$ either has a unique simple pole at $z=0$ or is holomorphic.
Therefore, by Theorem~\ref{theorem:tauber0} with $k=0$ and the $\nu\to \nu_c$ asymptotic \eqref{eqn:chi_asym}, we conclude \begin{align}
\sum_{j=1}^\infty P_{0j}(g, T) e^{-\nu_c T} \to
\bar u \(-\frac{\del \|Q\|}{\del\nu}\Big\lvert_{\nu=\nu_c(g)}\)\inv, \qquad T\to \infty. \label{eqn:95}
\end{align}
For the numerator of~\eqref{eqn:6}, we use \begin{align}
f(T)
&= \sum_{j=1}^\infty j^k P_{0j}(g, T) e^{-\nu_c T}, \\
\alpha_+(T)
&= -\nu_c \sum_{j=1}^\infty j^kP_{0j}(g, T) e^{-\nu_c T} \nonumber \\
&\quad + \Big[ P_{00} + (2^k-2)P_{01} + 2\sum_{j=2}^\infty \bigg( \sum_{\substack{l=2 \\ l\text{ even} }}^k \binom{k}{l} j^{k-l}\bigg) P_{0j} \Big]
e^{-\nu_c T}, \\
\alpha_-(T)
&= g \sum_{j=1}^\infty j^k E_0 \big[ \phi'(L_{T,j})e^{-g\sum_x \phi(L_{T,x})}\1_{X(T)=j} \big] e^{-\nu_c T}.
\end{align}
Now $\Lcal f$ has a pole of order $k+1$ by Corollary~\ref{corollary:correlation}. Theorem~\ref{theorem:tauber0} and the $\nu\to \nu_c$ asymptotic~\eqref{eqn:j^k_asymp} now give \begin{align}
\sum_{j=1}^\infty j^k P_{0j}(g, T) e^{-\nu_c T} \sim
\bar u \(-\frac{\del \|Q\|}{\del\nu}\Big\lvert_{\nu=\nu_c(g)}\)^{-(k+1)} T^k
, \qquad T\to \infty.
\end{align}
Dividing by the denominator (\ref{eqn:95}), we get \[
\frac{ \sum_{j=1}^\infty j^k P_{0j}(g, T) }{ \sum_{j=1}^\infty P_{0j}(g, T) } \sim \(-\frac{\del \|Q\|}{\del\nu}\Big\lvert_{\nu=\nu_c(g)}\)^{-k} T^k, \qquad T\to \infty, \]
which is the desired result with $\theta(g) =
\big(-\frac{\del \|Q\|}{\del\nu}\big\lvert_{\nu=\nu_c(g)}\big)\inv$.
$\qedsymbol$
\subsection{Proof of Theorem~\ref{theorem:WLLN}, convergence part}
We use Theorem~\ref{theorem:moments} to prove the convergence claim.
The result follows from Markov's inequality and the Paley--Zygmund inequality. To simplify the notation, we suppress the dependence on $g$ and the conditioning. Write $\P^T(\cdot) = \P_0^{g, T}(\cdot \mid X(T)>0)$ and $\E^T$ for the corresponding expectation.
For all $T$, \begin{align}
\P^T \( \left\lvert \frac{X(T)}{T} - \theta \right\lvert \ge \eps \)
&= \P^T ( \abs{ X(T) - \theta T } \ge \eps T ) \nonumber \\
&= \P^T( X(T) \ge (\theta+\eps)T) + \P^T( X(T) \le (\theta-\eps)T) \nonumber\\
&= \P^T( X(T) \ge (\theta+\eps)T) + 1 - \P^T( X(T) > (\theta-\eps)T).
\label{eqn:99}
\end{align}
By Markov's inequality, for any $k\in \N$, \begin{align}
\P^T( X(T) \ge (\theta+\eps)T)
&=\P^T( X(T)^k \ge (\theta+\eps)^k T^k) \nonumber \\
&\le \frac{ \E^T[X(T)^k]}{ (\theta+\eps)^k T^k}
= \frac{ \theta^k T^k + o(T^k)}{ (\theta+\eps)^k T^k}
= \frac{ \theta^k + o(1)}{ (\theta+\eps)^k} \end{align}
as $T\to\infty$,
so $\limsup_{T\to\infty}\P^T( X(T) \ge (\theta+\eps)T) \le (\frac \theta {\theta + \eps})^k$ for all $k$. Letting $k\to \infty$ proves the limit exists and equals 0.
By the Paley--Zygmund inequality, for any $k\in \N$ and $\gamma_k\in[0,1]$, \begin{align}
\P^T( X(T)^k > \gamma_k \E^T[X(T)^k] )
&\ge (1-\gamma_k)^2 \frac{ \E^T[X(T)^k]^2}{\E^T[X(T)^{2k}]} \nonumber \\
&= (1-\gamma_k)^2 \frac{ [\theta^k T^k + o(T^k)]^2}{\theta^{2k}T^{2k} + o(T^{2k})} \nonumber \\
&=(1-\gamma_k)^2 \frac{ [\theta^k + o(1)]^2}{\theta^{2k} + o(1)}
\label{eqn:100}
\end{align} as $T\to \infty$.
We choose $\gamma_k$ to be \begin{align}
\gamma_k = \frac{ (\theta - \eps)^k T^k} { \E^T[X(T)^k]}
= \frac{ (\theta - \eps)^k T^k} { \theta^k T^k + o(T^k) }
= \frac{ (\theta - \eps)^k} { \theta^k + o(1) }. \end{align}
Notice this choice is valid for all sufficiently small $\eps$ and sufficiently large $T$. Now \eqref{eqn:100} gives \begin{align}
\liminf_{T\to\infty} \P^T(X(T) > (\theta-\eps)T)
\ge \bigg[ 1- \(\frac{ \theta - \eps} \theta\)^k \bigg]^2 \cdot 1.
\end{align}
Letting $k\to \infty$ proves the limit exists and equals 1.
Now from (\ref{eqn:99}), \[
\lim_{T\to\infty} \P^T \( \left\lvert \frac{X(T)}{T} - \theta \right\lvert \ge \eps \)
= 0 + 1- 1= 0, \]
which is the desired limit. $\qedsymbol$
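Both inequalities used above are easy to confirm by simulation. The sketch below verifies the Markov and Paley--Zygmund bounds for a generic nonnegative random variable (an exponential with mean 2, an arbitrary choice), up to Monte Carlo error:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.exponential(scale=2.0, size=1_000_000)   # any nonnegative random variable
EW, EW2 = W.mean(), (W**2).mean()
for gamma in (0.0, 0.25, 0.5, 0.75):
    # Paley--Zygmund: P(W > gamma*EW) >= (1-gamma)^2 (EW)^2 / E[W^2]
    pz_lhs = (W > gamma*EW).mean()
    pz_rhs = (1 - gamma)**2 * EW**2/EW2
    assert pz_lhs >= pz_rhs - 1e-3
for a in (3.0, 5.0, 8.0):
    for k in (1, 2, 3):
        # Markov applied to W^k: P(W >= a) <= E[W^k] / a^k
        assert (W >= a).mean() <= (W**k).mean()/a**k + 1e-3
```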
\section{Monotonicity of speed} \label{section:monotonicity}
In this section, we prove that the escape speed $\theta(g) = \Big(-\frac{\del \|Q\|}{\del\nu}\Big\lvert_{\nu=\nu_c(g)}\Big)^{-1}$ has a strictly positive derivative $\theta'(g)>0$. In Section~\ref{section:monotonicity-reduction}, we first build a sequence $\{ c_n \}_{n=0}^\infty$ (defined in \eqref{def:cn}) for every $g$, and we prove that $\theta'(g)>0$ is equivalent to $c_0 + 2 \sum_{n=1}^\infty c_n >0$. Here, $c_n$ is related to the $n$-th power of the operator $Q$ evaluated at $(g, \nu_c(g))$.
In Section~\ref{section:monotonicity-dominance}, we use stochastic dominance to prove $c_0>0$ and $c_n\ge 0$ for all $n$, which imply $\theta'(g)>0$ and complete the proof of Theorem~\ref{theorem:WLLN}.
Since we will only encounter the operator $Q$ in this section, for simplicity, we write $k(t,s) = k_0(t,s)$ for the kernel of $Q$ (see \eqref{def:Q}). Also, since we do not need complex $\nu$ in this section, we assume $\nu\in \R$.
\subsection{Reduction} \label{section:monotonicity-reduction}
We begin by calculating $\theta'(g)$ using the Implicit Function Theorem. Using subscripts for partial derivatives and denoting $\lambda = \norm{Q}$, we get
\begin{align}
\frac{d}{dg}\( \frac{1}{\theta(g)} \)
&= -\lambda_{\nu g} - \lambda_{\nu\nu}\frac{d\nu_c}{dg} \nonumber \\
&= -\lambda_{\nu g} - \lambda_{\nu\nu}(-\lambda_g / \lambda_\nu) \nonumber \\
&= ( \lambda_{\nu g}\lambda_\nu - \lambda_{\nu\nu}\lambda_g)/(-\lambda_\nu). \label{eqn:101}
\end{align}
Since $\lambda_\nu <0$ by Lemma~\ref{lemma:nu_c}, we have $\theta' >0$ if and only if $\lambda_{\nu g}\lambda_\nu - \lambda_{\nu\nu}\lambda_g <0$.
This combination of derivatives is central to the reduction so we give it a name. For a $C^2$ function $F = F(g, \nu)$, we define the operator \begin{align}\label{def:L}
L[F] = F_{\nu g}F_\nu - F_{\nu\nu}F_g. \end{align}
The goal is to prove $L[\lambda] < 0$.
However, it is difficult to calculate $L[\lambda]$ directly because of the second derivatives. Instead, as suggested in \cite{GH1993}, we can calculate $L\big[ \langle Q^n(g, \nu) f, f \rangle^{1/n} \big]$ for some $f>0$ and then send $n\to\infty$. The idea is to utilize $\|Q(g, \nu) \| =
\lim_{n\to\infty}\langle Q^n(g, \nu) f, f \rangle^{1/n}$. This is justified by the following lemma. We choose $f$ to be a leading eigenvector of $Q$.
\begin{lemma} \label{lemma:Hn}
Fix $g_0>0$ and $\nu_0 = \nu_c(g_0)$. Let $h_0$ be the positive normalized leading eigenvector of $Q(g_0, \nu_0)$, \ie, $h_0$ satisfies $Q(g_0, \nu_0)h_0 = h_0$, $h_0>0$ and $\| h_0\|_2 = 1$. For any $n\ge1$, define \begin{align} \label{def:Hn}
H_n(g, \nu) = \langle Q^n(g, \nu)h_0, h_0 \rangle^{1/n}. \end{align}
Then for $*=g, \nu$, we have $ \del_* H_{n} (g_0, \nu_0) = \lambda_*$ and $\del_*\del_\nu H_{n}(g_0, \nu_0) \to \lambda_{\nu *}$ as $n\to \infty$.
Therefore, \begin{align}\label{LH_limit}
\lim_{n\to \infty} L[H_n] \big \lvert_{g_0, \nu_0} = L[\lambda] \big\lvert_{g_0, \nu_0}. \end{align}
\end{lemma}
This lemma is proved by calculating $L[H_n]$ using differentiation rules and calculating $L[\lambda]$ using Cauchy's integral formula. The proof is in Appendix~\ref{appendix:Hn}.
The function $H_n(g, \nu)$ is more tractable than the operator norm $\lambda(g, \nu) = \norm{Q(g, \nu)}$. The next lemma calculates $L[H_n] \big \lvert_{g_0, \nu_0}$.
\begin{lemma} \label{lemma:L[Hn]}
Assume the hypotheses of Lemma~\ref{lemma:Hn}.
For any $n\ge 0$, define \begin{align} \label{def:cn}
c_n &=
\Big\langle Q^{n}\big[t h_0(t)\big] (s), \phi(s) h_0(s)\Big\rangle
\cdot \int_0^\infty s h_0^2(s) ds \nonumber \\
&\quad- \Big\langle Q^{n}\big[t h_0(t)\big](s), s h_0(s)\Big\rangle
\cdot \int_0^\infty \phi(s) h_0^2(s) ds, \end{align}
with $Q$ evaluated at $(g_0, \nu_0)$.
Then for all $n\ge 1$, \begin{align}
L[H_n] \big \lvert_{g_0, \nu_0}
&= -\frac 1 n
\( - \half c_0 + \half c_n + \sum_{i=1}^n \sum_{j=1}^n c_{\abs{ j-i }} \).
\end{align}
\end{lemma}
Combining the two lemmas, we sum diagonally (the convergence of the series is controlled by the second largest eigenvalue, which is $<1$) to obtain \begin{align}
L[\lambda] \big\lvert_{g_0, \nu_0}
= \lim_{n\to\infty} L[H_n] \big \lvert_{g_0, \nu_0}
= -c_0 - 2\sum_{n=1}^\infty c_n.
\end{align}
In the next subsection, we will prove $c_0>0$ and $c_n\ge 0$ for all $n$. This will allow us to conclude $L[\lambda] \big\lvert_{g_0, \nu_0}<0$, which is equivalent to $\theta'(g_0)>0$.
The proof of Lemma~\ref{lemma:L[Hn]} uses the following lemma due to \cite{GH1993}.
\begin{lemma} \label{lemma:phi}
Let $F=F(g, \nu)$ be $C^2$ and $\phi:\R\to\R$ be differentiable on the image of $F$. Then \begin{align}
L[\phi(F)] = (\phi'(F))^2 L[F]. \end{align}
\end{lemma}
\emph{Proof. } This is a direct calculation using differentiation rules. \begin{align}
L[\phi(F)]
&= (\phi(F))_{\nu g}(\phi(F)) _\nu - (\phi(F))_{\nu\nu}(\phi(F))_g \nonumber \\
&= (\phi'(F)F_\nu)_g(\phi'(F)F_\nu) - (\phi'(F)F_\nu)_\nu (\phi'(F)F_g)\nonumber \\
&= ({\phi''F_gF_\nu} +\phi'F_{\nu g})(\phi'F_\nu)
- ({\phi''F_\nu F_\nu} +\phi'F_{\nu \nu})(\phi'F_g) \nonumber \\
&= (\phi'(F))^2 (F_{\nu g}F_\nu - F_{\nu\nu}F_g) \nonumber \\
&= (\phi'(F))^2 L[F],
\end{align} which is the desired result. $\qedsymbol$
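Lemma~\ref{lemma:phi} is a purely algebraic identity and can be verified symbolically; the test function $F$ and the choice $\phi(t)=t^3$ below are arbitrary:

```python
import sympy as sp

g, nu = sp.symbols('g nu')

def L(E):
    # L[E] = E_{nu g} E_nu - E_{nu nu} E_g
    return sp.diff(E, nu, g)*sp.diff(E, nu) - sp.diff(E, nu, nu)*sp.diff(E, g)

F = g*nu**2 + g**3*nu + sp.sin(g*nu)    # arbitrary smooth test function
phiF = F**3                             # phi(t) = t^3, so phi'(F) = 3 F^2
lhs_minus_rhs = sp.expand(L(phiF) - (3*F**2)**2*L(F))
assert lhs_minus_rhs == 0               # L[phi(F)] = (phi'(F))^2 L[F]
```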
\emph{Proof of Lemma~\ref{lemma:L[Hn]}. }
We use Lemma~\ref{lemma:phi} with $\phi(t) = t^{1/n}$ and $F(g, \nu) = \langle Q^n(g, \nu) h_0, h_0 \rangle$. By the hypotheses, we have $\langle Q^n(g_0, \nu_0) h_0, h_0 \rangle = \langle h_0, h_0 \rangle = 1$, hence, \begin{align} \label{eqn:4.11}
L[H_n] \big \lvert_{g_0, \nu_0}
&=
\( \frac{1}{n}(1) \)^2 L\big[ \langle Q^n(g, \nu) h_0, h_0 \rangle \big] \Big\lvert_{g_0, \nu_0} .
\end{align}
Next, we calculate the right-hand side.
Since $h_0$ is fixed, we only need to differentiate $Q$.
Recall $Q$ was defined in \eqref{def:Q} by \begin{align*}
Qf(t) &= \int_0^\infty f(s) k(t,s) ds, \\
k(t,s) &= \sqrt{p(t)}\sqrt{p(s)} e^{-t}e^{-s} I_0(2\sqrt{st}), \\
\sqrt{p(t)} &= e^{-\half g\phi(t) - \half \nu t}.
\end{align*}
Observe $g$ and $\nu$ enter the kernel only through $\sqrt{p}$.
Writing out all $n$ integrals in $Q^n h_0$ yields \begin{align} \label{eqn:Qn}
\langle Q^n h_0, h_0 \rangle
&= \int_{(0, \infty)^{n+1}} h_0(s_n) k(s_n, s_{n-1}) \cdots k(s_1, s_0) \cdot h_0(s_0) d\bf s,
\end{align}
where $d\bf s = ds_0 \cdots ds_n$.
Thus, the $g$-derivative is
\begin{align}
\frac{\del}{\del g}\langle Q^n h_0, h_0 \rangle
= - \int_{(0, \infty)^{n+1}} &\( \half \phi(s_n) + \sum_{j=1}^{n-1} \phi(s_j) + \half \phi(s_0)\)
\nonumber \\
\cdot &\, h_0(s_n) k(s_n, s_{n-1}) \cdots k(s_1, s_0) \cdot h_0(s_0) d\bf s.
\end{align}
When evaluated at $(g_0, \nu_0)$, we have $Q(g_0,\nu_0)h_0 = h_0$, so for each $j$, \begin{align}
\int_{(0, \infty)^{n+1}} \phi(s_j) h_0(s_n) k(s_n, s_{n-1}) \cdots k(s_1, s_0) \cdot h_0(s_0) d\bf s
= \int_0^\infty \phi(s_j) h_0^2(s_j) ds_j. \end{align}
This gives
\begin{align}
\frac{\del}{\del g}\langle Q^n h_0, h_0 \rangle \bigg\lvert_{g_0, \nu_0}
&= -n \int_0^\infty \phi(s) h_0^2(s) ds.
\end{align}
The $\nu$-derivative is similar and gives a multiplier of
$ -\( \half s_n + \sum_{j=1}^{n-1} s_j + \half s_0\)$ to the integrand of \eqref{eqn:Qn}. We get \begin{align}
\frac{\del}{\del \nu}\langle Q^n h_0, h_0 \rangle \bigg\lvert_{g_0, \nu_0}
&= -n \int_0^\infty s h_0^2(s) ds. \end{align}
For the second derivatives, we define \begin{align} \label{def:alpha}
\alpha_j =\alpha_j(n)= \begin{cases}
\half &j=0, \\
1& 0<j<n, \\
\half & j=n,
\end{cases}\end{align}
then \begin{align}
&\frac{\del^2}{\del \nu \del g}\langle Q^n h_0, h_0 \rangle \nonumber \\
=&
\sum_{i=0}^n \sum_{j=0}^n \alpha_i \alpha_j
\int_{(0, \infty)^{n+1}} \phi(s_i) s_j h_0(s_n) k(s_n, s_{n-1}) \cdots k(s_1, s_0) \cdot h_0(s_0) d\bf s.
\end{align}
When evaluating at $(g_0, \nu_0)$, we have $Q(g_0,\nu_0)h_0 = h_0$, so \begin{align}
&\int_{(0, \infty)^{n+1}} \phi(s_i) s_j h_0(s_n) k(s_n, s_{n-1}) \cdots k(s_1, s_0) \cdot h_0(s_0) d\bf s \nonumber \\
&= \Big\langle Q^{\abs{ j-i }}\big[s_j h_0(s_j)\big](s_i), \phi(s_i) h_0(s_i)\Big\rangle. \end{align}
Thus, \begin{align}
\frac{\del^2}{\del \nu \del g}\big\langle Q^n h_0, h_0 \big\rangle \bigg\lvert_{g_0, \nu_0}&=
\sum_{i=0}^n \sum_{j=0}^n \alpha_i \alpha_j
\Big\langle Q^{\abs{ j-i }}\big[s_j h_0(s_j)\big](s_i), \phi(s_i) h_0(s_i)\Big\rangle. \end{align}
Similarly, \begin{align}
\frac{\del^2}{\del \nu^2 }\big\langle Q^n h_0, h_0 \big\rangle \bigg\lvert_{g_0, \nu_0}&=
\sum_{i=0}^n \sum_{j=0}^n \alpha_i \alpha_j
\Big\langle Q^{\abs{ j-i }}\big[s_j h_0(s_j)\big](s_i), s_i h_0(s_i)\Big\rangle. \end{align}
Recall the definition of $c_n$ in~\eqref{def:cn} and the definition of the operator $L$ in~\eqref{def:L}. Combining the sums above gives \begin{align} \label{eqn:116}
L\big[ \langle Q^n(g, \nu) h_0, h_0 \rangle \big] \Big\lvert_{g_0, \nu_0}
= \sum_{i=0}^n \sum_{j=0}^n \alpha_i \alpha_j (-n) c_{\abs{ j-i }}.
\end{align}
Inserting the definitions of $\alpha_i$ and $\alpha_j$ yields
\begin{align}
L\big[ \langle Q^n(g, \nu) h_0, h_0 \rangle \big] \Big\lvert_{g_0, \nu_0}
= (-n) \left( -\half c_0 +\half c_n +
\sum_{i=1}^n \sum_{j=1}^n c_{\abs{ j-i }} \right).
\end{align}
This and equation~\eqref{eqn:4.11} give the claim.
$\qedsymbol$
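The bookkeeping with the boundary weights $\alpha_i$ can be verified symbolically for small $n$, treating $c_0, \dots, c_n$ as free symbols:

```python
import sympy as sp

for n in range(1, 7):
    c = sp.symbols(f'c0:{n+1}')                         # c_0, ..., c_n
    alpha = [sp.Rational(1, 2)] + [1]*(n - 1) + [sp.Rational(1, 2)]
    # weighted double sum from the second derivatives
    lhs = sum(alpha[i]*alpha[j]*c[abs(i - j)]
              for i in range(n + 1) for j in range(n + 1))
    # claimed closed form: -c_0/2 + c_n/2 + sum_{i,j=1}^n c_{|i-j|}
    rhs = (-sp.Rational(1, 2)*c[0] + sp.Rational(1, 2)*c[n]
           + sum(c[abs(i - j)] for i in range(1, n + 1) for j in range(1, n + 1)))
    assert sp.expand(lhs - rhs) == 0
```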
\subsection{Stochastic dominance} \label{section:monotonicity-dominance}
In this section, we prove $c_0>0$ and $c_n\ge 0$, which then imply $\theta'(g_0)>0$ by Section~\ref{section:monotonicity-reduction}.
In the following proposition, we write the inequality $c_n\ge 0$ in a quotient form (recall the definition of $c_n$ in \eqref{def:cn}). This allows the inequality to be interpreted as an inequality between the expectations of two random variables.
\begin{proposition} \label{prop:Qn-inequality}
Fix $g_0>0$ and $\nu_0 = \nu_c(g_0)$. Let $Q = Q(g_0, \nu_0)$. Let $h_0$ be the positive normalized leading eigenvector of $Q$, \ie, $h_0$ satisfies $Qh_0 = h_0$, $h_0>0$ and $\| h_0\|_2 = 1$. Then for any $n\ge 0$, we have \begin{equation}\label{eqn:Qn-inequality}
\frac{\Big\langle Q^n\big[t h_0(t)\big](s), \phi(s) h_0(s)\Big\rangle}{\displaystyle \int_0^\infty \phi(s) h_0^2(s) ds}
\ge
\frac{\Big\langle Q^n\big[t h_0(t)\big](s), s h_0(s)\Big\rangle}{\displaystyle \int_0^\infty s h_0^2(s) ds},
\end{equation}
and the inequality is strict for $n=0$.
\end{proposition}
To illustrate the method of the proof, we will first prove the case $n=0$ using (first-order) stochastic dominance. For real-valued random variables $X, Y$, we write $X\stle Y$ if $P(X > x) \le P(Y>x)$ for all $x\in \R$. If $X, Y$ have density functions $f_X, f_Y$ respectively, then one sufficient condition for $X\stle Y$ is that $f_Y/f_X$ is an increasing function. A consequence of $X\stle Y$ is $EX \le EY$.
\emph{Proof for the case $n=0$. }
If $n=0$, then the goal (\ref{eqn:Qn-inequality}) is \[
\frac{ \int_0^\infty s\phi(s) h_0^2(s) ds}{ \int_0^\infty \phi(s) h_0^2(s) ds}
\ge
\frac{ \int_0^\infty s\cdot s h_0^2(s) ds} { \int_0^\infty s h_0^2(s) ds}. \]
Define random variables $X, Y\ge 0$ by the density functions \begin{align}
f_X(s) = \frac{s h_0^2(s)}{\int_0^\infty s h_0^2(s) ds}, \qquad
f_Y(s) = \frac{\phi(s) h_0^2(s)}{\int_0^\infty \phi(s) h_0^2(s) ds}. \end{align}
We want to prove $EX<EY$.
Notice $f_Y(s)/f_X(s) = c\phi(s)/s$ for some positive constant $c$, so it is increasing by assumption~\eqref{A2}. Thus, $X\stle Y$ and $EX \le EY$. The strict inequality follows from $X$ and $Y$ having different distributions, which is a consequence of $h_0(s)>0$ for all $s\ge0$. $\qedsymbol$
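The $n=0$ argument can be illustrated numerically. Below, $e^{-s}$ is a hypothetical stand-in for $h_0^2$ and $\phi(s)=s^2$ a hypothetical potential with $\phi(s)/s$ increasing; $X$ and $Y$ are then Gamma$(2,1)$ and Gamma$(3,1)$ variables, and the increasing likelihood ratio yields CDF dominance and ordered means:

```python
import numpy as np

s = np.linspace(1e-4, 60.0, 200_000)
ds = s[1] - s[0]
w = np.exp(-s)                         # hypothetical stand-in for h_0(s)^2 > 0
fX = s*w;    fX /= fX.sum()*ds         # density prop. to s h_0^2   (Gamma(2,1))
fY = s**2*w; fY /= fY.sum()*ds         # density prop. to phi(s) h_0^2, phi(s)=s^2 (Gamma(3,1))
FX, FY = np.cumsum(fX)*ds, np.cumsum(fY)*ds
EX, EY = (s*fX).sum()*ds, (s*fY).sum()*ds
assert np.all(FY <= FX + 1e-9)         # X <=_st Y: CDF of Y lies below
assert EX < EY                         # hence E X < E Y (approximately 2 < 3)
```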
To prove the general case, we need a result on multivariate stochastic order from \cite[Theorem 6.B.3]{Shaked-Shanthikumar2007}. For random vectors $\bf X, \bf Y \in \R^{n+1}$, we say $\bf X \stle \bf Y$ if $P(\bf X\in U) \le P( \bf Y \in U )$ for every increasing set $U\subset \R^{n+1}$.
\begin{theorem} \label{thm:multi}
Let $\bf X = (X_0, \dots, X_n)$ and $\bf Y = (Y_0, \dots, Y_n)$ be $\R^{n+1}$-valued random variables. If $X_0 \stle Y_0$ and the conditional distributions satisfy \begin{equation}\label{conditional_dominance}
[X_i\mid X_0 = x_0, \dots, X_{i-1} = x_{i-1} ]
\stle [Y_i\mid Y_0 = y_0, \dots, Y_{i-1} = y_{i-1} ] \end{equation}
whenever $(x_0, \dots, x_{i-1}) \le (y_0, \dots, y_{i-1})$ for all $i= 1, \dots, n$,
then $\bf X \stle \bf Y$.
As a result, $X_n\stle Y_n$ and $EX_n \le EY_n$.
\end{theorem}
\emph{Proof of Proposition~\ref{prop:Qn-inequality} for the case $n>0$. }
Recall $Q$ was defined by
$Qf(t) = \int_0^\infty f(s) k_0(t,s) ds$ and we write $k(t,s) = k_0(t,s)$.
With $s_n$ replacing $t$ and $s_0$ replacing $s$, the numerator of the left-hand side of the goal \eqref{eqn:Qn-inequality} is \begin{align}
&\Big\langle Q^n\big[t \cdot h_0(t)\big](s), \phi(s) h_0(s)\Big\rangle \nonumber\\
=& \int_{(0, \infty)^{n+1}} s_n\cdot h_0(s_n) k(s_n, s_{n-1}) \cdots k(s_1, s_0) \cdot \phi(s_0) h_0(s_0) d\bf s.
\end{align}
Define random variable $\bf Y = (Y_0, \dots, Y_n) \in \R_+^{n+1}$ by the density function \begin{align}
f_{\bf Y}(s_0, \dots, s_n) = \frac{1}{Z_Y} h_0(s_n) k(s_n, s_{n-1}) \cdots k(s_1, s_0) \cdot \phi(s_0) h_0(s_0),
\end{align} where $Z_Y$ is the normalizing constant. Notice \begin{align}
Z_Y &= \int_{(0, \infty)^{n+1}} h_0(s_n) k(s_n, s_{n-1}) \cdots k(s_1, s_0) \cdot \phi(s_0) h_0(s_0) \, d\bf s \nonumber \\
&= \langle Q^n h_0, \phi(s_0) h_0 \rangle\nonumber \\
&= \langle h_0, \phi(s_0) h_0 \rangle\nonumber \\
&= \int_0^\infty \phi(s_0) h_0^2(s_0)ds_0,
\end{align} so $EY_n$ is precisely the left-hand side of the goal (\ref{eqn:Qn-inequality}).
Similarly, define random variable $\bf X = (X_0, \dots, X_n ) \in \R_+^{n+1}$ by the density function \begin{align}
f_{\bf X}(s_0, \dots, s_n) = \frac{1}{Z_X} h_0(s_n) k(s_n, s_{n-1}) \cdots k(s_1, s_0) \cdot s_0 h_0(s_0),
\end{align} where $Z_X$ is the normalizing constant, then $EX_n$ is precisely the right-hand side of the goal~(\ref{eqn:Qn-inequality}). Therefore, we can conclude the proposition using Theorem~\ref{thm:multi}, once we verify the hypotheses.
To see $X_0 \stle Y_0$, notice the density functions for $X_0$ and $Y_0$ are \begin{align}
f_{X_0}(s_0) = \frac{1}{Z_X} s_0 h_0^2(s_0), \qquad
f_{Y_0}(s_0) = \frac{1}{Z_Y} \phi(s_0) h_0^2(s_0), \end{align}
because $Qh_0=h_0$. This is exactly the $n=0$ case.
To show the dominance of conditional distributions (\ref{conditional_dominance}), we use conditional density functions. The conditional density of $X_i$ given $ X_0 = x_0, \dots, X_{i-1} = x_{i-1} $ is \begin{align}
f_i(s_i\mid x_0, \dots, x_{i-1}) = \frac{1}{Z_1}h_0(s_i) k(s_i, x_{i-1}) \cdots k(x_1, x_0)\cdot x_0 h_0(x_0),
\end{align}
and the conditional density function of $Y_i$ given $Y_0 = y_0, \dots, Y_{i-1} = y_{i-1}$ is \begin{align}
g_i(s_i\mid y_0, \dots, y_{i-1}) = \frac{1}{Z_2}h_0(s_i) k(s_i, y_{i-1}) \cdots k(y_1, y_0)\cdot \phi(y_0) h_0(y_0).
\end{align}
By definition of $k(\cdot, \cdot) = k_0(\cdot,\cdot)$ in \eqref{def:k_j}, we have
\begin{align}
\frac{g_i(s_i\mid y_0, \dots, y_{i-1})}{f_i(s_i\mid x_0, \dots, x_{i-1})}
&= C_1 \frac{k(s_i, y_{i-1})}{k(s_i, x_{i-1})} \nonumber \\
&= C_1 \frac{\sqrt{p(y_{i-1})}\sqrt{p(s_i)} e^{-y_{i-1}}e^{-s_i} I_0(2\sqrt{s_i y_{i-1}})}
{\sqrt{p(x_{i-1})}\sqrt{p(s_i)} e^{-x_{i-1}}e^{-s_i} I_0(2\sqrt{s_i x_{i-1}})} \nonumber \\
&= C_2 \frac{I_0(2\sqrt{s_i y_{i-1}})}{I_0(2\sqrt{s_i x_{i-1}})},
\end{align} where $C_1, C_2>0$ are constants not depending on $s_i$.
To prove the required stochastic dominance, we will show this ratio is increasing in $s_i$. To simplify the notation, we will prove \begin{align}
F(t) = \frac{I_0(\lambda_2 t)}{I_0(\lambda_1 t)} \end{align} to be increasing in $t\ge0$ if $0\le\lambda_1\le \lambda_2$, then we can apply this with $t = \sqrt{s_i}$, $\lambda_1 = 2\sqrt{x_{i-1}}$, and $\lambda_2 = 2\sqrt{y_{i-1}}$. Notice \begin{align}
(\log F(t))' = \lambda_2 \frac{I_0'(\lambda_2 t)}{I_0(\lambda_2 t)} - \lambda_1 \frac{I_0'(\lambda_1 t)}{I_0(\lambda_1 t)}. \end{align}
Since $I_0' = I_1$ and the ratio ${I_1}/{I_0}$ is non-negative and increasing\footnote{The monotonicity of the ratio $I_1/I_0$ was proved in, \eg, \cite[Theorem 1, 1(b)]{segura2021monotonicity}.}, the condition $\lambda_2\ge \lambda_1\ge 0$ gives $(\log F(t))' \ge 0$ for $t\ge0$. Since $F>0$, we get $F'\ge 0$ as well. This verifies all hypotheses of Theorem~\ref{thm:multi} and finishes the proof.
$\qedsymbol$
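The monotonicity of $F(t) = I_0(\lambda_2 t)/I_0(\lambda_1 t)$ is easy to confirm numerically (the values of $\lambda_1, \lambda_2$ below are arbitrary); the exponentially scaled Bessel function avoids overflow at large arguments:

```python
import numpy as np
from scipy.special import ive

lam1, lam2 = 0.7, 1.9                  # arbitrary 0 <= lam1 <= lam2
t = np.linspace(0.0, 30.0, 10_000)
# I0(a)/I0(b) = ive(0,a)/ive(0,b) * e^{a-b}, with ive(0,x) = I0(x) e^{-x}
F = ive(0, lam2*t)/ive(0, lam1*t)*np.exp((lam2 - lam1)*t)
assert abs(F[0] - 1.0) < 1e-12         # F(0) = I0(0)/I0(0) = 1
assert np.all(np.diff(F) >= -1e-9)     # F is nondecreasing on the grid
```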
\section*{Acknowledgements}
This work was supported in part by NSERC of Canada.
We thank Gordon Slade for discussions and very helpful advice.
We thank Roland Bauerschmidt for introducing the problem and the transfer matrix approach to us.
We thank Frank den Hollander for encouragement and comments.
\begin{appendices}
\section{Supersymmetric representation of random walk}
\label{appendix:supersymmetry}
What we call the supersymmetric representation of random walk is an integral representation for certain functionals of local times of continuous-time random walk. It is also known as the supersymmetric version of the BFS--Dynkin isomorphism theorem \cite[Corollary 11.3.7]{BBS2019}.
The integral involved is an integration of differential forms.
Further background of the isomorphism theorem can be found in \cite[Chapter 11]{BBS2019}.
\subsection{Differential forms, bosons, and fermions}
\subsubsection{Integration of differential forms}
Let $\Lambda = \{ 1, \dots, \abs{ \Lambda }\}$ be a finite set. This set will be the state space of the random walk. We consider 2-component real fields over $\Lambda$. For each $x\in \Lambda$, let $(u_x, v_x)$ be real coordinates. The 1-forms $\{ du_x, dv_x \}_{x\in \Lambda}$ generate the Grassmann algebra of differential forms on $\R^{2\Lambda}$, with multiplication given by the anti-commuting wedge product.
We write $u = (u_x)_{x\in \Lambda}, v = (v_x)_{x\in \Lambda}$. For $p\in \N_0$, a $p$-\emph{form} is a function of $(u, v)$ multiplied by a product of $p$ differentials. A \emph{form} $K$ is a sum of $p$-forms with possibly different values of $p$. The largest such $p$ is called the \emph{degree} of $K$. For a fixed $p$, the contribution of $p$-forms to $K$ is called the \emph{degree-}$p$ part of $K$. A form is called \emph{even} if it is the sum of $p$-forms with even $p$ only. Due to anti-commutativity, we can always write the degree-$2\lvert \Lambda \rvert$ part of a form $K$ as \begin{align}
f(u,v) du_1\wedge dv_1 \wedge \cdots \wedge du_{\abs{ \Lambda }} \wedge dv_{\abs{ \Lambda }}.
\end{align}
The integral of $K$ is then defined to be \begin{align}
\int_{\R^{2\lvert \Lambda \rvert}} K = \int_{\R^{2\lvert \Lambda \rvert}} f(u,v) du_1 dv_1 \cdots du_{\abs{ \Lambda }} dv_{\abs{ \Lambda }},
\end{align} where the right-hand side is the Lebesgue integral.
Notice that if the degree of $K$ is less than $2\lvert \Lambda \rvert$, then its degree-$2\lvert \Lambda \rvert$ part is zero, so its integral is zero.
\subsubsection{Bosons and Fermions}
It is convenient to use complex coordinates. For each $x\in \Lambda$, we define \begin{align}
\phi_x &= u_x + i v_x, & \bar\phi_x &= u_x - i v_x, \nonumber\\
d\phi_x &= du_x + i dv_x, & d\bar\phi_x &= du_x - i dv_x.
\end{align}
We call $(\phi, \bar \phi) = (\phi_x, \bar \phi_x)_{x\in \Lambda}$ the \emph{boson field}.
We also define (with any fixed choice of the square root) \begin{align}
\psi_x = \frac{1}{\sqrt{2\pi i}} d\phi_x, \qquad
\bar \psi_x = \frac{1}{\sqrt{2\pi i}} d\bar \phi_x,
\end{align}
and call $(\psi, \bar \psi) = (\psi_x, \bar \psi_x)_{x\in \Lambda}$ the \emph{fermion field}. Then \begin{align}
\bar \psi_x \wedge \psi_x
= \frac 1 {2\pi i} d\bar \phi_x \wedge d\phi_x
= \frac 1 \pi du_x \wedge dv_x.
\end{align}
The combination $\Phi = (\phi_x, \bar \phi_x, \psi_x, \bar \psi_x)_{x\in \Lambda}$ is called a \emph{superfield}.
We now drop the wedge symbol in the wedge product.
One important collection of even forms is \begin{align}
\Phi^2 = (\Phi_x^2)_{x\in \Lambda} = ( \phi_x \bar \phi_x + \psi_x\bar \psi_x )_{x\in \Lambda}. \end{align}
For a complex $\abs{ \Lambda }\times \abs{ \Lambda }$ matrix $\Delta$, we define \begin{align}
( \Phi, -\Delta \Phi) = \sum_{x\in \Lambda} \( \phi_x (-\Delta \bar \phi )_x + \psi_x (-\Delta \bar \psi )_x \).
\end{align}
\subsubsection{Function of forms}
For $p\in \N$, consider a $C^\infty$ function $F:\R^p \to \R$. Let $K = (K_1, \dots, K_p)$ be a collection of even forms. Assume the degree-0 part $K_j^0$ of each $K_j$ is real. The form $F(K)$ is defined using the Taylor series about the degree-0 part $K^0$, as follows, \begin{align}
F(K) = \sum_\alpha \frac {1} {\alpha!} F^{(\alpha)}(K^0) (K-K^0)^\alpha,
\end{align}
where the sum is over all multi-indices $\alpha = (\alpha_1, \dots, \alpha_p)$, and $\alpha! = \prod_{j=1}^p \alpha_j!$, $(K-K^0)^\alpha = \prod_{j=1}^p (K_j - K_j^0)^{\alpha_j}$ (the order of the product does not matter because all $K_j$ are even). The sum is always finite because $(K_j - K_j^0)^{\alpha_j} = 0$ for all $\alpha_j > 2\lvert \Lambda \rvert$, due to anti-commutativity.
The key example is the following. Let $p=1$, $F:\R\to \R$ smooth, $x\in \Lambda$. Then \begin{align} \label{eqn:A9}
F(\Phi_x^2) = F( \phi_x \bar \phi_x + \psi_x\bar \psi_x)
= F( \phi_x \bar \phi_x) + F'( \phi_x \bar \phi_x) \psi_x\bar \psi_x.
\end{align}
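As a quick illustration (not part of the original text), the truncation in this Taylor definition mirrors evaluating a smooth function at a matrix $aI+N$ with $N$ nilpotent, $N^2=0$. A minimal numerical sketch with $F=\exp$, where the value $a=0.7$ and the choice of $N$ are arbitrary:

```python
import numpy as np
from scipy.linalg import expm

# Model the even nilpotent form psi_x * psibar_x by a 2x2 nilpotent matrix N (N @ N == 0).
a = 0.7                      # degree-0 part K^0 (arbitrary sample value)
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # nilpotent: N @ N == 0
K = a * np.eye(2) + N

# F = exp: the Taylor definition predicts F(K) = F(a) I + F'(a) N = e^a (I + N).
lhs = expm(K)
rhs = np.exp(a) * (np.eye(2) + N)
print(np.allclose(lhs, rhs))  # True
```

Since $N^2=0$, every Taylor term beyond first order vanishes, which is exactly the mechanism behind \eqref{eqn:A9}.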
\subsection{Isomorphism theorem and supersymmetry}
Let $\{ X(t) \}_{t\ge 0}$ be a continuous-time random walk on a finite set $\Lambda$ with generator $\Delta$.
Denote its expectation by $E_i$ if $X(0)=i$.
The local time of $X$ at $x\in \Lambda$ up to time $T$ is defined by \begin{equation}
L_{T,x} = \int_0^T \1_{X(s)=x}\ ds. \end{equation}
We write $L_T = (L_{T,x})_{x\in \Lambda}$.
The supersymmetric BFS--Dynkin isomorphism theorem \cite[Corollary 11.3.7]{BBS2019} relates local times of $X(t)$ to the boson and fermion fields, as follows.
\begin{theorem}[BFS--Dynkin isomorphism theorem] \label{theorem:isomorphism}
Let $F:\R^{\abs{ \Lambda }} \to \R$ be such that $e^{\eps \sum_{x\in \Lambda} t_x} F(t)$ is a Schwartz function for some $\eps>0$. Then \begin{align}
\int_0^\infty E_i (F(L_T) \1_{X(T)=j })\ dT
= \int_{\R^{2\lvert \Lambda \rvert}}\bar \phi_i \phi_j e^{-(\Phi, -\Delta \Phi)} F(\Phi^2), \label{eqn:isomorphism}
\end{align}
where $\{ X(t) \}_{t\ge 0}$ is a continuous-time random walk on $\Lambda$ with generator $\Delta$.
\end{theorem}
We will use this theorem on the finite-volume two-point function $G_{ij}^N$ defined in \eqref{def:Gij^N}. Choosing the nearest-neighbor Laplacian $\Delta$ on the right-hand side allows us to use the transfer matrix approach.
There is a symmetry between bosons and fermions called \emph{supersymmetry}. The next theorem is a demonstration of this. Notice the form $\Phi^2_x = \phi_x \bar \phi_x + \psi_x \bar \psi_x$ is unchanged if we interchange $(\phi_x, \bar \phi_x)$ with $(\psi_x, \bar \psi_x)$, so the integrands of the two sides of \eqref{eqn:A.12} are related by an interchange of bosons and fermions.
For general results and discussions on supersymmetry, we refer to \cite[Section 11.4]{BBS2019}.
\begin{theorem} \label{theorem:supersymmetry}
Let $x\in \Lambda$ and $F: [0, \infty) \to \R$ be smooth. If $\lim_{t\to\infty} tF(t) = 0$ and the integrals exist, then \begin{align} \label{eqn:A.12}
\int_{\R^2} \bar \phi_x \phi_x F(\Phi_x^2)
=\int_{\R^2} \bar \psi_x \psi_x F(\Phi_x^2) .
\end{align}
\end{theorem}
\emph{Proof. }
Since fermions anti-commute, \begin{align}
\bar \phi_x \phi_x - \bar \psi_x \psi_x
= \phi_x \bar \phi_x + \psi_x \bar \psi_x
= \Phi_x^2.
\end{align}
Thus, it is sufficient to prove $\int_{\R^2 } \Phi_x^2 F(\Phi_x^2) =0$. By definition of the integral and by \eqref{eqn:A9}, \begin{align}
\int_{\R^2 } \Phi_x^2 F(\Phi_x^2)
&= \int_{\R^2 } ( \phi_x \bar \phi_x + \psi_x \bar \psi_x ) ( F( \phi_x \bar \phi_x) + F'( \phi_x \bar \phi_x) \psi_x\bar \psi_x ) \nonumber\\
&= \int_{\R^2 } [ F( \phi_x \bar \phi_x) + \phi_x \bar \phi_x F'( \phi_x \bar \phi_x) ] \psi_x\bar \psi_x \nonumber \\
&= \int_{\R^2 } [ F( u^2 + v^2) + (u^2 + v^2) F'( u^2 + v^2) ] \frac{-1}{\pi} dudv \nonumber \\
&= - \int_0^\infty [ F( r^2) + r^2 F'( r^2) ] dr^2 \nonumber \\
&= - \int_0^\infty \frac{d}{dt}\big( tF(t) \big) dt = 0,
\end{align}
which is the desired result. $\qedsymbol$
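The final integration-by-parts step, $\int_0^\infty [F(t)+tF'(t)]\,dt = [tF(t)]_0^\infty = 0$, can be checked numerically for a sample decaying function; here $F(t)=e^{-t}$ is an arbitrary choice satisfying $tF(t)\to 0$:

```python
import numpy as np
from scipy.integrate import quad

# Check \int_0^\infty [F(t) + t F'(t)] dt = [t F(t)]_0^infty = 0
# for the sample function F(t) = exp(-t), for which t F(t) -> 0.
F  = lambda t: np.exp(-t)
Fp = lambda t: -np.exp(-t)
val, err = quad(lambda t: F(t) + t * Fp(t), 0, np.inf)
print(abs(val) < 1e-8)  # True
```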
\subsection{Proof of Proposition~\ref{prop:Gij^N}} \label{appendix:Gij^N}
We first prove Proposition~\ref{prop:Gij^N} for $\nu\in\R$ using the supersymmetric representation. We need the following lemma, which is a corollary of Proposition 2.5 and Lemma 2.6 in \cite{BS2020}.
Since we need to deal with two superfields at the same time, we use the notation $D\Phi$ to signify that the integration is with respect to the superfield $\Phi$. We also recall the operators $T$ and $Q$ defined in~\eqref{def:T} and \eqref{def:Q}.
Notice $f(t) = \sqrt{p(t)} = e^{-\half g\phi(t) - \half \nu t}$ satisfies the hypotheses of the lemma, because $\phi(t) \ge 0$ and $\phi(0)=0$ by assumption~\eqref{A0}.
\begin{lemma} \label{lemma:outside_integral}
Let $\nu\in \R$. Fix a superfield $Z = (\zeta, \bar \zeta, \xi, \bar \xi)$ where $\xi = \frac 1 {\sqrt{2\pi i}} d\zeta$ and $\bar \xi = \frac 1 {\sqrt{2\pi i}} d\bar \zeta$.
Let $f:[0, \infty)\to [0, \infty)$ be a smooth function such that $\sqrt{p} \cdot f$ is bounded. Then:
(i) if $f(0)=1$, then $Tf(0)=1$, and the following holds if the integrals exist \begin{equation} \label{eqn:lemma2-1}
\sqrt{p(Z^2)}\int_{\R^2} D\Phi\ e^{-(Z-\Phi)^2}\sqrt{p(\Phi^2)} f(\Phi^2)
= Tf(Z^2); \end{equation}
(ii) if $f>0$ pointwise, then $Qf>0$ pointwise, and the following holds if the integrals exist \begin{equation}\label{eqn:lemma2-2}
\sqrt{p(Z^2)}\int_{\R^2} D\Phi\ \bar\phi e^{-(Z-\Phi)^2}\sqrt{p(\Phi^2)} f(\Phi^2)
= \bar \zeta Qf(Z^2). \end{equation}
\end{lemma}
\emph{Proof. }
(i) By definition of $T$ in~\eqref{def:T} and by the Taylor expansion of the integral kernel~\eqref{eqn:2.6}, we have \begin{align}
Tf(0) = \sqrt{p(0)} \cdot 1 + \int_0^\infty f(s) \cdot 0\ ds = 1.
\end{align}
By \cite[Proposition 2.5]{BS2020}, \begin{align} \label{eqn:A18}
\int_{\R^2} D\Phi\ e^{-(Z-\Phi)^2}\sqrt{p(\Phi^2)} f(\Phi^2) = e^{-V(Z^2)}, \end{align}
where \begin{align}\label{eqn:A19}
e^{-V(t)} &= e^{-t} \( \sqrt{p(0)} f(0) + v(t) \), \\
v(t) &= \int_0^\infty \sqrt{p(s)} f(s) e^{-s} I_1(2\sqrt{st})\sqrt{\tfrac{t}{s}}\ ds.
\end{align}
We multiply equation~\eqref{eqn:A19} by $\sqrt{p(t)}$ and use $\sqrt{ p(0)}=f(0)=1$, then \begin{align}
\sqrt{p(t)}e^{-V(t)}
&= \sqrt{p(t)} e^{-t}+
\int_0^\infty f(s) \sqrt{p(t)}\sqrt{p(s)} e^{-t} e^{-s} I_1(2\sqrt{st})\sqrt{\tfrac{t}{s}}\ ds
\nonumber \\
&= Tf(t). \end{align}
Substituting $t$ by $Z^2$ gives the desired (\ref{eqn:lemma2-1}).
(ii) Recall $Qf(t) = \int_0^\infty f(s) k_0(t,s) ds$ and the kernel $k_0(t,s)$ defined in \eqref{def:k_j} is strictly positive for all $t$. Since $f>0$ pointwise, we get $Qf(t)>0$ pointwise too.
By \cite[Lemma 2.6]{BS2020}, \begin{align} \label{eqn:A.7}
\int_{\R^2} D\Phi\ \bar\phi e^{-(Z-\Phi)^2}\sqrt{p(\Phi^2)} f(\Phi^2)
= \bar \zeta (1- V'(Z^2)) e^{-V(Z^2)},
\end{align} where $V$ is the same as in (\ref{eqn:A19}).
Notice $V(t) = t- \log(f(0) + v(t))$, so differentiating gives $1-V'(t) = \frac{v'(t)}{f(0)+v(t)}$. Hence, using equation~\eqref{eqn:A19} and $\sqrt{p(0)}=1$,
\begin{align}
(1-V'(t))e^{-V(t)} = e^{-t} v'(t)
&= e^{-t} \int_0^\infty \sqrt{p(s)} f(s) e^{-s} \frac{\del}{\del t} \( I_1(2\sqrt{st})\sqrt{\tfrac{t}{s}} \) ds \nonumber \\
&= e^{-t} \int_0^\infty \sqrt{p(s)} f(s) e^{-s} I_0(2\sqrt{st})\ ds, \label{eqn:A.8}
\end{align}
where the last equality is from the Taylor series of the modified Bessel functions \begin{align}
I_0(2\sqrt{st}) = \sum_{m=0}^\infty \frac{s^m t^m}{m! m!}, \qquad
I_1(2\sqrt{st})\sqrt{\tfrac t s} = \sum_{m=0}^\infty \frac{s^m t^{m+1}}{m!( m+1)!}.
\end{align}
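The term-by-term differentiation of the Bessel series used in the last equality, $\frac{\del}{\del t}\big(I_1(2\sqrt{st})\sqrt{t/s}\big) = I_0(2\sqrt{st})$, can be verified numerically at sample points (the values $s=1.3$, $t=0.8$ are arbitrary):

```python
import numpy as np
from scipy.special import iv  # modified Bessel function I_nu

s, t, h = 1.3, 0.8, 1e-6     # arbitrary sample points and finite-difference step
g = lambda t: iv(1, 2*np.sqrt(s*t)) * np.sqrt(t/s)
deriv = (g(t + h) - g(t - h)) / (2*h)      # central difference in t
print(np.isclose(deriv, iv(0, 2*np.sqrt(s*t)), atol=1e-6))  # True
```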
Multiplying equation \eqref{eqn:A.8} by $\sqrt{p(t)}$, we get \begin{align}
\sqrt{p(t)} (1-V'(t))e^{-V(t)}
&= \int_0^\infty f(s) \sqrt{p(t)}\sqrt{p(s)} e^{-t} e^{-s} I_0(2\sqrt{st})\ ds
\nonumber \\
&= Qf(t). \end{align}
Substituting $t$ by $Z^2$, then plugging into equation \eqref{eqn:A.7} multiplied by $\sqrt{p(Z^2)}$ yields the desired (\ref{eqn:lemma2-2}).
$\qedsymbol$
\emph{Proof of Proposition~\ref{prop:Gij^N}.}
We first prove for $\nu\in \R$. We have \begin{align}
(\Phi, -\Del_N\Phi) &= \sum_{x=-N}^N \( \phi_x(-\Del_N\bar\phi)_x + \psi_x(-\Del_N\bar\psi)_x \)
= \sum_{x=-N}^{N-1} (\Phi_{x+1} - \Phi_x)^2.
\end{align}
Using assumption~\eqref{A3} and Theorem~\ref{theorem:isomorphism}, the two-point function \eqref{def:Gij^N} can be expressed as \begin{align}
G_{ij}^N(g, \nu)
&= \int_0^\infty E_i^N \( \prod_{x=-N}^{N} p(L_{T,x}) \1_{X(T)=j} \) dT \nonumber \\
&= \int_{\R^{2(2N+1)}} \bar\phi_i \phi_j e^{-(\Phi, -\Del_N\Phi)} \prod_{x=-N}^{N} p(\Phi_x^2) \nonumber\\
&= \int_{\R^{2(2N+1)}} \bar\phi_i \phi_j
\prod_{x=-N}^{N-1} e^{-(\Phi_{x+1} - \Phi_x)^2} \prod_{x=-N}^N p(\Phi_x^2) . \label{eqn:A27}
\end{align}
We use Lemma~\ref{lemma:outside_integral} to calculate this integral iteratively. First, we decompose $p(\Phi_x^2) = \sqrt{ p(\Phi_x^2)} \sqrt{ p(\Phi_x^2)}$ for all $x$.
Now, starting from $-N$: if $-N < i$, we take one of the $\sqrt{ p(\Phi_{-N+1}^2)}$ factors together with all factors involving $\Phi_{-N}$, and carry out the $\Phi_{-N}$ integral. This matches Lemma~\ref{lemma:outside_integral}(i) with $Z = \Phi_{-N+1}$, $\Phi = \Phi_{-N}$, and $f = \sqrt{p}$. We get \begin{align}
\sqrt{p(\Phi_{-N+1}^2)}\int_{\R^2} D\Phi_{-N}\ e^{ - (\Phi_{-N+1}-\Phi_{-N})^2 } &\sqrt{p(\Phi_{-N}^2)} \sqrt{p(\Phi_{-N}^2)} \nonumber\\
&= T[\sqrt p] (\Phi_{-N+1}^2).
\end{align}
This process can be continued until we reach $i$. If $j<N$, we can do the same to integrate out $\Phi_N, \dots, \Phi_{j+1}$. We get \begin{align}
G_{ij}^N(g, \nu) &=
\int_{\R^{2(j-i+1)}} \bar\phi_i \phi_j
\cdot T^{N+i}[\sqrt p] (\Phi_{i}^2) \cdot T^{N-j}[\sqrt p] (\Phi_{j}^2) \nonumber\\
&\qquad\cdot
\Bigg[ \prod_{x=i}^{j-1} e^{-(\Phi_{x+1} - \Phi_x)^2} \Bigg]
\sqrt{p(\Phi_i^2)}
\Bigg[ \prod_{x=i+1}^{j-1} p(\Phi_x^2) \Bigg] \sqrt{p(\Phi_j^2)} .
\end{align}
Then we integrate from $i$ up to $j-1$ using Lemma~\ref{lemma:outside_integral}(ii). The only difference is that there is an extra boson $\bar \phi_i$ that gets carried along. We get \begin{align}
G_{ij}^N(g, \nu)
&=
\int_{\R^2} \bar \phi_j \phi_j\cdot Q^{j-i}T^{N+i}[\sqrt p] (\Phi_{j}^2)\cdot T^{N-j}[\sqrt p] (\Phi_{j}^2).
\end{align}
By Theorem~\ref{theorem:supersymmetry} and the exponential decay of $\sqrt{p}$, \begin{align}
G_{ij}^N(g, \nu)
&=
\int_{\R^2} \bar \psi_j \psi_j \cdot Q^{j-i}T^{N+i}[\sqrt p] (\Phi_{j}^2)\cdot T^{N-j}[\sqrt p] (\Phi_{j}^2) \nonumber \\
&=
\int_{\R^2} Q^{j-i}T^{N+i}[\sqrt p] (u^2 + v^2)\cdot T^{N-j}[\sqrt p] (u^2+v^2)
\frac 1 \pi du dv\nonumber \\
&=
\int_0^\infty Q^{j-i}T^{N+i}[\sqrt p] (t)\cdot T^{N-j}[\sqrt p] (t) dt \nonumber \\
&=
\left \langle Q^{j-i}T^{N+i}[\sqrt p] , \overline{ \Big( T^{N-j}[\sqrt p] \Big)}\right \rangle. \label{eqn:25}
\end{align}
For complex $\nu$, observe that both sides of (\ref{eqn:25}) are defined and holomorphic for any $\nu\in\C$. The result follows by the uniqueness of analytic continuation. $\qedsymbol$
\section{Proof of Lemma~\ref{lemma:Hn}} \label{appendix:Hn}
The proof of this lemma is by a direct calculation.
We first calculate the derivatives of $\lambda(g, \nu) = \norm{Q(g, \nu)}$, by viewing $Q(g, \nu)$ as a perturbation of $Q(g_0, \nu_0)$. This calculation is similar to that of the Rayleigh--Schr\"odinger series.
Then we calculate the derivatives of the function $H_n$ defined in \eqref{def:Hn}, just using differentiation rules.
When the derivatives are evaluated at $(g_0, \nu_0)$, the formulas will simplify because of $Q(g_0, \nu_0)h_0 = \lambda(g_0, \nu_0) h_0$, and the claimed convergences will become apparent.
Since $\lambda(g, \nu) = \norm{Q(g, \nu)}$ is an isolated simple eigenvalue (Lemma~\ref{lemma:Q}), there exists $\delta>0$ such that $\lambda_0 = \lambda(g_0, \nu_0)$ is at distance at least $2\delta$ from the rest of the spectrum of $Q(g_0, \nu_0)$. For this part, we do not need to use $\lambda_0=1$. For $(g, \nu)$ near $(g_0, \nu_0)$, \begin{align}
P(g, \nu) = -\frac 1 {2\pi i} \oint_{\abs{\lambda_0 - \zeta}=\delta} (Q(g, \nu) - \zeta)\inv d\zeta
\end{align}
is the projection operator to the eigenspace $E_{\lambda(g, \nu)}$ of $Q(g, \nu)$.
Hence, we have $QPh_0 = \lambda Ph_0$, and \begin{align}\label{eqn:B2}
\lambda= \frac{ \langle QPh_0, h_0 \rangle} {\langle Ph_0, h_0 \rangle}. \end{align}
This is the equation we differentiate. Notice that the dependence on $g$ and $\nu$ is only through the operators $Q$ and $P$.
We use subscripts to denote partial derivatives. The $\nu$-derivative of \eqref{eqn:B2} is \begin{align} \label{eqn:B3}
\lambda_\nu =
\frac{ \langle Q_\nu Ph_0, h_0 \rangle + \langle QP_\nu h_0, h_0 \rangle}{\langle Ph_0, h_0 \rangle}
-\frac{ \langle QPh_0, h_0 \rangle \langle P_\nu h_0, h_0 \rangle}{\langle Ph_0, h_0 \rangle^2}.
\end{align}
When evaluated at $(g_0, \nu_0)$, we have $Ph_0=h_0$ and $Qh_0 =\lambda_0 h_0$. Also recall $\norm{h_0}_2 = 1$ and that $Q$ is self-adjoint. Using these, we have \begin{align}
\lambda_\nu \rvert_{g_0, \nu_0}
= \langle Q_\nu h_0, h_0 \rangle + \langle P_\nu h_0, Q h_0 \rangle
- \lambda_0 \langle P_\nu h_0, h_0 \rangle
= \langle Q_\nu h_0, h_0 \rangle.
\end{align}
Similarly, $\lambda_g \rvert_{g_0, \nu_0} = \langle Q_g h_0, h_0 \rangle$.
For the second derivatives, we let $*=g, \nu$, differentiate \eqref{eqn:B3}, and then evaluate at $(g_0, \nu_0)$. This gives \begin{align} \label{eqn:B5}
\lambda_{\nu *} \rvert_{g_0, \nu_0} =
&\ \langle Q_{\nu *} h_0, h_0 \rangle
+ \langle Q_\nu P_* h_0, h_0 \rangle + \langle Q_* P_\nu h_0, h_0 \rangle \nonumber \\
&- \langle Q_\nu h_0, h_0 \rangle \langle P_* h_0, h_0 \rangle - \langle Q_* h_0, h_0 \rangle \langle P_\nu h_0, h_0 \rangle.
\end{align}
We claim $\langle P_* h_0, h_0 \rangle = 0$. This is because \begin{align} \label{eqn:B6}
\langle P_* h_0, h_0 \rangle
&= \frac{1}{2\pi i} \oint_{\abs{\lambda_0 - \zeta}=\delta} \langle (Q-\zeta)\inv Q_* (Q-\zeta)\inv h_0, h_0 \rangle d\zeta \nonumber\\
&= \frac{1}{2\pi i} \oint_{\abs{\lambda_0 - \zeta}=\delta}
\frac 1 {\lambda_0 - \zeta }
\langle Q_* h_0, (Q-\bar \zeta)\inv h_0 \rangle d\zeta \nonumber\\
&= \frac{1}{2\pi i} \langle Q_* h_0, h_0 \rangle \oint_{\abs{\lambda_0 - \zeta}=\delta}
\frac 1 { ( \lambda_0 - \zeta )^2 }
d\zeta = 0.
\end{align}
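The vanishing of this contour integral of $(\lambda_0-\zeta)^{-2}$ can be confirmed by direct numerical quadrature over the circle $\lvert\lambda_0-\zeta\rvert=\delta$ (the values $\lambda_0=1$, $\delta=0.5$ are arbitrary):

```python
import numpy as np

lam0, delta, n = 1.0, 0.5, 2000          # circle |lam0 - zeta| = delta, n quadrature nodes
theta = np.linspace(0, 2*np.pi, n, endpoint=False)
zeta  = lam0 + delta * np.exp(1j*theta)              # points on the contour
dzeta = 1j * delta * np.exp(1j*theta) * (2*np.pi/n)  # d(zeta) for the trapezoid rule
integral = np.sum(dzeta / (lam0 - zeta)**2) / (2*np.pi*1j)
print(abs(integral) < 1e-10)  # True
```

The trapezoid rule is spectrally accurate on periodic integrands, so the cancellation is exact up to rounding.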
Next, we calculate $\langle Q_\nu P_* h_0, h_0 \rangle$ in equation~\eqref{eqn:B5}. Since $Q$ is self-adjoint, by the spectral theorem, there exists a real orthonormal eigenbasis $\{(\mu_j, \psi_j)\}_j$ of $Q(g_0, \nu_0)$. Using these, we decompose \begin{align}
Q_*h_0 = \sum_j \langle Q_* h_0, \psi_j \rangle \psi_j,
\end{align}
so \begin{align}
\langle Q_\nu P_* h_0, h_0 \rangle
&= \frac{1}{2\pi i} \oint_{\abs{\lambda_0 - \zeta}=\delta} \langle Q_\nu (Q-\zeta)\inv Q_* (Q-\zeta)\inv h_0, h_0 \rangle d\zeta \nonumber\\
&= \frac{1}{2\pi i} \oint_{\abs{\lambda_0 - \zeta}=\delta} \frac{1}{\lambda_0 - \zeta}
\langle (Q-\zeta)\inv Q_* h_0, Q_\nu h_0 \rangle d\zeta \nonumber\\
&= \frac{1}{2\pi i} \oint_{\abs{\lambda_0 - \zeta}=\delta} \frac{1}{\lambda_0 - \zeta}
\sum_j \frac{ \langle Q_* h_0, \psi_j \rangle \langle Q_\nu h_0, \psi_j \rangle }{ \mu_j - \zeta} d\zeta \nonumber\\
&= - \sum_{\mu_j \ne \lambda_0} \frac{ \langle Q_* h_0, \psi_j \rangle \langle Q_\nu h_0, \psi_j \rangle }{ \mu_j - \lambda_0}. \label{eqn:B8}
\end{align}
In the last equality, the $\mu_j = \lambda_0$ term vanishes for the same reason as in~\eqref{eqn:B6}. To write this more compactly, we define $P^\perp = I - P$ where $I$ is the identity operator, then equation~\eqref{eqn:B8} can be written as \begin{align}
\langle Q_\nu P_* h_0, h_0 \rangle
= \langle (\lambda_0 - Q)\inv P^\perp Q_* h_0, P^\perp Q_\nu h_0 \rangle.
\end{align}
By the symmetry between $*$ and $\nu$ in equation~\eqref{eqn:B8}, $\langle Q_* P_\nu h_0, h_0 \rangle$ equals the same expression.
Putting these together, equation~\eqref{eqn:B5} simplifies to \begin{align}
\lambda_{\nu *} \rvert_{g_0, \nu_0} =
&\ \langle Q_{\nu *} h_0, h_0 \rangle
+2 \langle (\lambda_0 - Q)\inv P^\perp Q_* h_0, P^\perp Q_\nu h_0 \rangle. \label{eqn:B10}
\end{align}
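Formula \eqref{eqn:B10} can be sanity-checked on a finite-dimensional analogue: for a random symmetric matrix family $Q(g,\nu)=Q_0+gA+\nu B+g\nu C$ (an arbitrary choice, not the operator of the paper), the mixed derivative of the top eigenvalue computed by finite differences should match the perturbation formula:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
sym = lambda M: (M + M.T) / 2
Q0, A, B, C = (sym(rng.standard_normal((n, n))) for _ in range(4))
Q0 = Q0 + 5 * np.diag(np.arange(n))          # spread the spectrum -> simple top eigenvalue

Q   = lambda g, nu: Q0 + g*A + nu*B + g*nu*C # smooth family; Q_g = A + nu*C, Q_nu = B + g*C
lam = lambda g, nu: np.linalg.eigvalsh(Q(g, nu))[-1]   # top eigenvalue

g0 = nu0 = 0.0
lam0 = lam(g0, nu0)
w, V = np.linalg.eigh(Q(g0, nu0))
h0 = V[:, -1]                                # normalized top eigenvector

Qg, Qnu, Qnug = A, B, C                      # partial derivatives at (g0, nu0)
Pperp = np.eye(n) - np.outer(h0, h0)
R = np.linalg.pinv(lam0*np.eye(n) - Q(g0, nu0))   # (lam0 - Q)^{-1} on the range of Pperp
formula = h0 @ Qnug @ h0 + 2 * (Pperp @ Qnu @ h0) @ R @ (Pperp @ Qg @ h0)

h = 1e-4                                     # central finite difference for d^2 lam / dg dnu
fd = (lam(h, h) - lam(h, -h) - lam(-h, h) + lam(-h, -h)) / (4*h*h)
print(np.isclose(formula, fd, atol=1e-5))  # True
```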
Next, we turn to the derivatives of $H_n(g, \nu) = \langle Q^n(g, \nu)h_0, h_0 \rangle^{1/n}$. A direct calculation gives \begin{align}
\del_\nu H_{n}
=&\ \frac{1}{n} \langle Q^n h_0, h_0 \rangle^{1/n-1}
\langle (Q^n)_\nu h_0, h_0 \rangle, \\
\del_g\del_\nu H_{n}
=&\ \frac{1}{n} \(\frac{1}{n} - 1\) \langle Q^n h_0, h_0 \rangle^{1/n-2}
\langle (Q^n)_g h_0, h_0 \rangle
\langle (Q^n)_\nu h_0, h_0 \rangle \nonumber\\
&+ \frac{1}{n} \langle Q^n h_0, h_0 \rangle^{1/n-1}
\langle (Q^n)_{\nu g} h_0, h_0 \rangle.
\end{align}
When evaluated at $(g_0, \nu_0)$, we have $Q(g_0, \nu_0)h_0 = \lambda_0h_0$ with $\lambda_0 =1$. This gives \begin{align}
\del_\nu H_{n} (g_0, \nu_0)=&\ \frac{1}{n} \langle (Q^n)_\nu h_0, h_0 \rangle
= \langle Q_\nu h_0, h_0 \rangle = \lambda_\nu, \\
\del_g\del_\nu H_{n} (g_0, \nu_0)
=&\ \frac{1}{n} \(\frac{1}{n} - 1\) \langle (Q^n)_g h_0, h_0 \rangle
\langle (Q^n)_\nu h_0, h_0 \rangle + \frac{1}{n}
\langle (Q^n)_{\nu g} h_0, h_0 \rangle \nonumber \\
=&\ (1-n) \langle Q_g h_0, h_0 \rangle
\langle Q_\nu h_0, h_0 \rangle
+ \langle Q_{\nu g} h_0, h_0 \rangle \nonumber\\
&+\frac{2}{n} \sum_{1 \le i< j \le n} \langle Q^{i-1}Q_\nu Q^{j-i-1} Q_g Q^{n-j} h_0, h_0 \rangle \nonumber \\
=&\ (1-n) \langle Q_g h_0, h_0 \rangle
\langle Q_\nu h_0, h_0 \rangle
+ \langle Q_{\nu g} h_0, h_0 \rangle \nonumber\\
&+\frac{2}{n} \sum_{1 \le i< j \le n} \langle Q^{j-i-1} Q_g h_0,Q_\nu h_0 \rangle.
\end{align}
For this last sum, we decompose $Q_g h_0 = PQ_g h_0 + P^\perp Q_g h_0$ and similarly for $Q_\nu h_0$. The parts that are in the eigenspace $E_1$ sum to cancel with $(1-n) \langle Q_g h_0, h_0 \rangle \langle Q_\nu h_0, h_0 \rangle $ exactly, leaving \begin{align}
\del_g\del_\nu H_{n} (g_0, \nu_0) =
\langle Q_{\nu g} h_0, h_0 \rangle
+\frac{2}{n} \sum_{1 \le i< j \le n} \langle Q^{j-i-1} P^\perp Q_g h_0, P^\perp Q_\nu h_0 \rangle.
\end{align}
Summing diagonally and letting $n\to\infty$, we get \begin{align}
\del_g\del_\nu H_{n} (g_0, \nu_0) &\to
\langle Q_{\nu g} h_0, h_0 \rangle + 2 \langle (1 - Q)\inv P^\perp Q_g h_0, P^\perp Q_\nu h_0 \rangle \nonumber\\
&= \lambda_{\nu g}\rvert_{g_0, \nu_0}
\end{align}
by the first calculation \eqref{eqn:B10}.
The calculation for $\lambda_{\nu\nu}\rvert_{g_0, \nu_0} $ is analogous. $\qedsymbol$
\end{appendices}
\bibliographystyle{plain}
\section{Introduction}
The quad-curl equation appears in various applications, such as the inverse electromagnetic scattering theory \cite{Cakoni2010IP, Cakoni2017A, Sun2016A} or
magnetohydrodynamics \cite{Zheng2011A}. The corresponding quad-curl eigenvalue problem plays a fundamental role in the analysis and computation of the
electromagnetic interior transmission eigenvalues \cite{Monk2012Finite, sun2011iterative}.
To compute eigenvalues, one usually starts with the corresponding source problem \cite{babuvska1991eigenvalue, boffi2010, Sun2016Finite}.
Some methods have been proposed for the source problem, i.e., the quad-curl problem,
in \cite{WZZelement, Zheng2011A,Sun2016A,Qingguo2012A,Brenner2017Hodge,quadcurlWG, Zhang2018M2NA,Chen2018Analysis164, Zhang2018Regular162,SunZ2018Multigrid102,WangC2019Anew101,BrennerSC2019Multigrid100}.
Recently, a family of $H(\text{curl}^2)$-conforming finite elements using incomplete $k$-th order polynomials is proposed in \cite{WZZelement} for the qual-curl problem.
In this paper, we construct a new family of elements by using the complete $k$-th order polynomials.
Due to the large kernel space of the curl operator, the Helmholtz decomposition of splitting an arbitrary vector field into the irrotational and solenoidal components
plays an important role in the analysis. However, in general, the irrotational component is not $H^2$-regular when $\Omega$ is non-convex. Therefore, we propose a new decomposition for $H_0(\text{curl}^2;\Omega)$, which further splits the irrotational component into a function in $H^2(\Omega)$ and a function in the kernel space of curl operator.
There exist a few results on the numerical methods for the quad-curl eigenvalue problem.
The problem was first proposed in \cite{Sun2016A} by Sun, who applied a mixed finite element method and proved an a priori error estimate.
Two multigrid methods based on the Rayleigh quotient iteration and the inverse iteration with fixed shift were proposed and analyzed in \cite{han2020shifted}.
In the first part of the paper, we apply the classical framework of Babu\v{s}ka and Osborn \cite{babuvska1991eigenvalue,osborn1975spectral} to
prove an a priori estimate.
At reentrant corners or material interfaces, the eigenvectors feature strong singularities \cite{Nicaise2018Singularities161}.
For more efficient computation, adaptive local refinements are considered.
A posteriori error estimators are essential for adaptive finite element methods.
We refer to \cite{cochez2007robust, beck2000residual,monk1998posteriori,schoberl2008posteriori} for the a posteriori estimates of source problems
and \cite{dai2008convergence,boffi2019posteriori,boffi2017residual} for eigenvalue problems.
In terms of the quad-curl eigenvalue problem, to the authors' knowledge, no work on a posteriori error estimations has been done so far.
To this end, we start by relating the eigenvalue problem to a source problem. An a posteriori error estimator for the source problem is constructed.
The proof uses the new decomposition and makes no additional regularity assumption.
Then we apply the idea of \cite{dai2008convergence} to obtain an a posteriori error estimate for the eigenvalue problem.
The rest of this paper is organized as follows. In Section 2, we present some notations, the new elements, the new decomposition,
and an $H(\text{curl}^2)$ Cl\'ement interpolation. In Section~3, we derive an a priori error estimate for the quad-curl eigenvalue problem.
In Section 4, we prove an a posteriori error estimate. Finally, in Section 5, we show some numerical experiments.
\section{Notations and basis tools}
\subsection{Notations}Let $\Omega\subset\mathbb{R}^2$ be a simply-connected Lipschitz domain.
For any subdomain $D\subset\Omega$, $L^2(D)$ denotes the space of square integrable functions on $D$ with norm $\|\cdot\|_D$.
If $s$ is a positive integer,
$H^s(D)$ denotes the space of scalar functions in $L^2(D)$ whose derivatives up to order $s$ are also in $L^2(D)$.
If $s=0$, $H^0(D)=L^2(D)$. When $D=\Omega$, we omit the subscript $\Omega$ in the notations of norms.
For vector functions, $\bm L^2(D) = (L^2(D))^2$ and $\bm H^s(D) = (H^s(D))^2$.
Let ${\bm u}=(u_1, u_2)^t$ and ${\bm w}=(w_1, w_2)^t$, where the superscript $t$ denotes the transpose.
Then ${\bm u} \times {\bm w} = u_1 w_2 - u_2 w_1$ and $\nabla \times {\bm u} = \partial u_2/\partial x_1 - \partial u_1/\partial x_2$.
For a scalar function $v$, $\nabla \times v = (\partial v/\partial x_2, - \partial v/\partial x_1)^t$.
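As a sanity check of these 2D curl conventions (not part of the paper), symbolic differentiation confirms that the scalar curl of the vector curl of a smooth scalar $v$ equals $-\triangle v$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
v = sp.Function('v')(x1, x2)             # arbitrary smooth scalar field

# vector curl of a scalar: (dv/dx2, -dv/dx1)
curl_v = sp.Matrix([sp.diff(v, x2), -sp.diff(v, x1)])
# scalar curl of a vector u = (u1, u2): d u2/dx1 - d u1/dx2
scalar_curl = sp.diff(curl_v[1], x1) - sp.diff(curl_v[0], x2)

# scalar_curl + Laplacian(v) should simplify to 0
print(sp.simplify(scalar_curl + sp.diff(v, x1, 2) + sp.diff(v, x2, 2)) == 0)  # True
```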
We now define a space concerning the curl operator
\begin{align*}
H(\text{curl}^2;D)&:=\{\bm u \in {\bm L}^2(D):\; \nabla \times \bm u \in L^2(D),\;(\nabla \times)^2 \bm u \in \bm L^2(D)\},
\end{align*}
whose norm is given by
\[
\left\|\bm u\right\|_{H(\text{curl}^2;D)}=\sqrt{(\bm u,\bm u)+(\nabla\times\bm u,\nabla\times\bm u)+((\nabla\times)^2\bm u,(\nabla\times)^2\bm u)}.
\]
The spaces $H_0(\text{curl}^2;D)$, $H_0^1(D)$, and $H(\text{div}^0;D)$ are defined, respectively, as
\begin{align*}
&H_0(\text{curl}^2;D):=\{\bm u \in H(\text{curl}^2;D):\;{\bm n}\times\bm u=0\; \text{and}\; \nabla\times \bm u=0\;\; \text{on}\ \partial D\},\\
&H_0^1(D):=\{u \in H^1(D):u=0\;\; \text{on}\ \partial D\},\\
&H(\text{div}^0;D) :=\{\bm u\in {\bm L}^2(D):\; \nabla\cdot \bm u=0\}.
\end{align*}
Let $\mathcal{T}_h$ be a triangular partition of $\Omega$.
Denote by $\mathcal{N}_h$ and $\mathcal{E}_h$ the sets of vertices and edges. Let ${\bm \tau}_e$ be the tangent vector of an edge $e \in \mathcal{E}_h$.
We refer to $\mathcal{N}_h^{\text{int}}$ and $\mathcal{E}_h^{\text{int}}$ as the sets of vertices and edges in the interior of $\Omega$, respectively.
Let $\mathcal{N}_h(T)$ and $\mathcal{E}_h(T)$ be the sets of vertices and edges on the element $T$. Denote by $h_T$ the diameter of
$T \in \mathcal{T}_h$ and $\displaystyle h = \max_{T\in \mathcal {T}_h}h_T$. In the following,
we introduce some subdomains called patches:
\begin{itemize}
\item $\omega_T$: the union of elements sharing a common edge with $T$, $T\in\mathcal{T}_h$;
\item $\omega_e$: the union of elements sharing $e$ as an edge, $e\in\mathcal{E}_h$;
\item $\omega_v$: the union of elements sharing $v$ as a vertex, $v\in \mathcal{N}_h$.
\end{itemize}
We use $P_k$ to represent the space of polynomials on an edge or on a subdomain $D\subset\Omega$ with degrees at most $k$ and $\bm P_k(D)=\left(P_k(D)\right)^2$.
\subsection{A decomposition of $H_0(\text{curl}^2;\Omega)$ }
We mimic the proof of \cite[Prop. 5.1]{dhia1999singular} to obtain a decomposition of the space $H_0(\text{curl}^2;\Omega)$, which plays a critical role in the analysis.
\begin{lemma}\label{Helm}
Let $\nabla H_0^1(\Omega)$ be the set of gradients of functions in $H_0^1(\Omega)$.
Then $\nabla H_0^1(\Omega)$ is a closed subspace of $H_0(\mathrm{curl}^2;\Omega)$ and
\begin{align}\label{decom-00}
H_0(\mathrm{curl}^2;\Omega)=X\oplus \nabla H_0^1(\Omega),
\end{align}
where $X=\left\{\bm u\in H_0(\text{curl}^2;\Omega)\big|(\bm u,\nabla p)=0,\;\;\forall p\in H_0^1(\Omega)\right\}.$
Namely, for $\bm u\in H_0(\mathrm{curl}^2;\Omega)$,
$\bm u=\bm u^0+\bm u^{\perp}$
with $\bm u^0\in \nabla H_0^1(\Omega)$ and $\bm u^{\perp}\in X.$
Furthermore, $\bm u^{\perp}$ admits the splitting
\begin{align}\label{decom-01}
\bm u^{\perp}=\nabla \phi+\bm v,
\end{align}
where $\phi\in H_0^1(\Omega)$ and $\bm v \in \bm H^2(\Omega)$ satisfy
\begin{align}
&\|\bm v\|_2\leq C\|\nabla\times\bm u^{\perp}\|_1,\label{decom-02}\\
&\|\nabla\phi\|\leq C\left(\|\nabla\times\bm u^{\perp}\|_1+\|\bm u^{\perp}\|\right).\label{decom-03}
\end{align}
\end{lemma}
\begin{proof}
The proof of \eqref{decom-00} can be found in \cite{WZZelement}. We only need to prove \eqref{decom-01}.
Let $\mathcal{O}$ be a bounded, smooth, simply-connected open set with $\bar{\Omega}\subset\mathcal{O}$.
For any $\bm u^{\perp}\in X$, we can extend $\bm u^{\perp}$ in the following way:
\begin{align*}
\bm {\tilde{u}}&=\begin{cases}
\bm u^{\perp},&\Omega,\\
0,&\mathcal{O}\setminus\bar{\Omega}.\\
\end{cases}
\end{align*}
Obviously, $\bm {\tilde{u}}\in H_0(\text{curl}^2;\mathcal{O})$ and $\nabla \times \bm {\tilde{u}}\in H_0^1(\mathcal{O})$.
Now, we consider the following problem:
Find $\psi$ defined in $\mathcal{O}$ such that
\begin{align}\label{lap-01}
-\triangle \psi&=\nabla \times \bm {\tilde{u}},\ \text{in}\ \mathcal{O},\\
\psi&=0,\ \text{on}\ \partial \mathcal{O}.\label{lap-02}
\end{align}
Since $\nabla \times \bm {\tilde{u}}\in H_0^1(\mathcal{O})$ and $\mathcal O$ has a smooth boundary, there exists a function $\psi\in H^3(\mathcal{O})$ satisfying \eqref{lap-01} and \eqref{lap-02} and
\begin{align}\label{reg1}
\left\|\psi\right\|_{3,\mathcal O}\leq C\|\nabla \times \bm {\tilde{u}}\|_{1,\mathcal O}.
\end{align}
In addition, \eqref{lap-01} can be rewritten as
$
\nabla\times(\nabla\times \psi-\bm {\tilde{u}})=0.
$
Based on \cite[Thm. 2.9]{Girault2012Finite}, there exists a unique function $p$ in $H^1(\mathcal{O})/\mathbb{R}$ such that
\begin{align}\label{proof-lem}
-\nabla\times \psi+\bm {\tilde{u}}=\nabla p.
\end{align}
Now, we restrict \eqref{proof-lem} to the domain $\mathcal{O}\setminus\bar{\Omega}$ and obtain
\begin{align}
\nabla p=-\nabla\times \psi\in \bm H^2(\mathcal{O}\setminus\bar{\Omega}).
\end{align}
Using the extension theorem \cite{chen2016sobolev}, we can extend $p\in H^3(\mathcal{O}\setminus\bar{\Omega})$ to $\tilde{p}$ defined on $\mathcal O$ satisfying
\begin{align}\label{reg2}
\left\|\tilde{p}\right\|_{3,\mathcal O}\leq C\left\|{p}\right\|_{3,\mathcal O\setminus \bar{\Omega}}\leq C\left\|{\nabla p}\right\|_{2,\mathcal O\setminus \bar{\Omega}}\leq C\left\|{\nabla \times \psi}\right\|_{2,\mathcal O\setminus \bar{\Omega}},
\end{align}
where we have used the Poincar\'e--Friedrichs inequality for $p\in H^3(\mathcal O\setminus\bar{\Omega})$, since we can choose $p$ with $\int_{\mathcal O\setminus\bar{\Omega}}p=0.$
Restricting to $\Omega$, we have
\begin{align*}
\bm {{u}^{\perp}}=\underbrace{\nabla\times\psi+\nabla \tilde{p}}_{\in H^2(\Omega)}+\nabla\underbrace{(p-\tilde{p})}_{\in H^1(\Omega)}
\triangleq\bm v+\nabla \phi.
\end{align*}
Note that $\phi=p-\tilde p\in H_0^1(\Omega)$ since $\tilde{p}$ coincides with $p$ on $\mathcal{O}\setminus\bar{\Omega}$.
Therefore, \eqref{decom-01} is proved.
Combining \eqref{reg1} and \eqref{reg2}, we obtain
\[\|\bm v\|_{2,\Omega}=\|\nabla\times \psi+\nabla \tilde{p}\|_{2,\Omega}\leq\|\nabla\times \psi+\nabla \tilde{p}\|_{2,\mathcal{O}}\leq C\|\nabla\times\psi\|_{2,\mathcal{O}}\leq C\|\nabla\times\bm {\tilde u}\|_{1,\mathcal{O}}=C\|\nabla\times\bm { u}^{\perp}\|_{1,\Omega}\]
and
\[\|\nabla \phi\|_{\Omega}=\|\bm { u}^{\perp}-\bm v\|_{\Omega}\leq\|\bm { u}^{\perp}\|_{\Omega}+\left\|\bm v\right\|_{\Omega}\leq \|\bm { u}^{\perp}\|_{\Omega}+\left\|\bm v\right\|_{2,\Omega}\leq C\left(\|\bm { u}^{\perp}\|_{\Omega}+\|\nabla\times\bm u^{\perp}\|_{1,\Omega}\right).\]
\end{proof}
\subsection{A new family of $H(\text{curl}^2)$-conforming elements}
In this subsection, we propose a new family of $H(\text{curl}^2)$-conforming finite elements.
The new elements can lead to one order higher accuracy than the elements in \cite{WZZelement} when the solution $\bm u$ is smooth enough.
\begin{definition}\label{tri-dof-def}
For an integer $k \geq 4$, an ${H}(\mathrm{curl}^2)$-conforming element is given by the triple:
\begin{equation*}
\begin{split}
&{T}\;\text{is a triangle},\\
&P_{T} = \bm P_k(T),\\
&\Sigma_{{T}} = \bm M_{{p}}( {\bm u }) \cup \bm M_{{e}}({\bm u}) \cup \bm M_{T}({\bm u}),
\end{split}
\end{equation*}
where $\Sigma_{{T}}$ is the set of DOFs (degrees of freedom) defined as follows.
\begin{itemize}
\item $\bm M_{{p}}({\bm u})$ is the set of DOFs on all vertex nodes and edge nodes ${p}_{i}$:
\begin{equation}\label{2def2}
\bm M_{{p}}( {\bm u})=\left\{{\nabla}\times {\bm u}({p_{i}}), i=1,\;2,\;\cdots\;,3k\right\}
\end{equation}
with the points $p_{i}$ chosen at $3$ vertex nodes and $(k-1)$ distinct nodes on each edge.
\item $\bm M_{{e}}({\bm u})$ is the set of DOFs given on all edges ${e}_i$ of ${T}$ with the unit tangential vector ${\bm \tau}_{e_i}$:
\begin{equation}\label{2def3}
\bm M_{{e}}( {\bm u})= \left\{\int_{{e}_i} {\bm u}\cdot {\bm \tau}_{e_i} {q}\mathrm d {s},\ \forall {q}\in P_{k}({e}_i),\;i=1,2,3 \right\}.
\end{equation}
\item $\bm M_{T}( {\bm u})$ is the set of DOFs on the element ${T}$:
\begin{align}\label{2def4}
\bm M_{{T}}({\bm u})=\left\{\int_{T} {\bm u}\cdot {\bm q} \mathrm d V,\ \forall \bm q \in \mathcal{D}\right\},
\end{align}
where $\mathcal{D}=\bm P_{k-5}( T)\oplus\widetilde{P}_{k-5}{\bm x}\oplus\widetilde{P}_{k-4}{\bm x}\oplus
\widetilde{P}_{k-3}{\bm x}\oplus
\widetilde{P}_{k-2}{\bm x}$ when $k\geq 5$ and $\mathcal{D}=\widetilde{P}_{0}{\bm x}\oplus
\widetilde{P}_{1}{\bm x}\oplus
\widetilde{P}_{2}{\bm x}$ when $k=4$. Here $\widetilde{P}_k$ is the space of a homogeneous polynomial of degree $k$.
\end{itemize}
\end{definition}
\begin{lemma}\label{unisolvence}
The above finite elements are unisolvent and $H(\mathrm{curl}^2)$-conforming.
\end{lemma}
By Lemma~\ref{unisolvence}, we can define the global finite element space $V_h$ on $\mathcal{T}_h$ as \begin{align*}
&V_h=\{\bm{v}_h\in H(\text{curl}^2;\Omega):\ \bm v_h|_T\in \bm {P}_k(T)\ \forall T\in\mathcal{T}_h\}.
\end{align*}
Provided $\bm u \in \bm H^{1/2+\delta}(\Omega)$ and $ \nabla \times \bm u \in H^{1+\delta}(\Omega)$ with $\delta >0$,
we define an $H(\text{curl}^2;\Omega)$ interpolation $\Pi_h\bm u\in V_h$, whose restriction to $T$, denoted by $\Pi_T\bm u$, is such that
\begin{eqnarray}\label{def-inte-tri}
\bm M_p(\bm u-\Pi_T\bm u)=0,\ \bm M_e(\bm u-\Pi_T\bm u)=0,\ \text{and}\ \bm M_T(\bm u-\Pi_T\bm u)=0,
\end{eqnarray}
where $\bm M_p,\ \bm M_e$, and $\bm M_T$ are the sets of DOFs in \eqref{2def2}-\eqref{2def4}.
\begin{theorem}\label{err-interp}
If $\bm u\in \bm H^{s+1}(\Omega)$, $1+\delta\leq s\leq k$ with $\delta>0$, then the following error estimate for the interpolation $\Pi_h$ holds:
\begin{align*}
&\left\|\bm u-\Pi_h\bm u\right\|_T+h_T\left\|\nabla\times(\bm u-\Pi_h\bm u)\right\|_T+h_T^2\left\|(\nabla\times)^2(\bm u-\Pi_h\bm u)\right\|_T\leq C{h^{s+1}}\left\|\bm u\right\|_{s+1,T}.
\end{align*}
\end{theorem}
Since the proofs of Lemma~\ref{unisolvence} and Theorem~\ref{err-interp} are similar to those in \cite{WZZelement}, we omit them.
\subsection{An $H$(curl$^2$)-type Cl\'ement interpolation}
Let $\omega_{v}$ be a patch on a vertex $v$ and $R^k_{v}\phi$ be the $L^2$ projection of $\phi$ on $\omega_{v}$, i.e., $R_{v}^k\phi\in P_k(\omega_{v})$ such that
\begin{align*}
\int_{\omega_{v}}\left(\phi-R_{v}^k\phi\right)p\text{d} V=0, \quad \forall p\in P_k(\omega_{v}).
\end{align*}
Similarly, we can define an $L^2$ projection $R_e^k$ on a patch $\omega_e$ for an edge $e$.
For $\bm u\in H_0(\text{curl}^2;\Omega)$, the lowest-order $H(\text{curl}^2;\Omega)$ interpolation $\Pi_h\bm u$ can be rewritten as
\begin{align*}
\Pi_h\bm u=\sum_{v\in \mathcal{N}_h^{\text{int}}}\alpha_v(\bm u)\bm \phi_v+\sum_{e\in \mathcal{E}_h^{\text{int}}}\sum_i\big(\alpha_{e}^{\text{curl},i}(\bm u)\bm \phi_{e}^{\text{curl},i}+\alpha_{e}^i(\bm u)\bm \phi_{e}^i\big)+\sum_{T\in\mathcal{T}_h}\sum_{i}\alpha_{T}^i(\bm u)\bm \phi_{T}^i,
\end{align*}
where
\begin{align*}
\alpha_v(\bm u)&=\nabla\times \bm u(v)\text{ for any vertex }v,\\
\alpha_{e}^{\text{curl},i}(\bm u)&=\nabla\times \bm u(v_{e,i})\text{ for any node $v_{e,i}$ on an edge }e,\\
\alpha_{e}^i(\bm u)&=\int_{e} {\bm u}\cdot{\bm \tau}_e {q_i}\text{d}{s}\ \text{for any } {q_i}\in P_{4}({e}),\\
\alpha_{T}^i(\bm u)&=\int_{{T}} {\bm u}\cdot{\bm q}_i\text{d} V \text{ for any }{\bm q}_i\in \mathcal{D},
\end{align*}
and the functions $\bm\phi_v$, $\bm\phi_{e}^{\text{curl},i}$, $\bm\phi_e^i$, and $\bm\phi_T^i$ are the corresponding Lagrange basis functions.
Now we define a new $H(\text{curl}^2)$ Cl\'ement interpolation $\Pi_{C}$ for $\bm u\in \{\bm u\in\bm H^{1/2+\delta}(\Omega)|\ \nabla\times\bm u \in H^{1}(\Omega)\}$:
\begin{align*}
\Pi_{C}\bm u=\sum_{v\in \mathcal{N}_h^{\text{int}}}\tilde{\alpha}_v(\bm u)\bm \phi_v+\sum_{e\in \mathcal{E}_h^{\text{int}}}\sum_i\big(\tilde{\alpha}_{e}^{\text{curl},i}(\bm u)\bm \phi_{e}^{\text{curl},i}+\alpha_{e}^i(\bm u)\bm \phi_{e}^i\big)+\sum_{T\in\mathcal{T}_h}\sum_{i}\alpha_{T}^i(\bm u)\bm \phi_{T}^i,
\end{align*}
where $\tilde{\alpha}_v(\bm u)=R^4_{v}(\nabla\times \bm u)(v)$ and $\tilde{\alpha}_{e}^{\text{curl},i}(\bm u)= R^4_{e}(\nabla\times \bm u)(v_{e,i})$.
The interpolation is well-defined and the following error estimate holds.
\begin{theorem}\label{clmt-error}
For any $T\in \mathcal T_h$, let $\omega=\bigcup\{\omega_{v_i}: T\subset \omega_{v_i}\}$. Then, for $\bm u \in \bm H^2(\Omega)$, it holds that
\begin{align}
&\|\bm u-\Pi_{C} \bm u\|_T+h_T\|\nabla(\bm u-\Pi_{C} \bm u)\|_T+h_T^2\|\nabla\left(\nabla\times(\bm u-\Pi_{C} \bm u)\right)\|_T\le C h^{2}\|\bm u\|_{2,\omega}.\label{err-05}
\end{align}
\end{theorem}
The theorem can be obtained using arguments similar to those for Theorem \ref{err-interp} together with the boundedness of the operators $R_v^4$ and $R_e^4$.
\section{An a priori error estimate for the eigenvalue problem}
Following \cite{Sun2016A}, the quad-curl eigenvalue problem is to seek $\lambda$ and $\bm u$ such that
\begin{equation}\label{eig-prob1}
\begin{split}
(\nabla\times)^4\bm u&=\lambda\bm u\ \text{in}\;\Omega,\\
\nabla \cdot \bm u &= 0\ \text{in}\;\Omega,\\
\bm u\times\bm n&=0\;\text{on}\;\partial \Omega,\\
\nabla \times \bm u&=0\ \text{on}\;\partial \Omega,
\end{split}
\end{equation}
where $\bm n$ is the unit outward normal to $\partial \Omega$. The assumption that $\Omega$ is simply-connected implies $\lambda\neq 0$.
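For the reader's convenience, we sketch the standard argument behind this implication. If $\lambda=0$, then testing the first equation of \eqref{eig-prob1} with $\bm u$ and integrating by parts twice, using the boundary conditions $\bm u\times\bm n=0$ and $\nabla\times\bm u=0$ on $\partial\Omega$, gives
\begin{align*}
0=\left((\nabla\times)^4\bm u,\bm u\right)=\left\|(\nabla\times)^2\bm u\right\|^2.
\end{align*}
Hence $(\nabla\times)^2\bm u=\bm 0$, so the scalar function $\nabla\times\bm u$ is constant; since it vanishes on $\partial\Omega$, we get $\nabla\times\bm u\equiv 0$. Then $\bm u$ is curl-free and divergence-free with $\bm u\times\bm n=0$ on $\partial\Omega$, and the simple connectivity of $\Omega$ forces $\bm u=\bm 0$, so $\lambda=0$ admits no eigenfunction.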
The variational form of the quad-curl eigenvalue problem is to find $\lambda\in \mathbb{R}$ and $\bm u \in X$ such that
\begin{align}\label{eig-02}
((\nabla\times)^2\bm u,(\nabla\times)^2\bm v)=\lambda(\bm u,\bm v),\quad \forall \bm v \in X.
\end{align}
In addition to $V_h$ defined in Section 3, we need more discrete spaces. Define
\begin{eqnarray*}
&& V^0_h=\{\bm{v}_h\in V_h:\ \bm{n} \times \bm{v}_h=0\ \text{and}\ \nabla\times \bm{v}_h = 0 \ \text {on} \ \partial\Omega\},\\
&& S_h=\{{w}_h\in H^1(\Omega):\ w_h|_T\in P_k(T)\ \forall T\in\mathcal{T}_h\},\\
&& S^0_h=\{{w}_h\in S_h:\;{w}_h|_{\partial\Omega}=0\},\\
&& X_h=\{\bm u_h\in V^0_h \ | \ (\bm u_h,\nabla q_h)=0\ \ \text{for all}\ \ q_h\in S^0_h\}.
\end{eqnarray*}
The discrete problem for \eqref{eig-02} is to find $\lambda_h \in \mathbb R$ and $\bm u_h\in X_h$ such that
\begin{equation}\label{eig-dis-01}
\begin{split}
((\nabla\times)^2\bm u_h,(\nabla\times)^2\bm v_h) &=\lambda_h(\bm u_h, \bm v_h),\quad \forall \bm v_h\in X_h.
\end{split}
\end{equation}
\subsection{The source problem}
We start with the associated source problem.
Given $\bm f\in \bm L^2(\Omega)$, find $\bm u\in H_0(\text{curl}^2;\Omega)$ and $p\in H_0^1(\Omega)$ such that
\begin{equation}\label{prob1}
\begin{split}
(\nabla\times)^4\bm u+\bm u+\nabla p&=\bm f\ \text{in}\;\Omega,\\
\nabla \cdot \bm u &= 0\ \text{in}\;\Omega,\\
\bm u\times\bm n&=0\;\text{on}\;\partial \Omega,\\
\nabla \times \bm u&=0\ \text{on}\;\partial \Omega.
\end{split}
\end{equation}
Note that $p=0$ for $\bm f\in H(\text{div}^0;\Omega)$.
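Indeed, taking the divergence of the first equation in \eqref{prob1} and using $\nabla \cdot \bm u = 0$ together with $\nabla\cdot\big((\nabla\times)^4\bm u\big)=0$, we obtain
\begin{align*}
\Delta p=\nabla\cdot\bm f=0\ \text{in}\ \Omega,\qquad p=0\ \text{on}\ \partial \Omega,
\end{align*}
and hence $p=0$ by the uniqueness of the homogeneous Dirichlet problem.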
The weak formulation is to find $(\bm u;p)\in H_0(\text{curl}^2;\Omega)\times H_0^1(\Omega)$ such that
\begin{equation}\label{prob22}
\begin{split}
a(\bm u,\bm v) + b(\bm v, p)&=(\bm f, \bm v),\quad \forall \bm v\in H_0(\text{curl}^2;\Omega),\\
b(\bm u,q)&=0,\quad \hspace{0.7cm}\forall q\in H^1_0(\Omega),
\end{split}
\end{equation}
where
\begin{align*}
a(\bm u,\bm v)&=\left((\nabla\times)^2 \bm u, (\nabla\times)^2 \bm v\right)+(\bm u,\bm v),\\
b(\bm v,p)&=(\bm v,\nabla p ).
\end{align*}
The well-posedness of \eqref{prob22} is proved in Theorem 1.3.2 of \cite{Sun2016Finite}.
Consequently, we can define a solution operator $A: \bm L^2(\Omega)\rightarrow \bm L^2(\Omega)$ such that ${\bm u} = A{\bm f} \in X \subset \bm L^2(\Omega)$.
In fact, $A$ is compact due to the following result.
\begin{lemma}\label{continous-cmpct}
$X$ possesses the continuous compactness property.
\end{lemma}
\begin{proof}
Since $X\subset Y:=\left\{\bm u\in H_0(\text{curl};\Omega)\big|(\bm u,\nabla p)=0,\;\;\forall p\in H_0^1(\Omega)\right\}$ and $Y\hookrightarrow\hookrightarrow \bm L^2(\Omega)$ \cite{Monk2003}, we have $X\hookrightarrow\hookrightarrow \bm L^2(\Omega)$.
\end{proof}
The $H(\text{curl}^2)$-conforming FEM seeks $\bm u_h\in V^0_h$ and $p_h\in S_h^0$ such that
\begin{equation}\label{prob3}
\begin{split}
a(\bm u_h,\bm v_h)+b(\bm v_h,p_h) &=(\bm f, \bm v_h),\quad \forall \bm v_h\in V^0_h,\\
b(\bm u_h,q_h)&=0,\quad \hspace{0.9cm}\forall q_h\in S^0_h.
\end{split}
\end{equation}
The well-posedness of problem \eqref{prob3} follows from the discrete compactness of $\{X_h\}_{h\in \Lambda}$ with $\Lambda=\{h_n: n=0,1,2,\cdots\}$,
which is stated in the following theorem. Its proof is similar to that of Theorem 7.17 in \cite{Monk2003} and thus is omitted.
\begin{theorem}\label{discrete-compact}
$X_h$ possesses the discrete compactness property.
\end{theorem}
Consequently, we can define a discrete solution operator $A_h: \bm L^2(\Omega)\rightarrow \bm L^2(\Omega)$
such that $\bm u_h = A_h\bm f$ is the solution of \eqref{prob3}. It is straightforward to use the standard finite element framework and the approximation property of the interpolation to show the following theorem.
\begin{theorem}\label{conv-curlcurl}
Assume that $A\bm f\in \bm H^s(\Omega)$, $\nabla\times A\bm f\in H^s(\Omega)$ $(1+\delta\leq s\leq k\;\text{with}\;\delta>0)$, and $p\in H^s(\Omega)$. It holds that
\begin{equation*}
\begin{split}
&\|A\bm f-A_h\bm f\|_{H(\mathrm{curl}^2;\Omega)} \leq C h^{s-1}\left(\left\|A\bm f\right\|_s+\left\|\nabla\times A\bm f\right\|_s+\left\|p\right\|_s\right).
\end{split}
\end{equation*}
\end{theorem}
\subsection{An a priori error estimate of the eigenvalue problem}
We first rewrite the eigenvalue problem as follows. Find $\lambda\in \mathbb{R}$ and $(\bm u;p)\in H_0(\text{curl}^2;\Omega)\times H_0^1(\Omega)$ such that
\begin{equation}\label{eig-01}
\begin{split}
a(\bm u,\bm v) + b(\bm v,p)&=(\lambda+1)(\bm u, \bm v),\quad \forall \bm v\in H_0(\text{curl}^2;\Omega),\\
b(\bm u,q)&=0,\quad \hspace{0.7cm}\forall q\in H^1_0(\Omega).
\end{split}
\end{equation}
Due to the fact that $\nabla\cdot \bm u=0$, we have $p=0$.
Then \eqref{eig-01} can be written as an operator eigenvalue problem of finding $\mu:= 1/(\lambda+1) \in \mathbb{R}$ and $\bm u \in X$ such that
\begin{align}\label{eig-03}
A \bm u=\mu\bm u.
\end{align}
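To see this, note that $p=0$ reduces \eqref{eig-01} to $a(\bm u,\bm v)=(\lambda+1)(\bm u,\bm v)$ for all $\bm v\in H_0(\text{curl}^2;\Omega)$, which is the source problem \eqref{prob22} with right-hand side $\bm f=(\lambda+1)\bm u$. Consequently,
\begin{align*}
\bm u=A\big((\lambda+1)\bm u\big)=(\lambda+1)A\bm u,
\end{align*}
which is exactly \eqref{eig-03} with $\mu=1/(\lambda+1)$.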
The discrete eigenvalue problem is to find $\lambda_h\in \mathbb{R}$ and $(\bm u_h; p_h)\in V_h^0\times S_h^0$ such that
\begin{equation}\label{eig-dis-001}
\begin{split}
a(\bm u_h,\bm v_h) + b(\bm v_h,p_h)&=(\lambda_h+1)(\bm u_h, \bm v_h),\quad \forall \bm v_h\in V_h^0,\\
b(\bm u_h,q_h)&=0,\quad \hspace{0.7cm}\forall q_h\in S^0_h.
\end{split}
\end{equation}
Using the operator $A_h$, the eigenvalue problem is to find $\mu_h\in \mathbb{R}$ and $\bm u_h\in X_h$ such that
\begin{align}\label{eig-dis-02}
A_h \bm u_h=\mu_h\bm u_h,
\end{align}
where $\mu_h = 1/(\lambda_h+1)$.
Define a collection of operators,
\[\mathcal A=\{A_h:\bm L^2(\Omega)\rightarrow \bm L^2(\Omega), \ h\in \Lambda\}.\]
Due to Theorems \ref{discrete-compact} and \ref{conv-curlcurl},
(1) \(\mathcal{A}\) is collectively compact, and
(2) \(\mathcal{A}\) is point-wise convergent, i.e., for \(\bm f \in\bm L^{2}(\Omega)\),
$A_{h_{n}} \bm f \rightarrow A \bm f$ strongly in $\bm L^{2}(\Omega)$ as $n \rightarrow \infty$.
\begin{theorem}\label{eigenvalue-convergence}
Let $\mu$ be an eigenvalue of $A$ with multiplicity $m$ and $E(\mu)$ be the associated eigenspace. Let $\{\bm \phi_j\}_{j=1}^m$ be an orthonormal basis for $E(\mu)$.
Assume that $\bm\phi\in \bm H^{s+1}(\Omega)$ for all $\bm\phi\in E(\mu)$.
Then, for $h$ small enough, there exist exactly $m$ discrete eigenvalues $\mu_{j,h}$ and the associated eigenfunctions $\bm \phi_{j,h}, j=1,2,\cdots,m,$ of $A_h$ such that
\begin{align}
|\mu-\mu_{j,h}|&\leq C\max_{1 \leq i\leq m}a(\bm \phi_i-\bm \phi_{i,h},\bm \phi_i-\bm \phi_{i,h}), \ 1\leq j\leq m,\label{eig_vec}\\
|\mu-\mu_{j,h}|&=O(h^{2(s-1)}), \ 1\leq j\leq m.\label{eigconv-01}
\end{align}
\end{theorem}
\begin{proof}
Note that $A$ and $A_h$ are self-adjoint. We have that
\begin{align*}
((A-A_h)\bm \phi_i,\bm \phi_j)&=(\nabla\times\nabla\times(A-A_h)\bm\phi_i,\nabla\times\nabla\times A\bm\phi_j)\\
&=(\nabla\times\nabla\times(A-A_h)\bm\phi_i,\nabla\times\nabla\times (A-A_h)\bm\phi_j).
\end{align*}
Due to \cite[Thm 2.52]{Monk2003}, it holds that
\begin{align}\label{eigconv-02}
|\mu-\mu_{j,h}|\leq C\left\{\max_i\|\nabla\times\nabla\times(A-A_h)\bm\phi_i\|^2+\|(A-A_h)|_{E(\mu)}\|^2\right\}.
\end{align}
For any $\bm \phi \in E(\mu)$, we have
\[
\|(A-A_h)\bm \phi\|_{H(\text{curl}^2;\Omega)}\leq \inf_{\bm v_h\in X_h}\|A\bm \phi-\bm v_h\|_{H(\text{curl}^2;\Omega)}
\leq Ch^{s-1}\left\|A\bm \phi\right\|_{s+1}
\leq {C_{\mu}h^{s-1}}\left\|\bm \phi\right\|_{s+1}.
\]
In addition, we have that
\begin{align*}
\|(A-A_h)\bm \phi\|_{H(\text{curl}^2;\Omega)}&\leq \inf_{\bm v_h\in X_h}\|A\bm \phi-\bm v_h\|_{H(\text{curl}^2;\Omega)}=\mu\inf_{\bm v_h\in X_h}\|\bm \phi-({1}/{\mu})\bm v_h\|_{H(\text{curl}^2;\Omega)}\\
&\leq \mu\|\bm \phi-\bm \phi_h\|_{H(\text{curl}^2;\Omega)}\leq\mu \sqrt{a(\bm \phi-\bm \phi_h,\bm \phi-\bm \phi_h)}.
\end{align*}
Since $E(\mu)$ is finite dimensional, we obtain \eqref{eig_vec} and
\begin{align*}
\|(A-A_h)|_{E(\mu)}\|_{H(\text{curl}^2)}&\leq {C_{\mu}h^{s-1}},
\end{align*}
which proves \eqref{eigconv-01}.
\end{proof}
\section{A posteriori error estimates for the eigenvalue problem}
Assume that $(\lambda, \bm u)\in \mathbb R\times H_0(\text{curl}^2;\Omega)$ is a simple eigenpair of \eqref{eig-02} with
$\|\bm u\|_0=1$ and $(\lambda_h, \bm u_h)\in \mathbb R\times V_h^0$ is the associated finite element eigenpair of \eqref{eig-dis-01}
with $\|\bm u_h\|_0=1$. According to Theorem \ref{eigenvalue-convergence} and \cite[(3.28a)]{babuvska1989finite}, the following inequalities hold:
\begin{align}
&\|\bm u-\bm u_{h}\|\leq C\rho_\Omega(h){|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm u_{h}{|\hspace{-.02in}|\hspace{-.02in}|},\label{error-eig-01}\\
&|\lambda_{h}-\lambda|\leq C {|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm u_{h}{|\hspace{-.02in}|\hspace{-.02in}|}^2,\label{error-eig-02}
\end{align}
where ${|\hspace{-.02in}|\hspace{-.02in}|}\bm u{|\hspace{-.02in}|\hspace{-.02in}|}^2=a(\bm u,\bm u)$ and
\[
\rho_{\Omega}(h)= \sup_{\bm f\in \bm L^2(\Omega),\|\bm f\|=1} \inf _{\bm v \in V^{0}_{h}}\left\|A\bm f-\bm v\right\|_{H(\text{curl}^2;\Omega)}.
\]
It is obvious that $\rho_{\Omega}(h) \rightarrow 0$ as $h\rightarrow 0$.
Define two projection operators $R_h, Q_h$ as follows.
For $\bm u\in H_0(\text{curl}^2;\Omega)$ and $p\in H_0^1(\Omega)$, find $R_h \bm u\in V_h^0, Q_h p\in S_h^0$, such that
\begin{align*}
a(\bm u-R_h\bm u,\bm v_h)+b(\bm v_h,p-Q_h p)&=0,\quad \forall \bm v_h\in V_h^0,\\
b(\bm u-R_h\bm u,q_h)&=0,\quad\forall q_h\in S_h^0.
\end{align*}
According to the orthogonality and the uniqueness of the discrete eigenvalue problem,
\begin{align*}
&\bm u_h=(\lambda_h+1)R_hA\bm u_h.
\end{align*}
Let $\left(\bm \omega^h; p^h\right)$ be the solution of \eqref{prob22} with $\bm f=(\lambda_h+1)\bm u_h$.
Then
\begin{align}\label{omegah}
&\bm \omega^h=(\lambda_h+1)A\bm u_h
\quad \text{and}\quad \bm u_h=R_h \bm \omega^h.
\end{align}
The following theorem relates the eigenvalue problem to a source problem with $\bm f=(\lambda_h+1)\bm u_h$.
\begin{theorem}\label{lemm7}
Let $r(h)=\rho_\Omega(h)+{|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm u_h{|\hspace{-.02in}|\hspace{-.02in}|}$. It holds that
\begin{align}\label{the-04}
{|\hspace{-.02in}|\hspace{-.02in}|}\bm \omega^h-R_h\bm \omega^h{|\hspace{-.02in}|\hspace{-.02in}|}-Cr(h){|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm u_h{|\hspace{-.02in}|\hspace{-.02in}|}\leq{|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm u_h{|\hspace{-.02in}|\hspace{-.02in}|}\leq {|\hspace{-.02in}|\hspace{-.02in}|}\bm \omega^h-R_h\bm \omega^h{|\hspace{-.02in}|\hspace{-.02in}|}+Cr(h){|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm u_h{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{align}
Furthermore, for $h$ small enough, there exist two constants $c$ and $C$ such that
\begin{align}\label{the-05}
c{|\hspace{-.02in}|\hspace{-.02in}|}\bm \omega^h-R_h\bm \omega^h{|\hspace{-.02in}|\hspace{-.02in}|}\leq{|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm u_h{|\hspace{-.02in}|\hspace{-.02in}|}\leq C {|\hspace{-.02in}|\hspace{-.02in}|}\bm \omega^h-R_h\bm \omega^h{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{align}
\end{theorem}
\begin{proof}
Since $\bm u_h=R_h \bm \omega^h$, by the triangle inequality, we have that
\begin{align*}
-{|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm \omega^h{|\hspace{-.02in}|\hspace{-.02in}|}+{|\hspace{-.02in}|\hspace{-.02in}|}\bm \omega^h-R_h\bm \omega^h{|\hspace{-.02in}|\hspace{-.02in}|}\leq {|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm u_h{|\hspace{-.02in}|\hspace{-.02in}|}&\leq{|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm \omega^h{|\hspace{-.02in}|\hspace{-.02in}|}+{|\hspace{-.02in}|\hspace{-.02in}|}\bm \omega^h-R_h\bm \omega^h{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{align*}
Using $\bm u=(\lambda+1)A\bm u$ and \eqref{omegah}, we obtain that
\begin{align}
{|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm \omega^h{|\hspace{-.02in}|\hspace{-.02in}|}&={|\hspace{-.02in}|\hspace{-.02in}|}(\lambda+1) A\bm u-(\lambda_h+1)A\bm u_h{|\hspace{-.02in}|\hspace{-.02in}|}\nonumber\\
&\leq|\lambda+1|{|\hspace{-.02in}|\hspace{-.02in}|} A(\bm u-\bm u_h){|\hspace{-.02in}|\hspace{-.02in}|}+|\lambda -\lambda_h|{|\hspace{-.02in}|\hspace{-.02in}|} A\bm u_h{|\hspace{-.02in}|\hspace{-.02in}|}.\label{the-01}
\end{align}
Due to the well-posedness of \eqref{prob22}, it holds that
\begin{align*}
&{|\hspace{-.02in}|\hspace{-.02in}|}A(\bm u-\bm u_h){|\hspace{-.02in}|\hspace{-.02in}|}\leq C\|\bm u-\bm u_h\|,
\end{align*}
which, together with \eqref{error-eig-01} and \eqref{error-eig-02}, leads to
\begin{align}\label{the-03}
{|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm \omega^h{|\hspace{-.02in}|\hspace{-.02in}|}\leq C r(h){|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm u_h{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{align}
Then \eqref{the-04} follows immediately. Note that $r(h) \to 0$ as $h \to 0$. For $h$ small enough, \eqref{the-04} implies \eqref{the-05}. \end{proof}
We first derive an a posteriori error estimate when (a) $\bm f\in
H(\text{div}^0;\Omega)$ or (b) $\bm f$ is a vector polynomial satisfying $(\bm f,\nabla q_h)=0$, $\forall q_h\in S_h^0$.
Note that $p=p_h=0$ in case (a) and $p_h=0$ in case (b); hence $p_h=0$ in both cases.
Denote the total errors by $\bm e:=\bm u-\bm u_h \text{ and } \varepsilon:= p-p_h=p$.
Then $\bm e \in H_0(\text{curl}^2;\Omega) \text{ and } \varepsilon\in H_0^1(\Omega)$ satisfy the defect equations
\begin{align}\label{error-equation-1}
a( \bm e,\bm v)+b(\bm v, \varepsilon)&=r_1(\bm v), \ \forall \bm v\in H_0(\text{curl}^2;\Omega),\\
\label{error-equation-2}
b(\bm e,q)&=r_2(\nabla q),\ \forall q\in H_0^1(\Omega),
\end{align}
where
\[
r_1(\bm v)=(\bm f,\bm v)-\left((\nabla\times)^2\bm u_h,(\nabla\times)^2\bm v\right)-(\bm u_h,\bm v)-(\nabla p_h,\bm v)=(\bm f,\bm v)-\left((\nabla\times)^2\bm u_h,(\nabla\times)^2\bm v\right)-(\bm u_h,\bm v)
\]
and $r_2(\nabla q)=-(\bm u_h,\nabla q)$.
We have the following Galerkin orthogonality
\begin{align}\label{orth-01}
r_1(\bm v_h)&=0, \ \forall \bm v_h\in V_h^0,\\
\label{orth-02}
r_2(\nabla q_h)&=0,\ \forall q_h\in S_h^0.
\end{align}
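The identities \eqref{orth-01} and \eqref{orth-02} follow directly from the discrete source problem \eqref{prob3}: since $p_h=0$,
\begin{align*}
r_1(\bm v_h)&=(\bm f,\bm v_h)-a(\bm u_h,\bm v_h)=b(\bm v_h,p_h)=0,\quad \forall \bm v_h\in V_h^0,\\
r_2(\nabla q_h)&=-(\bm u_h,\nabla q_h)=-b(\bm u_h,q_h)=0,\quad \forall q_h\in S_h^0.
\end{align*}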
The error estimator will be constructed by employing
Lemma \ref{Helm}. Writing $\bm e=\bm e^0+\bm e^\bot$ and $\bm v = \bm v^0+\bm v^\bot$ with $\bm e^0, \bm v^0 \in \nabla H_0^1(\Omega)$ and $\bm e^\bot, \bm v^\bot \in X$, we obtain that
\begin{align}
\left(\bm e^0, \bm{v}^{0}\right)+\left(\bm{v}^{0}, \nabla\varepsilon\right) &=r_1(\bm{v}^0), \quad \forall \bm{v}^{0} \in \nabla H_0^1(\Omega), \label{irrotational}\\
\left((\nabla\times)^2\bm e^{\perp}, (\nabla\times)^2\bm v^{\perp}\right)+(\bm e^{\perp}, \bm v^{\perp}) &=r_1(\bm{v}^{\perp}), \quad \forall \bm{v}^{\perp} \in X,\label{irrotationa2}\\
(\bm e^0,\nabla q)&=r_2(\nabla q),\quad\forall q\in H_0^1(\Omega).\label{irrotationa3}
\end{align}
The estimators for the irrotational part $\bm e^0$, the solenoidal part $\bm e^\bot$, and $\nabla\varepsilon$ will be derived separately.
Firstly, consider the irrotational part $\bm e^0$ and $\nabla \varepsilon$. For $\vartheta\in H_0^1(\Omega)$, we have
\[
r_1(\nabla \vartheta) =\sum_{T \in \mathcal{T}_{h}}\left(\bm f-\bm u_h, \nabla \vartheta\right)_T
=\sum_{T \in \mathcal{T}_{h}}-\left(\nabla\cdot ({\bm{f}-\bm u_h}),\vartheta\right)_T+\sum_{E \in \mathcal{E}_{h}^{\text{int}}}\left([\![\bm{n}_E\cdot({\bm f}-\bm u_h)]\!]_{E}, \vartheta\right)_{E},
\]
where the jump
\[\left[\![\bm{n}_E\cdot{\bm{u}}_{h}]\!\right]_{E}=\left(\bm{n}_E\cdot{\bm{u}}_{h}\right)_{E\subset T_2}-\left(\bm{n}_E\cdot{\bm{u}}_{h}\right)_{E\subset T_1},\]
with $E\in \mathcal{E}_h^{\text{int}}$ the common edge of two adjacent elements $T_1, T_2\in \mathcal{T}_{h}$ and $\bm n_E$ the unit normal vector of $E$ directed towards the interior of $T_1$.
We also have
\[
r_2(\nabla \vartheta)=\sum_{T \in \mathcal{T}_{h}}-(\bm u_h, \nabla \vartheta)_T
=\sum_{T \in \mathcal{T}_{h}}\left(\nabla \cdot \bm u_h, \vartheta\right)_T-\sum_{E \in \mathcal{E}_{h}^{\text{int}}}\left([\![\bm n_E\cdot \bm u_h]\!]_E,\vartheta\right)_E.
\]
We introduce the error terms which are related to the upper and lower bounds for $\bm e^0$ and $\nabla\varepsilon$:
\begin{align}\label{errore0}
\eta_{0}:=\bigg(\sum_{T \in \mathcal{T}_{h}}\left(\eta_{0}^{T}\right)^{2}\bigg)^{1 / 2}+\bigg(\sum_{E \in \mathcal{E}_{h}^{\text{int}}}\left(\eta_{0}^{E}\right)^{2}\bigg)^{1 / 2},\\
\label{errore3}
\eta_{3}:=\bigg(\sum_{T \in \mathcal{T}_{h}}\left(\eta_{3}^{T}\right)^{2}\bigg)^{1 / 2}+\bigg(\sum_{E \in \mathcal{E}_{h}^{\text{int}}}\left(\eta_{3}^{E}\right)^{2}\bigg)^{1 / 2},
\end{align}
where
\begin{align*}
\eta_{0}^{T} &:= h_{T}\left\|\nabla\cdot ({\bm f}-\bm u_h)\right\|_T ,\ T \in \mathcal{T}_{h}, \\
\eta_{0}^{E} &:=h_{E}^{1 / 2}\left\|[\![\bm n_E\cdot (\bm f-\bm u_h)]\!]_E\right\|_{E} ,\ E \in \mathcal{E}_{h}^{\text{int}},\\
\eta_{3}^{T} &:= h_{T}\left\|\nabla\cdot \bm u_h\right\|_T ,\ T \in \mathcal{T}_{h}, \\
\eta_{3}^{E} &:=h_{E}^{1 / 2}\left\|[\![\bm n_E\cdot \bm u_h]\!]_E\right\|_{E} ,\ E \in \mathcal{E}_{h}^{\text{int}}.
\end{align*}
Next, we consider the bounds for $\bm e^\bot$. For $\bm w \in X$, the residual $r_1(\bm w)$ can be expressed as
\begin{eqnarray*}
r_1(\bm w)&=&\sum_{T \in \mathcal{T}_{h}}\big(\bm f-\bm u_h, \bm{w}\big)_T-\left((\nabla\times)^2{\bm{u}}_{h}, (\nabla\times)^2\bm{w}\right)_T \\
&=&\sum_{T \in \mathcal{T}_{h}}\left(\bm f-(\nabla\times)^4{\bm{u}}_{h}-\bm u_h, \bm{w}\right)_T-\sum_{E \in \mathcal{E}_{h}^{\text{int}}}\left([\![ (\nabla\times)^2 {\bm{u}}_{h} \times\bm{n}_E]\!]_{E}, \nabla\times\bm{w}\right)_{E}\\
&& ~ -\sum_{E \in \mathcal{E}_{h}^{\text{int}}}\left([\![(\nabla\times)^3 {\bm{u}}_{h}]\!]_{E}, \bm n_E\times\bm{w}\right)_{E},
\end{eqnarray*}
where $[\![(\nabla\times)^2\bm u_h\times\bm n_E]\!]_E$ stands for the jump of the tangential component of $(\nabla\times)^2\bm u_h$ and $[\![(\nabla\times)^3 {\bm{u}}_{h}]\!]_{E}$ stands for the jump of $(\nabla\times)^3 {\bm{u}}_{h}$.
The bounds for ${|\hspace{-.02in}|\hspace{-.02in}|}\bm e^\bot {|\hspace{-.02in}|\hspace{-.02in}|}$ contain the error terms
\begin{align}
\label{errore1}\eta_{1} &:=\bigg(\sum_{T \in \mathcal{T}_{h}}\left(\eta_{1}^{T}\right)^{2}\bigg)^{1 / 2}+\bigg(\sum_{E \in \mathcal{E}_{h}^{\text{int}}}\left(\eta_{1;1}^{E}\right)^{2}\bigg)^{1 / 2}+\bigg(\sum_{E \in \mathcal{E}_{h}^{\text{int}}}\left(\eta_{1;2}^{E}\right)^{2}\bigg)^{1 / 2},\\
\label{errore2}\eta_{2} &:=\bigg(\sum_{T \in \mathcal{T}_{h}}\left(\eta_{2}^{T}\right)^{2}\bigg)^{1 / 2},
\end{align}
where
\begin{align*}
\eta_{1}^{T} &:= h_{T}^2\left\|\pi_{h} \bm{f}-(\nabla\times)^4{\bm{u}}_{h}-\bm u_h\right\|_T, \quad T \in \mathcal{T}_{h}, \\
\eta_{2}^{T} &:= h_{T}^2\left\|\bm{f}-\pi_{h} \bm{f}\right\|_T, \quad T \in \mathcal{T}_{h}, \\
\eta_{1;1}^{E} &:= h_{E}^{1/ 2}\left\|[\![\bm{n}_E \times (\nabla\times)^2 {\bm{u}}_{h}]\!]_{E}\right\|_{E},
\quad E \in \mathcal{E}_{h}^{\text {int }},\\
\eta_{1;2}^{E} &:= h_{E}^{3 / 2}\left\|[\![ (\nabla\times)^3 {\bm{u}}_{h}]\!]_{E}\right\|_{E}, \quad E \in \mathcal{E}_{h}^{\text {int }},
\end{align*}
and $\pi_{h} \bm{f}$ denotes the $\bm L^2$-projection of $\bm f$ onto $\bm P_k(T)$.
Now we state the a posteriori estimate for $\bm e$ and $\varepsilon$ in the energy norm.
\begin{theorem}\label{posteriori-estimate-main}
Let $\eta_0, \ \eta_1$, $\eta_2$, and $\eta_3$ be defined in \eqref{errore0}, \eqref{errore1}, \eqref{errore2}, and \eqref{errore3}, respectively.
Then, if $h< 1$,
\begin{align*}
\gamma_1(\eta_{0}+\eta_{1}+\eta_3)-\gamma_2\eta_2\leq{|\hspace{-.02in}|\hspace{-.02in}|}\bm e{|\hspace{-.02in}|\hspace{-.02in}|}+\|\nabla \varepsilon\|\leq\Gamma_1(\eta_{0}+\eta_{1}+\eta_3)+\Gamma_2\eta_2
\end{align*}
and, if $h$ is small enough,
\begin{align*}
\gamma_3(\eta_{1}+\eta_3)-\gamma_4(\eta_2+h^2\eta_{0})\leq{|\hspace{-.02in}|\hspace{-.02in}|}\bm e{|\hspace{-.02in}|\hspace{-.02in}|}\leq\Gamma_3(\eta_{0}+\eta_{1}+\eta_3)+\Gamma_4\eta_2,
\end{align*}
where $\gamma_1, \gamma_2, \gamma_3, \gamma_4, \Gamma_1, \Gamma_2, \Gamma_3$, and $\Gamma_4$ are some constants independent of $h$.
\end{theorem}
\begin{proof}
Since $\bm e = \bm e^0+\bm e^\bot$, the proof is split into two parts.
{\bf (i) Estimation of the irrotational part of the error.}
Based on \eqref{irrotationa3}, we consider the following coercive variational problem on $H^1_0(\Omega)$: seek $\varphi\in H_0^1(\Omega)$ such that
\begin{align}
(\nabla\varphi,\nabla q)=r_2(\nabla q),\quad\forall q\in H^1_0(\Omega).
\end{align}
Since $\bm e^0\in \nabla H_0^1(\Omega)$, \eqref{irrotationa3} and the uniqueness of the above problem yield $\bm e^0=\nabla \varphi$. Note also that $r_2(\nabla q_h)=0$ for all $q_h \in S_h^0$.
Define a projection operator $P_h^k: H^1_0(\Omega)\longrightarrow S_h^0$ such that (see, e.g., \cite{beck2000residual,osborn1975spectral, scott1990finite})
\begin{align}
&P_h^k \phi=\phi,\quad \forall \phi\in S_h^0,\label{app-01}\\
&\|\phi-P_h^k \phi\|_T\leq Ch_T\|\nabla \phi\|_{\omega_T}, \label{app-02}\\
&\|\phi-P_h^k \phi\|_{L^2(E)}\leq C\sqrt{h_E}\|\nabla \phi\|_{\omega_E},\label{app-03}\\
&\|\nabla P_h^k \phi\|_T\leq C\|\nabla\phi\|_{\omega_T}.\label{app-04}
\end{align}
Due to \eqref{irrotationa3} and the orthogonal property \eqref{orth-02}, we have that
\begin{align}
\|\bm e^0\|^2=(\bm e^0,\nabla\varphi)=r_2(\nabla\varphi)=r_2(\nabla\varphi-\nabla P_h^k\varphi)=-(\bm u_h,\nabla(\varphi-P_h^k\varphi)).
\end{align}
Using integration by parts, \eqref{app-02}, and \eqref{app-03}, we obtain that
\begin{eqnarray*}
-\left(\bm u_h,\nabla(\varphi-P_h^k\varphi)\right)
&=&\sum_{T\in \mathcal{T}_h}(\nabla\cdot\bm u_h,\varphi-P_h^k\varphi)_T-\sum_{E\in \mathcal{E}_h^{\text{int}}}([\![\bm n\cdot \bm u_h]\!]_E,\varphi-P_h^k\varphi)_E\\
&\le &C \sum_{T\in \mathcal{T}_h}\|\nabla\cdot\bm u_h\|_Th_T\|\nabla \varphi\|_{\omega_T}+C\sum_{E\in \mathcal{E}_h^{\text{int}}}\|[\![\bm n\cdot \bm u_h]\!]_E\|_{L^2(E)}\sqrt{h_E}\|\nabla \varphi\|_{\omega_E}\\
&\le & C \Big(\sum_{T\in \mathcal{T}_h}\|\nabla\cdot\bm u_h\|_T^2h_T^2\Big)^{1/2}\|\bm e^0\|+
C \Big(\sum_{E\in \mathcal{E}_h^{\text{int}}}\|[\![\bm n\cdot \bm u_h]\!]_E\|_{L^2(E)}^2{h_E}\Big)^{1/2}\|\bm e^0\|.
\end{eqnarray*}
Therefore, we have
\begin{align}\label{irrota-01}
\|\bm e^0\|
&\leq C\eta_3.
\end{align}
Similarly, we can obtain the upper bound of $\|\nabla\varepsilon\|$. Due to \eqref{irrotational} and \eqref{irrotationa3}, we have
\begin{align}
\|\nabla\varepsilon\|^2=r_1(\nabla\varepsilon)-r_2(\nabla \varepsilon)=r_1(\nabla \varepsilon-\nabla P^k_h\varepsilon)-r_2(\nabla \varepsilon-\nabla P^k_h\varepsilon).
\end{align}
By Green's formula, \eqref{app-02}, and \eqref{app-03},
\begin{align*}
\|\nabla\varepsilon\|^2
&=\sum_{T\in \mathcal{T}_h}-\left(\nabla\cdot\left(\bm f-\bm u_h\right),\varepsilon-P_h^k\varepsilon\right)-\sum_{T\in \mathcal{T}_h}\left(\nabla\cdot\bm u_h,\varepsilon-P_h^k\varepsilon\right)\\
& \quad +\sum_{E\in \mathcal{E}_h^{\text{int}}}\left([\![\bm n\cdot(\bm f-\bm u_h)]\!]_E,\varepsilon-P_h^k\varepsilon\right)_E
+\sum_{E\in \mathcal{E}_h^{\text{int}}}\left([\![\bm n\cdot\bm u_h]\!]_E,\varepsilon-P_h^k\varepsilon\right)_E\\
&\leq \sum_{T\in \mathcal{T}_h}\|\nabla\cdot(\bm f-\bm u_h)\|_Th_T\|\nabla \varepsilon\|_{\omega_T}+\sum_{E\in \mathcal{E}_h^{\text{int}}}\|[\![\bm n\cdot (\bm f-\bm u_h)]\!]_E\|_{L^2(E)}\sqrt{h_E}\|\nabla\varepsilon\|_{\omega_E}\\
& \quad +\sum_{T\in \mathcal{T}_h}\|\nabla\cdot\bm u_h\|_Th_T\|\nabla \varepsilon\|_{\omega_T}+\sum_{E\in \mathcal{E}_h^{\text{int}}}\|[\![\bm n\cdot \bm u_h]\!]_E\|_{L^2(E)}\sqrt{h_E}\|\nabla\varepsilon\|_{\omega_E}\\
&\le C\Big(\sum_{T\in \mathcal{T}_h}\|\nabla\cdot(\bm f-\bm u_h)\|_T^2h_T^2\Big)^{1/2}\|\nabla \varepsilon\|+
C \Big(\sum_{T\in \mathcal{T}_h}\|\nabla\cdot\bm u_h\|_T^2h_T^2\Big)^{1/2}
\|\nabla \varepsilon\|\\
& \quad + C\Big(\sum_{E\in \mathcal{E}_h^{\text{int}}}\|[\![\bm n\cdot (\bm f-\bm u_h)]\!]_E\|_{L^2(E)}^2{h_E}\Big)^{1/2}\|\nabla \varepsilon\|+
C\Big(\sum_{E\in \mathcal{E}_h^{\text{int}}}\|[\![\bm n\cdot \bm u_h]\!]_E\|_{L^2(E)}^2{h_E}\Big)^{1/2}\|\nabla \varepsilon\|.
\end{align*}
Therefore, we have that
\begin{align}\label{irrota-varepsilon}
\|\nabla \varepsilon\|
\leq C\left(\eta_0+\eta_3\right).
\end{align}
We now derive lower bounds for $\|\bm e^0\|$ and $\|\nabla \varepsilon\|$ using bubble functions.
Denote by $\lambda_1^T,\lambda_2^T,\lambda_3^T$ the barycentric coordinates of $T\in\mathcal{T}_h$ and define the bubble function $b_T$ by
\begin{align*}
b_T=
\left\{
\begin{array}{ll}
27\lambda_1^T\lambda_2^T\lambda_3^T, & {\text{on}\ T,} \\
0, & {\Omega \setminus T.}
\end{array}
\right.
\end{align*}
Given $E\in \mathcal{E}_h$, a common edge of $T_1$ and $T_2$,
let $\omega_E=T_1\cup T_2$ and enumerate the vertices of $T_1$ and $T_2$ such that the vertices of $E$ are numbered first.
Define the edge-bubble function $b_E$ by
\begin{align*}
b_E=
\left\{
\begin{array}{ll}
4\lambda_1^{T_i}\lambda_2^{T_i}, & {\text{on}\ T_i,\ i=1, 2,} \\
0, & {\Omega \setminus \omega_E.}
\end{array}
\right.
\end{align*}
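As a quick sanity check on the scaling constants, both bubble functions attain the maximum value one: $\lambda_1^T\lambda_2^T\lambda_3^T$ is maximized at the barycenter of $T$, where each $\lambda_i^T=1/3$, and $\lambda_1^{T_i}\lambda_2^{T_i}$ is maximized at the midpoint of $E$ (on $E$ we have $\lambda_1^{T_i}+\lambda_2^{T_i}=1$), so that
\begin{align*}
\max_{T} b_T=27\left(\tfrac13\right)^3=1,\qquad \max_{E} b_E=4\left(\tfrac12\right)^2=1.
\end{align*}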
Using the technique in \cite{ainsworth1997posteriori}, we have the following norm equivalences.
\begin{align}
&\|b_T\phi_h\|_T\leq\|\phi_h\|_T\leq C\|b_T^{1/2}\phi_h\|_T, \forall \phi_h\in P_k(T),\label{eq-nor-01}\\
&\|b_E\phi_h\|_E\leq\|\phi_h\|_E\leq C\|b_E^{1/2}\phi_h\|_E, \forall \phi_h\in P_k(E),\label{eq-nor-02}\\
&{\|b_E\phi_h\|_T\leq\|\phi_h\|_T, \forall \phi_h\in P_k(T)}.\label{eq-nor-03}
\end{align}
Using \eqref{eq-nor-01}, integration by parts, the inverse inequality, and the fact that $b_T\nabla\cdot\bm u_h\in H_0^1(T)\subset H_0^1(\Omega)$,
we have that
\begin{align*}
\frac{(\eta_3^T)^2}{h_T^2}&=\|\nabla\cdot\bm u_h\|_T^2\leq C\|b_T^{1/2}\nabla\cdot\bm u_h\|_T^2=C\int_Tb_T(\nabla\cdot\bm u_h)^2\text{d} \bm x\\
&=-C\int_T\bm u_h\cdot\nabla(b_T\nabla\cdot\bm u_h)\text{d} \bm x=Cr_2\big(\nabla(b_T\nabla\cdot\bm u_h)\big)\\
&=C\left(\bm e^0,\nabla(b_T\nabla\cdot\bm u_h)\right)\leq C\|\bm e^0\|_T\|\nabla(b_T\nabla\cdot\bm u_h)\|_T\\
&\leq \frac{C}{h_T}\|\bm e^0\|_T\|\nabla\cdot\bm u_h\|_T,
\end{align*}
which implies that
\begin{align}\label{eta3T}
\eta_3^T\leq C\|\bm e^0\|_T.
\end{align}
Extend $[\![\bm n\cdot \bm u_h]\!]_E$ to $[\![\bm n\cdot \bm u_h]\!]_{E;T_i}$ defined on $T_i$ such that
\begin{align}
\left\|[\![\bm n\cdot \bm u_h]\!]_{E;T_i}\right\|_{T_i}\leq Ch^{1/2}_{T_i}\|[\![\bm n\cdot \bm u_h]\!]_E\|_E.
\end{align}
The estimate of the local upper bound for $\eta_3^E$ can be obtained similarly:
\begin{eqnarray*}
\frac{(\eta_3^E)^2}{h_E}&=&\|[\![\bm n\cdot \bm u_h]\!]_E\|_E^2\leq C\int_E[\![\bm n\cdot \bm u_h]\!]_E^2b_E\text{d} s\\
&=&C\left(\sum_{i=1}^2\int_{T_i}\bm u_h\cdot\nabla(b_E[\![\bm n\cdot \bm u_h]\!]_{E;T_i})+(\nabla\cdot\bm u_h)\,b_E[\![\bm n\cdot \bm u_h]\!]_{E;T_i}\text{d} \bm x\right)\\
&=& -C r_2\big(\nabla(b_E[\![\bm n\cdot \bm u_h]\!]_{E;T_1\cup T_2})\big)+C\sum_{i=1}^2\int_{T_i}(\nabla\cdot\bm u_h)\,b_E[\![\bm n\cdot \bm u_h]\!]_{E;T_i}\text{d} \bm x\\
&\leq & C\sum_{i=1}^2\left(h_{T_i}^{-1}\|\bm e^0\|_{T_i}+\|\nabla\cdot\bm u_h\|_{T_i}\right){\eta_3^E},
\end{eqnarray*}
where we have used the fact that
\[
\|\nabla(b_E[\![\bm n \cdot \bm u_h]\!]_{E;T_i})\|_{T_i}\leq Ch_{T_i}^{-1}\|b_E[\![\bm n \cdot \bm u_h]\!]_{E;T_i}\|_{T_i}\leq Ch_{T_i}^{-1/2}\|[\![\bm n \cdot \bm u_h]\!]_E\|_{E}.
\]
Consequently,
\begin{align}\label{eta3E}
\eta_3^E\leq C\big(\|\bm e^0\|_{\omega_E}+\eta_3^{T_1}+\eta_3^{T_2}\big)\leq C\|\bm e^0\|_{\omega_E}.
\end{align}
Now collecting \eqref{irrota-01}, \eqref{eta3T}, and \eqref{eta3E}, we have that
\begin{align}\label{est-eta3}
c\eta_3\leq\|\bm e^0\|\leq C\eta_3.
\end{align}
Similarly,
\begin{align}\label{eta0T}
\eta_0^T\leq C\left(\|\nabla\varepsilon\|_T+\|\bm e^0\|_T\right),
\end{align}
\begin{align}\label{eta0E}
\eta_0^E\leq C\left(\|\nabla\varepsilon\|_{\omega_E}+\|\bm e^0\|_{\omega_E}\right)+\eta_0^{T_1}+\eta_0^{T_2}\leq C\left(\|\nabla\varepsilon\|_{\omega_E}+\|\bm e^0\|_{\omega_E}\right).
\end{align}
Combining \eqref{irrota-01}, \eqref{irrota-varepsilon}, \eqref{eta0T}, and \eqref{eta0E}, we obtain that
\begin{align}\label{este0}
c(\eta_0+\eta_3)\leq\|\nabla \varepsilon\|+\|\bm e^0\|\leq C(\eta_0+\eta_3).
\end{align}
{\bf (ii) Estimation of the solenoidal part $\bm e^\bot$.}
We start with proving the upper bound for $\eta_{1}^T$ by using $b_T$ again.
Employing a technique similar to that in \cite{ainsworth1997posteriori}, we have the following estimates for any $\bm v$ in a finite-dimensional polynomial space:
\begin{align}
&\|\bm v\|_{T}^{2} \leq C \int_{T} b^2_{T} \bm v^{2}\text{d}\bm x,\label{est1}\\
&\left\|b^2_{T} \bm v\right\|_{T} \leq\|\bm v\|_{T}.\label{est2}
\end{align}
Setting $\bm \phi_h=\pi_{h} \bm{f}-(\nabla\times)^4{\bm{u}}_{h}-\bm u_h$, we have that
\begin{align*}
\left(\frac{\eta_{1}^T}{h_T^2}\right)^2&=\left\|\pi_{h} \bm{f}-(\nabla\times)^4{\bm{u}}_{h}-\bm u_h\right\|_T^2\\
&\leq C\int_T(\bm f-(\nabla\times)^4{\bm{u}}_{h}-{\bm{u}}_{h})b_T^2\bm \phi_h+(\pi_{h} \bm{f}-\bm f)b_T^2\bm \phi_h\text{d} \bm x \quad \big(\text{By \eqref{est1}}\big)\\
&=C\Big(r_1( b_T^2\bm \phi_h)+\int_T(\pi_{h} \bm{f}-\bm f)b_T^2\bm \phi_h\text{d} \bm x \Big)\quad \Big( b_T^2\bm \phi_h\in H_0(\text{curl}^2;\Omega)\Big)\\
&=C\Big(a(\bm e, b_T^2\bm \phi_h)+b(b_T^2\bm \phi_h,\varepsilon)+\int_T(\pi_{h} \bm{f}-\bm f)b_T^2\bm \phi_h\text{d} \bm x \Big)\quad \big( \text{By } \eqref{error-equation-1}\big)\\
&\leq C {\big|\hspace{-.02in}\big|\hspace{-.02in}\big|}\bm e{\big|\hspace{-.02in}\big|\hspace{-.02in}\big|}_{T}{\big|\hspace{-.02in}\big|\hspace{-.02in}\big|} b_T^2\bm \phi_h{\big|\hspace{-.02in}\big|\hspace{-.02in}\big|}_{T}+C\big\|\nabla\varepsilon\big\|_T\left\|b_T^2\bm \phi_h\right\|_T+C\eta_{2}^Th_T^{-2}\left\|b_T^2\bm \phi_h\right\|_T
\end{align*}
Due to the inverse inequality and \eqref{est2}, it holds that
\begin{align*}
{|\hspace{-.02in}|\hspace{-.02in}|}b_T^2\bm \phi_h{|\hspace{-.02in}|\hspace{-.02in}|}_{T}&=\|b_T^2\bm \phi_h\|_T+\|(\nabla\times)^2b_T^2\bm \phi_h\|_T\\
&\leq\|b_T^2\bm \phi_h\|_T+C h_T^{-1}\|\nabla\times b_T^2\bm \phi_h\|_{T}\\
&\leq C h_T^{-2}\|b_T^2\bm \phi_h\|_{T}\leq C h_T^{-2}\|\bm\phi_h\|_{T}.
\end{align*}
Thus we obtain that
\begin{align*}
\left(\frac{\eta_{1}^T}{h_T^2}\right)^2
&\leq C\left( h_T^{-2}{|\hspace{-.02in}|\hspace{-.02in}|}\bm e{|\hspace{-.02in}|\hspace{-.02in}|}_{T}+\|\nabla\varepsilon\|_T+h_T^{-2}\eta_{2}^T\right)\|\bm \phi_h\|_T.
\end{align*}
Dividing the above inequality by $\left\|\bm \phi_h\right\|_T$ and multiplying by $h_T^2$, we obtain
\begin{align}\label{eta1T-upper}
\eta_{1}^T\leq C\left({|\hspace{-.02in}|\hspace{-.02in}|}\bm e{|\hspace{-.02in}|\hspace{-.02in}|}_{T}+h_T^2\|\nabla\varepsilon\|_T+\eta_{2}^T\right).
\end{align}
Next we estimate the upper bound for $\eta_{1;1}^E$ by using the bubble functions $b_T, b_E$. Let $T_1$ and $T_2$ be two elements sharing the edge $E$.
We extend the jump $[\![\bm n\times(\nabla\times)^2\bm u_h]\!]_E$ defined on $E$ to two polynomial functions $[\![\bm n\times(\nabla\times)^2\bm u_h]\!]_{E;T_1}$
defined on $T_1$ and $[\![\bm n\times(\nabla\times)^2\bm u_h]\!]_{E;T_2}$ defined on $T_2$ such that, for $1\leq i\leq 2$,
\begin{align}\label{extension-estimate}
\|[\![\bm n\times(\nabla\times)^2\bm u_h]\!]_{E;T_i}\|_{T_i}\leq C h_{T_i}^{1/2}\|[\![\bm n\times(\nabla\times)^2\bm u_h]\!]_{E}\|_{E}.
\end{align}
Denote $ \psi_h|_{T_i}=[\![\bm n\times(\nabla\times)^2\bm u_h]\!]_{E;T_i}$ for $i=1, 2$ and $\bm \omega_{E,1}=(b_{T_1}-b_{T_2})b_E\psi_h\bm \tau_E$.
A simple calculation shows that
\begin{align*}
(\nabla\times\bm\omega_{E,1})|_E=\frac{27}{8}\left(\frac{h_E}{|T_1|}+\frac{h_E}{|T_2|}\right)b_E^2\psi_h.
\end{align*}
Similar to \eqref{est1} and \eqref{est2}, the following inequalities hold
\begin{align}
&\| v\|_{E} \leq C\| b_Ev\|_{E},\label{est3}\\
&\left\|(b_{T_1}-b_{T_2})b_E v\right\|_{T_1\cup T_2} \leq\|v\|_{T_1\cup T_2}.\label{est4}
\end{align}
Now we are ready to construct the upper bound for $\eta_{1;1}^E$:
\begin{eqnarray*}
&&h_E^{-1}\|[\![\bm n\times(\nabla\times)^2\bm u_h]\!]_E\|_E^2\\
&\leq &C\int_E[\![\bm n\times(\nabla\times)^2\bm u_h]\!]_E \nabla \times \bm \omega_{E,1}\text{d} s\quad(\text{By \eqref{est3}})\\
&=&C\int_{T_1\cup T_2}(\nabla\times)^4\bm u_h\cdot \bm \omega_{E,1}-(\nabla\times)^2\bm u_h\cdot (\nabla\times)^2\bm \omega_{E,1}\text{d} \bm x\quad(\text{By integration by parts})\\
&=&C\left(r_1( \bm \omega_{E,1})-(\bm f -\bm u_h-(\nabla\times)^4\bm u_h,\bm \omega_{E,1})\right)\quad\Big(\bm \omega_{E,1}\in H_0(\text{curl}^2;\Omega)\Big)\\[1.5mm]
&\leq & C {|\hspace{-.02in}|\hspace{-.02in}|}\bm e{|\hspace{-.02in}|\hspace{-.02in}|}_{T_1\cup T_2}{|\hspace{-.02in}|\hspace{-.02in}|}\bm \omega_{E,1}{|\hspace{-.02in}|\hspace{-.02in}|}_{T_1\cup T_2}+C\|\bm \omega_{E,1}\|_{T_1\cup T_2}\|\nabla\varepsilon\|_{T_1\cup T_2}\\[1.5mm]
&& +C\left(h_{T_1}^{-2}(\eta_{1}^{T_1}+\eta_{2}^{T_1})+h_{T_2}^{-2}(\eta_{1}^{T_2}+\eta_{2}^{T_2})\right)
\|\bm \omega_{E,1}\|_{T_1\cup T_2}.
\end{eqnarray*}
By applying the inverse inequality, \eqref{extension-estimate}, and \eqref{est4}, we get
\[{|\hspace{-.02in}|\hspace{-.02in}|}\bm \omega_{E,1}{|\hspace{-.02in}|\hspace{-.02in}|}_{T_1\cup T_2}\leq Ch_E^{-2}\|\bm \omega_{E,1}\|_{T_1\cup T_2}\leq Ch_E^{-3/2}\|[\![\bm n\times(\nabla\times)^2\bm u_h]\!]_E\|_E,\]
which, together with \eqref{eta1T-upper}, leads to
\begin{align}\label{etaE-1}
\eta_{1;1}^E\leq C\left({|\hspace{-.02in}|\hspace{-.02in}|}\bm e{|\hspace{-.02in}|\hspace{-.02in}|}_{T_1\cup T_2}+\eta_{2}^{T_1}+\eta_{2}^{T_2}+h_{T_1}^2\|\nabla\varepsilon\|_{T_1}+h_{T_2}^2\|\nabla\varepsilon\|_{T_2}\right).
\end{align}
The upper bound for $\eta_{1;2}^E$ can be constructed in a similar way.
Extend $[\![(\nabla\times)^3\bm u_h]\!]_{E}$ to $[\![(\nabla\times)^3\bm u_h]\!]_{E;T_i}$ on $T_i$ such that
\begin{align}\label{extension-estimate2}
\|[\![(\nabla\times)^3\bm u_h]\!]_{E;T_i}\|_{T_i}\leq C h_{T_i}^{1/2}\|[\![(\nabla\times)^3\bm u_h]\!]_E\|_{E}.
\end{align}
Denote $\bm \omega_{E,2}|_{T_i}:=b_E^2[\![(\nabla\times)^3\bm u_h]\!]_{E;T_i}\bm \tau_E$ with $\bm \tau_E$ such that $\bm n_E\times\bm \tau_E=1$.
Then $\bm n_E\times \bm\omega_{E,2}|_{E}=b_E^2[\![(\nabla\times)^3 \bm u_h]\!]_{E}$. Hence,
\begin{eqnarray*}
&&\|[\![(\nabla\times)^3\bm u_h]\!]_E\|_E^2
\lesssim \int_E[\![(\nabla\times)^3\bm u_h]\!]_E\bm n_E\times \bm \omega_{E,2} \text{d} s\\
&= &-\left((\nabla\times)^4\bm u_h, \bm \omega_{E,2}\right)_{T_1\cup T_2}+\left((\nabla\times)^2\bm u_h,(\nabla\times)^2\bm \omega_{E,2}\right)_{T_1\cup T_2}+\int_E [\![\bm n\times(\nabla\times)^2\bm u_h]\!]_E\nabla\times\bm \omega_{E,2} \text{d} s\\
&\le & C\left(r_1( \bm \omega_{E,2})-\left(\bm f-\bm u_h -(\nabla\times)^4\bm u_h,\bm\omega_{E,2}\right)+\int_E [\![\bm n\times(\nabla\times)^2\bm u_h]\!]_E\nabla\times\bm \omega_{E,2} \text{d} s\right)\\
&\le & C h_E^{-3/2}\left(\eta_{1}^{T_1}+\eta_{2}^{T_1}+\eta_{1}^{T_2}+\eta_{2}^{T_2}+\eta_{1;1}^E+{|\hspace{-.02in}|\hspace{-.02in}|}\bm e{|\hspace{-.02in}|\hspace{-.02in}|}_{T_1\cup T_2}+h_{T_1}^2\|\nabla\varepsilon\|_{T_1}+h_{T_2}^2\|\nabla\varepsilon\|_{T_2}\right)\left\|[\![(\nabla\times)^3\bm u_h]\!]_E\right\|_{E}.
\end{eqnarray*}
Dividing the above inequality by $\left\|[\![(\nabla\times)^3\bm u_h]\!]_E\right\|_{E}$ and applying \eqref{eta1T-upper} and \eqref{etaE-1}, we obtain
\begin{align}\label{eta_12_E}
\eta_{1;2}^E\le C\left( \eta_{2}^{T_1}+\eta_{2}^{T_2}+{|\hspace{-.02in}|\hspace{-.02in}|}\bm e{|\hspace{-.02in}|\hspace{-.02in}|}_{T_1\cup T_2}+h_{T_1}^2\|\nabla\varepsilon\|_{T_1}+h_{T_2}^2\|\nabla\varepsilon\|_{T_2}\right).
\end{align}
Collecting \eqref{eta1T-upper}, \eqref{etaE-1}, and \eqref{eta_12_E}, we have that
\begin{align}\label{eta1-upper}
\eta_1\le C\left( \eta_{2}+{|\hspace{-.02in}|\hspace{-.02in}|}\bm e{|\hspace{-.02in}|\hspace{-.02in}|}+h^2\|\nabla\varepsilon\| \right).
\end{align}
It remains to construct the upper bound of $\bm e^{\bot}$.
Since $\bm e^\bot\in X\subset H_0(\text{curl}^2;\Omega)$, Lemma \ref{Helm} provides the decomposition $\bm e^{\perp}=\bm w+\nabla \psi$ with $\bm w\in \bm H^2(\Omega)$ and $\psi\in H_0^1(\Omega)$.
We then have
\begin{align*}
{|\hspace{-.02in}|\hspace{-.02in}|}\bm e^\bot{|\hspace{-.02in}|\hspace{-.02in}|}^2=r_1( \bm e^\bot)=r_1( \bm w)+r_1( \nabla \psi).
\end{align*}
Due to the Galerkin orthogonality \eqref{orth-01}, for any $\bm w_h\in V_h$,
\begin{eqnarray*}
&&r_1( \bm w)=r_1( \bm w-\bm w_h)\\
&=&\sum_{T\in \mathcal{T}_h}\Big(\left(\bm f-\bm u_h-(\nabla\times)^4\bm u_h, \bm w-\bm w_h\right)-\sum_{E\in\mathcal E_h(T)}\int_E\bm n\times(\nabla\times)^2\bm u_h\nabla\times(\bm w-\bm w_h)\text{d} s\\
&&-\sum_{E\in \mathcal E_h(T)}\int_E(\nabla\times)^3\bm u_h \bm n\times (\bm w-\bm w_h)\text{d} s\Big)\\
&\leq & \sum_{T\in \mathcal{T}_h}\Big(\|\bm\pi _h\bm f-\bm u_h-(\nabla\times)^4\bm u_h\|_T\|\bm w-\bm w_h\|_T+\|\bm\pi_h\bm f-\bm f\|_T\|\bm w-\bm w_h\|_T\Big)\\
&&+\sum_{E\in \mathcal{E}_h^{\text{int}}}\Big(\|[\![\bm n \times (\nabla\times)^2\bm u_h]\!]_E\|_E\|\nabla\times(\bm w-\bm w_h)\|_E+\|[\![(\nabla\times)^3\bm u_h]\!]_E\|_E\|\bm n\times(\bm w-\bm w_h)\|_E\Big)\\
&\leq & C\bigg(\sum_{T\in \mathcal{T}_h}\Big(h_T^4\|\bm\pi _h\bm f-\bm u_h-(\nabla\times)^4\bm u_h\|_T^2+h_T^4\|\bm\pi_h \bm f-\bm f\|_T^2+\sum_{E\in\mathcal E_h(T)} h_E^3\|[\![(\nabla\times)^3\bm u_h]\!]_E\|_E^2\Big)\\
&&\qquad+\sum_{E\in \mathcal E_h(T)} h_E\|[\![\bm n \times (\nabla\times)^2\bm u_h]\!]_E\|_E^2\bigg)^{1/2}\\
&&\bigg(\sum_{T\in \mathcal{T}_h}\Big(h_T^{-4}\|\bm w-\bm w_h\|_T^2+\sum_{E\in \mathcal E_h(T)} h_E^{-1}\|\nabla\times(\bm w-\bm w_h)\|_E^2+\sum_{E\in \mathcal E_h(T)} h_E^{-3}\|\bm w-\bm w_h\|_E^2\Big)\bigg)^{1/2}.
\end{eqnarray*}
Let $\bm w_h=\Pi_{C}\bm w$. According to the trace inequality and Theorem \ref{clmt-error}, we obtain
\begin{align*}
\sum_{T\in \mathcal{T}_h}\Big(h_T^{-4}\|\bm w-\bm w_h\|_T^2+\sum_{E\in \mathcal E_h(T)} h_E^{-1}\|\nabla\times(\bm w-\bm w_h)\|_E^2+\sum_{E\in \mathcal E_h(T)} h_E^{-3}\|\bm w-\bm w_h\|_E^2\Big)
\le C \|\bm w\|_2^2.
\end{align*}
Furthermore, we use \eqref{decom-01}, \eqref{decom-02}, and the Poincar\'e inequality to obtain
\begin{align*}
r_1( \bm w)&\leq C(\eta_1+\eta_2)\|\bm w\|_2\leq C(\eta_1+\eta_2)\|\nabla\times\bm e^\bot\|_1\leq C(\eta_1+\eta_2){|\hspace{-.02in}|\hspace{-.02in}|}\bm e^\bot{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{align*}
Similar to the proof of \eqref{irrota-varepsilon}, using \eqref{decom-03}, it holds that
\begin{align*}
r_1( \nabla \psi)\leq C(\eta_0+\eta_3)\|\nabla \psi\|\leq C (\eta_0+\eta_3){|\hspace{-.02in}|\hspace{-.02in}|}\bm e^\bot{|\hspace{-.02in}|\hspace{-.02in}|}.
\end{align*}
Hence,
\begin{align}\label{e-upper}
{|\hspace{-.02in}|\hspace{-.02in}|}\bm e^\bot{|\hspace{-.02in}|\hspace{-.02in}|}\leq C(\eta_0+\eta_1+\eta_2+\eta_3).
\end{align}
Combining \eqref{irrota-varepsilon}, \eqref{est-eta3}, \eqref{este0}, \eqref{eta1-upper}, and \eqref{e-upper}, we obtain Theorem \ref{posteriori-estimate-main}.
\end{proof}
When $\bm f=(\lambda_h+1)\bm u_h$, it follows from the definitions of $\eta_0$, $\eta_2$, and $\eta_3$ that $\eta_0=\lambda_h\eta_3$ and $\eta_2=0$.
The following error estimator is a direct consequence of Theorem \ref{posteriori-estimate-main} and \eqref{error-eig-02}.
\begin{theorem}
For $h$ small enough, there exist constants $c_1, C_1$, and $C_2$ such that
\begin{align*}
c_1(\eta_1+\eta_3)\leq{|\hspace{-.02in}|\hspace{-.02in}|}\bm u-\bm u_h{|\hspace{-.02in}|\hspace{-.02in}|}\leq C_1(\eta_1+(\lambda_h+1)\eta_3),
\end{align*}
and
\begin{align*}
|\lambda-\lambda_h|\leq C_2(\eta_1+(\lambda_h+1)\eta_3)^2,
\end{align*}
where $\eta_1$ and $\eta_3$ are respectively defined in \eqref{errore1} and \eqref{errore3} with $\bm f=(\lambda_h+1)\bm u_h$.
\end{theorem}
\section{Numerical Examples}
\subsection{A priori error estimate}
Consider three domains:
\begin{itemize}
\item $\Omega_1$: the unit square given by $(0, 1) \times (0, 1)$,
\item $\Omega_2$: the L-shaped domain given by $(0, 1) \times (0, 1) \setminus [1/2, 1) \times (0, 1/2]$,
\item $\Omega_3$: the square with a square hole, given by $(0, 1) \times (0, 1) \setminus [1/4, 3/4] \times [1/4, 3/4]$.
\end{itemize}
The initial meshes of the domains are shown in Figure \ref{fig1}. In Tables \ref{tab1}, \ref{tab3}, and \ref{tab5}, we list the first five eigenvalues.
Tables \ref{tab2}, \ref{tab4}, and \ref{tab6} show the convergence rates of the relative errors for the first eigenvalues, which agree with the theory.
\begin{figure}
\centering
\includegraphics[width=0.3\linewidth, height=0.2\textheight]{domain1-eps-converted-to}
\includegraphics[width=0.3\linewidth, height=0.2\textheight]{domain2-eps-converted-to}
\includegraphics[width=0.3\linewidth, height=0.2\textheight]{domain3-eps-converted-to}
\caption{Sample meshes for $\Omega_1$ (left), $\Omega_2$ (middle), and $\Omega_3$ (right).}
\label{fig1}
\end{figure}
\begin{table}[h]
\centering
\caption{The first 5 eigenvalues of $\Omega_1$ with $k=4$.} \label{tab1}
\begin{tabular}{cccccc}
\hline
$h$ &$\lambda_1^h$&$\lambda_2^h$&$\lambda_3^h$&$\lambda_4^h$&$\lambda_5^h$\\
\hline
$1/4$&7.08101988e+02&7.08102390e+02&2.35145718e+03&4.25922492e+03&5.02522026e+03\\
$1/8$&7.07978763e+02&7.07978786e+02&2.35006082e+03&4.25597055e+03&5.02401495e+03\\
$1/16$&7.07971973e+02&7.07971975e+02&2.34999027e+03&4.25582307e+03&5.02399272e+03\\
$1/32$&7.07971564e+02&7.07971564e+02&2.34998613e+03&4.25581473e+03&5.02399235e+03\\
$1/64$&7.07971528e+02&7.07971555e+02&2.34998587e+03&4.25581421e+03&5.02399235e+03\\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\caption{Convergence rate for $\Omega_1$ with $k=4$ (relative error).} \label{tab2}
\begin{tabular}{cccc}
\hline
$h$ &$\lambda_1^h$& error& order\\
\hline
$1/4$&7.08101988e+02&1.74021691e-04&-\\
$1/8$&7.07978763e+02&9.59045415e-06&4.1815\\
$1/16$&7.07971973e+02&5.77922813e-07&4.0527\\
$1/32$&7.07971564e+02&5.08588883e-08&3.5063 \\
$1/64$&7.07971528e+02&-&-\\
\hline
\end{tabular}
\end{table}
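The orders reported in the convergence tables are obtained from the relative errors on successive meshes; since $h$ halves between rows, the observed order is $\log(e_h/e_{h/2})/\log 2$. A quick check with the data of Table \ref{tab2} (the script is illustrative and not part of the original computations):

```python
import math

# Relative errors of the first eigenvalue on Omega_1 (Table tab2),
# computed on meshes with h = 1/4, 1/8, 1/16, 1/32.
errors = [1.74021691e-04, 9.59045415e-06, 5.77922813e-07, 5.08588883e-08]

# Since h halves between consecutive rows, the observed order is
# log(e_h / e_{h/2}) / log(2).
orders = [math.log(e1 / e2) / math.log(2) for e1, e2 in zip(errors, errors[1:])]
print([round(o, 4) for o in orders])  # [4.1815, 4.0527, 3.5063]
```

The printed values reproduce the "order" column of Table \ref{tab2}.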
\begin{table}[h]
\centering
\caption{The first 5 eigenvalues of $\Omega_2$ with $k=4$.} \label{tab3}
\begin{tabular}{cccccc}
\hline
$h$ &$\lambda_1^h$&$\lambda_2^h$&$\lambda_3^h$&$\lambda_4^h$&$\lambda_5^h$\\
\hline
$1/4$&5.34885649e+02&1.57586875e+03&6.10288551e+03&6.40711482e+03&1.09459861e+04\\
$1/8$&5.35061810e+02&1.57477474e+03&6.09556539e+03&6.37916246e+03&1.09184358e+04\\
$1/16$&5.35222062e+02&1.57468831e+03&6.09528577e+03&6.37104166e+03&1.09152964e+04\\
$1/32$&5.35292267e+02&1.57467206e+03&6.09528045e+03&6.36787675e+03&1.09143027e+04\\
$1/64$&5.35320748e+02&1.57466664e+03&6.09528434e+03&6.36661570e+03&1.09139180e+04\\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\caption{Convergence rate for $\Omega_2$ with $k=4$ (relative error).} \label{tab4}
\begin{tabular}{cccc}
\hline
$h$ &$\lambda_1^h$& error& order\\
\hline
$1/4$&5.34885649e+02&3.29341761e-04&-\\
$1/8$&5.35061810e+02&2.99502830e-04&0.1370\\
$1/16$&5.35222062e+02&1.31169871e-04&1.1911\\
$1/32$&5.35292267e+02&5.32057764e-05&1.3018\\
$1/64$&5.35320748e+02&-&-\\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\caption{The first 5 eigenvalues of $\Omega_3$ with $k=4$.} \label{tab5}
\begin{tabular}{cccccc}
\hline
$h$ &$\lambda_1^h$&$\lambda_2^h$&$\lambda_3^h$&$\lambda_4^h$&$\lambda_5^h$\\
\hline
$1/4$&9.43570924e+02&9.43570924e+02&3.35118080e+03&5.10757870e+03&1.03672699e+04\\
$1/8$&9.40543704e+02&9.40543704e+02&3.33230800e+03&5.11255084e+03&1.03470233e+04\\
$1/16$&9.39507116e+02&9.39507116e+02&3.32612997e+03&5.11519580e+03&1.03445476e+04\\
$1/32$&9.39103168e+02&9.39103168e+02&3.32373447e+03&5.11630255e+03&1.03438189e+04\\
$1/64$&9.38943028e+02&9.38943036e+02&3.32278551e+03&5.11674950e+03&1.03435487e+04\\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\caption{Convergence rate for $\Omega_3$ with $k=4$ (relative error).} \label{tab6}
\begin{tabular}{cccc}
\hline
$h$ &$\lambda_1^h$& error& order\\
\hline
$1/4$&9.43570924e+02&3.20825910e-03&-\\
$1/8$&9.40543704e+02&1.10211572e-03&1.5415\\
$1/16$&9.39507116e+02&4.29957430e-04&1.3580\\
$1/32$&9.39103168e+02&1.70524522e-04&1.3342\\
$1/64$&9.38943028e+02&-&-\\
\hline
\end{tabular}
\end{table}
\subsection{A posteriori error estimates}
Figure \ref{fig_rate} shows global error estimators and the relative errors of some simple eigenvalues for the three domains.
It can be observed that both the relative errors and the estimators have the same convergence rates.
Figure \ref{fig_dist} shows the distribution of the local estimators. The estimators are large at the corners and capture the singularities effectively.
\begin{figure} \centering
\subfigure[the third eigenvalue of $\Omega_1$] { \label{fig:a}
\includegraphics[width=0.4\columnwidth]{domain1_rate.png}
}
\subfigure[the first eigenvalue of $\Omega_2$] { \label{fig:b}
\includegraphics[width=0.4\columnwidth]{domain2_rate.png}
}
\subfigure[the third eigenvalue of $\Omega_3$] { \label{fig:c}
\includegraphics[width=0.4\columnwidth]{domain3_rate.png}
}
\caption{The convergence rates of the error estimators and the relative errors.} \label{fig_rate}
\end{figure}
\begin{figure}
\includegraphics[scale=0.22]{postdomain1-eps-converted-to}
\includegraphics[scale=0.22]{postdomain2-eps-converted-to}
\includegraphics[scale=0.22]{postdomain3-eps-converted-to}
\caption{The local estimators.}
\label{fig_dist}
\end{figure}
\section{Conclusion}
An $H(\text{curl}^2)$-conforming element is proposed for the quad-curl problem in 2D.
We establish a priori and robust a posteriori error estimates for the eigenvalue problem.
Thanks to a new decomposition of the solution of the quad-curl problem,
the theory requires no extra regularity of the eigenfunctions. In the future, we plan to use the estimator to develop adaptive finite element methods.
The 3D counterpart is another interesting but challenging topic.
\bibliographystyle{plain}
\section{Introduction}
A blind search for pulsars along the Galactic plane covering 10\% of the region between Galactic longitude $45^{\circ} < l < 135^{\circ}$ and Galactic latitude $0^{\circ} < |b| < 5^{\circ}$ was recently carried out with the Giant Metrewave Radio Telescope (GMRT). It was named the \lq GMRT Galactic Plane Pulsar and Transient Survey\rq. The survey was carried out at 325 MHz with a bandwidth of 16 MHz, divided into 256 filterbank channels. Each field (a circular region of radius $\sim$ 1$^{\circ}$) was observed for 1800 s with a sampling time of 256 $\mu$s.\\
\paragraph{}
The advantage of observing at 325 MHz was the wide field of view ($\sim$ 4 deg$^2$) paired with a sensitivity of 0.6 mJy. The observations were carried out using the incoherent array (IA) mode of the GMRT. The data were written to magnetic tapes and extracted to network attached storage (NAS) disks of the high performance computing cluster, which has 64 dual-core nodes. The pulsar search was done using SIGPROC\footnote{\tt www.sigproc.sourceforge.net} with extensive RFI mitigation algorithms written by one of the authors. The trial DM range used for the search was 0$-$1200 pc-cm$^{-3}$. The analysis was parallelised such that the DM search for a particular field was divided among the nodes. The results were written back to the NAS disks, from where they could be copied to other machines. The candidate plots thus generated were manually scrutinised for identifying good candidates.\\
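The parallelisation described above divides the trial-DM range of each field among the cluster nodes. A minimal sketch of such a partition (a hypothetical helper for illustration; the actual pipeline scripts are not given in the text, and only the range 0$-$1200 pc-cm$^{-3}$ and the 64 nodes are taken from it):

```python
def split_dm_range(dm_max=1200.0, n_nodes=64, dm_min=0.0):
    """Divide the trial-DM range [dm_min, dm_max] into contiguous,
    equal-width sub-ranges, one per compute node (hypothetical helper)."""
    width = (dm_max - dm_min) / n_nodes
    return [(dm_min + i * width, dm_min + (i + 1) * width)
            for i in range(n_nodes)]

chunks = split_dm_range()
print(len(chunks), chunks[0], chunks[-1])  # 64 (0.0, 18.75) (1181.25, 1200.0)
```

Each node then dedisperses and searches only its own sub-range of trial DMs.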
\paragraph{}
The follow-up timing observations continued with a new software back-end at GMRT. This provided 512 filterbank channels across a bandwidth of 33 MHz with 122.88 $\mu$s sampling. The integration time was 1800 s. The timing analysis was done using TEMPO2\footnote{\tt www.atnf.csiro.au/research/pulsar/tempo2} (\cite{H06}).
\section{Results}
PSR J1839+15 came out as a strong candidate and was successfully confirmed in later follow-up observations. The accumulated profile from 23 detections of the pulsar is shown in Figure \ref{accprof}. It shows a very narrow peak. A weaker second component can be seen just before the main pulse. The region before this component may also have another, still weaker component buried in the noise. Overall, the profile may consist of two or more components, although polarization studies are required to confirm this. The timing solution obtained so far is given in Table \ref{param}.\\
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.55]{accprof}
\caption{The accumulated pulse profile for PSR J1839+15 at 325 MHz. The ordinate was scaled using the radiometer equation.}
\label{accprof}
\end{center}
\end{figure}
\begin{table}[h!]
\begin{center}
\begin{tabular}{l r}
\hline
RA & 18h39m06.6(2)s\\
DEC & +15$^{\circ}$06'57(6)"\\
P & 0.54916053388(8) s\\
\.{P} & 2.613(6) $\times$ 10$^{-14}$ s/s\\
DM & 68.1(8) pc-cm$^{-3}$\\
Characteristic age $\tau$ = P/(2\.{P}) & 0.33 Myr\\
Surface Magnetic Field B$_{S}$ & 3.83 $\times$ 10$^{12}$ G\\
DM Distance = DM/n$_{e}$ & 3 kpc$^a$\\
\hline
$^a$DM distance calculated using the model given by \cite{C02}
\end{tabular}
\caption{Important parameters of PSR J1839+15}
Numbers in brackets indicate 2$\sigma$ errors as reported by TEMPO2 in the last digit of the given value. The error on the DM comes from local search done on the time series data.
\label{param}
\end{center}
\end{table}
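The derived quantities in Table \ref{param} follow from the standard magnetic-dipole spin-down relations, $\tau = P/(2\dot P)$ and $B_S \simeq 3.2\times10^{19}\sqrt{P\dot P}$ G (the conversion constant is the conventional one for an orthogonal rotator in vacuum). A quick consistency check, illustrative only:

```python
P = 0.54916053388      # spin period [s], from Table param
Pdot = 2.613e-14       # period derivative [s/s], from Table param
YEAR = 3.156e7         # seconds per year

tau_myr = P / (2.0 * Pdot) / YEAR / 1e6   # characteristic age [Myr]
B_surf = 3.2e19 * (P * Pdot) ** 0.5       # surface magnetic field [G]

print(round(tau_myr, 2), round(B_surf / 1e12, 2))  # 0.33 3.83
```

Both values reproduce the table entries. Similarly, the quoted DM of 68.1 pc-cm$^{-3}$ and model distance of 3 kpc imply a mean electron density along the line of sight of roughly $68.1/3000 \approx 0.023$ cm$^{-3}$.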
\paragraph{}
During the follow-up timing observations, the pulsar could not be detected for 278 days from 30$^{th}$ August 2011 to 13$^{th}$ June 2012. It could then be detected regularly until 2$^{nd}$ September 2012, when it was again not detected. The estimated mean flux densities for the detections are plotted in Figure \ref{fluvar}. As can be clearly seen, the 8$\sigma$ upper limits on non-detections are below the 98\% confidence limit on the expected flux density. Thus, we are fairly confident that PSR J1839+15 is an intermittent pulsar with an OFF time scale of roughly 278 days. We are currently working on calculating \.{P} in the ON and OFF states separately. Two different values of \.{P} would confirm this as an intermittent pulsar. There are only three more such pulsars known currently. PSR B1931+24 shows a quasi-periodic ON-OFF cycle of about 30-40 days (\cite{K06}), PSR J1841$-$0500 shows an OFF time scale of 580 days (\cite{C12}), while J1832+0029 remained OFF for 650 days and 850 days in the two sampled OFF states (\cite{L12}). The reason for this behavior is not understood. Further investigations and discoveries of new members of this class may shed some light on the underlying physics.
\clearpage
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.55]{flux_var}
\caption{Estimated mean flux density of PSR J1839+15 as a function of MJD. Downward arrows indicate 8$\sigma$ flux density limits on non-detections. The dashed horizontal line indicates 98\% confidence limit for the lowest expected flux density given the variations in the observed flux densities.}
\label{fluvar}
\end{center}
\end{figure}
\section{Discussion}
Intermittent pulsars are a rare breed of pulsars showing very long period nulls. The cause of these nulls is a mystery. To add to this already puzzling phenomenon, it was reported (\cite{K06}, \cite{C12} and \cite{L12}) that the spin-down rate is considerably higher in the ON state than in the OFF state, indicating that the particle flow forming the magnetospheric currents is different in the two states. Another puzzling fact is that these pulsars belong to the normal pulsar population in the P$-$\.{P} diagram. The complete cessation of radio emission may be attributed to reduced particle flow, or it may just be a failure of the coherent emission process. These pulsars certainly challenge current emission models and, if studied well, may provide vital inputs for more realistic emission mechanisms.\\
\paragraph{}
Given the ON-OFF nature of these pulsars, they provide great motivation for extending the existing blind searches and embarking on new, sensitive blind searches even in previously searched areas of the sky. Assuming a typical duration of a big survey of 3 to 4 years, and given the OFF-state time scales of these pulsars, rough estimates of the intermittent pulsar population may go up considerably, thus opening up the possibility of discovering many more such objects.
\section{Acknowledgements}
We would like to thank the staff of the GMRT who have made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research.
\section{Introduction}
Over the last decades, much effort has been made to develop analytical and computational tools to improve the understanding
of rare events in complex stochastic systems. Consider, for example, a classical dynamical system with two stable
fixed points where, in the absence of noise, the system is unable to switch from one fixed point to the other. In the presence
of noise, a transition from one stable fixed point to the other stable fixed point can become possible, but its occurrence will
be rare if the noise is small. When the switching occurs, the transition path is itself random and different transition paths have different likelihoods.
The most probable path is the most important path, often called the {\em instanton}. In many cases, the probability distribution around the instanton
follows roughly $\sim {\mathrm{e}}^{-S/\epsilon}$ where the small parameter $\epsilon$ characterizes the strength of the noise and $S$
is the Freidlin-Wentzell functional \cite{freidlin-wentzell:1998} associated with the stochastic system. If the Freidlin-Wentzell theory is applicable,
the instanton can be found as the minimizer of $S$.
Given the fundamental importance of instantons for the noise-driven transition in stochastic systems, it is not surprising
that they have been computed, described, and analyzed in a variety of contexts and fields - starting from the beginnings
in the 70s (Martin-Siggia-Rose \cite{martin-siggia-rose:1973}, DeDominicis \cite{dominicis:1976}, Janssen \cite{janssen:1976}),
to applications nowadays in many fields, including for example turbulence \cite{grafke-grauer-schaefer-etal:2015} and nonlinear optics
\cite{falkovich-etal:2001,terekhov-etal:2014}.
While a great deal of work has been accomplished on the analytical side, the development of efficient numerical methods to compute
instantons, in particular in high-dimensional spaces, is more recent. Key steps on the computational side were the development
of the {\em string method} \cite{e-ren-vanden-eijnden:2007} and the more general {\em geometric minimum action method} (gMAM) \cite{heymann-vanden-eijnden:2008,vanden-eijnden-heymann:2008}. Both methods can be used to compute instantons in instances where
the initial and the final state of the system are known. The string method is extremely successful in the case of diffusive processes where the drift field is a gradient field. The gMAM allows to handle non-gradient drift fields as well as more general noise processes. Note that the framework of gMAM is immediately applicable to stochastic ordinary differential equations, but it needs - like most numerical frameworks - to be adapted to the particular form of the stochastic partial differential equation under consideration.
The present paper has two main objectives: First, it shows how to adapt and implement gMAM for a
stochastically driven nonlinear Schr{\"o}dinger equation. While, in terms of applications, we focus on the case of the transition of
two solitary waves in the context of nonlinear optics, the presented gMAM for the nonlinear Schr{\"o}dinger equation can
find applications in a variety of areas, e.g. Bose-Einstein condensation or nonlinear wave phenomena. The second objective of
the paper is to discuss the relationship of the instanton found in the stochastic PDE system to the instanton of an approximation given by a low-dimensional
stochastic dynamical system. This is important as, in the past, often such low-dimensional reductions have been used to analyze the behavior of
complex systems without being able to characterize the limitations of such an approach. With the new tools developed in this paper it is now
possible to look deeper into the validity of low-dimensional models.
\section{Review of the geometric minimum action method (gMAM)}
Consider a stochastic differential equation \cite{arnold:1974,oksendal:2003,gardiner:2009} for an $n$-dimensional stochastic process $x$ given by
\begin{equation} \label{basic_sde}
dx = b(x)dt + \sqrt{\epsilon}\sigma(x)dW\,.
\end{equation}
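For orientation, sample paths of (\ref{basic_sde}) can be generated with the standard Euler--Maruyama scheme, $x_{k+1} = x_k + b(x_k)\,\Delta t + \sqrt{\epsilon\,\Delta t}\,\sigma(x_k)\,\xi_k$ with $\xi_k \sim \mathcal{N}(0,I)$. The following minimal sketch (illustrative only; the Ornstein--Uhlenbeck drift $b(x) = -x$ with $\sigma = 1$ is our choice of test case, not taken from the text) checks the sampled stationary variance against the exact value $\epsilon/2$:

```python
import numpy as np

def euler_maruyama(b, sigma, x0, eps, dt, n_steps, n_paths, rng):
    """Sample n_paths end points of dx = b(x) dt + sqrt(eps) sigma(x) dW."""
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        xi = rng.standard_normal(n_paths)
        x = x + b(x) * dt + np.sqrt(eps * dt) * sigma(x) * xi
    return x

rng = np.random.default_rng(0)
# Ornstein-Uhlenbeck drift b(x) = -x with sigma = 1: the stationary
# variance of the exact process is eps/2.
xT = euler_maruyama(lambda x: -x, lambda x: np.ones_like(x),
                    x0=0.0, eps=0.1, dt=0.01, n_steps=800, n_paths=20000, rng=rng)
print(abs(float(np.var(xT)) - 0.05) < 0.005)  # True
```

Such brute-force sampling becomes hopeless for rare transitions at small $\epsilon$, which is precisely what motivates the instanton computations below.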
It is well-known that the transition probability (meaning the probability to find a particle at the location $\tilde x$ at the time $t$
assuming that the particle started at $x(0)=x_0$) can be written as a path integral \cite{langouche-roekaerts-etal:1982,kleinert:2009,Schafer_Moore_2011} in the form of
\begin{equation}
p(\tilde x,t) = \int_{{\mathcal{C}}(\tilde x,t|x_0,0)} {\mathcal{D}}x(\tau)\, {\mathrm{e}}^{-\frac{1}{\epsilon}\int_0^t L(x(\tau),\dot x(\tau))\, d\tau}\,.
\end{equation}
Here, we denote by ${\mathcal{C}}(\tilde x,t|x_0,0)$ the set of all paths that are connecting the starting point $(x_0, 0)$ with the
end point $(\tilde x,t)$. The Lagrangian $L$ is given by
\begin{equation}
L = \frac{1}{2} \langle \dot x - b(x), a^{-1}(\dot x - b(x))\rangle\,,
\end{equation}
where $\langle \cdot,\cdot \rangle$ denotes the usual inner product in ${\mathbb{R}}^n$ and the correlation matrix $a$ is
given by $a = \sigma\sigma^T$. For small $\epsilon$, intuition tells us that the transition probability should be dominated by
a path where the action $S$ defined as
\begin{equation}
S = \int_0^t L(x(\tau),\dot x(\tau))\, d\tau
\end{equation}
is minimal. This intuition can be made rigorous via using the Freidlin-Wentzell approach of large deviations \cite{freidlin-wentzell:1998}. In the mathematical
literature, a curve $\left(\tau,x(\tau)\right)$ that minimizes the action $S$ is called a {\em minimizer of the Freidlin-Wentzell action functional},
in the physics literature, such a path is often called an {\em instanton}. In a simple calculation, we can derive the associated Euler-Lagrange equations that an instanton needs to satisfy. However, in many instances, it is convenient to apply a Legendre-Fenchel transformation and
move from the Lagrangian to the Hamiltonian framework by introducing momenta $p_i = \partial L / \partial \dot x_i$. For diffusive processes as
in (\ref{basic_sde}), the Hamiltonian $H$ is given by
\begin{equation}
H = \frac{1}{2} \langle p, ap\rangle + \langle b, p \rangle\,.
\end{equation}
Note that we restrict ourselves to processes where the drift $b$ and the noise $\sigma$ are not explicit functions of the time $t$. In this case,
the Hamiltonian $H$ is conserved. One particular example that is of interest in many physical applications is the exit from a stable fixed point $x_s$, where the drift vanishes, $b(x_s) = 0$. Assume that we start from such a fixed point and consider transitions to a different point $\tilde x$. It can be shown in general that the path that minimizes the action $S$ needs infinite time to move from the fixed point to the end point $\tilde x$ and
that this path corresponds to the constraint that $H=0$. Therefore, we are often interested in the Freidlin-Wentzell minimizer under the additional constraint that the Hamiltonian $H=0$. The minimizer or instanton satisfies the equations
\begin{equation} \label{instanton_general}
\dot x = \frac{\partial H}{\partial p} \equiv H_p, \qquad \dot p = - \frac{\partial H}{\partial x} = - H_x
\end{equation}
with the appropriate boundary conditions. It is common to parametrize the path of the instanton such that $x(\tau)$ approaches $x_s$ as $\tau \to -\infty$ and reaches $\tilde x$ at the time $\tau=0$. Then, the boundary conditions for the instanton equations (\ref{instanton_general}) are given by
\begin{equation}
\lim_{\tau \to -\infty} x(\tau) = x_s, \qquad x(0) = \tilde x\,.
\end{equation}
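As an illustrative special case (standard material, recalled here only as a consistency check): for additive noise, $a=I$, and a gradient drift $b = -\nabla V$, the constraint $H=0$ can be resolved explicitly. Since
\begin{equation*}
H = \frac{1}{2}\langle p, p\rangle + \langle b, p\rangle = \frac{1}{2}\langle p, p + 2b\rangle\,,
\end{equation*}
the zero-Hamiltonian condition admits, besides the relaxation branch $p=0$, the fluctuation branch $p = -2b = 2\nabla V$. The first equation in (\ref{instanton_general}) then reads $\dot x = p + b = -b = \nabla V$, i.e. the instanton is the time-reversed relaxation path, and its action reduces to a potential difference,
\begin{equation*}
S = \int \langle p, \dot x\rangle\, d\tau = 2\int \frac{dV}{d\tau}\, d\tau = 2\big(V(\tilde x) - V(x_s)\big)\,.
\end{equation*}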
There are only very few examples where the instanton equations can be solved analytically. Especially for higher-dimensional systems, we need to find the minimizer of the functional $S$ numerically. Note that, in particular if the stochastic equation under consideration represents a finite-dimensional Galerkin approximation of a stochastic partial differential equation (SPDE), the dimension of $x$ can be large and efficient numerical methods are desirable. One numerically efficient approach is to parametrize the instanton using arc length instead of the original parametrization in terms of the time $t$. Using a more appropriate parametrization is the key idea of the {\em string method} \cite{e-ren-vanden-eijnden:2007} and the {\em geometric minimum action method} (gMAM) \cite{heymann-vanden-eijnden:2008,vanden-eijnden-heymann:2008}.
Consider a parametrization $t = g(s)$ and let $\Phi(s) = x(g(s))$ be the path of the minimizer depending on the parameter $s$. Setting $\lambda(s) = 1/g'(s)$ we obtain using the chain rule and Hamilton's equations:
\begin{equation}
\lambda^2\Phi'' - \lambda H_{px}\Phi' + H_{pp}H_x + \lambda\lambda'\Phi' = 0\,.
\end{equation}
In order to find the minimizing path using gMAM, one introduces an artificial relaxation time $\tau$ to solve the equation
\begin{equation} \label{gMAMgeneral}
\dot \Phi \equiv \frac{\partial\Phi}{\partial \tau} = \lambda^2\Phi'' - \lambda H_{px}\Phi' + H_{pp}H_x + \lambda\lambda'\Phi' + \mu \Phi'\,.
\end{equation}
The last term is used to enforce the constraint given by the parametrization with respect to arc length.
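To make the role of the arc-length constraint concrete, the following sketch (our own illustration, not part of the original method description; the grid size and the use of linear interpolation are arbitrary choices) shows the reparametrization step that redistributes the discrete path points uniformly in arc length, as used in string-method and gMAM implementations.

```python
import numpy as np

def reparametrize(phi):
    """Redistribute the discrete path phi (n_points x dim) so that the
    points are uniformly spaced in arc length along the polygonal path."""
    seg = np.linalg.norm(np.diff(phi, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))
    s /= s[-1]                                   # normalized arc length in [0, 1]
    s_new = np.linspace(0.0, 1.0, phi.shape[0])  # uniform grid
    return np.column_stack([np.interp(s_new, s, phi[:, k])
                            for k in range(phi.shape[1])])

# toy path in the plane with strongly non-uniform point spacing
t = np.linspace(0.0, 1.0, 101) ** 3
path = np.column_stack([np.cos(np.pi * t), np.sin(np.pi * t)])
seg = np.linalg.norm(np.diff(reparametrize(path), axis=0), axis=1)
print(seg.max() / seg.min())  # close to 1: points are (nearly) equidistant
```

In an actual gMAM iteration one alternates a relaxation step for equation (\ref{gMAMgeneral}) with such a redistribution, which plays the role of the constraint enforced by the term $\mu\Phi'$.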
\section{Schr{\"o}dinger systems and gMAM}
In this section we show how to implement the geometric minimum action method in the case of dissipative
nonlinear Schr{\"o}dinger systems.
Consider the cubic nonlinear Schr{\"o}dinger equation for the complex field $A=A(z,t)$ given by
\begin{equation}
iA_z + dA_{tt} + c|A|^2A = -\alpha iA + i\kappa A_{tt} + \eta(z,t)\,.
\end{equation}
For a stochastic partial differential equation with a nonlinear drift operator ${\mathcal{B}}$ of the form (here $z$ is the evolution variable and $t$ is the transverse variable, ${\mathcal{W}}$ is a Brownian sheet)
\begin{equation}
d\varphi = {\mathcal{B}}(\varphi)\,dz + \sqrt{\epsilon}\,d{\mathcal{W}}
\end{equation}
the equation (\ref{gMAMgeneral}) is written as \cite{heymann-vanden-eijnden:2008}
\begin{equation} \label{gMAM_nlse}
\dot \Phi = \lambda^2\Phi'' - \lambda\left(\partial{\mathcal{B}} - (\partial{\mathcal{B}})^+\right)\Phi' - (\partial{\mathcal{B}})^+{\mathcal{B}} + \lambda'\lambda\Phi' + \mu \Phi'\,.
\end{equation}
In order to apply gMAM to the cubic nonlinear Schr{\"o}dinger equation, we need to compute the different terms in the equation above.
This can be done in the following way: Splitting the complex envelope $A=u(z,t)+iv(z,t)$ into real and imaginary part, we can write the nonlinear operator ${\mathcal{B}}$ as
\begin{equation}
{\mathcal{B}}(\Phi) = \left(\begin{array}{c} {\mathcal{B}}_1(u,v) \\ {\mathcal{B}}_2(u,v) \end{array}\right)
= \left(\begin{array}{c} b_1(u,v) \\ b_2(u,v) \end{array}\right) + \kappa \left(\begin{array}{c} u_{tt} \\ v_{tt} \end{array}\right)
+ d \left(\begin{array}{c} -v_{tt} \\ u_{tt} \end{array}\right)\,.
\end{equation}
For the NLSE in the above form, we obtain
\begin{equation}
b_1(u,v) = -\alpha u - c(u^2+v^2)v, \qquad b_2(u,v) = -\alpha v + c(u^2+v^2)u \,.
\end{equation}
The linearization $\partial{\mathcal{B}}$ of the nonlinear operator ${\mathcal{B}}$ is given by
\begin{equation} \label{split_B}
\partial{\mathcal{B}} = \nabla b + \kappa {\mathcal{L}}_D + d{\mathcal{L}}_H
\end{equation}
where the operators $ {\mathcal{L}}_D $ and ${\mathcal{L}}_H$ are defined by
\begin{equation}
{\mathcal{L}}_D = \left(\begin{array}{cc} \partial_t^2 & 0 \\ 0 & \partial_t^2 \end{array}\right), \qquad
{\mathcal{L}}_H = \left(\begin{array}{cc} 0 & -\partial_t^2 \\ \partial_t^2 & 0 \end{array}\right)\,.
\end{equation}
With these definitions, we can write for the difference $\partial{\mathcal{B}} - (\partial{\mathcal{B}})^+$
\begin{equation}
\partial{\mathcal{B}} - (\partial{\mathcal{B}})^+ = \nabla b - (\nabla b)^T + 2d {\mathcal{L}}_H
\end{equation}
and for the term $(\partial{\mathcal{B}})^+{\mathcal{B}}$ we find
\begin{equation}
(\partial{\mathcal{B}})^+{\mathcal{B}} = (\nabla b)^T{\mathcal{B}} + \left(\kappa {\mathcal{L}}_D - d {\mathcal{L}}_H\right) b + (\kappa^2+d^2)\partial_t^4 \Phi\,.
\end{equation}
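The fourth-order term in the last identity follows from $\mathcal{L}_D^2 = \partial_t^4$, $\mathcal{L}_H^2 = -\partial_t^4$, and the fact that the two operators commute. Since all operators involved have constant coefficients, this piece of the algebra can be checked symbolically by replacing $\partial_t^2$ with a commuting scalar $s$ (a sanity check we add here, not part of the original text):

```python
import sympy as sp

kappa, d, s = sp.symbols('kappa d s')  # s stands in for the operator d^2/dt^2
I2 = sp.eye(2)
J = sp.Matrix([[0, -1], [1, 0]])       # matrix part of L_H; note J*J == -I2
LD, LH = s * I2, s * J
lhs = (kappa * LD - d * LH) * (kappa * LD + d * LH)
rhs = (kappa**2 + d**2) * s**2 * I2    # corresponds to (kappa^2 + d^2) d^4/dt^4
print((lhs - rhs).expand())  # zero matrix
```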
In the numerical implementation, it is essential to treat the last term involving fourth-order derivatives, $(\kappa^2+d^2)\partial_t^4 \Phi$, carefully in order
to avoid instabilities. This is the reason why we split up the second-order derivative terms in the representation (\ref{split_B}) of ${\mathcal{B}}$. Summarizing
the results, for the components $\Phi = (u,v)^T$ we obtain the two equations
\begin{eqnarray}
\dot u &=& \lambda^2u'' - \lambda\left((b_{1v}-b_{2u})v' -2dv_{tt}'\right) \nonumber \\ && - \left(b_{1u}\mathcal{B}_1 + b_{2u} \mathcal{B}_2 + \kappa b_{1tt} + db_{2tt} + (\kappa^2+d^2)u_{tttt}\right) + \lambda\lambda'u' + \mu u' \,, \\
\dot v &=& \lambda^2v'' - \lambda\left((b_{2u}-b_{1v})u' +2du_{tt}'\right) \nonumber \\ && - \left(b_{1v} \mathcal{B}_1 + b_{2v} \mathcal{B}_2 + \kappa b_{2tt} - db_{1tt} + (\kappa^2+d^2)v_{tttt}\right) + \lambda\lambda'v' + \mu v' \,.
\end{eqnarray}
\begin{figure}[htb!]
\centering
\hfill
\includegraphics[]{LingmamUc1nn.png}
\hfill
\includegraphics[]{LingmamVc1n.png}
\hfill
\caption{Evolution of the instanton of the linear dissipative Schr{\"o}dinger equation with respect to arc length parametrization. The real part of the instanton is significantly larger than its imaginary part, which has to be zero at both boundaries.}
\label{fig:LingMAM}
\end{figure}
\section{Fourier domain solution of the linear case}
In order to test the numerical implementation of gMAM for Schr{\"o}dinger systems, we can consider the particular case of a linear system. The instanton equations
for the minimizer of the Freidlin-Wentzell action functional are, in this case, given by
\begin{equation} \label{lin_schroedinger}
A_z = -\alpha A + \kappa A_{tt} + idA_{tt} + P\,, \qquad P_z = \alpha P - \kappa P_{tt} + idP_{tt}\,.
\end{equation}
Note that the equation of the optimal noise field $P$ does not contain any term involving the field $A$. In a typical gMAM setting, however, we consider
transitions from a state $A_1$ of the field $A$ to another state $A_2$, leading to boundary conditions for the field $A$. The minimizer has to satisfy these
boundary conditions which will impose boundary conditions on the field $P$. These boundary conditions are only known {\em after} the computation of the
minimizer $A$. The situation is different if we prescribe an initial condition for the field $A$ and a final condition for the field $P$ \cite{chernykh-stepanov:2001,grafke-grauer-schaefer-etal:2014}.
Then, in the linear case above, the equation for $P$ can be solved
independently of the equation for $A$ and the solution can be used to solve the equation for $A$. As a test case, let us choose the following boundary conditions
for $P$:
\begin{equation}
\lim_{z \to -\infty} P(z,t) = 0, \qquad P(0,t) = F\delta(t)\,.
\end{equation}
%
Note that it can be shown from the variational principle that this final condition for the noise field $P$ corresponds to a final condition $A(0,t) = a$ for the field
$A$ with $F$ acting as Lagrange multiplier to enforce this constraint.
Using Fourier transform, we can immediately solve (\ref{lin_schroedinger}) and obtain the solution for the Fourier transform $\hat A$ of the field $A$:
\begin{equation}
\hat A(z,\omega) = \frac{F}{2(\alpha + \kappa \omega^2)}\,{\mathrm{e}}^{(\alpha+\kappa\omega^2 - id\omega^2)z}
\end{equation}
At $z=0$, it is simple to carry out the inverse transform analytically and to obtain an explicit relationship between the amplitude $a$ and the Lagrange multiplier $F$:
\begin{equation}
A(0,t) = \frac{F}{4\sqrt{\alpha\kappa}}\,{\mathrm{e}}^{-\sqrt{\alpha/\kappa}|t|}, \qquad F = 4\sqrt{\alpha\kappa}\,a\,.
\end{equation}
In this way, we can use this linear case as a test case for gMAM since both the initial state $\lim_{z\to -\infty} A(z,t)=0$ and the final state $A(0,t)$ are known.
Fig$.$ \ref{fig:LingMAM} shows the evolution of the instanton (real and imaginary part) with respect to arc length. Note that, if we set the dispersion $d=0$, we would
obtain a solution with an imaginary part equal to zero. In the dispersive case, we observe slow oscillations in both the real and the imaginary part with respect to the transversal coordinate $t$
at the beginning of the instanton's evolution. In order to check the accuracy of the numerical code, we can compare the solution obtained by numerically solving (\ref{gMAM_nlse}) to
the analytical solution obtained in Fourier space. Fig$.$ \ref{fig:LinComp} shows the comparison of a slice at a fixed arc length $s$.
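The closed-form pair above can also be verified directly by evaluating the inverse Fourier integral numerically (a consistency check we include for completeness; the quadrature grid and frequency cutoff below are arbitrary choices):

```python
import numpy as np

alpha, kappa, F = 0.3, 0.2, 1.0

# A(0, t) as the inverse Fourier transform of F / (2 (alpha + kappa w^2))
w = np.linspace(-400.0, 400.0, 400001)
dw = w[1] - w[0]
t = np.linspace(-3.0, 3.0, 13)
Ahat = F / (2 * (alpha + kappa * w**2))
A_num = np.array([(Ahat * np.exp(1j * w * tt)).sum() * dw
                  for tt in t]).real / (2 * np.pi)

# closed-form result quoted in the text
A_exact = F / (4 * np.sqrt(alpha * kappa)) * np.exp(-np.sqrt(alpha / kappa) * np.abs(t))
print(np.max(np.abs(A_num - A_exact)))  # small; dominated by the frequency cutoff
```

Enlarging the $\omega$-window reduces the residual further.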
\begin{figure} [htb]
\centering
\setlength\figureheight{4cm}
\setlength\figurewidth{5cm}
\input{UvUanaLin0.tikz} \hfill \input{VvVanaLinTest0.tikz}
\caption{Comparison between a slice of the analytic instanton calculation for the linear Schr{\"o}dinger equation (red) against the gMAM result (blue) for arc length parameter $s = .78$. The left panel shows the real part $u$, the right panel the imaginary part $v$. The small error decreases with refinement of the computational grid.}
\label{fig:LinComp}
\end{figure}
\section{Minimizer of the cubic nonlinear Schr{\"o}dinger equation}
In the following we are looking at the nonlinear case, where we set for simplicity the dispersion coefficient $d=1/2$ and the nonlinear coefficient $c=1$.
It is well-known that, for the lossless case without stochastic perturbations, the cubic nonlinear Schr{\"o}dinger equation is integrable and possesses soliton
solutions \cite{newell-moloney:1992,hasegawa-kodama:1995}. As a simple example, we look at the stochastic transition of a 'flat soliton' to a 'sharply peaked' soliton, hence we choose
as boundary conditions $A_k (t) = \Lambda_k/\cosh(\Lambda_k t)$ with $\Lambda_1 < 1 < \Lambda_2$. While we do not have an analytical solution to the
instanton equations in this case, we can still use them to verify the accuracy of the gMAM algorithm. The minimizer $(A,P)$ has to satisfy the pair of coupled Euler-Lagrange equations
\begin{eqnarray} \label{instanton_eqs_nlse}
A_z &=& - \alpha A + \kappa A_{tt} + \frac{i}{2} A_{tt} + i|A|^2A + P\,, \\
P_z &=& \alpha P - \kappa P_{tt} + \frac{i}{2} P_{tt} + 2i|A|^2P - iA^2P^*\,.
\end{eqnarray}
From the numerical solutions given by gMAM, we can take the initial condition $A_1$ and propagate this initial condition using the evolution equation of $A_z$ (forward in $z$-direction).
In a similar fashion we can take the final condition $P_2$ and propagate this final condition using the evolution equation of $P_z$ (backward in $z$-direction). Note that, when solving these equations,
it is appropriate to scale $z$ as well according to arc length, and the corresponding transformation is also provided by the gMAM algorithm in terms of the parameter $\lambda(s)$.
Fig$.$ \ref{fig:solcomp} shows contour plots for the instanton of the transition of a soliton with $\Lambda_1 = 0.5$ to $\Lambda_2=2.5$. The dissipative coefficients $\alpha$ and $\kappa$ are
both set to 0.1 in this simulation. The left panel shows the gMAM results and the right panel the solutions
of the system of equations (\ref{instanton_eqs_nlse}) with the initial condition $A_1$.
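The forward propagation of the $A$ equation in (\ref{instanton_eqs_nlse}) can be sketched as follows (our own minimal illustration, using spectral derivatives and a forward Euler step; the grid, the step size, and the choice $P=0$ are ours). With $P=0$, the loss terms strictly dissipate the $L^2$ norm while dispersion and the cubic term conserve it, which gives a simple check:

```python
import numpy as np

alpha, kappa, d, c = 0.1, 0.1, 0.5, 1.0   # parameters used in this section

def rhs_A(A, k2, P=0.0):
    """RHS of the A equation: A_z = -alpha A + (kappa + i d) A_tt + i c |A|^2 A + P,
    with A_tt computed spectrally."""
    A_tt = np.fft.ifft(-k2 * np.fft.fft(A))
    return -alpha * A + (kappa + 1j * d) * A_tt + 1j * c * np.abs(A)**2 * A + P

n, L = 256, 40.0
t = np.linspace(-L / 2, L / 2, n, endpoint=False)
k2 = (2 * np.pi * np.fft.fftfreq(n, d=L / n))**2

A0 = 0.5 / np.cosh(0.5 * t) + 0j          # flat soliton, Lambda_1 = 0.5
A, dz = A0.copy(), 1e-4
for _ in range(100):
    A = A + dz * rhs_A(A, k2)             # forward Euler step in z (P = 0 here)

print(np.linalg.norm(A) < np.linalg.norm(A0))  # True
```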
\begin{figure}[htb!]
\centering
\setlength\figureheight{5cm}
\setlength\figurewidth{6cm}
\input{display_amp.tikz}
\hfill
\input{display_acs.tikz}
\caption{Contour plot of the amplitude $|A|$ of the minimizer of the nonlinear Schr{\"o}dinger equation: Left panel shows the result of the gMAM computation. Right panel shows the solution of the instanton equation
of $A$.}
\label{fig:solcomp}
\end{figure}
\section{Comparison to a finite-dimensional model}
Over the last decades, in particular in the context of the cubic nonlinear Schr{\"o}dinger equation, much work has been devoted to the study of finite-dimensional models as
approximations to infinite-dimensional models.
In such a setting, the original stochastic partial differential equation is approximated by a low-dimensional random dynamical system \cite{abdullaev-bronski-etal:2000,schaefer-moore-etal:2002}. In many instances, however, the question whether results obtained
from the analysis of the reduced model are still valid for the full system given by the partial differential equation has not been studied thoroughly \cite{Moore_Schafer_Jones_2005}. In the following, we work with
a finite-dimensional reduction from a recent paper by R. Moore \cite{moore:2014}, adapted for our case: We parametrize the soliton's evolution by four parameters $(R,\Omega,T,\phi)$ that are
all functions of the evolution variable $z$. The corresponding pulse $A$ can be constructed from these parameters via
\begin{equation} \label{A_param}
A(z,t) = \frac{R(z)}{\cosh\left(R(z)(t-T(z))\right)}\,{\mathrm{e}}^{i\phi(z)+it\Omega(z)}\,.
\end{equation}
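As a consistency check of the ansatz (\ref{A_param}) (added by us, not part of the original text), one can verify symbolically that in the lossless case $\alpha=\kappa=0$, with $\Omega=T=0$, constant $R=\Lambda$ and $\phi = \Lambda^2 z/2$, the parametrization reduces to the exact soliton of $iA_z + \frac{1}{2}A_{tt} + |A|^2A = 0$:

```python
import sympy as sp

z, t = sp.symbols('z t', real=True)
Lam = sp.symbols('Lambda', positive=True)
A = Lam / sp.cosh(Lam * t) * sp.exp(sp.I * Lam**2 * z / 2)
absA2 = Lam**2 / sp.cosh(Lam * t)**2          # |A|^2 written out explicitly
residual = sp.I * sp.diff(A, z) + sp.diff(A, t, 2) / 2 + absA2 * A
print(sp.simplify(residual.rewrite(sp.exp)))  # 0
```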
The stochastic dynamical system of the parameters is given by
\begin{equation}
\left(\begin{array}{c} dR \\ d\Omega \\ dT \\ d\phi \end{array}\right) =
\left(\begin{array}{c} -2\alpha R - \frac{2}{3}\kappa R^3 - 2\kappa R\Omega^2 \\
-\frac{4}{3}\kappa R^2\Omega \\ \Omega \\
\frac{1}{2} R^2 - \frac{1}{2} \Omega^2 + \frac{4}{3}\kappa R^2 \Omega T \end{array}\right) dz
+ \sqrt{\epsilon} \Sigma \left(\begin{array}{c} dW_1 \\ dW_2 \\ dW_3 \\ dW_4 \end{array} \right)
\end{equation}
together with the correlation matrix $\Sigma\Sigma^T$ which reads
\begin{equation}
\Sigma\Sigma^T = 2\left(\begin{array}{cccc} R & 0 &-T & 0 \\ 0 & \frac{R}{3} & 0 &-\frac{RT}{3} \\ -T & 0 & \frac{2T^2}{R} + \frac{\pi^2}{12R^3} & 0 \\
0 & -\frac{RT}{3} & 0 & \frac{RT^2}3 + \frac{\pi^2+12}{36R}
\end{array}\right) \,.
\end{equation}
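For reference, the reduced system can be integrated directly with an Euler--Maruyama scheme. The sketch below is our own illustration: the step size and seed are arbitrary, and we realize $\Sigma$ as the Cholesky factor of the stated $\Sigma\Sigma^T$ (any square root works; the matrix is positive definite for $R>0$).

```python
import numpy as np

alpha, kappa, eps = 0.1, 0.1, 0.01

def drift(y):
    R, Om, T, phi = y
    return np.array([
        -2 * alpha * R - (2 / 3) * kappa * R**3 - 2 * kappa * R * Om**2,
        -(4 / 3) * kappa * R**2 * Om,
        Om,
        0.5 * R**2 - 0.5 * Om**2 + (4 / 3) * kappa * R**2 * Om * T,
    ])

def sigma(y):
    R, Om, T, phi = y
    C = 2 * np.array([
        [R, 0, -T, 0],
        [0, R / 3, 0, -R * T / 3],
        [-T, 0, 2 * T**2 / R + np.pi**2 / (12 * R**3), 0],
        [0, -R * T / 3, 0, R * T**2 / 3 + (np.pi**2 + 12) / (36 * R)],
    ])
    return np.linalg.cholesky(C)   # C = Sigma Sigma^T is positive definite for R > 0

rng = np.random.default_rng(0)
y = np.array([2.5, 0.0, 0.0, 0.0])   # start at the peaked soliton, Lambda_2 = 2.5
dz = 1e-3
for _ in range(2000):
    dW = rng.normal(scale=np.sqrt(dz), size=4)
    y = y + drift(y) * dz + np.sqrt(eps) * sigma(y) @ dW
print(0.0 < y[0] < 2.5)  # True: the amplitude R relaxes under the losses
```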
In this four-dimensional reduction, the soliton transition discussed earlier corresponds to a transition of the states $(\Lambda_1,0,0,0)$ to $(\Lambda_2,0,0,0)$,
hence for the boundary conditions we can choose $R_1=\Lambda_1$ and $R_2=\Lambda_2$. In order to compare the results of this approach to the PDE minimizer,
we first use gMAM to compute the minimizing path $(R(s),\Omega(s),T(s),\phi(s))$ and then we construct the approximation of the PDE minimizer via (\ref{A_param}).
Note that it is convenient to parametrize the approximate PDE minimizer with respect to arc length such that it can be compared to the minimizer from the PDE
model. Fig$.$ \ref{fig:mincomp} shows as an example the comparison of the imaginary part of the PDE minimizer to the approximation obtained by the stochastic ODEs.
At the beginning of the evolution, there are again slow oscillations (similar to the linear case) which cannot be properly captured by the finite-dimensional
model. For larger $s$, the agreement of the two solutions is remarkable.
\begin{figure}[htb!]
\centering
\setlength\figureheight{5cm}
\setlength\figurewidth{6cm}
\input{display_v.tikz}
\hfill
\input{display_vod.tikz}
\caption{Contour plot of the imaginary part $v$ of the minimizer of the nonlinear Schr{\"o}dinger equation (left panel) and the approximation using the finite-dimensional set
of stochastic equations (right panel).}
\label{fig:mincomp}
\end{figure}
While a detailed comparison between minimizers of the finite-dimensional reduction and the full model is beyond the scope of the present work, we remark that for
the example studied here (in particular for the chosen parameters), the major contribution stems from the evolution equation of the amplitude $R(z)$. As a first step,
we can simply extract the maximum value of the pulse profile $|A|$ for each value of the arc length $s$. The corresponding graph is shown in the left panel of Fig$.$ \ref{fig:plot_a}.
Again, we observe a good agreement between the ODE prediction and the PDE minimizer. Note that, initially, the amplitude dips to a fairly low value
(this can already be seen in the contour plots shown previously in Fig$.$ \ref{fig:solcomp}). The ODE model captures this dip fairly well.
\begin{figure}[htb!]
\centering
\setlength\figureheight{4cm}
\setlength\figurewidth{5.85cm}
\input{flat2peak.tikz}
\hfill
\input{zero2peak.tikz}
\caption{Left panel: plot of the amplitude $R(s)$ of the ODE minimizer (red) in comparison to the amplitude of the PDE minimizer (blue) for the transition of a soliton with $\Lambda_1 = 0.5$ to
$\Lambda_2 = 2.5$. Right panel: same comparison but for the exit from zero to the soliton with an amplitude $\Lambda_2 = 2.5$.}
\label{fig:plot_a}
\end{figure}
A further important example is the transition from the zero state to a soliton. Note that this transition cannot be captured precisely in the ODE model, as the equations become singular in the limit $R\to 0$.
However, we can compute the ODE minimizer starting from a fairly small amplitude $\Lambda_1 = \delta \ll 1$. The result is presented on the right panel of fig. \ref{fig:plot_a}. In this example,
we chose $\delta = 0.001$. Again, the amplitude of the ODE minimizer and the amplitude of the PDE minimizer show good agreement. While these agreements are encouraging and clearly support the validity of the finite-dimensional model for capturing the transition of solitary waves of different amplitudes, we also noticed differences between the PDE minimizer and the ODE minimizer: When looking at the pulse shape of the PDE minimizer, as seen in Fig$.$ \ref{fig:pPhase}, we find the presence of a parabolic phase (often called ``chirp'' in optics) which is not captured by the ODE model above. A thorough analysis of the potential impact of the chirp, including the possibility to extend the low-dimensional model to include a parabolic phase, is beyond the present paper and the subject of future research.
\begin{figure} [htb!]
\centering
\setlength\figureheight{4cm}
\setlength\figurewidth{5.3cm}
\input{pPhaseWithAmp2.tikz} \hfill \input{pPhase3WithAmp05.tikz}
\caption{Example plots of phase vs time (in blue) plotted on the same axes with a magnified plot of the amplitude vs time (in red) for $s=.195$ (left) and $s = .586$ (right). We can see that the phase shows parabolic behavior within the time frame where the amplitude is large, which could justify using a parabolic phase model to characterize this system.} \label{fig:pPhase}
\end{figure}
\section{Conclusion}
This paper shows how to adapt and implement the geometric minimum action method (gMAM) for the case of a stochastic cubic nonlinear Schr{\"o}dinger equation.
The resulting implementation was tested using an analytical solution for the linear case and an independent solution based on direct integration of the instanton equation for the nonlinear case. We applied this method to the computation of the minimizer for the transition of solitons with small or zero amplitude to peaked solitons with larger amplitude. Finally, the PDE minimizer was compared to the minimizer of a low-dimensional approximation.
\section*{Acknowledgments}
The authors thank R. Moore, E. Vanden-Eijnden, T. Grafke, R. Grauer,
and S. Turitsyn for many valuable discussions.
This work was supported, in part, by the NSF grants DMS-1108780
and DMS-1522737.
\section*{References}
\section{Introduction}
A split-state non-malleable code~\cite{NM} for single-bit messages consists of randomized encoding and decoding algorithms $(\enc, \dec)$. A message $m \in \{0,1\}$ is encoded as a pair of strings $(L,R) \in
\{0,1\}^k \times \{0,1\}^k$, such that $\dec(L,R) = m$. An adversary then specifies an arbitrary pair of functions $g,h: \{0,1\}^k \rightarrow \{0,1\}^k$. The code is said to be non-malleable if, intuitively, the message obtained as $\dec(g(L), h(R))$ is ``unrelated'' to the original message $m$. In particular, to be $\varepsilon$-non-malleable, it is enough~\cite{First} to guarantee that when the message $m$ is chosen uniformly at random and encoded into $(L,R)$, the probability that $\dec(g(L), h(R)) = 1-m$ is at most $\frac12 + \varepsilon$. Since their introduction in 2010~\cite{NM}, split-state non-malleable codes have been the subject of intense study within theoretical computer science~\cite{NM,First,Aggarwal,ChaZuck,TamperedExtensions,Li}.
\vspace{1ex}
In this work, we show that expander graphs immediately give rise to split-state non-malleable codes for single-bit messages.
Specifically, we show that any $d$-regular graph on $n=2^k$ nodes with spectral expansion $\lambda$ satisfying $n = \Omega(d^3\log(d)/\lambda)$ yields a $O\left(\frac{\lambda^{3/2}}{d}\right)$-non-malleable code for single-bit messages in the split-state model. Our proof is elementary, requiring a little more than two (fullsize) pages to prove,
having at its heart two nested applications of the Expander Mixing Lemma.
Furthermore, we only need expanders of high degree (e.g.,~$d = n^\varepsilon$), which can be constructed and analyzed easily (see, e.g.,~\cite{Luca-Blog} or the appendix), yielding $2^{-\Omega(k)}$-non-malleable codes.
\paragraph{Comparison with Previous Work.}
Until our work, all known proofs of security for explicit split-state non-malleable codes have required complex mathematical proofs, and all known such proofs either directly or indirectly used the mathematics behind constructions of two-source extractors~\cite{First,Aggarwal,ChaZuck,TamperedExtensions,Li}. In fact, after constructing the first non-malleable code in the split-state model Dziembowski, Kazana, and Obremski wrote: ``This brings a natural question if we could show some relationship between the extractors and the non-malleable codes in the split-state model. Unfortunately, there is no obvious way of formalizing the conjecture that non-malleable codes need to be based on extractors''~\cite{First}. We thus simultaneously find the first simple, elementary solution to the problem of designing single-bit non-malleable codes (our proof being approximately one-third the length of the proof of security of the single-bit non-malleable code of~\cite{First}) and answer in the negative the implicit conjecture of~\cite{First}; it is not necessary to base constructions of non-malleable codes on the theory of extractors.
Our construction of non-malleable codes from expander graphs thus opens up a new line of attack in the study of split-state non-malleable codes. It is important to keep in mind that current constructions of non-malleable codes supporting messages of arbitrary length use many ideas pioneered in the construction of~\cite{First}, in particular the use of extractors. While we do not yet know how to generalize our results beyond single-bit messages, we speculate that further investigation building upon our work will reveal a deeper connection and more powerful simple constructions based on expanders.
It should be noted that two-source extractors are well-known to exhibit expansion properties; however, in all previous proofs, much more than mere expansion was used to argue non-malleability. Indeed previous proofs apply extractors repeatedly; for instance the proof of~\cite{First} uses the extractor property several times (e.g., in equation (22) and using equation (43) in~\cite{First}). Previous proofs also highlight the nontriviality and care that is required in applying extractors correctly to yield a valid proof of non-malleability (e.g., the paragraph beginning with ``There are two problems with the above argument.'' found below equation (36) of~\cite{First}). With respect to the expansion properties of two-source extractors, it is not surprising that 1-bit non-malleable codes will have some sort of expansion properties. Our contribution is the converse: that good expansion is \emph{sufficient} for the construction of non-malleable codes.
\section{Preliminaries}
We shall assume familiarity with the basics of codes and non-malleable codes. A cursory
review of relevant definitions
can be found in the appendix.
\begin{notation}[Graphs]
A graph $G=(V, E)$ consists of vertices $V$ and edges $E\subset V\times V$. In this exposition every graph is undirected and $n=\abs V$ always denotes the number of vertices of the graph in question.
\begin{itemize}
\item For any $v\in V$ we denote by $N(v)$ the set of neighbors of $v$ in $G$.
\item For any two subsets $S, T\subseteq V$ we denote by $E(S, T)$ the set of (directed) edges from $S$ to $T$ in $G$. I.e.\
$E(S, T) = \{(v, u)\in S\times T\mid (v, u)\in E \}$.
\end{itemize}
\end{notation}
\begin{definition}[Spectral Expander]
Let $G=(V, E)$ be a $d$-regular graph, $A_G$ be its adjacency matrix, and $\lambda_1\geq \dots \geq \lambda_n$ be the eigenvalues of $A_G$. We say that $G$ is a $\lambda$ spectral expander if $\lambda \geq \max\{\abs{\lambda_2}, \dots, \abs{\lambda_n}\}$.
\end{definition}
\begin{theorem}[Expander Mixing Lemma]\label{mixing}
Suppose that $G=(V, E)$ is a $\lambda$ spectral expander. Then for every pair of subsets $S, T\subset V$ we have
\begin{align*}
\abs{\abs{E(S, T)}-\frac{d\cdot \abs S\cdot \abs T}{n}}\leq \lambda \sqrt{\abs S\cdot \abs T}.
\end{align*}
\end{theorem}
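As a concrete illustration of the lemma (added here, not part of the original text): the complete graph $K_n$ is $(n-1)$-regular with spectral expansion $\lambda = 1$, and the bound can be checked numerically on random vertex subsets.

```python
import numpy as np

# The complete graph K_n is (n-1)-regular with spectral expansion lambda = 1
n = 64
A = np.ones((n, n)) - np.eye(n)
d = n - 1
lam = np.sort(np.abs(np.linalg.eigvalsh(A)))[-2]   # = 1 for K_n

rng = np.random.default_rng(1)
ok = True
for _ in range(100):
    S = rng.random(n) < 0.3                        # random vertex subsets
    T = rng.random(n) < 0.5
    e_ST = A[np.ix_(S, T)].sum()                   # |E(S, T)|
    ok &= abs(e_ST - d * S.sum() * T.sum() / n) <= lam * np.sqrt(S.sum() * T.sum()) + 1e-9
print(ok)  # True: the mixing bound holds on all sampled subsets
```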
Our results will rely on the following characterization of 1-bit non-malleable codes by Dziembowski, Kazana, and Obremski found in \cite{First}.
\begin{theorem}\label{nflip_equal_nm}
Let $(\enc, \dec)$ be a coding scheme with $\enc\colon \{0, 1\}\to \mathcal X$ and $\dec\colon \mathcal X\to\{0, 1\}$. Further, let $\mathcal F$ be a set of functions $f\colon \mathcal X\to \mathcal X$. Then $(\enc, \dec)$ is $\varepsilon$-non-malleable with respect to $\mathcal F$ if and only if for every $f\in \mathcal F$,
\begin{align*}
\Pr_{b\xleftarrow u \{0,1\}}(\dec(f(\enc(b))) = 1-b) \leq \frac{1}{2}+\varepsilon,
\end{align*}
where the probability is over the uniform choice of $b$ and the randomness of $\enc$.
\end{theorem}
\section{Results}
We first formally introduce our candidate code and then prove that it is a non-malleable code.
\subsection{Candidate Code}
From a graph we can very naturally construct a coding scheme as follows.
\begin{definition}[Graph Code]
Let $G=(V, E)$ be a graph. The associated \emph{graph code}, $(\enc_G, \dec_G)$, consists of the functions
\begin{align*}
\enc_G&\colon \{0, 1\}\to V\times V,&
\dec_G&\colon V\times V\to \{0, 1\}
\end{align*}
which are randomized and deterministic, respectively, and given by
\begin{align*}
\enc_G(b) &=
\begin{cases}
(u, v) \xleftarrow u (V\times V)\setminus E, & b=0,\\
(u, v) \xleftarrow u E, & b=1,
\end{cases}\\
\dec_G(v_1, v_2) &=
\begin{cases}
0, & (v_1, v_2)\not\in E,\\
1, & (v_1, v_2)\in E.
\end{cases}
\end{align*}
\end{definition}
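A direct implementation of this encoding and decoding for a small example graph (our illustration; the 4-cycle below is an arbitrary choice, with each undirected edge listed in both orientations, matching the ordered-pair notation above) looks as follows:

```python
import random

def make_graph_code(vertices, edges):
    """Graph code of a single bit: b = 1 is encoded as a uniformly random
    (directed) edge, b = 0 as a uniformly random non-edge."""
    E = sorted(edges)
    E_set = set(E)
    non_edges = [(u, v) for u in vertices for v in vertices
                 if (u, v) not in E_set]
    enc = lambda b: random.choice(E) if b == 1 else random.choice(non_edges)
    dec = lambda pair: 1 if pair in E_set else 0
    return enc, dec

# 4-cycle 0-1-2-3, each undirected edge in both orientations
V = range(4)
E = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (3, 0), (0, 3)]
enc, dec = make_graph_code(V, E)
print(all(dec(enc(b)) == b for b in (0, 1) for _ in range(200)))  # True
```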
\subsection{Non-Malleability of Expander Graph Codes}
Arriving at the core of the matter, we first establish the following lemma, which casts the expression of Theorem \ref{nflip_equal_nm} in terms of graph properties.
\begin{proposition}\label{ProbRepresentation}
Let $G=(V, E)$ be a $d$-regular graph, let functions $g, h\colon V\to V$ be given, and let $f=(g, h)\colon V\times V\to V \times V$ satisfy $f(u, v)=(g(u), h(v))$. For the probability that $f$ flips a random bit encoded by $\enc_G$, write
$$T = \Pr_{b\xleftarrow u\{0, 1\}}(\dec_G(f(\enc_G(b))) = 1-b)$$
where the probability is taken over the randomness of $\enc_G$ and the sampling of $b$.
Then
\begin{align*}
T &= \frac{1}{2}+\frac{1}{2d(n-d)} \sum_{(v, u)\in E}\left(\frac{d\abs{g^{-1}(v)}\cdot \abs{h^{-1}(u)}}{n}-\abs{E(g^{-1}(v), h^{-1}(u))}\right).
\end{align*}
\end{proposition}
\begin{proof}
For $b\in \{0, 1\}$ denote by $Q_b$ the probability
$$Q_b = \Pr(\dec_G(f(\enc_G(b))) = 1-b)$$
taken over the randomness of $\enc_G$. It is clear that $T=\frac{Q_0+Q_1}{2}$ and that by definition
\begin{align*}
Q_0 &= \Pr_{(v, u)\xleftarrow uV\times V\setminus E}\left[(g(v), h(u))\in E\right], &
Q_1 &= \Pr_{(v, u)\xleftarrow u E}\left[(g(v), h(u))\not\in E\right].
\end{align*}
First, for $b=0$ we see that the number of non-edges that are mapped by $f$ to any given $(v, u)\in E$ is given by $\abs{g^{-1}(v)}\cdot \abs{h^{-1}(u)}-\abs{E(g^{-1}(v), h^{-1}(u))}$. There are $n(n-d)$ non-edges in $G$ so it follows that
\begin{align*}
Q_0 = \frac{\sum_{(v, u)\in E}\abs{g^{-1}(v)}\cdot \abs{h^{-1}(u)}-\abs{E(g^{-1}(v), h^{-1}(u))}}{n(n-d)}.
\end{align*}
Second, for $b=1$ the number of edges of $G$ that are mapped to non-edges by $f$ is given by $\sum_{(v, u)\not\in E}\abs{E(g^{-1}(v), h^{-1}(u))}$. Since there are $dn$ edges of $G$ to choose from when encoding the bit $b=1$,
\begin{align*}
Q_1 = \frac{\sum_{(v, u)\not\in E}\abs{E(g^{-1}(v), h^{-1}(u))}}{dn}.
\end{align*}
Now, observing that the number of (directed) edges in the graph is $dn$ and
that $\{g^{-1}(v)\}_{v\in V}$ and $\{h^{-1}(u)\}_{u\in V}$ are both partitions of $V$, we get
\begin{align*}
Q_1 &= \frac{dn-\sum_{(v, u)\in E}\abs{E(g^{-1}(v), h^{-1}(u))}}{dn} = 1-\frac{\sum_{(v, u)\in E}\abs{E(g^{-1}(v), h^{-1}(u))}}{dn}.
\end{align*}
\noindent
Putting it all together,
\begin{eqnarray*}
T &=& \frac{\sum_{(v, u)\in E}\abs{g^{-1}(v)}\cdot \abs{h^{-1}(u)}-\abs{E(g^{-1}(v), h^{-1}(u))}}{2n(n-d)} + \frac{1}{2}-\frac{\sum_{(v, u)\in E}\abs{E(g^{-1}(v), h^{-1}(u))}}{2dn}\\
&=& \frac{1}{2} + \frac{1}{2d(n-d)} \sum_{(v, u)\in E}\left(\frac{d\abs{g^{-1}(v)}\cdot \abs{h^{-1}(u)}}{n}-\abs{E(g^{-1}(v), h^{-1}(u))}\right).
\end{eqnarray*}
\end{proof}
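The identity just proven can be checked exhaustively on a small example, computing $T$ once directly from the definition of $(\enc_G, \dec_G)$ and once from the formula of the proposition (a numerical sanity check we add here; the 5-cycle and the tampering functions $g, h$ are arbitrary choices):

```python
# 5-cycle: n = 5, d = 2, edges as ordered pairs
n, d = 5, 2
V = range(n)
E = {(v, (v + 1) % n) for v in V} | {(v, (v - 1) % n) for v in V}

def T_direct(g, h):
    """Flip probability computed straight from the definition of the code."""
    non_edges = [(v, u) for v in V for u in V if (v, u) not in E]
    Q0 = sum((g[v], h[u]) in E for v, u in non_edges) / len(non_edges)
    Q1 = sum((g[v], h[u]) not in E for v, u in E) / len(E)
    return (Q0 + Q1) / 2

def T_formula(g, h):
    """Flip probability via the expression in the proposition."""
    ginv = {v: [w for w in V if g[w] == v] for v in V}
    hinv = {u: [w for w in V if h[w] == u] for u in V}
    s = sum(d * len(ginv[v]) * len(hinv[u]) / n
            - sum((a, b) in E for a in ginv[v] for b in hinv[u])
            for v, u in E)
    return 1 / 2 + s / (2 * d * (n - d))

g = {0: 3, 1: 3, 2: 0, 3: 1, 4: 4}   # two arbitrary tampering functions
h = {0: 2, 1: 2, 2: 2, 3: 4, 4: 0}
print(abs(T_direct(g, h) - T_formula(g, h)) < 1e-12)  # True
```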
\noindent
We proceed immediately with the main theorem, which concludes the exposition. In order to keep this presentation short and to the point, more elaborate calculations, which save a few $\log$-factors, have been placed in the appendix as Theorem \ref{Thm-Elaborate}.
\begin{theorem}\label{mainTheorem}
Let $G=(V, E)$ be $d$-regular with spectral expansion $\lambda$ satisfying $n = \Omega(d^3\log(d)^4/\lambda)$. Then $(\enc_G, \dec_G)$ is an $\tilde O\left(\frac{\lambda^{3/2}}{d}\right)$-non-malleable code in the split-state model.
\end{theorem}
\begin{proof}
Let $f=(g, h)\colon V\times V\to V\times V$ be given. By Theorem \ref{nflip_equal_nm} and Proposition \ref{ProbRepresentation} we just need to show that
\begin{align*}
R = \frac{1}{2d(n-d)}\cdot \sum_{(v, u)\in E}\left(\frac{d\abs{g^{-1}(v)}\cdot \abs{h^{-1}(u)}}{n}-\abs{E(g^{-1}(v), h^{-1}(u))}\right)
\end{align*}
is bounded by $\tilde O\left( \frac{\lambda^{3/2}}{d} \right)$. Define the sets
\begin{align*}
G_1 &= \left\{v\in V\mid \abs {g^{-1}(v)}> \frac{n}{d^2}\right\}, & H_1 &=\left\{u\in V\mid \abs {h^{-1}(u)}> \frac{n}{d^2}\right\},\\
G_2 &= \left\{v\in V\mid \abs {g^{-1}(v)}\leq \frac{n}{d^2} \right\}, & H_2 &=\left\{u\in V\mid \abs {h^{-1}(u)}\leq\frac{n}{d^2}\right\},
\end{align*}
for $i, j\in \{1, 2\}$ write
\begin{align*}
R_{i, j} = \frac{1}{2d(n-d)} \sum_{(v, u)\in E\cap (G_i\times H_j)}\left(\frac{d\abs{g^{-1}(v)}\cdot \abs{h^{-1}(u)}}{n}-\abs{E(g^{-1}(v), h^{-1}(u))}\right),
\end{align*}
and observe that $R = \sum_{1\leq i, j\leq 2}R_{i, j}$.
Consider the case when $i=2$. Simply bounding the terms of the form $\abs{g^{-1}(v)}\cdot \abs{h^{-1}(u)}$ by using that each vertex has only $d$ neighbours, we get
\begin{align*}
R_{2, 1}+R_{2, 2} &\leq \frac{1}{2n(n-d)} \sum_{(v, u)\in E\cap (G_2\times V)}\abs{g^{-1}(v)}\cdot \abs{h^{-1}(u)}\\
&\leq \frac{1}{2n(n-d)}\cdot d \cdot \sum_{u\in V}\frac{n}{d^2}\cdot \abs{h^{-1}(u)}= \frac{n}{2(n-d)d}.\\
\end{align*}
Thus,
$R_{2, 1}+R_{2, 2} = O\left(d^{-1}\right)$.
By symmetry, $R_{1, 2} = O\left(d^{-1}\right)$. It only remains to show that $R_{1,1}=\tilde O\left(\frac{\lambda^{3/2}}{d}\right)$.
To this end, partition $G_1$ and $H_1$, respectively, as
\begin{align*}
G_1^k &= \left\{ v\in G_1\mid \frac{n}{2^{k-1}}\geq \abs{g^{-1}(v)}>\frac{n}{2^{k}} \right\},&
H_1^l &= \left\{ u\in H_1\mid \frac{n}{2^{l-1}}\geq \abs{h^{-1}(u)}>\frac{n}{2^{l}} \right\}
\end{align*}
for $1\leq k, l\leq \ceil{\log_2\left(d^2\right)}$. Now, focusing on each pair $G_1^k$ and $H_1^l$, we write
\begin{align*}
S_{k, l} = \frac{1}{2d(n-d)} \sum_{(v, u)\in E\cap (G_1^k\times H_1^l)}\left(\frac{d\abs{g^{-1}(v)}\cdot \abs{h^{-1}(u)}}{n}-\abs{E(g^{-1}(v), h^{-1}(u))}\right)
\end{align*}
and apply first the mixing lemma and then the Cauchy-Schwarz inequality to get
\begin{eqnarray*}
2d(n-d)S_{k, l}
&=& \sum_{v\in G_1^k}\left( \frac{d\abs{g^{-1}(v)}\cdot\sum_{u\in N(v)\cap H_1^l}\abs{h^{-1}(u)}}{n}-\abs{E\left( g^{-1}(v), \bigcup_{u\in N(v)\cap H_1^l}h^{-1}(u) \right)} \right)\\
&\leq& \sum_{v\in G_1^k}\lambda\sqrt{\abs{g^{-1}(v)}\cdot\sum_{u\in N(v)\cap H_1^l}\abs{h^{-1}(u)}}\\
&\leq& \lambda \sqrt{\frac{n}{2^{k-1}}\cdot \frac{n}{2^{l-1}}}\cdot \sum_{v\in G_1^k}\sqrt{\abs{N(v)\cap H_1^l}}\\
&\leq& 2\lambda n\cdot 2^{-\frac{l+k}{2}}\cdot\sqrt{\abs{G_1^k}}\cdot \sqrt{\abs{E(G_1^k, H_1^l)}}.
\end{eqnarray*}
We use the fact that $\abs{G_1^k}\leq 2^k, \abs{H_1^l}\leq 2^l$, apply the mixing lemma to the last factor, and wield Jensen's inequality on the arising square root to obtain
\begin{align*}
d(n-d)S_{k, l}&\leq \lambda n\cdot 2^{-\frac{l+k}{2}}\cdot \sqrt{\abs{G_1^k}}\cdot \sqrt{\frac{d\cdot\abs{G_1^k}\cdot \abs{H_1^l}}n+\lambda\sqrt{\abs{G_1^k}\cdot \abs{H_1^l}}}\\
&\leq \lambda\sqrt{2^k dn}+2^{\frac{k-l}4}\lambda^{3/2}n \leq \lambda\cdot \sqrt{d^3n}+2^{\frac{k-l}4}\lambda^{3/2}n.
\end{align*}
By symmetry of $k$ and $l$, $d(n-d)S_{k, l} \leq \lambda\cdot \sqrt{d^3n}+2^{\frac{l-k}4}\lambda^{3/2}n$. Thus,
\begin{align*}
R_{1, 1}&=\sum_{1\leq k, l\leq \ceil{\log_2(d^2)}}S_{k, l}\\
&\leq O\left(\frac{\lambda\log(d)^2\cdot\sqrt d}{\sqrt{n}}\right) + O\left(\frac{\lambda^{3/2}}{d}\right)\cdot \sum_{1\leq k, l\leq \ceil{\log_2(d^2)}}2^{-\frac{\abs{k-l}}4}\\
&= O\left( \frac{\log(d)\lambda^{3/2}}{d} \right).
\end{align*}
\end{proof}
\section*{Acknowledgements}
A significant effort was made to simplify our proof as much as possible, which eventually resulted in the approximately 2-page proof of our main result presented here; we thank Anders Aamand and Jakob B\ae k Tejs Knudsen for suggestions and insights regarding the main theorem that helped simplify and improve the results presented. Furthermore, we thank Aayush Jain, Yuval Ishai, and Dakshita Khurana for early discussions regarding simple constructions of split-state non-malleable codes not based on expander graphs.
Research supported in part from a DARPA/ARL SAFEWARE award, NSF Frontier Award 1413955, and NSF grant 1619348, BSF grant 2012378, a Xerox Faculty Research Award, a Google Faculty Research Award, an equipment grant from Intel, and an Okawa Foundation Research Grant. This material is based upon work supported by the Defense Advanced Research Projects Agency through the ARL under Contract W911NF-15-C- 0205. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense, the National Science Foundation, or the U.S. Government.
Research supported in part by grant 16582, Basic Algorithms Research Copenhagen (BARC), from the VILLUM Foundation.
\bibliographystyle{alpha}
\section{Introduction}
Manufacturing process variations have led to significant performance uncertainties in submicron and nano-scale IC design~\cite{variation2008,VLSI2008}. Many results have been reported on variation-aware modeling for semiconductor devices~\cite{cicc2011,Boning00modelsof,Dopant}, interconnects~\cite{Tarek_DAC:08,Tarek_DAC:10,Tarek_ISQED:11,zzhang_iccad:2011,Shen2010,Wenjian:2009,Gong:2012,Hengliang:2007}, and for analog/RF and digital ICs~\cite{xli2010,TCAD2006}. However, few have focused on the uncertainty quantification aspect that analyzes the uncertainty propagation from the device level to the circuit level through SPICE simulation.
Monte Carlo (MC)~\cite{MCintro} has been the mainstream uncertainty quantification technique in commercial circuit simulators for decades~\cite{kundertbook:1995,Tuinenga:1995,Cadence,HSPICE}. Recently, Singhee {\it et al.} improved MC-based simulation and applied it to the yield analysis of analog/RF and digital ICs~\cite{SingheeR09,SingheeR10}. Despite its wide application, MC has a slow convergence rate proportional to $\frac{1}{\sqrt{N_s}}$ (where $N_s$ is the number of samples used in MC). Very often, one must run a huge number of SPICE simulations to achieve acceptable accuracy at a prohibitively high computational cost.
Stochastic spectral methods~\cite{sgscCompare,sfem,book:Dxiu,UQ:book,gPC2002,gPC2003,xiu2009} have emerged as a promising solution to uncertainty quantification problems, showing significant speedup over MC (especially when the parameter dimensionality is small or medium). Such methods represent the parameter-dependent solutions by some properly constructed basis functions, such as polynomial chaos (PC, also called Hermite polynomial chaos)~\cite{PC1938} or generalized polynomial chaos (gPC)~\cite{gPC2002}. Mainstream stochastic spectral solvers include the stochastic Galerkin (also called stochastic finite element method~\cite{sfem}) and stochastic collocation~\cite{col:2005,Ivo:2007,Nobile:2008,Nobile:2008_2} methods. Stochastic Galerkin is an intrusive (or ``non-sampling based") solver, since it directly computes the PC/gPC coefficients by solving a coupled equation resulting from Galerkin testing. Stochastic collocation is a non-intrusive (or ``sampling based") method: it solves the deterministic equations at a set of sample points, followed by a numerical scheme to reconstruct the PC/gPC coefficients.
There is an increasing interest in applying stochastic spectral methods to circuit simulation. Most works use PC-based stochastic collocation or stochastic Galerkin methods to simulate on-chip and off-chip interconnects with Gaussian parameters~\cite{Stievano:2012,Stievano:2011_1,Fan:2007,Wang:2004,yzou:2007}. Limited results have been reported on nonlinear circuit analysis. The PC-based stochastic circuit simulator proposed by Strunz~\cite{Strunz:2008} requires constructing the stochastic device models {\it a priori}, thus it cannot be easily integrated with industrial semiconductor device models. In~\cite{Tao:2007}, stochastic collocation was combined with harmonic balance to simulate nonlinear RF circuits under Gaussian variations.
\begin{table*}[t]
\centering
\caption{Univariate gPC polynomial basis of some typical random parameters~\cite{xiu2009}.}
\label{tab:gPC}
\begin{threeparttable}
\begin{tabular}{|c||c|c|c|}
\hline
Distribution of $\xi_k$ & ${\rm PDF}$ of $\xi_k$ [$\rho_k(\xi_k)$]\tnote{1}& univariate gPC basis $\phi^k _{\nu} \left( {\xi _k } \right)$ & Support $\Omega _k$\\
\thickhline
Gaussian & $\frac{1}{{\sqrt {2\pi } }}\exp \left( {\frac{{ - \xi _k^2 }}{2}} \right)$ &Hermite-chaos polynomial& $(-\infty, +\infty)$ \\ \hline
Gamma & $\frac{{\xi _k ^{\gamma - 1} \exp \left( { - \xi _k } \right)}}{{\Gamma \left( \gamma \right)}},\;\gamma > 0$ &Laguerre-chaos polynomial& $[0, +\infty)$ \\ \hline
Beta & $\frac{{ {\xi_k}^{\alpha - 1} \left( {1 - \xi_k} \right)^{\beta - 1} }}{{{\rm B}\left( {\alpha ,\beta } \right)}},\;\;\alpha ,\beta > 0 $ &Jacobi-chaos polynomial& $[0, 1]$ \\ \hline
Uniform & $\frac{1}{2}$ &Legendre-chaos polynomial& $[-1, 1]$ \\
\hline
\end{tabular}
\begin{tablenotes}
\item[1] $\Gamma \left( \gamma \right) = \int\limits_0^\infty {t^{\gamma - 1} \exp \left( { - t} \right)dt}$ and ${\rm B}\left( {\alpha ,\beta } \right) = \int\limits_0^1 {t^{\alpha - 1} \left( {1 - t} \right)^{\beta - 1} dt}$ are the Gamma and Beta functions, respectively.\normalsize
\end{tablenotes}
\end{threeparttable}
\end{table*}
Practical ICs often also contain non-Gaussian parameters, and they cannot be effectively simulated by PC-based techniques. For such cases, gPC is more appealing since it can effectively handle non-Gaussian parameters. Motivated by this, Pulch applied gPC-based spectral methods to analyzing stochastic linear circuits~\cite{Pulch:2011}. Since almost all semiconductor devices are nonlinear, it is necessary to develop uncertainty quantification tools for nonlinear circuit simulation. Some progress has been reported along this line~\cite{zzhang:tcad2013,zzhang:tcas2_2013,Pulch:2011_1,Pulch:2009}. The RF circuit simulators in~\cite{Pulch:2011_1,Pulch:2009} directly apply gPC and stochastic Galerkin, showing remarkable speedup over MC. In order to further reduce the computational cost, the authors of this paper have proposed to simulate nonlinear circuits using a stochastic testing scheme~\cite{zzhang:tcad2013,zzhang:tcas2_2013}. Stochastic testing can be regarded as a hybrid version of stochastic collocation and stochastic Galerkin methods, and it proves more efficient for time-domain circuit simulation.
In this paper, we aim to review the fundamental ideas of gPC-based transistor-level simulation, and to summarize the recent progress on this topic. In Section II, we review some backgrounds on gPC and numerical quadrature. Section III discusses stochastic testing, stochastic Galerkin and stochastic collocation techniques and compares their performances in circuit simulation. In Section IV, some stochastic periodic steady-state simulators based on intrusive solvers are discussed and compared. Section V discusses some open problems in this field, followed by the conclusion in Section IV.
\section{Preliminaries}
Consider the stochastic differential algebraic equation obtained from modified nodal analysis~\cite{mna:1975}:
\begin{equation}
\label{eq:sdae}
\begin{array}{l}
\displaystyle{\frac{{d\vec q\left( {\vec x( {t,\vec \xi } ),\vec \xi } \right)}}{{dt}} }+ \vec f\left( {\vec x( {t,\vec \xi } ),\vec \xi } \right) = B\vec u\left( t \right)
\end{array}
\end{equation}
where $\vec u(t)$ is the input; ${\vec x}\in \mathbb{R}^n$ denotes nodal voltages and branch currents; ${\vec q}\in \mathbb{R}^n$ and ${\vec f}\in \mathbb{R}^n$ represent the charge/flux and current/voltage terms, respectively. Here ${\vec \xi}$=$[\xi_1,\cdots,\xi_d]\in\Omega$ (with $\Omega\subseteq\mathbb{R}^d$) denotes $d$ Gaussian and/or non-Gaussian parameters describing the device-level variations. Assume that all random parameters are independent, i.e., their joint probability density function (PDF) can be expressed as
\begin{equation}
\label{PDF}
\rho(\vec \xi)=\prod\limits_{k = 1}^d {\rho _{k } \left( \xi_k \right)},
\end{equation}
with ${\rho _{k } \left( \xi_k \right)}$ being the PDF of $\xi_k \in \Omega_k$. In this paper, we focus on how to solve~(\ref{eq:sdae}) by stochastic spectral methods to extract the statistical information of the state vector $\vec x( {t,\vec \xi } )$.
\subsection{Generalized Polynomial Chaos (gPC) Construction}
\textbf{Univariate gPC.} For $\xi_k \in\Omega_k \subseteq \mathbb{R}$, one can construct a set of polynomial functions subject to the orthonormal condition:
\begin{equation}
\label{uni_gPC}
\left\langle {\phi^k_{\gamma} ( {\xi_k } ),\phi^k_{\nu} ( {\xi_k } )} \right\rangle = \int\limits_{\Omega_k} {\phi^k_{\gamma} ( {\xi_k } )\phi^k_{\nu} ( {\xi_k } ){\rho_k}( {\xi_k } )d\xi_k }=\delta_{\gamma,\nu}
\end{equation}
where $\langle , \rangle$ denotes the inner product; $\delta_{\gamma,\nu}$ is a Delta function; integers $\gamma$ and $\nu$ are the degrees of $\xi_k$ in polynomials $\phi^k_{\gamma} ( {\xi_k } )$ and $\phi^k_{\nu} ( {\xi_k } )$, respectively. Given ${\rho_k}( {\xi_k } )$ and $\Omega_k$, one can utilize a three-term recurrence relation to construct such orthonormal polynomials~\cite{Walter:1982}.
Some univariate gPC basis functions are listed in Table~\ref{tab:gPC} as a demonstration. It is worth noting that: 1) the univariate gPC basis functions are not limited to the cases listed in Table~\ref{tab:gPC}; 2) when $\xi_k$ is a Gaussian variable, its gPC simplifies to the Hermite polynomial chaos~\cite{PC1938}.
\textbf{Multivariate gPC.} When the components of $\vec \xi$ are assumed mutually independent, the multivariate gPC can be constructed based on the univariate gPC of each $\xi_k$. Given an index vector $\vec \alpha=[\alpha_1,\cdots,\alpha_d]\in \mathbb{N}^d $, the corresponding multivariate gPC is constructed as
\begin{equation}
H_{\vec \alpha} ( {\vec \xi } ) = \prod\limits_{k = 1}^d {\phi^k _{\alpha_k } ( {\xi _k } )}.
\end{equation}
The obtained multivariate gPC is orthonormal, i.e.,
\begin{equation}
\label{multi_gPC}
\left\langle {H_{\vec \alpha} ( {\vec \xi } ),H_{\vec \beta} ( {\vec \xi} )} \right\rangle = \int\limits_{\Omega} {H_{\vec \alpha} ( {\vec \xi } )H_{\vec \beta} ( {\vec \xi} ){\rho}( {\vec \xi } )d{\vec \xi} }=\delta_{\vec \alpha,\vec \beta}.
\nonumber
\end{equation}
Note that $H_{\vec \alpha} ( {\vec \xi } ) $ is the product of different types of univariate gPC bases when $\xi_k$'s have different density functions.
\subsection{gPC Expansion}
If $\vec x(t,{\vec \xi})$ is a 2nd-order stochastic process (i.e., $\vec x(t,{\vec \xi})$ has a bounded 2nd-order moment), we can approximate it by a finite-term gPC expansion
\begin{equation}
\label{gpcExpan}
\vec x(t,\vec \xi ) \approx \tilde x(t,\vec \xi) =\sum\limits_{\vec \alpha \in {\cal P}} {\hat x_{\vec \alpha} (t)H_{\vec \alpha}(\vec \xi )}
\end{equation}
where $\hat x_{\vec \alpha} (t)\in \mathbb{R}^n$ denotes the gPC coefficient with index ${\vec \alpha}$, and ${\cal P}$ is a set containing some properly selected index vectors.
Given $p\in \mathbb{N}^+$, there are two popular choices for ${\cal P}$~\cite{sgscCompare}. In the tensor product method, one sets ${\cal P}=\{\vec \alpha |\; 0\leq \alpha _k \leq p\}$, leading to a total of $(p+1)^d$ gPC bases. In order to reduce the total number of basis functions, the total degree scheme sets ${\cal P}=\{\vec \alpha |\; \alpha_k\in \mathbb{N},\; 0\leq {\alpha _1}+\cdots+\alpha_d \leq p\}$, leading to
\begin{equation}
\label{Kvalue}
K = \left( \begin{array}{l}
p + d \\
\;\;p \\
\end{array} \right) = \frac{{(p + d)!}}{{p!d!}}
\end{equation}
gPC bases in total. This total degree method is employed in our stochastic circuit simulator. There is a one-to-one correspondence between $k$ (with $1\leq k\leq K$) and the index vector $\vec \alpha$, thus for simplicity (\ref{gpcExpan}) is normally rewritten as
\begin{equation}
\label{gpcExpan_k}
\vec x(t,\vec \xi ) \approx \tilde x(t,\vec \xi) =\sum\limits_{k=1}^K {\hat x^k (t)H_k(\vec \xi )}.
\end{equation}
It is shown that gPC expansions converge exponentially for some analytic functions~\cite{book:Dxiu,gPC2002,xiu2009}. Such exponential convergence rates may not be observed in practical engineering problems, but gPC still converges very fast when the function of interest depends smoothly on $\vec \xi$. With gPC approximations, some statistical information (e.g., mean and variance) can be easily calculated due to the orthonormality of $H_{k}(\vec \xi)$'s.
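As an illustration of how the mean and variance fall out of an orthonormal gPC expansion, the following univariate sketch (Gaussian parameter with the Hermite-chaos basis; assumes NumPy, and the test function $g$ is an arbitrary smooth example) projects $g(\xi)=\xi^2+\xi$ onto the first few orthonormal basis polynomials by Gauss quadrature, then reads off the mean and variance directly from the coefficients:

```python
import numpy as np
from math import comb, factorial, sqrt
from numpy.polynomial.hermite_e import hermegauss, hermeval

# total-degree basis size for p = 3, d = 3: K = (p+d)!/(p! d!) = 20
p, d = 3, 3
K = comb(p + d, p)

# orthonormal Hermite-chaos basis for a standard Gaussian parameter
xi, w = hermegauss(20)            # Gauss rule for the weight exp(-xi^2/2)
w = w / np.sqrt(2 * np.pi)        # renormalize so the weights integrate against the PDF
def He(k, x):                     # orthonormal univariate gPC basis He_k / sqrt(k!)
    c = np.zeros(k + 1); c[k] = 1.0
    return hermeval(x, c) / sqrt(factorial(k))

# project an example smooth function onto the basis
g = xi ** 2 + xi
coef = np.array([np.sum(g * He(k, xi) * w) for k in range(4)])
mean, var = coef[0], np.sum(coef[1:] ** 2)   # orthonormality gives both directly
```

Here the mean equals the coefficient of the constant basis function and the variance is the sum of squares of the remaining coefficients; for this $g$ they are $1$ and $3$, matching a direct calculation.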
\begin{figure*}[t]
\centering
\includegraphics[width=150mm]{fig/method_class.eps}
\caption{Classification of various stochastic solvers~\cite{zzhang:tcad2013}. ``TP" and ``SP" means the quadrature rules based on tensor product and sparse grids, respectively.}
\label{fig:method_class}
\end{figure*}
\subsection{Numerical Quadrature}
This section briefly reviews some numerical quadrature methods widely used in stochastic spectral methods.
\textbf{1-D Case.} When computing an integral with a quadrature method one typically uses the expression
\begin{equation}
\label{stoInt}
\int\limits_{\Omega _k } {g( {\xi _k } )\rho_k ( {\xi _k } )d\xi _k } \approx \sum\limits_{j = 1}^{\hat n} {g( {\xi _k^j } )} w_k^j
\end{equation}
when $g\left( {\xi _k } \right)$ is a smooth function. The quadrature points ${\xi _k^j }$'s and weights $w_k^j$'s are chosen according to $\Omega_k$ and $\rho_k \left( {\xi _k } \right)$. Two kinds of quadrature rules are widely used: Gauss quadrature~\cite{Golub:1969} and Clenshaw-Curtis rules~\cite{Clenshaw:1960,Trefethen:2008}. With $\hat n$ points, the Gauss quadrature rule produces exact results for all polynomials of degree $\leq 2\hat n-1$, whereas Clenshaw-Curtis is exact only when the degree of $g(\xi_k)$ is $\leq \hat n-1$. The Clenshaw-Curtis scheme generates nested quadrature points and assumes that $\xi_k$ is uniformly distributed in a bounded domain.
\textbf{Multi-dimensional Case.} One can also evaluate a multidimensional integral in $\Omega\subseteq\mathbb{R}^d$ using the formula
\begin{equation}
\label{stoInt:md}
\int\limits_{\Omega } {g( {\vec \xi } ){\rho} ( {\vec \xi })d\vec \xi } \approx \sum\limits_{j = 1}^{\hat N} {g( {\vec \xi ^j } )} w^j.
\end{equation}
where $\hat N$ is the total number of quadrature points, and $w^j$ is the weight corresponding to quadrature point $\vec \xi ^j$. Given the 1-D quadrature points for each $\xi _k$, $\vec \xi^j$'s and $w^j$'s can be obtained for instance using a tensor-product rule or using sparse grids~\cite{sparse_grid:2000,Gerstner:1998}. With Smolyak's algorithm, sparse grid technique uses much fewer quadrature points than the tensor-product rule, thus it is widely used to solve stochastic PDEs~\cite{col:2005,Ivo:2007,Nobile:2008,Nobile:2008_2}. In~\cite{col:2005,Ivo:2007,Nobile:2008,Nobile:2008_2} Smolyak's algorithm produces nested sparse grids because all random parameters are assumed uniformly distributed (and thus Clenshaw-Curtis rule is used for all $\xi_k$'s). However, Smolyak's algorithm generates non-nested sparse grids when non-nested 1-D quadrature points are used for some parameters (since many random parameters with non-uniform distributions may not be effectively handled by the Clenshaw-Curtis rule).
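A minimal sketch of the tensor-product construction in (\ref{stoInt:md}) for $d=2$ uniform parameters on $[-1,1]$ (assumes NumPy; the integrand is an arbitrary smooth example). The 1-D Gauss-Legendre weights are folded with the uniform PDF $1/2$, and the 2-D rule is their tensor product:

```python
import numpy as np
from itertools import product

# 1-D Gauss-Legendre rule folded with the uniform PDF 1/2 on [-1, 1]
pts1d, w1d = np.polynomial.legendre.leggauss(4)   # exact up to degree 7
w1d = w1d / 2.0

# tensor product for d = 2: 4 x 4 = 16 quadrature points
grid = [(x, y) for x, y in product(pts1d, pts1d)]
wts = [wx * wy for wx, wy in product(w1d, w1d)]

g = lambda x, y: x ** 2 * y ** 4
approx = sum(wi * g(x, y) for (x, y), wi in zip(grid, wts))
exact = (1 / 3) * (1 / 5)          # E[x^2] * E[y^4] for independent uniforms
```

The 16-point rule reproduces the exact moment, while the exponential growth of the point count with $d$ is exactly what sparse grids are designed to avoid.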
\section{Stochastic Spectral Methods}
\subsection{Classification of Stochastic Solvers}
The main stochastic solvers are classified in Fig.~\ref{fig:method_class}. MC and stochastic collocation are both non-intrusive (or sampling-based) methods: they solve (\ref{eq:sdae}) as a deterministic problem at a set of samples. Their main difference lies in the sampling stage: MC randomly draws some samples based on $\rho(\vec \xi)$, whereas stochastic collocation typically uses the points from a tensor-product or sparse-grid rule such that the gPC coefficients can be well reconstructed. Stochastic Galerkin and stochastic testing belong to the family of intrusive solvers: through solving a new coupled differential algebraic equation they directly compute the gPC coefficients. The former sets up the coupled equation by Galerkin testing, whereas the latter constructs a coupled equation via collocation testing.
\subsection{Stochastic Testing (ST)}
\label{subsec:st}
The stochastic testing method needs to select $K$ testing points $\vec \xi_1, \cdots, \vec \xi_K$. First, a quadrature scheme (e.g., tensor-product or sparse grid rule in Section II-C) is applied to generate $\hat N$ quadrature points $\vec \xi^j$'s in parameter space $\Omega$, which are called candidate nodes. Second, the $K$ most important candidate nodes are selected such that the transformation matrix $\textbf{V}\in \mathbb{R}^{K\times K}$, with its $(i,j)$ entry being
\begin{equation}
\label{def:V}
\textbf{V}_{i,j} = {H_j (\vec \xi _i )},
\end{equation}
is invertible and well conditioned.
\begin{algorithm}[t]
\caption{Testing Point Selection for ST~\cite{zzhang:tcad2013}.}
\label{alg:testNode}
\begin{algorithmic}[1]
\STATE {construct $\hat N$ $d$-D quadrature points and weights; }
\STATE {reorder the quadrature points such that $|w^j|\geq |w^{j+1}|$;}
\STATE {set $V=\vec H\left( \vec \xi^1\right)/||\vec H\left( \vec \xi^1\right) ||$, $\vec \xi_1=\vec \xi^1$, and $m=1$; }
\STATE {\textbf{for} $j=2,\;\cdots$, $\hat N$ \textbf{do}}
\STATE {\hspace{10pt} $\vec v=\vec H\left( \vec \xi^j\right)-V\left( V^T\vec H\left( \vec \xi^j\right)\right)$;}
\STATE {\hspace{10pt}\textbf{if} $||\vec v||/||\vec H\left( \vec \xi^j\right) ||>\beta$}
\STATE {\hspace{20pt}set $V=[V;\vec v/||\vec v||]$, $m=m+1$, $\vec \xi_m=\vec \xi^j$;}
\STATE {\hspace{20pt}\textbf{if} $m\geq K$, break, \textbf{end};}
\STATE {\hspace{10pt}\textbf{end if}}
\STATE {\textbf{end for} }
\end{algorithmic}
\end{algorithm}
Define a vector function $\vec H(\vec \xi):=[H_1(\vec \xi); \cdots; H_K(\vec \xi)]$, then the testing points can be selected by Algorithm~\ref{alg:testNode}~\cite{zzhang:tcad2013}. Only a small portion of the candidate nodes are finally selected as the testing points.
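A compact sketch of Algorithm~\ref{alg:testNode} (assumes NumPy; the function name, its arguments, and the default tolerance $\beta$ are illustrative, not from the original implementation). Candidates are visited in order of decreasing $|w^j|$, and a node is kept only if its basis vector $\vec H(\vec \xi^j)$ has a sufficiently large component outside the span of the already selected nodes:

```python
import numpy as np

def select_testing_points(cand, weights, H, K, beta=1e-3):
    """Pick K of the candidate quadrature nodes (largest-|weight| first) whose
    basis vectors H(xi) stay strongly linearly independent (modified Gram-Schmidt)."""
    order = np.argsort(-np.abs(weights))
    V, chosen = [], []
    for j in order:
        hv = H(cand[j])
        v = hv - sum(np.dot(q, hv) * q for q in V)   # project out the selected span
        if np.linalg.norm(v) / np.linalg.norm(hv) > beta:
            V.append(v / np.linalg.norm(v))
            chosen.append(j)
            if len(chosen) >= K:
                break
    return cand[chosen]
```

For example, with the $K=3$ monomial bases $[1,\xi,\xi^2]$ and nine equally weighted 1-D candidates, the routine returns three nodes whose transformation matrix $\textbf{V}$ is nonsingular, as required.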
Let $\hat{\textbf{x}}(t)=[\hat x^1(t);\cdots ;\hat x^K(t)]$ denote the gPC coefficients, $\tilde q(\hat{\textbf{x}}(t),\vec \xi)=\vec q( {\tilde x( {t,\vec \xi } ),\vec \xi } )$ and $\tilde f(\hat{\textbf{x}}(t),\vec \xi)=\vec f( {\tilde x( {t,\vec \xi } ),\vec \xi } )$. Substituting $\tilde x(t,\vec \xi)$ of (\ref{gpcExpan_k}) into (\ref{eq:sdae}) yields a residual function
\begin{equation}
\label{eq:residual}
\begin{array}{l}
{\rm R}(\hat{\textbf{x}}(t),\vec \xi )=\displaystyle{\frac{{d\tilde q(\hat{\textbf{x}}(t),\vec \xi)}}{{dt}}} + \tilde f(\hat{\textbf{x}}(t),\vec \xi)- B\vec u( t ).
\end{array}
\end{equation}
\textbf{Collocation Testing.} Enforcing the residual function to zero at all testing points, stochastic testing generates the following coupled differential algebraic equation:
\begin{align}
\label{ST:forced}
\frac{{d\textbf{q}(\hat{\textbf{x}}(t))}}{{dt}} + \textbf{f}(\hat{\textbf{x}}(t)) = \textbf{B}u(t),
\end{align}
where the $k$-th blocks of $\textbf{q}(\hat{\textbf{x}}(t))$, $\textbf{f}(\hat{\textbf{x}}(t))$ and $\textbf{B}$ are $\tilde q(\hat{\textbf{x}}(t),\vec \xi_k)$, $\tilde f(\hat{\textbf{x}}(t),\vec \xi_k)$ and $B$, respectively.
\textbf{Numerical Solver.} Stochastic testing is an intrusive solver: the gPC coefficients $\hat{\textbf{x}}(t)$ are directly computed by simulating (\ref{ST:forced}), then the parameter-dependent current/voltage variables are obtained by gPC approximations. In transient analysis, the time step sizes can be selected adaptively according to the local truncation error (LTE) of (\ref{ST:forced}) as done in commercial deterministic circuit simulators~\cite{kundertbook:1995,Tuinenga:1995}. Another desirable feature of stochastic testing is the decoupling procedure inside the intrusive solver. Assume that $\textbf{J}$ is the Jacobian inside the Newton's iteration when simulating (\ref{ST:forced}) (as a DC problem or as a transient problem using numerical integration such as backward Euler). It is shown in~\cite{zzhang:tcad2013} that $\textbf{J}$ can be factored as
\begin{equation}
\textbf{J}={\rm blkdiag}(J_1,\cdots, J_K)(\textbf{V}\otimes\textbf{I}_n)
\end{equation}
where ${\rm blkdiag}$ is the block diagonal operator, $\otimes$ is the Kronecker product operation, and $\textbf{I}_n \in \mathbb{R}^{n\times n}$ is an identity matrix. Matrix $J_k\in \mathbb{R}^{n\times n}$ can be treated as a Jacobian corresponding to (\ref{eq:sdae}) with $\vec \xi=\vec \xi_k$. Since the Vandermonde-like matrix \textbf{V} [as defined in (\ref{def:V})] can be easily inverted~\cite{fastInverse}, the linear system solution inside each Newton's iteration can be decoupled into $K$ small-size problems. Consequently, the overall computational cost scales linearly with $K$~\cite{zzhang:tcad2013}.
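The factorization of $\textbf{J}$ and the resulting decoupled linear solve can be illustrated numerically (assumes NumPy; the matrices $J_k$ and $\textbf{V}$ below are random diagonally-dominant stand-ins, not from an actual circuit). Solving $\textbf{J}\vec s=\vec r$ reduces to $K$ small $n\times n$ solves followed by undoing $\textbf{V}\otimes \textbf{I}_n$:

```python
import numpy as np

n, K = 3, 4
rng = np.random.default_rng(0)
V = rng.standard_normal((K, K)) + 5 * np.eye(K)        # stand-in for V_{ij} = H_j(xi_i)
Jk = [rng.standard_normal((n, n)) + 5 * np.eye(n) for _ in range(K)]

# the (i, j) block of blkdiag(J_1..J_K)(V kron I_n) is V[i, j] * J_i
J = np.vstack([np.hstack([V[i, j] * Jk[i] for j in range(K)]) for i in range(K)])
F = np.zeros((K * n, K * n))                           # blkdiag(J_1, ..., J_K)
for i in range(K):
    F[i * n:(i + 1) * n, i * n:(i + 1) * n] = Jk[i]

# decoupled solve of J s = r: K small n-by-n solves, then undo (V kron I_n)
r = rng.standard_normal(K * n)
y = np.concatenate([np.linalg.solve(Jk[i], r[i * n:(i + 1) * n]) for i in range(K)])
s = np.kron(np.linalg.inv(V), np.eye(n)) @ y
```

Both the factorization and the decoupled solution can be checked against a direct dense solve; the point is that the cost is dominated by the $K$ independent block solves.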
\subsection{Stochastic Galerkin (SG)}
\label{subsec:sg}
\textbf{Galerkin Testing.} Applying Galerkin testing
\begin{equation}
\left <{\rm R}(\hat{\textbf{x}}(t),\vec \xi ), H_k(\vec \xi)\right >=\int\limits_{\Omega } {{\rm R}(\hat{\textbf{x}}(t),\vec \xi ) H_k(\vec \xi){\rho} ( {\vec \xi })d\vec \xi }=0
\end{equation}
for $k=1,\cdots, K$, stochastic Galerkin forms a coupled equation in the form of (\ref{ST:forced}). Now the $k$-th blocks of $\textbf{q}(\hat{\textbf{x}}(t))$, $\textbf{f}(\hat{\textbf{x}}(t))$ and $\textbf{B}$ are $\left<\tilde q(\hat{\textbf{x}}(t),\vec \xi), H_k(\vec \xi)\right >$, $\left <\tilde f(\hat{\textbf{x}}(t),\vec \xi),H_k(\vec \xi) \right >$ and $\left < B, H_k(\vec \xi) \right >$, respectively. The inner products can be evaluated using the numerical quadrature rules described in Section II-C or by an MC integration (if $d$ is large).
\textbf{Numerical Solver.} After (\ref{ST:forced}) is formed by Galerkin testing, $\hat{\textbf{x}}(t)$ is also computed in an intrusive manner. In time-domain simulation, the time step sizes can also be controlled adaptively as in stochastic testing. Compared with stochastic testing, stochastic Galerkin has two drawbacks. First, the inner product evaluation needs $\hat N>K$ quadrature points, and thus at each time point stochastic Galerkin requires more circuit/device evaluations. This can lead to a considerable time cost when complex semiconductor device models are employed. Second, the resulting Jacobian in a stochastic Galerkin-based simulator cannot be decoupled, although it can be decoupled for linear circuits if the gPC bases are chosen by the tensor product method~\cite{Pulch:2011}. This causes a significant computational overhead compared with stochastic testing.
\subsection{Stochastic Collocation (SC)}
In stochastic collocation, Eq. (\ref{eq:sdae}) is first solved at $\hat N$ sample points to obtain a set of deterministic solutions $\vec x(t,\vec \xi^k)$'s. After that, the gPC coefficients are reconstructed by a post-processing numerical scheme. In the mainstream stochastic collocation schemes~\cite{col:2005,Ivo:2007,Nobile:2008,Nobile:2008_2}, the samples are selected by a tensor-product or sparse-grid quadrature technique, and thus the $j$-th gPC coefficient vector can be estimated by
\begin{equation}
\label{SC:interpolation}
\hat x^{j } ( t ) = \left\langle {\vec x ( t,{\vec \xi } ),H_{j } ( {\vec \xi } )} \right\rangle\approx \sum\limits_{k = 1}^{\hat N } {w^k H_{j } ( {\vec \xi ^k } )} \vec x(t, {\vec \xi ^k } ).
\end{equation}
In practical time-domain simulation, each $\vec x(t,\vec \xi^k)$ is computed at a set of discretized time points. Therefore, to reconstruct the gPC coefficients, the deterministic solutions for all samples should be located on the same time grid. Since it is difficult to preselect an adaptive time grid for the black-box deterministic solver, a small fixed step size is normally used, leading to excessive computational cost for stiff circuits.
The speedup factor of stochastic testing over stochastic collocation can be estimated as~\cite{zzhang:tcad2013}
\begin{equation}
\kappa_{\rm overall}=\kappa_{\rm samp}\times \kappa_{\rm tctrl}.
\end{equation}
Here $\kappa_{\rm samp}={\hat N}/{K}>1$ because stochastic testing uses fewer samples than stochastic collocation. If stochastic collocation uses tensor-product quadrature points, $\kappa_{\rm samp}$ gets extremely large as $d$ increases. When stochastic collocation uses nested Smolyak sparse grids and the total degree of the gPC expansion is $p$, $\kappa_{\rm samp}$ is about $2^p$ for $d\gg 1$. The second factor $\kappa_{\rm tctrl}>1$ is caused by adaptive time stepping in stochastic testing, which is case dependent. In DC analysis, $\kappa_{\rm tctrl}=1$.
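For the tensor-product case, $\kappa_{\rm samp}=(p+1)^d/K$ can be tabulated directly (a small illustrative sketch; the dimensions chosen are arbitrary):

```python
from math import comb

# samples per solve: ST uses K testing points; tensor-product SC uses (p+1)^d
p = 3
table = {d: ((p + 1) ** d, comb(p + d, d)) for d in (2, 4, 8, 16)}   # d -> (N_hat, K)
kappa_samp = {d: N / K for d, (N, K) in table.items()}
```

Already at $d=8$ the tensor-product rule needs $65536$ samples against $K=165$ testing points, i.e., $\kappa_{\rm samp}$ of several hundred, and the gap keeps widening with $d$.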
\subsection{Performance Analysis}
We have implemented stochastic testing, stochastic Galerkin and stochastic collocation in MATLAB and performed various simulations (DC, transient, AC) on several analog/RF and digital ICs~\cite{zzhang:tcad2013}. For those benchmarks with several Gaussian and non-Gaussian random parameters, all stochastic spectral methods have shown $10^2$--$10^3\times$ speedup over MC due to the fast convergence of gPC expansions. The speedup factors of stochastic testing over stochastic Galerkin and stochastic collocation are on the level of $O(1)$ to $O(10^2)$, which are more significant as the gPC order $p$ increases.
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{fig/LNA.eps}
\caption{Schematic of the LNA.}
\label{fig:LNA}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{fig/methodCompare_LNA.eps}
\caption{Accuracy and efficiency of stochastic testing (ST), stochastic Galerkin (SG) and stochastic collocation (SC) for the DC analysis of LNA.}
\label{fig:methodCompare_LNA}
\end{figure}
The static analysis of a common-source amplifier (with four random parameters) shows that stochastic testing has slightly larger errors than stochastic Galerkin and stochastic collocation, but it uses the least amount of CPU time to achieve a similar level of accuracy. The results for a low-noise amplifier (LNA) with three random parameters (in Fig.~\ref{fig:LNA}) are plotted in Fig.~\ref{fig:methodCompare_LNA}. The $L_2$-norm errors of the computed gPC coefficients from all three methods are almost the same, and stochastic testing costs significantly less CPU time. Our experiments show that a $3$rd-order gPC expansion (i.e., $p=3$) is enough for most circuits.
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{fig/dbmixer.eps}
\caption{Schematic of the BJT double-balanced mixer.}
\label{fig:dbmixer}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=85mm]{fig/dbmixer_output.eps}
\caption{Uncertainties of $V_{\rm out}$$=$$V_{{\rm out}1}-V_{{\rm out}2}$ of the double-balanced mixer: (a) mean value, (b) standard deviation.}
\label{fig:dbmixer_output}
\end{figure}
Transient simulation of the common-source amplifier and LNA shows that the speedup factor of stochastic testing over stochastic Galerkin and stochastic collocation is about $O(10^1)$ to $O(10^2)$, which is more significant for large-size circuits. In analog circuits, the speedup factor caused by adaptive time stepping is about $O(1)$ to $O(10^1)$. For digital (e.g., SRAM cell) and multi-rate RF (e.g., BJT mixer) circuits, stochastic testing can solve the problem with seconds or minutes of CPU time, whereas stochastic collocation may require $>1$ hour due to the uniform time stepping. Fig.~\ref{fig:dbmixer} shows a mixer with uncertainties at $R_1$ and $R_2$. Stochastic testing produces the mean and standard-deviation waveforms (in Fig.~\ref{fig:dbmixer_output}) after $21$ minutes, whereas stochastic Galerkin, stochastic collocation and MC are prohibitively expensive on the MATLAB platform.
\section{Uncertainty Quantification for Periodic Steady States}
Analog/RF and power electronic circuit designers are interested in periodic steady-state analysis~\cite{kundert:jssc99,Nastov:ieeeProc,Jacob:matrixfree,Aprille:ieeeProc,Aprille:TCAS,Vytyaz:tcad}. Using stochastic spectral methods, uncertainties of the periodic steady states can be analyzed more efficiently than using MC. This section summarizes the progress on stochastic time-domain periodic steady-state solvers~\cite{zzhang:tcas2_2013}. Other solvers (e.g., harmonic balance) can also be easily implemented.
\subsection{Forced Circuits}
For many forced circuits (e.g., amplifiers and power converters), there exists a periodic steady-state solution $\vec x(t,\vec \xi)=\vec x(t+T, \vec \xi)$ when the input is a time-varying periodic signal $\vec u(t)=\vec u(t+T)$. The state vector $\vec x(t,\vec \xi)$ is periodic for any $\vec \xi\in \Omega$ if and only if $\hat{\textbf{x}}(t)$ is periodic. Therefore, we can set up the following equation
\begin{align}
\label{st_pss_forced}
\textbf{g} (\hat{\textbf{y}}) = \Phi (\hat{\textbf{y}},0,T) - \hat{\textbf{y}} = 0.
\end{align}
Here $\hat{\textbf{y}}=\hat{\textbf{x}}(0)$, and $\hat{\textbf{x}}(T)=\Phi(\hat{\textbf{y}},0,T)$ is the state transition function of (\ref{ST:forced}) formed by stochastic testing (c.f. Section~\ref{subsec:st}) or stochastic Galerkin (c.f. Section~\ref{subsec:sg}).
Eq.~(\ref{st_pss_forced}) can be solved by the standard shooting Newton method~\cite{kundert:jssc99,Nastov:ieeeProc,Jacob:matrixfree,Aprille:ieeeProc}. When solving the linear equation inside each Newton iteration, evaluating the right-hand side requires integrating (\ref{ST:forced}) from $t=0$ to $t=T$, and the Jacobian matrix can be obtained once a monodromy matrix is computed (via a sensitivity analysis along the discretized trajectories). Directly solving (\ref{st_pss_forced}) requires $O(K^3n^3)$ cost if a direct matrix solver is employed. Fortunately, \cite{zzhang:tcas2_2013} shows that the linear equation can easily be decoupled into $K$ small problems by a similarity transform if (\ref{ST:forced}) is formed by stochastic testing. The decoupled intrusive transient solver in Section~\ref{subsec:st} can be employed to evaluate the right-hand side of each linear equation inside the Newton iterations, so the overall cost can be reduced to $O(Kn^3)$ in the stochastic testing formulation.
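As a toy illustration of the shooting Newton idea (our own sketch, not the solver of~\cite{zzhang:tcas2_2013}), the snippet below finds the periodic steady state of the scalar forced ODE $\dot x=-x+\sin(2\pi t)$ with period $T=1$: the state-transition map $\Phi(y,0,T)$ is evaluated by RK4 time stepping, and the Newton direction uses a finite-difference Jacobian. The model and step counts are illustrative assumptions.

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def transition(f, y0, T, steps=400):
    """State-transition map Phi(y0, 0, T): integrate from t = 0 to t = T."""
    y, h = y0, T / steps
    for i in range(steps):
        y = rk4_step(f, i * h, y, h)
    return y

def shooting_newton(f, T, y=0.0, tol=1e-12, eps=1e-7):
    """Solve g(y) = Phi(y, 0, T) - y = 0 by Newton with a FD Jacobian."""
    for _ in range(50):
        g = transition(f, y, T) - y
        if abs(g) < tol:
            break
        dg = (transition(f, y + eps, T) - transition(f, y, T)) / eps - 1.0
        y -= g / dg
    return y

omega = 2 * math.pi
f = lambda t, x: -x + math.sin(omega * t)   # forced RC-type test equation
x0 = shooting_newton(f, T=1.0)              # periodic initial condition
```

For this linear test equation the exact periodic initial value is $x(0)=-\omega/(1+\omega^2)$, which the iteration reproduces to integrator accuracy.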
\textbf{Results.} The simulation result of the LNA (with $V_{\rm in}=0.1{\rm sin}(4\pi\times 10^8t)$ V) is plotted in Fig.~\ref{fig:LNA_wave}. With a $3$rd-order total-degree gPC expansion, the stochastic testing-based and stochastic Galerkin-based solvers give the same results. Using standard MC, $8000$ samples are required to achieve a similar level of accuracy ($<$$1\%$ relative errors for the mean and standard deviation). Fig.~\ref{fig:LNA_pdf} plots the density functions of the total harmonic distortion and power consumption extracted from the computed periodic steady-state solution, which are consistent with those from MC. The simulation cost of the decoupled stochastic testing solver is $3.4$ seconds, which is $42\times$ faster than the coupled stochastic testing solver, $71\times$ faster than the stochastic Galerkin-based solver, and $220\times$ faster than MC. In~\cite{zzhang:tcas2_2013} an $O(K^2)$ speedup factor due to decoupling is clearly observed for stochastic testing.
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{fig/LNA_wave1.eps}
\caption{Periodic steady-state waveforms for the LNA. (a) $\&$ (b): mean and s.t.d of $V_{\rm out}$; (c) $\&$ (d): mean and s.t.d of $I (V_{\rm dd})$.}
\label{fig:LNA_wave}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{fig/LNA_pdf.eps}
\caption{Probability density functions obtained by MC and stochastic testing (ST). (a) total harmonic distortion and (b) power dissipation.}
\label{fig:LNA_pdf}
\end{figure}
\subsection{Autonomous Circuits}
For unforced cases (e.g., oscillators), the input signal $\vec u(t)=\vec u$ is time-invariant, and the period is unknown. The periodicity constraint is $\hat{\textbf{x}}(t,\vec \xi)=\hat{\textbf{x}}(t+T(\vec \xi),\vec \xi)$, where the period $T(\vec \xi)$ depends on $\vec \xi$. Choose a constant $T_0$ and assume that $a(\vec \xi)$ is a scaling factor such that $T(\vec \xi)=T_0 a(\vec \xi )$, then we obtain a scaled time variable $\tau=t/a (\vec \xi )$~\cite{Vytyaz:tcad}. Let $\vec z(\tau,\vec{\xi}):=\vec x (t,\vec \xi)$, then $\vec z(\tau,\vec \xi)$ has a constant period $T_0$ on the scaled time axis $\tau$. Both $a(\vec \xi)$ and $\vec z(\tau, \vec \xi)$ can be approximated by gPC expansions
\begin{equation}
\begin{array}{l}
a(\vec \xi ) \approx \tilde a(\vec \xi ) = \sum\limits_{k = 1}^K {\hat a^k H_k (\vec \xi )} ,\;\; \\
\vec z(\tau ,\vec \xi ) \approx \tilde z(\tau ,\vec \xi ) = \sum\limits_{k = 1}^K {\hat z^k (\tau )H_k (\vec \xi )} .
\end{array}
\end{equation}
Substituting the above approximation into (\ref{eq:sdae}) and changing the time variable, we obtain a new residual function
\begin{equation}
\label{eq:residual_tau}
\begin{array}{l}
{\rm R}(\hat{\textbf{z}}(\tau),\hat {\textbf{a}},\vec \xi )=\displaystyle{\frac{{d\tilde q(\hat{\textbf{z}}(\tau),\vec \xi)}}{{d\tau}}} + \tilde a(\vec \xi)\tilde f(\hat{\textbf{z}}(\tau),\vec \xi)- \tilde a(\vec \xi)B\vec u.
\end{array} \nonumber
\end{equation}
Here $\tilde q(\hat{\textbf{z}}(\tau),\vec \xi)=\vec q(\tilde{z}(\tau,\vec \xi),\vec \xi)$, $\tilde f(\hat{\textbf{z}}(\tau),\vec \xi)=\vec f(\tilde{z}(\tau,\vec \xi),\vec \xi)$; $\hat {\textbf{a}}$ and $\hat{\textbf{z}}(\tau)$ collect the gPC coefficients of $\tilde a(\vec \xi )$ and $\tilde z(\tau ,\vec \xi )$, respectively. The following coupled differential equation
\begin{equation}
\label{ST:unforced}
\displaystyle{\frac{{d\textbf{q}\left(\hat{\textbf{z}}(\tau)\right)}}{{d\tau}}} + \textbf{f}\left(\hat{\textbf{z}}(\tau),\hat{\textbf{a}}\right)= \textbf{B}(\hat{\textbf{a}})\vec u
\end{equation}
can be constructed by either stochastic testing~\cite{zzhang:tcas2_2013} or stochastic Galerkin~\cite{Pulch:2011_1}. In stochastic testing we perform collocation testing (c.f. Section~\ref{subsec:st}) on ${\rm R}(\hat{\textbf{z}}(\tau),\hat {\textbf{a}},\vec \xi )$, whereas in stochastic Galerkin one applies Galerkin testing (c.f. Section~\ref{subsec:sg}).
Based on (\ref{ST:unforced}), an algebraic equation can be set up to solve for the gPC coefficients of $\tilde z(0, \vec \xi)$ and $\tilde a(\vec \xi)$. Let $\hat{\textbf{y}}:=[\hat{\textbf{z}}(0);\hat {\textbf{a}}]$ and fix the $j$-th component of $\vec z(0)$ at $\lambda$, then we have
\small
\begin{equation}
\label{st_pss_unforced}
\textbf{g} ( {\hat{\textbf{y}} } ) = \left[ {\begin{array}{*{20}c}
{\Psi ( {\hat{\textbf{z}}(0) ,\hat{\textbf{a}}} )} \\
{\chi ( \hat{\textbf{z}}(0))} \\
\end{array}} \right] = \left[ {\begin{array}{*{20}c}
{\Phi ( {\hat{\textbf{z}}(0),0,T_0 ,\hat{\textbf{a}}} ) - \hat{\textbf{z}}(0)} \\
{\chi ( \hat{\textbf{z}}(0))} \\
\end{array}} \right] = 0.
\end{equation} \normalsize
Here $\Phi ( {\hat{\textbf{z}}(0),0,T_0 ,\hat{\textbf{a}}} )$ is the state transition function of (\ref{ST:unforced}), which depends on $\hat{\textbf{a}}$. The phase constraint $\chi ( \hat{\textbf{z}}(0))=0\in \mathbb{R}^K$
\begin{align}
\chi ( \hat{\textbf{z}}(0)) = \left[ \hat{\textbf{z}}_j(0) - \lambda ;\; \hat{\textbf{z}}_{j + n}(0);\; { \cdots ;\;} \hat{\textbf{z}}_{j + (K - 1)n}(0) \right] = 0 \nonumber
\end{align}
is added to make (\ref{st_pss_unforced}) a determined equation.
When solving (\ref{st_pss_unforced}) by Newton's iterations, the Jacobian evaluation is more involved than that in forced circuits. Besides the Monodromy matrix, the sensitivity matrix of $\textbf{g} ( {\hat{\textbf{y}} } )$ w.r.t $\hat{\textbf{a}}$ is also required, which can be obtained in an iterative way~\cite{zzhang:tcas2_2013}. Similar to the forced circuits, decoupling leads to an $O(K^2)$ speedup if the stochastic testing formulation is employed~\cite{zzhang:tcas2_2013}.
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{fig/Col_osc.eps}
\caption{Schematic of the BJT Colpitts oscillator.}
\label{fig:Colp_osc}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{fig/Colp_wave.eps}
\caption{Realizations of $V_{\rm out}$ for the Colpitts oscillator. (a) on the scaled time axis, (b) on the original time axis.}
\label{fig:Colp_wave}
\end{figure}
\textbf{Results.} The gPC-based periodic steady-state solvers are applied to analyze the BJT Colpitts oscillator in Fig.~\ref{fig:Colp_osc}. The oscillation frequency is influenced by the Gaussian variation of $L_1$ and the non-Gaussian variation of $C_1$. With a $3$rd-order gPC expansion, the stochastic testing-based~\cite{zzhang:tcas2_2013} and stochastic Galerkin-based~\cite{Pulch:2011_1} solvers produce the same results. Fig.~\ref{fig:Colp_wave} shows some realizations of $V_{\rm out}$. The variation looks small on the scaled time axis $\tau$, but it is significant on the original time axis due to the uncertainty of the frequency. The CPU time of the decoupled stochastic testing-based solver is $4.9$ seconds, which is $2\times$ and $5\times$ faster than the coupled stochastic testing-based solver and the stochastic Galerkin-based solver~\cite{Pulch:2011_1}, respectively. To achieve a similar level of accuracy ($<1\%$ errors for the mean and standard deviation of the frequency), MC must use $5000$ samples, which is about $507\times$ slower than the stochastic testing-based simulator with decoupling.
\subsection{Other Related Work}
An intrusive simulator has been proposed to analyze the uncertainties of RF circuits with multi-rate input signals~\cite{Pulch:2009}. It uses the multi-time PDE technique~\cite{Jaijeet:2001} to solve a coupled differential equation formed by stochastic Galerkin, generating stochastic quasi-periodic steady-state solutions. The stochastic testing-based formulation can be easily extended to this case to further reduce the computational cost.
Non-intrusive periodic steady-state solvers are not discussed in this paper due to their ease of implementation.
\section{Open Problems}
Although stochastic spectral methods seem promising for stochastic circuit simulation, there still exist many open problems, some of which are summarized below.
\textbf{High Dimensionality.} The number of gPC basis functions grows very fast as the parameter dimensionality $d$ increases. Consequently, the computational cost becomes prohibitively expensive when $d$ is large. It is worth exploiting the sparsity of the gPC coefficients to reduce the complexity. Compressed sensing~\cite{Donoho:2006} seems effective for behavior modeling~\cite{xli2010}, but its efficiency can degrade for simulation problems (since the gPC coefficients of different nodal voltages and/or branch currents have different sparsity patterns). A dominant singular vector method has been proposed for high-dimensional linear stochastic problems~\cite{Tarek_DAC:08}, yet solving the non-convex optimization is challenging for nonlinear problems.
\textbf{Correlated Non-Gaussian Parameters.} In the existing literature, the parameters are typically assumed mutually independent, which is not valid for many practical circuits. Unlike Gaussian variables, correlated non-Gaussian parameters cannot be easily transformed to independent ones, making the gPC basis construction challenging. A theoretical method has been proposed to deal with parameters with arbitrary density functions~\cite{arb_chaos}, but its numerical implementation is non-trivial.
\textbf{Long-Term Integration.} In digital IC simulation, designers normally have to perform a long-time transient simulation. In the applied math community, it is well known that PC/gPC approximations can be inaccurate for long-time integration, despite some improvements~\cite{Wan:2006}.
\section{Conclusion}
Stochastic spectral methods have emerged as a promising technique for the uncertainty quantification of integrated circuits. After reviewing some key concepts about gPC, this paper has discussed stochastic testing, stochastic Galerkin and stochastic collocation methods, as well as their implementation and performance in nonlinear transistor circuit analysis. Some recent progress on stochastic periodic steady-state analysis has been summarized. Among these techniques, stochastic testing has shown higher efficiency in time-domain IC simulation. Some important problems, such as how to deal with high parameter dimensionality, correlated non-Gaussian parameters and long-term integration errors, have not been solved.
\section*{Acknowledgment}
This work was supported by the MI-MIT Collaborative Program (Reference No.196F/002/707/102f/70/9374). I. Elfadel's work was also supported by SRC under the MEES I, MEES II, and ACE$^{4}$S programs, and by ATIC under the TwinLab program. Z. Zhang would like to thank Dr. Tarek El-Moselhy for his helpful discussions during the work of~\cite{zzhang:tcad2013,zzhang:tcas2_2013}.
\bibliographystyle{IEEEtran}
\section{Introduction}
Let $c, d\in\mathbb N$ be such that $(c,d) =1$, where $1\le c \le d-1$. Then $\frac{c}{d}$ can be represented by a continued fraction
$$\frac{c}{d}=\cfrac{1}{a_1+\cfrac{1}{a_2 + \cfrac{1}{\ldots + \cfrac{1}{a_n}}}} = [0;a_1,\ldots,a_n]=[a_1,\ldots,a_n],
$$
where $a_1,\ldots,a_n$ are the partial quotients (also called the elements of the continued fraction) and $ a_i\in\mathbb N, i=1,\ldots,n. $
The denominator of the finite continued fraction $[a_1,\ldots,a_n]=[\textbf{u}]$ is a function of the sequence
$\textbf{u}=(a_1,\ldots,a_n)$. This number is called the {\slshape continuant} of $\textbf{u}$ and is denoted by
$\langle a_1,\ldots,a_n\rangle$ or $\langle \textbf{u}\rangle$. For every sequence
$\textbf{u}\in{\mathbb N}^{n}$ we define $\textbf{u\_}=(a_2,\ldots,a_n)$ and
$\textbf{u}^{\textbf{---}}=(a_1,\ldots,a_{n-1})$. We also write $\{ \textbf{u} \}$
for the sequence $(a_n,\ldots,a_1)$. For the empty sequence $\textbf{u}$ we set
${[\textbf{u}]=[\{\textbf{u}\}]=0, \langle \textbf{u} \rangle=1}$.
\begin{stat}{(see [1])}
Let ${\textbf{u}=(u_1,\ldots,u_n), n\ge 2}$ be a sequence of natural numbers. Then:
\begin{itemize}
\item[(1)]
$$ \langle\textbf{u}\rangle =\begin{vmatrix}
u_1 & 1 & 0 & \hdotsfor{1} & 0 \\ -1 & u_2 & 1 & 0 & \hdotsfor{1} \\ 0 & -1 & u_3 & 1 & \hdotsfor{1} \\
\hdotsfor{5} \\
0 & 0 & \ldots & -1 & u_n
\end{vmatrix}
$$\newpage
\item[(2)] $\langle u_1,\ldots, u_n\rangle = \langle u_n,\ldots, u_1 \rangle$;
\item[(3)] if $u_2\neq 1$, then $\langle 1, u_2 - 1,u_3, \ldots, u_n\rangle = \langle u_2,\ldots, u_n\rangle$.
\end{itemize}
\end{stat}
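To make the definitions concrete, here is a small computational sketch (our illustration, not part of the original text): it extracts the partial quotients of $c/d$ by the Euclidean algorithm and evaluates the continuant by the recurrence $\langle a_1,\ldots,a_k\rangle=a_k\langle a_1,\ldots,a_{k-1}\rangle+\langle a_1,\ldots,a_{k-2}\rangle$.

```python
def partial_quotients(c, d):
    """Continued fraction [a1, ..., an] of c/d for 1 <= c < d, gcd(c, d) = 1."""
    a = []
    while c:
        q, r = divmod(d, c)   # d/c = q + r/c, so the next element is q
        a.append(q)
        d, c = c, r
    return a

def continuant(seq):
    """<a1, ..., an> via K_k = a_k * K_{k-1} + K_{k-2}, with K(empty) = 1."""
    p, q = 1, 0
    for a in seq:
        p, q = a * p + q, p
    return p

u = partial_quotients(7, 17)       # 7/17 = [2, 2, 3]
print(u, continuant(u))            # the continuant recovers the denominator 17
```

Reversing the sequence leaves the continuant unchanged (property (2) of Statement 1), and $\langle 1,u_2-1,u_3,\ldots,u_n\rangle=\langle u_2,\ldots,u_n\rangle$ (property (3)).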
Zaremba's conjecture states that for every $d\ge 2$ there exists a fraction $\frac{c}{d}$
all of whose partial quotients are strictly less than $N$; conjecturally $N=6$ suffices. In 1986
Niederreiter showed that the conjecture holds for every $d$ that is a power of 2 or 3 with the bound
$N=4$, and for every $d$ that is a power of 5 with the bound $N=5$ (see \cite{Nie}).
In 2002 the conjecture was verified for $d$ a power of 6 with $N=6$ \cite{jap1}, and in 2005
a positive result was obtained for $d$ of the form $7^{k\cdot 2^{n}}$, where
$k=1, 3, 5, 7, 9, 11$, with $N=4$ \cite{jap2}.

In the present paper we obtain a lower bound for the number of sequences of natural numbers $(a_1,\ldots,a_n)$ with bounded elements and continuant equal to $a^m$, where $a, m \in \mathbb N$, $a\ge 2$, and the length $n$ is not fixed; this immediately yields a lower bound for the number of continued fractions with bounded partial quotients and denominator equal to $a^m$. The general result is stated below.
\section{Main results.}
Let $a, m, s \in \mathbb{N}, a\ge 2$. We consider sequences of natural numbers $a_1,\ldots,a_n$ of arbitrary length $n$ whose continuant $\langle a_1,\ldots,a_n\rangle$ equals
an $m$-th power of $a$,
and whose elements satisfy ${a_i< N,\ i=1,\ldots,n,\ N\in \mathbb{N}}$.

Let $f(a^m, N)=f_m$ be the number of sequences of the indicated form whose continuant equals $a^m$.

We also define the polynomial $P_s=P_s(\lambda)$ for the various values of $s$:
$$
P_s(\lambda)=\left\{
\begin{aligned}
&{\lambda}^{3} - s{\lambda}^2 - s\lambda - s,
\quad &\text{for } s\equiv 0 \pmod 8,\ s\ge 6 \quad &\text{(1)} \\
&{\lambda}^{3} - \left(s + 2\right) {\lambda}^2 + \left(s + 2\right) \lambda - \left(s-2\right),
\quad &\text{for } s \equiv 2 \pmod 8,\ s\ge 6 \quad &\text{(2)}\\
&{\lambda}^{3} - \left(s+ 2\right) {\lambda}^2 + s\lambda + \left(s + 4\right),
\quad &\text{for } s \equiv 4 \pmod 8,\ s\ge 6 \quad &\text{(3)}\\
&{\lambda}^{3} - s{\lambda}^2 - \left(s + 2\right) \lambda + \left(s + 2\right),
\quad &\text{for } s \equiv 6 \pmod 8,\ s\ge 6 \quad &\text{(4)}\\
&{\lambda}^2 - 4\lambda -4,
\quad &\text{for } s=4 \quad &\text{(5)}\\
&\lambda - 2,
\quad &\text{for } s=2 \quad &\text{(6)}\\
&\lambda - s - 1,
\quad &\text{for } s\equiv 1 \pmod 2,\ s\ge 3\quad &\text{(7)}.
\end{aligned}
\right.
$$
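For small parameters the quantity $f(a^m,N)$ can also be computed by exhaustive search, which is a useful sanity check of the definitions (this sketch is our illustration and not part of the proofs). Sequences are grown element by element using the continuant recurrence, pruning as soon as the continuant exceeds the target. When the bound $N$ exceeds the target denominator $d$, the count must equal $2\varphi(d)$, since every admissible fraction $c/d$ has exactly two continued fraction expansions.

```python
def count_sequences(d, N):
    """f(d, N): number of sequences (a1, ..., an), 1 <= ai < N, of arbitrary
    length, with continuant exactly d (assumes d >= 2)."""
    def dfs(p, q):                      # p = <a1..ak>, q = <a1..a_{k-1}>
        total = 1 if p == d else 0
        a = 1
        while a < N and a * p + q <= d:
            total += dfs(a * p + q, p)  # append the element a
            a += 1
        return total
    return dfs(1, 0)                    # start from the empty sequence

print(count_sequences(16, 17), count_sequences(16, 4))
```

For $d=16$ this gives $16=2\varphi(16)$ sequences when the bound is ineffective, and $4$ sequences once all elements are required to be less than $4$ (two expansions each of $7/16$ and $9/16$).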
\begin{theo}
For any fixed natural numbers $a$ and $s$, both different from 1, and for all sufficiently large $m$, one has the bound
$$
f(a^m, a^s)\ge \lceil C_1(s)m^{\log_2 {\lambda}}\rceil,
$$
which grows polynomially in $m$, where the positive number $C_1 (s)$ depends only on $s$,
$$
C_1\gg s^{-\log_2 (s+1) - 3},
$$
the constant implied by ``$\gg$'' is absolute, and $\lambda$ is the largest real root of the polynomial $P_s(\lambda).$
%
\begin{rem}
\begin{flushleft} Let ${\lambda}_1, {\lambda}_2, {\lambda}_3, {\lambda}_4$ be the largest real roots of the polynomials $(1)-(4)$, respectively. Then:\end{flushleft}
$$
\begin{aligned}
&{\lambda}_1=s + 1 +\frac{{\theta}_1}{s^2},\qquad \text{ where } {\theta}_1 \in (-1, 0);\\
&{\lambda}_2=s + 1 -\frac{3}{s^2} + \frac{3}{s^3} + \frac{{\theta}_2}{s^4},\qquad \text{ where } {\theta}_2 \in (0, 3);\\
&{\lambda}_3=s +1 -\frac{3}{s^2} + \frac{3}{s^3} -\frac{{\theta}_3}{s^4},\qquad \text{ where } {\theta}_3 \in (-9,-6);\\
&{\lambda}_4=s + 1 +\frac{{\theta}_4}{s^2}, \qquad \text{ where } {\theta}_4 \in (-1, 0).
\end{aligned}
$$
\end{rem}
\end{theo}
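The asymptotics in Remark 1 are easy to confirm numerically for particular values of $s$; the sketch below (our check) locates the largest real root of $P_s$ by bisection for $s=8$ (case (1)) and $s=6$ (case (4)) and verifies that $\theta=(\lambda-(s+1))\,s^2$ falls into the stated interval $(-1,0)$.

```python
def largest_root(p, lo, hi, iters=200):
    """Bisection for the root of p in [lo, hi]; assumes p(lo) < 0 < p(hi)
    and that p is increasing on the bracket."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

# Case (1) with s = 8:  P(x) = x^3 - 8x^2 - 8x - 8,  P(8) < 0 < P(10)
lam1 = largest_root(lambda x: x**3 - 8*x**2 - 8*x - 8, 8.0, 10.0)
# Case (4) with s = 6:  P(x) = x^3 - 6x^2 - 8x + 8,  P(6) < 0 < P(8)
lam4 = largest_root(lambda x: x**3 - 6*x**2 - 8*x + 8, 6.0, 8.0)
print(lam1, (lam1 - 9) * 64)   # theta_1 for s = 8
print(lam4, (lam4 - 7) * 36)   # theta_4 for s = 6
```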
\begin{theo}
There exists $s_0\in\mathbb{N}$ such that for $N\ge a^{s_0}$ and all sufficiently large $m$ the following bound (growing in $m$) holds:
$$
f(a^m,N)\ge \lceil C_2(N)m^{\log_{2}\lambda} \rceil, \qquad C_2( N)\gg 2^{-5\log^2_2\log_2 N},
$$
where $\lambda$ is the largest real root of the polynomial $(3)$ with $s=\lfloor \log_a N \rfloor.$
\end{theo}
\begin{rem}
For $s=6$, the polynomial (4) is the characteristic polynomial of the matrix $2A$, where
$$
A=\begin{pmatrix}
2&1&1\\
2&0&1\\
1&1&1
\end{pmatrix}
$$
\end{rem}
\begin{theo}
For sufficiently large $m$ and $s=6$, Theorem 1 admits an improvement. One has the bound:
$$
f(a^m, a^6)\ge \left\lceil Cm^{1 + \frac{1}{5}\log_{2}\mu}\right\rceil \asymp m^{\log_{2}{2\sqrt[5]{\mu}}},
$$
where $C>0$ and $\mu$ is the largest real eigenvalue
of the matrix
$$
B={\begin{pmatrix}
2&1&1\\
2&0&1\\
1&1&1\\
\end{pmatrix}}^5 + {\begin{pmatrix}
0&1&-1\\
0&1&-1\\
0&1&-1
\end{pmatrix}}
\eqno (8)
$$
\end{theo}
\begin{rem}
For $s=6$ the following inequality holds:
$$
2\sqrt[5]{\mu}-\lambda > 0.0000756,
$$
where $\lambda$ is the largest real eigenvalue of the matrix $2A$,
and $\mu$ is the largest real eigenvalue of the matrix $B.$
\end{rem}
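This inequality can be checked numerically; the sketch below (our verification) computes the dominant eigenvalues by plain power iteration on the $3\times3$ matrices $2A$ and $B=A^5+D$, where $D$ denotes the second matrix in (8).

```python
def matmul(X, Y):
    """Product of two 3x3 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def dominant_eigenvalue(M, iters=300):
    """Power iteration; assumes a simple positive dominant eigenvalue."""
    v, lam = [1.0, 1.0, 1.0], 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

A = [[2, 1, 1], [2, 0, 1], [1, 1, 1]]
D = [[0, 1, -1], [0, 1, -1], [0, 1, -1]]
A5 = A
for _ in range(4):
    A5 = matmul(A5, A)                                   # A^5
B = [[A5[i][j] + D[i][j] for j in range(3)] for i in range(3)]

lam = dominant_eigenvalue([[2 * x for x in row] for row in A])  # lambda for 2A
mu = dominant_eigenvalue(B)                                     # mu for B
print(lam, 2 * mu ** 0.2, 2 * mu ** 0.2 - lam)
```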
\begin{theo}
For $m\ge 8$ one has the bound
$f(3^m, 4)\ge \left\lceil\frac{m+1}{4}\right\rceil$.
\end{theo}
\begin{theo}
Let $k\in \mathbb{N}, k\ge 2$. Then one has the bound
$f(2^{2^k-1},3) \ge 2^{k}$.
\end{theo}
\begin{rem}
The statements of the theorems remain valid if, instead of sequences, one considers continued fractions with bounded partial quotients.
\end{rem}
\section{Description of the method and main lemmas.}
\begin{lemm}\cite{hensley}
Let $b\in \mathbb{N}, b>1$, and let $\textbf{u}=(u_1,\ldots,u_n)$ be a sequence of natural numbers with $u_n>1$.
Put \begin{align*} \textbf{w}=&(u_1,\ldots,u_{n-1},u_n-1,1, b-1, u_n,\ldots,u_1),\\ \textbf{w}\prime=&(u_1,\ldots,u_{n-1},u_n-1,u_n+1,u_{n-1},\ldots,u_1).\end{align*} Then $\langle \textbf{w}\rangle=b{\langle \textbf{u}\rangle}^2, \langle \textbf{w}\prime \rangle={\langle \textbf{u}\rangle}^2.$
\end{lemm}
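Lemma 1 is easy to verify numerically; the sketch below (our illustration) checks both identities for $\textbf{u}=(2,3)$ and $b=4$, using the continuant recurrence.

```python
def continuant(seq):
    """<a1, ..., an> via K_k = a_k * K_{k-1} + K_{k-2}, with K(empty) = 1."""
    p, q = 1, 0
    for a in seq:
        p, q = a * p + q, p
    return p

u, b = [2, 3], 4
# w  = (u1, ..., u_{n-1}, u_n - 1, 1, b - 1, u_n, ..., u1)
w = u[:-1] + [u[-1] - 1, 1, b - 1] + u[::-1]
# w' = (u1, ..., u_{n-1}, u_n - 1, u_n + 1, u_{n-1}, ..., u1)
w2 = u[:-1] + [u[-1] - 1, u[-1] + 1] + u[-2::-1]
print(continuant(u), continuant(w), continuant(w2))
```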
\begin{lemm}
Let $m>s$ be given, and let $b=a^r$, where $a, r\in \mathbb{N}_0$, $a\ge 2$, $r\le s$,
$r\equiv m \pmod 2$. Let $\textbf{u}=(u_1,\ldots,u_n)$ be a sequence of natural numbers such that ${1\le u_i\le a^s -1,\ i=1,\ldots, n,\quad u_1, u_n\neq 1, a^s-1}$ and ${\langle\textbf{u}\rangle=a^{\frac{m-r}{2}}}$.
Then:
\begin{itemize}
\item[(1)] for the sequences
$$
\textbf{w}=(w_1,\ldots, w_J)=
\left\{
\begin{aligned}
&(u_1,\ldots,u_n,b-1,1, u_n-1,u_{n-1},\ldots,u_1)\qquad &\text{for } r\neq 0,\\
&(u_1,\ldots,u_{n-1},u_n-1,1, b-1, u_n,\ldots,u_1)\qquad &\text{for } r\neq 0,\\
&(u_1,\ldots,u_{n-1},u_n+1,u_n-1,u_{n-1},\ldots,u_1)\qquad &\text{for } r=0,\\
&(u_1,\ldots,u_{n-1},u_n-1,u_n+1,u_{n-1},\ldots,u_1)\qquad &\text{for } r=0,\\
\end{aligned}
\right.
$$
one has $\langle \textbf{w}\rangle=a^{m};$
\item[(2)] each of the listed sequences $\textbf{w}=(w_1,\ldots, w_J)$ satisfies:
\item[$\bullet$] $J=2(n+r);$
\item[$\bullet$] $ 1\le w_j\le a^s-1$ for $1\le j\le J$;
\item[$\bullet$] $w_1,w_{J} \neq1, a^s-1;$
\item[$\bullet$] all these sequences are distinct.
\end{itemize}
\end{lemm}
\begin{proof} All the assertions of Lemma 2 follow from Lemma 1, Statement 1, and the definition of the sequences.
\end{proof}
\begin{lemm}
Suppose that the sequences $\textbf{w}_1$ and $\textbf{w}_2$ with $\langle \textbf{w}_1\rangle=\langle\textbf{w}_2\rangle =a^m $ are obtained, via Lemma 2, from $ \textbf{u}=(u_1,\ldots,u_n), \langle\textbf{u}\rangle=a^{m_1} $ and $ \textbf{v}=(v_1,\ldots,v_k), \langle\textbf{v}\rangle=a^{m_2}$, using $b_1=a^{r_1} $ and $b_2=a^{r_2} $, respectively. Then, if $\textbf{u}$ and $\textbf{v}$ are distinct, $\textbf{w}_1$ and $\textbf{w}_2$ are also distinct.
\end{lemm}
\begin{proof}
Suppose the sequence $\textbf{u}$ satisfies the hypotheses of Lemma 2 and the sequence $\textbf{w}_1$ is obtained from $\textbf{u}$. Then the length of $\textbf{w}_1$ equals either $2n$ or $2n+2$. Similarly, if $\textbf{v}$ satisfies the hypotheses of Lemma 2, then the length of the sequence $\textbf{w}_2$ obtained from it equals either $2k$ or $2k+2$.

We consider two cases.

1. Let $n=k$ (the lengths of the sequences coincide). Then, if $r_1=r_2=0$, the sequences $\textbf{w}_1$ and $\textbf{w}_2$ can have the form
$$
\textbf{w}_1=
\left[
\begin{aligned}
&(u_1,\ldots,u_{n-1}, u_n+1,u_n-1,u_{n-1},\ldots,u_1),\\
&(u_1,\ldots,u_{n-1},u_n-1,u_n+1,u_{n-1},\ldots,u_1);\\
\end{aligned}
\right.
$$
$$
\textbf{w}_2=
\left[
\begin{aligned}
&(v_1,\ldots,v_{n-1},v_n+1,v_n-1,v_{n-1},\ldots,v_1),\\
&(v_1,\ldots,v_{n-1},v_n-1,v_n+1,v_{n-1}\ldots,v_1),\\
\end{aligned}
\right.
$$
Suppose that $\textbf{w}_1=\textbf{w}_2$. This is impossible, since then, first, ${(u_1,\ldots,u_{n-1})=(v_1,\ldots,v_{n-1})}$ and, second, either ${u_n=v_n}$, \\or ${u_n + 1=v_n-1}$ and ${u_n -1=v_n +1}$ simultaneously.
\\
If instead $r_1,r_2>0$, then
$$
\textbf{w}_1=
\left[
\begin{aligned}
&(u_1,\ldots,u_n, a^{r_1}-1,1,u_n-1,u_{n-1},\ldots,u_1),\\
&(u_1,\ldots,u_{n-1},u_n-1,1,a^{r_1}-1,u_n,\ldots,u_1);\\
\end{aligned}
\right.
$$
$$
\textbf{w}_2=
\left[
\begin{aligned}
&(v_1,\ldots,v_n, a^{r_2}-1, 1,v_n-1,v_{n-1},\ldots,v_1),\\
&(v_1,\ldots,v_{n-1},v_n-1,1, a^{r_2}-1, v_n,\ldots,v_1).\\
\end{aligned}
\right.
$$
If we suppose that $\textbf{w}_1=\textbf{w}_2$, then, since $\textbf{u}\neq \textbf{v}$, we get
$$(u_1,\ldots,u_n,a^{r_1}-1,1,u_n-1,u_{n-1},\ldots,u_1) = (v_1,\ldots,v_{n-1},v_n-1,1, a^{r_2}-1, v_n,\ldots,v_1),$$
that is,
$u_n=v_n-1 \text{ and }
u_n - 1=v_n
$
simultaneously, which is impossible.

2. Let $n=k+1$ (the lengths of the sequences $\textbf{u}$ and $\textbf{v}$ differ by 1); then $r_1=0$, $r_2\ge 1$ and
$$
\textbf{w}_1=
\left[
\begin{aligned}
&(u_1,\ldots,u_k, u_{k+1}+1,u_{k+1}-1,u_k,\ldots,u_1),\\
&(u_1,\ldots,u_k,u_{k+1}-1,u_{k+1}+1,u_k,\ldots,u_1);\\
\end{aligned}
\right.
$$
$$
\textbf{w}_2=
\left[
\begin{aligned}
&(v_1,\ldots,v_k, a^{r_2}-1, 1,v_k-1,v_{k-1},\ldots,v_1),\\
&(v_1,\ldots,v_{k-1},v_k-1,1, a^{r_2}-1, v_k,\ldots,v_1),\\
\end{aligned}
\right.
$$
and supposing that $\textbf{w}_1=\textbf{w}_2$ we get $v_{k}=u_k \text{ and } v_k-1=u_k$ simultaneously, which is impossible.
\end{proof}
\begin{rem}
Suppose that a sequence
$\textbf{u}=(u_1,\ldots,u_n), \; \langle\textbf{u}\rangle=a^m,\; 1\le u_i\le a^s-1,\; i=1,\ldots,n$, has been obtained by applying Lemma 2.
Then the sequences ${(1, u_1-1,\ldots, u_n)}$, ${(u_1,\ldots,u_n-1,1)}$, and ${(1, u_1-1,\ldots,u_n-1, 1)}$ satisfy:
\begin{itemize}
\item[1.] their continuants equal $a^m$;
\item[2.] their elements are natural numbers;
\item[3.] their elements are strictly bounded above by $N=a^s$;
\item[4.] the listed sequences are distinct.
\end{itemize}
\end{rem}
\newpage
\begin{explain}
To estimate $f_m$ for $m\ge2$ we use the numbers $g_m$ defined as follows:
$$
g_2=\ldots=g_{s+2}=1;
$$
for $m\ge s+3$ the numbers $g_m$ are defined by the recurrence
$$
g_m=2\sum_{\substack{r=0\\ r\equiv m \pmod 2}}^{s} g_{\frac{m-r}{2}}=2\sum_{k=\left\lfloor \frac{m-s+1}{2} \right\rfloor }^{\left\lfloor\frac{m}{2}\right\rfloor}g_{k}.
\eqno (9)
$$
For $s\ge 3$ and $m=2,\ldots,s$ we have $f_m\ge 4 g_m$, since for each such $m$ there exists at least one sequence of natural numbers $(u_1,\ldots, u_n)$ with $u_i< a^s, i=1,\ldots,n$, and $u_1,u_n\neq1, a^s-1$. This follows from the existence, for each such $m$, of irreducible fractions with denominator $a^m$ and numerator less than $\frac{a^m}{2}$ whose partial quotients in the continued fraction representation are (strictly) bounded by $a^s$. The factor $4$ appears by Remark $5$. Sequences whose continuants equal ${a^{s+1}, a^{s+2}\text{ and } a^{s+3}}$ exist by Lemma 2, applied to sequences with continuants $a^{\left\lfloor\frac{s}{2}\right\rfloor}, a^{\left\lfloor\frac{s-2}{2}\right\rfloor}$, together with Remark $5$.\\
For $m\ge s+3$, applying Lemmas 2 and 3 with the various ${r\in\{1,\ldots, s\}}$, where ${r\equiv m \pmod 2}$, we obtain ${f_m\ge g_m}$; applying Remark $5$ then gives ${f_m\ge4 g_m}$.\\
(The cases of the bounds $N$ not considered here will be treated separately.)

The method consists in giving as sharp as possible a lower bound for $g_m$, based on formula $(9)$ applied a sufficiently large number of times.
\end{explain}
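The numbers $g_m$ are straightforward to tabulate directly from (9); the sketch below (our illustration, taking $s=6$ as an example) does so and checks the first values after the initial segment $g_2=\ldots=g_{s+2}=1$.

```python
def g_table(s, M):
    """g_2, ..., g_M: g_2 = ... = g_{s+2} = 1, then the recurrence (9),
    g_m = 2 * sum of g_k for floor((m-s+1)/2) <= k <= floor(m/2)."""
    g = {m: 1 for m in range(2, s + 3)}
    for m in range(s + 3, M + 1):
        g[m] = 2 * sum(g[k] for k in range((m - s + 1) // 2, m // 2 + 1))
    return g

g = g_table(6, 40)
print([g[m] for m in range(2, 20)])
```

For $s=6$, for instance, $g_9=2(g_2+g_3+g_4)=6$ and $g_{18}=2(g_6+g_7+g_8+g_9)=18$.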
\begin{theo}Let $a, s\in \mathbb{N}, a\ge 2.$
Within the framework of the method under consideration, for fixed $s$ and $m\rightarrow\infty$ it is impossible to obtain a lower bound for the quantity $f(a^m, a^s)$ of a sharper order (compared with Theorem~1) in the following cases:
\begin{itemize}
\item[\textit{(i)}] odd $s\ge 3$,
\item[\textit{(ii)}] $s=4$,
\item[\textit{(iii)}] $s=2$.
\end{itemize}
\end{theo}
\section{Proofs of the theorems.}
\begin{defi}
For two vectors $u \in\mathbb{N}^n_0 \text{ and } v\in \mathbb{N}^n_0 \text{ we write } u>v \; (u\ge v)$ $ \text{ if for every } i,\ 1\le i \le n, \text{ we have } u_i > v_i\; (u_i\ge v_i). $
Similarly, for two matrices $A=(a_{ij}) \in\mathbb{N}^{n^2}_0 \text{ and } B=(b_{ij})\in \mathbb{N}^{n^2}_0 \text{ we write }$ \\${ A>B \; (A\ge B)}\text{ if for every pair } i,j: 1\le i,j \le n \text{ we have }$
\\${ a_{ij} > b_{ij}\; (a_{ij}\ge b_{ij}).} $
\end{defi}
\newpage
\begin{proof}[Proof of Theorem 1]
$ \text{ }$\par
\begin{flushleft}\textbf{The case of even $\textbf{s}\ge \textbf{6}$}. \par We split the proof into two parts: $s\equiv2 \pmod 4$ and $s\equiv0 \pmod 4$.\end{flushleft}

1. Let $s=4q+2, q\in \mathbb{N}$.

For every even $m > s+2$ the recurrence $(9)$ for $g_m$ contains $\frac{s}{2}+1$ summands, while for odd $m > s+2$ this sum has $\frac{s}{2}$ summands. Among the numbers $g_m, m\ge 2$, we single out the classes $A_1\subset A_2\subset A_3$:
$$
\begin{aligned}
&A_1=\{ g_l \mid l\equiv 0 \pmod 2\},\\
&A_2=\{ g_l \mid l\equiv 0 \pmod 2\}\cup \{ g_l \mid l\equiv 1 \pmod 4\},\\
&A_3=\{ g_l \mid l \in \mathbb{N}\}.
\end{aligned}
\eqno (10)$$
Thus, for $m\ge s+3$, membership of $g_m$ in a class can be described as follows:

$A_1=\{ g_m \mid $ the sum in (9) contains $ \frac{s}{2} + 1 $ summands, of which $\lceil\frac{s}{4}\rceil$ have even index$\}$,

$A_2=\{ g_m \mid $ the sum in (9) contains at least $ \frac{s}{2} $ summands, of which at least $ \lceil\frac{s}{4} \rceil$ have even index$\}$,

$A_3=\{ g_m \mid $ the sum in (9) contains at least $ \frac{s}{2} $ summands, of which at least $\lfloor\frac{s}{4}\rfloor$ have even index$\}$.

Let a sufficiently large $m$ be given. Then the result of one step of estimating $g_m$ by the recurrence $(9)$ can be written, using the scalar product of a row vector and a column vector, as follows:
$$
g_m\ge 2(q, \left\lfloor\frac{q + 1}{2}\right\rfloor, \left\lceil\frac{q + 1}{2}\right\rceil)\begin{pmatrix} a^0_1\\ \rule{0pt}{5mm}a^0_2\\ \rule{0pt}{5mm} a^0_3 \end{pmatrix},
\eqno (11)$$
where $a^0_1, a^0_2, a^0_3$ are the smallest elements of the classes $A_1, A_2 \text{ and } A_3$, respectively, appearing after one application of the recurrence, and $q, \left\lfloor\frac{q + 1}{2}\right\rfloor, \left\lceil\frac{q+1}{2}\right\rceil$ are the numbers of elements guaranteed to belong to each of the classes appearing after the application of the formula. Here $\left\lceil\frac{q+1}{2}\right\rceil$ is the number of elements of $A_2$ not counted among the elements of $A_1$, and $\left\lfloor\frac{q+1}{2}\right\rfloor$ is the number of elements of $A_3$ not counted among the elements of $A_1$ and $A_2$.
After the first application of formula $(9)$ we apply the same recurrence to every summand of the resulting sum, and so on. Consider the sequence of nested intervals $I_0\subset I_1\subset\ldots\subset I_n$ corresponding to the ranges of the indices $k$ of the summands $g_k$ occurring in the sum $(9)$ for $g_m$ after $j$ applications of the recurrence, $j=1,\ldots,n+1$.
$$
\begin{aligned}
I_0=&\left\{\; v\in\mathbb{N} \quad | \quad \lfloor\frac{m-s+1}{2}\rfloor\le v\le \lfloor\frac{m}{2}\rfloor \; \right\},
&i_0=\min_{\substack{v \in \text{ } I_0}} v;\\
I_1=&\left\{\; v\in\mathbb{N} \quad | \quad \lfloor\frac{i_0-s+1}{2}\rfloor\le v\le \lfloor\frac{m}{2}\rfloor \; \right\},
&i_1=\min_{\substack{v \in \text{ } I_1}} v;\\
&\qquad \qquad
\qquad \qquad
\cdots \cdots \cdots &\\
I_{n-1}= &\left\{\; v\in\mathbb{N} \quad | \quad \lfloor\frac{i_{n-2}-s+1}{2}\rfloor\le v\le \lfloor\frac{m}{2}\rfloor \; \right\},
&\quad i_{n-1}=\min_{\substack{v \in \text{ }I_{n-1}}} v;\\
I_{n}= &\left\{\; v\in\mathbb{N} \quad | \quad \lfloor\frac{i_{n-1}-s+1}{2}\rfloor\le v\le \lfloor\frac{m}{2}\rfloor \; \right\},
&i_{n}=\min_{\substack{v \in \text{ }I_{n}}} v, \\
\end{aligned}
\eqno (12)
$$
We want to choose the number $n=n(m)$ as large as possible. The only restriction is the condition $i_{n-1}\ge s+3$ (otherwise a further application of formula $(9)$ is not legitimate).
\\Thus, $n = \max \{ \; j \quad | \quad i_{j-1} \ge s+3\; \}. $
\begin{flushleft} We also define the numbers $a^i_j,\; i=0,\ldots,n;\ j=1,2,3$:
$$
a^i_j=\min \{\; g_k\in A_j\quad | \quad k\in I_i\;\}.
$$
\end{flushleft}
\begin{lemm} For $l=0,\ldots,n-1$:
$$
\begin{pmatrix} a^l_1\\ \rule{0pt}{5mm}a^l_2\\\rule{0pt}{5mm} a^l_3 \end{pmatrix}\ge 2A\begin{pmatrix} a^{l+1}_1\\ \rule{0pt}{5mm}a^{l+1}_2\\ \rule{0pt}{5mm}a^{l+1}_3 \end{pmatrix}, \qquad \text{ где } A=\begin{pmatrix}q+1 & \left\lfloor\frac{q+1}{2}\right\rfloor & \left\lceil\frac{q+1}{2}\right\rceil\\ \rule{0pt}{5mm} q+1 & \left\lfloor \frac{q}{2} \right\rfloor & \left\lceil\frac{q}{2}\right\rceil \\ \rule{0pt}{5mm} q & \left\lfloor \frac{q+1}{2} \right\rfloor & \left\lceil\frac{q+1}{2}\right\rceil \end{pmatrix}
\eqno (13)
$$
\end{lemm}
\begin{proof}
Indeed, for each $j\in\{1, 2, 3\}$, the number $a^l_j$
is the smallest element of the class $A_j$ appearing after $l+1$
applications of the recurrence (9). We continue estimating this element by the recurrence,
counting how many indices $k$ in the sum $(9)$ for $a^l_j$ are guaranteed to belong to the classes
$A_1$ and $A_2$ (the remaining ones are counted as belonging to $A_3$).
The number
$a^l_1$ is bounded below by twice the sum of $q+1$ elements of the class $A_1$, $\left\lceil\frac{q+1}{2}\right\rceil$ elements of the class $A_2$, and $\left\lfloor\frac{q+1}{2}\right\rfloor$ elements of the class $A_3$, the indices of these elements lying in $\left[i_{l+1}, i_{l+1}+\frac{s+2}{2}\right]$; this expression is at least the sum obtained by multiplying the first row of the matrix $2A$ by the column $(a^{l+1}_1, a^{l+1}_2, a^{l+1}_3)^{\perp}$, since $a^{l+1}_j$, $j=1,2,3$, are the smallest elements of the corresponding classes with indices in $\left[i_{l+1}, i_{l+1} + \frac{s+2}{2}\right]$. The numbers $a^l_2$ and $a^l_3$, and accordingly the second and third rows of the matrix $A$, are treated similarly.
\end{proof}
\begin{lemm}
Пусть $n$ - максимально возможно количество применений формулы (9)
при вычислении $g_m$, ограниченное условием $i_{n-3}\ge s+3$, тогда
$$n\ge\left\lceil\log_2 \left\lfloor\frac{m-s+1}{4s+4}\right\rfloor\right\rceil.
\eqno (14)$$
\end{lemm}
\begin{proof}
Заметим, что $i_j\le 2i_{j+1} + s$, поэтому
$$i_0\le 2 i_{1} + s\le2(2i_{2} + s) + s\le\ldots\le2^{n}i_n + (2^n -1)s, \text{ где } i_n\in\{2,\ldots,s+2\}.$$
Отсюда
$$
n\ge \min_{i_n\in{2,\ldots,s+2}} \log_2 \left(\frac{i_0 + s}{i_n + s} \right)=log_2 \left(\frac{\left\lfloor\frac{m-s+1}{2}\right\rfloor + s}{2s+2} \right).
$$
Таким образом,
$$
n\ge\left\lceil \log_{2}\left\lfloor\frac{m-s+1}{4(s+1)}\right\rfloor\right\rceil.
$$
\end{proof}
\begin{lemm} Пусть $\lambda$ - наибольшее из действительных собственных
значений матрицы $2A$, где $A$ определена в (13), тогда:
$$
(q, \left\lfloor\frac{q+1}{2}\right\rfloor, \left\lceil\frac{q+1}{2}\right\rceil )(2A)^n\begin{pmatrix} a^n_1\\ a^n_2\\ a^n_3 \end{pmatrix}\ge
\frac{1}{3}{\lambda}^n.
\eqno (15)$$
\end{lemm}
\begin{proof}
Так как матрица $ A $ слева и справа домножается на вектора из положительного октанта, то
$$
(q, \left\lfloor\frac{q+1}{2}\right\rfloor, \left\lceil\frac{q+1}{2}\right\rceil)(2A)^n\begin{pmatrix} a^n_1\\ a^n_2\\ a^n_3 \end{pmatrix}\ge \max (2A)^n,
\eqno (16)
$$
где $\max (2A)^n$ - максимальный элемент матрицы $(2A)^n$.
Пусть $\bar{x}$ - собственный вектор матрицы $2A$, отвечающий собственному значению $\lambda$ и \\ ${| \:\bar{x}\:|=\max (x_j\; | \; j=1,2,3)}$, тогда
$$
2A\bar{x}=\lambda \bar{x} ,
\quad (2A)^n\bar{x}={\lambda}^n \bar{x}.
$$
Оценим $| \:(2A)^n\bar{x}\: |$:\\ c одной стороны
$
| \:(2A)^n\bar{x}\: |=| \:{\lambda}^n\bar{x}\: |={\lambda}^n| \:\bar{x}\: |
$ так как $\lambda>0$;\\
с другой стороны, для некоторого $i\in\{1, 2, 3\}$, \\
${|\:(2A)^n \bar{x} \: |= |\: a^{(n)}_{i1}x_1 + a^{(n)}_{i2}x_2 +a^{(n)}_{i3}x_3\:|\le 3\max (2A)^n|\:\bar{x}\:|}$, где ${(2A)^n=(a^{(n)}_{ij})}$.\\
Поэтому, $$\max (2A)^n\ge \frac{1}{3}{\lambda}^n.
\eqno (17)
$$
Объединяя неравенства (16) и (17), получаем
$$
(q, \left\lfloor\frac{q+1}{2}\right\rfloor, \left\lceil\frac{q+1}{2}\right\rceil)(2A)^n\begin{pmatrix} a^n_1\\ a^n_2\\ a^n_3 \end{pmatrix}\ge \frac{1}{3}
{\lambda}^n.
$$
\end{proof}
Применение лемм 4, 5 и 6 позволяет при $m\ge 5s+5$ продолжить неравенство (11):
$$
\begin{aligned}
&g_m\ge
2(q, \left\lfloor\frac{q+1}{2}\right\rfloor, \left\lceil\frac{q+1}{2}\right\rceil)(2A)^n\begin{pmatrix} a^n_1\\ a^n_2\\ a^n_3 \end{pmatrix}
\ge \frac{2}{3}{(\lambda)}^{\left\lceil\log_2\left\lfloor\frac{m-s+1}{4s+4}\right\rfloor\right\rceil}
\ge \\ &\ge\frac{2}{3}\frac{{\lambda}^{\log_2 m}}{{\lambda}^{\log_2 (s+1) + 3}}\gg \frac{1}{s^{\log_2 (s+1) + 3}}m^{\log_2 \lambda},
\end{aligned}
\eqno (18)$$
где константа в знаке $\gg$ является абсолютной.
\begin{lemm}
Наибольшее из действительных собственных собственных значений матрицы $2A$, где матрица $A$ определена в $(13)$, - это наибольший из действительных корней
\item[$\textit{(i)}$] многочлена (2), при $s=8r + 2, l\in\mathbb{N}$;
\item[$\textit{(ii)}$] многочлена (4), при $s=8r + 6, l\in\mathbb{N_0}$;
\end{lemm}
\begin{proof}
В случаях $\textit(i)$ и $\textit(ii)$ матрица $A$ принимает, соответственно, вид
$$
\begin{pmatrix} 2r+1& r & r+1\\ 2r+1 &r & r \\ 2r& r& r+1\end{pmatrix}, \quad\qquad \begin{pmatrix} 2r+2& r+1 & r+1\\ 2r +2 & r & r+1 \\ 2r+1 & r+1 & r+1\end{pmatrix}.
$$
Таким образом, характеристическим для матрицы $2A$
в случае $\textit{(i)}$ будет многочлен $(2)$, а в случае $\textit{(ii)}$ - многочлен $(4)$.
%
\end{proof}
2. Пусть $s=4q,\text{ где } q\in \mathbb{N}, q\ge 2$
Заметим, что для любого четного $m > s+2$ рекуррентная формула $(9)$ для $g_m$ содержит $\frac{s}{2}+1$ слагаемое, а для любого нечетного $m > s+2$ указанная сумма имеет $\frac{s}{2}$ слагаемых. Выделим среди чисел $g_m, m\ge 2$ классы $A_1\subset A_2\subset A_3$:
$$
\begin{aligned}
&A_1=\{ g_l | l\equiv0\:(\mod 4)\}\\
&A_2=\{ g_l | l\equiv0\:(\mod 2)\}\\
&A_3=\{ g_l | l \in \mathbb{N}\}
\end{aligned}
\eqno(19)
$$
Таким образом, при $m\ge s+3$, принадлежность $g_m$ классу можно определить следующим образом:
$A_1=\{ g_m | $ сумма в $(9)$ имеет $ \frac{s}{2} + 1 $ слагаемое, из которых $ \lceil\frac{s+2}{4}\rceil $ четные$\}$
$A_2=\{ g_m | $ сумма в $(9)$ имеет $ \frac{s}{2} + 1 $ слагаемое, из которых хотя бы $ \lfloor\frac{s+2}{4} \rfloor$ четные$\}$
$A_3=\{ g_m | $ сумма в $(9)$ имеет хотя бы $ \frac{s}{2} $ слагаемых, из которых хотя бы $\lfloor\frac{s}{4}\rfloor$ четные$\}$
\begin{flushleft}Проводя рассуждения, аналогичные тем, что были проведены при рассмотрении предыдущего случая, получаем вид матрицы $ A $ и для каждого $m\ge 5s+5$ оценку $g_m$:\end{flushleft}
$$
A=\begin{pmatrix} \left\lfloor\frac{q+1}{2}\right\rfloor & \left\lceil\frac{q+1}{2}\right\rceil & q\\ \rule {0pt}{5mm}\left\lfloor \frac{q}{2} \right\rfloor & \left\lceil\frac{q}{2}\right\rceil & q+1 \\ \rule{0pt}{5mm}\left\lfloor \frac{q}{2} \right\rfloor & \left\lceil\frac{q}{2}\right\rceil & q \end{pmatrix},
\eqno(20)$$
$$
\begin{aligned}
&g_m\ge2(\left\lfloor\frac{q}{2}\right\rfloor, \left\lceil\frac{q}{2}\right\rceil, q){(2A)}^n\begin{pmatrix} 1\\ 1 \\ 1 \end{pmatrix}\ge
\frac{2}{3}{\lambda}^{\left\lceil \log_2 {\left\lfloor\frac{m-s+1}{4s+4}\right\rfloor} \right\rceil}
\ge \\ &\ge\frac{2}{3}
\frac{{\lambda}^{\log_2 m}}{{\lambda}^{\log_2 (s+1) + 3}}\gg \frac{1}{s^{\log_2 (s+1) + 3}}m^{\log_2 \lambda}
\end{aligned}
$$
\begin{lemma}
Наибольшее из действительных собственных собственных значений матрицы $2A$, где матрица $A$ определена в $(20)$, - это наибольший из действительных корней
\item[$\textit(i)$] многочлена (1), при $s=8r, l\in\mathbb{N}$;
\item[$\textit(ii)$] многочлена (3), при $s=8r+4, l\in\mathbb{N}$;
\end{lemma}
\begin{proof}
В случаях $\textit(i)$ и $\textit(ii)$ матрица $A$ принимает вид
$$
\begin{pmatrix} r& r+1 & 2r\\ r &r & 2r+1 \\ 2r& r& r+1\end{pmatrix}, \quad\qquad \begin{pmatrix} r+1& r+1 & 2r+1\\ r & r+1 & 2r+2 \\ r & r+1 & 2r+1\end{pmatrix}.
$$
Тогда характеристическим для матрицы $2A$ будет многочлен $(1)$ или $(3)$ соответственно.
\end{proof}
\newpage
\begin{flushleft}\textbf{Случай $\textbf{s=4}$.} \end{flushleft}\par
Необходимо среди чисел $g_m, m\ge 2$ выделить классы
$$
\begin{aligned}
&A_1=\{\:g_m\; | \; m\equiv 0 \:(\mod 2)\:\}\\
&A_2=\{\:g_m\; | \; m\ge 2\}.
\end{aligned}
\eqno(21)
$$
Тогда, как и в случае четного $s\ge6$, введем матрицу
$$
A=\begin{pmatrix} 1& 2\\ 1& 1
\end{pmatrix}
\eqno (22)$$
Пусть $\lambda$ - наибольшее из собственных значений матрицы $2A$.\\
Снова рассмотрим интервалы ${ I_0\subset\ldots\subset I_n}$, числа $i_0,\ldots, i_n$ (12) \\и числа ${a^j_i=\min \{g_k\in A_i\; | \;k\in I_j\}}$, тогда для каждого $m>s+2$
$$
g_m\ge 2\begin{pmatrix}1&1\end{pmatrix}\begin{pmatrix}a^0_1\\ a^0_2\end{pmatrix}.
\eqno (23)
$$
если $m\ge 25$, то продолжим неравенство (23):
$$
\begin{aligned}
g_m\ge
2\begin{pmatrix}1&1\end{pmatrix}{(2A)}^n\begin{pmatrix}1\\1\end{pmatrix}\ge
{\lambda}^{\left\lceil log_2 {\left\lfloor\frac{m-3}{20}\right\rfloor}\right\rceil}
\gg m^{\log_2 \lambda}
\end{aligned}$$
\begin{flushleft}\textbf{Случай $\textbf{s=2}$.} \end{flushleft
Пусть $a=2$. Для $m=6, 7, \ldots, 11$ положим $g_6=\ldots=g_{11}=1$, так как существуют последовательности, удовлетворяющие условию нашей теоремы, действительно:
$$
\begin{aligned}
2^6&=\langle2, 1, 3, 1, 1, 2\rangle,\\
2^7&=\langle2, 1, 2, 1, 1, 1, 1, 2\rangle,\\
2^8&=\langle2, 3, 3, 1, 3, 2\rangle,\\
2^9&=\langle2, 3, 2, 1, 1, 1, 3, 2\rangle,\\
2^{10}&=\langle2, 3, 2, 3, 1, 1, 3, 2\rangle,\\
2^{11}&=\langle2, 1, 1, 2, 3, 3, 1, 1, 2, 2\rangle.\\
\end{aligned}
$$
Для $m\ge 12$ значения $g_m$ вычисляются по рекуррентной формуле (9), которая теперь принимает вид
$$
g_m \ge 2g_{\left\lfloor\frac{m}{2}\right\rfloor}.
\eqno(24)$$
В этом случае, в обозначениях (12), ${i_n\in\{6, 7, 8, 9, 10, 11\}}$. Оценим снизу максимальное число $n=n(m)$ применений рекуррентной формулы $(9)$ для вычисления $g_m$. Используя такие же рассуждения, как и при доказательстве леммы $5$, получаем:
$$
{m\ge 2^n(i_n +1),}
$$
откуда
$$
n\ge \left\lceil\log_2 \frac{m+1}{12} \right\rceil.
$$
Тогда для $m\ge 12$ неравенство (24) можно продолжить:
$$
g_m\ge 2^n g_{i_n}\ge 2^{\left\lceil\log_2 \frac{m+1}{12} \right\rceil}\ge \frac{m+1}{12}.
$$
Если $a>2$, положим $g_2=g_3=g_4=1$ (так как существуют последовательности, удовлетворяющие условию теоремы, континуанты которых равны, соответственно, ${a^2, a^3, a^4}$ с неполными частными, ограниченными $a^2$). Для ${m\ge 5}$ значения $g_m$ вычисляются по рекуррентной формуле (24). Максимальное количество применений рекуррентной формулы ${n=\left\lceil\log_2 \frac{m+1}{5}\right\rceil}$. Таким образом, при $m\ge 5$ получаем ${g_m\ge \frac{m+1}{5}}$.
\par\medskip
\begin{flushleft}\textbf{Случай нечетного $\textbf{s}\ge\textbf{3}$.} \end{flushleft}
\par В рекуррентную формулу для ${g_m, \text{ где } m>s+2}$ входит $\frac{s+1}{2}$ слагаемое, не зависимо от четности $m$. Таким образом, для ${m>5s+5}$ справедлива следующая оценка:
$$
\begin{aligned}
&g_m=2\left(g_{\left\lfloor\frac{m}{2}\right\rfloor} + \ldots + g_{\left\lfloor\frac{m-s+1}{2}\right\rfloor}\right)\ge2\frac{s+1}{2}\min_{v\in \left[i_0, i_0 + \frac{s+1}{2}\right]} g_v\ge (s+1)^2\min_{v\in \left[i_1, i_1 + \frac{s+1}{2}\right] } g_v \ge\ldots\ge\\&\ge(s+1)^n\ge(s+1)^{\left\lceil\log_2 \left\lfloor\frac{m-s+1}{4s+4}\right\rfloor\right\rceil}\ge
\frac{(s+1)^{log_2 m}}{(s+1)^{3+log_2 (s+1)}}=\frac{m^{log_2 (s+1)}}{(s+1)^{3+log_2 (s+1)}}.
\end{aligned}
$$
При оценке было использовано, что интервалы $I_0\subset I_1\subset\ldots\subset I_n$ изменения индексов слагаемых, входящих в рекуррентную формулу для $g_m$ соответствуют (12) и ${i_k, k\in{1, 2,\ldots, n}}$. Также было использовано, что $n$ - максимальное количество применений формулы (9) - может быть оценено согласно лемме 5.
Теорема 1 полностью доказана.
\end{proof}
\begin{lemm}[\textbf{О сравнении корней многочленов (1)-(4)\text{ и } (7)}]
Среди наибольших действительных корней многочленов (1)-(4) и (7) наименьшим при
всех достаточно больших $s$ является корень многочлена (3).
\end{lemm}
\begin{proof}
Пусть ${\lambda}_1, {\lambda}_2, {\lambda}_3, {\lambda}_4, {\lambda}_7$ - наибольшие из корней многочленов (1), (2),(3),(4) и (7) соответственно. Тогда ${\lambda}_i \in\left(s, s+1\right)$, при $i\in\{1, 2, 3, 4\}$, то есть ${\lambda}_i=\underline{\underline{O}} (s)$. Следовательно, для ${\lambda}_1$:
$$
\begin{aligned}
&{{\lambda}_1}^3=s{{\lambda}_1}^2 +s{\lambda}_1 + s,\\
&{\lambda}_1=s +\frac{s}{{\lambda}_1} + \bar{\bar{o}}(1), \text{ получаем }\\
&{\lambda}_1=s + 1 + \varepsilon,\qquad \text{ где } \varepsilon=\underline{\underline{O}} \left(\frac{1}{s}\right).
\end{aligned}
$$
Поставляя это выражение в многочлен (1) и приравнивая к нулю, получаем
$$
\begin{aligned}
&{\left(s + 1 +\varepsilon\right)}^{3} - s{\left(s + 1 +\varepsilon\right)}^2 - s\left(s + 1 +\varepsilon\right) - s=0, \text{ или, эквивалентно, }\\
&{\varepsilon}^3 + (2s+3){\varepsilon}^2 + (s^2 + 3s + 3)\varepsilon +1=0, \text{ откуда }\\
&\varepsilon=-\frac{1}{(s^2 + 3s + 3)} - \underbrace{\frac{(2s+3){\varepsilon}^2}{(s^2 + 3s + 3)} - \frac{{\varepsilon}^3}{(s^2 + 3s + 3)}}_{\underline{\underline{O}}\left(\frac{1}{s^3}\right)}=-\frac{1}{s^2} + \underline{\underline{O}}\left(\frac{1}{s^3}\right).
\end{aligned}
$$
Следовательно,
$$
{\lambda}_1=s + 1 -\frac{1}{s^2} + \underline{\underline{O}}\left(\frac{1}{s^3}\right)=s + 1 + \frac{{\theta}_1}{s^2}, $$
где ${\theta}_1\in(-1, 0).$
Проводя аналогичные рассуждения для ${\lambda}_2, {\lambda}_3, {\lambda}_4$
получаем:
$$
\begin{aligned}
{\lambda}_2=s + 1 -\frac{3}{s^2} + \underline{\underline{O}}\left(\frac{1}{s^3}\right),\\
{\lambda}_3=s +1 -\frac{3}{s^2} + \underline{\underline{O}}\left(\frac{1}{s^3}\right),\\
{\lambda}_4=s + 1 -\frac{1}{s^2} + \underline{\underline{O}}\left(\frac{1}{s^3}\right)=s + 1 +\frac{{\theta}_4}{s^2}.
\end{aligned}
$$
где ${\theta}_4\in(-1, 0)$
Таким образом, необходимо сравнить при $s\rightarrow\infty$ величины ${\lambda}_2$ и ${\lambda}_3$.\\ Для этого положим
$$
\begin{aligned}
&{\lambda}_2=s + 1 -\frac{3}{s^2} + {\varphi}_2,\\
&{\lambda}_3=s + 1 -\frac{3}{s^2} + {\varphi}_3, \text{ где } {\varphi}_2, {\varphi}_3=\underline{\underline{O}}\left(\frac{1}{s^3}\right),
\end{aligned}
$$
и подставим значения ${\lambda}_2,\text{ и } {\lambda}_3$ в соответствующие многочлены.
Получим
$$
\begin{aligned}
&{\varphi}_2=\frac{3}{s^3} + \frac{3}{s^4} + \underline{\underline{O}}\left(\frac{1}{s^5}\right)=\frac{3}{s^3} + \frac{{\theta}_2}{s^4}, \text{ где } {\theta}_2 \in(0, 3);\\
&{\varphi}_3=\frac{3}{s^3} -\frac{6}{s^4} + \underline{\underline{O}}\left(\frac{1}{s^5}\right)=\frac{3}{s^3} -\frac{{\theta}_3}{s^4}, \text{ где } {\theta}_3 \in (-9, -6).\
\end{aligned}
$$
Таким образом, учитывая что для каждого ${i\in\{1, 2, 3, 4\}}$ значение ${\lambda}_i<s+1$, получаем, что наименьшим из ${{\lambda}_1, {\lambda}_2,{\lambda}_3,{\lambda}_4 \text{} {\lambda}_7}$ при $s\rightarrow\infty $ является
$$
{\lambda}_3=s +1 -\frac{3}{s^2} + \frac{3}{s^3} -\frac{{\theta}_3}{s^4}, \qquad {\theta}_3 \in (-9, -6).
$$
\begin{rem}
Более точное выражение для ${\lambda}_3$:
$$
\qquad {\lambda}_3=s +1 -\frac{3}{s^2} + \frac{3}{s^3} -\frac{6}{s^4} - \frac{9}{s^5} - \frac{15}{s^6} + \underline{\underline{O}}\left(\frac{1}{s^7}\right).
\eqno(25)
$$
\end{rem}
\end{proof}
\begin{proof}[Доказательство теоремы 2]
$\text{ }$\par
Согласно лемме 8, найдется $s_0\in\mathbb{N}$, такое что для всех $s\ge s_0$ наименьшим среди чисел ${{\lambda}_1, {\lambda}_2, {\lambda}_3, {\lambda}_4, s+1}$ является ${\lambda}_3$. Рассмотри произвольное ${N\in\mathbb{N}, N\ge a^{s_0}}$, и применим теорему 1 для ${s=\left\lfloor\log_2 N\right\rfloor}$.
Тогда для достаточно больших $m$ получим:
$$
f_m\ge 4g_m\ge C_3 (N) m^{log_2 \lambda_3},
$$
где $ C_3 (N)$ имеет вид:
$$
C_3\ge(s+1)^{-3-\log_{2} (s+1)}\ge \frac{1}{2^{5\log_2 {\log_2 N}}}.
$$
\end{proof}
\begin{proof}[Доказательство теоремы 3]
$\text{ }$\par
Как и при доказательстве теоремы 1 выделим среди чисел $g_m, m\ge 1$ классы $A_1\subset A_2\subset A_3$ (10).
Матрица $A$ будет иметь вид:
$$
A=\begin{pmatrix}
2&1&1\\2&0&1\\1&1&1
\end{pmatrix}
\eqno(26)$$
В основе доказательства лежит
\begin{lemm}
Для достаточно большого $m$ можно утверждать, что хотя бы на одном из 4-х последовательных шагов вычисления $g_m$ по рекуррентной формуле (9) встретилось слагаемое, индекс которого сравним с 5 по модулю 8.
\end{lemm}
\begin{proof}[]
Доказательство проходит перебором всевозможных остатков при делении $m$ на 128.
Для примера рассмотрим "наихудший" ${ }$ случай, то есть когда слагаемое, индекс которого сравним с 5 по модулю 8, появляется на 4-м шаге: пусть ${m=83 + 128k, k\in\mathbb{N}}$, тогда
$$
\begin{aligned}
&g_{83+128k}=2\left(g_{41+64k}+g_{40+64k}+g_{39+64k}\right)=\\
&=4\left(2g_{20+32k}+3g_{19+32k}+3g_{18+32k}+2g_{17+32k}\right)=\\
&=8\left(2g_{10+16k}+8g_{9+16k}+10g_{8+16k}+10g_{7+16k}+5g_{6+16k}\right)=\\
&=16\left(2g_{5+8k}+20g_{4+8k}+35g_{3+8k}+35g_{2+8k}+25g_{1+8k} +5g_{8k}\right),
\end{aligned}
$$
\end{proof}
и интересующее нас слагаемое - первое в полученном выражении.
\begin{lemm}
Пусть для достаточно большого $m$ при вычислении $g_m$ по формуле (9), появилось слагаемое с индексом, сравнимым с 5 по модулю 8. Тогда при следующем применении формулы (9) появится слагаемое, заведомо принадлежащее классу $A_2$, но не учтенное, как принадлежащее этому классу (то есть учтенное как число из $A_3$)
\end{lemm}
\begin{proof}
Если $l=5 \:(\mod 8)$, то $l$ - нечетно. Следовательно, $g_l$ могло быть учтено как член класса $A_2$ или $A_3$. Если оно было учтено как член $A_2$, то на следующем шаге мы получим 2 элемента из класса $A_1$ и один из $A_2$ вместо 2 элементов из $A_1$ и одного из $A_3$. Если же $g_l$ учтено как элемент класса $A_3$, то получаем 2 элемента из $A_1$ и один из $A_2$ вместо одного из $A_1$, одного из $A_2$ и одного из $A_3$. Заметив, что $A_1\subset A_2$, получим утверждение леммы.
\end{proof}
\begin{lemm}
Для достаточно большого $m$, при $0\le i\le n-5-5t$ ($t\in\mathbb{N}_0$, $n$ определено в ~(12)) верно:
$$
\begin{pmatrix}
a^{i}_1\\ \rule{0pt}{5mm}a^{i}_2\\ \rule{0pt}{5mm}a^{i}_3
\end{pmatrix}
\ge {2^{5}B}^t
\left(2A\right)^5
\begin{pmatrix}
a^{i+5t+5}_1\\ \rule{0pt}{5mm}a^{i+5t+5}_2\\ \rule{0pt}{5mm}a^{i+5t+5}_3
\end{pmatrix},
\eqno(27)$$
где
$$
B=A^5 + \begin{pmatrix}0 &1 & -1\\ \rule{0pt}{5mm}0 & 1 & -1\\ \rule{0pt}{5mm}0 & 1 & -1\end{pmatrix}$$
\end{lemm}
\begin{proof}
При $t=0$ утверждение леммы следует из леммы 4, примененной 5 раз. Пусть лемма уже доказана для некоторого $t\ge 0$.
Сформулируем полученные выше результаты в терминах матриц. Согласно леммам 9 и 10, для любого $i$ найдется $j\in\{2, 3, 4, 5\}$, такое что на $i+j$-ом шаге применения формулы $(8)$ появится слагаемое, принадлежащее классу $A_2$, учтенное как элемент из $A_3$ (в терминах леммы 4). Другими словами,
$$
\begin{pmatrix}
a^{i}_1\\ \rule{0pt}{5mm}a^{i}_2\\ \rule{0pt}{5mm}a^{i}_3
\end{pmatrix}
\ge 2^j \left({\begin{pmatrix} 2 & 1 & 1\\ \rule{0pt}{5mm} 2 & 0 & 1\\ \rule{0pt}{5mm} 1 & 1 & 1 \end{pmatrix}}^j + \begin{pmatrix} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\end{pmatrix}\right)
\begin{pmatrix} a^{i+j}_1\\ \rule{0pt}{5mm}a^{i+j}_2\\ \rule{0pt}{5mm}a^{i+j}_3 \end{pmatrix},
\eqno(28)
$$
Возьмем наименьшее из таких $j$.
Для оценки вектора из правой части еще $5-j$ раз применим лемму 4, ввиду чего
$$
\begin{pmatrix}
a^{i}_1\\ \rule{0pt}{5mm}a^{i}_2\\ \rule{0pt}{5mm}a^{i}_3
\end{pmatrix}
\ge 2^j \left({A}^j + \begin{pmatrix} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\end{pmatrix}\right) (2A)^{5-j} \begin{pmatrix} a^{i+5}_1\\ \rule{0pt}{5mm}a^{i+5}_2\\ \rule{0pt}{5mm}a^{i+5}_3\end{pmatrix},
\eqno(29)
$$
Докажем, что лемма верна для $t+1$. Применяя индуктивное предположение к вектору из правой части $(29)$, получим:
$$
\begin{pmatrix}
a^{i}_1\\ \rule{0pt}{5mm}a^{i}_2\\ \rule{0pt}{5mm}a^{i}_3
\end{pmatrix}
\ge 2^5 \left({A}^5 + \begin{pmatrix} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\end{pmatrix}{A}^{5-j}\right)
(2^{5}B)^{t}(2A)^{5}\begin{pmatrix} a^{i+5t + 10}_1\\ \rule{0pt}{5mm}a^{i+5t + 10}_2\\ \rule{0pt}{5mm}a^{i+5t + 10}_3\end{pmatrix},
\eqno(30)
$$
Лемма будет доказана, если показать, что минимум по $j\in\{2, 3, 4, 5\}$ произведения матриц в $(30)$ (в смысле определения 1) будет достигнут при $j=5$. Для доказательства этого факта оценим снизу произведение
$$
\begin{pmatrix} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\end{pmatrix} {A}^{5-j}
(2^{5} B)^{t}(2A)^{5}
$$
Для этого положим
$$
C=\left\{
\begin{aligned}
&B,\qquad &\text{при } t\ge 1,\\
&A^5, \qquad &\text{при } t=0
\end{aligned}
\right.
$$
и проверим конечным перебором по $j\in\{2, 3, 4, 5\}$, что
$$
\begin{pmatrix} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\end{pmatrix}{A}^{5-j}C\ge
\begin{pmatrix} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\\ \rule{0pt}{5mm} 0 & 1 & -1\end{pmatrix}C.
$$
Таким образом, минимум достигается при $j=5$.
\par
\begin{flushleft}
Лемма доказана.
\end{flushleft}
\end{proof}
Пусть $\lambda$ - наибольшее из действительных собственных значений матрицы ${2A}$, где $A$ определено в (26);
$\mu$ - наибольшее из действительных собственных значений матрицы $B$ (8).
Тогда ${2\sqrt[5]{\mu}- \lambda \ge 0.0000756}$ (что может быть проверено непосредственно на компьютере).
Таким образом, используя леммы 9-11, для достаточно больших $m$ можно дать более точную оценку для $g_m$, чем та, что получена в теореме 1:
$$
g_m\gg m^{1+\frac{1}{7}\log_{2}\mu}.
$$
\end{proof}
\begin{proof}[Доказательство теоремы 4.]
$\text{ }$\par
Будем рассматривать последовательности $(a_1,\ldots,a_n)$ произвольной длины $n$, для которых
\begin{itemize}
\item[$1.$] континуант $\langle a_1,\ldots,a_n\rangle=3^m$,где $m\in \mathbb{N}$,
\item[$2.$] $a_i\le 3, i=1,\ldots, n$,
\item[$3.$] $a_1, a_n=2$.
\end{itemize}
Для $m=1, 2, 3$ последовательностей, удовлетворяющих указанным выше условиям, не существует. Однако,
$$
\begin{aligned}
&3^4=\langle2, 2, 1, 1, 1, 1, 2\rangle,\\
&3^5=\langle2, 1, 1, 1, 3, 1, 2, 2\rangle,\\
&3^6=\langle2, 3, 1, 3, 1, 2, 1, 1, 2\rangle,\\
&3^7=\langle2, 1, 2, 2, 1, 2, 3, 2, 1, 2\rangle.
\end{aligned}
$$
Таким образом, для $m\ge 8$ возможно применить вычисления по рекуррентной формуле:
$$
\begin{aligned}
g_m=2g_{\left\lfloor\frac{m}{2}\right\rfloor}
\end{aligned}
\eqno(31)$$
Тогда для $m\ge 8$ получаем следующую оценку:
$$
g_m\ge 2^{\left\lfloor log_2 \left(\frac{m+1}{8}\right)\right\rfloor}\ge \frac{m+1}{16}.
$$
Учитывая замечание 5, получаем утверждение теоремы.
\end{proof}
\begin{proof}[Доказательство теоремы 5]
$\text{ }$\par
Рассмотрим последовательность $(2, 1, 2)$, для которой ${\langle 2, 1, 2\rangle=2^{2^1 - 1}}$.
Пусть ${g_{2^2-1}=1}$, для каждого ${k\ge 3}$ положим ${g_{2^k-1}=2g_{2^{k-1}-1}}$. Так как ${f_{2^2-1}\ge g_{2^2-1}}$, применяя лемму 2 с $b=2$ к указанной выше последовательности получим, что для каждого ${k\ge 3}$ ${f_{2^k-1}\ge g_{2^{k}-1}=2^{k}}$.
\smallskip В итоге, с учетом замечания 5, получим ${f_{2^{2^k-1}}\ge2^{k}}$.
\end{proof}
\begin{proof}[Доказательство теоремы 6]
$\text{ }$\par
Покажем, что применением описанного метода в некоторых случаях невозможно получить более точную по порядку величины $m$ оценку, чем та, что доказана в теореме 1.
\par(\textit{i}) Согласно описанию используемого метода $g_2=\ldots=g_{s+2}=1.$ В рекуррентную формулу для ${g_m, \text{ где } m>s+2}$ входит $\frac{s+1}{2}$ слагаемое. \\Таким образом, для $m>5s+5$ справедлива следующая оценка:
$$
\begin{aligned}
&g_m=2\left(g_{\left\lfloor\frac{m}{2}\right\rfloor} + \ldots + g_{\left\lfloor\frac{m-s+1}{2}\right\rfloor}\right)\le2\frac{s+1}{2}\max_{v\le 2^{-1}m} g_v\le (s+1)^2\max_{v\le2^{-2}m} g_v \le\ldots\le\\&\le(s+1)^n\max_{v in \le 2^{-n}m} g_v \le m^{\log_2 (s+1)}.
\end{aligned}
$$
При оценке было использовано, что интервалы $I_0\subset I_1\subset\ldots\subset I_n$ изменения индексов слагаемых, входящих в рекуррентную формулу для $g_m$ соответствуют (12). Также было использовано, что количество $n$ - применений формулы не может превышать $\log_2 m$.
\par\medskip
(\textit{ii})
Рассмотрим числа $g_m$, введенные при описании метода, такие что $m=2^k-1, k\in \mathbb{N}$, а также числа ${b^i_j=\max \{\; g_l\in A_i,\quad | \quad l\in \{2^{k-1-j}-1, 2^{k-1-j}-3\}\;\}}$. Напомним, что ${A_i\text{ для } i=1, 2 \text{ определены в (21)}}.
\\При $k\ge \log_2 {5s+6}$ применим рекуррентную формулу вычисления $g_m$ (9). Получим:
$$
\begin{aligned}
&g_{2^k-1}=2\left(g_{2^{k-1}-1}+g_{2^{k-1}-2}\right)= 2(1, 1)\begin{pmatrix}b^0_1\\b^0_2\end{pmatrix}=4(1, 1)\begin{pmatrix}g_{2^{k-2}-1}+g_{2^{k-2}-2}+ g_{2^{k-2}-3}\\g_{2^{k-2}-1}+g_{2^{k-2}-2}\end{pmatrix}
\le\\
&\le 4(1, 1)\begin{pmatrix}2\max(g_{2^{k-2}-1}, g_{2^{k-2}-3})+g_{2^{k-2}-2}\\ \max(g_{2^{k-2}-1}, g_{2^{k-2}-3})+g_{2^{k-2}-2}\end{pmatrix}=4(1, 1)A\begin{pmatrix}b^1_1\\b^1_2\end{pmatrix}\le\\
&\le 8(1, 1)A\begin{pmatrix}2\max (g_{2^{k-3}-1}, g_{2^{k-3}-3})+g_{2^{k-3}-2}\\ \max(g_{2^{k-3}-1}, g_{2^{k-3}-3})+g_{2^{k-3}-2}\end{pmatrix}=8(1, 1)A^2\begin{pmatrix}b^2_1\\b^2_2\end{pmatrix}\le\ldots\le\\
&\le 2^{k-3}(1, 1)A^{k-5}\begin{pmatrix}2\max (g_{2^{3}-1}, g_{2^{3}-3})+g_{2^{3}-2}\\ \max(g_{2^{3}-1}, g_{2^{3}-3})+g_{2^{3}-2}\end{pmatrix}\le 2^{k-3}(1, 1)A^{k-5}\begin{pmatrix}2g^{7} + g_6 \\g^7 + g_6\end{pmatrix}=\\
&=2^{k-3}(1, 1)A^{k-4}\begin{pmatrix}g_6 \\g_7\end{pmatrix}=2^{k-3}(1, 1)A^{k-4}\begin{pmatrix}1 \\2\end{pmatrix}
\ll {\lambda}^{\log_2 (m+1)}
\ll m^{\log_2 {\lambda}},
\end{aligned}
$$
Оценка сверху степени матрицы получена стандартным образом с помощью приведения матрицы $A$ к жордановой форме.
Таким образом, при вычислении $g_m$ указанного вида невозможно с помощью нашего метода получить оценку снизу лучше заявленной в теореме 1.
(\textit{iii})
В случае $s=2$ для $m$ - четного в рекуррентной формуле (9) для $g_m$ может быть два
слагаемых,
а для нечетного $m$ - одно слагаемое.
Следовательно, для ~$m\ge 23, \text{ такого что } m=2^k-1, \text{ где } k\in\mathbb{N}$ получаем:
$$
\begin{aligned}
&g_{2^k-1}=2g_{2^{k-1}-1}=4g_{2^{k-2}-1}=\ldots=2^{k-4}g_{15}=2^{k-3}=\frac{1}{8}2^{\log_2 (m+1)}=\frac{1}{8} (m+1).
\end{aligned}
$$
Таким образом, утверждение теоремы доказано.
\end{proof}
\newpage
| {
"attr-fineweb-edu": 1.345703,
"attr-cc_en_topic": 12,
"domain": "arxiv"
} |
BkiUfiU241xiNk_AY6zk | \section{Introduction}\label{sec:intro}
Ensembling methods have been widely used to improve the generalization performance of machine learning methods \cite{dietterich2000ensemble,zhou2012ensemble,caruana2004ensemble,dvzeroski2004combining}. However, they struggle to apply in learning with modern deep neural networks (DNNs). A modern DNN often has millions, even billions, of parameters, see e.g., \cite{beal2022billion}. A direct ensembling of $k$ DNNs leads to a $k$-folded computational overhead in terms of training time, memory requirement, and test-time prediction.
Nevertheless, important advances have been made recently in adapting the idea of ensembling to improve deep learning. For instance, the fast geometric ensembling (FGE) and snapshot ensemble (SNE) methods can train an ensemble of DNNs in the same time as a single model, thus getting around the hurdle of training time \cite{garipov2018loss,huang2017snapshot}. However, their computational overhead of training-time model recording and test-time predictions remain much higher than their single model based counterparts. To reduce the test-time cost of ensembles, \cite{bucilua2006model,hinton2015distilling} propose methods for model compression and knowledge distillation, which aim to train one single model to encompass the ``knowledge" of the ensembles. However, such methods do not consider the computational overhead due to ensemble training.
For the aforementioned methods, the computational overhead remains prohibitively high for many real-life applications with limited budgets for saving and deploying the model ensembles.
In this paper, we present PFGE, a parsimonious version of FGE, which aims to reduce both the training-time and the test-time computational overhead yielded from DNNs ensembling. Compared with state-of-the-art (SOTA) methods, PFGE achieves better generalization performance and satisfactory calibration capability, while the computational overhead of model recording and test-time predictions is significantly reduced. The design of PFGE is inspired by an observation of that running one time of stochastic weight averaging (SWA) procedure can lead to a wider optimum \cite{izmailov2018averaging}, and performing a series of SWA procedures successively could find a set of higher-performing weights than those obtained by stochastic gradient descend (SGD) \cite{guo2022stochastic}. FGE employs an ensemble of models found by SGD. We expect that, by employing an ensemble of higher-performing models found by SWA, PFGE could use much fewer model ensembles to yield a comparable performance with FGE.
As PFGE reduces both the training and test-time computational overhead without a compromise in generalization, we believe that the appearance of PFGE can remove the obstacle to a large extent in applying ensemble methods for diverse DNN applications.
Our contributions can be summarized as follows:
\begin{itemize}
\item We propose a novel, generic, and architecture-agnostic ensemble-based algorithm, referred to as PFGE, for improving DNNs in terms of generalization and calibration. PFGE maintains the full advantage of FGE in high training efficiency and desirable generalization performance, while remarkably reduces its computational overhead of model recording and test-time predictions. To our knowledge, PFGE is by far the only ensemble-based algorithm that targets reducing both the training and test-time computational overhead yielded from DNN ensembling.
\item We empirically demonstrate that employing higher-performing DNNs is an effective strategy for reducing the requirement on the number of models for ensemble-based deep learning.
\item We propose a novel SWA-based approach to generate higher-performing models, in terms of generalization and mode connectivity, for ensemble-based deep learning.
\item We perform extensive experiments to test the performance of our algorithm. Results show a multifaceted advantage of our algorithm over competitors, including a better balance between generalization, calibration and computational overhead, the capability of generating model components with better generalization and mode connectivity.
\end{itemize}
\section{Related Works}\label{sec:related}
\textbf{Optimization algorithms for training DNNs}
SGD is by far the \emph{de facto} optimization approach to train DNNs. A decaying learning rate (LR) is the standard configuration for SGD. A commonly used strategy for improving SGD in terms of generalization is to design better LRs. For instance, the AdaGrad method estimates the LR online from the gradients \cite{duchi2011adaptive}. AdaGrad is further improved in \cite{kingma2014adam}, where the resulting algorithm is referred to as AdaDelta. The Adam algorithm is proposed in \cite{kingma2014adam}, which combines the advantages of AdaGrad and RMSProp \cite{tieleman2012rmsprop}. In \cite{schaul2013no}, the diagonal approximation of the Hessian of the gradients is used for designing adaptive LRs. As the literature on this topic is vast, we could not enumerate due to space limitation. We refer interested readers to \cite{sun2019optimization} and references therein for more details.
The above optimization-based approaches are all single model based. In this paper, we focus on ensemble methods that is featured by the application of multiple models for improving DNN performance in terms of generalization and calibration.
\textbf{Bayesian methods for learning with neural networks}
Since the seminal works of \cite{neal2012bayesian,mackay1992practical}, Bayesian methods have been extensively investigated in neural network based machine learning. Such methods have desirable theoretical properties, especially in terms of uncertainty qualification. However, they are largely computationally intractable when facing modern DNNs, due to an extremely high dimension of the model parameter and a strong non-convexity of the posterior distribution.
\cite{neal2012bayesian} proposes Hamiltonian Monte Carlo (HMC), a powerful iterative sampling method that can handle non-convex posteriors, while it requires full gradients at each iteration, thus is computationally prohibitive for training with massive data points. A common wisdom used for training with large scale datasets is to iteratively train with mini-batch samples, which is scalable and able to discover good solution subspaces \cite{keskar2017large}. Stochastic gradient HMC (SGHMC) extends HMC by using mini-batch sample based stochastic gradients \cite{chen2014stochastic}. \cite{welling2011bayesian} introduces the first order Langevin dynamics into the framework of stochastic gradient based Markov Chain Monte Carlo setting (SG-MCMC). \cite{mandt2017stochastic} provide a theoretical tie that connects SG-MCMC to SGD and its variants, and analyze their approximation errors.
Outside of MCMC, variational inference and sequential Monte Carlo methods have also been explored for large-scale training, see e.g., \cite{hoffman2013stochastic,graves2011practical,liu2020particle}, although it is quite difficult to balance computational efficiency against approximation error when adapting them to modern DNNs.
Our PFGE algorithm acts as a much more computationally efficient subspace-based posterior sampling approach that is applicable to modern DNNs. Its desirable performance in terms of computational efficiency, generalization, and calibration stems from a novel algorithm design that combines SWA and FGE, making full use of recently revealed geometric insights into the DNN loss landscape.
\textbf{Ensembling methods adapted to DNNs}
As mentioned above, ensemble methods are often computationally intractable for learning with modern DNNs, due to the extremely high overhead of ensemble training, model recording, and test-time prediction. Nevertheless, notable advances have been made in adapting ensemble methods to DNNs, such as FGE \cite{garipov2018loss}, SNE \cite{huang2017snapshot}, SWA-Gaussian (SWAG) \cite{maddox2019simple}, Monte-Carlo dropout \cite{gal2016dropout}, and deep ensembles \cite{fort2019deep,lakshminarayanan2017simple}. Among these modern approaches, FGE, SNE, and SWAG are most related to this work, as they all employ a cyclical LR to generate the model ensemble in the same training time as a single model.
Both FGE and SNE build DNN ensembles by sampling network weights from an SGD trajectory corresponding to a cyclical LR \cite{smith2017cyclical}. Running an SGD with a cyclical LR is in principle equivalent to SGD sampling with periodic warm restarts \cite{loshchilov2017sgdr}. \cite{huang2017snapshot,garipov2018loss} demonstrate that the cyclical LR indeed provides a highly efficient way to collect high-quality DNN weights, which define the models for ensembling.
Compared with SNE, FGE is distinguished by a geometric explanation of how it generates its ensemble. Specifically, FGE is inspired by a geometric insight into the DNN loss landscape: there exist simple curves that connect local optima of the loss landscape, and along these curves both the training accuracy and the test accuracy remain approximately constant. FGE provides an efficient way to discover such high-accuracy pathways between local optima.
Inspired by FGE, SWA was proposed; it averages the high-performing network weights yielded by FGE and uses the average for test-time inference \cite{izmailov2018averaging}. The geometric insight underlying SWA is that averaging weights along an SGD trajectory with a cyclical or constant LR finds wider optima, and wider optima lead to better generalization. This insight is questioned by \cite{guo2022stochastic}, which shows that the real function of SWA's weight-averaging operation lies in reducing the variance of the final output, similarly to tail-averaging \cite{jain2018parallelizing}.
SWAG uses the SWA solution as the center of a Gaussian distribution, which is formed to approximate the posterior of the network weights \cite{maddox2019simple}. SWAG generates the model ensembles by sampling from this Gaussian.
PFGE is a novel SOTA algorithm for ensemble-based deep learning, featuring performance that is competitive, in terms of generalization and calibration, with related SOTA methods, while requiring a much-reduced computational overhead for memory and test-time predictions.
\begin{algorithm}[!htb]
\caption{SWA based model training and test-time prediction}
\label{alg:swa}
\textbf{Input}: initial network weights $w_{0}$, cyclical LR schedule $SC$, cycle length $c$, budget (the total number of allowable iterations) $n$, test data $x$\\
\textbf{Output}: predicted label $y$ of $x$
\begin{algorithmic}[1]
\STATE $w\leftarrow w_{0}$.
\STATE $w_{\tiny{\mbox{SWA}}}\leftarrow w$.
\FOR{$i\leftarrow 1,2,\ldots,n$}
\STATE Compute current learning rate $\alpha$ according to $SC$.
\STATE $w\leftarrow w-\alpha\nabla \mathcal{L}_i(w)$ (stochastic gradient update).
\IF {mod($i$,$c$)=0}
\STATE $n_{\tiny{\mbox{models}}}\leftarrow i/c$ (number of models averaged).
\STATE $w_{\tiny{\mbox{SWA}}}\leftarrow \left(w_{\tiny{\mbox{SWA}}}\cdot n_{\tiny{\mbox{models}}}+w\right)/\left(n_{\tiny{\mbox{models}}}+1\right)$.
\ENDIF
\ENDFOR
\STATE Input $x$ into the DNN with weights $w_{\tiny{\mbox{SWA}}}$, then compute its softmax output.
\STATE \textbf{return} $y$ that maximizes the above softmax output.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!htb]
\caption{FGE based model training and test-time prediction}
\label{alg:fge}
\textbf{Input}: initial network weights $w_{0}$, cyclical LR schedule $SC$, cycle length $c$, budget (the total number of allowable iterations) $n$, test data $x$\\
\textbf{Output}: predicted label $y$ of $x$
\begin{algorithmic}[1]
\STATE $w\leftarrow w_{0}$; solution set $\mathcal{S}\leftarrow \{\}$.
\FOR{$i\leftarrow 1,2,\ldots,n$}
\STATE Compute current learning rate $\alpha$ according to $SC$.
\STATE $w\leftarrow w-\alpha\nabla \mathcal{L}_i(w)$ (stochastic gradient update).
\IF {mod($i$,$c$)=0}
\STATE Add $w$ into $\mathcal{S}$ (collect weights).
\ENDIF
\ENDFOR
\STATE Given $x$ as the input, compute the average of softmax outputs of models included in $\mathcal{S}$.
\STATE \textbf{return} $y$ that maximizes the above averaged softmax output.
\end{algorithmic}
\end{algorithm}
\textbf{Uncertainty calibration for DNNs}
Uncertainty calibration is crucially important for robust decision-making and model interpretability \cite{guo2017calibration}. It aims to provide a calibrated, more accurate confidence measure for each prediction provided by a DNN model.
Ensemble methods provide a natural mechanism for uncertainty calibration. \cite{maddox2019simple} suggests uncertainty calibration of DNNs via sampling from the Gaussian approximation to the posterior given by SWAG, and then doing Bayesian model averaging over those samples. \cite{lakshminarayanan2016simple} propose incorporating an adversarial loss function into the ensemble for enhanced calibration.
Outside of ensemble methods, rescaling techniques are commonly used to enhance calibration. They work by rescaling the logits of the DNN outputs \cite{guo2017calibration,kuleshov2018accurate}.
As an ensemble method, PFGE can be naturally used for uncertainty calibration. We test its calibration performance in Section \ref{sec:experiments}.
\section{The Proposed PFGE Algorithm}
Our PFGE algorithm is developed based on SWA and FGE.
We present the pseudo-codes to implement SWA, FGE, and PFGE in Algorithms \ref{alg:swa}, \ref{alg:fge}, and \ref{alg:pfge}, respectively.
They all perform stochastic gradient based weight updating iteratively, starting at a local optimum $w_0$ given by a preceding SGD phase or a pre-trained DNN model.
\begin{algorithm}[!htb]
\caption{PFGE based model training and test-time prediction}
\label{alg:pfge}
\textbf{Input}: initial network weights $w_{0}$, cyclical LR schedule $SC$, cycle length $c$, budget (the total number of allowable iterations) $n$, test data $x$, model recording period $P$ \\
\textbf{Output}: predicted label $y$ of $x$
\begin{algorithmic}[1]
\STATE $w\leftarrow w_{0}$; solution set $\mathcal{S}\leftarrow \{\}$.
\STATE $w_{\tiny{\mbox{SWA}}}\leftarrow w$.
\STATE $n_{\tiny{\mbox{recorded}}}\leftarrow 0$ (number of models recorded in $\mathcal{S}$).
\FOR{$i\leftarrow 1,2,\ldots,n$}
\STATE Compute current learning rate $\alpha$ according to $SC$.
\STATE $w\leftarrow w-\alpha\nabla \mathcal{L}_i(w)$ (stochastic gradient update).
\STATE $j\leftarrow i-n_{\tiny{\mbox{recorded}}}\times P$ (iterate index for the follow-up SWA procedure).
\IF {mod($j$,$c$)=0}
\STATE $n_{\tiny{\mbox{models}}}\leftarrow j/c$ (number of models that have been averaged within the current SWA procedure).
\STATE $w_{\tiny{\mbox{SWA}}}\leftarrow \left(w_{\tiny{\mbox{SWA}}}\cdot n_{\tiny{\mbox{models}}}+w\right)/\left(n_{\tiny{\mbox{models}}}+1\right)$.
\ENDIF
\IF {mod($i$,$P$)=0}
\STATE Add $w_{\tiny{\mbox{SWA}}}$ into $\mathcal{S}$ (collect weights).
\STATE $w\leftarrow w_{\tiny{\mbox{SWA}}}$ (initialization for the follow-up SWA procedure).
\STATE $n_{\tiny{\mbox{recorded}}}\leftarrow i/P$ (number of models recorded in $\mathcal{S}$).
\ENDIF
\ENDFOR
\STATE Given $x$ as the input, compute the average of softmax outputs of models recorded in $\mathcal{S}$.
\STATE \textbf{return} $y$ that maximizes the above averaged softmax output.
\end{algorithmic}
\end{algorithm}
The iterative weight-updating operation employs a cyclical LR to let the weight trajectory escape from the current optimum, and then discover and converge to new local optima. A conceptual diagram of the cyclical LR is shown in Figure \ref{fig:lrs}, where $\alpha_1$ and $\alpha_2$ bound the LR values, $c$ denotes the cycle length, $n$ the total number of allowable iterations that defines the training budget, and $P$ the model recording period for PFGE.
\begin{figure}[htbp]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{LRS.eps}}
\caption{A conceptual diagram of the cyclical LR used by SWA, FGE, and PFGE. The circles mark the time instants at which the local optima discovered along the SGD trajectory are recorded. The relationship between $c$, $P$, and $n$ is that $P$ is an integer multiple of $c$, and $n$ is an integer multiple of $P$. Only one example case is plotted here, in which $P=2c$ and $n=2P$. See the text for detailed explanations of all parameters involved.}
\label{fig:lrs}
\end{center}
\vskip -0.2in
\end{figure}
As shown in Algorithm \ref{alg:swa}, SWA maintains a running average of the network weights that are recorded at every $c$ iterations, and finally outputs a single model with weight $w_{\tiny{\mbox{SWA}}}$ for test-time prediction. $w_{\tiny{\mbox{SWA}}}$ is in fact the average of $(\frac{n}{c}+1)$ weights traversed along the SGD trajectory.
As shown in Algorithm \ref{alg:fge}, FGE uses the same mechanism as SWA to find $(\frac{n}{c}+1)$ local optima. Different from SWA that uses the average of those local optima for test-time prediction, FGE maintains an ensemble of $(\frac{n}{c}+1)$ models defined by those local optima. Its test-time prediction is made based on the average of the model ensembles' softmax outputs.
Compared with using a single DNN, FGE requires $(\frac{n}{c}+1)$ times the memory for recording the model components, and $(\frac{n}{c}+1)$ times the computational overhead for test-time predictions. PFGE reduces this factor from $(\frac{n}{c}+1)$ to $(\frac{n}{P}+1)$. Note that $P$ is an integer multiple of $c$, as exemplified in Figure \ref{fig:lrs}.
PFGE differs from FGE in the way the ensemble models are generated. For FGE, the ensemble models are generated purely by SGD (see operations 4 and 6 in Algorithm \ref{alg:fge}), while PFGE resorts to multiple SWA procedures performed in succession (see operations 10 and 13 in Algorithm \ref{alg:pfge}). In these successively performed SWA procedures, the output of one SWA procedure is used to initialize the follow-up SWA procedure (see operation 14 in Algorithm \ref{alg:pfge}).
The recently reported Double SWA (DSWA) and Triple SWA (TSWA) methods also use the strategy of performing multiple SWA procedures successively \cite{guo2022stochastic}. Both DSWA and TSWA output a single model, while PFGE maintains an ensemble of multiple models, for test-time prediction.
As SWA is capable of finding local optima with better generalization \cite{izmailov2018averaging}, we replace the iterative SGD procedure in FGE with a series of successively performed SWA procedures, generating an ensemble of higher-performing models and yielding the PFGE algorithm. In the following section, we present experimental results that demonstrate the desirable performance of PFGE in terms of generalization and calibration compared with SOTA methods.
\section{Experiments}\label{sec:experiments}
We compare PFGE against SOTA methods FGE \cite{garipov2018loss}, SWA \cite{izmailov2018averaging}, and SWAG \cite{maddox2019simple}, on CIFAR-10 \cite{krizhevsky2009learning}, CIFAR-100 \cite{krizhevsky2009learning}, and ImageNet ILSVRC-2012 \cite{deng2009imagenet,russakovsky2015imagenet}, to test its performance in terms of generalization and uncertainty calibration.
\begin{table*} \tiny
\caption{Test accuracy, NLL, and ECE on CIFAR-10. The best results for each architecture are \textbf{bolded}.}
\label{table:big_table_cifar}
\begin{tabular}{c|c|c|c}
\hline
& Test Accuracy (\%) & NLL (\%) & ECE (\%) \\
\begin{tabular}{c}
Algorithm \\ \hline PFGE \\ FGE \\ SWA \\ SWAG \\ \hline FGE-whole \\ SWAG-whole
\end{tabular}
& \begin{tabular}{ccc}
VGG16 & PreResNet & WideResNet \\ \hline
\textbf{93.41}$_{\pm0.08}$ & 95.70$_{\pm0.05}$ & 96.37$_{\pm0.03}$\\
93.03$_{\pm0.18}$ & 95.52$_{\pm0.08}$ & 96.14$_{\pm0.07}$\\
93.33$_{\pm0.02}$ & \textbf{95.78}$_{\pm0.07}$ & \textbf{96.47}$_{\pm0.04}$\\
93.24$_{\pm0.06}$ & 95.45$_{\pm0.14}$ & 96.36$_{\pm0.04}$\\ \hline
93.40$_{\pm0.08}$ & 95.57$_{\pm0.05}$ & 96.27$_{\pm0.02}$\\
93.37$_{\pm0.07}$ & 95.61$_{\pm0.11}$ & 96.45$_{\pm0.07}$\\
\end{tabular}
& \begin{tabular}{ccc}
VGG16 & PreResNet & WideResNet \\ \hline
26.16$_{\pm0.08}$ & \textbf{13.05}$_{\pm0.10}$ & \textbf{11.04}$_{\pm0.05}$\\
25.25$_{\pm0.60}$ & 13.60$_{\pm0.11}$ & 11.73$_{\pm0.15}$\\
28.06$_{\pm0.20}$ & 13.50$_{\pm0.07}$ & 11.22$_{\pm0.11}$\\
\textbf{24.58}$_{\pm0.06}$ & 13.71$_{\pm0.13}$ & 11.43$_{\pm0.19}$\\ \hline
21.89$_{\pm0.53}$ & 13.07$_{\pm0.10}$ & 11.18$_{\pm0.06}$\\
23.10$_{\pm0.29}$ & 13.12$_{\pm0.08}$ & 11.07$_{\pm0.17}$\\
\end{tabular}
& \begin{tabular}{ccc}
VGG16 & PreResNet & WideResNet \\ \hline
3.80$_{\pm0.08}$ & 0.54$_{\pm0.15}$ & 0.62$_{\pm0.02}$\\
\textbf{3.12}$_{\pm0.09}$ & \textbf{0.49}$_{\pm0.09}$ & \textbf{0.32}$_{\pm0.08}$\\
4.44$_{\pm0.07}$ & 1.21$_{\pm0.09}$ & 11.33$_{\pm0.04}$\\
3.21$_{\pm0.03}$ & 0.75$_{\pm0.12}$ & 1.03$_{\pm0.05}$\\ \hline
2.24$_{\pm0.08}$ & 0.39$_{\pm0.06}$ & 0.30$_{\pm0.08}$\\
3.01$_{\pm0.10}$ & 0.53$_{\pm0.09}$ & 0.78$_{\pm0.05}$\\
\end{tabular}
\\ \hline
\end{tabular}
\end{table*}
\begin{table*} \tiny
\caption{Test accuracy, NLL, and ECE on CIFAR-100. The best results for each architecture are \textbf{bolded}.}
\label{table:big_table_cifar100}
\begin{tabular}{c|c|c|c}
\hline
& Test Accuracy (\%) & NLL (\%) & ECE (\%) \\
\begin{tabular}{c}
Algorithm \\ \hline PFGE \\ FGE \\ SWA \\ SWAG \\ \hline FGE-whole \\ SWAG-whole
\end{tabular}
& \begin{tabular}{ccc}
VGG16 & PreResNet & WideResNet \\ \hline
\textbf{74.17}$_{\pm0.04}$ & \textbf{80.06}$_{\pm0.13}$ & \textbf{81.96}$_{\pm0.01}$\\
73.49$_{\pm0.24}$ & 79.76$_{\pm0.06}$ & 81.09$_{\pm0.25}$\\
73.83$_{\pm0.20}$ & 79.97$_{\pm0.06}$ & 81.92$_{\pm0.02}$\\
73.77$_{\pm0.18}$ & 79.24$_{\pm0.04}$ & 81.55$_{\pm0.06}$\\ \hline
74.34$_{\pm0.05}$ & 80.17$_{\pm0.09}$ & 81.62$_{\pm0.16}$\\
74.15$_{\pm0.17}$ & 80.00$_{\pm0.03}$ & 81.83$_{\pm0.12}$\\
\end{tabular}
& \begin{tabular}{ccc}
VGG16 & PreResNet & WideResNet \\ \hline
132.85$_{\pm0.19}$ & \textbf{72.07}$_{\pm0.34}$ & \textbf{65.00}$_{\pm0.14}$\\
125.77$_{\pm0.20}$ & 72.75$_{\pm0.18}$ & 68.91$_{\pm0.18}$\\
143.23$_{\pm0.57}$ & 76.88$_{\pm0.43}$ & 69.57$_{\pm0.39}$\\
\textbf{122.87}$_{\pm0.88}$ & 74.37$_{\pm0.14}$ & 68.09$_{\pm0.25}$\\ \hline
109.93$_{\pm0.51}$ & 69.26$_{\pm0.23}$ & 63.54$_{\pm0.24}$\\
117.25$_{\pm0.62}$ & 71.07$_{\pm0.15}$ & 66.37$_{\pm0.29}$\\
\end{tabular}
& \begin{tabular}{ccc}
VGG16 & PreResNet & WideResNet \\ \hline
13.95$_{\pm0.07}$ & 5.05$_{\pm0.23}$ & 4.24$_{\pm0.02}$\\
11.80$_{\pm0.19}$ & \textbf{3.41}$_{\pm0.12}$ & \textbf{2.84}$_{\pm0.32}$\\
16.42$_{\pm0.29}$ & 7.56$_{\pm0.14}$ & 6.89$_{\pm0.12}$\\
\textbf{11.20}$_{\pm0.03}$ & 4.74$_{\pm0.03}$ & 5.22$_{\pm0.07}$\\ \hline
8.84$_{\pm0.01}$ & 1.92$_{\pm0.19}$ & 1.15$_{\pm0.04}$\\
10.99$_{\pm0.11}$ & 3.36$_{\pm0.05}$ & 4.65$_{\pm0.11}$\\
\end{tabular}
\\ \hline
\end{tabular}
\end{table*}
\begin{table*} \tiny
\caption{Test accuracy, NLL, and ECE on ImageNet. The best results for each architecture are \textbf{bolded}.}
\label{table:big_table_imagenet}
\begin{tabular}{c|c|c|c}
\hline
& Test Accuracy (\%) & NLL (\%) & ECE (\%)\\
\begin{tabular}{c}
Algorithm \\ \hline PFGE \\ FGE \\ SWA \\ SWAG \\ \hline FGE-whole \\ SWAG-whole
\end{tabular}
& \begin{tabular}{ccc}
ResNet-50 & ResNet-152 & DenseNet-161 \\ \hline
\textbf{77.28} & \textbf{79.10} & \textbf{78.79}\\
76.77 & 78.71 & 78.45\\
77.13 & 78.84 & 78.75\\
76.49 & 78.75 & 78.15\\ \hline
77.25 & 79.08 & 78.84\\
77.09 & 79.04 & 78.66\\
\end{tabular}
& \begin{tabular}{ccc}
ResNet-50 & ResNet-152 & DenseNet-161 \\ \hline
\textbf{81.08} & \textbf{81.08} & \textbf{82.03}\\
90.47 & 81.91 & 82.87\\
90.90 & 83.40 & 83.47\\
92.02 & 82.00 & 84.91\\ \hline
88.62 & 80.21 & 81.31\\
88.94 & 80.19 & 82.85\\
\end{tabular}
& \begin{tabular}{ccc}
ResNet-50 & ResNet-152 & DenseNet-161 \\ \hline
2.00 & 2.61 & 1.84\\
1.59 & \textbf{1.54} & \textbf{1.51}\\
3.43 & 4.10 & 2.46\\
\textbf{1.54} & 1.85 & 3.22\\ \hline
2.19 & 2.43 & 2.54\\
1.73 & 1.65 & 3.35\\
\end{tabular}
\\ \hline
\end{tabular}
\end{table*}
\subsection{Experimental Setting}\label{sec:experiment_setting}
As shown in Algorithms \ref{alg:swa}-\ref{alg:pfge}, SWA, FGE, and PFGE are all initialized with a local optimum $w_{0}$ and an LR schedule. For all architectures and datasets, we initialize all algorithms in comparison with the same $w_{0}$, and the same LR setting. Following \cite{garipov2018loss}, we use a triangle LR schedule, as shown in Figure \ref{fig:lrs}. Specifically, we set $c$ to iteration numbers corresponding to 2 or 4 epochs (following \cite{garipov2018loss}), $P$ to 10 epochs, and $n$ to 40 or 20 epochs. For $\alpha_1$, and $\alpha_2$, we set them in the same way as in \cite{garipov2018loss}. The mini-batch size for model training is fixed at 128.
For CIFAR-$\{10,100\}$, we obtain $w_{0}$ by running standard SGD with momentum, with the same type of decaying LR schedule as used in \cite{izmailov2018averaging}, to minimize an $L_2$-regularized cross-entropy loss until convergence. We set the hyperparameters of SGD, e.g., the weight decay parameter and the momentum factor, in the same way as in \cite{izmailov2018averaging}. For ImageNet, we use the pre-trained ResNet-50, ResNet-152, and DenseNet-161 models provided in PyTorch to initialize $w_{0}$, and fix $n$ to the iteration number corresponding to 40 epochs.
The considered performance metrics include test accuracy, negative log-likelihood (NLL), and the expected calibration error (ECE) \cite{guo2017calibration}. The latter two are used for evaluating an algorithm's capability for uncertainty calibration \cite{guo2017calibration,maddox2019simple}.
In our experiments, PFGE always employs 4 model components. It uses the average of those 4 models' softmax outputs for test-time prediction. For FGE and SWAG, we compute test-time predictions by averaging the softmax outputs of the last 4 models that have been added to their ensemble set $\mathcal{S}$. In this way, we ensure that all algorithms have roughly the same overhead for model recording and test-time predictions. For reference, we also present results for FGE-whole and SWAG-whole, corresponding to FGE and SWAG using their whole ensemble of models for test-time predictions.
\subsection{CIFAR Datasets}\label{sec:cifar}
We experiment with the network architectures VGG16 \cite{simonyan2014very}, Preactivation ResNet-164 (PreResNet-164) \cite{he2016identity}, and WideResNet-28-10 \cite{zagoruyko2016wide} on CIFAR-$\{10,100\}$.
\textbf{Test accuracy, NLL and ECE}
We run each algorithm at least 3 times independently, and report the averaged results of test accuracy, NLL, and ECE, and the corresponding standard error in Tables \ref{table:big_table_cifar} and \ref{table:big_table_cifar100}. We bold the best-performing approach for all architectures and datasets.
On CIFAR-10, we find that for VGG16, PFGE has the highest test accuracy (93.41\%). For PreResNet-164 and WideResNet-28-10, PFGE beats FGE and SWAG but loses to SWA in terms of test accuracy; SWA achieves the highest accuracy (95.78\% and 96.47\%) with a comparable NLL score, but gives the worst ECE score.
On CIFAR-100, we find that PFGE performs best in terms of test accuracy for all network architectures. It also gives the best NLL results for PreResNet-164 and WideResNet-28-10. SWAG gives the best calibration for VGG16. FGE gives the best calibration for PreResNet-164 and WideResNet-28-10. SWA performs worst on calibration in terms of both NLL and ECE, for all architectures.
We also find from Tables \ref{table:big_table_cifar} and \ref{table:big_table_cifar100} that, even compared with FGE-whole and SWAG-whole, PFGE achieves comparable or even better performance in terms of test accuracy, with only $20\%$ of the computational overhead for model recording and test-time predictions. The major advantage of FGE-whole and SWAG-whole lies in their capability for uncertainty calibration: they provide better NLL and ECE scores, compared with PFGE and FGE, for all architectures and datasets.
\textbf{Reliability diagrams}
We plot reliability diagrams in Figure \ref{fig:reliability}, in which FGE and SWAG correspond to FGE-whole and SWAG-whole in Tables \ref{table:big_table_cifar} and \ref{table:big_table_cifar100}, respectively. The results in Figure \ref{fig:reliability} re-corroborate FGE-whole's and SWAG-whole's advantage, and SWA's deficiency, in uncertainty calibration, and show that PFGE sits between them in terms of calibration, for all datasets and architectures.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\linewidth]{fig7.eps}\includegraphics[width=0.5\linewidth]{fig10.eps}\\
\includegraphics[width=0.5\linewidth]{fig8.eps}\includegraphics[width=0.5\linewidth]{fig11.eps}\\
\includegraphics[width=0.5\linewidth]{fig9.eps}\includegraphics[width=0.5\linewidth]{fig12.eps}
\caption{Reliability diagrams. Left column: CIFAR-10. Right column: CIFAR-100. Top row: VGG16. Middle row: PreResNet-164. Bottom row: WideResNet-28-10.}\label{fig:reliability}
\end{figure}
\textbf{Performance of separate models}
We check the ensemble performance of PFGE and FGE as a function of the training iteration index $i$. The results of an example experiment are shown in Figure \ref{fig:separate}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\linewidth]{fig13.eps}\includegraphics[width=0.5\linewidth]{fig16.eps}\\
\includegraphics[width=0.5\linewidth]{fig14.eps}\includegraphics[width=0.5\linewidth]{fig17.eps}\\
\includegraphics[width=0.5\linewidth]{fig15.eps}\includegraphics[width=0.5\linewidth]{fig18.eps}
\caption{Ensemble performance of PFGE and FGE as a function of the training iteration index $i$. Crosses represent the performance of separate ``snapshot" models, and diamonds show the performance of the ensembles composed of all models available at the given iteration. Left column: CIFAR-10. Right column: CIFAR-100. Top row: VGG16. Middle Row: PreResNet-164. Bottom row: WideResNet-28-10.}\label{fig:separate}
\end{figure}
We find that: (1) the separate models that constitute PFGE are indeed higher-performing in terms of test accuracy than those of FGE, for all datasets and network architectures; (2) for both PFGE and FGE, using more model components leads to higher test accuracy; (3) the PFGE ensemble outperforms the FGE ensemble in all but one case, while using fewer model components. Note that the term FGE in Figure \ref{fig:separate} corresponds to FGE-whole in Tables \ref{table:big_table_cifar} and \ref{table:big_table_cifar100}. We believe it is the higher quality of the separate model components that allows PFGE to achieve generalization performance comparable to FGE with a reduced computational overhead, owing to the use of fewer model components.
\textbf{Mode connectivity}
In \cite{yang2021taxonomizing}, the authors demonstrate that the best test accuracy is obtained when ensembles of trained models converge to locally smooth regions of the loss landscape. That is, the mode connectivity of an ensemble's model components closely relates to its generalization performance. We conduct an experiment to compare the mode connectivity of PFGE's model components against that of FGE's.
We randomly select a pair of neighboring model components, denoted by $w$ and $w'$, in the ensemble set $\mathcal{S}$.
Given $w$ and $w'$, we first search for a low-energy curve $\gamma(t), t\in[0,1]$, with $\gamma(0)=w$ and $\gamma(1)=w'$, that minimizes $\int_0^1\mathcal{L}(\gamma(t))dt$, where $\mathcal{L}$ denotes the DNN loss function. Following \cite{yang2021taxonomizing,garipov2018loss}, we approximate $\int_0^1\mathcal{L}(\gamma(t))dt$ by $\mathbb{E}_{t\sim U(0,1)}\mathcal{L}(\gamma_{\phi}(t))$, and use the Bezier curve with $k+1$ bends given by $\gamma_{\phi}(t)=\sum_{j=0}^{k}\binom{k}{j}(1-t)^{k-j}t^jw_j$ for $t\in[0,1]$, where $U(0,1)$ denotes the continuous uniform distribution on $(0,1)$, $w_0=w$, $w_k=w'$, and $\phi=\{w_1,\ldots,w_{k-1}\}$ are the parameters of additional models to be trained.
Given the curve $\gamma_{\phi}(t)$, the mode connectivity of the models $w, w'$ is defined to be \cite{yang2021taxonomizing}
\begin{equation}\label{eqn:mc}
\mbox{mc}(w,w')=\frac{1}{2}(\mathcal{L}(w)+\mathcal{L}(w'))-\mathcal{L}(\gamma_{\phi}(t^{\star})),
\end{equation}
where $t^{\star}$ maximizes the function $f(t)\triangleq |\frac{1}{2}(\mathcal{L}(w)+\mathcal{L}(w'))-\mathcal{L}(\gamma_{\phi}(t))|$.
We use the same computational procedure and hyperparameter settings as in \cite{yang2021taxonomizing,garipov2018loss} to minimize the loss on the curve. We record the training loss and test error values as functions of $t$ and plot them in Figure \ref{fig:mode_connect}. The related statistics are shown in Tables \ref{table:stat_test} and \ref{table:stat_train}. In Figure \ref{fig:mode_connect}, we find that the training loss curve of PFGE has consistently lower energy than that of FGE for VGG16 and PreResNet-164. The test error curve of PFGE lies below that of FGE for most $t$ values, for all architectures. The results in Tables \ref{table:stat_test} and \ref{table:stat_train} also show a consistent advantage of PFGE over FGE in terms of both test error and training loss.
The corresponding $\mbox{mc}$ values are presented in Table \ref{table:mode_connect}, which shows that the mode connectivity of PFGE is better (i.e., its $\mbox{mc}$ value is closer to 0) than that of FGE, for all architectures.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\linewidth]{mode_connect_1.eps}\includegraphics[width=0.5\linewidth]{mode_connect_2.eps}\\
\includegraphics[width=0.5\linewidth]{mode_connect_3.eps}
\caption{Test error and training loss on CIFAR-10 corresponding to $\gamma_{\phi}(t)$ as a function of $t$. Top left: VGG16. Top right: PreResNet-164. Bottom: WideResNet-28-10.}\label{fig:mode_connect}
\end{figure}
\begin{table}[htbp] \tiny
\caption{Statistics for test errors (\%) of $\gamma_{\phi}(t)$ on CIFAR-10. The test error values are collected along the changes of the $t$ value from 0 to 1.}
\label{table:stat_test}
\begin{tabular}{c|c|c|c}
\hline
& Max & Min & Mean\\
\begin{tabular}{c}
Architecture \\ \hline VGG16 \\ PreResNet \\ WideResNet
\end{tabular}
& \begin{tabular}{cc}
PFGE & FGE \\ \hline
6.96 & 7.51 \\
4.65 & 5.22 \\
4.14 & 4.53
\end{tabular}
& \begin{tabular}{cc}
PFGE & FGE \\ \hline
6.64 & 6.76 \\
4.30 & 4.61 \\
3.43 & 3.63
\end{tabular}
& \begin{tabular}{cc}
PFGE & FGE \\ \hline
6.76 & 6.99 \\
4.50 & 4.82 \\
3.63 & 3.87
\end{tabular}
\\ \hline
\end{tabular}
\end{table}
\begin{table}[htbp] \tiny
\caption{Statistics for training losses of $\gamma_{\phi}(t)$ on CIFAR-10. The training loss values are collected along the changes of the $t$ value from 0 to 1.}
\label{table:stat_train}
\begin{tabular}{c|c|c|c}
\hline
& Max & Min & Mean\\
\begin{tabular}{c}
Architecture \\ \hline VGG16 \\ PreResNet \\ WideResNet
\end{tabular}
& \begin{tabular}{cc}
PFGE & FGE \\ \hline
0.238 & 0.234 \\
0.241 & 0.272 \\
0.284 & 0.324
\end{tabular}
& \begin{tabular}{cc}
PFGE & FGE \\ \hline
0.193 & 0.186 \\
0.197 & 0.213 \\
0.217 & 0.238
\end{tabular}
& \begin{tabular}{cc}
PFGE & FGE \\ \hline
0.207 & 0.199 \\
0.210 & 0.230 \\
0.238 & 0.266
\end{tabular}
\\ \hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{Mode connectivity test on CIFAR-10. See the definition of $\mbox{mc}$ in Equation (\ref{eqn:mc}). A $\mbox{mc}$ value closer to 0 indicates a better mode connectivity and vice versa. See more details on the relationship between the value of $\mbox{mc}$ and mode connectivity in \cite{yang2021taxonomizing}. The best results for each architecture are \textbf{bolded}.}
\label{table:mode_connect}
\begin{tabular}{c|c}
\hline
& $\mbox{mc}$ value \\
\begin{tabular}{c}
Algorithm \\ \hline PFGE \\ FGE
\end{tabular}
& \begin{tabular}{ccc}
VGG16 & PreResNet & WideResNet \\ \hline
\textbf{0.039} & \textbf{0.044} & \textbf{0.065}\\
0.045 & 0.057 & 0.084\\
\end{tabular}
\\ \hline
\end{tabular}
\end{table}
\subsection{ImageNet}\label{sec:imagenet}
We experiment with network architectures ResNet-50 \cite{he2016deep}, ResNet-152 \cite{he2016deep}, and DenseNet-161 \cite{huang2017densely} on ImageNet ILSVRC-2012 \cite{deng2009imagenet,russakovsky2015imagenet}. See the experimental setting in subsection \ref{sec:experiment_setting}.
We experiment with network architectures ResNet-50 \cite{he2016deep}, ResNet-152 \cite{he2016deep}, and DenseNet-161 \cite{huang2017densely} on ImageNet ILSVRC-2012 \cite{deng2009imagenet,russakovsky2015imagenet}. See the experimental setting in subsection \ref{sec:experiment_setting}.
Results in Table \ref{table:big_table_imagenet} show that PFGE outperforms FGE, SWA, and SWAG in terms of both test accuracy and NLL, and performs comparably to FGE-whole and better than SWAG-whole in terms of test accuracy. The reliability diagrams in the left column of Figure \ref{fig:res_imagenet} show that PFGE performs better than SWA, but worse than FGE-whole and SWAG-whole, in terms of calibration. Note that the terms FGE and SWAG in Figure \ref{fig:res_imagenet} correspond to FGE-whole and SWAG-whole in Table \ref{table:big_table_imagenet}, respectively. The right column of Figure \ref{fig:res_imagenet} shows that the separate models of PFGE are better than those of FGE in terms of test accuracy, for all network architectures.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\linewidth]{reliability_r50.eps}\includegraphics[width=0.5\linewidth]{separate_r50.eps}\\
\includegraphics[width=0.5\linewidth]{reliability_r152.eps}\includegraphics[width=0.5\linewidth]{separate_r152.eps}\\
\includegraphics[width=0.5\linewidth]{reliability_d161.eps}\includegraphics[width=0.5\linewidth]{separate_d161.eps}
\caption{Left column: reliability diagrams on ImageNet. Right column: ensemble performance of PFGE and FGE on ImageNet as a function of the training iteration index $i$. Top row: ResNet-50. Middle row: ResNet-152. Bottom row: DenseNet-161. In the subfigures of the right column, crosses represent the performance of separate ``snapshot" models, and diamonds show the performance of the ensembles composed of all models available at the given iteration.}\label{fig:res_imagenet}
\end{figure}
\section{Conclusions}
In this paper, we proposed a generic, architecture-agnostic, and highly computationally efficient ensemble-based algorithm, referred to as PFGE, to improve DNNs in terms of generalization and uncertainty calibration. The algorithm uses several successively performed SWA procedures to generate high-performing DNN models within the framework of FGE. We empirically demonstrated that the DNN models yielded by PFGE have better separate generalization performance and mode connectivity than those of FGE. Owing to these higher-performing model components, PFGE achieves better generalization performance than SOTA methods together with satisfactory calibration capability, while its computational overhead for model recording and test-time predictions is significantly reduced, to roughly $20\%$.
\newpage
\section{Introduction}
The partially asymmetric exclusion process (PASEP) \cite{ASEP1,ASEP2} is one of the most thoroughly studied models of non-equilibrium
statistical mechanics \cite{Derrida98,Schuetz00,GolinelliMallick06,BEreview}. It is a microscopic model of a driven system \cite{schmittmann} describing the asymmetric diffusion of hard-core particles along a one-dimensional chain. In this paper we will consider a finite chain with $L$ sites. At late times the PASEP exhibits a relaxation towards a non-equilibrium stationary state. In the presence of two boundaries at which particles are injected and extracted with given rates, the bulk behaviour at stationarity is strongly dependent on the injection and extraction rates. The corresponding stationary phase diagram as well as various physical quantities have been determined by exact methods \cite{Derrida98,Schuetz00,GolinelliMallick06,DEHP,gunter,sandow,EsslerR95,PASEPstat1,PASEPstat2,BEreview}.
More recently exact dynamical properties such as relaxation rates have been derived. On an infinite lattice where random matrix techniques can be used, considerable progress has been made, see e.g. \cite{gunter97,johansson,praehoferS02a,TracyWidom08}.
For the PASEP on a finite ring, where particle number is conserved, relaxation rates were obtained by means of the Bethe ansatz some time ago \cite{dhar,BAring1a,BAring1b,BAring2}. The dynamical phase diagram of the PASEP with open boundaries was studied in \cite{GierE05,GierE06,dGE08}, again using Bethe ansatz methods.
In the present work we extend some of these results to the case of reverse bias, where the boundary parameters strongly oppose the bias in the bulk hopping rates. We will in particular focus on the coexistence line where the effects are most pronounced.
\subsection{The partially asymmetric exclusion process}
\begin{figure}[ht]
\centerline{
\begin{picture}(320,80)
\put(0,0){\epsfig{width=0.7\textwidth,file=aseprules.pdf}}
\end{picture}}
\caption{Transition rates for the partially asymmetric exclusion process.}
\label{fig:paseprules}
\end{figure}
We now turn to a description of the dynamical rules defining the PASEP on a one dimensional lattice with $L$ sites, see Figure~\ref{fig:paseprules}. At any given time
$t$ each site is either occupied by a particle or empty. The system is then updated as follows.
Particles attempt to hop one site to the right with rate $p$ and one site to the left with rate $q$. The hop is prohibited if the neighbouring site is occupied, and at sites $1$ and $L$ these rules are modified in the following way. If site $i=1$ is empty, a particle may enter the system with rate $\alpha$. If, on the other hand, this site is occupied, the particle will leave the system with rate $\gamma$ or attempt to hop to the right with rate $p$. Similarly, at $i=L$ particles are injected and extracted with rates $\delta$ and $\beta$ respectively, and attempt to hop to the left with rate $q$.
It is customary to associate a Boolean variable $\tau_i$ with every
site, indicating whether a particle is present ($\tau_i=1$) or not
($\tau_i=0$) at site $i$. Let $\bra0$ and $\bra1$ denote the standard
basis vectors in $\mathbb{C}^2$. A state of the system
at time $t$ is then characterised by the probability distribution
\begin{equation}
\bra{P(t)} = \sum_{\bm \tau} P(\bm{\tau}|t) \bra{\bm{\tau}},
\end{equation}
where
\begin{equation}
\bra{\bm{\tau}} = \bra{\tau_1,\ldots,\tau_L} = \bigotimes_{i=1}^{L} \bra{\tau_i}.
\end{equation}
The time evolution of $\bra{P(t)}$ is governed by the aforementioned
rules, which gives rise to the master equation
\begin{eqnarray}
\frac{{\rm d}}{{\rm d} t} \bra{P(t)} &=& M \bra{P(t)},
\label{eq:Markov}
\end{eqnarray}
where the PASEP transition matrix $M$ consists of two-body
interactions only and is given by
\begin{equation}
M = \sum_{k=1}^{L-1} I^{(k-1)}\otimes M_k\otimes I^{(L-k-1)} + m_1 \otimes I^{(L-1)}+ I^{(L-1)}\otimes m_L.
\label{eq:TransitionMatrix}
\end{equation}
Here $I^{(k)}$ is the identity matrix on the $k$-fold tensor product
of $\mathbb{C}^2$ and, for $1\leq k \leq L-1$, $M_k: \mathbb{C}^2\otimes \mathbb{C}^2
\rightarrow \mathbb{C}^2\otimes \mathbb{C}^2$ is given by
\begin{eqnarray}
M_k = \left(\begin{array}{@{}cccc@{}}
0 & 0 & 0 & 0\\
0 & -q & p & 0 \\
0 & q & -p & 0 \\
0 & 0 & 0 & 0
\end{array}\right).
\end{eqnarray}
The terms involving $m_1$ and $m_L$ describe injection
(extraction) of particles with rates $\alpha$ and $\delta$ ($\gamma$ and $\beta$) at
sites $1$ and $L$ respectively. Their explicit forms are
\begin{equation}
m_1=\left(\begin{array}{@{}cc@{}}-\alpha&\gamma \cr \alpha &-\gamma\cr\end{array}\right),\qquad
m_L= \left(\begin{array}{@{}cc@{}}-\delta&\beta\cr \delta&-\beta\cr \end{array}\right).
\label{h1l}
\end{equation}
The transition matrix $M$ has a unique stationary state
corresponding to the eigenvalue zero. For positive rates, all
other eigenvalues of $M$ have non-positive real parts. The late time
behaviour of the PASEP is dominated by the eigenstates of $M$ with the
largest real parts of the corresponding eigenvalues.
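The construction \eref{eq:TransitionMatrix}--\eref{h1l} is easy to implement for small $L$. The following sketch (illustrative Python, with arbitrarily chosen rates, not code from the paper) assembles $M$ from Kronecker products and checks the two properties just stated: the columns of a Markov generator sum to zero, and there is a unique zero eigenvalue while all other eigenvalues have negative real part.

```python
import numpy as np

def pasep_matrix(L, p, q, alpha, beta, gamma, delta):
    """Assemble the PASEP transition matrix M of eq. (TransitionMatrix).

    Basis: tensor products of local states (empty, occupied), so M acts
    on a 2**L-dimensional space, with site 1 the leftmost tensor factor.
    """
    # bulk two-site generator M_k acting on the basis (00, 01, 10, 11)
    Mk = np.array([[0,  0,  0, 0],
                   [0, -q,  p, 0],
                   [0,  q, -p, 0],
                   [0,  0,  0, 0]], dtype=float)
    # boundary generators m_1 and m_L of eq. (h1l)
    m1 = np.array([[-alpha, gamma], [alpha, -gamma]], dtype=float)
    mL = np.array([[-delta, beta], [delta, -beta]], dtype=float)

    def embed(op, site, width):
        # place `op` (acting on `width` sites) starting at `site` (0-based)
        left = np.eye(2 ** site)
        right = np.eye(2 ** (L - site - width))
        return np.kron(np.kron(left, op), right)

    M = embed(m1, 0, 1) + embed(mL, L - 1, 1)
    for k in range(L - 1):
        M += embed(Mk, k, 2)
    return M

M = pasep_matrix(4, 1.0, 0.3, 0.4, 0.5, 0.2, 0.1)
# columns of a Markov generator sum to zero ...
print(np.abs(M.sum(axis=0)).max())
ev = np.linalg.eigvals(M)
# ... and exactly one eigenvalue vanishes; the rest have Re < 0
print(sum(abs(e) < 1e-8 for e in ev))
```

For positive rates the chain is irreducible, so the zero eigenvalue (the stationary state) is indeed unique, as the check confirms.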
In the next sections we set $p=1$ without loss of generality and determine the eigenvalue of $M$ with the largest non-zero
real part using the Bethe ansatz. The latter reduces the problem of
determining the spectrum of $M$ to solving a system of coupled
polynomial equations of degree $3L-1$. Using these equations, the
spectrum of $M$ can be studied numerically for very large $L$, and, as
we will show, analytic results can be obtained in the limit of large $L$.
\subsection{Reverse bias}
The reverse bias regime corresponds to the boundary parameters opposing the direction of flow in the bulk. In this paper we achieve this by considering the regime
\begin{equation}
q < 1, \quad \alpha, \beta \ll \gamma, \delta.
\label{eq:rb}
\end{equation}
With $q<1$ the bias in the bulk is for diffusion left to right, but setting $\alpha,\beta \ll \gamma,\delta$ strongly
favours particles entering from the right and exiting on the left. Traditionally the reverse bias regime was considered to exist only for the case $\alpha=\beta=0$, see e.g. \cite{PASEPstat2}.\footnote{Note that the authors of \cite{PASEPstat2} consider $q>1$ and $\gamma=\delta=0$, but we can simply translate their results to our notation using left--right symmetry.}
\section{Stationary phase diagram}
In this section we briefly review some known facts about the stationary phase diagram.
It will be convenient to use the following standard combinations of parameters \cite{sandow},
\begin{equation}
\kappa^{\pm}_{\alpha,\gamma} = \frac{1}{2\alpha} \left(
v_{\alpha,\gamma} \pm \sqrt{v_{\alpha,\gamma}^2 +4\alpha\gamma}\right),\qquad
v_{\alpha,\gamma} = 1-q-\alpha+\gamma.
\end{equation}
In order to ease notations we will use the following abbreviations,
\begin{equation}
a=\kappa^+_{\alpha,\gamma},\quad b=\kappa^+_{\beta,\delta},\quad c=\kappa^-_{\alpha,\gamma},\quad d=\kappa^-_{\beta,\delta}.
\label{eq:ab}
\end{equation}
The parameters $a,b,c,d$ are standard notation for the parameters appearing in the Askey-Wilson polynomials $P_n(x;a,b,c,d|q)$ used for the PASEP stationary state \cite{UchiSW}.
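A quick numerical check of these definitions (illustrative Python, with arbitrary rates): $\kappa^{+}_{\alpha,\gamma}$ and $\kappa^{-}_{\alpha,\gamma}$ are the two roots of $\alpha\kappa^{2}-v_{\alpha,\gamma}\kappa-\gamma=0$, so their product equals $-\gamma/\alpha$, and for $q<1$ one always has $-1<\kappa^{-}\le 0$.

```python
import math

def kappa_pm(x, y, q):
    """kappa^{+/-}_{x,y}: the two roots of  x k^2 - (1-q-x+y) k - y = 0."""
    v = 1 - q - x + y
    s = math.sqrt(v * v + 4 * x * y)
    return (v + s) / (2 * x), (v - s) / (2 * x)

q, alpha, gamma = 0.3, 0.2, 0.5
a, c = kappa_pm(alpha, gamma, q)
print(a, c)   # a > 0 and -1 < c <= 0
print(a * c)  # product of the roots: -gamma/alpha = -2.5
```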
\subsection{Forward bias}
The phase diagram of the PASEP at stationarity was found by Sandow \cite{sandow} and is depicted in Figure~\ref{fig:statPD}. The standard phase diagram is understood to depend only on the parameters $a$ and $b$ defined in (\ref{eq:ab}) rather than $p,q,\alpha,\beta,\gamma,\delta$ separately, see e.g. the recent review \cite{BEreview}. However, we will show below that the picture may be more nuanced in the regime where $\alpha, \beta \ll \gamma, \delta$.
\begin{figure}[ht]
\centerline{\includegraphics[width=200pt]{statPD.pdf}}
\caption{Stationary state phase diagram of the PASEP.
The high and low density phases are separated by the
coexistence line (CL). The maximum current phase (MC) occurs at small
values of the parameters $a$ and $b$ defined in (\ref{eq:ab}). }
\label{fig:statPD}
\end{figure}
Many quantities of interest, such as density profiles and currents, have been calculated for particular limits of the PASEP. For the most general case these have been computed by \cite{UchiSW,UchiW}.
\subsection{Reverse bias}
Much less is known for the case of reverse bias.
The stationary state normalisation and current for $\alpha=\beta=0$ have been computed by \cite{PASEPstat2}. It is found that the current decays exponentially as
\begin{equation}
J = (1-q^{-1}) \left(\frac{\gamma\delta}{(1-q+\gamma)(1-q+\delta)} \right)^{1/2} q^{L/2-1/4}.
\end{equation}
It was argued in \cite{PASEPstat2} that non-zero values of $\alpha$ and $\beta$ would allow for a right flowing current of particles to be sustained, thus destroying the reverse bias phase. However, in the following we will see that when $\alpha$ and $\beta$ are $\mathcal{O}(q^m)$ for some $m>0$, the system still feels the effects of a reverse bias in its relaxation rates. It would be interesting to know whether such effects survive in the stationary state.
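To make the decay explicit, the formula for the current can be evaluated numerically (illustrative Python, with arbitrary rates): adding two sites multiplies $J$ by $q$, and the overall sign is negative for $q<1$, i.e.\ the exponentially small current flows from right to left.

```python
import math

def reverse_bias_current(L, q, gamma, delta):
    """Asymptotic stationary current for alpha = beta = 0, as quoted above."""
    return ((1 - 1 / q)
            * math.sqrt(gamma * delta / ((1 - q + gamma) * (1 - q + delta)))
            * q ** (L / 2 - 0.25))

q, gamma, delta = 0.3, 0.5, 0.4
J20 = reverse_bias_current(20, q, gamma, delta)
J22 = reverse_bias_current(22, q, gamma, delta)
print(J20)       # negative: the current flows leftwards
print(J22 / J20) # two extra sites suppress the current by a factor q
```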
\section{Relaxation rates -- approach to stationarity}
The slowest relaxation towards the stationary state at asymptotically late times $t$ is given by ${\rm e}^{\mathcal{E}_1 t}$, where $\mathcal{E}_1$ is the eigenvalue of the transition matrix $M$ with the largest non-zero real part.
It is well-known that the PASEP can be mapped onto the spin-1/2 anisotropic Heisenberg chain with general (integrable) open boundary conditions \cite{sandow,EsslerR95}. By building on recent progress in applying
the Bethe ansatz to the diagonalisation of the Hamiltonian of the latter problem \cite{Cao03,Nepo02,YangZhang}, the Bethe ansatz equations for the PASEP with the most general open boundary conditions can be obtained. Recently these equations have also been derived directly using a (matrix) coordinate Bethe ansatz in the PASEP \cite{Simon09,CrampeRS}. The Bethe ansatz equations diagonalise the PASEP transition matrix. By analysing this set of equations for large, finite $L$ the eigenvalue of the transition matrix with the largest non-zero real part has been obtained in the forward bias regime \cite{GierE05,GierE06,dGE08}. In this regime, particles diffuse predominantly from left to right and particle injection and extraction occur mainly at sites $1$ and $L$ respectively. We will briefly review these results.
\subsection{Bethe ansatz equations}
As was shown in \cite{GierE05,GierE06,dGE08}, all eigenvalues ${\cal E}$ of $M$ other than ${\cal E}=0$ can be expressed in terms of the roots $z_j$ of a set of $L-1$ non-linear algebraic equations as
\begin{eqnarray}
{\mathcal{E}}= -\mathcal{E}_0-\sum_{j=1}^{L-1}\varepsilon(z_j),
\label{eq:pasep_en}
\end{eqnarray}
where
\begin{equation}
\mathcal{E}_0 = \alpha+\beta+\gamma+\delta = (1-q)\left(\frac{1-ac}{(1+a)(1+c)}
+\frac{1-bd}{(1+b)(1+d)}\right),
\label{eq:e0}
\end{equation}
and the ``bare energy'' $\varepsilon(z)$ is
\begin{equation}
\varepsilon(z) = \frac{(q-1)^2 z}{(1-z)(qz-1)} =(1-q)\left(\frac{1}{z-1}-\frac{1}{qz-1}\right).
\label{eq:epsdef}
\end{equation}
The complex roots $z_j$ satisfy the Bethe ansatz equations
\begin{eqnarray}
\left[\frac{qz_j-1}{1-z_j}\right]^{2L} K(z_j) =\prod_{l\neq j}^{L-1}
\frac{qz_j-z_l}{z_j-qz_l} \frac{q^2z_jz_l-1} {z_jz_l-1},\qquad
j=1,\ldots,L-1.
\label{eq:pasep_eq}
\end{eqnarray}
Here $K(z)$ is defined by
\begin{equation}
K(z) =
\frac{(z+a)
(z+c)}{(qaz+1)
(qcz+1)} \frac{(z+b)
(z+d)}{(qbz+1)
(qdz+1)},
\end{equation}
and $a,b,c,d$ are given in \eref{eq:ab}.
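The expressions above can be sanity-checked numerically. The sketch below (illustrative Python, with arbitrary rates, not code from the paper) verifies that \eref{eq:e0} holds, i.e.\ that $\alpha+\beta+\gamma+\delta$ equals its expression in $a,b,c,d$, and that the two forms of the bare energy \eref{eq:epsdef} agree at a generic complex point.

```python
import math

def kappa_pm(x, y, q):
    """kappa^{+/-}_{x,y} as defined in the text."""
    v = 1 - q - x + y
    s = math.sqrt(v * v + 4 * x * y)
    return (v + s) / (2 * x), (v - s) / (2 * x)

q = 0.3
alpha, beta, gamma, delta = 0.2, 0.25, 0.5, 0.4
a, c = kappa_pm(alpha, gamma, q)
b, d = kappa_pm(beta, delta, q)

# eq. (e0): E_0 in terms of the rates equals E_0 in terms of a, b, c, d
E0_rates = alpha + beta + gamma + delta
E0_abcd = (1 - q) * ((1 - a * c) / ((1 + a) * (1 + c))
                     + (1 - b * d) / ((1 + b) * (1 + d)))
print(abs(E0_rates - E0_abcd))  # vanishes up to rounding

# eq. (epsdef): the two forms of the bare energy agree at a generic point
z = 0.4 + 0.7j
lhs = (q - 1) ** 2 * z / ((1 - z) * (q * z - 1))
rhs = (1 - q) * (1 / (z - 1) - 1 / (q * z - 1))
print(abs(lhs - rhs))  # vanishes up to rounding
```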
\subsubsection{Analysis in the forward bias regime}
The analysis of the Bethe equations for the forward bias regime proceeds along the following lines. One first defines the counting function,
\begin{equation}
{\rm i} Y_L(z) = g(z) + \frac{1}{L} g_{\rm b}(z)
+ \frac{1}{L} \sum_{l=1}^{L-1} K(z_l,z),
\label{eq:logtasepBAE}
\end{equation}
where
\begin{eqnarray}
g(z) &=& \ln \left( \frac{z(1-qz)^2}{(z-1)^2}\right), \nonumber\\
g_{\rm b}(z) &=& \ln \left(\frac{z(1-q^2z^2)}{1-z^2}\right) +
\ln\left(\frac {z+a}{1+qaz}\frac{1+c/z}{1+qcz}\right) \nonumber\\
&& {}+
\ln\left(\frac {z+b}{1+qbz}\frac{1+d/z}{1+qdz}\right),
\label{eq:kernelDefFB}\\
K(w,z) &=& -\ln(w) -\ln \left( \frac{1-q z/w}{1-qw/z} \frac{1-q^2zw}{1-w z}\right).
\nonumber
\end{eqnarray}
We note that the choice of branch cuts in this definition of $K(w,z)$ differs from \cite{dGE08}, but is numerically stable for large $L$. A further discussion on the choice of branch cuts in $K$ is given in Section~\ref{sec:revasymp}.
Using the counting function, the Bethe ansatz equations \fr{eq:pasep_eq} can be cast in logarithmic form as
\begin{equation}
Y_L(z_j) = \frac{2\pi}{L} I_j\ ,\qquad j=1,\ldots,L-1,
\label{eq:Z=I}
\end{equation}
where $I_j$ are integers. Each set of integers $\{I_j|\; j=1,\ldots, L-1\}$ in (\ref{eq:Z=I}) specifies
a particular (excited) eigenstate of the transition matrix.
In \cite{dGE08}, the logarithmic Bethe equations were solved numerically to find the structure of the root distribution (an example of such a solution will be seen in the next section). By comparing the resulting eigenvalues with brute force diagonalisation of the transition matrix, the integers $I_j$ corresponding to the first excited state can be found for small values of $L$. The integers were found to be
\begin{equation}
I_j = -L/2+j\quad {\rm for}\quad j=1,\ldots,L-1.
\label{eq:Idef}
\end{equation}
Assuming \eref{eq:Idef} holds for all $L$, \fr{eq:Z=I} can be turned into an integro-differential equation for the
counting function $Y_L(z)$, which can be solved asymptotically in the limit of large $L$. Exact asymptotic
expressions were obtained for the first excited eigenvalue in the forward bias regime in all areas of the
stationary phase diagram except the maximal current phase. We recall here the expression for the coexistence
line:
\begin{equation}
\mathcal{E}_1 = \frac{1-q}{L^2} \frac{\pi^2}{(a^{-1}-a)} + \mathcal{O}(L^{-3}).
\label{eq:E_CL}
\end{equation}
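Numerically, \eref{eq:E_CL} gives a relaxation rate $-\mathcal{E}_1$ that scales diffusively: doubling $L$ reduces it by a factor of four. A minimal sketch (illustrative Python, with an arbitrary choice of $a>1$):

```python
import math

def relaxation_gap(L, q, a):
    """Leading term of -E_1 on the coexistence line a = b > 1, eq. (E_CL)."""
    return (1 - q) * math.pi ** 2 / (L ** 2 * (a - 1 / a))

q, a = 0.3, 3.0
g200 = relaxation_gap(200, q, a)
g400 = relaxation_gap(400, q, a)
print(g200 / g400)  # factor 4: diffusive scaling, dynamic exponent z = 2
```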
\subsection{Dynamical phase diagram}
\begin{figure}[ht]
\centerline{\includegraphics[width=250pt]{phasediagram.png}}
\caption{Dynamical phase diagram of the PASEP in the forward bias
regime determined by the lowest excitation $\mathcal{E}_1$. The
horizontal axes are the boundary parameters $a$ and $b$ \fr{eq:ab} and
the vertical axis is the lowest relaxation rate. The latter goes to
zero for large systems on the coexistence line (CL) and in the maximum
current phase (MC). The curves and lines correspond to various
crossovers in the low and high density phase, across which
the relaxation rate changes non-analytically. }
\label{fig:phasediagram}
\end{figure}
The dynamical phase diagram for the PASEP resulting from an analysis in the regime $q<1$ is shown in Figure~\ref{fig:phasediagram}. It exhibits the same subdivision found from the analysis of the current in the stationary state: low and high density phases ($a>1$ and $b>1$) with dynamic exponent $z=0$, the coexistence line ($a=b>1$) with diffusive ($z=2$) behaviour and the maximum current phase ($a,b<1$) with KPZ ($z=3/2$) universality. Furthermore, the finite-size scaling of the lowest excited state energy of the transition matrix suggests the sub-division of both low and high-density phases into four regions respectively. These regions are characterized by different functional forms of the relaxation rates at asymptotically late times \cite{dGE08}, see also \cite{ProemeBE}.
\section{Bethe ansatz in the reverse bias regime}
\label{se:BAanalysis}
The reverse bias regime corresponds to taking parameters $a$, $b$ large, and $c$, $d$ towards $-1$ (note that
$-1 < c,d \le 0$). This change, and the resulting changes in the structure of the root distribution, requires
a reformulation of the counting function, which will be discussed in Section~\ref{sec:revasymp} and
\ref{ap:numsols}. But first we describe the changes to the root structure. In the following we address
solutions on the coexistence line, $a = b$, and to simplify computations further also set $c = d$.
\begin{figure}[b]
\begin{center}
\includegraphics{roots_forward_full}
\caption{Root distribution in the forward bias regime for $L = 40$, with parameters $a = b = 11$, $q = 0.3$
and $c = -0.07$. The dashed line marks the position of $-c$ and the region in the box is expanded
in Figure \ref{fig:roots_forward} below.}
\label{fig:roots_forward_full}
\end{center}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure[$c = -0.07$]{
\includegraphics{roots_forward}
\label{fig:roots_forward}
}
\subfigure[$c = -0.088$]{
\includegraphics{roots_interm}
\label{fig:roots_interm}
}
\subfigure[$c = -0.1$]{
\includegraphics{roots_reverse}
\label{fig:roots_reverse}
}
\caption{Leading part of the root distribution for $L=40$, $a = b = 11$, $q = 0.3$, with $c = -0.07$ (forward bias),
$c = -0.088$ (intermediate stage), and $c = -0.1$ (reverse bias). The dashed line shows the position
of $-c$.}
\end{figure}
Moving from the forward to the reverse bias regime, the structure of the Bethe roots changes. From a forward
bias root distribution (Figures \ref{fig:roots_forward_full}, \ref{fig:roots_forward}), increasing $-c$ pushes
out a complex conjugate pair of roots along with the single root on the positive real axis (Figure
\ref{fig:roots_interm}). As $-c$ increases further, the complex conjugate pair pinch in to the real axis until some
point where they ``collapse'' (Figure \ref{fig:roots_reverse}) -- one root returns to close the contour on the
positive real axis leaving an isolated pair of real roots at
\begin{equation}
z^\pm = -c \pm e^{-\nu^\pm L}, \quad \nu^\pm > 0.
\end{equation}
In fact, there is a pair of isolated roots for all $k \in \mathbb{N}$ such that $-q^k c$ falls outside the contour of complex roots.
For $L$ large, the contour crosses the real axis at \cite{dGE08}
\begin{equation}
z^*(a) = \frac{1+4a+a^2-\sqrt{(1+4a+a^2)^2-4a^2}}{2a} \simeq \frac{1}{a},
\label{eq:zstar}
\end{equation}
where the approximation is for $a$ large. Therefore, the integer $m$ defined by $a$ and $c$ through the inequalities
\begin{equation}
-q^m c < \frac{1}{a} < -q^{m-1} c,
\label{eq:mdef}
\end{equation}
is equal to the maximum number of pairs of real roots as $L \to \infty$. While we take our parameters to have generic values, it would be of interest to analyse the system at the resonant values of $a$, $c$ where the inequalities become equalities, see e.g. \cite{CrampeRS10}.
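In practice $m$ is easily computed from \eref{eq:mdef}. The sketch below (illustrative Python) checks that $z^*(a)$ of \eref{eq:zstar} is a root of $az^2-(1+4a+a^2)z+a=0$ and approaches $1/a$ for large $a$, and recovers $m=20$ for $a = 3.09\,q^{-19}$, $c=-0.9$, $q=0.3$, the parameters used in Figure~\ref{fig:exsolutions} below.

```python
import math

def z_star(a):
    """Crossing point of the contour with the real axis, eq. (zstar)."""
    t = 1 + 4 * a + a ** 2
    return (t - math.sqrt(t ** 2 - 4 * a ** 2)) / (2 * a)

def count_real_pairs(a, c, q):
    """Smallest m with -q^m c < 1/a, i.e. the integer defined by eq. (mdef)."""
    m = 0
    while -q ** m * c >= 1 / a:
        m += 1
    return m

# z*(a) solves a z^2 - (1 + 4a + a^2) z + a = 0 and tends to 1/a for large a
a = 50.0
z = z_star(a)
print(a * z ** 2 - (1 + 4 * a + a ** 2) * z + a, a * z)

q, c = 0.3, -0.9
a_big = 3.09 / q ** 19   # i.e. q^(m-1) a = 3.09 with m = 20
print(count_real_pairs(a_big, c, q))
```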
\begin{figure}[hb]
\centering
\subfigure{
\includegraphics{allroots_100_m20}
}
\subfigure{
\includegraphics{contour_100_m20}
}
\caption{Complete root distribution (top) and zoomed in on the contour and first few pairs of isolated roots
(bottom) for $L=100$, $\bar{a} = 3.09$, $c = -0.9$, $q = 0.3$ and $m=20$ (at this resolution only five real pairs are visible in the top figure).}
\label{fig:exsolutions}
\end{figure}
We define
\begin{equation}
\bar{a} = q^{m-1} a,
\label{eq:abar}
\end{equation}
then, for example, we will reach a maximum $m = 20$ pairs of real roots with the parameters
\begin{equation}
\bar{a} = 3.09, \quad c = -0.9, \quad q = 0.3.
\label{eq:exparams}
\end{equation}
The corresponding root distribution is displayed in Figure~\ref{fig:exsolutions}.
We denote by $m'$ the number of pairs of real roots for a particular value of $L$, with $m' \le m$. Then the number of roots on the contour is $L - 2m' - 1$, which is plotted in
Figure \ref{fig:Lbar} for the parameters \eref{eq:exparams}. There are three regions to consider:
\begin{enumerate}
\item Isolated roots region: $L - 2m' - 1$ small and almost all roots occur as isolated pairs;
\item Crossover region: The contour of complex roots begins to form, $m' \simeq m$;
\item Asymptotic region: The number of roots on the contour, $L-2m'-1$ is large.
\end{enumerate}
\begin{figure}[!ht]
\begin{center}
\includegraphics{Lbar_inc_m20}
\caption{Number of complex roots in the contour ($L - 2m' - 1$) against $L$. Only beyond $L = 40$ does the contour of complex roots begin to form.}
\label{fig:Lbar}
\end{center}
\end{figure}
\subsection{Isolated roots region}
\label{sec:allreal}
We first consider the case where $L - 2m' -1 $ is small and assume all roots are real, a situation similar to that encountered in the recent calculation of the PASEP current fluctuations \cite{dGE2011}. The $m'$ pairs of isolated roots are
\begin{equation}
z_k^\pm = -q^k c \pm e^{-\nu_k^\pm L}\qquad k=0,\ldots,m'-1,
\end{equation}
and the remaining root is close to $-q^m c$. Substituting these roots into the
expression for the eigenvalue \eref{eq:pasep_en} and dropping exponentially small parts, the sum telescopes, leaving
\begin{equation}
\mathcal{E}_1^{\rm{(is)}} =
-(1-q) \frac{2q^{m-1}}{q^{m-1} + \bar{a}}
-(1-q)q^{m'-1}\left(
\frac{-qc}{1+q^{m'}c}+\frac{-q^2c}{1+q^{m'+1}c}
\right).
\label{eq:E_CL_RR}
\end{equation}
The $L$ dependence enters through $m'$ and implies that the eigenvalue decays
as $q^{L/2}$ to a constant, which is $\mathcal{O}\left(q^m\right)$. This
differs markedly from the asymptotic expression in the forward bias regime
\eref{eq:E_CL}.
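A direct evaluation of \eref{eq:E_CL_RR} illustrates this decay (illustrative Python, parameters as in \eref{eq:exparams}): each additional pair of isolated real roots, $m' \to m'+1$ (i.e.\ two more sites), shrinks the distance of $\mathcal{E}_1^{\rm{(is)}}$ from its limiting value by a factor $q$.

```python
q, abar, c, m = 0.3, 3.09, -0.9, 20

def E1_isolated(mp):
    """Eq. (E_CL_RR) evaluated with m' = mp pairs of isolated real roots."""
    return (-(1 - q) * 2 * q ** (m - 1) / (q ** (m - 1) + abar)
            - (1 - q) * q ** (mp - 1) * (-q * c / (1 + q ** mp * c)
                                         - q ** 2 * c / (1 + q ** (mp + 1) * c)))

# the m'-independent part is the limiting value, which is of order q^m
E_limit = -(1 - q) * 2 * q ** (m - 1) / (q ** (m - 1) + abar)
r = (E1_isolated(9) - E_limit) / (E1_isolated(8) - E_limit)
print(r)  # close to q = 0.3: the eigenvalue approaches E_limit as q^(L/2)
```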
\subsection{Asymptotic region}
\label{sec:revasymp}
We now analyse the third region, where the number of roots on the contour is large. As we are interested in the asymptotic behaviour, we will assume we have reached the maximum number of pairs of real roots ($m' = m$).
We label the roots on the contour
\begin{eqnarray}
z_j, \quad j = 1 \ldots L - 2m - 1,
\end{eqnarray}
and the real roots, as before, as
\begin{eqnarray}
z_k^{\pm} = -q^k c \pm {\rm e}^{-\nu_k^\pm L}, \qquad \nu_k^\pm > 0,\qquad k = 0, \ldots, m-1.
\end{eqnarray}
The choice of branch cuts in the counting function \eref{eq:logtasepBAE}, \eref{eq:kernelDefFB} is suitable for the forward bias regime. In the reverse bias regime we must redefine the counting function. The form we use is
\begin{eqnarray}
{\rm i} Y_L(z)& = & g(z) + \frac{1}{L} g_{\rm b}(z)
+ \frac{1}{L} \sum_{l=0}^{m-1} \left( K_{\rm{r}}(z_l^-,z) + K_{\rm{r}}(z_l^+,z)\right) \nonumber \\
&& \hspace{2.5cm} + \frac{1}{L} \sum_{l=1}^{L-2m-1} K(z_l,z)
\label{eq:logtasepRB}
\end{eqnarray}
where now
\begin{eqnarray}
g(z) &=& \ln \left( \frac{z(1-qz)^2}{(1-z)^2}\right),
\nonumber\\
g_{\rm b}(z) &=& \ln \left(\frac{1-q^2z^2}{z(1-z^2)}\right)
+ 2 \ln\left(
c \frac {q^{m-1}z+\bar{a}}{q^{m-1}+q\bar{a}z}\frac{1+z/c}{1+qcz}
\right),
\nonumber\\
K(w,z) &=& -\ln \left( \frac{1 - qz/w}{1-qw/z} \frac{1-q^2zw}{1-zw}\right)
- \ln w, \label{eq:KernelDef} \\
K_{\rm{r}}(w,z) &=& -\ln \left( \frac{1 - q^{-1}w/z}{1-qw/z} \frac{1-q^2wz}{1-wz} \right)
- \ln(qz).
\nonumber
\end{eqnarray}
With this definition, the complex roots satisfy
\begin{equation}
Y_L(z_j) = \frac{2\pi}{L} I_j, \quad j = 1, \ldots, L - 2 m - 1,
\label{eq:Z=Icomplex}
\end{equation}
with the integers $I_j$ that correspond to the first excited state given by
\begin{equation}
I_j = -\left(\frac{L}{2} - m\right) + j.
\end{equation}
The real roots satisfy
\begin{equation}
Y_L(z_k^\pm) = \frac{2\pi}{L} I_k^\pm, \qquad k = 0, \ldots, m - 1,
\end{equation}
but the $I_k^\pm$ are not distinct. To find numerical solutions, we use an alternative form of the counting
function that numbers all roots consecutively (see \ref{ap:numsols}). However, the form \eref{eq:logtasepRB},
\eref{eq:KernelDef} is better suited to the following asymptotic analysis.
\subsubsection{Counting function for the complex roots}
We can account for the contribution of the isolated roots in \eref{eq:Z=Icomplex} and obtain Bethe equations for the complex roots alone. Ignoring exponentially small parts, the sum over real roots in \eref{eq:logtasepRB} telescopes,
\begin{eqnarray} \label{eq:greal}
\nonumber && \sum_{l=0}^{m-1}\Bigg(K_r(z_l^-,z) + K_r(z_l^+,z) \Bigg) \\
&\simeq& -2m \ln(q z)
+ 2\ln\left[
\left(1+\frac{z}{q^m c}\right)
\left(1+\frac{z}{q^{m-1}c}\right)
\left(c + q z\right)
\right] \nonumber\\
&& \hspace{0.5cm}
-\ln(z + c)
-\ln\left[
\frac{(1+q^{m+1}c z)(1+q^m c z)}{(1+q c z)(1 + c z)}
\right].
\end{eqnarray}
The complex roots lie on a simple curve in the complex plane, which approaches a closed contour as
$L\rightarrow \infty$. From \eref{eq:zstar} and \eref{eq:abar}, we see that the size of the
contour scales as $q^{m-1}$. We therefore change to the scaled variables
\begin{equation}
z = q^{m-1} \zeta,
\end{equation}
and take into account the $2m$ real roots ``missing'' from the contour by defining a new counting function
\begin{equation}
\overline{Y}_{L-2m}(\zeta) = \frac{L}{L-2m} Y_L(q^{m-1}\zeta).
\end{equation}
With the change to the scaled variables $\zeta$, we may drop any $\mathcal{O}(q^m)$ contributions.
For example,
\begin{eqnarray}
g(z) = g(q^{m-1}\zeta)
&=& (m-1)\ln q + \ln \left( \frac{\zeta(1-q^{m}\zeta)^2}{(1-q^{m-1}\zeta)^2}\right) \nonumber\\
&\simeq& (m-1)\ln q + \ln \zeta.
\end{eqnarray}
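The size of the neglected terms is easily confirmed numerically (illustrative Python): with $m=20$ and $q=0.3$, the difference between $g(q^{m-1}\zeta)$ and $(m-1)\ln q + \ln \zeta$ is of order $q^m\sim 10^{-10}$.

```python
import cmath, math

q, m = 0.3, 20

def g(z):
    """g(z) of the reverse-bias counting function: ln( z (1-qz)^2 / (1-z)^2 )."""
    return cmath.log(z * (1 - q * z) ** 2 / (1 - z) ** 2)

zeta = 0.8 + 0.4j                       # a generic scaled root
exact = g(q ** (m - 1) * zeta)
approx = (m - 1) * math.log(q) + cmath.log(zeta)
print(abs(exact - approx))              # of order q^m (here ~1e-10)
```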
Collecting terms by order of $1/(L-2m)$, the new counting function is given by
\begin{equation}
{\rm i} \overline{Y}_{L-2m}(\zeta) = \bar{g}(\zeta) + \frac{1}{L-2m} \bar{g}_{\rm b}(\zeta)
+ \frac{1}{L-2m} \sum_{l=1}^{L-2m-1} \overline{K}(\zeta_l,\zeta),
\label{eq:cf}
\end{equation}
with
\begin{eqnarray}
\bar{g}(\zeta) &=& \ln \zeta, \label{eq:gSc}
\\
%
\bar{g}_{\rm b}(\zeta) &=& -\ln(\zeta)
+ 2\ln \left(\frac{\bar{a}\zeta}{1+q\bar{a}\zeta}\right)
+ 2 \ln\left(\zeta+qc\right) + 2\ln\left(1 + \frac{\zeta}{c}\right), \label{eq:gbSc} \\
%
\overline{K}(\omega, \zeta) &=&
-\ln \left(\frac{1- q\zeta / \omega}{1 - q\omega /\zeta} \right) -\ln(\omega),
\nonumber
\end{eqnarray}
where we recall that $\bar{a}$ is defined in \eref{eq:abar}. The telescoped sum \eref{eq:greal} contributes terms to both $\bar g$ and $\bar{g}_{\rm b}$.
The Bethe ansatz equations for the scaled complex roots are
\begin{equation}
\overline{Y}_{L-2m}(\zeta_j) = \frac{2\pi}{L-2m}I_j, \quad j=1,\dots,L-2m-1,
\label{eq:logtasepSc}
\end{equation}
with
\begin{equation}
I_j=-\frac{L-2m}{2}+j.
\end{equation}
The eigenvalue, in terms of the scaled roots $\zeta_j$ is
\begin{equation}
\mathcal{E}_1^{\textrm{(rev)}} = -\mathcal{E}_0^{(\textrm{rev})}
-\sum_{l = 1}^{L-2m-1} \bar{\varepsilon}(\zeta_l),
\label{eq:pasep_enSc}
\end{equation}
where
\begin{equation}
\mathcal{E}_0^{(\textrm{rev})} = -2(1-q)q^{m-1}\left(
\frac{1}{q^{m-1}+\bar{a}} - \frac{qc}{1 + q^m c} \right),
\label{eq:EzeroSc}
\end{equation}
includes $\mathcal{E}_0$ and the contribution from the real roots, and the scaled bare energy is
\begin{equation}
\bar{\varepsilon}(\zeta) = -(1-q)^2q^{m-1}
\frac{\zeta}{\left(1-q^{m-1}\zeta\right)\left(1-q^m\zeta\right)}.
\label{eq:epsdefSc}
\end{equation}
\section{Asymptotic Analysis}
The equations \eref{eq:logtasepSc} are logarithmic Bethe equations for the complex roots in the reverse bias
regime. These equations have the same form as the forward bias equations \eref{eq:logtasepBAE} with the key
difference that there are now $L-2m-1$ roots instead of $L-1$ (recall that $m$ is defined by the condition \eref{eq:mdef}). The method used in \cite{dGE08} to derive exact large $L$ asymptotics for the eigenvalue $\mathcal{E}_1$ can be applied to the reverse bias regime. But we
require now that $L - 2m$ is large.
\begin{figure}[!ht]
\begin{center}
\resizebox{6cm}{!}{\includegraphics{contour_scaled}}
\caption{Integration contour enclosing the complex roots.
\label{contour2}}
\end{center}
\end{figure}
As a simple consequence of the residue theorem we can write
\begin{eqnarray}\label{eq:sti}
\frac{1}{L-2m}\sum_{j=1}^{L-2m-1}f(\zeta_j)
&=& \oint_{C_1+C_2} \frac{{\rm d}\zeta}{4\pi {\rm i}}\, f(\zeta)\,\overline{Y}^{\prime}_{L-2m}(\zeta)
\cot \left( \frac{1}{2}(L-2m)\overline{Y}_{L-2m}(\zeta)\right),
\end{eqnarray}
where $C = C_1 + C_2$ is a contour enclosing all the complex roots $\zeta_j$ (see Figure \ref{contour2}) and
$f(\zeta)$ is a function analytic inside $C$. We will use \eref{eq:sti} to write
an integro-differential equation for the counting function $\overline{Y}_{L-2m}(\zeta)$, but first we note some
properties of $\overline{Y}_{L-2m}(\zeta)$.
The complex roots $\zeta_j$ are connected by the contour $\textrm{Im}\left[\overline{Y}_{L-2m}(\zeta)\right] =
0$. This contour extends to a point on the negative real axis, $\zeta_{\rm c}$. We fix $\xi$ and $\xi^*$,
the endpoints of $C_1$ and $C_2$, on this contour by
\begin{equation}
\overline{Y}_{L-2m}(\xi^*) = -\pi + \frac{\pi}{L-2m}, \quad
\overline{Y}_{L-2m}(\xi) = \pi - \frac{\pi}{L-2m}.
\label{eq:xidef}
\end{equation}
On $C_1$, inside the contour connecting the roots, the imaginary part of $\overline{Y}_{L-2m}(\zeta)$ is
positive; on $C_2$ it is negative.
Using \eref{eq:sti} in \eref{eq:cf}, we obtain a non-linear integro-differential
equation for $\overline{Y}_{L-2m}(\zeta)$, which, using the properties just described, is written as an
integral over the contour of roots from $\xi^*$ to $\xi$, and correction terms integrated over $C_1$ and $C_2$:
\begin{eqnarray}
{\rm i}\,\overline{Y}_{L-2m}(\zeta) &=& \bar{g}(\zeta) + \frac{1}{L-2m} \bar{g}_{\rm b}(\zeta)
+ \frac{1}{2\pi}\int_{\xi^*}^{\xi}
\overline{K}(\omega,\zeta) \overline{Y}'_{L-2m}(\omega) {\rm d} \omega \nonumber \\
&& \hspace{-1cm} + \frac{1}{2\pi}\int_{C_1}
\frac{\overline{K}(\omega,\zeta)\overline{Y}'_{L-2m}(\omega)}{1-{\rm e}^{-{\rm i} (L-2m) \overline{Y}_{L-2m}(\omega)}}\, {\rm d} \omega
\nonumber \\
&& \hspace{-1cm} + \frac{1}{2\pi}\int_{C_2}
\frac{\overline{K}(\omega,\zeta)\overline{Y}'_{L-2m}(\omega)}{{\rm e}^{{\rm i} (L-2m) \overline{Y}_{L-2m}(\omega)}-1}\,{\rm d} \omega.
\label{eq:intY}
\end{eqnarray}
The correction terms can be approximated as in \cite{dGE08} giving
\begin{eqnarray}
{\rm i}\,\overline{Y}_{L-2m}(\zeta) &=& \bar{g}(\zeta) + \frac{1}{L-2m} \bar{g}_{\rm b}(\zeta)
+\frac{1}{2\pi}\int_{\zeta_{\rm c}^-}^{\zeta_{\rm c}^+}
\overline{K}(\omega, \zeta) \overline{Y}'_{L-2m}(\omega) {\rm d} \omega \nonumber\\
&&{}\hspace{-1.8cm} + \frac1{2\pi} \int_{\xi^*}^{\zeta_{\rm c}^-} \overline{K}(\omega,\zeta) \overline{Y}'_{L-2m}(\omega) {\rm d} \omega
+ \frac1{2\pi} \int_{\zeta_{\rm c}^+}^{\xi} \overline{K}(\omega,\zeta) \overline{Y}'_{L-2m}(\omega) {\rm d} \omega \nonumber\\
&&\hspace{-1.8cm} {} + \frac{\pi}{12(L-2m)^2} \left(
\frac{\overline{K}'(\xi^*,\zeta)}{\overline{Y}'_{L-2m}(\xi^*)}
- \frac{\overline{K}'(\xi,\zeta)}{\overline{Y}'_{L-2m}(\xi)}
\right) + \mathcal{O}\left(\frac{1}{(L-2m)^4}\right).
\label{eq:intY_M}
\end{eqnarray}
Here, the derivatives of $\overline{K}$ are with respect to the first argument, and we have extended the
integration contour beyond the endpoints, so that it pinches the negative real axis at points
$\zeta_{\rm c}^\pm=\zeta_{\rm c}\pm {\rm i} 0$.
Equation \eref{eq:intY_M} is solved as follows. We Taylor expand the integrands of the integrals from $\xi^*$ to $\zeta_{\rm c}^-$ and from $\zeta_{\rm c}^+$ to $\xi$. Then we substitute the expansions
\begin{equation}
\overline{Y}_{L-2m}(\zeta)=\sum_{n=0}^\infty (L-2m)^{-n}y_n(\zeta),
\quad
\xi=\zeta_{\rm c}+\sum_{n=1}^\infty (L-2m)^{-n}(\delta_n + {\rm i}\eta_n)
\label{eq:expansion}
\end{equation}
and expand in inverse powers of $L-2m$. This yields a hierarchy of integro-differential equations for the functions $y_n(\zeta)$
\begin{equation}
y_n(\zeta) = g_n(\zeta) + \frac{1}{2\pi{\rm i}} \int_{\zeta_{\rm c}^-}^{\zeta_{\rm c}^+}
\overline{K}(\omega,\zeta) y'_n(\omega) \,d\omega.
\label{eq:yn}
\end{equation}
The integral is along the closed contour following the locus of roots. The first few driving terms
$g_n(\zeta)$ are given by
\begin{equation}
\renewcommand{\arraystretch}{1.2}
\begin{array}{r@{\hspace{2pt}}c@{\hspace{3pt}}l}
g_0(\zeta) &=& -{\rm i} \bar{g}(\zeta), \\
g_1(\zeta) &=& -{\rm i} \bar{g}_{\textrm b}(\zeta) + \kappa_1 + \lambda_1
\widetilde{\overline{K}}(\zeta_{\rm c}, \zeta), \\
g_2(\zeta) &=& \kappa_2 + \lambda_2 \widetilde{\overline{K}}(\zeta_{\rm c}, \zeta) + \mu_2
\overline{K}'(\zeta_{\rm c}, \zeta), \\
g_3(\zeta) &=& \kappa_3 + \lambda_3 \widetilde{\overline{K}}(\zeta_{\rm c}, \zeta) + \mu_3
\overline{K}'(\zeta_{\rm c}, \zeta)
+ \nu_3 \overline{K}''(\zeta_{\rm c}, \zeta).
\end{array}
\label{eq:gs}
\end{equation}
The functions $\bar{g}(\zeta)$ and $\bar{g}_{\rm b}(\zeta)$ are defined in \eref{eq:gSc}
and \eref{eq:gbSc}, and
\begin{equation}
\widetilde{\overline{K}}(\zeta_{\rm c},\zeta) = -\ln(-\zeta_{\rm c}) + \ln\left( \frac{1-q \zeta_{\rm c}/\zeta}{1-q\zeta/\zeta_{\rm c}} \right)
\end{equation}
arises when we evaluate $\overline{K}(\zeta_{\rm c}^\pm, \zeta)$ and take the limit $\zeta_{\rm c}^\pm \to
\zeta_{\rm c}$.
The coefficients $\kappa_n$, $\lambda_n$, $\mu_n$ and $\nu_n$ are given in terms of
$\delta_n$, $\eta_n$, defined by \eref{eq:expansion}, and of derivatives of $y_n$ evaluated at
$\zeta_{\rm c}$. Explicit expressions are given in \ref{ap:Mcoef}. We note that although the
definition of the counting function has changed, and despite expanding in $L-2m$ instead of $L$, the forms
of these coefficients and of the driving terms are unchanged from \cite{dGE08}.
\subsection{Solving for the $y_n$}
We may now solve \eref{eq:yn} order by order. Working with the scaled roots, and with the restrictions
on the parameters in the reverse bias regime, we can assume that $-q c$ lies inside the contour of
integration, and that $-\frac{1}{q\bar{a}}$ and $-c$ lie outside it. The calculation proceeds in
the same way as in \cite{dGE08}, so we give here only the result for the counting function:
\begin{eqnarray}
\overline{Y}_{L-2m}(\zeta) &=& y_0(\zeta) + \frac{1}{L-2m} y_1(\zeta) + \frac{1}{(L-2m)^2} y_2(\zeta)
\nonumber \\
&&+ \frac{1}{(L-2m)^3} y_3(\zeta)
+ \mathcal{O}\left(\frac{1}{(L-2m)^4}\right)\label{eq:cfsol},
\end{eqnarray}
where
\begin{eqnarray}
y_0(\zeta) &=& -{\rm i} \ln\left(-\frac{\zeta}{\zeta_{\rm c}}\right) \\
\nonumber y_1(\zeta) &=& -{\rm i} \ln\left(-\frac{\zeta}{\zeta_{\rm c}}\right)
-2{\rm i}\ln\left(\bar{a}\right) -\lambda_1\ln(-\zeta_{\rm c}) + \kappa_1 \\
\nonumber
&& -2{\rm i} \ln\left[\frac{\qpch{-q c/\zeta}}{\qpch{-qc/\zeta_{\rm c}}}\right]
+\lambda_1 \ln\left[ \frac{\qpch{q\zeta_{\rm c} /\zeta}}{\qpch{q \zeta/\zeta_{\rm c}}} \right] \\
&&- 2{\rm i}\ln\left[ \frac{\qpch{-\zeta/c}}{\qpch{-\zeta_{\rm c}/c}} \right] \\
\nonumber y_2(\zeta) &=& \kappa_2 -\lambda_2\ln(-\zeta_{\rm c})-\frac{\mu_2}{\zeta_{\rm c}}
+ \lambda_2\ln\left[\frac{\qpch{q\zeta_{\rm c}/\zeta}}{\qpch{q\zeta/\zeta_{\rm c}}}\right]
+ \mu_2 \Psi_1(\zeta|q) \\
\nonumber y_3(\zeta) &=& \kappa_3 -\lambda_3\ln(-\zeta_{\rm c})-\frac{\mu_3}{\zeta_{\rm c}}
+ \frac{\nu_3}{\zeta_{\rm c}^2}
+ \lambda_3\ln\left[\frac{\qpch{q\zeta_{\rm c}/\zeta}}{\qpch{q\zeta/\zeta_{\rm c}}}\right] \\
&& + \mu_3 \Psi_1(\zeta|q) - \nu_3 \Psi_2(\zeta|q).
\end{eqnarray}
Here $\qpch{a}$ denotes the $q$-Pochhammer symbol
\begin{equation}
\qpch{a} = \prod_{k=0}^{\infty} (1-aq^k),
\label{pochh}
\end{equation}
and we have defined the functions
\begin{equation}
\Psi_k(\zeta|q) =
\psi_k(\zeta|q^{-1})-\psi_k(\zeta_{\rm c}|q^{-1})
-\left(\psi_k(\zeta|q) - \psi_k(\zeta_{\rm c}|q)\right)
\end{equation}
with\footnote{
The definition of $\psi_k(\zeta|q)$ used in \cite{dGE08} is incorrect for expressions such as
$\psi_k(\zeta^{-1}|q^{-1})$.
}
\begin{equation}
\psi_k(\zeta|q) = \sum_{n=0}^{\infty}\frac{1}{(\zeta_{\rm c}-q^{n+1} \zeta)^k}\ .
\end{equation}
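For $|q|<1$ the infinite product \eqref{pochh} converges and can be evaluated by truncation. As a hedged sanity check (the helper name \texttt{qpoch} and the parameter values are illustrative choices of ours, not from the text), one can verify the defining recurrence $(a;q)_\infty=(1-a)\,(aq;q)_\infty$ numerically:

```python
def qpoch(a, q, terms=200):
    """Truncated q-Pochhammer symbol (a; q)_infinity = prod_{k>=0} (1 - a*q**k)."""
    prod = 1.0
    for k in range(terms):
        prod *= 1.0 - a * q**k
    return prod

# Self-consistency check of the defining product:
# (a; q)_inf = (1 - a) * (a*q; q)_inf
a, q = 0.3, 0.5
lhs = qpoch(a, q)
rhs = (1.0 - a) * qpoch(a * q, q)
assert abs(lhs - rhs) < 1e-12
```

The truncation error is of order $a\,q^{\textrm{terms}}$, which is negligible here since $|q|<1$.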
\subsection{Boundary conditions}
Substituting the expansions \eref{eq:expansion} into the boundary conditions
\eref{eq:xidef}, which fix the endpoints $\xi$ and $\xi^*$, we obtain
a hierarchy of conditions for $y_n(\zeta_{\rm c})$, e.g.
\begin{eqnarray}
\overline{Y}_{L-2m}(\xi)&=&y_0(\xi)+\frac{1}{L-2m}y_1(\xi)+\frac{1}{(L-2m)^2}y_2(\xi)+\ldots\nonumber\\
&=& y_0(\zeta_{\rm c})+\frac{1}{L-2m}\left[y_1(\zeta_{\rm c})+y_0'(\zeta_{\rm c})(\delta_1+{\rm i}\eta_1)\right]+\ldots\nonumber\\
&=& \pi-\frac{\pi}{L-2m}\ .
\label{eq:yexpand}
\end{eqnarray}
Solving this equation order by order, we find
\begin{equation}
\lambda_1 = 2 {\rm i}, \qquad \zeta_{\rm c} = -\frac{1}{\bar{a}},
\end{equation}
and
\begin{equation}
\renewcommand{\arraystretch}{1.2}
\begin{array}{r@{\hspace{2pt}}c@{\hspace{3pt}}l}
\lambda_3 &=& \mu_2 = \lambda_2 = \kappa_1= \kappa_2 = 0,\\
\nu_3 &=& \displaystyle \zeta_{\rm c} \mu_3 = -{\rm i} \pi^2 \zeta_{\rm c}.
\end{array}
\label{eq:constants}
\end{equation}
\subsection{Eigenvalue with largest non-zero real part}
As was done for the counting function, we use \eref{eq:sti} in \eref{eq:pasep_enSc} to obtain an
integro-differential equation for the eigenvalue. Evaluating the resulting integrals in the same way as for the counting function itself, we obtain
\begin{equation}
\mathcal{E} = -\mathcal{E}_0^{\textrm{(rev)}}
-\frac{L-2m}{2\pi} \oint_{\zeta_{\rm c}} \bar{\varepsilon}(\zeta)\overline{Y}_{L-2m}'(\zeta)\, d\zeta
-{\rm i} \sum_{n \geq 0} e_n (L-2m)^{-n},
\label{eq:energyMI}
\end{equation}
where the integral is over the closed contour on which the roots lie,
$\mathcal{E}_0^{\textrm{(rev)}}$ and $\bar \varepsilon$ are
given in \eref{eq:EzeroSc} and \eref{eq:epsdefSc}, and
\begin{eqnarray}
\nonumber e_0 &=& \lambda_1 \bar{\varepsilon}(\zeta_{\rm c}),\nonumber \\
e_1 &=& \lambda_2 \bar{\varepsilon}(\zeta_{\rm c}) + \mu_2 \bar{\varepsilon}'(\zeta_{\rm c}),\\
\nonumber e_2 &=& \lambda_3 \bar{\varepsilon}(\zeta_{\rm c}) + \mu_3 \bar{\varepsilon}'(\zeta_{\rm c})
+ \nu_3 \bar{\varepsilon}''(\zeta_{\rm c}).
\end{eqnarray}
Substituting the expansion \eref{eq:cfsol} for $\overline{Y}_{L-2m}(\zeta)$ into \eref{eq:energyMI} we arrive at the following result for the eigenvalue of the transition matrix with the largest non-zero real part
\begin{equation}
\mathcal{E}_1^{\textrm{(rev)}}
= \frac{q^{m-1}}{(L-2m)^2} \frac{\pi^2(1-q) \bar{a}}{q^{2(m-1)} - \bar{a}^2}
+ \mathcal{O}\left(\frac{1}{(L-2m)^3}\right).
\label{eq:E_CL_RB}
\end{equation}
Making the substitution $\bar{a} = q^{m-1}a$, we see that this is
\begin{equation}
\mathcal{E}_1^{\textrm{(rev)}}
= \frac{1}{(L-2m)^2} \frac{\pi^2(1-q)}{a^{-1} - a}
+ \mathcal{O}\left(\frac{1}{(L-2m)^3}\right),
\end{equation}
which differs from the expression in the forward bias regime \eref{eq:E_CL} only by changing $L \to L -2m$. In Section~\ref{se:conc} we discuss the physical interpretation of this result.
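The equivalence of the two forms of $\mathcal{E}_1^{\textrm{(rev)}}$ under the substitution $\bar{a}=q^{m-1}a$ is elementary and can be checked numerically; the following sketch uses arbitrary illustrative parameter values (they are not the parameters \eref{eq:exparams} of the paper):

```python
import math

def E1_rev_raw(L, m, q, abar):
    """Leading term of the eigenvalue in terms of abar = q**(m-1) * a."""
    return (q**(m - 1) / (L - 2 * m)**2
            * math.pi**2 * (1 - q) * abar / (q**(2 * (m - 1)) - abar**2))

def E1_rev_reduced(L, m, q, a):
    """Same quantity after substituting abar = q**(m-1) * a."""
    return math.pi**2 * (1 - q) / ((L - 2 * m)**2 * (1 / a - a))

q, m, L, a = 0.4, 5, 40, 0.7   # illustrative values only
abar = q**(m - 1) * a
assert abs(E1_rev_raw(L, m, q, abar) - E1_rev_reduced(L, m, q, a)) < 1e-12
```

The cancellation is exact: $q^{2(m-1)}-\bar{a}^2=q^{2(m-1)}(1-a^2)$, so the prefactors $q^{m-1}$ cancel and $a/(1-a^2)=1/(a^{-1}-a)$.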
\section{Crossover region}
We have found two very different expressions for the eigenvalue: $\mathcal{E}_1^{\textrm{(is)}}$ in \eref{eq:E_CL_RR} describing the
relaxation for relatively small $L$, dominated by the isolated real roots, and $\mathcal{E}_1^{\textrm{(rev)}}$ in \eref{eq:E_CL_RB} describing the asymptotic relaxation when $L - 2m$ is large. As $L$ increases, the contour of complex roots begins to form. The first
approximation is no longer valid, and the second cannot be applied until the contour is sufficiently large.
Nevertheless, we can compare the magnitude of the two expressions in this region. Assuming $m' = m$, we can
approximate \eref{eq:E_CL_RR} as
\begin{equation}
\mathcal{E}_1^{\textrm{(is)}}
\simeq -q^{m-1}(1-q)\left( \frac{2}{\bar{a}} - (1 + q) qc \right).
\end{equation}
The asymptotic expression \eref{eq:E_CL_RB} can be approximated as
\begin{equation}
\mathcal{E}_1^{\textrm{(rev)}} \simeq
-q^{m-1}(1-q)\frac{\pi^2}{(L-2m)^2} \frac{1}{\bar{a}}.
\end{equation}
The contour begins to form when $L-2m = 2$, and at this point we see that
the two expressions are comparable. Figure \ref{fig:E1_crossover} shows the two expressions and
the numerically calculated value in the crossover region.
\begin{figure}[h!]
\begin{center}
\includegraphics{E1_crossover_m20}
\caption{$\mathcal{E}_1^{\textrm{(is)}}$ (dashed), $\mathcal{E}_1^{\textrm{(rev)}}$ (dotted),
and numerical value (datapoints connected by thin solid line) for $m=20$ and parameters \eref{eq:exparams}.
\label{fig:E1_crossover}}
\end{center}
\end{figure}
\section{Conclusion}
\label{se:conc}
We have computed the relaxation rate on the coexistence line of the partially asymmetric exclusion process
with reverse-biased boundary rates. Such rates introduce a length scale $m \approx -\log(-ac)/\log q$ in the system, determined by \eref{eq:mdef}. There are two distinct sub-regimes. When the system size is small compared to the reverse
bias, i.e.\ $L \lesssim 2m$, the relaxation is exponential in the system size and the inverse relaxation time
is given by \eref{eq:E_CL_RR}. For large systems, $L\gg 2m$, the relaxation on the coexistence line is
diffusive, but the inverse relaxation time vanishes as the square of the reduced system size $L-2m$. This
behaviour can be reproduced by an effective one-particle diffusion model as in \cite{GierE06,KolomeiskySKS}, but
in a reduced system of size $L-2m$.
We thus observe the effects of the reverse bias on the relaxation rate even with non-zero (but small) rates in the forward direction -- provided the rates in the reverse direction are large enough. We suspect that in this case the forward contribution to the stationary current of particles is not strong enough to destroy the reverse bias phase, contrary to the argument in \cite{PASEPstat2}. It would
be of interest to revisit the calculation of the stationary current to see whether this is indeed the case.
The corresponding physical interpretation of these results is the following. In the setup of this paper, where the bulk bias is from left to right, the system will fill up completely on the right hand side over a length $2m$. In the remaining space of size $L-2m$, the system behaves as a regular PASEP on the coexistence line, and forms a uniformly distributed domain wall. As far as we are aware, the stationary density profile of the PASEP in the reverse bias regime has not been computed analytically, and it would be interesting to see whether this picture is confirmed by such a calculation.
\section*{Acknowledgments}
Our warm thanks go to Fabian Essler for discussions. Financial assistance from the Australian Research Council is gratefully acknowledged.
\section{Introduction}
In this paper we are interested in finding the solution of the following equation
\begin{equation}
\left(\frac{{\partial }^4}{\partial x^4}-2 c \frac{{\partial }^4}{\partial x^2\partial y^2}+\frac{{\partial }^4}{\partial y^4}\right)u\left(x,y\right) = 0,\, \, c>0. \label{GrindEQ__1_}
\end{equation}
Depending on the value of $c$ we may consider three cases: the case $0<c<1$, which we call the
$c$-biwave equation of elliptic type; the case $c>1$, which we call the $c$-biwave equation of hyperbolic type;
and the case $c=1$, in which Eq.~\eqref{GrindEQ__1_} is the well-known biwave equation.
The biwave equation has been used in the modeling of
$d$-wave superconductors (see, for instance, \cite{Feng11} and references therein) and in probability
theory \cite{Pogor05,Kolo15}. In \cite{Grysh18} the author studied Eq.~\eqref{GrindEQ__1_} in the case where $c<-1$ and
considered its application to the theory of plane orthotropy.
It is easily verified that any equation of the form
\[\left(A\frac{{\partial }^4}{\partial x^4}+2B\frac{{\partial }^4}{\partial x^2\partial y^2}+C\frac{{\partial }^4}{\partial y^4}\right)u\left(x,y\right)=0,\]
where $AC>0$ and $AB<0$, can be reduced to Eq.~\eqref{GrindEQ__1_} by a change of variables. To obtain all solutions of Eq.~\eqref{GrindEQ__1_} for $1\ne c>0$ we use the method developed in \cite{Pogor14}. According to this approach, we need a commutative algebra with a basis containing elements $e_1$, $e_2$ such that
\begin{equation}
e^4_1-2 c\,e^2_1 e^2_2+e^4_2=0. \label{GrindEQ__2_}
\end{equation}
Then, we study monogenic functions on the subspace of this algebra containing $e_1$, $e_2$ and show that any solution
of Eq. \eqref{GrindEQ__1_} can be obtained as a component of such monogenic functions.
\section{Hyperbolic case}
First, we study Eq. \eqref{GrindEQ__1_} in the case $c>1$, which is said to be hyperbolic. Let us consider an
associative commutative algebra over the real field ${\mathbb R}$
$$A_c=\left\{x\mathbf {u} +y \mathbf {f}+z \mathbf {e}+v \mathbf {fe} : x,y,z,v \in \mathbb{R} \right\}$$
with a basis ${\mathbf u}$, ${\mathbf f}$, ${\mathbf e}$, $\mathbf {fe}$, where ${\mathbf u}$ is the identity element of $A_c$ and the multiplication rules
$\mathbf {fe}=\mathbf {ef}$,
${\mathbf {f}}^2 = {\mathbf {u}}$, ${{\mathbf e}}^2 = {\mathbf u}-m \,\mathbf {fe}$ hold, with $m=\sqrt{2(c-1)}$.
The basis elements ${\mathbf u}$, ${\mathbf e}$ satisfy Eq. \eqref{GrindEQ__2_}.
It is easily verified that for $c>1$ algebra $A_c$ has the following idempotents
\[
i_1=\frac{k_1}{k_1+k_2}{\mathbf u}-\frac{\mathbf {f}\sqrt{2}}{k_1+k_2}{\mathbf e},
\]
\begin{equation}
i_2=\frac{k_2}{k_1+k_2}{\mathbf u}+\frac{\mathbf {f}\sqrt{2}}{k_1+k_2}{\mathbf e}, \label{GrindEQ__3_}
\end{equation}
where $k_1=\sqrt{c+1}-\sqrt{c-1}$, $k_2=\sqrt{c+1}+\sqrt{c-1}$.
Therefore, we have
\[i_1+i_2= {\mathbf u}\]
and
\begin{align*}
i_1 \,i_2=\frac{k_1 k_2}{{\left(k_1+k_2\right)}^2}{\mathbf u}-\frac{\sqrt{2}k_2}{{\left(k_1+k_2\right)}^2}\mathbf {f}{\mathbf e}
+\frac{\sqrt{2}k_1}{{\left(k_1+k_2\right)}^2}\mathbf {f}{\mathbf e} \\
-\frac{2}{{\left(k_1+k_2\right)}^2}{\mathbf u}
+\frac{2m}{{\left(k_1+k_2\right)}^2}\mathbf {f}{\mathbf e}=0.
\end{align*}
It is easily seen that
\begin{equation}
{\mathbf e} ={\mathbf f} \frac{k_1}{\sqrt{2}}i_2 - {\mathbf f} \frac{k_2}{\sqrt{2}}i_1. \label{GrindEQ__4_}
\end{equation}
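These identities follow directly from the Cayley table. The sketch below (the helper names and the arbitrary choice $c=2$ are ours, for illustration only) multiplies elements of $A_c$ componentwise in the basis $({\mathbf u},{\mathbf f},{\mathbf e},\mathbf{fe})$ and checks that $i_1$, $i_2$ are orthogonal idempotents summing to ${\mathbf u}$, and that the decomposition \eqref{GrindEQ__4_} of ${\mathbf e}$ holds:

```python
import math

c = 2.0                      # any c > 1; illustrative choice
m = math.sqrt(2 * (c - 1))

def mul(x, y):
    """Product in A_c, elements given as (u, f, e, fe) component tuples.
    Cayley table: f*f = u, e*e = fe*fe = u - m*fe, f*e = fe, f*fe = e,
    e*fe = f - m*e."""
    a0, a1, a2, a3 = x
    b0, b1, b2, b3 = y
    return (
        a0*b0 + a1*b1 + a2*b2 + a3*b3,
        a0*b1 + a1*b0 + a2*b3 + a3*b2,
        a0*b2 + a2*b0 + a1*b3 + a3*b1 - m*(a2*b3 + a3*b2),
        a0*b3 + a3*b0 + a1*b2 + a2*b1 - m*(a2*b2 + a3*b3),
    )

k1 = math.sqrt(c + 1) - math.sqrt(c - 1)
k2 = math.sqrt(c + 1) + math.sqrt(c - 1)
s = k1 + k2
u  = (1, 0, 0, 0)
f  = (0, 1, 0, 0)
e  = (0, 0, 1, 0)
i1 = (k1 / s, 0, 0, -math.sqrt(2) / s)
i2 = (k2 / s, 0, 0,  math.sqrt(2) / s)

close = lambda x, y: all(abs(p - q) < 1e-12 for p, q in zip(x, y))
assert close(mul(i1, i1), i1) and close(mul(i2, i2), i2)   # idempotents
assert close(mul(i1, i2), (0, 0, 0, 0))                    # i1 * i2 = 0
assert close(tuple(p + q for p, q in zip(i1, i2)), u)      # i1 + i2 = u
# Eq. (4): e = f*(k1/sqrt(2))*i2 - f*(k2/sqrt(2))*i1
fi2, fi1 = mul(f, i2), mul(f, i1)
rhs = tuple(k1/math.sqrt(2)*p - k2/math.sqrt(2)*q for q, p in zip(fi1, fi2))
assert close(rhs, e)
```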
Consider a subspace $B_c$ of algebra $A_c$ of the following form
\[B_c=\left\{x{\mathbf u}+y{\mathbf e} : x,y\in {\mathbb R}\right\}.\]
\begin{defin0} \textit{A function } $g:\,B_c\to A_c$ \textit{ is called differentiable (or monogenic) on }
$B_c$ \textit{ if for any } $B_c\ni w=x{\mathbf u}+y{\mathbf e}$ \textit{ there exists a unique element }
$g'\left(w\right)$ \textit{ such that for any } $h\in B_c$
\[\lim_{{\mathbb R}\ni \varepsilon \to 0} \frac{g\left(w+\varepsilon h\right)-g\left(w\right)}{\varepsilon }
=h g'\left(w\right),\]
\textit{where }$h g'\left(w\right)$\textit{ is the product of }$h$\textit{ and }$g'\left(w\right)$\textit{ as elements of} $A_c$.
\end{defin0}
It follows from \cite{Pogor14} that a function $g\left(w\right)={\mathbf u}\,u_1\left(x,y\right)+{\mathbf f}\,u_2\left(x,y\right)
+{\mathbf e}\,u_3\left(x,y\right)+{\mathbf {f\,e}} \,u_4\left(x,y\right)$
is monogenic if and only if there exist continuous partial derivatives $\frac{\partial u_i\left(x,y\right)}{\partial x}$, $\frac{\partial u_i\left(x,y\right)}{\partial y},$ $i=1,2,3,4$ and it satisfies the following Cauchy-Riemann type conditions
\begin{equation*}
\, {\mathbf e}\frac{\partial }{\partial x}g\left(w\right)={\mathbf u}\frac{\partial }{\partial y}g\left(w\right), \, \forall w\in B_c,
\end{equation*}
or
\[\frac{\partial u_1\left(x,y\right)}{\partial y}=\frac{\partial u_3\left(x,y\right)}{\partial x},\]
\[\frac{\partial u_2\left(x,y\right)}{\partial y}=\frac{\partial u_4\left(x,y\right)}{\partial x},\]
\[\frac{\partial u_3\left(x,y\right)}{\partial y}=\frac{\partial u_1\left(x,y\right)}{\partial x}-m\frac{\partial u_4\left(x,y\right)}{\partial x},\]
\[\frac{\partial u_4\left(x,y\right)}{\partial y}=\frac{\partial u_2\left(x,y\right)}{\partial x}-m\frac{\partial u_3\left(x,y\right)}{\partial x}.\]
It is also proved in \cite{Pogor14} that if $g$
is monogenic then its components $u_i\left(x,y\right)$ satisfy Eq. \eqref{GrindEQ__1_}.
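A concrete illustration of the four conditions above: for the monogenic function $g(w)=w^2$ one finds $w^2=(x^2+y^2)\,{\mathbf u}+2xy\,{\mathbf e}-m y^2\,\mathbf{fe}$, i.e. $u_1=x^2+y^2$, $u_2=0$, $u_3=2xy$, $u_4=-my^2$. The following hedged sketch (the finite-difference helper and the choice $c=2$ are ours) verifies the system numerically:

```python
import math

c = 2.0
m = math.sqrt(2 * (c - 1))

# Components of g(w) = w^2 for w = x*u + y*e:
# w^2 = (x^2 + y^2) u + 2xy e - m y^2 fe
u1 = lambda x, y: x**2 + y**2
u2 = lambda x, y: 0.0
u3 = lambda x, y: 2 * x * y
u4 = lambda x, y: -m * y**2

def d(fun, x, y, wrt, h=1e-6):
    """Central finite difference of fun with respect to 'x' or 'y'."""
    if wrt == 'x':
        return (fun(x + h, y) - fun(x - h, y)) / (2 * h)
    return (fun(x, y + h) - fun(x, y - h)) / (2 * h)

x, y, tol = 0.7, -1.3, 1e-6
assert abs(d(u1, x, y, 'y') - d(u3, x, y, 'x')) < tol
assert abs(d(u2, x, y, 'y') - d(u4, x, y, 'x')) < tol
assert abs(d(u3, x, y, 'y') - (d(u1, x, y, 'x') - m * d(u4, x, y, 'x'))) < tol
assert abs(d(u4, x, y, 'y') - (d(u2, x, y, 'x') - m * d(u3, x, y, 'x'))) < tol
```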
By passing in $B_c$ from the basis ${\mathbf u}$, ${\mathbf e}$ to the basis $i_1$, $i_2$, we have
\[w=x\,{\mathbf u}+y\,{\mathbf e}=\left(x-{\mathbf f}\frac{k_2}{\sqrt{2}}y\right)\,i_1 +\left(x+{\mathbf f}\frac{k_1}{\sqrt{2}}y\right)\,i_2.\]
\begin{lem0}
\textit{A function }$g:\,B_c\to A_c$\textit{, where }$c>1$\textit{, is differentiable if and only if it can be
represented as follows}
\begin{equation} g\left(w\right)=\alpha \left(w_1\right)i_1+\beta \left(w_2\right) i_2, \label{GrindEQ__15_}
\end{equation}
\textit{where }$w_1=x-{\mathbf f}\frac{k_2}{\sqrt{2}}y$\textit{, }$w_2=x+{\mathbf f}\frac{k_1}{\sqrt{2}}y$
\textit{ and }$\alpha \left(w_1\right)$\textit{, }$\beta \left(w_2\right)$
\textit{ have continuous partial derivatives $\frac{\partial }{\partial x}\alpha \left(w_1\right), \frac{\partial }{\partial y}\alpha \left(w_1\right), \frac{\partial }{\partial x}\beta \left(w_2\right), \frac{\partial }{\partial y}\beta \left(w_2\right)$ satisfying }
\[\frac{\partial }{\partial y}\alpha \left(w_1\right)=-{\mathbf f}\frac{k_2}{\sqrt{2}} \frac{\partial }{\partial x}\alpha \left(w_1\right),\] \[\frac{\partial }{\partial y}\beta \left(w_2\right)={\mathbf f}\frac{k_1}{\sqrt{2}} \frac{\partial }{\partial x}\beta \left(w_2\right).\]
\end{lem0}
\begin{proof}
Sufficiency can be verified directly. Indeed,
\[\frac{\partial }{\partial y}g\left(w\right)=\frac{\partial }{\partial y}\alpha \left(w_1\right)i_1+\frac{\partial }{\partial y}\beta \left(w_2\right)i_2\]
\[=-{\mathbf f}\frac{k_2}{\sqrt{2}} \frac{\partial }{\partial x}\alpha \left(w_1\right)i_1+{\mathbf f}\frac{k_1}{\sqrt{2}} \frac{\partial }{\partial x}\beta \left(w_2\right)i_2\]
On the other hand, taking into account Eqs. \eqref{GrindEQ__3_}, \eqref{GrindEQ__4_}, we have
\[{\mathbf e}\frac{\partial }{\partial x}g\left(w\right)=\left({\mathbf f}\frac{k_1}{\sqrt{2}}i_2 - {\mathbf f}\frac{k_2}{\sqrt{2}}i_1\right)\left(\frac{\partial }{\partial x}\alpha \left(w_1\right)i_1 + \frac{\partial }{\partial x}\beta \left(w_2\right)i_2\right)\]
\[= -{\mathbf f}\frac{k_2}{\sqrt{2}} \frac{\partial }{\partial x}\alpha \left(w_1\right)i_1 +{\mathbf f}\frac{k_1}{\sqrt{2}} \frac{\partial }{\partial x}\beta \left(w_2\right)i_2.\]
Hence,
\[{\mathbf e}\frac{\partial }{\partial x}g\left(w\right)={\mathbf u}\frac{\partial }{\partial y}g\left(w\right).\]
Now let us prove necessity. Suppose that a function
\[g\left(w\right)={\mathbf u}u_1\left(x,y\right)+{\mathbf f}\,u_2\left(x,y\right)+{\mathbf e}\,u_3\left(x,y\right)
+{\mathbf {f\,e}}\,u_4\left(x,y\right)\]
is monogenic on $B_c$. Let us define
\[\alpha \left(w_1\right)={\mathbf u}\left(u_1\left(x,y\right) - \frac{k_2}{\sqrt{2}}u_4\left(x,y\right)\right)+{\mathbf f}\left(u_2\left(x,y\right)-\frac{k_2}{\sqrt{2}}u_3\left(x,y\right)\right),\]
\[\beta \left(w_2\right)={\mathbf u}\left(u_1\left(x,y\right) + \frac{k_1}{\sqrt{2}}u_4\left(x,y\right)\right)+{\mathbf f}\left(u_2\left(x,y\right)+\frac{k_1}{\sqrt{2}}u_3\left(x,y\right)\right).\]
Thus, we have
\[\frac{\partial }{\partial y}\alpha \left(w_1\right)={\mathbf u}\left(\frac{\partial u_3\left(x,y\right)}{\partial x}-\frac{k_2}{\sqrt{2}}\left(\frac{\partial u_2\left(x,y\right)}{\partial x}-m\frac{\partial u_3\left(x,y\right)}{\partial x}\right)\right)\]
\[\qquad + {\mathbf f}\left(\frac{\partial u_4\left(x,y\right)}{\partial x}-\frac{k_2}{\sqrt{2}}\left(\frac{\partial u_1\left(x,y\right)}{\partial x}-m\frac{\partial u_4\left(x,y\right)}{\partial x}\right)\right)\]
\[=-{\mathbf f}\frac{k_2}{\sqrt{2}}\frac{\partial u_1\left(x,y\right)}{\partial x} - {\mathbf u}\frac{k_2}{\sqrt{2}}\frac{\partial u_2\left(x,y\right)}{\partial x}+{\mathbf u}\left(\frac{k_2}{\sqrt{2}}\ m+1\right)\frac{\partial u_3\left(x,y\right)}{\partial x}\]
\[\qquad +{\mathbf f}\left(\frac{k_2}{\sqrt{2}}\ m+1\right)\frac{\partial u_4\left(x,y\right)}{\partial x}.\]
Taking into account that
\[\frac{k_2}{\sqrt{2}} m+1=\sqrt{c^2-1}+c=\frac{k^2_2}{2},\]
we have $\frac{\partial }{\partial y}\alpha \left(w_1\right)=-{\mathbf f}\frac{k_2}{\sqrt{2}} \frac{\partial }{\partial x}\alpha \left(w_1\right)$.
Much in the same manner, it can be shown that $\frac{\partial }{\partial y}\beta \left(w_2\right)={\mathbf f}\frac{k_1}{\sqrt{2}} \frac{\partial }{\partial x}\beta \left(w_2\right)$\textit{.}
\end{proof}
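The algebraic identity $\frac{k_2}{\sqrt{2}}\,m+1=\sqrt{c^2-1}+c=\frac{k_2^2}{2}$ used in the proof can be confirmed numerically; the $c$ values below are arbitrary illustrative choices:

```python
import math

for c in (1.5, 2.0, 5.0, 10.0):   # any c > 1
    m = math.sqrt(2 * (c - 1))
    k2 = math.sqrt(c + 1) + math.sqrt(c - 1)
    # k2/sqrt(2)*m + 1 = sqrt(c^2 - 1) + c = k2^2 / 2
    assert abs(k2 / math.sqrt(2) * m + 1 - (math.sqrt(c * c - 1) + c)) < 1e-12
    assert abs(k2**2 / 2 - (math.sqrt(c * c - 1) + c)) < 1e-12
```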
\begin{rems0} \textit{Considering variables }$x$\textit{, }$y_1=-\frac{k_2}{\sqrt{2}}y$\textit{ and }$x$\textit{, }$y_2=\frac{k_1}{\sqrt{2}}y,$\textit{ we have }
\begin{equation} \label{GrindEQ__5_}
\frac{\partial }{\partial y_1}\alpha = {\mathbf f} \frac{\partial }{\partial x}\alpha,
\end{equation}
\[\frac{\partial }{\partial y_2}\beta = {\mathbf f} \frac{\partial }{\partial x}\beta.\]
\textit{Hence,} \textit{it is easily verified that if the components $\alpha_1$, $\alpha_2$ of} $\alpha \left(w_1\right)=\alpha_1 \left(w_1\right)+{\mathbf f}\alpha_2 \left(w_1\right)$ \textit{have continuous partial derivatives $\frac{{\partial }^2}{\partial x^2}\alpha_k \left(w_1\right)$ and $\frac{{\partial }^2}{\partial y^2_1}\alpha_k \left(w_1\right)$, $k=1,2$ then they satisfy the wave equation}
\[\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_1}\right)u\left(x,y_1\right)=0.\]
\textit{Similarly, the components $\beta_1$, $\beta_2$ of} $\beta \left(w_2\right)=\beta_1 \left(w_2\right)+{\mathbf f}\beta_2 \left(w_2\right)$ \textit{satisfy the wave equation}
\[\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_2}\right)u\left(x,y_2\right)=0.\]
\end{rems0}
\begin{theor10} \label{the1}
$u\left(x,y\right)$\textit{ is a solution of Eq. \eqref{GrindEQ__1_} for }$c>1$\textit{ if and only if for some }$i,j\in \left\{1,2\right\}$\textit{ it can be represented as follows}
\[u\left(x,y\right)={\alpha }_i\left({\omega }_1\right)+{\beta }_j\left({\omega }_2\right),\]
\textit{where }${\alpha }_i\left({\omega }_1\right),{\beta }_j\left({\omega }_2\right)$\textit{ are four times continuously differentiable components of }$\alpha \left({\omega }_1\right)$\textit{ and }$\beta \left({\omega }_2\right)$\textit{ of the monogenic function }$g\left(\omega \right)$\textit{ in the decomposition \eqref{GrindEQ__15_}, i.e.,}
\[g\left(\omega \right)=\alpha \left({\omega }_1\right)i_1+\beta \left({\omega }_2\right)i_2,\]
\textit{where }$\alpha \left({\omega }_1\right)={\alpha }_1\left({\omega }_1\right)+{\mathbf f}{\alpha }_2\left({\omega }_1\right)$\textit{, }$\beta \left({\omega }_2\right)={\beta }_1\left({\omega }_2\right)+{\mathbf f}{\beta }_2\left({\omega }_2\right)$\textit{ satisfy Eq.}
\eqref{GrindEQ__5_}.
\end{theor10}
\begin{proof}
As mentioned above, $u\left(x,y\right)={\alpha }_i\left({\omega }_1\right)+{\beta }_j\left({\omega }_2\right)$ is a solution of Eq. \eqref{GrindEQ__1_} for $c>1$.
Now suppose that $u\left(x,y\right)$ is a solution of Eq. \eqref{GrindEQ__1_}. It is easily verified that
\begin{equation}
\left(\frac{{\partial }^4}{\partial x^4}-2c\frac{{\partial }^4}{\partial x^2\partial y^2}+\frac{{\partial }^4}{\partial y^4}\right)u\left(x,y\right)=\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_1}\right)\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_2}\right)u\left(x,y\right)=0. \label{GrindEQ__11_}
\end{equation}
It is easily seen that Eq. \eqref{GrindEQ__11_} is equivalent to the set of the following systems
\[\left\{ \begin{array}{c}
\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_2}\right)u\left(x,y\right)=v_1\left(x,y\right), \\
\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_1}\right)v_1\left(x,y\right)=0 \end{array}
\right.\]
or
\[\left\{ \begin{array}{c}
\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_1}\right)u\left(x,y\right)=v_2\left(x,y\right), \\
\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_2}\right)v_2\left(x,y\right)=0. \end{array}
\right.\]
Let us consider the first system. Since any solution of $\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_k}\right)v_k\left(x,y\right)=0$ is of the form $v_k\left(x,y\right)=f_1\left(x+y_k\right)+f_2\left(x-y_k\right)$, where $f_i$, $i=1,2$, are arbitrary twice differentiable functions, it follows from the second equation of the system that
\[v_1\left(x,y\right)=f_1\left(x+y_1\right)+f_2\left(x-y_1\right).\]
Thus, the first equation of the system is
\begin{equation}
\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_2}\right)u\left(x,y\right)=f_1\left(x+y_1\right)+f_2\left(x-y_1\right). \label{GrindEQ__12_}
\end{equation}
It is easily seen that a particular solution of Eq. \eqref{GrindEQ__12_} is
\begin{align*}U\left(x,y\right)=\frac{k^2_1}{k^2_1-k^2_2}\left(F_1\left(x-\frac{k_2}{\sqrt{2}}y\right)+F_2\left(x+\frac{k_2}{\sqrt{2}}y\right)\right) \\=\frac{k^2_1}{k^2_1-k^2_2}\left(F_1\left(x+y_1\right)+F_2\left(x-y_1\right)\right),\end{align*}
where $F^{''}_k=f_k$, $k=1,2$.
Thus, the general solution of the system is as follows
\[u\left(x,y\right)=g_1\left(x+y_2\right)+g_2\left(x-y_2\right)+\frac{k^2_1}{k^2_1-k^2_2}\left(F_1\left(x+y_1\right)+F_2\left(x-y_1\right)\right).\]
Let us put ${\alpha }_1\left({\omega }_1\right)=\frac{k^2_1}{k^2_1-k^2_2}\left(F_1\left(x+y_1\right)+F_2\left(x-y_1\right)\right)$ and ${\beta }_2\left({\omega }_2\right)=g_1\left(x+y_2\right)+g_2\left(x-y_2\right)$.
Taking into account that $\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_1}\right) \left[\frac{k^2_1}{k^2_1-k^2_2}\left(F_1\left(x+y_1\right)+F_2\left(x-y_1\right)\right)\right]=0$ and \\ $\left(\frac{{\partial }^2}{\partial x^2}-\frac{{\partial }^2}{\partial y^2_2}\right)\left[g_1\left(x+y_2\right)+g_2\left(x-y_2\right)\right]=0$ we conclude the proof for the first system.
The case of the second system can be proved similarly.
\end{proof}
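The factorization \eqref{GrindEQ__11_} can be sanity-checked on explicit quartic solutions: for $u(x,y)=(x+ay)^4$ the fourth derivatives are constants, and Eq. \eqref{GrindEQ__1_} reduces to $1-2c\,a^2+a^4=0$, which is satisfied precisely for $a=\pm k_1/\sqrt{2}$ and $a=\pm k_2/\sqrt{2}$. A hedged numeric sketch (the arbitrary choice $c=2$ is ours):

```python
import math

c = 2.0
k1 = math.sqrt(c + 1) - math.sqrt(c - 1)
k2 = math.sqrt(c + 1) + math.sqrt(c - 1)

def biwave_residual(a, cc):
    """For u = (x + a*y)**4:  u_xxxx - 2c u_xxyy + u_yyyy = 24*(1 - 2*cc*a**2 + a**4)."""
    return 24 * (1 - 2 * cc * a**2 + a**4)

for a in (k1 / math.sqrt(2), -k1 / math.sqrt(2),
          k2 / math.sqrt(2), -k2 / math.sqrt(2)):
    assert abs(biwave_residual(a, c)) < 1e-10
```

The check works because $a^2=k_{1,2}^2/2=c\mp\sqrt{c^2-1}$ are exactly the roots of $s^2-2cs+1=0$.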
\section{Elliptic case}
Now we consider an associative commutative algebra $A_c$, where $0<c<1$, over the complex field ${\mathbb C}$
with a basis ${\mathbf u}$, ${\mathbf e}$ and the following Cayley table
${\mathbf {u\,e}} ={\mathbf {e\,u}} = {\mathbf e}$, ${{\mathbf e}}^2={\mathbf u}+ {\mathbf i}\mu {\mathbf e}$,
where $\mu =\sqrt{2(1-c)}$. The matrix representations of ${\mathbf u}$ and ${\mathbf e}$ are
\[{\mathbf u}=\left( \begin{array}{cc}
1 & 0 \\
0 & 1 \end{array}
\right){\mathbf ,\ \ }{\mathbf e}=\left( \begin{array}{cc}
0 & 1 \\
1 & {\mathbf i}\mu \end{array}
\right).\]
Hence, we have the following traces of these representations
\[{tr}\left({\mathbf {uu}}\right)=2, \quad {tr}\left({\mathbf {ue}}\right)={\mathbf i}\mu ,\quad {tr}\left({\mathbf {ee}}\right)=2-{\mu }^2.\]
Since
\[{det}\left( \begin{array}{cc}
{tr}\left({\mathbf {uu}}\right) & {tr}\left({\mathbf {ue}}\right) \\
{tr}\left({\mathbf {ue}}\right) & {tr}\left({\mathbf {ee}}\right) \end{array}
\right)=2(1+c)\ne 0,\]
the algebra $A_c$ is semi-simple \cite{Waerden59}.
By following steps similar to those leading to Eq. \eqref{GrindEQ__3_}, we can show that for $0<c<1$ the
algebra $A_c$ has the following idempotents
\begin{equation}
I_-=\frac{k_1}{k_1+k_2}{\mathbf u}+\frac{\sqrt{2}}{k_1+k_2}{\mathbf e},\quad
I_+=\frac{k_2}{k_1+k_2}{\mathbf u}-\frac{\sqrt{2}}{k_1+k_2}{\mathbf e}, \label{GrindEQ__6_}
\end{equation}
where $k_1=\sqrt{c+1}-{\mathbf i}\sqrt{1-c}$, $k_2=\sqrt{c+1}+{\mathbf i}\sqrt{1-c}$.
It is also easily verified that these idempotents also satisfy
\[I_-+I_+={\mathbf u}\]
and
\[I_-\,I_+=0.\]
It is straightforward to see that
\begin{equation}
{\mathbf e} = \frac{ k_2}{\sqrt{2}}\,I_- - \frac{ k_1}{\sqrt{2}}\,I_+. \label{GrindEQ__7_}
\end{equation}
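As in the hyperbolic case, these relations follow from the multiplication rule ${\mathbf e}^2={\mathbf u}+{\mathbf i}\mu{\mathbf e}$. A hedged numeric sketch (the helper names and the choice $c=1/2$ are ours), representing an element $a_0{\mathbf u}+a_1{\mathbf e}$ of $A_c$ as a pair of complex numbers:

```python
import math

c = 0.5                      # any 0 < c < 1; illustrative choice
mu = math.sqrt(2 * (1 - c))

def mul(x, y):
    """Product in the elliptic A_c: (a0 u + a1 e)(b0 u + b1 e)
    with e^2 = u + i*mu*e over the complex field."""
    a0, a1 = x
    b0, b1 = y
    return (a0 * b0 + a1 * b1, a0 * b1 + a1 * b0 + 1j * mu * a1 * b1)

k1 = math.sqrt(c + 1) - 1j * math.sqrt(1 - c)
k2 = math.sqrt(c + 1) + 1j * math.sqrt(1 - c)
s = k1 + k2
Im = (k1 / s,  math.sqrt(2) / s)   # I_-
Ip = (k2 / s, -math.sqrt(2) / s)   # I_+
u, e = (1, 0), (0, 1)

close = lambda x, y: all(abs(p - q) < 1e-12 for p, q in zip(x, y))
assert close(mul(Im, Im), Im) and close(mul(Ip, Ip), Ip)   # idempotents
assert close(mul(Im, Ip), (0, 0))                          # I_- I_+ = 0
assert close((Im[0] + Ip[0], Im[1] + Ip[1]), u)            # I_- + I_+ = u
# Eq. (7): e = (k2/sqrt(2)) I_-  -  (k1/sqrt(2)) I_+
rhs = tuple(k2 / math.sqrt(2) * p - k1 / math.sqrt(2) * q
            for p, q in zip(Im, Ip))
assert close(rhs, e)
```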
\begin{lem0}
\textit{All non-zero elements of the subspace }$B_c=\left\{x\,{\mathbf u}+y\,{\mathbf e} : x,y\in {\mathbb R}\right\}$
\textit{ of the algebra }$A_c$\textit{ are invertible, that is, if }$0\ne w\in B_c$\textit{ then there exists} $w^{-1}\in B_c$.
\end{lem0}
\begin{proof} Suppose $w=s\,{\mathbf u}+t\,{\mathbf e}\in B_c$.
Let us show that there exists $w^{-1}=x\,{\mathbf u}+y\,{\mathbf e}$,
$x,y\in {\mathbb R}$ such that $w\,w^{-1}=1$. Indeed, the equation
\[\left(s\,{\mathbf u}+t\,{\mathbf e}\right)\left(x\,{\mathbf u}+y\,{\mathbf e}\right)={\mathbf u}\]
has a unique solution since the determinant of the system
\[ \begin{array}{c}
sx+ty=1, \\
tx+\left(s+{\mathbf i}\mu t\right)y=0, \end{array}
\]
where $x,y$ are unknown, is $\Delta =s^2-t^2+{\mathbf i}\mu ts$, and $\Delta =0$ if and only if $s=t=0$.
\end{proof}
A function $f\left(w\right)$, $w\in B_c$, is said to be differentiable if it is differentiable in the usual sense, i.e., if for all
$w\in B_c$ there exists the following limit
\[\lim_{B_c \ni \Delta w\to 0} \frac{f\left(w+\Delta w\right)-f\left(w\right)}{\Delta w} =f'\left(w\right).\]
It is easily seen that if $f$ is differentiable then it is monogenic and, hence, it satisfies the following Cauchy-Riemann type conditions \cite{Pogor14}
\[
{\mathbf e}\frac{\partial }{\partial x}f\left(w\right)={\mathbf u}\frac{\partial }{\partial y}f\left(w\right),
\]
or, in this case,
\[\frac{\partial u_1\left(x,y\right)}{\partial y}=\frac{\partial u_3\left(x,y\right)}{\partial x},\quad \frac{\partial u_2\left(x,y\right)}{\partial y}=\frac{\partial u_4\left(x,y\right)}{\partial x},\]
\[\frac{\partial u_3\left(x,y\right)}{\partial y}=\frac{\partial u_1\left(x,y\right)}{\partial x}-\mu \frac{\partial u_4\left(x,y\right)}{\partial x},\]
\[\frac{\partial u_4\left(x,y\right)}{\partial y}=\frac{\partial u_2\left(x,y\right)}{\partial x}+\mu \frac{\partial u_3\left(x,y\right)}{\partial x}.\]
In \cite{Pogor14} it is also proved that if a function $f\left(x,y\right)= {\mathbf u}u_1\left(x,y\right)+{\mathbf i}\,u_2\left(x,y\right)
+{\mathbf e}\,u_3\left(x,y\right)+{\mathbf i}\,{\mathbf e}\,u_4\left(x,y\right)$ is monogenic
then $u_i\left(x,y\right)$ satisfies Eq. \eqref{GrindEQ__1_}. We should mention that a constructive description of
monogenic functions in a three-dimensional harmonic algebra was studied in \cite{Plaksa10,Plaksa13}.
By passing from the basis ${\mathbf u}$, ${\mathbf e}$ to the basis $I_-$, $I_+$ we have
\[w=x\,{\mathbf u}+y\,{\mathbf e}=\left(x + \frac{k_2}{\sqrt{2}}y\right)I_-+\left(x - \frac{k_1}{\sqrt{2}}y\right)I_+.\]
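As in the hyperbolic case, the Cauchy-Riemann type conditions above can be checked on a concrete monogenic function: for $f(w)=w^2$ one has $w^2=(x^2+y^2)\,{\mathbf u}+(2xy+{\mathbf i}\mu y^2)\,{\mathbf e}$, i.e. $u_1=x^2+y^2$, $u_2=0$, $u_3=2xy$, $u_4=\mu y^2$. A hedged sketch (finite-difference helper and the choice $c=1/2$ are ours):

```python
import math

c = 0.5
mu = math.sqrt(2 * (1 - c))

# Components of f(w) = w^2 for w = x*u + y*e with e^2 = u + i*mu*e:
# w^2 = (x^2 + y^2) u + (2xy + i*mu*y^2) e
u1 = lambda x, y: x**2 + y**2
u2 = lambda x, y: 0.0
u3 = lambda x, y: 2 * x * y
u4 = lambda x, y: mu * y**2

def d(fun, x, y, wrt, h=1e-6):
    """Central finite difference of fun with respect to 'x' or 'y'."""
    if wrt == 'x':
        return (fun(x + h, y) - fun(x - h, y)) / (2 * h)
    return (fun(x, y + h) - fun(x, y - h)) / (2 * h)

x, y, tol = 0.9, 0.4, 1e-6
assert abs(d(u1, x, y, 'y') - d(u3, x, y, 'x')) < tol
assert abs(d(u2, x, y, 'y') - d(u4, x, y, 'x')) < tol
assert abs(d(u3, x, y, 'y') - (d(u1, x, y, 'x') - mu * d(u4, x, y, 'x'))) < tol
assert abs(d(u4, x, y, 'y') - (d(u2, x, y, 'x') + mu * d(u3, x, y, 'x'))) < tol
```

Note the sign difference from the hyperbolic case in the last condition ($+\mu$ here versus $-m$ there).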
\begin{lem0} \label{lem3}
\textit{A function }$f:\,B_c\to A_c$\textit{, }$0<c<1$\textit{, is differentiable if and only if it can be represented as follows}
\[f\left(w\right)=\alpha \left(w_1\right)I_-\,+\,\beta \left(w_2\right)I_+,\]
\textit{where }$w_1= x_1 + {\mathbf i}\,y_1$, $x_1 = x$, $y_1 = - {\mathbf i}\,\frac{k_2}{\sqrt{2}} y$,
$w_2= x_2 + {\mathbf i}\,y_2$, $x_2 = x$, $y_2 = {\mathbf i}\,\frac{k_1}{\sqrt{2}} y$
\textit{and }$\alpha \left(w_1\right)$, $\beta \left(w_2\right)$\textit{ are analytic functions of the variables }
$w_1$\textit{, }$w_2$\textit{, respectively, satisfying}
\[\frac{\partial }{\partial y_1}\alpha \left(w_1\right)={\mathbf i} \frac{\partial }{\partial x}\alpha \left(w_1\right),\quad
\frac{\partial }{\partial y_2}\beta \left(w_2\right)={\mathbf i} \frac{\partial }{\partial x}\beta \left(w_2\right).\]
\end{lem0}
\begin{proof}
The sufficiency can be verified directly. Indeed,
\[\frac{\partial }{\partial y}f\left(w\right)=\frac{\partial }{\partial y_1}\alpha \left(w_1\right)
\frac{\partial y_1}{\partial y}I_-+\frac{\partial }{\partial y_2}\beta \left(w_2\right)\frac{\partial y_2}{\partial y}I_+ \]
\[\qquad =-{\mathbf i}\,\frac{k_2}{\sqrt{2}} \frac{\partial }{\partial y_1}\alpha \left(w_1\right)I_-
+ {\mathbf i}\frac{k_1}{\sqrt{2}}\ \frac{\partial }{\partial y_2}\beta \left(w_2\right)I_+\]
\[\qquad = \frac{k_2}{\sqrt{2}} \frac{\partial }{\partial x}\alpha \left(w_1\right)I_- - \frac{k_1}{\sqrt{2}} \frac{\partial }{\partial x}\beta \left(w_2\right)I_+.\]
On the other hand, taking into account Eqs. \eqref{GrindEQ__6_} and \eqref{GrindEQ__7_}, we have
\[{\mathbf e}\frac{\partial }{\partial x}f\left(w\right)=\left(\frac{ k_2}{\sqrt{2}}I_- -
\frac{ k_1}{\sqrt{2}}I_+\right)\left(\frac{\partial }{\partial x}\alpha \left(w_1\right)I_-+ \frac{\partial }{\partial x}\beta \left(w_2\right)I_+\right)\]
\[\quad = \frac{k_2}{\sqrt{2}} \frac{\partial }{\partial x}\alpha \left(w_1\right)I_- - \frac{k_1}{\sqrt{2}} \frac{\partial }{\partial x}\beta \left(w_2\right)I_+.\]
Hence,
\[{\mathbf e}\,\frac{\partial }{\partial x}f\left(w\right)=\frac{\partial }{\partial y}f\left(w\right).\]
Now let us prove necessity. Suppose that a function
\[f\left(w\right)=u_1\left(x,y\right)+ {\mathbf i}\,u_2\left(x,y\right)+{\mathbf e}\,u_3\left(x,y\right)
+{\mathbf i}\,{\mathbf e}\,u_4\left(x,y\right)\]
is monogenic on $B_c$, i.e., ${\mathbf e}\,\frac{\partial }{\partial x}f\left(w\right)=\frac{\partial }{\partial y}f\left(w\right)$.
By using Eq. \eqref{GrindEQ__6_} we can represent $f\left(w\right)$ in the following manner
\[f\left(w\right)=\alpha \left(w_1\right)I_-+\beta \left(w_2\right)I_+,\]
where
\[\alpha \left(w_1\right)=u_1\left(x,y\right)+\frac{ k_2}{\sqrt{2}}u_3\left(x,y\right)
+ {\mathbf i}\,\left( u_2\left(x,y\right) + \frac{ k_2}{\sqrt{2}}u_4\left(x,y\right) \right),\]
\[\beta \left(w_2\right)=u_1\left(x,y\right)-\frac{ k_1}{\sqrt{2}}u_3\left(x,y\right)
+ {\mathbf i}\,\left( u_2\left(x,y\right) - \frac{ k_1}{\sqrt{2}}u_4\left(x,y\right) \right).\]
Consider
\[\frac{\partial }{\partial y}f=\frac{\partial }{\partial y_1}\alpha \left(w_1\right)\frac{\partial y_1}{\partial y}I_-
+\frac{\partial }{\partial y_2}\beta \left(w_2\right)\frac{\partial y_2}{\partial y}I_+\]
\[\qquad = -\frac{{\mathbf i} \,k_2}{\sqrt{2}} \frac{\partial \alpha }{\partial y_1}I_-+\frac{{\mathbf i} \,k_1}{\sqrt{2}}\frac{\partial \beta }{\partial y_2}I_+.\]
Then, taking into account Eq. \eqref{GrindEQ__6_}, we have
\[{\mathbf e}\,\frac{\partial }{\partial x}f\left(w\right)=\left(\frac{ k_2}{\sqrt{2}}I_-
- \frac{ k_1}{\sqrt{2}}I_+\right)\left(\frac{\partial \alpha }{\partial x}I_-+\frac{\partial \beta }{\partial x}I_+\right)\]
\[\quad = \frac{k_2}{\sqrt{2}} \frac{\partial \alpha }{\partial x}I_-
- \frac{k_1}{\sqrt{2}}\frac{\partial \beta }{\partial x} I_+.\]
Therefore,
\[\frac{\partial }{\partial y_1}\alpha \left(w_1\right)= {\mathbf i}\,\frac{\partial }{\partial x}\alpha \left(w_1\right),\]
\[\frac{\partial }{\partial y_2}\beta \left(w_2\right)= {\mathbf i}\,\frac{\partial }{\partial x}\beta \left(w_2\right).\]
Suppose ${\alpha }\left({\omega }_1\right) = {\alpha }_1\left({\omega }_1\right)+{\mathbf i}\,{\alpha }_2\left({\omega }_1\right)$ and ${\beta }\left({\omega }_2\right) = {\beta}_1\left({\omega }_2\right)+{\mathbf i}\,{\beta }_2\left({\omega }_2\right)$.
It follows from the proof of Lemma \ref{lem3} that ${\alpha }_i\left({\omega }_1\right)+{\beta }_j\left({\omega }_2\right)$, $i,j\in \left\{1,2\right\}$ are solutions of Eq. \eqref{GrindEQ__1_} for $0<c<1$.
\end{proof}
\begin{theor10} \label{the2}
$u\left(x,y\right)$\textit{ is a solution of Eq. \eqref{GrindEQ__1_} for }$0<c<1$\textit{ if and only if for some }$i,j\in \left\{1,2\right\}$\textit{ it can be represented as follows}
\[u\left(x,y\right)={\alpha }_i\left({\omega }_1\right)+{\beta }_j\left({\omega }_2\right),\]
\textit{where }${\alpha }_i\left({\omega }_1\right),{\beta }_j\left({\omega }_2\right)$\textit{ are components of }$\ \alpha \left({\omega }_1\right)$\textit{ and }$\beta \left({\omega }_2\right)$\textit{ of the monogenic function }$f\left(\omega \right)$\textit{ in the decomposition \eqref{GrindEQ__15_}, i.e.,}
\[f\left(\omega \right)=\alpha \left({\omega }_1\right)I_-+\beta \left({\omega }_2\right)I_+,\]
\textit{where }$\alpha \left({\omega }_1\right)$\textit{, }$\beta \left({\omega }_2\right)$\textit{ are complex analytical functions of respective variables}.
\end{theor10}
\begin{proof} As mentioned above ${\alpha }_i\left({\omega }_1\right)+{\beta }_j\left({\omega }_2\right)$, $i,j\in \left\{1,2\right\}$ are solutions of Eq. \eqref{GrindEQ__1_} for $0<c<1$.
If $u\left(x,y\right)$ is a solution of Eq. \eqref{GrindEQ__1_} for $0<c<1$, then much in the same way as in the proof of Theorem \ref{the1} we can show that $u\left(x,y\right)={\alpha }_i\left({\omega }_1\right)+{\beta }_j\left({\omega }_2\right)$.
\end{proof}
\section{Introduction}
Sociophysics is based on the use of concepts and tools from physics to describe social and political behavior \cite{strike}. While the validity of such a transfer has long been questioned among physicists \cite{testimony}, no one ever expected that a basic sociophysics question might in turn lead to a new development within physics. In this paper we report, to our knowledge for the first time, on such a development. Indeed economics long ago contributed substantially to statistical physics, but indirectly, providing several tools it developed on its own, which were then taken over by physics \cite{econo}.
Opinion dynamics has been a very active field of research among physicists in the recent years \cite{opiniondyn,weron,sznajd}.
Most models are based on some local update rule and corresponding results are obtained using numerical simulations \cite{opiniondyn,weron,sznajd}.
The Sznajd model, initiated in 2000 \cite{weron}, is the most popular and has generated a great number of papers \cite{sznajd}.
In parallel, a much older one proposed by Galam in 1986 \cite{voting-old} has stayed confined to only a few works \cite{mino}. But a recent universal scheme for local rules shows that both models have the same dynamics \cite{unify}, prompting the question of why they have been perceived as different, with an advantage for the former.
Indeed, the reason lies in the fact that, although the Galam model is solvable analytically, its very formulation has been perceived as the signature of an intrinsic mean field nature. Since its dynamics is driven by a total reshuffling of agents between repeated local updates, in principle everybody can interact with everybody. This fact has been understood as meaning that everybody does interact with everybody simultaneously, as in a mean field treatment \cite{espagnol}. However that is not the case, due to the local range of the interactions, which are restricted to separate small groups of agents after each reshuffling. At least, for years, Galam was adamant in claiming it.
In this work we address this question by a thorough numerical Monte Carlo investigation of the effect of reshuffling spins on the phase diagram of the two-dimensional nearest neighbor (NN) Ising model \cite{ising}.
Reshuffling is introduced gradually according to the variable $0\leq p \leq 1$ where $p$ is the probability of reshuffling all the spins of the lattice at each Monte Carlo step.
We call it the Gradually Reshuffled Ising Model and denote it by GRIM.
It is worth stressing that after each spin reshuffling, interactions stay local among NN.
The critical temperature $T_C$ is calculated for a series of values of $p$ from $p=0$ (square lattice Ising model --- SLIM) up to $p=1$ (totally reshuffled Ising model --- TRIM).
Binder's cumulant for $T_C$ evaluation is used to avoid finite size effect \cite{binder}.
As expected, at $p=0$ the SLIM exact value $T_C=2/\mathrm{arcsinh}(1)\approx 2.27\, [J/k_B]$ is recovered. The variation of the GRIM $T_C$ as function of $p$ is found to exhibit a non-linear behavior. A similar study of gradual reshuffling was performed for the Galam opinion model in the case of local groups of size four \cite{chopard1}.
The unifying Galam scheme \cite{unify} is then used to map the TRIM into a local rule model where update groups are of size five. It allows an analytical calculation of the critical temperature $T_C$. In contrast to the simulations, it is found to depend on the choice of the dynamics.
Metropolis and Glauber yield different values, respectively $T_C\approx 1.59\, [J/k_B]$ and $T_C\approx 3.09\, [J/k_B]$.
The latter reproduces almost exactly the Monte Carlo result at $p=1$, $T_C\approx 3.03\, [J/k_B]$.
The simplest realization \cite{solomon} of the Solomon network (SN) \cite{erez} is noted to reproduce TRIM results.
Similarly to the critical temperature, critical exponents are found to differ from both, the SLIM case and the mean field values. Concluding remarks with respect to future work end the paper.
\section{The gradually reshuffled Ising model}
We consider the nearest neighbor Ising model \cite{ising} on a square lattice
with ferromagnetic interactions
\begin{equation}
\label{eq-ham}
{\cal H}= - \dfrac{1}{2} \sum_{i,j} J_{ij} S_i S_j,
\end{equation}
where $S_i = \pm 1$ is the Ising spin variable at each node $i$ of the square lattice and
\[
J_{ij}=\begin{cases}
J>0 & \text{if $i$ and $j$ are neighbors,} \\
0 & \text{otherwise.}
\end{cases}
\]
is the short-range ferromagnetic exchange integral.
Monte Carlo simulations are performed using either a Glauber or a Metropolis
dynamics. In the Glauber dynamics, at every time step all spins on the lattice are
visited in typewriter fashion, i.e. spin-by-spin: from left to right and
then from top to bottom. For each spin $i$ in an initial configuration $\mu_i$,
a new configuration $\eta_i$ resulting from the single spin flip $S_i\to -S_i$
is accepted with a probability
\begin{equation}
\label{proba-g}
p^G_{\mu_i \to \eta_i}=\dfrac{\exp(-E_{\eta_i}/k_BT)}{\exp(-E_{\mu_i}/k_BT)+\exp(-E_{\eta_i}/k_BT)},
\end{equation}
where $E_{\eta_i}$ is the energy of configuration $\eta_i$, $E_{\mu_i}=-E_{\eta_i}$ is the energy of configuration $\mu_i$ and $k_B$ is the Boltzmann constant. When all $N$ spins are
investigated one Monte Carlo step (MCS) is completed. In Metropolis scheme,
the acceptance probability of the new configuration may be expressed in a
simple form
\begin{equation}
\label{proba-m}
p^M_{\mu_i \to \eta_i}=\min\{ 1, \exp [ -(E_{\eta_i}-E_{\mu_i})/k_BT ] \}.
\end{equation}
But here, at each MCS, all the spins are visited and updated in random order (a random list of
spin labels ensures that each spin is reached exactly once).
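As an illustration, the single-spin-flip sweep described above can be sketched in Python (a minimal sketch with our own variable names; $J=k_B=1$, so the heat-bath probability is that of Eq. \eqref{proba-g}):

```python
import math
import random

def glauber_step(S, L, T):
    # One MCS: visit all L*L spins in typewriter order and apply the
    # heat-bath (Glauber) acceptance probability of Eq. (proba-g) to
    # each single spin flip.
    for i in range(L):
        for j in range(L):
            # local field: sum of the four nearest neighbors (periodic b.c.)
            h = (S[(i - 1) % L][j] + S[(i + 1) % L][j]
                 + S[i][(j - 1) % L] + S[i][(j + 1) % L])
            E_new = h * S[i][j]  # energy of the flipped configuration (J = 1)
            p_flip = math.exp(-E_new / T) / (math.exp(E_new / T)
                                             + math.exp(-E_new / T))
            if random.random() < p_flip:
                S[i][j] = -S[i][j]
```

For the Metropolis scheme one would instead visit the spins in random order and accept the flip with the probability $\min\{ 1, \exp [ -(E_{\eta_i}-E_{\mu_i})/k_BT ] \}$ of Eq. \eqref{proba-m}.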
The value of the critical temperature $T_C$ is extracted from the evaluation of the order parameter $m=\sum_i S_i/N$ as a function of $T$. In a second step, to get a more precise estimate of $T_C$ in the thermodynamic limit, we calculate Binder's fourth cumulant of the order parameter defined by
\begin{equation}
U\equiv 1-\frac{ \langle m^4 \rangle }{ 3 \langle m^2 \rangle ^2 },
\label{eq-bc}
\end{equation}
where $\langle\cdots\rangle$ represent the thermal average (taken over 400000
MCS after discarding 100000 MCS for equilibration at each temperature). This
cumulant should go to $2/3$ below $T_C$ and to zero above $T_C$ when the size
increases, and the finite-size estimates are expected to cross around $T_C$
\cite{binder}.
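In practice $U$ is estimated from the recorded time series of the order parameter; a minimal Python sketch of Eq. \eqref{eq-bc} (the function name is ours):

```python
def binder_cumulant(m_samples):
    # Eq. (eq-bc): U = 1 - <m^4> / (3 <m^2>^2), the thermal averages being
    # taken over the recorded magnetization samples
    n = len(m_samples)
    m2 = sum(m ** 2 for m in m_samples) / n
    m4 = sum(m ** 4 for m in m_samples) / n
    return 1.0 - m4 / (3.0 * m2 ** 2)
```

Deep in the ordered phase, where $|m|$ is essentially constant, this gives $U=2/3$; for a symmetric Gaussian distribution of $m$ (high-temperature limit) one has $\langle m^4\rangle=3\langle m^2\rangle^2$ and $U\to0$.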
Our Monte Carlo simulations have been performed on square lattices of size
$50\le L\le 500$ assuming semi-periodic (helical) or periodic boundary conditions.
For a given temperature $T$, the simulation starts with $m=1$, i.e., with all spins $S_i=1$.
In every MCS, before updating all spins, the reshuffling procedure takes
place with probability $p$: a random permutation of all spin labels is
generated and the spin positions are rearranged according to the new
label order.
With probability $1-p$, all spins keep their current position.
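The reshuffling step itself can be sketched as follows (a minimal illustrative implementation in Python; names are ours):

```python
import random

def reshuffle(S, p):
    # With probability p, draw a random permutation of the spin labels and
    # rearrange the spin positions accordingly; interactions then act
    # between nearest neighbors of the *new* arrangement.
    if random.random() < p:
        flat = [s for row in S for s in row]
        random.shuffle(flat)  # random permutation of all spins
        L = len(S)
        for i in range(L):
            for j in range(L):
                S[i][j] = flat[i * L + j]
```

Note that the reshuffle conserves the magnetization; only the positions (and hence the neighborhoods) of the spins change.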
\section{The results}
In Fig. \ref{fig-1} $m(T)$ for size $L=500$ and different reshuffling probabilities $p$ are shown.
Also included is $m(T)$ for the SN, taken from Ref. \cite{solomon} for a million spins.
The latter agrees very well with the reshuffling case for $p=1$ (TRIM).
In the SN a random permutation of spin labels is created and each spin interacts with its two NN and two additional neighbors of its mirror in the reshuffled lattice.
In contrast to the present work, the reshuffling procedure takes place only once, before the simulation starts, and the spin neighborhood is fixed during the simulation.
The spin dynamics is governed via Eqs. \eqref{eq-ham}, \eqref{proba-g} and \eqref{proba-m}, but the links are directed:
spin $j$ being a neighbor of spin $i$ does not imply that $i$ is a neighbor of $j$.
\begin{figure}
\includegraphics[width=.9\textwidth]{sousa-1}
\caption{Magnetization $m(T)$ dependence for $L=500$ obtained with Glauber dynamics.
For the SN $N=10^6$ spins were simulated.}
\label{fig-1}
\end{figure}
Decreasing reshuffling probability $p$ shifts $m(T)$ towards SLIM with $T_C=2/\mathrm{arcsinh}(1)\approx 2.27\, [J/k_B]$ \cite{onsager}.
The critical temperature $T_C(p)$ is found to be a nonlinear function of the reshuffling parameter $p$.
In the range $p>0.1$ it may be approximated with the logarithmic law $T_C=0.19\ln p+3.01$ (see Fig. \ref{fig-3}).
Examples of the order parameter time evolution $m(T)$ at fixed temperature $T=2.5\, [J/k_B]$ are shown in Fig. \ref{fig-2} for $p=0$ (SLIM) and $p=1$ (TRIM).
The curves show that $m(T)$ vanishes for $p=0$ as expected from its $T_C=2.27<2.5\, [J/k_B]$.
Conversely, for $p=1$ we have $T_C=3.03>2.5\, [J/k_B]$ and $m(T)$ stabilizes at a nonzero value, as it should in the associated ordered phase.
\begin{figure}
\includegraphics[width=.9\textwidth]{sousa-3}
\caption{Dependence of Curie temperature $T_C(p)=0.19\ln p+3.01$ on the reshuffling probability $p$ given via cross-point of $U(T)$ curves for different system sizes $50\le L\le 300$.}
\label{fig-3}
\end{figure}
\begin{figure}
\includegraphics[width=.9\textwidth]{sousa-2}
\caption{Time evolution of the magnetization $m(T)$ obtained with Glauber dynamics at $T=2.5\, [J/k_B]$ for $p=0$ and $p=1$.
Associated $T_C$ are respectively $T_C=2.27\, [J/k_B]$ and $T_C=3.03\, [J/k_B]$.
Simulations are carried out for $N=10^6$ spins.
}
\label{fig-2}
\end{figure}
In Fig. \ref{fig-4} Binder's cumulant $U$ dependence on the temperature $T$ for different system sizes $L$ and reshuffling probability $p$ is presented.
The common crossing point of these curves predicts the Curie temperature $T_C$.
\begin{figure}
\includegraphics[width=.9\textwidth]{sousa-4a}\\
\includegraphics[width=.9\textwidth]{sousa-4b}
\caption{Binder's cumulant $U$ dependence on the temperature $T$ for different system sizes $L$ and reshuffling probability (a) $p=1/2$ and (b) $p=1$.}
\label{fig-4}
\end{figure}
In Fig. \ref{fig-5} the critical exponents $\beta$ --- which describe the magnetization behavior in the vicinity of $T_C$ --- are presented.
For $p=0$ and $p=1$ these exponents are respectively $\beta_{\text{SLIM}}=0.11$ and $\beta_{\text{TRIM}}=0.31$.
The exact value at $p=0$ is $\beta^{\text{th}}_{\text{SLIM}}=1/8$ \cite{onsager,krytyczne} while the mean field value is $\beta^{\text{th}}_{\text{MF}}=1/2$ \cite{krytyczne}.
From $\beta_{\text{TRIM}}=0.31 \neq 1/2=\beta^{\text{th}}_{\text{MF}}$, we can conclude that TRIM, and thus Galam reshuffled models, are not mean field models, since otherwise the associated critical exponents would have been mean field exponents, as it must be for any mean field approximation.
\begin{figure}
\includegraphics[width=.9\textwidth]{sousa-5}
\caption{Critical exponents $\beta$ --- which describe the magnetization behavior in the vicinity of $T_C$ --- for $p=0$ and $p=1$ are $\beta_{\text{SLIM}}=0.11$ and $\beta_{\text{TRIM}}=0.31$, respectively.
$N=10^6$ spins were simulated.
$N_{\text{MCS}}=2\cdot 10^4$ and $\langle m\rangle$ is averaged over last $10^4$ [MCS].
}
\label{fig-5}
\end{figure}
\section{The Galam unifying scheme}
We now go one step further in the investigation of the validity of
the Galam unifying scheme \cite{unify}. It is a general sequential
framework which operates via local updates of small groups of
randomly selected spins. Given one group, a specific probabilistic
majority rule is then applied to each particular configuration. It is
a function of the local value of the order parameter, i.e., the ratio of
majority to minority.
Applying this scheme to the square NN Ising model investigated above
and following the Monte Carlo procedure, which considers one spin and
its four NN, leads to local groups of a single size, five spins. On
this basis, we define $g_k$ as the probability that a group of five
spins with $k$ plus and $(5-k)$ minus ends up with five plus.
Simultaneously the probability to get five minus is $(1-g_k)$. With
five spins, the number $k$ of plus spins can vary from $5$ down to $0$
producing six different coefficients $g_k$. But from up-down symmetry this
number reduces to three with $g_{0}=1-g_{5}$, $g_{1}=1-g_{4}$ and
$g_{2}=1-g_{3}$. On this basis, if $p(t)$ is the proportion of plus
spins at time $t$, we obtain for the new proportion $p(t+1)$ after one
update cycle
\begin{equation}
\label{eq-ptp1}
p(t+1)=\sum_{k=0}^5 {5 \choose k} g_k [p(t)]^k [1-p(t)]^{5-k},
\end{equation}
which is a binomial with ${5 \choose k}=\frac{5!}{k!\,(5-k)!}$.
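Note that the up-down symmetry relations $g_{5-k}=1-g_k$ alone imply that $p=1/2$ is always a fixed point of Eq. \eqref{eq-ptp1}, whatever the coefficients. This can be checked with a short Python sketch (names and numerical values are ours, chosen only for illustration):

```python
from math import comb

def next_p(p, g):
    # Eq. (eq-ptp1): new proportion of plus spins after one update cycle;
    # g = [g_0, ..., g_5]
    return sum(comb(5, k) * g[k] * p ** k * (1 - p) ** (5 - k)
               for k in range(6))

# arbitrary coefficients respecting the up-down symmetry g_{5-k} = 1 - g_k
g3, g4, g5 = 0.62, 0.81, 0.96
g = [1 - g5, 1 - g4, 1 - g3, g3, g4, g5]
half = next_p(0.5, g)  # equals 1/2 (up to rounding), whatever g3, g4, g5
```

Indeed, pairing the terms $k$ and $5-k$ at $p=1/2$ gives $\sum_k {5\choose k}g_k = 16$, hence $p(t+1)=16/32=1/2$.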
To evaluate the coefficients $g_{i}$ in Eq. \eqref{eq-ptp1} we enumerate all possible $2^5=32$ configurations associated to a local group of size five and regroup them by their respective number of plus spins.
The corresponding energies $E_{\mu_i}$ are calculated, as well as the energies $E_{\eta_i}$ obtained once the central spin has been flipped.
As described in Sect. 2 the central spin flip is accepted with probability given by either Eq. \eqref{proba-g} or \eqref{proba-m} depending on chosen dynamics, Glauber or Metropolis.
However, here this probability is weighted by the plus spin proportion \cite{unify}.
We now illustrate the scheme in one case with four plus spins ($S_i=+1$) and one minus ($S_i=-1$).
Being on a square lattice, either the minus spin is at the center or not.
One configuration corresponds to the first case,
\[
\left(\begin{array}{ccc} & + & \\+ & - & + \\ & + & \end{array}\right)\rightarrow
\left(\begin{array}{ccc} & + & \\+ & + & + \\ & +& \end{array}\right),
\]
and for the second case, there exist four configurations of the type
\[
\left(\begin{array}{ccc} & + & \\+ & + & + \\ & - & \end{array}\right)\rightarrow
\left(\begin{array}{ccc} & + & \\+ & - & + \\ & - & \end{array}\right).
\]
where the minus spin can rotate on each one of the four NN sites.
Using Glauber dynamics, in the first case with one configuration, the flip is accepted with probability $a$, weighted by the proportion one of plus spins.
In contrast, the initial configuration is conserved with probability $(1-a)$, weighted by $4/5$, the proportion of plus spins.
In the second case, the probability for a flip is $b$, weighted by the proportion $3/5$ of plus spins, while for no flip we have respectively $(1-b)$ and $4/5$, both values being multiplied by four.
From Eq. \eqref{proba-g} we have
\begin{equation}
a=\dfrac{ \exp(4J/k_BT) }{ \exp(4J/k_BT)+\exp(-4J/k_BT) },
b=\dfrac{ \exp(-2J/k_BT) }{ \exp(2J/k_BT)+\exp(-2J/k_BT) }.
\end{equation}
Combining the above results yields for the coefficient $g_5$ in Eq. \eqref{eq-ptp1},
\begin{equation}
g_5= \frac{4+a}{5}.
\end{equation}
Performing the same evaluation for the other configurations, we get the coefficients $\{g_i\}$ with
\begin{equation}
g_4= \frac{20+a-4b}{25},
\qquad
g_3= \frac{31-4b}{50}.
\end{equation}
In parallel, using Metropolis dynamics from Eq. \eqref{proba-m} the
first configuration leads to a flip with probability one ($\min[1,\exp(8J/k_BT)]=1$), and the second one is performed with probability
\begin{equation}
c=\exp{(-4J/k_BT)}.
\end{equation}
The corresponding coefficients $\{g_i\}$ become
\begin{equation}
g_5= \frac{5-d}{5},
\qquad
g_4= \frac{21-4c}{25}
\qquad
\text{and}
\qquad
g_3= \frac{28}{50},
\end{equation}
where $d=\exp(-8J/k_BT)$
is a probability of the configuration change
\[
\left(\begin{array}{ccc} & + & \\+ & + & + \\ & + & \end{array}\right)\rightarrow
\left(\begin{array}{ccc} & + & \\+ & - & + \\ & +& \end{array}\right)
\]
given by Metropolis dynamics.
At this stage, we can calculate the unstable fixed points from Eq. \eqref{eq-ptp1} to get the corresponding critical temperature, respectively $T_C=3.09\, [J/k_B]$ for Glauber, and $T_C=1.59\, [J/k_B]$ for Metropolis. The Glauber result is rather close to the numerical finding $T_C=3.03$ $[J/k_B]$, yet a bit different. At present we do not understand why. In contrast, the simulation results are invariant under the choice of the dynamics. It is worth noting that a value of $T_C$ a little larger than 3 $[J/k_B]$ was obtained previously by Malarz from a Monte Carlo simulation of SN \cite{solomon}.
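These values can be checked independently from the stability of the symmetric fixed point: by the up-down symmetry $p=1/2$ is a fixed point of Eq. \eqref{eq-ptp1} at any temperature, and the transition occurs where the slope of the map at $p=1/2$ crosses unity. A Python sketch locating this temperature by bisection (function names are ours; $J=k_B=1$):

```python
from math import comb, exp

def glauber_g(T):
    # coefficients a, b and g_5, g_4, g_3 of the Glauber case
    a = exp(4 / T) / (exp(4 / T) + exp(-4 / T))
    b = exp(-2 / T) / (exp(2 / T) + exp(-2 / T))
    g5, g4, g3 = (4 + a) / 5, (20 + a - 4 * b) / 25, (31 - 4 * b) / 50
    return [1 - g5, 1 - g4, 1 - g3, g3, g4, g5]

def metropolis_g(T):
    # coefficients c, d and g_5, g_4, g_3 of the Metropolis case
    c, d = exp(-4 / T), exp(-8 / T)
    g5, g4, g3 = (5 - d) / 5, (21 - 4 * c) / 25, 28 / 50
    return [1 - g5, 1 - g4, 1 - g3, g3, g4, g5]

def next_p(p, g):
    # Eq. (eq-ptp1)
    return sum(comb(5, k) * g[k] * p ** k * (1 - p) ** (5 - k)
               for k in range(6))

def critical_T(gfun, lo=0.5, hi=10.0, h=1e-6):
    # bisect for the temperature where the slope of the map at p = 1/2
    # equals 1 (below T_C the slope exceeds 1 and p = 1/2 is unstable)
    for _ in range(60):
        T = 0.5 * (lo + hi)
        slope = (next_p(0.5 + h, gfun(T))
                 - next_p(0.5 - h, gfun(T))) / (2 * h)
        lo, hi = (T, hi) if slope > 1.0 else (lo, T)
    return 0.5 * (lo + hi)
```

With the Glauber coefficients this gives $T_C\approx3.09\,[J/k_B]$ and with the Metropolis ones $T_C\approx1.59\,[J/k_B]$, in agreement with the fixed-point analysis above.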
\section{ Discussion}
In this work, we have studied the effect of gradual spin reshuffling on the phase diagram of the square-lattice Ising model. Performing a series of Monte Carlo simulations, we have shown that Galam reshuffling is neither a mean-field treatment nor the usual Ising model (SLIM). In particular it yields non-mean-field exponents.
On this basis, new physical questions arise. First, does reshuffling create a new universality class for the Ising model? At which value of the reshuffling parameter $p$ does the crossover occur? Beyond the theoretical interest and the sociophysics consequences, it would also be worthwhile to identify a physical situation which could correspond to reshuffling.
\bigskip
\noindent
{\bf Acknowledgments.}
The authors are grateful to Dietrich Stauffer for many stimulating discussions and permanent criticism of their ideas, claims and results.
Part of numerical calculations was carried out in ACK\---CY\-F\-RO\-NET\---AGH.
The machine time on HP Integrity Superdome is financed by the Polish Ministry of Science and Information Technology under grant No. MNiI/\-HP\_I\_SD/\-AGH/047/\-2004.
\section{Introduction}
Since the first observation in 1983 of the creation of a {\em Raman spike} in the
pump depletion zone by Dr\"uhl, Wenzel and Carlsten \cite{druwen}, there has been
a large number of studies of stimulated Raman scattering (SRS) of long laser
pulses in gas.
The reason is that the Raman spike has been shown to be the macroscopic
manifestation of large fluctuations of the phase of the initial Stokes
wave. Raman spike generation was then predicted \cite{engbow},
given a coherent-mode description \cite{rayliwam}, and
experiments have been performed where SRS grows spontaneously on initial
fluctuations \cite{macswa2}. The Raman spike hence appears as a means to study
the quantum properties of the Stokes wave initiation, which gives
information on the {\em phase of the electromagnetic vacuum}.
This comes in addition to the process of Stokes growth which amplifies the
quantum fluctuations of the medium. A general discussion of the {\em quantum
coherence properties of SRS} is given in \cite{raywal}.
As the SRS equations possess a Lax pair \cite{chusco}, there has been
many attemps to modelize the Raman spike as a soliton (see e.g.
\cite{soliton1}\cite{soliton2}). But it has been proved recently that actually
the Raman spike occurs in the spectral transform theory as a manifestation
of the continuous spectrum (hence it is not a soliton) when
for a short period of time the {\em reflection coefficient} becomes close to zero
\cite{leon-prl}. These results were obtained by solving the initial-boundary value
problem for the SRS equations on the infinite line, which, from a physical point
of view, is inconsistent with a finite dephasing time of the medium oscillators.
However the results obtained are strikingly close to the experimental data
\cite{leon-pra}. Here we construct a solution of SRS on the finite interval by
using the spectral transform on the semi-line, first proposed in the context of
{\em nonlinear polarization dynamics} \cite{leon-sasha}.
The interaction of light with a material medium, in the case when
a laser pump pulse (frequency $\omega_L$, envelope $A_L$)
interacts with the optical phonons (eigenfrequency $\omega_V$, envelope $Q$)
to give rise to a down-shifted laser pulse
(Stokes emission, frequency $\omega_S$, envelope $A_S$) according to the selection
rules ($\Delta\omega$ is the detuning
from the Raman resonance, $\Delta k$ is the phase mismatch)
\begin{equation}
\omega_S=\omega_L-\omega_V-\Delta\omega,\quad
k_S=k_L-k_V-\Delta k
\label{selec-rules}\end{equation}
can be modelized by the following {\em slowly varying envelope approximation}
\cite{yariv}
\begin{eqnarray}
({\partial\over\partial Z}+{\eta\over c}{\partial\over\partial T}) A_L
&=&\frac{i}{4}{N\alpha'_0\over\eta c}\sqrt{\omega_S\omega_L}\ Q A_S\
\exp[i(\Delta k Z-\Delta\omega T)],\nonumber\\
({\partial\over\partial Z}+{\eta\over c}{\partial\over\partial T}) A_S
&=&\frac{i}{4}{N\alpha'_0\over\eta c}\sqrt{\omega_S\omega_L}\ Q^*\ A_L\
\exp[-i(\Delta kZ-\Delta\omega T)],
\label{base-physique}\\
{\partial\over\partial T}Q+\frac{1}{T_2}Q
&=&\frac{i}{4}{\epsilon_0\alpha'_0\over
m\omega_V}\sqrt{\frac{\omega_S}{\omega_L}}\ A_L A_S^*\
\exp[-i(\Delta kZ-\Delta\omega T)].
\nonumber\end{eqnarray}
The electromagnetic wave field $E(Z,T)$
and the material excitation $\tilde X(Z,T)$ are obtained as
\begin{equation}
E(Z,T)=\frac12
A_L\ \exp[i(k_LZ-\omega_LT)]+\frac12\sqrt{\frac{\omega_S}{\omega_L}}
A_S\ \exp[i(k_SZ-\omega_ST)]+c.c.
\label{champ-E}\end{equation}
\begin{equation}
\tilde X(Z,T)=\frac12 Q\ \exp[i(k_VZ-\omega_VT)]+c.c.
\label{champ-X}\end{equation}
Hereabove $\alpha'_0$ is the differential
polarizability at equilibrium, $c/\eta$ is the light velocity in the medium,
$N$ is the density of oscillators of mass $m$,
$T_2$ is the relaxation time of the medium.
For a medium initially in the ground state
\begin{equation}
Q(Z,0)=0,
\label{init-medium}\end{equation}
and for an arbitrary set of input pump and Stokes envelope profiles
(for any value of the missmatch $\Delta k$)
\begin{equation}
A_L(0,T)=I_L(T-\frac\eta cZ),\quad A_S(0,T)=I_S(T-\frac\eta cZ),
\label{bound-laser}\end{equation}
we give here the output values of both light wave profiles
$A_L(L,T)$ and $A_S(L,T)$ in terms of the solution of a Cauchy-Green linear
integral equation.
The solution, constructed by the inverse spectral transform
theory (IST), is actually exact for {\em infinite relaxation times},
and we have proposed an approximate
solution for finite $T_2$ which matches the experiments with high accuracy
\cite{leon-pra}.
The method applies also for a non-vanishing initial state of the medium but the
solution in this case is in general not explicit (this is relevant in
physical situations where the quantum fluctuations of the initial population
density of the two-level medium are taken into account).
The fact that IST can be applied to SRS on the finite interval has been first
proposed by Kaup in 1983 \cite{soliton2}. However the evolution of the spectral
data given there does not correspond to the boundary problem (\ref{bound-laser})
and in particular, as this evolution is {\em homogeneous}, it does not allow for
the growth of the Stokes seed on a medium initially at rest. In a different
context, a {\em nonhomogeneous} evolution of the spectral data has been obtained
in \cite{gabzak} where the self-induced transparency equations are solved for an
arbitrary {\em initial} ($t=-\infty$) population density of the two-level
medium. IST has been later used to solve an initial-boundary value problem on
the half-line for the nonlinear Schr\"odinger equation (NLS) by Fokas and Its
\cite{fokas}, but in this case the required boundary data in $x=0$
for the potential itself renders {\em nonlinear} the evolution of the spectral
transform.
The property of SRS of being solvable on the finite interval results simply from
the nature of the equations for which the
initial-boundary value problem (\ref{init-medium})(\ref{bound-laser}) is well
posed and does not require new constraints when it is given on the finite
interval (this is not so for NLS for which the vanishing boundary values
at infinity become some prescribed boundary value in $x=0$). Consequently the
method applies for every other case of solvable evolutions with nonanalytic
dispersion relations when precisely passing to the finite interval does not
imply adding information or constraint. However we will discover that there are
requirements of {\em analyticity of the boundary data} in order for the problem
to be integrable.
Before going to the method of solution, it is convenient to rescale the system
(\ref{base-physique}) into a {\em dimensionless} system, which
amounts to defining the new variables
\begin{equation}
x= \frac1L Z,\quad t=\frac cL(T-\frac\eta c Z).
\label{coordi}\end{equation}
Then the dimensionless rescaled material excitation is defined as
(the differential polarizability $\alpha'_0$ has the dimension of a surface)
\begin{equation}
q(x,t)=\frac i4\ \frac{N\alpha_0'}{\eta c}\sqrt{\omega_L\omega_S}\ LQ(Z,T),
\label{rescale-Q}\end{equation}
while $A_L$ ans $A_S$ are rescaled by using the boundary conditions as
\begin{equation}
a_L(k,x,t)=\frac{A_L(\Delta k,Z,T)}{I_m},\quad
a_S(k,x,t)=\frac{A_S(\Delta k,Z,T)}{I_m},\quad
I_m=\displaystyle{\mathop{\mbox{max}}_{T>0}} |I_L|,
\label{rescale-E}\end{equation}
which scales to 1 the incident laser pump amplitude.
The dependence on the phase mismatch is represented here by the
dimensionless wave number
\begin{equation}
k=\frac12 L(\Delta k-\frac\eta c\Delta\omega),
\label{essential}\end{equation}
which we shall refer to as the {\em essential mismatch parameter}.
Finally, for
\begin{equation}
g_0=\frac{L^2}{16}{N\alpha'_0\epsilon_0\over\eta mc^2}
{\omega_S\over\omega_V}\ I_m^2,
\label{qdyn}\end{equation}
(dimensionless),
the system (\ref{base-physique}) becomes for infinite dephasing time
\begin{equation}
\partial_x a_L=q a_S\ e^{-i\Delta\omega t}\ e^{2ikx},
\quad
\partial_x a_S=-q^* a_L\ e^{i\Delta\omega t}\ e^{-2ikx},
\label{maxsimp}\end{equation}
\begin{equation}
q_t= - g_0 a_L a_S^*\ e^{i\Delta\omega t}\
e^{-2ikx}.\label{qinteq-nu}\end{equation}
Due to the dependence of the field envelopes $a_L$ and $a_S$ on the essential
mismatch $k$, it is necessary to consider the cooperative interaction of all
$k$-components with the medium.
Then we need actually to consider the medium excitation as resulting from
all $k$ components and replace (\ref{qinteq-nu}) with
\begin{equation}
q_t= - g_0\int dk a_L a_S^*\ e^{i\Delta\omega t}\
e^{-2ikx},
\label{qinteq}\end{equation}
where now the input ($x=0$) values of the pump wave $a_L$ and the Stokes
wave $a_S$ are also functions of $k$, sharply distributed
around $k=0$ (the resonance), which we denote by
\begin{equation}
a_L(k,0,t)=J_L(k,t),\quad a_S(k,0,t)=J_S(k,t).
\label{in-A}\end{equation}
We will demonstrate that the system (\ref{maxsimp})(\ref{qinteq}), with the
initial data $q(x,0)$ in $L^1$ and boundary values (\ref{in-A}), where
$J_L(k,t)$ (resp. $J_S(k,t)$) has an analytic continuation in ${\rm Im}(k)>0$
(resp. ${\rm Im}(k)<0$) vanishing as $|k|\to\infty$, is integrable on the
semi-line $x>0$. Moreover the solution furnishes the output values of the pump
and Stokes waves in $Z=L$ (i.e. $x=1$), which gives the solution of the SRS
equations on the finite interval.
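A simple admissible choice of boundary data, given here only for illustration, is the Lorentzian-type profile
\[
J_L(k,t)=\frac{i\kappa}{k+i\kappa}\,I_L(t),\qquad
J_S(k,t)=\frac{-i\kappa}{k-i\kappa}\,I_S(t),\qquad \kappa>0,
\]
which is sharply peaked around the resonance $k=0$ for small $\kappa$, reduces to $I_L(t)$ and $I_S(t)$ at $k=0$, and has the required analytic continuation ($J_L$ has its only pole at $k=-i\kappa$ in the lower half plane, $J_S$ at $k=+i\kappa$ in the upper one), vanishing as $|k|\to\infty$.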
\setcounter{equation}{0}
\section{The spectral problem on the semi-line}
\paragraph*{Basic definitions.}
We briefly give the basic notions on the
Zakharov-Shabat spectral problem on the semi-line
for the $2\times2$ matrix $ \nu(k,x,t)$ in the potentials $q(x,t)$ and $r(x,t)$
\begin{equation}
\nu_x+ik[\sigma_3,\nu]=\left(\matrix{0&q \cr r&0}\right)\nu,
\quad x\ge0,\quad t\ge0.
\label{zs}\end{equation}
The solution $\nu$ can be verified to obey
\begin{equation}
{\partial\over\partial x}\det\{\nu\}=0.
\label{det-nu}\end{equation}
Two fundamental solutions, say $\nu^\pm$, are
defined by
\begin{equation}\left(\matrix{\nu_{11}^+ \cr\nu_{21}^+ \cr}\right)=
\left(\matrix{1\cr0\cr}\right)+
\left(\matrix{\int_{0}^{x}dx' q \nu_{21}^+ \cr
\int_{0}^{x}dx' r \nu_{11}^+ e^{2ik(x-x')}\cr}
\right) \label{int-1}\end{equation}
\begin{equation}\left(\matrix{\nu_{12}^+ \cr\nu_{22}^+ \cr}\right)=
\left(\matrix{0\cr1\cr}\right)+
\left(\matrix{-\int_{x}^{\infty}dx' q \nu_{22}^+
e^{-2ik(x-x')}\cr
\int_{0}^{x}dx' r \nu_{12}^+ \cr}
\right) \label{int-2}\end{equation}
\begin{equation}\left(\matrix{\nu_{11}^- \cr\nu_{21}^- \cr}\right)=
\left(\matrix{1\cr0\cr}\right)+
\left(\matrix{\int_{0}^{x}dx' q \nu_{21}^- \cr
-\int_{x}^{\infty}dx' r \nu_{11}^- e^{2ik(x-x')}\cr}
\right) \label{int-3}\end{equation}
\begin{equation}\left(\matrix{\nu_{12}^- \cr\nu_{22}^- \cr}\right)=
\left(\matrix{0\cr1\cr}\right)+
\left(\matrix{\int_{0}^{x}dx' q \nu_{22}^-
e^{-2ik(x-x')}\cr
\int_{0}^{x}dx' r \nu_{12}^- \cr} \right)
\label{int-4}\end{equation}
\paragraph*{Riemann-Hilbert problem.}
In order to derive a well-posed Riemann-Hilbert
problem for $\nu(k)$, we require that $\nu(k)$ be bounded for
large $k$ and hence define it as
\begin{equation}
\nu(k,x,t)=\left\{\matrix{\nu^+(k,x,t), \quad {\mbox{Im}}(k)>0 \cr
\nu^-(k,x,t),\quad {\mbox{Im}}(k)<0}\right\}.
\label{nu-def}\end{equation}
Indeed it obeys
\begin{equation}
k\to\infty\quad:\quad\nu(k)\sim 1+{\cal O}({1\over k}),
\label{k-inf}\end{equation}
which is obtained from the integral equations by integration by parts.
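For instance, one integration by parts in (\ref{int-1}) gives for ${\rm Im}(k)\ge0$ (only the leading terms are sketched here)
\[
\nu^+_{21}=-\frac{1}{2ik}\left[r(x)\,\nu^+_{11}(x)-r(0)\,e^{2ikx}\right]
+{\cal O}(k^{-2}),\qquad
\nu^+_{11}=1-\frac{1}{2ik}\int_0^x dx'\,q(x')\,r(x')+{\cal O}(k^{-2}),
\]
where $|e^{2ikx}|\le1$ for ${\rm Im}(k)\ge0$, and similarly for the other columns.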
The scattering coefficients (functions of $k$ and parametrized by $t$)
are defined for $k$ real, as
\begin{equation}
\rho^+=-\int_0^\infty dx'\ q\nu_{22}^+ e^{2ikx'},\quad
\rho^-=-\int_0^\infty dx'\ r\nu_{11}^- e^{-2ikx'},
\label{rho+-}\end{equation}
\begin{equation}
\tau^+=1+\int_0^\infty dx'\ r\nu_{12}^+,\quad
\tau^-=1+\int_0^\infty dx'\ q\nu_{21}^-.
\label{tau+-}\end{equation}
Following standard methods (see e.g. \cite{leon-jmp}, especially the appendix),
one can prove from the above integral equations that the solution $\nu$ obeys,
on the real axis, the following Riemann-Hilbert problem
\begin{equation}
\nu_1^+ - \nu_1^- = -e^{2ikx}\rho^-\nu_2^-, \quad
\nu_2^+ - \nu_2^- = e^{-2ikx}\rho^+\nu_1^+.
\label{R-H}\end{equation}
The method consists simply in comparing e.g. the integral equations for
$(\nu_1^+ - \nu_1^-)e^{-2ikx}$ to the one for $\rho^-\nu_2^-$.
\paragraph*{Bound states.}
As the integrals run on the finite support $[0,x]$,
the solutions $\nu^+_1$ and $\nu^-_2$
of the Volterra integral equations (\ref{int-1}) and (\ref{int-4}) are analytic
(entire functions of $k$)
in the complex plane. The property (\ref{det-nu}) is used to compute the
determinants of $(\nu^+_1,\nu^+_2)$ and of $(\nu^-_1,\nu^-_2)$ both at $x=0$ and
$x=\infty$, which gives
\begin{equation}
\nu^+_{11}(k,\infty,t)={1\over\tau^+(k,t)},\quad
\nu^-_{22}(k,\infty,t)={1\over\tau^-(k,t)}.
\label{inv-tau}\end{equation}
Hence the quantity $1/\tau^+$ ($1/\tau^-$) is an entire function of $k$
and can have a number $N^+$
($N^-$) of zeroes $k_n^+$ ($k_n^-$) which, for bounded $r$ and $q$, are simple
and finite in number.
Consequently, the solutions $\nu^-_1$ and $\nu^+_2$ of the Fredholm integral
equations (\ref{int-3}) and (\ref{int-2}), which can be written
also
\begin{equation}\left(\matrix{\nu_{11}^- \cr\nu_{21}^- \cr}\right)=
\left(\matrix{1\cr0\cr}\right)\tau^- -
\left(\matrix{\int_{x}^{\infty}dx' q \nu_{21}^- \cr
\int_{x}^{\infty}dx' r \nu_{11}^- e^{2ik(x-x')}\cr}\right) ,
\label{int-3-bis}\end{equation}
\begin{equation}\left(\matrix{\nu_{12}^+ \cr\nu_{22}^+ \cr}\right)=
\left(\matrix{0\cr1\cr}\right)\tau^+ -
\left(\matrix{\int_{x}^{\infty}dx' q \nu_{22}^+e^{-2ik(x-x')}\cr
\int_{x}^{\infty}dx' r \nu_{12}^+ \cr}\right),
\label{int-2-bis}\end{equation}
have respectively the $N^-$ and $N^+$ simple poles $k_n^-$ and $k_n^+$
of $\tau^-$ and $\tau^+$. Then the integral equations for the residues allow one
to obtain
\begin{equation}
\displaystyle\mathop{\rm Res}_{k_n^-}\ \nu_1^-=
C_n^-\nu_2^-(k_n^-)\exp[2ik_n^-x],\quad
C_n^-=\displaystyle\mathop{\rm Res}_{k_n^-}\ \rho^-,
\label{apres2}\end{equation}
\begin{equation}
\displaystyle\mathop{\rm Res}_{k_n^+}\ \nu_2^+=
C_n^+\nu_1^+(k_n^+)\exp[-2ik_n^+x],\quad
C_n^+=\displaystyle\mathop{\rm Res}_{k_n^+}\ \rho^+.
\label{apres1}\end{equation}
As for the Riemann-Hilbert relations (\ref{R-H}), the method consists in comparing
the integral equations for $\exp[-2ik_n^-x]{\rm Res}\ \nu_1^-$ and for
$C_n^-\nu_2^-(k_n^-)$.
\paragraph*{Boundary behaviors.}
An important relation for the following is the so-called {\em unitarity}
relation
\begin{equation}
\tau^+\tau^-=1-\rho^+\rho^-,
\label{unitarity}\end{equation}
obtained by computing $\nu^-_{22}(k,\infty,t)$ by using (\ref{R-H}) and by
comparing the result with (\ref{inv-tau}).
Consequently the boundary behaviors of $\nu$ can be written
\begin{equation} x=0\quad :\quad
\nu^+=\left(\matrix{1 & \rho^+ \cr 0 & 1}\right),\quad
\nu^-=\left(\matrix{1 & 0 \cr \rho^- & 1}\right),
\label{bound-x=0}\end{equation}
\begin{equation}x=\infty\quad :\quad
\nu^+=\left(\matrix{1/\tau^+ & 0\cr -e^{2ikx}\rho^-/\tau^- & \tau^+}\right),
\quad
\nu^-=\left(\matrix{\tau^- & -e^{-2ikx}\rho^+/\tau^+ \cr 0 & 1/\tau^-}\right).
\label{bound-x=a}\end{equation}
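These boundary values also provide a direct numerical check of the relation (\ref{unitarity}) and of the symmetries that follow from $r=-\bar q$: integrate the entire columns $\nu_1^+$ and $\nu_2^-$ of the spectral problem across a compactly supported potential and read off $\rho^\pm$, $\tau^\pm$ at the edge of the support via (\ref{bound-x=0}) and (\ref{bound-x=a}). The sketch below is an illustration (box potential $q=c$ on $[0,1]$, amplitude chosen small enough that no bound states occur), not part of the original text.

```python
import numpy as np

c = 0.8   # illustrative box amplitude on [0,1], small enough that no bound states occur

def rk4(y, f, h, n):
    for _ in range(n):
        s1 = f(y); s2 = f(y + h/2*s1); s3 = f(y + h/2*s2); s4 = f(y + h*s3)
        y = y + h/6*(s1 + 2*s2 + 2*s3 + s4)
    return y

for k in (0.3, 1.1, 2.7):                          # real spectral parameters
    # column nu_1^+ :  nu11' = q nu21,  nu21' = 2ik nu21 + r nu11,  value (1,0) at x=0
    f1 = lambda y: np.array([c*y[1], 2j*k*y[1] - np.conj(c)*y[0]])
    # column nu_2^- :  nu12' = -2ik nu12 + q nu22,  nu22' = r nu12,  value (0,1) at x=0
    f2 = lambda z: np.array([-2j*k*z[0] + c*z[1], -np.conj(c)*z[0]])
    y = rk4(np.array([1.0, 0.0], dtype=complex), f1, 1/800, 800)
    z = rk4(np.array([0.0, 1.0], dtype=complex), f2, 1/800, 800)
    # read off the data at the support edge x=1, cf. (inv-tau) and (bound-x=a)
    taup, taum = 1/y[0], 1/z[1]
    rhop = -z[0]*np.exp(2j*k)/y[0]
    rhom = -y[1]*np.exp(-2j*k)/z[1]
    assert abs(taup*taum + rhop*rhom - 1) < 1e-8   # unitarity: tau+ tau- = 1 - rho+ rho-
    assert abs(rhop + np.conj(rhom)) < 1e-8        # rho+ = -(rho-)* for r = -conj(q), real k
    assert abs(taup - np.conj(taum)) < 1e-8        # tau+ = (tau-)*
```

The unitarity check is, for these data, equivalent to the constancy of $\det\nu$ in (\ref{det-nu}), so it holds to the accuracy of the integrator.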
\paragraph*{The $\bar\partial$ problem.}
The analytic properties of $\nu$ can be summarized in the following
formula
\begin{equation}
{\frac{\partial\nu}{\partial\bar k}}=\nu\ R \exp[2ik\sigma_3x],
\label{d-bar}\end{equation}
with the {\em spectral transform}
\begin{equation}
R={i\over2}
\left(\matrix{0 & \rho^+\delta^+ \cr
-\rho^-\delta^-& 0} \right)-
2i\pi\left(\matrix{0&\sum C_n^+ \delta(k-k_n^+)\cr
\sum C_n^- \delta(k-k_n^-) & 0} \right),
\label{R-str}\end{equation}
where the distributions $\delta^\pm$ and
$\delta(k-k_n)$ are defined as
\begin{eqnarray}
\int\!\!\!\int d\lambda \wedge d\bar{\lambda} f(\lambda)
\delta^{\pm}(\lambda_I) &&= - 2i \int^{+\infty}_{-\infty} d\lambda_R
f(\lambda_R \pm i0),\nonumber\\
\int\!\!\!\int d\lambda \wedge d\bar{\lambda} f(\lambda)\delta(\lambda-k_n)&&=
f(k_n)
\label{deltapm} \end{eqnarray}
with the notation $\lambda=\lambda_R+i\lambda_I$.
\setcounter{equation}{0}
\section{Inverse spectral problem}
\paragraph*{The Cauchy-Green integral equation.}
The inverse problem is solved by integrating the
$\bar\partial$-equation (\ref{d-bar}) with the boundary (\ref{k-inf}).
We prove here the following theorem:
the solution $f(k,x,t)$ of the Cauchy-Green integral equation
\begin{equation}
{f}(k,x,t)={\bf 1}+{1\over2i\pi}\int\!\!\!\int
\frac{d\lambda\wedge d\bar\lambda}{\lambda-k}
\;\;{f}(\lambda,x,t)R(\lambda,t)\exp[2i\lambda\sigma_3x]
\label{cauchy}\end{equation}
coincides with the solution $\nu(k,x,t)$ of (\ref{int-1})-(\ref{int-4}) if
the reflection coefficients possess meromorphic continuations
with simple poles $k_n^\pm$ and residues $C_n^\pm$, namely
\begin{equation}\label{rho-analytic}
\pm{\rm Im}(k)>0\ \Rightarrow\ {\partial\over\partial\bar k}\ \rho^\pm(k)=
2i\pi\sum C_n^\pm\ \delta(k-k_n^\pm).
\end{equation}
Such an analytical property of the spectral data occurs when the potentials have
compact support. Therefore we shall assume here that the {\em initial datum}
$q(x,0)$ indeed has compact support, and we will have to demonstrate that the
time evolution preserves this analytical property.
Actually, the physically interesting case is when $q(x,0)$ vanishes identically.
This implies the absence of bound states, which will be assumed in the following
(everything can be extended to the case with bound states,
as they appear as the poles $k_n^+$ of $\rho^+$ in
${\rm Im}(k)>0$, and those $k_n^-$ of $\rho^-$ in ${\rm Im}(k)<0$).
\paragraph*{Proof of the Theorem.}
The first step is to verify from (\ref{d-bar}) that the function
${f}$ is a solution of the differential problem (\ref{zs}). This is easily done
by following the method of \cite{leon-jmp}; in short: prove the relation
\begin{equation}
{\frac{\partial}{\partial\bar k}}
[({f}_x-ik{f}\sigma_3){f}^{-1}]=0
\label{bas-dbar-x}\end{equation}
and integrate it. Then the solution ${f}$ of (\ref{cauchy}) solves
the spectral problem (\ref{zs}) with
\begin{equation}
\left(\matrix{0&q \cr r&0\cr}\right)=i[\sigma_3,{f}^{(1)}]
\label{pot-nu}\end{equation}
where ${f}^{(1)}$ is the coefficient of $1/k$ in the Laurent expansion of
${f}(k)$.
The second step consists in proving
\begin{equation}
{f}_1^+(k,x,t)=\nu_1^+(k,x,t),\quad {f}_2^-(k,x,t)=\nu_2^-(k,x,t),
\label{first-analytic}\end{equation}
which is achieved simply by comparing the values of these vectors at $x=0$.
The functions ${f}^+$ and ${f}^-$ solve the following
coupled vectorial system (the $(x,t)$-dependence is suppressed for a while) on
the real axis ${\mbox{Im}}(k)=0$:
\begin{equation}
{f}_1^+(k)=\left(\matrix{1\cr0}\right)-{1\over2i\pi}\int
{d\lambda\over\lambda-(k+i0)}
\ \rho^-(\lambda){f}_2^-(\lambda)e^{2i\lambda x},
\label{cauchy-1+}\end{equation}
\begin{equation}
{f}_2^-(k)=\left(\matrix{0\cr1}\right)+{1\over2i\pi}\int
{d\lambda\over\lambda-(k-i0)}
\ \rho^+(\lambda){f}_1^+(\lambda)e^{-2i\lambda x}.
\label{cauchy-2-}\end{equation}
Now, by closing the contour of
integration in ${\rm Im}(\lambda)<0$ for ${f}_1^+$ and in
${\rm Im}(\lambda)>0$ for ${f}_2^-$, the Cauchy theorem together with the
analyticity requirement (\ref{rho-analytic}) leads to
\begin{equation}
\forall x<0\ ,\ {\rm Im}(k)=0\ :\
{f}_1^+(k,x,t)=\left(\matrix{1\cr0}\right),\quad
{f}_2^-(k,x,t)=\left(\matrix{0\cr1}\right).
\label{val-0-1}\end{equation}
Consequently, the two matrices $({f}_1^+,{f}_2^-)$ and $(\nu_1^+,\nu_2^-)$
solve the same first-order differential problem and have the same values at
$x=0$, so (\ref{first-analytic}) is proved.
The third step results in proving
\begin{equation}\label{second-analytic}
{f}_1^-(k,x,t)=\nu_1^-(k,x,t),\quad
{f}_2^+(k,x,t)=\nu_2^+(k,x,t),
\end{equation}
which is performed by evaluating the functions
\begin{eqnarray}\label{f-infty}
&&{f}_{12}^+={1\over2i\pi}\int{d\lambda\over\lambda-(k+i0)}\
\rho^+{f}_{11}^+e^{-2i\lambda x}\nonumber\\
&&{f}_{21}^-=-{1\over2i\pi}\int{d\lambda\over\lambda-(k-i0)}\
\rho^-{f}_{22}^-e^{2i\lambda x}
\end{eqnarray}
as $x\to\infty$. A useful formula is
\begin{equation}\label{lim-distr}
\lim_{x\rightarrow \pm\infty }P\!\!\!
\int\frac{d\lambda}{\lambda-k}\ e^{i(\lambda-k)x}f(\lambda)
=\pm i\pi f(k),
\end{equation}
where $P\!\!\int$ denotes the Cauchy principal value integral, and the
Sokhotski-Plemelj formula
\begin{equation}\label{sokhotski}
\int\frac{d\lambda}{\lambda-(k\pm i0)}\ f(\lambda)
=\pm i\pi f(k)+P\!\!\!\int\frac{d\lambda}{\lambda-k}\ f(\lambda).
\end{equation}
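Both (\ref{lim-distr}) and (\ref{sokhotski}) can be checked numerically on a smooth test function; the Gaussian, the regularisation $\epsilon$ and the grid below are illustrative choices (the principal value is approximated on a grid placed symmetrically about $k$).

```python
import numpy as np

k0 = 0.5
f = lambda lam: np.exp(-lam**2)          # smooth test function

# Grid placed symmetrically about k0, so the odd singular part of 1/(lam-k0)
# cancels in pairs: a simple principal-value quadrature.
h = 5e-4
lam = k0 + (np.arange(-40000, 40000) + 0.5) * h     # covers k0 +/- 20
pv = np.sum(f(lam) / (lam - k0)) * h                # P int d lam f/(lam - k0)

# Sokhotski-Plemelj:  int d lam f/(lam - (k0 + i0)) = +i pi f(k0) + P int ...
eps = 0.005
full = np.sum(f(lam) / (lam - k0 - 1j * eps)) * h
assert abs(full - (1j * np.pi * f(k0) + pv)) < 0.05

# Large-x limit (lim-distr):  P int d lam e^{i(lam-k0)x} f/(lam-k0) -> +i pi f(k0)
x = 200.0
osc = np.sum(np.exp(1j * (lam - k0) * x) * f(lam) / (lam - k0)) * h
assert abs(osc - 1j * np.pi * f(k0)) < 0.05
```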
Since it has already been proved that ${f}_1^+=\nu_1^+$ and
${f}_2^-=\nu_2^-$, the functions ${f}_{11}^+$ and ${f}_{22}^-$ have, from
(\ref{int-2}) and (\ref{int-3}), bounded behaviors as $x\to\infty$. Consequently
\begin{equation}\label{val-infini-1}
\lim_{x\to+\infty}{f}_{12}^+=0,\quad
\lim_{x\to+\infty}{f}_{21}^-=0.
\end{equation}
The remaining functions ${f}_{11}^-$ and ${f}_{22}^+$ are obtained
by using the differential form of (\ref{cauchy}), namely
\begin{equation}
{f}_{11}^+ - {f}_{11}^- = -e^{2ikx}\rho^-{f}_{12}^-, \quad
{f}_{22}^+ - {f}_{22}^- = e^{-2ikx}\rho^+{f}_{21}^+.
\label{R-H-f}\end{equation}
With (\ref{val-0-1}), this gives
\begin{equation}\label{val-0-2}
\forall x<0\ ,\ {\rm Im}(k)=0\ :\ {f}_{11}^-(k,x,t)=1,\quad
{f}_{22}^+(k,x,t)=1.
\end{equation}
Finally, the two matrices $({f}_1^-,{f}_2^+)$ and $(\nu_1^-,\nu_2^+)$
solve the same first-order differential problem and have the same values at
$x=0$ for the diagonal elements and at $x=\infty$ for the off-diagonal ones,
hence they are equal.
The theorem is proved and an immediate consequence is that
the coefficients $\rho^\pm$ are effectively given by (\ref{rho+-}).
From now on we will use the single matrix $\nu$ as denoting the common
solution to (\ref{cauchy}) and (\ref{int-1})-(\ref{int-4}).
\paragraph*{Reduction.}
It is convenient for physical applications to
assume the following {\em reduction}
\begin{equation}
r=- \bar q
\label{reduc}\end{equation}
for which
\begin{equation}
(\nu_{11}^\pm)^*=\nu_{22}^\mp,\quad (\nu_{12}^\pm)^*=-\nu_{21}^\mp,
\label{red-nu}\end{equation}
\begin{equation}
\rho^+=-(\rho^-)^*,\quad \tau^+=(\tau^-)^*,\quad
k_n^+=\overline{k}_n^-,\quad C_n^+=\overline{C}_n^-.
\label{red-struc}\end{equation}
The overbar denotes the complex conjugation and the ``star''
\begin{equation}\label{star}
f^*(k)=\overline{f\left(\bar k\right)}.\end{equation}
As a consequence the boundary values of $\nu$ become
\begin{equation}x=0\ :\
\nu^+=\left(\matrix{1 & \rho \cr 0 & 1}\right),\quad
\nu^-=\left(\matrix{1 & 0 \cr -\rho^* & 1}\right),
\label{bound-0}\end{equation}
\begin{equation}x=\infty\ :\
\nu^+=\left(\matrix{1/\tau & 0\cr e^{2ikx}\rho^*/\tau^* & \tau}\right),\quad
\nu^-=\left(\matrix{\tau^* & -e^{-2ikx}\rho/\tau \cr 0 & 1/\tau^*}\right),
\label{bound-a}\end{equation}
where we use the notation $\rho=\rho^+$ and $\tau=\tau^+$, for real $k$.
\paragraph*{Transmission coefficient.}
It is useful for the following to derive
the relationship between $\tau$ and $\rho$. Although this can be done in
general, we still consider only the case when no bound states are present (this
will be the case of interest). From the definitions (\ref{tau+-}), we have for
large $k$
\begin{equation}
\tau^\pm(k)\sim 1+{\cal O}({1\over k}).
\label{behav-tau}\end{equation}
Defining then ${h}(k)$ as $\ln(\tau^+)$ for ${\mbox{Im}}(k)\ge0$, and
$-\ln(\tau^-)$ for ${\mbox{Im}}(k)\le0$,
its discontinuity on the real $k$-axis can be written by means of
(\ref{unitarity})
\begin{equation}
{h}^+-{h}^-=\ln(1-\rho^+\rho^-).
\label{disc-f}\end{equation}
Since, from (\ref{behav-tau}), ${h}(k)$ vanishes for large $k$, the above
Riemann-Hilbert problem has the following solution
\begin{equation}
{\rm Im}(k)\ne0\ :\ {h}(k)={1\over2i\pi}\int{d\lambda\over\lambda-k}\
\ln(1-\rho^+\rho^-),
\label{sol-f}\end{equation}
and hence $\tau^\pm$ are obtained from $\rho^\pm$. Under the reduction
(\ref{reduc}), this relation can be written
\begin{equation}
\tau=\sqrt{1+|\rho|^2}\ e^{i\theta},\quad
\theta=-{1\over2\pi}P\!\!\!\!\int{d\lambda\over\lambda-k}\ \ln(1+|\rho|^2).
\label{tau-rho}\end{equation}
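The relation (\ref{tau-rho}) lends itself to a numerical check (an illustration, not part of the original text): compute $\rho(k)$ on the real axis by integrating the spectral problem across a compactly supported potential, evaluate the principal value integral for $\theta$, and compare with the directly computed $\tau$. The box potential below is an arbitrary choice with amplitude small enough that no bound states occur.

```python
import numpy as np

c, L = 0.8, 1.0   # illustrative box potential q = c on [0, L], r = -conj(q)

def rk4(state, f, h, n):
    for _ in range(n):
        s1 = f(state); s2 = f(state + h/2*s1); s3 = f(state + h/2*s2); s4 = f(state + h*s3)
        state = state + h/6*(s1 + 2*s2 + 2*s3 + s4)
    return state

def scattering(ks):
    """rho^+(k) and tau^+(k) for real k, read off at the support edge, cf. (bound-x=a)."""
    ks = np.asarray(ks, dtype=complex)
    fy = lambda y: np.stack([c*y[1], 2j*ks*y[1] - np.conj(c)*y[0]])      # column nu_1^+
    fz = lambda z: np.stack([-2j*ks*z[0] + c*z[1], -np.conj(c)*z[0]])    # column nu_2^-
    y = rk4(np.stack([np.ones_like(ks), np.zeros_like(ks)]), fy, L/400, 400)
    z = rk4(np.stack([np.zeros_like(ks), np.ones_like(ks)]), fz, L/400, 400)
    return -z[0]*np.exp(2j*ks*L)/y[0], 1/y[0]

k0 = 0.7
rho0, tau0 = (v[0] for v in scattering([k0]))

# theta(k0) = -(1/2 pi) P int d lam ln(1+|rho|^2)/(lam-k0), on a symmetric grid
h = 0.02
lam = k0 + (np.arange(-2000, 2000) + 0.5) * h       # covers k0 +/- 40
rho, _ = scattering(lam)
theta = -np.sum(np.log(1 + np.abs(rho)**2) / (lam - k0)) * h / (2*np.pi)

assert abs(tau0 - np.sqrt(1 + abs(rho0)**2) * np.exp(1j*theta)) < 0.02
```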
\setcounter{equation}{0}
\section{Lax pair and general evolution}
\paragraph*{Lax pair.}
The compatibility
\begin{equation}
U_t-V_x+[U,V]=0
\label{UV}\end{equation}
between the two following spectral problems
\begin{equation}
\nu_x=\nu\ ik\sigma_3+U\nu,\quad U=-ik\sigma_3+\left(\matrix{0&q \cr
r&0}\right),
\label{lax1}\end{equation}
\begin{equation}
\nu_t=\nu\Omega+V\nu,\quad V={1\over2i\pi}\int\!\!\!\int
\frac{d\lambda\wedge d\bar\lambda}{\lambda-k}\ \nu(Me^{2i\lambda\sigma_3x}
-{\partial\Omega\over\partial
\bar\lambda})\nu^{-1},
\label{lax2}\end{equation}
leads to the evolutions
\begin{equation}
{\partial\over\partial t}\left(\matrix{0&q \cr r&0}\right)=
-{1\over2\pi}\int\!\!\!\int d\lambda\wedge d\bar\lambda\ [\sigma_3\ ,\ \nu(Me^{2i\lambda\sigma_3x}
-{\partial\Omega\over\partial
\bar\lambda})\nu^{-1}],
\label{evol-q-gene}\end{equation}
\begin{equation}
{\partial\over\partial t}\ R=[R\ ,\ \Omega]+M.
\label{evol-R-gene}\end{equation}
This general result is not proved here (see \cite{leon-jmp} or
\cite{leon-pla}), but we give in the next section the computation of the time
evolution of $\rho^+(k,t)$ (given here by (\ref{evol-R-gene})) directly from the Lax
pair, when the evolution of $q(x,t)$ is given by (\ref{qt-gene}).
\paragraph*{Solvable evolution.}
The entries $\Omega(k,t)$ (diagonal matrix-valued function)
and $M(k,t)$ (off-diagonal matrix-valued distribution) are {\em arbitrary},
which allows one to solve evolutions with nonanalytic dispersion
relations ($\partial\Omega/\partial\bar k\ne0$) and {\em arbitrary} boundary
values \cite{leon-pla} (the problem of arbitrary boundary values was first
solved for the SIT equations in \cite{gabzak}).
As we are interested in solving evolutions such as (\ref{qinteq}),
where the integral runs on the real $k$-axis, we work within the reduction
(\ref{reduc}) and choose (remember the definition (\ref{star})):
\begin{equation}
M(k,t)=\left(\matrix{0 & m(k,t)\delta^+ \cr -m^*(k,t)\delta^- & 0}\right),
\label{M}\end{equation}
and the dispersion relation
\begin{equation}
\Omega(k,t)=\omega(k,t)\sigma_3,\quad
\omega(k,t)=-{1\over\pi}\int_{-\infty}^{+\infty}{d\lambda\over\lambda-k}\
\varphi(\lambda,t),\quad k\not\in{\bf R},
\label{Omega}\end{equation}
analytic except on the real axis, where it has the discontinuity
\begin{equation}
{\partial\omega(k,t)\over\partial\bar k}=
{i\over2}\left(\omega^+(k_R,t)-\omega^-(k_R,t)\right)\delta(k_I)=
\varphi(k_R,t)\delta(k_I).
\label{d-bar-omega}\end{equation}
Then the quantity $\nu(\partial\Omega/\partial
\bar\lambda)\nu^{-1}$ is ill-defined because $\nu(k,x,t)$ itself is
discontinuous on the real axis. To compute it we need to use the identity
\cite{marco}
\begin{equation}
\nu\ {\partial\Omega\over\partial\bar k}\ \nu^{-1}=
{\partial\over\partial\bar k}
(\nu\Omega\nu^{-1})-\nu[Re^{2ik\sigma_3x},\Omega]\nu^{-1}.
\label{magic}\end{equation}
Within the reduction and with the above choice of $M$ and $\Omega$, the
evolution (\ref{evol-q-gene}) becomes, by means of (\ref{magic}),
\begin{equation}
q_t={2i\over\pi}{\int_{-\infty}^{+\infty}}dk\left[
2\varphi\nu_{11}^+\nu_{12}^-+
m(\nu_{11}^+)^2 e^{-2ikx}+m^*(\nu_{12}^-)^2 e^{2ikx}\right].
\label{qt-gene}\end{equation}
To get the above result, it is necessary to use the Riemann-Hilbert
relations (\ref{R-H}) to express $\nu_{11}^-$ and $\nu_{12}^+$ in terms of
$\nu_{11}^+$ and $\nu_{12}^-$.
This equation, coupled to the system (\ref{lax1}), is the general solvable
evolution with a nonanalytic dispersion law vanishing at large $k$. It is used
now to solve the boundary value problem (\ref{maxsimp})(\ref{qinteq}).
\setcounter{equation}{0}
\section{Solution of SRS}
The above theory allows one to solve the nonlinear evolution problem
(\ref{maxsimp})(\ref{qinteq}) with the initial datum
$q(x,0)$ (which actually will be taken to vanish) and the boundary value
(\ref{in-A}).
This is done in the following way.
\paragraph*{Basic relations.}
As the three functions
\begin{equation}
\left(\matrix{a_L\cr a_S\exp[2ikx-i\Delta\omega t]}\right),\quad
\left(\matrix{\nu_{11}^+\cr \nu_{21}^+}\right),\quad
\left(\matrix{\nu_{12}^-\cr \nu_{22}^-}\right)\exp[2ikx],
\label{vectors}\end{equation}
solve the same first-order differential system, they are uniquely related
by their values at $x=0$; comparing (\ref{in-A}) with (\ref{bound-0}),
we readily obtain
\begin{equation}
\left(\matrix{a_L\cr a_S\exp[2ikx-i\Delta\omega t]}\right)=J_L
\left(\matrix{\nu_{11}^+\cr \nu_{21}^+}\right)+
J_S\left(\matrix{\nu_{12}^-\cr \nu_{22}^-}\right)\exp[2ikx-i\Delta\omega t].
\label{exp-vect}\end{equation}
This formula actually gives the solution to the physical problem (compute the
output values from the input values) as soon as the function $\rho^+(k,t)$ is
calculated ($\tau$ is expressed from $\rho$ in (\ref{tau-rho})).
\paragraph*{Computation of $\rho(k,t)$.}
This computation is performed by first determining the functions $\varphi(k,t)$
and $m(k,t)$. This is done by equating the evolution (\ref{qinteq}) with
(\ref{qt-gene})
by means of the relation (\ref{exp-vect}) between $(a_L, a_S)$ and $\nu$.
Using the reduction relations (\ref{red-nu}),
we finally get that the evolution (\ref{qinteq}),
where $(a_L, a_S)$ is expressed in terms of $\nu$ through (\ref{exp-vect}),
reads exactly as (\ref{qt-gene}) for
\begin{equation}
\varphi=-i{\pi\over4}g_0\left(|J_L|^2-|J_S|^2\right),\quad
m=i{\pi\over2}g_0\ J_LJ_S^*e^{i\Delta\omega t}.
\label{evol-func}\end{equation}
The corresponding evolution (\ref{evol-R-gene}), which reads
\begin{equation}
\rho^+_t=-2\omega^+\ \rho^+-2im,\label{evol-rho-base}\end{equation}
then gives by means of (\ref{Omega})
\begin{equation}
\rho^+_t(k,t)=\pi g_0 J_L(k,t)J_S^*(k,t)e^{i\Delta\omega t}-{i\over2}
g_0\rho^+(k,t) \int{d\lambda\over\lambda-(k+i0)}\ (|J_L|^2-|J_S|^2) ,
\label{evol-rho}\end{equation}
\begin{equation}
C_{n,t}^+=C_n^+ \left(-{i\over2}g_0\int{d\lambda\over\lambda-k_n}\
(|J_L|^2-|J_S|^2) \right).
\label{evol-cn}\end{equation}
For everything to work, we have seen in Sec.~3 that the analytical requirement
(\ref{rho-analytic}) is essential. It is clear from the above time evolution
that this is ensured for all $t$ if we require that
$J_L(k,t)$ be analytic in the upper half plane and $J_S(k,t)$ in the lower one
(note from (\ref{star}) that $J_S^*(k,t)$ is a function of $k$ analytic in the
upper half plane). In short
\begin{equation}\label{input-analytic}
{\rm Im}(k)>0\ \Rightarrow\ {\partial\over\partial\bar k}\ J_L(k,t)=0,\quad
{\rm Im}(k)<0\ \Rightarrow\ {\partial\over\partial\bar k}\ J_S(k,t)=0.
\end{equation}
The other constraint is that, for all $t$, the function $\rho^+(k,t)$ vanishes
at large $k$ in ${\rm Im}(k)>0$. This is guaranteed here because $J_L(k,t)$,
$J_S^*(k,t)$ and $\omega^+(k,t)$ do vanish at large $k$ in ${\rm Im}(k)>0$.
Note that it would not be true for an analytic dispersion relation
(like the polynomial $k^2$ for NLS).
A medium initially at rest corresponds to $q(x,0)=0$, and hence from
(\ref{rho+-}) and (\ref{apres1}) to
\begin{equation}
\rho^+(k,0)=0,\quad C_n^+(0)=0.
\label{init-rho}\end{equation}
Consequently the evolution (\ref{evol-cn}) ensures that no bound states are
created and the equation (\ref{evol-rho}) can be solved explicitly. Then the
solution of the related integral equations (\ref{cauchy-1+}) and
(\ref{cauchy-2-}) gives $\nu^\pm(k,x,t)$,
which in turn allows one to compute the light field envelopes $a_L(k,x,t)$ and
$a_S(k,x,t)$.
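For completeness (a sketch added here, not in the original derivation): since (\ref{evol-rho-base}) is linear in $\rho^+$, the vanishing initial datum (\ref{init-rho}) gives, by variation of constants,
\begin{equation}
\rho^+(k,t)=-2i\int_0^t dt'\ m(k,t')\,
\exp\left[-2\int_{t'}^{t} dt''\ \omega^+(k,t'')\right],
\label{rho-explicit}\end{equation}
with $m$ and $\omega^+$ given by (\ref{evol-func}) and (\ref{Omega}).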
In particular, their values at $x=1$ furnish the output in the
case of the finite interval of physical length $L$. For instance, with the
solution $\rho^+(k,t)$ of (\ref{evol-rho}), the pump output is obtained by
solving (with $\rho^-=-(\rho^+)^*$):
\begin{equation}
\nu_{11}^+(k,x,t)=1-{1\over2i\pi}\int
{d\lambda\over\lambda-(k+i0)}
\ \rho^-(\lambda,t)\nu_{12}^-(\lambda,x,t)e^{2i\lambda x},
\label{nu-11}\end{equation}
\begin{equation}
\nu_{12}^-(k,x,t)={1\over2i\pi}\int
{d\lambda\over\lambda-(k-i0)}
\ \rho^+(\lambda,t)\nu_{11}^+(\lambda,x,t)e^{-2i\lambda x},
\label{nu-12}\end{equation}
and then it is given by
\begin{equation}
a_L(k,1,t) = J_L(k,t)\nu_{11}^+(k,1,t)
+J_S(k,t)\nu_{12}^-(k,1,t)\exp[2ik-i\Delta\omega t].\end{equation}
We note finally that, in the case of the semi-infinite line ($L\to\infty$), from
the boundary values (\ref{bound-a}), we have the following pump output
\begin{equation}
a_L(k,\infty,t) ={1\over\tau^+}\left(J_L-\rho^+ J_Se^{-i\Delta\omega t}\right),
\label{out-AL}\end{equation}
which has been used in \cite{leon-pra} to interpret the experiments of
\cite{druwen}.
\setcounter{equation}{0}
\section{Evolution of the spectral transform from the Lax pair}
The auxiliary Lax operator given in (\ref{lax2}) can be simplified by using
(\ref{magic}) and the property that $\nu\Omega\nu^{-1}$ vanishes as ${\cal
O}(1/k)$. Then (\ref{lax2}) reduces to
\begin{equation}
\nu_t=X\ \nu,\quad X=
{1\over2i\pi}\int\!\!\!\int\frac{d\lambda\wedge d\bar\lambda}{\lambda-k}\
\nu e^{-i\lambda\sigma_3 x}\left(M+[R,\Omega]\right)
e^{i\lambda\sigma_3 x}\nu^{-1}.
\end{equation}
For the particular choices (\ref{M}) and (\ref{Omega}), the above equation reads
\begin{equation}\label{lax3}
\nu_t=X\ \nu,\quad X(k,x,t)=
-{1\over\pi}\int {d\lambda\over\lambda-k}\ \chi(\lambda,x,t),
\end{equation}
$$
\chi(k,x,t)=(m-i\rho^+\omega^+)
\left(\matrix{-\nu^+_{11}\nu^+_{21} & \nu^+_{11}\nu^+_{11}\cr
-\nu^+_{21}\nu^+_{21} & \nu^+_{11}\nu^+_{21}}\right)\exp[-2ikx]+$$
\begin{equation}
+(m^*-i(\rho^+)^*\omega^-)
\left(\matrix{-\nu^-_{12}\nu^-_{22} & \nu^-_{12}\nu^-_{12}\cr
-\nu^-_{22}\nu^-_{22} & \nu^-_{12}\nu^-_{22}}\right)\exp[2ikx].
\label{chi}\end{equation}
Our purpose here is to recover the time evolution (\ref{evol-rho-base}) of
the reflection coefficient $\rho(k,t)$ by means of the usual method, which
consists in evaluating (\ref{lax3}) at one boundary, say at $x=0$. The boundary
values of both $X^+$ and $X^-$ at $x=0$ are easily calculated from those of
$\nu^+$ and $\nu^-$ in (\ref{bound-x=0}). Then the equation (\ref{lax3}) gives
for $\nu^+$ and $\nu^-$ successively
\begin{equation}\label{rho-t-old}
\rho^+_t=
{1\over\pi}\int {d\lambda\over\lambda-(k+i0)}
(i\omega^+\rho^+-m),
\end{equation}
\begin{equation}0=
{1\over\pi}\int {d\lambda\over\lambda-(k+i0)}
(-i\omega^-(\rho^+)^*+\bar m),
\end{equation}
\begin{equation}0=
{1\over\pi}\int{d\lambda\over\lambda-(k-i0)}
(i\omega^+\rho^+-m),
\label{zero-t-old}\end{equation}
\begin{equation}
-(\rho^+)^*_t=
{1\over\pi}\int {d\lambda\over\lambda-(k-i0)}
(-i\omega^-(\rho^+)^*+\bar m).
\end{equation}
First, these equations are compatible if $\omega^-=(\omega^+)^*$, which is
guaranteed through (\ref{Omega}) by $\varphi\in i{\bf R}$ (see
(\ref{evol-func})). Then the above four equations reduce to two and we select
(\ref{rho-t-old}) and (\ref{zero-t-old}) whose solution goes in two steps.\\
{\em First step}: the equation (\ref{zero-t-old}) is satisfied if the function
$i\omega^+\rho^+-m$ is analytic in Im$(k)>0$ and vanishes as
$k\to\infty$. This is true by construction for $\omega(k,t)$, and
it is true also for $m(k,t)$ because we have required that
$J_L(k,t)$ be analytic in the upper half plane and $J_S(k,t)$ in the lower one
(remember from (\ref{star}) that $J_S^*(k,t)$ is a function of $k$ analytic
in the upper half plane). Finally this holds also for the reflection coefficient
$\rho^+(k,t)$ by the requirement (\ref{rho-analytic}).\\
{\em Second step}: by subtraction of (\ref{rho-t-old}) and (\ref{zero-t-old}),
and by use of the Sokhotski formula (\ref{sokhotski}), we arrive precisely at
the time evolution (\ref{evol-rho-base}).
Finally, using the behaviors at $x=\infty$ instead of $x=0$ furnishes the time
evolution of the transmission coefficients $\tau^\pm$.
{\bf ACKNOWLEDGEMENTS}

It is a pleasure to thank M. Boiti, A.V. Mikhailov and F. Pempinelli
for enlightening discussions and constructive comments.
\section{Related work}
\label{sec:rwork}
\subsection{Vision Transformers}
Dosovitskiy \textit{et al}., in \cite{dosovitskiy2020image}, were the first to propose Vision Transformer (ViT) as a pure Transformer backbone for image classification. In their work, the input image is segmented into a series of non-overlapping image patches that are then projected into a linear embedding sequence. This sequence is concatenated with a learnable positional encoding that holds information on the spatial location of patches in the image. Thus, an input sequence is formed that is finally fed to the Transformer for classification.
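The pipeline just described (patch splitting, linear projection, class token, positional encoding) can be sketched in a few lines of NumPy; the image size, patch size, embedding dimension and the random matrices standing in for learned weights below are illustrative assumptions, not the original ViT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 32; C = 3; P = 8; D = 64            # image size, channels, patch size, embed dim

image = rng.standard_normal((H, W, C))

# split the image into non-overlapping P x P patches and flatten each one
n_h, n_w = H // P, W // P
patches = (image.reshape(n_h, P, n_w, P, C)
                .transpose(0, 2, 1, 3, 4)
                .reshape(n_h * n_w, P * P * C))

# linear projection to the embedding dimension, plus a learnable class token
W_e = rng.standard_normal((P * P * C, D)) / np.sqrt(P * P * C)
tokens = patches @ W_e                       # (num_patches, D)
cls = rng.standard_normal((1, D))
seq = np.concatenate([cls, tokens], axis=0)  # (1 + num_patches, D)

# add (learnable) positional embeddings to retain spatial information
pos = rng.standard_normal(seq.shape)
seq = seq + pos

assert seq.shape == (1 + n_h * n_w, D)       # here: (17, 64), fed to the encoder
```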
Following ViT, several variants of Vision Transformers have emerged in the last few years in an attempt to improve their performance on image classification benchmarks. A recent survey and categorisation of Vision Transformers can be found in \cite{liu2021survey}. To enhance performance and accelerate convergence, several literature works proposed the combination of CNNs with Transformers in order to leverage the convolutional biases of image data effectively. Touvron \textit{et al}., in \cite{touvron2021training}, proposed the Data-efficient image Transformer (DeiT) to improve ViT by applying data augmentation and optimisation strategies. Furthermore, based on empirical observations that CNN is a better teacher than Transformer, the authors employed during pre-training a teacher-student strategy, in which the CNN teacher transferred its inductive bias in a soft way to the Transformer student through knowledge distillation.
In a similar fashion, the authors in \cite{d2021convit} proposed ConViT that attaches a parallel convolution branch to the Transformer branch so as to impose convolutional inductive biases via a Gated Positional Self-Attention (GPSA). The proposed GPSA approximates the locality of convolutional layers with local attention or operates as vanilla self-attention layers depending on the image context. In \cite{Yuan_2021_ICCV}, the authors proposed the Convolution-enhanced image Transformer (CeiT) that used a new convolutional module to extract patch embeddings from low-level features and a layer-wise attention for the class embedding to model long-range dependencies.
Recognizing that a fixed resolution across the entire network neglects fine-grained information and brings heavy computational costs, several literature works proposed hierarchical structures for Vision Transformers. In \cite{t2t_2021_ICCV}, the authors proposed the Tokens-To-Token Vision Transformer (T2T-ViT), which gradually structures the patch embedding sequence by combining neighbouring embeddings into single embeddings, i.e., tokens. This helped the network to learn the local structure of patches and produce hierarchical features. Wang \textit{et al}., in \cite{Wang_2021_ICCV}, proposed the Pyramid Vision Transformer (PVT), a Transformer network with the pyramid structure of CNNs. PVT consists of several stages, where the spatial dimensions of the output are reduced in each stage. In addition, a spatial-reduction attention (SRA) layer is responsible for learning keys and values of lower dimensionality. In \cite{Heo_2021_ICCV}, Heo \textit{et al}. proposed the Pooling-based Vision Transformer (PiT), which reduces the spatial dimensions progressively with pooling layers similarly to CNNs.
To improve local attention and enhance the feature extraction capabilities of Transformer-based networks, Han \textit{et al}., in \cite{han2021transformer}, proposed the Transformer-iN-Transformer (TNT) network to combine patch-level and pixel-level representations by splitting the patches into sub-patches. Then, the authors adopted and combined Transformer networks at both levels in order to produce better representation with richer local and global information. On the other hand, Liu \textit{et al}., in \cite{Liu_2021_ICCV}, proposed the Swin Transformer that calculates its representations by shifting windows along the input image to model global and boundary features. More specifically, the self-attention is calculated solely on each window and allows cross-window interactions. Finally, the Swin Transformer merges patches in each layer to create hierarchical features with linear complexity.
On the other hand, understanding that Transformers are data-hungry models and that they need sufficiently large datasets to perform well, the authors in \cite{hassani2021escaping} proposed three compact Transformer-based models. ViT-Lite is nearly identical to the original ViT, but with more suitable smaller patch sizing for small datasets. Compact Vision Transformers (CVT) expand on ViT-Lite by using a novel sequential pooling method that pools the sequence-based information that results from the Transformer encoder, eliminating the need for the extra Classification Token. Finally, Compact Convolutional Transformers (CCT) further expand on CVT by adding convolutional blocks to the tokenization step, thus preserving local information and being able to encode relationships between patches, unlike the original ViT.
Traditionally, Vision Transformers process the raw pixel intensities directly in the Euclidean space without considering how different data representations may affect their accuracy. The proposed work can be considered as a method that improves local attention through the use of feature representations in different manifolds to create more descriptive attention maps.
\begin{figure*}[t]
\centering
\includegraphics[width = 0.9\textwidth]{images/vision_transformer.png}
\caption{Overview of the network architecture of a Vision Transformer with its main components.}
\label{fig:vit}
\end{figure*}
\subsection{Manifold Background}
A manifold is a topological space that locally resembles a Euclidean space near each point \cite{kratsios2020non}. Essentially, a manifold is a mapping from one space to another, allowing similar features to appear closer to each other, while dissimilar features move further apart. Manifolds have been widely employed in computer vision tasks due to the fact that feature representations in different manifolds carry special statistical and geometrical properties that may provide different discriminative power for a given task \cite{huang2017geometry}. Two widely employed special types of manifolds used to describe image sets and videos in the literature are the symmetric positive definite (SPD) and Grassmann manifolds.
In \cite{huang2017geometry}, the authors utilized properties of the Riemannian geometry on manifolds and proposed a new similarity method based on SPD features for clustering tasks. Yu \textit{et al}., in \cite{yu2019contour}, proposed the contour covariance as a region descriptor for accurate image classification. Such a descriptor is a point on the SPD manifold. In a similar fashion, the authors in \cite{jeong2021efficient} proposed the transformation of the input space to an SPD manifold (i.e., covariance matrix) and the use of a novel continuous manifold neural network, called ODE-RGRU, for action recognition and sleep staging classification. The authors in \cite{chu2022collaborative} proposed a collaborative representation-based image set classification algorithm to model the original image set with covariance matrices and learn powerful representations for improved classification performance. In \cite{wang2021symnet}, the authors proposed SymNet, an SPD manifold deep learning network for image set classification, in which an image set is represented as a non-singular covariance matrix on the SPD manifold. The main drawback of employing covariance features in a deep learning framework is the non-linearity of the SPD manifold that introduces challenging optimization problems.
On the other hand, Grassmannian geometry has been widely employed in several computer vision tasks, such as sign language recognition, image classification and action recognition. In \cite{dimitropoulos2016classification}, the authors introduced Linear Dynamic System (LDS) features as data representations in the Grassmann manifold for accurate detection of fire and smoke events in video sequences. In \cite{konstantinidis2018deep}, the authors proposed the use of LDS histograms as points in the Grassmann manifold for accurate sign language recognition. In another work \cite{konstantinidis2018skeleton}, the same authors proposed the Grassmannian Pyramid Descriptor (GPD) to extract temporal representations from skeletal sequences for action recognition. In \cite{dimitropoulos2017grading}, the authors performed automated grading of invasive breast carcinoma from medical images by considering each image as a set of multidimensional spatially-evolving signals that can be efficiently modelled using Vector of Locally Aggregated Descriptors (VLAD) representations on the Grassmann manifold. In a different approach, the authors in \cite{dimou2018lds} introduced LDS features in a ResNet architecture, achieving state-of-the-art performance in image classification. Finally, the authors in \cite{wei2022neighborhood} proposed an unsupervised dimensionality reduction algorithm based on Neighborhood Preserving Embedding (GNPE) to project image sets modelled in high-dimensional Grassmann manifold to a relative low-dimensional one of higher discriminative capabilities, thus dealing with the high computational cost involved with Grassmann manifolds.
To further leverage the discriminative power of feature representations in different manifolds, other research works have attempted to fuse multiple manifold representations. Recently, Wang \textit{et al}. \cite{wang2020adaptive} employed and fused representations in the SPD and Grassmann manifolds for clustering purposes. However, Transformer-based networks typically focus only on the Euclidean space of pixel intensities, overlooking alternative data representations. Thus, leveraging the statistical properties of different manifolds, this work proposes a novel multi-manifold multi-head attention for Vision Transformers that combines feature representations from three manifolds (i.e., Euclidean, SPD and Grassmann) to learn a highly descriptive attention map that can better identify the important context of input images. The fusion of representations in different manifolds can guide a Transformer-based network to better model the underlying structure of the input space, leading to improved classification results.
\section{Methodology}
\label{sec:method}
This section initially introduces the main components of a Vision Transformer and then presents in detail the proposed multi-manifold multi-head attention.
\subsection{Vision Transformers}
Initially, a Vision Transformer extracts patch embeddings from an input image and adds position embeddings to them to form the input sequence. This sequence is fed to the Transformer encoder network that comprises alternating multi-head attention and multi-layer perceptron (MLP) layers. The output of the Transformer encoder is a feature representation that passes through a linear layer for classification. The general network architecture of a Vision Transformer is presented in Fig. \ref{fig:vit}.
\begin{figure*}[t]
\centering
\includegraphics[width = 0.8\textwidth]{images/mma_architecture.png}
\caption{Proposed multi-manifold multi-head attention, which replaces the standard multi-head attention shown in a green rectangle in Fig.~\ref{fig:vit}.}
\label{fig:mmha}
\end{figure*}
\subsubsection{Patch Embeddings}
Given an input image $\textbf{x} \in \mathbb{R}^{H\times W \times C}$, where $H$, $W$, $C$ are the height, width and channels of the image, respectively, a Vision Transformer divides the image into non-overlapping patches, which are then flattened and converted to a sequence of vectors $\textbf{x}_P \in \mathbb{R}^{L\times (P^2 \cdot C)}$, where $L = \frac{HW}{P^2}$ is the sequence length and $P$ is the size of the patch. Afterwards, each vector of the sequence is linearly projected into the space of the hidden dimension of the Transformer encoder through a trainable linear projection (i.e., fully connected) layer $\textbf{W}_p \in \mathbb{R}^{(P^2 \cdot C) \times D}$, where $D$ is the hidden dimension of the layer. Other works combine image patch extraction and linear projection into a single step through a 2D convolution operation \cite{vit_2dconv}. From an implementation perspective, a 2D convolution is beneficial, as GPUs are optimized for such operations, while there is also no need to first split an image into patches (i.e., patches are effortlessly formed by the 2D convolution operation).
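The patch-embedding step described above can be sketched in NumPy as follows; the function names and the explicit reshape/transpose decomposition are illustrative assumptions, not part of the paper's implementation:

```python
import numpy as np

def extract_patches(image, patch_size):
    """Split an H x W x C image into non-overlapping flattened patches.

    Returns an array of shape (L, P*P*C), where L = H*W / P**2.
    """
    H, W, C = image.shape
    P = patch_size
    assert H % P == 0 and W % P == 0, "image dims must be divisible by P"
    # (H//P, P, W//P, P, C) -> (H//P, W//P, P, P, C) -> (L, P*P*C)
    patches = image.reshape(H // P, P, W // P, P, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, P * P * C)

def patch_embeddings(image, patch_size, W_p):
    """Linear projection of flattened patches: (L, P*P*C) @ (P*P*C, D)."""
    return extract_patches(image, patch_size) @ W_p
```

In practice the same result is obtained with a single strided 2D convolution whose kernel size and stride both equal $P$, which is the fused variant mentioned above.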
\subsubsection{Position Embeddings}
Position embeddings are vectors that are added to the patch embeddings and provide positional information to them. In this way, position embeddings provide some sense of order in the input sequence, allowing the Transformer encoder to model both the content of the patches and their spatial location with respect to the other image patches. The most common position embeddings are either vectors with sine and cosine frequencies \cite{dosovitskiy2020image} or learned embeddings \cite{d2021convit,chu2021conditional}.
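As an illustration of the first option (fixed sine/cosine frequencies), a minimal NumPy sketch follows; the helper name is ours, and an even hidden dimension $D$ is assumed:

```python
import numpy as np

def sinusoidal_position_embeddings(L, D):
    """Fixed sine/cosine position embeddings of shape (L, D).

    Even dimensions receive sines and odd dimensions cosines, with
    geometrically spaced frequencies, so each position gets a unique code.
    """
    pos = np.arange(L)[:, None]            # (L, 1) position indices
    i = np.arange(D // 2)[None, :]         # (1, D/2) frequency indices
    angles = pos / (10000 ** (2 * i / D))  # (L, D/2)
    pe = np.zeros((L, D))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe
```

The learned alternative simply replaces this fixed table with a trainable $(L, D)$ parameter matrix added to the patch embeddings.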
\subsubsection{Multi-head Attention}
The multi-head attention is the most important layer of the Transformer encoder. It is basically a transformation layer that maps an input sequence $\textbf{X}\in \mathbb{R}^{L\times d_x}$, where $L$ is the sequence length and $d_x$ the dimension of the input sequence, to three different vectors, namely the query $\textbf{Q}$, the key $\textbf{K}$ and the value $\textbf{V}$. These vectors are generated as:
\begin{equation}
\textbf{Q} = \textbf{XW}_q, ~ \textbf{K} = \textbf{XW}_k,~ \textbf{V} = \textbf{XW}_v
\end{equation}
where $\textbf{W}_q\in \mathbb{R}^{d_x\times d_q}$, $\textbf{W}_k\in \mathbb{R}^{d_x\times d_k}$, and $\textbf{W}_v\in \mathbb{R}^{d_x\times d_v}$ are three different weight matrices with $d_q$, $d_k$ and $d_v$ being the dimensions of the query, key and value vectors, respectively. Since the dimensions of these vectors are equal to each other, for the rest of the manuscript they will be denoted simply as $d$. With the query, value and key matrices defined, the scaled dot-product attention is equal to:
\begin{equation}
Attention(\textbf{Q},\textbf{K},\textbf{V}) = \mathrm{softmax}~ (\frac{\textbf{QK}^T}{\sqrt{d}}) \textbf{V}
\end{equation}
The obtained attention weights are assigned to the elements of the value vector $\textbf{V}$ and indicate which elements the layer attends to in order to produce richer feature representations for a given task. Instead of using a single attention head that projects the input into a feature subspace with limited modelling capability, Vaswani \textit{et al}. \cite{vaswani2017attention} proposed multi-head self-attention (MHSA), which performs different linear projections of the input into different subspaces. This is achieved by parallel attention layers, called heads, whose outputs are concatenated together. MHSA is computed as:
\begin{gather}
\textbf{Q}_i = \textbf{XW}_{q}^i, ~~ \textbf{K}_i = \textbf{XW}_{k}^i, ~~ \textbf{V}_i = \textbf{XW}_{v}^i\\
\textbf{S}_i = Attention(\textbf{Q}_i, \textbf{K}_i, \textbf{V}_i),~~ i = 1, 2, \dots, h\\
MHSA(\textbf{Q},\textbf{K},\textbf{V}) = concat(\textbf{S}_1, \textbf{S}_2, \dots, \textbf{S}_h)\textbf{W}_{o}
\end{gather}
where $h$ is the total number of heads, $\textbf{W}_o \in \mathbb{R}^{hd\times d_{model}}$ is the weight projection matrix with $d_{model}$ being the size of the projection space ($d = d_{model}/h$), $\textbf{S}_i \in \mathbb{R}^{L \times d}$ is the output of each head and $\textbf{W}_{q}^i,~\textbf{W}_{k}^i,~\textbf{W}_{v}^i \in \mathbb{R}^{d_{model}\times d }$ are the weight matrices for the query, key and value vectors of each head $i$, respectively. Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. As a result, the model can gather richer contextual information, because each head focuses on different regions of the input, yielding a more comprehensive representation after the combination of the head outputs.
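The scaled dot-product attention and its multi-head extension described above can be sketched in NumPy as follows; the per-head weight lists and function names are illustrative conventions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def mhsa(X, Wq, Wk, Wv, Wo, h):
    """Multi-head self-attention.

    Wq, Wk, Wv are lists of h per-head (d_model x d) weight matrices and
    Wo is the (h*d x d_model) output projection; head outputs S_i are
    concatenated before the final projection.
    """
    heads = [attention(X @ Wq[i], X @ Wk[i], X @ Wv[i]) for i in range(h)]
    return np.concatenate(heads, axis=-1) @ Wo
```

A production implementation would batch the per-head projections into single matrix multiplications, but the list form mirrors the per-head equations above.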
\subsection{Multi-manifold Multi-head Attention}
Inspired by the need for more descriptive attention mechanisms and leveraging the fact that different manifolds possess different statistical and geometrical properties, this section introduces the multi-manifold multi-head attention that can be used to replace the standard multi-head attention in any Vision Transformer, as shown in Fig. \ref{fig:mmha}. The proposed attention employs three different manifolds, namely Euclidean, Symmetric Positive Definite (SPD) and Grassmann, to produce richer feature representations. By transforming the input image patches to points in the different manifolds and computing distances between them in these manifolds, this work aims to compute an attention matrix with high discriminative power to better model the underlying structure of the input space. Next, each manifold is described in detail, along with how the attention matrices computed in each manifold are fused to achieve more powerful feature representations and thus more accurate classification results in computer vision tasks.
\subsubsection{Euclidean Manifold}
Currently, a typical approach for Vision Transformers in the literature is to consider the query and key vectors passed as input to the multi-head attention as points in a high-dimensional feature space whose geometry is Euclidean, so that their distance can be computed accordingly. Given query $\textbf{Q} \in \mathbb{R}^{L \times d}$ and key $\textbf{K} \in \mathbb{R}^{L \times d}$ vectors, their distance is given by their scaled dot-product:
\begin{equation}
\textbf{D}_{E}(\textbf{Q},\textbf{K}) = \frac{\textbf{Q}\textbf{K}^T}{\sqrt{d}}
\end{equation}
The distance $\textbf{D}_{E} \in \mathbb{R}^{h\times L \times L}$, with $h$ representing the number of heads in the attention layer, expresses the similarity between query and key vectors, with higher values denoting vectors that are close to each other in the Euclidean manifold.
\subsubsection{SPD Manifold}
The SPD manifold is a specific type of Riemannian manifold composed of points expressed as square matrices $\textbf{M}$ of size $d\times d$, and it is denoted as:
\begin{equation}
\mathcal{S}_{++}^d = \{ \textbf{M} \in \mathbb{R}^{d\times d}: \textbf{M} = \textbf{M}^T,~ \textbf{u}^T\textbf{M}\textbf{u} > 0 ~ \forall ~\textbf{u} \in \mathbb{R}^{d} \setminus \{\textbf{0}_d\}\}
\end{equation}
For a matrix to be considered a point in an SPD manifold, it should be symmetric and have positive eigenvalues. Covariance matrices possess such properties and can thus be considered points in an SPD manifold. Covariance matrices have been widely employed in the literature to model appearance and texture features in computer vision tasks \cite{yu2019contour, bhattacharya2016covariance}. As a result, the inclusion of covariance matrices in the computation of the multi-head attention is considered beneficial to the performance of a Vision Transformer, as it incorporates additional information about the input and enhances the discriminative power of the output feature representation. There are several metrics that can be used to measure the distance between points in an SPD manifold \cite{vemulapalli2015riemannian}; however, this work employs the Frobenius distance, as it is not restricted by the values of the elements in the covariance matrices, unlike log-based distances.
Given query $\textbf{Q} \in \mathbb{R}^{L \times d}$ and key $\textbf{K} \in \mathbb{R}^{L \times d}$ vectors, the covariance matrices of these vectors are initially computed as:
\begin{gather}
\textbf{C}_Q = cov(\textbf{Q}) = \mathbb{E}[(\textbf{Q} - \mathbb{E}[\textbf{Q}])(\textbf{Q} - \mathbb{E}[\textbf{Q}])^T]\\
\textbf{C}_K = cov(\textbf{K}) = \mathbb{E}[(\textbf{K} - \mathbb{E}[\textbf{K}])(\textbf{K} - \mathbb{E}[\textbf{K}])^T]
\end{gather}
Due to their properties, the covariance matrices $\textbf{C}_Q$, $\textbf{C}_K \in \mathbb{R}^{L\times L}$ lie as points in the SPD manifold. The scaled Frobenius distance between these matrices is then calculated as:
\begin{equation}
\label{eq:spd_dist}
\textbf{D}_{SPD}(\textbf{C}_Q,\textbf{C}_K) = \frac{{|| \textbf{C}_Q -\textbf{C}_K ||}_F}{\sqrt{d}}
\end{equation}
where ${|| \cdot ||}_F$ denotes the Frobenius norm. The distance $\textbf{D}_{SPD} \in \mathbb{R}^{h\times L \times L}$ in Eq. \ref{eq:spd_dist} expresses the dissimilarity between query and key vectors, with higher values denoting vectors that are farther away from each other in the SPD manifold.
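For a single head, the covariance computation and the scaled Frobenius distance above can be sketched in NumPy as follows; the function names are ours, and the rows of each $(L, d)$ matrix are treated as the variables so that the covariance is $L\times L$, matching the text:

```python
import numpy as np

def covariance(M):
    """Row-wise covariance of an (L, d) matrix: each of the L rows is a
    variable observed over d samples, giving an (L, L) symmetric matrix."""
    Mc = M - M.mean(axis=1, keepdims=True)
    return (Mc @ Mc.T) / M.shape[1]

def spd_distance(Q, K):
    """Scaled Frobenius distance between the covariance matrices of Q and K,
    i.e. ||C_Q - C_K||_F / sqrt(d)."""
    d = Q.shape[1]
    C_Q, C_K = covariance(Q), covariance(K)
    return np.linalg.norm(C_Q - C_K, ord='fro') / np.sqrt(d)
```

Note that a sample covariance is, strictly, positive semi-definite; in practice a small diagonal ridge can be added if strict positive definiteness is required.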
\subsubsection{Grassmann Manifold}
The Grassmann manifold is another well-known special type of Riemannian manifold that embeds all $p$-dimensional linear subspaces of a $d$-dimensional Euclidean space. The Grassmann manifold, denoted as $ \mathcal{G}(p,d)$, can be represented by the set of orthogonal matrices from the orthogonal group $\mathcal{O}(p)$ as follows:
\begin{equation}
\mathcal{G}(p,d) = \{ \textbf{X}\in \mathbb{R}^{d\times p} : \textbf{X}^T\textbf{X}=\textbf{I}_p \}/ \mathcal{O}(p),
\end{equation}
where \textbf{X} represents any point on the Grassmann manifold. Grassmann manifolds have been widely employed in the literature to model sequential and time-varying signals, as any linear dynamic system can be easily transformed to a point in the Grassmann manifold \cite{konstantinidis2018skeleton,dimitropoulos2017grading}. As a result, the transformation of the input space to points in the Grassmann manifold can provide a Vision Transformer with additional information regarding texture and color variations in an image patch, leading to enriched feature representations with high discriminative power.
Several metrics have been defined to measure the distance between Grassmannian points. The most common technique is to embed the manifold into the space of symmetric matrices with the mapping $\Pi : \mathcal{G}(p,d) \rightarrow \mathrm{Sym}(d), ~ \Pi(\textbf{X}) = \textbf{X}\textbf{X}^T$, which is a one-to-one, continuous and differentiable mapping \cite{harandi2013dictionary}. Moreover, to avoid the computation of the coordinates of all projected data and their pairwise distances, as well as to improve efficiency, the kernel form of the projection distance \cite{zhang2018grassmannian} is adopted.
Given query $\textbf{Q} \in \mathbb{R}^{L \times d}$ and key $\textbf{K} \in \mathbb{R}^{L \times d}$ vectors, they first need to be transformed into orthogonal matrices so that they can represent points in the Grassmann manifold. To this end, the reduced QR decomposition is applied using the Gram-Schmidt process \cite{bjorck1967solving} to decompose a real matrix into a matrix $\textbf{G}$ with orthonormal columns ($\textbf{G}^T\textbf{G}=\textbf{I}$) and an upper-triangular matrix $\textbf{R}$. Through the QR decomposition, the matrices $\textbf{G}_Q \in \mathbb{R}^{L\times d}$ and $\textbf{G}_K \in \mathbb{R}^{L\times d}$ are derived, representing the corresponding query and key vectors as points in the Grassmann manifold:
\begin{gather}
\textbf{Q}=\textbf{G}_Q\textbf{R}_Q\\
\textbf{K}=\textbf{G}_K\textbf{R}_K
\end{gather}
Then, the projection distance \cite{harandi2013dictionary} is employed to calculate the scaled distance between the two points in the Grassmann manifold:
\begin{equation}
\label{eq:grass_dist}
\textbf{D}_{G}(\textbf{G}_Q,\textbf{G}_K) = \frac{{||\textbf{G}_Q\textbf{G}_Q^T - \textbf{G}_K\textbf{G}_K^T||}_F}{\sqrt{d}}
\end{equation}
where ${|| \cdot ||}_F$ denotes the Frobenius norm. The distance $\textbf{D}_{G} \in \mathbb{R}^{h\times L \times L}$ in Eq. \ref{eq:grass_dist} expresses the dissimilarity between query and key vectors, with higher values denoting vectors that are farther away from each other in the Grassmann manifold.
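For a single head, the QR-based mapping and the scaled projection distance can be sketched in NumPy as follows; the helper name is ours, and $L \geq d$ is assumed so that the reduced QR yields $L\times d$ factors with orthonormal columns:

```python
import numpy as np

def grassmann_distance(Q, K):
    """Scaled projection distance on the Grassmann manifold.

    Each (L, d) matrix is mapped to a point on the manifold via reduced QR,
    and the projection matrices G G^T are compared under the Frobenius norm:
    ||G_Q G_Q^T - G_K G_K^T||_F / sqrt(d).
    """
    d = Q.shape[1]
    G_Q, _ = np.linalg.qr(Q)   # reduced QR: G_Q has orthonormal columns
    G_K, _ = np.linalg.qr(K)
    return np.linalg.norm(G_Q @ G_Q.T - G_K @ G_K.T, ord='fro') / np.sqrt(d)
```

Because $\textbf{G}\textbf{G}^T$ is the orthogonal projector onto the column space, the distance depends only on the subspaces spanned by $\textbf{Q}$ and $\textbf{K}$, not on the particular bases: right-multiplying $\textbf{Q}$ by any invertible matrix leaves the distance unchanged.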
\begin{table*}[!t]
\centering
\caption{Ablation study on the ViT-Lite-6/4 model with the early fusion of manifold representations}
\label{tab:ablation}
\begin{tabular}{c@{\hskip 0.5in}c@{\hskip 0.5in}c@{\hskip 0.5in}c@{\hskip 0.5in}c@{\hskip 0.5in}c@{\hskip 0.5in}c}
\hline
\multicolumn{3}{c}{\textbf{Manifolds}} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{\textbf{Accuracy}}\\
\textbf{Euclidean} & \textbf{SPD} & \textbf{Grassmann} & \textbf{Params (M)} & \textbf{FLOPS (G)} & \textbf{CIFAR-10} & \textbf{CIFAR-100} \\
\hline
X & & & 3.20 & 0.22 & 90.94 & 69.2 \\
& X & & 3.20 & 0.22 & 90.49 & 70.38 \\
& & X & 3.20 & 0.23 & 88.96 & 67.48 \\
X & X & & 3.20 & 0.23 & \textbf{92.86} & 71.93 \\
X & & X & 3.20 & 0.24 & 91.72 & 72.48 \\
& X & X & 3.20 & 0.24 & 90.99 & 71.35 \\
X & X & X & 3.20 & 0.25 & 92.41 &\textbf{72.5} \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[!t]
\centering
\caption{Ablation study on the ViT-Lite-6/4 model with the late fusion of manifold representations}
\label{tab:ablation_late_fusion}
\begin{tabular}{c@{\hskip 0.5in}c@{\hskip 0.5in}c@{\hskip 0.5in}c@{\hskip 0.5in}c@{\hskip 0.5in}c@{\hskip 0.5in}c}
\hline
\multicolumn{3}{c}{\textbf{Manifolds}} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{\textbf{Accuracy}}\\
\textbf{Euclidean} & \textbf{SPD} & \textbf{Grassmann} & \textbf{Params (M)} & \textbf{FLOPS (G)} & \textbf{CIFAR-10} & \textbf{CIFAR-100} \\
\hline
X & & X & 6.38 & 0.44 & \textbf{91.34} & 71.47 \\
X & X & & 6.38 & 0.44 & \textbf{91.34} & 71.26 \\
& X & X & 6.38 & 0.45 & 88.81 & 67.93\\
X & X & X & 9.56 & 0.67 & 91.2 & \textbf{71.77} \\
\hline
\end{tabular}
\end{table*}
\subsubsection{Fusion of Manifolds}
\begin{figure}[!t]
\centering
\subfloat[]{\includegraphics[width=0.34\textwidth]{images/early_fusion.png}}
\\
\subfloat[]{\includegraphics[width=0.4\textwidth]{images/late_fusion.png}}
\caption{Network architecture with manifolds in (a) early and (b) late fusion. The operator $\otimes$ is used to denote concatenation.}
\label{fig:fusion}
\end{figure}
After the computation of the individual distance matrices in each manifold, a fusion is needed to derive the final distance matrix. Instead of experimenting with fixed weights for each distance matrix, the proposed layer employs a 2D convolution operation to let the Transformer network learn on its own the optimal way to merge the distance matrices. More specifically, given the distance matrices $\textbf{D}_{E}$, $\textbf{D}_{SPD}$ and $\textbf{D}_{G} \in \mathbb{R}^{h\times L \times L}$ in the Euclidean, SPD and Grassmann manifolds, respectively, the proposed multi-manifold multi-head attention initially concatenates them to derive a new distance matrix $\textbf{D}_{f} \in \mathbb{R}^{3h \times L \times L}$. Then, the new distance matrix is passed through a 2D convolutional layer that learns an effective mapping of the distances in the different manifolds and generates a fused distance matrix with size $h \times L \times L$. The final refined attention map is derived after a softmax operation is applied. The new feature representation $\textbf{V}' \in \mathbb{R}^{L\times d}$ that is computed after the multiplication of the value vector $\textbf{V}$ with the attention map is equal to:
\begin{equation}
\textbf{V}' = \mathrm{softmax}(\mathrm{Conv2D}(\textbf{D}_{f}))\textbf{V}
\end{equation}
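Since a 2D convolution with a $1\times1$ kernel acts as a learned linear mixing of the channel (head) axis, the early fusion above can be sketched in NumPy as follows; the function names and the weight shape $(h, 3h)$ are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_distances(D_E, D_SPD, D_G, W_fuse):
    """Early fusion of three (h, L, L) distance tensors into an attention map.

    Concatenation along the head axis gives (3h, L, L); a 1x1 convolution
    over that axis is equivalent to the einsum with W_fuse of shape (h, 3h).
    """
    D_f = np.concatenate([D_E, D_SPD, D_G], axis=0)   # (3h, L, L)
    fused = np.einsum('oc,cij->oij', W_fuse, D_f)     # (h, L, L)
    return softmax(fused, axis=-1)                    # rows sum to 1

def early_fusion_output(D_E, D_SPD, D_G, W_fuse, V):
    """V' = softmax(Conv2D(D_f)) V, applied per head."""
    A = fuse_distances(D_E, D_SPD, D_G, W_fuse)       # (h, L, L)
    return np.einsum('hij,jd->hid', A, V)             # (h, L, d)
```

Expressing the $1\times1$ convolution as an einsum makes explicit that the network learns only the $h \times 3h$ mixing weights, which is why the parameter overhead reported in the ablation is negligible.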
Besides the proposed early fusion of manifold representations, a second configuration with late fusion of manifolds has also been implemented in this work, as shown in Fig. \ref{fig:fusion}. More specifically, in this configuration three Vision Transformers are used in parallel, each receiving as input the representation of the input space in a specific manifold and extracting manifold-specific feature representations. The different feature representations are then concatenated and fed to the classifier for image recognition. Given the manifold-specific feature representations $\textbf{V}'_E=\mathrm{softmax}(\textbf{D}_E)\textbf{V}$, $\textbf{V}'_{SPD}=\mathrm{softmax}(\textbf{D}_{SPD})\textbf{V}$ and $\textbf{V}'_G=\mathrm{softmax}(\textbf{D}_G)\textbf{V}$ for the Euclidean, SPD and Grassmann manifolds, respectively, the final feature representation $\textbf{V}' \in \mathbb{R}^{L\times 3d}$ is equal to:
\begin{equation}
\textbf{V}' = \mathrm{concatenate}(\textbf{V}'_E, \textbf{V}'_{SPD}, \textbf{V}'_G)
\end{equation}
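The late-fusion configuration can be sketched in NumPy as follows; the function name is ours, and each distance matrix is treated as a single-head $L\times L$ slice for clarity:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def late_fusion(D_E, D_SPD, D_G, V):
    """Late fusion: each (L, L) manifold distance matrix yields its own
    (L, d) representation, and the three are concatenated into (L, 3d)."""
    outs = [softmax(D, axis=-1) @ V for D in (D_E, D_SPD, D_G)]
    return np.concatenate(outs, axis=-1)
```

Because each branch runs a full Transformer in the complete architecture, this configuration roughly triples the parameter count, consistent with the ablation figures reported below.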
\begin{figure}[!t]
\centering
\subfloat[]{\includegraphics[width=0.09\textwidth]{images/c100_vis_a.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.09\textwidth]{images/c100_vis_b.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.09\textwidth]{images/c100_vis_c.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.09\textwidth]{images/c100_vis_d.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.09\textwidth]{images/c100_vis_e.png}}
\caption{Visualization of results in CIFAR-100 using the ViT-Lite-6/4 model: (a) Input images and attention maps (heatmaps and overlaid on images) using (b) the Euclidean manifold, (c) the Grassmann manifold, (d) the SPD manifold and (e) all three manifolds.}
\label{fig:c100_vis}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[]{\includegraphics[width=0.24\textwidth]{images/c100_vit_e.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.24\textwidth]{images/c100_vit_g.png}}
\\
\subfloat[]{\includegraphics[width=0.24\textwidth]{images/c100_vit_s.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.24\textwidth]{images/c100_vit_esg.png}}
\caption{Visualization using t-SNE of 20 random classes of CIFAR-100 using the ViT-Lite-6/4 model when the (a) Euclidean, (b) Grassmann, (c) SPD and (d) all three manifolds are employed.}
\label{fig:c100_tsne}
\end{figure}
\section{Experimental Results}
\label{sec:results}
This section presents experimental results in different datasets to demonstrate the advantages of employing the proposed multi-manifold multi-head attention as a substitute of the standard multi-head attention in Vision Transformers.
\subsection{Implementation details}
The proposed multi-manifold multi-head attention was introduced in several state-of-the-art Transformer-based networks as a replacement of the standard multi-head attention in order to evaluate their performance. More specifically, the ViT-Lite-6/4, CVT-6/4 and CCT-7/3x2 models proposed in \cite{hassani2021escaping}, as well as the Swin-T model proposed in \cite{Liu_2021_ICCV}, were employed, giving rise to the MMA-ViT-Lite-6/4, MMA-CVT-6/4, MMA-CCT-7/3x2 and MMA-Swin-T models, respectively, when the proposed multi-manifold multi-head attention was introduced. For the Swin-T model in particular, a patch size of $2$, a window size of $4$, an embedding dimension of $96$, an MLP ratio of $2$, depths of $(2,6,4)$ and numbers of heads equal to $(3,6,12)$ for the different layers were selected.
Experiments were conducted on three well-known image classification datasets, namely CIFAR-10, CIFAR-100 \cite{krizhevsky2009learning} and T-ImageNet \cite{le2015tiny} and all models were trained from scratch. The CIFAR-10 dataset consists of $50$K training and $10$K test images equally distributed among $10$ object classes, CIFAR-100 consists of $50$K training and $10$K test images equally distributed among $100$ classes, while T-ImageNet consists of $100$K training and $10$K validation images equally distributed among $200$ classes. In T-ImageNet, the validation images were used to test the performance of the deep models since the provided test images are not annotated.
All experiments were run with a fixed batch size of $128$ and for $200$ epochs for CIFAR-10 and CIFAR-100 and $300$ epochs for T-ImageNet. In addition, for CIFAR-10 and CIFAR-100, the input images were of size $32\times 32$ pixels, while for T-ImageNet, the input images were of size $64\times 64$ pixels. The AdamW optimizer \cite{loshchilov2017decoupled} was used with a weight decay of $0.01$, a base learning rate of $5\times 10^{-4}$ and a cosine learning rate scheduler that adjusts the learning rate during training \cite{he2019bag}. A warmup of 10 epochs was applied by gradually increasing the learning rate from 0 to the initial value of the cosine learning rate scheduler \cite{goyal2017accurate}. Label smoothing \cite{szegedy2016rethinking} with a probability $\epsilon=0.1$ was applied during training, where the true label is considered to have a probability of $1-\epsilon$ and the probability $\epsilon$ is shared among the other classes. Moreover, a kernel size of $1\times1$ was utilized in the 2D convolutional layer that fuses the multi-manifold distance matrices.
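The label-smoothing targets described above can be constructed as follows; this is a minimal NumPy sketch with an illustrative helper name, assuming the smoothed mass $\epsilon$ is shared uniformly among the $K-1$ non-true classes:

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Build smoothed one-hot targets: the true class gets probability
    1 - eps and the remaining eps is split evenly over the other classes."""
    targets = np.full((len(labels), num_classes), eps / (num_classes - 1))
    targets[np.arange(len(labels)), labels] = 1.0 - eps
    return targets
```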
In addition, extensive data augmentation techniques were applied to enhance the performance of the transformers. Auto-Augment \cite{cubuk2019autoaugment} was adopted in order to transform the training data with adaptive learnable transformations, such as shift, rotation, and color jittering. Moreover, the Mixup strategy \cite{zhang2017mixup} was also employed, which generated weighted combinations of random sample pairs from the training images. The code for the experimental evaluation of the tested Transformers was implemented using the PyTorch framework.
\subsection{Ablation study}
\label{sec:ablation}
\begin{table*}[t]
\centering
\caption{Results on CIFAR-10, CIFAR-100 and T-ImageNet}
\label{tab:image_cifar}
\begin{tabular}{l@{\hskip 0.6in}c@{\hskip 0.6in}c@{\hskip 0.6in}c@{\hskip 0.6in}c@{\hskip 0.6in}c}
\hline
\textbf{Method} & \textbf{Params (M)} & \textbf{FLOPS (G)} & \textbf{CIFAR-10} & \textbf{CIFAR-100} & \textbf{T-ImageNet} \\
\hline
ResNet-100 \cite{he2016identity} &1.70 & 0.25 & 93.39 & 72.78 & - \\
ResNet-164 \cite{he2016identity} & 1.73 & 0.26 & 94.54 & 75.67 & -\\
EfficientNet-B0 \cite{tan2019efficientnet} & 3.70 & 0.12 & 94.66 & 76.04 & - \\ \hline
Linformer \cite{wang2020linformer} & 3.96 & 0.28 & 92.45 & 70.87 & - \\
Performer \cite{DBLP:conf/iclr/ChoromanskiLDSG21}& 3.85 & 0.28 & 91.58 & 73.11 & -\\
Reformer \cite{Kitaev2020Reformer} & 3.39 & 0.25 & 90.58 & 73.02 & - \\
Couplformer-7 \cite{lan2021couplformer}& 3.85 & 0.28 & 93.44 &74.53 & -\\ \hline
ViT-Lite-6/4 \cite{hassani2021escaping} & 3.20 & 0.22 & 90.94 & 69.2 & 49.18\\
\textbf{MMA-ViT-Lite-6/4} & 3.20 & 0.25 & 92.41 & 72.5 & 53.16 \\ \hline
CVT-6/4 \cite{hassani2021escaping} & 3.19 & 0.22 & 92.58 & 72.25 & 51.45\\
\textbf{MMA-CVT-6/4} & 3.19 & 0.24 & 93.53 & 75.92 & 55.87\\
\hline
Swin-T \cite{Liu_2021_ICCV} & 7.05 & 0.24 & 91.88 & 72.34 & 60.64 \\
\textbf{MMA-Swin-T} & 7.05 & 0.36 & 92.94 & 73.7 & 61.57 \\
\hline
CCT-7/3×2 \cite{hassani2021escaping} & 3.86 & 0.29 & 93.65 & 74.77 & 61.07 \\
\textbf{MMA-CCT-7/3×2} & 3.86 & 0.32 & \textbf{94.74} & \textbf{77.5} & \textbf{64.41}\\
\hline
\end{tabular}
\end{table*}
Initially, experiments were conducted to determine the contribution of each manifold to the classification results in the CIFAR-10 and CIFAR-100 datasets. To this end, the ViT-Lite-6/4 model, proposed in \cite{hassani2021escaping}, was employed as the backbone network and its multi-head attention was substituted by the proposed multi-manifold multi-head attention, utilizing all possible combinations of manifold distances. The results of the ablation study regarding the early fusion of manifolds are presented in Table \ref{tab:ablation}.
From the results of Table \ref{tab:ablation}, it can be seen that the Euclidean manifold alone usually contains more important information for accurate image classification than either of the other two manifolds alone. More specifically, in CIFAR-10 the use of the SPD manifold leads to a drop of $0.45\%$ in accuracy, while the use of the Grassmann manifold leads to a drop of almost $2\%$ in accuracy. Similar observations can be made for the CIFAR-100 dataset, although the use of the SPD manifold in this case leads to an increase of about $1.2\%$ in accuracy compared to the Euclidean manifold. On the other hand, any combination of two manifolds leads to a significant increase in the accuracy of the ViT-Lite-6/4 model in both CIFAR-10 and CIFAR-100 with respect to employing only the Euclidean manifold. When all three manifolds are fused, accuracies of $92.41\%$ and $72.5\%$ are achieved in CIFAR-10 and CIFAR-100, respectively; the result on the challenging CIFAR-100 dataset is the best overall, while the result on CIFAR-10 is slightly inferior to the case where only the Euclidean and SPD manifolds are employed. Simultaneously, it can be observed that the increase in accuracy is accompanied by a small increase in floating point operations (FLOPs) and almost no increase in the number of network parameters.
\begin{figure}[!t]
\centering
\subfloat[]{\includegraphics[width=0.24\textwidth]{images/model_perform_c100.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.24\textwidth]{images/model_perform_tim.png}}
\\
\subfloat[]{\includegraphics[width=0.24\textwidth]{images/general_perform_c100.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.24\textwidth]{images/general_perform_tim.png}}
\caption{Effect of the proposed multi-manifold multi-head attention on the tested Vision Transformers trained on CIFAR-100 (left) and T-ImageNet (right) in terms of (a),(b) model performance and (c),(d) generalization ability.}
\label{fig:performance}
\end{figure}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=0.13\textwidth]{images/tim_vis_a.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.13\textwidth]{images/tim_vis_cvt_e.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.13\textwidth]{images/tim_vis_cvt_all.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.13\textwidth]{images/tim_vis_swin_e.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.13\textwidth]{images/tim_vis_swin_all.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.13\textwidth]{images/tim_vis_cct_e.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.13\textwidth]{images/tim_vis_cct_all.png}}
\caption{Visualization of attention maps in T-ImageNet: (a) Input images, attention maps (heatmaps and overlaid on images) using the (b) CVT-6/4, (c) MMA-CVT-6/4, (d) Swin-T, (e) MMA-Swin-T, (f) CCT-7-3x2 and (g) MMA-CCT-7-3x2 models.}
\label{fig:tim_vis}
\end{figure*}
Similar observations regarding the early fusion of manifolds can be made when the late fusion of the manifold-specific feature representations is performed, as shown in Table \ref{tab:ablation_late_fusion}. In this case, the fusion of all three manifolds leads to a performance of $71.77\%$ in CIFAR-100 and $91.2\%$ in CIFAR-10, with the latter being slightly inferior to the Euclidean--Grassmann and Euclidean--SPD manifold combinations. A comparison between the early fusion of manifolds presented in Table \ref{tab:ablation} and the late fusion of manifolds shown in Table \ref{tab:ablation_late_fusion} shows that the early fusion leads to superior performance in both datasets, while utilizing fewer network parameters and FLOPs. More specifically, the early fusion of manifolds outperforms the late fusion of manifolds by $0.7\%$ in accuracy, while employing more than $62\%$ fewer parameters and FLOPs, showing the importance of fusing the manifold representations inside the Transformer encoder for the computation of a refined attention map.
Additionally, from the results above, it can be deduced that the SPD and Grassmann manifolds contain supplementary information to the Euclidean manifold by modelling the appearance, color and texture variations in images. These additional features guide the Transformer network towards a better modelling of the underlying input space, enabling it to achieve improved classification results with a small increase in the number of FLOPs. For the rest of the experiments, it is assumed that the proposed multi-manifold multi-head attention employs the early fusion of all three manifolds since this combination leads to the optimal classification results.
To further clarify the benefits of employing additional data manifolds, a visualization of attention maps in a few images from the CIFAR-100 dataset is illustrated in Fig. \ref{fig:c100_vis}. It can be observed that the fusion of data representations in different manifolds guides the network to pay attention to additional information that can be beneficial for the classification of an image. For instance, although the Euclidean manifold allows the network to pay attention to the ears of a kangaroo or the hump of a camel, the fusion of the Euclidean with the SPD and Grassmann manifolds enables the network to pay attention to the entire body of the kangaroo and to both the hump and the legs of the camel, thus increasing the confidence of the network in its classification results. Similar observations can be made for the rest of the visualization results.
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=0.32\textwidth]{images/tim_cvt_e.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.32\textwidth]{images/tim_swin_e.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.32\textwidth]{images/tim_cct_e.png}}
\\
\subfloat[]{\includegraphics[width=0.32\textwidth]{images/tim_cvt_esg.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.32\textwidth]{images/tim_swin_esg.png}}
\hspace{1mm}%
\subfloat[]{\includegraphics[width=0.32\textwidth]{images/tim_cct_esg.png}}
\caption{Visualization using t-SNE of 20 random classes of T-ImageNet using (a) CVT-6/4, (b) Swin-T, (c) CCT-7-3x2, (d) MMA-CVT-6/4, (e) MMA-Swin-T and (f) MMA-CCT-7-3x2.}
\label{fig:tim_tsne}
\end{figure*}
Finally, Fig. \ref{fig:c100_tsne} depicts the distribution of $20$ random object classes from the CIFAR-100 dataset, as formulated by the output of the ViT-Lite-6/4 model with different manifold representations, using the t-distributed stochastic neighbor embedding (t-SNE) algorithm \cite{van2008visualizing}. t-SNE is a nonlinear dimensionality reduction method, which is well suited to visualizing high-dimensional data in $2$ or $3$ dimensions. To achieve its goal, t-SNE constructs a distribution over pairs of samples in the high-dimensional space and a similar distribution in the low-dimensional embedding, and minimizes the Kullback-Leibler (KL) divergence between the two distributions with respect to the locations of the embedding points. To this end, the feature vectors from the output of the Transformer encoder prior to the fully-connected layer (Fig. \ref{fig:vit}) are employed as the high-dimensional input to the t-SNE algorithm, which then computes a two-dimensional output. From the visualization of Fig. \ref{fig:c100_tsne}, it can be observed that the fusion of manifolds leads to data points of the same class and color being closer to each other, while outliers are significantly reduced. These results verify the capability of the proposed MMA-ViT-Lite-6/4 model to better describe the underlying input space and achieve superior classification results with respect to the original ViT-Lite-6/4 model.
\subsection{Comparison with state-of-the-art}
To further demonstrate the benefits of the proposed multi-manifold attention, different Transformer network architectures were chosen and their attention was substituted with the proposed multi-manifold attention. Table \ref{tab:image_cifar} presents a comparison of these Transformers with state-of-the-art CNN- and Transformer-based models in well-known image classification datasets (i.e., CIFAR-10, CIFAR-100 and T-ImageNet).
The results demonstrate the performance improvement in all tested datasets when the proposed multi-manifold attention was employed. More specifically, the performance of the MMA-ViT-Lite-6/4 model has been improved by $1.47\%$, $3.2\%$ and $3.98\%$ in CIFAR-10, CIFAR-100 and T-ImageNet, respectively, while the MMA-CVT-6/4 model achieved an improvement of $0.95\%$, $3.67\%$ and $4.42\%$ in CIFAR-10, CIFAR-100 and T-ImageNet, respectively, with just a small increase in GFLOPs. Similarly, the performance of the MMA-Swin-T model has been improved by $1.06\%$, $1.36\%$ and $\%$, in CIFAR-10, CIFAR-100 and T-ImageNet, respectively. Finally, the MMA-CCT-7/3x2 model outperformed CCT-7/3x2 in all three datasets, managing to achieve state-of-the-art performance with respect to the other CNN- and Transformer-based models.
Additional conclusions can be drawn by observing Fig. \ref{fig:performance} that presents the benefits of the proposed multi-manifold attention in terms of model performance and generalization ability. From these results, it can be deduced that all tested Transformers, irrespective of their network architecture, are significantly improved when the proposed multi-manifold attention is employed through a decrease in the training and validation losses and an increase in the validation accuracy with only a small increase in the number of operations (i.e., GFLOPs).
Finally, a visualization of attention maps for the CVT-6/4, Swin-T and CCT-7/3x2 Transformers, as well as a visualization of the distribution of $20$ random classes of the T-ImageNet dataset using t-SNE are presented in Fig. \ref{fig:tim_vis} and Fig. \ref{fig:tim_tsne}, respectively. From the attention maps in Fig. \ref{fig:tim_vis}, it can be deduced that the use of the proposed multi-manifold attention guides all tested Transformers to pay more attention to significant parts of the object of interest, such as the legs of a spider, the arch rib of a bridge and the fur of a chimpanzee, thus leading to more accurate classification results. On the other hand, from the distribution of a few of the T-ImageNet classes in Fig. \ref{fig:tim_tsne}, it can be observed that the proposed multi-manifold attention leads to more compact classes (i.e., points of the same class closer to each other) and less stray points for all tested Transformers.
These results validate the notion that employing data representations in different manifolds can guide any network, irrespective of its architecture, to better model the underlying input space and achieve optimal classification results. This is due to the fact that the manifolds are governed by different statistical and geometrical rules, allowing for lower intra-class and higher inter-class variance of the representations in one manifold space with respect to another. Along with the fact that different data representations can model various aspects of an image, such as appearance, color and texture, a Vision Transformer can greatly benefit from the proposed multi-manifold attention. As a result, the proposed multi-manifold attention can effectively substitute the standard attention in Vision Transformers, thus significantly improving their image classification performance.
\section{Conclusion}
\label{sec:conclusion}
In this work, a novel multi-manifold multi-head attention is proposed that can substitute the standard attention in any Transformer-based network, irrespective of its architecture. The proposed attention transforms the image representation into points that lie on three distinct manifolds, namely Euclidean, SPD and Grassmann, allowing rich information on an image's content to be captured and processed. Through the calculation and fusion of distances in the three manifolds, a refined attention with highly descriptive and discriminative power is generated, enabling an accurate modelling of the underlying input space. The experimental results with different Transformer-based network architectures on well-known image classification datasets verify the effectiveness of the proposed multi-manifold attention in achieving accurate image classification results.
\section*{Acknowledgment}
This work has been supported by the General Secretariat for Research and Technology under Grant agreement no. $T6\Upsilon B\Pi00238$ ``Q-CONPASS: Dynamic Quality CONtrol on Production lines using intelligent AutonomouS vehicleS''.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction and main results}
Hawkes processes have been introduced by Hawkes \cite{hawkes}, and are now widely used in many applications, for example: modelling of earthquake occurrences \cite{hawkesadamopoulos,ogata88},
finance~\cite{bacrydelattrehoffmannmuzySPA, bacrydelattrehoffmannmuzy,bacrymuzy2016},
genetics \cite{reynaudbouretschbath}, neurosciences \cite{chevalliercaceresdoumicreynaudbouret,ditlevsenlocherbach,reynaudbouretrivoirardtuleaumalot}.
Hawkes processes are random point processes on the line (see~\cite{daleyverejones2003,daleyverejones2008,Jacod-Shyryaev2003} for an introduction)
where each atom is associated with a (possibly signed) reproduction measure generating further atoms or adding repulsion.
When the reproduction measure is nonnegative, Hawkes and Oakes \cite{hawkesoakes} have provided a cluster representation of the Hawkes processes
based on immigration of ancestors, each of which is at the head of the branching point process of its offspring.
Exponential concentration inequalities for ergodic theorems and tools for statistical applications have been developed,
\emph{e.g.}, by Reynaud-Bouret and Roy~\cite{reynaudbouretroy}
in this case by using a coupling \emph{à la} Berbee \cite{berbee}.
For many applications however, it is important to allow the reproduction measure to be a signed measure.
The positive part of the measure can be interpreted as self-excitation, and its negative part as self-inhibition.
For instance, in neurosciences this can be used to model the existence of a latency period
before the successive activations of a neuron.
A large part of the literature on Hawkes processes for neurosciences uses large systems approximations by mean-field limits (\emph{e.g.}, \cite{chevallier,delattrefournierhoffmann,delattrefournier,ditlevsenlocherbach}) or stabilization properties
(\emph{e.g.}, \cite{duartelocherbachost} using Erlang kernels). Here, we consider a single Hawkes process for which the reproduction measure is a signed measure and aim to extend the ergodic theorem and deviation inequalities obtained in \cite{reynaudbouretroy} for a nonnegative reproduction measure.
A main issue here is that when inhibition is present then the cluster representation of \cite{hawkesoakes} is no longer valid.
An important tool in our study is a coupling construction of the Hawkes process with signed reproduction measure and of a Hawkes process with a positive measure.
The former is shown to be a thinning of the latter, for which the cluster representation is valid.
We then define renewal times for these general Hawkes processes.
We introduce an auxiliary strong Markov process for this purpose.
This allows to split the sample paths into i.i.d.\ distributed excursions, and use
limit theorems for i.i.d.\ sequences.
In order to obtain concentration inequalities, a main difficulty is to obtain exponential bounds for the tail distribution of the renewal times. In the case in which the reproduction function is nonnegative, we associate to the Hawkes process
a $M/G/\infty$ queue, and control the length of its excursions using Laplace transform techniques. These results have independent interest in themselves. We then extend our techniques to Hawkes processes
with signed reproduction functions using the coupling.
\subsection{Definitions and notations}
\label{sec:defs}
\subsubsection*{Measure-theoretic and topological framework}
Throughout this paper, an appropriate filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge0},\mathbb{P})$ satisfying the usual assumptions is given. All processes will be assumed to be adapted.
Let $\mathcal{N}(\mathbb{R})$ denote the space of counting measures on the real line $\mathbb{R}=(-\infty, +\infty)$
which are boundedly finite; these are the
Borel measures with values in $\mathbb{N}_0\cup \{+\infty\}$ (where $\mathbb{N}_0=\{0,1,\ldots\}$) which are finite on any bounded set. The space $\mathcal{N}(\mathbb{R})$ is endowed with the weak topology
$\sigma(\mathcal{N}(\mathbb{R}),\mathcal{C}_{bs}(\mathbb{R}))$ and the corresponding Borel $\sigma$-field, where $\mathcal{C}_{bs}$ denotes the space of continuous functions with bounded support.
If $N$ is in $\mathcal{N}(\mathbb{R})$ and $I\subset \mathbb{R}$ is an interval then $N|_I$ denotes the restriction of $N$ to $I$.
Then $N|_I$ belongs to the space $\mathcal{N}(I)$ of boundedly finite counting measures on $I$.
By abuse of notation, a point process on $I$ is often identified with its extension which is null
outside of $I$,
and in particular $N|_I \in \mathcal{N}(I)$ with ${\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_I N \in \mathcal{N}(\mathbb{R})$.
Accordingly, $\mathcal{N}(I)$ is endowed with the trace topology and $\sigma$-field.
A random point process on $I\subset \mathbb{R}$ will be considered as a random variable taking values in the Polish space $\mathcal{N}(I)$.
We shall also consider random processes with sample paths in the Skorohod space
$\mathbb{D}(\mathbb{R}_+,\mathcal{N}(I))$.
All these spaces are
Polish, see~\cite[Prop.\ A2.5.III, Prop.\ A2.6.III]{daleyverejones2003}.
\subsubsection*{Hawkes processes}
In this paper we study a random point process on the real line $\mathbb{R}=(-\infty,+\infty)$
specified by a stochastic evolution on the half-line $(0,+\infty)$ and
an initial condition given by a point process on the complementary half-line $(-\infty,0]$.
This is much more general than considering a stationary version of the point process,
does not require its existence, and can be used to prove the latter.
The time origin $0$ can be interpreted as the start of some sort of action with regards to the process
(\emph{e.g.}, computation of statistical estimators).
In the following definition of a Hawkes process with a signed reproduction measure,
the initial condition $N^0$ is always assumed to be $\mathcal{F}_0$-measurable
and $N^h|_{(0,+\infty)}$ to be adapted to $(\mathcal{F}_t)_{t\ge0}$. We refer to \cite[Sect.\ 7.2]{daleyverejones2003} for the definition of the conditional intensity measure and denote $x^+=\max(x,0)$ for $x\in\mathbb{R}$.
\begin{defi}
\label{def:Hawkes}
Let $\lambda>0$, a signed measurable function $h: (0,+\infty) \to \mathbb{R}$,
and a boundedly finite point process $N^0$ on $(-\infty,0]$
with law $\mathfrak{m}$ be given.
The point process $N^h$ on $\mathbb{R}$ is a Hawkes process on $(0,+\infty)$ with initial condition $N^0$
and reproduction measure $\mu(dt) \triangleq h(t)\,dt$
if $N^h|_{(-\infty,0]}=N^0$ and the conditional intensity measure of $N^h|_{(0,+\infty)}$
w.r.t.\ $(\mathcal{F}_t)_{t\ge0}$
is absolutely continuous w.r.t.\ the Lebesgue measure and has density
\begin{equation}
\Lambda^h: t\in (0,+\infty) \mapsto
\Lambda^h(t)= \biggl(\lambda+\int_{(-\infty,t)} h(t-u)\,N^h(du)\biggr)^+\,.
\label{def:Lambda}
\end{equation}
\end{defi}
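Definition~\ref{def:Hawkes} can be turned into a simulation by thinning: candidate points are proposed at a dominating rate $M$ and accepted with probability $\Lambda^h/M$. The following Python sketch is purely illustrative and not part of the paper: the signed kernel $h(u)=\alpha e^{-\beta u}\,1_{\{u\le L\}}-\gamma\,1_{\{u\le\delta\}}$, all parameter values, and the empty initial condition are assumptions chosen for the example. The excitation part of this kernel is decreasing in $u$, so the rate computed at the current time dominates the intensity until the next accepted point.

```python
import math
import random

# Illustrative signed kernel with bounded support L (an assumption, not the
# paper's choice): h(u) = alpha*exp(-beta*u)*1{u<=L} - gamma*1{u<=delta}.
def h(u, alpha, beta, gamma, delta, L):
    if u <= 0 or u > L:
        return 0.0
    return alpha * math.exp(-beta * u) - (gamma if u <= delta else 0.0)

def intensity(t, points, lam, alpha, beta, gamma, delta, L):
    # Lambda^h(t) = ( lam + sum_{t_i < t} h(t - t_i) )^+
    s = lam + sum(h(t - ti, alpha, beta, gamma, delta, L) for ti in points)
    return max(s, 0.0)

def simulate_hawkes(T, lam=1.0, alpha=0.5, beta=1.0,
                    gamma=0.3, delta=0.2, L=5.0, seed=1):
    """Ogata-style thinning on (0, T] with empty initial condition N^0."""
    random.seed(seed)
    points, t = [], 0.0
    while True:
        # The excitation part u -> alpha*exp(-beta*u) is decreasing, so this
        # rate dominates Lambda^h on [t, next accepted point).
        M = lam + sum(alpha * math.exp(-beta * (t - ti)) for ti in points)
        t += random.expovariate(M)
        if t > T:
            return points
        if random.random() * M <= intensity(t, points, lam, alpha, beta,
                                            gamma, delta, L):
            points.append(t)
```

Note that rejected candidates leave the intensity unchanged, which is why the dominating rate remains valid between accepted points.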
Hawkes processes can be defined for reproduction measures $\mu$ which are not absolutely continuous
w.r.t.\ the Lebesgue measure, but we shall consider here this case only. This avoids in particular
the issue of multiplicities of points in $N^h$.
Since $h$ is the density of $\mu$, the support of $h$ is naturally defined as the support of the measure $\mu$:
\begin{equation*}
\textnormal{supp}(h) \triangleq \textnormal{supp}(\mu) \triangleq (0,+\infty) - \bigcup_{G \;\text{open}, \;|\mu|(G) =0}G\,,
\end{equation*}
where $|\mu|(dt)=|h(t)|\,dt$ is the total variation measure of $\mu$. We assume
w.l.o.g.\ that $h = h\,{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\textnormal{supp}(h)}$ and define
\[
L(h) \triangleq \sup(\textnormal{supp}(h)) \triangleq \sup\{t >0 , |h(t)|>0\}\in [0,+\infty]\,.
\]
The constant $\lambda$ can be considered as the intensity of a Poisson immigration phenomenon on $(0,+\infty)$.
The function $h$ corresponds to self-excitation and self-repulsion phenomena:
each point of $N^h$ increases, or respectively decreases, the conditional intensity measure wherever the appropriately translated
function $h$ is positive (self-excitation), or respectively negative (self-inhibition).
\par
In the sequel, the notation $\mathbb{P}_\mathfrak{m}$ and $\mathbb{E}_\mathfrak{m}$ is used to
specify that $N^0$ has distribution $\mathfrak{m}$. In the case where
$\mathfrak{m}=\delta_{\nu}$
for some $\nu\in \mathcal{N}((-\infty,0])$, we use the notation $\mathbb{E}_\nu$ and $\mathbb{P}_\nu$.
We often consider the case when $\nu=\emptyset$,
the null measure for which there is no point on $(-\infty,0]$.
In Definition \ref{def:Hawkes}, the density $\Lambda^h$ of the conditional intensity measure of $N^h$
depends on $N^h$ itself, hence existence and uniqueness results are needed.
In Proposition~\ref{prop:couplage}, under the further assumptions that $\|h^+\|_1 <1$ and that
\begin{equation*}
\forall t>0,\quad \int_0^t \mathbb{E}_{\mathfrak{m}} \Big(\int_{(-\infty,0]}h^+(u-s)\,N^0(ds) \Big)\ du < +\infty\,,
\end{equation*}
we prove that the Hawkes processes can be constructed as the solution of the equation
\begin{equation}
\label{eq:Ng}
\left\{
\begin{aligned}
&N^{h} = N^0+\int_{(0,+\infty)\times(0,+\infty)} \delta_u {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{\theta\leq \Lambda^{h}(u)\}}\,Q(du,d\theta)\,,
\\
&\Lambda^h(u) =\biggl(\lambda+\int_{(-\infty,u)} h(u-s)\,N^h(ds)\biggr)^+ \,,
&&u >0,
\end{aligned}
\right.
\end{equation}
where $Q$ is a $(\mathcal{F}_t)_{t\ge0}$-Poisson point process on $(0,+\infty)\times (0,+\infty)$ with
unit intensity, characterized by the fact that for every $t,h,a>0$, the random variable
$Q((t,t+h]\times (0,a])$ is $\mathcal{F}_{t+h}$-measurable, independent of $\mathcal{F}_t$, and Poisson of parameter $h a$.
Such equations have been introduced and studied by Brémaud et al. \cite{bremaudmassoulie, bremaudnappotorrisi, massoulie}.
Let us remark that for a given $N^0$, the counting process $(N^h_t)_{t\ge0}$ with sample paths in $\mathbb{D}(\mathbb{R}_+,\mathbb{N})$ defined by $N^h_t=N^h((0,t])$ satisfies a pure jump time-inhomogeneous stochastic differential equation which is equivalent to the formulation \eqref{eq:Ng}.
If $h$ is a nonnegative function satisfying $\|h\|_1<1$, then
there exists an alternate existence and uniqueness proof
based on a cluster representation involving subcritical continuous-time Galton-Watson trees, see \cite{hawkesoakes},
which we shall describe and use later.
\subsection{Main Results}
Our goal in this paper is to establish limit theorems for a Hawkes process $N^h$ with general reproduction function $h$. We aim at studying the limiting behaviour of the process on a sliding finite time window of length $A$. We therefore introduce a time-shifted version of the Hawkes process.
Using classical notations for point processes, for $t\in\mathbb{R}$ we define
\begin{equation}
\label{def:shift}
S_t : N\in \mathcal{N}(\mathbb{R})\mapsto S_tN \triangleq N(\cdot + t) \in \mathcal{N}(\mathbb{R})\,.
\end{equation}
Then $S_t N$ is the image measure of $N$ by the shift by $t$ units of time, and
if $a<b$ then
\begin{equation}
\label{def:St}
\begin{aligned}
S_t N((a,b]) &= N((t+a,t+b])\,,
\\
(S_t N)|_{(a,b]} &= S_t(N|_{(t+a,t+b]})=N|_{(t+a,t+b]}(\cdot+t)\,
\end{aligned}
\end{equation}
(with abuse of notation between $N|_{(t+a,t+b]}$ and ${\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{(t+a,t+b]}N$, etc.).
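Concretely, \eqref{def:St} says that shifted window counts are plain counts over a translated interval. A minimal Python illustration (representing $N$ by a list of its atom locations is an assumption made only for this example):

```python
def shifted_count(events, t, a, b):
    # (S_t N)((a, b]) = N((t+a, t+b]) for a list of atom locations
    return sum(1 for e in events if t + a < e <= t + b)
```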
The quantities of interest will be of the form
\begin{equation}
\label{eq:forme}
\frac1T \int_0^T f((S_tN^h)|_{(-A,0]})\,dt = \frac{1}{T}\int_0^T f\big(N^h(\cdot+t)|_{(-A,0]}\big) dt
\end{equation}
in which $T>0$ is a finite time horizon, $A>0$ a finite window length, and $f$
belongs to the set $\mathcal{B}_{lb}(\mathcal{N}((-A,0]))$ of real Borel functions on $\mathcal{N}((-A,0])$
which are locally bounded, \emph{i.e.}, uniformly bounded on
$\{\nu \in \mathcal{N}((-A,0]) : \nu((-A,0])\le n\}$ for each $n\ge1$.
Such quantities appear commonly in the field of statistical inference of random processes;
time is labelled so that observation has started by time $-A$.
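For instance, for the particular choice $f(\nu)=\nu((-A,0])$, the time average \eqref{eq:forme} can be computed exactly: by Fubini, each atom $t_i$ contributes the Lebesgue measure of $\{t\in(0,T] : t_i\le t<t_i+A\}$ to the integral. A Python sketch under this assumption (atoms given as a finite list, possibly including points of $N^0$):

```python
def sliding_window_average(events, A, T):
    """(1/T) * integral_0^T N((t-A, t]) dt for f(nu) = nu((-A, 0]).

    Each atom t_i contributes the length of [max(t_i, 0), min(t_i + A, T)].
    """
    total = 0.0
    for ti in events:
        lo, hi = max(ti, 0.0), min(ti + A, T)
        if hi > lo:
            total += hi - lo
    return total / T
```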
Using renewal theory, we are able to obtain results without any non-negativity assumption on the reproduction function $h$.
We first establish an ergodic theorem and a central limit theorem for such quantities. We then generalize the concentration inequalities which were obtained by Reynaud-Bouret and Roy \cite{reynaudbouretroy} under the assumption that $h$ is a subcritical reproduction law, \emph{i.e.}, a nonnegative function with $\|h\|_1<1$.
This leads us to make the following hypotheses on the reproduction function $h = h\,{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\textnormal{supp}(h)}$.
\begin{hyp}
\label{hyp:h}
The signed measurable function $h: (0,+\infty) \to \mathbb{R}$ is such that
\begin{equation*}
L(h) < \infty\,,
\qquad
\|h^+\|_1 \triangleq\int_{(0,+\infty)} h^+(t)\,dt <1\,.
\end{equation*}
The distribution $\mathfrak{m}$ of the initial condition $N^0$ is such that
\begin{equation}\label{eq:hyp-momentN0}
\mathbb{E}_{\mathfrak{m}} \big(N^0(-L(h),0]\big) < \infty.
\end{equation}
\end{hyp}
Under these assumptions, we may and will assume that the window $A<\infty$ is such that $A\ge L(h)$.
Then the quantities~\eqref{eq:forme} actually depend only on the restriction
$N^0 |_{(-A,0]}$ of the initial condition $N^0$ to $(-A,0]$.
Thus, in the sequel we identify $\mathfrak{m}$ with its marginal on $\mathcal{N}((-A,0])$ with abuse of notation.
Note that even though~\eqref{eq:hyp-momentN0} does not imply that $\mathbb{E}_\mathfrak{m}\big(N^0((-A,0])\big)<\infty$, our results hold under \eqref{eq:hyp-momentN0} (see Remark \ref{rque:A-L} below).
The following important results for the Hawkes process $N^h$ are obtained using its regeneration structure,
which will be investigated using a Markov process we now introduce.
In Proposition \ref{prop:XMarkov} we prove that if $A\ge L(h)$ then
the process $(X_t)_{t\ge 0}$ defined by
\[
X_t\triangleq (S_t N^h)|_{(-A,0]} \triangleq N^h|_{(t-A,t]}(\cdot+t)
\]
is a strong Markov process which admits a unique invariant law denoted by $\pi_A$,
see Theorem \ref{thm:exist-uniq-inv-law} below.
We introduce $\tau$, the first return time to $\emptyset$ (the null point process)
for this Markov process defined by
\begin{equation}
\label{def:tau}
\tau \triangleq \inf\{t>0 : X_{t-}\neq \emptyset, X_{t} =\emptyset\}=\inf\{t>0 : N^h[t-A,t)\neq 0, N^h(t-A,t] =0\}\,.
\end{equation}
The probability measure $\pi_A$ on $\mathcal{N}((-A,0])$ can be classically represented as the intensity of an occupation measure over an excursion: for any non-negative Borel function $f$,
\begin{equation}\label{def:piA}
\pi_A f
\triangleq
\frac1{\mathbb{E}_\emptyset(\tau)} \mathbb{E}_\emptyset\biggl(\int_0^{\tau} f((S_t N)|_{(-A,0]}) \,dt\biggr)\,.
\end{equation}
Note that we may then construct a Markov process $X_t$ in equilibrium on $\mathbb{R}_+$ and a time-reversed Markov process in equilibrium on $\mathbb{R}_+$, with identical initial conditions (drawn according to $\pi_A$) and independent transitions, and build from these a Markov process in equilibrium on $\mathbb{R}$. This construction would yield a stationary version of $N^h$ on $\mathbb{R}$.
We now state our main results.
\begin{thm}[Ergodic theorems]
\label{thm:point-erg+laws}
Let $N^h$ be a Hawkes process with immigration rate $\lambda >0$, reproduction function $h: (0,+\infty) \to \mathbb{R}$,
and initial condition $N^0$ with law $\mathfrak{m}$, satisfying Assumption~\ref{hyp:h}.
Let $A<\infty$ be such that
$A\ge L(h)$, and $\pi_A$ be the probability measure on $\mathcal{N}((-A,0])$ defined by \eqref{def:piA}.
\begin{enumerate}[a)]
\item
\label{lln}
If $f \in \mathcal{B}_{lb}(\mathcal{N}((-A,0]))$ is nonnegative or $\pi_A$-integrable, then
\begin{equation*}
\frac1T \int_0^T f((S_tN^h)|_{(-A,0]})\,dt \xrightarrow[T\to\infty]{\mathbb{P}_{\mathfrak{m}}-\textnormal{a.s.}} \pi_A f\,.
\end{equation*}
\item
\label{cv-to-equ}
Convergence to equilibrium for large times holds in the following sense:
\[
\mathbb{P}_\mathfrak{m}\bigl((S_tN^h)|_{[0,+\infty)} \in \cdot \bigr)
\xrightarrow[t\to\infty]{\textnormal{total variation}}
\mathbb{P}_{\pi_A}(N^h|_{[0,+\infty)}\in \cdot) \,.
\]
\end{enumerate}
\end{thm}
The following result provides the asymptotics of the fluctuations around the convergence result~\ref{lln}),
and yields asymptotically exact confidence intervals for it. We define the variance \begin{equation}
\label{sig2f}
\sigma^2(f)
\triangleq \frac1{\mathbb{E}_\emptyset(\tau)}
\mathbb{E}_\emptyset\biggl(\biggl(\int_0^{\tau} \big(f((S_tN^h)|_{(-A,0]}) -\pi_Af\big)\,dt\biggr)^2 \biggr) .
\end{equation}
\begin{thm}[Central limit theorem]
\label{thm:clt}
Let $N^h$ be a Hawkes process with immigration rate $\lambda >0$, reproduction function $h: (0,+\infty) \to \mathbb{R}$,
and initial law $\mathfrak{m}$, satisfying Assumption~\ref{hyp:h}.
Let $A<\infty$ be such that $A\ge L(h)$,
the hitting time $\tau$ be given by \eqref{def:tau},
and the probability measure $\pi_A$ on $\mathcal{N}((-A,0])$ be given by \eqref{def:piA}.
If $f \in \mathcal{B}_{lb}(\mathcal{N}((-A,0]))$ is $\pi_A$-integrable and satisfies
{$\sigma^2(f)<\infty$} then
\begin{equation*}
\label{eq:clt}
\sqrt{T} \biggl( \frac1T \int_0^T f((S_tN^h)|_{(-A,0]}) \,dt - \pi_A f \biggr)
\xrightarrow[T\to\infty]{\textnormal{in law}} \mathcal{N}(0, \sigma^2(f))\,.
\end{equation*}
\end{thm}
Now we provide non-asymptotic exponential concentration bounds for
Theorem~\ref{thm:point-erg+laws}~\ref{lln}).
The first entrance time at $\emptyset$ is defined by
\begin{equation}
\label{def:tau0}
\tau_0 \triangleq \inf\{t{\ge}0 : N^h(t-A,t] =0\}\,.
\end{equation}
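On a single realized path, $\tau_0$ can be read off the sorted atom locations by scanning for the first gap of length at least $A$. The following Python sketch is an illustration only; it assumes the path is given as a finite sorted list containing in particular the atoms of $N^0$ in $(-A,0]$:

```python
def first_empty_window(events, A):
    """tau_0 = inf{t >= 0 : no atom in (t-A, t]} for sorted atom locations."""
    relevant = [e for e in events if e > -A]
    if not any(e <= 0 for e in relevant):
        return 0.0  # the window (-A, 0] is already empty
    for i, e in enumerate(relevant):
        # candidate t = e + A leaves the window (e, e + A] empty iff the
        # next atom (if any) lies strictly beyond e + A
        if i + 1 == len(relevant) or relevant[i + 1] > e + A:
            return e + A
    # not reached for finite event lists
```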
Recall that $x^+=\max(x,0)$ and $x^-=\max(-x,0)$ for $x\in\mathbb{R}$, and set
$(x)_{\pm}^k \triangleq (x^{\pm})^k$.
\begin{thm}[Concentration inequalities]
\label{thm:non-asympt-exp-bd}
Let $N^h$ be a Hawkes process with immigration rate $\lambda >0$, reproduction function $h: (0,+\infty) \to \mathbb{R}$,
and initial law $\mathfrak{m}$, satisfying Assumption~\ref{hyp:h}.
Let $A<\infty$ be such that $A\ge L(h)$.
Consider the hitting time $\tau$ given by \eqref{def:tau},
the entrance time $\tau_0$ given by \eqref{def:tau0},
and the probability measure $\pi_A$ on $\mathcal{N}((-A,0])$ defined in \eqref{def:piA}.
Consider $f \in \mathcal{B}_{lb}(\mathcal{N}((-A,0]))$ taking its values in a bounded interval $[a,b]$,
and define $\sigma^2(f)$ as in \eqref{sig2f} and
\begin{align*}
& c^\pm(f) \triangleq
\sup_{k\ge3}
\Biggl(
\frac2{k!}
\frac{\mathbb{E}_\emptyset\bigl(\bigl(\int_0^{\tau} (f((S_tN^h)|_{(-A,0]}) -\pi_Af)\,dt\bigr)^k_\pm \bigr)}%
{\mathbb{E}_\emptyset(\tau) \sigma^2(f)}
\Biggr)^{\frac1{k-2}}\,,
\\
& c^\pm(\tau)
\triangleq
\sup_{k\ge3}
\biggl(
\frac2{k!}
\frac{\mathbb{E}_\emptyset\bigl( (\tau-\mathbb{E}_\emptyset(\tau))_\pm^k \bigr)}%
{\mbox{Var}_\emptyset (\tau)}
\biggr)^{\frac1{k-2}}\,,\\
&
c^+(\tau_0)
\triangleq
\sup_{k\ge3}
\biggl(
\frac2{k!}
\frac{\mathbb{E}_{\mathfrak{m}}\bigl((\tau_0-\mathbb{E}_\mathfrak{m}(\tau_0))_+^k\bigr)}%
{\mbox{Var}_\mathfrak{m}(\tau_0)}
\biggr)^{\frac1{k-2}}\,.
\end{align*}
Then, for all $\varepsilon>0$ and $T$ sufficiently large
\begin{align}
\label{eq:conc-ineq}
&\mathbb{P}_\mathfrak{m}\biggl( \biggl|\frac1T \int_0^T f((S_tN^h)|_{(-A,0]}) - \pi_A f \biggr| \ge \varepsilon \biggr)
\notag\\
&\quad\le
\exp\left(
-\frac{((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau))^2}
{8 T \sigma^2(f) + 4 c^+(f)((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau)) }
\right)
\notag\\
&\qquad+
\exp\left(
-\frac{((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau))^2}
{8 T \sigma^2(f) + 4 c^-(f)((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau)) }
\right)
\notag\\
&\qquad+
\exp\left(
-\frac{((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau))^2}
{8T |b-a|^2\frac{\mbox{Var}_\emptyset (\tau)}{\mathbb{E}_\emptyset(\tau)}+ 4 |b-a| c^+(\tau)((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau)) }
\right)
\notag\\
&\qquad+
\exp\left(
-\frac{((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau))^2}
{8T |b-a|^2\frac{\mbox{Var}_\emptyset (\tau)}{\mathbb{E}_\emptyset(\tau)}+ 4 |b-a| c^-(\tau)((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau)) }
\right)
\notag\\
&\qquad+
\exp\left(
-\frac{(\sqrt{T}\varepsilon - 2|b-a| \mathbb{E}_\mathfrak{m}(\tau_0))^2}
{8|b-a|^2\mbox{Var}_\mathfrak{m} (\tau_0) + 4 |b-a| c^+(\tau_0)(\sqrt{T}\varepsilon - 2|b-a| \mathbb{E}_\mathfrak{m}(\tau_0)) }
\right).
\end{align}
If $N^0|_{(-A,0]} =\emptyset$ then the last term of the r.h.s.\ vanishes and the upper bound holds with $T$ instead of $T-\sqrt{T}$
in the other terms.
\end{thm}
{This concentration inequality can be simplified, using upper bounds for the constants $c^\pm(f)$ and $c^\pm(\tau)$. In the following corollary, we use explicitly the fact that the hitting time $\tau$ admits an exponential moment (see Proposition \ref{prop:tau}).
\begin{cor}\label{cor:dev_simple}
Under assumptions and notation of Theorem~\ref{thm:non-asympt-exp-bd}, there exists $\alpha>0$ such that $\mathbb{E}_\emptyset(e^{\alpha\tau})<\infty$. We set $$v=\frac{2(b-a)^2}{\alpha^2}\Big\lfloor\frac{T}{\mathbb{E}_\emptyset(\tau)}\Big\rfloor \mathbb{E}_\emptyset(e^{\alpha\tau}) e^{\alpha \mathbb{E}_\emptyset(\tau)},\quad \text{and}\quad c= \frac{|b-a|}{\alpha}\,.$$
Then for all $\varepsilon>0$
\begin{align*}
\mathbb{P}_\emptyset\biggl( &\biggl|\frac1T \int_0^T f((S_tN^h)|_{(-A,0]}) - \pi_A f \biggr| \ge \varepsilon \biggr)\le 4 \exp\left(-\frac{\Bigl( T\varepsilon-|b-a|\mathbb{E}_\emptyset(\tau) \Bigr)^2}{4 \left(2v+ c(T\varepsilon-|b-a|\mathbb{E}_\emptyset(\tau)) \right)} \right)\,,
\end{align*}
or equivalently for all $1 \ge \eta>0$
\begin{equation}
\mathbb{P}_\emptyset\biggl( \biggl|\frac1T \int_0^T f((S_tN^h)|_{(-A,0]})dt - \pi_A f \biggr| \ge \varepsilon_\eta \biggr)\leq \eta\,,
\end{equation}
where
\begin{align*}
\varepsilon_\eta=\frac{1}{T}\left(|b-a|\mathbb{E}_\emptyset(\tau)-2c\log\big(\frac{\eta}{4}\big)+\sqrt{4c^2\log^2\big(\frac{\eta}{4}\big)-8 v \log\big(\frac{\eta}{4}\big)}\right)\,.
\end{align*}
\end{cor}
}
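Since the threshold $\varepsilon_\eta$ of Corollary~\ref{cor:dev_simple} is explicit, it can be evaluated numerically once $\mathbb{E}_\emptyset(\tau)$, $v$, $c$ and $|b-a|$ are known or bounded. A Python sketch (all numerical inputs are placeholders to be estimated or bounded in practice):

```python
import math

def epsilon_eta(T, eta, mean_tau, v, c, span):
    """Evaluate epsilon_eta from the corollary; span stands for |b - a|."""
    log_term = math.log(eta / 4.0)  # negative for eta < 4
    return (span * mean_tau - 2.0 * c * log_term
            + math.sqrt(4.0 * c**2 * log_term**2 - 8.0 * v * log_term)) / T
```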
\begin{rem}\label{rque:A-L}
All these results hold under \eqref{eq:hyp-momentN0} even if
$\mathbb{E}_\mathfrak{m}(N^0((-A,0]))=+\infty$. Indeed,
\begin{align*}
\frac{1}{T}\int_0^T f\big(N^h(\cdot+t)|_{(-A,0]}\big) dt
=\frac{1}{T}\int_0^{A-L(h)} &f\big(N^h(\cdot+t)|_{(-A,0]}\big) dt\\
&+\frac{1}{T}\int_{A-L(h)}^T f\big(N^h(\cdot+t)|_{(-A,0]}\big) dt\,.
\end{align*}
The first r.h.s.\ term converges $\mathbb{P}_\mathfrak{m}$-a.s.\ to zero, even when multiplied by $\sqrt{T}$. For the second r.h.s.\ term, we can apply the Markov property at time $A-L(h)$ (which will be justified when proving that $(S_\cdot N^h)|_{(-A,0]}$ is a Markov process) and show that $$\mathbb{E}_{(S_{A-L(h)}N^h)|_{(-A,0]}}(N^0((-A,0]))<+\infty.$$
\end{rem}
\section{Hawkes processes}
In this Section, we first give a constructive proof of Eq.~\eqref{eq:Ng}, which yields a coupling between $N^h$ and $N^{h^+}$ satisfying $N^h\leq N^{h^+}$.
The renewal times on which are based the proofs of our main results are the instants at which the intensity $\Lambda^h$
has returned and then stayed at $\lambda$ for a duration long enough to be sure that the dependence on the past has vanished, in order to be able to write the process in terms of i.i.d.\ excursions.
The coupling will allow us to control the renewal times for $N^h$ by the renewal times for $N^{h^+}$.
When dealing with $h^+$, we use the well-known cluster representation for a Hawkes process
with nonnegative reproduction function. This representation allows us to interpret the renewal times as times at which an $M/G/\infty$ queue is empty,
and we use this interpretation in order to obtain tail estimates for the interval between these times.
\subsection{Solving the equation for the Hawkes process}
In this paragraph, we give an algorithmic proof of the existence and uniqueness of strong solutions of Equation~\eqref{eq:Ng}.
This algorithmic proof can be used for simulations, which are shown in Fig. \ref{fig:eds}.
\begin{prop}
\label{prop:couplage}
Let $Q$ be a $(\mathcal{F}_t)_{t\ge0}$-Poisson point process on $(0,+\infty)\times (0,+\infty)$ with
unit intensity.
Consider Equation~\eqref{eq:Ng}, \emph{i.e.},
\begin{equation*}
\left\{
\begin{aligned}
&N^{h} = N^0+\int_{(0,+\infty)\times(0,+\infty)} \delta_u {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{\theta\leq \Lambda^{h}(u)\}}\,Q(du,d\theta)\,,
\\
&\Lambda^h(u) =\biggl(\lambda+\int_{(-\infty,u)} h(u-s)\,N^h(ds)\biggr)^+ \,,
&&u >0\,,
\end{aligned}
\right.
\end{equation*}
in which $h: (0,+\infty)\to \mathbb{R}$ is a signed measurable reproduction function, $\lambda>0$ an immigration rate,
and $N^0$ an initial condition in $\mathcal{N}((-\infty,0])$ with law $\mathfrak{m}$.
Consider the similar equation for $N^{h^+}$ in which $h$ is replaced by $h^+$.
Assume that
\begin{equation} \label{cond_h}
\|h^+\|_1 <1
\end{equation}
and that the distribution $\mathfrak{m}$ of the initial condition $N^0$ satisfies
\begin{equation} \label{cond_N0}
\forall t>0,\quad \int_0^t \mathbb{E}_{\mathfrak{m}} \Big(\int_{(-\infty,0]}h^+(u-s)\,N^0(ds) \Big)\ du < +\infty.
\end{equation}
\begin{enumerate}[a)]
\item
\label{prop:couplage-i}
Then there exists a pathwise unique strong solution $N^h$ of Equation~\eqref{eq:Ng},
and this solution is a Hawkes process in the sense of Definition \ref{def:Hawkes}.
\item
\label{prop:couplage-ii}
The same holds for $N^{h^+}$, and moreover $N^h \le N^{h^+}$ a.s. (in the sense of measures).
\end{enumerate}
\end{prop}
\begin{rem}
In order to prove the strong existence and pathwise uniqueness of the solution of Eq.~\eqref{eq:Ng}, we propose a proof based on an algorithmic construction similar to the Poisson embedding of \cite{bremaudmassoulie}, also referred to in \cite{daleyverejones2008} as thinning.
A similar result is also proved in these references using Picard iteration techniques, with Assumption~\eqref{cond_N0} replaced by the stronger hypothesis that there exists $D_{\mathfrak{m}}>0$ such that
\begin{equation}
\forall t>0,\quad \mathbb{E}_{\mathfrak{m}} \bigg(\int_{(-\infty,0]}|h(t-s)| \,N^0(ds) \bigg) < D_{\mathfrak{m}}\,.
\end{equation}
When $h$ is nonnegative, the result can be deduced from the cluster representation of the self-exciting Hawkes process, since $N^h([0,t])$ is upper bounded by the sum of the sizes of a Poisson number of sub-critical Galton-Watson trees, see \cite{hawkesoakes,reynaudbouretroy}.
\end{rem}
\begin{rem}
Proposition~\ref{prop:couplage} does not require that $L(h)$ be finite. When $L(h)<\infty$,
the assumption \eqref{cond_N0} can be rewritten as
\begin{equation}
\int_0^{L(h)} \mathbb{E}_\mathfrak{m} \bigg(\int_{(-L(h),0]}h^+(u-s) \,N^0(ds) \bigg)\ du < +\infty\,.
\label{cond_N0:cas_supph_compact}
\end{equation}
A sufficient condition for \eqref{cond_N0:cas_supph_compact} to hold is that $\mathbb{E}_\mathfrak{m}(N^0(-L(h),0])<+\infty$. Indeed, using the Fubini-Tonelli theorem, the l.h.s.\ of \eqref{cond_N0:cas_supph_compact} can be bounded by
$\|h^+\|_1 \,\mathbb{E}_\mathfrak{m}(N^0(-L(h),0])$. Therefore, the results of Proposition \ref{prop:couplage} hold under Assumption \ref{hyp:h}.
\end{rem}
Before proving Proposition \ref{prop:couplage}, we start with a lemma showing that Assumption~\eqref{cond_N0} implies a milder condition which will be used repeatedly in the proof of the proposition.
\begin{lem}\label{lem:conditionN0}
Suppose that Assumption~\eqref{cond_N0} is satisfied. Then for any nonnegative random variable $U$ and $r>0$,
\begin{equation*}
\mathbb{P}_{\mathfrak{m}}\bigg(\int_U^{U+r} \int_{(-\infty,0]}h^+(t-s) \,N^0(ds) \,dt < +\infty,~~ U< +\infty \bigg)=\mathbb{P}_{\mathfrak{m}}(U<+\infty)\,.
\end{equation*}
\end{lem}
\begin{proof}
First note that, for every integer $n$,
\[
\int_0^n\int_{(-\infty,0]}h^+(t-s) \,N^0(ds) dt < +\infty\,,\;
\mathbb{P}_{\mathfrak{m}}-\text{a.s.},
\]
using condition \eqref{cond_N0} and the Fubini-Tonelli theorem. This leads easily to
\[
\mathbb{P}_{\mathfrak{m}}\bigg( \forall n \ge 0,~\int_0^n\int_{(-\infty,0]}h^+(t-s) \,N^0(ds) dt < +\infty \bigg) = 1\,,
\]
and, for a positive real number $r$, to
\[
\mathbb{P}_{\mathfrak{m}}\bigg(\forall u\ge0,~\int_u^{u+r} \int_{(-\infty,0]}h^+(t-s) \,N^0(ds) dt < +\infty\bigg)=1\,,
\]
which gives the announced result.
\end{proof}
\begin{proof} [Proof of Proposition \ref{prop:couplage}]
Proofs of both \ref{prop:couplage-i}) and \ref{prop:couplage-ii}) will be obtained by induction on the successive atoms of $N^h$.
\paragraph{Proof of \ref{prop:couplage-i}): initialization.}
Let
\begin{align}
\label{def:Lambda0}
&\Lambda^h_0(t)=\biggl(\lambda+\int_{(-\infty,0]}h(t-s) \,N^0(ds)\biggr)^+\,,
&& t>0\,,
\\
\label{def:U1h}
&U_1^h=\inf\biggl\{u > 0 : \int_{(0,u]} \int_{(0,\Lambda_0^h(v)]} \,Q(dv,d\theta)>0 \biggr\}\,,
\end{align}
with the usual convention that $\inf \emptyset = +\infty$.
First note that conditionally on $N^0$,
\[
Q(\{(v,\theta) \in (0,\varepsilon] \times (0,+\infty) : \theta \le \Lambda_0^h(v)\})
\]
follows a Poisson law with parameter $\int_0^{\varepsilon} \Lambda^h_0(t) dt$. Using Assumption \eqref{cond_N0} and Lemma \ref{lem:conditionN0}, we can find $\varepsilon_0 >0$ such that
$\int_0^{\varepsilon_0} \int_{(-\infty,0]}h^+(t-s) \,N^0(ds) dt < +\infty$.
We thus have, $\mathbb{P}_{\mathfrak{m}}$-a.s.,
\begin{align*}
\int_0^{\varepsilon_0} \Lambda^h_0(t) dt
&= \int_0^{\varepsilon_0} \biggl(\lambda+\int_{(-\infty,0]}h(t-s) \,N^0(ds)\biggr)^+\, dt
\\
&\le \lambda \varepsilon_0 +\int_0^{\varepsilon_0} \int_{(-\infty,0]}h^+(t-s) \,N^0(ds) dt < + \infty\,.
\end{align*}
Consequently, $Q(\{(v,\theta) \in (0,\varepsilon_0] \times (0,+\infty) : \theta \le \Lambda_0^h(v)\})$ is finite $\mathbb{P}_{\mathfrak{m}}$-a.s. If it is null then $U_1^h=+\infty$ and $N^h=N^0$. Else, $U_1^h$ is the first atom on $(0,+\infty)$ of the point process of conditional intensity $\Lambda_0^h$. Since $\Lambda^h_0(t)= \Lambda^h(t)$ for $t\in (0,U_1^h]$, $U^h_1$ is also the first atom of $N^h$ on $(0,+\infty)$.\\
On $\{U_1^h = +\infty\}$, we define $U_k^h =+\infty$ for all $k \ge 2$.
\paragraph{Proof of \ref{prop:couplage-i}): recursion.}
Assume that we have built $U_1^h, \dots ,U_k^h$ such that on the event $\{U_k^h < +\infty\}$ these are the first $k$ atoms of $N^h$ in increasing order. We are going to construct $U_{k+1}^h$, which, when finite, will be an atom of $N^h$ greater than $U_k^h$.
On $\{U_k^h = +\infty\}$ we set $U_{k+1}^h = +\infty$.
Henceforth, we work on $\{U_k^h < +\infty\}$. Let
\begin{align}
\label{def_int_part}
&\Lambda^h_k(t) =\biggl(\lambda+\int_{(-\infty,0]}h(t-s) \,N^0(ds) + \int_{(0,U_k^h]} h(t-s) \,N^h (ds)\biggr)^+\,,
\qquad
t>0\,,
\\
&U_{k+1}^h=\inf\biggl\{u > U_k^h : \int_{(U_k^h,u]}\int_{(0,\Lambda_k^h(v)]} \,Q(dv,d\theta)>0 \biggr\}\,.
\notag
\end{align}
As in the initialization step, we first prove that there exists $\varepsilon >0$ such that $Q(\mathcal{R}_{\varepsilon})$ is a.s. finite, where
\[
\mathcal{R}_{\varepsilon}=\{(v,\theta) : v \in (U_k^h,U_k^h +\varepsilon],\, \theta \in (0,\Lambda_k^h(v)]\}\,.
\]
Since the random function $\Lambda_k^h$ is measurable with respect to $\mathcal{F}_{U_k^h}$, conditionally on $\mathcal{F}_{U_k^h}$, $Q(\mathcal{R}_{\varepsilon})$ follows a Poisson law with parameter $\int_{U_k^h}^{U_k^h +\varepsilon} \Lambda^h_k(t) dt$
(see Lemma~\ref{lem:Q})
so that
$$\mathbb{P}(Q(\mathcal{R}_{\varepsilon}) < +\infty) = \mathbb{E}\Big(\mathbb{P}(Q(\mathcal{R}_{\varepsilon}) < +\infty \,|\,\mathcal{F}_{U_k^h} )\Big) = \mathbb{E}\left(\mathbb{P}\left(\int_{U_k^h}^{U_k^h +\varepsilon} \Lambda^h_k(t) dt < +\infty \,\bigg|\,\mathcal{F}_{U_k^h} \right)\right).$$
Using the fact that $x \le x^+$ and the monotonicity of $x \mapsto x^+$, we obtain
from~\eqref{def_int_part} that
\begin{align*}
\int_{U_k^h}^{U_k^h +\varepsilon} \Lambda^h_k(t) dt
\le \lambda \varepsilon
&+ \int_{U_k^h}^{U_k^h +\varepsilon} \int_{(-\infty,0]}h^+(t-s) \,N^0(ds) dt
\\
&+ \int_{U_k^h}^{U_k^h +\varepsilon} \int_{(0,U_k^h]} h^+(t-s) \,N^h (ds) dt\,.
\end{align*}
On $\{U_k^h <+\infty\}$ the second term in the r.h.s.\ is finite thanks to Assumption~\eqref{cond_N0} and Lemma \ref{lem:conditionN0}. It is thus also finite, a.s., on $\{U_k^h <+\infty\}$, conditionally on $\mathcal{F}_{U_k^h}$.
Now, using the Fubini-Tonelli Theorem and Assumption \eqref{cond_h}, we obtain that
\begin{align*}
\int_{U_k^h}^{U_k^h +\varepsilon} \int_{(0,U_k^h]} h^+(t-s) \,N^h (ds) dt &= \int_{(0,U_k^h]} \biggl(\int_{U_k^h}^{U_k^h +\varepsilon} h^+(t-s)dt \biggr) \,N^h (ds) \\
& \le \|h^+\|_1 \,N^h((0,U_k^h]) = k \|h^+\|_1 < +\infty.
\end{align*}
This concludes the proof of the finiteness of $\int_{U_k^h}^{U_k^h +\varepsilon} \Lambda^h_k(t) dt $, so that $Q(\mathcal{R}_{\varepsilon}) <+\infty$, $\mathbb{P}_{\mathfrak{m}}$-a.s.
If $Q(\mathcal{R}_{\varepsilon}) $ is null then $U_{k+1}^h = +\infty$ and thus $N^h = N^0 +\sum_{i=1}^k \delta_{U_i^h}$. Else, $U_{k+1}^h$ is actually a minimum, implying that $U_k^h < U_{k+1}^h$ and, since $\Lambda^h$ and $\Lambda^h_k$ coincide on $(0,U_{k+1}^h)$, that $U_{k+1}^h$ is the $(k+1)$-th atom of $N^h$.
We have finally proved by induction the existence of a random nondecreasing sequence $(U_k^h)_{k \ge 1}$, which is either eventually equal to infinity, or strictly increasing. In the latter case, the $U_k^h$ are exactly the atoms of the random point process $N^h$ on $(0, +\infty)$.
To complete the proof, it is enough to prove that $\lim_{k \rightarrow +\infty} U_k^h = +\infty$, $\mathbb{P}_{\mathfrak{m}}$-a.s. For this, we compute $\mathbb{E}_{\mathfrak{m}}(N^h(0,t))$ for $t >0$. For all $k \ge 1$,
\begin{align*}
\mathbb{E}_{\mathfrak{m}}\big(N^h (0,t \wedge U_k^h)\big)&=\mathbb{E}_{\mathfrak{m}}\bigg(\int_0^{t\wedge U_k^h} \Lambda^h(u)du\bigg)\\
&=\mathbb{E}_{\mathfrak{m}}\bigg(\int_0^{t\wedge U_k^h} \bigg(\lambda+ \int_{(-\infty ,u)} h(u-s)\,N^h(ds)\bigg)^+\ du\bigg)\\
& \le \lambda t+\mathbb{E}_{\mathfrak{m}}\bigg(\int_0^t\int_{(-\infty,0]} h^+(u-s)\,N^0(ds)du \bigg)
\\
&\hphantom{\null \le \lambda t}
+ \mathbb{E}_{\mathfrak{m}}\bigg(\int_0^{t\wedge U_k^h}\int_{(0,u)}h^+(u-s)\,N^h(ds) du \bigg).
\end{align*}
Using the nonnegativity of $h^+$ and Assumption \eqref{cond_N0},
\[
\mathbb{E}_{\mathfrak{m}}\bigg(\int_0^t \int_{(-\infty,0]} h^+(u-s)\,N^0(ds)du \bigg)
\le \int_0^t \mathbb{E}_{\mathfrak{m}}\bigg( \int_{(-\infty,0]} h^+(u-s)\,N^0(ds) \bigg) du <+\infty\,.
\]
For the last term, we use again the Fubini-Tonelli theorem and obtain
\begin{align*}
\mathbb{E}_{\mathfrak{m}}\bigg(\int_0^{t\wedge U_k^h} \int_{(0,u)} h^+(u-s) \,N^h(ds)\ du\bigg)& = \mathbb{E}_{\mathfrak{m}}\bigg(\int_{(0,t\wedge U_k^h)} \int_s^{t\wedge U_k^h} h^+(u-s)du \,N^h(ds)\bigg)\\
& \leq \|h^+\|_1 \,\mathbb{E}_{\mathfrak{m}}\bigg( N^h(0,t\wedge U_k^h)\bigg).
\end{align*}
These three inequalities and the fact that $\|h^+\|_1<1$, see Assumption \eqref{cond_h}, yield that
\begin{align}
0\leq \mathbb{E}_{\mathfrak{m}}\big(N^h(0,t\wedge U_k^h)\big) \leq \frac{1}{1-\|h^+\|_1} \bigg(\lambda t + \int_0^t \mathbb{E}_{\mathfrak{m}}\bigg( \int_{(-\infty,0]} h^+(u-s)\,N^0(ds) \bigg) du\bigg) \label{etape5}
\end{align}
where the upper bound is finite and independent of $k$.
As a consequence, we necessarily have that $\lim_{k\rightarrow +\infty} U_k^h=+\infty$, a.s. Otherwise, there would exist $T>0$ and an event $\Omega_0$ such that $\mathbb{P}(\Omega_0)>0$ and $\lim_{k\rightarrow +\infty} U_k^h\leq T$ on $\Omega_0$. But this would entail that $\mathbb{E}_{\mathfrak{m}}(N^h(0,T\wedge U_k^h))\geq (k-1) \mathbb{P}_{\mathfrak{m}}(\Omega_0)$, which tends to $+\infty$ with $k$, contradicting the bound \eqref{etape5}. \\
Note additionally that once we know that $\lim_{k\rightarrow +\infty} U_k^h=+\infty$, a.s., we can use
the Beppo-Levi theorem, which leads to $\mathbb{E}_{\mathfrak{m}}\big(N^h(0,t)\big) < +\infty$ for all $t>0$.
Note that uniqueness comes from the algorithmic construction of the sequence $(U^h_k)_{k\geq 1}$.
\paragraph{Proof of \ref{prop:couplage-ii}).}
The assumptions of the proposition are valid both for $h$ and for $h^+$, and the result \ref{prop:couplage-i})
which we have just proved allows us to construct strong solutions $N^h$ and $N^{h^+}$ of Eq.~\eqref{eq:Ng} driven by the same Poisson point process $Q$. Proving \ref{prop:couplage-ii}) is equivalent to showing that the atoms of $N^h$ are also atoms of $N^{h^+}$, which we do using the following recursion.
If $U_1^h = +\infty$ then $N^h$ has no atom on $(0,+\infty)$ and there is nothing to prove.
Else, we first show that the first atom $U^h_1$ of $N^h$ is also an atom of $N^{h^+}$. The key point is to establish that
\begin{equation} \label{eq:comp_int}
\forall t \in (0,U_1^h),~~ \Lambda^h (t)\leq \Lambda^{h^+}(t).
\end{equation}
Indeed, from the definition of $U_1^h$, there exists an atom of the Poisson measure $Q$ at some $(U_1^h,\theta)$ with $\theta \le \Lambda^h\big((U_1^h)_-\big)$. If (\ref{eq:comp_int}) is true we may deduce that $(U_1^h,\theta)$ is also an atom of $Q$ satisfying $\theta \le \Lambda^{h^+}\big((U_1^h)_-\big)$, and thus that $U_1^h$ is also an atom of $N^{h^+}$.
We now turn to the proof of (\ref{eq:comp_int}).
For every $t\in (0,U_1^h)$, we clearly have
\[
\Lambda^h (t) = \Lambda^h_0(t)
\triangleq \bigg(\lambda+\int_{(-\infty,0]}h(t-s) \,N^0(ds)\bigg)^+\,,
\]
and, using the fact that $x \mapsto x^+$ is nondecreasing on $\mathbb{R}$ and that $h \le h^+$, we obtain that
\[
\Lambda^h (t) \le \lambda+\int_{(-\infty,t)}h^+(t-s) \,N^{h^+}(ds) \triangleq \Lambda^{h^+}(t)\,.
\]
We now prove that if $U_1^h, \dots , U_k^h$ are atoms of $N^{h^+}$ and $U_{k+1}^h<+\infty$ then $U_{k+1}^h$ is also an atom of $N^{h^+}$.
By construction, $\Lambda^h(t)=\Lambda^h_k(t)$ for all $t \in (0, U_{k+1}^h)$, and there exists $\theta >0$ such that $(U_{k+1}^h, \theta)$ is an atom of $Q$ satisfying $\theta \le \Lambda^h((U_{k+1}^h)_-)$. To obtain that $U_{k+1}^h$ is also an atom of $N^{h^+}$, it is thus enough to prove that
\begin{equation*}
\forall t \in [U_k^h,U_{k+1}^h),~~ \Lambda^h (t)\leq \Lambda^{h^+}(t).
\end{equation*}
Using that $h \le h^+$ and the induction hypothesis that the first $k$ atoms $U_1^h, \dots , U_k^h$
of $N^h$ are also atoms of $N^{h^+}$, we obtain for all $t \in (U_k^h,U_{k+1}^h)$ that
\[
\int_{(0,U_k^h]} h(t-s) \,N^h (ds) \le \int_{(0,U_k^h]} h^+(t-s) \,N^h (ds) \le \int_{(0,t)} h^+(t-s) \,N^{h^+} (ds)\,.
\]
This upper bound and the definition \eqref{def_int_part} of $\Lambda_k^h$ yield that, for all $t \in (U_k^h,U_{k+1}^h)$,
\[
\Lambda^h_k(t) \le \Lambda^{h^+}(t)\,,
\]
and since $\Lambda_k^h$ and $\Lambda^h$ coincide on $(0, U_{k+1}^h)$, we have finally proved that $U_{k+1}^h$ is an atom of $N^{h^+}$. This concludes the proof of the proposition.
\end{proof}
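As noted above, the recursive construction in this proof doubles as a simulation scheme. The following Python sketch (all function names are ours, not from the text) implements the thinning recursion under the simplifying extra assumption, not required by the proposition, that $h^+$ is non-increasing, e.g.\ $h(u)=\alpha\mathrm{e}^{-\beta u}$: the quantity $\lambda+\sum_i h^+(t-s_i)$ evaluated at the current time then dominates the true intensity at all later times until the next candidate point, so it can serve as the dominating rate.

```python
import math
import random

def simulate_hawkes_thinning(lam, h, T, seed=0):
    """Thinning construction of the Hawkes process on (0, T].

    Assumes (illustrative choice, not required by the proposition) that the
    positive part of the reproduction function is non-increasing, e.g.
    h(u) = alpha * exp(-beta * u), so that the h^+ intensity evaluated at the
    current time dominates the true intensity until the next candidate point.
    """
    rng = random.Random(seed)
    atoms = []
    t = 0.0
    while True:
        # dominating rate: intensity of the h^+-driven process at time t
        big_lambda = lam + sum(max(h(t - s), 0.0) for s in atoms)
        t += rng.expovariate(big_lambda)
        if t >= T:
            return atoms
        # thinning step: accept with probability Lambda^h(t) / big_lambda
        intensity = max(lam + sum(h(t - s) for s in atoms), 0.0)
        if rng.random() * big_lambda <= intensity:
            atoms.append(t)
```

For instance, $h(u)=0.8\,\mathrm{e}^{-u}$ gives $\|h^+\|_1=0.8<1$, so Assumption \eqref{cond_h} holds and the returned list of atoms is a.s.\ finite on any bounded window.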
\begin{figure}[!ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=4.5cm,width=7cm]{hawkespositif.pdf} & \includegraphics[height=4.4cm,width=7cm]{hawkesgeneral.pdf}\\
(a) & (b)
\end{tabular}
\caption{
{\small \emph{(a) Hawkes process with a positive reproduction function $h$. (b) Hawkes process with a general reproduction function $h$. The dots in the plane represent the atoms of the Poisson point process $Q$ used for the construction. The atoms of the Hawkes processes are the green dots on the abscissa axis. The bold red curve corresponds to the intensity $\Lambda^h$ and the colored curves represent the partial cumulative contributions of the successive atoms of the Hawkes process. In (b), the bold blue curve corresponds to the intensity of the dominating Hawkes process with reproduction function $h^+$.}}}\label{fig:eds}
\end{center}
\end{figure}
\subsection{The cluster representation for nonnegative reproduction functions} \label{subsec:cluster}
In this subsection, we consider the case in which the reproduction function $h$ is nonnegative.
The intensity process of a corresponding Hawkes process can be written, for $t>0$,
\[
\Lambda^h(t) = \lambda + \int_{(-L(h),t)} h(t-u) \,N^h(du)\,.
\]
The first term can be interpreted as an immigration rate of \emph{ancestors}. Let $(V_k)_{k\ge1}$
be the corresponding sequence of arrival times, forming a Poisson process of intensity $\lambda$.
The second term is the sum of all the contributions of the atoms of $N^h$ before time $t$ and can be seen as self-excitation. If $U$ is an atom of $N^h$, it contributes to the intensity by the addition of the function $t \mapsto h(t-U)$, hence generating new points regarded as its \emph{descendants} or \emph{offspring}. Each individual has a
\emph{lifelength} $L(h)=\sup(\textnormal{supp}(h))$,
the number of its descendants follows a Poisson distribution with mean $\|h\|_1$, and the ages at which it gives birth to them have density $h/\|h\|_1$, all this independently. This induces a Galton-Watson process in continuous time,
see \cite{hawkesoakes,reynaudbouretroy}, and Fig. \ref{fig:cluster}.
To each ancestor arrival time $V_k$ we can associate a cluster of times, composed of the birth times of its descendants.
The condition $\|h\|_1<1$ is a necessary and sufficient condition for the corresponding Galton-Watson process to be sub-critical, which implies that the cluster sizes are finite almost surely. More precisely, if we define $H_k$ by saying that $V_k+H_k$ is the largest time of the cluster associated with $V_k$, then the $(H_k)_{k\ge1}$ are i.i.d random variables independent from the sequence $(V_k)_{k\ge1}$.
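This branching construction can itself be used for simulation when $h$ is nonnegative. Below is a minimal Python sketch (names and the specific kernel are our illustrative choices) for the kernel $h = \mu\,\frac{1}{L}{\bf 1}_{(0,L]}$, so that $\|h\|_1=\mu<1$ and $L(h)=L$: ancestors arrive as a Poisson process of rate $\lambda$, and each atom produces a Poisson($\mu$) number of children at i.i.d.\ Uniform$(0,L)$ ages.

```python
import math
import random

def _poisson(rng, m):
    # Knuth's method for sampling Poisson(m); adequate for small m
    target, p, k = math.exp(-m), 1.0, 0
    while True:
        p *= rng.random()
        if p <= target:
            return k
        k += 1

def simulate_hawkes_cluster(lam, mu, L, T, seed=0):
    """Cluster (Galton-Watson) construction of a Hawkes process on (0, T]
    for the illustrative kernel h = mu * (1/L) on (0, L], with mu < 1."""
    rng = random.Random(seed)
    # ancestor arrival times: Poisson process of rate lam on (0, T]
    stack = []
    t = rng.expovariate(lam)
    while t <= T:
        stack.append(t)
        t += rng.expovariate(lam)
    # grow the sub-critical clusters, depth-first
    atoms = []
    while stack:
        parent = stack.pop()
        atoms.append(parent)
        for _ in range(_poisson(rng, mu)):
            stack.append(parent + L * rng.random())
    # restrict to the observation window
    return sorted(a for a in atoms if a <= T)
```

Since $\mu<1$ each cluster is a.s.\ finite, so the depth-first loop terminates; atoms born after $T$ are discarded at the end, which is harmless because all their descendants are also born after $T$.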
Reynaud-Bouret and Roy \cite{reynaudbouretroy} proved the following tail estimate for $H_1$.
\begin{prop}[{\cite[Prop.\ 1.2]{reynaudbouretroy}}]
\label{prop:H}
Under Assumption \ref{hyp:h}, we have
\begin{equation}
\label{eq:tailH}
\forall x\ge0,\quad \mathbb{P}(H_1 >x)\le \exp \Bigl(-\frac{x}{L(h)}(\|h\|_1-\log \|h\|_1-1)+1-\|h\|_1 \Bigr)\,.
\end{equation}
\end{prop}
If we define
\begin{equation}
\label{def:gamma}
\gamma\triangleq\frac{\|h\|_1-\log(\|h\|_1)-1}{L(h)}
\end{equation}
then $\mathbb{P}(H_1 >x)\le \exp(1-\|h\|_1)\,\exp (-\gamma x)$, and $\gamma$ is an upper bound of the rate of decay of the Galton-Watson cluster length.
\begin{figure}[!ht]
\begin{center}
\includegraphics[height=5cm,width=8cm]{cluster5.pdf}
\caption{{\small \emph{Cluster representation of a Hawkes process with positive reproduction function. The abscissa of the dots give its atoms. Offspring are colored according to their ancestor, and their ordinates correspond to their generation in this age-structured Galton-Watson tree.}}}\label{fig:cluster}
\end{center}
\end{figure}
When $h$ is nonnegative, it is possible to associate to the Hawkes process an $M/G/\infty$ queue.
For $A \ge L(h)$, we consider that the arrival times of ancestors $(V_k)_{k\ge1}$ correspond to the arrivals
of customers in the queue, and associate to the $k$-th customer a service time $\widetilde{H}_k(A) \triangleq H_k+A$.
We assume that the queue is empty at time $0$, and then
the number $Y_t$ of customers in the queue at time $t\ge0$ is given by
\begin{equation}
\label{def:queue}
Y_t=\sum_{k : V_k\le t} {\mathchoice{\rm 1\mskip-4mu l}{\rm 1\mskip-4mu l}{\rm 1\mskip-4mu l}{\rm 1\mskip-4mu l}}_{\{V_k+\widetilde{H}_k(A)>t\}}\,.
\end{equation}
Let $\mathcal{T}_0=0$, and the successive hitting times of $0$ by the process $(Y_t)_{t\geq 0}$ be given by
\begin{equation} \label{def:tcalk}
\forall k\geq 1,\quad \mathcal{T}_{k}=\inf\{t\geq \mathcal{T}_{k-1},\ Y_{t-}\not=0,\ Y_t=0\}.
\end{equation}
The time interval $[V_1,\mathcal{T}_1)$ is called the first busy period, and is the first time interval during which the queue is never empty. Note that the $\mathcal{T}_{k}$ are times at which the conditional intensity of the underlying Hawkes process has returned to $\lambda$ and there is no remaining influence of its previous atoms,
since $\widetilde{H}_k(A) \triangleq H_k+A \ge H_k+L(h)$.
Thus the Hawkes process after $\mathcal{T}_{k}$ has the same law as the Hawkes process with initial condition the null point process $\emptyset \in \mathcal{N}((-A,0])$, translated by $\mathcal{T}_{k}$. This allows us to split the random measure $N^h$ into i.i.d.\ parts. We will prove all this in the next section.
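For intuition, the hitting times $\mathcal{T}_k$ are determined by the arrival times $V_k$ and the service durations $\widetilde{H}_k(A)$ alone. The following Python sketch (with names of our choosing) scans the customers in order and closes a busy period as soon as the next arrival occurs after every current departure; the closing times are exactly the $\mathcal{T}_k$.

```python
def hitting_times_of_zero(arrivals, services):
    """Successive hitting times of 0 (the T_k of the text) for an M/G/infty
    queue started empty, given sorted arrival times and service durations."""
    hits = []
    i, n = 0, len(arrivals)
    while i < n:
        # open a busy period with customer i
        end = arrivals[i] + services[i]
        i += 1
        # absorb every customer arriving strictly before the current end
        while i < n and arrivals[i] < end:
            end = max(end, arrivals[i] + services[i])
            i += 1
        hits.append(end)  # the queue empties here: this is some T_k
    return hits
```

For example, arrivals $(1,2,10)$ with service durations $(3,1,1)$ give one busy period ending at time $4$ and a second ending at time $11$.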
We end this part by giving tail estimates for the $\mathcal{T}_{k}$, which depend on $\lambda$ and on $\gamma$ given in \eqref{def:gamma}
which respectively control the exponential decays of $\mathbb{P}(V_1>x)$ and $\mathbb{P}(H_1>x)$.
\begin{prop}
\label{prop:decrR_1}
If $\lambda<\gamma$, then
$
\mathbb{P}(\mathcal{T}_1>x) = O(\mathrm{e}^{-\lambda x})$ as $x\to+\infty$,
and if $\gamma\le\lambda$, then for any $\alpha<\gamma$,
$
\mathbb{P}(\mathcal{T}_1>x)= O(\mathrm{e}^{-\alpha x})
$.
Notably, if $\alpha < \min(\lambda, \gamma)$, $\mathbb{E}(\mathrm{e}^{\alpha\mathcal{T}_1})$ is finite.
\end{prop}
\begin{proof}[Proof of Proposition \ref{prop:decrR_1}]
The proof follows from Proposition \ref{prop:H}, from which we deduce that the service time $\widetilde{H}_1=H_1+A$ satisfies:
\begin{equation}
\label{eq:queueH1tilde}
\mathbb{P}(\widetilde{H}_1>x)=\mathbb{P}(H_1>x-A) \le \exp(-(x-A)\gamma +1-\|h\|_1) = O(\mathrm{e}^{-\gamma x})\,.
\end{equation}
We then conclude by applying Theorem \ref{thm:carl} to the queue $(Y_t)_{t\geq 0}$ defined by \eqref{def:queue}.
\end{proof}
Theorem \ref{thm:carl} in Appendix establishes the decay rates for the tail distributions of $\mathcal{T}_1$ and of the length of the busy period $[V_1,\mathcal{T}_1)$. It has an interest in itself, independently of the results for Hawkes processes considered here.
\section{An auxiliary Markov Process}
When the reproduction function $h$ has a bounded support, $N^h|_{(t,+\infty)}$ depends on $N^h|_{(-\infty,t]}$ only through $N^h|_{(t-L(h),t]}$. The process $t \mapsto N^h|_{(t-L(h),t]}$ will then be seen to be Markovian, which yields regenerative properties for $N^h$. It is the purpose of this section to formalize that idea by introducing an auxiliary Markov process.
\subsection{Definition of a strong Markov process}
We suppose that Assumption \ref{hyp:h} holds and consider the Hawkes process $N^h$ solution of the corresponding Equation~\eqref{eq:Ng} constructed in Proposition~\ref{prop:couplage}.
We recall that $L(h)<\infty$.
Then, for any $t>0$ and $u\in (-\infty,-L(h)]$, $h(t-u)=0$, and thus
\begin{equation}
\label{def:Lambda-A}
\Lambda^h(t) = \biggl(\lambda+\int_{(-\infty,t)}h(t-u)\,N^h(du)\biggr)^+
= \biggl(\lambda+\int_{(-L(h),t)} h(t-u)\,N^h(du)\biggr)^+\,.
\end{equation}
In particular $N^h|_{(0,+\infty)}$ depends only on the restriction
$N^0|_{(-L(h),0]}$ of the initial condition.
Recall that the shift operator $S_t$ is defined in \eqref{def:shift} and \eqref{def:St}. Note that if $t,s\geq 0$ then $S_{s+t}N^h=S_t S_s N^h=S_s S_tN^h$.
Let $A<\infty$ be such that $A \ge L(h)$.
Consider the $(\mathcal{F}_t)$-adapted process $X=(X_t)_{t\ge 0}$ defined by
\begin{equation}
\label{aux-proc}
X_t = (S_tN^h ) |_{(-A,0]}=N^h|_{(t-A,t]}(\cdot+t) \,,
\end{equation}
i.e.,
\begin{equation*}
\begin{array}{ccccl}
X_t & : & \mathcal{B}((-A,0]) & \rightarrow & \mathbb{R}_+\\
& & B & \mapsto & X_t(B) = N^h|_{(t-A,t]}(B+t)\,.
\end{array}
\end{equation*}
The measure $X_t$ is the point process $N^h$ in the time window $(t-A,t]$, shifted back to $(-A,0]$.
This is a function of $N^h|_{(-A,+\infty)}$.
Using Equation \eqref{def:Lambda-A} and the remark below it, we see that the law of $N^h|_{(-A,+\infty)}$ depends on the initial condition $N^0$ only through $N^0|_{(-A,0]}$. Therefore, with abuse of notation, when dealing with the process $(X_t)_{t \ge 0}$ we shall use the notations $\mathbb{P}_\mathfrak{m}$ and $\mathbb{E}_\mathfrak{m}$
even when $\mathfrak{m}$ is a law on $\mathcal{N}((-A,0])$,
and $\mathbb{P}_{\nu}$ and $\mathbb{E}_{\nu}$ even when $\nu$ is an element of $\mathcal{N}((-A,0])$.
Note that $X$ depends on $A$, and that we omit this in the notation.
\begin{prop}\label{prop:XMarkov}
Suppose that Assumption \ref{hyp:h} holds, and let $A<\infty$ be such that $A \ge L(h)$. Then $(X_t)_{t\geq 0}$ defined in~\eqref{aux-proc}
is a strong $(\mathcal{F}_t)_{t\ge0}$-Markov process
with initial condition $X_0=N^0|_{(-A,0]}$ and sample paths in the
Skorohod space $\mathbb{D}(\mathbb{R}_+,\mathcal{N}((-A,0]))$.
\end{prop}
\begin{proof}
This follows from the fact that $N^h$ is the unique solution of Eq.~\eqref{eq:Ng}.
Indeed, let $T$ be a stopping time. On $\{T<\infty\}$, by definition
\[
X_{T+t} = (S_{T+t}N^h ) |_{(-A,0]} = (S_t S_T N^h ) |_{(-A,0]}\,.
\]
Using that $N^h$ satisfies Eq.~\eqref{eq:Ng} driven by the process $Q$, we have
\begin{align*}
S_T N^h & = S_T (N^h|_{(-\infty,T]}) +S_T (N^h|_{(T,+\infty)})\\
& = (S_T N^h)|_{(-\infty,0]} +\int_{(T,+\infty)\times (0,+\infty)} \delta_{u-T} \,{\mathchoice{\rm 1\mskip-4mu l}{\rm 1\mskip-4mu l}{\rm 1\mskip-4mu l}{\rm 1\mskip-4mu l}}_{\{\theta\leq \Lambda^h(u)\}}\,Q(du,d\theta)\\
& = (S_T N^h)|_{(-\infty,0]} +\int_{(0,+\infty)\times (0,+\infty)} \delta_{v}\,{\mathchoice{\rm 1\mskip-4mu l}{\rm 1\mskip-4mu l}{\rm 1\mskip-4mu l}{\rm 1\mskip-4mu l}}_{\{\theta\leq \widetilde{\Lambda}^h(v)\}} \ S_T Q(dv,d\theta),
\end{align*}
where $S_T Q$ is the (randomly) shifted process with bivariate cumulative distribution function given by
\begin{equation}
\label{shift-Q}
S_TQ((0,t]\times (0,a]) = Q((T,T+t]\times (0,a])\,,
\qquad
t,a>0,
\end{equation}
and where for $v>0$,
\begin{align*}
\widetilde{\Lambda}^h(v)& = \Lambda^h(v+T)
= \Big(\lambda+\int_{(-\infty,v)}h(v-s)S_T N^h(ds)\Big)^+.
\end{align*}
This shows that $S_T N^h$ satisfies Eq.~\eqref{eq:Ng} driven by $S_TQ$ with initial condition $(S_T N^h)|_{(-\infty,0]}$. Moreover, since $A\ge L(h)$, $S_TN^h|_{(0,+\infty)}$ actually depends only on $(S_T N^h) |_{(-A,0]} \triangleq X_T$.\\
Let us now condition on $\{T<\infty\}$ and on $\mathcal{F}_T$. Since $Q$ is a $(\mathcal{F}_t)_{t\ge0}$-Poisson point process with unit intensity,
$S_TQ$ is a $(\mathcal{F}_{T+t})_{t\ge0}$-Poisson point process with unit intensity, see Lemma~\ref{lem:Q} for this classic fact.
In particular it is independent of the $\mathcal{F}_T$-measurable random variable $X_T$. Additionally, $X_T$ satisfies Assumption \eqref{cond_N0}, which becomes in this case: for all $r>0$
$$\int_0^r \int_{(-A,0]}h^+(u-s) (S_T N^h)(ds) \ du < +\infty \qquad \mathbb{P}_{\mathfrak{m}}\textnormal{-a.s.}$$
We have indeed that:
\begin{align}
\int_0^r \int_{(-A,0]}&h^+(u-s) (S_TN^h)(ds) du \nonumber \\
& = \int_0^r \int_{(-A+T,T]}h^+(T+u-s) N^h(ds) \ du \nonumber \\
&= \int_T^{T+r} \int_{(-A+T,T]}h^+(v-s) N^h(ds) \ dv \nonumber \\
&\le \int_T^{T+r} \int_{(-\infty,0]}h^+(v-s) N^0(ds) \ dv + \int_T^{T+r} \int_{(0,T]}h^+(v-s) N^h(ds) \ dv \nonumber \\
& \le \int_T^{T+r} \int_{(-\infty,0]}h^+(v-s) N^0(ds) \ dv + \|h^+\|_1 N^h(0,T] \nonumber\\%\label{etape36} \\
&< +\infty\qquad \mathbb{P}_{\mathfrak{m}}\textnormal{-a.s.},\nonumber
\end{align}
since the distribution $\mathfrak{m}$ of $N^0$ satisfies \eqref{cond_N0}, and since we have shown at the end of the proof of Proposition \ref{prop:couplage} that $\mathbb{E}_{\mathfrak{m}}(N^h(0,t]) < +\infty$ for all $t >0$.
Thus the assumptions of Proposition~\ref{prop:couplage} are satisfied, which yields that $(X_{T+t} )_{t\ge0}$ is the pathwise unique, and hence weakly unique, strong solution of Eq.~\eqref{eq:Ng} started at $X_T$ and driven by the
$(\mathcal{F}_{T+t})_{t\ge0}$-Poisson point process $S_TQ$. Hence, it is a process started at $X_T$
which is a $(\mathcal{F}_{T+t})_{t\ge0}$-Markov process with the same transition semi-group as $(X_t)_{t\geq 0}$.
If we wish to be more specific, for every bounded Borel function $F$ on $\mathbb{D}(\mathbb{R}_+,\mathcal{N}((-A,0]))$
we set
\[
\Pi F(x) \triangleq \mathbb{E}_x(F((X_t)_{t\geq 0}))
\]
and note that existence and uniqueness in law for~\eqref{eq:Ng} yield that
\[
\mathbb{E}_x(F((X_t)_{t\geq 0}) \,|\, T<\infty, \mathcal{F}_T) = \Pi F(X_T)\,.
\]
This is the strong Markov property we set out to prove.
\end{proof}
\subsection{Renewal of $X$ at $\emptyset$}
Using $(X_t)_{t\geq 0}$ and Proposition~\ref{prop:XMarkov}, we obtain that if $T$ is a stopping time such that $N^h|_{(T-A,T]}=\emptyset$, then $N^h|_{(T,+\infty)}$ is independent of $N^h|_{(-\infty,T]}$ and behaves as $N^h$ started from $\emptyset$ and translated by $T$. Such renewal times lead to an interesting decomposition of $N^h$, enlightening its dependence structure.
The successive hitting times of $\emptyset \in \mathcal{N}((-A,0])$ for the Markov process $X$ are such renewal times.
This subsection is devoted to the study of their properties.
Recall that we have introduced in \eqref{def:tau} the first hitting time of $\emptyset \in \mathcal{N}((-A,0])$
for $X$, given by
\[
\tau \triangleq \inf\{t>0 : X_{t-}\neq \emptyset, X_{t} =\emptyset\}=\inf\{t>0 : N^h[t-A,t)\neq 0, N^h(t-A,t] =0\}\,.
\]
It depends on $A$, but this is omitted in the notation. It is natural to study whether $\tau$ is finite or not. When the reproduction function $h $ is nonnegative, we introduce the queue $(Y_t)_{t \ge 0}$ defined by \eqref{def:queue},
and its return time to zero $\mathcal{T}_1$ defined in \eqref{def:tcalk}. The following result will yield the finiteness of $\tau$.
\begin{lem}
\label{lem:egalite_temps}
Suppose that Assumption \ref{hyp:h} holds, and let $A<\infty$ be such that $A \ge L(h)$. Let $\tau$ and $\mathcal{T}_1$ be defined in \eqref{def:tau} and \eqref{def:tcalk}.
If $h$ is nonnegative then $\mathbb{P}_{\emptyset}(\tau = \mathcal{T}_1)=1$.
\end{lem}
\begin{proof}
We use the notations defined in Subsection \ref{subsec:cluster}. To begin with, we remark that $\tau >V_1$.
First, let us consider $t$ such that $V_1<t< \mathcal{T}_1$. By definition, there exists $i\ge1$, such that
$$V_i\le t\le V_i+\widetilde{H}_i(A) =V_i+H_i+A.$$
Since the interval $[V_i, V_i+H_i]$ corresponds to the cluster of descendants of $V_i$, there exists a sequence of points of $N^h$ in $[V_i, V_i+H_i]$ which are distant by less than $L(h)$ and thus by less than $A$. Therefore, if $t\in[V_i, V_i+H_i]$, then $N^h(t-A,t]>0$.
If $t\in[ V_i+H_i, V_i+H_i+A]$, then $N^h(t-A,t]>0$ as well, since $V_i+H_i$ is an atom of $N^h$ (it is the last birth time in the Galton-Watson tree stemming from $V_i$, by definition of $H_i$).
Since this reasoning holds for any $t \in (V_1,\mathcal{T}_1)$, it follows that $\tau \ge\mathcal{T}_1$.
Conversely, for any $t \in [V_1,\tau)$, by definition of $\tau$ necessarily $N^h(t-A,t]>0$. Thus there exists an atom of $N^h$ in $(t-A,t]$, and from the cluster representation, there exists $i \ge 1$ such that this atom belongs to the cluster of $V_i$, hence to $[V_i,V_i+H_i]$. We easily deduce that
$$V_i\le t\le V_i+H_i+A$$ and thus $Y_t\ge1$, for all $t \in [V_1,\tau)$. This proves that $\tau\le \mathcal{T}_1$ and concludes the proof.
\end{proof}
To extend the result of finiteness of $\tau$ when no assumption is made on the sign of $h$, we use the coupling between $N^h$ and $N^{h^+}$ stated in Proposition \ref{prop:couplage}, \ref{prop:couplage-ii}).
\begin{prop}\label{prop:tauleqtau^+}
Suppose that Assumption \ref{hyp:h} holds.
Let $A<\infty$ be such that $A \ge L(h)$.
Let $\tau$ be defined in \eqref{def:tau}, and $\tau^+$ be defined similarly with $h^+$ instead of $h$. Then
\[
\mathbb{P}_{\mathfrak{m}}(\tau \le \tau^+)=1\,.
\]
\end{prop}
\begin{proof}
We use the coupling $(N^h,N^{h^+})$ of Proposition~\ref{prop:couplage},~\ref{prop:couplage-ii}), which satisfies
$N^h \le N^{h^+}$.
If $\tau=+\infty$, since the immigration rate $\lambda$ is positive, for {any} $t\ge0$ necessarily $N^h(t-A,t]>0$ and thus $N^{h^+}(t-A,t]>0$, which implies that $\tau^+=+\infty$ also, a.s.
Now, it is enough to prove that $\tau\leq \tau^+$ when both times are finite. In this case, since $N^{h^+}$ is locally finite a.s., $\tau^+-A$ is an atom of $N^{h^+}$ such that $N^{h^+}(\tau^+-A,\tau^+]=0$. This implies that $N^{h}(\tau^+-A,\tau^+]=0$.
If $\tau^+-A $ is also an atom of $N^h$, then $\tau\leq \tau^+$.
Else, we first prove that $N^h(-A,\tau^+-A)>0$. The result is obviously true if $N^0\not= \emptyset$. When $N^0=\emptyset$, the first atoms of $N^h$ and $N^{h^+}$ coincide because $\Lambda^h_0=\Lambda_0^{h^+}$, where these functions are defined in \eqref{def:Lambda0}. This first atom is necessarily before $\tau^+-A$, and hence $N^h(-A,\tau^+-A)>0$.
The last atom $U$ of $N^h$ before $\tau^+-A$ is thus well defined, and necessarily satisfies $N^h(U,U+A]=0$ and $N^h [U, U+A)\neq 0$ so that
$\tau\leq U+A\leq \tau^+$. We have thus proved that $\tau\leq \tau^+$, $\mathbb{P}_{\mathfrak{m}}$-a.s.
\end{proof}
We now prove that the regeneration time $\tau$ admits an exponential moment which ensures that it is finite almost surely. The results will rely on the coupling between $N^h$ and $N^{h^+}$ and on the results obtained in Section \ref{subsec:cluster}. We define
\begin{equation*}
\gamma^+\triangleq\frac{\|h^+\|_1-\log(\|h^+\|_1)-1}{L(h^+)}\,.
\end{equation*}
\begin{prop}\label{prop:tau}
Suppose that Assumption \ref{hyp:h} holds.
Let $A<\infty$ be such that $A \ge L(h)$, and assume that $\mathbb{E}_\mathfrak{m}(N^0(-A,0])<+\infty$.
Then $\tau$ given by \eqref{def:tau} satisfies
\[
\forall \alpha < \min(\lambda,{\gamma^+})\,,
\quad
\mathbb{E}_{\mathfrak{m}}(\mathrm{e}^{\alpha \tau}) < +\infty\,.
\]
In particular $\tau$ is finite, $\mathbb{P}_{\mathfrak{m}}$-a.s., and $\mathbb{E}_{\mathfrak{m}}(\tau) < +\infty$.
\end{prop}
\begin{proof}
Using Proposition \ref{prop:tauleqtau^+}, it is sufficient to prove this for $\tau^+$. When $\mathfrak{m}$ is the Dirac measure at $\emptyset$, the result is a direct consequence of Lemma \ref{lem:egalite_temps} and Proposition \ref{prop:decrR_1}. We now turn to the case when $\mathfrak{m}$ is different from $\delta_{\emptyset}$. The proof is separated in several steps.
\paragraph{Step 1: Analysis of the problem.}
To control $\tau^+$, we distinguish the points of $N^h$ coming from the initial condition from the points coming from ancestors arrived after zero. We thus introduce $K= N^0((-A,0])$, the number of atoms of $N^0$, $(V_i^0)_{1\le i \le K}$, these atoms, and $(\widetilde{H}_i^0(A))_{1\le i \le K}$ the durations such that $V_i^0 + \widetilde{H}_i^0(A)-A$ is the time of birth of the last descendant of $V_i^0$. Note that $V_i^0$ has no offspring before time~$0$, so that the reproduction function of $V_i^0$ is a truncation of $h$. We finally define the time when the influence of the past before $0$ has vanished,
given by
$$E=\max_{1 \le i \le K}(V_i^0+\widetilde{H}_i^0(A)),$$
with the convention that $E=0$ if $K=0$. If $K>0$, since $V_i^0\in (-A,0]$ and $\widetilde{H}_i^0(A)\geq A$, we have $E>0$. Note that $\tau^+ \ge E$. \\
We now consider the sequence $(V_i)_{i \ge 1}$ of ancestors arriving after time $0$ at rate $\lambda$. We recall that they can be viewed as the arrivals of customers in an $M/G/\infty$ queue with service times distributed as $\widetilde{H}_1(A)$. In our case, the queue may not be empty at time $0$, when $E>0$. In that case, the queue returns to $0$ when all the customers arrived before time $0$ have left the system (which is the case at time $E$) and when all the busy periods containing the customers arrived between times $0$ and $E$ are over. The first hitting time of $0$ for the queue is thus equal to
\begin{equation}\label{reecriture:tau+}
\tau^+ = \left\{ \begin{array}{ccl}
E & \mbox{ if } & Y_E=0\,,
\\
\inf\{t \ge E : Y_t=0 \} & \mbox{ if } & Y_E>0\,,
\end{array}\right.
\end{equation}
where $Y_t$ is given in \eqref{def:queue} by $Y_t=\sum_{k : 0\leq V_k\le t} {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{V_k+\widetilde{H}_k(A)>t\}}\,.$
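The queue representation above can be illustrated numerically. The sketch below is only an illustration, not part of the proof: it starts from the empty queue (i.e.\ $E=0$), takes exponential service times as a stand-in for the law of $\widetilde{H}_1(A)$ (which is only known to have an exponentially decaying tail), and the function name \texttt{first\_regeneration} is ours.

```python
import random

def first_regeneration(lam, service, rng, horizon=1e4):
    """One sample of the first return time to 0 of the M/G/infinity queue
    started empty, after its first arrival (analogue of tau under P_emptyset).
    `service(rng)` draws one service duration."""
    t, busy_end = 0.0, None
    while t < horizon:
        t += rng.expovariate(lam)           # next arrival of the Poisson(lam) input
        if busy_end is not None and t > busy_end:
            return busy_end                 # the queue emptied before this arrival
        end = t + service(rng)
        busy_end = end if busy_end is None else max(busy_end, end)
    return busy_end                         # safety cutoff, not reached in practice
```

For exponential services of mean $m$, the classical idle/busy-cycle computation for the $M/G/\infty$ queue gives $\mathbb{E}_\emptyset(\tau)=\mathrm{e}^{\lambda m}/\lambda$, which a Monte Carlo average of \texttt{first\_regeneration} reproduces; this is consistent with the value $\pi_A\{\emptyset\}=1/(\lambda\,\mathbb{E}_\emptyset(\tau))$ obtained below.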
\paragraph{Step 2: Exponential moments of $E$.}
In \eqref{reecriture:tau+}, $E$ depends only on $N^0$ and $(Y_t)_{t\geq 0}$ only on the arrivals and service times of customers entering the queue after time $0$. A natural idea is then to condition with respect to $E$, and for this it is important to gather estimates on the moments of $E$.
Since $V_i^0\leq 0$, we have that
$$0\leq E\leq \max_{1\leq i\leq K} \widetilde{H}^0_i(A).$$
The truncation mentioned in Step 1 implies that the $\widetilde{H}^0_i(A)$ are stochastically dominated by independent random variables distributed as $\widetilde{H}_1(A)$, which we denote by $\bar{H}^0_i(A)$. Thus for $t>0$,
using \eqref{eq:queueH1tilde},
\begin{align*}
\mathbb{P}_{\mathfrak{m}}(E>t)
&\leq \mathbb{P}_{\mathfrak{m}}\big(\max_{1\leq i\leq K} \bar{H}^0_i(A)>t\big)\\
&= 1-\mathbb{E}_{\mathfrak{m}}\Big(\big(1-\mathbb{P}(\widetilde{H}_1(A)> t)\big)^K\Big)\\
&\leq 1 -\mathbb{E}_{\mathfrak{m}}\big((1-C \mathrm{e}^{-{\gamma^+} t})^K\big)\,.
\end{align*}
Thus there exists $t_0>0$ such that for any $t>t_0$,
\begin{align}
\mathbb{P}_{\mathfrak{m}}(E>t) \leq C \mathbb{E}_\mathfrak{m}(N^0(-A,0]) \mathrm{e}^{-{\gamma^+}t}.\label{eq:controleE}
\end{align}As a corollary, we have for any $\beta \in (0,{\gamma^+})$ that
\begin{equation}\label{eq:momentexpE}
\mathbb{E}_\mathfrak{m}\big(\mathrm{e}^{\beta E}\big)<+\infty\,.
\end{equation}
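The step leading to \eqref{eq:controleE} uses Bernoulli's inequality $(1-p)^K\ge 1-Kp$ with $p=C\mathrm{e}^{-\gamma^+ t}$, which lies in $[0,1]$ once $t>t_0$; taking expectations then yields $1-\mathbb{E}_\mathfrak{m}\big((1-p)^K\big)\le p\,\mathbb{E}_\mathfrak{m}(K)$ with $K=N^0(-A,0]$. A quick numerical check of the deterministic inequality (illustrative only):

```python
# Bernoulli's inequality: 1 - (1 - p)**K <= K * p for p in [0, 1], integer K >= 0.
for K in range(60):
    for i in range(101):
        p = i / 100.0
        assert 1.0 - (1.0 - p) ** K <= K * p + 1e-12
```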
\paragraph{Step 3: Estimate of the tail distribution of $\tau^+$.}
For $t>0$, we have
\begin{align*}
\mathbb{P}_{\mathfrak{m}}(\tau^+>t)& = \mathbb{P}_{\mathfrak{m}}\big(\tau^+>t,\,E> t\big)+\mathbb{P}_{\mathfrak{m}}\big(\tau^+>t,\,E\leq t\big)\\
& \leq\mathbb{P}_{\mathfrak{m}}(E> t)+\mathbb{E}_{\mathfrak{m}}\Big({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{E\leq t\}}\, \mathbb{P}_{\mathfrak{m}}\big(\tau^+>t \,|\, E\big)\Big).
\end{align*}The first term is controlled by \eqref{eq:controleE}. For the second term, we use Proposition \ref{prop:appendice-suitecarl} which is a consequence of Theorem \ref{thm:carl}. For this, let us introduce a constant $\kappa$ such that $\kappa <{\gamma^+}$ if ${\gamma^+} \leq \lambda$ and $\kappa=\lambda$ if $\lambda<{\gamma^+}$. We have:
\begin{equation*}
\mathbb{E}_{\mathfrak{m}}\Big({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{E\leq t\}} \, \mathbb{P}_{\mathfrak{m}}\big(\tau^+>t \,|\, E\big)\Big) \leq \mathbb{E}_{\mathfrak{m}}\big({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{E\leq t\}} \, \lambda C E\,\mathrm{e}^{-\kappa(t-E)}\big)= \lambda C \mathrm{e}^{-\kappa t} \mathbb{E}_\mathfrak{m}\big({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{E\leq t\}} \, E\,\mathrm{e}^{\kappa E}\big).
\end{equation*}
Since $\kappa<{\gamma^+}$, it is always possible to choose $\beta\in (\kappa,{\gamma^+})$ such that \eqref{eq:momentexpE} holds, which entails that
$\mathbb{E}_\mathfrak{m}\big({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{E\leq t\}} \,E\,\mathrm{e}^{\kappa E}\big)$ can be bounded by a finite constant independent of $t$.
Gathering all the results,
\begin{align*}
\mathbb{P}_{\mathfrak{m}}(\tau^+>t) & \leq C \mathbb{E}_\mathfrak{m}(N^0(-A,0]) \mathrm{e}^{-{\gamma^+}t}+ \lambda C' \mathrm{e}^{-\kappa t}=O\big(\mathrm{e}^{-\kappa t}\big).
\end{align*}This yields that $\mathbb{E}_\mathfrak{m}(\mathrm{e}^{\alpha \tau^+})<+\infty$ for any $\alpha<\kappa$, i.e. $\alpha<\min (\lambda,{\gamma^+})$.
\end{proof}
\begin{thm}
\label{thm:exist-uniq-inv-law}
Suppose that Assumption \ref{hyp:h} holds.
The strong Markov process $X=(X_t)_{t\ge 0}$ defined by \eqref{aux-proc} admits a unique invariant law, $\pi_A$, defined on $\mathcal{N}((-A,0])$ by \eqref{def:piA}:
for every Borel nonnegative function $f$,
\[
\pi_A f = \frac1{\mathbb{E}_\emptyset(\tau)} \mathbb{E}_\emptyset\biggl(\int_0^{\tau} f(X_t) \,dt\biggr)\,.
\]
Moreover, $\pi_A\{\emptyset\} = 1/(\lambda \mathbb{E}_\emptyset(\tau))$
and thus the null measure~$\emptyset$ is a positive recurrent state in the classical sense for $X$.
\end{thm}
\begin{proof}
We recall the classical proof.
Let $(P_s)_{s\ge 0}$ denote the semi-group of $X$ and $f$ be a Borel nonnegative function. Then
\[
\pi_A P_s f = \frac1{\mathbb{E}_\emptyset(\tau)} \mathbb{E}_\emptyset\biggl(\int_0^{\tau} P_s f(X_t) \,dt\biggr)
= \frac1{\mathbb{E}_\emptyset(\tau)} \int_0^\infty \mathbb{E}_\emptyset\bigl({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{\tau>t\}}P_sf(X_t)\bigr)\,dt\,.
\]
Using the Markov property at time $t$ and since $\{\tau>t\}\in \mathcal{F}_t$,
\[
\mathbb{E}_\emptyset\bigl({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{\tau>t\}}P_sf(X_t)\bigr)
=\mathbb{E}_\emptyset\bigl({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{\tau>t\}}\mathbb{E}_\emptyset\bigl(f(X_{t+s}) \mid \mathcal{F}_t \bigr)\bigr)
=\mathbb{E}_\emptyset\bigl({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{\tau>t\}} f(X_{t+s})\bigr)
\]
and thus
\[
\pi_A P_s f
= \frac1{\mathbb{E}_\emptyset(\tau)}\int_0^\infty\mathbb{E}_\emptyset\bigl({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{\tau>t\}} f(X_{t+s})\bigr) \,dt
= \frac1{\mathbb{E}_\emptyset(\tau)} \mathbb{E}_\emptyset\biggl(\int_0^{\tau} f(X_{t+s})\,dt\biggr) \,.
\]
Using the strong Markov property at time $\tau$,
\begin{align*}
\mathbb{E}_\emptyset\biggl(\int_0^{\tau} f(X_{t+s})\,dt\biggr)
&= \mathbb{E}_\emptyset\biggl(\int_s^{\tau+s} f(X_t)\,dt\biggr)\\
&= \mathbb{E}_\emptyset\biggl(\int_s^{\tau} f(X_t)\,dt\biggr) + \mathbb{E}_\emptyset\biggl(\int_{\tau}^{\tau+s} f(X_t)\,dt\biggr)\\
&= \mathbb{E}_\emptyset\biggl(\int_s^{\tau} f(X_t)\,dt\biggr) + \mathbb{E}_\emptyset\biggl(\int_{0}^{s} f(X_t)\,dt\biggr)\\
&= \mathbb{E}_\emptyset\biggl(\int_0^{\tau} f(X_t)\,dt\biggr).
\end{align*}
Thus $\pi_A P_s f=\pi_A f$. We conclude that $\pi_A$ is an invariant law for $(P_s)_{s\ge 0}$.
The proof of its uniqueness is an immediate consequence of Theorem~\ref{thm:point-erg+laws}~\ref{cv-to-equ}), which will be proved later.
Indeed, for any invariant law $\pi$ of $X$ it holds that
\[
\pi =\mathbb{P}_{\pi}(X_t\in \cdot)
\xrightarrow[t\to\infty]{\textnormal{total variation}}
\mathbb{P}_{\pi_A}(X_0\in \cdot) = \pi_A.
\]
From the definition of $\pi_A$, we obtain that
\[
\pi_A\{\emptyset\} = \frac{1}{\mathbb{E}_\emptyset(\tau)} \mathbb{E}_\emptyset \bigg(\int_0^\tau {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{\emptyset\}}(X_t)\,dt\bigg)\,.
\]
Under $\mathbb{P}_\emptyset$, an excursion $(X_t)_{t\in (0,\tau]}$ proceeds as follows. First, $X_t=\emptyset$
for $t\in (0,U_1^h)$ with $U_1^h$ the first atom of $N^h$ defined in \eqref{def:U1h}. Under $\mathbb{P}_\emptyset$, $U_1^h$ follows an exponential distribution with expectation $1/\lambda$. Then, $X_t\not=\emptyset$ for $t\in [U_1^h,\tau)$
by definition of $\tau$. We deduce from this that
\[
\pi_A\{\emptyset\}= \frac{\mathbb{E}_\emptyset \big(U_1^h\big)}{\mathbb{E}_\emptyset(\tau)}= \frac{1}{\lambda \mathbb{E}_\emptyset(\tau)}\,.
\]
This concludes the proof.
\end{proof}
The strong Markov property of $X$ yields a sequence of regeneration times $(\tau_k)_{k\geq 0}$, which are the successive visits of $X$ to the null measure $\emptyset$, defined as follows (the time $\tau_0$ has already been introduced in \eqref{def:tau0}):
\begin{align*}
\tau_0 &= \inf\{t\ge 0 : X_t =\emptyset\}
&&\text{(First entrance time of $\emptyset$)}
\\
\tau_k &= \inf\{t>\tau_{k-1} : X_{t-}\neq \emptyset, X_{t} =\emptyset\}\,,
\quad k\ge1\,.
&&\text{(Successive return times at $\emptyset$)}
\end{align*}
They provide a useful decomposition of the path of $X$ into i.i.d.\ excursions:
\begin{thm}
\label{thm:reg-seq}
Let $N^h$ be a Hawkes process satisfying Assumption~\ref{hyp:h}, and $A \ge L(h)$.
Consider the Markov process $X $ defined in \eqref{aux-proc}. Under $\mathbb{P}_\mathfrak{m}$ the following holds:
\begin{enumerate}[a)]
\item
\label{tau-k-finite}
The $\tau_k$ for $k\ge0$ are finite stopping times, a.s.
\item
\label{delay-ind-cycle}
The delay $(X_{t})_{ t \in [0,\tau_0) }$ is independent of the cycles $(X_{\tau_{k-1} + t})_{ t \in [0, \tau_k- \tau_{k-1}) }$ for $k\ge1$.
\item
\label{cycles-iid}
These cycles are i.i.d.\ and distributed as $(X_t)_{t \in[0, \tau)}$ under $\mathbb{P}_\emptyset$.
In particular their durations $(\tau_k-\tau_{k-1})_{k\geq 1}$ are distributed as $\tau$ under $\mathbb{P}_\emptyset$,
so that $\lim_{k\rightarrow +\infty}\tau_k=+\infty$, $\mathbb{P}_\mathfrak{m}$-a.s.
\end{enumerate}
\end{thm}
\begin{proof}
The above items follow classically from the strong Markov property of $X$.
Let us first prove the finiteness of the return times $\tau_k$. For any $\mathfrak{m}$, from the definitions of $\tau_0$ and $\tau$, we have that $\tau_0\leq \tau$, $\mathbb{P}_\mathfrak{m}$-a.s. Then, $\mathbb{P}_\mathfrak{m}(\tau_0<+\infty)=1$ follows from Proposition \ref{prop:tau}.
For $k\geq 1$, using the strong Markov property of $X$, we have for any $\mathfrak{m}$:
\begin{align*}
\mathbb{P}_\mathfrak{m}(\tau_k<+\infty)
&= \mathbb{E}_\mathfrak{m} \big( {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{\tau_{k-1}<+\infty\}} \, \mathbb{P}_{X_{\tau_{k-1}}}(\tau<+\infty)\big)\\
&= \mathbb{E}_\mathfrak{m}\big({\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l}}_{\{\tau_{k-1}<+\infty\}} \,\mathbb{P}_\emptyset(\tau<+\infty)\big)\\
& = \mathbb{P}_\mathfrak{m}(\tau_{k-1}<+\infty)=\cdots = \mathbb{P}_{\mathfrak{m}}(\tau_0<+\infty)=1.
\end{align*}
Let us now prove \ref{delay-ind-cycle}) and \ref{cycles-iid}).
It is sufficient to consider $(X_t)_{t\in [0,\tau_0)}$, $(X_{\tau_0+t})_{t\in [0,\tau_1-\tau_0)}$ and $(X_{\tau_1+t})_{t\in [0,\tau_2-\tau_1)}$. Let $F_0$, $F_1$, and $F_2$ be three measurable bounded real functions on $\mathbb{D}(\mathbb{R}_+,\mathcal{N}((-A,0]))$. Then, using the strong Markov property successively at $\tau_1$ and $\tau_0$, we obtain:
\begin{align*}
&\mathbb{E}_{\mathfrak{m}} \Big(F_0\big((X_t)_{t\in [0,\tau_0)}\big)\, F_1\big((X_{\tau_0+t})_{t\in [0,\tau_1-\tau_0)}\big) \,F_2\big((X_{\tau_1+t})_{t\in [0,\tau_2-\tau_1)}\big)\Big)
\\
&\quad = \mathbb{E}_{\mathfrak{m}} \Big(F_0\big((X_t)_{t\in [0,\tau_0)}\big)\Big) \,\mathbb{E}_{\emptyset}\Big(F_1\big((X_{t})_{t\in [0,\tau)}\big)\Big) \,\mathbb{E}_\emptyset\Big( F_2\big((X_{t})_{t\in [0,\tau)}\big)\Big).
\end{align*}
This concludes the proof.
\end{proof}
\section{Proof of the main results}
We translate the statements of the main results in terms of the Markov process $X$.
Let $T>0$ be fixed, and define
\begin{equation}
\label{def:kt}
K_T\triangleq \max\{k\ge0 : \tau_k\le T\} \,,
\end{equation}
which is well defined since the sequence $(\tau_k)_{k\geq 0}$ increases to infinity, and goes to infinity with $T$ since each $\tau_k$ is finite a.s.
For a locally bounded Borel function $f$ on $\mathcal{N}((-A,0])$ we define the random variables
\begin{equation}
\label{integrals}
I_k f \triangleq \int_{\tau_{k-1}}^{\tau_k} f (X_t)\,dt\,,
\quad
k\ge1\,,
\end{equation}
which are finite a.s., i.i.d., and of the same law as $\int_{0}^{\tau} f (X_t)\,dt$ under $\mathbb{P}_\emptyset$,
see Theorem~\ref{thm:reg-seq}.
\subsection*{Proof of Theorem~\ref{thm:point-erg+laws}~\ref{lln})}
Assume first that $f\ge0$. Then, with the notation~\eqref{def:kt} and~\eqref{integrals},
\begin{equation*}
\frac1{K_T}\sum_{k=1}^{K_T} I_k f
\le \frac1{K_T}\int_0^T f(X_t)\,dt
\le
\frac1{K_T}\int_0^{\tau_0} f(X_t)\,dt + \frac1{K_T}\sum_{k=1}^{K_T+1} I_kf\,.
\end{equation*}
Since $f$ is locally bounded, $\int_0^{\tau_0} f(X_t)\,dt$ is finite, $\mathbb{P}_\mathfrak{m}$-a.s.,
thus
\[
\mathbb{P}_\mathfrak{m}\left(\frac1{K_T}\int_0^{\tau_0} f(X_t)\,dt \xrightarrow[T\to\infty]{} 0\right)=1.
\]
The strong law of large numbers applied to the i.i.d. sequence $(I_k f)_{k\geq 1}$
yields that
\[
\frac1{K_T}\sum_{k=1}^{K_T+1} I_kf \xrightarrow[T\to\infty]{\mathbb{P}_\mathfrak{m}-\text{a.s.}} \mathbb{E}_\mathfrak{m}(I_1f)
= \mathbb{E}_\emptyset \bigg(\int_0^\tau f(X_t)\,dt\bigg)\,.
\]
Gathering the two last limits,
\begin{equation*}
\frac1{K_T}\int_0^T f(X_t)\,dt \xrightarrow[T\to\infty]{\mathbb{P}_\mathfrak{m}-\text{a.s.}}
\mathbb{E}_\emptyset\biggl(\int_0^{\tau} f(X_t) \,dt\biggr) = \mathbb{E}_\emptyset(\tau) \,\pi_A f\,.
\end{equation*}
Choosing $f=1$ yields that
\begin{equation}
\label{t-et-kt}
\frac{T}{K_T} \xrightarrow[T\to\infty]{\mathbb{P}_\mathfrak{m}-\textnormal{a.s.}} \mathbb{E}_\emptyset(\tau) <\infty\,.
\end{equation}
Dividing the first limit by the second concludes the proof for $f\ge0$.
The case of $\pi_A$-integrable
signed $f$ follows by the decomposition $f=f^+ - f^-$.
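The limit \eqref{t-et-kt} is the elementary renewal theorem for the cycle lengths $(\tau_k-\tau_{k-1})_{k\ge1}$. The following sketch illustrates it on simulated i.i.d.\ cycles (exponential cycle lengths are taken as a stand-in for the law of $\tau$ under $\mathbb{P}_\emptyset$; the name \texttt{renewal\_count} is ours):

```python
import random

def renewal_count(T, cycle, rng):
    """K_T: number of complete renewal cycles by time T (no delay, tau_0 = 0)."""
    t, k = 0.0, 0
    while True:
        t += cycle(rng)        # draw the next cycle length tau_k - tau_{k-1}
        if t > T:
            return k
        k += 1
```

With cycles of mean $2$ and $T=5\cdot 10^4$, the ratio $T/K_T$ is close to $2$, as \eqref{t-et-kt} predicts.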
\subsection*{Proof of Theorem~\ref{thm:point-erg+laws}~\ref{cv-to-equ})}
This follows from a general result in Thorisson~\cite[Theorem 10.3.3 p.351]{Thorisson2000},
which yields that if the distribution of $\tau$ under $\mathbb{P}_\emptyset$ has a density with respect to the Lebesgue measure
and if $\mathbb{E}_\emptyset(\tau)<+\infty$, then there exists a probability measure $\mathbb{Q}$ on $\mathbb{D}(\mathbb{R}_+,\mathcal{N}((-A,0]))$ such that,
for any initial law $\mathfrak{m}$,
\begin{equation*}
\mathbb{P}_\mathfrak{m}\bigl((X_{t+u})_{u\ge0}\in \cdot \bigr)
\xrightarrow[t\to\infty]{\textnormal{total variation}}
\mathbb{Q}\,.
\end{equation*}
Since $\pi_A$ is an invariant law,
$ \mathbb{P}_{\pi_A}\bigl((X_{t+u})_{u\ge0}\in \cdot \bigr) = \mathbb{P}_{\pi_A}(X\in \cdot) $ for every $t\geq 0$. Hence, taking $\mathfrak{m}=\pi_A$ in the above convergence yields that $\mathbb{Q}=\mathbb{P}_{\pi_A}(X\in \cdot)$.
It remains to check the above assumptions of the theorem.
Proposition \ref{prop:tau} yields that $\mathbb{E}_\emptyset(\tau)<+\infty$.
Moreover, under $\mathbb{P}_{\emptyset}$ we can rewrite $\tau$ as
$$\tau=U_1^h+\inf\big\{t>0 : \ X_{(t+U_1^h)_-}\not= \emptyset \mbox{ and } X_{t+U_1^h}= \emptyset\big\}.$$
Using the strong Markov property, we easily prove the independence of the two terms on the r.h.s.
Since $U_1^h$ has an exponential distribution under $\mathbb{P}_{\emptyset}$, $\tau$ has a density under $\mathbb{P}_\emptyset$.
\subsection*{Proof of Theorem~\ref{thm:clt}}
Let $\tilde{f} \triangleq f -\pi_A f$.
With the notation~\eqref{def:kt} and~\eqref{integrals},
we have the decomposition
\begin{equation}\label{eq:decompo}
\int_0^T \tilde{f}(X_t)\,dt
= \int_0^{\tau_0} \tilde{f}(X_t)\,dt
+ \sum_{k=1}^{K_T} I_k \tilde{f} + \int_{\tau_{K_T}}^T \tilde{f}(X_t)\,dt\,.
\end{equation}
The $I_k \tilde{f}$ are i.i.d. and
are distributed as $\int_{0}^{\tau} \tilde{f} (X_t)\,dt$ under $\mathbb{P}_\emptyset$,
with expectation~$0$ and variance $\mathbb{E}_\emptyset(\tau) \sigma^2(f)$,
see Theorem~\ref{thm:reg-seq}.
Since $f$ is locally bounded, so is $\tilde{f}$ and
\[
\frac1{\sqrt{T}} \int_0^{\tau_0} \tilde{f}(X_t)\,dt \xrightarrow[T\to\infty]{\mathbb{P}_\mathfrak{m}-\text{a.s.}} 0\,.
\]
Now, let $\varepsilon >0$. For arbitrary $a>0$ and $0<u\le T$,
\[
\mathbb{P}_\mathfrak{m}\biggl(\biggl|\int_{\tau_{K_T}}^T \tilde{f}(X_t)\,dt \biggr| > a\biggr)
\le
\mathbb{P}_\mathfrak{m}(T-\tau_{K_T}> u) +
\mathbb{P}_\mathfrak{m}\biggl(\sup_{0\le s \le u}\biggl|\int_{T-s}^T \tilde{f}(X_t)\,dt \biggr| > a\biggr)\,.
\]
But
\[
\mathbb{P}_\mathfrak{m}(T-\tau_{K_T}> u)= 1 - \mathbb{P}_\mathfrak{m}(\exists t \in [T-u,T] : X_{t-}\neq \emptyset, X_t=\emptyset)
\]
and Theorem~\ref{thm:point-erg+laws}~\ref{cv-to-equ}) yields that
\[
\lim_{T\to\infty}\mathbb{P}_\mathfrak{m}(T-\tau_{K_T}> u)
= 1-\mathbb{P}_{\pi_A}(\exists t \in [0,u] : X_{t-}\neq \emptyset, X_t=\emptyset)\,,
\]
so that there exists $u_0$ large enough such that
\[
\lim_{T\to\infty} \mathbb{P}_\mathfrak{m}(T-\tau_{K_T}> u_0) <\frac{\varepsilon}{2}\,.
\]
Moreover Theorem~\ref{thm:point-erg+laws}~\ref{cv-to-equ}) yields that
\[
\lim_{T\to\infty}\mathbb{P}_\mathfrak{m}\biggl(\sup_{0\le s \le u_0}\biggl|\int_{T-s}^T \tilde{f}(X_t)\,dt \biggr| > a\biggr)
=\mathbb{P}_{\pi_A}\biggl(\sup_{0\le s \le u_0}\biggl|\int_0^s \tilde{f}(X_t)\,dt \biggr| > a\biggr)
\]
and thus there exists $a_0$ large enough such that
\[
\lim_{T\to\infty}\mathbb{P}_\mathfrak{m}\biggl(\sup_{0\le s \le u_0}\biggl|\int_{T-s}^T \tilde{f}(X_t)\,dt \biggr| > a_0\biggr) <\frac{\varepsilon}{2}
\]
and hence
\[
\limsup_{T\to\infty} \mathbb{P}_\mathfrak{m}\biggl(\biggl|\int_{\tau_{K_T}}^T \tilde{f}(X_t)\,dt \biggr| > a_0\biggr) < \varepsilon\,.
\]
This implies in particular that
\[
\frac1{\sqrt{T}} \int_{\tau_{K_T}}^T \tilde{f}(X_t)\,dt \xrightarrow[T\to\infty]{\text{probab.}} 0\,.
\]
It now remains to treat the second term in the r.h.s. of \eqref{eq:decompo}. The classic central limit theorem yields that
\[
\frac1{\sqrt{T}} \sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} I_k\tilde{f}
\xrightarrow[T\to\infty]{\textnormal{in law}}
\frac1{\sqrt{\mathbb{E}_\emptyset(\tau)}} \mathcal{N}(0, \mathbb{E}_\emptyset(\tau)\sigma^2(f)) = \mathcal{N}(0, \sigma^2(f))
\]
and we are left to control
\[
\Delta_T \triangleq
\frac1{\sqrt{T}}\sum_{k=1}^{K_T} I_k\tilde{f}
- \frac1{\sqrt{T}}\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} I_k\tilde{f}\,.
\]
Let $\varepsilon>0$ and define
\[
v(T,\varepsilon) \triangleq \{\lfloor (1 -\varepsilon^3)T/\mathbb{E}_\emptyset(\tau)\rfloor, \ldots,
\lfloor (1 +\varepsilon^3)T/\mathbb{E}_\emptyset(\tau)\rfloor\}\,.
\]
Note that $$(1-\varepsilon^3)\frac{T}{\mathbb{E}_\emptyset(\tau)}<\frac{T}{\mathbb{E}_\emptyset(\tau)}<(1+\varepsilon^3)\frac{T}{\mathbb{E}_\emptyset(\tau)}~, $$
which implies that $\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor \in v(T,\varepsilon)$.
In view of~\eqref{t-et-kt}, there exists $t_\varepsilon$ such that if $T\ge t_\varepsilon$
\[
\mathbb{P}_\mathfrak{m}(K_T \in v(T,\varepsilon)) >1-\varepsilon\,.
\]
For $T\ge t_\varepsilon$, we thus have on $\{K_T \in v(T,\varepsilon)\}$ that
\begin{align*}
|\Delta_T |
& \leq \Biggl| \frac{1}{\sqrt{T}} \sum_{k=\lfloor (1 -\varepsilon^3)T/\mathbb{E}_\emptyset(\tau)\rfloor}^{K_T} I_k\tilde{f} \Biggr|+ \Biggl| \frac{1}{\sqrt{T}} \sum_{k=\lfloor (1 -\varepsilon^3)T/\mathbb{E}_\emptyset(\tau)\rfloor}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} I_k\tilde{f} \Biggr|\\
& \leq \frac{2}{\sqrt{T}}
\max_{n\in v(T,\varepsilon)}
\Biggl|
\sum_{k=\lfloor (1 -\varepsilon^3)T/\mathbb{E}_\emptyset(\tau)\rfloor}^n I_k\tilde{f}
\Biggr|\, .
\end{align*}
Using now Kolmogorov's maximal inequality~\cite[Sect.\ IX.7 p.234]{Feller1968} we obtain, for $T\ge t_\varepsilon$,
\[
\mathbb{P}_\mathfrak{m}(|\Delta_T | \ge \varepsilon)
\le \mathbb{P}_\mathfrak{m}\big(K_T \notin v(T,\varepsilon)\big) + \frac{\lfloor (1 +\varepsilon^3)T/\mathbb{E}_\emptyset(\tau)\rfloor - \lfloor (1 -\varepsilon^3)T/\mathbb{E}_\emptyset(\tau)\rfloor}{\varepsilon^2 T/4} \mathbb{E}_\emptyset(\tau) \sigma^2(f) \le \varepsilon + 8\sigma^2(f) \varepsilon \,.
\]
Since $\varepsilon >0$ is arbitrary, we conclude that
\[
\Biggl| \frac1{\sqrt{T}}\sum_{k=1}^{K_T} I_k\tilde{f}
- \frac1{\sqrt{T}}\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} I_k\tilde{f}
\Biggr| \xrightarrow[T\to\infty]{\text{probab.}} 0\,.
\]
Combining these three convergence results with Slutsky's theorem yields the announced convergence in law.
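The simplification of the maximal-inequality bound above, namely that the ratio collapses to $8\sigma^2(f)\varepsilon$ once the integer parts are dropped, can be checked numerically (function name and sample values are ours):

```python
def kolmogorov_bound(T, eps, mean_tau, sigma2):
    """Right-hand side of the maximal-inequality bound, with the floors dropped."""
    n_terms = (1 + eps**3) * T / mean_tau - (1 - eps**3) * T / mean_tau
    return n_terms * mean_tau * sigma2 / (eps**2 * T / 4)
```

The value depends neither on $T$ nor on $\mathbb{E}_\emptyset(\tau)$ and equals $8\sigma^2(f)\varepsilon$.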
\subsection*{Proof of Theorem~\ref{thm:non-asympt-exp-bd}}
With the notation $\tilde{f} \triangleq f -\pi_A f$ and~\eqref{integrals}, let us consider the decomposition
\begin{align}
\int_0^T \tilde{f} (X_t)\,dt
=
\int_0^{\tau_0} \tilde{f}(X_t)\,dt
+ \sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} I_k \tilde{f}
+ \int_{\tau_{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor}}^T \tilde{f}(X_t)\,dt \,.
\end{align}
The $I_k \tilde{f}$ are i.i.d. and
are distributed as $\int_{0}^{\tau} \tilde{f} (X_t)\,dt$ under $\mathbb{P}_\emptyset$,
with expectation~$0$ and variance $\mathbb{E}_\emptyset(\tau) \sigma^2(f)$,
see Theorem~\ref{thm:reg-seq}.
Since $f$ takes its values in $[a,b]$,
\[
\biggl|
\int_0^{\tau_0} \tilde{f} (X_t)\,dt
\biggr|
\le |b-a| \tau_0 \,,
\qquad
\biggl|
\int_{\tau_{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor}}^T \tilde{f} (X_t)\,dt
\biggr|
\le |b-a||T-\tau_{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor}|\,.
\]
But
\begin{align*}
T-\tau_{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor}
&= - \tau_0 - \sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} (\tau_k-\tau_{k-1}) +T
\\
&= - \tau_0 -\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} (\tau_k-\tau_{k-1}-\mathbb{E}_\emptyset(\tau)) + T -\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor \mathbb{E}_\emptyset(\tau)\end{align*}
in which
$
0\le T-\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor \mathbb{E}_\emptyset(\tau) < \mathbb{E}_\emptyset(\tau)
$
and the $\tau_k-\tau_{k-1}-\mathbb{E}_\emptyset(\tau)$ are i.i.d., have same law as $\tau-\mathbb{E}_\emptyset(\tau) $
under $\mathbb{P}_\emptyset$, and have expectation $0$ and variance $\mbox{Var}_\emptyset (\tau)$.
Thus,
\begin{align*}
&\mathbb{P}_\mathfrak{m}\biggl( \biggl|\frac1T \int_0^T f(X_t)\,dt - \pi_A f \biggr| \ge \varepsilon \biggr)
\\
&\le \mathbb{P}_\mathfrak{m}\left( \left| \sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} I_k \tilde{f} \right| + |b-a|
\left( 2\tau_0 +\left|\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} (\tau_k-\tau_{k-1}-\mathbb{E}_\emptyset(\tau))\right|
+ \mathbb{E}_\emptyset(\tau)\right)\ge T\varepsilon
\right)\,.
\end{align*}
Now, using that
$$T\varepsilon -|b-a| \mathbb{E}_\emptyset(\tau) -2|b-a| \mathbb{E}_\mathfrak{m}(\tau_0)
= 2 \frac{(T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau)}2 + \sqrt{T}\varepsilon -2|b-a| \mathbb{E}_\mathfrak{m}(\tau_0)~,$$we obtain that
\begin{align}
\label{dec-proba}
&\mathbb{P}_\mathfrak{m}\biggl( \biggl|\frac1T \int_0^T f(X_t)\,dt - \pi_A f \biggr| \ge \varepsilon \biggr)
\notag\\
&\quad\le
\mathbb{P}_\mathfrak{m}\left(
\left|
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} I_k \tilde{f}
\right|
\ge \frac{(T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau)}2
\right)
\notag\\
&\qquad
+
\mathbb{P}_\mathfrak{m}\left(
\left|
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} (\tau_k-\tau_{k-1}-\mathbb{E}_\emptyset(\tau))
\right|
\ge \frac{(T-\sqrt{T})\varepsilon- |b-a| \mathbb{E}_\emptyset(\tau)}{2|b-a|}
\right)
\notag\\
&\qquad+
\mathbb{P}_\mathfrak{m}\left(
\tau_0-\mathbb{E}_\mathfrak{m}(\tau_0)
\ge \frac{\sqrt{T}\varepsilon - 2|b-a| \mathbb{E}_{\mathfrak{m}}(\tau_0)}{2|b-a|}
\right).
\end{align}
We aim to apply Bernstein's inequality~\cite[Cor.\ 2.10 p.25, (2.17), (2.18) p.24]{massart2007concentration} to bound the three terms of the right-hand side. We recall that to apply Bernstein's inequality to random variables $X_1,\dots, X_N$, there should exist constants $c$ and $v$ such that
$$ \sum_{k=1}^N\mathbb{E}_\mathfrak{m}\left[X_k^2\right]\le v,\quad \text{ and } \quad \sum_{k=1}^N\mathbb{E}_\mathfrak{m}\left[(X_k)_+^n\right]\le \frac{n!}{2}vc^{n-2},\quad \forall n\ge3.$$
First,
\[
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} \mathbb{E}_\mathfrak{m}\bigl( ( I_k \tilde{f})^2 \bigr)
= \Big\lfloor\frac{T}{\mathbb{E}_\emptyset(\tau)}\Big\rfloor \mathbb{E}_\emptyset(\tau) \sigma^2(f)
\le T \sigma^2(f)
\]
and, for $n\ge3$,
\begin{multline*}
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} \mathbb{E}_\mathfrak{m}\bigl( (I_k \tilde{f})_\pm^n \bigr)
= \Big\lfloor \frac{T}{\mathbb{E}_\emptyset(\tau)}\Big\rfloor \mathbb{E}_\mathfrak{m}\bigl( (I_1 \tilde{f})_\pm^n\bigr)
\\
\le
\frac{n!}2 T \sigma^2(f)
\biggl(
\sup_{k\ge3}
\biggl(
\frac2{k!}\frac{\mathbb{E}_\mathfrak{m}\bigl( (I_1\tilde{f})_\pm^k \bigr)}{\mathbb{E}_\emptyset(\tau)\sigma^2(f)}
\biggr)^{\frac1{k-2}}
\biggr)^{n-2}
\triangleq
\frac{n!}2 T \sigma^2(f)(c^\pm(f))^{n-2}\,.
\end{multline*}
Then,
\[
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} \mathbb{E}_\mathfrak{m}\bigl( (\tau_k-\tau_{k-1}-\mathbb{E}_\emptyset(\tau))^2 \bigr)
= \Big\lfloor\frac{T}{\mathbb{E}_\emptyset(\tau)}\Big\rfloor \mbox{Var}_\emptyset (\tau)
\le T \frac{\mbox{Var}_\emptyset (\tau)}{\mathbb{E}_\emptyset(\tau)}
\]
and, for $n\ge3$,
\begin{multline*}
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} \mathbb{E}_\mathfrak{m}\bigl( (\tau_k-\tau_{k-1}-\mathbb{E}_\emptyset(\tau))_\pm^n \bigr)
=\Big\lfloor T/\mathbb{E}_\emptyset(\tau)\Big\rfloor \mathbb{E}_\emptyset\bigl( (\tau-\mathbb{E}_\emptyset(\tau))_\pm^n \bigr)
\\
\le
\frac{n!}2 T \frac{\mbox{Var}_\emptyset (\tau)}{\mathbb{E}_\emptyset(\tau)}
\biggl(
\sup_{k\ge3}
\biggl(
\frac2{k!}
\frac{\mathbb{E}_\emptyset\bigl( (\tau-\mathbb{E}_\emptyset(\tau))_\pm^k \bigr)}%
{\mbox{Var}_\emptyset (\tau)}
\biggr)^{\frac1{k-2}}
\biggr)^{n-2}
\triangleq
\frac{n!}2 T \frac{\mbox{Var}_\emptyset (\tau)}{\mathbb{E}_\emptyset(\tau)}(c^\pm(\tau))^{n-2}\,.
\end{multline*}
Lastly, $\mathbb{E}_\mathfrak{m}\bigl((\tau_0-\mathbb{E}_\mathfrak{m}(\tau_0))^2\bigr) = \mbox{Var}_\mathfrak{m}(\tau_0)$ and, for $n\ge3$,
\begin{align*}
&\mathbb{E}_\mathfrak{m}\bigl((\tau_0-\mathbb{E}_\mathfrak{m}(\tau_0))_+^n\bigr)
\\
&\quad \le \frac{n!}2 \mbox{Var}_\mathfrak{m}(\tau_0)
\biggl(
\sup_{k\ge3}
\biggl(
\frac2{k!}
\frac{\mathbb{E}_\mathfrak{m}\bigl((\tau_0-\mathbb{E}_\mathfrak{m}(\tau_0))_+^k\bigr)}%
{\mbox{Var}_\mathfrak{m}(\tau_0)}
\biggr)^{\frac1{k-2}}
\biggr)^{n-2}
\triangleq
\frac{n!}2 \mbox{Var}_\mathfrak{m}(\tau_0)(c^+(\tau_0))^{n-2}\,.
\end{align*}
Applying~\cite[Cor.\ 2.10 p.25, (2.17), (2.18) p.24]{massart2007concentration}
to the r.h.s. of~\eqref{dec-proba} yields that
\begin{align*}
&\mathbb{P}_\mathfrak{m}\biggl( \biggl|\frac1T \int_0^T f(X_t)\,dt - \pi_A f \biggr| \ge \varepsilon \biggr)
\notag\\
&\quad\le
\exp\left(
-\frac{((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau))^2}
{8 T \sigma^2(f) + 4 c^+(f)((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau)) }
\right)
\notag\\
&\qquad+
\exp\left(
-\frac{((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau))^2}
{8 T \sigma^2(f) + 4 c^-(f)((T-\sqrt{T})\varepsilon - |b-a|\mathbb{E}_\emptyset(\tau)) }
\right)
\notag\\
&\qquad+
\exp\left(
-\frac{((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau))^2}
{8T |b-a|^2\frac{\mbox{Var}_\emptyset (\tau)}{\mathbb{E}_\emptyset(\tau)}+ 4 |b-a| c^+(\tau)((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau)) }
\right)
\notag\\
&\qquad+
\exp\left(
-\frac{((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau))^2}
{8T |b-a|^2\frac{\mbox{Var}_\emptyset (\tau)}{\mathbb{E}_\emptyset(\tau)}+ 4 |b-a| c^-(\tau)((T-\sqrt{T})\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau)) }
\right)
\notag\\
&\qquad+
\exp\left(
-\frac{(\sqrt{T}\varepsilon - 2|b-a| \mathbb{E}_\mathfrak{m}(\tau_0))^2}
{8|b-a|^2\mbox{Var}_\mathfrak{m} (\tau_0) + 4 |b-a| c^+(\tau_0)(\sqrt{T}\varepsilon - 2|b-a| \mathbb{E}_\mathfrak{m}(\tau_0)) }
\right)
\end{align*}
which is \eqref{eq:conc-ineq}.
\subsection*{Proof of Corollary \ref{cor:dev_simple}}
Under $\mathbb{P}_\emptyset$, $\tau_0=0$ and thus Equation \eqref{dec-proba} reads:
\begin{align}\label{etape6}
\mathbb{P}_\emptyset\biggl( \biggl|\frac1T \int_0^T f(X_t)\,dt - \pi_A f \biggr| \ge \varepsilon \biggr)
&\le
\mathbb{P}_\emptyset\left(
\left|
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} I_k \tilde{f}
\right|
\ge \frac{T\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau)}2
\right)
\\
&
+
\mathbb{P}_\emptyset\left(
\left|
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} (\tau_k-\tau_{k-1}-\mathbb{E}_\emptyset(\tau))
\right|
\ge \frac{T\varepsilon- |b-a| \mathbb{E}_\emptyset(\tau)}{2|b-a|}
\right).\nonumber
\end{align}
As in the proof of Theorem \ref{thm:non-asympt-exp-bd}, we apply Bernstein's inequality to each of the terms on the right-hand side. However, in order to simplify the obtained bound, we change the upper bounds of the moments of $I_k\tilde{f}$ and $\tau_k-\tau_{k-1}-\mathbb{E}_\emptyset(\tau)$. Namely, we use the fact that for all $n\ge1$,
\[
\mathbb{E}_\emptyset(\tau^n)\le \frac{n!}{\alpha^n}\mathbb{E}_\emptyset(e^{\alpha\tau}) \quad \text{and} \quad \mathbb{E}_\emptyset(|\tau - \mathbb{E}_\emptyset(\tau)|^n)\le \frac{n!}{\alpha^n}\mathbb{E}_\emptyset(e^{\alpha\tau}) e^{\alpha \mathbb{E}_\emptyset(\tau)}.
\]
Since $\tau$ is a nonnegative random variable, $e^{\alpha \mathbb{E}_\emptyset(\tau)} \ge 1$ and in the sequel it will be more convenient to use the following upper bound: for all $n\ge1$,
\[
\mathbb{E}_\emptyset(\tau^n)\le \frac{n!}{\alpha^n}\mathbb{E}_\emptyset(e^{\alpha\tau}) e^{\alpha \mathbb{E}_\emptyset(\tau)}.
\]
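Both moment bounds follow from one elementary inequality; for completeness, a short derivation (not spelled out in the text):

```latex
% For x \ge 0, \alpha > 0 and n \ge 1, one term of the exponential series suffices:
\mathrm{e}^{\alpha x} = \sum_{j \ge 0} \frac{(\alpha x)^j}{j!} \ge \frac{(\alpha x)^n}{n!}
\qquad\Longrightarrow\qquad
x^n \le \frac{n!}{\alpha^n}\,\mathrm{e}^{\alpha x}\,.
```

Taking $x=\tau$ gives the first bound; since $|\tau-\mathbb{E}_\emptyset(\tau)|\le \tau+\mathbb{E}_\emptyset(\tau)$, taking $x=|\tau-\mathbb{E}_\emptyset(\tau)|$ and bounding $\mathrm{e}^{\alpha|\tau-\mathbb{E}_\emptyset(\tau)|}\le \mathrm{e}^{\alpha\tau}\mathrm{e}^{\alpha\mathbb{E}_\emptyset(\tau)}$ gives the second.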
Then
\[
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} \mathbb{E}_\emptyset\bigl( ( I_k \tilde{f})^2 \bigr)
\le \Big\lfloor\frac{T}{\mathbb{E}_\emptyset(\tau)}\Big\rfloor \mathbb{E}_\emptyset(\tau^2) (b-a)^2
\le \frac{2(b-a)^2}{\alpha^2}\Big\lfloor\frac{T}{\mathbb{E}_\emptyset(\tau)}\Big\rfloor \mathbb{E}_\emptyset(e^{\alpha\tau}) e^{\alpha \mathbb{E}_\emptyset(\tau)}\,,
\]
and, for $n\ge3$,
\[
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} \mathbb{E}_\emptyset\bigl( |I_k \tilde{f}|^n \bigr)
\le \frac{n!}{2} \left(\Big\lfloor\frac{T}{\mathbb{E}_\emptyset(\tau)}\Big\rfloor |b-a|^2 \frac{2}{\alpha^2}\mathbb{E}_\emptyset(e^{\alpha\tau})e^{\alpha \mathbb{E}_\emptyset(\tau)}\right) \ \Big(\frac{|b-a|}{\alpha}\Big)^{n-2}\,.
\]
Setting
\[
v=\frac{2(b-a)^2}{\alpha^2}\Big\lfloor\frac{T}{\mathbb{E}_\emptyset(\tau)}\Big\rfloor \mathbb{E}_\emptyset(e^{\alpha\tau}) e^{\alpha \mathbb{E}_\emptyset(\tau)},\quad \text{and}\quad c= \frac{|b-a|}{\alpha},
\]
and applying Bernstein's inequality, we obtain that
\[
\mathbb{P}_\emptyset\left(
\left|
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} I_k \tilde{f}
\right|
\ge \frac{T\varepsilon - |b-a| \mathbb{E}_\emptyset(\tau)}2
\right) \le 2 \exp\left(-\frac{\Bigl( T\varepsilon-|b-a|\mathbb{E}_\emptyset(\tau) \Bigr)^2}{4 \left(2v + (T\varepsilon-|b-a|\mathbb{E}_\emptyset(\tau))c \right)} \right).
\]
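The exponent in the bound above is the standard Bernstein exponent $x^2/\big(2(v+cx)\big)$ evaluated at the threshold $x=\big(T\varepsilon-|b-a|\mathbb{E}_\emptyset(\tau)\big)/2$; a quick check of this algebra with arbitrary positive values (names and values are ours):

```python
def bernstein_exponent(x, v, c):
    """Exponent in Bernstein's inequality: P(|S| >= x) <= 2 exp(-x**2 / (2 * (v + c * x)))."""
    return x**2 / (2 * (v + c * x))

D = 7.3                              # stands for T*eps - |b-a|*E(tau)
v, c = 2.1, 0.4                      # variance proxy and scale constant
lhs = bernstein_exponent(D / 2, v, c)
rhs = D**2 / (4 * (2 * v + D * c))   # exponent as displayed in the text
```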
Also
\[
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} \mathbb{E}_\emptyset\bigl( (\tau_k-\tau_{k-1}-\mathbb{E}_\emptyset(\tau))^2 \bigr)
\le \frac{2}{\alpha^2} \Big\lfloor \frac{T}{\mathbb{E}_\emptyset(\tau)}\Big\rfloor \mathbb{E}_\emptyset(e^{\alpha\tau})e^{\alpha \mathbb{E}_\emptyset(\tau)} \,,
\]
and, for $n\ge3$,
\[
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} \mathbb{E}_\emptyset\bigl( |\tau_k-\tau_{k-1}-\mathbb{E}_\emptyset(\tau)|^n \bigr)
\le \frac{n!}{2}\left(\Big\lfloor\frac{T}{\mathbb{E}_\emptyset(\tau)}\Big\rfloor \frac{2}{\alpha^2}\mathbb{E}_\emptyset(e^{\alpha\tau})e^{\alpha \mathbb{E}_\emptyset(\tau)} \right) \frac{1}{\alpha^{n-2}}\,.
\]
Applying Bernstein's inequality again, we obtain that
\[
\mathbb{P}_\emptyset\left(
\left|
\sum_{k=1}^{\lfloor T/\mathbb{E}_\emptyset(\tau)\rfloor} (\tau_k-\tau_{k-1}-\mathbb{E}_\emptyset(\tau))
\right|
\ge \frac{T\varepsilon- |b-a| \mathbb{E}_\emptyset(\tau)}{2|b-a|}
\right) \le 2 \exp\left(-\frac{\Bigl( T\varepsilon-|b-a|\mathbb{E}_\emptyset(\tau) \Bigr)^2}{4 \left(2v + (T\varepsilon-|b-a|\mathbb{E}_\emptyset(\tau))c \right)} \right).
\]
Equation \eqref{etape6} gives that
\begin{align*}
\mathbb{P}_\emptyset\biggl( \biggl|\frac1T \int_0^T f(X_t)\,dt - \pi_A f \biggr| \ge \varepsilon \biggr)
&\le 4 \exp\left(-\frac{\Bigl( T\varepsilon-|b-a|\mathbb{E}_\emptyset(\tau) \Bigr)^2}{4 \left(2v + (T\varepsilon-|b-a|\mathbb{E}_\emptyset(\tau))c \right)} \right).
\end{align*}
To prove the second part of Corollary \ref{cor:dev_simple} we have to solve
\begin{equation}\label{eq:aux}
\eta=4
\exp\left(-\frac{\Bigl( T\varepsilon-|b-a|\mathbb{E}_\emptyset(\tau) \Bigr)^2}{4 \left(2v + (T\varepsilon-|b-a|\mathbb{E}_\emptyset(\tau))c \right)} \right)
\end{equation}
by expressing $\varepsilon$ as a function of $\eta$, for any $\eta\in (0,1)$.
Let us define the following decreasing bijection from $\mathbb{R}_+$ into $\mathbb{R}_-$:
$$\varphi(x)=-\frac{x^2}{4(2v+cx)}.$$
The solution of \eqref{eq:aux} is then $\varepsilon_\eta=(|b-a|\mathbb{E}_\emptyset(\tau)+x_0)/T$ where $x_0$ is the unique positive solution of
$$\varphi(x)=\log\Big(\frac{\eta}{4}\Big)\quad \Leftrightarrow \quad x^2+4c\log\big(\frac{\eta}{4}\big) x + 8v\log\big(\frac{\eta}{4}\big)=0.$$
Computing the roots of this second-order polynomial, we see that there always exist one negative and one positive root as soon as $\eta<4$, since the product of the roots, $8v\log(\eta/4)$, is then negative. More precisely,
$$x_0=-2c\log\big(\frac{\eta}{4}\big)+\sqrt{4c^2\log^2\big(\frac{\eta}{4}\big)-8 v \log\big(\frac{\eta}{4}\big)},$$ which concludes the proof.
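As a quick numerical sanity check (not part of the proof), one can verify that the stated $x_0$ is the positive root of the quadratic and satisfies $\varphi(x_0)=\log(\eta/4)$. The values of $v$ and $c$ below are illustrative placeholders, not the values fixed by $T$, $\alpha$ and $|b-a|$ in the text.

```python
import math

# Illustrative (hypothetical) values; in the text, v and c are determined
# by T, alpha, |b - a| and the law of tau.
v, c, eta = 2.0, 0.5, 0.05
L = math.log(eta / 4.0)  # negative since eta < 4

# Positive root of x^2 + 4*c*L*x + 8*v*L = 0
x0 = -2.0 * c * L + math.sqrt(4.0 * c**2 * L**2 - 8.0 * v * L)

def phi(x):
    # The decreasing bijection phi(x) = -x^2 / (4(2v + cx))
    return -x**2 / (4.0 * (2.0 * v + c * x))

assert x0 > 0                                        # the root is indeed positive
assert abs(x0**2 + 4*c*L*x0 + 8*v*L) < 1e-9          # it solves the quadratic
assert abs(phi(x0) - L) < 1e-10                      # hence phi(x0) = log(eta/4)
```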
\section{Observations and Data Analysis}
\noindent We had reported in Riaz et al. (2012) a detection for 2MASSW J1207334-393254 (2M1207) in the {\it Herschel} SPIRE bands of 250 and 350$\micron$. A parallel study conducted by Harvey et al. (2012) based on PACS 70 and 160$\micron$ observations shows a bright source at RA = 12:07:35.183; Dec = -39:32:52.64 (120735), located $\sim$25$\arcsec$ east of 2M1207. This is an unclassified source with no SIMBAD matches (other than 2M1207) within 30$\arcsec$. There is no detection for this object in the 2MASS bands. It is detected ($>$2-$\sigma$) in the {\it Spitzer} IRAC 3.6 and 4.5$\micron$ bands, but is undetected in the 5.8 and 8$\micron$ bands. We have also checked the WISE images and there is a detection ($>$2-$\sigma$) for this source at 3.4 and 4.6$\micron$, but it is undetected in the 12 and 22$\micron$ bands. It was faintly detected at a 1-$\sigma$ level in the {\it Spitzer} 24$\micron$ image, but was undetected in the {\it Spitzer} MIPS 70 and 160$\micron$ bands. The {\it Spitzer} observations were obtained in 2007, whereas the PACS 70 and 160$\micron$ observations were taken in 2010. The object 120735 is an $\sim$8mJy source in the PACS 70$\micron$ observation, whereas the 1-$\sigma$ confusion noise in the {\it Spitzer} 70$\micron$ image is $\sim$2mJy. Therefore a 4-$\sigma$ detection with {\it Spitzer} would have been possible. The non-detection of this source in the {\it Spitzer} MIPS data could be due to possible variability or the low signal-to-noise of the data. We do not know the nature of this source. If it is a galaxy then it is unlikely to be variable, but then the non-detection in some of the bands is puzzling. To check the field in the SPIRE images for 2M1207, we had used the previously available {\it Spitzer} MIPS observations, since the PACS data was not public at the time the analysis was done. 
2M1207 is a prominently bright detection in the 24$\micron$ band, while the source 120735 is a faint detection at a 1-$\sigma$ level (Fig.~\ref{images}). Comparing the {\it Spitzer} MIPS and the SPIRE fields, we found no clear source detection at the nominal position of 2M1207, while a bright detection was seen at the location of the source 120735. The separation between 2M1207 and the bright object centroid is comparable to the offset from the nominal position of the target ($\sim$14.3$\arcsec$ or $\sim$2.4 pixels). The offset is also identical in all three SPIRE bands. Considering the marginal (1-$\sigma$) detection of the source 120735 in the {\it Spitzer} 24$\micron$ band, its non-detection at {\it Spitzer} 70$\micron$, and the mentioned offset in the SPIRE bands, we had identified the bright object in the SPIRE images as 2M1207. A comparison now with the PACS images indicates this to be a misidentification. The source 120735 may be variable, and the contamination from it cannot be properly accounted for; however, considering how bright this object is in the PACS images, its emission is likely to dominate the SPIRE photometry. Given these uncertainties, we have revised the SPIRE fluxes for 2M1207 by measuring the emission at its nominal position, without considering the positional offset. All SPIRE measurements are upper limits (Table~\ref{fluxes}). The 500$\micron$ upper limit is the same as estimated in the original paper. Both objects lie in a confusion-noise-dominated region in the 500$\micron$ image (Fig.~\ref{images}, bottom panel), and the flux value is the same at the nominal position of 2M1207 and at the location of the source 120735. The flux value at 500$\micron$ is higher than at 250 or 350$\micron$ because of the higher confusion noise in this band.
\begin{figure}
\includegraphics[width=53mm]{24-2.eps} \\
\includegraphics[width=53mm]{160-2.eps} \\
\includegraphics[width=53mm]{250-5.eps} \\
\includegraphics[width=53mm]{500-2.eps} \\
\caption{2M1207 images: ({\it top}) {\it Spitzer} 24$\micron$; ({\it second}) PACS 160$\micron$; ({\it third}) SPIRE 250$\micron$; ({\it bottom}) SPIRE 500$\micron$. 2M1207 is marked by a cross; the source 120735 is marked by a circle. The pixel scale is 2.4$\arcsec$ pix$^{-1}$ in the {\it Spitzer} 24$\micron$ band, 0.4$\arcsec$ pix$^{-1}$ in PACS, 6$\arcsec$ pix$^{-1}$ in the SPIRE 250$\micron$ band, and 14$\arcsec$ pix$^{-1}$ in the 500$\micron$ band. In all images, North is up and East is to the left. }
\label{images}
\end{figure}
\vspace{0.05in}
\noindent {\bf Revision to Section \S 3}
\vspace{0.01in}
\noindent Using the sub-mm upper limits, the best model fit is for an outer disc radius of 50 AU and a disc mass of 0.1 $M_{Jup}$ (Fig.~\ref{model}). The present model fit is the same as presented in Riaz \& Gizis (2008). We refer the readers to that paper for details on the rest of the fitting parameters.
\begin{figure}
\includegraphics[width=60mm]{1207-new.eps}
\caption{The best model fit for 2M1207A disc (black line). Also shown is the contribution from the disc (blue) and the stellar photosphere (grey). The Spitzer/IRS spectrum is shown in red. The optical, near- and mid-infrared photometry plotted is listed in Riaz \& Gizis (2007). }
\label{model}
\end{figure}
\begin{table}
\caption{Observations for 2M1207}
\label{fluxes}
\begin{tabular}{cc}
\hline
Band & Flux [mJy] \\ \hline
250$\micron$ & $<$5.2 \\
350$\micron$ & $<$5 \\
500$\micron$ & $<$16 \\
\hline
\end{tabular}
\end{table}
\vspace{0.05in}
\noindent {\bf Revision to Section \S 4.1}
\vspace{0.05in}
\noindent A disc mass of $\sim$0.1 $M_{Jup}$ places 2M1207 among the weaker discs in Taurus. The relative disc mass for 2M1207 [$\log (M_{disc}/M_{*}) = -2.4$] is comparable to that of the weakly accreting systems in TWA, such as Hen 3-600.
\vspace{0.05in}
\noindent {\bf Revision to Section \S 4.2}
\vspace{0.05in}
\noindent The core accretion mechanism is still unlikely for the same reasons as discussed in Section \S 4.2.1 in the original version of the paper (Riaz et al. 2012). For disc fragmentation (Section \S 4.2.2), our argument was that even a very low mass disc could produce a fragment of $\sim$0.035$M_{Jup}$, which can then grow over time to form a 5 $M_{Jup}$ mass object. The main requirement for such a case is for the initial mass of the disc to be higher than its current estimate (at least 10--20 $M_{Jup}$). An upper limit on the disc mass of $\sim$0.1 $M_{Jup}$ thus still does not rule out disc fragmentation, since the system is relatively old and we have not observed it during its early stages when fragmentation could have occurred ($<$0.1 Myr). The alternative mechanisms, as discussed in Section \S 4.2.3 in the original paper, would still be applicable.
\vspace{0.05in}
\noindent {\bf Revision to Section \S 4.3}
\vspace{0.05in}
\noindent We had estimated in Riaz et al. (2012) an outer disc radius of 50 -- 100 AU. Our current model fit for $R_{max}$ of 50 AU is consistent with the previous estimate, and thus the possibility of the planetary mass companion truncating the disc is still applicable. We refer the readers to the discussion in Section \S 4.3 in the original version of the paper.
\section{Introduction}
In this paper we explore a canonical basis for a
significant class of theories of many-particle correlations.
Growth in computing capacity has fueled increasingly comprehensive studies
of assemblies of interacting elements and how these come to determine
the behavioral complexity of such assemblies, at simulational and
analytical levels. The resulting numerics feed back to expand theoretical
concepts of how a system's elementary components, with their interactions,
co-operate in subtle collective phenomena.
Among long-established formulations are
the conserving $\Phi$-derivable approximations after Kadanoff and Baym
\cite{kb1,kb2}
and an especially significant candidate, parquet theory.
\cite{pqt1,pqt2,pqt2a,pqt3}
Our program also covers parquet-like variants such as the
induced interaction
\cite{bb1,bb2,bb3}
which has a successful record in its own right for problems of strong
correlations. There is conceptual merit in codifying
the intuition behind these heuristic models in a more top-down way.
$\Phi$ derivability concerns the response structure that emerges from
constructing, as its generator, an effective correlation energy
functional $\Phi$.
In parquet one constructs the correlated two-body
scattering amplitude directly.
The interrelationship of parquet and $\Phi$ derivability
has been analyzed previously
\cite{js,roger,janis1,janis2}
though not from a Hamiltonian point of view.
Parquet, its relatives and the $\Phi$-derivable descriptions
are all constructed
by choosing judiciously, if by hand, physically dominant substructures
out of the complete set of correlation energy diagrams.
Parquet theory stands out by including topologically the largest
conceivable set of particle-particle-only and particle-hole-only
pair scattering processes. This maximally pair-coupled topology
makes it worth seeking canonical grounds for parquet to shed a different
light on its structure.
Emerging from a formalism relatively unfamiliar to many-body practice,
our conclusions turn out to resonate strongly with the
diagrammatic investigation by Smith.
\cite{roger}
To establish a Hamiltonian basis for the class of theories in question,
we adapt the strategy originally devised by Kraichnan
\cite{k1,k2}
and applied recently to a series of simpler
self-consistent diagrammatic models.
\cite{KI}
These are $\Phi$-derivable in the sense of Baym-Kadanoff;
they each possess a model Luttinger-Ward-like
correlation energy functional
\cite{lw}
as the generator of static and dynamic response and correlation functions
which, while approximate, strictly conserve particle number,
momentum and energy at both microscopic and global levels.
In essential form the parquet scattering amplitude is subsumed
under a specific $\Phi$-derivable description, the
fluctuation-exchange (FLEX) approximation.
\cite{pqt3}
Although considered incomplete and subject to refinement, the simplest
configuration of parquet is thus already a component of a correlated
model whose desirable microscopic conservation properties follow naturally.
What is not in place is a Hamiltonian underpinning for FLEX.
To arrive at any conserving response formulation a price is paid in
committing to a canonical Hamiltonian description. Granted all its
consequent analytical benefits, there is a caveat on the possibility of
further consistent refinement of parquet beyond its basic form emerging
directly from FLEX: diagrammatic iteration of the renormalized two-body
parquet amplitude, feeding it back into the one-body self-energy
functional, cannot achieve control over conservation.
\cite{roger}
We revisit this in the following.
Kraichnan's embedding of the many-body problem in a larger space departs
substantially from traditional diagrammatic reasoning. It is central to the
project because it is applicable to systems with pair interactions. In
principle it should have something to say about parquet.
One starts by injecting the physical Hamiltonian into a much larger sum
(going to infinity) of identical but distinguishable replicas. A collective
representation is introduced over this total Hamiltonian. Next,
isomorphic copies of these large collective Hamiltonians are themselves
summed into a grand Hamiltonian, but with the interaction potential of
each collective copy now partnered by an individual coupling factor.
The factor depends only on the collective indices as $V$ depends
only on the physical indices. Adjoined to $V$ in this way,
the coupling can be defined stochastically.
Provided the couplings transform in their abstract indices as the
elementary pair potential transforms in its physical indices, the end
result is a Hamiltonian in which the original physical form is embedded.
Left unmodified, with all coupling factors set to unity, the expanded
system recovers the exact physics. If modified appropriately, all the
unitary properties of the collective Hamiltonians, and of their grand sum,
are unaffected.
Contingent upon the functional form of the couplings, any operator
expectation over their distribution in the super-assembly allows subsets
of the exact correlation-energy diagrams at any order to survive
when their overall coupling-factor product works out as unity,
thus imparting immunity to taking the expectation. All other products of
random coupling factors are asymptotically killed off by mutually
destructive interference in the expectation: an extension
of the random-phase approximation.
\cite{dpelem}
The key to the strategy is that, up to averaging over Kraichnan's couplings,
the super-ensemble represents a well defined many-body Hamiltonian. Each
collective member is distinguished by its own assignment of coupling factors
and corresponds to a precisely defined Fock space. This means that any exact
identity in the hierarchy of analytic Green functions will survive averaging
if (and only if) the averaging process is done consistently on both sides
of the relation. This covers the Ward-Pitaevsky identities between one-body
self-energy and two-body response kernel, and Kramers-Kr\"onig analyticity
leading to the frequency sum rules for the correlation functions.
\cite{pn}
Relations that do not rely on analyticity are not preserved, however.
We clarify the distinction in the following.
One is therefore justified in discussing a canonical Hamiltonian for the
diagrammatic approximation giving the expectation for $\Phi$ over the
distribution of Kraichnan's coupling factors.
To cite Baym:
\cite{kb2}
``One reason underlying the fact that these approximations have such a
remarkable structure has been discovered by Kraichnan, who has shown that
in a certain sense they are exact solutions to model Hamiltonians
containing an infinite number of stochastic parameters''.
Unlike in a physically guided, constructive
$\Phi$-derivable model, the approximation is encoded here
{\em a priori} in the couplings of the Kraichnan
Hamiltonian. This does not mean ``from first principles'', as
the intuitive task of isolating dominant terms is merely shifted
from the choice of a diagram subset for $\Phi$ to that of an
appropriate Kraichnan coupling (K-coupling hereafter).
It really means that classes of conserving consistency properties,
though not all, that are fundamental in the canonical
description hold automatically after averaging.
Section II starts with a minimal review of $\Phi$ derivability,
recalling properties essential in building up a conserving
many-body expansion. Kraichnan's formalism
\cite{k1,k2,KI}
is then introduced and a form of it proposed,
including all possible pairwise-only interactions.
In Sec. III we revisit the logical development of the pairwise
correlation structure of the Kraichnan model's response
to a perturbation. This teases out real physical effects otherwise
dormant, or virtual, in the self-consistent structure of $\Phi$ itself.
Finally in Sec. IV we arrive at the parquet equations' scaffold,
displaying its provenance from the Hamiltonian defined after Kraichnan.
There too we discuss conceptual points of difference
between parquet topology interpreted within the Hamiltonian outlook,
and attempts to enlarge the topology by an iterative feedback;
these do not accord with $\Phi$ derivability.
Despite their affinity, $\Phi$ derivability and parquet analysis
exhibit complementary inherent shortcomings deeply linked to the
general nature of so-called planar diagrammatic expansions.
\cite{roger,js}
This invites care in considering which sets of physical problems
are better served by one or other of the two approaches.
We offer concluding observations in Sec. V.
\section{Precise Hamiltonians for Approximate Models}
\subsection{Correlation energy}
Our many-body system has the second-quantized Hamiltonian
\begin{eqnarray}
{\cal H}
&=&
\sum_k \varepsilon_k a^*_k a_k
\cr
&&
+ {1\over 2}
{\sum_{k_1 k_2 k_3 k_4}}\!\!\!\!\!'
{\langle k_1 k_2 | V | k_3 k_4 \rangle}
a^*_{k_1} a^*_{k_2} a_{k_3} a_{k_4};
\cr
\cr
&&
{\langle k_1 k_2 | V | k_3 k_4 \rangle}
\equiv
\delta_{s_1 s_4} \delta_{s_2 s_3}
V({\bf k}_1 - {\bf k}_4)
\label{kII01}
\end{eqnarray}
in terms of one-particle creation operators $a^*$ and
annihilation operators $a$. The first right-hand term is the usual
total kinetic energy, the second term is the pairwise interaction.
For simplicity we discuss a spin- (or isospin-) independent
scalar $V$ but this can be relaxed
without invalidating the argument for pair interactions.
Here, again for simplicity, we address a spatially uniform system
for which momentum is a good quantum number; index $k$ stands for the
wave vector and spin pair $({\bf k},s)$,
writing $a^*_k$ as the creation operator
with $a_k$ the annihilation operator, both satisfying fermion
anticommutation. The summation ${\sum}'_{k_1 k_2 k_3 k_4}$
comes with the restriction $k_1+ k_2 = k_3 + k_4$. In a
neutral uniform Coulomb system,
potential terms with $k_2 - k_3 = 0 = k_4 - k_1$
are canceled by the background and are excluded.
The ground-state energy resulting from the full Hamiltonian includes
a correlation component $\Phi[V]$, the essential generator for the
diagrammatic expansions that act as vocabulary to the grammar
of the analysis. Here we go directly to $\Phi[V]$ and for full discussion
of the interacting ground-state structure we refer to
the classic literature.
\cite{kb2,lw}
The correlation energy can always be
written as a coupling-constant integral
\begin{eqnarray}
\Phi[V] \equiv {1\over 2}\int^1_0 {dZ\over Z}
G[Z V]\!:\!\Lambda[Z V; G]\!:\!G[Z V]
\label{kII02}
\end{eqnarray}
in which $G[V]$ is the complete renormalized two-point Green function of the
system, describing propagation of a single particle in the presence of
all the rest, and $\Lambda[V; G]$ is the fully renormalized four-point
scattering amplitude whose internal structure manifests all the possible
modes by which the propagating particles (via $G$) interact via $V$.
Single dots ``$\cdot$'' and double dots ``:'' denote single and double
internal integrations respectively, over frequency, spin and wave vector,
rendering $G\!:\!\Lambda\!:\!G$ an energy expectation value.
The renormalized Green function satisfies Dyson's equation
\begin{eqnarray}
G[V]
&=&
G^{(0)} + G^{(0)}\!\cdot\! \Sigma[V;G]\!\cdot\! G[V]
\label{kII03}
\end{eqnarray}
with $G^{(0)}$ as the noninteracting Green function
$G^{(0)}_k(\omega) \equiv (\omega - \varepsilon_k)^{-1}$
with $\Sigma[V;G]$ as the self-energy.
Equation (\ref{kII03}) links back to the correlation-energy functional
self-consistently through the variation that defines the self-energy
\begin{eqnarray}
\Sigma[V;G]
&\equiv&
\frac{\delta \Phi[V]}{\delta G[V]} = \Lambda[V; G]\!:\! G[V].
\label{kII04}
\end{eqnarray}
We recall the basic requirements on $\Lambda$. In the
expansion to order $n$ in $V$ within any particular linked structure
of $G\!:\!\!\Lambda\!\!:\!G$ reduced to its bare
elements, there will be $2n$ bare propagators.
The integral effectively treats each $G^{(0)}$ as distinguishable, and
there is a $2n$-fold ambiguity as to which bare propagator should be
the seed on which the given contribution is built up. That is, integration
replicates the same graph $2n$ times from any particular $G^{(0)}$
in the integral; but the structure contributes once only in $\Phi$. The
coupling-constant formula removes the multiplicity to all orders.
The essential feature of $\Lambda$ in the exact correlation energy
functional is the following symmetry: consider the skeleton
$G^{(0)}\!:\!\!\Lambda[V; G^{(0)}]\!\!:\!G^{(0)}$.
Removal of any $G^{(0)}$ from the skeleton,
{\em at any order in} $V$, must result in the same unique
variational structure; all lines are equivalent. The same applies
when all bare lines are replaced with dressed ones.
\cite{kb1}
This is due to unitarity and ultimately to the Hermitian character
of the Hamiltonian. It also follows that $\Lambda$ must be pairwise
irreducible: removing any two propagators $G$ from $G\!:\!\Lambda\!:\!G$
cannot produce two unlinked self-energy insertions, or else there
would be inequivalent $G$s in a contribution of form
$G\!:\!\!\Lambda_1\!\!:\!GG\!:\!\!\Lambda_2\!\!:\!G$.
These conditions impose a strongly restrictive graphical structure
upon the four-point scattering kernel entering into the self-energy.
\subsection{$\Phi$ derivability}
Other than the generic symmetry of $G$ in $\Phi$, the
variational relationships among $\Lambda$, $\Sigma$ and $G$ do not
depend on topological specifics. Those relationships
were thus adopted as defining criteria by Baym and Kadanoff
\cite{kb1,kb2}
for constructing conserving approximations: the $\Phi$-derivable models.
Choosing a subset of skeleton diagrams from the full
$\Phi[V]$ with every $G^{(0)}$ topologically equivalent and
replacing these with dressed lines guarantees unitarity
of the effective model $\Lambda$ and secures microscopic conservation
not only at the one-body level but also for the pairwise dynamic
particle-hole response under an external perturbation.
$\Phi$ derivability necessarily entails an infinite-order approximation to
the correlation structure in terms of the bare potential. While a finite
choice of skeleton diagrams of $\Phi$ fulfills formal conservation, it must
still lead to an infinite nesting of bare interactions linked by pairs of
renormalized $G$s. Self-consistency in Eqs. (\ref{kII03}) and (\ref{kII04})
is a fundamental feature of all $\Phi$-derivable models.
\subsection{Kraichnan Hamiltonian}
The authoritative references for Kraichnan Hamiltonians
are the original papers of Kraichnan.
\cite{k1,k2}
Here we follow the more recent paper by one of us, hereafter called KI.
\cite{KI}
As per the Introduction, Kraichnan's construction proceeds by
two ensemble-building steps. First, one generates an assembly of $N$
functionally identical distinguishable copies of the exact Hamiltonian,
Eq. (\ref{kII01}).
The total Hamiltonian is
\begin{eqnarray}
{\cal H}_N
&=&
\sum^N_{n=1} \sum_k \varepsilon_k a^{*(n)}_k a^{(n)}_k
\cr
&&
+ {1\over 2} \sum^N_{n=1} {\sum_{k_1 k_2 k_3 k_4}}\!\!\!\!\!'
{\langle k_1 k_2 | V | k_3 k_4 \rangle}
a^{*(n)}_{k_1} a^{*(n)}_{k_2} a^{(n)}_{k_3} a^{(n)}_{k_4};
~~~ ~~
\label{kII05}
\end{eqnarray}
the creation and annihilation operators
with equal index $n$ anticommute as normal;
for values of $n$ that differ, they commute.
At this point one goes over to a collective description of
the $N$-fold ensemble by Fourier transforming over index $n$.
For integer $\nu$ define the collective operators
\begin{widetext}
\begin{eqnarray}
a^{*[\nu]}_k
&\equiv&
N^{-1/2} \sum^N_{n=1} e^{2\pi i \nu n/N} a^{*(n)}_k
~~ {\rm and}~~
a^{[\nu]}_k
\equiv
N^{-1/2} \sum^N_{n=1} e^{-2\pi i \nu n/N} a^{(n)}_k.
\label{kII06}
\end{eqnarray}
These preserve anticommutation up to a term
strongly suppressed by mutual interference among unequal phase factors
and at most of vanishing order $1/N$.
The argument is a random-phase one:
\cite{dpelem}
\begin{eqnarray}
[a^{*[\nu]}_k, a^{[\nu']}_{k'}]_+
&=&
\frac{1}{N} \sum_{n,n'} e^{2\pi i(\nu n - \nu' n')/N}
[a^{*(n)}_k, a^{(n')}_{k'}]_+
\cr
&=&
\frac{1}{N} \sum_n e^{2\pi i(\nu - \nu')n/N} [a^{*(n)}_k, a^{(n)}_{k'}]_+
+ \frac{2}{N} \sum_{n \neq n'} e^{2\pi i(\nu n - \nu' n')/N}
a^{*(n)}_k a^{(n')}_{k'}
\cr
&=&
\delta_{kk'}\delta_{\nu\nu'} + {\cal O}(N^{-1}).
\label{kII07}
\end{eqnarray}
Similarly for $[a^{[\nu]}_k, a^{[\nu']}_{k'}]_+$. In practice, any
term in the Wick expansion of physical expectations that links
dissimilar elements $n \neq n'$ will not contribute in any case.
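The random-phase suppression invoked in Eq. (\ref{kII07}) is easy to illustrate numerically: for integer $\nu \neq \nu'$ the diagonal phase sum vanishes identically, which is the discrete orthogonality underlying the suppression of off-diagonal cross-correlations throughout the construction. A minimal sketch (the value of $N$ is arbitrary):

```python
import numpy as np

N = 1024  # ensemble size; any large integer illustrates the point

def phase_sum(nu, nu_p):
    # (1/N) * sum_{n=1}^{N} exp(2*pi*i*(nu - nu')*n/N),
    # the diagonal factor appearing in Eq. (kII07)
    n = np.arange(1, N + 1)
    return np.exp(2j * np.pi * (nu - nu_p) * n / N).sum() / N

# Equal collective indices: the sum survives and equals 1 (the Kronecker delta).
assert abs(phase_sum(3, 3) - 1.0) < 1e-9
# Unequal indices: complete destructive interference
# (exactly zero for integer nu - nu' not a multiple of N).
assert abs(phase_sum(3, 5)) < 1e-9
```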
The transformation yields the new representation
\begin{eqnarray}
{\cal H}_N
&=&
\sum^N_{\nu=1} \sum_k \varepsilon_k a^{*[\nu]}_k a^{[\nu]}_k
+ {1\over 2N} {\sum_{k_1 k_2 k_3 k_4}}\!\!\!\!\!' ~~
\sum^N_{\nu_1 \nu_2 \nu_3 \nu_4}
\delta_{\nu_1+\nu_2, \nu_3+\nu_4}
~{\langle k_1 k_2 | V | k_3 k_4 \rangle}~
a^{*[\nu_1]}_{k_1} a^{*[\nu_2]}_{k_2} a^{[\nu_3]}_{k_3} a^{[\nu_4]}_{k_4}
\cr
&\equiv&
\sum_{\ell} \varepsilon_k a^*_{\ell} a_{\ell}
+
{1\over 2N} {\sum_{\ell_1 \ell_2 \ell_3 \ell_4}}\!\!\!\!'
~{\langle k_1 k_2 | V | k_3 k_4 \rangle}~
a^*_{\ell_1} a^*_{\ell_2} a_{\ell_3} a_{\ell_4}
\label{kII08}
\end{eqnarray}
where in the last right-hand expression we condense the notation so
$\ell \equiv (k, \nu)$ and the restriction on the sum now comprises
$\nu_1 + \nu_2 = \nu_3 + \nu_4$ (modulo $N$) as well as the
constraint on the momenta; equivalently, $\ell_1 + \ell_2 = \ell_3 + \ell_4$.
\subsection{Modifying the Hamiltonian}
\centerline{
\includegraphics[height=4truecm]{KII00.eps}
}
{{\bf FIG. 1.} {\small Construction of the Kraichnan Hamiltonian.
(a) The exact many-body Hamiltonian is embedded in a large sum of $N$
identical but distinguishable duplicates. A Fourier transform over the
identifying index $n = 1, 2, ... N$ is performed. To each physical
interaction potential ${\langle k_1k_2|V|k_3k_4 \rangle}$ a new parameter
$\varphi_{\nu_1\nu_2|\nu_3\nu_4}$ is attached, labeled by the new Fourier
indices and transforming in them as does $V$ in its physical indices.
(b) The modified Hamiltonian is again embedded in a large sum of $M$
replicas, but each replica is now assigned a unique set of
factors $\varphi$. The resulting Kraichnan Hamiltonian remains Hermitian.
Setting every instance of $\varphi$ to unity recovers the exact
expectations resulting from the original, physical Hamiltonian. If values are
specifically structured but otherwise randomly assigned to the $M$-fold
ensemble $\{\varphi\}$, only a selected subset of the physical
correlations survives while the rest are suppressed by random phasing.
}}
\vskip 0.25cm
\end{widetext}
Performing averages given the extended Kraichnan Hamiltonian
of Eq. (\ref{kII08}), as it stands, simply recovers the exact
expectations for the originating one; any cross-correlations between
distinguishable members are identically zero. However, embedding the
physical Hamiltonian within a collective description opens a novel
degree of freedom for treating interactions. Figure 1 summarizes the
whole process. From now on we concentrate on the interaction part of
Eq. (\ref{kII08}), denoted by ${\cal H}_{i;N}$,
since the one-body part conveys no new information.
We will not consider issues of convergence here; for particular
implementations they are carefully discussed in the original papers.
\cite{k1,k2}
We can modify the
behavior of the interaction part, keeping it Hermitian, by
adjoining a factor $\varphi_{\nu_1\nu_2|\nu_3\nu_4}$ such that
\begin{eqnarray}
{\cal H}_{i;N}[\varphi]
&\equiv&
{1\over 2N} {\sum_{\ell_1 \ell_2 \ell_3 \ell_4}}\!\!\!\!'
~{\langle k_1 k_2 | V | k_3 k_4 \rangle}
~\varphi_{\nu_1\nu_2|\nu_3\nu_4}\cr
&&
~~~ ~~~ ~~~ ~~~ ~~~ ~~~
\times
a^*_{\ell_1} a^*_{\ell_2} a_{\ell_3} a_{\ell_4}.
\label{kII09}
\end{eqnarray}
The expression remains Hermitian if and only if the factor has the same
symmetry properties as the potential under exchange of its indices. Thus
\begin{eqnarray}
\varphi_{\nu_4\nu_3|\nu_2\nu_1}
=
\varphi^*_{\nu_1\nu_2|\nu_3\nu_4};
~~~
\varphi_{\nu_2\nu_1|\nu_4\nu_3}
=
\varphi_{\nu_1\nu_2|\nu_3\nu_4}.
\label{kII10}
\end{eqnarray}
The additional procedure of taking expectations
will no longer match those for the exact physical Hamiltonian
unless, evidently, $\varphi$ is unity.
The crux, however, is that all identities among
expectations, dependent on causal analyticity,
will still be strictly respected.
The Kraichnan Hamiltonian remains well formed in its own right
(its Fock space is complete)
except that now it describes an abstract system necessarily
different from the physical one that motivated it. The task is
to tailor it to recover the most relevant aspects of the real
physics in reduced but tractable form.
The last step in the logic considers the much larger sum $\mathbb{H}$
of collective Hamiltonians all of the form of Eq. (\ref{kII09}),
with an interaction part $\mathbb{H}_i$ encompassing a
distribution $\{\varphi\}$ of coupling factors prescribed by a common rule:
\begin{eqnarray}
\mathbb{H}_i
\equiv
\sum_{\{\varphi\}} {\cal H}_{i;N}[\varphi].
\label{grh}
\end{eqnarray}
In Eq. (\ref{grh}) the sum ranges over the prescribed couplings.
Each Hamiltonian in the family is Hermitian,
so $\mathbb{H}$ must be also. As stated, all physical quantities -- with
one exception -- preserve their canonical
interrelationships as their expectations run through $\varphi$.
The exception is for identities relying explicitly on the completeness
of Fock space associated with $\mathbb{H}$, since Kraichnan's ensemble
averaging destroys completeness owing to its decohering action.
Consider, symbolically, the ensemble projection operator
\begin{eqnarray*}
\mathbb{P}
\equiv \prod_{\{\varphi\}} \sum_{\Psi[\varphi]} \Psi[\varphi] \Psi^*[\varphi].
\end{eqnarray*}
Overwhelmingly the orthonormal eigenstates of $\mathbb{H}$
will be products of correlated, highly entangled, Kraichnan-coupled
superpositions of states in the Fock space of each collective member
over the distribution $\{\varphi\}$.
Any expectation over the K-couplings, directly for $\mathbb{P}$, will
cause only those terms to survive whose components in every factor
$\varphi_{\nu_1\nu_2|\nu_3\nu_4}$ within $\Psi[\varphi]$
find a counterpart in $\Psi^*[\varphi]$;
see Eq. (\ref{kII12}) below for the structure of $\varphi$.
Any other legitimate but off-diagonal cross-correlations interfere mutually
and are suppressed.
Numerically, the integrity of the Kraichnan projection operator
$\mathbb{P}$ is not preserved.
\cite{nonan}
Among other things, this loss of coherence leads to
a clear computational distinction, within the same
$\Phi$-derivable approximation,
of static (instantaneous) correlation functions over against dynamic ones.
Given that distinction, the dynamic and static response functions will
still keep their canonical definitions and the sum-rule relations among
them are still preserved.
\cite{KI,fgetal}
To align the forthcoming presentation to the notion of crossing symmetry
\cite{zqt}
for fermion interactions, we take the further
step of antisymmetrizing the potential $V$. This is readily done
in the interaction Hamiltonian, which now reads
\begin{eqnarray}
{\cal H}_{i;N}[\varphi]
&\equiv&
\!\!{1\over 2N}\! {\sum_{\ell_1 \ell_2 \ell_3 \ell_4}}\!\!\!\!'
\!\varphi_{\nu_1\nu_2|\nu_3\nu_4}
{\langle k_1 k_2 | \overline V | k_3 k_4 \rangle}
a^*_{\ell_1} a^*_{\ell_2} a_{\ell_3} a_{\ell_4}
\cr
&&
\label{kII11}
\end{eqnarray}
where
\[
{\langle k_1 k_2 | \overline V | k_3 k_4 \rangle}
\equiv
\frac{1}{2}
( {\langle k_1 k_2 | V | k_3 k_4 \rangle}
- {\langle k_2 k_1 | V | k_3 k_4 \rangle} ).
\]
Invocations of the pair potential will now refer to Eq. (\ref{kII11}).
Care has to be taken with signs for composite ``direct'' and
``exchange'' objects that turn out actually to be mixtures of both
(yet still needing to be topologically distinguished), to make sure
the accounting for $V$ itself stays consistent.
\centerline{
\includegraphics[height=4truecm]{KII01.eps}
}
{{\bf FIG. 2.} {\small Scheme for the self-consistent Hartree-Fock
interaction energy, derived by insertion into Eq. (\ref{kII11}) of
the Kraichnan coupling $\varphi^{\rm HF}_{\nu_1\nu_2|\nu_3\nu_4}
\equiv \delta_{\nu_1 \nu_4} \delta_{\nu_2 \nu_3}$. Dots: the
antisymmetrized pair interaction. Broken lines: the originating
potential. Full lines: one-body propagators.
(a) Contributions to the interaction energy. (b) Self-consistency
is made evident in the Dyson equation for the single-particle
propagator $G$, where $G^{(0)}$ is the noninteracting counterpart.
Nesting of $G\!:\!\overline V$ in the self-energy contribution means that
the bare potential is present to all orders, albeit as a highly reduced
subset of the physically exact self-energy.
}}
\vskip 0.25cm
We end the review of the Kraichnan formalism by
recalling the simplest example for $\varphi$ generating the
exchange-corrected random-phase, or Hartree-Fock, approximation.
This choice is
$\varphi^{\rm HF}_{\nu_1\nu_2|\nu_3\nu_4}
\equiv \delta_{\nu_1 \nu_4} \delta_{\nu_2 \nu_3}$.
\cite{KI}
The diagrammatic outcome
of this non-stochastic Ansatz is illustrated in Fig. 2. The expectation
${\langle {\cal H}_{i;N}[\varphi^{\rm HF}] \rangle}$
of the interaction energy over $\varphi^{\rm HF}$ consists, almost trivially,
of a pair of one-body Hartree-Fock Green functions attached
to a single node representing $\overline V$.
The physically richer stochastic definitions of $\varphi$,
due originally to Kraichnan
\cite{k1,k2}
were generalized and adapted in KI.
\cite{KI}
They will again be used in the next Section to build up a Kraichnan
Hamiltonian for the parquet-generating correlation energy
functional and all objects derived from it variationally.
\section{A Hamiltonian for Parquet}
\subsection{The channels and their couplings}
In simplest form, the parquet equations take the bare interaction
$\overline V$ and from it build up all possible iterations that
require propagation of pairs of particles from one interaction to the next.
This excludes any contributions to the interaction energy functional
in which no interaction nodes are directly linked by such a pair;
they cannot be broken down into simpler particle-pair processes.
Two examples are shown in Fig. 3. We comment later on how these can always
be added legitimately but {\em ad hoc} to the minimal $\Phi$ functional
of immediate interest.
\centerline{
\includegraphics[height=3truecm]{KII02.eps}
}
{{\bf FIG. 3.} {\small Two skeleton diagrams for the exact correlation
energy not reducible to pairwise-only propagation.
(a) Next-order component, beyond first and second in $\overline V$,
fulfilling the symmetry for $\Phi$ derivability
but with no two nodes directly linked by a pair of single-particle
propagators. (b) Next-higher-order term.
Such terms are not generated by any Kraichnan formulation of the
pairwise-only parquet Hamiltonian but can be added freely,
albeit only {\em ad hoc}, to its $\Phi$-derivable functional.
}}
\vskip 0.25cm
There are three possible choices of randomized K-couplings for $\varphi$,
each corresponding to one of the three channels included in parquet:
the $s$ channel explicitly selects propagation of pairs of particles,
the $t$ channel covers particle-hole pair propagation associated with
long-range screening in the random-phase approximation,
and its complement the $u$ channel describes the Hartree-Fock-like
exchange counterpart to $t$. Recall that while
antisymmetrization of the bare potential from $V$ to $\overline V$
superposes the actual $t$ and $u$ contributions, one has to continue
distinguishing their diagrammatic representations
topologically (and their relative sign) to
preserve the quantitative outcomes of the Hamiltonian, Eq. (\ref{kII11}).
For each possible channel we define a stochastic coupling:
\begin{eqnarray}
s {\rm ~channel:~~}
\sigma _{\nu_1 \nu_2 | \nu_3 \nu_4}
&\equiv&
\exp[\pi i(\xi_{\nu_1 \nu_2} - \xi_{\nu_3 \nu_4})];
\cr
~~~ ~~~ ~~~
\xi_{\nu \nu'} \in [-1,1] ~~{\rm and}
&& \!\!\!\!
\xi_{\nu' \nu}
= \xi_{\nu \nu'},
\cr
t {\rm ~channel:~~}~
\tau _{\nu_1 \nu_2 | \nu_3 \nu_4}
&\equiv&
\exp[\pi i(\zeta_{\nu_1 \nu_4} + \zeta_{\nu_2 \nu_3})];
\cr
~~~ ~~~ ~~~
\zeta_{\nu\nu'} \in [-1,1] ~~{\rm and}
&& \!\!\!
~\zeta_{\nu'\nu} = -\zeta_{\nu\nu'},
\cr
u {\rm ~channel:~~}
\upsilon _{\nu_1 \nu_2 | \nu_3 \nu_4}
&\equiv&
\exp[\pi i(\vartheta_{\nu_1 \nu_3} + \vartheta_{\nu_2 \nu_4})];
~~~ ~~~
\cr
~~~ ~~~ ~~~
\vartheta_{\nu \nu'} \in [-1,1] ~~{\rm and}
&& \!\!\!\!
\vartheta_{\nu' \nu} = -\vartheta_{\nu \nu'}.
\label{kII12}
\end{eqnarray}
Their full outworkings are detailed in KI.
The uniformly random numbers $\xi, \zeta, \vartheta$ are independently
distributed; expectations over them mutually decouple and all factors
conform to Eq. (\ref{kII10}). Each is designed so that, in
the stochastic average of the diagrammatic expansion of $\Phi$, product
chains whose phases cancel identically from start to finish are
immune to the averaging. All other product chains fail to cancel.
Being stochastic they interfere destructively, vanishing in the limit of
an arbitrarily large ensemble.
With respect to $t$ and $u$ channels, note that an exchange
of labels $1 \leftrightarrow 2$ or
$3 \leftrightarrow 4$ effectively swaps the definitions
and thus the actions of their K-couplings. This is consistent
with the physics of these channels as mutual exchange counterparts.
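The survival rule for the averaging can be illustrated with a minimal Monte-Carlo sketch for the $s$-channel coupling (toy pair labels only; numpy assumed). An open chain, whose phases do not cancel, averages away; a closed cycle cancels its phases sample by sample and so survives:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000  # ensemble size

# Independent uniform phases xi_{nu nu'} in [-1, 1] for two toy pair
# labels (nu1 nu2) and (nu3 nu4); xi is symmetric, so one number per
# unordered pair suffices here.
xi_12 = rng.uniform(-1.0, 1.0, N)
xi_34 = rng.uniform(-1.0, 1.0, N)

# s-channel K-coupling of Eq. (kII12).
sigma_12_34 = np.exp(1j * np.pi * (xi_12 - xi_34))

# An "open" chain: phases fail to cancel, so the average vanishes
# (exactly in the infinite-ensemble limit, to O(1/sqrt(N)) here).
assert abs(sigma_12_34.mean()) < 0.02

# A "closed" cycle sigma_{12|34} sigma_{34|12}: phases cancel
# identically, sample by sample, so the chain is immune to averaging.
cycle = sigma_12_34 * np.exp(1j * np.pi * (xi_34 - xi_12))
assert np.allclose(cycle, 1.0)
```

Note that the open-chain average vanishes exactly in expectation, since $\int_{-1}^{1} e^{i\pi\xi}\, d\xi = 0$; the finite-ensemble estimate above carries only the usual $1/\sqrt{N}$ statistical residue.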
In the modality of Eq. (\ref{kII12}), $\sigma $ generates the
particle-particle Brueckner-ladder functional while the
ring approximation is generated by $\tau $.
Last, $\upsilon $ also results in a
Brueckner-like functional where particle-hole ladders replace
particle-particle ones.
\cite{KI}
There are no other options for pairwise propagation,
just as with parquet.
\subsection{Maximal pairwise coupling}
None of the K-couplings of Eq. (\ref{kII12}), alone or in pairs, can
cover all conceivable scattering arrangements strictly
between particle and/or hole propagator pairs.
All three must combine sequentially in all possible ways,
while any replication of terms, arising when two or more K-couplings
lead to the survival of identical $\Phi$ terms,
must be prevented. To first and second order in
$\overline V$ one can show that all three elementary couplings
generate identical contributions, inducing overcounting which would
propagate throughout the nesting of self-energy insertions.
The solution to overcounting is to combine the couplings of
Eq. (\ref{kII12}) to inhibit any concurrency. We propose the
candidate parquet K-coupling to be
\begin{eqnarray}
\varphi
&\equiv&
1 - (1 - \sigma )(1 - \tau )(1 - \upsilon ) ~~{\rm so}
\cr
\varphi_{\nu_1 \nu_2 | \nu_3 \nu_4}
&=&
\sigma _{\nu_1 \nu_2 | \nu_3 \nu_4} + \tau _{\nu_1 \nu_2 | \nu_3 \nu_4}
+ \upsilon _{\nu_1 \nu_2 | \nu_3 \nu_4}
\cr
&&
-~ \sigma _{\nu_1 \nu_2 | \nu_3 \nu_4} \tau _{\nu_1 \nu_2 | \nu_3 \nu_4}
\cr
&&
-~ \tau _{\nu_1 \nu_2 | \nu_3 \nu_4} \upsilon _{\nu_1 \nu_2 | \nu_3 \nu_4}
\cr
&&
-~ \upsilon _{\nu_1 \nu_2 | \nu_3 \nu_4} \sigma _{\nu_1 \nu_2 | \nu_3 \nu_4}
\cr
&&
+~ \sigma _{\nu_1 \nu_2 | \nu_3 \nu_4} \tau _{\nu_1 \nu_2 | \nu_3 \nu_4}
\upsilon _{\nu_1 \nu_2 | \nu_3 \nu_4},
\label{kII13}
\end{eqnarray}
preserving overall the Hermitian property specified by Eq. (\ref{kII10}).
In any diagram expanded to a given order in $\overline V$,
the products of K-couplings in Eq. (\ref{kII13}) may or may not
resolve into a set of elementary closed cycles
whose multiplicative chain is identically unity when $\varphi$-averaged
(by definition, any sub-chain split off
within an elementary cycle would necessarily vanish through
phase interference).
Chains that do not resolve into a set of independent closed cycles
over the contribution are quenched, vanishing
in the Kraichnan expectation.
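The combinatorics of Eq. (\ref{kII13}) can be verified in a toy numerical sketch (unit-modulus numbers standing in for the channel factors; numpy assumed). The product form reproduces the seven-term expansion, and whenever one or more channel factors reduce to unity -- the signature of a surviving closed cycle -- the net coupling is exactly unity, never more:

```python
import numpy as np

rng = np.random.default_rng(2)

def phi(s, t, u):
    # Parquet K-coupling, Eq. (kII13), in product form.
    return 1 - (1 - s) * (1 - t) * (1 - u)

# Random unit-modulus factors standing in for sigma, tau, upsilon.
s, t, u = np.exp(1j * np.pi * rng.uniform(-1, 1, 3))

# Product form == seven-term expansion of Eq. (kII13).
expanded = s + t + u - s * t - t * u - u * s + s * t * u
assert np.isclose(phi(s, t, u), expanded)

# If the cycles of one channel close (its factor averaging to exactly 1),
# phi itself equals 1 regardless of the remaining factors; likewise for
# two or three coincident channels: no overcounting.
assert np.isclose(phi(1.0, t, u), 1.0)
assert np.isclose(phi(1.0, 1.0, u), 1.0)
assert np.isclose(phi(1.0, 1.0, 1.0), 1.0)
```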
In Fig. 4(a) we show schematically the structures of the three
possible pairwise multiple-scattering combinations contributing
explicitly to the correlation energy functional $\Phi$, Fig. 4(b).
The K-coupling $\sigma $ leads to the particle-particle Brueckner
t-matrix $\Lambda_s$, while $\tau $ leads to the screened interaction
$\Lambda_t$ and lastly $\upsilon $ is the $t$-exchange complement leading
to the particle-hole Brueckner-type ladder $\Lambda_u$; this carries
an implicit sign change relative to $\Lambda_t$ owing to the difference
of one fermion loop count. From Eq. (\ref{kII13}) all processes
combine so the renormalized one-body propagators $G$ carry
self-energy insertions to
all orders in which $s$-, $t$- and $u$-processes act synergetically, not
competing in parallel but entering sequentially.
\centerline{
\includegraphics[height=4.5truecm]{KII03.eps}
}
\vskip 0.25cm
{{\bf FIG. 4.} {\small (a) Definition of the fundamental all-order
$s, t$ and $u$ interactions. Dots: antisymmetrized pair potential.
(b) Symbolic definition of $\Phi$, the correlation energy functional
(weightings induced by Eq. (\ref{kII02}) are understood), following Kraichnan
averaging over all K-couplings $\sigma , \tau , \upsilon $ as in
Eq. (\ref{kII13}) to remove overcounting when different K-couplings
lead to identical diagrams. Although the skeleton graphs for $\Phi$
appear simple, their complexity is hidden within the self-consistent
nesting of self-energy insertions in the propagators (solid lines)
according to Eqs. (\ref{kII02})--(\ref{kII04}). Since the $stu$
correlation energy is identical to that of the fluctuation-exchange model
\cite{pqt3,sb},
the Kraichnan construction already subsumes the essence of parquet.
The combinatorial $stu$ structure is fully revealed only when the
response to an external perturbation is extracted
(see following).
}}
\vskip 0.2cm
Should two or even three channels have coincident closed cycles,
the structure of $\varphi$ makes certain that the net contribution
from this coincidence is always precisely unity;
Eq. (\ref{kII13}) ensures
that there is no overcounting when $GG$ pairings from different
channels give rise to the same diagrammatic structure.
Such terms can turn up only once in their locations within
the expansion for $\Phi$, including iteratively in the self-energy parts.
All allowed pairwise-only combinations of scatterings, and only those,
survive the expectation over $\{\varphi\}$ to lead to a legitimate
$\Phi$-derivable correlation term with all its
symmetries and conserving properties.
\cite{k1,KI,kb2}
The individual energy
functional of each component Hamiltonian in Eq. (\ref{grh}),
being exact in its particular configuration prior to averaging,
automatically has these symmetries
in the renormalized expansion of Eq. (\ref{kII02}).
These are inherited
by the diagrammatic structure of every term that
survives the taking of expectations and ultimately by
the complete averaged $\Phi$.
\subsection{$\Phi$-derivable response}
Having arrived at the maximally paired structure of $\Phi$ in Fig. 4(b)
given the K-couplings of Eq. (\ref{kII13}), the work of obtaining the
parquet equations from it has been done, in one sense, in the analysis
detailed by Bickers
\cite{pqt3}
for the equivalent heuristic FLEX model.
However, the Hamiltonian prescription's ramifications lead beyond
the derivation of classic parquet.
\centerline{
\includegraphics[height=7.0truecm]{KII04.eps}
}
{{\bf FIG. 5.} {\small Systematic removal
of a propagator $G$ internal to the self-energy
$\Sigma[\varphi\overline V; G] = \Lambda\!:\!G$,
after Baym and Kadanoff,
\cite{kb1,kb2}
generates the primitive scattering kernel $\Lambda'$.
Removal of $G(32)$, solid line, simply regenerates $\Lambda$.
Removing any internal $G$ other than $G(32)$ yields additional terms
required for $\Phi$ derivability (microscopic conservation).
Top line: beyond the $s$-channel ladder $\Lambda_s$ the
non-crossing symmetric $t$-like term $\Lambda_{s;t}$ and
$u$-term $\Lambda_{s;u}$ are generated.
Middle line: generation of $\Lambda_t$ and the non-symmetric
$\Lambda_{t;s}$ and $\Lambda_{t;u}$. Bottom line: generation of
$\Lambda_u$ with $\Lambda_{u;t}$ and $\Lambda_{u;s}$.
}}
\vskip 0.20cm
The goal, then, is to reconstruct
a parquet-like scattering amplitude $\Gamma$ using the ingredients
provided by the Kraichnan machinery. We are still left to show
its relation to the scattering function $\Lambda$,
generator of the $\Phi$-derivable diagrammatic $stu$ expansion.
While the renormalized structure of $\Lambda$ seems sparse
compared with $\Gamma$ for parquet,
\cite{pqt3}
the structure for actual comparison is not $\Lambda$ but begins
with the variation
\begin{eqnarray}
\Lambda'
&\equiv&
\frac{\delta \Sigma}{\delta G} = \frac{\delta^2 \Phi}{\delta G \delta G},
\label{kII18}
\end{eqnarray}
which in fact is the source of the Ward-Pitaevsky identities.
\cite{pn}
One goes from there to set up the complete scattering interaction
$\Gamma'$ for the total system response to a perturbation.
The full outcome of the derivation of $\Gamma'$ is the
dynamical theory of Baym and Kadanoff;
\cite{kb2}
in it, conservation entails the additional family of non-parquet
diagrams $\Lambda'' = \Lambda' - \Lambda$ shown in Fig. 5.
These contribute to every order of iteration.
The topologies contained in $\Lambda''$ are {\em not} explicit in
the renormalized $\Lambda$ embedded within $\Phi[\varphi\overline V;G]$.
Not being crossing symmetric they are not permitted, much less generated,
within parquet. As with the normal parquet structures that we aim to
exhibit from the stochastic Hamiltonian construction, the apparently
extra correlation effects, actually mandated by conservation, remain
virtual in the renormalized summation for $\Phi$ until elicited by an
external probe.
Figure 5 details how functional differentiation gives rise to the
non-parquet terms, typical of all $\Phi$-derivable descriptions.
We want to trace how the purely parquet crossing symmetric
$\Gamma$ diagrams make up a nontrivial component of the complete set for
$\Gamma'$, the total Baym-Kadanoff response kernel. $\Gamma$
is not equivalent to $\Gamma'$; it is a proper subset.
\cite{pqt3}
We emphasize the necessary presence, for $\Phi$ derivability, of the
non-symmetric components $\Lambda''$. These are the approximate system's
attempt to match its $u$ terms, for example,
with partner terms topologically like the two complementary
channels $t$ and $s$; the same applies correspondingly to
the primary $s$ and $t$ terms. However, the question is less why
they break antisymmetry than how their presence fits into the
cancellation of terms that lets conservation govern the model's response.
\section{Derivation of the parquet equations}
\subsection{Origin within response analysis}
To unpack the nested correlations hidden in
the renormalized form of $\Phi$
we turn to the full Kraichnan Hamiltonian prior to averaging
and derive the response to a one-body nonlocal perturbation
${\langle k' | U | k \rangle}$, which generally will have a
time dependence also.
\cite{kb1,kb2}
External perturbations do not couple
to the collective index $\nu$ but physically only to labels
$k$. The interaction Hamiltonian in Eq. (\ref{kII11}) is augmented:
\begin{eqnarray}
{\cal H}_{i;N}[\varphi; U]
&\equiv&
\sum_{\ell\ell'} {\langle k' | U | k \rangle} a^*_{\ell'} a_{\ell}
+ {\cal H}_{i;N}[\varphi; U=0].
~~~
\label{kII18.1}
\end{eqnarray}
Response to a local field is generated by setting
${\langle k' | U | k \rangle} \to U(q)\delta_{k',k+q}$,
dynamically linking (contracting)
the propagators that terminate and start at $U$.
Physical expectations are taken next, while
retaining the individual K-couplings $\varphi$
to keep track of all pair processes.
We use matrix notation, with sums implied
over repeated intermediate indices.
The two-body Green function is $\delta G/\delta U$.
\cite{kb1}
Working from Eq. (\ref{kII03}), vary $G^{-1}$ for
\begin{widetext}
\begin{eqnarray*}
\delta G^{-1}(12)
&=&
- \delta U(12) - \delta \Sigma[\varphi,G](12)
~~~{\rm or}
\cr
G^{-1}(12') \delta G(2'1') G^{-1}(1'2)
&=&
\delta U(12) + \frac{\delta \Sigma(12)}{\delta G(43)}
\frac{\delta G(43)}{\delta U(56)}
~~{\rm so}
\cr
\frac{\delta G(21)}{\delta U(56)}
\equiv
G(25)G(61)
&&\!\!\!\!\!\!
+~ G(21')G(2'1) \Lambda'(1'3|2'4)\varphi_{1'3|2'4}
\frac{\delta G(43)}{\delta U(56)}
\end{eqnarray*}
where $\varphi$ explicitly partners the
effective interaction $\Lambda'$. There is no overcounting of
the primary $s,t$ and $u$ contributions of $\Lambda$ since,
once a line in any self-energy insertion is opened, it will not
reconnect to its originating structure, joining instead a new
and different (ultimately closed) loop. Symbolically, with $I$
the two-point identity,
\begin{eqnarray}
[II - GG\!:\!\Lambda'\varphi]\!:\!\frac{\delta G}{\delta U}
&=&
GG ~~{\rm so}
\cr
\cr
\frac{\delta G}{\delta U}
&=&
[II - GG\!:\!\Lambda'\varphi]^{-1}\!:\!GG
\cr
\cr
&=&
GG + GG\!:\![II - GG\!:\!\Lambda'\varphi]^{-1}\Lambda'\varphi\!:\!GG.
\label{kII18.3}
\end{eqnarray}
\end{widetext}
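The resummation in Eq. (\ref{kII18.3}), and the equivalent reorderings used below for the amplitude, are instances of standard operator identities. A minimal numpy sketch confirms them; $A$ and $B$ are generic small matrices standing in for the kernel $\Lambda'\varphi$ and the pair propagator $GG$, with all four-point index structure suppressed:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
I = np.eye(n)

# Toy stand-ins, scaled so the geometric (ladder) series converges.
A = 0.1 * rng.normal(size=(n, n))   # kernel Lambda'*phi
B = 0.1 * rng.normal(size=(n, n))   # pair propagator GG

# Eq. (kII18.3): resumming the pair-propagation series,
# (I - BA)^{-1} B = B + B A (I - BA)^{-1} B.
lhs = np.linalg.solve(I - B @ A, B)
rhs = B + B @ A @ np.linalg.solve(I - B @ A, B)
assert np.allclose(lhs, rhs)

# "Push-through" identity behind the two orderings of the amplitude:
# A (I - BA)^{-1} = (I - AB)^{-1} A.
assert np.allclose(A @ np.linalg.inv(I - B @ A),
                   np.linalg.inv(I - A @ B) @ A)

# Same resummation, built term by term as a geometric/ladder series.
series = sum(np.linalg.matrix_power(B @ A, k) @ B for k in range(60))
assert np.allclose(series, lhs)
```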
Recalling Eq. (\ref{kII02}), the form of the generating kernel
$\Lambda$ (without the non-crossing-symmetric components
from Fig. 5) can be read off from the structure of $\Phi$
as in Fig. 4, with the subsidiary kernels
$\Lambda_s, \Lambda_t$ and $\Lambda_u$:
\begin{eqnarray}
\Lambda
&=&
\Lambda_s + \Lambda_t - \Lambda_u
~~ {\rm where}
\cr
\cr
\Lambda_s
&=&
\overline V + {\phi}^{-1}\overline V\sigma \!:\! GG\!:\!\Lambda_s\varphi;
\cr
\Lambda_t
&=&
\overline V + {\phi}^{-1}\overline V\tau \!:\! GG\!:\!\Lambda_t\varphi;
\cr
\Lambda_u
&=&
\overline V + {\phi}^{-1}\overline V\upsilon\!:\! GG\!:\!\Lambda_u\varphi.
\label{kII15}
\end{eqnarray}
To put the interactions on the same
representational footing as $\overline V$, we factor out the outermost
K-coupling, ${\phi}$. Intermediate chains that cancel right across
will finally cancel with ${\phi}^{-1}$ appropriate to each channel.
In Eq. (\ref{kII15}) the $u$-channel term of $\Lambda$,
being the exchange of the $t$-channel, carries the sign that tracks
the structural antisymmetry of $\Lambda_t$
under a swap of particle (or hole) end points, and restores to $V$
its proper weight of unity in the intermediate summations.
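Each channel equation in Eq. (\ref{kII15}) sums a ladder series. Suppressing all index structure and setting the K-couplings to unity reduces one channel to a scalar fixed-point problem with the familiar geometric-series solution; the numbers below are toy values for illustration only:

```python
# Scalar caricature of a single channel ladder in Eq. (kII15): with all
# index structure and K-couplings suppressed, the channel equation
# Lambda_c = V + V*g*Lambda_c sums a geometric ladder series.
V = 0.7   # antisymmetrized potential strength (toy number)
g = 0.5   # channel pair-propagator "bubble" (toy number)

lam = 0.0
for _ in range(200):          # self-consistent (fixed-point) iteration
    lam = V + V * g * lam

closed_form = V / (1.0 - V * g)   # resummed geometric series
assert abs(lam - closed_form) < 1e-12
```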
From Eq. (\ref{kII18.3}) the complete four-point amplitude is defined:
\begin{eqnarray}
\Gamma'
&\equiv&
{\phi}^{-1}\Lambda'\varphi\!:\![II - GG\!:\!\Lambda'\varphi]^{-1}
\cr
&=&
{\phi}^{-1}[II - \Lambda'\varphi\!:\!GG]^{-1}\!:\!\Lambda'\varphi.
\label{kII18.4}
\end{eqnarray}
In terms of $\Gamma'$ the conserving two-body Green function becomes
\begin{eqnarray}
\frac{\delta G}{\delta U}
&=&
GG\!:\![II + \Gamma'\varphi\!:\!GG].
\label{kII18.5}
\end{eqnarray}
\subsection{Parquet equations}
At this stage we specialize to the crossing symmetric sub-class of the
expansion dictated by Eq. (\ref{kII18.4}).
Dropping $\Lambda''$ gathers all the crossing-symmetric terms;
the truncated equation then defines the
crossing symmetric kernel
\begin{eqnarray}
\Gamma
&\equiv&
{\phi}^{-1}\Lambda\varphi\!:\![II - GG\!:\!\Lambda\varphi]^{-1}
\cr
&=&
{\phi}^{-1}[II - \Lambda\varphi\!:\!GG]^{-1}\!:\!\Lambda\varphi\!,
\label{kII18.6}
\end{eqnarray}
keeping in mind that the crossing symmetric $\Lambda$ consists
only of the primary structures embedded in $\Phi$. While $\Gamma$
inherits antisymmetry, it forfeits conservation at the two-body level
that is guaranteed for $\Gamma'$.
\cite{err}
As with Eq. (\ref{kII18.4}) above,
Eq. (\ref{kII18.6}) sums $\Gamma$ differently from parquet;
but the underlying architecture of $\Gamma$ is the same.
In the Equation, $s$-, $t$- and $u$-processes combine
in all possible ways while inhibited from acting concurrently.
The resolution of $\Gamma$ becomes a bookkeeping exercise:
make a systematic, species-by-species inventory of all its
permissible pair-only scattering sequences, irreducible
in the parquet sense, within each channel; then weave these
into all possible reducible contributions.
In Kraichnan's description one resums $\Gamma$ by tracking how selective
filtering works through the three possible K-couplings, while in the
parquet approach one enforces, on the intermediate $GG$ pairs, the three
distinct modes of momentum, energy and spin transfer characterizing
the $s, t$ and $u$ channels. Our operation is the same as the pairwise
topological argument for $\Gamma$ in FLEX, detailed in Ref.
\cite{pqt3}.
To achieve it, the first set of equations isolates the components
that are not further reducible within each particular channel:
\begin{eqnarray}
\Gamma_s
&\equiv&
\overline V
+ {\phi}^{-1} \Gamma\tau \!:\! GG\!:\!\Gamma_t\varphi
- {\phi}^{-1} \Gamma\upsilon\!:\! GG\!:\!\Gamma_u\varphi;
\cr
\Gamma_t
&\equiv&
\overline V
- {\phi}^{-1} \Gamma\upsilon\!:\! GG\!:\!\Gamma_u \varphi
+ {\phi}^{-1} \Gamma\sigma \!:\! GG\!:\!\Gamma_s\varphi;
\cr
\Gamma_u
&\equiv&
\overline V
+ {\phi}^{-1} \Gamma\sigma \!:\! GG\!:\!\Gamma_s\varphi
+ {\phi}^{-1} \Gamma\tau \!:\! GG\!:\!\Gamma_t\varphi.
~~~ ~~~
\label{kII15.2}
\end{eqnarray}
Manifestly, the components of $\Gamma_s$ couple only via $t$ or $u$,
excluding any $s$-channel processes where cutting a pair sequence
$\sigma GG$ yields two detached diagrams. Thus, taking the Kraichnan
expectation of $(\Gamma_s - \overline V)\sigma$, nothing survives; and so on
for the other channels. The arrangement generates every legitimate
convolution involving internally closed cycles of propagation through
every channel within $\Gamma$ while ensuring irreducibility of the
three component kernels.
Finally the complete $\Gamma$ is assembled:
\begin{eqnarray}
\Gamma
&=&
\overline V
+ {\phi}^{-1} \Gamma\sigma \!:\! GG\!:\!\Gamma_s\varphi
+ {\phi}^{-1} \Gamma\tau \!:\! GG\!:\!\Gamma_t\varphi
\cr
&&
~~~ ~~~
- {\phi}^{-1} \Gamma\upsilon\!:\! GG\!:\!\Gamma_u\varphi.
\label{kII15.3}
\end{eqnarray}
In the Kraichnan average only the pairwise terms we have highlighted
will survive. Then Eqs. (\ref{kII15.2}) and (\ref{kII15.3}) become
identical to the FLEX parquet equations.
\cite{pqt3}
For each channel the total amplitude can also be recast
to show its reducibility explicitly:
\begin{eqnarray*}
\Gamma
&=&
\Gamma_s + {\phi}^{-1} \Gamma\sigma \!:\!GG\!:\!\Gamma_s\varphi
\cr
&=&
\Gamma_t + {\phi}^{-1} \Gamma\tau \!:\!GG\!:\!\Gamma_t\varphi
\cr
&=&
\Gamma_u - {\phi}^{-1} \Gamma\upsilon\!:\!GG\!:\!\Gamma_u\varphi.
\end{eqnarray*}
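Stripped of all index structure, and with the K-couplings set to unity, Eqs. (\ref{kII15.2}) and (\ref{kII15.3}) reduce to four coupled scalar equations whose fixed point is reached by direct iteration. The caricature below (toy coupling strengths, scalar ``bubbles'' $g_s, g_t, g_u$ in place of the $GG$ convolutions) confirms at least the channel-by-channel reducibility just displayed:

```python
# Scalar caricature of the parquet equations (kII15.2)-(kII15.3):
# each channel contributes a single number, and g_s, g_t, g_u mimic the
# GG pair propagators of the three channels. Toy numbers throughout.
V = 0.2
gs, gt, gu = 0.3, 0.3, 0.3

G = Gs = Gt = Gu = V          # start all amplitudes from the bare potential
for _ in range(500):          # self-consistent iteration to the fixed point
    Gs = V + G * gt * Gt - G * gu * Gu
    Gt = V - G * gu * Gu + G * gs * Gs
    Gu = V + G * gs * Gs + G * gt * Gt
    G  = V + G * gs * Gs + G * gt * Gt - G * gu * Gu

# Channel-by-channel reducibility of the converged amplitude:
# Gamma = Gamma_c (irreducible in channel c) + the c-reducible remainder.
assert abs(G - (Gs + G * gs * Gs)) < 1e-10
assert abs(G - (Gt + G * gt * Gt)) < 1e-10
assert abs(G - (Gu - G * gu * Gu)) < 1e-10
```

At the fixed point the three reducibility relations hold identically, by subtraction of the channel equations; the iteration merely exhibits a convergent instance at weak toy coupling.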
\subsection{Extension of the parquet equations}
\subsubsection{Complete specification of $\Gamma'$}
The $stu$ based formalism leads to an
interaction energy functional $\Phi$ equivalent to the
fluctuation-exchange approximation introduced by Bickers {\em et al.}.
\cite{sb}
The radical difference is that its properties are not imparted
intuitively; they are established from a Hamiltonian. The
consequence of this canonical provenance is to set a limit
on what is possible diagrammatically for a conserving,
optimally pairwise-correlated model.
The FLEX model arrives at the parquet topology naturally by generating,
as we have done within Kraichnan's formalism, the variational
structure of the $\Phi$-derivable self-energy $\Sigma$.
Without the additional step of obtaining the perturbative response,
the intimate link between the renormalized topology of $\Phi$
and the architecture of parquet, which is otherwise implicit within
the self-consistent correlation energy functional, does not emerge.
A widespread line of thought in parquet literature assumes there is
no distinction between, on the one hand, the scattering kernel $\Lambda$
in the self-energy $\Sigma$ and, on the other,
$\Gamma$ acting as the kernel for the total two-body response
within parquet. For $\Phi$-derivable models this is not permissible,
if only because they yield two different numerical
estimates for the static pair correlation function,
of which only one meets the exact formulation
by functional differentiation of $\Phi$ with respect to $V$.
\cite{KI,fgetal}
This reflects the loss of Fock-space completeness.
Later we revisit the implications, for consistency in conservation,
of the parquet model's assumption $\Lambda \equiv \Gamma$.
We have a basis to build up more elaborate extensions of the FLEX parquet
model following the Kraichnan-based analysis. First, the complete
$\Phi$-derivable, conserving pair-scattering kernel $\Gamma'$
may always be obtained from the original representation
Eq. (\ref{kII18.4}), but it is more practical to adapt the parquet
equations (\ref{kII15.2}) and (\ref{kII15.3}). Noting that the
non-parquet term $\Lambda'' = \Lambda' - \Lambda$ is absolutely
irreducible within both parquet and Kadanoff-Baym (recall Fig. 5),
the bare potential is replaced with
\[
{\cal V} \equiv \overline V + \Lambda''.
\]
We return to Eq. (\ref{kII15.2}), this time to define
\begin{eqnarray}
\Gamma'_s
&\equiv&
{\cal V}
+ {\phi}^{-1} \Gamma'\tau \!:\!GG\!:\!\Gamma'_t\varphi
- {\phi}^{-1} \Gamma'\upsilon\!:\!GG\!:\!\Gamma'_u\varphi;
\cr
\Gamma'_t
&\equiv&
{\cal V}
- {\phi}^{-1} \Gamma'\upsilon\!:\!GG\!:\!\Gamma'_u\varphi
+ {\phi}^{-1} \Gamma'\sigma \!:\!GG\!:\!\Gamma'_s\varphi;
\cr
\Gamma'_u
&\equiv&
{\cal V}
+ {\phi}^{-1} \Gamma'\sigma \!:\!GG\!:\!\Gamma'_s\varphi
+ {\phi}^{-1} \Gamma'\tau \!:\!GG\!:\!\Gamma'_t\varphi
~~~ ~~~
\label{kII15.5}
\end{eqnarray}
with the ultimate result
\begin{eqnarray}
\Gamma'
&=&
{\cal V}
+ {\phi}^{-1} \Gamma'\sigma \!:\!GG\!:\!\Gamma'_s\varphi
+ {\phi}^{-1} \Gamma'\tau \!:\!GG\!:\!\Gamma'_t\varphi
\cr
&&
~~~
- {\phi}^{-1} \Gamma'\upsilon\!:\!GG\!:\!\Gamma'_u\varphi.
\label{kII15.6}
\end{eqnarray}
We stress that the only feature that matters in
Eqs. (\ref{kII15.5}) and (\ref{kII15.6})
is the topological arrangement of the elements of the response kernel,
exhausting all possible interplays among the three $\Phi$-derivable
channels independently of crossing symmetry.
Now we address the addition to the energy functional, presumably by
physical intuition, of absolutely pair-irreducible graphs for $\Phi$.
(Two are shown in Fig. 3.)
\subsubsection{Contributions from pair-irreducible terms of $\Phi$}
\centerline{
\includegraphics[height=2.truecm]{KII05.eps}
}
{{\bf FIG. 6.} {\small (a) Fourth-order $stu$-irreducible crossing symmetric
graph, valid as a primitive input to the standard parquet equations but
not $\Phi$-derivable. While the graph can be generated by removing an
interaction node from its analog in Fig. 3(a), when closed with two final
propagators as in (b) it is forced to carry inequivalent propagators
(dotted lines). Thus it is disqualified from any $\Phi$-derivable
approximation since it cannot lead to a unique self-energy functional.
}}
\vskip 0.20cm
Primitive additions to $\Lambda'$ can be incorporated once again via
Eqs. (\ref{kII15.5}) and (\ref{kII15.6}).
In $\Phi$ derivability the choice of symmetric structures for the
$\Lambda$ kernel is highly constraining. Readers can convince themselves,
with a bit of sketching, that no $stu$-irreducible three-node term
with the required symmetry exists.
Nor is there an $stu$-irreducible four-node term for
$\Phi$ that has the needed symmetry. Whereas the crossing symmetric
four-node ``envelope'' graph depicted in Fig. 6(a) is a valid
irreducible interaction in parquet,
\cite{pqt3}
when incorporated as a fully closed diagram it must carry
inequivalent propagators, making it inadmissible in any
$\Phi$-derivable subset of the correlation energy.
The next-order $\Phi$-derivable skeleton beyond second
is that of Fig. 3(a), with five interaction nodes.
Its variation with respect to any node -- removal
of a node from Fig. 3(a) -- generates a two-body correlation
with parquet's envelope graph as its kernel.
However there is no systematic link between such a variation and parquet.
The issue with adding higher-order $stu$-irreducible terms to $\Phi$,
again assuming that they held some novel physical effects,
is that one gets back to adding many-body correlations heuristically,
without a Hamiltonian basis. Strictly, then, the sum-rule identities
no longer come for free but require individual validation
(this has been done up to the third-frequency-moment rule
\cite{fgetal}).
Kraichnan's procedure is limited to pair interactions; so far, it is hard
to envisage how any Hamiltonian extension could generate these
additional complex objects. Nevertheless adding a totally
pairwise-irreducible structure satisfying Baym-Kadanoff symmetry will
not spoil $\Phi$ derivability.
\subsection{Parquet and $\Phi$-derivability}
In establishing full parquet the $\Phi$-derivable FLEX approximation
has been taken as a suitable entry point for successive iterations aimed
at approaching the full structure;
but the initial self-energy $\Lambda\!:\!G$ is considered to
fall short of a maximally correlated parquet. It is deemed
necessary to feed the FLEX-derived crossing symmetric $\Gamma$ in
Eq. (\ref{kII15.3}) back into $\Sigma$ in Eq. (\ref{kII04}) via the
replacements
\cite{pqt3}
(ensuring that $G\!\!:\!\Gamma\!\!:\!G$ does not double
up on terms previously included)
\begin{eqnarray}
\Sigma(13)
&\leftarrow&
{\widehat \Gamma}(12|34)G(42) ~~\text{in which}~~
\cr
\cr
{\widehat \Gamma}(12|34)
&\leftarrow&
\overline V(12|34)
\cr
&&
+~ \Gamma(12|3'4')G(4'2')G(3'1')\overline V(1'2'|34),
~~~ ~~
\label{lw47.0}
\end{eqnarray}
with the nonconforming piece, $\Gamma'' = \Gamma' - \Gamma$, naturally
absent. Substitution of ${\widehat \Gamma}$ for $\Lambda$ in the self-energy
assumes that no distinction should be made between
the approximate self-energy kernel and the approximate two-body response
kernel: that, as in the exact theory, they are one and the same.
\cite{pqt2a,pqt3}
As a generator of new primitively irreducible structures
Eq. (\ref{lw47.0}) can be iterated at will.
In view of how the generic parquet Equations (\ref{kII15.5})
and (\ref{kII15.6}) always build up from at least the leading primitive
irreducible, namely $\overline V$, any resulting $\Gamma$ must always incorporate
$\Lambda$ from Eq. (\ref{kII15}). Reopening lines in
the self-energy ${\widehat \Gamma}[V, G]\!:\!G$ is always going to regenerate
pieces including the unwanted non-parquet term $\Lambda''$.
To compare the behaviors of the different self-energy kernels for
$stu$ and standard parquet, we use a result of Luttinger and Ward.
\cite{lw}
Equation (47) of that Reference
provides an alternative formulation of the correlation energy when
$\Lambda$ in Eq. (\ref{kII02}) is the exact $\Gamma$ interaction:
\begin{eqnarray}
\Phi[V;G]
&=&
- {\langle \ln(I - G^{(0)}\!\cdot\!\Sigma) \rangle}
- G[V]\!:\!\Sigma
\cr
&&
+~ \int^1_0 \frac{dz}{2z} G[V]\!:\!\Gamma[zV;G[V]]\!:\!G[V].
\label{lw47.1}
\end{eqnarray}
The difference between the coupling-constant integral on the right-hand
side of this identity and its counterpart in Eq. (\ref{kII02}) is that the
former keeps track only of the combinatorial factors for the $V$s in the
original linked skeleton diagrams $\Gamma[V;G^{(0)}]$ but now with $G[V]$,
containing $V$ at full strength, in place of each bare line $G^{(0)}$.
By contrast, in the integral on the right-hand side of Eq. (\ref{kII02})
the coupling factor attaches to all occurrences of $V$; that is,
including those within $G[V]$ itself.
The correlation energy as given in Eq. (\ref{lw47.1}) leads to two
identities. Exploiting the equivalence of all propagators in the closed
structure $G\!\!:\!\Gamma\!\!:\!G$ within the integral,
varying on both sides with respect to the self-energy gives
\begin{eqnarray}
\frac{\delta \Phi}{\delta \Sigma}
&=&
(I - G^{(0)}\!\cdot\!\Sigma)^{-1}\!\cdot\!G^{(0)} - G
- \frac{\delta G}{\delta \Sigma}\!:\!\Sigma
\cr
&&
~~~ ~~~
+ \frac{\delta G}{\delta \Sigma}\!:\!\Gamma[V; G]\!:\!G
\cr
&=&
- \frac{\delta G}{\delta \Sigma}\!:\!(\Sigma - \Gamma[V; G]\!:\!G)
\cr
&=&
0
\label{lw47.2}
\end{eqnarray}
on using Eq. (\ref{kII04}).
The vanishing of this derivative establishes the
correlation energy as an extremum with respect to perturbations,
as these add linearly to $\Sigma$.
Next,
\begin{eqnarray}
\frac{\delta \Phi}{\delta G}
&=&
{\Bigl( (I - G^{(0)}\!\cdot\!\Sigma)^{-1}\!\cdot\!G^{(0)} - G \Bigr)}
\!\cdot\!\frac{\delta \Sigma}{\delta G}
\cr
&&
~~~ ~~~
+~ \Gamma[V; G]\!:\!G
\cr
&=&
\Sigma.
\label{lw47.3}
\end{eqnarray}
Consistency with Eq. (\ref{kII04}) is confirmed.
We look at how Eq. (\ref{lw47.1}) works in the $\Phi$-derivable case.
Since it applies canonically in the case of the full Kraichnan Hamiltonian,
the form survives the expectation over the $stu$ couplings, as will the
form of the variational derivatives; the skeletal topology of the
integrals on the right sides of Eqs. (\ref{kII02}) and (\ref{lw47.1})
is the same. In the
stochastic expectations on the right-hand side of
Eq. (\ref{lw47.1}), $\Gamma$ goes over to the reduced $stu$ structure
$\Lambda$ depicted in Fig. 4. This is because, in the coupling-constant
integral, the pattern of surviving and suppressed products of factors
$\varphi$ is identical with that leading to Eq. (\ref{kII02}).
Define the $\Phi$-derivable correlation energy $\Phi_{\rm KB}$ from the
corresponding Eq. (\ref{lw47.1}).
All propagators in the structure $G\!\!:\!\Lambda\!\!:\!G$ being
equivalent, the variation in Eq. (\ref{lw47.2}) again leads to
\begin{eqnarray}
\frac{\delta \Phi_{\rm KB}}{\delta \Sigma}
&=&
- \frac{\delta G}{\delta \Sigma}\!:\!(\Sigma - \Lambda\!:\!G)
= 0
\label{lw47.4}
\end{eqnarray}
so the extremum property holds for the approximate correlation energy.
The relation Eq. (\ref{lw47.3}) becomes
\begin{eqnarray}
\frac{\delta \Phi_{\rm KB}}{\delta G}
&=&
\Lambda\!:\!G = \Sigma
\label{lw47.5}
\end{eqnarray}
since the symmetry of the integral $G\!:\!{\delta\Lambda/\delta G}\!:\!G$
works once more as for Eq. (\ref{kII04}) to recover the self-energy.
The analysis is now applied to the classic parquet expansion,
whose candidate correlation energy functional,
defined from Eq. (\ref{lw47.1}),
we will call $\Phi_{\rm PQ}$. In this instance one gets
\begin{eqnarray}
\frac{\delta \Phi_{\rm PQ}}{\delta \Sigma}
&=&
- \frac{\delta G}{\delta \Sigma}\!:\!{\Bigl( \Sigma - \Gamma[V; G]\!:\!G
- \Delta\Gamma[V; G]\!:\!G \Bigr)};
~~~
\cr
\cr
\Delta\Gamma[V; G]
&\equiv&
\int^1_0 \frac{dz}{z} ( \Gamma[zV;G[V]] - z\Gamma[V;G[V]] )
\cr
&&
- \int^1_0 \frac{dz}{2z}
\frac{\delta \Gamma[zV;G[V]]}{\delta G}\!:\!G[V].
\label{lw47.6}
\end{eqnarray}
This does not vanish because the parquet structure $G\!:\!\Gamma\!:\!G$
contains inequivalent propagators. Therefore Eq. (\ref{lw47.2}) fails.
The same topological absence of $\Phi$-derivable symmetry spoils the
complementary attempt to define $\Phi_{\rm PQ}$ from the alternative
fundamental expression Eq. (\ref{kII02}).
Figure 7 typifies the issue. At third order in the bare potential
the parquet iteration of $\Gamma$ obtained from FLEX produces a new
absolutely irreducible term at fourth order whose skeleton
contribution to $\Phi$ would carry inequivalent propagators,
as already shown in Fig. 6(b).
Since the symmetry leading to a well defined $\Phi$ must be present
at all orders it follows that no formulation of parquet, built on
pairwise-only scattering, can be $\Phi$-derivable.
Conversely, the $stu$ construction \`a la Kraichnan is the only strictly
pairwise-correlated model that has a Hamiltonian basis while exhibiting
the essential parquet topology.
\centerline{
\includegraphics[height=4.0truecm]{KII06.eps} }
{{\bf FIG. 7.} {\small The iterative parquet algorithm Eq. (\ref{lw47.0}),
starting from the FLEX self-energy, is incompatible with $\Phi$ derivability.
(a) Differentiation of the self-energy term at third order in the interaction
gives a term in the parquet kernel series.
(b) Iteration of the self-energy in the parquet algorithm must close the
structure from (a) by adding an interaction, avoiding overcounting
of reducible terms. This generates a novel irreducible component
in the parquet series. A final closure generates the linked
correlation-energy diagram of Fig. 6(b),
which is not a legitimate $\Phi$-derivable contribution.
Since $\Phi$ derivability must hold at every order, no level of
iteration of the parquet kernel can fulfill it.
}}
\vskip 0.25cm
The failure of the relation Eq. (\ref{lw47.6}) to vanish has the
more serious implication that a parquet model does not correspond to a
system with a well defined ground-state energy. Securing that would
require a $\Phi_{\rm PQ}$ conforming to the criteria of Baym and Kadanoff.
If such a functional can be constructed to satisfy Eq. (\ref{lw47.3}),
say, it will not have the canonical Luttinger-Ward form of either
Eqs. (\ref{kII02}) or (\ref{lw47.1}). The question of the existence
of a stable ground-state configuration stays undecided for parquet.
So far we have shown how both $\Phi$-derivable and parquet models fail
to produce forms for the correlation energy that are fully consistent
both with respect to conservation and to crossing symmetry. However,
unlike parquet, $\Phi$ derivability preserves internal consistency
in the sense of Luttinger and Ward,
\cite{lw}
in particular $\Phi$ as an extremum with respect to external perturbations.
Our conclusions on the limits of both parquet and $\Phi$-derivable models
coincide fully with those of the diagrammatic analysis of Smith.
\cite{roger}
That analysis applies as well to more elaborate $\Phi$-derivable structures
beyond the one corresponding to $stu$/FLEX, underwritten by its Kraichnan
Hamiltonian. In his different functional-integral approach,
oriented towards critical behavior, Jani\v{s}
\cite{janis1,janis2}
likewise remarks on the discrepancy between the parquet kernel's analytical
properties and those obtained from $\Phi$ derivability.
\subsection{Crossing symmetry and response}
Notionally, while crossing symmetry will apply to scattering off an open
system, response analysis concerns a closed system and thus a different
interplay of two-body vertex and one-body self-energy correlations. Any
complement to the extra term $\Gamma''$,
if found neither in $\Gamma$ itself nor in the self-energy insertions
subsumed in the total two-body response, could make no contribution to that
conserving response within its approximating $\Phi$-derivable framework.
If needed for conservation, the counter-term must show up somewhere.
\cite{gllg}
\centerline{
\includegraphics[height=6truecm]{KII08.eps}
}
{{\bf FIG. 8.} {\small Damping terms in the conserving
high-frequency summation of the two-body electron-gas polarization
function, exact to second order in $V$ (full horizontal lines),
after Fig. 2 of Glick and Long.
\cite{gllg}
Wavy lines terminating with {\bf x} are couplings to the external
probe, directed lines are free propagators.
Terms (a), (b), (c), (f), (h), and (i) have their kernel
in $\Lambda'$ as generated from $\Phi$.
For consistency, these two-body vertex components are
summed concurrently with the one-body insertions
(d), (e), (g), and (j) that come from the uncorrelated
bubble $GG$. The overall topology in terms of bare lines
does not discriminate between self-energy and vertex terms,
and its systematic cancellations rely on more than manifest
crossing symmetry.
}}
\vskip 0.25cm
Specializing to the purely computational aspect of the $stu$ and parquet
analyses, we draw attention to Fig. 2 of the paper by Glick and Long,
\cite{gllg}
here replicated in Fig. 8. It
exhibits the dominant high-frequency contributions to the imaginary
(damping) part of the polarization function for the electron gas and
derives from the bare expansion of the density response, generated
by $\delta^2 \Phi/\delta U\delta U$ when the exact correlation energy
is truncated at second order in the bare potential $V$.
Self-energy insertions from the externally coupled propagators
must be computed in systematic superposition with the corresponding
interaction-vertex contributions.
Glick and Long's example demonstrates that, to account systematically
for the dynamical correlations in the response, self-energy contributions
from the propagators external to the two-body interaction $\Gamma$ enter,
as well as those internal to it. This means that a protocol broader than
explicit crossing symmetry determines the bookkeeping that produces the
overall conserving result. In the $\Phi$-derivable approach, a similar
pattern of cancellation also provides the counterbalancing mechanism for
the non-parquet component $\Gamma''$.
The following is of interest. The $\Phi$-derivable model, truncated beyond
second order in $\overline V$, reproduces precisely the diagrams of Fig. 8. At second
order, the structure of the kernel $\Lambda$ of $\Phi$ is ambiguously defined
(degenerate, if one likes); it may be envisaged to manifest in any of the
channels $s,t$ or $u$, which is the very reason for forming the composite
K-coupling of Eq. (\ref{kII13}) to avoid overcounting. Nevertheless,
perturbing the system lifts the structural degeneracy, with all three channels
emerging on an equal footing in the Kadanoff-Baym functional derivation of
the second-order kernel $\Lambda'$. In this quite special case $\Gamma'$
is crossing symmetric, yet crossing symmetry is not uniquely assignable to
the generating kernel $\Lambda$. Beyond second order the channel ambiguity
is lifted and crossing symmetry for $\Lambda'$ is lost; but what the
second-order case highlights is that the $stu$ parquet structure is inherent
in $\Phi$ derivability, even if in a weaker sense and
even if insufficient to secure strict crossing symmetry in general.
One can refer to Fig. 5 to see this stated graphically.
Equation (\ref{kII02}) for the correlation energy implies that its
fundamental expansion is in powers of the underlying bare interaction $V$
regardless of where it occurs structurally.
This implies in turn that one should look again at the expansion
in terms of the bare propagator $G^{(0)}$ rather than focus
exclusively on the full propagator $G$. As indispensable as $G$ is
as a construct in making sense of the correlation physics,
it tends to hide those instances of $V$ within the propagators that
counterbalance its presence in the skeleton graphs defining the vertex
components; a concealment that, as suggested by Fig. 8, masks how
cancellations pair up among two-body and one-body self-energy elements.
The parquet model's self-energy structures for $G$ are set by crossing
symmetry through the feedback imposed on the self-energy kernel. There,
it is the skeletal topology of $\Gamma$ that governs the processes of
cancellation.
For $\Gamma'$ in the complete $\Phi$-derived two-body Green function,
conservation operates otherwise: as in Fig. 8, competing effects must
cancel in a determinate superposition. It is this that conditions the
topology of the approximate $\Gamma'$, not the other way around.
Since $G$ is an infinitely nested functional of $V$, the
renormalized $\Lambda$ and $\Gamma'$ can well differ diagrammatically
while their bare-expansion analogs, respectively ${\widetilde \Lambda}$
and ${\widetilde \Gamma'}$, will not.
These last two cannot differ in their structure because the
{\em only} topological distinction between the bare graphs of $\Phi$
and the bare graphs of the derived correlated response is the external
perturbation nodes attached to the bare propagators. In other words,
\[
\widetilde \Gamma' = \widetilde \Lambda.
\]
Unlike $\Gamma'$, the linked diagrams of the bare expansion for
$\widetilde \Gamma'$ do not differentiate between one-body and
(two-body) vertex contributions.
They expose the self-energy insertions not only internal to the two-body
Green function but in the external incoming and outgoing lines as well.
By themselves, the internal arrangements of the renormalized four-point
kernel are insufficient for response. The response function's graphs are
closed: it is a contraction of the two-body Green function.
\cite{kb1}
The outer connections of $\Gamma'$ must terminate in two particle-hole
pairs $GG$ to obtain the dynamically correlated contribution.
In the overall accounting the leading uncorrelated particle-hole bubble
$GG$ also plays an explicit role.
The physical response function is the same whether written in
terms of $\widetilde \Gamma'$ or of $\Gamma'$. It follows that
in the latter's renormalized setting the non-parquet component
$\Gamma''$, embedded in the complete response, finds its canceling
counterparts among the self-energies. For standard parquet, despite
the bootstrap Eq. (\ref{lw47.0}), the self-energy terms
in the internal propagating pairs $GG$ are not necessarily tuned to
overall cancellation; crossing symmetry reflects only the skeletal form
of $\Gamma$, not its dynamics. It is an additional assumption that
cancellations in parquet are looked after automatically.
In practice, they are not. Figure 8 gives a clue as to why.
\section{Summary}
We have recovered the parquet equations from an augmented form of
Hamiltonian within Kraichnan's fundamental stochastic embedding
prescription.
\cite{k1,k2}
Our particular re-interpretation of the parquet model inherits
the entire suite of conserving analytic (causal) identities from
the exact many-body description for its generating model Hamiltonian.
Relations that rely explicitly on Fock-space completeness are not
preserved, since Kraichnan averaging must decohere classes of
interaction-entangled multiparticle states
(for example, structures as in Fig. 3).
On the way we have examined the seeming paradox of a fully conserving
pairwise-maximal $\Phi$-derivable theory with crossing symmetric kernel
yet leading to a non-symmetric response kernel on one side
(while still including standard parquet in its structure),
and on the other the pairwise-maximal parquet theory in both elementary
and iterated forms, maintaining crossing symmetry but not conservation.
This prompts thought on which philosophy to follow in
formulating many-body approximations, and for which purpose.
The second lesson of this work goes to a conception of how model correlation
theories operate vis \`a vis the conservation laws in a system closed to
external particle exchange. In understanding fluctuations and response,
the parquet construction can be applied fruitfully within a canonically
founded perspective that respects parquet's pairwise-maximal
topology in logical independence from manifest crossing symmetry,
inherited from the distinct open-system physics of nuclear scattering.
In nuclear scattering, at any rate conceptually,
\cite{zqt}
free fermionic constituents arrive from asymptotic infinity to
encounter an open assembly of the same species. They interact strongly and
the free final products scatter off to infinity. One then expects
the outcome to be governed by the optical theorem,
crossing symmetry and thus the forward-scattering sum rule.
\cite{bb2}
In a setting such as transport, the problem involves constituents
that are always confined to the medium, interacting collectively
and strongly while coupling weakly to an external perturbing probe.
A closed scenario interrogates the system very differently.
Accounting of the self-energy contributions from the initial and
final particle-hole $GG$ pairs as well as the uncorrelated bubble $GG$,
coupled via the probe, now matters, and reflects the main philosophical
difference between standard parquet and its $\Phi$-derivable re-reading
in the ambit of response. The role of the vertex terms demands attention
to systematic counterbalancing from the self-energy terms, including from
incoming and outgoing particle-hole states. Such processes are
assured in $\Phi$-derivability, while in parquet they are assumed.
The elegant application of crossing symmetry to particle-antiparticle
processes, fusing them seamlessly with the less problematic but
structurally disparate particle-particle pair processes,
is a foremost idea in many-body understanding.
For $\Phi$ derivability defined by a Hamiltonian, centered
upon conservation and oriented towards response, one is led to
a violation of crossing symmetry in the derived $stu$ scattering kernel.
In the context of self-energy-versus-vertex accounting, this may
be offset partly through the pattern of mutual
cancellations ensuring conservation.
A fully conserving parquet response theory, no longer crossing symmetric
but sharing the identical pairwise-only arrangement of the original parquet
equations, emerges naturally from the Hamiltonian description of the
$stu$/FLEX approximation. The caveat is that, in it, the
correlation-energy kernel $\Lambda$ and the scattering kernel $\Gamma'$
functionally derived from it remain distinct in playing distinct
roles in the renormalized physics. If their differing structures are
conflated, conservation fails.
The puzzle remains. $\Phi$ derivability in an approximate expansion leads to
crossing-symmetry violations, yet in maintaining conservation it suggests
that the violating components are systematically canceled by other means.
Imposing crossing symmetry on an approximating subset of the two-body
scattering amplitude would seem to take care of systematic cancellation,
yet not in a way that conserves.
Understanding in greater detail just how cancellation acts would therefore
provide a much needed clarification.
How might Kraichnan's idea in itself be taken further?
First, the present analysis is readily extended both to non-uniform
cases and at least to some instances where singular behaviour in any
of the pair channels may break ground-state symmetry.
Applying it to analyze nonperturbative many-body formalisms is
also promising. Variational and coupled-cluster methods
are potential candidates. The stochastic embedding approach
pioneered by Kraichnan may not be the only way to set
approximate many-body approaches on a canonical footing. However,
the power of the method in guaranteeing all the conserving
analytic identities that link one- and two-body correlation functions,
even in approximation and beyond linear response, speaks compellingly
for revisiting an original and penetrating analysis long
celebrated in the turbulence-theory community
\cite{frisch}
yet, with rare exceptions,
\cite{kb2}
largely unnoticed by its sister community of many-body theory.
\section*{Acknowledgments}
We acknowledge the support of our respective institutions: for FG, the
University of New South Wales; for TLA, Kent State University.
\pacs{14.20.-Dh, 13.75.Cs, 13.75.Gx, 11.10.St}
\section{Introduction}
\vspace{-12.0cm}
\widetext
{NT@UW-99-27, DOE/ER/40561-67, FZJ-IKP(TH)-1999-17 }
\narrowtext
\vspace{12.0cm}
The pion nucleon-nucleon vertex function is needed in many different
places in hadron physics, such as the nucleon-nucleon interaction
\cite{MHE}, pion-nucleon scattering and pion photoproduction
\cite{surya96,sato96}, deep-inelastic scattering \cite{thom83}, and as
a possible explanation of the Goldberger-Treiman discrepancy
\cite{braat72,Jones75,durso77,coon90,domin85}. Commonly, one
represents the vertex function by a phenomenological form factor. The
cut-off parameters employed in the various calculations range from
$\Lambda^{(1)}_{\pi N N} = 300$ MeV in pion photoproduction to
$\Lambda^{(1)}_{\pi N N} = 1700$ MeV in some One-Boson Exchange models
of the nucleon-nucleon interaction, assuming a monopole
representation (i.e. $n=1$ in Eq. (\ref{ff}), see below).
The Skyrmion model \cite{Meis86,Kai87} gives
$\Lambda^{(1)}_{\pi N N} = 860$ MeV. A lattice gauge calculation gets
$\Lambda^{(1)}_{\pi N N} = 750 \pm 140$ MeV \cite{Liu95}.
Unfortunately, the $\pi NN$ form factor cannot be determined
experimentally. It is an off-shell quantity which is inherently
model-dependent. For a given model,
the form factor -- besides being a parametrization of the vertex function --
summarizes those processes which are not calculated explicitly.
The simplest class of meson-theoretical models of the nucleon-nucleon
interaction includes the exchange of one meson only.
In these models, one needs ``hard'' form factors for the following reason.
In One-Boson Exchange potentials, the tensor
force is generated by one-pion and one-rho exchange.
A cut-off
below 1.3 GeV would reduce the tensor force too strongly and
make a description of the quadrupole moment of the deuteron
impossible \cite{Ericson83,MHE}.
This situation changes when the exchange of two correlated bosons
between nucleons is handled explicitly.
The exchange of an interacting $\pi\rho$ pair
generates additional tensor strength at
large momentum transfers. This implies softer cut-offs
for the genuine one-pion exchange \cite{Janssen93,Janssen94}.
Meson theory allows one to undress the phenomenological form factors
at least partly by calculating those processes which contribute
most strongly to the long range part of the vertex
functions \cite{Janssen93,plumper}.
A physically very transparent way to include the most important
processes is given by dispersion theory.
The imaginary part of the form factor
in the time-like region is given by the unitarity cuts.
In principle, one should consider the full three-pion continuum.
A reasonable approximation is to reduce the three-body problem
to an effective two-body problem by representing the two-pion
subsystems by effective mesons \cite{amado}. An explicit calculation
of the selfenergy of the effective meson incorporates the effects
of three-body unitarity.
In the case of the pion-nucleon form factor, one
expects that $\pi\rho$ and $\pi\sigma$
intermediate states are particularly relevant.
Here, ``$\sigma$'' is understood as an abbreviation for the isoscalar
two-pion subsystem.
In many early calculations \cite{durso77,dillig},
the effect
of the $\pi\sigma$ intermediate states was found to be
negligible. In these calculations,
a scalar $\sigma\pi\pi$ coupling has been used.
Nowadays, such a coupling is disfavored because it is not chirally
invariant.
In the meson-theoretical model for pion-nucleon scattering of Ref.
\cite{schutz94},
the exchange of two correlated mesons in the t-channel has been
linked to the two-pion scattering model of Ref. \cite{lohse90}.
The resulting effective potential can be simulated
by the exchange of an effective sigma-meson in the t-channel,
if a {\it derivative} sigma two-pion coupling is adopted.
In the present work, we want to investigate the effect of the
pion-sigma channel on the pion form factor
using a derivative coupling.
The meson-meson scattering matrix $T$ is an essential building block
of our model. Formally, the scattering matrix is obtained by
solving the Bethe-Salpeter equation,
$ T=V+VGT, $
starting from a pseudopotential $V$.
Given the well-known difficulties in solving the
four-dimensional Bethe Salpeter equation, one rather solves
three-dimensional equations, such as
the Blankenbecler-Sugar equation (BbS) \cite{blankenbecler}
or related ones \cite{gross,jennings}.
The two-body propagator $G$ is chosen
to reproduce the two-particle unitarity cuts in the physical
region. The imaginary part of $G$ is uniquely defined in this way,
but for the real part, there is complete freedom which leads
to an infinite number of reduced equations \cite{jennings}.
The energies of the interacting particles are well-defined
for on-shell scattering only. For off-shell scattering,
there is an ambiguity.
Different choices of the energy components may affect the off-shell
behaviour of the matrix elements. As long as one is exclusively
interested in the scattering of one kind of particles, e.g. only
pions, one can compensate the modifications of the off-shell
behaviour by readjusting the coupling constants. This gets more
difficult as soon as one aims for a consistent model of many
different reactions. Moreover, in the calculation of the
form factor, the scattering kernel $V$ may have singularities
which do not agree with the physical singularities due to e.g.
three-pion intermediate states \cite{janssen2}.
In contrast to the Blankenbecler-Sugar reduction,
time-ordered perturbation theory (TOPT)determines the off-shell
behaviour uniquely. Moreover, only physical
singularities corresponding to the decay into multi-meson
intermediate states can occur \cite{schweber}.
For the present purpose, we
therefore will employ time-ordered perturbation theory.
\section{The Meson--Meson Interaction Model}
The Feynman diagrams defining the pseudopotentials for
$\pi\rho$ and $\pi\sigma$ scattering are shown in Fig. \ref{rhopi}
and Fig. \ref{sigpi}.
We include both pole diagrams as well as t-channel and u-channel
exchanges. The transition potential is given by one-pion exchange
in the t-channel (see Fig. \ref{rhosig}).
In Ref. \cite{Janssen93}, $\pi\rho$
scattering has been investigated neglecting the $A_1$-exchange
in the u-channel.
\begin{figure}\begin{center}
\parbox{8cm}{\psfig{file=fig1.eps,width=8cm}}
\end{center}
\caption{Diagrams describing the $\pi\rho
\rightarrow \pi\rho$ potential}
\label{rhopi}
\end{figure}
\begin{figure}\begin{center}
\parbox{8cm}{\psfig{file=fig2.eps,width=8cm}}
\end{center}
\caption{Diagrams describing the $\pi\sigma
\rightarrow \pi\sigma$ potential}
\label{sigpi}
\end{figure}
\begin{figure}\begin{center}
\parbox{8cm}{\psfig{file=fig3.eps,width=5cm}}
\end{center}
\caption{Diagram describing the $\pi\rho
\rightarrow \pi\sigma$ transition potential}
\label{rhosig}
\end{figure}
The $\pi\pi\rho$ and $A_1\pi\rho$ interactions are chosen according to
the Wess-Zumino Lagrangian
\cite{zumino} with $\kappa = \frac{1}{2}$.
The two-pion sigma vertex is defined by a derivative coupling \cite{schutz94}.
For the three-sigma vertex we take a scalar coupling \cite{schutz94}.
The Lagrangians employed read explicitly:
\begin{eqnarray}
{\cal L}_{\pi\pi\rho} & = & -g_{\pi\pi\rho}\ (\vec{\pi} \times
\partial_{\mu} \vec{\pi}) \cdot \vec{\rho}^{\,\mu}\\ {\cal
L}_{\rho\rho\rho} & = &
\frac{1}{2}g_{\pi\pi\rho}(\partial_{\mu}\vec{\rho}_{\nu} -
\partial_{\nu}\vec{\rho}_{\mu}) \cdot (\vec{\rho}^{\,\mu} \times
\vec{\rho}^{\,\nu})\\ {\cal L}_{A_1\pi\rho} & = &
\frac{g_{\pi\pi\rho}}{m_{A_1}} \Big[(\vec{A}_{\mu} \times
\partial_{\nu}\vec{\pi}) - (\vec{A}_{\nu} \times
\partial_{\mu}\vec{\pi})\nonumber\\ & & + \frac{1}{2} \left\{\vec{\pi}
\times (\partial_{\mu}\vec{A}_{\nu} -
\partial_{\nu}\vec{A}_{\mu})\right\}\Big]
(\partial^{\mu}\vec{\rho}^{\,\nu} - \partial^{\nu}\vec{\rho}^{\,\mu})
\\ {\cal L}_{\omega\pi\rho} & = &
\frac{g_{\omega\pi\rho}}{m_{\omega}}\ \epsilon^{\mu\alpha\lambda\nu}
\partial_{\alpha}\vec{\rho}_{\mu}\ \partial_{\lambda}\vec{\pi}\
\omega_{\nu} \\
{\cal L}_{\pi\pi\sigma} & = & \frac{f}{2m_{\pi}}\
\partial_{\mu}\vec{\pi} \cdot \partial^{\mu}\vec{\pi} {\sigma}\\
{\cal L}_{\sigma\sigma\sigma} & = & g_{\sigma\sigma\sigma} m_{\sigma}\
{\sigma}{\sigma}{\sigma}.
\end{eqnarray}
The completely antisymmetric tensor
has the component $\epsilon_{0123}=+1$.
In the presence of derivative couplings, the canonical momenta
conjugate to the fields $\Phi_k$,
$$\pi_k=\frac{\delta{\cal L}}{\delta\dot{\Phi}_k},$$
receive contributions from the interaction Lagrangian. The
corresponding Hamiltonian density
\begin{equation}
{\cal H}\ =\ \sum_k \pi_k \dot{\Phi}_k\ -\ {\cal L}
\end{equation}
consists of the usual terms plus additional contact terms:
\begin{equation}
{\cal H}\ =\ {\cal H}_0 - {\cal L}_{int} + {\cal H}_{contact}.
\end{equation}
The contact terms ensure that the diagrams of time-ordered
perturbation theory are on-shell equivalent to the corresponding
Feynman diagrams. For our model Lagrangian,
the contact terms, expanded up to the order $g^2, f^2$, and $fg$, are
given by:
\begin{eqnarray}
{\cal H}_{contact} & = &
+\frac{g_{\pi\pi\rho}^2}{2}(\vec{\rho}^{\,0} \times \vec{\pi})^2 +
\frac{f^2}{2m_{\pi}^2} {\sigma}^2 \dot{\vec{\pi}}^2\nonumber\\
& & -
\frac{fg_{\pi\pi\rho}}{m_{\pi}} {\sigma}\dot{\vec{\pi}} \cdot
(\vec{\rho}^{\,0} \times \vec{\pi})\nonumber\\
& & +
\frac{g_{\omega\pi\rho}^2}{2m_{\omega}^4}\left\{\epsilon_{ijk}(\partial^i
\vec{\rho}^{\,j})(\partial^k \vec{\pi})\right\}^2\nonumber\\
& & +\frac{2g_{\pi\pi\rho}^2}{m_{A_{1}}^{4}}
\left\{\partial_{\nu}\vec{\pi} \times
(\dot{\vec{\rho}}^{\,\nu}-\partial^{\nu}\vec{\rho}^{\,0})\right\}^2\nonumber\\
& & + \frac{g_{\pi\pi\rho}^2}{2m_{A_{1}}^{2}} \left\{(\partial_k
\vec{\rho}^{\,0} - \dot{\vec{\rho}}_k) \times \vec{\pi}\right\}^2
\end{eqnarray}
The Feynman diagrams are replaced by the corresponding time-ordered
diagrams and a contact diagram, as e.g. shown in Fig.
\ref{topt-bbs-aequi}
for the case of pion-rho scattering via pion exchange in the
s-channel.
\begin{figure}\begin{center}
{$\parbox[c][2.1cm]{1.6cm}
{\psfig{file=fig4.eps,height=2cm,width=1.5cm}}
=
\parbox[c]{1.6cm}
{\psfig{file=fig4.eps,height=2cm,width=1.5cm}}
\;+\;\parbox[c]{1.6cm}
{\psfig{file=fig5.eps,height=2cm,width=1.5cm}}
\;+\; \parbox[c]{1.6cm}
{\psfig{file=fig6.eps,height=2cm,width=1.5cm}}
$}
\caption{Decomposition of the pion-pole Feynman diagram
into time-ordered diagrams and a contact term.}
\label{topt-bbs-aequi}
\end{center}\end{figure}
Form factors are required to ensure convergence. We choose
standard monopole ($n=1$) or dipole ($n=2$) parameterizations,
\begin{equation}\label{ff}
\Gamma^{(n)}(q^2) = \left(\frac{\Lambda^2-m^2}
{\Lambda^2-q^2}\right)^n
\end{equation}
The cut-off parameters $\Lambda^{(n)}$ and the coupling constants
$\frac {g^2}{4\pi}$ are taken from other investigations. In detail,
we employ the following constants.
The coupling constant
$\frac {g^2_{\pi\pi\rho}}{4\pi}=2.84$ can be determined from the decay
$\rho\rightarrow \pi\pi$.
We assume $ g_{\pi\rho A_1}= g_{\pi\pi\rho}$ \cite{zumino}.
The corresponding cut-off parameters
$\Lambda^{(1)}_{\pi\pi\rho}=1500 $ MeV and
$\Lambda^{(2)}_{\pi\rho A_1}=2600 $ MeV
have been taken from Ref. \cite{janssen2}.
The decay $\omega\rightarrow \pi\rho \rightarrow \pi\gamma $
gives the coupling constant
$\frac {g^2_{\pi\rho\omega}}{4\pi}=7.5 $ and
$\Lambda^{(2)}=2200 $ MeV \cite{durso87}.
In the meson-theoretic model for pion-nucleon scattering of Refs.
\cite{schutz94,reuber}, the following constants have been determined:
$\frac {g^2_{\pi\pi\sigma}}{4\pi}=0.25$,
$\Lambda^{(1)}_{\pi\pi\sigma}=1300$ MeV,
$\frac {g^2_{\sigma\sigma\sigma}}{4\pi}=3.5$,
$\Lambda^{(1)}_{\sigma\sigma\sigma}=2000$ MeV.
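As a numerical illustration of the parametrization in Eq. (\ref{ff}), the sketch below evaluates the monopole and dipole suppression at a sample spacelike momentum transfer. The pion mass value and the sample $q^2$ are our choices; the cut-offs 300 and 1700 MeV bracket the range quoted in the introduction, with 800 MeV as an intermediate value:

```python
def form_factor(q2, m, lam, n):
    """Gamma^{(n)}(q^2) = ((Lambda^2 - m^2)/(Lambda^2 - q^2))^n,
    normalized to 1 at the meson pole q^2 = m^2.  Units: MeV for
    masses and cut-offs, MeV^2 for q2."""
    return ((lam**2 - m**2) / (lam**2 - q2)) ** n

m_pi = 138.0   # pion mass in MeV (our choice of value)
q2 = -5.0e5    # sample spacelike point, q^2 = -0.5 GeV^2

# monopole (n=1) suppression for soft, intermediate, and hard cut-offs
for lam in (300.0, 800.0, 1700.0):
    print(lam, form_factor(q2, m_pi, lam, 1))
```

A hard cut-off leaves the vertex close to pointlike at this momentum transfer, while a soft one suppresses it by an order of magnitude; for equal cut-offs the dipole ($n=2$) falls off faster than the monopole.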
In Fig.\ref{hos_pot},
we compare the half-off shell scattering kernels $V$ derived
in TOPT and in the BbS reduction for a
center-of-momentum energy $\sqrt{s}=1.2$ GeV.
When the off-shell momentum P is equal to the incoming on-shell
momentum, the potentials $V$ evaluated in the BbS reduction and
in TOPT are identical (see the arrow).
In the BbS reduction,
the zeroth component of the momentum vectors is not well-defined
for off-shell scattering. In Fig.\ref{hos_pot}
we have chosen on-energy shell
components, following Ref. \cite{Janssen93}. For the $A_1$ exchange
(both in the s- and in the u-channel),
both formalisms give fairly similar results. For the rho-exchange
in the t-channel, the $S^{11}$ partial wave shows large differences:
while TOPT predicts an attractive half-off shell matrix element,
the BbS reduction becomes repulsive for momenta larger than 650 MeV.
\begin{figure}\begin{center}
\parbox{8cm}{\psfig{file=fig7.eps,width=8cm}}
\end{center}
\caption{Different contributions to the
half off-shell potentials for $\pi\rho$
scattering at $\sqrt{s}=1.2$ GeV for various partial waves as functions
of the off-shell momentum P. The arrow indicates the on-shell momentum
P corresponding to $\sqrt{s}=1.2$ GeV. The solid line represents
the scattering kernel $V$ of TOPT, while the dashed line refers to the
BbS reduction. The dotted line shows the TOPT result omitting the
contact terms. The panels show contributions of specific diagrams
of Fig.\ref{rhopi}; upper left: $A_1$ pole diagram, middle left:
$\pi$ pole diagram, lower left: $\rho-$exchange in the t-channel,
upper right: $A_1$ u-channel exchange, middle right: $\omega$
pole diagram, lower right: $\omega$ u-channel exchange.
}\label{hos_pot}
\end{figure}
The singularities in the scattering kernel $V$ due to unitarity
cuts are handled by choosing an appropriate path in the complex
momentum plane. The scattering equation, after partial wave
decomposition, reads explicitly:
\begin{equation}
T(p,p') = V(p,p') +
\int dk\, k^2\, V(p,k)\, G^{TOPT}(E;k)\, T(k,p')
\end{equation}
with
\begin{eqnarray}
k=|\vec{k}| e^{-i\Phi},
\end{eqnarray}
where $\Phi$ is a suitably chosen angle \cite{hether}.
$G^{TOPT}(E; k)$ denotes the two-body propagator of time-ordered
perturbation theory.
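The partial-wave equation above can be solved numerically by the standard Nystrom method: discretize the momentum integral on a quadrature grid and invert the resulting linear system. The sketch below is a toy illustration, not the TOPT kernel of this model: it assumes a made-up rank-one separable potential and a schematic propagator at an energy below threshold, where the kernel has no unitarity cut and no contour rotation is needed; the separable case has a closed-form solution against which the solver is checked.

```python
import numpy as np

# Nystrom solver for T(p,p') = V(p,p') + \int dk k^2 V(p,k) G(E;k) T(k,p').
# The rank-one potential and the schematic propagator are illustrative
# assumptions only; below threshold G is regular, so no contour rotation.

def solve_T(V, G, kgrid, wgrid):
    """Discretize the integral on the grid and solve (1 - V.G) T = V."""
    Vmat = np.array([[V(p, k) for k in kgrid] for p in kgrid])
    Gvec = np.array([G(k) for k in kgrid])
    K = Vmat * (wgrid * kgrid**2 * Gvec)[None, :]
    return np.linalg.solve(np.eye(len(kgrid)) - K, Vmat)

# Gauss-Legendre quadrature mapped to (0, kmax)
kmax = 10.0
x, w = np.polynomial.legendre.leggauss(48)
kgrid = 0.5 * kmax * (x + 1.0)
wgrid = 0.5 * kmax * w

g = lambda k: 1.0 / (k**2 + 1.0)    # assumed form factor of the toy potential
lam, E = -2.0, -1.0                 # attractive strength; energy below threshold

T = solve_T(lambda p, k: lam * g(p) * g(k),
            lambda k: 1.0 / (E - k**2), kgrid, wgrid)

# The separable case is solvable in closed form:
# T(p,p') = lam g(p) g(p') / (1 - lam I),  I = \int dk k^2 g(k)^2 G(E;k).
I = np.sum(wgrid * kgrid**2 * g(kgrid)**2 / (E - kgrid**2))
T_exact = lam * np.outer(g(kgrid), g(kgrid)) / (1.0 - lam * I)
assert np.max(np.abs(T - T_exact)) < 1e-8
```

For the kernels of the text one would additionally rotate the grid into the complex plane, $k = |\vec{k}|\,e^{-i\Phi}$, exactly as in the equation above.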
The resulting $T$-matrix is shown in Fig.\ref{E1200}
for $\sqrt{s}=1.2$ GeV and $k=153$ MeV for
the total angular momentum $J=0$. The potential $V$ for pion-rho
scattering is attractive (upper panel).
Iterating the $\pi\rho$ diagrams by
themselves, the attraction is enhanced. The inclusion of the
$\pi\sigma$ channel enhances the attraction even more.
This effect is due to the off-shell transition potential
(middle and lower panel) which is larger than the diagonal $\pi\rho$
potential. The magnitude of the transition potential
can be traced back to the interaction Lagrangian
(see Fig.\ref{hos_pot}).
The derivative
coupling favours large momentum transfers.
The enhancement of the $\pi \rho - \pi \rho $ scattering matrix
$T$ due to these coupled channel effects will shift the maximum
of the spectral function to lower energies and thus lead to a
softer form factor.
\begin{figure}\begin{center}
\parbox{8cm}{\psfig{file=fig8.epsi,width=8cm}}
\end{center}
\caption{The full off-shell $T$-matrices for
the transitions $\pi\rho\rightarrow\pi\rho$
(upper panels), $\pi\rho\rightarrow\sigma\pi$ (central panels), and
$\sigma\pi\rightarrow\pi\rho$ (lower panels) are shown for $E_{CM}=1200$MeV
and $k^{in}_{CM}=153 $ MeV(solid lines) as functions of the
real part of the complex
off-shell momentum P.
The $T$-matrix for the transition $\pi\rho\rightarrow\pi\rho$ obtained without
coupling to the
$\sigma\pi$ channel is given by the dashed line.
For comparison, also the corresponding scattering kernels $V$ are
displayed (dotted lines). The real parts of the $T$-matrices are shown
on the left hand side, the imaginary parts on the right hand side.
}\label{E1200}
\end{figure}
\section{The Pion--Nucleon Form Factor}
The present model for the $\pi NN$ vertex function F is shown in
Fig. \ref{Gam_piNN}.
\begin{figure}\begin{center}
\parbox{8cm}{\psfig{file=fig9.ps,width=8cm}}
\end{center}
\caption{The $\pi NN$ vertex function}\label{Gam_piNN}
\end{figure}
Before coupling to the nucleon, the pion can disintegrate into
three-pion states which are summarized by both pion-sigma and
pion-rho pairs. We first evaluate the vertex function in the
$N \bar N $ channel. In this channel, the $\pi\rho$ and
$\pi\sigma $ interactions can be summed. After a decomposition
into partial waves, one gets:
\begin{eqnarray}
F_{N\bar{N}\rightarrow\pi} = F^{\,0}_{N\bar{N}\rightarrow\pi}
+ \sum_{n=\rho,\,\sigma}\int dk\,k^2
\nonumber \\
\times f_{\pi\leftarrow\pi n}(t,k) G_{\pi n}(t,k) V_{\pi
n \leftarrow N\bar{N}}(t;k,p_0)\;.
\end{eqnarray}
Here, $\rm p_0$ is the subthreshold
{\it{on--shell}} momentum of the $\rm
N\bar{N}\,$--System \cite{braat72}.
The bare vertex is called
$F^{\,0}_{N\bar{N}\rightarrow\pi}$.
We have to include the self energies $\Sigma_\rho$ and
$\Sigma_\sigma$ of both the $\rho$ and the $\sigma$
into the two-particle propagators $\rm G_{\pi\rho}$ and
$\rm G_{\pi\sigma}$
because the vertex function is needed in the time-like region.
The annihilation potential
$\rm V_{\pi n \leftarrow
N\bar{N}}$ has been worked out in Ref. \cite{janssen4}.
The form factor needed for the $N\bar{N}\rightarrow\pi\rho(\sigma)$
transition has not been determined self-consistently, but taken
from Ref. \cite{janssen4}.
The dressed
meson--meson $\rightarrow$ pion vertex function
$ f_{\pi\leftarrow\pi n}$ is given by
\begin{eqnarray}
f_{\pi\leftarrow\pi n}(t,k) = f^{\,0}_{\pi\leftarrow\pi n}(t,k)
+ \sum_{m=\rho,\,\sigma} \int dk' {k'}^2
\nonumber
\\
\times f^{\,0}_{\pi\leftarrow\pi
m}(t,k') G_{\pi m}(t,k') T_{\pi m\leftarrow \pi n}(t;k',k)\;.
\end{eqnarray}
The bare vertex function is called $ f^{\,0}$.
The vertex function $f$ requires the off-shell elements of the $T$-matrix
for meson-meson scattering
$ T_{\pi m\rightarrow \pi n}$
discussed in the previous chapter.
Only the partial wave with total angular momentum
$\rm J^\pi=0^-$
of the
$ N\bar{N}\rightarrow\pi$ vertex function is needed.
The form factor
$\rm \Gamma(t)$ is defined as
\begin{eqnarray}
\Gamma(t) & = &
\frac{F_{N\bar{N}\rightarrow\pi}}{F^{\,0}_{N\bar{N}\rightarrow\pi}}\;.
\end{eqnarray}
Now we rely on dispersion theory to obtain the form factor
$\rm \Gamma(t)$ for space-like momentum transfers t
\cite{braat72,durso77}.
We employ a subtracted dispersion relation
\begin{eqnarray}
\Gamma(t) & = & 1+\frac{t-m_{\pi}^2}{\pi} \int\limits_{9m_{\pi}^2}^{\infty}
\frac{Im\Gamma(t')dt'}{(t'-t)(t'-m_{\pi}^2)}\;.\label{subintegral}
\end{eqnarray}
The subtraction constant ensures that the form factor is normalized
to unity for $t=m_{\pi}^2$.
The integration is cut off at
$ t=4\,m_N^{\,2}$. Larger values of $t$ would require incorporating
diagrams with cuts at larger energies. Such processes
are suppressed because of the improved rate of convergence
of the subtracted dispersion relation.
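As a numerical sketch, the subtracted dispersion integral can be evaluated for space-like $t$ by ordinary quadrature, since the integrand is then regular. The Breit-Wigner-shaped $Im\,\Gamma$ below is a made-up stand-in for the computed spectral function, with its peak placed near $1.2$ GeV; all numbers are illustrative only.

```python
import math

# Gamma(t) = 1 + (t - mpi^2)/pi * int_{9 mpi^2}^{4 m_N^2} Im Gamma(t') dt'
#                                   / ((t' - t)(t' - mpi^2)),   t < 0,
# with a toy Breit-Wigner spectral function (units: GeV^2).

mpi2 = 0.0195                # m_pi^2
t_max = 4.0 * 0.938**2       # cutoff t' = 4 m_N^2

def im_gamma(tp):
    m2, w = 1.44, 0.35       # toy peak near t' ~ (1.2 GeV)^2
    return 0.6 * w * m2 / ((tp - m2)**2 + w**2 * m2)

def gamma(t, n=4000):
    lo, s = 9.0 * mpi2, 0.0
    h = (t_max - lo) / n
    for i in range(n):       # midpoint rule; integrand regular for t < 9 mpi^2
        tp = lo + (i + 0.5) * h
        s += im_gamma(tp) / ((tp - t) * (tp - mpi2)) * h
    return 1.0 + (t - mpi2) / math.pi * s

print(gamma(0.0), gamma(-0.5), gamma(-1.0))
```

The subtraction guarantees $\Gamma(m_\pi^2) = 1$ exactly, and the toy form factor falls off monotonically for space-like momentum transfers.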
The imaginary part of the form factor is shown in the left part of
Fig.\ref{spek_Gam}.
\begin{figure}\begin{center}
\parbox{8cm}{\psfig{file=fig10.epsi,width=8cm}}
\end{center}
\caption{Real (right panel) and imaginary (left panel) parts
of the $\pi NN$ form factor as functions of the momentum transfer t.
Solid line: coupled $\pi\rho$ and $\pi\sigma$ channels;
dotted line: only the $\pi\rho$ channel is considered;
dashed line: rescattering in the $\pi\rho$ channel is omitted;
dashed-dotted line: only the $\pi\sigma$ channel is included.
The short-dashed line in the right panel refers to a monopole
form factor with $\Lambda^{(1)}=800$ MeV.
}\label{spek_Gam}
\end{figure}
We confirm earlier findings \cite{durso77,dillig}
that the pion-sigma states by themselves,
even if iterated, do not generate an appreciable contribution
to the spectral function. The $\pi \rho$ intermediate states clearly
dominate. Including rescattering processes in a model with
only $\pi\rho$ states, one finds a large shift of the spectral
function to smaller energies, which emphasizes the importance of the
correlations. The new aspect of our work is the large shift induced
by the coupling between $\pi \rho $ and $ \pi \sigma $ states.
In Ref. \cite{Hol} Holinde and Thomas introduced an effective $\pi'$
exchange contribution to the One-Boson Exchange $NN$ interaction in a
phenomenological way in order to shift part of the tensor force from
the $\pi$ into the $\pi'$ exchange. This allowed them to use a rather
soft cut-off $\Lambda^{(1)}_{\pi NN}=800 MeV$ to describe the $NN$
phase shifts. The maximum of the spectral function shown in Fig.
\ref{spek_Gam} is located at 1.2 GeV ($t \approx 75 m_{\pi}^2$) which
coincides with the mass of the $\pi'$. Thus the $\pi'$ used in Ref.
\cite{Hol} can be interpreted in terms of a correlated coupled
$\pi\rho$ and $\pi\sigma$ exchange. Note that the form factor
derived here must not be used in Meson-Exchange models of the
nucleon-nucleon interaction, such as discussed in Ref. \cite{MHE}, but
only in models which include the exchange of {\it correlated}
$\pi\rho$ and $\pi\sigma$ pairs.
The form factors obtained via the dispersion relation are shown
in the right part of Fig.\ref{spek_Gam}.
The numerical results can be parameterized by a monopole form
factor.
The inclusion of an uncorrelated $\pi \rho$ exchange leads to a
relatively hard form factor of
$\Lambda^{(1)}_{\pi N N} = 2100$ MeV.
In the present model, using $\pi\rho$ intermediate states only, the
cut-off momentum is reduced to
$\Lambda^{(1)}_{\pi N N} = 1500$ MeV.
If one treats the singularities of the scattering kernel $V$
by approximating the propagator of the virtual pion by a static
one (see the u-channel exchange in Fig. \ref{rhopi}), the
resulting $\pi\rho$ interaction becomes more attractive and
produces a much softer form factor corresponding to
$\Lambda^{(1)}_{\pi N N} = 1000$ MeV \cite{Janssen93}, employing
only $\pi\rho$ intermediate states.
The present model, including both $\pi\rho$ and $\pi\sigma$
intermediate states,
leads to $\Lambda^{(1)}_{\pi N N} = 800$ MeV.
\section{Summary}
Microscopic models of the $\pi NN$ vertex function are required
in order to understand why the phenomenological form factors employed
in models of the two-nucleon interaction are harder than those
obtained from other sources. In Ref. \cite{Janssen93}, a
meson-theoretic model for the $\pi NN$ vertex has been developed.
The inclusion of correlated $\pi\rho$ states gave a form factor
corresponding to $\Lambda^{(1)}_{\pi N N} = 1000$ MeV.
This is still harder than the phenomenological form factors
required in the description of many other physical processes.
Within
the framework of Ref. \cite{Janssen93}, a further
reduction of the cut-off $\Lambda^{(1)}_{\pi N N}$
is impossible. Correlated $\pi\sigma$ states were not considered
in Ref. \cite{Janssen93} because of the results obtained in
Refs. \cite{durso77,dillig}. In the present work we have shown
that the findings of Refs. \cite{durso77,dillig} have to be
revised. Meson-theoretic analyses of $\pi N$ scattering
strongly suggest a derivative $\sigma\pi\pi$ coupling.
This is shown to enhance the off-shell coupling between
$\pi\rho$ and $\pi\sigma$ intermediate states in the dispersion
model for the $\pi NN$ form factor. A softening of the
$\pi NN$ form factor is obtained which largely removes the
remaining discrepancies between the phenomenological
form factors.
{\bf Acknowledgments}
C.H. acknowledges the financial support through a Feodor-Lynen
Fellowship of the Alexander-von-Humboldt Foundation. This work was
supported in part by the U.S. Department of Energy under Grant No.
DE-FG03-97ER41014.
\section{Definitions. Notations. Previous results. Statement of the problem.}
\vspace{4mm}
\ Let $ \ (X, \cal{B}, {\bf \mu}) \ $ and $ \ (Y, \cal{C}, {\bf \nu} ) \ $
be two non-trivial probability spaces: $ \ \mu(X) = \nu(Y) = 1 . \ $ We will denote by $ \ |g|_p = |g|L(p) \ $ the ordinary
Lebesgue-Riesz $ \ L(p) \ $ norm of an arbitrary measurable numerical-valued function $ \ g: X \to R: \ $
$$
|g|_p = |g| L(p) = |g| L_p(X, \mu) := \left[ \int_X |g(x)|^p \ \mu(dx) \right]^{1/p}, \ p \in [1, \infty)
$$
analogously for the (also measurable) function $ \ h: Y \to R \ $
$$
|h|_p = |h|L(p) = |h|L_p(Y,\nu) := \left[ \int_Y |h(y)|^p \ \nu(dy) \right]^{1/p};
$$
and for an arbitrary integrable function {\it of two variables} $ \ f: X \otimes Y \to R \ $
$$
|f|_p = |f|L(p) = |f|L_p(X,Y):= \left[ \int_X \int_Y |f(x,y)|^p \ \mu(dx) \ \nu(dy) \right]^{1/p}, \ p \in [1, \infty).
$$
\vspace{4mm}
\ Let $ \ Z_+ = \{ 1,2, 3, \ldots \} \ $ and denote $ \ Z^2_+ = Z_+ \otimes Z_+, \ Z^d_+ = \otimes_{k=1}^d Z_+. \ $
Let also $ \ \{\xi(i) \} \ $ and $ \ \{\eta(j)\},
\ i,j = 1,2,\ldots, \ \xi := \xi(1), \ \eta := \eta(1), \ $ be independent random variables defined on a certain probability space
$ \ (\Omega, \cal{M}, {\bf P}) \ $ with distributions correspondingly $ \ \mu \ $ and $ \ \nu: \ $
$$
{\bf P}(\xi(i) \in A) = \mu(A), \ A \in \cal{B};
$$
$$
{\bf P}(\eta(j) \in F) = \nu(F), \ F \in \cal{C}, \eqno(1.0)
$$
so that
$$
{\bf E} |g(\xi)|^p = |g|_p^p = \int_X |g(x)|^p \mu(dx), \ {\bf E} |h(\eta)|_p^p = |h|^p_p = \int_Y |h(y)|^p \ \nu(dy),
$$
and
$$
{\bf E} |f(\xi, \eta)|^p = |f|^p_p = \int \int_{X\otimes Y} |f(x,y)|^p \ \mu(dx) \ \nu(dy).
$$
\ Let also $ \ L \ $ be an arbitrary non-empty {\it finite} subset of the set $ \ Z^2_+; \ $ denote by $ \ |L| \ $ the number of its
elements (its cardinality): $ \ |L| := \card(L). \ $ It is reasonable to suppose in what follows that $ \ |L| \ge 1. \ $ \par
\ Define for any {\it integrable} function $ \ f: X \otimes Y \to R, \ $ i.e. for which
$$
|f|_1 = {\bf E}|f(\xi, \eta)| = \int_X \int_Y |f(x,y)| \ \mu(dx) \ \nu(dy) < \infty,
$$
the following normalized sum
$$
S_L[f] \stackrel{def}{=} |L|^{-1} \sum_{(k(1), k(2)) \in L} f(\xi(k(1), \eta(k(2)), \eqno(1.1)
$$
which is a slight generalization of the classical $ \ U \ $ and $ \ V \ $ statistics, see the classical monograph of
Korolyuk V.S. and Borovskikh Yu.V. [18]. The report offered here is a direct generalization of the recent article [30], but we
apply other methods here.\par
\ The {\it reasonableness} of this norming factor $ \ |L|^{-1} \ $
follows from the fact that in general, i.e. in the non-degenerate case, $ \ \Var(S_L) \asymp 1, \ |L| \ge 1. \ $ This proposition
holds true also in the multidimensional case. \par
\ Our notations and some previous results are borrowed from the works of S.Klesov [14] - [17], see also [18]. \par
\vspace{4mm}
\ {\it We will suppose in what follows that the function } $ \ f = f(x,y), \ x \in X, \ y \in Y, \ $ {\it as well as both the functions} $ \ g, \ h, \ $ {\it are non-negative (a.e.). }
\vspace{4mm}
\ The so-called {\it centered case}, i.e. when
$$
{\bf E} f(\xi, \eta) = \int_X \int_Y f(x,y) \ \mu(dx) \ \nu(dy) = 0,
$$
was investigated in many works, see e.g. the classical monograph of Korolyuk V.S and Borovskikh Yu.V. [18]; see also [14]-[17], [26]-[27], [28]
etc. The one-dimensional case was studied in the recent preprint [30]. \par
\vspace{4mm}
\ {\bf Our aim in this report is to derive moment and exponential bounds for the tail of the distribution of the normalized sums (1.1) of
multi-indexed independent non-negative random variables. } \par
\vspace{4mm}
\ The results offered here generalize many of those obtained by S.Klesov in the articles [14]-[17], see also [18], [26]-[27],
where in particular the CLT for sums of centered multi-indexed variables was obtained. \par
\vspace{4mm}
\ The multidimensional case, i.e. when $ \ \vec{k} \in Z_+^d, \ $ will be considered further. \par
\vspace{4mm}
\ The paper is organized as follows. In the second section we describe and investigate the notion of
the degenerate functions and approximation. In the next sections we obtain one of the main results:
the moment estimates for the multi-index sums with non-negative kernel in the two-dimensional case. \par
\ The so-called non-rectangular case is considered in the $\ 4^{th} \ $ section. The fifth section is devoted
to the multivariate case. In the following section we obtain the exponential bounds for distribution of positive multiple sums.\par
\ We show in the seventh section the upper bounds for these statistics. The last section contains, as usual, concluding remarks. \par
\vspace{4mm}
\ We reproduce here for the reader's convenience some results concerning the one-dimensional case, see the article [30]. \par
\vspace{4mm}
\ Denote for the r.v.s $ \ \eta_j, \ j = 1,2,\ldots \ $ the $ \ k^{th} \ $ absolute moment by $ \ m_j(k): \ $
$$
m_j(k) := {\bf E} |\eta_j|^k, \ k \ge 1;
$$
so that
$$
|\eta_j|_k = \left[ \ m_j(k) \right]^{1/k}.
$$
\ Applying the triangle inequality for the $ \ L(p, \Omega) \ $ norm, we deduce the very simple estimate
$$
|\sum_{j=1}^n \eta_j|_p \le \sum_{j=1}^n \left[ \ m_j(p) \ \right]^{1/p},
$$
where we do not even suppose that the r.v. $ \ \eta_j \ $ are non-negative and independent. \par
\ In order to describe finer estimates, we need to introduce some notation. \par
\vspace{4mm}
\ Let us define the so-called {\it Bell's function} of two variables as follows.
$$
B(p,\beta) \stackrel{def}{=} e^{-\beta} \sum_{k=0}^{\infty} \frac{k^p \ \beta^k}{k!}, \ p \ge 2, \ \beta > 0,
$$
and put $ \ B(p) = B(p,1), \ $ so that
$$
B(p) \stackrel{def}{=} e^{-1} \sum_{k=0}^{\infty} \frac{k^p}{k!}, \ p \ge 0.
$$
\ The numbers $ \ B(0) = 1, B(1), B(2), B(3), B(4),\ldots \ $ are called the Bell numbers; they appear in combinatorics, probability theory, etc., see [30], [25].
\vspace{4mm}
\ Let the random variable (r.v.) $ \ \tau = \tau[\beta], \ $ defined on a certain probability space $ \ (\Omega, F,{\bf P}) \ $ with expectation $ \ {\bf E}, \ $
have a Poisson distribution with parameter $ \ \beta, \ \beta > 0; \ $ write $ \ \Law(\tau) = \Law \tau[\beta] = \Poisson(\beta): \ $
$$
{\bf P}(\tau = k) = e^{-\beta} \frac{\beta^k}{k!}, \ k = 0,1,2,\ldots,
$$
\ It is worth noting that
$$
B(p,\beta) = {\bf E} (\tau[\beta])^p, \ p \ge 0.\eqno(1.2)
$$
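The identity (1.2) suggests an elementary way to evaluate the Bell function numerically: truncate the defining series well past the Poisson mean, where the tail is negligible. A short sketch (the truncation point is an ad-hoc choice suitable for moderate $ \ p \ $ and $ \ \beta \ $):

```python
import math

# B(p, beta) = e^{-beta} sum_{k>=1} k^p beta^k / k!   (the k = 0 term
# vanishes for p > 0); the term is updated recursively to avoid
# evaluating huge factorials.

def bell(p, beta=1.0, kmax=200):
    term = math.exp(-beta)        # e^{-beta} beta^k / k! at k = 0
    total = 0.0
    for k in range(1, kmax + 1):
        term *= beta / k
        total += k**p * term
    return total

# for beta = 1 and integer p these are the Bell numbers 1, 2, 5, 15, 52, 203
print([round(bell(p), 6) for p in range(1, 7)])
```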
\ In detail, let $ \ \eta_j, \ j = 1,2,\ldots \ $ be a sequence of non-negative independent random variables (r.v.); the case of centered or, moreover, symmetrically distributed r.v.
was considered in many works, see e.g. [9], [14]-[17], [26]-[27], [30] and so on. \par
\ The following inequality holds true
$$
{\bf E} \left( \sum_{j=1}^n \eta_j \right)^p \le B(p) \ \max \left\{ \ \sum_{j=1}^n {\bf E} \eta_j^p, \ \left( \ \sum_{j=1}^n {\bf E} \eta_j \ \right)^p \ \right\}, \ p \ge 2, \eqno(1.3)
$$
where the "constant" $ \ B(p) \ $ in (1.3) is the best possible, see [30]. \par
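Inequality (1.3) can be illustrated without Monte Carlo: for i.i.d. $ \eta_j \sim \mathrm{Exp}(1) $ (an illustrative choice, not taken from the text) the sum is $\mathrm{Gamma}(n,1)$-distributed, so the left-hand side equals $\Gamma(n+p)/\Gamma(n)$ exactly, while $ {\bf E}\eta_j^p = \Gamma(p+1) $ and $ {\bf E}\eta_j = 1 $:

```python
import math

# check E (sum eta_j)^p <= B(p) max( sum E eta_j^p, (sum E eta_j)^p )
# for eta_j ~ Exp(1): LHS = Gamma(n+p)/Gamma(n), E eta^p = Gamma(p+1).

def bell(p, beta=1.0, kmax=200):
    term, total = math.exp(-beta), 0.0
    for k in range(1, kmax + 1):
        term *= beta / k              # recursive update of beta^k / k!
        total += k**p * term
    return total

def check(n, p):
    lhs = math.gamma(n + p) / math.gamma(n)
    rhs = bell(p) * max(n * math.gamma(p + 1), float(n)**p)
    return lhs, rhs

for n in (1, 3, 10, 100):
    lhs, rhs = check(n, 4)
    assert lhs <= rhs
    print(n, lhs, rhs)
```

Both branches of the maximum are active in this example: for small $n$ the term $ n \, {\bf E}\eta^p $ dominates, for large $n$ the term $ n^p $.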
\ One of the interesting applications of these estimates in statistics, more precisely, in the theory of $ \ U \ $ statistics, may be found in the articles [9], [18]. \par
\ Another application. Let $ \ n = 1,2,3,\ldots; \ a, b = \const > 0; \ p \ge 2, \ \mu = \mu(a,b;p) := a^{p/(p-1)} \ b^{1/(1-p)} . \ $
Define the following class of the sequences of an independent non - negative random variables
$$
Z(a,b) \stackrel{def}{=} \left\{ \eta_j, \ \eta_j \ge 0, \ \sum_{j=1}^n {\bf E}\eta_j = a; \ \sum_{j=1}^n {\bf E}\eta_j^p = b \ \right\}. \eqno(1.4)
$$
G.Schechtman proved that
$$
\sup_{ \ n = 1,2,\ldots; \ \{\eta_j \} \in Z(a,b) \ } {\bf E} \left( \ \sum_{j=1}^n \eta_j \ \right)^p = \left( \frac{b}{a} \right)^{p/(p-1)} \ B(p, \mu(a,b;p)). \eqno(1.5)
$$
\vspace{4mm}
\ The Bell function introduced above admits in turn a simple estimate, see [30], which may be useful in practice.
Indeed, let us introduce the following auxiliary function
$$
g_{\beta}(p) \stackrel{def}{=} \frac{p}{e} \cdot
\inf_{\lambda > 0} \left\{ \lambda^{-1} \ \left[ \exp \left( \ \beta \left(e^{\lambda} - 1 \right) \ \right) \ \right]^{1/p} \right\}, \ \beta, p > 0. \eqno(1.6)
$$
\ It is proved in particular in [30] that
$$
B^{1/p}(p, \beta) \le g_{\beta}(p), \ p,\beta > 0. \eqno(1.7)
$$
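A quick numerical check of the bound (1.7), with the infimum in (1.6) approximated by a crude grid search over $\lambda$; here the factor $\lambda^{-1}$ is kept outside the $1/p$-th power, which is the form the asymptotic bound (1.13) corresponds to. Grid and series truncation are ad-hoc choices:

```python
import math

def bell(p, beta, kmax=400):
    term, total = math.exp(-beta), 0.0
    for k in range(1, kmax + 1):
        term *= beta / k              # recursive update of beta^k / k!
        total += k**p * term
    return total

def g(beta, p):
    # (p/e) * inf_lambda  lambda^{-1} exp( beta (e^lambda - 1) / p )
    return (p / math.e) * min(
        math.exp(beta * (math.exp(lam) - 1.0) / p) / lam
        for lam in (0.01 * j for j in range(1, 500)))

for p, beta in ((3, 1.0), (6, 1.0), (4, 2.5)):
    b = bell(p, beta) ** (1.0 / p)
    assert b <= g(beta, p)
    print(p, beta, round(b, 4), round(g(beta, p), 4))
```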
\vspace{4mm}
\ Let us introduce also the following function
$$
h_0(p, \beta) \stackrel{def}{=} \sup_{k=1,2,\ldots} e^{-\beta} \left\{ \ \frac{k^p \ \beta^k}{k!} \ \right\}; \eqno(1.8)
$$
therefore
$$
B(p,\beta) \ge h_0(p, \beta), \ p, \ \beta > 0. \eqno(1.9)
$$
\vspace{4mm}
\ The last estimate may in turn be simplified as follows. We will apply the following version of Stirling's formula
$$
k! \le \zeta(k), \ \ k = 1,2,\ldots,
$$
where
$$
\zeta(k) \stackrel{def}{=} \sqrt{2 \pi k} \ \left( \frac{k}{e} \right)^k \ e^{1/(12k)}, \ k = 1,2,\ldots \eqno(1.10)
$$
\ Define a new function
$$
h(p, \beta) \stackrel{def}{=} \sup_{x \in(1, \infty)} \
\left\{ e^{ 1/(6 p x )} \cdot \left[ \ \frac{e^{x - \beta} \ x^{p - x - 1/2}}{ \sqrt{2 \ \pi} \ x^x} \ \right]^{1/p} \right\}. \eqno(1.11)
$$
\ In [30] we in fact obtained the following simple lower estimates for the Bell function \par
\vspace{4mm}
$$
B^{1/p} (p, \beta) \ge h_0^{1/p}(p, \ \beta), \ B^{1/p} (p, \beta) \ge h(p, \ \beta), \ p, \ \beta > 0. \eqno(1.12)
$$
\vspace{4mm}
\ These estimates may be in turn simplified as follows. Assume that $ \ p \ge 2 \beta, \ \beta > 0, \ p \ge 1; \ $ then
$$
B^{1/p}(p,\beta) \le \frac{p/e}{\ln (p/\beta) - \ln \ln (p/\beta)} \cdot \exp \left\{ \frac{1}{\ln (p/\beta)} - \frac{1}{p/\beta} \right\}. \eqno(1.13)
$$
\vspace{4mm}
\ For example,
$$
B^{1/p}(p) \le \frac{p/e}{\ln p - \ln \ln p} \cdot \exp \left\{ 1/\ln p - 1/p \right\}, \ p \ge 2. \eqno(1.14)
$$
\vspace{4mm}
\ The estimate (1.13) may be simplified as follows
$$
B^{1/p}(p,\beta) \le \frac{p}{e \ \ln (p/\beta)} \cdot \left[ 1 + C_1(\beta) \cdot \ \frac{\ln \ln (p/\beta)}{\ln (p/\beta)} \ \right], \eqno(1.15)
$$
where $ \ C_1(\beta) = \const \in (0,\infty), $ and we recall that $ \ p \ge 1, \ p \ge 2\beta. \ $ \par
\vspace{4mm}
\ The lower estimate for the Bell function has the form
$$
\ B^{1/p}(p,\beta) \ge \
$$
$$
\beta^{1/\ln(pe/\beta)} \cdot \frac{p}{\ln (pe/\beta)} \cdot \left\{ \exp \left[ \ \frac{\ln p - \ln(pe)/\beta}{\ln[(pe)/\beta]} \ \right] \ \right\}^{-1},
$$
$$
\ p,\beta > 0, p/\beta \ge 2.
$$
\ It may be simplified as follows
$$
B^{1/p}(p,\beta) \ge \frac{p}{e \ \ln (p/\beta)} \cdot \left[ 1 - C_2(\beta) \cdot \ \frac{\ln \ln (p/\beta)}{\ln (p/\beta)} \ \right], \eqno(1.17)
$$
where
$ \ C_2(\beta) = \const \in (0,\infty), $ and we recall that $ \ p \ge 2\beta.\ $
\vspace{4mm}
\ We suppose hereafter that both the variables $ \ p \ $ and $ \ \beta \ $ are independent but such that
$$
p \ge 1, \ \beta > 0, \ p/\beta \le 2. \eqno(1.18)
$$
\ It is known [30] that in this case there exist {\it two absolute positive constructive finite constants} $ \ C_3, \ C_4, \ C_3 \le C_4 \ $ such that
$$
C_3 \ \beta \le B^{1/p} (p, \beta) \le C_4 \ \beta. \eqno(1.19)
$$
\vspace{4mm}
\begin{center}
{\it To summarize.}
\end{center}
\ Define the following infinite-dimensional random vector
$$
\eta = \vec{\eta} = \{ \eta_1, \eta_2, \eta_3, \ldots \}. \eqno(1.20)
$$
\ Recall that here the r.v. $ \ \{ \eta_j \} \ $ are non-negative and independent. One can suppose also that $ \ m_j(p) = |\eta_j|_p^p < \infty \ $ for some value $ \ p \in [2, \infty). \ $ \par
\ Put also
$$
Z_{p,n} = Z_{p,n}[\eta] := n^{-1} \sum_{j=1}^n \left[ \ m_j(p) \ \right]^{1/p},
$$
$$
V_{p,n} = V_{p,n}[\eta] := n^{-1} \ \cdot B^{1/p}(p) \cdot \ \max \left\{ \left( \sum_{j=1}^n m_j(p) \right)^{1/p}, \ \left( \ \sum_{j=1}^n m_j(1) \right) \ \right\},
$$
and ultimately
$$
\Theta_{p,n} = \Theta_{p,n}[\eta] :=\min \{ Z_{p,n}[\eta], V_{p,n}[\eta] \}, \eqno(1.21)
$$
$$
\Theta_{p} = \Theta_p[\eta] := \sup_{n = 1,2,3,\ldots} \Theta_{p,n}, \eqno(1.22)
$$
with described above correspondent upper estimations for the function $ \ B^{1/p}(p). \ $\par
\vspace{4mm}
{\bf Proposition 1.1.} Under the formulated conditions we deduce
$$
\left| n^{-1} \ \sum_{j=1}^n \eta_j \right|_p \le \Theta_{p,n}[\eta]. \eqno(1.23)
$$
\ As a slight consequence
$$
\sup_{n=1,2,3,\ldots} \left| n^{-1} \ \sum_{j=1}^n \eta_j \right|_p \le \Theta_{p}[\eta]. \eqno(1.23a)
$$
\vspace{4mm}
{\bf Remark 1.1.} Of course, in the estimate (1.21) and in the subsequent relations one can use in practice, instead of the variable $ \ B^{1/p}(p), \ $ its
upper bounds from the inequality (1.14). \par
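Proposition 1.1 can be checked in closed form for i.i.d. $ \eta_j \sim \mathrm{Exp}(1) $ (again an illustrative choice, not taken from the text): then $ n^{-1}\sum_j \eta_j $ is $\mathrm{Gamma}(n,1)/n$, whose $L_p(\Omega)$ norm is explicit, while $ m_j(p) = \Gamma(p+1) $ and $ m_j(1) = 1 $ in (1.21):

```python
import math

def bell(p, beta=1.0, kmax=200):
    term, total = math.exp(-beta), 0.0
    for k in range(1, kmax + 1):
        term *= beta / k              # recursive update of beta^k / k!
        total += k**p * term
    return total

def lhs(n, p):                        # | n^{-1} sum_j eta_j |_p for Exp(1)
    return (math.gamma(n + p) / math.gamma(n)) ** (1.0 / p) / n

def theta(n, p):                      # Theta_{p,n} of (1.21) for Exp(1)
    mp = math.gamma(p + 1)            # m_j(p);  m_j(1) = 1
    Z = mp ** (1.0 / p)
    V = bell(p) ** (1.0 / p) * max((n * mp) ** (1.0 / p), float(n)) / n
    return min(Z, V)

for n in (1, 2, 10, 100):
    assert lhs(n, 4) <= theta(n, 4) + 1e-9
```

In this example the bound (1.23) is attained at $n = 1$ through the $Z$-branch, while for large $n$ the $V$-branch becomes the smaller one.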
\vspace{4mm}
\section{Degenerate functions and approximation. }
\vspace{4mm}
\ {\bf Definition 2.1} (see [26], [27], [30]). The measurable function $ \ f: X \otimes Y \to R \ $ is said to be {\it degenerate} if it has the form
$$
f(x,y) = \sum_{k(1) = 1}^M \sum_{k(2)=1}^M \lambda_{k(1), k(2)} \ g_{k(1)}(x) \ h_{k(2)}(y), \eqno(2.1)
$$
where $ \ \lambda_{i,j} = \const \in R, \ M = \const = 1,2, \ldots, \infty. \ $ \par
\ One can distinguish two cases in the relation (2.1): the ordinary, or {\it finite degenerate} function, if in (2.1) $ \ M < \infty, \ $ and
the infinite degenerate function otherwise. \par
\ The degenerate functions (and kernels) of the form (2.1) are used, e.g., in the approximation theory, in the theory of random processes and fields,
in the theory of integral equations, in the game theory etc. \par
\ A particular application of this notion may be found in the authors articles [26], [27], [30]. \par
\vspace{4mm}
\ Notation: $ \ M = M[f] \stackrel{def}{=} \deg(f); \ $ of course, by the value $ \ M \ $ one can understand its minimal possible value. \par
\ Two examples. The equality (2.1) holds true if the function $ \ f(\cdot, \cdot) \ $ is a trigonometric or algebraic polynomial. \par
\ A more complicated example: let $ \ X \ $ be a compact metrizable space equipped with a non-trivial Borel probability measure $ \ \mu \ $
such that an arbitrary non-empty open set has positive measure. \par
\ Let also $ \ f(x,y), \ x,y \in X \ $ be a continuous numerical-valued non-negative definite function. One can write the famous Karhunen-Lo\`eve
decomposition
$$
f(x,y) = \sum_{k=1}^M \lambda_k \ \phi_k(x) \ \phi_k(y),\eqno(2.2)
$$
where $ \ \lambda_k, \ \phi_k(x) \ $ are correspondingly the eigenvalues and orthonormal
eigenfunctions of the kernel $ \ f(\cdot, \cdot): \ $
$$
\lambda_k \ \phi_k(x) = \int_X f(x,y) \phi_k(y) \ \mu(dy).
$$
\ We assume without loss of generality
$$
\lambda_1 \ge \lambda_2 \ge \ldots \lambda_k \ge \ldots \ge 0.
$$
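A discrete sketch of the decomposition (2.2): on a uniform grid with the uniform measure, the kernel becomes a symmetric positive semi-definite matrix, and its eigendecomposition plays the role of the expansion. The Gaussian kernel below is an arbitrary illustrative choice, not taken from the text.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
dx = 1.0 / n                                      # uniform measure mu
K = np.exp(-(x[:, None] - x[None, :])**2 / 0.1)   # toy p.s.d. kernel f(x,y)

# eigendecomposition of the discrete integral operator K*dx;
# eigh returns ascending eigenvalues, so reverse the order
w, V = np.linalg.eigh(K * dx)
lam = w[::-1]                                     # lambda_1 >= lambda_2 >= ...
phi = V[:, ::-1] / np.sqrt(dx)                    # orthonormal in L_2(mu)

# truncated expansion sum_{k<=M} lambda_k phi_k(x) phi_k(y) approximates K
M = 10
K_M = (phi[:, :M] * lam[:M]) @ phi[:, :M].T
print(np.max(np.abs(K - K_M)))                    # decays rapidly with M
```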
\vspace{4mm}
\ {\it It will be presumed in this report, where the function $ \ f = f(x,y) \ $ is non-negative,
in addition to the expression (2.1), that all the functions \ } $ \ \{ g_i \}, \ \{ h_j \} \ $
{\it are also non-negative:}
$$
\forall x \in X \ g_i(x) \ge 0, \ \forall y \in Y \ h_j(y) \ge 0. \eqno(2.3)
$$
\ Further, let $ \ B_1, \ B_2, \ B_3, \ \ldots, B_M \ $ be some rearrangement invariant (r.i.) spaces built correspondingly over the
spaces $ \ X, Y; \ Z, W, \ldots, \ $
for instance, $ \ B_1 = L_p(X), \ B_2 = L_q(Y), \ 1 \le p,q \le \infty. \ $ If $ \ f(\cdot) \in B_1 \otimes B_2, \ $ we suppose also
in (2.1) $ \ g_i \in B_1, \ h_j \in B_2; \ $ and if in addition in (2.1) $ \ M = \infty, \ $ we suppose that the series in (2.1)
converges in the norm $ \ B_1 \otimes B_2 \ $
$$
\lim_{m \to \infty} || \ f(\cdot) - \sum_{i = 1}^m \sum_{j=1}^m \lambda_{i,j} \ g_i(\cdot) \ h_j(\cdot) \ ||B_1 \otimes B_2 = 0.
\eqno(2.3b)
$$
\ The condition (2.3b) is satisfied if for example $ \ ||g_i||B_1 = ||h_j||B_2 = 1 \ $ and
$$
\sum \sum_{i,j = 1}^M |\lambda_{i,j}| < \infty, \eqno(2.4)
$$
or more generally when
$$
\sum \sum_{i,j = 1}^M |\lambda_{i,j}| \cdot ||g_i||B_1 \cdot ||h_j||B_2 < \infty. \eqno(2.4a)
$$
\ The function of the form (2.1) with $ \ M = M[f] = \deg (f) < \infty \ $ is called {\it degenerate},
notation $ \ f \in D[M]; \ $ we put also $ \ D := \cup_{M < \infty} D[M]. \ $ Obviously,
$$
B_1 \otimes B_2 = D[\infty].
$$
\ Define also for each such {\it non-negative} function $ \ f \in D \ $ the following quasi-norm, also non-negative:
$$
||f|| D^+(B_1, B_2) \stackrel{def}{=} \inf \left\{ \ \sum \sum_{i,j = 1,2,\ldots,M[f] } |\lambda_{i,j}| \ ||g_i||B_1 \ ||h_j||B_2 \ \right\}, \eqno(2.5)
$$
where all the arrays $ \ \{ \lambda_{i,j} \} , \ \{ g_i\}, \ \{h_j\} \ $ are taken from the representation (2.1); and in addition,
$$
g_i(x) \ge 0, \ h_j(y) \ge 0; \ x \in X, \ y \in Y.
$$
\ We will write for brevity $ \ ||f||D_p := $
$$
||f|| D^+(L_p(X), L_p(Y)) =
\inf \left\{ \ \sum \sum_{i,j = 1,2,\ldots,M[f] } |\lambda_{i,j}| \ |g_i|_p \ |h_j|_p \ \right\}, \eqno(2.5a)
$$
where all the arrays $ \ \{ \lambda_{i,j} \} , \ \{ g_i\}, \ \{h_j\} \ $ are taken from the representation (2.1), of course with non-negative functions $ \ g_i, \ h_j. \ $ \par
\vspace{4mm}
\ Further, let the function $ \ f \in B_1 \otimes B_2 \ $ be given. The error of degenerate approximation of the non-negative function $ \ f: X \otimes Y \to R \ $
by degenerate ones of degree $ \ M, \ $ also with non-negative summands, is introduced as follows
$$
Q^+_M[f](B_1 \otimes B_2) \stackrel{def}{=} \inf_{\tilde{f} \in D^+[M]} ||f - \tilde{f}||B_1 \otimes B_2 =
\min_{\tilde{f} \in D^+[M]} ||f - \tilde{f}||B_1 \otimes B_2. \eqno(2.6)
$$
\ Obviously, $ \ \lim Q^+_M[f] (B_1 \otimes B_2) = 0, \ M \to \infty. \ $\par
\ For brevity:
$$
Q^+_M[f]_p \stackrel{def}{=} Q^+_M[f](L_p(X) \otimes L_p(Y)). \eqno(2.6a)
$$
\vspace{4mm}
\ The function $ \ \tilde{f} \ $ which realizes the minimum in (2.6) (obviously non-negative, and not necessarily unique)
will be denoted by $ \ Z^+_M[f](B_1 \otimes B_2): \ $
$$
Z^+_M[f](B_1 \otimes B_2):= \argmin_{\tilde{f} \in D^+[M]} ||f - \tilde{f}||B_1 \otimes B_2, \eqno(2.7)
$$
so that
$$
Q^+_M[f](B_1 \otimes B_2) = ||f - Z^+_M[f]||(B_1 \otimes B_2). \eqno(2.8)
$$
\ For brevity:
$$
Z^+_M[f]_p := Z^+ _M [f](L_p(X) \otimes L_p(Y)). \eqno(2.9)
$$
\ Let for instance again $ \ f(x,y), \ x,y \in X \ $ be a continuous numerical-valued {\it non-negative definite} function, not necessarily non-negative,
see (2.3). It is easy to calculate
$$
Q_M[f] (L_2(X) \otimes L_2(X)) = \left( \ \sum_{k=M + 1}^{\infty} \lambda_k^2 \ \right)^{1/2}.
$$
\vspace{4mm}
\section{Moment estimates for multi-index sums.}
\vspace{4mm}
\begin{center}
{\bf Two-dimensional case. } \par
\end{center}
\vspace{4mm}
{\bf \ A trivial estimate. } \par
\vspace{4mm}
\ The following simple estimate, based only on the triangle inequality, may be regarded as trivial:
$$
|S_L|(L_p(X) \otimes L_p(Y)) \le \ |f|(L_p(X) \otimes L_p(Y)) = |f|_p, \eqno (3.1)
$$
even without the assumptions of non-negativity of the function $ \ f \ $ and independence of the r.v. $ \ g_k(\xi(i)), \ h_l(\eta(j)). \ $ \par
\vspace{4mm}
{\it Hereafter \ } $ \ p \ge 2. \ $ \par
\vspace{4mm}
\ {\bf The two-dimensional degenerate case. } \par
\vspace{4mm}
\ In this subsection the non-negative kernel function $ \ f = f(x,y) \ $ will be presumed to be degenerate with the minimal possible
degree $ \ M = M[f] = 1, \ $ in other words, {\it factorizable}:
$$
f(x,y) = g(x) \cdot h(y), \ x \in X, \ y \in Y, \eqno(3.2)
$$
of course, with non-negative factors $ \ g,h. \ $ \par
\ Further, we suppose that the set $ \ L \ $ is an integer rectangle:
$$
L = [1,2,\ldots, n(1)] \otimes [1,2, \ldots, n(2)], \ n(1), n(2) \ge 1.
$$
\ Let us consider the corresponding double sum $ \ S_L[f] = S^{(2)}_L[f] := $
$$
|L|^{-1} \sum \sum_{i, j \in L} g(\xi_i) \ h(\eta_j), \eqno(3.3)
$$
where
$$
n = \vec{n} = (n(1), n(2)) \in L, \ n(1), n(2) \ge 1. \eqno(3.3a)
$$
\ We have denoting
$$
\vec{g} = \{ g(\xi(i)) \}, \ i = 1,2, \ldots, n(1); \ \vec{h} = \{ h(\eta(j)) \}, \ j = 1,2,\ldots, n(2):
$$
$$
S_L[f] = \left[ n(1)^{-1} \sum_{i=1}^{n(1)} g(\xi(i)) \right] \cdot \left[ n(2)^{-1} \sum_{j=1}^{n(2)} h(\eta(j)) \right].
$$
\ Since both factors on the right-hand side of the last equality are independent, we deduce,
applying the one-dimensional estimates (1.23), (1.23a):
$$
|S_L|_p \le \Theta_{p, n(1)}[\vec{g}] \cdot \Theta_{p, n(2)}[\vec{h}], \eqno(3.4)
$$
and hence
$$
\sup_{L: |L| \ge 1} |S_L|_p \le \Theta_{p}[\vec{g}] \cdot \Theta_{p}[\vec{h}], \eqno(3.4a)
$$
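The key rearrangement behind (3.4) — for a rectangular index set $L$ and a factorizable kernel, $S_L[f]$ equals the product of the two one-dimensional sample means — can be verified directly on synthetic data (the exponential samples below are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 7, 11
gvals = rng.exponential(size=n1)     # g(xi(i)),  i = 1..n(1)
hvals = rng.exponential(size=n2)     # h(eta(j)), j = 1..n(2)

# S_L[f] = |L|^{-1} sum_{i,j} g(xi(i)) h(eta(j)) over the rectangle L
S_L = np.sum(gvals[:, None] * hvals[None, :]) / (n1 * n2)
assert abs(S_L - gvals.mean() * hvals.mean()) < 1e-12
```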
\vspace{4mm}
{\bf Estimation for an arbitrary degenerate kernel. } \par
\vspace{4mm}
\ In this subsection the function $ \ f(\cdot,\cdot) \ $ is non-negative and degenerate, as well as all the functions $ \ g_{k(1)}(\cdot), \ h_{k(2)}(\cdot): \ $
$$
f(x,y) = \sum \sum_{k(1), k(2) = 1}^M \ \lambda_{k(1), k(2)} \ g_{k(1)}(x) \ h_{k(2)}(y), \eqno(3.5)
$$
where $ \ 1 \le M \le \infty, \ $
$$
g_{k(1)}(\cdot) \in L_p(X), \ h_{k(2)} (\cdot) \in L_p(Y),
$$
and as before $ \ g_{k(1)}(x) \ge 0, \ h_{k(2)}(y) \ge 0. \ $ \par
\vspace{4mm}
\ Denote also by $ \ R = R[f] \ $ the {\it set } of all such functions $ \ \{g\} = \vec{g} \ $ and $ \ \{h\} = \vec{h}, \ $ as well as the sequences of coefficients
$ \ \{\lambda\} = \{ \ \lambda_{k(1), k(2)} \ \} \ $ from the representation (3.5):
$$
R[f] := \{ \{\lambda\}, \ \{g\} = \vec{g}, \ \{h\} = \vec{h} \}:
$$
$$
f(x,y) = \sum \sum_{k(1), k(2) = 1}^M \ \lambda_{k(1), k(2)} \ g_{k(1)}(x) \ h_{k(2)}(y). \eqno(3.5a)
$$
\vspace{4mm}
\ {\it We must impose on the series (3.5) in the case when $ \ M = \infty \ $ the condition of its convergence in the norm of the space} $ \ L_p(X) \otimes L_p(Y). \ $\par
\vspace{4mm}
\ Let us investigate the introduced before statistics
$$
S^{(\lambda)}_{L} = S^{(\lambda)}_L[f] := | L|^{-1} \ \sum \sum_{i,j \in L} f(\xi(i), \eta(j)) =
$$
$$
| L|^{-1} \ \sum \sum_{i,j \in L} \left[ \ \sum \sum_{k(1), k(2) = 1}^M \lambda_{k(1), k(2)} \ g_{k(1)}(\xi(i)) \ h_{k(2)}(\eta(j)) \ \right] =
$$
$$
|L|^{-1} \ \sum \sum_{k(1), k(2) = 1}^M \lambda_{k(1), k(2)} \ \left[ \ \sum \sum_{i,j \in L} g_{k(1)}(\xi_i) \ h_{k(2)}(\eta_j) \ \right]=
$$
$$
\sum \sum_{k(1), k(2) = 1}^M \lambda_{k(1), k(2)} \cdot \left[ \ (n(1))^{-1} \sum_{i=1}^{n(1)} g_{k(1)}(\xi(i)) \ \right] \times
$$
$$
\left[ (n(2))^{-1} \sum_{j=1}^{n(2)} h_{k(2)}(\eta(j)) \right]. \eqno(3.6)
$$
\ Using the triangle inequality and the estimate (3.4), we have
$$
\left| S^{(\lambda)}_L[f] \right|_p \le \sum \sum_{k(1), k(2) = 1}^M | \ \lambda_{k(1), \ k(2)} \ | \ \cdot
\Theta_{p, n(1)} \left[ \vec{g}_{k(1)} \ \right] \cdot \Theta_{p, n(2)} \left[ \ \vec{h}_{k(2)} \ \right]. \eqno(3.7)
$$
\ This estimate remains true in the case when $ \ M = \infty, \ $ if of course the right-hand side of (3.7) is finite; in the opposite case there is nothing to prove. \par
\vspace{4mm}
 \ To summarize, we introduce a new weighted norm on the (numerical) array $ \ \vec{\lambda} = \{ \ \lambda_{k(1), \ k(2)} \}, $ more exactly, a sequence of norms
$$
f \in D(M) \ \Rightarrow |||f||| \Theta_p = |||\vec{\lambda}|||\Theta_p =
$$
$$
|||\vec{\lambda}|||\Theta_p^{(2)} = |||\vec{\lambda}|||\Theta(p;n(1), n(2), \ \{g\}, \ \{h\}) \stackrel{def}{=}
$$
$$
\sum \sum_{k(1), k(2) = 1}^M | \ \lambda_{k(1), \ k(2)} \ | \ \cdot
\ \Theta_{p, n(1)} \left[ \ \vec{g}_{k(1)} \ \right] \cdot \Theta_{p, n(2)} [ \ \vec{h}_{k(2)} \ ].\eqno(3.8)
$$
\vspace{4mm}
{\bf Proposition 3.1.} If $ \ f \in D(M), \ $ then
$$
\left| S^{(\lambda)}_L[f] \right|_p \le |||f||| \Theta_p = |||\vec{\lambda}|||\Theta_p =
$$
$$
|||\vec{\lambda}|||\Theta(p;n(1), n(2), \ \{g\}, \ \{h\}), \eqno(3.9)
$$
and as a consequence
$$
\sup_{ L: |L| \ge 1} \left| S^{(\lambda)}_L[f] \right|_p \le \sup_{n(1), n(2)} |||f||| \Theta_p =\sup_{n(1), n(2)} |||\vec{\lambda}|||\Theta_p\{f\} =
$$
$$
\sup_{n(1), n(2)} |||\vec{\lambda}|||\Theta(p;n(1), n(2), \{g\}, \{h\}). \eqno(3.9a)
$$
\vspace{4mm}
{\bf Main result. Degenerate approximation approach. } \par
\vspace{4mm}
 \ {\bf Theorem 3.1.} Let $ \ f = f(x,y) \ $ be an arbitrary function from the space $ \ L_p(X) \otimes L_p(Y), \ p \ge 2. \ $ Then
$ \ |S_L[f]|_p \le W_L[f](p), $ where
\vspace{4mm}
$$
\ W_L[f](p) = W_L[f; \{\lambda\}, \ \{g\}, \ \{h\} \ ](p) \stackrel{def}{=}
$$
$$
\inf_{M \ge 1} \left[ \ |||Z_M[f]|||\Theta_p + \ Q^+_M[f]_p \ \right], \eqno(3.10)
$$
where in turn the vector triple $ \ \{\lambda\}, \ \{g\}, \ \{h\} \ $ is taken from the representation (3.5): $ \left[ \{\lambda\}, \ \{g\}, \ \{h\} \ \right] \in R[f]. \ $ \par
\vspace{4mm}
 \ As a direct consequence: $ \ \sup_{L: |L| \ge 1} |S_L[f]|_p \le W[f](p), $ where
$$
\ W[f](p) \stackrel{def}{=} \sup_{L: |L| \ge 1} W_L[f](p) =
$$
$$
\sup_{L: |L| \ge 1} \inf_{M \ge 1} \left[ \ |||Z_M[f]|||\Theta_p + \ Q^+_M[f]_p \ \right], \eqno(3.10a)
$$
and the next consequence
$$
\sup_{L: |L| \ge 1} |S_L[f]|_p \le \inf_{ \{\lambda\}, \ \{g\}, \ \{h\} \in R[f] } W_L[f; \{\lambda\}, \ \{g\}, \ \{h\} \ ](p). \eqno(3.10b)
$$
\vspace{4mm}
{\bf Proof} is very simple and is based on the previous results of this section. Namely, let $ \ L \ $ be an arbitrary non-empty set. Consider the
splitting
$$
f = Z^+_M[f] + ( f - Z^+_M[f] ) =: \Sigma_1 + \Sigma_2.
$$
\ We have
$$
|\Sigma_1|_p \le |||Z_M[f]|||\Theta_p.
$$
 \ The member $ \ |\Sigma_2|_p $ may be estimated by virtue of inequality (3.1):
$$
|\Sigma_2|_p \le |f - Z^+_M[f]|_p = \ Q^+_M[f]_p.
$$
\ It remains to apply the triangle inequality and minimization over $ \ M. \ $ \par
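In practice, a candidate for the degenerate approximant of degree $ \ M \ $ — and hence an upper bound on the error $ \ Q_M[f]_p \ $ for $ \ p = 2 \ $ — can be obtained from a truncated singular value decomposition of the sampled kernel; note that this ignores the positivity constraint in $ \ D^+[M], \ $ which can only increase the error. A hypothetical numerical sketch (the kernel below is an arbitrary smooth example):

```python
import numpy as np

# sample the smooth kernel f(x, y) = exp(-(x - y)^2) on a grid; the truncated
# SVD of the sample matrix gives the best degree-M separable (degenerate)
# approximation in the Frobenius norm
x = np.linspace(0.0, 1.0, 200)
F = np.exp(-(x[:, None] - x[None, :]) ** 2)
s = np.linalg.svd(F, compute_uv=False)

def q_m(M):
    """Frobenius error of the best degree-M degenerate approximation of F."""
    return float(np.sqrt((s[M:] ** 2).sum()))

errs = [q_m(M) for M in (1, 2, 3, 4)]
assert all(a >= b for a, b in zip(errs, errs[1:]))  # the error is non-increasing in M
```

For such a smooth kernel the error decays rapidly with $ \ M, \ $ in agreement with $ \ Q_M[f] \to 0 \ $ as $ \ M \to \infty. $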
\vspace{4mm}
\ {\bf Example 3.1.} We deduce from (3.10) as a particular case
$$
\sup_{ L: |L| \ge 1} \left| S^{(\lambda)}_L[f] \right|_p \le \ ||f||D^+_p, \eqno(3.11)
$$
if of course the right-hand side of (3.11) is finite for some value $ \ p, \ p \ge 2. \ $ \par
\ Recall that in this section $ \ d = 2. \ $ \par
\vspace{4mm}
\section{ Non-rectangular case. } \par
\vspace{4mm}
 \ We denote by $ \ \pi^+(L) \ $ the set of all rectangles which are {\it circumscribed} about
the set $ \ L: \ \pi^+(L) = \{ L^+ \}, $ where
$$
L^+ = \{ [n(1)^{+}, n(1)^{++}] \otimes [n(2)^{+}, n(2)^{++}] \}: \ L^+ \supset L, \eqno(4.1)
$$
and
$$
1 \le n(1)^{+} \le n(1)^{++} < \infty, \ n(1)^{+}, n(1)^{++} \in Z_+,
$$
$$
1 \le n(2)^{+} \le n(2)^{++} < \infty, \ n(2)^{+}, n(2)^{++} \in Z_+.
$$
\vspace{4mm}
\ {\bf Proposition 4.1.}
$$
| S_L|_p \le \inf_{L^+: \ L \subset L^+} \ \left\{ \ \frac{|L^+|}{|L|} \cdot W_{L^+}[f; \{\lambda\}, \ \{g\}, \ \{h\} \ ](p) \ \right\}. \eqno(4.2)
$$
\vspace{4mm}
\ {\bf Proof} is very simple. We have
$$
|L| \ S_L = \sum \sum_{i,j \in L} f(\xi(i), \ \eta(j)),
$$
therefore
$$
\left| \ |L| \ S_L \ \right|_p = \left| \sum \sum_{i,j \in L} f(\xi(i), \ \eta(j)) \ \right|_p \le
$$
$$
\left| \sum \sum_{i,j \in L^+} f(\xi(i), \ \eta(j)) \ \right|_p \le |L^+| \cdot W_{L^+}[f; \{\lambda\}, \ \{g\}, \ \{h\} \ ](p),
$$
by virtue of theorem 3.1. \par
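The infimum in (4.2) is easy to realize algorithmically: the smallest circumscribed rectangle of a finite index set is its coordinate-wise bounding box, and the only quantity to control is the volume ratio $ \ |L^+|/|L|. \ $ A hypothetical sketch (function names are illustrative):

```python
def bounding_rectangle(L):
    """Smallest rectangle [a1, b1] x [a2, b2] containing the index set L."""
    i_vals = [i for i, _ in L]
    j_vals = [j for _, j in L]
    return (min(i_vals), max(i_vals)), (min(j_vals), max(j_vals))

def volume_ratio(L):
    """The factor |L^+| / |L| appearing in (4.2) for the minimal L^+."""
    (a1, b1), (a2, b2) = bounding_rectangle(L)
    return (b1 - a1 + 1) * (b2 - a2 + 1) / len(L)

# a triangular index set: |L| = 6, bounding box 3 x 3, so the ratio is 9/6
L = {(i, j) for i in range(1, 4) for j in range(1, i + 1)}
print(volume_ratio(L))  # -> 1.5
```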
\vspace{4mm}
\section{ Moment estimates for multi-index sums.}
\vspace{4mm}
\begin{center}
{\bf Multidimensional generalization.}\par
\end{center}
\vspace{4mm}
 \ Let now $ \ (X_m, \ B_m, \mu_m), \ m = 1,2,\ldots, d, \ d \ge 3 \ $ be {\it a family of probability spaces:} $ \ \mu_m(X_m) = 1; \ $
$ \ X := \otimes_{m=1}^d X_m; \ $ let $ \ \xi(m) \ $ be independent random variables having correspondingly the distributions
$ \ \mu_m: \ {\bf P}(\xi(m) \in A_m) = \mu_m(A_m), \ A_m \in B_m; \ $
and let $ \ \xi_i(m), \ i = 1,2, \ldots, n(m); \ n(m) = 1,2, \ldots, \ n(m) < \infty \ $ be independent copies of $ \ \xi(m), \ $ also independent
of the other variables $ \ \xi_i(s), s \ne m, \ $ so that all the random variables $ \ \{ \xi_i(m) \} \ $ are mutually independent. \par
 \ Further notation, conditions, restrictions and definitions: $ \ L \subset Z_+^d, \ |L| = \card(L) > 1; \ j = \vec{j} \in L; \ $
$$
k = \vec{k} = (k(1), k(2), \ldots, k(d)) \in Z_+^d; \ N(\vec{k}) := \max_{j = 1,2, \ldots,d} k(j); \eqno(5.0)
$$
$ \vec{\xi} := \{\xi(1), \xi(2), \ldots, \xi(d) \}; \ \vec{\xi}_i := \{\xi_i(1), \xi_i(2), \ldots, \xi_i(d) \}; $
$ \ X := \otimes_{i=1}^d X_i, \ f:X \to R \ $ be measurable {\it non-negative} function, i.e. such that $ \ f(\vec{\xi}) \ge 0; \ $
$$
S_L[f] := |L|^{-1} \sum_{k \in L} f\left(\vec{\xi}_k \right). \eqno(5.1)
$$
 \ The following simple estimate is, as before, called trivial:
$$
|S_L[f]|_p \le |f|L_p. \eqno (3.0a)
$$
\vspace{4mm}
{\it Recall that here and hereafter \ } $ \ p \ge 2. \ $ \par
\vspace{4mm}
\ By definition, as above, the function $ \ f: X \to R \ $ is said to be degenerate, iff it has the form
$$
f(\vec{x}) = \sum_{\vec{k} \in Z_+^d, \ N(\vec{k}) \le M} \lambda(\vec{k}) \ \prod_{s=1}^d g^{(s)}_{k(s)}(x(s)), \eqno(5.2)
$$
for some integer {\it constant} value $ \ M, \ $ finite or not, where all the functions $ \ g^{(s)}_k(\cdot) \ $ are in turn non-negative:
$ \ g^{(s)}_k(\xi(s)) \ge 0. \ $ Notation: $ \ M = \deg[f]. $ \par
\vspace{4mm}
 \ Define also, as in the two-dimensional case, for each such function $ \ f \in D^+ \ $ the following non-negative quasi-norm
$$
||f|| D^+_p \stackrel{def}{=} \inf \left\{ \ \sum_{\vec{k} \in Z^d_+, \ N(\vec{k}) \le M[f] }
|\lambda(\vec{k})| \cdot \prod_{s=1}^d |g^{(s)}_{k(s)}(\xi(s))|_p \ \right\}, \eqno(5.3)
$$
where all the arrays $ \ \{ \lambda( \vec{k}) \} , \ \{ g_j \} \ $ are taken from the representation (5.2); in particular, all the summands are non-negative. \par
\ The last assertion allows a simple estimate: $ \ ||f||D^+_p \le || f ||D^{+o}_p, \ $ where
$$
||f|| D^{+o}_p \stackrel{def}{=} \ \sum_{\vec{k} \in Z^d_+, \ N(\vec{k}) \le M[f] }
|\lambda(\vec{k})| \cdot \prod_{s=1}^d |g^{(s)}_{k(s)}(\xi(s))|_p, \eqno(5.3a)
$$
and if we denote
$$
G(p) := \prod_{j=1}^d |g^{(j)}_{k_j}(\xi_j)|_p, \ p \ge 1; \ || \lambda ||_1 := \sum_{\vec{k} \in Z^d_+} |\lambda(\vec{k})|,
$$
then
$$
||f||D^+_p \le ||f||D^{+o}_p \le G(p) \cdot ||\lambda||_1. \eqno(5.3b)
$$
\vspace{4mm}
 \ Further, let a non-negative function $ \ f \in B_1 \otimes B_2 \otimes \ldots \otimes B_d \ $ be given.
Here $ \ B_r, \ r = 1,2,\ldots,d \ $ are some Banach functional rearrangement invariant spaces built correspondingly over the sets $ \ X_r. \ $ \par
The error of a degenerate approximation
of the function $ \ f \ $ by the degenerate and non-negative ones of the degree $ \ M \ $ will be introduced as before
$$
Q^+_M[f](B_1 \otimes B_2 \otimes \ldots \otimes B_d) \stackrel{def}{=} \inf_{\tilde{f} \in D^+[M]} ||f - \tilde{f}||B_1 \otimes B_2 \otimes \ldots \otimes B_d =
$$
$$
\min_{\tilde{f} \in D^+[M]} ||f - \tilde{f}||B_1 \otimes B_2 \otimes \ldots \otimes B_d. \eqno(5.4)
$$
\ Obviously, $ \ \lim Q_M[f] (B_1 \otimes B_2 \otimes \ldots \otimes B_d) = 0, \ M \to \infty. \ $\par
\ For brevity:
$$
Q_M[f]_p \stackrel{def}{=} Q_M[f](L_p(X_1) \otimes L_p(X_2) \otimes \ldots \otimes L_p(X_d)). \eqno(5.5)
$$
\vspace{4mm}
 \ A function $ \ \tilde{f} \ $ which realizes the minimum in (5.4), not necessarily unique,
will be denoted by $ \ Z_M[f](B_1 \otimes B_2 \otimes \ldots \otimes B_d) = Z^+_M[f](B_1 \otimes B_2 \otimes \ldots \otimes B_d): \ $
$$
Z^+_M[f](B_1 \otimes B_2 \otimes \ldots \otimes B_d):=
\argmin_{\tilde{f} \in D^+[M]} ||f - \tilde{f}||B_1 \otimes B_2 \otimes \ldots \otimes B_d, \eqno(5.6)
$$
so that
$$
Q^+_M[f](B_1 \otimes B_2 \otimes \ldots \otimes B_d) = ||f - Z^+_M[f]||B_1 \otimes B_2 \otimes \ldots \otimes B_d. \eqno(5.7)
$$
\ For brevity:
$$
Z^+_M[f]= Z_M[f]_p := Z_M [f](L_p(X_1) \otimes L_p(X_2) \otimes \ldots \otimes L_p(X_d) ). \eqno(5.8)
$$
\vspace{4mm}
 \ Denote, as in the third section, for $ \ f \in D(M) \ $ in the multivariate $d$-dimensional case
$$
|||f|||\Theta_p = |||f|||\Theta_{p,L} \stackrel{def}{=}
$$
$$
\sum_{N(\vec{k}) \le M} \left|\lambda_{\vec{k}} \right| \ \prod_{s=1}^d \Theta_{p, n(s)} \left[g^{(s)}_{k(s)} \right], \eqno(5.9a)
$$
$$
W_L[f](p) = W_L^{(d)}[f](p) \stackrel{def}{=} \inf_M \left[ \ |||Z^+_M[f]|||\Theta_p + Q_M^+[f]_p \ \right], \eqno(5.9b)
$$
$$
W[f]^{(d)}(p) \stackrel{def}{=} \sup_{L: |L| \ge 1} W_L^{(d)}[f](p). \eqno(5.9c)
$$
\ We deduce analogously to the third section \par
\vspace{4mm}
{\bf Proposition 5.1.} If $ \ f \in D(M), \ $ then
$$
\left| S^{(\lambda)}_L[f] \right|_p \le |||f||| \Theta_{p,L}, \eqno(5.10)
$$
and of course
$$
\sup_{L: |L| \ge 1} \left| S^{(\lambda)}_L[f] \right|_p \le \sup_{L: |L| \ge 1} |||f||| \Theta_{p,L}. \eqno(5.10a)
$$
\vspace{4mm}
 \ {\bf Theorem 5.1.} Let $ \ f = f(x) = f(\vec{x}), \ x \in X \ $ be an arbitrary non-negative function from the space
$ \ L_p(X_1) \otimes L_p(X_2) \otimes \ldots \otimes L_p(X_d), \ p \ge 2. \ $ Then
$$
|S_L[f]|_p \le W_L^{(d)}[f](p),\eqno(5.11)
$$
$$
\sup_{L: |L| \ge 1} |S_L[f]|_p \le \sup_{L: |L| \ge 1} W_L^{(d)}[f](p) = W^{(d)}[f](p) . \eqno(5.11a)
$$
\vspace{4mm}
 \ {\bf Example 5.1.} We deduce, analogously to Example 3.1, as a particular case
$$
\sup_{ L: |L| \ge 1} \left| S_L[f] \right|_p \le (2/3)^d \cdot \left[ \ \frac{p}{e \cdot \ln p} \ \right]^d \cdot ||f||D^+_p, \eqno(5.12)
$$
if of course the right-hand side of (5.12) is finite for some value $ \ p, \ p \ge 2. \ $ \par
\vspace{4mm}
 \ {\bf Remark 5.1.} Notice that the last estimates (5.10), (5.11), and (5.12) are essentially non-improvable. Indeed, this is
known already in the one-dimensional case $ \ d = 1; \ $ for the multidimensional one it suffices to take a trivial
{\it factorizable} function; say, when $ \ d = 2, \ $ one can choose
$$
f_0(x,y) := g_0(x) \ h_0(y), \ x \in X, \ y \in Y.
$$
\vspace{4mm}
\section{Exponential bounds for distribution of positive multiple sums.}
\vspace{4mm}
 \ In this section we intend to derive {\it exponential} bounds for the tail of the distribution of the r.v. $ \ S_L, \ $ uniform with respect to
the number of summands $ \ |L|, \ $ based in turn on the moment bounds obtained above as well as on the theory
of the so-called Grand Lebesgue Spaces (GLS). We now recall, for the reader's convenience, some facts about these spaces, and supplement them. \par
\vspace{4mm}
 \ These spaces are Banach functional spaces: complete and rearrangement
invariant in the classical sense, see [4], chapters 1, 2; they were investigated in many works, see e.g. [5], [6]-[7], [13],
[19]-[20], [21], [22]-[25], and so on.\par
\ They are closely related with the so-called {\it exponential} Orlicz spaces, see [5], [6], [7], [22], [23]-[25] etc. \par
\vspace{4mm}
\ Denote for simplicity
$$
\nu_L(p) := W_L^{(d)}[f](p), \ \nu(p) := \sup_{L: |L| \ge 1} \nu_L(p), \eqno(6.1)
$$
and suppose
$$
\exists b = \const \in (1, \infty]; \ \forall p \in (1,b) \ \Rightarrow \nu(p) < \infty. \eqno(6.2)
$$
\ Recall that the norm of the random variable $ \ \xi \ $ in the so-called Grand Lebesgue Space $ \ G \psi \ $ is defined as follows
$$
||\xi||G\psi \stackrel{def}{=} \sup_{p \in (1,b)} \left\{ \frac{|\xi|_p}{\psi(p)} \right\}. \eqno(6.3)
$$
\ Here the {\it generating function} for these spaces $ \ \psi = \psi(p) \ $ will be presumed to be continuous inside the open interval $ \ p \in (1,b) \ $
and such that
$$
\inf_{p \in (1,b) } \psi(p) > 0.
$$
\ The inequalities (5.11) and (5.11a) may be rewritten as follows
$$
||S_L[f]||G\nu_L \le 1; \ \hspace{5mm} \sup_L ||S_L[f]||G\nu \le 1. \eqno(6.4)
$$
\ The so-called tail function $ \ T_{f}(y), \ y \ge 0 \ $ for arbitrary (measurable) numerical valued function (random variable, r.v.)
$ \ f \ $ is defined as usually
$$
T_{f}(y) \stackrel{def}{=} \max ( {\bf P}(f \ge y), \ {\bf P}(f \le -y) ), \ y \ge 0.
$$
\ Obviously, if the r.v. $ \ f \ $ is non-negative, then
$$
T_{f}(y) = {\bf P}(f \ge y), \ y \ge 0.
$$
 \ It is known that if $ \ f \in G\psi, \ ||f||G\psi = 1, \ $ then
$$
T_{f}(y) \le \exp \left( -\zeta_{\psi}^*(\ln y) \right), \ y \ge e, \eqno(6.5)
$$
where
$$
\zeta(p) = \zeta_{\psi}(p) := p \ \ln \psi(p).
$$
 \ Here the (non-linear) operator $ \ f \to f^* \ $ denotes the famous Young-Fenchel, or Legendre, transform
$$
f^*(u) \stackrel{def}{=} \sup_{x \in \Dom(f)} (x \ u - f(x)).
$$
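The Young-Fenchel transform admits a direct numerical evaluation over a grid; for the classical self-dual example $ \ f(x) = x^2/2 \ $ one recovers $ \ f^*(u) = u^2/2. \ $ A small illustrative sketch:

```python
import numpy as np

def young_fenchel(f, xs):
    """Discrete Young-Fenchel (Legendre) transform: f*(u) = sup_x (x u - f(x)),
    with the supremum taken over the grid xs."""
    def f_star(u):
        return max(x * u - f(x) for x in xs)
    return f_star

xs = np.linspace(-10.0, 10.0, 20001)
f_star = young_fenchel(lambda x: 0.5 * x * x, xs)

# f(x) = x^2 / 2 is its own transform: f*(3) = 3^2 / 2 = 4.5
print(round(f_star(3.0), 4))  # -> 4.5
```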
\vspace{5mm}
\ We deduce by means of theorem 5.1 and property (6.5) \par
\vspace{5mm}
\ {\bf Proposition 6.1.}
$$
T_{S_L[f]}(y) \le \exp \left( \ - \zeta_{\nu_L}^* \left( \ \ln(y) \ \right) \ \right), \ y \ge e; \eqno(6.6)
$$
$$
\sup_L \ T_{S_L[f]}(y) \le \exp \left( \ - \zeta_{\nu}^* \left( \ \ln(y) \ \right) \ \right), \ y \ge e. \eqno(6.6a)
$$
\vspace{5mm}
\ {\bf Example 6.1}. \par
 \ Let us give an example; see [30] for the centered r.v. Let $ \ m = \const > 1 \ $ and define $ \ q = m' = m/( m-1). \ $
Let also $ \ R = R(y), \ y > 0 \ $ be a positive continuous differentiable {\it slowly varying} at infinity function such that
$$
\lim_{y \to \infty} \frac{R(y/R(y))}{R(y)} = 1. \eqno(6.7)
$$
 \ Introduce the following $ \ \psi \ - \ $ function
$$
\psi_{m,R} (p) \stackrel{def}{=} p^{1/m} R^{-1/(m-1)} \left( p^{ (m-1)^2/m } \right), \ p \ge 1, \ m = \const > 1. \eqno(6.7a)
$$
\ Suppose
$$
\nu(p) \le \psi_{m,R} (p), \ p \in [1, \infty);
$$
then, see [19]-[20], [30], the corresponding exponential tail function has the form
$$
T^{(m,R)}(y) \stackrel{def}{=} \exp \left\{ - C(m,R) \ \ y^m \ R^{ m-1} \left(y^{m-1} \right) \right\}, \ C(m,R) > 0, \ y \ge 1; \eqno(6.7b)
$$
so that
$$
\sup_L T_{S_L}(y) \le T^{(m,R)}(y), \ y \ge 1. \eqno(6.8)
$$
\vspace{4mm}
 \ A particular case: $ \ R(y) = \ln^r (y+e), \ r = \const, \ y \ge 0; $ then the corresponding generating function has the form (up to a multiplicative constant)
$$
\psi_{m,r}(p) = \ p^{1/m} \ \ln^{-r}(p), \ p \in [2, \infty), \eqno(6.9a)
$$
and the correspondent tail function has a form
$$
T^{m,r}(y) = \exp \left\{ \ - K(m,r) \ y^m \ (\ln y)^{r} \ \right\}, \ K(m,r) > 0, \ y \ge e. \eqno(6.9b)
$$
\ Many other examples may be found in [19], [20], [22], [30] etc. \par
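The passage from the moment bound (6.9a) to the tail bound (6.9b) is, at bottom, an optimized Markov inequality: if $ \ |f|_p \le \psi(p) \ $ for all admissible $ \ p, \ $ then $ \ T_f(y) \le \inf_p (\psi(p)/y)^p, \ $ which is exactly $ \ \exp(-\zeta_{\psi}^*(\ln y)) \ $ from (6.5). A numerical sketch for the sub-Gaussian case $ \ \psi(p) = \sqrt{p} \ $ (i.e. $ \ m = 2, \ r = 0 \ $), where the infimum is attained near $ \ p = y^2/e \ $ and equals $ \ \exp(-y^2/(2e)): $

```python
import math

def tail_from_moments(psi, y, p_grid):
    """Markov step behind (6.5): if |f|_p <= psi(p) for all p, then
    T_f(y) <= inf_p (psi(p) / y)^p = exp(-zeta_psi^*(ln y))."""
    return min((psi(p) / y) ** p for p in p_grid)

psi = lambda p: math.sqrt(p)               # m = 2, r = 0 in (6.9a)
p_grid = [0.1 * k for k in range(10, 400)]  # p in [1.0, 39.9]

for y in (3.0, 5.0, 8.0):
    bound = tail_from_moments(psi, y, p_grid)
    exact = math.exp(-y * y / (2.0 * math.e))
    # the grid infimum matches the analytic value exp(-y^2/(2e)) closely
    assert 0.999 * exact <= bound <= 1.01 * exact
```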
\vspace{4mm}
 \ {\bf Example 6.2. } Let the function $ \ f: X = \otimes_{s=1}^d X_s \to R \ $ have the degenerate representation
$$
f(\vec{x}) = \sum_{\vec{k} \in Z_+^d, \ N(\vec{k}) \le M} \lambda(\vec{k}) \ \prod_{j=1}^d g^{(j)}_{k_j}(x_j), \eqno(5.2a)
$$
for some constant integer value $ \ M, \ $ finite or not, where all the functions $ \ g_k^{(j)}(\cdot) \ $ are in turn non-negative:
$ \ g_k^{(j)}(\xi(k)) \ge 0. \ $ Recall the notation: $ \ M = \deg[f]. $ \par
\ {\it Suppose here and in what follows in this section that}
$$
\sum_{\vec{k} \in Z_+^d, \ N(\vec{k}) \le M} | \ \lambda(\vec{k}) \ | \le 1 \eqno(6.10)
$$
and that each non-negative r.v. $ \ g_k^{(j)}(\xi(k)) \ $ belongs to some $ \ G\psi_k \ - \ $ space uniformly with respect to the index $ \ j: \ $
$$
\sup_j| \ g_k^{(j)}(\xi(k)) \ |_p \le \psi_k(p). \eqno(6.11)
$$
 \ Of course, one may take for these functions the natural functions for the r.v. $ \ g_k(\xi(k)): \ $
$$
\psi_k(p) \stackrel{def}{=} \sup_j |g_k^{(j)}(\xi(k))|_p,
$$
if the last function is finite on some non-trivial interval $ \ [2, a(k)), \ $ where $ \ a(k) \in (2, \infty]. \ $ \par
\ Obviously,
$$
|f(\vec{\xi})|_p \le \prod_{k=1}^d \psi_k(p),
$$
and the last inequality is exact if, for instance, $ \ M = 1 \ $ and all the functions $ \ \psi_k(p) \ $ are natural for the family of
r.v. $ \ g^{(j)}_k(\xi(k)). \ $\par
\vspace{4mm}
\ Define the following $ \ \Psi \ - \ $ function
$$
\kappa(p) = \kappa_d[\vec{\xi}](p) \stackrel{def}{=} \prod_{k=1}^d \psi_k(p).
$$
 \ The assertion of Proposition 5.1 gives us the estimates
$$
\sup_{L: |L| \ge 1} ||S_L[f]||G\kappa \le 1 \eqno(6.12)
$$
and hence
$$
\sup_{L: |L| \ge 1} T_{S_L[f]}(u) \le \exp \left( -\zeta^*_{\kappa}(\ln u) \right), \ u \ge e, \eqno(6.12b)
$$
with the corresponding Orlicz norm estimate. \par
\vspace{4mm}
\ {\bf Example 6.3. } \par
\ Suppose again that
$$
\sum_{\vec{k} \in Z_+^d, \ N(\vec{k}) \le M} \ | \lambda(\vec{k}) \ | \le 1
$$
and that each r.v. $ \ g^{(j)}_k(\xi(k)) \ $ belongs, uniformly with respect to the index $ \ j, \ $ to the corresponding
$ \ G\psi_{m(k), \gamma(k)} \ $ space:
$$
\sup_j | \ g^{(j)}_k(\xi(k)) \ |_p \le p^{1/m(k)} \ [\ln \ p]^{\gamma(k)}, \ p \ge 2, \ m(k) > 0, \ \gamma(k) \in R, \eqno(6.13)
$$
or equally
$$
\sup_j T_{g^{(j)}_k(\xi(k))}(u) \le \exp \left( - C(k) \ u^{m(k)} \ [\ln u]^{ - \gamma(k) } \right), \ u \ge e. \eqno(6.13a)
$$
\ Define the following variables:
$$
m_0 := \left[ \sum_{k=1}^d 1/m(k) \right]^{-1}, \ \gamma_0 := \sum_{k=1}^d \gamma(k) , \eqno(6.14)
$$
$$
\hat{S}_L = \hat{S}_L[f] := e^d \ C^{-d}_R \ S_L. \eqno(6.15)
$$
 \ We conclude by means of Proposition 5.1
$$
\sup_{ L: |L| \ge 1} \left| \left| \hat{S}_L \right| \right| G\psi_{m_0, \gamma_0} \le 1 \eqno(6.16)
$$
and therefore
$$
\sup_{ L: |L| \ge 1} T_{\hat{S}_L}(u) \le \exp \left\{ - C(d,m_0, \gamma_0) \ u^{m_0} \ (\ln u)^{ - \gamma_0} \right\},
\ u > e. \eqno(6.17)
$$
\vspace{4mm}
{\bf Example 6.4.}
\vspace{4mm}
\ Let us consider as above the following $ \ \psi_{\beta}(p) \ $ function
$$
\psi_{\beta,C}(p) := \exp \left( C p^{\beta} \right), \ C, \ \beta = \const > 0, \ p \in [1,\infty). \eqno(6.18)
$$
see example 6.3, (6.15)-(6.17). \par
 \ Let $ \ g_k^{(j)}(\xi(k)) \ $ be non-negative independent random variables belonging to a certain $ \ G \psi_{\beta,C}(\cdot) \ $ space
uniformly with respect to the indices $ \ k,j: $
$$
\sup_j \sup_k ||g_k^{(j)}(\xi(k))|| G \psi_{\beta,C} = 1, \eqno(6.19)
$$
or equally
$$
\sup_j \ \sup_k T_{g_k^{(j)}(\xi(k))}(y) \le \exp \left( \ - C_1(C, \beta) \ [ \ln(1 + y) ]^{1 +1/\beta} \ \right), \ y > 0.
$$
\ Then
$$
\sup_{L: \ |L| \ge 1} T_{S_L}(y) \le \exp \left( \ - C_2(C, \beta) \ [ \ln(1 + y) ]^{1 +1/\beta} \ \right), \ y > 0, \eqno(6.20)
$$
or equally
$$
\sup_{L: |L| \ge 1} || S_L[f]|| G \psi_{\beta,C_3(C, \beta)} = C_4(C, \beta) < \infty. \eqno(6.20a)
$$
\vspace{4mm}
 \ {\bf Example 6.5.} Suppose now that each non-negative random variable $ \ g_k^{(j)}(\xi(k)) \ $ belongs, uniformly with respect to the index $ \ j, \ $
to a certain $ \ G\psi^{<b(k), \theta(k)>} \ $ space, where $ \ b(k) \in (2, \infty), \ \theta(k) \in R: \ $
$$
\sup_j || \ g_k^{(j)}(\xi(k)) \ || G\psi^{<b(k), \theta(k)>} < \infty,
$$
where by definition
$$
\psi^{<b(k),\theta(k)>}(p) \stackrel{def}{=} C_1(b(k),\theta(k)) \ (b(k)-p)^{ -(\theta(k) + 1)/b(k) }, \ 1 \le p < b(k).
$$
 \ This case is more complicated than those considered before. \par
\ Note that if the r.v. $ \ \eta \ $ satisfies the inequality
$$
T_{\eta}(y) \le C \ y^{-b(k)} \ [\ln y]^{ \theta(k)}, \ y \ge e,
$$
then $ \eta \in G\psi^{<b(k),\theta(k)>}, $ see the example 6.2.\par
\ One can assume without loss of generality
$$
b(1) \le b(2) \le b(3) \le \ldots \le b(d).
$$
\ Denote
$$
\nu_k(p) := \psi^{ < b(k), \theta(k) >}(p), \ b(0):= \min_k b(k),
$$
so that $ \ b(0) = b(1) = $
$$
b(2) = \ldots = b(k(0)) < b(k(0) + 1) \le \ldots \le b(d), \ 1 \le k(0) \le d;
$$
$$
\Theta := \sum_{k=1}^{k(0)} (\theta(k) + 1)/b(0),
$$
$$
\upsilon(p) = \upsilon_{\vec{\xi}}[f](p) \stackrel{def}{=} \prod_{l=1}^{k(0)} \nu_l(p) =
C \cdot \left[ \ b(0) - p \ \right]^{ - \Theta }, \ 2 \le p < b(0).
$$
\ Obviously,
$$
\ \prod_{k=1}^d \ \nu_k(p) \le C(d) \ \upsilon(p) = C \ \left[ \ b(0) - p \ \right]^{ - \Theta }, \
C = C_d(\vec{\xi}, \vec{b}, \vec{\theta}, k(0)).
$$
 \ Thus, under the conditions formulated above, we obtain
$$
\sup_{L: |L| \ge 1} |S_L|_p \le C_2 \ (b(0) - p)^{-\Theta}, \ p \in [2, b(0))
$$
with the correspondent tail estimate
$$
\sup_{L: |L| \ge 1} T_{S_L}(y) \le C_3 \ y^{-b(0)} \ [ \ \ln y \ ]^{ \ b(0) \ \Theta}, \ y \ge e.
$$
\vspace{4mm}
\section{Lower bounds for these statistics. }
\vspace{4mm}
 \ {\bf A.} A simple lower estimate in Klesov's inequality (3.4) may have the form
$$
\sup_{L: |L| \ge 1} \left| S^{(2)}_L \right|_p \ge \left| S^{(2)}_1 \right|_p =
\ |g(\xi)|_p \ |h(\eta)|_p, \ p \ge 2, \eqno(7.1)
$$
as long as the r.v. $ \ g(\xi), \ h(\eta) \ $ are independent. \par
 \ Suppose now that $ \ g(\xi) \in G\psi_1 \ $ and $ \ h(\eta) \in G\psi_2, \ $ where $ \ \psi_j \in \Psi(b), \ b = \const \in (2, \infty]; \ $
for instance, $ \ \psi_j, \ j = 1,2 \ $ may be taken to be the natural functions for these r.v. Put $ \ \nu(p) = \psi_1(p) \ \psi_2(p); \ $ then
$$
\nu(p) \le \sup_{L: |L| \ge 1} \left| S^{(2)}_L \right|_p \le K_1^d \cdot \ \nu(p), \ K_1< \infty. \eqno(7.2)
$$
 \ Assume in addition that $ \ b < \infty; \ $ then $ \ K_1 \le C(b) < \infty. \ $ We arrive at the following assertion. \par
\vspace{4mm}
{\bf Proposition 7.1.} Under the conditions formulated above in this section, we deduce
$$
1 \le \frac{ \sup_{L: |L| \ge 1} \left|S_L \right|_p}{\nu(p)} \le C^d(b) < \infty, \ p \in [2,b). \eqno(7.3)
$$
\vspace{4mm}
 \ {\bf B. \ Tail approach.} We will use Example 6.2 (and the notation therein). Suppose in addition that all the (independent) r.v. $ \ \xi(k) \ $
have the following tail of distribution
$$
T_{g_l(\xi(k))}(y) = \exp \left( \ - [\ln(1 + y)]^{1 +1/\beta} \ \right), \ y \ge 0, \ \beta = \const > 0,
$$
i.e. with unbounded support. As we have seen,
$$
\sup_{L: \ |L| \ge 1} T_{S_L}(y) \le \exp \left( \ - C_5(\beta,d) \ [ \ln(1 + y) ]^{1 +1/\beta} \ \right), \ y > 0.
$$
On the other hand,
$$
\sup_{L: \ |L| \ge 1} T_{S_L}(y) \ge T_{S_1}(y) \ge \exp \left( \ - C_6(\beta,d) \ [\ln(1 + y)]^{1 +1/\beta} \ \right), \ y > 0. \eqno(7.4)
$$
\vspace{4mm}
 \ {\bf C. An example.} Suppose, as in Example 6.1, that the independent r.v. $ \ g^{(j)}_k(\xi(k)) \ $ have the standard Poisson
distribution: $ \ \Law(\xi(k) ) = \Poisson(1), \ k = 1,2,\ldots,d. \ $ Assume also that in the representation (5.2a) $ \ M = 1 \ $
(a limiting degenerate case). As long as
$$
|g_k^{(j)}(\xi(k))|_p \asymp C \frac{p}{\ln p}, \ p \ge 2,
$$
we conclude by virtue of theorem 5.1
$$
\sup_{L: |L| \ge 1} \left| \ S_L \right|_p \le C_2^d \ \frac{p^{2d}}{ [\ln p]^{2d}}, \ p \ge 2, \eqno(7.5)
$$
therefore
$$
\sup_{L: |L| \ge 1} T_{S_L}(y) \le \exp \left( - C_1(d) \ y^{1/(2d)} \ [\ln y]^{2d} \right), \ y \ge e. \eqno(7.6)
$$
\ On the other hand,
$$
\sup_{L: |L| \ge 1} \left| S_L \right|_p \ge \left| S_1 \right|_p \ge C_3(d) \ \frac{p^d}{ [\ln p]^d},
$$
and consequently
$$
\sup_{L: |L| \ge 1} T_{S_L}(y) \ge \exp \left( - C_4(d) \ y^{1/d} \ [\ln y]^{d} \right), \ y \ge e. \eqno(7.7)
$$
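The Poisson moment asymptotics used in this example can be checked exactly for integer $ \ p: \ $ by Dobinski's formula, $ \ {\bf E} \xi^p \ $ for $ \ \Law(\xi) = \Poisson(1) \ $ is the $p$-th Bell number, computable via the Bell triangle. A small illustrative sketch:

```python
import math

def bell_numbers(n):
    """B_0 .. B_n via the Bell triangle; B_p = E[xi^p] for xi ~ Poisson(1)."""
    row, bells = [1], [1]
    for _ in range(n):
        new_row = [row[-1]]
        for a in row:
            new_row.append(new_row[-1] + a)
        row = new_row
        bells.append(row[0])
    return bells

def poisson_moment(p, terms=80):
    """Dobinski's formula: E[xi^p] = e^{-1} * sum_k k^p / k! for xi ~ Poisson(1)."""
    return math.exp(-1.0) * sum(k ** p / math.factorial(k) for k in range(terms))

bells = bell_numbers(8)
print(bells[:6])  # -> [1, 1, 2, 5, 15, 52]
assert all(round(poisson_moment(p)) == bells[p] for p in range(9))
```

The $p$-th root $ \ B_p^{1/p} \ $ indeed grows like $ \ p/\ln p \ $ up to a bounded factor, in agreement with the moment estimate above.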
\vspace{4mm}
\section{Concluding remarks. }
\vspace{4mm}
 \ {\bf A.} \ It is interesting, in our opinion, to generalize the results obtained in this report to
mixing sequences or to martingales, as well as to multiple integrals instead of sums. \par
\vspace{4mm}
{\bf B.} \ Perhaps more general results may be obtained by means of the so-called method of
majorizing measures, see [1]-[3], [11], [29], [31]-[35]. \par
\vspace{4mm}
{\bf C. } \ Possible applications: statistics and the Monte-Carlo method, as in [8], [10], etc.\par
\vspace{4mm}
{\bf D.} \ It is perhaps interesting to generalize the assertions of our theorems to sequences of domains $ \ \{ \ L \ \} \ $
tending to ``infinity'' in the van Hove sense, in the spirit of the articles [26]-[27], [30]. \par
\vspace{4mm}
{\bf References.}
\vspace{4mm}
{\bf 1. Bednorz W.} (2006). {\it A theorem on Majorizing Measures.} Ann. Probab., {\bf 34,} 1771-1781. MR1825156.\par
\vspace{4mm}
{\bf 2. Bednorz W.} {\it The majorizing measure approach to the sample boundedness.} \\
arXiv:1211.3898v1 [math.PR] 16 Nov 2012 \\
\vspace{4mm}
{\bf 3. Bednorz W.} (2010), {\it Majorizing measures on metric spaces.}
C.R. math. Acad. Sci. Paris, (2010), 348, no. 1-2, 75-78, MR2586748. \\
\vspace{4mm}
{\bf 4. Bennett C., Sharpley R. } {\it Interpolation of operators.} Orlando, Academic
Press Inc., (1988). \\
\vspace{4mm}
{\bf 5. Buldygin V.V., Kozachenko Yu.V.} {\it Metric Characterization of Random
Variables and Random Processes. } 1998, Translations of Mathematics Monograph,
AMS, v.188. \\
\vspace{4mm}
{\bf 6. A. Fiorenza.} {\it Duality and reflexivity in grand Lebesgue spaces.} Collect.
Math. 51, (2000), 131-148.\\
\vspace{4mm}
{\bf 7. A. Fiorenza and G.E. Karadzhov.} {\it Grand and small Lebesgue spaces and
their analogs.} Consiglio Nazionale delle Ricerche, Istituto per le Applicazioni
del Calcolo ``Mauro Picone'', Sezione di Napoli, Rapporto tecnico 272/03, (2005). \\
\vspace{4mm}
{\bf 8. Frolov A.S., Chentzov N.N.} {\it On the calculation by the Monte-Carlo method definite integrals depending on the parameters.}
Journal of Computational Mathematics and Mathematical Physics, (1962), V. 2, Issue 4, p. 714-718 (in Russian).\\
\vspace{4mm}
{\bf 9. Gine,E., R.Latala, and J.Zinn.} (2000). {\it Exponential and moment inequalities for U \ - \ statistics.}
Ann. Probab. 18 No. 4 (1990), 1656-1668. \\
\vspace{4mm}
{\bf 10. Grigorjeva M.L., Ostrovsky E.I.} {\it Calculation of Integrals on discontinuous Functions by means of depending trials method.}
Journal of Computational Mathematics and Mathematical Physics, (1996), V. 36, Issue 12, p. 28-39 (in Russian).\\
\vspace{4mm}
{\bf 11. Heinkel B.} {\it Measures majorantes et le theoreme de la limite centrale dan le space C(S).}
Z. Wahrscheinlichkeitstheory. verw. Geb., (1977). {\bf 38}, 339-351.
\vspace{4mm}
{\bf 12. R.Ibragimov and Sh.Sharakhmetov.} {\it The Exact Constant in the Rosenthal Inequality for Random Variables with Mean Zero.}
Theory Probab. Appl., 46(1), 127–132. \\
\vspace{4mm}
{\bf 13. T. Iwaniec and C. Sbordone.} {\it On the integrability of the Jacobian under
minimal hypotheses.} Arch. Rat.Mech. Anal., 119, (1992), 129-143.\\
\vspace{4mm}
{\bf 14. Oleg Klesov.} {\it A limit theorem for sums of random variables indexed by multidimensional indices. }
Prob. Theory and Related Fields, 1981, {\bf 58,} \ (3), 389-396. \\
\vspace{4mm}
{\bf 15. Oleg Klesov.} {\it A limit theorem for multiple sums of identically distributed independent
random variables. } Journal of Soviet Mathematics, September 1987, V. 38, Issue 6, pp. 2321-2326. \\
\vspace{4mm}
{\bf 16. Oleg Klesov.} {\it Limit Theorems for Multi-Indexes Sums of Random Variables. } Springer, 2014.\\
\vspace{4mm}
{\bf 17. O. Klesov.} {\it Limit theorems for multi-indexed sums of random variables.}
Volume 71 of Probability Theory and Stochastic Modeling. Springer Verlag, Heidelberg, 2014. \\
\vspace{4mm}
{\bf 18. Korolyuk V.S., Borovskikh Yu.V.} (1994). {\it Theory of U-Statistics.} Kluwer Academic Publishers, Dordrecht,
(translated from Russian).\\
\vspace{4mm}
{\bf 19. Kozachenko Yu. V., Ostrovsky E.I.} (1985). {\it The Banach Spaces of
random Variables of subgaussian Type.} Theory of Probab. and Math. Stat. (in
Russian). Kiev, KSU, 32, 43-57. \\
\vspace{4mm}
{\bf 20. Kozachenko Yu.V., Ostrovsky E., Sirota L} {\it Relations between exponential tails, moments and
moment generating functions for random variables and vectors.} \\
arXiv:1701.01901v1 [math.FA] 8 Jan 2017 \\
\vspace{4mm}
{\bf 21. E. Liflyand, E. Ostrovsky and L. Sirota.} {\it Structural properties of Bilateral Grand Lebesgue Spaces. }
Turk. Journal of Math., 34, (2010), 207-219. TUBITAK, doi:10.3906/mat-0812-8 \\
\vspace{4mm}
{\bf 22. Ostrovsky E.I.} (1999). {\it Exponential estimations for Random Fields and
its applications,} (in Russian). Moscow-Obninsk, OINPE.\\
\vspace{4mm}
{\bf 23. Ostrovsky E. and Sirota L.} {\it Sharp moment estimates for polynomial martingales.}\\
arXiv:1410.0739v1 [math.PR] 3 Oct 2014 \\
\vspace{4mm}
{\bf 24. Ostrovsky E. and Sirota L.} {\it Moment Banach spaces: theory and applications. }
HAIT Journal of Science and Engineering C, Volume 4, Issues 1-2, pp. 233-262.\\
\vspace{4mm}
{\bf 25. Ostrovsky E. and Sirota L.} {\it Schl\"omilch and Bell series for Bessel's functions, with probabilistic applications.} \\
arXiv:0804.0089v1 [math.CV] 1 Apr 2008\\
\vspace{4mm}
{\bf 26. E. Ostrovsky, L.Sirota.} {\it Sharp moment and exponential tail estimates for U-statistics. }
arXiv:1602.00175v1 [math.ST] 31 Jan 2016 \\
\vspace{4mm}
{\bf 27. Ostrovsky E. and Sirota L.} {\it Uniform Limit Theorem and Tail Estimates for parametric U-Statistics.}
arXiv:1608.03310v1 [math.ST] 10 Aug 2016\\
\vspace{4mm}
{\bf 28. Ostrovsky E.I.} {\it Non-Central Banach space valued limit theorem and applications.} In: Problems of the theory of probabilistic
distributions. 1989, Nauka, Proceedings of the scientific seminars on Steklov's Institute, Leningrad, V.11, p. 114-119, (in Russian).\\
\vspace{4mm}
{\bf 29. Ostrovsky E. and Sirota L.} {\it Simplification of the majorizing measures method, with development.} \\
arXiv:1302.3202v1 [math.PR] 13 Feb 2013 \\
\vspace{4mm}
{\bf 30. Ostrovsky E., Sirota L.} {\it Non-asymptotic estimation for Bell function, with probabilistic applications.} \\
arXiv:1712.08804v1 [math.PR] 23 Dec 2017 \\
\vspace{4mm}
{\bf 31. Talagrand M.} (1996). {\it Majorizing measure: The generic chaining.}
Ann. Probab., {\bf 24,} 1049-1103. MR1825156.\\
\vspace{4mm}
{\bf 32. Talagrand M.} (2005). {\it The Generic Chaining. Upper and Lower Bounds of Stochastic Processes.}
Springer, Berlin. MR2133757.\\
\vspace{4mm}
{\bf 33. Talagrand M.} (1987). {\it Regularity of Gaussian processes.} Acta Math. 159, no. 1-2, 99-149. MR 0906527.\\
\vspace{4mm}
{\bf 34. Talagrand M.} (1990). {\it Sample boundedness of stochastic processes under increment conditions.} \ Annals of Probability,
{\bf 18,} \ N. 1, 1-49, \ MR10439.\\
\vspace{4mm}
{\bf 35. Talagrand M.} (1992). {\it A simple proof of the majorizing measure theorem.}
Geom. Funct. Anal. 2, no. 1, 118-125. MR 1143666.\\
\vspace{4mm}
\end{document}
\section{Introduction}\label{sec:intro}
Many tasks in natural language processing have been transformed in recent years by the introduction of very large annotated datasets. Prominent examples include paraphrase \citep{ganitkevitch-etal-2013-ppdb}, parsing \citep{nivre-etal-2016-universal}, question answering (\citealp{rajpurkar-etal-2016-squad}), machine translation (MT; \citealp{bojar-etal-2014-findings}), and natural language inference (NLI; \citealp{bowman-etal-2015-large, williams-etal-2018-broad}).
Unfortunately, outside of parsing and MT, these datasets tend to be in English. This is not only an obstacle to progress on other languages, but it also limits the field of NLP itself: English is generally not a representative example of the world's languages when it comes to morphology, syntax, or spelling conventions and other kinds of standardization \citep{Munro:2012}, so it's risky to assume that models and results for English will generalize to other languages.
A natural response to these gaps in our dataset coverage might be to launch new annotation efforts for multiple languages. However, this would likely be prohibitively expensive. For example, based on the costs of SNLI \citep{bowman-etal-2015-large} and MultiNLI \citep{williams-etal-2018-broad}, we estimate that each large dataset for NLI would cost upwards of US\,\$50,000 if created completely from scratch.
At the same time, commercial MT systems have improved dramatically in recent years. They now offer high-quality translations between hundreds of language pairs. This raises the question: can we use these MT systems to translate English-language datasets and use the translated versions to drive more genuinely multilingual development in NLP?
In this paper, we offer evidence that the answer to this question is ``yes''. Using Amazon Translate, we translated SNLI and MultiNLI from English into Turkish, at a tiny fraction of the cost of creating new Turkish NLI datasets from scratch. Turkish is an interesting challenge case in this context, since it is very different from English, most notably in its complex morphology and very free word order. This is likely to stress-test English-to-Turkish MT systems and also present new challenges for NLI. We refer to these datasets collectively as NLI-TR.
In our validation phase (\secref{sec:datasets}), a team of Turkish--English bilingual speakers assessed the quality of a large sample of the translations in NLI-TR. They found the quality to be very high, which suggests that these translated datasets can provide a foundation for NLI work in Turkish.
As an example of the new issues in NLI that these translations help us to address, we consider the role of pretraining and morphological parsing in successful NLI systems for Turkish (\secref{sec:experiments}). For these experiments, we fit classifiers on top of pretrained BERT parameters \citep{devlin-etal-2019-bert}. This allows us to compare the original BERT-base release, the multilingual BERT embeddings released by the BERT team, and the Turkish BERT (BERTurk) embeddings of \citet{berturk}. We find that the BERTurk embeddings are superior to the others for NLI-TR.
We then assess the use of two morphological parsers for Turkish as preprocessing steps: Zemberek \citep{Zemberek} and Turkish Morphology \citep{ozturel2019syntactically}. We find that the parsers help where training data is sparse, but the need for a parser disappears as the training data increases. This is a striking finding: one might have thought that Turkish would require morphological parsing given its complex word-formation processes. It might be regarded as welcome news, though, since the parsers are expensive to run. In \secref{sec:parsers}, we report on some new optimizations of existing tools to make the relevant parsing jobs feasible, but we would still like to avoid these steps if possible, and it seems that we can for NLI.
\section{Related Work}
Early in the development of textual entailment tasks, \citet{mehdad-etal-2010-towards} argued for multilingual versions of them. This led to subsequent explorations of a variety of techniques, including crowdsourcing translations \citep{negri-mehdad-2010-creating,negri-etal-2011-divide}, relying on parallel corpora to support reasoning across languages \citep{mehdad-etal-2011-using}, and automatically translating datasets using MT systems \citep{mehdad-etal-2010-towards}. This research informed SemEval tasks in 2012 \citep{negri-etal-2012-semeval} and 2013 \citep{negri-etal-2013-semeval} exploring the viability of multilingual NLI. From the perspective of present-day NLI models, these datasets are very small, but they could be used productively as challenge problems.
More recently, \citet{conneau-etal-2018-xnli} reinvigorated work on multilingual NLI with their XNLI dataset. XNLI provides expert-translated evaluation sets from English into 14 other languages, including Turkish. These are valuable resources for pushing NLI research beyond English. However, having test sets doesn't support direct training on target language data, which is likely to lead to lower overall performance for the resulting systems than we would expect from in-language training.
Although it was not the main focus of the XNLI effort, \citeauthor{conneau-etal-2018-xnli} distributed translations of MultiNLI into other languages, including Turkish. This helped them form a strong baseline for their cross-lingual models, which proved superior in their assessments. However, the quality of the translation system plays a crucial role here, as the authors note. Our hope for NLI-TR\ is that it supports effective in-language training.
XNLI's primary focus on test sets rather than training is justified by a wide body of recent results on cross-lingual transfer learning. Multilingual embeddings (embeddings trained on multilingual corpora) have played an important role in these developments. The BERT team \citep{devlin2018bert} released multilingual embeddings and demonstrated their value using XNLI. At the same time, BERT models have been released for a variety of individual languages (see \citealp{Wolf2019HuggingFacesTS}) and specialized domains \citep{alsentzer-etal-2019-publicly,lee2020biobert}. While we might expect the language- and domain-specific embeddings to be superior for the kind of data they were trained on, the multilingual versions might be more efficient in large-scale deployments in diverse environments. Balancing these trade-offs is challenging. Here, we offer some insight into these trade-offs for Turkish.
Turkish is a morphologically-rich language in which new word forms are freely created by suffixation. A single Turkish word can thus bear the morpho-syntactic properties that languages like English express with multi-word phrases. Several morphological parsers \citep{Zemberek, ozturel2019syntactically, sak2009stochastic} and morphological disambiguation systems \citep{sak2011resources} have been developed for Turkish. The state-of-the-art morphological analyzers can parse with success rates around 95\%. We use two of these parsers in this work to evaluate the role of morphology on NLI systems (\secref{sec:parsers}).
\section{Creating and Validating NLI-TR}\label{sec:datasets}
\begin{table*}[!ht]
\centering
\footnotesize
\setlength{\tabcolsep}{4pt}
\begin{tabular}{ l l *{6}{r} }\toprule
& & \multicolumn{3}{c}{\textbf{English}} & \multicolumn{3}{c}{\textbf{Turkish}} \\ \cmidrule{3-8}
\textbf{Dataset} & \textbf{Fold} & \textbf{Token Count} & \textbf{\begin{tabular}[l]{@{}l@{}}Vocab Size\\ (Cased)\end{tabular}} & \textbf{\begin{tabular}[l]{@{}l@{}}Vocab Size\\ (Uncased)\end{tabular}} & \textbf{Token Count} & \textbf{\begin{tabular}[l]{@{}l@{}}Vocab Size\\ (Cased)\end{tabular}} & \textbf{\begin{tabular}[l]{@{}l@{}}Vocab Size\\ (Uncased)\end{tabular}} \\ \midrule
\multirow{3}{*}{\textbf{SNLI}}&\textbf{Train} & 5900366 & 38565 & 32696 & 4298183 & 78786 & 66599 \\
&\textbf{Dev} & 120900 & 6664 & 6224 & 88668 & 11455 & 10176 \\
&\textbf{Test} & 120776 & 6811 & 6340 & 88533 & 11547 & 10259 \\ \midrule
\multirow{3}{*}{\textbf{MultiNLI}}&\textbf{Train} & 6356136 & 81937 & 66082 & 4397213 & 216590 & 187053 \\
&\textbf{Matched Dev} & 161152 & 14493 & 12659 & 112192 & 27554 & 24872 \\
&\textbf{Mismatched Dev} & 170692 & 12847 & 11264 & 119691 & 26326 & 23941 \\
\bottomrule
\end{tabular}
\caption{Comparative statistics for the English and Turkish NLI datasets. The Turkish translations have larger vocabularies and lower token counts due to the highly agglutinating morphology of Turkish as compared to English.}
\label{tab:dataset_stats}
\end{table*}
\subsection{English NLI Datasets}\label{sec:en-datasets}
We translated the Stanford Natural Language Inference Corpus (SNLI; \citealp{bowman-etal-2015-large}) and the Multi-Genre Natural Language Inference Corpus (MultiNLI; \citealp{MultiNLI}) to create labeled NLI datasets for Turkish, NLI-TR.
SNLI contains $\approx$570K semantically related English sentence pairs. The semantic relations are entailment, contradiction, and neutral. The premise sentences for SNLI are image captions from the Flickr30K corpus \citep{young-etal-2014-image}, and the hypothesis sentences were written by crowdworkers. SNLI texts are mostly short and structurally simple. We translated SNLI while respecting the train, development (dev), and test splits.
MultiNLI comprises $\approx$433K sentence pairs in English, and the pairs have the same semantic relations as SNLI. However, MultiNLI spans a wider range of genres, including travel guides, fiction and non-fiction texts, dialogue, and journalism. As a result, the texts are generally more complex than those in SNLI. In addition, MultiNLI contains \emph{matched} and \emph{mismatched} dev and test sets, where the sentences in the former set are from the same sources as the training set, whereas the latter consists of texts from different genres than those found in the training set. We translated the training set and both dev sets for NLI-TR.
\subsection{Automatic Translation Effort}
As we noted in \secref{sec:intro}, Turkish is a resource-constrained language with few labeled datasets compared to English. Furthermore, Turkish has a fundamentally different grammar from English that could hinder transfer-learning approaches. These facts motivate our effort to translate SNLI and MultiNLI from English to Turkish. We employ an automatic MT system for this in the hope that it will deliver sufficiently high-quality translations that we can use the resulting dataset for NLI research and system development in Turkish.
We used Amazon Translate, a commercial neural machine translation service. Translation of all folds of SNLI and MultiNLI cost just US\,\$2K (vs.~the $\approx$US\,\$100K we would expect for replicating these two datasets from scratch). We refer to the translated datasets as SNLI-TR and MultiNLI-TR, and collectively as NLI-TR. \Tabref{tab:sample_translations_table} shows translation examples from both datasets.
SNLI-TR and MultiNLI-TR are different from SNLI and MultiNLI in terms of token counts and vocabulary sizes. \Tabref{tab:dataset_stats} illustrates these features before and after translation. For each fold in each dataset, translation decreased the number of tokens in the corpus, but it increased the vocabulary sizes drastically, in both the cased and uncased versions. Both these differences are expected: the agglutinating nature of Turkish means that many multi-word expressions in English naturally translate into individual words. For instance, the four-word English expression ``when in your apartment'' can be translated to the single word ``evinizdeyken''.
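The contrast in \Tabref{tab:dataset_stats} can be reproduced with a simple whitespace-based count. The sketch below is a minimal illustration (the exact tokenization used for the table is an assumption here, and \texttt{str.lower()} applies English rather than Turkish casing rules):

```python
def corpus_stats(sentences):
    """Token count and cased/uncased vocabulary sizes for a list of sentences."""
    tokens = [tok for s in sentences for tok in s.split()]
    return {
        "token_count": len(tokens),
        "vocab_cased": len(set(tokens)),
        # Note: str.lower() uses English casing rules, not Turkish (I -> i).
        "vocab_uncased": len({t.lower() for t in tokens}),
    }

# Agglutination shrinks the token count but grows the vocabulary.
en = ["when in your apartment you watch TV",
      "when in your apartment you sleep"]
tr = ["evinizdeyken TV izlersiniz",
      "evinizdeyken uyursunuz"]
print(corpus_stats(en))
print(corpus_stats(tr))
```

On this toy pair of corpora, the Turkish side has far fewer tokens per sentence while single words like ``evinizdeyken'' inflate the vocabulary, mirroring the table.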
\begin{table*}[!ht]
\centering
\begin{tabular}{llcc}
& & \textbf{English} & \textbf{Turkish} \\ \hline
\multicolumn{1}{|l|}{} & \multicolumn{1}{p{2.5cm}|}{\textbf{Premise}} & \multicolumn{1}{p{5cm}|}{Three men are sitting near an orange building with blue trim.} & \multicolumn{1}{p{5cm}|}{\"{U}\c{c} adam mavi s\"{u}slemeli turuncu bir binan{\i}n yan{\i}nda oturuyor.} \\ \cline{2-4}
\multicolumn{1}{|l|}{} & \multicolumn{1}{p{2.5cm}|}{ \textbf{Entailment}} & \multicolumn{1}{p{5cm}|}{ Three males are seated near an orange building with blue trim.} & \multicolumn{1}{p{5cm}|}{ \"{U}\c{c} erkek mavi s\"{u}sl\"{u} turuncu bir binan{\i}n yak{\i}n{\i}nda oturuyor.} \\ \cline{2-4}
\multicolumn{1}{|l|}{} & \multicolumn{1}{p{2.5cm}|}{ \textbf{Contradiction}} & \multicolumn{1}{p{5cm}|}{ Three women are standing near a yellow building with red trim.} & \multicolumn{1}{p{5cm}|}{\"{U}\c{c} kad{\i}n k{\i}rm{\i}z{\i} s\"{u}slemeli sar{\i} bir binan{\i}n yan{\i}nda duruyor.} \\ \cline{2-4}
\multicolumn{1}{|l|}{\multirow{-4}{*}{\textbf{SNLI}}} & \multicolumn{1}{p{2.5cm}|}{ \textbf{Neutral}} & \multicolumn{1}{p{5cm}|}{Three males are seated near an orange house with blue trim and a blue roof.} & \multicolumn{1}{p{5cm}|}{\"{U}\c{c} erkek mavi s\"{u}sl\"{u} ve mavi \c{c}at{\i}l{\i} turuncu bir evin yak{\i}n{\i}nda oturuyor.} \\ \hline
\multicolumn{1}{|l|}{} & \multicolumn{1}{{p{2.5cm}|}}{\textbf{Premise}} & \multicolumn{1}{p{5cm}|}{ All rooms have color TV, alarm clock/radio, en-suite bathrooms, real hangers, and shower massage.} & \multicolumn{1}{p{5cm}|}{T\"{u}m odalarda renkli TV, \c{c}alar saat/radyo, en-suite banyo, ger\c{c}ek ask{\i}lar ve du\c{s} masaj{\i} vard{\i}r.} \\ \cline{2-4}
\multicolumn{1}{|l|}{} & \multicolumn{1}{p{2.5cm}|}{\textbf{Entailment}} & \multicolumn{1}{p{5cm}|}{ All rooms also contain a ceiling fan and outlets for electronics.} & \multicolumn{1}{p{5cm}|}{ T\"{u}m odalarda ayr{\i}ca tavan vantilat\"{o}r\"{u} ve elektronik prizler bulunmaktad{\i}r.} \\ \cline{2-4}
\multicolumn{1}{|l|}{} & \multicolumn{1}{p{2.5cm}|}{ \textbf{Contradiction}} & \multicolumn{1}{p{5cm}|}{ You will not find a TV or alarm clock in any of the rooms.} & \multicolumn{1}{p{5cm}|}{ Odalar{\i}n hi\c{c}birinde TV veya \c{c}alar saat bulunmamaktad{\i}r.} \\ \cline{2-4}
\multicolumn{1}{|l|}{\multirow{-8}{*}{\textbf{MultiNLI}}} & \multicolumn{1}{p{2.5cm}|}{ \textbf{Neutral}} & \multicolumn{1}{p{5cm}|}{Color TVs, alarms, and hangers can be found in all rooms.} & \multicolumn{1}{p{5cm}|}{ T\"{u}m odalarda renkli TV'ler, alarmlar ve ask{\i}lar bulunur.} \\ \hline
\end{tabular}%
\caption{Sample translations from SNLI and MultiNLI into NLI-TR. Each premise is associated with a hypothesis from each of the three NLI categories.}
\label{tab:sample_translations_table}
\end{table*}
\Tabref{tab:dataset_stats} also reflects the complexity difference between SNLI and MultiNLI that we noted in \secref{sec:en-datasets}. Though SNLI contains more sentence pairs than MultiNLI, it has fewer tokens and a smaller vocabulary.
\subsection{Translation Quality Assurance}
Two major risks arise when using MT systems to translate NLI datasets. First, the translation quality might simply be poor. Second, even if the individual sentences are translated correctly, the nature of the mapping from the source to the target language might affect the semantic relations between sentences. For example, English has the words ``boy'' and ``girl'' to refer to male and female children, and both those words can be translated to a gender-neutral Turkish word ``\c{c}ocuk''. Now, consider a premise sentence ``A boy is running'' and its contradiction pair ``A girl is running''. Both sentences can be translated fluently into the same Turkish sentence, ``\c{C}ocuk ko\c{s}uyor'', which changes the semantic relation from contradiction to entailment.
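One simple automated screen for the second risk (our illustration here, not a procedure used in the validation study) is to flag pairs whose premise and hypothesis collapse to the identical target-language sentence even though the original label is not entailment:

```python
def flag_collapsed_pairs(pairs):
    """Flag translated pairs whose non-entailment relation may have been lost
    because premise and hypothesis collapse to the identical target sentence."""
    return [(p, label) for p, h, label in pairs
            if label != "entailment" and p.strip() == h.strip()]

pairs = [
    # "A boy is running" / "A girl is running" both translate to the same sentence.
    ("Çocuk koşuyor", "Çocuk koşuyor", "contradiction"),
    ("Üç adam oturuyor", "Üç erkek oturuyor", "entailment"),
]
print(flag_collapsed_pairs(pairs))
```

Such a filter only catches exact collapses; subtler relation shifts still require bilingual expert judgement, which is why we rely on the human evaluation described next.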
Thus, to determine the viability of NLI-TR\ as a tool for NLI research, we must assess both translation quality and the consistency of the NLI labels. To do this, we assembled a team of ten Turkish--English bilingual speakers who were familiar with the NLI task and were either MSc.\ candidates or graduates in a relevant field.
For our expert evaluation, we focused initially on SNLI-TR. We grouped the translations into example sets of 4 sentences as in \tabref{tab:sample_translations_table}, where the first sentence (premise) is semantically related to the rest (hypotheses). We distributed the sets to the experts so that each set was examined by 5 randomly chosen experts and each expert co-examined approximately the same number of sets with each other expert. Each expert evaluated the translation by (i) grading the translation quality between 1 and 5 (inclusive; 5 the best) and (ii) checking if the semantic relation was altered by the translation. In total, 250 example sets (1000 translated sentences) were examined.
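One way to realize such a balanced design (a sketch; the exact assignment algorithm is not specified above) is to enumerate the $\binom{10}{5}=252$ five-expert subsets and assign one subset per example set in round-robin order; by symmetry, every expert and every expert pair is then used a near-equal number of times:

```python
from itertools import combinations
from collections import Counter

def assign_sets(n_sets=250, n_experts=10, per_set=5):
    """Assign each example set to `per_set` experts so that individual
    workloads and pairwise co-examination counts stay nearly balanced."""
    subsets = list(combinations(range(n_experts), per_set))  # C(10,5) = 252
    return [subsets[i % len(subsets)] for i in range(n_sets)]

assignment = assign_sets()
load = Counter(e for subset in assignment for e in subset)
pair_load = Counter(p for subset in assignment for p in combinations(subset, 2))
print("sets per expert:", min(load.values()), "-", max(load.values()))
print("sets per expert pair:", min(pair_load.values()), "-", max(pair_load.values()))
```

With 250 of the 252 subsets used once each, per-expert loads and pairwise co-examination counts differ by at most two across the team.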
\Tabref{tab:dataset_quality_stats} reports the average quality grade and percentage of preserved semantic relations for each SNLI dataset split. These results are extremely reassuring: average translation quality is near 5 (ceiling) for all the splits, and label consistency is similarly very high. Our comparable effort for MultiNLI-TR is ongoing, but so far the results look equally strong for those translations.
\begin{table*}[tp]
\centering
\setlength{\tabcolsep}{12pt}
\begin{tabular}{l c c}
\toprule
\textbf{Split} & \textbf{Translation Quality} & \textbf{Label Consistency} \\ \midrule
\textbf{Train} & 4.56 (0.78) & 89.40\% \\
\textbf{Dev} & 4.46 (0.90) & 90.00\% \\
\textbf{Test} & 4.45 (0.86) & 81.90\% \\
\textbf{All} & 4.46 (0.88) & 85.95\% \\
\bottomrule
\end{tabular}
\caption{Translation quality and label consistency of the translations in SNLI-TR based on expert judgements. For the quality ratings (1--5), we report mean and standard deviation (in parentheses). For label consistency, we report the percentage of SNLI-TR labels judged consistent with the original label.}
\label{tab:dataset_quality_stats}
\end{table*}
To assess the reliability of the translation quality scores, we calculated the Intra-Class Correlation (ICC; \citealt{mcgraw1996forming}). ICC is frequently adopted in medical studies to assess ordinal annotations provided by randomly chosen experts drawn from a team. Its assumptions align well with our evaluation scheme. We obtained an ICC of 0.9083, which suggests excellent agreement \citep{cicchetti1994guidelines,hallgren2012computing}.
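For reference, the one-way random-effects coefficient of \citet{mcgraw1996forming} (their ICC(1), appropriate when each target is rated by a different random subset of judges) can be computed as follows. This is a minimal sketch assuming the ratings are arranged as one row per example set:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC (ICC(1) in McGraw & Wong's notation).
    `ratings`: list of rows, one row per rated target, k ratings per row."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-target and within-target mean squares.
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect agreement across five judges gives ICC = 1.
print(icc_oneway([[5] * 5, [4] * 5, [3] * 5]))
```

Values near 1 indicate that almost all rating variance is between example sets rather than between judges, which is the regime our 0.9083 falls in.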
We also computed Krippendorff's alpha \citep{krippendorff1970estimating}, which is an inter-annotator agreement metric used more commonly in NLP. This metric is suitable for both nominal and ordinal annotations involving multiple annotators. We calculated the intercoder reliability of the ordinally-scaled translation quality scores as 0.61, and our label consistency annotations yielded a score of 0.75. These values suggest less overall agreement than our ICC values do, but they are still acceptable, and ICC is arguably the more appropriate metric for our study. Krippendorff's alpha is generally used for large, diverse annotation teams, and its penalties for disagreements are known to be harsh.
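For the nominal case (our label-consistency annotations), Krippendorff's alpha can be computed from a coincidence matrix. The sketch below is a minimal implementation for nominal data (it assumes at least two categories occur, and units with fewer than two ratings are skipped):

```python
from collections import Counter, defaultdict

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.
    `units`: list of rating lists, one list per unit (lengths may differ)."""
    coincidence = defaultdict(float)
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue  # units with fewer than two ratings carry no information
        for i, a in enumerate(ratings):
            for j, b in enumerate(ratings):
                if i != j:
                    coincidence[(a, b)] += 1.0 / (m - 1)
    totals = Counter()
    for (a, _), v in coincidence.items():
        totals[a] += v
    n = sum(totals.values())
    observed = sum(v for (a, b), v in coincidence.items() if a != b)
    expected = sum(totals[a] * totals[b] for a in totals for b in totals if a != b)
    return 1.0 - (n - 1) * observed / expected

# Perfect agreement on three units yields alpha = 1.
print(krippendorff_alpha_nominal([["c", "c"], ["e", "e"], ["n", "n"]]))
```

Systematic disagreement drives alpha toward $-1$, while chance-level labeling gives values near 0, which is why scores of 0.61 and 0.75 still indicate substantial agreement.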
Overall, then, it seems that we can trust our estimates of translation quality and label consistency, both of which are very high, thereby justifying further research using NLI-TR.
\section{Experiments}\label{sec:experiments}
To the best of our knowledge, NLI-TR\ is the first large NLI dataset in Turkish. Here, we report on case studies analyzing the effect of pretraining and of morphological parsing on Turkish NLI systems. We offer these initial experiments largely to show that NLI-TR\ is a valuable resource.
\subsection{Case Study I: Comparing BERT models on Turkish NLI Datasets}\label{sec:case-study-i}
The arrival of pretrained model-sharing hubs (e.g., Tensorflow Hub,\footnote{\url{https://github.com/tensorflow/hub}} PyTorch Hub,\footnote{\url{https://pytorch.org/hub}} and Hugging Face Hub\footnote{\url{https://huggingface.co./models}}) has democratized access to Transformer-based models \citep{Vaswani-et-al-2017}, which are mostly in English. Combined with the abundance of labeled English datasets for fine-tuning, this has increased the performance gap between English and resource-constrained languages.
Here, we use NLI-TR\ to analyze the effects of pretraining Transformer-based models. We compare three BERT models trained on different corpora and fine-tune them using NLI-TR. The results help quantify the importance of having high-quality, language-specific resources.
\subsubsection{Experimental Settings}
We compared BERT-English (BERT-EN), BERT-Multilingual, and BERTurk \citep{berturk}. BERT-EN is the original BERT-base model released by \citet{devlin2018bert}, which used an English-only corpus for training. BERT-Multilingual was released by the BERT team as well, and was trained on a corpus containing texts from 104 languages, including Turkish. \citeauthor{berturk}'s BERTurk also uses the same model architecture and is trained on a Turkish corpus ($\approx$30GB). We assess cased and uncased versions of each model.
We fine-tuned each model on train folds of NLI-TR\ separately and fixed the maximum sequence length to 128 for all experiments. Similarly, we used a common learning rate of $2\times 10^{-5}$ and batch size of 8 with no gradient accumulation. We fine-tuned each model for 3 epochs using HuggingFace's Transformers Library \cite{Wolf2019HuggingFacesTS}. We evaluated the models on the test set of SNLI-TR and the \emph{matched} and \emph{mismatched} dev splits of MultiNLI-TR. \Tabref{tab:case_study_i_results_table} reports the accuracy of each model on the evaluation sets.
\begin{table*}[!ht]\centering
\setlength{\tabcolsep}{12pt}
\begin{tabular}{lllll}
\toprule
\textbf{} &\textbf{SNLI-TR} &\multicolumn{2}{c}{\textbf{MultiNLI-TR}} \\\cmidrule{2-4}
\textbf{Model Name} & \textbf{Test} & \textbf{Matched Dev} &\textbf{Mismatched Dev} \\\midrule
\textbf{BERT-EN (Cased)} &82.09\% &69.98\% &70.56\% \\
\textbf{BERT-EN (Uncased)} &70.53\% &60.13\% &61.61\% \\ \midrule
\textbf{BERT-Multilingual (Cased)} &85.12\% &75.97\% &76.34\% \\
\textbf{BERT-Multilingual (Uncased)} &72.25\% &61.78\% &63.79\% \\ \midrule
\textbf{BERTurk (Cased)} &\textbf{87.28\%} &\textbf{79.58\%} &\textbf{80.87\%} \\
\textbf{BERTurk (Uncased)} &80.57\% &70.89\% &72.59\% \\
\bottomrule
\end{tabular}
\caption{Accuracy results for the publicly available BERT models on NLI-TR. The cased BERTurk model performed the best in all three evaluations, highlighting the value of language-specific resources for NLI.}
\label{tab:case_study_i_results_table}
\end{table*}
\subsubsection{Results}
\Tabref{tab:case_study_i_results_table} demonstrates that NLI-TR\ can be used to train very high quality Turkish NLI models. We observe that every model performed better on the test fold of SNLI-TR than the dev folds of MultiNLI-TR, which is an expected outcome given the greater complexity of MultiNLI compared to SNLI. The translation effort seems to have preserved this fundamental difference between the two datasets.
In addition, BERTurk (Cased), which was trained on a Turkish corpus, achieved the highest accuracy, and BERT-Multilingual (Cased), which utilized a smaller Turkish corpus, was ranked the second, consistently on every evaluation fold. The ranking emphasizes the importance of having a Turkish corpus for pretraining. Finally, every cased model outperformed its uncased counterpart, suggesting that casing provides valuable information to models for NLI in general.
\subsection{Case Study II: Comparing Morphological Parsers on Turkish NLI Datasets}\label{sec:parsers}
Turkish is an agglutinating language in which suffixes are commonly cascaded to create new words. This makes morphological parsing crucial for many applications. In this case study, we evaluate the effect of such morphological analysis by comparing three models with different morphological approaches on NLI-TR. We pretrain a BERT model from scratch with each approach and fine-tune it on NLI-TR, following the setup of \secref{sec:case-study-i}. This leads to the striking result that morphology adds information where data is sparse, but its importance shrinks as the dataset grows larger.
\subsubsection{Experimental Settings}
\paragraph{Morphological Parsers}
We used Zemberek \citep{Zemberek} and Turkish Morphology \citep{ozturel2019syntactically} as parsers and compared them with an approach that does not do morphological parsing.
Zemberek is a mainstream Turkish NLP library used in research \citep{Buyuk2020context,Kuyumcu2019,Ozer2018diacritic,can2017unsupervised, dehkharghani2016sentiturknet,Gulcehre2015using} and applications such as iOS 12.2 and Open Office. It has 67,755 entries in its lexicon and uses a rule-based parser.
Turkish Morphology is an OpenFST-based \cite{OpenFst} morphological parser that was recently released by Google Research. Its lexicon contains 47,202 entries.
Out of the box, Zemberek can parse 23K tokens per minute, whereas Turkish Morphology can process only 53. We sped up Turkish Morphology to parse 11 times more tokens per minute by implementing a memoization (dynamic programming; \citealp{bellman1953introduction}) wrapper that increased the cache hit ratio to 89.9\%. Zemberek already uses this technique.
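The wrapper exploits the fact that word forms repeat heavily in a corpus, so each distinct form needs to be parsed only once. A minimal sketch of the idea, with a hypothetical stand-in for the expensive parser call:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def parse(token):
    """Stand-in for an expensive morphological-parser call (hypothetical);
    a real parser would return the token's full morphological analysis."""
    return {"surface": token, "stem": token}

tokens = "bir elma bir armut bir elma".split()
analyses = [parse(t) for t in tokens]

info = parse.cache_info()  # lru_cache tracks hits and misses for us
hit_ratio = info.hits / (info.hits + info.misses)
print(f"unique parser calls: {info.misses}, cache hit ratio: {hit_ratio:.0%}")
```

Here six lookups trigger only three real parses; at the 89.9\% hit ratio we measured on the pretraining corpus, roughly nine in ten parser invocations are avoided.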
\paragraph{Pretraining}
\begin{table*}[tp]
\centering
\setlength{\tabcolsep}{12pt}
\begin{tabular}{lllll}\toprule
\textbf{} &\textbf{SNLI-TR} &\multicolumn{2}{c}{\textbf{MultiNLI-TR}} \\\cmidrule{2-4}
& \textbf{Test} & \textbf{\begin{tabular}[l]{@{}l@{}}Matched\\ Dev\end{tabular}} &\textbf{\begin{tabular}[l]{@{}l@{}}Mismatched\\ Dev\end{tabular}} \\
\midrule
\textbf{No Parser} &76.59\% &58.24\% &60.01\% \\
\textbf{Zemberek} &76.71\% &59.01\% &60.44\% \\
\textbf{Turkish Morphology} &76.36\% &60.13\% &62.00\% \\
\bottomrule
\end{tabular}
\centering
\caption{Accuracy results for different morphology approaches on NLI-TR. To facilitate running many experiments, these results are for pretraining on just one-tenth of the Turkish corpus used by BERTurk and fine-tuning on NLI-TR\ for just one epoch.}
\label{tab:case_study_ii_results}
\end{table*}
We wanted to conduct a wide range of experiments on a limited budget. Thus, we opted to use one-tenth ($\approx$3GB) of the Turkish corpus used by BERTurk \citep{berturk} to pretrain all the BERT models. We split the dataset into 30 equal shards for parallel processing, where each shard comprises 1M sentences. We analyzed each token morphologically using Zemberek and Turkish Morphology and trained a BERT model using the stems of the tokens only. For the model that does not utilize morphological information, we used tokens as they are. We used the \texttt{BertWordPieceTokenizer} class of HuggingFace Tokenizers\footnote{\url{https://github.com/huggingface/tokenizers}} with the same set of parameters for each model.
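The stem-only preprocessing can be sketched as follows. This is a minimal illustration: `stem_of` and the mini-lexicon are hypothetical stand-ins for the parsers, which return full analyses from which the stem is read off:

```python
def stem_corpus(sentences, stem_of):
    """Replace every token by its stem before tokenizer training/pretraining."""
    return [" ".join(stem_of(tok) for tok in s.split()) for s in sentences]

# Hypothetical mini-lexicon standing in for a real morphological parser;
# out-of-lexicon tokens are passed through unchanged.
lexicon = {"evinizdeyken": "ev", "odalarda": "oda"}
stem = lambda tok: lexicon.get(tok, tok)

print(stem_corpus(["evinizdeyken odalarda TV izlersiniz"], stem))
```

The no-parser baseline corresponds to passing the identity function as `stem_of`, so all three models see corpora of the same shape and differ only in how word forms are normalized.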
We trained each model on a single Tesla V100 GPU of NVIDIA DGX-1 system, allocating 128GB memory for 1 day. We used an effective batch size of 128 with gradient accumulation to address memory limitations. We shuffled all shards prior to training to reduce the adverse effects of variance across the sentence styles in the different shards \cite{Goodfellow-et-al-2016}.
\paragraph{Finetuning} We fine-tuned each model on NLI-TR\ with the same setting as in \secref{sec:case-study-i}, with the exception that we trained for only 1 epoch. We measured the accuracy on the evaluation sets with an interval of 1000 training steps to observe the effect of morphological parsing as the dataset grew. \Figref{fig:test_acc} reports the accuracy of all models with respect to fine-tuning steps on NLI-TR\ evaluation sets, and \tabref{tab:case_study_ii_results} shows final accuracy numbers.
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth]{test_accuracies.png}
\vspace{8pt}
\caption{Dev-set accuracy for the two morphological parsers and a model without morphological parsing. The x-axis tracks the size of the training set. We find that morphological parsing is generally helpful in early rounds, when the training set is very small, but that its importance diminishes as the training set increases. These effects are especially clear for the two MultiNLI-TR dev sets.}
\label{fig:test_acc}
\end{figure*}
\subsubsection{Results}
\Figref{fig:test_acc} suggests that morphological parsing is beneficial where the training set is small, but that its importance largely disappears for large training sets. This is reflected also in the final results in \tabref{tab:case_study_ii_results}. We conjecture that this unexpected result traces to the fact that the models under consideration create contextual embeddings of both word and subword tokens \cite{kudo-2018-subword, kudo2018sentencepiece, sennrich-etal-2016-neural}. Given a sufficiently large dataset, it might be that this can approximate the effects of morphological parsing.
The trends are not uniform for SNLI-TR and MultiNLI-TR. For SNLI-TR, all three models display a similar learning curve, with only a slight edge for Zemberek early on. For MultiNLI-TR, models with morphological parsers are more differentiated. However, all three models converge to similar performance at the end of training on both datasets (\Tabref{tab:case_study_ii_results}).
In light of these findings, we suggest avoiding the use of morphological parsers for Turkish NLI where the training set is large, since the benefits of such parsers are generally not enough to offset the cost of running them.
\section{Conclusion}
In this study, we propose a cost- and time-efficient approach to obtaining labeled datasets in resource-constrained languages: machine translation. We machine-translated SNLI and MultiNLI to create the first full NLI dataset of Turkish, NLI-TR. Though English to Turkish translation is a stress-test for MT systems due to the different linguistic structures of the two languages, a team of experts validated the quality and consistency of the translations, suggesting that NLI-TR\ can in fact help address the paucity of datasets for Turkish NLI.
As two illustrative case studies, we used NLI-TR\ to analyze the effects of in-language pretraining and morphological analysis on NLI performance. We observed that a Turkish-only pretraining regime can enhance Turkish models significantly, and that morphology is arguably worthwhile only when the training dataset is small. We propose that MT be more widely adopted for advancing NLP studies on resource-constrained languages. MT can efficiently transfer large and expensive-to-create labeled datasets from English to other languages in many NLP tasks. And, last but not least, MT will only get cheaper, faster, and better over time, thereby further strengthening our core claims.
\section{Acknowledgments}
This research was supported by the AWS Cloud Credits for Research Program (formerly AWS Research Grants). We thank Alara Dirik, Almira Ba\u{g}lar, Berfu B\"{u}y\"{u}k\"{o}z, Berna Erden, Fatih Mehmet G\"{u}ler, G\"{o}k\c{c}e Uludo\u{g}an, G\"{o}zde Aslanta\c{s}, Havva Y\"{u}ksel, Melih Barsbey, Melike Esma \.{I}lter, Murat Karademir, Selen Parlar, and Utku Yavuz for their annotation support and vital contributions. We are grateful also to Stefan Schweter and Kemal Oflazer for sharing the dataset that BERTurk was trained on, and Omar Khattab for valuable advice and discussion.
\section{Introduction}
\label{sec:intro}
Starting with the seminal work of \cite{Zamolodchikov:1977nu} and \cite{Zamolodchikov:1978xm}, it has been known that there exist integrable quantum field theories in two dimensions whose S-matrices factorize.
This property is tied to the existence of higher-spin conserved currents \cite{Shankar:1977cm}.
More precisely, it was shown in \cite{Parke:1980ki} that the existence of \emph{one} local higher-spin current is sufficient for factorization of the S-matrix in parity symmetric theories, while one needs two currents in theories without parity.
However, even for sigma models with a coset target space, a complete understanding and classification of quantum conserved currents is still lacking.
See, for example, \cite{Fendley:2000qq,Zarembo:2017muf} for reviews on integrability in two-dimensional sigma models.
At the classical level, it is known that sigma models whose target space is a symmetric coset admit a so-called Lax operator formalism, which allows one to systematically construct classically-conserved local higher-spin currents \cite{babelon2003introduction}.
However, the coset being symmetric is neither sufficient nor necessary for classical integrability to persist at the quantum level.
It is insufficient because in some symmetric coset sigma models, higher-spin currents fail to be conserved at the quantum level. A famous example is the $\mathbb{C}\mathbb{P}^{N-1}$ sigma model \cite{Goldschmidt:1980wq}.
It is not necessary either, since, even if the coset is not symmetric, one can sometimes construct a Lax operator.
Interesting examples are sigma models on the Schr\"{o}dinger spacetime \cite{SchaferNameki:2009xr,Orlando:2010yh,Kawaguchi:2011wt}.
An approach to directly address quantum integrability was presented by Goldschmidt and Witten \cite{Goldschmidt:1980wq}, where they provided a sufficient condition for the existence of quantum conserved currents in two-dimensional sigma models.\footnote{In this paper we will only consider quantum charges built from local conserved currents. For non-local quantum charges, see, for example, \cite{Luscher:1977rq,Luscher:1977uq,Abdalla:1982yd}. In particular, it was shown that a sufficient (but not necessary) condition for the conservation of the non-local charges in $G/H$ coset sigma models is that $H$ is simple \cite{Abdalla:1982yd}. Integrable examples with $H$ not simple include $O(2N)/O(N)\times O(N)$ \cite{Fendley:2000bw,Fendley:2001hc} and $Sp(2N)/Sp(N)\times Sp(N)$ \cite{Babichenko:2002gz}, where the quantum conservation of the non-local charges is secured by a $\bZ_2$ discrete symmetry \cite{Evans:2004ur}. A similar analysis was performed for the superstring sigma model on $AdS_5\times S^{5}$ in \cite{Puletti:2008ym}. The relation between the local and non-local charges was discussed in \cite{Evans:2004mu}. See also \cite{Loebbert:2018lxk} for discussions on the relation between the non-local charges and the factorization of the S-matrices.}
Their analysis, which we review below, is based on the fact that any sigma model, be it a symmetric coset or not, is conformal at the classical level and has a current for every even integer spin $2n$ built from the stress tensor:
\ie
({\cal J}^{\rm cl}_+, {\cal J}^{\rm cl}_-) := ((T_{++})^n , 0 ) \,.
\fe
Owing to the fact that $\partial_- T_{++} = 0$,
this current is conserved classically
\ie\label{classicalconserve}
\partial_- (T_{++})^n=0\,~~~(\text{classical})\, .
\fe
At the quantum level, the conservation law of this higher-spin current is generally broken and the classical equation \eqref{classicalconserve} is modified to
\ie\label{quantumconserve}
\partial_- (T_{++})^n = A\,~~~(\text{quantum})\, ,
\fe
where $A$ is some local operator with classical dimension $2n+1$ and spin $2n-1$.
This is the standard way in which classical integrability fails to generalize to quantum integrability.
However, if $A$ can be written as a total derivative, i.e. if there exist operators $B_+$ and $B_-$ so that $A = \partial_+ B_- + \partial_- B_+$, one can redefine the current such that it is still conserved.
Explicitly,
\ie
&({\cal J}_+ ^{\rm qu}, {\cal J}_- ^{\rm qu}) := ((T_{++})^n - B_+\,,\, - B_-) \,,\\
&\partial_- {\cal J}^{\rm qu}_+ + \partial_+{\cal J}^{\rm qu}_- = 0\,.
\fe
Thus, there is still a conserved current $( {\cal J}^{\rm qu}_+ ,{\cal J}^{\rm qu}_-)$ at the quantum level.
In a given theory, one can in principle identify all possible $A$ terms, and total derivative terms $B$, that have the right quantum numbers to appear on the right hand side of (\ref{quantumconserve}).
The message of \cite{Goldschmidt:1980wq} is that if the number of $A$ terms is less than or equal to the number of $B$ terms, it is guaranteed that there are conserved higher-spin currents at the quantum level, because any correction on the RHS of \eqref{quantumconserve} can be written as a total derivative, and thus absorbed into a redefinition of the current.
More generally, there might be more than one classically-conserved current of a given spin, rather than just $(T_{++})^n$ \cite{Eichenherr:1981sk,Evans:2000qx}.
This motivates us to consider the combination
\ie\label{eq:IndexClassical}
&\mathcal{I}(J):=\\
&(\# \text{ of classically-conserved spin-$J$ non-derivative currents}) -\left [\, (\#\text{ of $A$'s}) - (\#\text{ of $B$'s})\,\right]\,.
\fe
From the first term on the right hand side, we have to omit currents that are derivatives
(e.g. $\partial_+T_{++}$) because they do not give rise to a charge when integrated on a spatial slice.
If ${\cal I}(J)>0$, it is guaranteed that there are ${\cal I}(J)$ quantum conserved currents by the argument of \cite{Goldschmidt:1980wq} that we reviewed above.
Hence a positive ${\cal I}(J)$ for some $J>2$ provides a {\it sufficient} condition for quantum integrability.
It is not a necessary condition because it is possible that even though ${\cal I}(J)\le 0$ in some case, the model might be fine-tuned such that quantum conserved currents still exist.
In \cite{Goldschmidt:1980wq}, Goldschmidt and Witten considered the classically-conserved current $(T_{++})^2$, and listed the possible $A$ and $B$ operators by brute force
in some specific examples.
However, the complexity of this brute-force method quickly becomes unmanageable for larger spin, as well as for more general target spaces.
In this paper we systematize the computation of \cite{Goldschmidt:1980wq} to general coset models $G/H$.
Our analysis is based on a simple observation: The difference $\mathcal{I}(J)$ is invariant
under conformal perturbation theory around the UV fixed point, while individual numbers in \eqref{eq:IndexClassical} are not.\footnote{See however the discussion at the end of subsection \ref{subsec:RGflow} for potential subtleties due to nonperturbative corrections.}
This allows us to compute $\mathcal{I}(J)$ at the UV fixed point where the equation of motion simplifies and the theory enjoys conformal symmetry.
Because of this property, we call $\mathcal{I}(J)$ the {\it integrability index}.
We derive a compact expression for a generating function that allows us to compute the indices ${\cal I}(J)$ for all spins.
Our technique is largely inspired by \cite{Henning:2017fpj} which classified higher-dimension operators in effective field theory, and also by the computation of supersymmetric indices \cite{Kinney:2005ej}.
As one important application, we compute the indices for the $O(N)$ model and establish the existence of a spin-6 quantum conserved current in addition to a spin-4 current predicted in \cite{Goldschmidt:1980wq}.
The coset model $G/H$ typically has some tunable parameters.
Over certain loci in this parameter space, there could be extra discrete global symmetries, and these can affect quantum integrability.
The importance of the discrete symmetries for quantum conservation of non-local and local charges was emphasized in \cite{Evans:2004ur}.
The classic example is the $\mathbb{C}\mathbb{P}^1$ sigma model, which has a $2\pi$-periodic $\theta$ angle.
The model is integrable at $\theta=0$ \cite{Zamolodchikov:1977nu} and at $\theta=\pi$ \cite{Zamolodchikov:1992zr} where there is an extra $\bZ_2$ charge conjugation symmetry.
At other values of $\theta$, there is no extra symmetry and the model is not expected to be integrable.
Thus, we will also present a generating function and compute ${\cal I}(J)$ in the presence of discrete global symmetries.
For example in the $\mathbb{CP}^1$ model, we find that
${\cal I}(4) \leq 0$ and ${\cal I}(6) \leq 0$ without imposing the $\bZ_2$ charge conjugation symmetry, while $\mathcal{I}(4) = \mathcal{I}(6) = +1$
when the $\bZ_2$ symmetry is imposed.
This is consistent with quantum integrability at $\theta=0$ and $\theta=\pi$.
For the $\mathbb{C}\mathbb{P}^{N-1}$ models with $N>2$, the indices are all negative even after including the discrete symmetry, consistent with the standard lore that the $\mathbb{C}\mathbb{P}^{N-1}$ model for $N>2$ is not integrable.
Finally, we apply our formalism to the flag sigma models $\frac{{U}(N)}{{U}(1)^{N}}$,
which reduce to the $\mathbb{C}\mathbb{P}^1$ model when $N=2$.\footnote{The $\mathbb{C}\mathbb{P}^1$ model has another generalization ${SU}(N)/{SO}(N)$ with global symmetry $PSU(N)$.
This model has a $\bZ_2$-valued $\theta$-angle for $N>2$.
For both values of $\theta$, the model is integrable.
At $\theta=0$, the IR phase is gapped,
while at $\theta=\pi$ the IR is described by the ${SU}(N)_1$ WZW model \cite{Fendley:2000bw,Fendley:2001hc}.}
Aspects of flag sigma models, including their global symmetries, 't Hooft anomalies and phase diagrams have recently received some attention \cite{Bykov:2011ai,Bykov:2012am,Lajko:2017wif,Tanizaki:2018xto,Ohmori:2018qza,Hongo:2018rpy,Bykov:2019jbz}.
In particular, it was argued that over certain loci in parameter space with enhanced discrete global symmetry, the IR phase is gapless and is described by the ${SU}(N)_1$ WZW model \cite{Lajko:2017wif,Tanizaki:2018xto,Ohmori:2018qza}.
We compute the indices ${\cal I}(J)$ for these models, and we find that they are all negative. Thus, it is unlikely that these models are integrable.
The organization of the paper is as follows.
In Section \ref{sec:basics} we review the Lagrangian description of coset sigma models and present a complete set of ``letters'' for constructing local operators.
In Section \ref{sec:index} we construct the partition function using a plethystic exponential, define the integrability index and discuss the invariance of the index in conformal perturbation theory.
In Section \ref{sec:examples}, we work out the partition function and the index for
the $\mathbb{C}\mathbb{P}^{N-1}$, $O(N)$ and the flag sigma models.
We conclude in Section \ref{sec:conclusions} and discuss directions for future work.
\section{Coset sigma models}
\label{sec:basics}
\subsection{Lagrangian description of coset models}
\label{sec:setup}
Let us first review the basic properties of sigma models with a coset target space $G/H$.
We do {\it not} require the coset to be symmetric.
Let $\mathfrak{g}$ and $\mathfrak{h}$ be the Lie algebras of $G$ and $H$, respectively.
Using the quadratic form $\langle \, , \rangle$ on $\mathfrak{g}$, we can make an orthogonal decomposition of $\mathfrak{g}$ as
\begin{align}\label{eq:decomposition}
\mathfrak{g}=\mathfrak{h}\oplus \mathfrak{k}\, .
\end{align}
The elements in $\mathfrak{k}$ represent the physical degrees of freedom of the coset.
To keep a concrete example in mind, consider the coset
$\frac{{SU}(2)}{{U}(1)}$, which is nothing but the $O(3)$ or the $S^2$ or the $\mathbb{C}\mathbb{P}^1$ model.
In this case, the full Lie algebra $\mathfrak{g}=\mathfrak{su}(2)$ is spanned by the three Pauli matrices, $\mathfrak{h}$ is the span of the Pauli-Z matrix, and $\mathfrak{k}$ is the span of the Pauli-X and Pauli-Y matrices.
We will work on two-dimensional Minkowski space $\mathbb{R}^{1,1}$ throughout.
The target space of the sigma model is the space $G/H$ of all {\it left} cosets of $H$.
To write the action, we first consider all maps $g: \mathbb{R}^{1,1} \to G$
and then proceed to impose the following local symmetry:
\begin{align}
g(x) &\to g(x)\, h(x)^{-1} \, ,\quad h(x)\in H\,. \label{eq:localright}
\end{align}
In other words, we have to make the identification of maps
$g(x)\sim g(x)\, h(x)^{-1}$ for any $h: \mathbb{R}^{1,1} \to H$.
This restricts us to maps from the spacetime into the space $G/H$ of left-cosets of $H$.
This model admits a {\it global} $G$-symmetry\footnote{To be precise, the global symmetry may not be $G$, but a discrete quotient thereof. For example, the global symmetry of $\mathbb{C}\mathbb{P}^{N-1}={SU(N) \over {U}(N-1)}$ is $PSU(N)$ and not $SU(N)$. This does not affect our arguments in this section, but the role of discrete symmetry will become important later.}
which acts from the left (contrasted with the local $H$ symmetry which acts from the right):
\begin{align}
g(x) &\to g^{\prime}g(x)\, , \quad g^{\prime}\in G \,. \label{eq:globalleft}
\end{align}
To write down the action of the sigma model, we introduce the left-invariant one-form $j$
\begin{equation}
j_{\mu}(x):= g^{-1}(x)\partial_{\mu}g(x) \,.
\end{equation}
Since $j$ is valued in the Lie algebra, one can decompose it using \eqref{eq:decomposition} as
\begin{equation}
j_{\mu}(x) =a_{\mu}(x)+k_{\mu}(x)\,,\qquad
a_{\mu}(x) \in \mathfrak{h}\,,\quad
k_{\mu}(x) \in\mathfrak{k}\,. \label{eq:jsplitak}
\end{equation}
The currents $j_\mu(x)$, $a_\mu(x)$ and $k_\mu(x)$ are invariant under the global $G$ transformations (\ref{eq:globalleft}), while they transform under the local $H$ transformations (\ref{eq:localright}) as
\begin{align}
j &\to h \, j\, h^{-1} - dh\, h^{-1}\,, \\
a &\to h \, a\, h^{-1} - dh\, h^{-1}\,, \\
k &\to h \, k \, h^{-1} \,. \label{eq:trnfklocal}
\end{align}
In particular, $a_\mu(x)$ transforms as a gauge field under the local action of $H$ from the right.
The covariant derivative built out of $a_{\mu}$
\begin{align}
D_\mu := \partial_\mu + a_\mu \label{eq:covder}
\end{align}
transforms via conjugation under the $H$ gauge transformations, $D_{\mu} \bullet \to h\, (D_{\mu} \bullet)\, h^{-1}$.
The action for the sigma model with target space $G$ (without topological terms) is
$\int d^2x\, {\rm tr}\, j_\mu j^\mu$.
Now we have to gauge the $H$ symmetry.
For that purpose, we introduce a gauge field $A_\mu \in \mathfrak{h}$
and the covariant derivative acting on $g(x)$ as
$g^{-1} \mathbb{D}_\mu g = g^{-1} \partial_\mu g - A_\mu$.
Now we can manipulate the action
\begin{align*}
{\rm tr}\, (g^{-1} \mathbb{D}_\mu g)^2
&= {\rm tr}\, (g^{-1} \partial_\mu g)^2
+ {\rm tr}\, A_\mu^2 - 2 \, {\rm tr}\, A_\mu (g^{-1}\partial_\mu g)\\
&= {\rm tr}\, (k_\mu^2 + a_\mu^2)
+ {\rm tr}\, A_\mu^2 - 2 \, {\rm tr}\, A_\mu a_\mu\\
&= {\rm tr}\, k_\mu^2 + {\rm tr}\, (a_\mu - A_\mu)^2\, .
\end{align*}
In going to the second line, we split $g^{-1} \partial_\mu g = j_\mu = a_\mu + k_\mu$ and used the orthogonality of $\mathfrak{h}$ and $\mathfrak{k}$.
Integrating out $A$, we see that the action of the sigma model can be expressed as
\begin{equation}\label{eq:action}
S[g] = \frac{R^{2}}{2} \int d^{2}x \,\,{\rm tr}\left[k_{\mu}(x)\, k^{\mu}(x)\right]\,,
\end{equation}
where the positive real number $R$ characterizes the size of the coset.
As desired, the action is invariant under the local $H$ transformation of $k$ (\ref{eq:trnfklocal}).
We have used the notation $S[g]$ on the left hand side to emphasize the fact that we start with maps $g:\mathbb{R}^{1,1} \to G$ and then view $a_\mu(x)$ and $k_\mu(x)$ as being determined by $g(x)$.
Even though it might seem that there is no $a_\mu(x)$ dependence on the right hand side, this is not the case, as will become explicit in the equation of motion (\ref{eq:EOM}) below.
Let us now derive the equation of motion starting from the action \eqref{eq:action}.
We make the first order variation $g \to (1+\epsilon)g$,
and write the variation of the Lagrangian in (\ref{eq:action}) as
being proportional to ${\rm tr}\, k^\mu\, \delta k_\mu$.
Under $g \to (1 + \epsilon)g$, the
variation of the current $j_\mu$ is
$\delta j_\mu = g^{-1} (\partial_\mu \epsilon) g$.
Now we use
$\delta k_\mu = \delta j_\mu - \delta a_\mu = g^{-1} (\partial_\mu \epsilon) g - \delta a_\mu$, and the orthogonality of $\mathfrak{h}$ and $\mathfrak{k}$ to get
\begin{equation}
\delta S = R^2 \int d^2x\, {\rm tr}\left[\partial_\mu\epsilon \, (g\, k^\mu \, g^{-1}) \right]\,. \label{eq:deltaSepsilon}
\end{equation}
Therefore, the equation of motion reads
\begin{equation}\label{eq:eom}
\partial_\mu \left(g\, k^\mu \, g^{-1}\right)=0 \, .
\end{equation}
This is equivalent to $\partial_\mu k^\mu + [j_\mu , k^\mu] = 0$.
We now make the decomposition $j_\mu = a_\mu + k_\mu$ as in (\ref{eq:jsplitak}) and since $[k_\mu, k^\mu]=0$, we get an equivalent form of the equation of motion
\begin{equation}\label{eq:EOM}
D_{\mu}\, k^{\mu}(x)=
\partial_{\mu}\, k^{\mu}(x)+ [a_\mu(x), k^\mu(x)] =
0\,,
\end{equation}
where the covariant derivative $D_{\mu}$ acting on adjoint fields was defined in \eqref{eq:covder}.
The equation of motion \eqref{eq:eom} also shows that the current
\begin{equation}\label{eq:conserv}
J^\mu(x) := g(x) \, k^\mu(x)\, g^{-1}(x)
\end{equation}
is conserved, $\partial_{\mu}J^{\mu}(x)=0$.
From the variation of the action (\ref{eq:deltaSepsilon}), we see that $J^\mu(x)$ is nothing but the Noether current of the global $G$ symmetry.
Let us also comment that $J_\mu(x)$ is invariant under the local $H$ transformations (\ref{eq:localright}) and (\ref{eq:trnfklocal}).
We end this section with a result that we shall use later.
The identity $d(g^{-1}dg) + g^{-1} dg \wedge g^{-1} dg = 0$
implies that $\partial_\mu j_\nu - \partial_\nu j_\mu + [j_\mu, j_\nu] = 0$.
Writing $j = a+k$ and decomposing this identity into $\mathfrak{h}$ and $\mathfrak{k}$ sectors, we get
\begin{align}
[D_\mu, D_\nu] &= - [k_\mu, k_\nu]\, \Big\vert_\mathfrak{h} \,, \label{eq:simplify1}\\
D_\mu k_\nu - D_\nu k_\mu &= - [k_\mu, k_\nu] \, \Big\vert_\mathfrak{k}\,, \label{eq:simplify2}
\end{align}
where the right-hand sides denote the restriction of $[k_\mu, k_\nu]$ to $\mathfrak{h}$ or $\mathfrak{k}$, respectively.
For a symmetric coset, $[k_\mu, k_\nu]\vert_\mathfrak{k} = 0$.
\subsection{Description of local operators}
\label{sec:localopdesc}
We are interested in classifying the possible $A$ terms that appear in the conservation law of a conserved current of spin $J$.
For concreteness, we only consider the case when the current is a singlet under the global $G$ symmetry, as this is relevant for operators like $(T_{++})^n$.
Our methods can be adapted to the case when the current transforms nontrivially under the global $G$ symmetry.
We need a way to count local operators that are invariant both under the global $G$ symmetry and the $H$ gauge transformations. Such analysis was performed in \cite{Henning:2017fpj} for effective field theories in higher dimensions, and we apply their techniques to coset models in two dimensions.
Local operators can be built from $g(x)$, $k_\mu(x)$, and their covariant derivatives.
Let us recall the transformation properties of these fields
\begin{align}
g(x) &\to g^{\prime} \, g(x) \, h^{-1}(x) \, ,\\
k_{\mu}(x) &\to h(x) \, k_{\mu}(x) \, h^{-1} (x) \, , \\
D_{\mu} \bullet &\to h(x) \, (D_{\mu} \bullet) \, h^{-1}(x)\, .
\end{align}
First, we show that the fields $g(x)$ and $g^{-1}(x)$ can be omitted from this list.
To see this, note that the covariant derivatives acting on $g(x)$ or $g^{-1}(x)$ can be written as
\begin{equation}
D_{\mu}g=\partial_{\mu}g -g a_{\mu}=g k_{\mu}\,,\qquad D_{\mu}g^{-1}=\partial_{\mu}g^{-1}+a_{\mu}g^{-1}=-k_{\mu}g^{-1}\,.
\end{equation}
Therefore, for the purpose of enumerating a complete set of operators, one can assume that the covariant derivatives never act on $g$ or $g^{-1}$.
Then, $g$ and $g^{-1}$ must appear only in the combination $g^{-1}g=1$, since the other fields are already invariant under the global $G$ symmetry.
This completes the proof.
Now our task boils down to classifying local operators that consist only of the following symbols:
\begin{equation}
D_{\mu} \,,\qquad k_{\mu}^a\,,\quad
a\in \{1, \ldots, \dim \mathfrak{k}\}\, ,
\end{equation}
where we have now made the Lie algebra index of $k$ explicit.
Acting with the covariant derivatives on $k$, one obtains the basic building blocks, which we shall call ``letters''.
Note that acting with $D_\mu$ on $k_\nu$ via (\ref{eq:covder}) keeps us within $\mathfrak{k}$ because
$[\mathfrak{h}, \mathfrak{k}] \subset \mathfrak{k}$.
Examples of such letters are
\begin{equation}
(D_{+}k_{+})^a\,,\quad (D_{+}D_{-}k_{+})^a\,,\quad (D_{+} D_- D_+ k_{-})^a\qquad \text{etc.}
\end{equation}
We should however keep in mind that not all the letters are independent.
Firstly, owing to the relation \eqref{eq:simplify1}, one can effectively treat the covariant derivatives as mutually commuting objects; the non-commuting parts of the covariant derivatives can be expressed in terms of two $k^a$'s using \eqref{eq:simplify1}.
Thus we can reduce the set of letters to
\begin{equation}\label{eq:letterset1}
(D_{+})^{n}(D_{-})^{m}k_{+}\,,\qquad (D_{-})^{n}(D_{+})^{m}k_{-}\,.
\end{equation}
Secondly, one can replace the operators of the form $D_{-}k_{+}$ or $D_{+}k_{-}$ with operators without covariant derivatives using the equation of motion \eqref{eq:EOM} and the relation \eqref{eq:simplify2}, which we display here again in lightcone coordinates,
\begin{equation}
\begin{aligned}
D_{+}k_{-}+D_{-}k_{+}&=0\,,\\
D_{+}k_{-}-D_{-}k_{+}&=-[k_{+},k_{-}]\big|_{\mathfrak{k}}\,.
\end{aligned}
\end{equation}
Using these two relations, we get explicit expressions for $D_- k_+$ and $D_+ k_-$ in terms of products of $k^a$'s.
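Concretely, taking the sum and difference of these two relations gives
\begin{equation}
D_{-}k_{+}=\frac{1}{2}\,[k_{+},k_{-}]\Big|_{\mathfrak{k}}\,,\qquad
D_{+}k_{-}=-\frac{1}{2}\,[k_{+},k_{-}]\Big|_{\mathfrak{k}}\,.
\end{equation}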
Using these expressions in \eqref{eq:letterset1}, we conclude that the complete set of letters is given by
\begin{equation}\label{eq:completeletter}
k_{+}^{(n)} := (D_{+})^{n}k_{+}\,, \qquad
k_{-}^{(n)} := (D_{-})^{n}k_{-}\,.
\end{equation}
In other words, we only need to consider letters with all plus or all minus indices.
As the final step, we need to impose invariance under the $H$ gauge transformations.
(Recall that all the letters are already invariant under the global $G$-symmetry.)
For this purpose, we note that because $[\mathfrak{h}, \mathfrak{k}] \subset \mathfrak{k}$,
the vector space $\mathfrak{k}$ forms a representation $r$ of $\mathfrak{h}$.
The representation $r$ is, in general, not an irrep and we can decompose
$r = \oplus_i \, r_i$, where $r_i$'s are irreps of $H$.
For instance, in the case of ${O}(N)/{O}(N-1)$ coset, the index $a$ in
$k^a_\mu(x)$ can take $N-1$ possible values, and $k$ transforms in the vector representation of ${O}(N-1)$.
In the case of ${SU}(N+1)/{U}(N)$ coset, the index $a$ in
$k^a_\mu(x)$ can take $2N$ possible values, and $k$ transforms in the
$N \oplus \overline{N}$ of $U(N)$.
Thus, in general we can write
\begin{equation}\label{eq:decompose}
k_{\mu}=\sum_{i} \, [k_{\mu}]_{r_i}\, .
\end{equation}
The covariant derivatives do not change the representations, and so the letters
\begin{equation}\label{eq:letterdecomposed}
[k_{+}^{(n)}]_{r_i} := (D_{+})^{n}[k_{+}]_{r_i} \,,\qquad
[k_{-}^{(n)}]_{r_i} := (D_{-})^{n}[k_{-}]_{r_i}
\end{equation}
also transform in the representation $r_i$.
Finally, we need to solve the group-theoretic problem of constructing $H$-invariant objects out of products of the letters in \eqref{eq:letterdecomposed}.
We do this in the next section by constructing a generating function for $H$-invariant operators via Haar integration over the group $H$.
\section{An index for quantum integrability}
\label{sec:index}
In this section we introduce an algorithmic approach to diagnose the fate of
classically-conserved currents at the quantum level \cite{Goldschmidt:1980wq}.
Our computational techniques are inspired by \cite{Henning:2017fpj} and \cite{Kinney:2005ej}.
We will first work in the UV (which enjoys conformal invariance) to
construct a generating function and to define the index.
At the end, we will explain why the index is invariant in the regime of conformal perturbation theory around the UV fixed point.
\subsection{Generating function for local operators}
At the UV fixed point, the equations of motion \eqref{eq:EOM} become linear and we have conformal symmetry.
This allows us to organize operators by their conformal dimension and spin,
\begin{equation}
Z(q,x):= \sum_{\text{inv}\,\mathcal{O}}
q^{\Delta_\mathcal{O}}x^{J_\mathcal{O}} \, ,
\label{eq:defz1}
\end{equation}
where we only include operators that are $H$-invariant.
As usual, $\Delta_{\mathcal{O}}$ and $J_{\mathcal{O}}$ denote the dimension and the spin of the operator $\mathcal{O}$, respectively.
As discussed in Section \ref{sec:localopdesc}, the complete set of single-letter operators is given by $(D_{+})^{n}k_{+}$ and $(D_{-})^{n}k_{-}$ (see (\ref{eq:completeletter})).
This leads to the following generating function for single-letter operators:
\begin{equation}
\widehat{f}(q,x) := \sum_{n=0}^{\infty} q^{n+1}\left(x^{n+1}+x^{-(n+1)}\right)=\frac{xq}{1-xq}+\frac{x^{-1}q}{1-x^{-1}q}\,.
\label{eq:fhat}
\end{equation}
In $\widehat{f}$, we only kept track of the scaling dimension and spin, but we also need to keep track of the quantum numbers under $H$ transformations.
For this purpose, we introduce the fugacities $y$, a vector of length equal to the rank of $H$.
We decompose the current $k_\mu^a$ into irreducible representations of $H$ as in \eqref{eq:decompose}, and multiply by the character for each representation $\chi_{r_i}(y)$.
This leads to the following formula for the single-letter generating function $f(q,x,y)$:
\begin{equation}
\label{eq:fqxy}
f(q,x,y):=
\widehat{f}(q,x)\, \chi_{r}(y)
= \widehat{f}(q,x)\left(\sum_{i}\chi_{r_i}(y)\right)\,.
\end{equation}
The next step is to express the multi-letter generating function
in terms of the single-letter generating function.
To see how the computation goes, let us consider one particular single-letter operator with definite dimension $\Delta$, spin $J$ and charge vector $R$ under the Cartan generators.
Such an operator contributes a monomial to the generating function
\begin{equation}
f^{(\Delta, J, R)}(q,x,y)=q^{\Delta}x^{J}y^{R}\, .
\end{equation}
Here, $y^R$ is a shorthand for $\prod_{i=1}^{\text{rank}\, \mathfrak{h}} y_i^{R_i}$.
If we construct multi-letter operators using only this operator, the partition function would read
\begin{align}
Z^{(\Delta, J, R)}(q,x,y)&=1+q^{\Delta}x^{J}y^{R}+(q^{\Delta}x^{J}y^{R})^2+\cdots \nonumber \\
&= \exp \left[ - \log (1 - q^{\Delta}x^{J}y^{R}) \right]
\nonumber \\
&=\exp\left[\sum_{m=1}^{\infty}\frac{1}{m}f^{(\Delta, J, R)}(q^{m},x^{m},y^{m})\right]\,. \label{eq:multilettermono}
\end{align}
In reality, there are infinitely many single-letter operators and the multi-letter partition function would be given by a product of the factor \eqref{eq:multilettermono} corresponding to each single-letter operator.
This leads to the following expression for the multi-letter partition function:
\begin{equation}\label{eq:defofZqxy}
Z(q,x,y)=
\exp\left[\sum_{m=1}^{\infty}\frac{1}{m}\,
f(q^{m},x^{m}, y^{m})
\right]\, ,
\end{equation}
with $f(q,x,y)$ given in (\ref{eq:fqxy}).
The expression on the right hand side is also known as the plethystic exponential \cite{Kinney:2005ej}.
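In explicit computations the plethystic exponential has to be truncated at some order in $q$, which is harmless because every letter has positive dimension. The following sympy sketch (function names ours) implements a truncated version of \eqref{eq:defofZqxy} and checks it against the geometric series \eqref{eq:multilettermono} for a single letter:

```python
import sympy as sp

q, x, y = sp.symbols('q x y')

def truncate(expr, order):
    """Drop every term whose q-degree exceeds `order`."""
    expr = sp.expand(expr)
    return sum(expr.coeff(q, d) * q**d for d in range(order + 1))

def plethystic_exp(f, order):
    """Truncated plethystic exponential exp(sum_m f(q^m,x^m,y^m)/m), kept as
    a polynomial in q through q^order; every term of f must have q-degree >= 1."""
    arg = sum(sp.Rational(1, m)
              * f.subs({q: q**m, x: x**m, y: y**m}, simultaneous=True)
              for m in range(1, order + 1))
    arg = truncate(arg, order)
    result, term = sp.Integer(1), sp.Integer(1)
    for k in range(1, order + 1):            # exp(arg) = sum_k arg^k / k!
        term = truncate(term * arg, order) / k
        result += term
    return sp.expand(result)

# sanity check: a single letter q^D x^J y^R plethystically exponentiates
# to the geometric series 1/(1 - q^D x^J y^R)
mono = q**2 * x * y
```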
To obtain the generating function for gauge-invariant operators, we simply need to integrate over the fugacities with the Haar measure on $H$:
\begin{equation}\label{eq:Cartanintegral}
Z(q,x)=\int d\mu_H (y)\,Z(q,x,y)\,.
\end{equation}
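For example, when $H=U(1)$ with fugacity $y=e^{i\phi}$, the Haar measure is $d\mu_{U(1)}=\frac{d\phi}{2\pi}$, and the integral \eqref{eq:Cartanintegral} simply keeps the charge-zero part of a Laurent polynomial in $y$ (a minimal sketch; the function name is ours):

```python
import sympy as sp

y = sp.symbols('y')

def haar_u1(expr):
    """Haar integral over U(1): with fugacity y = e^{i*phi}, the integral
    (1/2pi) over phi keeps only the charge-zero (y-independent) part of a
    Laurent polynomial in y."""
    return sp.expand(expr).coeff(y, 0)

# e.g. the invariant part of (y + 1/y)^2 = y^2 + 2 + y^{-2} is the constant 2
```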
Below we will also encounter cases where we cannot restrict Haar integrals to the Cartan.
In such cases, we cannot introduce the fugacities $y$, so we need a general element $h \in H$ in our formulas.
In particular, the equations (\ref{eq:fqxy}) and (\ref{eq:defofZqxy}) are replaced by
\begin{align}
f(q,x,h) &=
\widehat{f}(q,x) \, \chi_{r}(h)
= \widehat{f}(q,x) \left(\sum_{i}\chi_{r_i}(h)\right)\, , \label{eq:deffqxh}\\
Z(q,x,h) &=
\exp\left[\sum_{m=1}^{\infty}\frac{1}{m}\,
f(q^{m},x^{m}, h^{m})
\right]\, .\label{eq:intermsoftrace}
\end{align}
The projection to the gauge-invariant operators can be achieved by integrating
$Z(q,x,h)$ against the Haar measure $d\mu_{H}$, generalizing (\ref{eq:Cartanintegral}):
\begin{equation}
\label{eq:zqxh}
Z(q,x)=\int d\mu_H(h)\,Z(q,x,h)\, .
\end{equation}
Equations (\ref{eq:fhat}), (\ref{eq:deffqxh}), (\ref{eq:intermsoftrace}) and (\ref{eq:zqxh}) are our main results that make the computations of \cite{Goldschmidt:1980wq} algorithmic.
\subsection{Discrete symmetries}
\label{sec:discrete}
Now we extend these formulas to include discrete symmetries which are crucial for quantum integrability of certain models.
To be concrete, let us consider a sigma model with an internal $\mathbb{Z}_2$ symmetry, whose group elements are given by $1$ and $\sigma$, with $\sigma^2=1$.
One can take the $\mathbb{Z}_2$ symmetry into account by considering the modified partition function
\begin{align}
\widetilde{Z}(q,x)&:=
\frac{1}{2}\left[Z(q,x)+Z_{\sigma}(q,x)\right]\,,
\end{align}
with $Z(q,x)$ as before (\ref{eq:defz1}) and
\begin{equation}
Z_{\sigma}(q,x) :=
\sum_{\text{inv}\, \mathcal{O}}
\left[\sigma \, q^{\Delta_\mathcal{O}}x^{J_\mathcal{O}}\right]\,.
\end{equation}
In other words, we insert $\frac{1+\sigma}{2}$ in the partition function, which projects to the $\mathbb{Z}_2$-invariant sector.
Again, we restrict ourselves to operators that are invariant under the global discrete symmetries, but it is straightforward to generalize to operators in nontrivial representations of the symmetry.
The formula for $Z_\sigma$ is a straightforward generalization of (\ref{eq:intermsoftrace}):
\begin{equation} \label{eq:defzsigma}
Z_{\sigma}(q,x,h)=\exp\left[\sum_{m=1}^{\infty}
\frac{1}{m} \widehat{f}(q^{m},x^{m})\,
{\rm tr}_{r}\left((\sigma h)^{m}\right)\right]\, .
\end{equation}
In general, $\sigma$ maps the representation $r$ to itself, but it can exchange the individual irreps $r_i$.
For example, in the case of $O(N)/O(N-1)$, there is only one representation, and $\sigma$ keeps us within this representation.
In the case of $SU(N+1)/U(N)$, the fundamental and the anti-fundamental representations get exchanged by $\sigma$.
\subsection{Index for quantum integrability}
The partition function (\ref{eq:intermsoftrace}) is defined at the UV free CFT point of the sigma model, restricted to the $H$-invariant sector.
Thus, we can expand the partition function of the UV theory into a sum of characters of the two-dimensional global conformal group,
\begin{equation}\label{eq:partition}
Z(q,x)=\sum_{\Delta, J} c(\Delta,J)\, \chi_{\Delta,J} (q,x)\,,
\end{equation}
where the non-negative integer $c(\Delta,J)$ counts the number of global primaries with dimension $\Delta$ and spin $J$.
As reviewed in appendix \ref{app:inversion}, we have two types of characters: short characters for conserved currents and the more typical long characters for everything else.
In terms of $c(\Delta,J)$, the index \eqref{eq:IndexClassical} for the UV CFT can be expressed simply as
\begin{align}
\mathcal{I}(J) = c(J,J) - c(J+1,J-1) \,. \label{eq:defindex}
\end{align}
The first term denotes the number of primary conserved currents of spin $J$ in the UV CFT.\footnote{Note that descendants can also satisfy a conservation law, but being total derivatives they do not give rise to a charge when integrated on a spatial slice.}
The second term counts the number of primary operators with dimension $J+1$ and spin $J-1$.
These are precisely the operators that can appear as $A$ terms in $\partial_- \mathcal{O}_{J,J}$ and cannot be absorbed into a redefinition of the current.
Thus, a quantum conserved current is guaranteed to exist if the difference \eqref{eq:defindex} is strictly positive.
Indeed, this is just the criterion of \cite{Goldschmidt:1980wq}.
The novelty in our work is that we are choosing to work at the UV fixed point which allows us to exploit conformal symmetry.
To summarize, we can diagnose quantum integrability of a coset model by computing the generating function using the formulas (\ref{eq:fhat}), (\ref{eq:deffqxh})-(\ref{eq:zqxh}),
reading off the expansion coefficients (\ref{eq:partition}) to construct the index (\ref{eq:defindex}), and checking
\begin{equation}\label{eq:diagnosis}
\mathcal{I}(J) >0 \quad \implies \, \text{There exists a quantum conserved current of spin $J$}\, .
\end{equation}
Further, if $\mathcal{I}(J)>0$, the number of quantum conserved currents is at least $\mathcal{I}(J)$.
In appendix \ref{app:inversion}, we discuss an ``inversion formula'' which allows us to compute $\mathcal{I}(J)$ as an integral transform of $Z(q,x)$.
In practice, for low spin operators, it is often easier to explicitly series expand the partition function $Z(q,x)$ and read off the coefficients $c(\Delta, J)$.\footnote{For practical computations it is also useful to note that $\mathcal{I}(J) = a(J,J)- a(J-1,J-1) - a(J+1,J-1) + a(J,J-2)$ where $a(\Delta,J)$ is the coefficient of $q^\Delta x^J$ in the expansion of $Z(q,x)$. Note also that $a(\Delta,J)$ would be the total number of operators taking into account the full non-linear equations of motion, because (\ref{eq:completeletter}) is the complete set of letters. In particular, $a(J+1,J-1)$ would be the number of $A$ terms in \cite{Goldschmidt:1980wq}.}
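To make the whole procedure concrete, the following sympy sketch carries it out end to end for the $\mathbb{C}\mathbb{P}^1={SU}(2)/{U}(1)$ coset of Section \ref{sec:cp1}, without imposing the discrete $\bZ_2$ symmetry (all variable and function names are ours; the $U(1)$ character of the letters, $y+y^{-1}$, is the one given in that section):

```python
import sympy as sp

q, x, y = sp.symbols('q x y')
N = 6  # keep q-powers through q^N; the spin-4 index needs coefficients up to q^5

def truncate(expr, order):
    """Drop every term whose q-degree exceeds `order`."""
    expr = sp.expand(expr)
    return sum(expr.coeff(q, d) * q**d for d in range(order + 1))

# Plethystic argument sum_m f(q^m, x^m, y^m)/m for CP^1 = SU(2)/U(1):
# the letters (D_+)^n k_+ and (D_-)^n k_- carry (Delta, J) = (n+1, +-(n+1)),
# and the U(1) character is chi_r(y) = y + 1/y.
arg = sum(sp.Rational(1, m)
          * q**(m * n) * (x**(m * n) + x**(-m * n)) * (y**m + y**(-m))
          for m in range(1, N + 1) for n in range(1, N // m + 1))

# multi-letter partition function Z(q, x, y): truncated exponential of `arg`
Z_full, term = sp.Integer(1), sp.Integer(1)
for k in range(1, N + 1):
    term = truncate(term * arg, N) / k   # builds the term arg^k / k!
    Z_full += term
Z_full = sp.expand(Z_full)

# Haar integral over U(1): keep only the charge-zero (y-independent) part
Z_inv = Z_full.coeff(y, 0)

def a(delta, spin):
    """Coefficient of q^delta x^spin: the number of invariant operators."""
    return Z_inv.coeff(q, delta).coeff(x, spin)

def index(J):
    """I(J) = a(J,J) - a(J-1,J-1) - a(J+1,J-1) + a(J,J-2), as in the footnote."""
    return a(J, J) - a(J - 1, J - 1) - a(J + 1, J - 1) + a(J, J - 2)
```

Evaluating this gives $a(4,4)-a(3,3)=a(5,3)-a(4,2)=2$, matching the values $c(4,4)=c(5,3)=2$ quoted in subsection \ref{subsec:RGflow}, and hence $\mathcal{I}(4)=0$ before the $\bZ_2$ symmetry is imposed.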
\subsection{Invariance of the index under conformal perturbation theory}
\label{subsec:RGflow}
We now comment on an important feature of the index, which is its invariance under conformal perturbation theory around the UV fixed point.
When we move away from the UV fixed point, some spin-$J$ conserved current $\mathcal{O}_{J,J}$ can cease to be conserved.
This is because the conformal multiplet of $\mathcal{O}_{J,J}$ can combine with a multiplet whose primary $\mathcal{O}_{J+1,J-1}$ has dimension $J+1$ and spin $J-1$.
In the process, the conformal multiplet of $\mathcal{O}_{J,J}$ becomes a long multiplet that satisfies the relation $\partial_{-}\mathcal{O}_{J,J}=\mathcal{O}_{J+1,J-1}$.
When this happens, the first term in \eqref{eq:IndexClassical} reduces by one.
At the same time, the third term in \eqref{eq:IndexClassical} also increases by one since now the operator $\mathcal{O}_{J+1,J-1}$ is a total divergence.
As a result, the difference $\mathcal{I}(J)$ remains invariant.
To see this in a concrete example, let us consider the case of the $\mathbb{C}\mathbb{P}^{N-1}$ model, which will be discussed in more detail in Section \ref{sec:cpn} below.
In computing $\mathcal{I}(4)$ for this case, we will find that $c(4,4) = c(5,3) = 2$, and so $\mathcal{I}(4)= 2-2 = 0$.
Let us compare this to \cite{Goldschmidt:1980wq}.
They find that there is just one candidate conserved operator with $\Delta=J=4$, namely the operator $(T_{++})^2$.
They also find four $A$ operators and three $B$ operators, and thus one primary with $\Delta=5$ and $J=3$.
Thus, with their way of counting, the index would be $\mathcal{I}(4) = 1 - 1 =0$.
The reason for the discrepancy is the following.
In the free limit, one operator with $\Delta=J=4$ is $(T_{++})^2$, and let us call the other one $\mathcal{O}_{4,4}$.
The free equations of motion imply that $\partial_- \mathcal{O}_{4,4} = 0$.
What happens as we flow away from the UV is that we get a modified relation
$\partial_- \mathcal{O}_{4,4} = \mathcal{O}_{5,3}$, where $\mathcal{O}_{5,3}$ is one of the primary operators contributing to $c(5,3)=2$.
Thus, we lose one conserved operator because $\mathcal{O}_{4,4}$ is no longer conserved, and we lose one $A$ term because $\mathcal{O}_{5,3}$ is now a total divergence. As a result, the index remains invariant.
The above argument is valid in the regime of conformal perturbation theory, where we can grade the local operators by their scaling dimensions at the UV fixed point.
There is a potential subtlety related to nonperturbative corrections.
The coset sigma models discussed in this paper are asymptotically free, but acquire a mass gap nonperturbatively in the infrared.
In the presence of such a mass gap, one can write $A$ terms for
$\partial_- \mathcal{J}_{+\ldots +}$ with dimension less than $J+1$.
The sufficiency condition of \cite{Goldschmidt:1980wq} can in principle be violated by this nonperturbative effect, but we are not aware of any example where this happens.
\section{Examples}
\label{sec:examples}
\subsection{$\mathbb{C}\mathbb{P}^1$ model}
\label{sec:cp1}
We now apply the strategy above to the $\mathbb{CP}^1$ model with a general $\theta$ angle. The coset for the $\mathbb{CP}^1$ sigma model is $\frac{{ SU}(2)}{{ U}(1)} $.
The $\mathbb{CP}^1$ sigma model is integrable at $\theta=0$ \cite{Zamolodchikov:1977nu} and at $\theta=\pi$ \cite{Zamolodchikov:1992zr}, and the global symmetry at these two points is ${O}(3)={SO}(3)\rtimes \bZ_2$.
The $\mathbb{C}\mathbb{P}^1$ model is not expected to be integrable for other values of $\theta$, where the global symmetry is simply $SO(3)$.
Let us first compute the index without imposing the charge conjugation symmetry, corresponding to the $\mathbb{CP}^1$ sigma model with a generic $\theta$ angle.
The coset degrees of freedom consist of the charge $+1$ representation and the charge $-1$ representation of the $U(1)$ quotient group.
The $U(1)$ character is simply $\text{tr}(h) = y+y^{-1}$, where $y=e^{i\phi}$ is the $U(1)$ fugacity. Hence,
\begin{align}
&{\rm tr}(h^{m})=y^{m}+y^{-m}\,\,.
\end{align}
The multi-letter partition function $Z(q,x,y)$ is constructed following (\ref{eq:fhat}), (\ref{eq:fqxy}) and (\ref{eq:defofZqxy}).
We project to $U(1)$ invariant operators using (\ref{eq:Cartanintegral}), which in this case becomes
\begin{align}
Z(q,x) = \oint \frac{dy}{2\pi \i y} Z(q,x,y)\, .
\end{align}
We get the following result for the indices:
\ie
{\cal I}(4) = 0\,,~~~{\cal I}(6) = -1\,,~~~{\cal I}(8) = -5\,,~~~{\cal I}(10)= -15\,,~~~{\cal I}(12) = -33\,,\,\cdots
\fe
Recall that ${\cal I}(J)>0$ is a sufficient condition for the existence of quantum conserved spin-$J$ currents.
Hence without imposing charge conjugation symmetry, our analysis does not predict quantum conserved currents for the $\mathbb{CP}^1$ model, consistent with the expectation that the $\mathbb{CP}^1$ model is not integrable at a generic $\theta$ angle.
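As an aside, the $U(1)$ projection above amounts to extracting the charge-zero (constant-in-$y$) term of the Laurent expansion. A hedged toy check with sympy, using two letters of dimension one and charge $\pm 1$ rather than the actual $\mathbb{CP}^1$ letter content:

```python
import sympy as sp

q, y = sp.symbols('q y')
order = 4  # truncation order in q

# toy single-letter function: one letter of charge +1 and one of charge -1
f = q*(y + 1/y)

# bosonic multi-letter partition function via the plethystic exponential,
# Z(q, y) = exp(sum_m f(q^m, y^m)/m), truncated at order q^order
Z = sp.exp(sum(f.subs([(q, q**m), (y, y**m)])/m for m in range(1, order + 1)))
Z = sp.expand(sp.series(Z, q, 0, order + 1).removeO())

# oint dy/(2 pi i y) Z(q, y) picks out the constant term in y
Z_inv = sum(t for t in sp.Add.make_args(Z) if not t.has(y))
print(sp.expand(Z_inv))  # one neutral word at each even level: 1 + q**2 + q**4
```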
Next, we compute the index for the $\mathbb{CP}^1$ sigma model at $\theta=0,\pi$, where there is a $\bZ_2$ charge conjugation symmetry and the model is known to be integrable.
The $\mathbb{Z}_2$ charge conjugation symmetry maps a charge $+1$ state to a charge $-1$ state, and extends the quotient group from $U(1)$ to ${O}(2)$.
The $k_\mu^a$ form a two-dimensional representation of the $O(2)$ group in which the group element can be expressed as
\begin{align}
h&=
\left(
\begin{matrix}
\cos \phi & \sin \phi \\
- \sin \phi & \cos \phi
\end{matrix}
\right)\, , \quad
\sigma=\left(\begin{array}{cc} 1&0\\0&-1\end{array}\right)\, ,
\end{align}
with $\phi \in [0,2\pi)$.
Using this matrix representation, the trace ${\rm tr}((\sigma h)^{m})$ can be computed straightforwardly and we get
\begin{align}
\tr((\sigma h)^{m})=1+(-1)^{m}\,.
\label{eq:trsigmahcp1}
\end{align}
Now we can compute the full partition function for the $\mathbb{CP}^1$ sigma model with charge conjugation symmetry
\begin{equation}
\widetilde{Z}(q,x) = \oint \frac{dy}{2\pi \i y} \,
\frac{1}{2} \, [\, Z(q,x,y)+Z_{\sigma}(q,x,y) \, ] \,,
\end{equation}
with $Z_\sigma$ computed via (\ref{eq:defzsigma}) using (\ref{eq:trsigmahcp1}).
Using this new partition function, we get the following indices:
\begin{equation}
\mathcal{I}(4) =1\,,\quad \mathcal{I}(6)=1\,,\quad \mathcal{I}(8)=0\,,\quad \mathcal{I}(10)= - 4\,,\quad \mathcal{I}(12)=-11\,\cdots
\end{equation}
Thus, there exist quantum conserved currents of spin-4 and spin-6, so the model is quantum integrable once the discrete symmetry is taken into account.
The existence of the spin-4 current was shown in the original work of \cite{Goldschmidt:1980wq}, and we further established that there is a spin-6 current.
Our analysis does not predict conserved currents of even higher spin.\footnote{Incidentally, the indices for the charge conjugation odd currents are all negative.}
We will see in Section \ref{sec:on} that this spin-6 quantum conserved current also exists in all the $O(N)$ models.
\subsection{$\mathbb{C}\mathbb{P}^{N-1}$ model}
\label{sec:cpn}
The $\mathbb{CP}^{N-1}$ model is the sigma model with target space
$\frac{SU(N)}{U(N-1)} $.
The index $a$ in $k_\mu^a$ transforms in a direct sum of the fundamental and the anti-fundamental representations of $U(N-1)$.
The characters for these representations are given by
\begin{equation}
\begin{aligned}
&\chi_{\square}(y_1,\ldots,y_{N-1})=\sum_{k}y_k\,,\qquad \chi_{\bar{\square}}(y_1,\ldots,y_{N-1})=\sum_{k}y_k^{-1}\,.
\end{aligned}
\end{equation}
The integration measure is given by
\begin{align}
\int d\mu(y) = \frac{1}{(N-1)!}\left(\prod_{k=1}^{N-1}
\oint\frac{dy_k}{2\pi \i y_k}\right) \prod_{i<j}(y_i-y_j)(y_i^{-1}-y_j^{-1})\,.
\end{align}
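As a hedged sanity check of this Cartan integration formula (the $N=3$ case, where the quotient group is $U(2)$): the contour integrals reduce to a constant-term extraction, the measure is normalized, and $\square \otimes \bar{\square}$ should contain exactly one singlet while $\square$ contains none:

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')

def const_term(expr):
    # The contour integrals oint dy_k/(2 pi i y_k) pick out the term with
    # zero power of every y_k in the Laurent expansion
    return sum(t for t in sp.Add.make_args(sp.expand(expr)) if not t.free_symbols)

# Haar measure density for U(2), the N = 3 case of the formula above
measure = sp.Rational(1, 2)*(y1 - y2)*(1/y1 - 1/y2)

chi_f  = y1 + y2       # fundamental character
chi_af = 1/y1 + 1/y2   # anti-fundamental character

print(const_term(measure))               # 1: normalization
print(const_term(measure*chi_f*chi_af))  # 1: one singlet in fund x antifund
print(const_term(measure*chi_f))         # 0: no singlet in the fundamental
```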
Computing the index using these formulae we obtain
\begin{align}
\mathcal{I}(4) = -2\,,\quad \mathcal{I}(6)= -6\, ,
\end{align}
independent of $N$.
Since all these numbers are negative, it is unlikely that there are conserved higher-spin currents at the quantum level. Let us see if imposing charge conjugation symmetry can help.
The $U(N-1)$ group element and charge conjugation matrix $\sigma$ in the representation
$r = \Box \oplus \overline{\Box}$ are given by
\begin{align}
r(h) = \left(
\begin{matrix}
h & 0 \\
0 & h^*
\end{matrix}
\right)\, , \quad
\sigma = \left(
\begin{matrix}
0 & I_{N-1} \\
I_{N-1} & 0
\end{matrix}
\right)\, ,
\end{align}
where $h \in U(N-1)$.
The traces $\tr[(\sigma h)^m]$ needed in (\ref{eq:defzsigma}) vanish for odd $m$ and for even $m$ reduce to
$2\,\tr[(hh^*)^{\frac{m}{2}}]$.
We now compute Haar integrals analytically over $h$ using the so-called Weingarten functions for the unitary group. The first two examples are
\begin{align}
\int dU\, U_{ij} U^*_{i'j'} &= \frac{1}{d} \, \delta_{ii'}\delta_{jj'}\, ,\\
\int dU\, U_{i_1j_1} U_{i_2j_2} U^*_{i_1'j_1'} U^*_{i_2'j_2'} &=
\frac{
\delta_{i_1 i_1'} \delta_{i_2i_2'} \delta_{j_1 j_1'} \delta_{j_2 j_2'} +
\delta_{i_1 i_2'} \delta_{i_2i_1'} \delta_{j_1 j_2'} \delta_{j_2 j_1'}
}{d^2-1} \nonumber \\
&\quad - \frac{
\delta_{i_1 i_1'} \delta_{i_2i_2'} \delta_{j_1 j_2'} \delta_{j_2 j_1'} +
\delta_{i_1 i_2'} \delta_{i_2i_1'} \delta_{j_1 j_1'} \delta_{j_2 j_2'}
}{d(d^2-1)}\, .
\end{align}
Here $dU$ is the Haar measure on $U(d)$ normalized such that $\int dU = 1$.\footnote{See \cite{2011arXiv1109.4244P} for a Mathematica package that computes Weingarten integrals symbolically.} The result of the index computation is that
\begin{align}
\mathcal{I}(4) = 0\, , \quad
\mathcal{I}(6) = -1\, ,
\end{align}
independent of $N$.
The discrete symmetry increases the indices, but they are still not positive, and so the classically-conserved currents may not survive quantum-mechanically.
This is consistent with the fact that the $\mathbb{CP}^{N-1}$ models with $N>2$ are not expected to be integrable \cite{Abdalla:1980jt,Gomes:1982qh}.
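The first Weingarten formula above can be sanity-checked numerically by averaging over Haar-random unitaries (a sketch using the standard QR-with-phase-correction sampler; not part of the index computation itself):

```python
import numpy as np

def haar_unitary(d, rng):
    # QR decomposition of a complex Ginibre matrix, with the diagonal phases
    # of R divided out, yields a Haar-distributed element of U(d)
    z = (rng.standard_normal((d, d)) + 1j*rng.standard_normal((d, d))) / np.sqrt(2)
    qmat, r = np.linalg.qr(z)
    return qmat * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
d, n = 3, 20000
acc = np.zeros((d, d, d, d), dtype=complex)
for _ in range(n):
    u = haar_unitary(d, rng)
    acc += np.einsum('ij,kl->ijkl', u, u.conj())
acc /= n

# first Weingarten formula: <U_ij U*_i'j'> = delta_{i i'} delta_{j j'} / d
exact = np.einsum('ik,jl->ijkl', np.eye(d), np.eye(d)) / d
print(np.abs(acc - exact).max())  # small: O(1/sqrt(n)) Monte Carlo error
```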
\subsection{$O(N)$ model}
\label{sec:on}
The $O(N)$ model can be viewed as the sigma model with target space
$ \frac{SO(N)}{SO(N-1)}$.
In other words the target space is the sphere $S^{N-1}$.
For simplicity, we assume that $N-1$ is even.
The index $a$ in the current $k_{\mu}^a$ transforms under
the vector representation of ${SO}(N-1)$, and its character is given by
\begin{equation}
\chi (y)=\sum_{i=1}^{(N-1)/2}(y_i+y_i^{-1} )\,.
\end{equation}
The measure factor for integrating over the Cartan is given by
\begin{equation}
d\mu (y)=
\prod_{i}\frac{dy_i}{2\pi \i \, y_i}\,
\prod_{i<j}(1-y_iy_j)(1-\frac{y_i}{y_j})\,.
\end{equation}
Using these formulas, together with (\ref{eq:fhat}), (\ref{eq:fqxy}), \eqref{eq:defofZqxy} and \eqref{eq:Cartanintegral}, one can compute $\mathcal{I}(J)$.
The results for small $N$ are summarized in Table \ref{tab:on}, and for the spin-4 case agree with the findings in \cite{Goldschmidt:1980wq}.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& $\mathcal{I}(4)$ & $\mathcal{I}(6)$ & $\mathcal{I}(8)$ \\
\hline
$N=3$ & $0$ & $- 1$ & $- 5$ \\
$N=5$ & $1$ & $0$ & $- 1$ \\
$N=7$ & $1$ & $1$ & $0$ \\
$N=9$ & $1$ & $1$ & $0$ \\
\hline
\end{tabular}
\qquad \qquad
\begin{tabular}{|c|c|c|c|}
\hline
& $\mathcal{I}(4)$ & $\mathcal{I}(6)$ & $\mathcal{I}(8)$ \\
\hline
$N=3$ & $1$ & $ 1$ & $0$ \\
$N=5$ & $1$ & $1$ & $0$ \\
$N=7$ & $1$ & $1$ & $0$ \\
$N=9$ & $1$ & $1$ & $0$ \\
\hline
\end{tabular}
\caption{The first few indices for the $O(N)$ sigma model $\frac{SO(N)}{SO(N-1)}$.
On the left are indices without imposing any discrete symmetry,
and on the right are indices when we impose the charge conjugation symmetry.
We take $N$ to be odd for simplicity.
The case with $N=3$ is the same as the $\mathbb{C}\mathbb{P}^1$ case considered in Section \ref{sec:cp1}.
Thus our analysis confirms the presence of a spin-$4$ conserved current and predicts a new spin-$6$ conserved current at the quantum level.}
\label{tab:on}
\end{table}
Since $\mathcal{I}(4)>0$ for all values of $N$ except $N=3$, this shows quantum integrability for $N>3$.
For $N=3$, which is the same as the $\mathbb{C}\mathbb{P}^1$ model, we need to take into account discrete symmetries, as we also saw in Section \ref{sec:cp1}.
So we now proceed to impose the $\mathbb{Z}_2$ charge conjugation symmetry which extends the quotient group from $SO(N-1)$ to $O(N-1)$.
Since the charge conjugation $\sigma={\rm diag}(1,1,\ldots,1,-1)$ maps the vector representation of $SO(N-1)$ to itself, the computation of the modified partition function \eqref{eq:defzsigma} boils down to computing ${\rm tr}[(\sigma h)^{m}]$ in the vector representation and integrating the plethystic exponential over $SO(N-1)$.
This integral cannot be reduced to an integral over the Cartan since charge conjugation does not commute with generic group elements of $SO(N-1)$.\fn{Recall that $O(N-1)=SO(N-1)\rtimes \mathbb{Z}_2$.}
Nevertheless, as shown in Appendix C of \cite{Henning:2017fpj}, one can still simplify the integral into multiple abelian integrals. Their analysis is based on the fact that, for any $h\in SO(N-1)$, $\sigma h$ can be brought to the following block-diagonal matrix by conjugation,
\begin{equation}
\sigma h \mapsto \left(\begin{array}{cccc}R_1&\cdots&0&0\\\vdots&\ddots&\vdots&\vdots \\0&\cdots&R_{\frac{N-3}{2}}&0\\0&\cdots&0&J\end{array}\right)\,,
\end{equation}
with
\begin{equation}
R_k= \left(\begin{array}{cc}\cos\theta_k&\sin\theta_k\\-\sin \theta_k&\cos\theta_k\end{array}\right)\,,\qquad J=\left(\begin{array}{cc}1&0\\0&-1\end{array}\right)\,.
\end{equation}
We just state the outcome, referring to \cite{Henning:2017fpj} for details: The modified partition function $Z_\sigma$ can be computed by replacing the character and the measure with
\begin{align}
\tilde{\chi} (y)&=\tilde{y}_{+}+\tilde{y}_{-}+\sum_{i=1}^{(N-3)/2}(y_i+y_i^{-1} )\,,\\
d\tilde{\mu}(y)&=\frac{d\tilde{y}_+}{2\pi \i (\tilde{y}_{+}-1)} \frac{d\tilde{y}_{-}}{2\pi \i (\tilde{y}_{-}+1)}
\prod_{i}\frac{dy_i (1-y_i^2)}{2\pi \i \, y_i}\,
\prod_{i<j}(1-y_iy_j)(1-\frac{y_i}{y_j})\,,
\end{align}
where the integration contours for $\tilde{y}_{\pm}$ are around $\pm 1$ respectively.
Using these expressions, we computed $\mathcal{I}(J)$ for small odd $N$
(including $N=3$) and found that
\begin{equation}
\mathcal{I}(4)=1\,,\qquad\mathcal{I}(6)=1\,,\qquad\mathcal{I}(8)=0\,,
\end{equation}
independent of $N$.
Thus our analysis is consistent with quantum integrability of the $O(N)$ model with $\mathbb{Z}_2$ symmetry. In addition to the spin-4 conserved current established in \cite{Goldschmidt:1980wq}, we have predicted a spin-6 conserved current at the quantum level.\fn{Our index analysis does not predict the existence of the conserved currents with spin $>8$ although it is likely that there exist infinitely many higher-spin conserved currents given that the model is integrable. Note also that in \cite{Spector:1986ek} it was claimed that there exists a quantum conserved current for each even spin.
This is incorrect as \cite{Spector:1986ek} overcounts operators that are total derivatives, because they include operators which vanish owing to the equation of motion.}
\subsection{Flag sigma models $\frac{U(N)}{U(1)^N}$}
\label{sec:flag}
As one last example, we compute the index for the flag sigma model
$\frac{U(N)}{{U}(1)^{N}}$, which has been studied recently in \cite{Lajko:2017wif,Tanizaki:2018xto,Ohmori:2018qza}.
This is also an example where the coset is not symmetric (for $N>2$).
Note that the $N=2$ flag sigma model is the $O(3)$ or the $\mathbb{C}\mathbb{P}^1$ model.
The flag sigma model has an $N(N-1)$-dimensional parameter space preserving the
$PSU(N)$ global symmetry.
Over special loci on the parameter space, the model has enhanced discrete symmetries and 't Hooft anomalies.
It has been argued that over certain special loci on the moduli space the model is gapless in the IR and is described by the $SU(N)_1$ WZW model \cite{Lajko:2017wif,Tanizaki:2018xto,Ohmori:2018qza}.
The index $a$ in $k_\mu^a$ takes $(N^2-N)$ possible values corresponding to the roots of $SU(N)$.
The charge of $(k_\mu)^{ij}$ (with $i,j =1,\ldots, N$, $i\neq j$) under the $n$-th $U(1)$ factor in $U(1)^N$ is $\delta_{i,n} - \delta_{j,n}$.
The required $U(1)^N$ character is
\ie\label{flagch}
\chi(y) = \sum_{\scriptstyle \substack{i,j=1 \\ i\neq j}}^N y_i y_j^{-1}\,.
\fe
We first computed the indices $\mathcal{I}(J)$ without imposing any discrete symmetry, and the results are given on the left in Table \ref{tab:flag}.
All the indices are negative.
Hence our analysis does not predict higher-spin quantum conserved currents in the flag sigma model at a generic point in the parameter space.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& $\mathcal{I}(4)$ & $\mathcal{I}(6)$ & $\mathcal{I}(8)$ \\
\hline
$N=3$ & $-47$ & $- 262$ & $- 1263$ \\
$N=4$ & $- 371$ & $- 3834$ & $- 32235$ \\
$N=5$ & $- 1605$ & $- 27794$ & $- 379760$ \\
\hline
\end{tabular}\qquad \qquad
\begin{tabular}{|c|c|c|c|}
\hline
& $\mathcal{I}(4)$ & $\mathcal{I}(6)$ & $\mathcal{I}(8)$ \\
\hline
$N=3$ & $-4$ & $- 20$ & $- 105$ \\
$N=4$ & $- 7$ & $- 79$ & $- 682$ \\
$N=5$ & $- 10$ & $- 139$ & $- 1722$ \\
\hline
\end{tabular}
\caption{The first few indices for the flag sigma model $\frac{U(N)}{U(1)^{N}}$.
On the left are indices without imposing any discrete symmetry, and on the right are indices while imposing the $S_N \times \mathbb{Z}_2$ symmetry. The indices are all negative, which means that our counting analysis does not predict higher-spin quantum conserved currents.}
\label{tab:flag}
\end{table}
Next, we compute the index at the ``origin'' of the parameter space, where the enhanced discrete symmetry is $S_N \times \mathbb{Z}_2$, with $S_N$ the permutation group on $N$ elements.
A permutation $\sigma \in S_N$ acts on the current $(k_\mu)^{ij}$ via
$(k_\mu)^{ij} \to (k_\mu)^{\sigma(i)\sigma(j)}$,
while the $\mathbb{Z}_2$ acts as $(k_\mu)^{ij} \to (k_\mu)^{ji}$.
We define an $N(N-1)\times N(N-1)$ diagonal matrix
\ie
h = \text{diag}(\, y_1y_2^{-1} , y_1y_3^{-1} ,\cdots, y_{N-1}y_N^{-1}, y_1^{-1} y_2,y_1^{-1}y_3 ,\cdots, y_{N-1}^{-1}y_N\,)\,,
\fe
whose trace is given in \eqref{flagch}.
For each element $\sigma$ of $S_N\times \mathbb{Z}_2$, we write down its matrix representation acting on $\mathfrak{k}$, and compute ${\rm tr}[(\sigma h)^m]$.\footnote{For example, the charge conjugation $\mathbb{Z}_2$ is realized as
$\sigma =\left(
\begin{matrix} 0 & I_{ {N(N-1)\over2}} \\
I_{ {N (N-1)\over2}} & 0\end{matrix}\right)$ \, .}
The partition function for the $S_N\times \mathbb{Z}_2$ invariant operators is then
\ie
{1\over 2 N!} \prod_{i=1}^N \oint {dy_i\over 2\pi \i y_i} \,\sum_{\sigma\in S_N \times \mathbb{Z}_2} Z_\sigma(q,x,y_i)\,,
\fe
where $Z_\sigma$ is as in (\ref{eq:defzsigma}).
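As a hedged check of the charge-conjugation element alone (not the full $S_N\times \mathbb{Z}_2$ average): since $\sigma$ inverts every root, $\sigma h \sigma = h^{-1}$ and hence $(\sigma h)^2 = 1$, so ${\rm tr}[(\sigma h)^m]$ vanishes for odd $m$ and equals $N(N-1)$ for even $m$. A sympy sketch for $N=3$:

```python
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3')
M = 3  # N(N-1)/2 positive roots for N = 3

# diag entries y_i/y_j over positive roots (i < j), followed by their inverses
pos = [y1/y2, y1/y3, y2/y3]
h = sp.diag(*(pos + [1/r for r in pos]))

# charge conjugation swaps each root with its negative
sigma = sp.zeros(2*M)
for i in range(M):
    sigma[i, M + i] = 1
    sigma[M + i, i] = 1

T = sigma*h
print(sp.simplify(T**2) == sp.eye(2*M))  # True: (sigma h)^2 = 1
print(T.trace(), (T**2).trace())         # 0 and 6 = N(N-1)
```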
We find that all the indices are negative. See Table \ref{tab:flag}.
If we impose a smaller subgroup of $S_N\times \mathbb{Z}_2$, the indices are even more negative. Thus, we conclude that our analysis does not predict higher-spin quantum conserved currents for the flag sigma model anywhere on the parameter space.
This in particular suggests that the classical integrability of the flag sigma model on $\frac{U(3)}{U(1)^3}$ found in \cite{Bykov:2014efa} is likely to be broken at the quantum level.
\section{Conclusions and future directions}
\label{sec:conclusions}
In this paper, we systematized the analysis of Goldschmidt and Witten \cite{Goldschmidt:1980wq} by exploiting the conformal symmetry of coset models in the UV.
We introduced the index $\mathcal{I}(J)$, eqns. (\ref{eq:IndexClassical}) and (\ref{eq:defindex}), whose positivity for spin $J>2$ gives a sufficient condition for quantum integrability. We also discussed the invariance of the index under conformal perturbation theory around the UV fixed point.
We applied our formalism in several examples and found the following results:
\begin{enumerate}
\item The $\mathbb{C}\mathbb{P}^{1}$ model (Section \ref{sec:cp1}) is integrable at $\theta=0$ and $\theta=\pi$, where there is a $\mathbb{Z}_2$ charge conjugation symmetry, since $\mathcal{I}(4)$ and $\mathcal{I}(6)$ are positive. On the other hand, without imposing the extra $\mathbb{Z}_2$ symmetry, the indices are all non-positive, consistent with the standard lore that the $\mathbb{C}\mathbb{P}^1$ model is not integrable away from $\theta=0,\pi$.
\item The indices for the $\mathbb{C}\mathbb{P}^{N-1}$ model (Section \ref{sec:cpn}) with $N\geq 3$ are all non-positive, consistent with the fact that they are not quantum integrable \cite{Abdalla:1980jt,Gomes:1982qh}.
\item For the $O(N)$ model (Section \ref{sec:on} and Table \ref{tab:on}), we found that $\mathcal{I}(4)=\mathcal{I}(6)=1$ (with a $\mathbb{Z}_2$ symmetry), thereby establishing the existence of a spin-6 conserved current in addition to the well-known spin-4 conserved current.
\end{enumerate}
The examples above are symmetric cosets which are known to be classically integrable, but our analysis is also applicable to more general cosets.
As an example, we studied the $\frac{U(N)}{U(1)^{N}}$ flag sigma models and found that
\begin{enumerate}
\item[4.] The indices for the flag sigma models (Section \ref{sec:flag} and Table \ref{tab:flag}) are all negative even after imposing the maximum amount of discrete symmetry.
Thus it is unlikely that these models are integrable.
\end{enumerate}
We now remark on some avenues for future work.
As demonstrated in the example of the $\mathbb{C}\mathbb{P}^1$ model, discrete symmetry plays an important role for quantum integrability.
However, our analysis is not sensitive to potential 't Hooft anomalies, which can have consequences for integrable flows.
For example, while the $\mathbb{CP}^1$ model has $O(3)=SO(3) \rtimes \mathbb{Z}_2$ global symmetry both at $\theta=0$ and $\theta=\pi$, the 't Hooft anomalies are different at these two points.
At $\theta=0$, there is no anomaly, while at $\theta=\pi$, there is a mixed anomaly between $SO(3)$ and the $\mathbb{Z}_2$ charge conjugation symmetry \cite{Gaiotto:2017yup,Metlitski:2017fmd,Ohmori:2018qza}.
Relatedly, the IR phases at $\theta=0$ and at $\theta=\pi$ are different.
At $\theta=0$, the IR is trivially gapped, while at $\theta=\pi$, the IR phase is gapless and is described by the $SU(2)_1$ WZW model which captures the mixed anomaly.
One potential avenue to incorporate the information from 't Hooft anomalies into our index would be to interpret it as a torus partition function (possibly with symmetry lines inserted), whose modular transformation generally depends on the 't Hooft anomaly (see, for example, \cite{Freed:1987qk,Numasawa:2017crf,Lin:2019kpn}).
Our analysis can be extended to supersymmetric theories and theories with fermions.
For instance, it is known that the $\mathbb{CP}^{N-1}$ models can be made quantum integrable by coupling them to fermions \cite{Gomes:1982qh,Basso:2012bw}, and it would be interesting to see if the same is true for the flag sigma models.
Using the idea developed in this paper, one can also analyze ``fine-tuned'' quantum integrability: Some theories \cite{Basso:2012bw} can be made quantum integrable after tuning the coefficients for marginal operators.
This can be diagnosed by computing a ``refined'' index
\begin{align}
\mathcal{I}_{r}(J):=\mathcal{I}(J)+c(2,0)\,.
\end{align}
Here $c(2,0)$ is the number of marginal primary operators in the UV.
Unlike $\mathcal{I}(J)$ discussed in the paper, $\mathcal{I}_r(J)>0$ is not a sufficient condition for quantum integrability, but having $\mathcal{I}_{r}(J) > 0$ will make it more likely for quantum integrability to be achieved at some point in parameter space.
It should also be possible to extend our analysis to deformations of sigma models which partially break the global $G$ symmetry.
One famous example is the sausage model \cite{Fateev:1992tk} (see \cite{Hoare:2019ark} for a recent discussion), which is an integrable deformation of the $O(3)$ model.
The integrability of such models can be analyzed by generalizing our computation to operators which are not invariant under $G$.
Finally, it would be interesting if one can generalize our analysis to superstring sigma models and find new integrable backgrounds.\footnote{The classification of classically integrable backgrounds was performed in \cite{Zarembo:2010sg}. See also \cite{Wulff:2017lxh,Wulff:2017vhv,Wulff:2019tzh} for discussions on quantum integrability of string backgrounds based on factorized scattering.}
\subsection*{Acknowledgments}
We would like to thank Z. Komargodski, K. Ohmori, P. Orland, N. Seiberg, E. Witten and M. Yamazaki for useful conversations.
We thank B. Basso and K. Zarembo for comments on a draft.
SK is supported by DOE grant number DE-SC0009988.
RM is supported by US Department of Energy grant No.\ DE-SC0016244.
The work of SHS is supported by the National Science Foundation grant PHY-1606531 and by the Roger Dashen Membership.
This work benefited from the 2019 Pollica summer workshop, which was supported in part by the Simons Foundation (Simons Collaboration on the Non-Perturbative Bootstrap) and in part by the INFN.
SHS is grateful for the hospitality of the Physics Department of National Taiwan University during the completion of this work.
\section{Introduction}\label{introduction}
\IEEEPARstart{T}{he} problems of clustering aim at the optimal grouping of the observed data and appear in very diverse fields
including pattern recognition and signal compression. Fuzzy c-means (FCM) \cite{dunn1973fuzzy, bezdek1973fuzzy, bezdek2013pattern, bezdek1984fcm}
and deterministic annealing clustering \cite{rose1990deterministic,rose1990statistical,rose1993constrained,beni1994least}
are the most widely used ones among the objective function-based clustering methods.
Both of them start with an attempt to alleviate the local minimum trap problem
suffered by hard c-means \cite{duda1973pattern}
and achieved better performance in most cases.
However, a solid theoretical foundation for FCM has not yet been established,
and the parameter $m$ involved seems unnatural, lacking any physical meaning.
Another crucial assumption underlying most current theory of clustering problem is that the distribution
of training samples is identical to the distribution of future test samples, but it is often violated in practice
where the distribution of future data deviates from the distribution of training data.
For example, a decision-making system is forecasting future actions in the presence of
current uncertainty and imperfect knowledge\cite{garibaldi2019need}.
In this paper, we propose a clustering model based on importance sampling
which minimizes the worst case of expected distortions under the constraint of distribution deviation.
The distribution deviation is measured by the Kullback–Leibler
divergence\cite{kullback1951information,williams1980bayesian, sadaaki1997fuzzy,ichihashi2000gaussian,
coppi2006fuzzy,ortega2013thermodynamics,genewein2015bounded,hihn2019information}
between the current distribution and a future distribution.
The proposed model is called \textit{Importance Sampling Deterministic Annealing} (ISDA for short),
and we show that fuzzy c-means is a special case of ISDA,
which gives a physical meaning to the fuzzy exponent $m$ in fuzzy c-means.
The proposed ISDA clustering algorithm aims to minimize the loss under the worst-case degradation,
and hence the resulting optimization problem is a minimax problem.
Inspired from the importance sampling method\cite{tokdar2010importance,shi2009neural,shi2009hierarchical},
we convert the constraint between the current and future distribution
to a constraint on the importance sampling weights.
The constrained minimax problem can be reformulated to an unconstrained problem using the Lagrange method.
The advantage of this reformulation is that the
resulting unconstrained optimization problem depends on the cluster centers only,
so its solution can be found by applying
quasi-Newton algorithms\cite{gill1972quasi,fletcher2013practical}.
We conduct experiments on both synthetic datasets and a real-world dataset
to validate the effectiveness of ISDA.
First, an evaluation metric called M-BoundaryDist is proposed
as a measure of how well a clustering algorithm performs
with respect to the boundary points.
M-BoundaryDist calculates the sum of distances of boundary points to the dataset centroid.
Experiment results on synthetic Gaussian datasets show that
when $T_2$ is small, the cluster centers of ISDA lie closer to the boundary points
than those of Kmeans\cite{krishna1999genetic} and FCM,
and that ISDA performs better under large distribution shifts.
Next, results on the load forecasting problem show that
ISDA performs better than Kmeans and FCM
for 9 out of 12 months on future load series.
Both synthetic and real-world examples validate the effectiveness of ISDA.
\textbf{Outline of the paper.}
\Cref{Related Work} gives a brief review of related work on fuzzy c-means and deterministic annealing clustering algorithm.
\Cref{ISDA} describes our proposed importance sampling deterministic annealing clustering model
and the algorithm to solve it.
The relationship between fuzzy c-means and ISDA is also given in this section.
\Cref{Results} conducts experiments on synthetic Gaussian datasets.
Specifically, \Cref{metric} compares Kmeans, FCM and ISDA with respect to the boundary points and
\Cref{dist-shift} compares the three clustering methods on deviated datasets.
\Cref{T2} analyzes how the temperature $T_2$ affects the ISDA clustering result.
\Cref{load-forecasting} applies ISDA on a real-world load forecasting problem and show that ISDA performs better
under most scenarios of future distribution shifts.
Finally, we conclude this paper in \Cref{conclusion}.
\section{Related Work} \label{Related Work}
Let $D =\{x_1,x_2,\cdots,x_N\}$ be a given set of $N$ points in $S$ dimensional space.
These data points are to be partitioned into $C$ clusters.
We denote the prototype of cluster $j$ as $y_j$.
$Y=\{y_1, y_2, \cdots, y_C\}$ denotes all cluster centers and
$d(x_i, y_j)$ denotes the \textbf{squared} distance between $x_i$ and $y_j$,
which is usually used as the distortion measure.
\textbf{Hard C-Means Clustering}
In clustering analysis, hard c-means assigns each data point to a single cluster\cite{menard2004non}
and aims to minimize the following objective function $F_H(Y,U)$
\begin{align}
\begin{split}
\label{eqn:HCM-uij}
\min_{Y,U} \quad & F_{H}(Y,U)=\sum_{i=1}^{N}\sum_{j=1}^{C}u_{ij}d(x_i,y_j) \\
\text{s. t.} \quad & \sum_{j=1}^{C}u_{ij}=1, 1\leq i \leq N \\
& 0 < \sum_{i=1}^{N} u_{ij} < N, 1\leq j \leq C \\
& u_{ij}\in\{0,1\}, 1\leq i \leq N, 1\leq j \leq C
\end{split}
\end{align}
where $u_{ij}$ denotes the membership of the $i$-th data point to the $j$-th cluster center
and $U=[u_{ij}]_{N \times C}$ is the partition matrix.
The objective of hard c-means is to find the optimal center $Y$ and the membership $U$.
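A minimal sketch of the standard alternating minimization for this objective (Lloyd's algorithm: assign each point to its nearest center, then recompute each center as its cluster mean; hypothetical two-blob data and squared Euclidean distortion):

```python
import numpy as np

def hard_c_means(X, Y0, n_iter=50):
    # Alternating minimization of F_H: the optimal hard u_ij assigns each
    # point to its nearest center; the optimal y_j is the cluster mean.
    Y = Y0.astype(float).copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)  # d(x_i, y_j)
        labels = d.argmin(axis=1)
        # note: a production version would also handle empty clusters
        Y = np.array([X[labels == j].mean(axis=0) for j in range(len(Y))])
    return Y, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(10, 0.5, (50, 2))])
Y, labels = hard_c_means(X, X[[0, 99]])  # one initial center per blob
print(Y.round(1))                        # centers near (0,0) and (10,10)
```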
\textbf{Fuzzy C-Means Clustering}
Fuzzy clustering is a fruitful extension of hard c-means\cite{duda1973pattern}
with various applications and is supported by cognitive evidence.
The fuzzy clustering algorithms regard each cluster as a fuzzy set
and each data point may be assigned to multiple clusters
with some degree of sharing\cite{menard2004non}.
In fuzzy c-means\cite{bezdek1984fcm}, an exponent parameter $m$ is introduced and
$u_{ij} $ is interpreted as the fuzzy membership with values in $[0,1]$
which measures the degree to which the $i$-th data point belongs to the $j$-th cluster.
The corresponding objective function $F_{FCM}(Y,U)$ and the constraints are as follows
\begin{align}
\begin{split}
\label{eqn:FCM}
\min_{Y,U} \quad & F_{FCM}(Y,U)=\sum_{i=1}^{N}\sum_{j=1}^{C} u_{ij}^{m} d(x_i,y_j)\\
\text{s. t.} \quad & \sum_{j=1}^{C} u_{ij}=1, 1\leq i \leq N \\
& 0 < \sum_{i=1}^{N} u_{ij} < N, 1\leq j \leq C \\
& u_{ij}\in [0,1], 1\leq i \leq N, 1\leq j \leq C
\end{split}
\end{align}
where $m \in [1, \infty)$ is a fuzzy exponent called the fuzzifier.
The larger $m$ is, the fuzzier the partition\cite{ichihashi2000gaussian}.
The necessary optimality conditions for the fuzzy partition matrix
$U$ is as follows\cite{bezdek2013pattern}
\begin{align}
u_{ij}=\frac{d(x_i,y_j)^{\frac{1}{1-m}}}{\sum_{k=1}^{C}d(x_i,y_k)^{\frac{1}{1-m}}},
\quad 1\leq j \leq C, \quad 1\leq i \leq N. \label{eq:FCM-Uij}
\end{align}
Substituting \eqref{eq:FCM-Uij} into \eqref{eqn:FCM}, we get
\begin{align}
R_{FCM}(Y) = \sum_{i=1}^{N} (\sum_{j=1}^{C} d(x_i,y_j)^{1\over 1-m})^{1-m}. \label{eq:FCM-reform}
\end{align}
Minimizing $R_{FCM}(Y)$ with respect to $Y$, we can get the optimal cluster prototypes.
\eqref{eq:FCM-reform} is called the reformulated criteria of FCM\cite{hathaway1995optimization}
and can be solved by commercially available software.
The function $F_{FCM}(Y,U)$ depends on both $U$ and $Y$ and the function $R_{FCM}(Y)$ depends on $Y$ only.
The aim of the reformulation is to decrease the number of variables by eliminating $U$
via the necessary optimality condition with respect to $U$.
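As a hedged sketch of this route (hypothetical data and initialization; a generic quasi-Newton solver standing in for the commercially available software mentioned above): minimize $R_{FCM}(Y)$ directly over the centers with BFGS:

```python
import numpy as np
from scipy.optimize import minimize

def R_fcm(Y_flat, X, C, m):
    # Reformulated criterion: R_FCM(Y) = sum_i ( sum_j d(x_i,y_j)^{1/(1-m)} )^{1-m}
    Y = Y_flat.reshape(C, -1)
    d = ((X[:, None, :] - Y[None, :, :])**2).sum(-1) + 1e-12  # guard against d = 0
    return ((d**(1.0/(1.0 - m))).sum(axis=1)**(1.0 - m)).sum()

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (40, 2)), rng.normal(6, 0.5, (40, 2))])
C, m = 2, 2.0                            # m = 2 is a common fuzzifier choice
Y0 = np.array([[1.0, 1.0], [5.0, 5.0]])  # hypothetical initialization
res = minimize(R_fcm, Y0.ravel(), args=(X, C, m), method='BFGS')
Y = res.x.reshape(C, 2)
print(Y.round(1))  # centers near the two blob means (0,0) and (6,6)
```

The memberships, if needed, can then be recovered from the centers via \eqref{eq:FCM-Uij}.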
\textbf{Deterministic Annealing Clustering}
The deterministic annealing clustering was derived from a statistical physical or information-theoretical view,
and finds many applications in unsupervised and supervised problems\cite{rose1998deterministic}.
Let $x$ denote a data point or source vector, $y(x)$ denote its representation cluster center,
and $d(x,y(x))$ denote the distortion measure.
For a random variable $X$ with distribution $p(x)$, the expected distortion for this representation can be written as
\begin{equation}
L = \int_x\int_y p(x,y)d(x,y)\,dy\,dx = \int_x p(x)\int_y p(y|x)d(x,y)\,dy\,dx \label{eq:Loss}
\end{equation}
where $p(x,y)$ is the joint probability distribution
and $p(y|x)$ is the association probability relating input vector $x$ and cluster center $y$.
The aim of deterministic annealing for clustering is to
minimize $L$ with respect to the conditional probability $p(y|x)$ and $y$
subject to a specified level of randomness.
The level of randomness is usually measured by the joint entropy $H(X,Y)$,
which decomposes as $H(X,Y)=H(X) + H(Y|X)$.
Since $H(X)$ is independent of the clustering,
we use the conditional entropy $H(Y|X)$ as the measure of randomness.
Therefore, the constraint becomes $H(Y|X) \leq C_0$ and
the constrained optimization problem becomes
\begin{align}
\min_{p(y|x),y} \quad L & = \int_x p(x)\int_y p(y|x)d(x,y)dxdy \label{eq:DA-L} \\
\text{s.t.} \quad & H(Y|X) \leq C_0. \label{eq:DA-constraints}
\end{align}
The above problem can be reformulated to the unconstrained optimization problem
using the Lagrange method, as shown in \eqref{eq:DA-lagrange}
\begin{align}
\min_{p(y|x), y} F & = L-T_1H(Y|X). \label{eq:DA-lagrange}
\end{align}
Here the Lagrange multiplier $T_1$ plays the role of a temperature,
which governs the level of randomness as measured by the conditional entropy.
In classical clustering problems,
the dataset $D$ is assumed to be independently drawn from $p(x)$
and the codebook $Y$ is finite. If we denote the association probability $p(y_j|x_i)$ by $u_{ij}$,
then the empirical estimate of \eqref{eq:DA-lagrange} is \eqref{eq:DA-distortion}
\begin{equation}
F_{DA}(Y,U) = \sum_{i=1}^{N}\sum_{j=1}^{C}u_{ij}d(x_i,y_j) +
T_1\sum_{i=1}^{N}\sum_{j=1}^{C}u_{ij}\log u_{ij} \label{eq:DA-distortion}
\end{equation}
then the optimization problem becomes
\begin{align}
\begin{split}
\label{eqn:DA}
\min_{U,Y} \quad & F_{DA}(Y,U) \\
\text{s.t.} \quad & \sum_{j=1}^{C} u_{ij}=1, 1\leq i \leq N \\
& 0 < \sum_{i=1}^{N} u_{ij} < N, 1 \leq j \leq C \\
& u_{ij} \in [0,1], 1\leq i \leq N ,1 \leq j \leq C.
\end{split}
\end{align}
This is known as deterministic annealing for clustering\cite{rose1990deterministic,rose1998deterministic}.
An equivalent derivation of \eqref{eqn:DA} can be obtained by
the principle of maximum entropy in which the level of expected distortion $L$
is fixed\cite{rose1998deterministic,jaynes1957information}.
Minimizing $F_{DA}(Y,U)$ with respect to $u_{ij}$
is straightforward and gives the Gibbs distribution\cite{rose1998deterministic}
\begin{equation}
u_{ij} = \frac{\exp(-\frac{1}{T_1} d(x_i, y_j))}{\sum_{k=1}^{C} \exp(-\frac{1}{T_1} d(x_i, y_k))}. \label{eq:DA-U}
\end{equation}
The corresponding minimum of $F_{DA}(Y,U)$ is obtained by
substituting \eqref{eq:DA-U} back into \eqref{eq:DA-distortion},
and is known as the reformulation of deterministic annealing for clustering\cite{zhang2003robust}:
\begin{equation}
R_{DA}(Y) = - T_1 \sum_{i=1}^{N} \log\Bigl(\sum_{j=1}^{C} \exp\bigl(-\frac{d(x_i, y_j)}{T_1}\bigr)\Bigr). \label{eq:DA-reformulation}
\end{equation}
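For illustration (not the authors' code), the Gibbs memberships \eqref{eq:DA-U} and the reformulation \eqref{eq:DA-reformulation} can be computed in a numerically stable way by shifting each row by its minimum distance:

```python
import numpy as np

def da_memberships(X, Y, T1=1.0):
    """Gibbs distribution: u_ij proportional to exp(-d_ij / T1)."""
    D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared Euclidean
    E = np.exp(-(D - D.min(axis=1, keepdims=True)) / T1)  # shift for stability
    return E / E.sum(axis=1, keepdims=True)

def da_reformulation(X, Y, T1=1.0):
    """R_DA(Y) = -T1 * sum_i log sum_j exp(-d_ij / T1)."""
    D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    Dmin = D.min(axis=1, keepdims=True)
    # stable log-sum-exp per point
    lse = -Dmin[:, 0] / T1 + np.log(np.exp(-(D - Dmin) / T1).sum(axis=1))
    return -T1 * lse.sum()
```

As $T_1 \rightarrow 0$ the memberships harden towards the nearest-center assignment, recovering HCM behavior.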
The underlying assumption of HCM, FCM and deterministic annealing clustering
is that the training data follow the same distribution as future data;
however, this may not hold in many real cases.
In the following section, we propose a new clustering algorithm,
derived from the importance sampling method, to handle this problem.
\section{Importance Sampling Deterministic Annealing} \label{ISDA}
In the proposed Importance Sampling Deterministic Annealing (ISDA) clustering method,
we assume that the observed data set is drawn from a distribution $q(x)$
and our aim is to construct a clustering algorithm for a population with unknown distribution $p(x)$.
We further assume that $p(x)$ is, instead of being completely unknown,
restricted to a class of distributions, i.e.,
\begin{equation}
\Gamma = \{p(x): KL(p(x)||q(x)) \leq C_1 \}. \label{eq:gamma}
\end{equation}
A \textit{minimax} approach is applied through minimizing the
worst-case loss restricted to this constraint.
\Cref{ISDA-model} gives a principled derivation of the minimax approach and
\Cref{ISDA-algorithm} solves the corresponding optimization problem based on its reformulation.
The derivation of our proposed approach builds heavily on the work of \cite{rose1998deterministic}.
\subsection{Principle of ISDA clustering} \label{ISDA-model}
In this section, we give a principled derivation of the minimax approach.
In our proposed algorithm, we aim to minimize the \textit{worst-case situation}
of expected distortion under the given constraints,
which is
\begin{equation}
L = \int_x\int_y p(x,y)d(x,y)dydx
\end{equation}
where $d(x,y)$ represents a \textbf{squared} distance between $x$ and $y$
for convenience and the derivation also holds for other distortion measures.
First, we find the best partition $U$ to minimize the expected distortion $L$
under the conditional entropy constraint.
The corresponding optimization problem is
\begin{align}
\begin{split}
\min_{p(y|x)} \quad & L =\int_x\int_y p(x,y)d(x,y)dydx \\
\text{s.t.} \quad & H(Y|X) \leq C_0.
\end{split}
\end{align}
Second, for a given partition $p(y|x)$, we find the $p(x)$
that maximizes the objective function, corresponding to the \textit{worst-case situation}.
However, $p(x)$ is unknown in the problem and we assume that
$p(x)$ is subject to the constraint $KL(p(x)||q(x)) \leq C_1$.
Therefore, the corresponding optimization problem becomes
\begin{align}
\begin{split}
\max_{p(x)} \min_{p(y|x)} \quad & L =\int_x\int_y p(x,y)d(x,y)dydx \\
\text{s.t.} \quad & H(Y|X) \leq C_0 \\
\quad & KL(p(x)||q(x)) \leq C_1.
\end{split}
\end{align}
Third, given the fuzzy partition $p(y|x)$ and the worst-case distribution $p(x)$,
we aim to find the best prototype $y$ which minimizes the objective function.
Then the corresponding optimization problem is
\begin{align}
\begin{split} \label{eq:L-with-constraints}
\min_{y} \max_{p(x)} \min_{p(y|x)} \quad & L=\int_x\int_y p(x,y)d(x,y)dydx \\
\text{s.t.} \quad & H(Y|X) \leq C_0 \\
\quad & KL(p(x)||q(x)) \leq C_1.
\end{split}
\end{align}
Suppose $Y=\{y_1,y_2,\cdots,y_C\}$ is a finite set and
the observed dataset $D =\{x_1,x_2,\cdots,x_N\}$ consists of $N$ i.i.d.\ samples drawn from $q(x)$.
Derived from the importance sampling method,
the constraint on $p(x)$ becomes the constraint on the importance sampling weights, which is
\begin{equation}
\Gamma = \{w(x_i): KL(w(x_i)||\{ \frac{1}{N} \}) \leq C_1 \} \label{eq:gamma-W}
\end{equation}
where $\{ \frac{1}{N} \}$ denotes the discrete uniform distribution with $N$ points.
The self-normalized importance sampling weight for $x_i$ is
$w_i = \frac{p(x_i)/q(x_i)}{\sum_{l} p(x_l)/q(x_l)}$.
The corresponding importance sampling weight vector is $W=[w_i]_{N \times 1}$ with $\sum_{i=1}^{N} w_i=1$,
which we call the importance sampling weight distribution.
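A minimal sketch of the self-normalized importance sampling weights $w_i$ (the density values below are placeholder numbers, not from the paper):

```python
import numpy as np

def self_normalized_weights(p_vals, q_vals):
    """w_i = (p(x_i)/q(x_i)) / sum_l (p(x_l)/q(x_l))."""
    r = np.asarray(p_vals) / np.asarray(q_vals)  # raw importance ratios
    return r / r.sum()                            # normalize to sum to 1
```

When $p = q$ the ratios are constant and the weights reduce to the uniform distribution $\{1/N\}$, which is why the constraint in \eqref{eq:gamma-W} is anchored at $\{1/N\}$.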
The association probability $p(y_j|x_i)$ is denoted as $u_{ij}$
and the fuzzy membership matrix is $U=[u_{ij}]_{N \times C}$.
Then the empirical estimate of $L$ is
\begin{equation}
L \approx \sum_{i=1}^{N} w_i \sum_{j=1}^{C} u_{ij} d(x_i, y_j), \label{eq:L-empirical}
\end{equation}
the empirical estimate of $H(Y|X)$ is
\begin{equation}
H(Y|X) \approx \sum_{i=1}^{N} w_i \sum_{j=1}^{C} u_{ij} \log u_{ij}, \label{eq:H(Y|X)-empirical}
\end{equation}
and the empirical estimate of $KL(p(x)\parallel q(x))$ is
\begin{align}
KL(p(x) \parallel q(x)) & \approx KL(w(x_i) \parallel \{\frac{1}{N}\}) \nonumber \\
& = \sum_{i=1}^{N} w_i \log w_i + \log N. \label{eq:KL-pq-empirical}
\end{align}
The proofs of \eqref{eq:L-empirical}, \eqref{eq:H(Y|X)-empirical} and \eqref{eq:KL-pq-empirical}
are given in Appendix A.
Then, the constrained optimization problem in \eqref{eq:L-with-constraints} can be reformulated to
the unconstrained optimization problem using the Lagrange method,
\begin{equation}
F_{ISDA}^{0}(Y,W,U) = L-T_1H(Y|X) -T_2 KL(w(x_i)||\{ \frac{1}{N} \}) \label{eq:ISDA-objective} \\
\end{equation}
where $T_1 > 0$ and $T_2 > 0$ are the temperature parameters
which govern the randomness of $U$ and $W$ respectively.
Plugging \eqref{eq:L-empirical}, \eqref{eq:H(Y|X)-empirical} and \eqref{eq:KL-pq-empirical}
back into \eqref{eq:ISDA-objective}, we get the empirical estimates of the objective function for ISDA
clustering, which is
\begin{align}
& F_{ISDA}^{0}(Y,W,U) = \sum_{i=1}^{N} w_i\{\sum_{j=1}^{C} u_{ij} d(x_i,y_j) \nonumber \\
& +T_1\sum_{j=1}^{C}u_{ij}\log u_{ij}\} -T_2\sum_{i=1}^{N} w_i \log w_i-T_2\log N. \label{eq:ISDA-empirical-0}
\end{align}
Since $T_2$ is predefined and the last term $T_2\log N$ is a constant,
we finally obtain $F_{ISDA}(Y,W,U)$ by omitting it, which is
\begin{align}
& F_{ISDA}(Y,W,U) = \sum_{i=1}^{N} w_i\{\sum_{j=1}^{C} u_{ij} d(x_i,y_j) \nonumber \\
& + T_1\sum_{j=1}^{C}u_{ij}\log u_{ij}\} - T_2\sum_{i=1}^{N} w_i \log w_i. \label{eq:ISDA-empirical}
\end{align}
Adding the constraints on the partition matrix $U$ and the importance sampling weight $W$,
the optimization problem of ISDA is as follows
\begin{align}
\min_{Y} \max_{W} \min_{U} \quad & F_{ISDA}(Y,W,U) \label{eq:ISDA} \\
\text{s.t.} \quad & \sum_{j=1}^{C} u_{ij}=1, 1 \leq i \leq N \nonumber \\
& 0 < \sum_{i=1}^{N} u_{ij} < N, 1 \leq j \leq C \label{eq:constraints-uij} \\
& u_{ij} \in [0,1], 1\leq i \leq N ,1 \leq j \leq C \nonumber \\
& \sum_{i=1}^{N}w_{i}=1, w_i \in [0,1], 1 \leq i \leq N \label{eq:constraints-wi}
\end{align}
where \eqref{eq:constraints-uij} are the constraints for the fuzzy membership $U$\cite{bezdek1984fcm}
and \eqref{eq:constraints-wi} is the constraint for the importance sampling weight $W$.
In conclusion, ISDA is an objective-function-based clustering method whose
objective function can be seen as a trade-off between
the expected distortion, the level of randomness and the distribution deviation.
When $T_2 \rightarrow 0$, the distribution shift $KL(p(x) \parallel q(x))$ can be very large,
whereas for $T_2 \rightarrow \infty$ the distribution shift must be small;
the effect of $T_2$ is further illustrated in \Cref{T2}.
\subsection{Reformulation of ISDA clustering} \label{ISDA-algorithm}
In this section, we give a reformulation of ISDA and
a corresponding optimization routine following \cite{hathaway1995optimization} to solve the problem.
We derive the membership and weight update equations from the necessary optimality
conditions for minimization of the criterion function
by differentiating $F_{ISDA}(U,W,Y)$ with respect to $U$ and $W$ and setting the derivatives to zero.
Specifically, let the Lagrange multipliers be $\{\lambda_i \}_{i=1}^{N}$ and $\lambda$;
then the Lagrange function $\mathcal{L}_{ISDA}$ becomes
\begin{align}
\mathcal{L}_{ISDA} & = \sum_{i=1}^{N} w_i\{\sum_{j=1}^{C} u_{ij} d(x_i,y_j)+
T_1\sum_{j=1}^{C}u_{ij}\log u_{ij}\} \nonumber \\
& -T_2\sum_{i=1}^{N} w_i \log w_i \nonumber \\
& - \sum_{i=1}^{N} \lambda_i (\sum_{j=1}^{C} u_{ij} -1) - \lambda (\sum_{i=1}^{N} w_i-1). \label{eq:ISDA-lagrange}
\end{align}
Setting the derivative of $\mathcal{L}_{ISDA}$ with respect to $U$ to zero,
we get the necessary optimality condition for $U$, which is
\begin{equation}
u_{ij} = \frac{\exp(-\frac{d(x_i, y_j)}{T_1})}{\sum_{k=1}^{C} \exp(-\frac{d(x_i, y_k)}{T_1})}. \label{eq:ISDA-uij}
\end{equation}
Plugging \eqref{eq:ISDA-uij} back into \eqref{eq:ISDA-lagrange}, we get the reformulation for $U$, which is
\begin{align}
R_{ISDA}(Y,W) = & -T_1 \sum_{i=1}^{N} w_i \Bigl[\log \sum_{j=1}^{C} \exp\bigl(-\frac{d(x_i, y_j)}{T_1}\bigr)\Bigr] \nonumber \\
& \quad -T_2\sum_{i=1}^{N} w_i \log w_i - \lambda (\sum_{i=1}^{N} w_i-1). \label{eq:ISDA-Y-W}
\end{align}
Setting the derivative of $R_{ISDA}(Y,W)$ with respect to $W$ to zero,
we get the necessary optimality condition for $W$, which is
\begin{equation}
w_i = \frac{[\sum_{j=1}^{C} \exp(-\frac{d(x_i, y_j)}{T_1})]^{-\frac{T_1}{T_2}}}
{\sum_{l=1}^{N} [\sum_{j=1}^{C} \exp(-\frac{d(x_l, y_j)}{T_1})]^{-\frac{T_1}{T_2}}}.\label{eq:ISDA-wi}
\end{equation}
Substituting \eqref{eq:ISDA-wi} into \eqref{eq:ISDA-Y-W}, we get the reformulation for $U$ and $W$, which is
\begin{equation}
R_{ISDA}(Y) = T_2 \log\Bigl(\sum_{l=1}^{N} \Bigl[\sum_{j=1}^{C} \exp\bigl(-\frac{d(x_l, y_j)}{T_1}\bigr)\Bigr]^{-\frac{T_1}{T_2}}\Bigr). \label{eq:ISDA-Y}
\end{equation}
We call $R_{ISDA}(Y)$ the reformulation function of $F_{ISDA}(Y,W,U)$
and the minimization of $R_{ISDA}(Y)$ with respect to $Y$ is equivalent to
the min-max-min of $F_{ISDA}(Y,W,U)$ with respect to $Y,W,U$.
Therefore, finding the solution to ISDA clustering becomes minimization of
$R_{ISDA}(Y)$ with respect to $Y$.
The proofs of \Crefrange{eq:ISDA-uij}{eq:ISDA-Y} are shown in Appendix B.
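The necessary conditions \eqref{eq:ISDA-uij} and \eqref{eq:ISDA-wi} and the reformulation \eqref{eq:ISDA-Y} can be evaluated jointly. The sketch below (our illustration, not the authors' code) uses the log-sum-exp trick for numerical stability and assumes the squared Euclidean distance:

```python
import numpy as np

def isda_quantities(X, Y, T1=1.0, T2=0.5):
    """Return memberships U, importance weights W and R_ISDA(Y)."""
    D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)      # (N, C)
    # stable log sum_j exp(-d_ij / T1), one value per point
    Dmin = D.min(axis=1, keepdims=True)
    lse = -Dmin[:, 0] / T1 + np.log(np.exp(-(D - Dmin) / T1).sum(axis=1))
    U = np.exp(-(D - Dmin) / T1)
    U /= U.sum(axis=1, keepdims=True)                        # eq. (ISDA-uij)
    logw = -(T1 / T2) * lse                                  # unnormalized log w_i
    W = np.exp(logw - logw.max())
    W /= W.sum()                                             # eq. (ISDA-wi)
    R = T2 * (logw.max() + np.log(np.exp(logw - logw.max()).sum()))
    return U, W, R                                           # R = eq. (ISDA-Y)
```

Note that $w_i$ is large precisely for points with small $\sum_j \exp(-d(x_i,y_j)/T_1)$, i.e.\ points far from all centers, which is how the worst case is emphasized.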
\textit{Remark:}
ISDA can be seen as a two-level statistical physical model.
For the first system, for a given $x_i$,
if we regard $d(x_i,y_j)$ as the energy for the prototype $y_j$,
then
\begin{equation}
\sum_{j=1}^{C} u_{ij} d(x_i,y_j) + T_1\sum_{j=1}^{C}u_{ij}\log u_{ij}
\end{equation}
becomes the Helmholtz free energy with the temperature $T_1$\cite{rose1998deterministic}.
In bounded rationality theory\cite{genewein2015bounded,ortega2015information},
$-T_1 \log \sum_{j=1}^{C} \exp(-\frac{d(x_i,y_j)}{T_1})$ is called the certainty equivalence.
For the second system, if we regard $\log [\sum_{j=1}^{C} \exp(-\frac{d(x_i, y_j)}{T_1})]^{T_1}$
as the energy for $x_i$, then
\begin{equation}
- \sum_{i=1}^{N} w_i \log \Bigl[\sum_{j=1}^{C} \exp\bigl(-\frac{d(x_i, y_j)}{T_1}\bigr)\Bigr]^{T_1} -T_2 \sum_{i=1}^{N} w_i \log w_i
\end{equation}
becomes the negative Helmholtz free energy with the temperature $T_2$.
\subsection{Fuzzy-ISDA}
In this section, we use the \textbf{logarithmic} transformation\cite{sadaaki1997fuzzy} of the distortion $d(x,y)$ as the distortion measure
and call the resulting ISDA model Fuzzy-ISDA.
The expected logarithmic distortion is
\begin{equation}
L^{Fuzzy} = \int_x\int_y p(x,y)\log d(x,y)dydx.
\end{equation}
Similarly in ISDA, the corresponding optimization problem of Fuzzy-ISDA is
\begin{align}
\begin{split} \label{eq:L-log-with-constraints}
\min_{y} \max_{p(x)} \min_{p(y|x)} \quad & L^{Fuzzy} = \int_x\int_y p(x,y)\log d(x,y)dydx \\
\text{s.t.} \quad & H(Y|X) \leq C_0 \\
\quad & KL(p(x)||q(x)) \leq C_1.
\end{split}
\end{align}
The empirical estimation of the reformulation of Fuzzy-ISDA with respect to $U$ and $W$ is as follows
\begin{equation}
R_{ISDA}^{Fuzzy}(Y) =T_2 \log \Bigl(\sum_{i=1}^{N} \bigl(\sum_{j=1}^{C} d(x_i, y_j)^{-\frac{1}{T_1}}\bigr)^{-\frac{T_1}{T_2}}\Bigr). \label{eq:ISDA-Y-log}
\end{equation}
Let $T_1=m-1$, $T_2=1$, then \eqref{eq:ISDA-Y-log} becomes
\begin{align}
R_{ISDA}^{Fuzzy}(Y)= \log \Bigl(\sum_{i=1}^{N} \bigl(\sum_{j=1}^{C} d(x_i, y_j)^{\frac{1}{1-m}}\bigr)^{1-m}\Bigr).\label{eq:ISDA-FCM}
\end{align}
Comparing \eqref{eq:ISDA-FCM} with the reformulation function of FCM
\begin{equation}
R_F(Y)=\sum_{i=1}^{N}\Bigl(\sum_{j=1}^{C}d(x_i,y_j)^{\frac{1}{1-m}}\Bigr)^{1-m} \label{eq:FCM-reformulation}
\end{equation}
we can see that the minimization of $R_{F}(Y)$ is equivalent to
the minimization of $R_{ISDA}^{Fuzzy}(Y)$ with respect to $Y$.
Finally, we obtain the following theorem which
\textbf{reveals the relationship between fuzzy clustering and ISDA clustering}.
\begin{theorem} \label{ISDA-FCM}
The fuzzy c-means is a special case of ISDA clustering in which distortion is measured by $log d(x_i,y_j)$
and the parameters $T_1$, $T_2$ are set as $T_1=m-1$, $T_2=1$.
\end{theorem}
Therefore, the fuzzy exponent $m=T_1+1$ in fuzzy c-means can be interpreted
as a recalibration of the temperature in a thermodynamic system.
\autoref{ISDA-FCM} reveals a deep relationship between fuzzy c-means,
thermodynamics\cite{rose1998deterministic} and information theory\cite{genewein2015bounded}.
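This equivalence can also be checked numerically. The following sketch (assuming squared Euclidean distances and arbitrary random data) verifies that $R_{ISDA}^{Fuzzy}(Y) = \log R_{F}(Y)$ when $T_1=m-1$ and $T_2=1$:

```python
import numpy as np

def r_fcm(X, Y, m=2.0):
    """Reformulated FCM criterion R_F(Y)."""
    D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return ((D ** (1.0 / (1.0 - m))).sum(axis=1) ** (1.0 - m)).sum()

def r_isda_fuzzy(X, Y, T1, T2):
    """Fuzzy-ISDA reformulation with logarithmic distortion."""
    D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    inner = (D ** (-1.0 / T1)).sum(axis=1)
    return T2 * np.log((inner ** (-T1 / T2)).sum())

rng = np.random.default_rng(0)
X, Y, m = rng.normal(size=(20, 2)), rng.normal(size=(3, 2)), 2.0
# With T1 = m-1 and T2 = 1, R_ISDA^Fuzzy(Y) = log R_FCM(Y)
assert np.isclose(r_isda_fuzzy(X, Y, T1=m - 1, T2=1.0), np.log(r_fcm(X, Y, m)))
```

Since $\log$ is monotone, both criteria share the same minimizers in $Y$.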
\subsection{Algorithm} \label{algorithm}
In this section, we present the algorithm used to solve \eqref{eq:ISDA-Y}.
Inspired by \cite{hathaway1995optimization}, we use the BFGS\cite{fletcher2013practical} algorithm
of \textit{fminunc} in the MATLAB Optimization Toolbox\cite{MatlabOTB} to find the minimum of the unconstrained optimization problem.
The corresponding $U$ and $W$ are obtained through \eqref{eq:ISDA-uij} and \eqref{eq:ISDA-wi}.
The initial cluster centers are uniformly sampled from the domain of the training dataset $\mathcal{X}$.
$U$ and $W$ are sampled from the standard uniform distribution
and standardized according to \eqref{eq:constraints-uij} and \eqref{eq:constraints-wi} respectively.
The details of the ISDA clustering algorithm are as follows.
\[
\left[ \begin{array}{l}
Inputs:X, C, T_1, T_2 \\[1ex]
Outputs: U, W, Y
\end{array} \right]
\]
\begin{enumerate}
\item Sample initial $U$ and $W$ from the standard uniform distribution
and normalize them so that
\eqref{eq:constraints-uij} and \eqref{eq:constraints-wi} are satisfied.
Choose $C$ centers uniformly at random from $\mathcal{X}$.
\item Use \textit{fminunc} in the MATLAB Optimization Toolbox to obtain $y_j$
until a given stopping criterion is satisfied.\\
Apply \eqref{eq:ISDA-uij} to compute $u_{ij}$. \\
Apply \eqref{eq:ISDA-wi} to compute $w_i$.
\end{enumerate}
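As a rough Python analogue of this routine (our sketch, using SciPy's BFGS in place of \textit{fminunc}; the toy dataset and settings below are illustrative only):

```python
import numpy as np
from scipy.optimize import minimize

def r_isda(y_flat, X, C, T1, T2):
    """Reformulation R_ISDA(Y), eq. (ISDA-Y), with squared Euclidean d."""
    Y = y_flat.reshape(C, -1)
    D = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    Dmin = D.min(axis=1, keepdims=True)
    lse = -Dmin[:, 0] / T1 + np.log(np.exp(-(D - Dmin) / T1).sum(axis=1))
    logw = -(T1 / T2) * lse
    return T2 * (logw.max() + np.log(np.exp(logw - logw.max()).sum()))

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
C, T1, T2 = 2, 1.0, 0.5
y0 = X[rng.choice(len(X), C, replace=False)].ravel()   # random init from data
res = minimize(r_isda, y0, args=(X, C, T1, T2), method="BFGS",
               options={"gtol": 1e-6})
Y_opt = res.x.reshape(C, -1)                            # optimized prototypes
```

Once $Y$ is found, $U$ and $W$ follow in closed form from \eqref{eq:ISDA-uij} and \eqref{eq:ISDA-wi}.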
\section{Numerical Results} \label{Results}
In this section, we conduct numerical experiments to show the effectiveness
of our proposed algorithm and analyze its performance.
Specifically, \Cref{metric} shows that ISDA centers are closer to the boundary points
which are used to measure the worst-case scenarios.
\Cref{T2} analyzes how the temperature $T_2$ affects the ISDA results.
\Cref{dist-shift} shows that ISDA performs better compared with Kmeans and FCM
under large future distribution shifts.
\textbf{Dataset}
In this section, we use the following synthetic dataset unless otherwise specified.
The dataset contains three clusters and the data points in each cluster are normally
distributed over a two-dimensional space.
The three means and covariance matrices
are (1, 0), (-0.578,-1), (-0.578, 1) and
$\begin{pmatrix}
1.0 & 0.0 \\
0.0 & 0.3
\end{pmatrix}$,
$\begin{pmatrix}
0.475 & 0.303 \\
0.303 & 0.825
\end{pmatrix}$,
$\begin{pmatrix}
0.475 & -0.303 \\
-0.303 & 0.825
\end{pmatrix}$.
The default number of points in each cluster is 200.
This dataset is called the \textit{default} dataset in this paper.
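The default dataset can be reproduced, for example, as follows (the random seed is arbitrary):

```python
import numpy as np

# three Gaussian clusters with the stated means and covariance matrices
means = [(1.0, 0.0), (-0.578, -1.0), (-0.578, 1.0)]
covs = [[[1.0, 0.0], [0.0, 0.3]],
        [[0.475, 0.303], [0.303, 0.825]],
        [[0.475, -0.303], [-0.303, 0.825]]]

rng = np.random.default_rng(0)
X = np.concatenate([rng.multivariate_normal(m, c, size=200)
                    for m, c in zip(means, covs)])       # 200 points per cluster
labels = np.repeat(np.arange(3), 200)                    # ground-truth labels
```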
\textbf{Experiment settings}
We follow the python package scikit-learn\cite{scikit-learn} for the implementation
of Kmeans, using the initialization proposed in Kmeans++\cite{arthur2006k}, and use
\cite{dias2019fuzzy} for the implementation of FCM.
We use the commonly chosen $m=2$ in fuzzy clustering as the default value in all compared FCM models.
The effect of $T_1$ is analyzed in detail in \cite{rose1998deterministic} and it behaves similarly in ISDA.
Therefore, we set $T_1=1.0$ as the default value in all ISDA models.
For the implementation of ISDA,
we use the squared Euclidean distance as the distance measure and adopt
the default MATLAB optimality tolerance $ \Delta F_{ISDA}(Y) \leq 10^{-6}$
as the stopping criterion.
\subsection{Boundary Points} \label{metric}
\textbf{M-BoundaryDist}
In this paper, we propose a metric called M-BoundaryDist
as a measure of how well a clustering algorithm performs
with respect to the boundary points.
The boundary points are used to measure the worst-case scenarios.
First, we define the \textit{centroid} of a dataset as the mean of the dataset
over each dimension.
The \textit{boundary points} of the dataset are the points farthest from
the centroid of the dataset.
We denote the centroid of the dataset $D$ as $D_{centroid}$ and
the $M$ boundary points as \textit{M-BoundaryPoints}.
Suppose the boundary points assigned to cluster-$j$ are denoted as $x^{j}_1, \ldots ,x^{j}_{c_j}$,
where $c_j$ is the number of boundary points assigned to cluster-$j$
and $y_j$ represents the cluster center.
Next, M-BoundaryDist is defined as follows
\begin{align*}
\text{M-BoundaryDist} = \sum_{j=1}^{C} \sum_{m=1}^{c_j} d(x^{j}_{m},y_j).
\end{align*}
Clearly, $\sum_{j=1}^{C}c_{j}=M$.
When $M=1$, the boundary point is called MaxBoundaryPoint and the corresponding
distance is called MaxBoundaryDist.
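A minimal sketch of this metric; since the text does not spell out how boundary points are assigned to clusters, we assume nearest-center assignment here:

```python
import numpy as np

def m_boundary_dist(X, Y, M=10):
    """Sum of squared distances from the M points farthest from the
    dataset centroid to their (nearest) cluster centers."""
    centroid = X.mean(axis=0)
    far = np.argsort(((X - centroid) ** 2).sum(-1))[-M:]   # M boundary points
    B = X[far]
    D = ((B[:, None, :] - Y[None, :, :]) ** 2).sum(-1)     # (M, C)
    assign = D.argmin(axis=1)                              # nearest-center assignment
    return D[np.arange(M), assign].sum()
```

With `M=1` this reduces to MaxBoundaryDist.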
\begin{figure}
\begin{center}
\includegraphics[width=.9\linewidth]{ISDA-MBoundaryPoints.pdf}
\end{center}
\caption{Fuzzy-ISDA ($T_2=0.1$) clustering result of
four synthetic Gaussian datasets with 2,3,4,6 clusters.
The data points are colored under the Fuzzy-ISDA clustering result.}
\label{fig:ISDA-MBoundaryPoints}
\end{figure}
\autoref{fig:ISDA-MBoundaryPoints}(a),(b),(c),(d) show the Fuzzy-ISDA ($T_2=0.1$) clustering results
of four synthetic Gaussian datasets with 2, 3, 4 and 6 clusters respectively.
The details of the datasets are in Appendix C.
The figure shows the dataset centroids, 10-BoundaryPoints,
true centers and Fuzzy-ISDA clustering centers.
\autoref{fig:ISDA-MBoundaryPoints} shows that the Fuzzy-ISDA centers are closer
to the boundary points of the dataset than the true centers.
\subsection{Effect of $T_2$} \label{T2}
\begin{figure*}
\begin{center}
\includegraphics[width=.9999\linewidth]{boundaryPoints_clusterCenter_diffT2.pdf}
\end{center}
\caption{Results of Fuzzy-ISDA, ISDA, FCM and Kmeans centers as $T_2$ changes from 0.1 to 1.0.
The training dataset is colored according to the Fuzzy-ISDA clustering results.
$-\sum_{i=1}^{N} w_i \log w_i$ measures the entropy of the importance sampling weights of Fuzzy-ISDA.}
\label{fig:diffT2}
\end{figure*}
\begin{table}
\caption{\label{tab:ISDA-T2}
Comparison of MaxBoundaryDist of Fuzzy-ISDA, ISDA, FCM and Kmeans under different $T_2$.
MBD represents MaxBoundaryDist and Entropy represents $-\sum_{i=1}^{N} w_i \log w_i$.
``\textit{Fuzzy-}'' means Fuzzy-ISDA. ``\textit{--}'' means not available.}
\centering
\begin{tabular}{ lccccc }
\hline
Model&$T_2$& Fuzzy-Entropy & Fuzzy-MBD & Entropy & MBD \\
\hline
Kmeans&--&--&--&--&8.89\\
FCM &--&--&--&--&8.31\\
ISDA&0.1&4.19&2.97&2.97&3.53\\
ISDA&0.2&4.44&4.03&3.50&3.69\\
ISDA&0.3&4.69&5.26&3.92&3.87\\
ISDA&0.4&5.02&6.29&4.30&4.06\\
ISDA&0.5&5.32&6.94&4.63&4.26\\
ISDA&0.6&5.55&7.38&4.91&4.46\\
ISDA&0.7&5.71&7.70&5.16&4.67\\
ISDA&0.8&5.83&7.95&5.36&4.87\\
ISDA&0.9&5.92&8.15&5.52&5.06\\
ISDA&1.0&5.99&8.31&5.65&5.25\\
ISDA&1.5&6.17&8.75&6.04&6.06\\
ISDA&2.0&6.25&8.80&6.19&6.65\\
\hline
\end{tabular}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=.99\linewidth]{Dist-W-T2-eps-converted-to.pdf}
\end{center}
\caption{Comparison of weight distributions under different $T_2$.
The annotation shows the maximum weights $w_i$ under different $T_2$.
The dashed lines zoom in the weight distributions between [0,0.01].}
\label{fig:Dist-W-T2}
\end{figure}
In this section, we analyze the effect of the temperature $T_2$.
\autoref{fig:diffT2} displays the ISDA clustering result under different $T_2$ together with FCM and Kmeans.
\autoref{tab:ISDA-T2} compares MaxBoundaryDist under different $T_2$.
\autoref{fig:Dist-W-T2} shows the weight distributions among different $T_2$.
\autoref{fig:diffT2} compares clustering centers of Fuzzy-ISDA, ISDA, FCM and Kmeans under different $T_2$.
The figure shows that as $T_2$ gets smaller, the Fuzzy-ISDA centers (red points)
clearly move towards the boundary points.
When $T_2$ changes from 1.0 to 0.1,
the cluster center of the green points moves to the upper left,
the cluster center of the orange points moves to the lower left and
the cluster center of the light blue points moves to the right.
The cluster centers of Fuzzy-ISDA and ISDA are closer to the boundary points
compared with FCM and Kmeans.
Meanwhile, \autoref{tab:ISDA-T2} compares numeric results of MaxBoundaryDist under different $T_2$.
As $T_2$ gets smaller, MaxBoundaryDist becomes smaller in both models.
When $T_2\in [0.1, 1.0]$, MaxBoundaryDist of Fuzzy-ISDA is smaller than that of Kmeans and FCM.
This observation shows that Fuzzy-ISDA performs better than Kmeans and FCM in terms of
distances to MaxBoundaryPoint when $T_2$ is small.
Moreover, \autoref{fig:diffT2}(a) shows that
the centers of Fuzzy-ISDA($T_1=1,T_2=1$) overlap with the centers of FCM($m=2$)
and \autoref{tab:ISDA-T2} shows MaxBoundaryDist of Fuzzy-ISDA($T_1=1,T_2=1$)
is equal to MaxBoundaryDist of FCM($m=2$).
This observation validates the result in \autoref{ISDA-FCM}.
Since the performance of Fuzzy-ISDA is more stable than that of ISDA,
we use Fuzzy-ISDA as the default model in the following experiments.
Then, we analyze how $T_2$ affects the weight distributions in Fuzzy-ISDA.
\autoref{fig:Dist-W-T2} compares the weight distributions under different $T_2$.
The figure shows that smaller $T_2$ leads to a more sharply peaked weight distribution
while larger $T_2$ leads to a broader weight distribution.
The maximum weights under $T_2=0.3$, $T_2=0.5$ and $T_2=0.7$ in Fuzzy-ISDA are
0.121, 0.053 and 0.027 respectively.
This is because as $T_2 \rightarrow 0$, the divergence $KL(w(x_i) \parallel \{\frac{1}{N}\})$
can be very large; in other words, smaller $T_2$ leads to a more sharply peaked distribution.
Numeric results in \autoref{tab:ISDA-T2} also show that as $T_2$ gets smaller,
the entropy $-\sum_{i=1}^{N} w_i \log w_i$ gets smaller.
\subsection{Distribution Shift} \label{dist-shift}
In previous sections, we validated that the centers of Fuzzy-ISDA are closer to the boundary points
than those of FCM and Kmeans when $T_2$ is small.
In this section, we mimic possible future distribution shifts by generating shifted Gaussians
and show that Fuzzy-ISDA performs better when the distribution shift is large.
The distance between the original and the shifted Gaussian distributions is calculated by the KL divergence.
Suppose there are two multivariate Gaussian distributions $\mathcal{N} (\mu_1, \Sigma_1)$
and $\mathcal{N} (\mu_2, \Sigma_2)$, the KL divergence between the above two distributions
is defined as follows\cite{duchi2007derivations}
\begin{align*}
\medmath{\text{KL-Dist} = \frac{1}{2} \bigl( \log \frac{\det \Sigma_2}{\det \Sigma_1} - n + \mathrm{tr}(\Sigma_2^{-1} \Sigma_1)
+ (\mu_2 - \mu_1)^{T} \Sigma_2^{-1} (\mu_2 - \mu_1)\bigr)}
\end{align*}
where $n$ is the number of dimensions of the data.
Two types of distribution shift are considered here: the first is a translation of the Gaussian mean
and the second is a scaling of the Gaussian covariance matrix.
For mean translation, a shifted distribution is generated from a new mean under the same covariance matrix.
The new means are selected evenly on the circumference of the circle centered at the original mean $(a,b)$
with a radius of $R$. Here, we call $R$ the shifted mean distance
and larger $R$ implies larger distribution shifts.
The polar coordinates of the circle are defined as
$x = R \cos(\phi) + a$ and $y = R \sin(\phi) + b$ where $\phi \in [0, 2\pi]$.
In this experiment, 13 equiangularly spaced points are selected;
therefore the three Gaussians in the default dataset lead to $13 \times 13 \times 13=2197$ shifted Gaussian distributions in total.
For the covariance scale, the shifted distribution is generated from a scaled covariance matrix
by simply multiplying a scaling factor under the same mean.
These 13 scaling factors ($S$) are chosen from \{0.5,0.6,0.7,0.8,0.9,1.0,1.5,2,2.5,3,3.5,4,4.5\}.
The total KL divergence between the original and the new dataset is calculated by summing
the three KL divergences together, which is
$\text{KL-Dist} = \text{KL-Dist}_1 + \text{KL-Dist}_2 + \text{KL-Dist}_3$.
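The closed-form Gaussian KL divergence above can be computed directly; a small sketch:

```python
import numpy as np

def gaussian_kl(mu1, S1, mu2, S2):
    """KL( N(mu1, S1) || N(mu2, S2) ) for multivariate Gaussians."""
    n = len(mu1)
    S1, S2 = np.asarray(S1), np.asarray(S2)
    S2inv = np.linalg.inv(S2)
    diff = np.asarray(mu2) - np.asarray(mu1)
    return 0.5 * (np.log(np.linalg.det(S2) / np.linalg.det(S1)) - n
                  + np.trace(S2inv @ S1) + diff @ S2inv @ diff)
```

The total KL-Dist of a shifted dataset is then the sum of `gaussian_kl` over the three cluster pairs.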
\begin{figure}
\begin{center}
\includegraphics[width=.9999\linewidth]{dist-shift-eps-converted-to.pdf}
\end{center}
\caption{Original and shifted datasets under maximum and minimum KL divergence.
(a) and (b) show maximum and minimum distribution shifts under mean translation
where $R$ represents the shifted distance.
(c) and (d) show maximum and minimum distribution shifts under scaled covariance where
$S1$, $S2$ and $S3$ represent the scaling factors.
(d) shows the same distribution under a different random seed since all three covariance scaling factors are 1.0.
``A-'' represents WithinClusterDist and
``A-diff'' represents the difference between WithinClusterDist between Kmeans and Fuzzy-ISDA($T_2=0.1$).}
\label{fig:dist-shift}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.9999\linewidth]{dist-shift-scatter-eps-converted-to.pdf}
\end{center}
\caption{Comparison of WithinClusterDist difference against KL divergence
between the original and shifted distributions.
X-axis represents the KL divergence between the original distribution
and the shifted distributions.
Y-axis in (a) represents Kmeans's WithinClusterDist minus Fuzzy-ISDA($T_2=0.1$)'s WithinClusterDist.
Y-axis in (b) represents FCM's WithinClusterDist minus Fuzzy-ISDA($T_2=0.1$)'s WithinClusterDist.
The black dotted horizontal line represents WithinClusterDist difference equals zero.
In the legend, $R$ represents shifted mean distance,
\textit{pos} and \textit{neg}
represent the ratio of positive and negative distance difference respectively.}
\label{fig:dist-shift-scatter}
\end{figure}
In this experiment,
we first fit three models (Fuzzy-ISDA($T_2=0.1$), FCM and Kmeans) on the default dataset,
then generate new datasets under the shifted distributions,
and finally predict on the shifted datasets and calculate the within-cluster sum of distances,
denoted as \textbf{WithinClusterDist}.
The metric WithinClusterDist is used to measure
\textit{how well a clustering model performs under a future distribution shift},
which is calculated by summing all distances within each cluster.
Specifically, suppose the new data points in the shifted distribution
assigned to the cluster-$j$ are denoted as ${x^{*}}^{j}_1, \ldots ,{x^{*}}^{j}_{A^{*}_j}$,
where $A^{*}_j$ denotes the number of points in cluster-$j$ and $y_j$
represents the cluster center of the original dataset,
WithinClusterDist is defined as follows,
\begin{align*}
\text{WithinClusterDist} = \sum_{j=1}^{C} \sum_{m=1}^{A^{*}_j}d({x^{*}}^{j}_{m}, y_j).
\end{align*}
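A minimal sketch of WithinClusterDist, assuming each shifted point is assigned to its nearest original center (squared Euclidean distance):

```python
import numpy as np

def within_cluster_dist(X_new, Y):
    """Assign shifted points to the nearest original center and sum
    the squared distances within each cluster."""
    D = ((X_new[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # (N*, C)
    return D.min(axis=1).sum()                              # sum over all clusters
```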
We calculate 2197 WithinClusterDist values and show those with the maximum and minimum
KL divergence in \autoref{fig:dist-shift}.
The WithinClusterDist of the three clustering models is shown in the title of each subplot.
\autoref{fig:dist-shift}(a) and \autoref{fig:dist-shift}(b) show maximum and minimum distribution shifts under mean translations.
The difference on WithinClusterDist between Fuzzy-ISDA and Kmeans is 531.98 and 264.09 when $R=3.0$ and $R=1.5$ respectively.
\autoref{fig:dist-shift}(c) and \autoref{fig:dist-shift}(d) show maximum and minimum distribution shifts under scaled covariances.
The difference on WithinClusterDist between Fuzzy-ISDA and Kmeans is 267.20 and -523.72 when $S=4.5$ and $S=1.0$ respectively.
\autoref{fig:dist-shift} shows that
Fuzzy-ISDA performs better than FCM and Kmeans when the distribution shift is large, as in (a), (b) and (c),
while it performs worse than FCM and Kmeans when the distribution shift is small, as in (d).
Furthermore, \autoref{fig:dist-shift-scatter} compares WithinClusterDist difference
between Fuzzy-ISDA and Kmeans(FCM) against KL divergence under different shift translation factor $R$.
Within each subplot, we can see that larger $R$ leads to larger KL divergence,
which implies larger distribution shifts.
Points above zero (black dotted line) mean Fuzzy-ISDA performs better while
points below zero mean Fuzzy-ISDA performs worse.
The ratios of positive and negative WithinClusterDist difference are shown in the legend.
In (a), when $R$ equals \{1.5, 2.0, 2.5, 3.0\}, the ratios at which Fuzzy-ISDA performs better than Kmeans
are \{0.40, 0.67, 0.89, 0.98\}.
In (b), when $R$ equals \{1.5, 2.0, 2.5, 3.0\}, the ratios at which Fuzzy-ISDA performs better than FCM
are \{0.42, 0.69, 0.90, 0.99\}.
This observation shows that Fuzzy-ISDA performs better
when the distribution shift becomes larger, which validates our assumption that
Fuzzy-ISDA can \textit{do best in the worst case} where the level of ``worse''
is measured by the KL divergence.
\section{Load forecasting} \label{load-forecasting}
\begin{figure}
\begin{center}
\includegraphics[width=.9\linewidth]{month_load-eps-converted-to.pdf}
\end{center}
\caption{Normalized load in 2014 for each month on testing dataset.
X-axis represents the time index.}
\label{fig:month-load}
\end{figure}
In this section, we evaluate the properties of our proposed Fuzzy-ISDA clustering algorithm on a real-world load forecasting problem.
First, we give the outline of the method and then explain it in detail.
Following \cite{fan2006short,dong2017short,liu2018short}, we use a two-stage method.
First, three clustering models (Kmeans, FCM and Fuzzy-ISDA) are applied to separate the training days
into several clusters in an unsupervised manner.
Second, for each time stamp (96 time stamps in total),
a Support Vector Regression\cite{smola2004tutorial} model is used to fit the training data in each cluster in a supervised manner.
For each testing day, the day is first assigned to a cluster according to the trained clusters;
then, for each time stamp, the corresponding regression model of that cluster is used to predict the result.
Specifically, the load forecasting dataset \footnote{\url{http://shumo.neepu.edu.cn/index.php/Home/Zxdt/news/id/3.html}}
we use in this section is from The Ninth Electrician Mathematical Contest in Modeling in China,
which is composed of two parts: historical loads and weather conditions.
Daily load is recorded every 15 minutes, 96 records in total.
Each record time is called a time stamp in this paper.
The weather dataset consists of daily maximum, minimum and mean temperature,
humidity and rainfall. The time range is from 20120101 to 20141231.
We use 24 consecutive months as the training dataset and the following one month as the
testing dataset. For example, if the training dataset ranges from 20120201 to 20140131,
the corresponding testing dataset is from 20140201 to 20140228.
There are 12 testing datasets, from January to December.
Taking February as an example, the length of the training dataset is 731 and
the length of the testing dataset is 28; therefore
the shapes of training and testing load data are [731,96] and [28,96] respectively.
We normalize the training dataset for each time stamp by $\frac{x-x_{max}}{x_{max}-x_{min}}$.
The testing dataset is normalized using the statistics from the training dataset, $x_{max}$ and $x_{min}$.
The normalized monthly load series is shown in \autoref{fig:month-load}.
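The per-time-stamp normalization can be sketched as follows. This is a toy illustration of the formula above (note that it maps the training range onto $[-1,0]$), with the test set transformed using the training statistics; the helper names are ours:

```python
import numpy as np

def fit_minmax(train):
    """Per-time-stamp statistics from the training load matrix [days, stamps]."""
    return train.max(axis=0), train.min(axis=0)

def transform(x, x_max, x_min):
    # formula from the text: (x - x_max) / (x_max - x_min),
    # which maps the training range onto [-1, 0]
    return (x - x_max) / (x_max - x_min)

train = np.array([[10., 20.], [30., 40.]])   # toy: 2 days, 2 time stamps
x_max, x_min = fit_minmax(train)             # per-stamp statistics
train_n = transform(train, x_max, x_min)
test_n = transform(np.array([[20., 30.]]), x_max, x_min)
```

Test-set values outside the training range would fall outside $[-1,0]$, which is expected under distribution shift.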
Next, we explain the features used for clustering and regression models.
Inspired by \cite{fan2006short}, we use the following available features for clustering:
the previous day's maximum daily load, last week's average maximum daily load,
and the average of the previous two days' mean temperature.
Therefore, the shape of the training features for the clustering models is [731,3].
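One plausible construction of these three clustering features is sketched below. This is our own reading of the description: ``last week'' is interpreted as the previous seven days, and early days without a full history are left undefined:

```python
import numpy as np

def clustering_features(daily_max_load, mean_temp):
    """Per-day clustering features as described in the text:
    previous day's max load, last week's average max load,
    average of the previous two days' mean temperature."""
    n = len(daily_max_load)
    feats = np.full((n, 3), np.nan)        # days without full history stay NaN
    for d in range(7, n):                  # need a full previous week
        feats[d, 0] = daily_max_load[d - 1]
        feats[d, 1] = daily_max_load[d - 7:d].mean()
        feats[d, 2] = mean_temp[d - 2:d].mean()
    return feats

load = np.arange(10.0)      # toy daily max loads
temp = np.arange(10.0)      # toy daily mean temperatures
F = clustering_features(load, temp)
```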
Meanwhile, the regression features are historical loads
from previous \{24,25,26,48,72,96,120,144,168\} hours,
which means previous \{96, 100, 104, 192, 288, 384, 480, 576, 672\}
time stamps. This is because there are 4 records per hour.
As a result, the regression feature is of length 9.
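The lag-feature construction follows directly from the stamp offsets listed above. The sketch below is a toy illustration on a flattened load series (the names \texttt{LAGS} and \texttt{lag\_features} are ours):

```python
import numpy as np

# lags in 15-minute stamps, i.e. {24,25,26,48,72,96,120,144,168} hours x 4
LAGS = [96, 100, 104, 192, 288, 384, 480, 576, 672]

def lag_features(series, t):
    """The 9 historical-load features for target stamp index t (t >= 672)."""
    return np.array([series[t - lag] for lag in LAGS])

series = np.arange(1000.0)      # toy flattened load series, 96 stamps per day
x = lag_features(series, 700)   # features for stamp index 700
```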
In this paper, we use Support Vector Regression (SVR) as the regression model,
where the regularization parameter $SVR_{C}$ and the epsilon $SVR_{\epsilon}$ are set to 1 and 0.1,
the default values for the SVR model in the sklearn \cite{scikit-learn} package.
Finally, we explain the training routine in detail.
The training days are first separated into $C$ clusters based on the clustering features.
Then, for each cluster, we train 96 SVRs, one for each time stamp.
For each test day in the test dataset, we first predict which cluster it belongs to based on its clustering features.
Next, for each time stamp, we find its corresponding regression model and make the prediction.
Mean squared error (MSE) is used to measure the performance of the model;
a \textit{smaller MSE implies a more reasonable separation of clusters}.
In this section, we use $T_2=0.1$ as the default value in Fuzzy-ISDA models and
for each model, we use different random seeds and report the mean of 6 runs.
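The overall two-stage routine can be sketched as below. This is a simplified stand-in, not our implementation: nearest-centroid assignment replaces the trained clustering models, and ordinary least squares replaces the SVR, but the cluster-then-per-stamp-regression structure is the same:

```python
import numpy as np

def assign(X, centroids):
    """Nearest-centroid labels (stand-in for a trained clustering model)."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def fit_two_stage(day_feats, loads, centroids):
    """Stage 1: cluster the days; stage 2: one regressor per (cluster, stamp).
    Ordinary least squares stands in for the SVR used in the text."""
    labels = assign(day_feats, centroids)
    models = {}
    for c in range(len(centroids)):
        idx = labels == c
        if not idx.any():                      # skip empty clusters
            continue
        Xc = np.hstack([day_feats[idx], np.ones((idx.sum(), 1))])
        for s in range(loads.shape[1]):
            w, *_ = np.linalg.lstsq(Xc, loads[idx, s], rcond=None)
            models[(c, s)] = w
    return models

def predict(day_feat, stamp, centroids, models):
    c = assign(day_feat[None, :], centroids)[0]
    return np.append(day_feat, 1.0) @ models[(c, stamp)]

rng = np.random.default_rng(0)
day_feats = rng.normal(size=(20, 3))               # 3 clustering features/day
loads = day_feats @ rng.normal(size=(3, 4)) + 1.0  # 4 stamps, linear toy data
centroids = np.array([[-1.0, -1.0, -1.0], [1.0, 1.0, 1.0]])
models = fit_two_stage(day_feats, loads, centroids)
pred = predict(day_feats[0], 0, centroids, models)
```

In the paper's setting there are 96 stamps per day instead of 4, and $C$ centroids come from Kmeans, FCM or Fuzzy-ISDA rather than being fixed by hand.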
\begin{figure}
\begin{center}
\includegraphics[width=.9\linewidth]{max_weight-eps-converted-to.pdf}
\end{center}
\caption{Fuzzy-ISDA($T_2=0.1$) weights and the corresponding average daily load of each training day.
The model is trained with 3 clusters on February.
The top 1\% (73) weights and their corresponding daily loads are highlighted by red stars.
max{\_}W{\_}num means the selected number of largest weights.}
\label{fig:max-weight}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=.9\linewidth]{month_mse.pdf}
\end{center}
\caption{Comparison of test mean squared error of three clustering models for each month.
X-axis shows the number of clusters $C$.
Y-axis shows the test mean squared error.
Dotted lines represent means and shaded areas represent standard deviations of 6 runs.}
\label{fig:month-mse}
\end{figure}
\begin{table}
\caption{\label{tab:power-test-mse}
Fuzzy-ISDA($T_2=0.1$), FCM and Kmeans' test MSE for 12 months.
The best mean of 6 runs is reported
and the best result in each row is marked in bold.}
\centering
\small
\begin{tabular}{lccc}
\hline
Month & Kmeans & FCM & Fuzzy-ISDA$(T_2=0.1)$ \\
\hline
1 & 0.006490 & 0.006254 & \textbf{0.005157} \\
2 & 0.017547 & 0.016247 & \textbf{0.007210} \\
3 & \textbf{0.003348} & 0.003368 & 0.003398 \\
4 & 0.005413 & 0.005045 & \textbf{0.004742} \\
5 & 0.008841 & 0.009042 & \textbf{0.007786} \\
6 & 0.019318 & 0.019131 & \textbf{0.018890} \\
7 & 0.019871 & \textbf{0.017205} & 0.017815 \\
8 & 0.007177 & 0.007041 & \textbf{0.006707} \\
9 & 0.010046 & 0.010002 & \textbf{0.009837} \\
10 & 0.008517 & 0.008667 & \textbf{0.008115} \\
11 & 0.004919 & 0.005071 & \textbf{0.004880} \\
12 & 0.003682 & \textbf{0.003654} & 0.004059 \\
\hline
\end{tabular}
\end{table}
First, \autoref{fig:max-weight} shows the importance sampling weight $W$ and
the average daily loads of the training dataset.
The training dataset contains 731 days and
the Fuzzy-ISDA($T_2=0.1$) model is trained with $C=3$ clusters on February.
\autoref{fig:max-weight}(a) shows Fuzzy-ISDA weights for each day
and \autoref{fig:max-weight}(b) shows each day's average daily load.
The largest 1\% weights and their corresponding daily loads
are highlighted by red stars.
\autoref{fig:max-weight}(b) shows that daily loads with higher
weights are around the valleys, which are the data points with extreme values.
This is because the clustering features are partly based on a day's previous daily loads.
As a result, the data points around the valleys and those preceding the valleys
carry higher weights.
This observation shows that Fuzzy-ISDA clustering algorithm puts higher weights on
the data points with extreme values.
Second, we compare the performance of Kmeans, FCM($m=2$) and Fuzzy-ISDA($T_2=0.1$) on testing dataset.
For each clustering model, \autoref{tab:power-test-mse} reports the lowest test MSE
among the models trained with different numbers of clusters, $C=2,3,\dots,10$.
\autoref{tab:power-test-mse} shows that Fuzzy-ISDA($T_2=0.1$) performs better
on 9 out of 12 months (months 1, 2, 4, 5, 6, 8, 9, 10, 11) compared with Kmeans and FCM.
\autoref{fig:month-mse} shows the test MSE of these models for all 12 months and all 9 cluster settings in detail.
Since the distribution of testing dataset is different from the training dataset,
results in \autoref{fig:month-mse} and \autoref{tab:power-test-mse}
validate the effectiveness of Fuzzy-ISDA under future distribution shifts in most scenarios.
\section{\textbf{Conclusion}} \label{conclusion}
In this paper, we propose an Importance Sampling Deterministic Annealing (ISDA) clustering method,
which combines importance sampling and deterministic annealing to address
clustering problems under data distribution shift.
The objective function of ISDA is derived from an information-theoretical viewpoint and
\autoref{ISDA-FCM} reveals that FCM is a special case of ISDA and the fuzzy exponent $m$
can be interpreted as the recalibration of temperature in thermodynamic system.
This observation shows that Fuzzy c-means has a solid theoretical rationale.
Experiment results show that ISDA performs better in worst-case scenarios
compared with Kmeans and FCM on both synthetic and real-world datasets.
Besides, there are many possible applications for ISDA,
such as designing a delivery system that considers
not only economic benefits but also reachability to remote areas;
designing a recommendation system for users with few ratings; and
designing a fair face recognition system that accounts for minority groups.
Applying ISDA to these problems will be studied in our future work.
\section*{\textbf{Disclosure statement}}
No potential conflict of interest was reported by the authors.
\section*{Acknowledgments}
This work is supported by
the National Natural Science Foundation of China under Grants 61976174.
Lizhen Ji is additionally supported by the
Natural Science Basic Research Program of Shaanxi (2021JQ-055).
\bibliographystyle{IEEEtran}
\section{Introduction}
\setcounter{equation}{0}
Recently, Bergshoeff, Hohm and Townsend (BHT) proposed a parity
conserving theory of gravity in three dimensions (3D), which is defined
by adding certain curvature-squared terms to the Einstein-Hilbert
action \cite{1,2}. When the BHT gravity is linearized around the
Minkowski ground state, it is found to be equivalent to the Fierz-Pauli
theory for a free massive spin-2 field \cite{3}. Moreover, it is
ghosts-free, unitary and renormalizable \cite{4,5}. On the other hand,
the overall picture is changed when we go over to the (A)dS background,
where various dynamical properties, such as unitarity, gauge invariance
or boundary behavior, become more complex \cite{2,6,7,8,9}.
Dynamical characteristics of a gravitational theory take a particularly
clear form in the constrained Hamiltonian approach \cite{10}. Analyzing
the nature of constraints in the fully \emph{nonlinear} BHT gravity, we
discovered the special role of an extra condition \cite{11}; when
applied to a maximally symmetric solution, it takes the familiar form
$\L_0/m^2\ne -1$, where $m^2$ is the mass parameter and $\L_0$ a
cosmological constant\footnote{The canonical analysis of the BHT
gravity performed in \cite{12} refers to the case $\L_0=0$.}. The
resulting theory is found to possess two Lagrangian degrees of freedom,
in agreement with the number of massive graviton modes on the (A)dS
background \cite{2}. In the present paper, we extend our investigation
to the \emph{critical point} $\L_0/m^2=-1$ in the maximally symmetric
sector of the theory; in this case, the ground state is uniquely
determined by an effective cosmological constant \cite{2,6}. In the
linear approximation, there appears an \emph{extra gauge invariance}
which eliminates one component of the massive graviton, reducing it to
the partially massless mode \cite{13,14,15}. By comparing these results
with those obtained nonperturbatively \cite{11}, we can understand how
the canonical structure of the BHT gravity is changed in the process of
linearization. In Ref. \cite{16}, the canonical analysis of the
linearized BHT gravity is carried out only for the generic values of
the parameters.
The paper is organized as follows. In section 2, we give an account of
the linearized BHT gravity in the Lagrangian formalism. In particular,
we discuss the Lagrangian form of the extra gauge symmetry, constructed
later by the canonical methods. In section 3, we perform a complete
canonical analysis of the linearized BHT gravity around a maximally
symmetric background, assuming the critical condition $\L_0/m^2=-1$.
Then, in section 4, we classify the constraints and find a difference
in their number and type (first or second class), in comparison to the
results of the nonperturbative analysis \cite{11}. As a consequence, we
conclude that the theory exhibits a single Lagrangian degree of
freedom. In section 5, the resulting set of constraints is used to
construct the canonical generator of extra gauge symmetry. After that,
the existing Lagrangian mode can be interpreted as a partially massless
state. Finally, section 6 is devoted to concluding remarks, while
appendices contain some technical details.
We use the same conventions as in Ref. \cite{11}: the Latin indices
refer to the local Lorentz frame, the Greek indices refer to the
coordinate frame; the middle alphabet letters
$(i,j,k,...;\m,\n,\l,...)$ run over 0,1,2, the first letters of the
Greek alphabet $(\a,\b,\g,...)$ run over 1,2; the metric components in
the local Lorentz frame are $\eta_{ij}=(+,-,-)$; totally antisymmetric
tensor $\ve^{ijk}$ and the tensor density $\ve^{\m\n\r}$ are both
normalized by $\ve^{012}=1$.
\section{Linearized Lagrangian dynamics}
\setcounter{equation}{0}
Following the approach defined in our previous paper \cite{11}, we
study the BHT gravity in the framework of Poincar\'e gauge theory
\cite{17}, where the basic gravitational variables are the triad field
$b^i$ and the Lorentz connection $\om^k$ (1-forms), and the
corresponding field strengths are the torsion
$T^i=db^i+\ve^i{}_{jk}\om^j\wedge b^k$ and the curvature
$R^i=d\om^i+\frac{1}{2}\,\ve^i{}_{jk}\om^j\wedge\om^k$ (2-forms). The
underlying geometric structure corresponds to Riemann-Cartan geometry,
in which $b^i$ is an orthonormal coframe, $g:=\eta_{ij}b^i\otimes b^j$
is the metric of spacetime, and $\om^i$ is the Cartan connection. For
$T_i=0$, the geometry becomes Riemannian.
\prg{Lagrangian.} In local coordinates $x^\m$, the BHT Lagrangian
density can be written in the form \cite{11}:
\bsubeq\lab{2.1}
\be
\cL=a\ve^{\m\n\r}\left(\s b^i{_\m}R_{i\n\r}
-\frac{1}{3}\bL\ve^{ijk}b_{i\m}b_{j\n}b_{k\r}\right)
+\frac{a}{m^2}\cL_K+\ve^{\m\n\r}\frac{1}{2}\l^i{_\m}T_{i\n\r}\, .
\ee
Here, the Lagrange multiplier $\l^i{_\m}$ ensures the vanishing of
torsion and thereby, the Riemannian nature of the connection, while
$\cL_K$ is defined in terms of an auxiliary field $f^i{_\m}$ as
\be
\cL_K=\frac{1}{2}\ve^{\m\n\r}\f^i{_\m}R_{i\n\r}-b\cV_K\,,\qquad
\cV_K=\frac{1}{4}\left(f_{i\m}f^{i\m}-f^2\right)\, ,
\ee
\esubeq
where $f:=f^k{_\r}h_k{^\r}$ and $b=\det(b^i{_\m})$. Using the field
equations to eliminate $\f^i{_\m}$, one can verify that $\cL_K$ reduces
to the standard BHT form.
Introducing the notation $Q_A=(b^i{_\m},\om^i{_\m},f^i{_\m},\l^i{_\m})$,
we now consider the linearized form of the theory around a maximally
symmetric solution $\bar Q_A$, characterized by (Appendix A)
\bsubeq\lab{2.2}
\be
\barG_{ij}=\Leff\eta_{ij}\, ,\qquad \barf^i{_\m}=-\Leff\barb^i{_\m}\,,
\qquad \barl^i{_\m}=0\, , \lab{2.2a}
\ee
where $\Leff$ is the effective cosmological constant. The linearization
of the Lagrangian density \eq{2.1} is based on the expansion
\be
Q_A=\bar Q_A+\wt Q_A\,,
\ee
\esubeq
where $\wt Q_A$ is a small excitation around $\bar Q_A$. The piece
of $\cL$ quadratic in $\wt Q_A$ takes the form:
\bsubeq\lab{2.3}
\bea
\cL^{(2)}&=&a\ve^{\m\n\r}\left(2\s\tb^i{_\m}\bar\nab_\n\tom_{i\r}
+\s\ve^{ijk}\bar b^i{_\m}\tom^j{_\n}\tom^k_{\r}
-\L_0\ve_{ijk}\bar b^i{_\m}\tb^j{_\n}\tb^k{_\r}\right) \nn\\
&&+\frac{a}{m^2}\cL_K^{(2)}
+\ve^{\m\n\r}\tl^i{_\m}\left(\bar\nab_\n\tb_{i\r}
+\ve_{ijk}\tom^j{_\n}\bar b^k{_\r}\right)\, ,
\eea
where
\bea
\cL_K^{(2)}&:=&\ve^{\m\n\r}\left(\tf^i{_\m}\bar\nab_\n\tom_{i\r}
-\frac{\Leff}2 \ve_{ijk}\bar b^i{_\m}\tom^j{_\n}\tom^k{_\r}\right)
-\left(b\cV_K\right)^{(2)}\, , \nn\\
\left(b\cV_K\right)^{(2)}&:=&
\frac{\bar b}{4}\left(\eta^{ij}\bar g^{\m\n}
-\bar h^{i\m}\bar h^{j\n}\right)\tf_{i\m}\tf_{j\n}
+\frac{\bar b}{2}\Leff\left(\eta^{ij}\bar g^{\m\n}
+\bar h^{i\m}\bar h^{j\n}
-2h^{i\n}h^{j\m}\right)\tf_{i\m}\tb_{j\n} \nn\\
&&+\frac {\bar b}4\Leff^2\left(\eta^{ij}\bar g^{\m\n}-\bar
h^{i\n}\bar h^{j\m}\right)\tb_{i\m}\tb_{j\n}\,.
\eea
\esubeq
\prg{Field equations.} The variation of $\cL^{(2)}$ with respect to
$\wt Q_A=(\tb^i{_\m},\tom^i{_\m},\tf^i{_\m},\tl^i{_\m})$ yields the
linearized BHT field equations:
\bea
&&a\ve^{\m\n\r}\left(2\s\bar\nab_\n\tom_{i\r}
-2\bL\ve_{ijk}\bar b^j{}_\n\tb^k{}_\r\right)
-\frac{a}{m^2}W_i{^\m}
+\ve^{\m\n\r}\bar\nabla_\n\tl_{i\r}=0\, , \nn\\
&&\ve^{\m\n\r}\left[a\bar\nab_\n
\left(2\s\tb_{i\r}+\frac{1}{m^2}\tf_{i\r}\right)
+a\left(2\s-\frac{\Leff}{m^2}\right)\ve_{ijk}\bar b^j{_\n}\tom^k{_\r}
+\ve_{ijk}\bar b^j{}_\n \tl^k{}_\r\right] =0\,, \nn\\
&&\ve^{\m\n\r}\bar\nab_\n\tom_{i\r}
-\frac {\bar b}2\left[(\eta_{ij}\bar g^{\m\n}
-\bar h_i{^\m}\bar h_j{^\n})(\tf^j{_\n}+\Leff\tb^j{_\n})
+2\Leff(\bar h_i{^\m}\bar h_j{^\n}
-\bar h_i{^\n}\bar h_j{^\m})\tilde b^j{_\n}\right]=0\, , \nn\\
&&\ve^{\m\n\r}\left(\bar\nab_\n\tb_{i\r}
+\ve_{ijk}\tom^j{_\n}\bar b^k{_\r}\right)=0\,.\lab{2.4}
\eea
where $W_i{^\m}:=\d\left(b\cV_K\right)^{(2)}/\d\tb^i{_\m}$ takes the
form:
\bea
W_i{^\m}&=&\frac{1}{2}\Leff\bar b
\left[\left(\eta_{ij}\bar g^{\m\n}+\bar h_i{^\m}\bar h_j{^\n}
-2\bar h_i{^\n}\bar h_j{^\m}\right)\tf^j{_\n}
+\Leff(\eta_{ij}\bar g^{\m\n}
-\bar h_i{^\n}\bar h_j{^\m})\tb^j{_\n}\right]\, . \nn
\eea
Let us now focus our attention on the trace of the first field
equation, the linearized version of (A.3):
\be
\left(\s+\frac\Leff{2m^2}\right)
\bar h_i{^\m}\left(\tf^i{_\m}+\Leff\tb^i{_\m}\right)=0\,.\lab{2.5}
\ee
In the canonical approach, this relation is expected to be a
\emph{constraint}, as is the case in the nonlinear regime. However,
there is a \emph{critical condition} on parameters, defined by
$\Leff+2\s m^2=0$, for which equation \eq{2.5} is identically
satisfied. This is an important signal that the related canonical
structure of the linearized theory might be significantly changed.
Using \eq{A.5}, the critical condition can be equivalently written as
\be
\L_0/m^2=-1\, , \lab{2.6}
\ee
or as $\Leff=2\s\L_0$. The central idea of our work is to examine the
influence of this condition on the \emph{canonical structure} of the
linearized BHT massive gravity.
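As a short consistency check, note that combining the critical condition $\Leff+2\s m^2=0$ with the background relation $\Leff=2\s\L_0$ quoted above (and using $\s\ne0$) gives
$$
\Leff+2\s m^2=0 \;\Longleftrightarrow\; 2\s\left(\L_0+m^2\right)=0
\;\Longleftrightarrow\; \L_0/m^2=-1\, .
$$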
\prg{Extra gauge symmetry.} When we have a maximally symmetric
background, the critical condition \eq{2.6} implies that the massive
graviton of the linearized BHT gravity (with two helicity states)
becomes a (single) partially massless mode; simultaneously, there
appears an extra gauge symmetry in the theory. By a systematic analysis
of the related canonical structure (see section 5), we discover that
this symmetry has the following form:
\bea
\d_E \tb^i{_\m}&=&\eps\barb^i{_\m}\, , \nn\\
\d_E\tom^i{_\m}&=&
-\ve^{ijk}\bar b_{j\m}\bar h_k{^\n}\bar\nab_\n\eps\,, \nn\\
\d_E\tf^i{_\m}&=&-2\bar\nab_\m(\bar h^{i\n}\bar\nab_\n\eps)
+\Leff\eps\bar b^i{_\m}\,, \nn\\
\d_E\tl^i{_\m}&=&0\, , \lab{2.7}
\eea
where $\eps$ is an infinitesimal gauge parameter. The proof of this
statement at the level of the field equations \eq{2.4} is given in
Appendix B. Although the form of $\d_E \tf^i{_\m}$ has been known for
some time, see for instance \cite{15,2}, our result uncovers the very
root of this symmetry by specifying its action on all the fields,
including $\tb^i{_\m}$. Up to second order terms, one can rewrite the
infinitesimal gauge transformation of $\tb^i{_\m}$ in the form $\d_E
b^i{_\m}=\eps b^i{_\m}$, which looks like a Weyl rescaling. However, in
doing so, one should keep in mind that \eq{2.7} is \emph{not} the
symmetry of the full nonlinear theory, but only of its linearized
version. Note also that the Weyl-like form of \eq{2.7} closely
resembles the result found in \cite{18}, which describes an extra
gauge symmetry of the Chern-Simons gravity. The presence of the gauge
parameter $\eps$ and its first and second derivatives in \eq{2.7}
indicates significant changes of the set of first class constraints, in
comparison to the nonlinear BHT theory.
\section{Canonical analysis of the linearized theory}
\setcounter{equation}{0}
We are now going to analyze the canonical structure of the BHT gravity
linearized around the maximally symmetric background
$G_{ij}=\Leff\eta_{ij}$, at the critical point \eq{2.6}. Technically,
the analysis is based on the Lagrangian \eq{2.3}, quadratic in the
excitation modes $\wt Q_A$.
\prg{Primary constraints.} If $P^A=\bar P^A+\wt P^A$ are the
canonical momenta conjugate to the field variables $Q_A=\bar Q_A+\wt
Q_A$, then transition to the linearized theory implies $\{\wt Q_A,\wt
P^B\}=\d_A^B$. In other words, the basic phase space variables of the
linearized theory are
$$
\wt Q_A=(\tb^i{_\m}, \tom^i{_\m},\tl^i{_\m},\tf^i{_\m})\, ,\qquad
\wt P^A=(\tpi_i{^\m},\wt\Pi_i{^\m},\tp_i{^\m},\tP_i{^\m})\, .
$$
From the Lagrangian \eq{2.3}, we obtain the primary constraints of the
linearized theory:
\bea
&&\phi_i{^0}:=\tilde\pi_i{^0}\approx 0\, ,\qquad\,\,
\phi_i{^\a}:=\tilde\pi_i{^\a}
-\ve^{0\a\b}\tilde\l_{i\b}\approx 0\, , \nn\\
&&\Phi_i{^0}:=\wt\Pi_i{^0}\approx 0\, ,\qquad
\Phi_i{^\a}:=\wt\Pi_i{^\a}-2a\ve^{0\a\b}
\left(\s\tilde b_{i\b}+\frac{1}{2m^2}\tilde\f_{i\b}\right)
\approx 0\, , \nn\\
&&\tp_i{^\m}\approx 0\, ,\hspace{61pt}
\tP_i{^\m}\approx 0\, . \lab{3.1}
\eea
\prg{Total Hamiltonian.} Inspired by the results of~\cite{11}, we find
that the quadratic canonical Hamiltonian $\cH_c$ can be represented in
the form (up to a divergence):
\bea
\cH_c&=&\tb^i{_0}\cH_i+\tom^i{_0}\cK_i+\tf^i{_0}\cR_i+\tl^i{_0}\cT_i\nn\\
&&+\bar b^i{_0}{\cal A}_i+\bar\om^i_0{\cal B}_i
+\bar f^i{_0}{\cal C}_i+\frac{a}{m^2}(b\cV_K)^{(2)}\,. \lab{3.2}
\eea
The components of $\cH_c$ are defined as follows:
\bea
&&\cH_i:=-\ve^{0\a\b}\left(2a\s\bar\nab_\a\tom_{i\b}
-2a\bL\ve_{ijk}\bar b^j{}_\a \tb^k{}_\b
+\bar\nabla_\a\tl_{i\b}\right)\, , \nn\\
&&\cK_i:=-\ve^{0\a\b}\left[a\bar\nab_\a
\left(2\s \tb_{i\b}+\frac{1}{m^2}\tf_{i\b}\right)
+a\left(2\s-\frac{\Leff}{m^2}\right)\ve_{ijk}\bar b^j{_\a}
\tom^k{_\b}+\ve_{ijk}\bar b^j{}_\a\tl^k{}_\b\right]\, , \nn\\
&&\cR_i:=-\frac{a}{m^2}\ve^{0\a\b}\bar\nab_\a\tom_{i\b}\, ,\nn\\
&&\cT_i:=-\ve^{0\a\b}\left(\bar\nab_\a\tb_{i\b}
+\ve_{ijk}\tom^j{_\a}\bar b^k{_\b}\right)\, , \nn\\
&&{\cal A}_i:=-\ve^{0\a\b}\ve_{ijk}
\left(a\s\tom^j{_\a}\tom^k{_\b}
-a\L_0\tb^j{_\a}\tb^k{_\b}+\tom^j{_\a}\tl^k{_\b}\right)\,,\nn\\
&&{\cal B}_i:=-\ve^{0\a\b}\ve_{ijk}\left(2a\s \tb^j{_\a}\tom^k{_\b}
+\frac{a}{m^2}\tom^j{_\a}\tf^k{_\b}+\tb^j{_\a}\tl^k{_\b}\right)\,,\nn\\
&&{\cal C}_i:=-\frac{a}{2m^2}\ve^{0\a\b}\ve_{ijk}\tom^j{_\a}\tom^k{_\b}\,.
\eea
In order to simplify further exposition, we find it more convenient to
continue our analysis in a reduced phase space formalism. The formalism
is based on using the 24 second class constraints
$X_A=(\phi_i{^\a},\Phi_i{^\a},\tp_i{^\a},\tP_i{^\a})$ to eliminate the
momenta $(\tpi_i{^\a},\wt\Pi_i{^\a},\tp_i{^\a},\tP_i{^\a})$. The
dimension of the resulting reduced phase space $R_1$ is $N=72-24=48$,
and its structure is defined by the basic nontrivial Dirac brackets
(DB):
\bea
&&\{\tb^i{_\a},\tl^j{_\b}\}^*_1=\eta^{ij}\ve_{0\a\b}\d\, ,\qquad
\{\tom^i{_\a},\tf^j{_\b}\}^*_1=\frac{m^2}{a}\eta^{ij}\ve_{0\a\b}\d\,,\nn\\
&&\{\tl^i{_\a},\tf^j{_\b}\}^*_1=-2m^2\s\eta^{ij}\ve_{0\a\b}\d\,,
\eea
while the remaining DBs remain the same as the corresponding Poisson
brackets. In $R_1$, the total Hamiltonian takes the form:
\be
\cH_T=\cH_c+u^i{_0}\phi_i{^0}+v^i{_0}\Phi_i{^0}
+w^i{_0}\tp_i{^0}+z^i{_0}\wt P_i{^0}\, . \lab{3.4}
\ee
\prg{Secondary constraints.} The consistency conditions of the primary
constraints $\tpi_i{^0},\tPi_i{^0},\tp_i{^0}$ and $\tP_i{^0}$ produce
the secondary constraints:
\bsubeq\lab{3.6}
\bea
\hcH_i&:=&\cH_i+\frac{a}{m^2}W_i{^0}\approx 0\, , \nn\\
\cK_i&\approx& 0 \, , \lab{3.6a}\\
\hcR_i&:=&\cR_i+\frac{a\bar b}{2m^2}
\left[(\eta_{ij}\bar g^{0\m}-\bar h_i{^0}\bar h_j{^\m})(\tf^j{_\m}
+\Leff\tb^j{_\m})\right. \nn\\
&&\hspace{60pt}
+\left.2\Leff(\bar h_i{^0}\bar h_j{^\m}
-\bar h_i{^\m}\bar h_j{^0})\tilde b^j{_\m}\right]\approx 0\, ,\nn\\
\cT_i&\approx& 0 \, . \lab{3.6b}
\eea
\esubeq
\prg{Tertiary constraints.} Let us now introduce the change of
variables:
\be
z^i{_0}'=z^i{_0}-\bar f^i{_m}u^m{_0}\, ,\qquad
\tilde\p_i{^0}'=\tilde\p_i{^0}+\bar f_i{^k}\tP_k{^0}\, ,
\ee
such that
$$
u^i{_0}\tilde\p_i{^0}+z^i{_0}\tP_i{^0}
=u^i{_0}\tilde\p{}_i{^0}'+z^i{_0}'\tP_i{^0}\, .
$$
The consistency conditions of $\cK_i$ and $\hcR_i$ determine two
components $z'_{\b 0}:=\bar b^k{_\b}z'_{k0}$ of $z'_{k0}$:
\bea
z'_{\b0}=-\ve_{ijk}\bar b^i{_0}\bar\om^j{_0}(\tf^k{_\b}
+\Leff\tb^k{_\b})+\bar b_{i0}\bar\nab_\b(\tf^i{_0}+\Leff \tb^i{_0})
+\frac{m^2}a\ve_{ijk}\bar b^i{_0}\bar b^j{_\b}\tl^k{_0}\,,\nn
\eea
while the consistency of $\hcH_i$ and $\cT_i$ leads to the tertiary
constraints:
\bsubeq
\bea
&&\th_{\m\n}:=\tf_{\m\n}-\tf_{\n\m}\approx 0\, , \lab{3.8a}\\
&&\psi_{\m\n}:=\tl_{\m\n}-\tl_{\n\m}\approx 0\, , \lab{3.8b}
\eea
where
\be
\tf_{\m\n}= \bar b^i{_\m}\tf_{i\n}-\Leff\bar b^i{_\n}\tb_{i\m}\,,
\qquad \tl_{\m\n}=\bar b^i{_\m}\tl_{i\n}\,.
\ee
\esubeq
\prg{Quartic constraints.} Further consistency conditions determine two
components $w_{\b 0}:=\bar b^k{_\b}w_{k0}$ of $w_{k0}$:
\bea
w_{\b0}&=&-\ve_{ijk}\bar b^i{_0}\bar\om^j{_0}\tl^k{_\b}
+\bar b^i{_0}\bar\nab_\b \tl^i{_0}
-2\L_0\ve_{ijk}\bar b^i{_0}\bar b^j{_\b}\tb^k{_0}
+\frac a{m^2}\ve_{0\b\a}\bar b^i{_0}W_i{^\a} \nn\\
&&-a\s\bar b\ve_{0\b\a}\left(\bar b_{i0}\bar g^{\a\n}(\tf^i{_\n}
+\Leff\tb^j{_\n})-2\Leff\bar h_i{^\a}\tb^i{_0}\right)\,, \nn
\eea
and produce the relations
\bsubeq
\bea
&&\chi:=\bar h_i{^\m}\tl^i{_\m}\approx 0\, , \lab{3.9a}\\
&&\vphi:=\left(\s+\frac\Leff{2m^2}\right)
\bar h_i{^\m}\left(\tf^i{_\m}+\Leff\tb^i{_\m}\right)\approx 0\, .
\eea
\esubeq
At the critical point \eq{2.6}, the expression $\vphi$ identically
vanishes, and the only quartic constraint is $\chi$.
We close the consistency procedure by noting that the consistency of
$\chi$ determines the multiplier $w_{00}:=\bar b^k{_0}w_{k0}$:
$$
\bar g^{00}w_{00}=-\left(2\bar g^{\a 0}w_{\a0}
+\bar g^{\a\b}\bar b^i{_\a}\dot\tl_{i\b}\right)\,,
$$
where $\dot\tl_{i\b}$ is calculated in Appendix C, while the absence of
$\vphi$ implies that $z'_{00}:=\bar b^k{_0}z'_{k0}$ remains
undetermined.
\bitem
\item[$\bull$] In comparison to the nonlinear BHT massive gravity,
the linearized theory has \emph{one constraint less} ($\vphi$) and
\emph{one undetermined multiplier more} ($z'_{00}$), which leads to a
significant modification of its canonical structure.
\eitem
\section{Classification of constraints}
\setcounter{equation}{0}
Among the primary constraints, those that appear in $\cH_T$ with
arbitrary multipliers ($u^i{_0},v^i{_0}$ and $z'_{00}$) are first
class (FC):
\be
\tpi_i{^0}{}',\wt\Pi_i{^0}, \tP^{00}=\mbox{FC}\, ,
\ee
while the remaining ones, $\tilde p_i{^0}$ and $\tP^{\a0}$, are second
class. Note that $\tP^{00}:=\bar h^{k0}\tP_k{^0}$.
Going to the secondary constraints, we use the following simple
theorem:
\bitem
\item[$\bull$] If $\phi$ is a FC constraint, then $\{\phi,H_T\}^*_1$
is also a FC constraint.
\eitem
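To sketch why this holds, let $\chi_A$ denote any constraint of the theory, with $\approx$ the weak equality on the constraint surface. Since $\phi$ is FC, $\{\phi,\chi_A\}^*_1\approx 0$, while consistency ensures that $\{H_T,\chi_A\}^*_1$ is again a combination of constraints. The Jacobi identity then gives
$$
\bigl\{\{\phi,H_T\}^*_1,\chi_A\bigr\}^*_1
=\bigl\{\phi,\{H_T,\chi_A\}^*_1\bigr\}^*_1
-\bigl\{H_T,\{\phi,\chi_A\}^*_1\bigr\}^*_1\approx 0\, ,
$$
so that $\{\phi,H_T\}^*_1$ is again FC.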
The proof relies on using the Jacobi identity. The theorem implies that
the secondary constraints $\hcH'_i:=-\{\tpi_i{^0}{}',H_T\}^*_1$,
$\cK_i=-\{\wt\Pi_i{^0},H_T\}^*_1$ and
$\hcR^{00}:=-\{\tP^{00},H_T\}^*_1$ are FC. A straightforward
calculation yields:
\bea
\hcH_i'&=&\hcH_i+\bar f_i{^k}\hcR_k\, , \nn\\
\hcR^{00}&=&\bar h_i{^0}\hcR{}^i
-\bar h_i{^0}\bar\nab_\b(\bar b^i{_0}\tP^{\b0})
+\frac{a}{2m^2}\bar b\ve_{0\a\b}
\frac{\bar g^{0\a}}{\bar g^{00}}(\bar f^{0\b}
-\bar g^{00}\bar f_0{^\b})\tilde p^{00} \nn\\
&&-a\bar b\ve_{0\a\b}\left[\s \bar g^{0\b}
+\frac{1}{2m^2}(\bar f^{0\b}-\bar g^{0\b}\bar f
+\bar g^{0\b}\bar f_0{^0}-\bar g^{00}\bar f_0{^\b})
\right]\tp^{\a0}\, . \nn
\eea
Since the background is maximally symmetric, we have:
\bea
&&\hcH'_i=\hcH_i-\Leff\hcR_i\, , \nn\\
&&\hcR^{00}=\bar h_i{^0}\hcR{}^i
-\bar h_i{^0}\bar\nab_\b(\bar b^i{_0}\tP^{\b0})\, .
\eea
After identifying the above 14 FC constraints, we now turn our
attention to the remaining (tertiary and quartic) 17 constraints.
However, we know \cite{10} that the number of second class constraints
has to be even. As one can verify, the constraint $\wt\psi_{\a\b}$ is
FC, while the other 16 constraints are second class (Appendix D). The
complete classification of constraints in the reduced space $R_1$ is
displayed in Table 1.
\begin{center}
\doublerulesep 1.8pt
\begin{tabular}{lll}
\multicolumn{3}{l}{\hspace{16pt}Table 1. Classification
of constraints in $R_1$} \\
\hline\hline
\rule{0pt}{12pt}
&~First class \phantom{x}&~Second class \phantom{x} \\
\hline
\rule[-1pt]{0pt}{16pt}
\phantom{x}Primary &~$\tpi_i{^0}{}',\tPi_i{^0},\tP^{00}$
&~$\tp_i{^0},\tP^{\a0}$ \\
\hline
\rule[-1pt]{0pt}{19pt}
\phantom{x}Secondary\phantom{x} &~$\hcH'_i,\cK_i,\hcR^{00}$
&~$\cT_i,\hcR{}^\a{}'$ \\
\hline
\rule[-1pt]{0pt}{16pt}
\phantom{x}Tertiary\phantom{x}
&~ $\psi_{\a\b}$&~$\th_{0\a},\th_{\a\b},\psi_{0\a}$ \\
\hline
\rule[-1pt]{0pt}{16pt}
\phantom{x}Quartic\phantom{x}
& &~$\chi$\\ \hline\hline
\end{tabular}
\end{center}
Here, $\hcR^\a{}'=\bar h_i{^\a}\hcR^i{}'$, where $\hcR_i{}'$ is a
suitable modification of $\hcR_i$, defined so that it does not contain
$\tf_{i0}$:
\bea
\hcR_i{}'&:=& \hcR_i{}-\frac{a\bar b}{4m^2}\left(
\bar h_i{^\m}\bar g^{0\n}-\bar h_i{^\n}\bar g^{0\m}\right)\th_{\m\n}\nn\\
&\equiv&\cR_i+\frac{a\bar b}{2m^2}(\bar h_i{^\a}\bar h_j{^0}
-\bar h_i{^0}\bar h_j{^\a})(\tf^j{_\a}-\Leff \tb^j{_\a})\,.
\eea
Now, we can calculate the number of independent dynamical degrees of
freedom with the help of the standard formula:
$$
N^* = N-2N_1-N_2\, ,
$$
where $N$ is the number of phase space variables in $R_1$, $N_1$ is the
number of FC, and $N_2$ the number of second class constraints. Using
$N=48$ and, according to the results in Table 1, $N_1 = 15$ and $N_2 =
16$, we obtain that
\bitem
\item[$\bull$] the number of physical modes in the phase space is
$N^* = 2$, and consequently, the BHT theory at the critical point
\eq{2.6} exhibits one Lagrangian degree of freedom.
\eitem
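Explicitly, inserting the values from Table 1, the counting reads
$$
N^*=N-2N_1-N_2=48-2\cdot 15-16=2\, .
$$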
\section{Extra gauge symmetry}
\setcounter{equation}{0}
The presence of an extra primary FC constraint $\wt P^{00}$ implies the
existence of an extra gauge symmetry. To simplify its canonical
construction, we go over to the reduced phase space $R_2$, which is
obtained from $R_1$ by using the additional constraints
\be
R_2:\qquad\th_{\b 0}\equiv \tf_{\b 0}-\tf_{0\b}=0\, ,
\qquad \wt P^{\b 0}=0\, ,
\ee
to eliminate the variables $\tf_{\b0}$ and $\wt P^{\b0}$. Basic DBs
between the canonical variables in $R_2$ retain the same form as in
$R_1$. Starting with the primary FC constraint $\wt P^{00}$,
Castellani's algorithm \cite{19} leads to the following canonical
generator in $R_2$:
\bea
G_E&=&-2\ddot{\eps}\wt P^{00}
+\dot\eps\left[-2\hcR^{0}{}'
+2(\bar h^{i0}\bar \nab_0 \bar b^i{_0})\tP^{00}
+\ve_{ijk}\bar h^{i0}\bar b^j{_0}\wt\Pi^k{_0}\right] \nn\\
&&+\eps\left[\ve^{0\a\b}\bar b^i{_\a}\tl_{i\b}+\tpi_0{^0}
-\ve_{ijk}\bar\nab_\a(\bar h^{i\a}\bar b^j{_0}\wt\Pi^k{_0})
+2\bar\nab_\a\hcR^\a{}'\right. \nn\\
&&\qquad\left.
-2\bar\nab_\a(\bar h_i{^\a}\tP^{00}\bar\nab_0\bar b^i{_0})
+\Leff \bar g_{00}\tP^{00}\right]\,.
\eea
The action of $G_E$ on $R_2$ is given by $\d_E\phi=\{\phi,G_E\}_2^*$,
which yields:
\bsubeq\lab{5.4}
\bea
\d_E \tb^i{_\m}&=&\eps\bar b^i{_\m}\, , \nn\\
\d_E\tom^i{_\m}
&=&-\ve^{ijk}\bar b_{j\m}\bar h_k{^\n}\bar\nab_\n\eps\,, \nn\\
\d_E\tf^i{_\a}&=&-2\bar\nab_\a(\bar h^{i\n}\bar\nab_\n\eps)
+\Leff\eps \bar b^i{_\a}\, , \nn\\
\d_E\tf_{00}&=&
-2b_{i0}\bar\nab_0\left(\bar h^{i\n}\bar\nab_\n\eps\right)\,,\nn\\
\d_E\tl^i{_\m}&=&0\,.
\eea
To make a comparison with \eq{2.7}, we now derive the transformation
law for the variable
$\tf^i{_0}=\bar h_i{^\m}\tf_{\m0}
+\Leff\bar h_i{^\n}\bar b^j{_0}\tb_{j\n}$.
Using
\bea
\d_E\tf_{\b0}&=&-2\bar b^i{_0}\bar\nab_\b(\bar h_i{^\n}\bar\nab_\n\eps)\nn\\
&=&-2\pd_\b\pd_0\eps-2\bar b^i{_0}(\bar\nab_\b \bar h_i{^\n})\pd_\n\eps\nn\\
&=&-2\pd_0\pd_\b\eps-2\bar b^i{_\b}(\bar\nab_0 \bar h_i{^\n})\pd_\n\eps\nn\\
&=&-2\bar b^i{_\b}\bar\nab_0(\bar h_i{^\n}\bar\nab_\n\eps)\,,\nn
\eea
one obtains
\be
\d_E\tf^i{_0}=-2\bar\nab_0(\bar h^{i\n}\bar\nab_\n\eps)
+\Leff\eps\bar b^i{_0}\,.
\ee
\esubeq
The transformation rules \eq{5.4} are in complete agreement with
\eq{2.7}.
\section{Concluding remarks}
\setcounter{equation}{0}
In the \emph{nonperturbative} regime of the BHT gravity, the constraint
structure is found to depend critically on the value of $\Om^{00}$,
where $\Om^{\m\n}=\s g^{\m\n}+G^{\m\n}/2m^2$ \cite{11}. In the region
of the phase space where $\Om^{00}\ne 0$, the BHT theory has \emph{two}
Lagrangian degrees of freedom, which corresponds to two helicity states
of the massive graviton excitation.
In this paper, we studied the canonical structure of the BHT gravity
\emph{linearized} around the maximally symmetric background,
$G^{\m\n}=\Leff g^{\m\n}$. At the critical point $\L_0/m^2=-1$, the
background solution is characterized by the property $\Om^{\m\n}=0$,
the covariant version of $\Om^{00}=0$. Analyzing the constraint
structure of the linearized theory, we constructed the canonical
generator of \emph{extra gauge symmetry}, which is responsible for
transforming two massive graviton excitations into a single, partially
massless mode; moreover, the theory is found to have \emph{one}
Lagrangian degree of freedom.
In order to properly understand the linearized theory, one should
stress that although we have $\Om^{\m\n}=0$ on the very background, the
linearized theory is well-defined in the region off the background,
where $\Om^{\m\n}\ne 0$. In this region, the process of linearization
induces a drastic modification of the canonical structure of the BHT
theory, leading to the change of the number and type of constraints and
physical degrees of freedom.
Thus, the canonical structure of the BHT gravity at the critical point
$\L_0/m^2=-1$ does not remain the same after linearization. Following
the arguments of Chen et al. \cite{20}, we are led to conclude that
the canonical consistency of the BHT gravity, expressed by the
stability of its canonical structure under linearization, is violated
at the critical point $\L_0/m^2=-1$.
\section*{Acknowledgements}
This work was supported by the Serbian Science Foundation under Grant
No. 171031.
\section{Introduction}
Research in deep \acrfull{rl} has seen tremendous progress in recent years with widespread success in various areas including video games~\cite{Lample.2017}, board games~\cite{Silver.2017}, robotics~\cite{Gu.2017}, industrial assembly~\cite{T.Inoue.2017} and continuous control tasks~\cite{TimothyP.Lillicrap.2016} among others. This rapid increase in interest in the research community can be particularly traced back to advances made in the training of \acrfull{dnn} in the last decade, as well as novel \acrshort{rl} algorithms developed recently. Notable examples of the latter include value function based methods like deep Q-networks~\cite{Mnih.2015}, policy gradient methods like deep deterministic policy gradient~\cite{TimothyP.Lillicrap.2016}, \acrfull{a2c}~\cite{Mnih.2016}, trust region policy optimization~\cite{Schulman.2015} and \acrfull{ppo}~\cite{Schulman.2017}, to name a few. Additional training components have also helped in improving \acrshort{rl} capabilities, like improved exploration strategies~\cite{Conti.2018}, intrinsic motivation~\cite{Mohamed.2015} and curiosity-driven methods~\cite{Pathak.2017}.
Revisiting the training of \acrshort{dnn}, regularization and better optimization methods have played a crucial role in improving their generalization capabilities, where Batch Normalization~\cite{Ioffe.2015b}, Dropout~\cite{Srivastava.2014} and weight decay~\cite{Goodfellow.2016} are the most prominent examples, which have become a standard in supervised learning. Surprisingly, little attention has been paid to methods for improving the generalization capabilities of \acrshort{dnn} during reinforcement learning, although this appears to be crucial in supervised and unsupervised learning tasks. Regardless, most of the above-mentioned approaches are also utilized in \acrshort{rl}, despite the stark differences between supervised learning and \acrshort{rl}, and they have been shown to assist \acrshort{rl} training as well~\cite{Cobbe.2019}. Our goal, however, is to develop a principled optimization and training approach for \acrshort{rl}, especially considering its dynamic learning process.
In the literature, generalization in \acrshort{rl} is usually assessed by testing the trained agent's performance on an unseen variation of the environment, typically obtained by procedurally generating new environments~\cite{Cobbe.2019}. We, however, want to improve the evaluation performance on the same environment rather than generating new and unseen environments for the agent. An introduction to the existing methods for generalization in \acrshort{rl} is provided in Section~\ref{sec:relwork}. As a related problem, the derivation of suitable network sizes for a particular \acrshort{rl} problem is rarely addressed. In practice, the size, i.e. depth and width of the neural networks, is mainly adjusted by either random search or grid search methods~\cite{Bergstra.2012}. Other recent methods usually tune hyperparameters such as the learning rate, entropy cost, and intrinsic reward, and do not consider the size of the network in \acrshort{rl}~\cite{Jaderberg.2017}. Therefore, tuning for an optimal architecture requires knowledge of both the type of \acrshort{rl} algorithm used and the application domain where the algorithms are applied, which inhibits fast deployment of the learning agents. An automatic adjustment of the required network parameters is highly desirable because of the long training times in \acrshort{rl} together with the large number of hyperparameters to be tuned.
To this end, we tackle the above-described weaknesses in current \acrshort{rl} methods, namely targeted training in the evolving learning setting and the automatic adjustment of the trainable parameters in the neural network. We present \acrfull{gmrl}, which maintains trust regions and reduces gradient variance from the initial training phase onwards in two of those algorithms, enabling targeted training. \acrfull{gm} with network pruning was originally introduced in~\cite{Chadha.2019c} for supervised training of \acrshort{dnn}. We enhance the previous work by concentrating on the gradient flow in the network rather than the weights. Specifically, rather than pruning irrelevant weights, we focus on the adaptive learning of the most relevant weights during the course of training. We develop different methods for \acrshort{gmrl}, starting with a method that requires knowledge of the learning process and then developing a momentum-based dynamic learning scheme which particularly suits the sequential learning process of \acrshort{rl}. We further develop a method to automatically adjust the \acrshort{gm} hyperparameters, particularly the active network capacity required for a certain task. It is important to note that the proposed approaches are independent of the type of \acrshort{rl} algorithm used and are therefore universally applicable. We apply and test the proposed algorithms in various continuous and discrete application domains. The proposed \acrshort{gm} approaches with the \acrshort{a2c}~\cite{Mnih.2016} algorithm are tested on a multi-robot manufacturing station where the goal is a coordinated operation of two industrial robots, sometimes termed in the literature as the Job-Shop Scheduling Problem (JSSP)~\cite{Applegate.1991}.
Thereafter, we test the approach on well-known reinforcement learning environments from OpenAI Gym~\cite{GregBrockman.2016}, namely the Atari games from the Arcade Learning Environment~\cite{Bellemare.2013} and MuJoCo~\cite{E.Todorov.2012}, both with the \acrshort{ppo}~\cite{Schulman.2017} algorithm. The results obtained underline the improved generalization performance and the capability to automatically adjust the network size, allowing for successful training also in strongly over-parameterized neural networks.
The contributions of the work can be summarized as follows:
\begin{itemize}
\item We introduce four novel GM methods, each successively increasing the performance of the RL algorithm, namely Frozen threshold with Gradient Monitoring (F-WGM), Unfrozen threshold with Gradient Monitoring (U-WGM), Momentum with Gradient Monitoring (M-WGM), Adaptive Momentum with Gradient Monitoring (AM-WGM).
\item The methods reduce the gradient variance, helping to improve the training performance during the initial phase of \acrshort{rl}, with the M-WGM and AM-WGM methods acting as a replacement for gradient clipping in PPO. In addition to the superior evaluation performance, the methods are shown to expedite convergence by allowing larger learning rates and more 'k-epoch' updates in PPO.
\item The proposed AM-WGM method allows for continuous adjustment of the network capacity by dynamically varying the number of active parameters during training, based on the feedback from the rewards collected during the learning progress.
\item We conduct various experiments on different application domains including a coordination problem of a multi-robot station, Atari games, and MuJoCo tasks to underline the performance gains and the general applicability of the proposed methods.
\end{itemize}
The paper is organized as follows. Related work is presented in Section~\ref{sec:relwork}. In Section~\ref{sec:RL}, the basics of the \acrshort{rl} framework employed are introduced. The proposed \acrshort{gm} methods and their integration with \acrshort{rl} are presented in Section~\ref{sec:GM}. Section~\ref{sec:Results} presents a thorough comparison of the results obtained from the proposed methods on the various application domains. Section~\ref{sec:conclusion} concludes the paper and provides an outlook on future work.
\section{Related Work} \label{sec:relwork}
We first discuss general approaches in \acrshort{dnn} training that help in better generalization capabilities and subsequently focus on methods specifically for generalization in \acrshort{rl}. Finally, we discuss approaches for tuning the network size in \acrshort{rl}.
\textbf{Generalization in \acrshort{dnn}:} Deep feed-forward neural networks had been notoriously difficult to train in the past due to various factors including vanishing gradients~\cite{Hochreiter.1998}, highly non-convex optimization problems~\cite{Gori.1992} and the tendency to over-fit~\cite{Lawrence.1997}. All of these shortcomings have been virtually mitigated in modern deep learning architectures through a myriad of techniques. They include initialization of the trainable parameters~\cite{Glorot.2010,He.2015}, the use of sparse and non-saturating activation functions such as ReLU~\cite{Glorot.2011} in the hidden layers, and the use of more efficient stochastic gradient descent optimization algorithms such as Adam~\cite{DiederikP.Kingma.2015}. Other approaches enhancing generalization capabilities are Batch Normalization~\cite{Ioffe.2015b}, which counters the internal covariate shift, and Dropout~\cite{Srivastava.2014}, which masks the neural activations with a masking matrix drawn from a Bernoulli distribution. For dropout, various variants improving on the vanilla dropout have been developed, including variational dropout~\cite{Kingma.2015b} and targeted dropout~\cite{Gomez.2019}. Similarly, individual weights instead of hidden activation units are dropped in~\cite{Wan.2013}. Recently, it has been shown that over-parameterization also leads to better generalization performance in supervised learning with \acrshort{dnn}~\cite{BehnamNeyshabur.2019,Belkin.2019,Brutzkus.2019}. Another popular approach is the incorporation of auxiliary loss functions into the main loss, resulting in either $L_1$ or $L_2$ regularization. An increasingly popular method for optimizing neural network training is gradient clipping~\cite{Pascanu.2013}, originally developed for the exploding gradient problem in recurrent neural networks; it has been proven to increase convergence speed in supervised learning in~\cite{JingzhaoZhang.2020}.
Also a multitude of approaches for network pruning have been reported to help with generalization performance. Generally, the pruning methods are applied iteratively based on magnitude~\cite{Han.2015}, gradient, or Hessian information~\cite{Hassibi.1993,LeCun.1990b}. Recent methods such as~\cite{NamhoonLee.2019,NamhoonLee.2020} calculate the sensitivity of each connection and prune the weights in a single-shot approach. Please refer to~\cite{Blalock.2020} for a recent overview of the various pruning methods that have been developed for neural networks. We emphasize that our approach does not prune weights, but freezes them by not allowing the gradients to flow to the respective weights. Also, a direct application of pruning methods in \acrshort{rl} is not straightforward, as these methods usually require retraining, which is far-fetched for the evolving data-set scenario during \acrshort{rl} training. Indeed, all of the above methods that have been used in \acrshort{rl} were specifically developed for supervised learning and were merely carried over to \acrshort{rl}.
\textbf{Variance Reduction and Generalization in \acrshort{rl}:} Variance reduction techniques for gradient estimates in \acrshort{rl} have been introduced in~\cite{Greensmith.2004}, where control variates are used for estimating performance gradients. An Averaged Deep Q-Network approach has been proposed in~\cite{Anschel.2017}, where averaging previously learned Q-value estimates leads to a more stable training procedure. Also, variance reduction in the gradient estimate for policy gradient \acrshort{rl} methods has been proposed in~\cite{HongziMao.2019} with an input-dependent baseline which is a function of both the state and the entire future input sequence. Contrary to the previous approaches, we achieve variance reduction in the gradient estimate by freezing the gradient updates of particular weights.
Literature on generalization in \acrshort{rl} usually focuses on the performance of the trained agent in an unseen environment~\cite{Cobbe.2019,Justesen.2018,XingyouSong.2020,Igl.2019}. However, better generalization methods for evaluating the agent on the same environment are missing in the literature. This is especially the case in industrial production environments, where the production setup does not change drastically with time. The proposed approach is focused on this area, where a fast and reliable training procedure has been developed for discrete and continuous environments.
\textbf{Neural Architecture Search:}
There are a number of hyperparameters in neural network training, with the size of the network being one of the most important ones. Apart from grid search and random search, there also exist a number of approaches including Bayesian optimization~\cite{Bergstra.2013}, evolutionary methods~\cite{Young.2015}, many-armed bandits~\cite{Li.2017b}, population based training~\cite{Jaderberg.2017} and \acrshort{rl}~\cite{Baker.2017}. All of the above methods search for neural architectures in a supervised learning setup. \cite{Schaul.2019} present a multi-armed bandit approach for adaptive data generation to optimize a proxy of the learning progress. We, on the other hand, propose a method which makes the learning process robust to the choice of the size of the network. Furthermore, all of the above methods search in a sequential and computationally very expensive manner. Our proposed method, on the other hand, starts with a possibly over-parameterized network and increases or decreases the learning capacity during training to adjust the learning procedure. This way we dynamically determine the actually relevant number of parameters in each training phase.
\section{Introduction to Reinforcement Learning}\label{sec:RL}
Reinforcement Learning (RL) is the branch of machine learning that deals with training agents to take an action $a$, as a response to the state of the environment at a particular time, $s_{t}$, so as to receive a notion of reward, $r$. The objective of the RL agent is to maximize the collection of this reward. Sutton and Barto define RL as \say{learning what to do – how to map situations to actions – so as to maximize a numerical reward signal}~\cite{Sutton.1998}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{images/agent_env.jpg}
\caption{The interaction of agent and environment as an MDP}
\label{fig:my_label}
\end{figure}
A reinforcement learning system has two major components: the agent and the environment, where the overall system is characterized as a Markov Decision Process (MDP). The agent is the intelligent learning system, while the environment is where the agent operates. The dynamics of the MDP are defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, p_0)$, with the set of states $\mathcal{S}$, the set of actions $\mathcal{A}$, a transition model $\mathcal{P}$, in which for a given state $s$ and action $a$ there exists a probability for the next state $s^{\prime} \in \mathcal{S}$, a reward function $\mathcal{R} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$ which provides a reward for each state transition $s_t \rightarrow s_{t+1}$, and a re-initialization probability $p_0$.
A policy $\pi(a|s)$ provides an action $a \in \mathcal{A}$ for a state $s$ presented by the environment. A policy could use a \textit{state-value function}, $v(s) = E[R_{t}|S_{t}=s]$, which is the expected return when starting from a state $s$, or an \textit{action-value function}, $q(s, a) = E[R_{t}|S_{t}=s, A_{t} = a]$, which is the expected return when starting from a state $s$ and taking action $a$. Here, $R_{t} = \sum_{t}{\gamma^t r_{t}}$ is the discounted reward that the agent collects over $t$ time steps and $\gamma$ is the discount factor, with $0 \leq \gamma \leq 1$. The policy can then be defined by an $\epsilon$-greedy strategy, where a greedy action is chosen as $\pi(a|s) = \operatorname{argmax}_{a \in \mathcal{A}} q(s, a)$ and a completely random action is taken otherwise.\\
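The $\epsilon$-greedy rule described above can be sketched in a few lines of Python/NumPy; the function name and helper setup are our own illustration, not from the paper's code.

```python
import numpy as np

def epsilon_greedy(q_values, epsilon, rng):
    """With probability epsilon take a uniform random action,
    otherwise the greedy action argmax_a q(s, a)."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
q_values = np.array([0.1, 0.5, 0.2])
greedy_action = epsilon_greedy(q_values, 0.0, rng)  # epsilon = 0: always greedy
```

With $\epsilon = 0$ the rule degenerates to the purely greedy policy; annealing $\epsilon$ over training trades exploration for exploitation.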
Alternatively, there are policy gradient methods that use a \textit{parameterized policy}, $\pi(a|s,\theta)$, to select actions without relying on value functions. Value functions may still be used to improve the learning of the policy itself, as seen in A2C. The objective of the agent is to find an optimal policy, $\pi^*(a|s)$, that collects the maximum reward. To find the optimal policy, the trainable parameters of the policy are updated such that they maximize the performance defined by the cost function $J(\theta_{t})$, as illustrated in Equation~\eqref{eq:policy update}. There exists at least one policy such that $\pi^*(a|s) \geq \pi(a|s)$, where $\pi^*$ is defined as the optimal policy.
\begin{equation}
\label{eq:policy update}
\theta_{t+1} = \theta_{t} + \rho \nabla J(\theta_{t}),
\end{equation}
where $\theta$ are the parameters of the policy $\pi$ and $\rho$ is the learning rate. There are different choices of $J(\theta_{t})$ for different algorithms as explained in the sections below.
\subsection{Advantage Actor-Critic}
In this section we introduce the policy gradient algorithm Advantage Actor-Critic (A2C)~\cite{Mnih.2016}. A2C is a policy gradient method that uses the value function to reduce the variance in the calculated cost function. Here, the \textit{actor} refers to the policy $\pi (a|s,\theta_{1})$ and the \textit{critic} refers to the value function $v(s,\theta_{2})$, where $\theta_{1}$ and $\theta_{2}$ are the parameters of the actor and critic, respectively. The parameters $\theta_{1}$ and $\theta_{2}$ are partially shared in the variant of the A2C algorithm we use. The cost function for the actor and the advantage used by the critic are given by Eqns.~\eqref{a2c-j} and \eqref{adv}, respectively.
\begin{equation}
\label{a2c-j}
J(\theta) = \mathbb{E}_{\tau \sim \pi_{\theta}} \left[ \sum_{(s_{t},a_{t}) \in \tau} \: \log \pi_{\theta}(a_{t} | s_{t}) \: . \: A_{\pi_{\theta}} (s_{t}, a_{t}) \right]
\end{equation}
\begin{equation}
\label{adv}
A_{\pi_{\theta}} (s_{t}, a_{t}) = Q_{\pi_{\theta}} (s_{t}, a_{t}) - V_{\pi_{\theta}} (s_{t})
\end{equation}
We use two co-operative agents that communicate through indirect channels to solve the multi-robot coordination environment. They are explained in detail in Section~\ref{learn_setting}.
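The actor and critic objectives above can be illustrated with a minimal NumPy sketch (the function name is ours; in practice the sampled return stands in for $Q_{\pi_\theta}$ in Eq.~\eqref{adv}, and the advantage is treated as a constant in the actor term):

```python
import numpy as np

def a2c_losses(log_probs, values, returns):
    """Advantage A(s,a) = R - V(s); the actor loss is the negated
    objective of the policy-gradient term (minimizing it ascends J),
    and the critic is fit to the sampled return by squared error."""
    advantages = returns - values
    actor_loss = -np.mean(log_probs * advantages)
    critic_loss = np.mean(advantages ** 2)
    return actor_loss, critic_loss
```

In a real implementation the losses are combined (often with an entropy bonus) and back-propagated through the partially shared network.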
\subsection{Proximal Policy Optimization}
In this section, Proximal Policy Optimization (PPO) is explained. In A2C, the focus of the algorithm is to get a good estimate of the gradients of the parameters $\theta$. However, applying multiple optimization steps on this empirically leads to large policy updates that destabilize the learning process. A surrogate objective function is used in PPO to overcome this:
\begin{equation}
\label{ppo}
\max_{\theta} \: \mathbb{E}_{\sim \pi_{\theta}} \left[ \min ( r_{t}(\theta)\hat A_{t}, \: \operatorname{clip}(r_{t}(\theta), 1-\epsilon, 1+\epsilon) \hat A_{t}) \right]
\end{equation}
where
\begin{equation}
r_{t}(\theta) = \frac{\pi_{\theta}(a_{t} | s_{t})}{\pi_{\theta_{old}}(a_{t} | s_{t})}
\end{equation}
and $\hat{A_{t}}$ is the estimator of the advantage function at time step $t$. Refer to~\cite{Schulman.2017} for a full overview of the algorithm. Due to the controlled nature of the policy updates, PPO is found to work well on continuous control problems. Hence, PPO is used for the MuJoCo and Atari Learning Environment experiments.
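The clipped surrogate objective can be sketched directly from the equations above; this is a plain NumPy illustration (function name ours), working on log-probabilities for numerical stability:

```python
import numpy as np

def ppo_clip_objective(new_logp, old_logp, advantages, eps=0.2):
    """Clipped surrogate: mean over t of
    min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t),
    with the probability ratio r_t = exp(log pi_new - log pi_old)."""
    ratio = np.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return float(np.mean(np.minimum(unclipped, clipped)))
```

When the new and old policies coincide, the ratio is one and the objective reduces to the mean advantage; large ratios are cut off at $1 \pm \epsilon$, which bounds the size of a single policy update.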
\section{Reinforcement Learning with Gradient Monitoring}\label{sec:GM}
Modern deep learning architectures are in general over-parameterized, i.e. the parameters drastically outnumber the available data set size~\cite{Arora.2018b,BehnamNeyshabur.2019}. While this has been empirically shown to improve the learning performance compared to shallow architectures, determining a suitable number of layers and neurons depending on the problem at hand remains to be an open issue.
To circumvent the determination of the network size, successful workarounds have focused on reducing the number of actively learning parameters per iteration. They end up reducing the degrees of freedom during training using methods like dropout, drop-connect and their various sub-forms, where network activations or weights are randomly switched off.\\
Gradient monitoring follows a different approach in that it intends to steer the learning process of the \acrshort{dnn} by actively manipulating the backward pass of the training process. Specifically, we purposefully deactivate and activate the gradients in the backward pass for a subset of weights based on the learning conditions explained in the subsections below. Although applicable in a supervised deep learning setting, we find GM particularly useful for reinforcement learning, since it reduces the variance in the gradient estimates during the crucial initial part of the learning process and also introduces a dynamic way to clip the gradients that is applied layer-wise, as opposed to clipping the norm of the entire gradients, which is commonly used.
\subsection{Gradient Monitoring in DNN}
To illustrate the training procedure with GM, we consider fully connected feed-forward DNN with more than one hidden layer, trained with mini-batch gradient descent and gradient-based optimizers, although we found it most effective with momentum-based gradient optimizers like Adam~\cite{DiederikP.Kingma.2015}. However, we emphasize that GM is universally applicable to other network structures like convolutional or recurrent NN. The training procedure in NN minimizes the loss function by calculating the partial derivative of the loss function with respect to each of the weight parameters recursively. Hence, for an NN model with $m \geq 2$ hidden layers we denote $W_{m}$ as the weight matrix for the $m^{th}$ layer, and $\nabla L_{W_1}$, $\nabla L_{W_2}, \ldots, \nabla L_{W_m}$ denote the gradients for each weight matrix. The gradient calculated as per the Adam optimizer is shown in Equation~\eqref{eq:weightupdate}:
\begin{equation}
\label{eq:weightupdate}
\nabla L_{W_t}=\frac {\hat{{m_t}}}{{\sqrt{\hat{{v_t}}}+\epsilon}}.
\end{equation}
To deactivate the gradients, we set elements of the gradient matrix $\nabla L_{W_t}$ in~\eqref{eq:weightupdate} to zero. To accomplish this, we define a masking matrix $M$, whose values are either one or zero, and calculate the new gradient matrix $ \nabla \hat {L}_{W_t}$ as shown in ~\eqref{mask_mul}.
\begin{equation}
\label{mask_mul}
\nabla \hat{L}_{W_t} = M_{W_t} \circ \nabla L_{W_t},
\end{equation}
where $\circ$ denotes the Hadamard product. The weight update is performed then with a standard gradient descent update as in Equation~\eqref{eq:weight_step}
\begin{equation}\label{eq:weight_step}
{W}_{t+1} = {W}_t - \rho \nabla \hat L_{W_t}.
\end{equation}
The steering of the learning process is decided based on the effect of each parameter on the forward as well as the backward pass. Therefore, the masking matrix, $M_{W_t}$, is calculated based on a function that takes as input the weights $W_t$, their respective gradients $\nabla L_{W_t}$ from the backward pass, a learning threshold $\mu(W_t, \nabla L_{W_t})$, and a learning factor $\lambda$. A decision matrix $D_{W_t}(W_t, \nabla L_{W_t})$ is constructed to estimate the learning progress. This decision matrix $D_{W_t}(W_t, \nabla L_{W_t})$ is compared with the learning threshold $\lambda \mu (W_t, \nabla L_{W_t})$ in order to decide whether the masking value is active (1) or inactive (0). The decision matrix can be calculated using many combinations such as $\left|\frac{\nabla L_{W_t}}{W_t}\right|$, $\left|\frac{{W_t}}{\nabla L_{W_t}}\right|$ or $\left|\nabla L_{W_t} \circ {W_t} \right|$. We use absolute values since we are interested in the quantum of learning. Specifically, the masking matrix $M$ can be defined as
\begin{equation}
M_{W_{t}} = {H} (D_{W_t}(W_t, \nabla L_{W_t}) - \lambda \mu(W_t, \nabla L_{W_t})),
\end{equation}
where ${H}$ is the Heaviside step function, such that the gradients of weight connections which do not reach the relative amount of learning are deactivated, i.e. receive no gradient during the backward pass. Note that due to the use of the Adam optimizer, the decision for freezing gradients is not only based on the actual gradient calculated over a mini-batch but on the decaying average of the previous gradients. We emphasize that GM is applied to each layer in the NN. The list of hyperparameters used, along with their symbols and the algorithms they are used in, is given in Table~\ref{tab:hyperparameters}.
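For one layer, the masking and weight update of Equations~\eqref{mask_mul} and~\eqref{eq:weight_step} can be sketched as follows; this is a plain-NumPy illustration (function names ours) using the decision matrix $D = |\nabla L_{W_t} / W_t|$ and the layer-wise mean as $\mu$, as adopted later in the paper:

```python
import numpy as np

def gm_mask(weights, grads, lam):
    """Masking matrix M = H(D - lam * mu) with decision matrix
    D = |grad / W| and mu the layer-wise mean of D; entries whose
    relative learning falls below the threshold are frozen (0)."""
    decision = np.abs(grads / weights)
    mu = decision.mean()
    return (decision >= lam * mu).astype(grads.dtype)

def masked_step(weights, grads, lam, lr):
    """Gradient-descent step in which only unmasked gradients flow,
    i.e. W <- W - lr * (M o grad)."""
    return weights - lr * gm_mask(weights, grads, lam) * weights * 0 + weights - lr * gm_mask(weights, grads, lam) * grads - weights
```

Freezing is thus a Hadamard product in the backward pass; the forward pass is untouched, so frozen weights still contribute to predictions.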
\begin{table}
\caption{Hyperparameters used in GM algorithms}
\begin{center}
\begin{tabular}{ |c|c|c|c|c| }
\hline
Symbol & Description & VGM & M-WGM & AM-WGM\\
\hline
$\lambda$ & Learning factor& \ding{51}& \ding{51}& \ding{51}\\
$\eta_{start}$ & Start of GM & \ding{51} & \ding{55}& \ding{51}\\
$\eta_{repeat}$ & Mask update frequency& \ding{51} & \ding{55}& \ding{51}\\
$\zeta$ & Masking momentum& \ding{55}& \ding{51}& \ding{51}\\
$M_\zeta$ & Momentum matrix& \ding{55}& \ding{51}& \ding{51}\\
$\alpha_\lambda$ & change rate of $\lambda$& \ding{55}& \ding{55}& \ding{51}\\
$\phi$ & Reward collection rate& \ding{55}& \ding{55}& \ding{51}\\
$R$ & Rewards collected& \ding{55}& \ding{55}& \ding{51}\\
\hline
\end{tabular}
\label{tab:hyperparameters}
\end{center}
\end{table}
\subsection{Vanilla Gradient Monitoring}
The core of GM is the derivation of suitable conditions for activating and deactivating the gradient flow $\nabla L_{t}$, which includes deriving $\mu$ based on the actual status of learning. To keep the representation simple, $D_t(W_t, \nabla L_{W_t})$ and $\mu(W_t, \nabla L_{W_t})$ will simply be written as $D_t$ and $\mu$, respectively, henceforth. Obviously, keeping a constant integer value as the learning threshold $\mu$ for all the gradients is not appropriate, as the proportion of learning represented in the gradients might have different distributions in different layers and at different time steps. Furthermore, choosing a single constant learning value for different learning tasks is not trivial. Hence, the learning threshold is made adaptable by the use of functions like the mean or a percentile of the values of the decision matrix $D_{W}$. This ensures that a certain portion of the gradients is allowed in any situation. We define $H$ such that all gradients above or below the learning condition are deactivated. In this paper we use the mean of all the elements $d_{ij}$ in the decision matrix $D_{W_t} \in \mathbb{R}^{n}$ for each layer $m$ as the $\mu$ function, and use $\left|\frac{\nabla L_{W_t}}{W_t}\right|$ as the $D$ function. Concretely, we deactivate all gradients below this learning condition:
\begin{align}
\mu_{m} = \frac{1}{n} \sum_{ij} d_{ij}
\end{align}
Besides the question of which gradients to deactivate, we also have to answer the question of when to deactivate the ineffective gradients to make training most effective. This problem is solved in two ways. First, as with the learning rate, similar schedules for deactivating are set up depending on the problem at hand. The methods F-WGM and U-WGM use this setup and are together called Vanilla Gradient Monitoring (VGM).\\
Alternatively, we introduce a momentum parameter on top of the masking matrix to alleviate the problem of deciding when to start deactivating the gradients. The methods M-WGM and AM-WGM use this approach. In this section, we further discuss only the methods F-WGM and U-WGM, while M-WGM and AM-WGM are discussed in the sections below.\\
For F-WGM and U-WGM we have to define $\eta_{start}$, which defines after which epoch the masking matrix is applied, along with the $\lambda$ parameter, which is the multiplication factor for the learning condition $\mu$. $\eta_{start}$ is a hyperparameter which is tuned, but the start of the GM application can be automated by setting it to the point of the first successful episode. This is the point at which we empirically found the largest amount of gradients being backpropagated, so creating and applying the first mask at this point makes sense. The pseudo code for F-WGM and U-WGM is provided in Algorithm~\ref{alg:gm}. The only difference between F-WGM and U-WGM is that in the case of F-WGM the $\lambda$ is kept constant and $M$ is updated with the same $\lambda$ value every few iterations ($\eta_{repeat}$), while in U-WGM, the $\lambda$ is made variable, decreasing in value after every update.
\makeatletter
\def\State\hskip-\ALG@thistlm{\State\hskip-\ALG@thistlm}
\makeatother
\begin{algorithm}[tb]
\caption{Frozen and Unfrozen with Gradient Monitoring}
\label{alg:gm}
\begin{algorithmic}[1]
\State {\bfseries Input:} $\nabla L_{t}$, \textit{${W}_{t-1}$}, $\rho$, $\lambda$, $\eta$, $\eta_{start}$, $\eta_{repeat}$
\State \textbf{Init:} Masking Matrix ${M}$
\State \textbf{Sequence:}
\If{$\eta \geq \eta_{start}$ and $\eta \% \eta_{repeat} == 0$}
\For{each layer $m$}
\State Masking matrix $M={H} \big(D_t - \lambda \mu$ \big)
\State Gradients: $\nabla L_{t} = \nabla L_{t} \circ {M}$
\EndFor
\EndIf
\State \textbf{Output:} Weights \textit{${W}_{t}$} = \textit{${W}_{t-1}$} $-$ $\rho\nabla L_{t}$
\end{algorithmic}
\end{algorithm}
The motivation behind the U-WGM is that the weight parameters which did not have a relative high impact on the learning process during the initial phase of learning (till epoch $\eta_{start}$) might nevertheless have an impact later, e.g. once other weights have settled. Hence, by reducing the learning condition threshold, those weights can participate in the learning process again. The factor $\lambda$ is a hyperparameter which in practice we found that halving, i.e. $\lambda^\prime=\lambda/2$ at every $\eta_{repeat}$ works well.
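The resulting schedule for the learning factor can be sketched as below; the function is our own illustration, and the exact bookkeeping of when a refresh counts (at or after $\eta_{start}$) is an assumption:

```python
def uwgm_lambda(lam0, epoch, eta_start, eta_repeat):
    """Learning factor over epochs: F-WGM would keep lam0 fixed; U-WGM
    halves it at every mask refresh, i.e. every eta_repeat epochs
    after eta_start (refresh counting here is illustrative)."""
    if epoch < eta_start:
        return lam0
    n_refreshes = (epoch - eta_start) // eta_repeat
    return lam0 / (2 ** n_refreshes)
```

As $\lambda$ shrinks, the threshold $\lambda\mu$ drops, so previously frozen weights gradually re-enter the learning process.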
\subsection{Momentum with GM}
One of the disadvantages of the previous approaches is that the performance of the algorithm is hugely dependent on the hyperparameters $\eta_{start}$ and $\eta_{repeat}$. $\eta_{start}$ is set at the first episode of convergence, since that is around where the absolute sum of gradients is at its maximum. This poses a problem when scaling up the use of GM-RL to other continuous control long-horizon tasks, since we always need to decide in hindsight when to begin the application of GM. Hence a new version of GM-RL, called Momentum with Gradient Monitoring (M-WGM), was developed to tackle this. Here we introduce a momentum matrix $M_{\zeta}$ and a momentum hyperparameter $\zeta$, where the momentum matrix is applied to the gradients right from the first episode and the momentum hyperparameter provides an intuitive control over the learning process. The pseudo code for M-WGM is given in Algorithm~\ref{alg:mgm}.
\begin{algorithm}[tb]
\caption{Momentum - Gradient Monitoring}
\label{alg:mgm}
\begin{algorithmic}[1]
\State {\bfseries Input:} $\nabla L_{t}$, \textit{${W}_{t-1}$}, $\rho$, $\lambda$, $\zeta$
\State \textbf{Init:} ${M_{\zeta}}$, ${M}$
\State \textbf{Sequence:}
\For{each layer}
\State Masking matrix $M={H} \big(D_t - \lambda \mu$ \big)
\State Momentum matrix: $M_{\zeta} = M_{\zeta}{\zeta} + M(1-{\zeta})$
\State Gradients: $\nabla L_{t} = \nabla L_{t} \circ M_{\zeta}$
\EndFor
\State \textbf{Output:} Weight \textit{${W}_{t}$} = \textit{${W}_{t-1}$} + $\rho\nabla L_{t}$
\end{algorithmic}
\end{algorithm}
The gradients and the masking matrix are calculated as usual, but the masking matrix $M$ is not used directly. We use the momentum matrix $M_\zeta$, which keeps track of the running momentum of the elements of the masking matrix. The momentum matrix is element-wise multiplied with the gradients, and the gradients are finally applied to the weights. The rationale behind this is that the gradients are updated according to the frequency of their activation in the masking matrix. So instead of applying the continuously varying, noisy masking matrix, we use a controlled momentum matrix. The momentum method controls the variance in the gradient updates, especially in the early stages of the RL learning process, where this stability provides for convergence to a better minimum. Also, in the later stages, the momentum matrix still controls the sudden bursts of gradients that could destabilize the learning process, and therefore provides much better performance of the agents, as shown empirically in the results section. As such, we use this method as a controlled replacement for the global gradient clipping usually done in RL algorithms.
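A minimal per-layer sketch of this update follows. We assume here that the statistic $D_t$ is the gradient magnitude and $\mu$ its reference mean; the function and argument names are ours, not the paper's code.

```python
import numpy as np

def m_wgm_step(grad, M_zeta, lam, mu, zeta):
    """One M-WGM gradient update for a layer (sketch): build the
    Heaviside mask, fold it into the running momentum matrix, and
    gate the gradients element-wise."""
    D_t = np.abs(grad)                            # assumed gradient statistic
    M = (D_t - lam * mu > 0).astype(grad.dtype)   # masking matrix H(D_t - lam*mu)
    M_zeta = M_zeta * zeta + M * (1.0 - zeta)     # running momentum of the mask
    return grad * M_zeta, M_zeta                  # gated gradient, updated state
```

Entries that are frequently masked out see their momentum weight decay towards zero, while consistently active entries keep a weight near one, which is exactly the smoothing effect described above.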
\subsection{Adaptive Momentum with GM}
The $\lambda$ parameter in the M-WGM algorithm is kept constant throughout the training. But as noticed in the U-WGM method, modifying the learning condition threshold hyperparameter ($\lambda$) improves performance. Hence in this section, we introduce the algorithm Adaptive Momentum with Gradient Monitoring (AM-WGM), where instead of hand-setting the threshold for masking matrix activation, it is made adaptable based on the performance (reward collection rate $\phi$) of the agent. For example, if the agent performs worse than before, the threshold is increased so that fewer gradients are active in the masking matrix, and vice-versa. This means that when the performance is bad, learning is restricted to fewer weights, while in case of improved performance, more weights participate in the learning process. The pseudo code for AM-WGM is provided in Algorithm \ref{alg:amgm}. To ensure stability in the initial training episodes, $\lambda$ is not modified until a few episodes are completed, usually at about 30\% of the total episodes, and it is then updated after every few episodes, usually at about 10\% of the total episodes. These points are denoted by the hyperparameters $\eta_{start}$ and $\eta_{repeat}$.
\begin{algorithm}[tb]
\caption{Adaptive Momentum with Gradient Monitoring}
\label{alg:amgm}
\begin{algorithmic}[1]
\State {\bfseries Input:} $\nabla L_{t}$, \textit{${W}_{t-1}$}, $\rho$, $\lambda$, $\zeta$, ${\alpha_\lambda}$, $\eta$, ${\eta_{start}}$, ${\eta_{repeat}}$
\State \textbf{Init:} $M_\zeta$, $M$, ${R_{o}}$, ${\phi_{n}}$, ${\phi_{o}}$, ${\eta_{start}}$, ${\eta_{repeat}}$
\State \textbf{Sequence:}
\State Update ${R_{n}}$
\If{$\eta \geq \eta_{start}$ and $\eta \bmod \eta_{repeat} = 0$}
\State ${\phi_{n}} = {R_{n}} / {R_{o}}$
\If {${\phi_{n}} / {\phi_{o}} \geq 1$}
\State \textit{change} = -1
\ElsIf {${\phi_{n}} / {\phi_{o}} < 1$}
\State \textit{change} = 1
\EndIf
\State $\lambda$ = clamp($\lambda$+(${\alpha_{\lambda}}$*\textit{change}), 0, 1)
\State ${\phi_{o}}$ = ${\phi_{n}}$
\State ${R_{o}}$ = ${R_{n}}$
\EndIf
\For{each layer}
\State Masking matrix $M={H} \big(D_t - \lambda\mu \big)$
\State Momentum matrix $M_\zeta = M_\zeta{\zeta} + M(1-{\zeta})$
\State New Gradients: $\nabla L_{t} = \nabla L_{t} \circ M_{\zeta}$
\EndFor
\State \textbf{Output:} Weight \textit{${W}_{t}$} = \textit{${W}_{t-1}$} + $\rho\nabla L_{t}$
\end{algorithmic}
\end{algorithm}
\begin{align}
R_n = \frac{1}{T} \sum_{t=1}^{T} r_t
\end{align}
So AM-WGM is similar to M-WGM in the initial stages of the learning process; it only activates after a certain number of updates have been applied and the learning process has stabilized. The algorithm initializes the parameters: reward collected in the current episode ($R_n$), reward collected in the previous episode ($R_o$), rate of reward collection in the current episode ($\phi_n$), and rate of reward collection in the previous episode ($\phi_o$). The mean reward collected in the current episode is stored as $R_n$. The rate of reward collection, $\phi_n$, is then calculated. If the rate of reward collection increased ($\phi_n / \phi_o \geq 1$), we reduce the threshold ($\lambda$), which allows more gradients to be used, while we increase the threshold ($\lambda$) if the performance has degraded. The hyperparameter $\alpha_\lambda$ controls the amount of change in the $\lambda$ value. The adaptable nature of the algorithm has empirically been shown to increase the performance of the agents.
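The adaptation step can be sketched as follows; the function name is ours, and the clamp mirrors the pseudo code.

```python
def adapt_lambda(lam, R_n, R_o, phi_o, alpha_lam):
    """AM-WGM threshold adaptation (sketch): a rising reward-collection
    rate lowers lambda (more gradients pass the mask), a falling rate
    raises it; lambda is clamped to [0, 1]."""
    phi_n = R_n / R_o
    change = -1 if phi_n / phi_o >= 1 else 1
    lam = min(max(lam + alpha_lam * change, 0.0), 1.0)
    return lam, phi_n
```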
\subsection{Summary}
The GM methods explained above contribute two main things at the algorithm level: an additional trust-region constraint for the policy updates and variance reduction of the gradients. The additional trust-region constraint is provided by removing the noisy, insignificant gradient contributions. Noise in the gradient updates is reduced by use of the masking matrix or the momentum matrix, while the insignificant contributions are removed by using the Heaviside step function. So only the consistently high-contributing gradients are propagated, while the others are factored to have a low impact on the learning process. The removal of these gradients also reduces the overall variance. This is especially critical for stable learning in the initial stages of the learning process. Our results from the experiments corroborate this.
\section{Experimental Results}\label{sec:Results}
We test the GM-RL algorithms on a variety of different environments to demonstrate their applicability empirically. We discuss and apply the proposed methods to a real-world multi-robot coordination environment with discrete state and action spaces. Further, we apply two algorithms, M-WGM and AM-WGM, to the OpenAI Gym environments of Atari games (continuous state space and discrete action space) and MuJoCo simulations (continuous state space and continuous action space). This is because M-WGM and AM-WGM perform best on the multi-robot coordination problem and can also be applied directly, without any hindsight information. The results from the OpenAI Gym environments demonstrate the general 'plug and play' nature of the M-WGM and AM-WGM methods, wherein any previously proven algorithm can be improved upon by usage of GM-RL. All the \acrshort{rl} and proposed GM methods have been developed using PyTorch~\cite{Paszke.2019b}.\\
The results section is structured as follows. All the GM methods introduced in the previous section (F-WGM, U-WGM, M-WGM, AM-WGM) are tested on the multi-robot coordination environment. The results show that the algorithm gets progressively better with each iteration. Then the applicable GM solutions (M-WGM, AM-WGM) are applied to the OpenAI Gym environments. The results obtained are compared with the corresponding algorithms Without Gradient Monitoring (WOGM). The results are obtained over various random seed initializations (Atari: 5 seeds; MuJoCo: 10 seeds) to test the stability of the algorithms.
\subsection{Multi-Robot Coordination Environment}
This section describes the application of GM-RL algorithm on a cooperative, self-learning robotic manufacturing cell. The environment along with the corresponding RL agent setup is described in sections below, followed by the results achieved on the various trials.
\subsubsection{Environment description}
We use a simulation of the cooperative, self-learning robotic manufacturing cell to interact with the RL agent. Training is done on the simulated environment, since training the RL agent on the real manufacturing cell is time consuming and requires large amounts of data to converge to a good minimum~\cite{zhu2017target}. The simulated environment closely resembles the actual environment, emulating all the necessary features like position of work pieces, status of machines etc., accelerating the learning process from taking weeks to a few hours. The simulation environment is developed in Python.
\subsubsection{Learning Set-up} \label{learn_setting}
\begin{figure}[t]
\centering
\includegraphics[scale=0.3, align=c]{images/schematic_dia.jpg}
\caption{Schematic diagram of the multi-robot setup with the common operation region of both robots shown in grey stripes}
\label{fig:robot_scheme}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.8, align=c]{images/robot_station.jpg}
\caption{The multi-robot setup of two industrial robots and a working platform}
\label{fig:robot_setup}
\end{figure}
The multi-robot coordination problem is set up in the cooperative test-bed as shown in Figure~\ref{fig:robot_setup}. The test-bed has a dual robot setup, consisting of an Adept Cobra i600 SCARA robot and an ABB IRB1400 6-DOF-robot. The test-bed has six stations: two input buffers, three handling stations, and one output buffer, as shown in Figure~\ref{fig:robot_scheme}. There are two different types of work-pieces to be handled through the stations, Work-Piece-1 (WP1) and Work-Piece-2 (WP2), each with its respective input buffer. Both work-pieces have their own pre-defined sequences that are encoded into them through embedded RFID chips. The schematic diagram of the test-bed is given in Figure~\ref{fig:robot_scheme}. The robots pick and place the work-pieces from one station to the other. The work space where the handling stations are located is accessible to both robots, hence it is possible to have a shared working space, denoted by the striped grey area in Figure~\ref{fig:robot_scheme}. Each robot has its own proprietary software to control its movements. Hence a supervisory control system is implemented through the Siemens S7 platform that controls and coordinates the robot movements. This supervisory control system sends signals to the robot's on-board control system, where the actual action is taken by the robot. The task of the agents in this test-bed is to move a predefined number of work-pieces through the handling stations into the output buffer within an optimal number of steps or time-frame.
\textbf{Agent, Environment and Reward Representation:}
In this part, we discuss the agent type, number of agents, the action-space, and the state representation for the agents. The \textit{robots} are used as the agents, where both robots act independently in a multi-agent setup. The robots take the universal state of the system and output an action each. For the \textit{architecture} of the robotic multi-agent, independent learners with communication enabled between the agents were chosen, since it is established that communication between the agents helps in faster convergence~\cite{Panait.2005}. This setup gives the RL-agent a good overview of the global and local conditions. For the action-space, the agent has only predefined movements of the work-piece paths, since a supervisory control system takes care of the hardware-level movements. Providing this information, instead of having the agent computationally find the best movements, is a practical choice as well, because such a constraint already exists as part of the manufacturing document in the factory. The action-space by extension controls the input, output, and loading of the work-pieces in the resources. Additionally, no Programmable Logic Control (PLC) programming is required to implement the RL-agent. This eliminates ineffective actions, and the size of the action-space is reduced to 10 instances per robot.
The state-space is a very important piece of information, since this is what the agent sees. The state of the resources was chosen to represent the state-space, since it is independent of the work-order size and also has the computational advantage of being in a discrete space. The state-space has 27 states, given by $3^3$: three handling stations, each of which can hold WP1, WP2, or be empty. Additionally, the work-piece completion percentage is also given as part of the state-space. This acts as a communication channel between the agents to identify the work done by the other agent.
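As a quick sanity check on the count, the joint resource state-space can be enumerated directly (the content labels are our shorthand):

```python
from itertools import product

# Each of the three handling stations holds WP1, WP2, or nothing,
# giving 3**3 = 27 joint resource states.
CONTENTS = ("WP1", "WP2", "empty")
state_space = list(product(CONTENTS, repeat=3))
```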
\begin{table}
\caption{Rewards setup for multi-robot environment}
\begin{center}
\begin{tabular}{ |c|c| }
\hline
State & Reward\\
\hline
Each step & -1\\
Locked state & -100\\
Incomplete after 1000 steps & -100\\
Unequal WP movement & -30\\
WP Output & +50\\
Target achieved & 500\\
\hline
\end{tabular}
\label{tab:jssp-rew}
\end{center}
\end{table}
The setting-up of the \textit{rewards} for an RL-agent is important, as it can influence the stability of the system. We set up the rewards as follows. Every action taken by the robots incurs a reward of -0.1. During the course of the study of the environment, two states were identified as being locked, meaning no further movement of the job is possible. If a robot reaches one of these states, it gets a reward of -100. Also, if the agent is not able to reach the required target in 1000 steps, it receives a reward of -100. To ensure that equal quantities of WP1 and WP2 are processed, a constraint was placed on the system such that if one of the work-pieces reaches completion without the other work-piece reaching even 75\% of its target, the agent gets a reward of -30, as this behaviour is not completely bad but something to be improved upon. Every individual output from the environment earns a reward of +50, while the agent gets a reward of +500 if the global targets of both agents are achieved. The reward for the individual output can be characterised as an intermediate reward, which guides the agent to make more such actions that will eventually lead to achieving the global target. The global target is set as 20 work-pieces each of WP1 and WP2. The rewards are shown in Table~\ref{tab:jssp-rew}.
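The reward signal can be sketched as a lookup over event labels. The labels are our shorthand; note that the per-step value in the text (-0.1) differs from the -1 listed in the rewards table, and we use the table's values here.

```python
# Reward values taken from the rewards table; event labels are ours.
REWARDS = {
    "step": -1,
    "locked_state": -100,
    "timeout_1000_steps": -100,
    "unequal_wp_movement": -30,
    "wp_output": +50,
    "target_achieved": +500,
}

def episode_return(events):
    """Undiscounted sum of rewards over a sequence of event labels."""
    return sum(REWARDS[e] for e in events)
```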
We use a similar approach as presented in \cite{Mnih.2016} for the neural network architecture in the actor-critic algorithm. The main idea is to use multi-task learning \cite{Caruana.1997}, which constructs neural networks that enable generalised learning of the tasks, in the context of the actor-critic algorithm. The 'multi-headed neural network' is also known to generalise the tasks by taking advantage of the system-specific information from the signals \cite{Caruana.1997}. The primary hyper-parameters in focus here are the network size, the learning rate, the batch size, and the $n$-step size. The hyper-parameter values which gave the best results, set using grid-search, are shown in Table \ref{tab:hpy-a2c}. Although the network size can be arbitrarily large, we use this particular size, which gave the best result for the WOGM algorithm. This is discussed in \textit{Robustness to Choice of Network Size} of the results section. The activation function in the actor and critic layers is ReLU, while that used in the shared layers is Sigmoid.
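A PyTorch sketch of this multi-headed architecture follows, using the sizes from the hyperparameter table (input 29, two shared layers of 10 neurons with Sigmoid, two-layer heads of 10 neurons with ReLU, 10 actions per robot). The exact head wiring is our assumption, not the paper's released code.

```python
import torch
import torch.nn as nn

class MultiHeadA2C(nn.Module):
    """Shared sigmoid body feeding one actor head per robot plus a
    critic head (multi-task / multi-headed actor-critic sketch)."""
    def __init__(self, n_in=29, hidden=10, n_actions=10, n_agents=2):
        super().__init__()
        self.body = nn.Sequential(               # 2 shared layers, Sigmoid
            nn.Linear(n_in, hidden), nn.Sigmoid(),
            nn.Linear(hidden, hidden), nn.Sigmoid())

        def head(n_out):                         # 2-layer head, ReLU
            return nn.Sequential(
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_out))

        self.actors = nn.ModuleList([head(n_actions) for _ in range(n_agents)])
        self.critic = head(1)

    def forward(self, x):
        z = self.body(x)
        return [actor(z) for actor in self.actors], self.critic(z)
```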
\begin{table}[]
\caption{Hyperparameters of A2C algorithm}
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
Algorithm & Hyperparameter & Value\\
\hline
All & NN Input Size & 29\\
All & Body - Network Layers & 2\\
All & Body - Layer Neurons & 10\\
All & Head - Network Layers & 2\\
All & Head - Layer Neurons & 10\\
All & Batch size & 10\\
WOGM & Learning rate ($\rho$) & 1e-3\\
VGM, M-WGM, AM-WGM & Learning rate ($\rho$) & 2e-3\\
All & Learning Factor ($\lambda$) & 0.5\\
All & Discount factor ($\gamma$) & 0.99\\
AM-WGM & Momentum value ($\zeta$) & 0.0005\\
AM-WGM & AM-WGM start ($\eta_{start}$) & 1500\\
AM-WGM & AM-WGM repeat ($\eta_{repeat}$) & 1000\\
AM-WGM & Masking Momentum ($\zeta$) & 0.999\\
AM-WGM & Threshold change ($\alpha_\lambda$) & 0.001\\
\hline
\end{tabular}
\label{tab:hpy-a2c}
\end{center}
\end{table}
\subsubsection{Results}
In this section, we discuss the results in three sub-parts, namely the gradients during back-propagation, the amount of rewards the agents collected, and the task time and number of work-piece outputs achieved by each agent in the multi-robot environment. All the results provided here are from the deployment of the agents after the training is complete, although we also notice that the GM-RL algorithms improve the training performance, leading to faster convergence in all GM methods, as shown in Figure~\ref{fig:jssp_conv}. All the agents are trained for 5000 episodes, where convergence is achieved after 1000-2000 episodes in each of the algorithms, allowing for 3000 further episodes of learning. The agents are eventually tested for 4000 episodes.
\textbf{Gradients:}
In this section, we discuss the amount of gradients that back-propagate through the neural network to analyse the targeted learning activity in the network. Since the gradient directions can be both positive and negative, in order to get the actual quantum of the gradients, the absolute sum of the gradients for each backward pass is calculated. The absolute sum for the GM methods is calculated after the masking matrix is applied, hence the quantum of gradients back-propagated by the GM methods is considerably less than for the WOGM method, as can be seen in Figure~\ref{fig:abs_grads}. It can be noted that in the case of U-WGM a spike in the gradients occurs at the iterations at which the masking matrix is re-applied. While F-WGM and U-WGM are still prone to the odd fluctuations in the gradients back-propagated, it should be noted that the momentum-based GM methods (M-WGM and AM-WGM) control their gradient variations well during the entire training process. The WOGM training is exposed to extensive variation in the amount of gradient flow. This variance reduction eventually leads to a stable learning process, which is reflected in the rewards collected as well, as illustrated in Figure~\ref{fig:rewards}. The AM-WGM algorithm collects the most rewards, followed by the rest of the GM methods, with the WOGM algorithm collecting the least amount of rewards.
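The plotted quantity is simply the absolute sum over all layer gradients of one backward pass, e.g. (a sketch; the function name is ours):

```python
import numpy as np

def abs_grad_sum(layer_grads):
    """Absolute sum of gradients in one backward pass; for the GM
    methods this is computed after the masking/momentum matrix has
    already gated the gradients."""
    return float(sum(np.abs(g).sum() for g in layer_grads))
```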
\textbf{Robustness to Choice of Network Size:}
Another important advantage of using the GM methods is the higher degree of freedom, or robustness, with respect to the size of the network chosen. This is because the threshold function ($\lambda\mu$) explained in Algorithm~\ref{alg:gm} adaptively selects only the required neurons for learning and ensures the learning is focused only on them. In Figure~\ref{fig:act_neurons}, the dynamic selection of the amount of active neurons by all the GM methods is illustrated over the training progress. This dynamic selection accelerates the learning process while removing the need for hyper-parameter tuning of the number of neurons in the DNN. To provide additional evidence for this phenomenon, we trained the same multi-robot coordination problem with a randomly chosen bigger network size (3 layers, 20 neurons per layer). Three simulations were made: one without any GM method, one with M-WGM (threshold 0.5), and one with M-WGM (threshold 0.75). As illustrated in Figure~\ref{fig:nn_rew}, the rewards collected by the WOGM method with more parameters are considerably less than those of all the M-WGM methods when the network size is changed. The drop in performance is substantially less in the GM algorithms. Furthermore, Figure~\ref{fig:nn_output} illustrates the drastic increase in the number of steps required for the WOGM method to achieve the work-piece transportation goal. This shows the robustness to the change in the size of the network. Figure~\ref{fig:nn_act_perct} illustrates the automatic adjustment of the amount of active weights in the M-WGM methods. It can be observed that for the same learning factor ($\lambda$) value of 50\%, the quantum of gradients back-propagated in the smaller network is higher than in the bigger network, further demonstrating the automatic usable-network-size adjustment capability of the algorithm.
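The active-gradient percentage compared across network sizes can be computed from the per-layer masks, e.g. (a sketch under the assumption that "active" means a non-zero mask entry):

```python
import numpy as np

def active_gradient_percentage(masks):
    """Percentage of weight entries with a non-zero (momentum) mask,
    i.e. the fraction of the network currently participating in learning."""
    total = sum(m.size for m in masks)
    active = sum(int(np.count_nonzero(m)) for m in masks)
    return 100.0 * active / total
```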
\begin{figure}[t]
\centering
\includegraphics[scale=0.2, align=c]{images/JSSP_grads.png}
\caption{Gradients Propagating through the network}
\label{fig:abs_grads}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth, align=c]{images/JSSP_act_neurons.png}
\caption{Active neurons in back-prop}
\label{fig:act_neurons}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth, align=c]{images/jssp_nn_rewards.png}
\caption{Rewards collected by the agents of different network sizes }
\label{fig:nn_rew}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth, align=c]{images/jssp_nn_no_steps.png}
\caption{Output by different NN sizes}
\label{fig:nn_output}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.2, align=c]{images/jssp_nn_active_compar.png}
\caption{Comparison of active gradient percentage by network size}
\label{fig:nn_act_perct}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.2, align=c]{images/JSSP_rew.png}
\caption{Rewards collected by each algorithm}
\label{fig:rewards}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.2, align=c]{images/JSSP_wps_train.png}
\caption{Convergence speed of algorithms}
\label{fig:jssp_conv}
\end{figure}
\textbf{Task Time:}
Task time is the number of steps required by the agents to move the 20 work-pieces through the production system. The figures show the episode-wise steps taken (Figure~\ref{fig:task_time}) and jobs completed (Figure~\ref{fig:wp_output}). WOGM performs the worst in terms of the number of steps required and is also not stable in work completion. F-WGM improves the task time while still being a bit unstable in work-piece completion. U-WGM provides a very stable output while reducing the task time further. While M-WGM provides the best task completion time and is stable enough for deployment, AM-WGM provides the best combination of stability and task completion time. This is also reflected in the amount of rewards collected.
\begin{figure}[t]
\centering
\includegraphics[scale=0.2, align=c]{images/JSSP_step_cnt.png}
\caption{Steps taken to complete the given target}
\label{fig:task_time}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.2, align=c]{images/JSSP_wps.png}
\caption{Work-pieces completed in each episode}
\label{fig:wp_output}
\end{figure}
\subsection{MuJoCo}
\subsubsection{Environment Description:}
The MuJoCo engine facilitates accurate and fast simulations of physical systems for research and development. This engine, wrapped inside the OpenAI Gym environment, provides a frequently used~\cite{haarnoja2018soft,fujimoto2018addressing} training and testing benchmark environment in the domain of RL. The already established baseline performance in the form of the PPO algorithm helps in the direct comparison of the effect of introducing GM methods. We test M-WGM and AM-WGM on four randomly selected MuJoCo environments, i.e. Half Cheetah, Ant, Humanoid, and Inverted Double Pendulum (IDP). Each algorithm was run on the four environments with 10 random seed initializations. The learning setup is the same as in~\cite{Schulman.2017}, if not stated otherwise. The hyperparameters for all the different algorithms are shown in Table~\ref{tab:hpy-Mujoco-ppo}.
\subsubsection{Results}
Since we are reducing the gradients back-propagated, the hyperparameters of the PPO algorithm are modified to reflect that and take advantage of the reduced variance. For example, the learning rate is increased compared to the original paper. This does not destabilize the learning process as in the vanilla PPO implementation, due to the inherent variance reduction capabilities of the M-WGM and AM-WGM algorithms. It should be noted that we have also not used the global gradient clipping implemented in the vanilla PPO. The GM implementation provides a better layer-wise control over the norm of the gradients. As illustrated in Figure~\ref{fig:mjc_rews}, during our trials both GM methods performed better than WOGM. The M-WGM and AM-WGM algorithms both performed better on average in all four environments across the 10 random seeds. It should be noted that AM-WGM provides the best normalized improvement over WOGM. The final scores with the maximum average reward collected are presented in Table~\ref{tab:mujoco_scores}.
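The normalized improvement plotted in the comparison figure is the percent change over the WOGM baseline; for example, using the Half Cheetah mean scores from the table:

```python
def percent_change(score, baseline):
    """Percent improvement of a GM variant over the WOGM baseline score."""
    return 100.0 * (score - baseline) / abs(baseline)
```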
\begin{table}
\caption{Reward in MuJoCo environment}
\begin{center}
\begin{tabular}{ |c|c|c|c| }
\hline
Environment & PPO & M-WGM & AM-WGM\\
\hline
Half Cheetah&3600$\pm$1447&3744$\pm$1621&4037$\pm$1785\\
Ant&3151$\pm$584&3225$\pm$611&3183$\pm$758\\
Humanoid&720$\pm$381&750$\pm$658&893$\pm$1007\\
IDP&7583$\pm$1151&8154$\pm$1063&8364$\pm$959\\
\hline
\end{tabular}
\label{tab:mujoco_scores}
\end{center}
\end{table}
\begin{table}
\caption{Hyperparameter Values for PPO in MuJoCo}
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
Algorithm & Hyperparameter & Value\\
\hline
WOGM & Learning rate & 2.5e-4\\
WOGM & Hidden Units & 64\\
M- \& AM-WGM & Learning rate & 3e-4\\
M- \& AM-WGM & Hidden Units & 96\\
WOGM & k-epoch updates & 4\\
M- \& AM-WGM & k-epoch updates & 5\\
M- \& AM-WGM & Momentum Value ($\zeta$) & 0.99, 0.9995\\
M- \& AM-WGM & Threshold ($\lambda$) & 0.5\\
M- \& AM-WGM & Global Gradient clipping & False\\
M- \& AM-WGM & Momentum Matrix($M_\zeta$) Init & 1\\
AM-WGM & Threshold Change ($\alpha_\lambda$) & 0.05\\
AM-WGM & Adaptive start from & 150\\
AM-WGM & Adaptive start for & 50\\
\hline
\end{tabular}
\label{tab:hpy-Mujoco-ppo}
\end{center}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[scale=0.2, align=c]{images/mujoco_GM_comparo.JPG}
\caption{Percent Change in the performance of the M-WGM and AM-WGM with WOGM as baseline }
\label{fig:mjc_rews}
\end{figure}
\subsection{Atari}
\subsubsection{Environment Description}
The Atari games were first introduced in~\cite{Bellemare.2013} to aid the development of general, domain-independent AI technology. The environment also provides a baseline on which previous algorithms have been tested. We test a total of 10 games, 6 of which were randomly selected (Battlezone, Frostbite, Gopher, Kangaroo, Timepilot, and Zaxxon). The other 4 (Montezuma's Revenge, Pitfall, Skiing, and Solaris) are selected specifically to test the long-term credit assignment capability of the algorithms. We use the RAM information as input to the network, with no frames being skipped. The 10 games were run on the three algorithms (WOGM, M-WGM, and AM-WGM) over 5 random seed initializations. The learning setup is the same as in~\cite{Schulman.2017}, if not stated otherwise. The hyperparameters for all the different algorithms are shown in Table~\ref{tab:hpy-Atari-ppo}.
\subsubsection{Results}
\begin{table}
\caption{Hyperparameter Values for PPO in Atari games}
\begin{center}
\begin{tabular}{ |c|c|c| }
\hline
Algorithm & Hyperparameter & Value\\
\hline
WOGM & Learning rate & 2.5e-4\\
WOGM & Hidden Units & 64\\
M- \& AM-WGM & Learning rate & 4e-4\\
M- \& AM-WGM & Hidden Units & 96\\
M- \& AM-WGM & Momentum Value ($\zeta$) & 0.999\\
M- \& AM-WGM & Threshold ($\lambda$) & 0.5\\
M- \& AM-WGM & Global Gradient clipping & False\\
M- \& AM-WGM & Momentum Matrix($M_\zeta$) Init & 0\\
AM-WGM & Threshold Change ($\alpha_\lambda$) & 0.1\\
AM-WGM & Adaptive start from & 2000\\
AM-WGM & Adaptive start for & 1000\\
\hline
\end{tabular}
\label{tab:hpy-Atari-ppo}
\end{center}
\end{table}
As with the implementation in the MuJoCo environment, we use a higher learning rate and do not use the global gradient clipping used in the vanilla PPO. We also found that increasing the k-epoch updates in AM-WGM increases its performance significantly. As shown in Figure~\ref{fig:atari_rews_1}, the M-WGM method performs better than WOGM in 4 out of the 6 random games, while AM-WGM performs better in 5 out of the 6 random games. There was no performance improvement for the algorithms in the difficult games except in Solaris, where a drastic improvement is made by the GM algorithms, as shown in Figure~\ref{fig:atari_rews_2}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.2, align=c]{images/atari_score_1.JPG}
\caption{Percentage Performance improvement of the proposed methods in 6 randomly selected games}
\label{fig:atari_rews_1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.2, align=c]{images/atari_score_2.JPG}
\caption{Performance improvement in difficult games}
\label{fig:atari_rews_2}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
We propose four novel neural network training methodologies, called Gradient Monitoring in Reinforcement Learning, for a more robust and faster training progress. The proposed methods incorporate a targeted training procedure in the neural network by systematically reducing the gradient variance and providing an additional trust-region constraint for the policy updates. The adaptive momentum method helps the network to choose the optimal number of parameters required for a particular training step based on the feedback from the rewards collected. This results in the training algorithm being robust to the selection of the size of the network. The proposed methods, on average, outperform the standard A2C in the multi-robot cooperation application and the standard PPO algorithm in the MuJoCo and Atari environments.
A potential limitation of the F-WGM and U-WGM methods is the occurrence of peaks in the gradients during training, which can sometimes disturb the learning process. Another limitation of AM-WGM is the selection of the hyperparameter $\eta_{start}$. This can be eliminated by using feedback from the reward collection during training, which will be part of future work. Subsequent research will also focus on the performance improvement of the RL agent to generalize to unseen environment setups like in CoinRun~\cite{Cobbe.2019}, the application to model-free on-policy RL algorithms like trust region policy optimization~\cite{Schulman.2015}, and model-free off-policy RL algorithms like deep deterministic policy gradient~\cite{TimothyP.Lillicrap.2016}.
\printbibliography
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{images/sharafath.jpg}}]{Mohammed Sharafath Abdul Hameed}
received the M.Sc. degree in 2019 from the South Westphalia University of Applied Sciences, Soest, Germany. He is currently working as a research assistant at the department of automation technology at the South Westphalia University of Applied Sciences, Soest, and working towards his PhD. His research interests include deep reinforcement learning, automation, production planning, and smart manufacturing.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{images/Chadha.jpg}}]{Gavneet Singh Chadha}
received the M.Sc. degree in 2016 from the South Westphalia University of Applied Sciences, Soest, Germany. He is currently working as a research assistant at the department of automation technology at the South Westphalia University of Applied Sciences, Soest, and working towards his PhD. His research interests include deep neural networks, fault diagnosis, predictive maintenance, and machine learning.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{images/190schwung.jpg}}]{Andreas Schwung}
received the Ph.D. degree in electrical engineering from the Technische Universit\"at Darmstadt, Darmstadt, Germany, in 2011. From 2011 to 2015, he was an R\&D Engineer with MAN Diesel \& Turbo SE, Oberhausen, Germany. Since 2015, he has been a Professor of automation technology at the South Westphalia University of Applied Sciences, Soest, Germany. His research interests include model based control, networked automation systems, and intelligent data analytics with applications in manufacturing and process industry.
\end{IEEEbiography}
\begin{IEEEbiographynophoto}{Steven X. Ding}
received the Ph.D. degree in
electrical engineering from Gerhard Mercator
University of Duisburg, Duisburg, Germany,
in 1992.
From 1992 to 1994, he was an R\&D Engineer
with Rheinmetall GmbH. From 1995 to 2001, he
was a Professor of control engineering with the
University of Applied Science Lausitz, Senftenberg,
Germany, where he served as the Vice
President during 1998–2000. Since 2001, he
has been a Professor of control engineering and
the Head of the Institute for Automatic Control and Complex Systems
(AKS) with the University of Duisburg-Essen, Duisburg. His research
interests are model-based and data-driven fault diagnosis and fault tolerant
systems and their application in industry, with a focus on
automotive systems and mechatronic and chemical processes.
\end{IEEEbiographynophoto}
\end{document} | {
"attr-fineweb-edu": 1.961914,
"attr-cc_en_topic": 12,
"domain": "arxiv"
} |
\section{Introduction}\label{intro section}
In this present work, we are interested in the 3-D compressible Navier-Stokes equations with an external potential force in the whole space $\mathbb R^3$ ($j=1,2,3$):
\begin{align}\label{NS}
\left\{ \begin{array}{l}
\rho_t + \text{\rm div} (\rho u) =0, \\
(\rho u^j)_t + \text{\rm div} (\rho u^j u) + (P)_{x_j} = \mu\,\Delta u^j +
\lambda \, (\text{\rm div} \,u)_{x_j} + \rho f^{j}.
\end{array}\right.
\end{align}
Here $x\in\mathbb R^3$ is the spatial coordinate and $t\in[0,\infty)$ stands for the time. The unknown functions $\rho=\rho(x,t)$ and $u=(u^1,u^2,u^3)(x,t)$ represent the density and velocity vector in a compressible fluid. The function $P=P(\rho)$ denotes the pressure, $f=(f^1(x),f^2(x),f^3(x))$ is a prescribed external force and $\mu$, $\lambda$ are positive viscosity constants. The system \eqref{NS} is equipped with initial condition
\begin{equation}
\label{IC}
(\rho(\cdot,0)-\tilde\rho,u(\cdot,0)) = (\rho_0-\tilde\rho,u_0),
\end{equation}
where the non-constant time-independent function $\tilde\rho=\tilde\rho(x)$ (known as the {\it steady state solution} to \eqref{NS}) can be obtained formally by taking $u\equiv0$ in \eqref{NS}:
\begin{align}\label{steady state}
\nabla P(\tilde\rho(x)) =\tilde\rho(x)f(x).
\end{align}
The Navier-Stokes system \eqref{NS} expresses conservation of momentum and conservation of mass for Newtonian fluids, which has been studied by various teams of researchers. The local-in-time existence of classical solution to the full Navier-Stokes equations was proved by Nash \cite{nash} and Tani \cite{tani}. Later, Matsumura and Nishida \cite{mn1} obtained the global-in-time existence of $H^3$-solutions when the initial data was taken to be small (with respect to $H^3$ norm), the results were then generalised by Danchin \cite{danchin} who showed the global existence of solutions in critical spaces. In the case of large initial data, Lions \cite{lions} obtained the existence of global-in-time finite energy weak solutions, yet the problem of uniqueness for those weak solutions remains completely open. In between the two types of solutions as mentioned above, a type of ``intermediate weak'' solutions were first suggested by Hoff in \cite{hoff95, hoff05, hoff06} and later generalised by Matsumura and Yamagata in \cite{MY01}, Suen in \cite{suen13b, suen14, suen16, suen2021existence} and other systems which include compressible magnetohydrodynamics (MHD) \cite{suenhoff12, suen12, suen20b}, compressible Navier-Stokes-Poisson system \cite{suen20a} and chemotaxis systems \cite{LS16}. Solutions as obtained in this intermediate class are less regular than those small-smooth type solutions obtained by Matsumura and Nishida \cite{mn1} and Danchin \cite{danchin} in such a way that the density and velocity gradient may be discontinuous across some hypersurfaces in $\mathbb R^3$. On the other hand, those intermediate weak solutions would have more regularity than the large-weak type solutions developed by Lions \cite{lions} so that the uniqueness and continuous dependence of solutions may be obtained; see \cite{hoff06} and other compressible system \cite{suen20b}.
Nevertheless, the global existence of smooth solution to the Navier-Stokes system \eqref{NS} with arbitrary smooth data is still unknown. In the seminal work of Xin \cite{xin}, it was proved that smooth solutions to \eqref{NS} blow up in finite time in the whole space when the initial density has compact support. Motivated by the well-known Serrin's criterion on the Leray-Hopf weak solutions to the 3-D incompressible Navier-Stokes equations, Huang, Li and Xin \cite{HLX11} later proved that the strong solution exists globally if the velocity satisfies the Serrin's condition and either the sup-norm of the density or the time integral of the $L^\infty$-norm of the divergence of the velocity is bounded. Under an extra assumption on $\lambda$ and $\mu$, Sun, Wang and Zhang \cite{swz11} obtained a Beale-Kato-Majda blow-up criterion in terms of the upper bound of the density, which is analogous to the Beale-Kato-Majda criterion \cite{BKM84} for ideal incompressible flows. The results from \cite{swz11} were later generalised to other compressible systems \cite{suen13a, suen15, suen20b}.
In this present work, we extend the results from \cite{HLX11} and \cite{swz11} to the case of compressible Navier-Stokes equations with large potential force. The main novelties of this current work can be summarised as follows:
\begin{itemize}
\item We successfully extend the results from \cite{HLX11} and obtain a Serrin type blow-up criterion for \eqref{NS} in which initial vacuum state is allowed;
\item For the isothermal case, under the assumption that the initial density is bounded away from zero, we obtain a blow-up criterion in terms of the density {\it only}. Such a result is also consistent with the case studied in \cite{suen13a} when the magnetic field is removed;
\item We introduce some new methods for controlling the extra terms originating from the external force $f$, which is absent in \cite{suen13a} and \cite{suen20b}.
\end{itemize}
We give a brief description on the analysis applied in this work, and the main idea of the following discussion comes from Hoff \cite{hoff95, hoff05, hoff06}. Due to the presence of the external force $f$, one cannot simply apply the same method given in \cite{suen13a} and \cite{suen20b} for obtaining the required blow-up criteria for the solutions. To understand the issue, we consider a decomposition on the velocity $u$ given by
\begin{equation*}
u=u_p+u_s,
\end{equation*}
for which $u_p$ and $u_s$ satisfy
\begin{align}\label{def of u_p and u_s}
\left\{ \begin{array}{l}
\mu\Delta(u_p)+\lambda\nabla\text{\rm div}(u_p)=\nabla(P-P(\tilde\rho)),\\
\rho(u_s)_t-\mu\Delta u_s-\lambda\nabla\text{\rm div}(u_s)=-\rho u\cdot\nabla u-\rho(u_p)_t+\tilde\rho^{-1}(\rho-\tilde\rho)\nabla P(\tilde\rho).
\end{array}\right.
\end{align}
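As a consistency check, note that the decomposition \eqref{def of u_p and u_s} indeed recovers the momentum equation: since \eqref{steady state} gives $\rho f=\rho\tilde\rho^{-1}\nabla P(\tilde\rho)$, adding the two equations in \eqref{def of u_p and u_s} yields
\begin{align*}
\rho u_t-\mu\Delta u-\lambda\nabla\text{\rm div}(u)&=-\rho u\cdot\nabla u-\nabla(P-P(\tilde\rho))+\tilde\rho^{-1}(\rho-\tilde\rho)\nabla P(\tilde\rho)\\
&=-\rho u\cdot\nabla u-\nabla P+\rho f,
\end{align*}
which is exactly \eqref{NS}$_2$.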
The decomposition of $u$ is important for obtaining some better estimates on the velocity $u$, which allows us to control $u$ in terms of $u_s$ and $u_p$ separately. Since we are addressing solutions around the steady state $(\tilde\rho,0)$, it is natural to consider the difference $P-P(\tilde\rho)$ as appeared in \eqref{def of u_p and u_s}$_1$. Yet the term $P(\tilde\rho)$ will create extra terms in the following sense:
\begin{itemize}
\item On the one hand, since $P(\tilde\rho)$ is not necessarily constant, there is an extra term $\nabla P(\tilde\rho)$ arising from $\nabla(P-P(\tilde\rho))$;
\item On the other hand, using the identity \eqref{steady state}, we can express $f$ in terms of $\tilde\rho$ and $P$, so that the term $\rho f$ from \eqref{NS}$_2$ can be combined with $\nabla P(\tilde\rho)$ to give $\tilde\rho^{-1}(\rho-\tilde\rho)\nabla P(\tilde\rho)$ in \eqref{def of u_p and u_s}$_2$.
\end{itemize}
Compared with the previous works \cite{suen13a} and \cite{suen20b}, the term $\tilde\rho^{-1}(\rho-\tilde\rho)\nabla P(\tilde\rho)$ is distinctive for the present system \eqref{NS}, and we have to develop a new method for dealing with it. By examining the regularity, one can see that $\tilde\rho^{-1}(\rho-\tilde\rho)\nabla P(\tilde\rho)$ is more regular than $\nabla(P-P(\tilde\rho))$, hence it can be used for obtaining $H^1$ estimates on $u_s$ provided that $\|\tilde\rho^{-1}(\rho-\tilde\rho)\nabla P(\tilde\rho)\|_{L^2}$ is under control. Thanks to the $L^2$-energy balance law given by \eqref{L^2 estimate blow up}, we can control $\|\tilde\rho^{-1}(\rho-\tilde\rho)\nabla P(\tilde\rho)\|_{L^2}$ if $\rho$ is bounded. This is a crucial step for obtaining estimates for $u$ in some higher regularity classes, and the details will be carried out in section~\ref{proof of main 2 section}.
Another key of the proof is to extract some ``hidden regularity'' from the velocity $u$ and density $\rho$, which is crucial for decoupling $u$ and $\rho$. In order to achieve our goal, we introduce an important canonical variable $F$ associated with the system \eqref{NS}, which is known as the {\it effective viscous flux}. To see how it works, by the Helmholtz decomposition of the mechanical forces, we can rewrite the momentum equation \eqref{NS}$_2$ as follows (summation over $k$ is understood):
\begin{equation}\label{derivation for F}
\rho\dot u^j = (\tilde\rho F)_{x_j}+\mu\omega^{j,k}_{x_k}+\rho f^{j}-P(\tilde\rho)_{x_j},
\end{equation}
where $\dot u^j=u^j_t+u\cdot\nabla u^j$ is the material derivative of $u^j$, $\omega^{j,k}=u^j_{x_k}-u^k_{x_j}$ is the vorticity and the effective viscous flux $F$ is defined by
\begin{equation*}
\tilde\rho F=(\mu+\lambda)\text{\rm div}\,u-(P(\rho)-P(\tilde\rho)).
\end{equation*}
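Indeed, \eqref{derivation for F} follows from \eqref{NS}$_2$ upon writing
\begin{align*}
\mu\Delta u^j=\mu\,\omega^{j,k}_{x_k}+\mu\,(\text{\rm div}\,u)_{x_j},
\end{align*}
so that, by the definition of $F$,
\begin{align*}
\mu\Delta u^j+\lambda(\text{\rm div}\,u)_{x_j}-(P)_{x_j}=(\tilde\rho F)_{x_j}+\mu\,\omega^{j,k}_{x_k}-P(\tilde\rho)_{x_j}.
\end{align*}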
By differentiating \eqref{derivation for F} with respect to $x_{j}$ and using the anti-symmetry from $\omega$, we obtain the following Poisson equation for $F$
\begin{equation}\label{poisson in F}
\Delta (\tilde\rho F) = \text{\rm div}(\rho\dot{u}-\rho f+\nabla P(\tilde\rho)).
\end{equation}
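Here the vorticity contribution drops out, since the skew-symmetry $\omega^{j,k}=-\omega^{k,j}$ implies
\begin{align*}
\text{\rm div}(\mu\,\omega^{\cdot,k}_{x_k})=\mu\,\omega^{j,k}_{x_jx_k}=0.
\end{align*}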
The Poisson equation \eqref{poisson in F} can be viewed as the analog for compressible Navier-Stokes system of the well-known elliptic equation for pressure in incompressible flow. For sufficiently regular steady state $\tilde\rho$, by exploiting the Rankine-Hugoniot condition (see \cite{suenhoff12} for example), one can deduce that the effective viscous flux $F$ is relatively more regular than $\text{\rm div}(u)$ or $P(\rho)$, which turns out to be crucial for the overall analysis in the following ways:
\noindent{(i)} The equation \eqref{derivation for F} allows us to decompose the acceleration density $\rho \dot u$ as the sum of the gradient of the scalar $F$ and the divergence-free vector field $\omega^{\cdot,k}_{x_k}$. The skew-symmetry of $\omega$ insures that these two vector fields are orthogonal in $L^2$, so that $L^2$ bounds for the terms on the left side of \eqref{derivation for F} immediately give $L^2$ bounds for the gradients of both $F$ and $\omega$. These in turn will be used for controlling $\nabla u$ in $L^4$ when the estimates of $u(\cdot,t)$ in $H^2$ are unknown, which are crucial for estimating different functionals in $u$ and $\rho$; also refer to Lemma~\ref{higher estimate on nabla u lem} and Remark~\ref{rem:L4 bound on u by F}. The details will be carried out in section~\ref{proof of main 1 section}.
\noindent{(ii)} As we have seen before, we aim at applying a decomposition of $u$ given by $u=u_p+u_s$ with $u_p$ satisfying \eqref{def of u_p and u_s}$_1$. To estimate the term $\partial_t u_p$, if we apply time-derivative on the above identity, then there will be the term $\nabla(\partial_t P(\rho))$ appearing in the analysis. In view of the strongly elliptic system \eqref{def of u_p and u_s}$_1$, we can obtain estimates on $\|\partial_t u_p\|_{L^2}$ in terms of the lower order term $\|P(\rho)u\|_{L^2}$ if we have
\begin{align*}
\nabla(\partial_t P(\rho))=\nabla\text{\rm div}(-P(\rho)u),
\end{align*}
which is valid when the system is {\it isothermal}, i.e. for the case when $\gamma=1$ in \eqref{condition on P}; also refer to Lemma~\ref{estimate on u_s lemma} and the estimate \eqref{estimate on dt u_p}.
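For the reader's convenience, we record the derivation of this identity: when $\gamma=1$ we have $P(\rho)=a\rho$, so multiplying the mass equation \eqref{NS}$_1$ by $a$ gives
\begin{align*}
\partial_t P(\rho)+\text{\rm div}(P(\rho)u)=0,
\end{align*}
and taking the gradient of the above yields $\nabla(\partial_t P(\rho))=\nabla\text{\rm div}(-P(\rho)u)$.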
We now give a precise formulation of our results. For $r\in[1,\infty]$ and $k\in[1,\infty)$, we let $L^r:=L^r(\mathbb R^3)$, $W^{k,r}:=W^{k,r}(\mathbb R^3)$ and $H^{k}:=H^{k}(\mathbb R^3)$ be the standard Sobolev spaces, and we define the following function spaces for later use (also refer to \cite{HLX11, WEN2013534, swz11} for similar definitions):
\begin{align*}
\left\{ \begin{array}{l}
D^{k,r}:=\{u\in L^1_{loc}(\mathbb R^3):\|\nabla^k u\|_{L^r}<\infty\},\qquad\|u\|_{D^{k,r}}:=\|\nabla^k u\|_{L^r},\\
D^{k}:=D^{k,2},\qquad D^1_{0}:=\{u\in L^6:\|\nabla u\|_{L^2}<\infty\}.
\end{array}\right.
\end{align*}
We define the system parameters $P$, $f$, $\mu$, $\lambda$ as follows. For the pressure function $P=P(\rho)$ and the external force $f$, we assume that
\begin{align}\label{condition on P}
\mbox{$P(\rho)=a\rho^\gamma$ with constants $a>0$ and $\gamma\ge1$;}
\end{align}
\begin{align}\label{condition on f}
\mbox{there exists $\psi\in H^2$ such that $f=\nabla\psi$ and $\psi(x)\to 0$ as $|x|\to\infty$.}
\end{align}
The viscosity coefficients $\mu$ and $\lambda$ are assumed to satisfy
\begin{align}\label{assumption on viscosity}
7\mu>\lambda>0.
\end{align}
Next, we define $\tilde\rho$ as mentioned at the beginning of this section. Given a constant density $\rho_{\infty}>0$, we say that $(\tilde\rho,0)$ is a {\it steady state solution} to \eqref{NS} if $\tilde\rho\in C^2(\mathbb R^3)$ and the following holds
\begin{align}
\label{eqn for steady state} \left\{ \begin{array}{l}
\nabla P(\tilde\rho(x)) =\tilde\rho(x)\nabla\psi(x), \\
\lim\limits_{|x|\rightarrow\infty}\tilde\rho(x) = \rho_{\infty}.
\end{array}\right.
\end{align}
Given $\rho_\infty>0$, we further assume that
\begin{align}\label{bound on psi}
-\int_0^{\rho_\infty}\frac{P'(\rho)}{\rho} d\rho<\inf_{x\in\mathbb R^3} \psi(x)\le\sup_{x\in\mathbb R^3} \psi(x)<\int_{\rho_\infty}^\infty \frac{P'(\rho)}{\rho} d\rho,
\end{align}
and by solving \eqref{eqn for steady state}, $\tilde\rho$ can be expressed explicitly as follows:
\[
\tilde\rho(x)=
\begin{cases}
\displaystyle\rho_{\infty}\exp(\frac{1}{a}\psi(x)), & \text{for } \gamma=1 \\
\displaystyle(\rho_{\infty}^{\gamma-1}+\frac{\gamma-1}{a\gamma}\psi(x))^\frac{1}{\gamma-1}, & \text{for } \gamma>1.
\end{cases}
\]
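Indeed, for $\gamma>1$, the equation \eqref{eqn for steady state}$_1$ is equivalent to
\begin{align*}
\frac{a\gamma}{\gamma-1}\nabla(\tilde\rho^{\gamma-1})=\nabla\psi,
\end{align*}
so that $\tilde\rho^{\gamma-1}-\rho_\infty^{\gamma-1}=\frac{\gamma-1}{a\gamma}\psi$ upon using \eqref{eqn for steady state}$_2$ and the decay of $\psi$ at infinity; the case $\gamma=1$ follows in the same way from $a\nabla(\ln\tilde\rho)=\nabla\psi$.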
We recall a useful lemma from \cite{LM11} about the existence of steady state solution $(\tilde\rho,0)$ to \eqref{NS} which can be stated as follows:
\begin{lem}\label{existence for steady state lem}
Given $\rho_\infty>0$, if we assume that $P$, $f$, $\psi$ satisfy \eqref{condition on P}-\eqref{assumption on viscosity} and \eqref{bound on psi}, then there exists positive constants $\rho_1,\rho_2,\delta$ and a unique solution $\tilde\rho$ of \eqref{eqn for steady state} satisfying $\tilde\rho-\rho_\infty\in H^2\cap W^{2,6}$ and
\begin{align}\label{1.1.17-2}
\rho_1<\rho_1+\delta\le\tilde\rho\le\rho_2-\delta<\rho_2.
\end{align}
\end{lem}
From now on, we fix $\rho_\infty>0$ and choose $\tilde\rho$ satisfying Lemma~\ref{existence for steady state lem}. For the sake of simplicity, we also write $P=P(\rho)$ and $\tilde P=P(\tilde\rho)$ unless otherwise specified.
We give the definitions for strong solution and maximal time of existence as follows.
\begin{defn}
We say that $(\rho,u)$ is a (local) {\it strong solution} of \eqref{NS} if for some $T>0$ and $q'\in(3,6]$, we have
\begin{align}\label{def of strong sol}
\left\{ \begin{array}{l}
0\le\rho\in C([0,T],W^{1,q'}),\qquad\rho_t\in C([0,T],L^{q'}),\\
u\in C([0,T], D^1\cap D^2)\cap L^2(0,T; D^{2,q'}),\\
\rho^\frac{1}{2}u_t\in L^\infty(0,T;L^2),\qquad u_t\in L^2(0,T;D^1).
\end{array}\right.
\end{align}
Furthermore, $(\rho,u)$ satisfy the following conditions:
\begin{itemize}
\item For all $0\le t_1\le t_2\le T$ and $C^1$ test functions $\varphi\in\mathcal{D}(\mathbb R^3\times(-\infty,\infty))$ which are Lipschitz on $\mathbb R^3\times[t_1,t_2]$ with $\text{\rm supp }\varphi(\cdot,\tau)\subset K$, $\tau\in[t_1,t_2]$, where $K$ is compact, it holds
\begin{align}\label{WF1}
\left.\int_{\mathbb R^3}\rho(x,\cdot)\varphi(x,\cdot)dx\right|_{t_1}^{t_2}=\int_{t_1}^{t_2}\int_{\mathbb R^3}(\rho\varphi_t + \rho u\cdot\nabla\varphi)dxd\tau.
\end{align}
\item For test functions $\varphi$ which are locally Lipschitz on $\mathbb R^3 \times [0, T]$ and for which $\varphi$, $\varphi_t$, $\nabla\varphi \in L^2(\mathbb R^3 \times (0,T))$, $\nabla\varphi \in L^\infty(\mathbb R^3 \times (0,T))$ and $\varphi(\cdot,T) = 0$, it holds
\begin{align}\label{WF2}
\left.\int_{\mathbb R^3}(\rho u^{j})(x,\cdot)\varphi(x,\cdot)dx\right|_{t_1}^{t_2}=&\int_{t_1}^{t_2}\int_{\mathbb R^3}[\rho u^{j}\varphi_t + \rho u^{j}u\cdot\nabla\varphi + (P-\tilde{P})\varphi_{x_j}]dxd\tau\notag\\
& - \int_{t_1}^{t_2}\int_{\mathbb R^3}[\mu\nabla u^{j}\cdot\nabla\varphi + \lambda(\text{\rm div}(u))\varphi_{x_j}]dxd\tau\\
&+ \int_{t_1}^{t_2}\int_{\mathbb R^3}(\rho f - \nabla \tilde{P})\cdot\varphi dxd\tau.\notag
\end{align}
\end{itemize}
\end{defn}
\begin{defn}
We define $T^*\in(0, \infty)$ to be the {\it maximal time of existence} of a strong solution $(\rho,u)$ to \eqref{NS} if for any $0<T<T^*$, $(\rho,u)$ solves \eqref{NS} in $[0, T]\times\mathbb R^3$ and satisfies \eqref{def of strong sol}-\eqref{WF2}. Moreover, the conditions \eqref{def of strong sol}-\eqref{WF2} fail to hold when $T=T^*$.
\end{defn}
We are ready to state the following main results of this paper which are summarised in Theorem~\ref{Main thm for gamma>1}-\ref{Main thm for gamma=1}:
\begin{thm}\label{Main thm for gamma>1}
Given $\rho_\infty>0$, let $\tilde\rho$ be the steady state solution to \eqref{eqn for steady state}. Let $(\rho,u)$ be a strong solution to the Cauchy problem \eqref{NS} satisfying \eqref{condition on P}-\eqref{assumption on viscosity} with $\gamma>1$. Assume that the initial data $(\rho_0,u_0)$ satisfy
\begin{align}\label{condition on IC1}
\rho_0\ge0,\qquad\rho_0-\tilde\rho\in L^1\cap H^1\cap W^{1,\tilde q},\qquad u_0\in D^1\cap D^2,
\end{align}
for some $\tilde q>3$ and the compatibility condition
\begin{align}\label{compatibility condition}
-\mu\Delta u_0-\lambda\nabla\text{\rm div}(u_0)+\nabla P(\rho_0)-\rho_0 f=\rho_0^\frac{1}{2}g,
\end{align}
for some $g \in L^2$. If $T^*<\infty$ is the maximal time of existence, then we have
\begin{align}\label{blow-up 1}
\lim_{T\to T^*}(\sup_{\mathbb R^3\times[0,T]}|\rho|+\|\rho^\frac{1}{2}u\|_{L^s(0,T;L^r)})=\infty,
\end{align}
for some $r$, $s$ that satisfy
\begin{align}\label{conditions on r s}
\frac{2}{s}+\frac{3}{r}\le1,\qquad r\in(3,\infty],\qquad s>\frac{3}{2}.
\end{align}
\end{thm}
\begin{thm}\label{Main thm for gamma=1}
Let $(\rho,u)$ be a strong solution to the Cauchy problem \eqref{NS} satisfying \eqref{condition on P}-\eqref{assumption on viscosity} with $\gamma=1$. Assume that the initial data $(\rho_0,u_0)$ satisfy \eqref{condition on IC1}-\eqref{compatibility condition}. Suppose that the initial density $\rho_0$ further satisfies
\begin{equation}\label{further assumption on initial density}
\inf_{x\in\mathbb R^3}\rho_0(x)>0.
\end{equation}
If $T^*<\infty$ is the maximal time of existence, then we have
\begin{align}\label{blow-up 2}
\lim_{T\to T^*}\sup_{\mathbb R^3\times[0,T]}|\rho|=\infty.
\end{align}
\end{thm}
The rest of the paper is organised as follows. In section~\ref{prelim section}, we recall some known facts and useful inequalities which will be used in later analysis. In section~\ref{proof of main 1 section}, we give the proof of Theorem~\ref{Main thm for gamma>1} by obtaining some necessary bounds on the strong solutions. In section~\ref{proof of main 2 section}, we give the proof of Theorem~\ref{Main thm for gamma=1} by introducing a different approach for the isothermal case $\gamma=1$.
\section{Preliminaries}\label{prelim section}
In this section, we give some known facts and useful inequalities. We first state the following local-in-time existence and uniqueness of strong solutions to \eqref{NS} with non-negative initial density (references can be found in \cite{nash} and \cite{tani}):
\begin{prop}\label{Local-in-time existence prop}
Let $\tilde\rho$ be a steady state solution and $(\rho_0-\tilde\rho,u_0)$ be given initial data satisfying \eqref{condition on IC1}-\eqref{compatibility condition}, then there exists a positive time $T>0$ and a unique strong solution $(\rho,u)$ to \eqref{NS} defined on $\mathbb R^3\times(0,T]$.
\end{prop}
Next, we recall the following Gagliardo-Nirenberg inequalities:
\begin{prop}
For $p\in[2,6]$, $q\in(1,\infty)$ and $r\in(3,\infty)$, there exists some generic constant $C>0$ such that for any $h_1\in H^1$ and $h_2\in L^q\cap D^{1,r}$, we have
\begin{align}
\|h_1\|^p_{L^p}&\le C\|h_1\|^\frac{6-p}{2}_{L^2}\|\nabla h_1\|^\frac{3p-6}{2}_{L^2},\label{GN1}\\
\|h_2\|_{L^\infty}&\le C\|h_2\|^\frac{q(r-3)}{3r+q(r-3)}_{L^q}\|\nabla h_2\|^\frac{3r}{3r+q(r-3)}_{L^r}.\label{GN2}
\end{align}
\end{prop}
We also recall the following two canonical functions, namely the effective viscous flux $F$ and vorticity $\omega$, which are defined by
\begin{equation}\label{def of F and omega}
\tilde\rho F=(\mu+\lambda)\text{\rm div}\,u-(P(\rho)-P(\tilde\rho)),\qquad\omega=\omega^{j,k}=u^j_{x_k}-u^k_{x_j}.
\end{equation}
The following lemma gives some useful estimates on $u$ in terms of $F$ and $\omega$.
\begin{lem}
For $r_1,r_2\in(1,\infty)$ and $t>0$, there exists a universal constant $C$ which depends on $r_1$, $r_2$, $\mu$, $\lambda$, $a$, $\gamma$ and $\tilde\rho$ such that, the following estimates hold:
\begin{align}\label{bound on F and omega in terms of u}
\|\nabla F\|_{L^{r_1}}+\|\nabla\omega\|_{L^{r_1}}&\le C(\|\rho^\frac{1}{2} \dot{u}\|_{L^{r_1}}+\|(\rho-\tilde\rho)\|_{L^{r_1}})\notag\\
&\le C(\|\rho^\frac{1}{2} u_t\|_{L^{r_1}}+\|\rho^\frac{1}{2} u\cdot\nabla u\|_{L^{r_1}}+\|(\rho-\tilde\rho)\|_{L^{r_1}})
\end{align}
\begin{align}\label{bound on u in terms of F and omega}
\|\nabla u(\cdot,t)\|_{L^{r_2}}\le C(\|F(\cdot,t)\|_{L^{r_2}}+\|\omega(\cdot,t)\|_{L^{r_2}}+\|(\rho-\tilde\rho)(\cdot,t)\|_{L^{r_2}}).
\end{align}
\end{lem}
\begin{proof}
By the definitions of $F$ and $\omega$, and together with \eqref{NS}$_2$, $F$ and $\omega$ satisfy the elliptic equations
\begin{align*}
\Delta (\tilde\rho F)=\text{\rm div}(\rho \dot{u}-\rho f+\nabla P(\tilde\rho))=\text{\rm div}(\rho u_t+\rho u\cdot\nabla u-\rho f+\nabla P(\tilde\rho)),
\end{align*}
\begin{align*}
\mu\Delta\omega=\nabla\times(\rho \dot{u}-\rho f+\nabla P(\tilde\rho))=\nabla\times(\rho u_t+\rho u\cdot\nabla u-\rho f+\nabla P(\tilde\rho)),
\end{align*}
where $\dot{h}:=\partial_t h+u\cdot \nabla h$ is the material derivative on $h$. Hence by applying standard $L^p$-estimate, the estimates \eqref{bound on F and omega in terms of u}-\eqref{bound on u in terms of F and omega} follow.
\end{proof}
Finally, we recall the following inequality which was first proved in \cite{BKM84} for the case $\text{\rm div}(u)\equiv0$ and was proved in \cite{HLX11} for compressible flows.
\begin{prop}
For $q\in(3,\infty)$, there is a positive constant $C$ which depends on $q$ such that the following estimate holds for all $\nabla u\in L^2\cap D^{1,q}$,
\begin{align}\label{log estimate on nabla u}
\|\nabla u\|_{L^\infty}\le C(\|\text{\rm div}(u)\|_{L^\infty}+\|\nabla\times u\|_{L^\infty})\ln(e+\|\nabla^2 u\|_{L^q})+C\|\nabla u\|_{L^2}+C,
\end{align}
where $e$ is the base of the natural logarithm.
\end{prop}
\section{Proof of Theorem~\ref{Main thm for gamma>1}}\label{proof of main 1 section}
In this section, we give the proof of Theorem~\ref{Main thm for gamma>1}. Let $(\rho,u)$ be a strong solution to the system \eqref{NS} as described in Theorem~\ref{Main thm for gamma>1}. By performing standard $L^2$-energy estimate (see \cite{suen2021existence} for example), we readily have
\begin{align}\label{L^2 estimate blow up}
\sup_{0\le \tau\le t}\left(\|\rho^\frac{1}{2} u(\cdot,\tau)\|_{L^2}^2+\int_{\R^3} G(\rho(x,\tau))dx\right) + \int_{0}^{t}\|\nabla u(\cdot,\tau)\|_{L^2}^2d\tau\le C_0,
\end{align}
for all $t\in[0,T^*)$, where $C_0$ depends on the initial data but is independent of both $t$ and $T^*$. Here $G$ is a functional given by
\begin{align*}
G(\rho):=\rho\int_{\tilde\rho}^\rho\frac{P(s)-P(\tilde\rho)}{s^2}ds=\rho\int_{\tilde\rho}^\rho\frac{as^\gamma-a\tilde\rho^\gamma}{s^2}ds.
\end{align*}
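A direct computation shows that $G(\tilde\rho)=0$ and
\begin{align*}
G'(\rho)=\int_{\tilde\rho}^\rho\frac{P(s)-P(\tilde\rho)}{s^2}ds+\frac{P(\rho)-P(\tilde\rho)}{\rho},\qquad G''(\rho)=\frac{P'(\rho)}{\rho}>0,
\end{align*}
so that $G'(\tilde\rho)=0$ and $G\ge0$; moreover, on sets where $\rho$ remains bounded, $G(\rho)\ge c|\rho-\tilde\rho|^2$ for some $c>0$. In particular, under \eqref{blow-up 1 not}, the bound \eqref{L^2 estimate blow up} controls $\sup_{0\le\tau\le t}\|(\rho-\tilde\rho)(\cdot,\tau)\|_{L^2}$.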
In order to prove Theorem~\ref{Main thm for gamma>1}, for the sake of contradiction, suppose that \eqref{blow-up 1} does not hold. Then there exists some constant $M_0>0$ such that
\begin{align}\label{blow-up 1 not}
\lim_{T\to T^*}(\sup_{\mathbb R^3\times[0,T]}|\rho|+\|\rho^\frac{1}{2}u\|_{L^s(0,T;L^r)})\le M_0.
\end{align}
We first obtain the estimates on $\nabla u$ and $u_t$ under \eqref{blow-up 1 not}:
\begin{lem}\label{estimate on nabla u lem}
Assume that \eqref{blow-up 1 not} holds, then for $t\in[0,T^*)$, we have
\begin{align}\label{estimate on nabla u}
\sup_{0\le \tau\le t}\|\nabla u(\cdot,\tau)\|^2_{L^2}+\int_0^t\int_{\R^3}\rho|u_t|^2dxd\tau\le C,
\end{align}
where and in what follows, $C$ denotes a generic constant which depends on $\mu$, $\lambda$, $a$, $f$, $\gamma$, $\tilde\rho$, $M_0$, $T^*$ and the initial data.
\end{lem}
\begin{proof}
We multiply the momentum equation \eqref{NS}$_2$ by $u_t$ and integrate to obtain
\begin{align*}
&\frac{1}{2}\frac{d}{dt}\int_{\R^3}(\mu|\nabla u|^2+\lambda(\text{\rm div}(u))^2)dx+\int_{\R^3}\rho|u_t|^2dx\notag\\
=\int_{\R^3} P\,\text{\rm div}(u_t)dx-\int_{\R^3}\rho u\cdot\nabla u\cdot u_tdx+\int_{\R^3}\rho f\cdot u_tdx.
\end{align*}
Using H\"{o}lder's inequality and Young's inequality, the term involving $f$ can be bounded by
\begin{align*}
\Big|\int_{\R^3}\rho f\cdot u_tdx\Big|&\le\Big(\int_{\R^3}\rho|u_t|^2dx\Big)^\frac{1}{2}\Big(\int_{\R^3}\rho|f|^2dx\Big)^\frac{1}{2}\\
&\le\frac{1}{2}\Big(\int_{\R^3}\rho|u_t|^2dx\Big)+C\Big(\int_{\R^3}\rho|f|^2dx\Big).
\end{align*}
Hence by following the steps given in \cite{HLX11}, we arrive at
\begin{align}\label{bound on nabla u 1}
&\frac{1}{2}\frac{d}{dt}\int_{\R^3}(\mu|\nabla u|^2+\lambda(\text{\rm div}(u))^2 - 2P\text{\rm div}(u))dx+\frac{1}{2}\int_{\R^3}\rho|u_t|^2dx\notag\\
&\le C\|\nabla u\|^2_{L^2}+C\int_{\R^3}\rho|u\cdot\nabla u|^2dx+C.
\end{align}
To estimate the advection term on the right side of \eqref{bound on nabla u 1}, for $r$, $s$ satisfying \eqref{conditions on r s}, we use \eqref{GN1} and \eqref{bound on u in terms of F and omega} to obtain
\begin{align*}
\|\rho^\frac{1}{2} u\cdot\nabla u\|_{L^2}&\le C\|\rho^\frac{1}{2} u\|_{L^r}\|\nabla u\|_{L^\frac{2r}{r-2}}\\
&\le C\|\rho^\frac{1}{2}u\|_{L^r}(\|F\|^{1-\frac{3}{r}}_{L^2}\|\nabla F\|^\frac{3}{r}_{L^2}+\|\omega\|^{1-\frac{3}{r}}_{L^2}\|\nabla \omega\|^\frac{3}{r}_{L^2}+1).
\end{align*}
Using Young's inequality, for any $\varepsilon>0$ being small, there exists $C_{\varepsilon}>0$ such that
\begin{align*}
&\|\rho^\frac{1}{2}u\|_{L^r}(\|F\|^{1-\frac{3}{r}}_{L^2}\|\nabla F\|^\frac{3}{r}_{L^2}+\|\omega\|^{1-\frac{3}{r}}_{L^2}\|\nabla \omega\|^\frac{3}{r}_{L^2}+1)\\
&\le\varepsilon(\|\nabla F\|_{L^2}+\|\nabla\omega\|_{L^2})+C_{\varepsilon}\|\rho^\frac{1}{2}u\|^\frac{s}{2}_{L^r}(\|F\|_{L^2}+\|\omega\|_{L^2}+1)+C_{\varepsilon},
\end{align*}
and hence by applying \eqref{bound on F and omega in terms of u}, we obtain
\begin{align}\label{bound on nabla u 2}
\|\rho^\frac{1}{2} u\cdot\nabla u\|_{L^2}\le C\varepsilon\|\rho^\frac{1}{2} u_t\|_{L^2}+C_{\varepsilon}\|\rho^\frac{1}{2}u\|^\frac{s}{2}_{L^r}(\|\nabla u\|_{L^2}+1)+C_{\varepsilon}.
\end{align}
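Here and above, the power $\frac{s}{2}$ arises from Young's inequality in the form $ab^{\frac{3}{r}}\le\varepsilon b+C_\varepsilon a^{\frac{r}{r-3}}$, together with the observation that \eqref{conditions on r s} gives
\begin{align*}
\frac{r}{r-3}\le\frac{s}{2}\quad\Longleftrightarrow\quad\frac{2}{s}\le1-\frac{3}{r},
\end{align*}
so that $\|\rho^\frac{1}{2}u\|_{L^r}^{\frac{r}{r-3}}\le C(\|\rho^\frac{1}{2}u\|_{L^r}^{\frac{s}{2}}+1)$.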
Applying \eqref{bound on nabla u 2} to \eqref{bound on nabla u 1} and choosing $\varepsilon>0$ small enough, we obtain
\begin{align}\label{bound on nabla u 3}
&\frac{d}{dt}\int_{\R^3}(\mu|\nabla u|^2+\lambda(\text{\rm div}(u))^2 - 2P\text{\rm div}(u))dx+\frac{1}{2}\int_{\R^3}\rho|u_t|^2dx\notag\\
&\le C(\|\rho^\frac{1}{2}u\|^s_{L^r}+1)(\|\nabla u\|^2_{L^2}+1),
\end{align}
where we used Young's inequality in the last step. Since \eqref{blow-up 1 not} implies that $\|\rho^\frac{1}{2}u\|^s_{L^r}$ is integrable in time on $[0,t]$, the estimate \eqref{estimate on nabla u} follows by applying Gr\"{o}nwall's inequality to \eqref{bound on nabla u 3}.
\end{proof}
Next, we make use of Lemma~\ref{estimate on nabla u lem} to obtain some higher order estimates on $u$ which can be stated in the following lemma:
\begin{lem}\label{higher estimate on nabla u lem}
Assume that \eqref{blow-up 1 not} holds, then for all $t\in[0,T^*)$, we have
\begin{align}\label{higher estimate on nabla u}
\sup_{0\le \tau\le t}\|\rho^\frac{1}{2}\dot u(\cdot,\tau)\|^2_{L^2}+\int_0^t\int_{\R^3}|\nabla\dot{u}|^2dxd\tau\le C.
\end{align}
\end{lem}
\begin{proof}
Following the steps given in \cite{HLX11}, we readily have
\begin{align*}
\sup_{0\le \tau\le t}\|\rho^\frac{1}{2}\dot u(\cdot,\tau)\|^2_{L^2}+\int_0^t\int_{\R^3}|\nabla\dot{u}|^2dxd\tau\le C\int_0^t\|\nabla u(\cdot,\tau)\|^4_{L^4}d\tau+C\int_{\R^3}\rho|f|^2dx+C.
\end{align*}
To estimate the term $\displaystyle\int_0^t\|\nabla u\|^4_{L^4}d\tau$, we apply \eqref{GN1} with $p=4$, together with \eqref{bound on F and omega in terms of u} and \eqref{bound on u in terms of F and omega}, to get
\begin{align*}
\int_0^t\|\nabla u\|^4_{L^4}d\tau&\le C\int_0^t(\|F\|^4_{L^4}+\|\omega\|^4_{L^4})d\tau+C\\
&\le C\int_0^t(\|F\|_{L^2}\|\nabla F\|^3_{L^2}+\|\omega\|_{L^2}\|\nabla \omega\|^3_{L^2})d\tau+C\\
&\le C\int_0^t(\|\rho^\frac{1}{2}\dot{u}\|^3_{L^2}+1)d\tau+C,
\end{align*}
where in the last step we also used \eqref{L^2 estimate blow up}, \eqref{estimate on nabla u} and \eqref{blow-up 1 not} to bound $\|F\|_{L^2}$ and $\|\omega\|_{L^2}$; hence by using Young's inequality and Gr\"{o}nwall's inequality, the estimate \eqref{higher estimate on nabla u} holds for all $t\in[0,T^*)$.
\end{proof}
\begin{rem}\label{rem:L4 bound on u by F}
As pointed out in section~\ref{intro section}, it is important to use the effective viscous flux $F$ and the vorticity $\omega$ for estimating the term $\displaystyle\int_0^t\|\nabla u\|^4_{L^4}d\tau$, since there is no available {\it a priori} bound on $\|\nabla u\|_{H^1}$ and hence, we cannot merely apply the Sobolev embedding $H^1\hookrightarrow L^4$ in Lemma~\ref{higher estimate on nabla u lem}.
\end{rem}
We give the following estimate on the density gradient $\nabla \rho$ and the $H^1$ norm of $\nabla u$:
\begin{lem}\label{higher estimate on rho and u lem}
Assume that \eqref{blow-up 1 not} holds, then for all $t\in[0,T^*)$, we have
\begin{align}\label{higher estimate on rho and u}
\sup_{0\le \tau\le t}(\|\rho-\tilde\rho\|_{H^1\cap W^{1,q'}}+\|\nabla u\|_{H^1})(\cdot,\tau)\le C,
\end{align}
for all $q'\in(3,6]$.
\end{lem}
\begin{proof}
For any $p\in[2,6]$, we have
\begin{align*}
&\partial_t(|\nabla(\rho-\tilde\rho)|^p)+\text{\rm div}(|\nabla(\rho-\tilde\rho)|^pu)+(p-1)|\nabla(\rho-\tilde\rho)|^p\text{\rm div}(u)\\
&+p|\nabla(\rho-\tilde\rho)|^{p-2}\nabla(\rho-\tilde\rho)\nabla u\nabla(\rho-\tilde\rho)+p(\rho-\tilde\rho)|\nabla(\rho-\tilde\rho)|^{p-2}\nabla(\rho-\tilde\rho)\nabla\text{\rm div}(u)\\
&=-p\nabla\text{\rm div}(\tilde\rho u)\cdot\nabla(\rho-\tilde\rho)|\nabla(\rho-\tilde\rho)|^{p-2}.
\end{align*}
We integrate the above equation over $\mathbb R^3$ and use \eqref{GN1}, \eqref{bound on F and omega in terms of u} and \eqref{higher estimate on nabla u} to obtain
\begin{align}\label{estimate on dt nabla rho}
&\frac{d}{dt}\|\nabla(\rho-\tilde\rho)\|_{L^p}\notag\\
&\le C(1+\|\nabla u\|_{L^\infty}+\|\nabla(\tilde\rho F)\|_{L^p}+\|u\|_{L^p}+\|\nabla u\|_{L^p})\|\nabla(\rho-\tilde\rho)\|_{L^p}\notag\\
&\le C(1+\|\nabla u\|_{L^\infty}+\|\nabla\dot{u}\|_{L^2})\|\nabla(\rho-\tilde\rho)\|_{L^p}.
\end{align}
On the other hand, upon rearranging terms from the momentum equation \eqref{NS}$_2$, we have
\begin{align}\label{elliptic system for u}
\mu\Delta u+\lambda\nabla\text{\rm div}(u)=\rho\dot{u}+\nabla(P(\rho)-P(\tilde\rho))+\nabla P(\tilde\rho)(\tilde\rho-\rho)\tilde\rho^{-1}.
\end{align}
Hence for each $q'\in(3,6]$, by applying $L^{q'}$-estimate on $u$ in \eqref{elliptic system for u}, we have
\begin{align}\label{q' estimate on nabla u}
\|\nabla u\|_{W^{1,q'}}\le C(1+\|\nabla\dot{u}\|_{L^2}+\|\nabla(\rho-\tilde\rho)\|_{L^{q'}}).
\end{align}
Using the Sobolev inequality \eqref{GN2}, together with the estimates \eqref{log estimate on nabla u} and \eqref{q' estimate on nabla u}, we have
\begin{align}\label{log estimate on nabla u with rho}
\|\nabla u\|_{L^\infty}\le C+&C(\|\text{\rm div}(u)\|_{L^\infty}+\|\omega\|_{L^\infty})\ln(e+\|\nabla\dot{u}\|_{L^2})\notag\\
&+C(\|\text{\rm div}(u)\|_{L^\infty}+\|\omega\|_{L^\infty})\ln(e+\|\nabla(\rho-\tilde\rho)\|_{L^{q'}}).
\end{align}
To estimate the time integral of $(\|\text{\rm div}(u)\|_{L^\infty}^2+\|\omega\|_{L^\infty}^2)$, using \eqref{GN2}, \eqref{bound on F and omega in terms of u} and \eqref{higher estimate on nabla u}, we readily have
\begin{align}\label{bound on time integral on div(u) and omega}
\int_0^t(\|\text{\rm div}(u)\|_{L^\infty}^2+\|\omega\|_{L^\infty}^2)(\cdot,\tau)d\tau&\le C\int_0^t(\|F\|^2_{L^\infty}+\|\omega\|^2_{L^\infty})(\cdot,\tau)d\tau+C\notag\\
&\le C\int_0^t\|\nabla\dot{u}(\cdot,\tau)\|^2_{L^2}d\tau+C\le C.
\end{align}
Hence, applying \eqref{log estimate on nabla u with rho} to \eqref{estimate on dt nabla rho} with $p=q'$ and using Gr\"{o}nwall's inequality together with the bounds \eqref{higher estimate on nabla u} and \eqref{bound on time integral on div(u) and omega}, we obtain
\begin{align}\label{sup bound on Lq rho}
\sup_{0\le \tau\le t}\|\nabla(\rho-\tilde\rho)(\cdot,\tau)\|_{L^{q'}}\le C.
\end{align}
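For clarity, the Gr\"{o}nwall step behind \eqref{sup bound on Lq rho} can be spelled out schematically: with $y(t)=e+\|\nabla(\rho-\tilde\rho)(\cdot,t)\|_{L^{q'}}$ and $a(t)$ collecting the coefficients that are integrable in time by \eqref{higher estimate on nabla u} and \eqref{bound on time integral on div(u) and omega}, the inequalities \eqref{estimate on dt nabla rho} and \eqref{log estimate on nabla u with rho} combine into a logarithmic Gr\"{o}nwall inequality of the form

```latex
\begin{align*}
\frac{d}{dt}\,y(t)\le C\,a(t)\,y(t)\ln y(t)
\quad\Longrightarrow\quad
\ln y(t)\le \ln y(0)\exp\Big(C\int_0^t a(\tau)\,d\tau\Big)\le C,
\end{align*}
```

so that $y(t)$ stays bounded on $[0,T^*)$.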
By combining \eqref{log estimate on nabla u with rho} with \eqref{sup bound on Lq rho} and \eqref{bound on time integral on div(u) and omega}, it further gives
\begin{align}\label{bound on time integral of nabla u infty}
\int_0^t\|\nabla u(\cdot,\tau)\|_{L^\infty}d\tau\le C.
\end{align}
Integrating \eqref{estimate on dt nabla rho} with $p=2$ over $[0,t]$ and using \eqref{higher estimate on nabla u} and \eqref{bound on time integral of nabla u infty}, it follows that
\begin{align}\label{sup bound on L2 rho}
\sup_{0\le \tau\le t}\|\nabla(\rho-\tilde\rho)(\cdot,\tau)\|_{L^2}\le C.
\end{align}
This gives the bound on $\rho-\tilde\rho$ claimed in \eqref{higher estimate on rho and u}. The bound on $u$ appearing in \eqref{higher estimate on rho and u} then follows from the $L^2$-estimate on \eqref{elliptic system for u} together with the bounds \eqref{estimate on nabla u} and \eqref{sup bound on L2 rho}, and this finishes the proof of \eqref{higher estimate on rho and u}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Main thm for gamma>1}]
The proof follows the argument given in \cite{HLX11}: defining $(\rho,u)(x,T^*)$ to be the limit of $(\rho,u)(x,t)$ as $t\to T^*$, one can show that $(\rho,u)(x,T^*)$ satisfies the compatibility condition \eqref{compatibility condition} as well. Therefore, if we take $(\rho,u)(x,T^*)$ as new initial data for the system \eqref{NS}, then Proposition~\ref{Local-in-time existence prop} applies and shows that the local strong solution can be extended beyond the maximal time $T^*$.
\end{proof}
\section{Proof of Theorem~\ref{Main thm for gamma=1}}\label{proof of main 2 section}
In this section, we prove Theorem~\ref{Main thm for gamma=1} using a different approach compared with the proof of Theorem~\ref{Main thm for gamma>1}. We let $(\rho,u)$ be a strong solution to the system \eqref{NS} for the isothermal case as described in Theorem~\ref{Main thm for gamma=1}, and for the sake of contradiction, suppose that \eqref{blow-up 2} does not hold. Then there exists a constant $\tilde{M}_0>0$ such that
\begin{align}\label{blow-up 2 not}
\lim_{T\to T^*}\sup_{\mathbb R^3\times[0,T]}|\rho|\le \tilde{M}_0.
\end{align}
Furthermore, together with the bound \eqref{bound on time integral of nabla u infty} on $\|\nabla u\|_{L^\infty}$ and the assumption \eqref{further assumption on initial density} on $\rho_0$, we have
\begin{align}\label{lower bound on rho}
\inf_{\mathbb R^3\times[0,T^*)}\rho\ge \tilde{M}_1,
\end{align}
where $\tilde{M}_1>0$ is a constant which depends on $\mu$, $\lambda$, $a$, $f$, $\tilde\rho$, $\tilde{M}_0$, $T^*$ and the initial data.
To facilitate our discussion, we introduce the following auxiliary functionals:
\begin{align}
\Phi_1(t)&=\sup_{0\le \tau\le t}\|\nabla u(\cdot,\tau)\|^2_{L^2}+\int_{0}^{t}\|\rho^\frac{1}{2}\dot{u}(\cdot,\tau)\|_{L^2}^2d\tau,\label{def of Phi 1}\\
\Phi_2(t)&=\sup_{0\le \tau\le t}\|\rho^\frac{1}{2}\dot{u}(\cdot,\tau)\|^2_{L^2}+\int_{0}^{t}\|\nabla\dot{u}(\cdot,\tau)\|^2_{L^2}d\tau,\label{def of Phi 2}\\
\Phi_3(t)&=\int_0^t\int_{\mathbb R^3}|\nabla u|^4dxd\tau.\label{def of Phi 3}
\end{align}
We recall the following lemma which gives estimates on the solutions of the Lam\'{e} operator $\mu\Delta+\lambda\nabla\text{\rm div}$. Details can be found in \cite[pp. 39]{swz11}.
\begin{lem}\label{estimates on Lame operator}
Consider the following equation:
\begin{equation}\label{eqn for Lame operator}
\mu\Delta v+\lambda\nabla\text{\rm div}(v)=J,
\end{equation}
where $v=(v^1,v^2,v^3)(x)$, $J=(J^1,J^2,J^3)(x)$ with $x\in\mathbb R^3$ and $\mu$, $\lambda>0$. Then for $r\in(1,\infty)$, we have:
\begin{itemize}
\item if $J\in W^{2,r}(\mathbb R^3)$, then $\|\Delta v\|_{L^r}\le C\|J\|_{L^r}$;
\item if $J=\nabla\varphi$ with $\varphi\in W^{2,r}(\mathbb R^3)$, then $\|\nabla v\|_{L^r}\le C\|\varphi\|_{L^r}$;
\item if $J=\nabla\text{\rm div}(\varphi)$ with $\varphi\in W^{2,r}(\mathbb R^3)$, then $\|v\|_{L^r}\le C\|\varphi\|_{L^r}$.
\end{itemize}
Here $C$ is a positive constant which depends on $\mu$, $\lambda$ and $r$.
\end{lem}
One of the key steps in the proof of Theorem~\ref{Main thm for gamma=1} is to estimate the $L^4$-norm of $\rho^\frac{1}{4}u$; the result is summarised in the following lemma:
\begin{lem}
Assume that \eqref{blow-up 2 not} holds, then for $t\in[0,T^*)$, we have
\begin{align}\label{L4 estimate on u}
\sup_{0\le \tau\le t}\int_{\R^3}\rho|u|^4dx\le \tilde C,
\end{align}
where and in what follows, $\tilde C$ denotes a generic constant which depends on $\mu$, $\lambda$, $a$, $f$, $\tilde\rho$, $\tilde{M}_0$, $T^*$, $\tilde{M}_1$ and the initial data.
\end{lem}
\begin{proof}
It can be proved by the method given in \cite{HLX11} (see also \cite{hoff95} and \cite{HL09} for more details), and we omit it here for the sake of brevity. We point out that the condition \eqref{assumption on viscosity} is required for obtaining \eqref{L4 estimate on u}.
\end{proof}
We begin to estimate the functionals $\Phi_1$, $\Phi_2$ and $\Phi_3$. The following lemma gives an estimate on $\Phi_1$ in terms of $\Phi_3$:
\begin{lem}\label{bound on Phi 1 lemma}
Assume that \eqref{blow-up 2 not} holds. For any $0\le t< T^*$,
\begin{equation}\label{bound on Phi 1}
\Phi_1(t)\le \tilde C[1+\Phi_3(t)].
\end{equation}
\end{lem}
\begin{proof}
Following the proof of Lemma~\ref{estimate on nabla u lem}, we have
\begin{align}\label{H1 estimate blow up}
\sup_{0\le \tau\le t}\int_{\R^3}|\nabla u|^2dx+\int_0^T\int_{\R^3}\rho|\dot{u}|^2dxd\tau\le \tilde C+\tilde C\int_0^T\int_{\R^3}|\nabla u|^3dxd\tau.
\end{align}
The second term on the right side of \eqref{H1 estimate blow up} can be bounded by
\[\begin{aligned}
\int_0^T\int_{\R^3}|\nabla u|^3dxd\tau&\le \Big(\int_0^T\int_{\R^3}|\nabla u|^2dxd\tau\Big)^\frac{1}{2}\Big(\int_0^T\int_{\R^3}|\nabla u|^4dxd\tau\Big)^\frac{1}{2}\\
&\le \tilde C\Phi_3^\frac{1}{2}.
\end{aligned}\]
Applying the above bounds on \eqref{H1 estimate blow up}, the result follows.
\end{proof}
Before we estimate $\Phi_2$, we introduce the decomposition of $u$ stated in section~\ref{intro section}. We write
\begin{equation}\label{decomposition on u}
u=u_p+u_s,
\end{equation}
where $u_p$ and $u_s$ satisfy \eqref{def of u_p and u_s} and we recall that $\tilde P:=P(\tilde\rho)$. Then by Lemma~\ref{estimates on Lame operator}, for all $r>1$, the term $u_p$ can be bounded by
\begin{equation}\label{estimate on u_p}
\int_{\R^3}|\nabla u_p|^rdx\le \tilde C\int_{\R^3}|P-\tilde P|^rdx\le \tilde C\int_{\R^3}|\rho-\tilde\rho|^rdx.
\end{equation}
On the other hand, the term $u_s$ can be estimated as follows.
\begin{lem}\label{estimate on u_s lemma}
For any $0\le t<T^*$, we have
\begin{equation}\label{estimate on u_s}
\sup_{0\le \tau\le t}\int_{\R^3}|\nabla u_s|^2dx+\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau+\int_0^T\int_{\R^3}|\Delta u_s|^2dxd\tau\le \tilde C.
\end{equation}
\end{lem}
\begin{proof}
We multiply \eqref{def of u_p and u_s}$_2$ by $\partial_t(u_s)$ and integrate to obtain
\begin{align}\label{estimate on u_s step 1}
&\int_{\R^3}\mu|\nabla u_s|^2dx\Big|_0^t+\int_0^T\int_{\R^3}(\mu+\lambda)|\text{\rm div} u_s|^2dxd\tau+\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau\\
&=-\int_0^T\int_{\R^3}\Big(\rho u\cdot\nabla u\Big)\cdot\partial_t(u_s)dxd\tau-\int_0^T\int_{\R^3}\Big(\rho\partial_t(u_p)\Big)\cdot\partial_t(u_s)dxd\tau\notag\\
&\qquad+\int_0^T\int_{\R^3}\tilde\rho^{-1}(\rho-\tilde\rho)\nabla\tilde P\cdot\partial_t(u_s)dxd\tau.\notag
\end{align}
We estimate the right side of \eqref{estimate on u_s step 1} term by term. Using \eqref{L4 estimate on u} and \eqref{estimate on u_p}, the first integral can be bounded by
\begin{align*}
&\Big(\int_0^T\int_{\R^3}\rho|u|^2|\nabla u|^2dxd\tau\Big)^\frac{1}{2}\Big(\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau\Big)^\frac{1}{2}\\
&\le \tilde C\Big[\int_0^t\Big(\int_{\R^3}\rho|u|^4dx\Big)^\frac{1}{2}\Big(\int_{\R^3}|\nabla u_s|^4dx+\int_{\R^3}|\nabla u_p|^4dx\Big)^\frac{1}{2}d\tau\Big]^\frac{1}{2}\\
&\qquad\qquad\times\Big(\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau\Big)^\frac{1}{2}\\
&\le \tilde C\Big[\int_0^t\Big(\int_{\R^3}|\nabla u_s|^2dx\Big)^\frac{1}{4}\Big(\int_{\R^3}|\Delta u_s|^2dx\Big)^\frac{3}{4}d\tau+\int_0^t\Big(\int_{\R^3}|\rho-\tilde\rho|^4dx\Big)^\frac{1}{2}d\tau\Big]^\frac{1}{2}\\
&\qquad\qquad\times\Big(\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau\Big)^\frac{1}{2}\\
&\le \tilde C\Big(\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau\Big)^\frac{1}{2}\\
&\qquad\qquad\times\Big[\Big(\int_0^T\int_{\R^3}|\nabla u_s|^2dxd\tau\Big)^\frac{1}{8}\Big(\int_0^T\int_{\R^3}|\Delta u_s|^2dxd\tau\Big)^\frac{3}{8}+1\Big].
\end{align*}
Next, to estimate $\displaystyle-\int_0^T\int_{\R^3}\Big(\rho\partial_t(u_p)\Big)\cdot\partial_t(u_s)dxd\tau$, we differentiate \eqref{def of u_p and u_s}$_1$ with respect to $t$ and use the assumption $P(\rho)=a\rho$ to obtain
\begin{equation*}
\mu\Delta\partial_t(u_p)+(\mu+\lambda)\nabla\text{\rm div}\partial_t(u_p)=\nabla\text{\rm div}(-P\cdot u).
\end{equation*}
Using Lemma~\ref{estimates on Lame operator} and the $L^2$-estimate \eqref{L^2 estimate blow up} on $u$, we have
\begin{align}\label{estimate on dt u_p}
\int_0^T\int_{\R^3}|\partial_t(u_p)|^2dxd\tau\le \tilde C\int_0^T\int_{\R^3}|P\cdot u|^2dxd\tau\le \tilde C.
\end{align}
Therefore
\begin{align*}
&-\int_0^T\int_{\R^3}\Big(\rho\partial_t(u_p)\Big)\cdot\partial_t(u_s)dxd\tau\\
&\le\Big(\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau\Big)^\frac{1}{2}\Big(\int_0^T\int_{\R^3}|\partial_t(u_p)|^2dxd\tau\Big)^\frac{1}{2}\\
&\le \tilde C\Big(\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau\Big)^\frac{1}{2}.
\end{align*}
To estimate $\displaystyle\int_0^T\int_{\R^3}\tilde\rho^{-1}(\rho-\tilde\rho)\nabla\tilde P\cdot\partial_t(u_s)dxd\tau$, using \eqref{L^2 estimate blow up} and \eqref{lower bound on rho}, we readily have
\begin{align*}
&\int_0^T\int_{\R^3}\tilde\rho^{-1}(\rho-\tilde\rho)\nabla\tilde P\cdot\partial_t(u_s)dxd\tau\\
&\le \tilde C\Big(\int_0^T\int_{\R^3}|\rho-\tilde\rho|^2dxd\tau\Big)^\frac{1}{2}\Big(\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau\Big)^\frac{1}{2}\\
&\le \tilde C\Big(\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau\Big)^\frac{1}{2}.
\end{align*}
Combining the above, we have from \eqref{estimate on u_s step 1} that
\begin{align}\label{estimate on u_s step 2}
&\int_{\R^3}|\nabla u_s|^2(x,t)dx+\int_0^T\int_{\R^3}|\text{\rm div}(u_s)|^2dxd\tau+\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau\notag\\
&\le \tilde C\Big(\int_0^T\int_{\R^3}|\nabla u_s|^2dxd\tau\Big)^\frac{1}{4}\Big(\int_0^T\int_{\R^3}|\Delta u_s|^2dxd\tau\Big)^\frac{3}{4}+\tilde C.
\end{align}
It remains to estimate the term $\displaystyle\int_0^T\int_{\R^3}|\Delta u_s|^2$. Rearranging the terms in \eqref{def of u_p and u_s}$_2$, we have that
\begin{equation*}
\mu\Delta u_s+(\mu+\lambda)\nabla\text{\rm div}(u_s)=\rho\partial_t(u_s)+\rho u\cdot\nabla u+\rho\partial_t(u_p)-\tilde\rho^{-1}(\rho-\tilde\rho)\nabla\tilde P.
\end{equation*}
Therefore, we can apply Lemma~\ref{estimates on Lame operator} and the bound \eqref{L^2 estimate blow up} to get
\begin{align}\label{estimate on Delta u_s}
&\int_0^T\int_{\R^3}|\Delta u_s|^2dxd\tau\\
&\le \tilde C\Big[\int_0^T\int_{\R^3}(|\rho\partial_t(u_s)|^2+|\rho u\cdot\nabla u|^2+|\rho\partial_t(u_p)|^2+|\rho-\tilde\rho|^2)dxd\tau\Big]\notag\\
&\le \tilde C\Big(\int_0^T\int_{\R^3}\rho|\partial_t(u_s)|^2dxd\tau+1\Big).\notag
\end{align}
Applying the estimate \eqref{estimate on Delta u_s} to \eqref{estimate on u_s step 2} and using Gr\"{o}nwall's inequality, we conclude that for $0\le t< T^*$,
\begin{equation*}
\int_{\R^3}|\nabla u_s|^2(x,t)dx\le \tilde C,
\end{equation*}
and the result \eqref{estimate on u_s} follows.
\end{proof}
We now estimate the functional $\Phi_2$ defined in \eqref{def of Phi 2}:
\begin{lem}\label{bound on Phi 2 lemma}
Assume that \eqref{blow-up 2 not} holds. For any $0\le t< T^*$,
\begin{equation}\label{bound on Phi 2}
\Phi_2(t)\le \tilde C[\Phi_1(t)+\Phi_3(t)+1].
\end{equation}
\end{lem}
\begin{proof}
Following the steps given in \cite[pp.228-230]{hoff95} (by taking $\sigma\equiv1$), we have
\begin{align}\label{bound on Phi 2 step 1}
&\int_{\R^3}|\dot{u}(x,t)|^2dx+\int_0^T\int_{\R^3}|\nabla\dot{u}|^2dxd\tau\notag\\
&\le \tilde C\Big |\sum_{1\le k_i,j_m\le 3}\int_{0}^{t}\int_{\mathbb R^3}u^{j_1}_{x_{k_1}}u^{ j_2}_{x_{k_2}}u^{j_3}_{x_{k_3}}dxd\tau\Big|\\
&\qquad\qquad+\tilde C\Big(\int_0^T\int_{\R^3}|\nabla u|^4dxd\tau+\Phi_1(t)+1\Big).\notag
\end{align}
The summation term in \eqref{bound on Phi 2 step 1} can be bounded by $\displaystyle\tilde C\int_0^T\int_{\R^3}|\nabla u|^3dxd\tau$, and hence, as in the proof of Lemma~\ref{bound on Phi 1 lemma}, by $\tilde C\Phi_3^\frac{1}{2}$. The estimate \eqref{bound on Phi 2} then follows by Cauchy's inequality.
\end{proof}
Finally, we make use of $u_s$ and $u_p$ in \eqref{def of u_p and u_s} to estimate $\Phi_3$:
\begin{lem}\label{bound on Phi 3 lemma}
For any $0\le t< T^*$,
\begin{equation}\label{bound on Phi 3}
\Phi_3(t)\le \tilde C[\Phi_1(t)^\frac{1}{2}+1].
\end{equation}
\end{lem}
\begin{proof}
Using the decomposition \eqref{decomposition on u} on $u$ and the estimates \eqref{estimate on u_p} and \eqref{estimate on u_s}, we have
\begin{align*}
\Phi_3&\le \int_0^T\int_{\R^3}|\nabla u_s|^4dxd\tau+\int_0^T\int_{\R^3}|\nabla u_p|^4dxd\tau\\
&\le \tilde C\int_0^t\Big(\int_{\R^3}|\nabla u_s|^2dx\Big)^\frac{1}{2}\Big(\int_{\R^3}|\Delta u_s|^2dx\Big)^\frac{3}{2}d\tau+\int_0^T\int_{\R^3}|\rho-\tilde\rho|^4dxd\tau\\
&\le \tilde C\Big[\Big(\sup_{0\le \tau\le t}\int_{\R^3}|\Delta u_s(x,\tau)|^2dx\Big)^\frac{1}{2}+1\Big].
\end{align*}
To estimate $\displaystyle\int_{\R^3}|\Delta u_s|^2$, we rearrange the terms in \eqref{def of u_p and u_s}$_2$ to obtain
\[\mu\Delta u_s+(\mu+\lambda)\nabla\text{\rm div}(u_s)=\rho\dot{u}-\rho\nabla\phi.\]
Therefore Lemma~\ref{estimates on Lame operator} implies that
\begin{align*}
\int_{\R^3}|\Delta u_s|^2dx\le \tilde C\Big[\int_{\R^3}(|\rho\dot{u}|^2+|\rho\nabla\phi|^2)dx\Big]\le \tilde C(\Phi_2+1),
\end{align*}
Combining this with \eqref{bound on Phi 2} and absorbing the resulting power of $\Phi_3$ by Cauchy's inequality, the result \eqref{bound on Phi 3} follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{Main thm for gamma=1}]
In view of the bounds \eqref{bound on Phi 1}, \eqref{bound on Phi 2} and \eqref{bound on Phi 3}, one can conclude that for $0\le t< T^*$,
\begin{equation}\label{bound on Phi 1 Phi 2 Phi 3}
\Phi_1(t)+\Phi_2(t)+\Phi_3(t)\le \tilde C.
\end{equation}
Hence using the bound \eqref{bound on Phi 1 Phi 2 Phi 3} and applying the same argument given in the proof of Lemma~\ref{higher estimate on rho and u lem}, for $T\in[0,T^*)$ and $q\in(3,6]$, we also have
\begin{align*}
\sup_{0\le t\le T}(\|\rho\|_{H^1\cap W^{1,q}}+\|\nabla u\|_{H^1})\le \tilde C.
\end{align*}
Therefore, similar to the proof of Theorem~\ref{Main thm for gamma>1}, we can extend the strong solution $(\rho,u)$ beyond $t=T^*$, which leads to a contradiction. This completes the proof of Theorem~\ref{Main thm for gamma=1}.
\end{proof}
\subsection*{Acknowledgment}
The author would like to thank the anonymous reviewers for their useful comments, which have greatly improved the manuscript. The work described in this paper was partially supported by the Dean's Research Fund of the Faculty of Liberal Arts and Social Science, The Education University of Hong Kong, HKSAR, China (Project No. FLASS/DRF 04634). The author declares no conflict of interest.
\bibliographystyle{amsalpha}
\section{Introduction}
In ITER, the performance of burning plasmas will depend on the population of alpha particles being well confined within the plasma core, as the heating of the deuterium-tritium (DT) plasma will then rely mainly on the energy of these suprathermal particles that are produced by core fusion reactions.
A phenomenon that can potentially hinder the successful operation of ITER is therefore the destabilization of Alfv\'en eigenmodes (AEs) by alpha particles \cite{Fu1989}, whereby an increased radial transport of the latter could degrade the conditions necessary to sustain the fusion process and lead to power fluxes that exceed the design values of the ITER plasma facing components \cite{Fasoli2007, Kurki-Suonio2009, Sharapov2013}.
While ITER scenario development is underway and in the absence of experimental results for guidance, a comprehensive modelling approach is mandatory to forecast the stability of Alfv\'enic activity in ITER plasmas.
In this article the stability of AEs is systematically addressed for the 15 MA ELMy H-mode ITER baseline scenario \cite{Polevoi2002, Pinches2015, Lauber2015, Rodrigues2015} making use of a recently introduced framework \cite{Rodrigues2015} that is based on the hybrid MHD drift-kinetic code \mbox{CASTOR-K} \cite{Borba1999, Nabais2015}, which is the key element in our suite of numerical codes.
Growth rates are computed systematically for all possible eigenmodes taking into account the alpha-particle drive, the Landau damping due to the DT ions, the thermalized helium (He) ions and the electrons, and the interaction with the Alfv\'en continuum.
Radiative damping is calculated a posteriori for the most relevant modes.
The remainder of this article is organized in three main sections.
The ITER baseline scenario, our modelling workflow and the numerical codes on which it relies are described in section \ref{section_scenario_and_workflow}.
section \ref{section_results} is dedicated to a detailed discussion of the results.
Finally, a summary and conclusions are provided in section \ref{section_conclusions}.
\section{Scenario and workflow}
\label{section_scenario_and_workflow}
\subsection{ITER baseline scenario}
\label{section_scenario}
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure_scenario.pdf}
\caption{Kinetic profiles from ASTRA simulations \cite{Polevoi2002,Pereverzev2008} for two variants of the 15 MA ITER baseline scenario. Compared with the {\it LoPed} scenario variant (left), {\it HiPed} (right) is characterised by higher pedestal temperatures and lower temperatures at the plasma core (dashed lines), which results in lower densities of the main fusion products, namely alpha particles and He ashes.}
\label{figure_scenario}
\end{center}
\end{figure*}
The two variants of the ITER scenario that are analysed here have been produced by ASTRA transport simulations \cite{Polevoi2002,Pereverzev2008}.
In both variants the plasma current is $I_p$ = 15 MA, the toroidal magnetic field is $B_0$ = 5.3 T, and the plasma major and minor radii are $R_0$ = 6.2 m and $a$ = 2 m, respectively.
The position of the magnetic axis is $R_\mathrm{m}$ = 6.4 m.
Figure \ref{figure_scenario} shows density and temperature profiles versus radial coordinate $s=\sqrt{\Psi/\Psi_\mathrm{b}}$, where the poloidal magnetic flux $\Psi$ is normalized to its boundary value $\Psi_\mathrm{b}$.
It can be seen that both scenario variants have approximately the same electron density $n_\mathrm{e}$ and impurity content $n_\mathrm{Z}$, which is essentially due to beryllium coming from the first wall.
The main difference between the two variants is that in the one on the left of figure \ref{figure_scenario}, hereafter referred to as {\it LoPed}, electron ($T_\mathrm{e}$) and ion ($T_\mathrm{i}$) temperatures are much lower at the pedestal and much higher at the core than in the {\it HiPed} variant on the right.
In fact, the {\it LoPed} and {\it HiPed} scenarios cover a 3 keV range of pedestal-top temperatures around the expected value for ITER, which is approximately 5 keV on the basis of edge MHD stability \cite{Huijsmans2013}.
Naturally, the higher (by roughly 10 keV) core temperatures in {\it LoPed} go together with a higher density of alpha particles $n_\alpha$ and of helium ashes (thermalized alpha particles) $n_\mathrm{He}$ in {\it LoPed} than in {\it HiPed}.
This in turn is reflected on the fuel density $n_\mathrm{DT} = n_\mathrm{D} + n_\mathrm{T}$ being lower at the plasma core in {\it LoPed} than in {\it HiPed}.
The mix of deuterium and tritium is optimal in both scenario variants, i.e., $n_\mathrm{D} = n_\mathrm{T}$.
For convenience, differences in the safety factor $q$ will be discussed later, apropos figure \ref{figure_most_unstable_modes_in_gap} and figure \ref{figure_q_detail}.
\subsection{Numerical codes and modelling workflow}
\label{section_modelling}
Our workhorse is \mbox{CASTOR-K}, which is used to assess the stability of AEs through the computation of their linear growth-rates.
Besides \mbox{CASTOR-K} our suite of numerical codes comprises HELENA \cite{Huysmans1991} to obtain magnetic equilibria, and the incompressible ideal-MHD code MISHKA \cite{Mikhailovskii1997} to compute the eigenmodes of a given equilibrium.
The extensive identification of all the AEs of an equilibrium is key to the success of our methodical stability-analysis approach.
For this reason MISHKA has been preferred to the resistive-MHD code CASTOR, which we also use to calculate radiative-damping rates, as reported in section \ref{section_radiative_damping}.
In a simple performance test, executing both MHD codes on the same CPU with the same input, MISHKA calculated an eigenmode in around 2.5 s, while CASTOR took approximately 23.5 s to compute the same eigenmode.
MISHKA is therefore roughly 10 times faster than CASTOR, which makes it an adequate tool to solve for thousands of eigenmodes in a reasonably short time.
We focus on eigenmodes with toroidal mode number $n$ ranging from 1 to 50 in order to stay within the limits of the drift-kinetic ordering for alpha particles, i.e., $k_\perp\rho_\alpha<1$ where $k_\perp$ is the AE perpendicular wavenumber and $\rho_\alpha$ is the gyroradius of the alpha particles \cite{Rodrigues2015}.
We further restrict our analysis to AEs whose eigenfrequency falls in one of the first three gaps of the ideal shear Alfv\'en wave continuum, namely (ordered by increasing frequency) the Toroidicity induced AE (TAE) gap, the Ellipticity induced AE (EAE) gap, and the Non-circular triangularity-induced AE (NAE) gap \cite{Fu1989, Betti1991, Kramer1998}.
All possible TAEs, EAEs, and NAEs with $n\leq 50$ have been determined by scanning the mode frequency $\omega/\omega_\mathrm{A}$ from $0.01$ to $2.0$, a value higher than the top frequency of the NAE gap for the range of $n$ considered.
The scan step is $2.0\times10^{-5}$, which we have found to be sufficiently small not to miss eigenmodes.
At every step of the scan, the quantity $(\omega/\omega_\mathrm{A})^2$ is input to MISHKA as a guess of the mode eigenvalue which, upon convergence, is returned together with the corresponding mode eigenfunction.
Valid AEs are subsequently collected from the large set of eigenmodes produced by the frequency scan.
The selection is based on two criteria, namely, the eigenfunction must be well-resolved radially \cite{Rodrigues2015}, and the mode cannot be affected by continuum damping.
We account for the effect of continuum damping in a straightforward, binary way.
A given AE is considered to be fully damped if its frequency matches the ideal Alfv\'en continuum at any radial position where the mode amplitude exceeds 1\% of its maximum, otherwise continuum damping is not taken into consideration at all for that mode.
A total of 705 AEs have successfully passed this validation process for {\it LoPed}, and 401 for {\it HiPed}.
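The binary continuum-damping criterion is simple enough to sketch in code. The fragment below is an illustrative reimplementation only (the toy profiles and function name are our own; the actual selection acts on MISHKA output): a mode is flagged as fully damped when its frequency crosses an Alfv\'en-continuum branch at any radius where the mode amplitude exceeds 1\% of its maximum.

```python
import numpy as np

def continuum_damped(omega, s, amplitude, omega_continuum, threshold=0.01):
    """Illustrative binary continuum-damping test: a mode of frequency
    `omega` is fully damped if omega crosses the continuum branch
    omega_continuum(s) anywhere the mode amplitude exceeds `threshold`
    of its maximum."""
    amplitude = np.asarray(amplitude)
    omega_continuum = np.asarray(omega_continuum)
    mask = amplitude > threshold * amplitude.max()
    diff = omega - omega_continuum
    # A resonance sits between grid points where diff changes sign; only
    # count it if both neighbouring points lie inside the masked region.
    crossing = (diff[:-1] * diff[1:] <= 0.0) & mask[:-1] & mask[1:]
    return bool(crossing.any())
```

For a Gaussian mode localized at $s\approx0.4$ and a continuum branch rising through the gap, the test returns damped only when the crossing falls inside the 1\% footprint of the mode.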
The selected AEs are then processed by \mbox{CASTOR-K}, which calculates the energy $\delta W_\mathrm{p}$ exchanged between a mode and a given population p of plasma particles.
The associated contribution to the growth rate, $\gamma_\mathrm{p}=\mathrm{Im}(\delta W_\mathrm{p})/(2\omega W_\mathrm{k})$, where $W_\mathrm{k}$ is the kinetic energy of the mode perturbation \cite{Borba1999}, is computed by \mbox{CASTOR-K} as well --- it is the basic quantity used in the stability analysis of eigenmodes.
Four \mbox{CASTOR-K} runs have been done for each mode to calculate the drive due to the alpha particles ($\alpha$) and the Landau damping due to the interaction with the bulk ions (DT), the electrons (e) and the helium ashes (He).
The net growth rate of the AE is then obtained by summing these 4 contributions, i.e.,
\begin{equation}
\gamma = \gamma_\alpha + \gamma_\mathrm{DT} + \gamma_\mathrm{e} + \gamma_\mathrm{He}.
\label{equation_net_growth_rate}
\end{equation}
Note that for some modes, particularly for even, Low-Shear TAEs (LSTAEs) \cite{Fu1995} which sit near the bottom of the TAE gap and consist of a symmetric combination of poloidal harmonics (see figure \ref{figure_LSTAE_sequences}) \cite{Nyqvist2012}, radiative damping may have an additional non-negligible contribution to $\gamma$, a subject that will be addressed in section \ref{section_radiative_damping}.
In \mbox{CASTOR-K}, the distribution function of every particle population p is modelled with the product of a function of $s$ and a function of energy $E$ \cite{Nabais2015, Rodrigues2015},
\begin{equation}
F_\mathrm{p}(s,E)=n_\mathrm{p}(s)f_\mathrm{p}(E).
\label{equation_distribution_function}
\end{equation}
The radial profiles $n_\mathrm{p}(s)$ of the thermal populations (DT, e and He), and of the alpha particles are shown in figure \ref{figure_scenario} for {\it LoPed} and {\it HiPed}.
Concerning the energy distribution $f_\mathrm{p}(E)$, while DT ions, He ashes and electrons have been described by Maxwellian distributions \cite{Rodrigues2015}, a slowing-down distribution which is determined by the effects of electron and ion drag on alpha particles \cite{Gaffey1976, Candy1996, Pinches1998, Bilato2014} has been used to describe the alpha-particle population,
\begin{equation}
f_\mathrm{\alpha}(E)=f_\mathrm{sd}(E)\left/\int_0^\infty f_\mathrm{sd}(E)dE\right.,
\label{equation_normalized_slowing_down}
\end{equation}
where
\begin{equation}
f_\mathrm{sd}(E)=\frac{1}{E^{\,3/2}+E_c^{\,3/2}}\,\mathrm{erfc}\left(\frac{E-E_0}{\Delta E}\right).
\label{equation_slowing_down}
\end{equation}
This expression provides a good approximation to distributions calculated with Fokker-Planck models \cite{Gaffey1976, Bilato2014}, and its analytical simplicity is convenient for the calculation of derivatives in \mbox{CASTOR-K}.
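As a quick numerical illustration (a sketch only, using the {\it LoPed} parameter values quoted in the text: $E_0=3.5$ MeV, $E_\mathrm{c}=595.1$ keV, $\Delta E=15.5$ keV), the normalized distribution $f_\alpha$ of equation \eqref{equation_normalized_slowing_down} can be tabulated and checked directly:

```python
import math
import numpy as np

# LoPed parameters quoted in the text, all in keV.
E0, Ec, dE = 3500.0, 595.1, 15.5

def f_sd(E_keV):
    """Unnormalized slowing-down distribution f_sd(E)."""
    return math.erfc((E_keV - E0) / dE) / (E_keV ** 1.5 + Ec ** 1.5)

# Tabulate and normalize numerically (trapezoidal rule); the grid extends
# far enough above E0 that the erfc cutoff has fully decayed.
E = np.linspace(0.0, E0 + 10.0 * dE, 20001)
f = np.array([f_sd(e) for e in E])
norm = ((f[1:] + f[:-1]) / 2.0 * np.diff(E)).sum()
f_alpha = f / norm
```

The resulting distribution is roughly flat below the crossover region, decays as $E^{-3/2}$ above $E_\mathrm{c}$, and is cut off sharply at the birth energy $E_0$ over a width $\Delta E$.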
The crossover energy $E_\mathrm{c}$ is the alpha-particle energy below which ion drag becomes more important than electron drag.
It is given by
\begin{equation}
E_\mathrm{c}=T_\mathrm{e}\left(\frac{3Z_1}{4}\right)^{2/3}\left(\frac{\pi m_\alpha}{m_\mathrm{e}}\right)^{1/3},
\label{equation_crossover_energy}
\end{equation}
where $Z_1=\sum_i m_\mathrm{\alpha}n_i z_i^2/\left(m_in_\mathrm{e}\right)$ is a sum over ions, the $i$th ion species having density $n_i$, charge number $z_i$ and mass $m_i$, and the electron temperature $T_\mathrm{e}$ is measured in eV.
Using values of $T_\mathrm{e}$ at $s=0.4$, where the gradient of $n_\alpha$ is practically at its maximum, we obtain $E_\mathrm{c} = 595.1$ keV in {\it LoPed} and $E_\mathrm{c} = 423.2$ keV in {\it HiPed}.
The ion temperature $T_\mathrm{i}$ at $s=0.4$ has been chosen as the dispersion of the birth energy of alpha particles around the value $E_0=3.5$ MeV, i.e., $\Delta E = 15.5$ keV in {\it LoPed} and $\Delta E = 11.4$ keV in {\it HiPed}.
Sensitivity analysis showed that varying $\Delta E$ from 10 keV to 100 keV changed $\left|\gamma_\alpha\right|$ by at most 0.5\% in {\it LoPed} and 0.1\% in {\it HiPed}.
Concerning the sensitivity to variations of $E_\mathrm{c}$, using $T_\mathrm{e}$ values in equation \ref{equation_crossover_energy} taken in the region of strong $n_\alpha$ gradient $0.25\lesssim s\lesssim 0.55$ led to a maximum variation in $\left|\gamma_\alpha\right|$ of 10\% in {\it LoPed} and 5\% in {\it HiPed}.
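The sensitivity check just described is easy to reproduce schematically with equation \eqref{equation_crossover_energy}; in the sketch below the value $Z_1=1.1$ and the $T_\mathrm{e}$ sweep values are placeholders of our own, not taken from the ASTRA profiles:

```python
import math

M_ALPHA_OVER_ME = 4.0 * 1836.15   # alpha-particle to electron mass ratio

def crossover_energy(Te, Z1):
    """E_c = T_e (3 Z_1/4)^(2/3) (pi m_alpha/m_e)^(1/3); since the formula
    is linear in T_e, E_c is returned in the same units as Te."""
    return Te * (3.0 * Z1 / 4.0) ** (2.0 / 3.0) \
              * (math.pi * M_ALPHA_OVER_ME) ** (1.0 / 3.0)

# Illustrative sensitivity sweep over the strong-gradient region
# (assumed Te values in keV, assumed Z1 = 1.1):
Ec_values = [crossover_energy(Te, 1.1) for Te in (18.0, 21.0, 24.0)]
```

Because $E_\mathrm{c}$ is linear in $T_\mathrm{e}$, a 25\% spread in the electron temperature over the strong-gradient region translates directly into a 25\% spread in the crossover energy.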
\section{Modelling results}
\label{section_results}
This section is essentially focused on the characterization of the destabilized eigenmodes with particular emphasis on their growth rates, which are reported in section \ref{section_growth_rates}.
Since the evaluation of the radiative-damping contribution to the net growth-rates is much less automated or systematic than the calculation of the other drive and damping terms,
it is discussed separately in section \ref{section_radiative_damping}.
The radial structure and frequency distribution of AEs within the TAE gap is discussed in section \ref{section_TAE_gap}.
\subsection{AE stability}
\label{section_growth_rates}
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure_growth_rates_all_AE.pdf}
\caption{Linear growth rates $\gamma$ normalized to the Alfv\'en frequency $\omega_\mathrm{A} = v_\mathrm{A}/R_\mathrm{m}$, where the Alfv\'en velocity at the magnetic axis is $v_\mathrm{A} \approx 7.1\times10^6$ m/s in {\it LoPed} and $v_\mathrm{A} \approx 7.0\times 10^6$ m/s in {\it HiPed}, versus toroidal mode number $n$ and colored by AE frequency for the two variants of the 15 MA ITER baseline scenario. TAEs appear in dark-blue patterns which correspond to frequencies around $\omega_\mathrm{A}/2$ and are the most unstable.}
\label{figure_growth_rates_all_AE}
\end{center}
\end{figure*}
The linear growth rates computed with \mbox{CASTOR-K} for all the valid AEs found in both variants of the ITER baseline scenario are represented in figure \ref{figure_growth_rates_all_AE}, where different symbols are used for TAEs, EAEs, and NAEs (notice the normalization to the Alfv\'en frequency instead of the more common mode frequency).
A quite noticeable feature of the growth rates in figure \ref{figure_growth_rates_all_AE} is that they are larger in {\it LoPed} than in {\it HiPed}, which is consistent with the much higher alpha-particle density and density gradient in the {\it LoPed} variant of the scenario --- see figure \ref{figure_scenario}.
Moreover, it is striking that although a large number of EAEs and NAEs have positive growth rates in {\it LoPed}, clearly in both scenario variants all markedly unstable modes are TAEs.
No AEs have been found for $n<7$ in {\it LoPed} and $n<13$ in {\it HiPed}.
Figure \ref{figure_growth_rates_TAE} shows the TAEs from figure \ref{figure_growth_rates_all_AE} with their radial location, which is defined as the position of the maximum amplitude of their strongest poloidal harmonic.
It is clear that the unstable modes are well localized at the core of the plasma, inside the region $s\lesssim 0.48$ for {\it LoPed} and $s\lesssim 0.32$ for {\it HiPed}.
It can further be seen from the radial profile of $q$ in figure \ref{figure_most_unstable_modes_in_gap} that these modes exist within the low magnetic-shear region of the plasma --- they are in fact LSTAEs.
Only a few modes exist for $n\gtrsim 20$ in {\it HiPed}.
These are low-frequency modes sitting close to the bottom of the TAE gap which do not cross the continuous spectrum, as seen in figure \ref{figure_TAE_gap_distribution}.
Such is not the case in {\it LoPed} for which more modes exist for high $n$.
This is due to the extended low magnetic-shear region, within which LSTAEs with higher eigenfrequencies can exist without matching the top boundary of the TAE gap, as seen in figure \ref{figure_most_unstable_modes_in_gap}, and because in {\it LoPed} higher-$n$ modes are located farther inward than lower-$n$ modes.
Indeed, a difference in the evolution of the location of LSTAEs as $n$ increases can also be seen in figure \ref{figure_growth_rates_TAE}.
While in {\it HiPed} modes become progressively located at positions farther from the core, the opposite occurs in {\it LoPed}.
This contrast in behavior results from $q(s)$ being monotonically increasing (see figure \ref{figure_most_unstable_modes_in_gap}) together with the fact that within the $0.2<s<0.37$ region $q$ is below 1 in {\it HiPed} but above 1 in {\it LoPed}, as shown in figure \ref{figure_q_detail}.
Considering the TAE condition $q(s)=(m\pm 1/2)/n$ \cite{Cheng1986} and that $m\approx n$ for LSTAEs we obtain $q(s)\approx 1\pm 1/(2n)$.
Therefore, since $q>1$ in {\it LoPed}, $q$ must decrease as $n$ increases so TAEs move towards lower $s$ values, whereas for $q<1$ in {\it HiPed} TAEs move towards higher $s$ values as $q$ increases.
Moreover, the higher magnetic shear in {\it HiPed} does not allow the existence of LSTAEs beyond $s \approx 0.3$, that is for $n\gtrsim20$.
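As a quick numerical illustration of this condition, the sketch below (a hypothetical helper, not part of the paper's workflow) evaluates $q \approx 1 \pm 1/(2n)$ for the most unstable mode of each variant; the results match the $q$ values quoted later in the orbit-width discussion ($q(0.17)\approx 0.968$ in {\it HiPed}, $q(0.37)\approx 1.016$ in {\it LoPed}).

```python
# Illustrative check of the low-shear TAE condition q ~ 1 +/- 1/(2n),
# from q(s) = (m +/- 1/2)/n with m ~ n for LSTAEs.

def lstae_q(n):
    """Safety-factor values at which an LSTAE with toroidal
    mode number n can sit (lower and upper gap branches)."""
    return (1.0 - 1.0 / (2 * n), 1.0 + 1.0 / (2 * n))

# HiPed's most unstable mode (n = 16) requires q ~ 0.969,
# consistent with q < 1 inside 0.2 < s < 0.37 in that variant.
q_lo, q_hi = lstae_q(16)
print(round(q_lo, 3))  # 0.969

# LoPed's most unstable mode (n = 31) requires q ~ 1.016,
# consistent with q > 1 in the same region for LoPed.
print(round(lstae_q(31)[1], 3))  # 1.016
```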
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure_growth_rates_TAE.pdf}
\caption{In both scenarios analysed the most unstable TAEs are core-localized, as indicated by their reddish colors. As seen in the $q$ profile in figure \ref{figure_most_unstable_modes_in_gap}, in {\it HiPed} these TAEs are inside the $q=1$ surface (i.e., they are ``tornado modes'' \cite{Sharapov2013}).}
\label{figure_growth_rates_TAE}
\end{center}
\end{figure*}
The growth rates of the unstable TAEs in figure \ref{figure_growth_rates_TAE} are visibly peaked at toroidal mode numbers that are roughly $n \approx 30$ in {\it LoPed} and $n \approx 15$ in {\it HiPed}.
As shown in figure \ref{figure_most_unstable_modes_in_gap} the most unstable mode in {\it LoPed} is a $n = 31$ even LSTAE radially localized at $s\approx 0.37$, whose frequency $\omega/\omega_\mathrm{A}=0.395$ lies at $0.5\%$ of the TAE gap ($0\%$ being the bottom gap frequency and $100\%$ the top gap frequency).
This mode has a net growth rate $\gamma/\omega_\mathrm{A} \approx 1.24\%$ of which $2.17\%$ is from alpha-particle drive and $-0.88\%$ is due to damping by the bulk ions.
In {\it HiPed} the most unstable mode is a $n = 16$ LSTAE found at $s\approx 0.17$ with a frequency $\omega/\omega_\mathrm{A}=0.533$, which lies at $83.1\%$ of the TAE gap.
It is an odd mode, formed by an anti-symmetric combination of poloidal harmonics (see figure \ref{figure_LSTAE_sequences}) \cite{Nyqvist2012}, and its growth rate is $\gamma/\omega_\mathrm{A} \approx 0.60\%$, for which alpha-particles contribute with $0.94\%$ and bulk ions with $-0.33\%$.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure_most_unstable_modes_in_gap.pdf}
\caption{The eigenfunctions of the most unstable AEs represented by the colored lines have been computed by MISHKA using 17 harmonics with poloidal mode number starting at $m=n-1$, but here only the strongest 4 are shown. In both scenario variants the modes fall within the TAE gap of the Alfv\'en continuous spectrum, whose boundaries are shown as solid black lines --- notice that the baseline of the eigenfunction (zero value on the right axis) marks the mode frequency on the left axis.
}
\label{figure_most_unstable_modes_in_gap}
\end{center}
\end{figure*}
As happens in these two examples, Landau damping of TAEs is in general substantial (as discussed in detail in section \ref{section_TAE_gap}) and it is mainly due to the DT ions.
Indeed, we have found damping by He ashes to be relatively small and damping by electrons to be negligible.
It should be noticed that while in {\it LoPed} the most unstable mode is close to $s=0.38$ where the gradient of the alpha-particle density $n_\alpha$ is highest, that is not possible in {\it HiPed} because the maximum $n_\alpha$ gradient occurs at $s\approx 0.44$ where the higher magnetic shear only allows non-local modes that interact strongly with the Alfv\'en continuum.
Nevertheless, in both scenario variants the $n_\alpha$ gradient remains close to its maximum value in the mid-radius region $0.25\lesssim s\lesssim 0.55$ which encloses all unstable AEs.
\begin{figure}
\begin{center}
\includegraphics[width=0.49\textwidth]{Figure_q_detail.pdf}
\caption{The on-axis value of the safety factor in {\it HiPed} is $q(0) = 0.96$, around 3\% lower than $q(0) = 0.987$ in {\it LoPed}. While in {\it HiPed} the low magnetic-shear region (due to sawtooth crashes) ranges from the axis to $s \approx 0.3$, in {\it LoPed} it extends to $s \approx 0.5$. The $q=1$ surface is located at $s \approx 0.2$ in {\it LoPed} and $s \approx 0.37$ in {\it HiPed}.
}
\label{figure_q_detail}
\end{center}
\end{figure}
At this point it is interesting to verify a well-known estimate for the toroidal mode number of the most driven AE \cite{Pinches2015, Rodrigues2015}.
The estimate is based on matching the width of passing alpha-particle orbits and the TAE width, which leads to $n\approx s/q^2\times a/R_\mathrm{m} \times \Omega_{\alpha}/\omega_\mathrm{A}$, where the cyclotron frequency of the alpha particles is $\Omega_{\alpha} \approx 2.5\times10^8$ rad/s and $q(s)$ is evaluated at the location $s$ of the AE with the highest drive.
By using $q(0.37) \approx 1.016$ we arrive at $n \approx 26$ in {\it LoPed}, while $q(0.17) \approx 0.968$ in {\it HiPed} leads to $n \approx 13$.
Considering the simplicity of the rationale behind them, these are reasonable guesses for the toroidal mode numbers of the most driven, and for that matter also of the most unstable AEs.
However, it must be remarked that the above formula depends significantly on the location of the most driven modes, which could not have been known with accuracy beforehand.
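The estimate can be reproduced with a short numerical sketch. Note that the inverse aspect ratio $a/R_\mathrm{m} \approx 0.32$ and the Alfv\'en frequency $\omega_\mathrm{A} \approx 1.1\times 10^6$ rad/s used below are ITER-like values assumed for illustration; they are not quoted in this section.

```python
# Sketch of the orbit-width estimate n ~ (s/q^2)(a/R_m)(Omega_alpha/omega_A).
# a/R_m and omega_A below are assumed ITER-like values, not quoted here.

OMEGA_ALPHA = 2.5e8   # alpha-particle cyclotron frequency [rad/s]
A_OVER_RM = 0.32      # assumed inverse aspect ratio a/R_m
OMEGA_A = 1.1e6       # assumed Alfven frequency [rad/s]

def most_driven_n(s, q):
    """Toroidal mode number for which passing alpha-particle
    orbit width matches the TAE width at radial location s."""
    return s / q**2 * A_OVER_RM * OMEGA_ALPHA / OMEGA_A

print(round(most_driven_n(0.37, 1.016)))  # ~26 (LoPed)
print(round(most_driven_n(0.17, 0.968)))  # ~13 (HiPed)
```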
\subsection{Radiative damping}
\label{section_radiative_damping}
With the introduction of finite gyroradius effects TAEs become coupled with Kinetic Alfv\'en Waves (KAWs), which carry energy in the radial direction away from the eigenmode in an intrinsic TAE damping process known as radiative damping \cite{Rosenbluth1975, Mett1992, Candy1994, Nyqvist2012, Pinches2015}.
In the case of LSTAEs only the symmetric modes suffer significant radiative damping, as the KAWs excited by the anti-symmetric LSTAEs interfere within the mode localization region resulting in a negligible net energy flux \cite{Mett1992, Nyqvist2012}.
Radiative damping has previously been shown to have a significant contribution to the net growth rate $\gamma$ in {\it LoPed} \cite{Rodrigues2015}, namely for its most unstable mode, a symmetric LSTAE sitting very close to the bottom of the TAE gap.
On the contrary, the most unstable modes in {\it HiPed} are anti-symmetric LSTAEs, so their radiative damping is expected to be very small \cite{Nyqvist2012}.
Here, the same approach that has been followed in \cite{Rodrigues2015} is used to estimate $\gamma_\mathrm{rad}$, the radiative-damping contribution to the net growth rate of selected AEs from both scenario variants.
The method \cite{Gorelenkov1992, Mett1992, Candy1994, Connor1994} relies on a formal equivalence between a non-ideal MHD model that accounts for finite parallel electric field and first-order ion-gyroradius effects, and the resistive MHD model that is implemented in the CASTOR eigenvalue code \cite{Kerner1998}.
In order to compute non-ideal eigenmodes and determine their radiative-damping rates, in place of the (usually real) resistivity we input to CASTOR the complex quantity \cite{Candy1994, Connor1994}
\begin{equation}
\eta = i 4 q^2 \left\{ \frac{3}{4} + \frac{T_\mathrm{e}}{T_\mathrm{i}} \left[ 1-i\delta \left(\nu_\mathrm{e}\right) \right] \right\} \left( \frac{\omega}{\omega_\mathrm{A}} \right)^3 \left( \frac{\rho_\mathrm{i}}{R_\mathrm{m}} \right)^2,
\label{equation_complex_resistivity}
\end{equation}
and conduct a scan in $\delta(\nu_\mathrm{e})$, a wave dissipation-rate due to collisional friction between trapped electrons and passing particles ($\nu_\mathrm{e}$ is the electron collision frequency), which leads to an imaginary frequency component and to the corresponding damping rate $\gamma_\mathrm{CASTOR}$.
We thus obtain the MHD growth-rate $\gamma_\mathrm{CASTOR}$ as a function of $\delta(\nu_\mathrm{e})$, from which $\gamma_\mathrm{rad}$ can be inferred.
Our method is based on the fact that of the two components of non-ideal eigenmodes, KAWs suffer much stronger collisional damping than AEs \cite{Hasegawa1976, Candy1994}.
Therefore, as $\delta(\nu_\mathrm{e})$ is increased from zero $\gamma_\mathrm{CASTOR}$ rises mainly due to the dominant damping of KAWs, up to a certain value of $\delta(\nu_\mathrm{e})$ for which all KAW energy is damped by collisional friction and only the much weaker damping of the AE remains.
This change of behavior is observed as a modification of the slope of the $\gamma_\mathrm{CASTOR}$ versus $\delta(\nu_\mathrm{e})$ curve which shows a noticeable knee, as discussed below.
An indication of the scan limits can be taken from known expressions of $\delta(\nu_\mathrm{e})$, for which $0<\delta(\nu_\mathrm{e})\ll 1$ \cite{Candy1994, Connor1994}.
It should therefore not be necessary to extend the scan to $\delta(\nu_\mathrm{e})>1$ to observe the knee.
The right-hand side of equation \ref{equation_complex_resistivity} is evaluated at the position of the eigenmode in question.
At the location of all the eigenmodes of interest the normalized ion gyroradius $\rho_\mathrm{i}/R_\mathrm{m}$ has the value $6.25\times10^{-4}$ in {\it LoPed} and $3.43\times10^{-4}$ in {\it HiPed}, whereas $q$ is approximately $1.02$ in {\it LoPed} and $0.97$ in {\it HiPed}.
For the AE frequency $\omega$ in equation \ref{equation_complex_resistivity} we use the value given by MISHKA.
Notice that this frequency would only be the same as the frequency of the eigenmode given by CASTOR if $\eta=0$, i.e., in the ideal-MHD case if compressibility is negligible.
In our calculations we found that the CASTOR frequency is generally slightly higher, by up to 5\%, than the MISHKA frequency.
Moreover, as can be seen in figure \ref{figure_radiative_damping} the frequency of the CASTOR eigenmode changes during a $\delta(\nu_\mathrm{e})$ scan --- except when radiative damping is very small.
To address these frequency changes and ensure that we are indeed analysing the intended AE, a scan is made not only in $\delta(\nu_\mathrm{e})$ but also in the guess frequency that is input to CASTOR.
To initiate a $\delta(\nu_\mathrm{e})$ scan we use a guess frequency that is close to the MISHKA eigenmode frequency. Subsequently, the guess frequency at every scan step is given by the frequency of the converged CASTOR eigenmode in the previous step of the scan.
This way it is guaranteed that the input given to CASTOR changes slowly and we are tracking the same eigenmode during the whole $\delta(\nu_\mathrm{e})$ scan, which considerably simplifies the process.
By scanning a range of initial guess-frequency values around the frequency of the MISHKA eigenmode we obtain a set of $\delta(\nu_\mathrm{e})$ scans.
From this set we can then choose the scan for the non-ideal CASTOR eigenmode that corresponds to the desired ideal MISHKA eigenmode with coupled KAWs.
The $\delta(\nu_\mathrm{e})$ scan on the left of figure \ref{figure_radiative_damping} is for the most unstable TAE in {\it HiPed}, which we recall is anti-symmetric.
The constant slope of $\gamma_\mathrm{CASTOR}/\omega_\mathrm{A}$ versus $\delta(\nu_\mathrm{e})$ exemplifies the outcome of a $\delta(\nu_\mathrm{e})$ scan for odd LSTAEs, which do not suffer noticeable radiative damping.
As discussed above, this is an expected result given the negligible energy carried away by the KAWs, and it has been observed for the other anti-symmetric and noticeably unstable modes in {\it HiPed} as well.
Loosely speaking, in this case the MHD damping rate is solely due to the weak damping of the ideal AE component of the non-ideal eigenmode, and it is therefore simply proportional to the wave dissipation-rate $\delta(\nu_\mathrm{e})$.
The eigenfunctions calculated by CASTOR and by MISHKA are practically the same for these eigenmodes, as well as their frequencies which differ by no more than 1\%.
The right side of figure \ref{figure_radiative_damping} shows the $\delta(\nu_\mathrm{e})$ scan for the most unstable {\it symmetric} TAE in {\it HiPed} with the same $n=16$.
This eigenmode has a net growth-rate $\gamma/\omega_\mathrm{A} \approx 0.31\%$.
It is localized at $s \approx 0.15$ and its frequency is $\omega/\omega_\mathrm{A}=0.481$.
In contrast with the TAE analysed on the left side of figure \ref{figure_radiative_damping}, the CASTOR eigenmode frequency $\omega/\omega_\mathrm{A}$ now varies noticeably during the scan.
Simultaneously, as $\delta \left(\nu_\mathrm{e}\right)$ rises there is a significant change in the slope of the $\gamma_\mathrm{CASTOR}/\omega_\mathrm{A}$ versus $\delta(\nu_\mathrm{e})$ curve, which eventually reaches a constant value as the curve asymptotically approaches a straight line.
A break-in-slope (BIS) technique is used to pinpoint $\delta_\mathrm{rad}$, the value of $\delta(\nu_\mathrm{e})$ at the intersection of the two lines obtained by linear fitting the first and last few points of the curve, as shown by the green lines in figure \ref{figure_radiative_damping}.
Following the reasoning above, the ordinate at the knee of the curve (where its slope changes abruptly), which in this case occurs at the abscissa $\delta_\mathrm{rad}\approx0.09$, is taken as the radiative-damping rate $\gamma_\mathrm{rad}/\omega_\mathrm{A} \approx -0.57\%$.
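The BIS fit itself is simple to reproduce: straight lines are fitted to the first and last few points of the curve and their intersection marks the knee. The sketch below demonstrates this on hypothetical synthetic data with a knee placed at $\delta = 0.09$; it is not the actual fitting code used here.

```python
# Minimal break-in-slope (BIS) sketch: fit lines to the first and
# last k points of a curve; their intersection locates the knee.

def line_fit(xs, ys):
    """Least-squares slope and intercept (no external libraries)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def break_in_slope(xs, ys, k=5):
    a1, b1 = line_fit(xs[:k], ys[:k])      # initial steep branch
    a2, b2 = line_fit(xs[-k:], ys[-k:])    # final shallow branch
    x_knee = (b2 - b1) / (a1 - a2)         # intersection abscissa
    return x_knee, a1 * x_knee + b1        # (delta_rad, knee ordinate)

# Synthetic piecewise-linear curve with a knee at x = 0.09.
xs = [0.01 * i for i in range(21)]
ys = [10.0 * x if x < 0.09 else 0.9 + (x - 0.09) for x in xs]
x_knee, y_knee = break_in_slope(xs, ys)
print(round(x_knee, 2))  # 0.09
```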
Although the net growth rate calculated by \mbox{CASTOR-K} for this particular AE is positive, the eigenmode is in fact stable when considering radiative damping, since $(\gamma+\gamma_\mathrm{rad})/\omega_\mathrm{A} \approx -0.26\%$.
Notice that these are somewhat rough estimates of the radiative-damping rate $\gamma_\mathrm{rad}$ that depend on several factors.
In particular they depend on the values of $T_\mathrm{e}$ and $T_\mathrm{i}$ used in equation \ref{equation_complex_resistivity}, which are calculated at a particular value of $s$ chosen to represent the radial position of the eigenmode.
The uncertainty in $\delta_\mathrm{rad}$ must also be considered.
Evidently, a different method could be used to determine the knee of the $\gamma_\mathrm{CASTOR}/\omega_\mathrm{A}$ versus $\delta(\nu_\mathrm{e})$ curve.
We could for example choose the point where the slope of $\gamma_\mathrm{CASTOR}/\omega_\mathrm{A}$ and the frequency $\omega/\omega_\mathrm{A}$ become practically stable, $\delta(\nu_\mathrm{e})\approx 0.15$.
The radiative-damping rate would in that case differ from the BIS value by approximately 20\%.
Nevertheless, such a difference would not have a significant impact on the stability of the TAEs.
Table \ref{table_radiative_damping} summarizes the radiative-damping analysis that has been made for some of the most unstable eigenmodes in {\it LoPed}, all of which are symmetric.
These modes have been chosen from the top of the two curves that can be seen on the left of figure \ref{figure_growth_rates_TAE} peaking around $n=25$ and $n=31$.
While the analysis confirms that radiative damping accounts for a significant fraction of the net growth-rate in {\it LoPed}, it also shows that the most unstable eigenmode remains the same after taking radiative damping into account, as other unstable eigenmodes suffer a similar radiative-damping effect.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure_radiative_damping.pdf}
\caption{Estimation of radiative-damping rates for a pair of {\it HiPed} TAEs. Red curves represent the MHD damping-rate calculated by CASTOR as a function of the wave dissipation-rate $\delta(\nu_\mathrm{e})$ \cite{Candy1994, Connor1994}, whereas blue curves represent the frequency of the non-ideal eigenmode. For the analysis of the TAE on the left, which is the most unstable in {\it HiPed}, the values $T_\mathrm{e}=14.7\,\mathrm{keV}$ and $T_\mathrm{i}=13.0\,\mathrm{keV}$ are used in equation \ref{equation_complex_resistivity}. The constant slope of the red curve indicates a negligible value of $\gamma_\mathrm{rad}$, as expected since this is an anti-symmetric mode \cite{Nyqvist2012}. In the case of the symmetric TAE on the right the temperatures are $T_\mathrm{e}=14.8\,\mathrm{keV}$ and $T_\mathrm{i}=13.1\,\mathrm{keV}$. The 5-point BIS analysis of the red curve leads to a radiative-damping rate $\gamma_\mathrm{rad}/\omega_\mathrm{A} \approx -0.57\%$, which has been picked at $\delta_\mathrm{rad}\approx0.09$ as indicated by the green lines.}
\label{figure_radiative_damping}
\end{center}
\end{figure*}
\begin{table*}
\begin{center}
\caption{\label{table_radiative_damping}Radiative-damping contribution to the net growth-rate for a selection of unstable LSTAEs in {\it LoPed}.}
\captionsetup{justification=centering}
\begin{tabular}{@{}*{9}{c}@{}}
\hline
\noalign{\smallskip}
$n$ & $\omega/\omega_\mathrm{A}$ & $s$ & $T_\mathrm{e}\,\mathrm{(keV)}$ & $T_\mathrm{i}\,\mathrm{(keV)}$ & $\gamma/\omega_\mathrm{A}$ & $\delta_\mathrm{rad}$ & $\gamma_\mathrm{rad}/\omega_\mathrm{A}$ & $(\gamma+\gamma_\mathrm{rad})/\omega_\mathrm{A}$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
24 & $0.382$ & $0.43$ & $17.0$ & $14.8$ & $0.67\%$ & $0.12$ & $-0.63\%$ & $0.04\%$ \\
25 & $0.384$ & $0.42$ & $17.3$ & $15.0$ & $0.80\%$ & $0.12$ & $-0.50\%$ & $0.30\%$ \\
26 & $0.386$ & $0.41$ & $17.6$ & $15.3$ & $0.71\%$ & $0.12$ & $-0.71\%$ & $0.00\%$ \\
30 & $0.394$ & $0.37$ & $18.7$ & $16.2$ & $1.16\%$ & $0.14$ & $-0.73\%$ & $0.43\%$ \\
31 & $0.395$ & $0.37$ & $18.7$ & $16.2$ & $1.24\%$ & $0.09$ & $-0.36\%$ & $0.88\%$ \\
32 & $0.397$ & $0.36$ & $19.0$ & $16.4$ & $1.13\%$ & $0.07$ & $-0.41\%$ & $0.72\%$ \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Radial structure and frequency distribution of LSTAEs}
\label{section_TAE_gap}
\begin{figure}
\begin{center}
\includegraphics[width=0.49\textwidth]{Figure_TAE_gap_distribution.pdf}
\caption{Frequency distribution of LSTAEs as a function of toroidal mode number in {\it HiPed}. The lower branch is formed by symmetric modes near the bottom of the TAE gap, while the modes in the upper branch are anti-symmetric and close to the top of the gap. The modes that appear scattered at the bottom right of the figure with $s\gtrsim 0.4$ are not LSTAEs.}
\label{figure_TAE_gap_distribution}
\end{center}
\end{figure}
In figure \ref{figure_TAE_gap_distribution} the frequency of TAEs is plotted versus $n$ for {\it HiPed}.
Two main branches are evident inside the TAE gap: an upper branch made of modes that rise in frequency as $n$ increases, and a lower branch with decreasing frequency modes.
The lower-frequency branch is made of even, symmetric modes, while the modes in the upper branch are odd, anti-symmetric modes \cite{Pinches2015, Lauber2015}.
Furthermore, it has been verified that for a given $n$ the frequency of anti-symmetric modes rises with the number of peaks in their poloidal harmonics, while symmetric modes have progressively lower frequencies as their number of peaks increases.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure_LSTAE_sequences.pdf}
\caption{This group of $n=16$ LSTAEs in {\it HiPed} is a subset of the data in figure \ref{figure_TAE_gap_distribution}. The sequence in the top row shows the eigenfunctions of odd, anti-symmetric modes, whereas the eigenfunctions in the bottom row form a sequence of even, symmetric modes. The absolute value of every eigenfunction has been normalized to unity. Moreover, eigenfunctions have been labelled with the number of peaks in each of their poloidal harmonics, of which only the first 4 are shown, and with their parity.}
\label{figure_LSTAE_sequences}
\end{center}
\end{figure*}
This behavior is illustrated in figure \ref{figure_LSTAE_sequences} for $n=16$, but it occurs for all the modes in the two branches of figure \ref{figure_TAE_gap_distribution} and it has also been observed in {\it LoPed}.
The long line in figure \ref{figure_TAE_gap_distribution} that chirps down until $n \approx 45$ contains the simplest symmetric TAEs, which have a single peak and no zeroes or oscillations.
Its ``mirror'' line in the upper branch ends abruptly at $n = 19$.
This occurs because, as discussed earlier, in {\it HiPed} higher-$n$ modes are located farther from the core than lower-$n$ modes and their eigenfrequencies match the Alfv\'en continuum at a smaller $s$ for modes at the top of the gap than at its bottom, as can be seen in figure \ref{figure_most_unstable_modes_in_gap}, thereby causing the missing anti-symmetric modes to be effectively annihilated by continuum damping.
The opposite situation occurs in {\it LoPed} as higher-$n$ modes are located at positions closer to the core than lower-$n$ modes, so the anti-symmetric modes are missing for $n < 27$.
Figure \ref{figure_most_unstable_modes_in_gap} also shows that in both scenario variants the most unstable modes are LSTAEs with a single peak.
However, whereas in {\it LoPed} symmetric modes have larger growth rates than anti-symmetric modes with the same $n$, in {\it HiPed} Alfv\'enic activity is dominated by anti-symmetric modes.
This situation is unusual since the TAEs that are commonly observed in experiments are symmetric \cite{Kramer2004}.
The leading role of anti-symmetric TAEs in {\it HiPed} is explained in figure \ref{figure_symmetry}, where it can be seen that it is due to their lower damping and not to a lower alpha-particle drive of the symmetric modes.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure_symmetry.pdf}
\caption{Summary of growth-rates for symmetric and anti-symmetric LSTAEs in both scenario variants, showing their net value $\gamma$ (middle part of the circular markers), drive $\gamma_\alpha$ (left marker part), and damping $\gamma_\mathrm{DT} + \gamma_\mathrm{e} + \gamma_\mathrm{He}$ (right marker part) --- see equation \ref{equation_net_growth_rate}. Only modes with positive net growth-rates are shown. In {\it LoPed} the most driven modes are the symmetric LSTAEs, which are also the most unstable. That is not the case in {\it HiPed} because although symmetric modes are the most driven, they also have the highest damping, whereby the most unstable {\it HiPed} modes are anti-symmetric LSTAEs.}
\label{figure_symmetry}
\end{center}
\end{figure*}
\section{Summary}
\label{section_conclusions}
The linear stability of AEs in the presence of alpha particles has been analysed for two variants of the 15 MA ELMy H-mode ITER baseline scenario using a specialized workflow that is based on the hybrid MHD drift-kinetic code \mbox{CASTOR-K}.
Our modelling results show that, considering alpha-particle drive and Landau damping on DT ions, helium ashes and electrons, the most unstable modes have toroidal mode numbers around $n \approx 30$ in the scenario variant with a lower pedestal temperature, named {\it LoPed}, and $n \approx 15$ in {\it HiPed}, with maximum growth rates of 1.24\% and 0.60\%, respectively.
In both scenario variants these modes are LSTAEs, i.e., they are localized in the low magnetic-shear region of the plasma core.
{\it LoPed} has a higher density of alpha particles at the plasma core than {\it HiPed}, which is consistent with the larger AE growth rates that have been found for {\it LoPed}.
Radiative damping, which has been determined for a number of chosen modes in both scenarios, was shown to somewhat reduce the growth rates of the most unstable modes in {\it LoPed}, but to be insufficient to stabilize them or alter their growth-rate ordering.
This result is in line with the calculation done in \cite{Rodrigues2015} for the most unstable mode in {\it LoPed}.
On the contrary, {\it HiPed} results are essentially unaltered since in this case all significantly unstable modes are anti-symmetric TAEs, which are practically unaffected by radiative damping \cite{Nyqvist2012}.
Nevertheless, even with radiative damping taken into account, Alfv\'enic activity remains most unstable in the {\it LoPed} variant of the ITER baseline scenario.
Concerning symmetry, in the case of {\it HiPed} a clear frequency distribution of symmetric and anti-symmetric LSTAEs within the TAE gap has been found that agrees with and illustrates results from recent studies on the same ITER scenario \cite{Pinches2015, Lauber2015}.
It has been found that practically all unstable modes are symmetric in {\it LoPed}, whereas in {\it HiPed} anti-symmetric modes have the highest growth rates.
The rather uncommon fact that in {\it HiPed} the most unstable modes are not symmetric has been shown to be due to the lower Landau-damping rates of the anti-symmetric modes.
Indeed, the most driven modes in {\it HiPed} are symmetric LSTAEs just like in {\it LoPed}, which shows the importance of considering all drive and damping processes when assessing the stability of a scenario.
Neutral beam injection (NBI), which has not been considered here, has previously been found to drive AEs quite significantly for $s\gtrsim 0.3$ in the case of {\it LoPed}, assuming a 1 MeV birth energy of the NBI fast ions \cite{Pinches2015, Toigo2015}.
It is therefore important to calculate the NBI drive, certainly so in the case of {\it LoPed} for which the most unstable AEs are located at $s\gtrsim 0.3$.
This shortcoming is to be addressed in future work.
It is important to note that differences in the safety factor, namely in the radial extent of the low magnetic-shear region have been shown to strongly influence both the number of existing AEs and their radial location, particularly of the most unstable LSTAEs.
Moreover, since magnetic shear is very low in most of the plasma core, some care must be taken in the interpretation of these and similar stability results as they can be sensitive to relatively small variations in the on-axis value of the safety factor \cite{Rodrigues2015EPS, Rodrigues2015IAEA}.
\section*{Acknowledgments}
This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No. 633053.
IST activities received financial support from ``Funda\c{c}\~{a}o para a Ci\^{e}ncia e Tecnologia'' (FCT) through project UID/FIS/50010/2013.
ITER is Nuclear Facility INB no. 174.
The views and opinions expressed herein do not necessarily reflect those of the European Commission, IST, FCT, or the ITER Organization.
All computations were carried out using the HELIOS supercomputer system at the Computational Simulation Centre of the International Fusion Energy Research Centre (IFERC-CSC) in Aomori, Japan, under the Broader Approach collaboration between Euratom and Japan implemented by Fusion for Energy and JAEA.
PR was supported by EUROfusion Consortium grant no. WP14-FRF-IST/Rodrigues and NFL was supported by FCT grant no. IF/00530/2013.
\providecommand{\newblock}{}
BkiUb-7xK5YsWR0KiZHN | \section{Introduction}
\label{sec:intro}
Social Network Analytics computations are a significant part of a
growing number of big data applications in many fields. Many
different analytics frameworks target graph analytics
computations~\cite{graphx,giraph,galois,powergraph,graphlab,graphChi},
offering high-level programming models for the development of complex
graph algorithms.
Partitioning and placement play a much more important role in graph
computations compared to map-reduce analytics~\cite{mapReduce}, as
graph algorithms very often have irregular computational dependencies
that depend on the graph structure. Sub-optimal partitioning and
placement of the graph may cause load-imbalance, incur high
communication costs, or delay the convergence of an iterative
computation. Thus, many graph analytics frameworks provide a way for
the user to control partitioning, aiming to allow for
algorithm-specific optimizations. Overall, graph partitioning
strategies are either edge-cuts or vertex-cuts~\cite{rahimian}. Edge
cuts partition the vertex set, optimizing for the number of edges that
cross partition boundaries, as these translate to communication costs.
Abou-Rjeili and Karypis~\cite{karypis} have shown that edge cuts
produce partitions of very different sizes and may lead to load
imbalance, especially for power-law graphs, which describe most social
network analytics datasets. To avoid such imbalance, vertex cuts
divide edges into balanced partitions, replicating vertices as
required. In this case, communication cost tends to be proportional
to the number of replicated vertices, so vertex-cut algorithms aim to
minimize them.
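As an illustration of the vertex-cut idea, the following toy sketch (hypothetical code, unrelated to GraphX's actual implementation) hashes each edge to a partition and tracks the vertex replicas that the cut creates; the replication factor it reports is the quantity that vertex-cut strategies try to minimize.

```python
# Toy vertex cut: edges are hashed to partitions, and each vertex
# is replicated in every partition that holds one of its edges.

def vertex_cut(edges, num_parts):
    parts = [[] for _ in range(num_parts)]
    replicas = {}  # vertex -> set of partitions holding a replica
    for src, dst in edges:
        p = hash((src, dst)) % num_parts
        parts[p].append((src, dst))
        replicas.setdefault(src, set()).add(p)
        replicas.setdefault(dst, set()).add(p)
    return parts, replicas

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4)]
parts, replicas = vertex_cut(edges, 2)

# Replication factor: average number of copies per vertex; the
# communication cost of a vertex cut grows with this quantity.
rep_factor = sum(len(ps) for ps in replicas.values()) / len(replicas)
print(len(parts), rep_factor >= 1.0)
```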
However, communication cost is not perfectly correlated with the
number of cut edges or replicated vertices. Other metrics that affect
performance include the vertices of the largest partition, the ratio
of replicated vertices to total vertices, the ratio of edges in the
largest partition to total edges, and more~\cite{metrics}. Although
these metrics correlate with performance for many analytics
computations, it is not always straightforward to predict which one is
the most important factor for every graph algorithm.
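These metrics are straightforward to compute from an edge-to-partition assignment. The sketch below (illustrative only, on a hand-picked toy assignment) evaluates two of them: the vertex replication factor and the share of edges in the largest partition, which proxy communication volume and load balance respectively.

```python
# Compute simple partition-quality metrics for a vertex-cut style
# edge-to-partition assignment (toy example, not GraphX code).

def partition_metrics(partitions):
    """partitions: list of edge lists, one list per partition."""
    total_edges = sum(len(p) for p in partitions)
    vertex_parts = {}
    for i, part in enumerate(partitions):
        for src, dst in part:
            vertex_parts.setdefault(src, set()).add(i)
            vertex_parts.setdefault(dst, set()).add(i)
    replicas = sum(len(s) for s in vertex_parts.values())
    return {
        # average copies per vertex: proxies communication volume
        "replication_factor": replicas / len(vertex_parts),
        # load balance: share of edges in the largest partition
        "largest_part_edge_ratio":
            max(len(p) for p in partitions) / total_edges,
    }

m = partition_metrics([[(0, 1), (1, 2), (2, 0)], [(2, 3), (3, 4)]])
print(m["largest_part_edge_ratio"])  # 0.6
```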
This work aims to improve understanding of the impact partitioning
strategies have on the performance of each graph algorithm and
dataset, to help recover some of the performance lost to the
generalizations graph analytics frameworks are forced to do to provide
high-level abstractions. We do that in GraphX~\cite{graphx}, a graph
analytics framework that is part of Apache Spark~\cite{spark}, a
popular open-source analytics engine\footnote{We selected GraphX/Spark
as these have currently the most active communities in terms of
repository commits on github.com and technical discussions on
stackoverflow.com.}. To do that, we use a set of five partitioning
metrics to measure and compare the partitioning algorithms available
in GraphX, together with two partitioning algorithms we propose, on
six large graphs. Moreover, we use a set of four well-known graph
algorithms to evaluate the impact of partitioning on performance, and
also to understand the predictive quality of the metrics used to
compare partitioning algorithms.
Overall, the contributions of this paper are:
\begin{itemize}
\item We systematically evaluate a set of partitioning algorithms
using a wide set of metrics on a set of social graphs over a set
of four very different algorithms, implemented in GraphX.
\item We propose two new hash partitioning algorithms that optimize a
different combination of metrics compared to the existing vertex
cut strategies implemented in GraphX.
\item We show which partitioning metric correlates with performance
  for each graph algorithm and dataset, and
\item We show that the impact of partitioning depends on: (i) the
  number of partitions, (ii) the application's operations, and
  (iii) the properties of the graph.
\end{itemize}
We believe that our conclusions will help analytics experts optimize
their analytics pipelines to the dataset and extract much of the
performance lost to high-level abstractions, without having to resort
to custom implementations.
\section{Datasets}
\begin{figure}[t]
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/indegree_trim.png}
\end{minipage}
\hfill
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/outdegree_trim.png}
\end{minipage}
\hfill
\begin{minipage}{\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/legend.png}
\end{minipage}
\caption{In-degree and Out-degree distribution of graph datasets.}
\label{fig:graph:degrees}
\end{figure}
We use a set of six social graphs, together with three road networks,
to study the effect of partitioning on social network analytics.
Seven of these datasets were obtained from the SNAP collection~\cite{snap}.
Namely, %
\emph{YouTube} is a connected part of the YouTube social graph that
includes several communities~\cite{youtubedata},
\emph{Pocek} is a connected and anonymized part of Pocek, an
on-line social network in Slovakia~\cite{pocekdata},
\emph{Orkut} is a connected part of the Orkut free on-line social
network~\cite{youtubedata},
\emph{socLiveJournal} is a sample of the livejournal.com
graph~\cite{backstrom2006group}, and
\emph{RoadNet-CA}, \emph{RoadNet-PA} and \emph{RoadNet-TX} are the
road networks of California, Pennsylvania and Texas,
respectively~\cite{road}.
The \emph{follow-jul} and \emph{follow-dec} datasets are parts of the
twitter.com follow graph that we crawled using the twitter API
starting July 2016, and up to July 2017 and December 2017
respectively; and that we have anonymized by hashing user IDs. The
first dataset is a subset of the second, and both include friend and
follower relations of any users that have published tweets in the
Greek language during the corresponding time period.
\begin{table}[t]
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
Dataset & Vertices & Edges & Symm & ZeroIn\% & ZeroOut\% & Triangles & Conn.Comp. & Diameter & Size \\
\hline
\hline
RoadNet-PA & 1.0M & 3.0M & 100.00 & 0.00 & 0.00 & 67.1K & 1052 & $\infty$ &83.7M \\ \hline
YouTube & 1.1M & 2.9M & 100.00 & 0.00 & 0.00 & 3.0M & 1 & 20 &74.0M \\ \hline
RoadNet-TX & 1.3M & 3.8M & 100.00 & 0.00 & 0.00 & 82.8K & 1766 & $\infty$ &56.5M \\ \hline
Pocek & 1.6M & 30.6M & 54.34 & 6.94 & 12.25 & 32.5M & 1 & 11 & 404M \\ \hline
RoadNet-CA & 1.9M & 5.5M & 100.00 & 0.00 & 0.00 & 120.6K & 1052 & $\infty$ &83.7M \\ \hline
Orkut & 3.0M & 117.1M & 100.00 & 0.00 & 0.00 & 627.5M & 1 & 9 & 3.3G \\ \hline
socLiveJournal & 4.8M & 68.9M & 75.03 & 7.39 & 11.12 & 285.7M & 1,876 & $\infty$ & 1.0G \\ \hline
follow-jul & 17.1M & 136.7M & 37.57 & 46.94 & 25.65 & 4.8B & 52 & $\infty$ & 2.7G \\ \hline
follow-dec & 26.3M & 204.9M & 37.57 & 55.05 & 18.34 & 7.6B & 47 & $\infty$ & 4.1G \\ \hline
\end{tabular}
}
\end{center}
\caption{Characterization of datasets.}
\label{tab:datasets}
\end{table}
Table~\ref{tab:datasets} shows some representative characteristics of
the datasets, as reported in the corresponding publications, or, when
missing, as we measured them using GraphX.
The first column shows the name of each dataset, ordered by the number
of vertices.
The second and third columns show the size of each graph, its number
of vertices and edges, respectively.
The fourth column shows edge symmetry, i.e., the percentage of edges
that were reciprocated. YouTube, Orkut and the three RoadNet graphs
are undirected, hence their symmetry is by definition 100\%. A
high degree of symmetry in a social network affects the structure of
the network, resulting in increased connectivity and reduced diameter
of the graph.
The fifth and sixth columns show the percentage of vertices that have
no incoming or outgoing edges, respectively. Such ``leaf'' vertices
often occur when sampling a larger graph using forest-fire crawling.
The seventh column shows the total number of triangles in each
network. The number of triangles is commonly used to assess the
density of the network. As such, we expect datasets with higher
triangle counts to incur higher communication costs in BSP
computations.
The eighth column shows the number of connected components in each
graph. For directed graphs, we measured connected components using
the strongly connected components algorithm implemented in GraphX.
The ninth column shows the diameter of each graph, i.e., its longest
shortest path. The diameters of the YouTube, Pocek and Orkut graphs
are very short, as these are dense, connected networks. The
remaining datasets have infinite diameter, as those graphs include
more than one connected component.
The last column shows the size of each graph dataset on disk.
To further characterize and understand each dataset,
Figure~\ref{fig:graph:degrees} shows the distribution of in-degree and
out-degree for each graph. Although all datasets exhibit fat-tailed
distributions of both in-degree and out-degree, not all seem to be
power-law distributions.
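For reference, degree distributions like those plotted above can be derived directly from a raw edge list, as in the following illustrative Python sketch (our own simplification, not the measurement code used for the figures):

```python
from collections import Counter

def degree_distributions(edges):
    # Build the in- and out-degree distributions for a directed edge
    # list: each returned Counter maps degree -> number of vertices
    # that have that degree.
    out_deg, in_deg = Counter(), Counter()
    for src, dst in edges:
        out_deg[src] += 1
        in_deg[dst] += 1
    return Counter(out_deg.values()), Counter(in_deg.values())
```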
Moreover, vertices in social graphs that have many outgoing links tend
to have many incoming links. Not all networks, however, exhibit this
pattern to the same degree. We compare the graphs in this regard, by
computing the distribution of the ratio of out-degree to in-degree
over all vertices. Figure~\ref{fig:outToInDegrees} shows the
cumulative distribution function for this ratio. Note that for
the undirected graphs all vertices have a ratio of 1. All
directed graphs exhibit the pattern where most users have a very
similar number of in- and out-edges, although the socLiveJournal
graph has the least number of ``superstar'' users compared to the
rest. The follow-jul and follow-dec datasets, both being parts of
the same Twitter social graph, have the largest percentage of
``superstar'' nodes compared to the other networks.
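The per-vertex ratio and its empirical CDF can be computed directly from a directed edge list; the following Python sketch (illustrative only) skips vertices with zero in-degree to avoid division by zero:

```python
from collections import Counter

def out_to_in_ratios(edges):
    # Per-vertex out-degree / in-degree ratio for a directed edge list.
    out_deg, in_deg = Counter(), Counter()
    for src, dst in edges:
        out_deg[src] += 1
        in_deg[dst] += 1
    return sorted(out_deg[v] / in_deg[v]
                  for v in set(out_deg) | set(in_deg) if in_deg[v] > 0)

def cdf(sorted_ratios, x):
    # Empirical CDF: fraction of vertices whose ratio is <= x.
    return sum(1 for v in sorted_ratios if v <= x) / len(sorted_ratios)
```

For an undirected graph stored with both edge directions, every ratio is 1 and the CDF is a step function at 1, matching the figure.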
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{plots/cdfPlotDegrees.png}
\caption{The CDF of the out-degree to in-degree ratio over all vertices.}
\label{fig:outToInDegrees}
\end{figure}
\section{Graph Partitioning}
GraphX is a graph analytics framework built on Apache Spark. Spark
uses the abstraction of Resilient Distributed Datasets (RDDs) to
structure, sequence, and schedule MapReduce computations over
partitioned data. GraphX uses RDD partitions to store graph
partitions in its own representation, and maps computations expressed
in the Bulk-Synchronous Parallel (BSP) programming model to
lower-level RDD MapReduce operations.
GraphX uses vertex cut partitioning; it first distributes graph edges
into RDD partitions, and then builds a graph partition representation,
local to each RDD partition, containing local and replicated vertices
as well as metadata describing all necessary communication to
implement a BSP computation. GraphX includes four ways to initially
partition the edge list that represents each GraphX graph, which
result into four different vertex cut strategies. In addition to
these, we develop and use two additional partitioning strategies
aiming to explore the design space and optimize for different metrics.
Specifically, we use the following GraphX partitioners:
\\
\textbf{Random Vertex Cut (RVC)} assigns edges to partitions by
hashing together the source and destination vertex IDs, resulting in a
random vertex cut that collocates all same-direction edges between two
vertices.
\\
\textbf{Edge Partition 1D (1D)} assigns edges to partitions by hashing
the source vertex ID. This causes all edges with the same source vertex
to be collocated in the same partition.
\\
\textbf{Edge Partition 2D (2D)} arranges all partitions into a square
matrix and picks the column on the basis of the source vertex's hash
and the row on the basis of the destination vertex's hash. This
strategy guarantees a $2\sqrt{N}$ upper bound on vertex replication,
where $N$ is the number of partitions. Moreover, this strategy works
best if the number of partitions is a perfect square; if not, the
algorithm uses the next larger perfect square and potentially creates
imbalanced partitions.
\\
\textbf{Canonical Random Vertex Cut (CRVC)} assigns edges to
partitions by hashing the source and destination vertex IDs in a
canonical direction, resulting in a random vertex cut that collocates
all edges between two vertices, regardless of direction. For example
$(u, v)$ and $(v, u)$ hash to the same partition in Canonical Random
Vertex Cut but not necessarily under RVC.
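To make the hash-based assignment concrete, the following Python sketch mimics the four GraphX strategies described above. GraphX implements these in Scala; the function names and hash mixing here are our own simplifications:

```python
import math

def random_vertex_cut(src, dst, n):
    # RVC: hash source and destination together, collocating all
    # same-direction edges between a vertex pair.
    return hash((src, dst)) % n

def edge_partition_1d(src, dst, n):
    # 1D: hash only the source vertex ID, so all out-edges of a
    # vertex land in the same partition.
    return hash(src) % n

def edge_partition_2d(src, dst, n):
    # 2D: arrange partitions in a (roughly) square grid; pick the
    # column from the source hash and the row from the destination hash.
    side = math.ceil(math.sqrt(n))
    col, row = hash(src) % side, hash(dst) % side
    return (col * side + row) % n

def canonical_random_vertex_cut(src, dst, n):
    # CRVC: hash the endpoints in canonical order, so (u, v) and
    # (v, u) always land in the same partition.
    return hash((min(src, dst), max(src, dst))) % n
```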
To explore the design space further, we designed and implemented two
additional partitioning algorithms by changing some of the assumptions
in GraphX partitioners:
\\
\textbf{Source Cut (SC):} assigns edges to partitions by simple modulo
on the source vertex ID. This is almost equivalent to the 1D
partitioner, but assumes that vertex IDs may capture a notion of
locality. We expected this partitioner to result in less balanced
partitions, as the hashing function in 1D achieves a more uniform
distribution, but in cases where vertex ID similarity captures
locality, we expected this partitioner to take advantage of it.
\\
\textbf{Destination Cut (DC):} assigns edges to partitions by simple
modulo on only the destination vertex IDs. This is similar to SC
except it uses the vertex ID of the edge destination to assign edges
to partitions. As in SC, we expect any correlation between vertex IDs
and locality to be captured, at the expense of load-balancing.
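The two proposed modulo-based partitioners can be sketched in a few lines of Python (illustrative only; our actual implementations live inside GraphX in Scala):

```python
def source_cut(src, dst, n):
    # SC: plain modulo on the source vertex ID -- preserves any
    # locality encoded in nearby IDs, at the cost of balance.
    return src % n

def destination_cut(src, dst, n):
    # DC: plain modulo on the destination vertex ID.
    return dst % n
```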
\subsection{Characterization Metrics}
For each of these partitioners we measure a set of metrics that
capture the properties of the partitioning, and help understand how a
partitioner works on each different dataset. We use a set of standard
metrics that have been used to predict performance~\cite{metrics},
augmenting the set with the standard deviation of the edge-partition
sizes, the number of vertices that are replicated in other partitions,
and the number of vertices that are not replicated and only reside in
a single partition. Note that even though GraphX uses vertex cut
partitioning, it does not store solely edges in each partition; it
instead reconstructs the vertices per edge partition, and finally
creates a data structure that includes the partition's vertex list.
\\
\textbf{Balance} aims to capture how balanced partition sizes are, and
is equal to the ratio of the number of edges in the biggest partition,
over the average number of edges per partition.
\\
\textbf{Non-Cut} describes the number of vertices that are not
replicated among partitions and reside in a single partition.
\\
\textbf{Cut} is the number of vertices that exist in more than one
partition, irrespective of how many copies of each cut vertex there
are. In essence, these are the vertices that are copied across
partitions; the total number of their copies forms the Communication
Cost metric.
\\
\textbf{Communication Cost} (CommCost) aims to approximate the
communication cost incurred by the partitioning for a hypothetical
Bulk-Synchronous Parallel computation that stores a fixed-sized state
on all vertices. It is equal to the total number of copies of
replicated vertices that exist in more than one partition, as this is
the number of messages that need to be exchanged on every superstep to
agree on the state stored in these vertices.
\\
\textbf{Edge Partition Standard Deviation} (PartStDev) is the standard
deviation of the number of edges per partition. Similarly to Balance,
it constitutes a measure of imbalance in the edge partitions. Note,
however, that imbalance in edge partitions does not necessarily mean
imbalanced usage of memory, as the final partitions also hold the
vertices of all included edges.
Note that some of the metrics are related, since the sum of
Communication Cost plus Non-Cut Vertices is always equal to the sum of
Vertices to Same plus Vertices to Other. This is because they
correspond to different breakdowns of the total number of vertex
replicas that exist over all partitions: one is based on the existence
of referring edges in the same partition, and the other is based on
the non-existence of referring edges in other partitions.
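To illustrate how these metrics follow from a partitioned edge list, the sketch below computes Balance, Cut, Non-Cut, CommCost and PartStDev for a toy edge list given an edge-to-partition function (our own simplified formulation, not the measurement code used in the paper):

```python
from collections import defaultdict

def partition_metrics(edges, part_of, n):
    # edges: list of (src, dst); part_of(src, dst, n) -> partition id.
    edges_per_part = [0] * n
    parts_of_vertex = defaultdict(set)  # vertex -> partitions it appears in
    for src, dst in edges:
        p = part_of(src, dst, n)
        edges_per_part[p] += 1
        parts_of_vertex[src].add(p)
        parts_of_vertex[dst].add(p)
    avg = sum(edges_per_part) / n
    balance = max(edges_per_part) / avg                 # Balance
    cut = sum(1 for ps in parts_of_vertex.values() if len(ps) > 1)
    non_cut = len(parts_of_vertex) - cut                # Non-Cut
    comm_cost = sum(len(ps) for ps in parts_of_vertex.values()
                    if len(ps) > 1)                     # CommCost
    var = sum((e - avg) ** 2 for e in edges_per_part) / n
    return balance, cut, non_cut, comm_cost, var ** 0.5  # PartStDev
```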
\subsection{Social Network Analytics Algorithms}
To measure the effect of the differences in partitioning presented
above, we ran the following graph algorithms in GraphX.
\\
\textbf{PageRank (PR)} computes the importance of websites within the
web graph, based on the shape of the graph around it.
\\
\textbf{Connected Components (CC)} computes the number of connected
components of the graph, labeling each connected component with the
ID of its lowest-numbered vertex.
\\
\textbf{Triangles (TR)} computes the number of triangles passing
through each vertex and sums to find the number for the whole graph.
\\
\textbf{Shortest Path (SSSP)} computes shortest paths to a given set
of landmark vertices and returns a graph where each vertex attribute
is a map containing the shortest-path distance to each reachable
landmark.
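As a serial illustration of the label-propagation idea behind CC (not the distributed GraphX implementation), each round below plays the role of one BSP superstep in which vertices adopt the smallest label seen among themselves and their neighbours:

```python
def connected_components(vertices, edges, max_iters=100):
    # Start with each vertex labeled by its own ID; repeatedly
    # propagate the minimum label across each (undirected) edge
    # until no label changes.
    label = {v: v for v in vertices}
    for _ in range(max_iters):
        changed = False
        for u, v in edges:
            low = min(label[u], label[v])
            if label[u] != low or label[v] != low:
                label[u] = label[v] = low
                changed = True
        if not changed:
            break
    return label
```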
\section{Evaluation}
We ran all experiments on a cluster of 5 Intel(R) Xeon(R) E5-2630 CPUs
with 256GB of main memory, configured as 1 Spark Driver and 4 Spark
Executors. Each Executor used 220GB of memory and 32 cores, resulting
into 128 total cores. For algorithms that iterate either a fixed number
of times or to a fixpoint, namely PageRank and Connected Components, we
ran each experiment for 10 iterations. All times reported are the average
of 5 runs for each of two granularity configurations: Configuration
(i) uses 128 partitions and (ii) uses 256 partitions.
We measured all metrics for all configurations, datasets and
partitioners, and computed their correlation with running
time\footnote{Due to space constraints, we refer the reader to the
accompanying Technical Report for a presentation of the results on all
metrics, and focus only on the most important findings in this
paper.}.
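The correlation figures reported below are standard Pearson coefficients between a partitioning metric (e.g., CommCost) and the measured running times; for completeness, a self-contained sketch:

```python
def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length
    # numeric sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)
```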
\begin{figure}
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/pagerank_128.png}
Configuration (i)
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/pagerank_256.png}
Configuration (ii)
\end{minipage}
\hfill
\begin{minipage}{\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/eval_legend.png}
Legend (omitted from subsequent Figures)
\end{minipage}
\caption{Correlation between execution time and communication cost for PageRank}
\label{fig:pagerank}
\end{figure}
Figure~\ref{fig:pagerank} shows the correlation between execution time
and the communication cost for PageRank. As expected, we found the
Communication Cost metric to be consistently the most important
predictor of execution time for algorithms similar to PageRank, where
computation per node is small compared to the exchanged messages, with
correlation coefficients of 95\% and 96\% for partitionings (i) and
(ii) respectively. For PageRank, finer grain partitioning increases
the execution time even for the largest dataset, as the algorithm is
communication bound.
We found that the number of partitions affects not only the execution
time but also the optimal partition strategy. For the coarse-grain
configuration (i), the best partition strategy for follow-jul,
follow-dec and YouTube is 2D, while for all others it is DC. For the
fine-grain configuration (ii), the best partition strategy for YouTube,
RoadNet-PA and RoadNet-TX is DC, and for all others it is 2D. In
general, we found it best to opt for DC for smaller datasets and 2D for
large datasets; even in the one exception to this rule, YouTube, the
two partitioners differed only very slightly. Both of these
partitioners aim to optimize for communication cost, with 2D achieving
better locality on large datasets.
\begin{figure}[t]
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/cc_128.png}
Configuration (i)
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/cc_256.png}
Configuration (ii)
\end{minipage}
\caption{Correlation between execution time and communication cost for Connected Components}
\label{fig:cc}
\end{figure}
Figure~\ref{fig:cc} shows the correlation between execution time and
communication cost for Connected Components. CC is a
label-propagation algorithm where the values of most vertices converge
very fast and will not be updated in subsequent iterations. As in
PageRank, Communication Cost is the best performance predictor for
both configurations, correlated at 92\% and 94\% respectively. In
comparison to PageRank, however, the CC algorithm is less consistently
dominated by Communication Cost: after a few iterations the algorithm
converges for most vertices, allowing the fine-grain configuration (ii)
to perform better than the coarse-grained configuration (i) for all but
the smallest datasets.
For configuration (i), the best partition strategy for follow-jul,
follow-dec, Orkut and socLiveJournal is 2D. For the five smaller
datasets, Pocek, YouTube, RoadNet-CA, RoadNet-PA and RoadNet-TX the
best partition strategy is 1D, although the difference is in the
noise. On configuration (ii), the best partition strategy for all
datasets is 2D. Execution time from configuration (i) to (ii)
decreases by up to 22\% on the bigger datasets, because after the first
few iterations not all vertices need to be revisited. This results in
partitions of similar size being load-imbalanced with respect to
running time, causing the fine-grain configuration (ii) to perform
better.
\begin{figure}[t]
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/triangles_128.png}
Configuration (i)
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/triangles_256.png}
Configuration (ii)
\end{minipage}
\caption{Correlation between Execution Time and Cut-Vertices for Triangle Count}
\label{fig:triangles}
\end{figure}
Figure~\ref{fig:triangles} shows the correlation between execution
time and the number of Cut vertices for Triangle Count (TR). TR
performs much more computation per node compared to PageRank and CC,
and much less communication. We found that the partitioning metric
most correlated with execution time in this case is the number of
vertices replicated across more than one partition, as these incur
additional reductions to communicate per-vertex state in GraphX (and
all Pregel-like systems). Correlation to execution time was 95\% and
97\% for configurations (i) and (ii), respectively. Interestingly,
the Communication Cost metric correlation coefficient is only 43\% and
34\%, respectively.
For configuration (i), the best partitioning strategy for follow-jul,
follow-dec, RoadNet-PA and RoadNet-TX is DC, for Orkut and YouTube it
is SC, for Pocek and RoadNet-CA it is 2D and for socLiveJournal it is
CRVC. As seen from Figure~\ref{fig:triangles}, however, none of the
partitioners manages to optimize for this metric much better than the
rest, resulting in performance differences of 5-10\% between the best
and worst partitioners in all datasets except the smallest. For
configuration (ii), the best partitioning strategy for follow-jul and
YouTube is 2D, for follow-dec, Orkut, RoadNet-CA, RoadNet-PA,
RoadNet-TX, and socLiveJournal it is CRVC, and for Pocek it is DC,
with most differences being in the noise.
With respect to granularity, fine-grain configuration (ii) outperforms
configuration (i) consistently, by up to 40\% for Orkut and 20\% for
follow-dec.
\begin{figure}
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/spath_128.png}
Configuration (i)
\end{minipage}
\hfill
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/spath_256.png}
Configuration (ii)
\end{minipage}
\caption{Correlation between Execution Time and Communication Cost for SSSP}
\label{fig:spath}
\end{figure}
Because the SSSP algorithm is highly sensitive to the single vertex
selected as the shortest-path source, we randomly selected 5 source
vertices in each dataset and measured all partitioners for each of
the source vertices. This way we aim to average over both a number
of runs and a number of possible source vertices, as the algorithm
may be highly sensitive to the shape and density of each dataset as
well as to the source vertex.
Figure~\ref{fig:spath} shows the correlation between execution time
and Communication Cost for SSSP. As with PageRank, SSSP is highly
communication bound, and may require a number of iterations related to
the graph diameter to finish. The correlation to Communication Cost
for configurations (i) and (ii) is 80\% and 86\% respectively. On
configuration (i), the best partition strategy is 2D for follow-jul,
follow-dec, Orkut and socLiveJournal, for Pocek it is 1D, and for
YouTube it is SC. On configuration (ii), the best partition strategy
is 2D for follow-jul, follow-dec, Orkut and YouTube, and for Pocek and
socLiveJournal it is 1D. Granularity does not seem to affect the
execution time consistently in the case of SSSP.
The road network datasets (RoadNet-CA, RoadNet-PA and RoadNet-TX) are
not shown in the plot, as Spark did not complete SSSP on them due to
out-of-memory errors.
Due to the setup of this experiment, the SSSP measurements exhibit a
much greater variance compared to the rest of the algorithms. This is
because the average-of-5 results depicted correspond to 5 different
source vertices, not to 5 exactly identical executions as in the other
algorithms.
Overall, we found that algorithms whose complexity is mostly related
to the number of edges should prefer partitioners that optimize the
Communication Cost metric; for algorithms whose complexity mostly
depends on the number of vertices, partitioners should be compared
using the Cut Vertices metric, as a better approximation of the
communication overhead of each Pregel/BSP superstep.
We performed two additional experiments to better understand the
impact of actual communication costs on GraphX, by changing the
physical network and storage resources available in our Spark cluster.
We ran the two additional experiments based on the configuration (ii).
Specifically, we ran the PageRank algorithm on the largest dataset,
follow-dec, after upgrading the network infrastructure to 40Gbps,
compared to 1Gbps above. We set up two new configurations:
configuration (iii) uses HDFS on Hard Disk Drives for storage, as in
configuration (ii); and configuration (iv) uses local Solid State
Drives on every executor machine. On each configuration we take the
average of five different runs and compare with the configuration (ii)
average. We found that execution time is on average 15\% less for
configuration (iii) and 20\% less for configuration (iv). This shows
that selecting a good partitioner has a bigger impact on performance
for better infrastructure.
\section{Related Work}
Apache GraphX implements the Bulk Synchronous Processing (BSP)
model~\cite{BSPmodel} for graph processing. BSP was first used for
large-scale graph analytics in Pregel~\cite{pregelSystem}, introduced
by Malewicz et al. Apart from GraphX, there are several other
Pregel-like systems for graph analytics, such as Giraph,
GPS~\cite{GPS}, Mizan~\cite{mizan} and GraphLab. As these graph
analytics frameworks aim to be generic and support any algorithm, they
are often forced to generalize their design over all graph
computations, resulting in sub-optimal performance compared to a
hand-crafted implementation of each algorithm. Satish et
al.~\cite{satish} demonstrate a huge performance gap between all the
state-of-the-art graph processing frameworks and the best hand-crafted
implementation, which they call the ``ninja gap.''
A lot of related work focuses on the performance comparison of these
systems, without however producing a common consensus. Han et al.
provide an experimental comparison of Pregel-like
systems~\cite{comparisonOfPregelLikeSystems}, covering Giraph, GPS,
Mizan and GraphLab. They conclude that Giraph and GraphLab have better
performance than the other two frameworks. Ching et
al.~\cite{oneTrillionEdges} provide a comparison between GraphX and a
custom version of Giraph, and found that Apache Giraph does not
outperform GraphX. Verma et al.~\cite{comparisonPartStrat} present an
experimental comparison between partition strategies for distributed
graph processing on different platforms: PowerGraph, PowerLyra and
GraphX. Through their experiments they show that no single
partitioning strategy is best in all situations, and they give a
heuristic guide for selecting a partitioning algorithm for PowerGraph
and PowerLyra. We improve on these findings by studying the
effect on much larger graphs, the effect of partitioning granularity,
and explained the performance difference observed in terms of
edge-based or vertex-based communication metrics.
There are several approaches to graph partitioning in the literature,
aiming to optimize the performance of graph processing frameworks.
Fennel~\cite{fennel} is a one-pass streaming graph partitioning
algorithm achieving significant performance improvements over previous
implementations. Stanton et al.~\cite{streamingGraphPartitioning}
present a streaming graph partitioner that eliminates the communication
cost of partitioning by assigning edges as the graph is loaded. Karypis
and Kumar~\cite{karypis1998fast} have proposed hierarchical
partitioning similar to clustering and community detection
computations, to optimize communication costs.
\section{Conclusions}
Graph Analytics computations are complex and highly dependent on the
properties of each specific dataset. Many such applications use
standard analytics runtimes, optimized for the general case, even
though social network datasets and computations have very particular
characteristics. In this work we investigate how a computation can be
better optimized for social network datasets by tailoring the
partitioning strategy to the dataset and to the computation.
We measure the effect of partitioning on performance for many
datasets, partitioning strategies, and analytics algorithms. Over all
partitioners, we found that communication cost dominates performance
in all cases except for algorithms that keep a lot of per-vertex state
and perform heavy per-vertex computation, such as Triangle Count.
We show that granularity plays a significant role in performance, and
provide heuristics as to selecting the partitioning granularity based
on the dataset size and algorithm characteristics.
\bibliographystyle{abbrv}
\section{Introduction}\label{S1}
Recently, reconfigurable intelligent surface (RIS) has been proposed to improve the capacity of the wireless communication system~\cite{ZhangEfficiency}. Specifically, RIS consists of a large number of reconfigurable elements (e.g., 256), which can be deployed between the base station (BS) and the user to establish an extra reflecting link. By properly reconfiguring the RIS elements, RIS can provide high reflecting beamforming gain with low cost and low power consumption~\cite{RuiZhang19Beamforming}. Reliable RIS reflecting beamforming requires accurate channel state information (CSI)~\cite{JunPrecoding,Zijian}. However, due to the large number of RIS elements, CSI acquisition is challenging for the RIS assisted system~\cite{CETutorial}.
There are two typical categories of methods for CSI acquisition, namely explicit CSI acquisition (i.e., channel estimation) and implicit CSI acquisition (i.e., beam training). For the first category, the BS sends pilot signals to the user through the RIS, and the user directly estimates the channel based on the received pilot signals~\cite{PartI}. Since the RIS element is usually passive, only the cascaded channel, i.e., the cascade of the channel from the BS to the RIS and the channel from the RIS to the user, can be estimated by the least squares (LS)~\cite{Power'1} or minimum mean square error (MMSE) algorithm~\cite{Nadeem20DFT}. Estimating this high-dimensional cascaded channel leads to unaffordable pilot overhead in the RIS assisted system. In order to solve this problem, two types of low-overhead cascaded channel estimation schemes have been proposed~\cite{PartI}. On the one hand, some compressive sensing (CS) algorithms can be used to estimate the high-dimensional cascaded channel by leveraging the sparsity of the angular cascaded channel~\cite{JunCS,PartII}. On the other hand, the multi-user correlation is exploited to reduce the pilot overhead by considering that all users communicate with the BS via the same RIS~\cite{Wang20Correlation}. However, for this category of methods, since the RIS cannot perform reliable reflecting beamforming before channel estimation, the received signal-to-noise ratio (SNR) is usually low. It is difficult for channel estimation to achieve satisfactory accuracy with such a low received SNR.
The second category is beam training, where CSI can be obtained by estimating the physical directions of channel paths instead of the entire channel. This beam training method has been widely considered in the existing 5G system, especially at millimeter-wave frequencies~\cite{5GBT,5GCodebook}. Specifically, the BS and the user perform the training procedure through multiple directional beams (codewords) predefined in the codebook to search for the optimal directional beam. After beam training, the physical directions of channel paths can be effectively obtained~\cite{XinyuBS}. Compared with channel estimation, beam training can directly achieve reliable beamforming through the training procedure, which avoids estimating the entire channel at low received SNR. Recently, the beam training method has been extended to the RIS assisted system for CSI acquisition~\cite{RISBT,JunBT,DNBT}. The basic idea is that, based on the cascaded array steering vector of the RIS cascaded channel, the codebook consisting of multiple RIS directional beams is first designed, and then the training procedure between the RIS and the user is performed to search for the optimal RIS directional beam~\cite{RISBT}. By considering that the cascaded array steering vector is mainly determined by the angle differences at the RIS, the partial search based beam training scheme was further proposed to reduce the search complexity~\cite{JunBT}.
However, the existing codebook and beam training schemes may not be applicable any more with the increasing number of RIS elements. Specifically, the RIS assisted system is faced with the ``multiplicative fading'' effect~\cite{VincentFading,PathLoss,RenzoRIS}, where the equivalent path loss of the BS-RIS-user reflecting link is the product of (instead of the sum of) the path losses of the BS-RIS link and RIS-user link. Thanks to low cost and low power consumption, more and more RIS elements are expected to be deployed to compensate for the severe path loss~\cite{PathLoss}. RIS is thus likely to develop into extremely large-scale RIS (XL-RIS) for future 6G communications, which will fundamentally transform the structure of the electromagnetic radiation field~\cite{Mingyao}. The electromagnetic radiation field can be divided into a far-field region and a near-field region~\cite{RayDistance}, corresponding to the far-field channel model and the near-field channel model, respectively. The boundary of these two fields is determined by the Rayleigh distance~\cite{RayDistance}, which is proportional to the square of the array aperture. In the RIS assisted system, the array aperture is not very large, and the Rayleigh distance is small. The scatterers are generally assumed to be in the far-field region of the RIS. The existing codebook for beam training is designed based on the far-field channel model~\cite{RISBT,JunBT,DNBT}. With the increasing number of RIS elements from RIS to XL-RIS (e.g., from 256 to 1024), the array aperture of XL-RIS is very large, and the Rayleigh distance increases accordingly~\cite{Mingyao}. The scatterers are more likely to be in the near-field region of XL-RIS, and the near-field channel model should be considered in the XL-RIS assisted system. The existing far-field codebook no longer matches the near-field channel model. Thus, the corresponding far-field beam training will cause severe performance loss in XL-RIS assisted near-field communications.
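For concreteness, the boundary between the two regions is commonly taken at the Rayleigh distance $Z = 2D^2/\lambda$, where $D$ denotes the array aperture and $\lambda$ the carrier wavelength; enlarging the aperture of XL-RIS therefore quadratically extends its near-field region.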
Unfortunately, this important problem has not been studied in the literature.
To fill this gap, we propose efficient near-field beam training schemes by designing near-field codebooks to match the near-field channel model in this paper\footnote{Simulation codes will be provided in the following link to reproduce the results presented in this paper after publication: http://oa.ee.tsinghua.edu.cn/dailinglong/publications/publications.html.}. Our contributions are summarized as follows.
\begin{enumerate}
\item We design a near-field codebook to match the near-field channel model, and then propose the corresponding near-field beam training scheme for XL-RIS. Specifically, by considering the near-field cascaded array steering vector of the XL-RIS cascaded channel, the near-field codebook is first designed, where each codeword is determined by a pair of sampled points in the $x\mbox{-}y\mbox{-}z$ coordinate system. Then, the optimal codeword for XL-RIS is obtained by an exhaustive training procedure between the XL-RIS and the user.
\item In order to reduce the beam training overhead, we further design a hierarchical near-field codebook and propose the corresponding hierarchical near-field beam training scheme for XL-RIS. Compared with the near-field codebook, the hierarchical near-field codebook consists of several levels of sub-codebooks, which are determined by different sampling ranges and sampling steps. During beam training, we search from the first-level sub-codebook to the last-level sub-codebook in turn, where the sampling ranges and sampling steps gradually become smaller. Finally, the globally optimal codeword is obtained in the last-level sub-codebook, which is associated with the smallest sampling ranges and sampling steps.
\end{enumerate}
The rest of the paper is organized as follows. In Section II, we first introduce the signal model, then review the existing far-field channel model and far-field codebook, and finally present the near-field channel model for the XL-RIS assisted system. In Section III, the near-field codebook is designed and the corresponding near-field beam training scheme is proposed; the hierarchical near-field codebook based beam training is then further proposed to reduce the beam training overhead. Simulation results and conclusions are provided in Section IV and Section V, respectively.
{\it Notation}: Lower-case and upper-case boldface letters ${\bf{a}}$ and ${\bf{A}}$ denote a vector and a matrix, respectively; ${{{\bf{a}}^*}}$ and ${{{\bf{a}}^H}}$ denote the conjugate and conjugate transpose of vector $\bf{a}$, respectively; ${{\|{\bf{a}}\|_2}}$ denotes the $l_2$ norm of vector $\bf{a}$; ${{{\bf{A}}^{H}}}$ denotes the conjugate transpose of matrix $\bf{A}$. ${\cal C}{\cal N}\left(\mu,\sigma^2 \right)$ denotes the circularly symmetric complex Gaussian distribution with mean $\mu$ and variance $\sigma^2$. Finally, ${\rm{diag}}\left({\bf{a}}\right)$ denotes the diagonal matrix with the vector $\bf{a}$ on its diagonal.
\vspace{-1mm}
\section{System Model}\label{S2}
In this section, we will first introduce the signal model of the XL-RIS assisted communication system. Then, the existing far-field channel model and the far-field codebook for beam training will be briefly reviewed. Finally, the near-field channel model for XL-RIS is presented.
\subsection{Signal Model}\label{S2.1}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.5\linewidth]{Fig1}
\end{center}
\setlength{\abovecaptionskip}{-0.3cm}
\caption{The XL-RIS assisted wireless communication system.} \label{HF}
\end{figure}
As shown in Fig. 1, an XL-RIS is deployed between a BS with an $M$-element antenna array and a single-antenna user to provide a reflecting link to assist communication, where the direct link between the BS and the user is blocked by obstacles~\cite{ChongweiCE,RISBT}. The XL-RIS, consisting of $N=N_1\times N_2$ elements, is placed in the $x\mbox{-}z$ plane, where the center of the XL-RIS is at the origin of the $x\mbox{-}y\mbox{-}z$ coordinate system.
Let ${\bf{G}}\in\mathbb{C}^{N\times M}$ denote the channel from the BS to the XL-RIS and ${\bf{h}}_{r}\in\mathbb{C}^{1\times N}$ denote the channel from the XL-RIS to the user. By considering the downlink transmission, the received signal $r$ at the user can be expressed by
\begin{equation}\label{eq1}
r = {\bf{h}}_{r}{\rm{diag}}\left({\bm{\theta}}\right){\bf{Gv}}s + n,
\end{equation}
where ${\bm{\theta}}=[\theta_{1},\cdots,\theta_{N}]^T\in\mathbb{C}^{N\times 1}$ is the reflecting beamforming vector at the XL-RIS with $\theta_{n}$ representing the reflecting coefficient of the $n$th RIS element $(n=1, \cdots,N)$, ${\bf{v}}\in\mathbb{C}^{M\times 1}$ represents the beamforming vector at the BS, $s$ represents the symbol transmitted by the BS, and $n\sim{\cal C}{\cal N}\left( {0,\sigma^2}\right)$ represents the received noise at the user with ${\sigma^2}$ representing the noise power.
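As a quick sanity check, the received-signal model in~(\ref{eq1}) can be simulated with NumPy. The toy dimensions, Rayleigh-fading channels, and noise level below are illustrative assumptions, not the simulation settings of this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 8    # toy sizes: BS antennas, RIS elements

# Rayleigh-fading placeholders for G (BS -> RIS) and h_r (RIS -> user)
G = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
h_r = (rng.standard_normal((1, N)) + 1j * rng.standard_normal((1, N))) / np.sqrt(2)

theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # unit-modulus reflecting coefficients
v = rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1))
v /= np.linalg.norm(v)                             # unit-norm BS beamforming vector
s = 1.0                                            # transmitted symbol
sigma = 0.01                                       # noise power sigma^2 = 1e-4
n = sigma * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)

# r = h_r diag(theta) G v s + n, a complex scalar
r = (h_r @ np.diag(theta) @ G @ v * s + n).item()
```

The identity ${\bf{h}}_r\,{\rm diag}({\bm\theta})\,{\bf G}{\bf v} = {\bm\theta}^T{\rm diag}({\bf{h}}_r)\,{\bf G}{\bf v}$, which the later cascaded-channel derivation relies on, can be checked numerically on this toy instance.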
To design the effective beamforming vectors ${\bf{v}}$ and ${\bm{\theta}}$, it is necessary to acquire accurate CSI~\cite{JunPrecoding,Zijian}. Since there is an extremely large number of RIS elements, channel estimation cannot achieve satisfactory accuracy with low pilot overhead. By contrast, beam training is a more effective way to acquire the CSI~\cite{RISBT,DNBT}. Specifically, since the XL-RIS channel is generally dominated by the main path (or a few paths), we only need to search for the physical direction of the main path by beam training instead of explicitly estimating the entire channel. Thus, in the following, only the main path is considered, and the corresponding beam training method will be investigated to find the optimal directional beam aligned with the main path.
Moreover, since the BS and the XL-RIS are generally deployed at fixed positions, the channel ${\bf{G}}$ from the BS to the XL-RIS has a much longer channel coherence time than the channel ${\bf{h}}_r$ from the XL-RIS to the user due to the user's mobility~\cite{RISBT}. For simplicity, we assume that the beamforming vector ${\bf{v}}$ at the BS has been aligned with the main path of the channel $\bf{G}$~\cite{RISBT}. In this paper, we only focus on the beam training at the XL-RIS. Next, we will briefly review the existing far-field channel model and the corresponding far-field codebook for beam training.
\subsection{Far-Field Channel Model and Far-Field Codebook}\label{S2.2}
Based on the far-field channel model, $\bf{G}$ and ${\bf{h}}_r$ can be respectively represented by
\begin{equation}\label{eq2}
{\bf{G}}_{\rm{far\mbox{-}field}}={\alpha_{G}}{\bf{a}}\left({\phi}_{G_r},{\psi}_{G_r}\right){\bf{b}}^T\left({\phi}_{G_t},{\psi}_{G_t}\right),
\end{equation}
\begin{equation}\label{eq3}
{\bf{h}}^r_{\rm{far\mbox{-}field}}={\alpha_{r}}{\bf{a}}^T\left({\phi}_{r},{\psi}_{r}\right),
\end{equation}
where ${\alpha_{G}}$ and ${\alpha_{r}}$ represent the path gains, ${\phi}_{G_r}$ and ${\psi}_{G_r}$ represent the spatial angles at the XL-RIS for the channel $\bf{G}$, ${\phi}_{G_t}$ and ${\psi}_{G_t}$ represent the spatial angles at the BS for the channel $\bf{G}$, and ${\phi}_{r}$ and ${\psi}_{r}$ represent the spatial angles at the XL-RIS for the channel ${\bf{h}}_r$. ${\bf{a}}(\phi, \psi)$ and ${\bf{b}}(\phi, \psi)$ represent the far-field array steering vectors associated with the XL-RIS and the BS, respectively. Taking ${\bf{a}}(\phi, \psi)$ as an example, it can be expressed as~\cite{PartII}
\begin{equation}\label{eq4}
{\bf{a}}\left(\phi,\psi \right) = {\left[ {{e^{ - j2{\pi}{\phi}{\bf{n}}_1}}}\right]}{\otimes}{\left[ {{e^{ - j2{\pi}{\psi} {\bf{n}}_2}}} \right]},
\end{equation}
where ${\bf{n}}_1=[0,1,\cdots,N_1-1]^T$ and ${\bf{n}}_2=[0,1,\cdots,N_2-1]^T$. $\phi = d_f{\rm{sin}}\left(\vartheta\right){\rm{cos}}\left(\upsilon\right)/{\lambda}$ and $\psi=d_f{\rm{sin}}\left(\upsilon\right)/{\lambda}$, where $\vartheta$ and $\upsilon$ respectively represent the physical angles in the azimuth and elevation, $\lambda$ is the carrier wavelength, and $d_f$ is the element spacing satisfying $d_f = \lambda/2$.
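For concreteness, the steering vector in~(\ref{eq4}) can be implemented directly as a Kronecker product of the two per-axis phase ramps; this is a minimal NumPy sketch, with arbitrary example angles:

```python
import numpy as np

def far_field_steering(phi, psi, N1, N2):
    """a(phi, psi) = exp(-j*2*pi*phi*n1) kron exp(-j*2*pi*psi*n2), as in eq. (4)."""
    n1 = np.arange(N1)  # n1 = [0, 1, ..., N1-1]
    n2 = np.arange(N2)  # n2 = [0, 1, ..., N2-1]
    return np.kron(np.exp(-2j * np.pi * phi * n1), np.exp(-2j * np.pi * psi * n2))

# example spatial angles (arbitrary illustrative values)
a = far_field_steering(0.3, -0.1, N1=4, N2=2)
```

With the Kronecker ordering, entry $n_1 N_2 + n_2$ carries the phase $-2\pi(\phi n_1 + \psi n_2)$, so the vector has $N_1 N_2$ unit-modulus entries.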
By considering that the beamforming vector $\bf{v}$ at the BS has been designed as ${\bf{v}}=\frac{{\bf{b}}^*}{\sqrt M}$, the received signal $r$ in~(\ref{eq1}) can be further represented by
\begin{equation}\label{eq5}
\begin{aligned}
r= & {\alpha}{\bf{a}}^T\left({\phi}_{r},{\psi}_{r}\right){\rm{diag}}\left({\bm{\theta}}\right){\bf{a}}\left({\phi}_{G_r},{\psi}_{G_r}\right){\bar s} + n
\\ = & {\alpha}{\bm{\theta}}^T{\rm{diag}}\left({\bf{a}}\left({\phi}_{r},{\psi}_{r}\right)\right){\bf{a}}\left({\phi}_{G_r},{\psi}_{G_r}\right){\bar s} + n
\\ = & {\alpha}{\bm{\theta}}^T{\bf{a}}\left({\phi}_{G_r}+{\phi}_{r},{\psi}_{G_r}+{\psi}_{r}\right){\bar s} + n
\\ = & {\bm{\theta}}^T{\bar{{\bf{h}}}}_{\rm{far\mbox{-}field}}{\bar s} + n,
\end{aligned}
\end{equation}
where ${\bar{{\bf{h}}}}_{\rm{far\mbox{-}field}}={\alpha}{\bf{a}}\left({\phi}_{G_r}+{\phi}_{r},{\psi}_{G_r}+{\psi}_{r}\right)$ denotes the far-field cascaded channel, $\alpha={\alpha}_G{\alpha}_r$ denotes the effective gain of ${\bar{{\bf{h}}}}_{\rm{far\mbox{-}field}}$, and $\bar s={\bf{b}}^T\left({\phi}_{G_t},{\psi}_{G_t}\right){\bf{v}}s$ denotes the effective transmitted symbol.
For beam training, the entire procedure can be divided into multiple time slots. In different time slots, the reflecting beamforming vector ${\bm{\theta}}$ is set as different codewords in a predefined codebook, which equivalently produces different directional beams. For each codeword, the user measures the strength of the received signal $r$, and finally feeds back the index of the optimal codeword. Based on the far-field array steering vector in~(\ref{eq4}), the existing far-field codebook ${\bf{F}}$ is generally designed as~\cite{RISBT}
\begin{equation}\label{eq6}
{\bf{F}}=\left[{\bf{a}}^*\left({\phi}_1,{\psi}_1\right),\cdots,{\bf{a}}^*\left({\phi}_1,{\psi}_{N_2}\right),\cdots,{\bf{a}}^*\left({\phi}_{N_1},{\psi}_1\right),\cdots,{\bf{a}}^*\left({\phi}_{N_1},{\psi}_{N_2}\right)\right],
\end{equation}
where ${\phi}_n={\frac{2n-N_1-1}{N_1}}$ with $n=1,2,\cdots,N_1$ and ${\psi}_n={\frac{2n-N_2-1}{N_2}}$ for ${n = 1,2, \cdots ,N_{2}}$. Each column of $\bf{F}$ represents a codeword for ${\bm{\theta}}$.
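A sketch of how $\bf{F}$ in~(\ref{eq6}) can be constructed from the sampled spatial angles ${\phi}_n$ and ${\psi}_n$ defined above; the toy dimensions $N_1=N_2=4$ are assumed only for illustration:

```python
import numpy as np

N1, N2 = 4, 4  # toy array dimensions

# sampled spatial angles phi_n = (2n - N1 - 1)/N1, psi_n = (2n - N2 - 1)/N2
phis = np.array([(2 * n - N1 - 1) / N1 for n in range(1, N1 + 1)])
psis = np.array([(2 * n - N2 - 1) / N2 for n in range(1, N2 + 1)])

def steering(phi, psi):
    """Far-field steering vector of eq. (4) for an N1 x N2 planar array."""
    return np.kron(np.exp(-2j * np.pi * phi * np.arange(N1)),
                   np.exp(-2j * np.pi * psi * np.arange(N2)))

# each column of F is a conjugated steering vector a*(phi_m, psi_n)
F = np.stack([np.conj(steering(p, q)) for p in phis for q in psis], axis=1)
```

The resulting codebook has $N_1 N_2$ unit-modulus codewords, one per sampled angle pair, matching the column ordering in~(\ref{eq6}).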
The existing beam training schemes are mainly based on the above far-field codebook~\cite{RISBT,JunBT,DNBT}. However, when the RIS develops into XL-RIS, the far-field codebook and beam training may no longer be applicable, which will be explained in detail in Section II-C.
\subsection{Near-Field Channel Model}\label{S2.3}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\linewidth]{FarNear}
\end{center}
\setlength{\abovecaptionskip}{-0.3cm}
\caption{The near-field region and the far-field region~\cite{Mingyao}.} \label{FN}
\end{figure}
Specifically, as shown in Fig. 2, the electromagnetic radiation field in wireless communications can be divided into the far-field region and the near-field region~\cite{RayDistance}, and the two regions result in different channel models. The boundary between these two regions is determined by the Rayleigh distance $Z=\frac{2D^2}{\lambda}$, where $D$ represents the array aperture. In the conventional RIS assisted system, since the array aperture of the RIS is not too large, the corresponding Rayleigh distance is small. The scatterers are in the far-field region of the RIS~\cite{JunPrecoding,ChongweiCE,RISBT}, where the RIS channel can be modeled under the planar wave assumption, as described in~(\ref{eq2}) and~(\ref{eq3}). With the increase of the array aperture in the XL-RIS assisted system, the corresponding Rayleigh distance also increases. For example, consider a carrier frequency of $30$ GHz, i.e., a carrier wavelength of $\lambda=0.01$ meters. When the array aperture of the RIS is $D=0.1$ meters, the Rayleigh distance is only $Z=2$ meters. When the array aperture of the XL-RIS increases to $D=1$ meter, the Rayleigh distance reaches $Z=200$ meters. Thus, in the XL-RIS assisted system, the scatterers are more likely to be in the near-field region, where the XL-RIS channel should be modeled under the spherical wave assumption. Next, we will introduce the near-field channel model of XL-RIS.
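The numerical example above is easy to reproduce from the Rayleigh-distance formula $Z = 2D^2/\lambda$:

```python
# Rayleigh distance Z = 2 D^2 / lambda, reproducing the numbers in the text
c = 3e8       # speed of light (m/s)
f = 30e9      # carrier frequency: 30 GHz
lam = c / f   # wavelength = 0.01 m

def rayleigh_distance(D, lam):
    """Far-field/near-field boundary for an array of aperture D (meters)."""
    return 2 * D**2 / lam

Z_ris = rayleigh_distance(0.1, lam)    # conventional RIS, D = 0.1 m -> 2 m
Z_xlris = rayleigh_distance(1.0, lam)  # XL-RIS, D = 1 m -> 200 m
```

Since $Z$ grows quadratically with the aperture, a tenfold aperture increase multiplies the near-field boundary by one hundred.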
For convenience of description, distances and coordinates are normalized by the carrier wavelength $\lambda$ in the rest of this paper~\cite{JinShi}. The vertical or horizontal distance between two adjacent RIS elements is set to $d$. The coordinate of the $(n_1,n_2)$-th RIS element can be represented as $((n_1-\frac{N_1+1}{2})d,0,(n_2-\frac{N_2+1}{2})d)$ in the $x\mbox{-}y\mbox{-}z$ coordinate system, where $n_1=1,\cdots,N_1$ and $n_2=1,\cdots,N_2$.
The near-field channel ${\bf{h}}^r_{\rm{near\mbox{-}field}}$ from the XL-RIS to the user can be represented by~\cite{JinShi}
\begin{equation}\label{eq7}
{\bf{h}}^r_{\rm{near\mbox{-}field}}={\alpha_{r}}{\bf{c}}^T\left(x_r,y_r,z_r\right),
\end{equation}
where $(x_r,y_r,z_r)$ represents the coordinate of the scatterer corresponding to the main path between the XL-RIS and the user. Compared with the far-field channel model in~(\ref{eq3}), the array steering vector ${\bf{c}}\left(x_r,y_r,z_r\right)$ for the near-field channel model is derived based on the spherical wave assumption, and can be represented by~\cite{JinShi}
\begin{equation}\label{eq8}
{\bf{c}}(x_r,y_r,z_r) = \left[e^{-j2\pi D^r(1,1)}, \cdots, e^{-j2\pi D^r(1,N_2)}, \cdots,e^{-j2\pi D^r(N_1,1)},\cdots,e^{-j2\pi D^r(N_1,N_2)}\right]^T,
\end{equation}
where $D^r(n_1,n_2) = \sqrt{(x_r-(n_1-\frac{N_1+1}{2})d)^2+y_r^2+(z_r-(n_2-\frac{N_2+1}{2})d)^2}$ represents the distance from the $(n_1,n_2)$-th RIS element to $(x_r,y_r,z_r)$.
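The near-field steering vector in~(\ref{eq8}) can be sketched in a few lines of NumPy; the element ordering follows~(\ref{eq8}), i.e., $(1,1),\dots,(1,N_2),\dots,(N_1,1),\dots,(N_1,N_2)$, and the example coordinates and toy array size are arbitrary:

```python
import numpy as np

def near_field_steering(x, y, z, N1, N2, d=0.5):
    """c(x, y, z) from eq. (8); all distances are wavelength-normalized."""
    n1 = np.arange(1, N1 + 1)
    n2 = np.arange(1, N2 + 1)
    ex = x - (n1 - (N1 + 1) / 2) * d   # x-offsets, shape (N1,)
    ez = z - (n2 - (N2 + 1) / 2) * d   # z-offsets, shape (N2,)
    # D^r(n1, n2): exact spherical-wave distance from each element to (x, y, z)
    D = np.sqrt(ex[:, None] ** 2 + y ** 2 + ez[None, :] ** 2)
    # row-major flattening matches the ordering (1,1), ..., (1,N2), ..., (N1,N2)
    return np.exp(-2j * np.pi * D).reshape(-1)

c_vec = near_field_steering(3.0, 20.0, -1.0, N1=4, N2=2)
```

Unlike the far-field vector in~(\ref{eq4}), the phase here is not separable into a Kronecker product of per-axis ramps, since the exact distance couples the $x$- and $z$-offsets under the square root.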
Similarly, the array steering vector at the RIS for the channel $\bf{G}$ from the BS to the XL-RIS should also be a near-field one. Thus, the near-field cascaded channel ${\bar{{\bf{h}}}}_{\rm{near\mbox{-}field}}$ in~(\ref{eq5}) can be represented by
\begin{equation}\label{eq9}
{\bar{{\bf{h}}}}_{\rm{near\mbox{-}field}}={\alpha}{{\bar{\bf{c}}}}\left((x_{G_r},y_{G_r},z_{G_r}),(x_r,y_r,z_r)\right),
\end{equation}
where $(x_{G_r},y_{G_r},z_{G_r})$ represents the coordinate of the scatter corresponding to the main path between the BS and XL-RIS. The near-field cascaded array steering vector ${{\bar{\bf{c}}}}\left((x_{G_r},y_{G_r},z_{G_r}),(x_r,y_r,z_r)\right)$ can be represented by
\begin{equation}\label{eq10}
\begin{aligned}
{{\bar{\bf{c}}}}\left((x_{G_r},y_{G_r},z_{G_r}),(x_r,y_r,z_r)\right) = \left[e^{-j2\pi D(1,1)}, \cdots, e^{-j2\pi D(1,N_2)}, \cdots,\right.\\\left.e^{-j2\pi D(N_1,1)},\cdots,e^{-j2\pi D(N_1,N_2)}\right]^T,
\end{aligned}
\end{equation}
where $D(n_1,n_2)=D^r(n_1,n_2) + D^{G_r}(n_1,n_2)$ represents the effective distance of ${\bar{{\bf{h}}}}_{\rm{near\mbox{-}field}}$, and $D^{G_r}(n_1,n_2)=\sqrt{(x_{G_r}-(n_1-\frac{N_1+1}{2})d)^2+y_{G_r}^2+(z_{G_r}-(n_2-\frac{N_2+1}{2})d)^2}$. $\left(x_{G_r},y_{G_r},z_{G_r}\right)$ satisfies $X_{\rm{min}}^{G_r}\leq x_{G_r}\leq X_{\rm{max}}^{G_r}$, $Y_{\rm{min}}^{G_r}\leq y_{G_r}\leq Y_{\rm{max}}^{G_r}$, and $Z_{\rm{min}}^{G_r}\leq z_{G_r}\leq Z_{\rm{max}}^{G_r}$, while $\left(x_{r},y_{r},z_{r}\right)$ satisfies $X_{\rm{min}}^{r}\leq x_{r}\leq X_{\rm{max}}^{r}$, $Y_{\rm{min}}^{r}\leq y_{r}\leq Y_{\rm{max}}^{r}$, and $Z_{\rm{min}}^{r} \leq z_{r}\leq Z_{\rm{max}}^{r}$.
Compared with the far-field cascaded array steering vector only associated with the angles in~(\ref{eq4}), the near-field cascaded array steering vector is determined by a pair of points in the $x\mbox{-}y\mbox{-}z$ coordinate system, i.e., $(x_{G_r},y_{G_r},z_{G_r})$ and $(x_r,y_r,z_r)$. The existing far-field codebook mismatches the near-field channel model. Thus, the corresponding far-field beam training will cause severe performance loss in the XL-RIS assisted near-field communications. In this paper, we design the near-field codebook to match the near-field channel model, and then propose the corresponding near-field beam training for XL-RIS, which will be introduced in the next Section III.
\section{Proposed Near-Field Codebook Design and Beam Training Scheme for XL-RIS}\label{S3}
In this section, we will introduce the proposed near-field codebook and the corresponding near-field beam training. Then, a hierarchical near-field codebook and the corresponding beam training scheme will be further proposed to reduce the beam training overhead.
\subsection{Proposed Near-Field Codebook Design and Beam training}\label{S3.1}
Inspired by the near-field dictionary matrix design for XL-MIMO channel estimation~\cite{JinShi}, we design a near-field codebook for XL-RIS beam training. In the near-field dictionary matrix proposed in~\cite{JinShi} for an extremely large-scale linear antenna array, the considered two-dimensional (2D) plane is divided by several sampled points in the $x\mbox{-}y$ coordinate system, and each column of the dictionary matrix is the near-field array steering vector of the linear array associated with one sampled point. If a near-field codebook for the planar array were required in the XL-MIMO system, we would only need to extend the 2D plane to the three-dimensional (3D) space, where each codeword is generated based on the near-field array steering vector of the planar array associated with one sampled point in the $x\mbox{-}y\mbox{-}z$ coordinate system. However, the near-field cascaded channel brings new challenges to the codebook design for XL-RIS.
Specifically, since the near-field cascaded array steering vector ${{\bar{\bf{c}}}}$ of the near-field cascaded channel ${\bar{{\bf{h}}}}_{\rm{near\mbox{-}field}}$ is determined by the sum of the distance from $\left(x_{G_r},y_{G_r},z_{G_r}\right)$ to the XL-RIS and the distance from $\left( x_r,y_r,z_r\right)$ to the XL-RIS, each codeword for XL-RIS should be related to a pair of sampled points in the $x\mbox{-}y\mbox{-}z$ coordinate system, instead of only one sampled point~\cite{JinShi}. Next, we will introduce the designed near-field codebook based on the near-field cascaded array steering vector.
\subsubsection{Near-Field Codebook Design}
Let $\Xi^{G_r}$ and $\Xi^{r}$ denote the two collections of sampled points corresponding to $\left(x_{G_r},y_{G_r},z_{G_r}\right)$ and $\left( x_r,y_r,z_r\right)$, which can be represented as
\begin{equation}\label{eq11}
\begin{aligned}
{\Xi}^{G_r} = \left\{(x^{G_r}_s,y^{G_r}_s,z^{G_r}_s)|x^{G_r}_s=X^{G_r}_{\rm{min}},X^{G_r}_{\rm{min}}+\Delta x^{G_r},\cdots,X^{G_r}_{\rm{max}};y^{G_r}_s=Y^{G_r}_{\rm{min}},\right.\\\left.Y^{G_r}_{\rm{min}}+\Delta y^{G_r},\cdots,Y^{G_r}_{\rm{max}}; z^{G_r}_s=Z^{G_r}_{\rm{min}},Z^{G_r}_{\rm{min}}+\Delta z^{G_r},\cdots,Z^{G_r}_{\rm{max}}\right\},
\end{aligned}
\end{equation}
\begin{equation}\label{eq12}
\begin{aligned}
{\Xi}^{r} = \left\{(x^{r}_s,y^{r}_s,z^{r}_s)|x^{r}_s=X^{r}_{\rm{min}},X^{r}_{\rm{min}}+\Delta x^{r},\cdots,X^{r}_{\rm{max}};y^{r}_s=Y^{r}_{\rm{min}},\right.\\\left.Y^{r}_{\rm{min}}+\Delta y^{r},\cdots,Y^{r}_{\rm{max}}; z^{r}_s=Z^{r}_{\rm{min}},Z^{r}_{\rm{min}}+\Delta z^{r},\cdots,Z^{r}_{\rm{max}}\right\},
\end{aligned}
\end{equation}
where $\Delta x^{G_r}$, $\Delta y^{G_r}$ and $\Delta z^{G_r}$ represent the sampling steps on the $x$-axis, $y$-axis and $z$-axis for ${\Xi}^{G_r}$, respectively, and $\Delta x^{r}$, $\Delta y^{r}$ and $\Delta z^{r}$ represent the sampling steps on the $x$-axis, $y$-axis and $z$-axis for ${\Xi}^{r}$, respectively. Let $\Delta =[\Delta x^{G_r},\Delta y^{G_r},\Delta z^{G_r},\Delta x^{r},\Delta y^{r},\Delta z^{r}]$ denote all sampling steps. Given a pair of sampled points $(x^{G_r}_s,y^{G_r}_s,z^{G_r}_s)$ and $(x^{r}_s,y^{r}_s,z^{r}_s)$, the effective sampled distance $D_s(n_1,n_2)$ can be represented as
\begin{equation}\label{eq13a}
\begin{aligned}
D_s(n_1,n_2) = & \sqrt{\bigg(x^{G_r}_s-(n_1-\frac{N_1+1}{2})d\bigg)^2+{y^{G_r}_s}^2+\bigg(z^{G_r}_s-(n_2-\frac{N_2+1}{2})d\bigg)^2}\\
+ & \sqrt{\bigg(x^{r}_s-(n_1-\frac{N_1+1}{2})d\bigg)^2+{y^{r}_s}^2+\bigg(z^{r}_s-(n_2-\frac{N_2+1}{2})d\bigg)^2},
\end{aligned}
\end{equation}
where $n_1=1,2,\cdots,N_1$ and $n_2=1,2,\cdots,N_2$.
\begin{algorithm}[htbp]
\caption{Near-field codebook design}
\textbf{Inputs}: The two collections of sampled points $\Xi^{G_r}$ and $\Xi^{r}$, the number of RIS elements $N_1$ and $N_2$.
\\\textbf{Initialization}: ${\bf{W}}=\emptyset$, $L=0$.
\\1. \textbf{for} $(x^{G_r}_s,y^{G_r}_s,z^{G_r}_s)\in \Xi^{G_r}$ \textbf{do}
\\2. \hspace*{+3mm}\textbf{for} $(x^{r}_s,y^{r}_s,z^{r}_s)\in \Xi^{r}$ \textbf{do}
\\3. \hspace*{+6mm}${\bar{\bf{c}}}_s=\left[e^{-j2\pi D_s(1,1)}, \cdots, e^{-j2\pi D_s(1,N_2)}, \cdots,e^{-j2\pi D_s(N_1,1)},\cdots,e^{-j2\pi D_s(N_1,N_2)}\right]^H$
\\4. \hspace*{+6mm}\textbf{if} ${\bar{\bf{c}}}_s\notin {\bf{W}}$ \textbf{then}
\\5. \hspace*{+9mm}${\bf{W}}=[{\bf{W}},{\bar{\bf{c}}}_s]$
\\6. \hspace*{+9mm}$L=L+1$
\\7. \hspace*{+6mm}\textbf{end if}
\\8. \hspace*{+3mm}\textbf{end for}
\\9. \textbf{end for}
\\\textbf{Output}: The designed near-field XL-RIS codebook ${\bf{W}}$, and the codebook size $L$.
\end{algorithm}
\textbf{Algorithm 1} shows the specific near-field codebook design procedure. From Step 3, we can find that the near-field codeword is generated based on the near-field cascaded array steering vector, which is related to a pair of sampled points $(x^{G_r}_s,y^{G_r}_s,z^{G_r}_s)$ and $(x^{r}_s,y^{r}_s,z^{r}_s)$. It is noted that different pairs of sampled points may produce the same effective sampled distance, which will result in the same codeword. In order to solve this problem, we need to ensure that each new codeword is different from all previous codewords, as shown in Steps 4-7. Finally, the designed near-field codebook ${\bf{W}}$ is obtained, where each column represents one codeword for the reflecting beamforming vector $\bm{\theta}$ at the XL-RIS. The corresponding codebook size $L$ is also obtained.
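A minimal Python sketch of \textbf{Algorithm 1} is given below. The rounding-based duplicate check stands in for the exact codeword comparison in Step 4 (an implementation choice not specified here), and the two-point collections are toy values: the symmetric pair of points yields a duplicate codeword, so only $L=3$ of the $4$ candidate codewords are kept.

```python
import numpy as np
from itertools import product

def cascaded_distance(pG, pr, N1, N2, d=0.5):
    """D_s(n1, n2): distance to (x^Gr, y^Gr, z^Gr) plus distance to (x^r, y^r, z^r)."""
    n1 = np.arange(1, N1 + 1)[:, None]
    n2 = np.arange(1, N2 + 1)[None, :]
    def dist(p):
        x, y, z = p
        return np.sqrt((x - (n1 - (N1 + 1) / 2) * d) ** 2 + y ** 2
                       + (z - (n2 - (N2 + 1) / 2) * d) ** 2)
    return dist(pG) + dist(pr)

def build_codebook(Xi_G, Xi_r, N1, N2):
    """Algorithm 1: one codeword per pair of sampled points, duplicates discarded."""
    W, seen = [], set()
    for pG, pr in product(Xi_G, Xi_r):
        # Step 3: conjugated near-field cascaded steering vector (the ^H in the text)
        c = np.exp(-2j * np.pi * cascaded_distance(pG, pr, N1, N2)).conj().reshape(-1)
        key = tuple(np.round(c, 9))   # Steps 4-7: keep only previously unseen codewords
        if key not in seen:
            seen.add(key)
            W.append(c)
    return np.stack(W, axis=1)        # each column is a codeword

Xi_G = [(0.0, 10.0, 0.0), (2.0, 10.0, 0.0)]  # toy sampled-point collections
Xi_r = [(0.0, 10.0, 0.0), (2.0, 10.0, 0.0)]
W = build_codebook(Xi_G, Xi_r, N1=4, N2=2)
L = W.shape[1]
```

Here the pairs $(A,B)$ and $(B,A)$ give the same distance sum and hence the same codeword, which is exactly the redundancy Steps 4-7 remove.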
After designing the near-field codebook for XL-RIS, the beam training procedure between the XL-RIS and the user can be performed to search the optimal codeword for the reflecting beamforming vector $\bm{\theta}$ at the XL-RIS. Next, we will introduce the corresponding near-field beam training scheme.
\subsubsection{Near-Field Beam Training}
The specific near-field beam training procedure is summarized in \textbf{Algorithm 2}, where all the codewords in the designed near-field codebook ${\bf{W}}$ need to be traversed. The entire training procedure is divided into $L$ time slots. In the $l$-th time slot, the BS transmits the effective symbol $\bar{s}$ to the user, and the reflecting beamforming vector ${\bm{\theta}}_l$ at the XL-RIS is set as the $l$-th codeword in the designed near-field codebook ${\bf{W}}$, as shown in Step 3. After $L$ time slots, the user can determine the optimal codeword based on all received signals $\{r_l\}_{l=1}^{L}$ via Steps 4-7. Finally, the optimal codeword index $l_{\rm{opt}}$ is fed back from the user to the XL-RIS.
\begin{algorithm}[htbp]
\caption{Near-field beam training}
\textbf{Inputs}: The designed near-field XL-RIS codebook ${\bf{W}}$, and the effective transmitted
symbol $\bar{s}$.
\\\textbf{Initialization}: $l=0$, $|r|_{\rm{opt}}=0$, $l_{\rm{opt}}=0$.
\\1. \textbf{for} each codeword ${{\bar{\bf{c}}}_s}$ in ${\bf{W}}$ \textbf{do}
\\2. \hspace*{+3mm} $l=l+1$
\\3. \hspace*{+3mm} ${r}_l= {\bm{\theta}}_l^T{\bar{{\bf{h}}}}_{\rm{near\mbox{-}field}}{\bar s} + n_l$, where ${\bm{\theta}}_l={{\bar{\bf{c}}}_s}$
\\4. \hspace*{+3mm} \textbf{if} $|r_l|>|r|_{\rm{opt}}$ \textbf{then}
\\5. \hspace*{+6mm} $l_{\rm{opt}}=l$
\\6. \hspace*{+6mm} $|r|_{\rm{opt}}=|r_l|$
\\7. \hspace*{+3mm} \textbf{end if}
\\8. \textbf{end for}
\\\textbf{Output}: The feedback optimal codeword index $l_{\rm{opt}}$ from the user.
\end{algorithm}
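\textbf{Algorithm 2} reduces to an argmax over the received-signal magnitudes. The sketch below runs it noiselessly on a toy codebook in which, by construction, the cascaded channel is matched to one codeword, so the training should recover that index:

```python
import numpy as np

def beam_training(W, h_bar, s_bar=1.0, sigma=0.0, rng=None):
    """Algorithm 2: try every codeword, return the index with the largest |r_l|."""
    rng = rng or np.random.default_rng(0)
    best, l_opt = -1.0, -1
    for l in range(W.shape[1]):
        theta_l = W[:, l]             # Step 3: set the RIS phases to codeword l
        n_l = sigma * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        r_l = theta_l @ h_bar * s_bar + n_l
        if abs(r_l) > best:           # Steps 4-7: track the strongest measurement
            best, l_opt = abs(r_l), l
    return l_opt

# toy check: the cascaded channel equals the conjugate of codeword 2,
# so theta^T h_bar is maximized (coherently combined) at index 2
rng = np.random.default_rng(1)
W = np.exp(1j * rng.uniform(0, 2 * np.pi, (16, 5)))
h_bar = np.conj(W[:, 2])
l_opt = beam_training(W, h_bar)
```

With $\sigma > 0$, the same routine models the noisy measurements $\{r_l\}_{l=1}^{L}$ of the algorithm.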
Since the near-field cascaded array steering vector for XL-RIS is jointly determined by a pair of sampled points, the codebook size $L$ is usually large. This exhaustive training procedure will lead to huge beam training overhead. In order to reduce the beam training overhead, we further design a hierarchical near-field codebook and propose the corresponding hierarchical beam training scheme.
\subsection{Proposed Hierarchical Near-Field Codebook Design and Beam Training}\label{S3.2}
To reduce the beam training overhead, one effective way is to reduce the codebook size. From~(\ref{eq11}) and~(\ref{eq12}), we can see that the codebook size for the entire sampling space is mainly determined by the sampling steps along the $x$-axis, $y$-axis and $z$-axis. If the sampling steps are increased, the codebook size will be reduced, but the beam training performance will also be degraded accordingly, since it is difficult to accurately locate the scatterers corresponding to the main paths with the reduced codebook. To solve this problem, we design a hierarchical near-field codebook, which consists of several levels of sub-codebooks determined by different sampling ranges and sampling steps.
Specifically, let $K$ denote the number of different levels of sub-codebooks. In the $k$-th sub-codebook ($k=1,2,\cdots,K$), the corresponding collections of sampled points ${\Xi}^{G_r}_k$ and ${\Xi}^{r}_k$ can be defined as
\begin{equation}\label{eq13}
\begin{aligned}
{\Xi}^{G_r}_k = \left\{(x^{G_r,k}_s,y^{G_r,k}_s,z^{G_r,k}_s)|x^{G_r,k}_s=X^{G_r,k}_{\rm{min}},X^{G_r,k}_{\rm{min}}+\Delta x^{G_r,k},\cdots,X^{G_r,k}_{\rm{max}};y^{G_r,k}_s=Y^{G_r,k}_{\rm{min}},\right.\\\left.Y^{G_r,k}_{\rm{min}}+\Delta y^{G_r,k},\cdots,Y^{G_r,k}_{\rm{max}}; z^{G_r,k}_s=Z^{G_r,k}_{\rm{min}},Z^{G_r,k}_{\rm{min}}+\Delta z^{G_r,k},\cdots,Z^{G_r,k}_{\rm{max}}\right\},
\end{aligned}
\end{equation}
\begin{equation}\label{eq14}
\begin{aligned}
{\Xi}^{r}_k = \left\{(x^{r,k}_s,y^{r,k}_s,z^{r,k}_s)|x^{r,k}_s=X^{r,k}_{\rm{min}},X^{r,k}_{\rm{min}}+\Delta x^{r,k},\cdots,X^{r,k}_{\rm{max}};y^{r,k}_s=Y^{r,k}_{\rm{min}},\right.\\\left.Y^{r,k}_{\rm{min}}+\Delta y^{r,k},\cdots,Y^{r,k}_{\rm{max}}; z^{r,k}_s=Z^{r,k}_{\rm{min}},Z^{r,k}_{\rm{min}}+\Delta z^{r,k},\cdots,Z^{r,k}_{\rm{max}}\right\}.
\end{aligned}
\end{equation}
Take the sampling on the $x$-axis for ${\Xi}^{G_r}_k$ as an example, $[X^{G_r,k}_{\rm{min}},X^{G_r,k}_{\rm{max}}]$ and $\Delta x^{G_r,k}$ represent the sampling range and sampling step, respectively.
Further, let $R^{k}=\big\{[X^{G_r,k}_{\rm{min}},X^{G_r,k}_{\rm{max}}],[Y^{G_r,k}_{\rm{min}},Y^{G_r,k}_{\rm{max}}],\\{[Z^{G_r,k}_{\rm{min}},Z^{G_r,k}_{\rm{max}}]},[X^{r,k}_{\rm{min}},X^{r,k}_{\rm{max}}],[Y^{r,k}_{\rm{min}},Y^{r,k}_{\rm{max}}],{[Z^{r,k}_{\rm{min}},Z^{r,k}_{\rm{max}}]}\big\}$
and ${\Delta}^{k}=\{\Delta x^{G_r,k},\Delta y^{G_r,k},\Delta z^{G_r,k},\\\Delta x^{r,k},\Delta y^{r,k},\Delta z^{r,k}\}$ respectively denote all sampling ranges and all sampling steps for the $k$-th level sub-codebook. In this way, ${\Xi}^{G_r}_k$ and ${\Xi}^{r}_k$ are completely determined by $R^{k}$ and ${\Delta}^{k}$. Given ${\Xi}^{G_r}_k$ and ${\Xi}^{r}_k$, the $k$-th level sub-codebook ${\bf{W}}_k$ and the corresponding codebook size $L_k$ can be obtained by referring to \textbf{Algorithm 1}. Fig. 3 shows the comparison between the near-field codebook and the hierarchical near-field codebook. In the hierarchical near-field codebook, from the $1$-st level sub-codebook to the $K$-th level sub-codebook, both the sampling ranges and the sampling steps gradually become smaller. Thus, the codebook size of each level of sub-codebook is not large.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\linewidth]{Fig2}
\end{center}
\setlength{\abovecaptionskip}{-0.3cm}
\caption{Comparison between the near-field codebook and the hierarchical near-field codebook.}
\end{figure}
Based on the hierarchical near-field codebook, we further propose the hierarchical near-field beam training scheme. The basic idea is to search from the $1$-st level sub-codebook to the $K$-th level sub-codebook in turn, where the sampling ranges of each level of sub-codebook are determined by the optimal codeword and the sampling steps of the previous level of sub-codebook, as shown in Fig. 3. Assuming that the optimal codeword found in the $k$-th level sub-codebook is ${{\bar{\bf{c}}}_{s,k,\rm{opt}}}$ with the corresponding sampled points $(x^{G_r}_{s,k,\rm{opt}},y^{G_r}_{s,k,\rm{opt}},z^{G_r}_{s,k,\rm{opt}})$ and $(x^{r}_{s,k,\rm{opt}},y^{r}_{s,k,\rm{opt}},z^{r}_{s,k,\rm{opt}})$, the sampling ranges of the $(k+1)$-th level sub-codebook can be determined accordingly. Taking $[X^{G_r,k+1}_{\rm{min}},X^{G_r,k+1}_{\rm{max}}]$ as an example, it can be represented by
\begin{equation}\label{eq15}
[X^{G_r,k+1}_{\rm{min}},X^{G_r,k+1}_{\rm{max}}] = [x^{G_r}_{s,k,\rm{opt}}-\Delta x^{G_r,k}/2, x^{G_r}_{s,k,\rm{opt}}+\Delta x^{G_r,k}/2].
\end{equation}
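The range update in~(\ref{eq15}) is a one-liner, centering the next-level range on the current optimal sampled point with a width equal to the current sampling step; the numbers below are illustrative:

```python
def next_range(x_opt, step):
    """Eq. (15): the next-level sampling range is centered on the previous
    optimal sampled point, with width equal to the previous sampling step."""
    return (x_opt - step / 2, x_opt + step / 2)

# e.g., level-k optimum at x = 50 with step 20 -> level-(k+1) range [40, 60]
lo, hi = next_range(x_opt=50.0, step=20.0)
```

The same update is applied independently on every axis of both sampled-point collections.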
For the $1$-st level sub-codebook, the corresponding sampling ranges can be set as $R^{1}=\big\{[X^{G_r}_{\rm{min}},X^{G_r}_{\rm{max}}],[Y^{G_r}_{\rm{min}},Y^{G_r}_{\rm{max}}],{[Z^{G_r}_{\rm{min}},Z^{G_r}_{\rm{max}}]},[X^{r}_{\rm{min}},X^{r}_{\rm{max}}],[Y^{r}_{\rm{min}},Y^{r}_{\rm{max}}],[Z^{r}_{\rm{min}},Z^{r}_{\rm{max}}]\big\}$.
The initial sampling steps $\Delta^1$ should be set to relatively large values to reduce the codebook size. In this paper, we set $\Delta^1=A\Delta$, where $\Delta$ denotes the sampling steps of the near-field codebook designed in Section III-A, and $A$ is a scalar greater than $1$. Moreover, we define a step control parameter $\delta$ ($0<\delta<1$) to gradually decrease the sampling steps of the sub-codebooks.
\begin{algorithm}[htbp]
\caption{Hierarchical near-field beam training}
\textbf{Inputs}: The number of levels of sub-codebooks $K$, the initial sampling ranges $R^1$, the initial sampling steps $\Delta^1$, the step control parameter $\delta$, the effective transmitted symbol $\bar{s}$, and the number of RIS elements $N_1$ and $N_2$.
\\\textbf{Initialization}: $l_k=0$, $|r|_{k,\rm{opt}}=0$ and $l_{k,\rm{opt}}=0$ for $\forall k$.
\\1. \textbf{for} $k=1,2,\cdots, K$ \textbf{do}
\\2. \hspace*{+3mm} generate ${\Xi}^{G_r}_k$ and ${\Xi}^{r}_k$ based on $R^k$ and $\Delta^k$ by~(\ref{eq13}) and~(\ref{eq14})
\\3. \hspace*{+3mm} generate ${\bf{W}}_k$ based on ${\Xi}^{G_r}_k$ and ${\Xi}^{r}_k$ by \textbf{Algorithm 1}
\\4. \hspace*{+3mm} \textbf{for} ${{\bar{\bf{c}}}_{s,k}}\in {\bf{W}}_k$ \textbf{do}
\\5. \hspace*{+6mm} $l_k=l_k+1$
\\6. \hspace*{+6mm} ${r}_{l_k,k}= {\bm{\theta}}_{l_k,k}^T{\bar{{\bf{h}}}}_{\rm{near\mbox{-}field}}{\bar s} + n_{l_k,k}$, where ${\bm{\theta}}_{l_k,k}={{\bar{\bf{c}}}_{s,k}}$
\\7. \hspace*{+6mm} \textbf{if} $|r_{l_k,k}|>|r|_{k,\rm{opt}}$ \textbf{then}
\\8. \hspace*{+9mm} $l_{k,{\rm{opt}}}=l_k$
\\9.\hspace*{+9mm} $|r|_{k,\rm{opt}}=|r_{l_k,k}|$
\\10.\hspace*{+6mm} \textbf{end if}
\\11.\hspace*{+3mm} \textbf{end for}
\\12.\hspace*{+3mm} generate $R^{k+1}$ based on $l_{k,\rm{opt}}$ by~(\ref{eq15})
\\13.\hspace*{+3mm} $\Delta^{k+1}=\delta\Delta^{k}$
\\14.\textbf{end for}
\\\textbf{Output}: The feedback optimal codeword index $l_{K,{\rm{opt}}}$ from the user.
\end{algorithm}
The specific hierarchical near-field beam training procedure is summarized in \textbf{Algorithm 3}, where the entire beam training procedure is divided into $K$ stages. In the $k$-th stage, the $k$-th level sub-codebook ${\bf{W}}_k$ is first generated by Steps 2-3. Then, Steps 4-11 are performed to search for the optimal codeword in ${\bf{W}}_k$. It is noted that the searched optimal codeword index $l_{k,\rm{opt}}$ should be fed back from the user to the XL-RIS to generate the sampling ranges $R^{k+1}$ and sampling steps $\Delta^{k+1}$ for the $(k+1)$-th level sub-codebook, as shown in Steps 12-13. Finally, the optimal codeword of the $K$-th level sub-codebook is regarded as the globally optimal codeword in the hierarchical near-field codebook.
From \textbf{Algorithm 3}, we can see that the corresponding beam training overhead is $\sum_{k=1}^{K}L_k$. Since the sampling ranges and sampling steps of the hierarchical near-field codebook are carefully designed, each $L_k$ can be much smaller than $L$, which will be verified by the simulation results.
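The overhead saving can be illustrated with back-of-the-envelope grid counts. These are upper bounds before duplicate removal, with illustrative ranges and steps (not the exact simulation settings of this paper): the full grid samples one $1200$-wide range per axis with step $100$, while the hierarchy uses step $4\times 100$ at level 1 and step $100$ over a $400$-wide refined range at level 2:

```python
import math

def grid_size(range_len, step):
    """Number of sampled points on one axis: min, min+step, ..., max."""
    return math.floor(range_len / step) + 1

# exhaustive codebook: 3 axes per sampled point, and a *pair* of points per codeword,
# so the per-point grid count is squared
full_axis = [grid_size(1200, 100)] * 3
L_full = math.prod(full_axis) ** 2

# hierarchical: level 1 is coarse over the full range (step 400),
# level 2 is fine (step 100) over the 400-wide range around the level-1 optimum
L1 = (grid_size(1200, 400) ** 3) ** 2
L2 = (grid_size(400, 100) ** 3) ** 2
```

Even in this crude count, $L_1+L_2$ is orders of magnitude below $L$, because each stage squares only a small per-axis grid rather than the full one.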
\section{Simulation Results}\label{S5}
In this section, we provide the simulation results to verify the performance of the two proposed near-field beam training schemes.
For simulations, we consider that the number of antennas at the BS is $M=64$, and the number of RIS elements at the XL-RIS is $N=512$ ($N_1=128$ and $N_2=4$). The path gains are generated by ${\alpha _{G} \sim {\cal C}{\cal N}\left( {0,1} \right)}$ and ${\alpha _{r} \sim {\cal C}{\cal N}\left( {0,1} \right)}$. The vertical or horizontal distance between two adjacent RIS elements is set as $d=1/2$~\cite{JinShi}. The region of the scatterers is limited by $X^{G_r}_{\rm{min}}=X^{r}_{\rm{min}}=-1200d$, $X^{G_r}_{\rm{max}}=X^{r}_{\rm{max}}=1200d$, $Y^{G_r}_{\rm{min}}=Y^{r}_{\rm{min}}=10d$, $Y^{G_r}_{\rm{max}}=Y^{r}_{\rm{max}}=200d$, $Z^{G_r}_{\rm{min}}=Z^{r}_{\rm{min}}=-400d$, and $Z^{G_r}_{\rm{max}}=Z^{r}_{\rm{max}}=400d$. The symbol transmitted by the BS is set as $s=1$, and the beamforming vector at the BS is set as ${\bf{v}}=\frac{{\bf{b}}^*}{\sqrt M}$. Thus, the effective transmitted symbol is $\bar s=1$. The SNR is defined as $1/{\sigma}^2$.
We compare the proposed near-field beam training scheme and the hierarchical near-field beam training scheme with the existing far-field beam training scheme~\cite{RISBT}. In the near-field beam training scheme, the sampling steps on the $x$-axis, $y$-axis and $z$-axis are all set to be the same, i.e., $\Delta x^{G_r}=\Delta y^{G_r}=\Delta z^{G_r}=\Delta x^{r}=\Delta y^{r}=\Delta z^{r}=\Delta_s$. That is to say, $\Delta=[\Delta_s,\Delta_s,\Delta_s,\Delta_s,\Delta_s,\Delta_s]$. In the hierarchical near-field beam training scheme, we set $A=4$, and the initial sampling steps $\Delta^1=A\Delta$. The step control parameter is set as $\delta=0.25$. The number of different levels of sub-codebooks is set as $K=2$. In the far-field beam training scheme, the far-field codebook $\bf{F}$ defined in~(\ref{eq3}) is adopted. Moreover, we provide the beamforming scheme with perfect CSI as the upper bound of performance, i.e., ${\bm{\theta}}=\frac{{\bar{\bf{c}}}^*}{\sqrt N}$.
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.8\linewidth]{AR}
\end{center}
\setlength{\abovecaptionskip}{-0.2cm}
\caption{Achievable rate performance comparison against the SNR.} \label{FIG3}
\vspace{-1mm}
\end{figure}
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.8\linewidth]{overhead}
\end{center}
\setlength{\abovecaptionskip}{-0.2cm}
\caption{Beam training overhead comparison against the sampling step $\Delta_s$.} \label{FIG4}
\vspace{-1mm}
\end{figure}
Fig. 4 shows the achievable rate performance comparison against the SNR, where $\Delta_s=100d$. We can find that, compared with the existing far-field beam training scheme, the two proposed near-field beam training schemes can achieve better achievable rate performance. Due to the error propagation across the search of different levels of sub-codebooks, the performance of the hierarchical near-field beam training scheme is slightly worse than that of the near-field beam training scheme. Specifically, the hierarchical near-field beam training scheme can achieve about $92\%$ of the achievable rate performance of the near-field beam training scheme.
Fig. 5 further shows the beam training overhead comparison of the two proposed near-field beam training schemes against the sampling step $\Delta_s$. We can find that the hierarchical near-field beam training scheme can greatly reduce the beam training overhead. When $\Delta_s=100d$, the beam training overheads of the near-field beam training scheme and the hierarchical near-field beam training scheme are $147628$ and $15927$, respectively, where the latter is only about $10\%$ of the former.
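The quoted reduction can be checked directly from the two overhead figures:

```python
# Overhead figures quoted in the text for Delta_s = 100d.
near_field = 147628    # exhaustive near-field beam training
hierarchical = 15927   # hierarchical near-field beam training

ratio = hierarchical / near_field
print(f"{ratio:.1%}")  # -> 10.8%, i.e., roughly a 90% overhead reduction
```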
\section{Conclusions}\label{S6}
In this paper, we have proposed two near-field beam training schemes by designing the near-field codebook for the XL-RIS assisted system. Simulation results show that the two proposed near-field beam training schemes can achieve better performance than the existing far-field beam training scheme. Particularly, compared with the near-field beam training scheme, the hierarchical near-field beam training scheme can reduce the beam training overhead by about $90\%$ while achieving about $92\%$ of its achievable rate performance. For future work, multi-beam training methods can be applied to near-field XL-RIS beam training to further reduce the beam training overhead.
\section{Introduction} \label{sec:introduction}
An intriguing empirical observation was made by Bruin, et al., \cite{bruinSimilarityScatteringRates2013} in 2013: The transport scattering rate as extracted from the Drude resistivity formula, $\rho = m/(ne^2\tau)$, where $m$, $n$ and $\tau$ are the carrier effective mass, carrier density, and the transport relaxation time respectively, obeys an approximate bound, $\hbar/\tau = k_\mathrm{B}T$, for many different metals, particularly in the regime where $\rho(T)$ is approximately linear in the temperature (often, such conductors manifesting a linear-in-$T$ resistivity are dubbed `strange metals', particularly if the linearity persists to rather low temperatures). It is implied that $\rho(T)$ should really be the $T$-dependent part of the resistivity with the elastic disorder scattering contribution at $T=0$ subtracted out through careful extrapolation. The terminology `Planckian bound or limit' has stuck to this puzzling empirical phenomenon for historical reasons \cite{hartnollPlanckianDissipationMetals2021a}\cmmnt{[2]} with many experiments \cite{grissonnancheLinearinTemperatureResistivity2021, legrosUniversalTlinearResistivity2019,caoStrangeMetalMagicAngle2020, yuanScalingStrangemetalScattering2022a, huangNonFermiLiquidTransport2020, senStrangeSemimetalDynamics2020, wangUnconventionalFreeCharge2020, ayresIncoherentTransportStrangemetal2021, lizaireTransportSignaturesPseudogap2021, taupinAreHeavyFermion2022}, mostly in 2D strongly correlated systems such as cuprates, claiming the observation of such Planckian bounds on the resistive scattering rates. Exceptions to the Planckian bound on transport have also been pointed out in a few situations \cite{poniatowskiCounterexampleConjecturedPlanckian2021, collignonHeavyNondegenerateElectrons2020}. 
Such exceptions, where the effective Drude scattering rate is larger than the putative Planckian thermal bound is referred to as the super-Planckian behavior and, by contrast, the situation of the rate being much less than temperature is called sub-Planckian. Particular significance is often attached to the Planckian bound being saturated, i.e., $\hbar/\tau=k_\mathrm{B} T$ (``Planckian metals'') and it is sometimes asserted that the bound is an intrinsic limit on temperature-dependent transport except perhaps for trivial temperature-independent elastic scattering at $T=0$.
The Planckian bound obviously cannot apply to the total resistivity of a metal since all metals have a disorder-induced `residual resistivity' at low temperatures (ignoring any superconducting transition), where the bound must be increasingly violated with the lowering of temperature. The bound must therefore be formally defined by writing:
\begin{equation}
\rho = \rho_0 + \rho (T)
\label{eq:total_resistivity}
\end{equation}
where $\rho(T)$ is the temperature-dependent part of the resistivity which vanishes at $T=0$, and $\rho_0$ is, by definition, the disorder-induced residual resistivity at $T=0$. In discussing Planckian properties, it is always implicitly assumed that $\rho(T)$ is being considered in extracting the scattering rate, with $\rho_0$ subtracted out (or equivalently the situation $\rho(T) \gg \rho_0$ applies). In principle, one should worry about the applicability of Matthiessen's rule in separating out different scattering contributions, particularly in electron systems with low Fermi temperatures $T_\mathrm{F}$ as may happen for certain strongly correlated 2D metals (but not for regular 3D normal metals), but as long as $\rho(T)\gg\rho_0$ applies, this is not a problem. We will always focus on the temperature-dependent part of the resistivity in discussing Planckian properties in the current work even if it is not always explicitly stated everywhere.
An important point, not often emphasized in discussing Planckian transport, but was already discussed in Ref.~\cite{bruinSimilarityScatteringRates2013}, is that regular 3D normal metals may violate the Planckian bound for $T>40K$ or so, where the metallic resistivity is linear-in-$T$ caused by acoustic phonon scattering in the equipartition regime. For example, Pb manifests a linear-in-$T$ room temperature resistivity which, when converted to a transport scattering rate via the Drude formula, gives $\hbar/\tau \sim 9 k_\mathrm{B}T$, reflecting an empirical super-Planckian behavior strongly violating the Planckian bound. By contrast, Al (and many other metals), obeys the Planckian bound at all temperatures. Basically, all strong-coupling (in the electron-phonon interaction sense) metals with the effective dimensionless electron-phonon coupling strength $\lambda \sim 1$ are trivially super-Planckian since the corresponding transport scattering rate in the linear-$T$ resistivity regime ($T>40K$) is given by
$(\hbar/\tau) / ( k_\mathrm{B}T) \sim 5$--$10$.
Therefore, the Planckian bound may often be violated at higher temperatures in metals by the trivial electron-phonon interaction in the quasi-elastic equipartition scattering regime \cite{hwangLinearinResistivityDilute2019}. It has therefore been suggested that any serious discussion of the Planckian bound should also leave out phonon scattering, in addition to leaving out impurity scattering, since high-temperature phonon scattering is quasielastic, and the bound does not apply to any type of elastic scattering (although, as a matter of empirical fact, the bound does apply to phonon scattering limited resistivity in metals, within a factor of $10$). Of course, this makes the whole Planckian analysis something of a theoretical semantic exercise since experimentally all one can measure is the electrical resistivity and convert it into a scattering rate (by also measuring the effective mass and the effective carrier density), and there is no empirical way of ascertaining whether the resistive scattering is or is not elastic/quasielastic. Also, the subtraction of the disorder induced residual resistivity always involves some arbitrariness as it requires an extrapolation to $T=0$.
In fact, resistive scattering is associated with momentum relaxation, and any process, whether elastic or inelastic, leads to a resistivity if it leads to a relaxation of the net momentum. Nevertheless, the whole Planckian lore has taken on considerable significance because of its claimed connection to `strange metals', where a linear-in-$T$ resistivity arises from some unknown electron correlation effects, and somehow persists to low temperatures. In the literature, researchers often conflate the resistive scattering rate with the imaginary part of a single-particle self-energy arising from electron-electron interactions although it is well-known that the two quantities generally have nothing to do with each other as self-energy (resistivity) is associated with the single-particle (two-particle) propagator. In addition, the standard electron-electron scattering rate arising from the imaginary part of the electron self-energy goes as $T^2$ in 3D (or $T^2 \ln{T}$ in 2D), and therefore, any associated resistive scattering would manifest a $T^2$ resistivity at low temperatures (and not a linear-in-$T$ strange, metallic resistivity).
This $T^2$ versus $T$ issue is sometimes turned around to insist: (1) The linear-in-$T$ resistivity of the so-called strange metals indeed arises from electron-electron interaction behaving in some unknown `strange' manner, and (2) therefore, the physics here must be strange since electron-electron correlation effects produce a linear-in-$T$ resistivity instead of a $T^2$ resistivity. In our opinion this is incorrect logic unless one can show that a reasonable microscopic model leads to correlation-induced linear-in-$T$ resistivity. Often, such a linear-in-$T$ resistivity is somewhat vaguely associated with quantum criticality, but to the best of our knowledge, there is no known physical itinerant electron quantum critical point leading to a linear-in-$T$ resistivity or Planckian scattering.
It should be emphasized that electron-electron scattering can relax momentum only if umklapp or interband Baber scattering is invoked, and connecting an electron self-energy directly with a transport scattering rate is in general incorrect because all momentum dependence is being ignored uncritically in such considerations. The experimental literature is filled with uncritical (and often incorrect) claims of a system being a non-Fermi-liquid simply because it manifests a linear-in-$T$ resistivity at low temperatures, with the unreasonable assumption being that the observed linear-in-$T$ resistivity necessarily implies an imaginary part of self-energy going also as linear in $T$, which would indeed be inconsistent with a Fermi liquid. Such a non-Fermi-liquid claim necessitates at the minimum the observation of the inelastic scattering rate going as $\mathcal{O}(T)$ at arbitrarily low temperatures, not just the resistive scattering rate.
The purpose of the current work is to consider and analyze a well-studied \cite{dassarmaScreeningTransport2D2015,dassarmaMetallicityItsLowtemperature2004,dassarmaLowdensityFinitetemperatureApparent2003,zalaInteractionCorrectionsIntermediate2001} problem, namely the low-temperature density and temperature dependent resistivity of high quality 2D semiconductor systems, from the perspective of the Planckian behavior. In a narrow sense, our work has some superficial similarity with Ref.~\cite{bruinSimilarityScatteringRates2013} where the experimental temperature dependent resistivity of various metals (both 3D and 2D) was analyzed from the Planckian perspective reaching the purely empirical conclusion that most metals obey the Planckian bound. The big qualitative difference between Ref.~\cite{bruinSimilarityScatteringRates2013} and our work is that, in addition to analyzing the existing experimental transport data in 2D semiconductors, we also provide the underlying transport theory which is in approximate agreement with the experimental data, finding to our considerable surprise that the Planckian bound appears to be always obeyed within a factor of $10$. Unlike Ref.~\cite{bruinSimilarityScatteringRates2013}, where all the 3D metallic linear-in-$T$ resistivity most definitively arises from acoustic phonon scattering, our work involves no phonon scattering at all since the experiments (and the associated theory) we consider are all restricted to $<10K$, where phonons are thermally suppressed and, therefore, all phonon scattering contribution to the resistivity is strongly suppressed (the so-called Bloch-Gruneisen regime).
Phonons play no role in the results we discuss here although the same systems and samples do manifest the expected phonon-induced linear-in-$T$ resistivity at higher temperatures ($>10$--$20$K) \cite{minInterplayPhononImpurity2012}.
The experimentally observed strong approximate linear-in-temperature resistivity at low temperatures in many dilute 2D semiconductor systems arises from an interplay between Coulomb disorder and electron-electron interaction, where the carriers scatter from the momentum-dependent screened disorder, which becomes strongly temperature dependent around $2k_\mathrm{F}$ because of the 2D Fermi surface anomaly \cite{sternCalculatedTemperatureDependence1980,goldTemperatureMaximalConductivity1985, dassarmaTheoryFinitetemperatureScreening1986}. Since at low temperatures the most important resistive scattering is the $2k_\mathrm{F}$ back scattering, the strong metallic temperature dependence of the 2D resistivity arises from the nonanalytic temperature dependence of the 2D polarizability function at $2k_\mathrm{F}$ -- increasing temperature weakens screening, leading to larger resistivity with increasing temperature, and this effect is, in the leading order, linear in $T/T_\mathrm{F}$, where $T_\mathrm{F}$ is the Fermi temperature. Thus, for dilute systems, where $T_\mathrm{F}$ is relatively low, the linear-in-$T$ temperature correction to the residual resistivity at $T=0$ could be large, extending to arbitrarily low temperatures. Such a linear temperature correction $\rho(T)$ to $\rho_0$ arising entirely (i.e., no phonons) from screened disorder scattering violates the textbook Sommerfeld expansion which asserts that all thermal corrections in a Fermi system must go as $\mathcal{O}((T/T_\mathrm{F})^2)$ because of the thermal broadening of the Fermi distribution. This is an unexpected and counter-intuitive result arising from the fact that 2D systems have nonanalytic interaction corrections associated with Fermi surface anomalies.
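Schematically, the effect described above can be summarized by the leading-order low-temperature form (the coefficient $C$ here is a positive, density-dependent number of order unity set by the screening strength $q_\mathrm{TF}/2k_\mathrm{F}$; its precise value follows from the Boltzmann-RPA theory):

```latex
\rho \approx \rho_0 \left[ 1 + C\, \frac{T}{T_\mathrm{F}} + \cdots \right],
\qquad T \ll T_\mathrm{F},
```

so that, consistent with Eq.~(\ref{eq:total_resistivity}), the temperature-dependent part is $\rho(T) \approx \rho_0\, C\, T/T_\mathrm{F}$, and for dilute systems with small $T_\mathrm{F}$ the linear term can remain sizable down to very low temperatures.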
This strong temperature dependence of the 2D resistivity is thus a combined effect of disorder and interaction, and disappears theoretically if one assumes, by ignoring the electron-electron Coulomb interaction, the disorder to be unscreened short-range (or long-range) disorder. The temperature dependence is also suppressed if the disorder is screened by long wavelength Thomas-Fermi screening neglecting the momentum dependence because the 2$k_\mathrm{F}$-anomaly disappears in the long wavelength approximation.
We note that disorder breaks translational invariance allowing interaction to affect momentum relaxation indirectly through the screening of disorder.
The interacting 2D system is still a Fermi liquid with well-defined quasiparticles, but subtle nonanalytic corrections associated with electron-electron interactions give rise to Fermi surface anomalies leading to thermal corrections to all quasiparticle properties violating the $T^2$ law of Sommerfeld expansion \cite{buterakosPresenceAbsenceTwodimensional2021a}. For the temperature dependent resistivity, the physically appealing way of thinking about the linear-in-$T$ correction to the residual resistivity is that the effective momentum-dependent disorder, including interaction effects (i.e., screened disorder), is strongly temperature dependent although the bare disorder arising from quenched impurities obviously is not. An equivalent statement would be that the electrons resistively scatter from the strongly temperature dependent Friedel oscillations associated with the renormalized impurity potential. Note that Friedel oscillations are a characteristic of finite-momentum screening at $2k_\mathrm{F}$, and vanish in the long-wavelength Thomas-Fermi screening approximation often used theoretically.
Thus, the low-temperature linear-in-$T$ resistivity behavior in 2D semiconductor systems is a combined effect of disorder and interaction, extending all the way to $T=0$, but it implies no violation of the Fermi liquid theory. Perhaps this example should be instructive for other systems where an observed linear-in-$T$ resistivity at lower temperatures is automatically claimed to imply a non-Fermi liquid ground state. Linear-in-$T$ resistivity can indeed arise from indirect effects of electron-electron interactions without affecting the Fermi liquid nature of an electronic system. The effect disappears if the electron-electron interaction is set to zero.
Given that the electron-electron interaction plays the decisive role in producing the Fermi surface anomaly leading to a linear-in-$T$ resistivity in 2D semiconductors arising from screened disorder scattering, with electron-phonon scattering being absent in the physics, studying the Planckian properties of 2D semiconductor transport manifesting linear-in-$T$ resistivity takes on great significance because this is a system where the mechanism underlying $\rho(T)$ is understood and there is a huge amount of experimental data covering many different 2D materials manifesting a strong $T$-dependent low-$T$ resistivity.
We emphasize that unscreened disorder in high quality 2D semiconductors arises from unintentional random quenched charged impurities invariably present in the environment, and theoretically, such unscreened disorder by itself produces the expected $\mathcal{O}((T/T_\mathrm{F})^2)$ at low temperatures as implied by the Sommerfeld expansion of the Fermi function since the noninteracting system has no Fermi surface anomalies. Additionally, such unscreened Coulomb disorder gives rise to an `insulating' resistivity, with $\rho(T)$, in fact, decreasing with increasing $T$ as $\mathcal{O}((T/T_\mathrm{F})^2)$ since the thermal smearing of the Fermi surface decreases the effective scattering momentum in the denominator of the Coulomb potential, effectively enhancing the disorder scattering \cite{sternSelfconsistentTreatmentScreening1985}. Thus, the Fermi surface anomaly induced temperature dependent screening effect has a nontrivial qualitative effect on $\rho(T)$, modifying a negative $\mathcal{O}((T/T_\mathrm{F})^2)$ term into a positive $\mathcal{O}(T/T_\mathrm{F})$ correction. Whether such a linear-in-$T$ resistivity is Planckian or not is indeed an important question. We note that for the doped 2D semiconductors of interest in the current work, the system is essentially a continuum electron liquid with all effects of the periodic lattice (and band structure) subsumed in the effective mass and the effective background dielectric constant -- this is the extremely successful effective mass approximation utilized universally in the theories of semiconductor transport. Electron-electron interaction induced umklapp scattering is irrelevant in the transport problem of our interest here since the Brillouin zone filling is a minuscule $10^{-5}$ or so with all the carriers being essentially at the band bottom.
Thus, all interaction effects enter the theory indirectly through the Coulomb interaction and the 2D polarizability function controlling the screening of random disorder -- direct electron-electron scattering is completely momentum-conserving in our systems of interest and does not contribute to the resistivity.
The second part of our work presented here is independent of the resistivity issue, focusing on the calculation of the electron-electron interaction induced inelastic scattering rate, to be compared with the Planckian hypothesis.
As emphasized above, carrier resistivity is associated with momentum relaxation, and not with any imaginary self-energy arising from interaction-induced electron-electron scattering although the two are often conflated uncritically in the discussion on strange metals and Planckian properties. In some theories, where the momentum dependence is ignored and all scattering is by assumption umklapp scattering, the resistivity is given by the momentum independent self-energy, but such theories are typically uncontrolled in any parameter regime. For the 2D doped semiconductors, however, the interacting self-energy calculation we carry out in this work within a many body theory for the continuum electron liquid is exact in the high-density or small $r_s$ limit, where $r_s$ is the dimensionless Wigner-Seitz radius going as the inverse square-root of the 2D carrier density, since the many body perturbation expansion is exact in the small-$r_s$ limit for Coulomb interaction. We obtain the imaginary part of the electron self-energy in the leading order infinite ring diagram approximation to calculate the interaction-induced temperature dependent inelastic scattering rate as a function of temperature, finding that the Planckian bound is approximately (within one order of magnitude) valid over the whole temperature regime ranging from $T\ll T_\mathrm{F}$ to $T\gg T_\mathrm{F}$ and in between. This is again a surprising and potentially important result establishing explicitly that the Planckian bound indeed applies (at least approximately and empirically) to the imaginary part of the dynamical self-energy at all temperatures, i.e., to the inelastic single-particle scattering rate arising from electron-electron Coulomb coupling.
This paper thus presents a study of three independent properties of 2D doped semiconductors interconnected only by their relevance to the Planckian hypothesis. The first part (Sec.~\ref{sec:2}) is purely empirical, following the spirit of Bruin et al.~\cite{bruinSimilarityScatteringRates2013}, where we analyze the published low-temperature metallic ($<10K$) experimental resistivity in the Planckian context. The other two parts (Sec.~\ref{sec:3} and \ref{sec:4}) are theoretical, with the second part (Sec.~\ref{sec:3}) providing the transport theory which effectively (and approximately) describes the metallic $T$-dependent resistivity discussed in the first part (Sec.~\ref{sec:2}) using the model of carrier scattering from screened Coulomb disorder, both analytically and numerically, through the Boltzmann-RPA effective theories. The third part (Sec.~\ref{sec:4}) describes the theory for the finite-temperature imaginary part of the 2D electron self-energy, studying the inelastic electron-electron scattering rate in the Planckian context. We emphasize that this third part (i.e., the imaginary 2D self-energy) in our systems has nothing to do with the transport properties discussed in the first two parts since Galilean invariance in our continuum effective mass system ensures that the electron-electron scattering is strictly momentum-conserving (e.g., no umklapp) and does not contribute to the resistivity. Electron interactions enter into transport indirectly through screening in the first two parts of our work since the presence of disorder breaks the translational invariance, allowing interaction effects to affect transport by dressing or renormalizing (i.e., screening in our case) the effective disorder scattering.
A well-known related effect of electron-electron interactions affecting transport is the logarithmic correction to the 2D conductivity in the diffusive limit (the so-called Altshuler-Aronov effect) \cite{altshulerZeroBiasAnomaly1979}, and the physics we discuss in Sec.~\ref{sec:3} is basically the ballistic counterpart of this `interaction effect' coming specifically through the 2$k_\mathrm{F}$-screening of Coulomb disorder.
The rest of this paper is organized as follows. In Sec.~\ref{sec:2}, we provide a Planckian transport analysis for several different 2D semiconductor systems, by comparing the extracted scattering rate from the measured resistivity (taken from the existing published experimental literature) to the temperature over a large temperature and density range. In Sec.~\ref{sec:3}, we provide the transport theory in approximate agreement with the results in Sec.~\ref{sec:2}, by considering carrier scattering from temperature-dependent screened effective disorder arising from random charged impurities, again comparing the theoretical transport scattering rate with temperature in order to test the Planckian hypothesis. In Sec.~\ref{sec:4}, we calculate the finite-temperature inelastic scattering rate arising from electron-electron Coulomb interaction by obtaining the finite-temperature electron self-energy at arbitrary temperature and density, comparing the inelastic scattering rate with temperature from the Planckian hypothesis.
In Sec.~\ref{sec:5}, we provide some intuitively appealing heuristic dimensional arguments supporting the approximate existence of a Planckian dissipation bound.
We conclude in Sec.~\ref{sec:6} by discussing the implications of our findings in the context of the extensive current debate in the literature on the relevance of the Planckian bound in strange metals.
\section{Planckian analysis of experimental resistivity} \label{sec:2}
Transport properties of doped 2D semiconductor systems are among the most studied \cite{andoElectronicPropertiesTwodimensional1982,spivakColloquiumTransportStrongly2010, dassarmaElectronicTransportTwodimensional2011}\cmmnt{[Ando Fowler Stern RMP 1982]} electronic phenomena in all of physics, going back to 1966 when the two-dimensional nature of interface-confined electrons in Si-SiO$_2$ inversions layers in Si MOSFETs was first demonstrated \cite{fowlerMagnetoOscillatoryConductanceSilicon1966}\cmmnt{[Fowler PRL 1966]}. Many of the seminal experimental discoveries in condensed matter physics over the last 50 years were first reported in various 2D semiconductor systems, including
IQHE \cite{klitzingNewMethodHighAccuracy1980}\cmmnt{[von Klitzing PRL 1980]},
FQHE \cite{tsuiTwoDimensionalMagnetotransportExtreme1982, suenObservationFractionalQuantum1992a}\cmmnt{[Tsui PRL 1982]},
even denominator FQHE \cite{willettObservationEvendenominatorQuantum1987}\cmmnt{ [Willett PRL 1987]},
strong localization \cite{mottAndersonTransition1975}\cmmnt{[Mott, Pepper]},
2D plasmons \cite{allenObservationTwoDimensionalPlasmon1977}\cmmnt{ [Tsui PRL 1975]},
weak localization \cite{bishopNonmetallicConductionElectron1980}\cmmnt{[Bishop PRL 1980]},
2D Wigner crystals \cite{tsuiTwoDimensionalMagnetotransportExtreme1982, spielmanResonantlyEnhancedTunneling2000}\cmmnt{[Tsui]},
excitonic superfluidity \cite{kelloggVanishingHallResistance2004}\cmmnt{[Eisenstein]},
2D metal-insulator crossover \cite{dassarmaScreeningTransport2D2015, sarmaSocalledTwoDimensional2005}\cmmnt{[ SDS Hwang Scientific Reports and Solid State Commun]} and many more. There is a huge amount of published experimental resistivity data available in the literature for various 2D semiconductors, both for electrons ($n$) and holes ($p$) as a function of carrier density and temperature. We focus on 6 typical experimental data sets covering 3 different 2D materials systems ($n$-GaAs, $p$-GaAs, $n$-Si), extracting the transport scattering rate from the Drude formula, subtracting out the $T=0$ extrapolated residual resistivity to focus on the purely $T$-dependent part, $\rho(T)$, of the measured resistivity:
\begin{equation}
\rho(T) = \frac{m}{n e^2 \tau (T)}.
\label{eq:finite_temperature_resistivity}
\end{equation}
Note that experimentally one measures the full $\rho$ of Eq.~(\ref{eq:total_resistivity}), and $\rho(T) = \rho - \rho_0$, with $\rho_0$ obtained by an extrapolation to $T=0$ as described below. As explained in the Introduction (Sec.~\ref{sec:introduction}), this subtraction takes out the strictly elastic temperature-independent contribution to the resistivity, which is obviously outside the scope of any Planckian consideration. Once $\rho(T)$ is extracted from the data extrapolation and subtraction, we can obtain the scattering time $\tau(T)$ from Eq.~(\ref{eq:finite_temperature_resistivity}) since $n$ and $m$ are experimentally known in our semiconductor systems.
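The extraction of the dimensionless Planckian parameter from Eq.~(\ref{eq:finite_temperature_resistivity}) can be sketched as follows; the sample numbers below ($n$, $m$, $\rho(T)$, $T$) are illustrative placeholders, not values from any specific data set analyzed here:

```python
# Drude extraction of the Planckian parameter Gamma/(kB T), with
# Gamma = hbar/tau and 1/tau = n e^2 rho(T) / m for a 2D system
# (rho in ohms per square, n in m^-2).  SI constants:
hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg

def planckian_parameter(rho_T, n, m, T):
    """Gamma/(kB T) from the T-dependent part of the sheet resistivity,
    i.e., after the residual rho_0 has already been subtracted."""
    inv_tau = n * e**2 * rho_T / m   # inverse of Eq. (2)
    return hbar * inv_tau / (kB * T)

# Illustrative 2D electron system: n = 1e15 m^-2, m = 0.19 m_e
# (Si transverse mass), rho(T) = 1000 ohm/sq at T = 1 K.
p = planckian_parameter(1000.0, 1e15, 0.19 * m_e, 1.0)
print(p)  # order unity, i.e., near the Planckian bound
```

For these illustrative inputs the parameter comes out of order unity, which is the generic scale found in the data analysis below.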
In Fig.~\ref{fig:1}, we show the extracted (from the experimental resistivity) dimensionless `Planckian resistivity parameter' $\Gamma/(k_\mathrm{B} T)$, with $\Gamma=\hbar/\tau$ as a function of $T$ for six different 2D samples [Fig.~\ref{fig:1}(a)-(f)] for different carrier densities in each case. The six samples correspond to 3 data sets for $n$-Si, 2 data sets for $p$-GaAs, and 1 data set for $n$-GaAs 2D systems. Since different samples have different disorder and also somewhat different effective thickness of the 2D confinement layers, the results for the resistivity differ from sample to sample even for the same materials (just as the resistivity of different Al samples would differ from sample to sample at low temperatures because of the variations in disorder content).
The important point to note is that in all of these results the dimensionless Planckian parameter (even at its maximum peak value) is always less than $10$, and is often less than $2$. There is considerable noise in the results of Fig.~\ref{fig:1} because of the subtraction of $\rho_0$ through $T=0$ extrapolation and because the original experimental resistivity already has quite a bit of noise in it (and is typically plotted on a log scale, making the extraction of $\tau$ from the published results subject to some inherent errors).
We emphasize a particularly salient feature of the results in Fig.~\ref{fig:1} with significant relevance to the Planckian debate. As is obvious from Fig.~\ref{fig:1}, there is nothing special about the precise Planckian bound with $\Gamma=k_\mathrm{B} T$, and the experimental $\Gamma/(k_\mathrm{B} T)$ rises above the Planckian bound and then drops below it smoothly as a function of temperature at all densities in all samples with nothing special happening at the Planckian point of $\hbar/\tau=k_\mathrm{B} T$. By fine-tuning and post-selection, one could choose a set of results [see, e.g., $0.9$ density curve in Fig.~\ref{fig:1}(c) and $34.6$ density curve in Fig.~\ref{fig:1}(e)] where the Planckian bound $\hbar/\tau=k_\mathrm{B} T$ holds approximately over a finite $T$-range, but this would reflect purely non-generic confirmation bias since the whole set of results presented in Fig.~\ref{fig:1} for many samples over large ranges of temperature and carrier density clearly establish empirically that there is nothing special about the precise Planckian bound, with $\Gamma/(k_\mathrm{B} T)$ varying above or below unity smoothly -- in fact, in Fig.~\ref{fig:1}(a) and (d), the ratio remains always below and above unity respectively. The important point is that the dimensionless Planckian parameter never exceeds $10$, thus there indeed seems to be an approximate empirical thermal bound on the temperature-dependent transport scattering rate.
Although we cannot comment definitively on the Planckian transport analyses of other systems published in the literature, the real possibility of confirmation bias cannot be ruled out, particularly since the actual precise values of $m/n$ in Eq.~(\ref{eq:finite_temperature_resistivity}) are never quite known in strongly correlated materials with complicated Fermi surfaces (in our systems the Fermi surface is always a circle and the relevant parameters are well known) and, additionally, the subtraction of the residual resistivity is always fraught with some error. The important physics here is not that there are fine-tuned situations where the Planckian condition $\hbar/\tau=k_\mathrm{B} T$ may appear to be satisfied, but the surprising fact that the super-Planckian behavior with $\hbar/\tau>k_\mathrm{B} T$ appears to be bounded within a factor of $5$ above the Planckian bound, whereas the sub-Planckian behavior with $\hbar/\tau < k_\mathrm{B} T$ persists to arbitrarily low values of the dimensionless Planckian parameter $\Gamma/(k_\mathrm{B} T)$. That the Planckian bound empirically applies within a factor of $10$ in all the semiconductor transport data we analyzed comes as a real surprise to us.
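Concretely, the extraction behind Fig.~\ref{fig:1} amounts to inverting the Drude relation for the $T$-dependent part of the resistivity, $\rho-\rho_0$. A minimal sketch (the sample parameters and resistivity values below are illustrative placeholders in the experimentally relevant range, not actual data from the cited experiments):

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J/K
E = 1.602176634e-19      # C
ME = 9.1093837015e-31    # kg

def planckian_parameter(rho, rho0, n, m, T):
    """Gamma/(k_B T) with Gamma = hbar/tau, where tau is extracted from the
    T-dependent part of the 2D Drude resistivity: 1/tau = n e^2 (rho - rho0)/m.
    rho, rho0: sheet resistivities in ohms/square; n in m^-2; m in kg; T in K."""
    gamma = HBAR * n * E**2 * (rho - rho0) / m
    return gamma / (KB * T)

# illustrative numbers only (not actual data): n-GaAs-like parameters
n = 1e10 * 1e4            # 1e10 cm^-2 converted to m^-2
m = 0.067 * ME            # GaAs conduction-band effective mass
print(planckian_parameter(900.0, 600.0, n, m, 0.3))   # -> ~0.32, sub-Planckian
```

Since $\Gamma$ depends on the extrapolated $\rho_0$ and on $m/n$, any uncertainty in those inputs propagates directly into the dimensionless Planckian parameter, which is the point made above.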
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{Figure1.pdf}
\caption{Planckian parameters [defined as $\Gamma(T)/k_\mathrm{B}T$, where $\Gamma(T)=\hbar/\tau(T)$] as a function of $T$, calculated using the experimental resistivities of various semiconducting materials: (a) $n$-GaAs \cite{lillyResistivityDilute2D2003a}; (b), (c) $p$-GaAs \cite{nohInteractionCorrectionsTwodimensional2003, manfraTransportPercolationLowDensity2007a}; and (d)-(f) $n$-Si \cite{tracyObservationPercolationinducedTwodimensional2009a, hwangValleydependentTwodimensionalTransport2013}. Each line with a different color corresponds to a different carrier density $n$, with numbers along the lines representing the corresponding carrier density in units of $10^{10}\,\mathrm{cm}^{-2}$. }
\label{fig:1}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{Figure2.pdf}
\caption{The experimental resistivity data (open circles) corresponding to the Planckian results in Fig.~\ref{fig:1}, plotted along with the best-fit linear curves (dashed curves) at low temperatures. Each curve corresponds to a different carrier density in the same way as in Fig.~\ref{fig:1}.}
\label{fig:2}
\end{figure}
For the sake of completeness, we present in Fig.~\ref{fig:2} the original experimental density- and temperature-dependent resistivity for each sample corresponding to the Planckian results presented in Fig.~\ref{fig:1}, showing also the extrapolation to $\rho_0$ used in our analysis. We emphasize that all our Planckian analysis of the experimental data (Fig.~\ref{fig:1}) is based entirely on the measurements and extrapolations shown in Fig.~\ref{fig:2}. There are numerous other 2D transport results (mostly on Si- and GaAs-based doped 2D semiconductor systems) in the literature published during 1995-2015 showing very similar behavior at low temperatures, manifesting strongly $T$-dependent $\rho(T)$ at low temperatures ($<10\,\mathrm{K}$) without any direct electron-phonon scattering seemingly contributing to the metallic resistivity. All of these 2D systems are relatively dilute, i.e., low-density systems with rather low Fermi temperatures ($5$--$50\,\mathrm{K}$), and are of high quality, i.e., $\rho_0$ in the metallic regime is typically well below $h/2e^2 \sim 10^4$~ohms. The systems undergo a relatively sharp density-induced effective metal-insulator crossover at a strongly system-dependent `critical' density, but our interest is entirely focused on the effective low-temperature metallic phase at densities above this critical density, and we have nothing to say in the current work about the 2D metal-insulator transition itself or about the insulating phase below the critical density. Planckian physics applies only to the metallic situation.
In discussing Figs.~\ref{fig:1} and \ref{fig:2}, we start by describing the generic features of the measured resistivity shown in Fig.~\ref{fig:2}: (1) $\rho$ first increases with $T$, rising rather rapidly and approximately linearly at low temperatures, reaching a peak value which, depending on the system and the carrier density, can be 10$\%$ to 100$\%$ larger than $\rho_0$ at $T=0$ within a small overall temperature increase of a few Kelvin; (2) the relative temperature dependence $\rho(T)/\rho_0$ depends strongly on the carrier density in the same sample, with the temperature dependence increasing with decreasing density; (3) at high density, the temperature dependence, while still linear, is very weak; (4) $\rho(T)$ decreases with increasing $T$ beyond the density- and sample-dependent peak, but at higher densities, where the overall $T$-dependence is generally weak, no such peak in $\rho$ appears; (5) the slow decrease of the high-temperature resistivity with increasing $T$ at lower densities apparent in some of the results of Fig.~\ref{fig:2} is neither a `resistivity saturation' nor `an insulating phase': it is simply the temperature dependence of a classical metallic resistivity, where increasing $T$ beyond the quantum-classical crossover (occurring at low temperatures for low carrier densities) leads to a slowly decreasing $\rho(T)$ with increasing $T$ \cite{dassarmaLowdensityFinitetemperatureApparent2003}. We note that the temperature scale for the $T$-dependent resistivity in Fig.~\ref{fig:2} is set by $T/T_\mathrm{F}$, with the Fermi temperature $T_\mathrm{F}$ being proportional to the carrier density, so the temperature dependence weakens at higher densities. We emphasize that at the lower carrier densities, where the quantum-classical crossover is manifest, the typical Fermi temperature in these samples is of $\mathcal{O}(10\,\mathrm{K})$.
The consequence is that the dimensionless Planckian parameter $\Gamma/(k_\mathrm{B} T)$ plotted in Fig.~\ref{fig:1} as a function of $T$ for various densities typically shows a maximum between $0.1\,\mathrm{K}$ and $5\,\mathrm{K}$, depending on the sample and the carrier density, but the largest peak value [Fig.~\ref{fig:1}(d)] is only $\sim 6$, whereas most of the peak values are around unity (or below). Thus, one inevitable empirical conclusion based on Fig.~\ref{fig:1} is that the extracted scattering rate is bounded from above by $k_\mathrm{B}T$ within a factor of $2$--$6$. The largest value of the dimensionless Planckian parameter occurs mostly at intermediate temperatures of $\mathcal{O}(1\,\mathrm{K})$ for the Si-based samples [Figs.~\ref{fig:1}(d)-(f)], whereas for the GaAs samples [Figs.~\ref{fig:1}(a)-(c)] the peak is around $\mathcal{O}(0.1\,\mathrm{K})$. At higher $T$, the dimensionless Planckian parameter invariably decreases to strongly sub-Planckian values, becoming much less than unity. We emphasize that the same sample may manifest super-Planckian ($\Gamma> k_\mathrm{B} T$), Planckian ($\Gamma\sim k_\mathrm{B} T$), and sub-Planckian ($\Gamma < k_\mathrm{B}T$) behavior at different densities and temperatures, clearly establishing that the idea of a strange Planckian metallicity with $\Gamma \sim k_\mathrm{B} T$ is not a precise, well-defined, parameter-independent concept.
What we find to be the most interesting empirical finding in Fig.~\ref{fig:1} is that the dimensionless Planckian parameter in doped 2D semiconductors manifests sub-Planckian ($<1$), Planckian ($\sim 1$), or super-Planckian ($>1$) behavior in the same sample as the carrier density and temperature are varied. The highest value of the dimensionless Planckian parameter is mostly achieved at the lowest carrier density (where the $T$-dependence of the measured resistivity in Fig.~\ref{fig:2} is also the strongest), and even this highest value is typically only of the order of $2$--$6$, never exceeding the Planckian limit of unity by more than an order of magnitude.
In the next section, we discuss a Boltzmann transport theory for the empirical results presented in the current section, based on the carriers being resistively scattered by temperature- and momentum-dependent screened Coulomb disorder (i.e., $T$-dependent Friedel oscillations).
\section{Transport Theory} \label{sec:3}
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{diagram_figure.pdf}
\caption{Feynman diagrams for (a) the screened electron-impurity interaction ($u_i$), (b) the disorder-averaged single-particle Green's function ($G$), and (c) the impurity self-energy ($\Sigma$) within the leading-order Born approximation. $n_i$, $G_0$, $v_c$, and $\Pi$ represent the impurity density, the bare Green's function, the bare Coulomb interaction, and the bare polarizability, respectively. Here we suppress the momentum labels for visual clarity. }
\label{fig:transport_diagram_figure}
\end{figure}
We consider resistive scattering of 2D carriers with a standard parabolic energy dispersion, 2D density $n$, effective mass $m$, and background lattice dielectric constant $\kappa$, from random quenched charged impurities of effective 2D concentration $n_i$. The Coulomb coupling between the carriers and the impurities is screened by the momentum-dependent dielectric screening function of the carriers themselves, $\epsilon(q)$, where the density ($n$) and temperature ($T$) dependence of the screening function is suppressed for notational convenience. We treat the finite-temperature and finite-momentum screening in the mean-field random phase approximation (RPA), which expresses the dielectric screening in terms of the electron-electron long-range Coulomb interaction and the 2D noninteracting finite-momentum and finite-temperature polarizability function, $\Pi(q)$. (This RPA screening theory is essentially the leading-order theory in the $1/N$ expansion assuming $N$ fermionic flavors, as used extensively in quantum field theories.) Given the screened electron-impurity interaction, we calculate the resistivity using the leading-order Boltzmann transport theory, treating the screened disorder scattering in the Born approximation. Figure~\ref{fig:transport_diagram_figure} provides the schematic Feynman diagrams for the equivalent screened-disorder approximation for the single-particle Green's function, but of course we obtain the full transport coefficient including the appropriate vertex correction, which is automatically guaranteed by the Boltzmann transport theory.
Since the interaction between the carriers and the random impurities is finite-ranged here (`screened Coulomb disorder'), the vertex corrections are quantitatively important for doped semiconductors, unlike in normal 3D metals, where the electron-impurity scattering is always $s$-wave and vertex corrections are unimportant for the resistivity.
The basic quantity entering the Boltzmann resistivity formula is the thermally averaged scattering time $\tau$ entering Eq.~(\ref{eq:total_resistivity}). The main calculation is obtaining the scattering rate, $1/\tau$, at finite temperatures as a function of carrier density and temperature (in the leading-order theory the scattering rate is proportional to the impurity density $n_i$, which simply provides an overall resistivity scale). All the equations going into our transport theory are shown below.
The thermal average of the scattering time is given by
\begin{equation}
\tau=\frac{ \int d\varepsilon_\mathrm{\bm k} \tau(\mathrm{\varepsilon_\mathrm{\bm k}}) \varepsilon_\mathrm{\bm k}\left(-\frac{ \partial f(\varepsilon_\mathrm{\bm k})}{\partial\varepsilon_\mathrm{\bm k}}\right) }
{\int d \varepsilon_\mathrm{\bm k} \varepsilon_\mathrm{\bm k} \left(-\frac{ \partial f(\varepsilon_\mathrm{\bm k})}{\partial\varepsilon_\mathrm{\bm k}}\right) },
\label{eq:tau_finite_T}
\end{equation}
where $\varepsilon_\mathrm{\bm k}=\hbar^2k^2/2m$ is the energy dispersion with $m$ denoting the effective mass, $f(\varepsilon)=1/\{1+\exp{[(\varepsilon-\mu(T))/k_\mathrm{B}T]}\}$ is the Fermi-Dirac distribution function, $\mu(T)$ is the chemical potential with the Fermi energy $E_\mathrm{F} =\mu(T=0)$, and $\tau({\varepsilon_\mathrm{\bm k}})$ is the zero-temperature scattering time, which, in the leading-order Boltzmann transport theory (Fig.~\ref{fig:transport_diagram_figure}), is given by
\begin{equation}
\frac{1}{\tau({\varepsilon_\mathrm{\bm k}})}=\frac{2\pi n_i}{\hbar}
\sum_\mathrm{\bm k'}
\left|u_\mathrm{i}(\bm k - \bm k')\right|^2
(1-\cos{ \theta})\delta(\varepsilon_\mathrm{\bm k}-\varepsilon_\mathrm{\bm k'}).
\label{eq:tau_zero_T}
\end{equation}
Here $u_\mathrm{i}(\bm q)=v_i(q)/\epsilon(q,T)$ is the screened electron-impurity Coulomb interaction with $v_i(q)=2\pi e^2/\kappa q$, and $\epsilon(q,T)=1+v_c(q)\Pi(q,T)$ is the RPA screening function, where $v_c(q)=2\pi e^2/\kappa q$ is the electron-electron Coulomb interaction and $\Pi(q,T)$ is the finite-temperature 2D polarizability (defined here as a positive quantity), which can be obtained from the zero-temperature polarizability:
\begin{equation}
\Pi(q,T)=\int_\mathrm{0}^{\infty} d\epsilon \frac{\Pi(q)|_{\epsilon_\mathrm{F}=\epsilon}}{4k_\mathrm{B}T\cosh^2{\frac{\epsilon-\mu(T)}{2k_\mathrm{B}T}}},
\label{eq:pol_finite_T}
\end{equation}
where
\begin{equation}
\Pi(q)=
\frac{gm}{2\pi\hbar^2}\left[1 - \Theta(q-2k_\mathrm{F})\frac{\sqrt{q^2- 4k^2_\mathrm{F} }}{q}\right],
\label{eq:pol_zero_T}
\end{equation}
is the zero-temperature static 2D polarizability. Here $E_\mathrm{F}$ ($k_\mathrm{F}$) is the Fermi energy (wavevector), $g$ denotes the combined valley and spin degeneracy, and $\Theta(x)$ is the Heaviside step function.
Finite temperature smooths the $q=2k_\mathrm{F}$ kink in the 2D polarizability algebraically even for $T\ll T_\mathrm{F}$, thus contributing a nonanalytic linear-in-$T$ correction to the resistivity, since $2k_\mathrm{F}$ backscattering is the dominant scattering process contributing to the resistivity; this 2D-specific thermal effect disappears if the momentum-independent long-wavelength Thomas-Fermi approximation is used. Such a linear-in-$T$ term is absent in the Sommerfeld expansion and for 3D electrons scattering from screened disorder--the effect is intrinsic to 2D screening properties.
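The thermal average of Eq.~(\ref{eq:pol_finite_T}) is straightforward to evaluate numerically and directly exhibits the contrast between the exponentially weak thermal correction at $q=0$ and the $\mathcal{O}(\sqrt{T})$ suppression at $q=2k_\mathrm{F}$ quoted in the caption of Fig.~\ref{fig:4}. A minimal sketch (our own illustration, in units $E_\mathrm{F}=k_\mathrm{F}=1$, with $\Pi$ in units of $D_0$ and the exact 2D chemical potential $\mu/E_\mathrm{F}=t\ln(e^{1/t}-1)$, $t=T/T_\mathrm{F}$):

```python
import numpy as np
from scipy.integrate import quad

def pi0(q, kF):
    """Eq. (pol_zero_T): zero-T static 2D polarizability, in units of D0."""
    if q <= 2.0 * kF:
        return 1.0
    return 1.0 - np.sqrt(q * q - 4.0 * kF * kF) / q

def mu_of_t(t):
    """Exact 2D chemical potential mu/E_F = t ln(e^{1/t} - 1), t = T/T_F."""
    return t * np.log(np.expm1(1.0 / t))

def pi_T(q, t):
    """Eq. (pol_finite_T): finite-T polarizability as a thermal average of
    pi0 over Fermi levels eps, with kF(eps) = sqrt(eps) in these units."""
    mu = mu_of_t(t)
    hi = mu + 40.0 * t
    # split the integration at the kink eps = q^2/4 (where q = 2 kF(eps))
    pts = [p for p in (0.25 * q * q, mu) if 0.0 < p < hi] or None
    f = lambda e: pi0(q, np.sqrt(e)) / (4.0 * t * np.cosh((e - mu) / (2.0 * t)) ** 2)
    return quad(f, 0.0, hi, points=pts, limit=200)[0]

print(pi_T(0.0, 0.2))   # -> ~0.99326, i.e. 1 - exp(-T_F/T): exponentially weak
print(pi_T(2.0, 0.2))   # 2kF response already suppressed by O(sqrt(t))
```

The $q=0$ value reproduces the closed form $1-e^{-T_\mathrm{F}/T}$, while the $q=2k_\mathrm{F}$ value is visibly suppressed even at modest $t$, which is the source of the strong metallic $T$-dependence discussed above.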
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{Figure4.pdf}
\caption{Finite-temperature 2D polarizability as a function of (a) wavevector for various temperatures $T/T_\mathrm{F}=0.0$, $0.1$, $0.2$, $0.5$, $1.0$, and as a function of (b), (c) temperature at a fixed wavevector of (b) $q/k_\mathrm{F}=0.0$ and (c) $q/k_\mathrm{F}=2.0$. The insets in (b) and (c) show zoom-ins of the low-temperature region, comparing the numerical results (black solid lines) with the known analytical asymptotic forms (red dashed lines) given by $\Pi(q=0,T)/D_0=1-\exp(-T_\mathrm{F}/T)$ and $\Pi(q=2k_\mathrm{F},T)/D_0=1-\sqrt{\pi/4}(1-\sqrt{2})\zeta(1/2)\sqrt{T/T_\mathrm{F}}$. (d)-(f) Finite-temperature 2D polarizability in real space as a function of (d) distance $r$ for various temperatures $T/T_\mathrm{F}=0.0$, $0.1$, $0.2$, $0.5$, $1.0$, and as a function of (e), (f) temperature at a fixed distance of (e) $k_\mathrm{F}r=5/4\pi$ and (f) $k_\mathrm{F}r=7/4\pi$, showing a strong temperature dependence in the low-temperature regime.}
\label{fig:4}
\end{figure}
This is illustrated in Fig.~\ref{fig:4}, where we plot the calculated finite-temperature 2D polarizability (i.e., the `bubble' in Fig.~\ref{fig:transport_diagram_figure}) as a function of momentum for different values of $T/T_\mathrm{F}$. The thermal smearing of the $2k_\mathrm{F}$-kink is $\mathcal{O}(T^{1/2})$, whereas at long wavelength (i.e., zero momentum), which defines Thomas-Fermi screening, the thermal smearing is exponentially weak; a long-wavelength screening approximation, the standard approximation in semiconductor physics, would therefore completely miss the strong temperature dependence observed experimentally. We also show the 2D polarizability in real space, bringing out the strong temperature dependence of the Friedel oscillations in screening associated with the strong thermal smearing of the $2k_\mathrm{F}$-kink in momentum space. These strongly temperature-dependent Friedel oscillations scatter carriers, leading to the strong temperature dependence of the resistivity through the strong temperature dependence of the effective screened disorder. This effect is completely lost in any long-wavelength screening approximation, but $2k_\mathrm{F}$ scattering is the most important resistive scattering process, and hence the temperature dependence of $2k_\mathrm{F}$ screening is crucial.
It turns out that the above equations allow for low-temperature ($T\ll T_\mathrm{F}$) and high-temperature ($T\gg T_\mathrm{F}$) analytical expressions for the 2D resistivity by appropriately expanding the finite-temperature screening function. Note that this expansion is much more sophisticated than the simple Sommerfeld expansion, which would always give an $\mathcal{O}(T^2)$ correction at low temperatures arising from the expansion of the Fermi functions. The thermal expansion involves careful consideration of the nonanalyticity of the 2D polarizability at $q=2k_\mathrm{F}$, i.e., the strong $2k_\mathrm{F}$ kink as a function of momentum arising from the Heaviside step function in Eq.~(\ref{eq:pol_zero_T}). The analytical results are:
\begin{equation}
\begin{aligned}
\rho(T \ll T_\mathrm{F}) &\approx \rho_0\left[\frac{2x}{1+x}\left(\frac{T}{T_\mathrm{F}}\right) +
\frac{2.646x}{(1+x)^2} \left(\frac{T}{T_\mathrm{F}}\right)^{3/2} \right] \\
\rho(T \gg T_\mathrm{F}) &\approx \rho_1 \left(\frac{T_\mathrm{F}}{T}\right) \left[1-\frac{3\sqrt x}{4}\left(\frac{T_\mathrm{F}}{T}\right)^{3/2} \right]
\end{aligned}
\label{eq:resistivity_asymptotic_low_and_large_T}
\end{equation}
where $\rho_0=\rho(T=0)$, $\rho_1=(h/e^2) (n_i/n) (2\pi x^2/g^2)$, and $x=q_\mathrm{TF}/2k_\mathrm{F}$, with $q_\mathrm{TF}= 2 m e^2/\kappa\hbar^2$ the 2D Thomas-Fermi screening wavenumber. The key dimensionless parameters controlling the temperature dependence of the resistivity are $x=q_\mathrm{TF}/2k_\mathrm{F}$ and $T/T_\mathrm{F}$: for low (high) density, both are large (small), since $T_\mathrm{F}$ and $k_\mathrm{F}$ are proportional to $n$ and $n^{1/2}$, respectively (and $q_\mathrm{TF}$ is independent of density in 2D), leading to strong (weak) temperature dependence.
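As a numerical cross-check of the leading low-$T$ coefficient $2x/(1+x)$, the full Boltzmann integrals of Eqs.~(\ref{eq:tau_finite_T})-(\ref{eq:pol_zero_T}) can be put in dimensionless form and evaluated directly. The sketch below (our own illustration, not the production code behind Fig.~\ref{fig:5}) works in units $\hbar=k_\mathrm{F}=E_\mathrm{F}=1$ with $g=2$ and drops the overall $n_i$-dependent prefactor, since only the ratio $\rho(T)/\rho_0$ is needed:

```python
import numpy as np
from scipy.integrate import quad

# Units: hbar = k_F = E_F = 1 (so eps = k^2), g = 2, t = T/T_F, x = q_TF/2k_F.
TH, TW = np.polynomial.legendre.leggauss(80)   # angular nodes
S, SW = np.polynomial.legendre.leggauss(32)    # thermal-average nodes

def pi0(q, kF):
    """Eq. (pol_zero_T): zero-T static 2D polarizability in units of D0."""
    if q <= 2.0 * kF:
        return 1.0
    return 1.0 - np.sqrt(q * q - 4.0 * kF * kF) / q

def mu_of_t(t):
    """Exact 2D chemical potential mu/E_F = t ln(e^{1/t} - 1)."""
    return t * np.log(np.expm1(1.0 / t))

def pi_T(q, t):
    """Eq. (pol_finite_T): Maldague thermal average of pi0, kF(eps)=sqrt(eps)."""
    if t == 0.0:
        return pi0(q, 1.0)
    mu = mu_of_t(t)
    hi = mu + 40.0 * t
    pts = [p for p in (0.25 * q * q, mu) if 0.0 < p < hi] or None
    f = lambda e: pi0(q, np.sqrt(e)) / (4.0 * t * np.cosh((e - mu) / (2.0 * t)) ** 2)
    return quad(f, 0.0, hi, points=pts, limit=200, epsrel=1e-6)[0]

def inv_tau(eps, t, x):
    """Eq. (tau_zero_T) angular integral: 1/tau(eps) up to an n_i-dependent
    constant, with screened potential |u(q)|^2 ~ 1/[q + 2x Pi(q,T)]^2."""
    k = np.sqrt(eps)
    th = 0.5 * np.pi * (TH + 1.0)          # map nodes to (0, pi); symmetric
    w = 0.5 * np.pi * TW
    q = 2.0 * k * np.sin(0.5 * th)
    return 2.0 * sum(wi * (1.0 - np.cos(ti)) / (qi + 2.0 * x * pi_T(qi, t)) ** 2
                     for wi, ti, qi in zip(w, th, q))

def rho(t, x):
    """rho(T) up to a constant: 1/<tau>, Eq. (tau_finite_T), using the exact
    substitution s = tanh[(eps - mu)/2t], for which (-df/deps) deps = ds/2."""
    if t == 0.0:
        return inv_tau(1.0, 0.0, x)
    mu = mu_of_t(t)
    eps = np.maximum(mu + 2.0 * t * np.arctanh(S), 1e-9)
    num = 0.5 * sum(wi * e / inv_tau(e, t, x) for wi, e in zip(SW, eps))
    den = 0.5 * float(np.sum(SW * eps))
    return den / num

x = 3.0
print((rho(0.05, x) / rho(0.0, x) - 1.0) / 0.05)  # close to 2x/(1+x) = 1.5
```

For $x=3$ the computed low-$T$ slope of $\rho(T)/\rho_0$ should come out close to $2x/(1+x)=1.5$ (plus the small $t^{1/2}$ correction), illustrating that the linear-in-$T$ resistivity here is a screening effect, not an inelastic one.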
Before presenting our full numerical transport results for various 2D semiconductor systems (without any expansion in $T/T_\mathrm{F}$), we first provide an analytical Planckian analysis of the low-temperature results establishing that the analytical result, rather amazingly, is consistent, within a factor of 4, with the Planckian bound conjecture. Using Eq.~(\ref{eq:total_resistivity}) and the leading linear-order term of Eq.~(\ref{eq:resistivity_asymptotic_low_and_large_T}), we write:
\begin{equation}
\rho(T) = \rho - \rho_0 = \rho_0 \frac{2x}{1+x}\frac{T}{T_\mathrm{F}}.
\label{eq:resistivity_asymptotic_low_T}
\end{equation}
We note that $\rho(T)$ is proportional to $\rho_0$, thus increasing linearly with disorder $n_i$, since $\rho_0$ is proportional to $n_i$ in the leading-order transport theory. Since we are interested in the metallic transport property and in the maximum possible value of metallic $\rho(T)$ in order to test the Planckian conjecture, we use the Ioffe-Regel-Mott (IRM) criterion, choosing the maximum possible metallic value of $\rho_0$ to be the 2D IRM limit of $h/e^2$, at which the mean free path equals $1/k_\mathrm{F}$. Putting $\rho_0 = h/e^2$, we automatically obtain the highest possible value of the dimensionless Planckian parameter $(\hbar/\tau)/(k_\mathrm{B} T)$, since the highest $\rho_0$ implies the highest $\rho(T)$, which, in turn, implies the highest $\hbar/\tau$. Putting $\rho_0 =h/e^2$ and denoting the corresponding finite-temperature scattering time as $\tau_\mathrm{min}$ (implying that $1/\tau_\mathrm{min}$ is the maximum scattering rate consistent with a 2D metal) and doing simple algebra, we obtain:
\begin{equation}
\hbar/\tau_\mathrm{min} = 4k_\mathrm{B} T
\end{equation}
Thus, the maximum possible scattering rate in our theory is limited from above by $4k_\mathrm{B} T$, indicating that the dimensionless Planckian parameter cannot be much larger than unity. If we take into account the fact that our leading-order Boltzmann theory itself most likely breaks down substantially before the strong-scattering IRM limit is reached, we conclude that:
\begin{equation}\label{eq:equalityeq10}
\hbar/\tau(T) < 4 k_\mathrm{B} T,
\end{equation}
which may be construed as the modified Planckian hypothesis for the transport problem under consideration here, i.e., the temperature-dependent 2D resistivity limited by screened Coulomb disorder scattering. We emphasize that although we use the IRM criterion on $\rho_0$ to obtain the basic inequality on the scattering rate, the limit thereby imposed on the corresponding $T=0$ scattering rate is simply that the Fermi surface remains well-defined in the presence of disorder scattering. Using the IRM limit on $\rho_0$ itself, we get the following inequality involving the $T=0$ scattering time $\tau_0$:
\begin{equation}\label{eq:IRM}
\hbar/2\tau_0 < E_\mathrm{F}
\end{equation}
which is nothing other than the IRM criterion defining coherent quasiparticles, i.e., with the disorder-induced $T=0$ level broadening ($\hbar/2\tau_0$) being less than the Fermi energy. Thus, the generic assumption of coherent quasiparticle transport logically leads to a Planckian bound (within a factor of $4$) on our theoretical transport scattering rate! This shows that the often-claimed, uncritical assertion that saturation of the Planckian bound implies incoherent non-quasiparticle and non-Fermi-liquid transport cannot be generically correct, since in our system the scattering rate can exceed the Planckian value by up to a factor of $4$ right at the limit of coherent transport.
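For readers wishing to retrace the `simple algebra' behind the bound, one explicit route is the following sketch (assuming $g=2$, for which $E_\mathrm{F}=\pi\hbar^2 n/m$; a general degeneracy $g$ would replace the final factor of $4$ by $2g$):

```latex
\frac{\hbar}{\tau(T)}
= \frac{n e^2 \hbar}{m}\,(\rho-\rho_0)
= \frac{n e^2 \hbar}{m}\,\frac{h}{e^2}\,\frac{2x}{1+x}\,\frac{k_\mathrm{B}T}{E_\mathrm{F}}
= \frac{2\pi\hbar^2 n}{m}\,\frac{2x}{1+x}\,\frac{m}{\pi\hbar^2 n}\,k_\mathrm{B}T
= \frac{4x}{1+x}\,k_\mathrm{B}T \le 4 k_\mathrm{B}T,
```

with the bound saturated only in the strong-screening limit $x=q_\mathrm{TF}/2k_\mathrm{F}\gg1$.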
Of course, one could argue that a factor of 4 is not significant for IRM-type dimensional arguments \cite{bruinSimilarityScatteringRates2013}.
We mention that the correction to the Planckian inequality of Eq.~(\ref{eq:equalityeq10}) arising from the next-to-leading-order term of $\mathcal{O}((T/T_\mathrm{F})^{3/2})$ in Eq.~(\ref{eq:resistivity_asymptotic_low_and_large_T}) is rather small for $T<T_\mathrm{F}$, since it is bounded by $1.323/(1 + x)$, and $x\gg1$ is needed for any strong $T$-dependence to manifest itself anyway. This provides an analytical explanation for why the Planckian bound applies to our numerical results within a factor of $5$, in agreement with the corresponding experimental results in Fig.~\ref{fig:1}, where the bound is always satisfied within a factor of $6$. These theoretical arguments obviously apply only to the 2D doped-semiconductor Planckian behavior, because it is not known whether similar considerations would apply to the other Planckian metals studied in the literature.
We find this analytical derivation of an effective Planckian conjecture for our transport phenomenon to be a rather unexpected result, since the IRM criterion is a $T=0$ constraint for metallic transport which has nothing to do with temperature, yet the finite-temperature scattering rate is absolutely constrained from above by $4k_\mathrm{B} T$ through the IRM constraint imposed on the $T=0$ resistivity. One may be concerned that this upper-bound argument is based on the low-temperature analytic result [Eq.~(\ref{eq:resistivity_asymptotic_low_T})], so that it cannot be ruled out that the constraint fails at higher temperatures, with $\hbar/\tau>4k_\mathrm{B}T$ for $T>T_\mathrm{F}$. In the subsequent discussion, however, by carefully analyzing the high-temperature analytical results and presenting the full numerical results (without assuming $T\ll T_\mathrm{F}$), we show that at all temperatures the Planckian parameter is of the order of only $2$--$4$, in agreement with our modified Planckian hypothesis.
We mention one other aspect of our analytical results [the second part of Eq.~(\ref{eq:resistivity_asymptotic_low_and_large_T}) for $T\gg T_\mathrm{F}$] in approximate agreement with experiment on the high-temperature side, where $\rho(T)$ decreases with increasing $T$ in an approximately $1/T$ manner, in accordance with the experimental findings presented in Sec.~\ref{sec:2}. Since $\rho(T)$ increases linearly in $T$ for $T\ll T_\mathrm{F}$ and decreases as $1/T$ for $T\gg T_\mathrm{F}$, there must be a quantum-to-classical crossover in the 2D transport around $T \sim T_\mathrm{F}$, where $\rho(T)$ has a local maximum. This resistivity maximum is the most important regime for testing the validity of the Planckian hypothesis. Actual numerics shows that the maximum occurs roughly around $T_\mathrm{F}/3$, with $\rho(T)$ increasing (decreasing) with $T$ below (above) this temperature.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{Figure5.pdf}
\caption{Planckian parameter numerically calculated as a function of $T$ using the Boltzmann transport theory and the material parameters corresponding to the experimental systems in Figs.~\ref{fig:1} and \ref{fig:2}(a)-(f). The dashed curves represent the low- and high-$T$ asymptotic results [Eq.~(\ref{eq:resistivity_asymptotic_low_and_large_T})], which agree well with the full numerical results (solid curves). Each curve corresponds to a different carrier density in the same way as in Fig.~\ref{fig:1}. }
\label{fig:5}
\end{figure}
In Figs.~\ref{fig:5}(a)-(f) we present [by solving Eqs.~(\ref{eq:tau_finite_T})-(\ref{eq:pol_zero_T})] the full numerically calculated dimensionless Planckian parameter as a function of temperature ($T$) and density ($n$) corresponding to the systems shown in Figs.~\ref{fig:1} and \ref{fig:2} (i.e., using the theoretical materials parameters corresponding to the experimental systems). We also show in each panel of Fig.~\ref{fig:5} the asymptotic low-$T$ and high-$T$ analytical results, which agree with the full numerical results in their respective regimes and deviate from them at higher and lower $T$, respectively. Our theory includes the realistic (but nonessential) details of each experimental sample, such as the appropriate valley degeneracy and the quasi-2D width of each 2D system (which typically suppresses the effective interaction through a form factor arising from the confinement wavefunctions), and we assume, to keep the number of parameters to a minimum, that the charged impurities are randomly distributed in the 2D layer with an effective 2D density of $n_i$ (relaxing this approximation leads to better quantitative agreement with experiment at the cost of adding more unknown parameters, which is unnecessary in the context of the current work). We fix each value of $n_i$ (which only sets the overall scale, not the $T$- and $n$-dependence) by obtaining the best fit with experiment at the lowest temperature. Thus the random impurity density $n_\mathrm{i}$, defining the overall resistivity scale but not its temperature dependence, is the only unknown parameter of our model.
We summarize the salient features of our theoretical results presented in Fig.~\ref{fig:5}: (1) The dimensionless Planckian parameter mostly satisfies the Planckian conjecture, within a factor of $10$, being of $\mathcal{O}(1-10)$ or less quite generally; (2) when the parameter exceeds unity, it is only by a factor of $2$--$4$, never by more than an order of magnitude; (3) the theoretical results agree generically qualitatively, and sometimes semi-quantitatively, with the experimental results of Fig.~\ref{fig:1} (this agreement can be made quantitative by using an impurity distribution in the quasi-2D confinement direction, thus adding more parameters to the model in addition to $n_\mathrm{i}$); (4) in general, the Planckian parameter is the largest at the lowest densities [and at intermediate temperatures $\sim\mathcal{O}(T_\mathrm{F}/3)$] for all samples with the behavior being sub-Planckian at higher densities and lower temperatures (again in complete agreement with the experimental data); (5) similar to the experimental findings, the theoretical $\rho (T)$ is linear only for $T\ll T_\mathrm{F}$, deviating from linearity at higher temperatures, but the linearity within the screening theory persists all the way to $T=0$.
We mention in this context that our leading-order Boltzmann theory, while being qualitatively consistent with the experimental results everywhere, becomes less valid at lower densities, since the theory is exact in the $n\gg n_i$ regime and fails completely for $n<n_i$. The results shown in Fig.~\ref{fig:5} obey the $n\gg n_i$ criterion necessary for the applicability of our Boltzmann theory, and the fitted values of $n_i$ are consistent with the specific materials considered in each case.
\section{Self-Energy and Inelastic Scattering Rate} \label{sec:4}
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{self_energy_diagram_figure.pdf}
\caption{Feynman diagrams for (a) the screened Coulomb interaction ($w$) and (b) the RPA self-energy. The notations are the same as in Fig.~\ref{fig:transport_diagram_figure}. }
\label{fig:self_energy_diagram_figure}
\end{figure}
\begin{figure*}[!htb]
\centering
\includegraphics[width=\linewidth]{Figure7.pdf}
\caption{(a) Finite-temperature scattering rates [$\Gamma(T)=\hbar/\tau_\mathrm{ee}(T)$] arising from the electron-electron Coulomb interaction, numerically calculated within the RPA, along with the low-$T$ analytical asymptotic results (dashed lines) for $r_s=0.1$, $1.0$, $2.0$, and $6.0$. The inset shows a zoom-in of the low-temperature region, showing that the numerical and asymptotic results are in good agreement. (b) The power-law exponent $p$ of the numerically calculated scattering rates in (a), which decreases from $2$ to $0.5$ as $T$ increases from $0$ to $20T_\mathrm{F}$, regardless of the value of $r_s$. (c) The Planckian parameter obtained using the results in (a). Here we use the bare electron mass with unity dielectric constant ($\kappa=1$). }
\label{fig:7}
\end{figure*}
In this section, we investigate the Planckian conjecture for the inelastic scattering rate arising from electron-electron Coulomb interaction.
This is unrelated to any transport considerations in our systems, and the motivation is that inelastic scattering from electron-electron interactions may be a natural quantity for Planckian considerations \cite{hartnollPlanckianDissipationMetals2021a}.
We use the RPA, involving the infinite series of polarization bubble diagrams (Fig.~\ref{fig:self_energy_diagram_figure}), to evaluate the imaginary part of the self-energy, which is given by
\begin{align}
\mathrm{Im}\Sigma(\bm k, \omega,T)
\!=&\!\int\!\frac{d^2 q}{(2\pi)^2} \left [n_\mathrm{B}(\hbar\omega-\xi_\mathrm{\bm k+\bm q}) + n_\mathrm{F}(-\xi_\mathrm{\bm k+\bm q}) \right ] \nonumber \\
&\times v_c(\bm q)\mathrm{Im}\left[\frac{1}{\varepsilon(\bm q,\xi_\mathrm{\bm k+\bm q}-\hbar\omega, T)}\right],
\label{eq:imag_self_energy}
\end{align}
where $\xi_\mathrm{\bm k}=\varepsilon_\mathrm{\bm k}-\mu(T)$ and $n_\mathrm{F(B)}$ denotes the Fermi (Bose) distribution function. It should be noted that the Coulomb interaction is dynamically screened by the screening function $\varepsilon(\bm q, \omega, T)=1-v_c(q)\Pi(q,\omega, T)$ varying as a function of the frequency $\omega$, whereas the transport calculation involves only the static dielectric function with no dependence on $\omega$. Similar to the static case, we calculate the finite temperature dynamic polarizability $\Pi(q,\omega, T)$ using the zero temperature dynamic polarizability given by \cite{Stern1967}
\begin{align}
\Pi(\bm q,\omega)=&-\frac{m}{\pi} + \frac{m^2}{\pi q^2}
\left[
\sqrt{\left( \omega+\frac{q^2}{2m} \right)^2-\frac{2E_\mathrm{F} q^2}{2m}}\right.\nonumber \\
&-
\left.\sqrt{\left( \omega-\frac{q^2}{2m} \right)^2-\frac{2E_\mathrm{F} q^2}{2m}}\right],
\label{eq:iso_polar}
\end{align}
and Eq.~(\ref{eq:pol_finite_T}).
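As a numerical cross-check, Eq.~(\ref{eq:iso_polar}) can be evaluated directly with complex arithmetic. The sketch below is our own minimal implementation (not code from the text), in $\hbar=1$ units, writing the combination under the square roots as $(k_\mathrm{F}q/m)^2=2E_\mathrm{F}q^2/m$ (the form that reproduces the standard $2k_\mathrm{F}$ pair-excitation boundary) and selecting the retarded ($\omega\rightarrow\omega+i0^+$) square-root branches by factorization:

```python
import numpy as np

def _sqrt_branch(u, c):
    # sqrt(u**2 - c**2) on the branch that behaves as u at large |u|,
    # obtained by factorizing into principal square roots; this is the
    # branch reached by continuation from the upper half frequency plane.
    return np.sqrt(u - c) * np.sqrt(u + c)

def polarizability_2d(q, omega, m=1.0, EF=1.0, eta=1e-9):
    """Zero-temperature 2D (Stern) dynamic polarizability, hbar = 1.
    A small positive eta enforces the retarded prescription."""
    w = omega + 1j * eta
    a = q * q / (2.0 * m)              # free-particle recoil q^2 / 2m
    c = np.sqrt(2.0 * EF / m) * q      # k_F q / m, sets the 2k_F boundary
    return (-m / np.pi
            + m * m / (np.pi * q * q)
            * (_sqrt_branch(w + a, c) - _sqrt_branch(w - a, c)))
```

As a static sanity check, the function returns $-m/\pi$ (the 2D density of states) for $q<2k_\mathrm{F}$, develops the standard $\sqrt{1-(2k_\mathrm{F}/q)^2}$ correction for $q>2k_\mathrm{F}$, and acquires a nonzero imaginary part only inside the particle-hole continuum.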
We note that the self-energy of Eq.~(\ref{eq:imag_self_energy}), as detailed in Fig.~\ref{fig:5}, is the leading-order self-energy in the dynamically screened electron-electron Coulomb interaction, which is exact in the high-density limit.
Within the on-shell approximation, the inelastic Coulomb scattering rate at the Fermi surface at finite temperature $T$ is given by
\begin{equation}
\hbar/\tau_\mathrm{ee}(T)=2\mathrm{Im}\Sigma(\bm k_\mathrm{F}, \xi_\mathrm{\bm k_\mathrm{F}},T).
\end{equation}
It has recently been shown that at low temperatures, $T\ll T_\mathrm{F}$, the Coulomb scattering rate has the analytical asymptotic form \cite{liaoTwodimensionalElectronSelfenergy2020}
\begin{align}
\hbar/\tau_\mathrm{ee}(T)=&
\frac{\pi}{4}\frac{T^2}{T_\mathrm{F}}\ln{ \frac{\sqrt{2}r_s T_\mathrm{F}}{T} }\nonumber \\
+& \frac{\pi}{12}\left(6 + \ln{2\pi^3} -36 \ln{A}\right)\frac{T^2}{T_\mathrm{F}}\nonumber \\
-&\frac{7\zeta(3)}{\sqrt{2}\pi} \frac{T^3}{r_s T_\mathrm{F}}
\label{eq:low_T_self_energy}
\end{align}
exhibiting the well-known $\hbar/\tau(T)\sim T^2 \ln{T}$ behavior in the $T\ll T_\mathrm{F}$ limit \cite{zhengCoulombScatteringLifetime1996}. Here, $\zeta(s)$ is the Riemann zeta function, $A=1.28243$ is Glaisher's constant, and $r_s$ is the dimensionless Coulomb interaction parameter characterizing the interaction strength, defined as $r_s= m e^2/(\kappa\hbar^2\sqrt{\pi n})$ with $n$ being the carrier density. The RPA theory is exact in the high-density, or equivalently low-$r_s$, limit.
Figure~\ref{fig:7}(a) presents the numerically calculated scattering rate as a function of temperature for various values of $r_s$ along with the low-temperature asymptotic curves [Eq.~(\ref{eq:low_T_self_energy})], showing good agreement between the full numerical and asymptotic results. In Fig.~\ref{fig:7}(b) we plot the power-law exponent of the scattering rate, numerically calculated as $p=d\ln{\Gamma}/d\ln{T}$. Note that $p$ varies from 2 to 0.5 as $T$ increases from $0$ to $20T_\mathrm{F}$, implying that the Planckian parameter, defined as $\Gamma/k_\mathrm{B} T$, increases linearly with $T$ for $T\ll T_\mathrm{F}$ and decreases as $1/\sqrt{T}$ for $T\gg T_\mathrm{F}$, resulting in a local maximum in the intermediate temperature regime around $T\sim T_\mathrm{F}$, as shown in Fig.~\ref{fig:7}(c), which presents the Planckian parameter for various values of $r_s$. It should be noted from Fig.~\ref{fig:7}(c) that the maximum of the Planckian parameter is larger for stronger interaction (i.e., larger $r_s$), but is only of order $2$--$4$, approximately obeying the Planckian hypothesis for the typical range of $r_s$ values of usual two-dimensional materials ($r_s\lesssim6$).
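For concreteness, the quantities plotted in Fig.~\ref{fig:7} can be generated in the low-$T$ regime directly from Eq.~(\ref{eq:low_T_self_energy}). The sketch below uses our own function names, measures temperatures in units with $k_\mathrm{B}=1$, and is valid only for $T\ll T_\mathrm{F}$, so it reproduces the $p\rightarrow2$ limit and the linearly rising Planckian parameter but not the high-$T$ saturation:

```python
import math

ZETA3 = 1.2020569031595943   # Riemann zeta(3)
LN_A = math.log(1.28243)     # log of Glaisher's constant A

def gamma_ee_low_T(T, TF, rs):
    """Low-T asymptotic hbar/tau_ee of Eq. (low_T_self_energy),
    in the same (temperature) units as T and TF; valid for T << TF."""
    t2 = T * T / TF
    term1 = (math.pi / 4.0) * t2 * math.log(math.sqrt(2.0) * rs * TF / T)
    term2 = (math.pi / 12.0) * (6.0 + math.log(2.0 * math.pi**3)
                                - 36.0 * LN_A) * t2
    term3 = 7.0 * ZETA3 / (math.sqrt(2.0) * math.pi) * T**3 / (rs * TF)
    return term1 + term2 - term3

def exponent_p(T, TF, rs, dlog=1e-4):
    """Power-law exponent p = d ln(Gamma) / d ln(T), cf. Fig. 7(b)."""
    hi = math.log(gamma_ee_low_T(T * math.exp(dlog), TF, rs))
    lo = math.log(gamma_ee_low_T(T * math.exp(-dlog), TF, rs))
    return (hi - lo) / (2.0 * dlog)

def planckian_parameter(T, TF, rs):
    """Dimensionless (hbar/tau_ee)/(k_B T), cf. Fig. 7(c)."""
    return gamma_ee_low_T(T, TF, rs) / T
```

At $T\ll T_\mathrm{F}$ the logarithm pushes the numerical exponent slightly below 2, and the Planckian parameter vanishes as $T\ln(1/T)$, consistent with the low-$T$ behavior in Figs.~\ref{fig:7}(b) and \ref{fig:7}(c).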
For much larger $r_s$, i.e., very strongly interacting systems with $r_s\gg 1$, the dimensionless Planckian parameter exceeds unity by an increasingly larger factor, but the RPA theory becomes increasingly unreliable quantitatively for large $r_s$, and we are not sure whether any significance should be attached to our theory for $r_s\gg1$.
We mention that our RPA theory is precisely the leading-order $1/N$ theory in quantum field theories, where $N$ is the number of fermion flavors, which turns out to be equivalent to the leading-order theory in $r_s$ for an interacting Fermi liquid.
We point out that just as the asymptotic low-$T$ ($\ll T_\mathrm{F}$) analytical behavior of the 2D inelastic scattering rate goes as $T^2 \ln{T}$ [Eq.~(\ref{eq:low_T_self_energy})], it is easy to show that the high-$T$ ($\gg T_\mathrm{F}$) behavior goes as $T^{1/2}$, simply because at high temperatures ($\gg T_\mathrm{F}$) the Fermi-Dirac statistics of the electrons reduces to Maxwell-Boltzmann statistics. Thus, both the low-$T$ and high-$T$ behaviors of our numerically calculated inelastic scattering rates agree precisely with analytical theoretical expectations.
We should add that our use of the mass-shell self-energy approximation within the RPA diagrams (of Fig.~\ref{fig:5}) allows us to neglect the inclusion of any renormalization factor $Z$ in the calculation of the scattering rate, which, in principle, should be the energy-width of the quasiparticle spectral function in the context of Planckian considerations \cite{hartnollPlanckianDissipationMetals2021a}. It turns out that the calculation of the self-energy in the leading order infinite ring diagram approximation is more consistent with the mass-shell self-energy along with neglecting the renormalization factor because this provides an approximate cancellation of the higher-order diagrams in the theory \cite{riceEffectsElectronelectronInteraction1965, tingEffectiveMassFactor1975, leeLandauInteractionFunction1975}.
The full 2D interacting spectral function has recently been calculated in depth and the quantitative difference between inclusion or not of the $Z$-factor is less than a factor of 2, and, therefore, for our Planckian considerations, whether the renormalization factor is included or not is unimportant \cite{ahnFragileStableTwodimensional2021}.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{self_energy_exp.pdf}
\caption{Experimental Planckian parameters obtained using the experimentally measured Coulomb scattering rates for GaAs presented in Fig.~3 of Refs.~\cite{zhengCoulombScatteringLifetime1996, murphyLifetimeTwodimensionalElectrons1995}. Here $\Gamma(T)=\hbar/\tau(T)$, where $\tau(T)$ is the Coulomb lifetime. }
\label{fig:self_energy_exp}
\end{figure}
We conclude this section by presenting experimental Planckian parameters for the Coulomb scattering rate in Fig.~\ref{fig:self_energy_exp}, which we obtain using the lifetime of 2D GaAs measured through inelastic tunneling spectroscopy in Ref.~\cite{murphyLifetimeTwodimensionalElectrons1995}. It is important to note that the Planckian parameter increases linearly with increasing $T$, obeying the Planckian hypothesis $\hbar/\tau\lesssim k_\mathrm{B} T$, in agreement with our theoretical Planckian analysis discussed in this section. These experimental inelastic tunneling scattering rates approximately agree with the RPA theory used in our work \cite{zhengCoulombScatteringLifetime1996}.
\section{ Dimensional analysis} \label{sec:5}
Our experimental analyses (Sec.~\ref{sec:2}) as well as our theoretical transport analyses (Sec.~\ref{sec:3}) establish the approximate validity of generalized (i.e., within an order of magnitude) Planckian bounds on scattering rates in 2D semiconductors. Here we discuss some intuitive dimensional arguments, which are neither rigorous nor complete, but should serve as motivation for future thinking about the Planckian bound, which at this stage is an empirical finding.
As for the temperature-dependent inelastic scattering rate due to electron-electron interactions itself, as already mentioned above, the energy-time uncertainty relation provides a crude `explanation': since the Fermi surface becomes `diffuse' by $k_\mathrm{B}T$ at finite temperatures, any energy uncertainty arising from inelastic scattering should be approximately bounded by $k_\mathrm{B} T$, leading to $\Gamma_\mathrm{ee} < k_\mathrm{B}T$ as found in Sec.~\ref{sec:4} above. Obviously, such an uncertainty-relation-induced inequality is only dimensionally applicable, and can at best be valid within an order of magnitude.
The above uncertainty argument, however, says nothing about transport, unless the resistive scattering itself arises from electron-electron interaction induced inelastic scattering, which is most certainly not the case for the systems of our interest in the current work (and also not for phonon-induced generic linear-in-$T$ resistivity in all normal metals). We believe that transport measurements themselves can only extract a momentum relaxation rate associated with resistive scattering, which in general says nothing about any underlying inelastic scattering. So, we need a different heuristic argument in order to understand the empirical validity (within an order of magnitude) of the Planckian bound for the temperature-dependent transport in doped semiconductors.
One possibility, already alluded to in the analytical arguments of Sec.~\ref{sec:3}, is that the approximate transport Planckian bound arises from the combination of two complementary defining criteria of a metal: the existence of coherent quasiparticles (the Ioffe-Regel-Mott criterion), $\Gamma < E_\mathrm{F}$, where $\Gamma= \hbar/2\tau$ with $\tau$ the resistive momentum relaxation time, and the thermal constraint that the temperature $k_\mathrm{B} T$ should not exceed the Fermi energy by too large a factor in a quantum metal. $E_\mathrm{F}$ is given in 2D (for a circular Fermi surface) by:
\begin{equation}
E_\mathrm{F} = \frac{\pi n \hbar^2}{m},
\end{equation}
which, combined with Eq.~(\ref{eq:IRM}) for the resistivity immediately gives:
\begin{equation}
\rho < \frac{h}{e^2}.
\end{equation}
This is of course the Ioffe-Regel-Mott criterion limiting the 2D resistivity below the resistance quantum $h/e^2$ for the existence of a metal. Now, we write the scattering rate quite generally as
\begin{equation}
\hbar/\tau = \alpha k_\mathrm{B} T,
\end{equation}
leading immediately to [using Eq.~(\ref{eq:finite_temperature_resistivity})]:
$\rho = \pi \hbar^2 \alpha k_\mathrm{B} T /(\hbar e^2 E_\mathrm{F}) < h/e^2$, which then gives, after simplifying both sides:
\begin{equation}
E_\mathrm{F} > \frac{\alpha k_\mathrm{B} T} {2\pi}.
\end{equation}
If we now demand that the system remain a quantum metal even when heated up to its Fermi temperature, i.e., allow $E_\mathrm{F}$ to be as small as $k_\mathrm{B} T$, we get:
\begin{equation}
\alpha < 2\pi,
\end{equation}
which is essentially our generalized Planckian bound for transport.
While we do not claim this line of reasoning to be anything more than a heuristic dimensional argument, it is consistent with the finite-temperature transport scattering rate being constrained from above by $\mathcal{O}(1$--$10)\,k_\mathrm{B}T/\hbar$ based simply on the requirement that the electron system preserve a well-defined Fermi surface (which is the fundamental definition of a metal) against both momentum decoherence and thermal broadening. This dimensional argument hints that the empirical validity of the Planckian bound may simply be a manifestation of the dual facts that the natural scales for the quasiparticle momentum and energy in a Fermi system are the Fermi momentum and Fermi energy, which leads to a natural scale for the transport scattering rate of the order of $k_\mathrm{B} T$. It would be meaningful to try to make this dimensional argument more rigorous, although that may turn out to be a challenge in strongly-coupled disordered interacting systems with no natural small parameter for any perturbative expansion.
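As a practical illustration of the scales entering this argument, the chain above can be inverted: given a measured 2D sheet resistivity, carrier density, and effective mass, the Drude relation together with $E_\mathrm{F}=\pi n\hbar^2/m$ yields $\hbar/\tau=\rho e^2E_\mathrm{F}/(\pi\hbar)$, from which the Planckian parameter follows. The sketch below is our own (SI units); the sample numbers are illustrative rather than data from the text, using the GaAs effective mass $0.067m_e$:

```python
import math

HBAR = 1.054571817e-34       # J s
KB = 1.380649e-23            # J / K
E_CHARGE = 1.602176634e-19   # C
M_E = 9.1093837015e-31       # kg

def planckian_alpha(rho_ohm_per_sq, n_per_m2, m_kg, T_kelvin):
    """alpha = (hbar/tau)/(k_B T) from a 2D sheet resistivity, using the
    Drude relation rho = m/(n e^2 tau) and E_F = pi n hbar^2 / m."""
    e_fermi = math.pi * n_per_m2 * HBAR**2 / m_kg                      # J
    hbar_over_tau = rho_ohm_per_sq * E_CHARGE**2 * e_fermi / (math.pi * HBAR)
    return hbar_over_tau / (KB * T_kelvin)

# Illustrative numbers: 1000 ohm/sq at n = 1e11 cm^-2 and T = 1 K in GaAs
alpha = planckian_alpha(1000.0, 1e15, 0.067 * M_E, 1.0)
```

For these illustrative numbers $\alpha\approx3$, i.e., modestly super-Planckian, consistent with the low-density regime discussed in Secs.~\ref{sec:2} and \ref{sec:3}.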
\section{Conclusions} \label{sec:6}
We find that the strong observed temperature dependence of the low-temperature ($<10$~K) resistivity in high-quality 2D doped semiconductors follows, both experimentally (Sec.~\ref{sec:2}) and theoretically (Sec.~\ref{sec:3}), sub-Planckian ($\hbar/\tau < k_\mathrm{B}T$), Planckian ($\hbar/\tau \sim k_\mathrm{B}T$), and super-Planckian ($\hbar/\tau>k_\mathrm{B} T$) behaviors as a function of carrier density, with the super-Planckian behavior manifesting at lower carrier densities, where the temperature dependence of the resistivity is the strongest and the resistivity itself is the largest (but still below the Ioffe-Regel-Mott limit, ensuring metallicity). Two noteworthy features of our results are that (1) the violation of the Planckian bound in the super-Planckian regime is always rather modest, within a factor of $2$--$6$ (i.e., of the order of 10 or less), and (2) the super-Planckian behavior manifests not only at lower carrier densities, but also at a rather high effective dimensionless temperature ($T/T_\mathrm{F} \sim 1/3$ or so, a regime totally inaccessible in normal metals); in fact, for all densities the dimensionless Planckian parameter ($\hbar/\tau$)/($k_\mathrm{B}T$) is at its maximum at a finite temperature $T/T_\mathrm{F} \sim 1/3$, just before the quantum-classical crossover in the resistivity leading to a decreasing resistivity ($\sim 1/T$) with increasing temperature. The temperature dependence here arises not from electron-phonon interactions (which would be operational in these systems typically for $T>10$--$20$~K) or from umklapp or Baber electron-electron scattering, but from a combination of disorder and indirect electron-electron interactions through the screening of the disorder (or, equivalently, from the temperature-dependent Friedel oscillations). We emphasize that the temperature dependence disappears if the electron-electron interaction is set to zero, thus using unscreened disorder in the theory.
Our empirical findings, based on a detailed quantitative analysis (Sec.~\ref{sec:2}) of the temperature- and density-dependent experimental 2D transport data in different materials, are supported by our detailed transport theory (Sec.~\ref{sec:3}) based on resistive scattering by finite-temperature momentum-dependent screened Coulomb disorder (or, equivalently, by the Friedel oscillations associated with the screened charged impurity potential). The temperature dependence in fact goes away if the theory uses long-wavelength Thomas-Fermi screening without the $2k_\mathrm{F}$ kink in the $T=0$ 2D polarizability.
While the scattering rate $1/\tau$ for transport is a momentum relaxation rate, we also consider (Sec.~\ref{sec:4}) theoretically the temperature- and density-dependent inelastic scattering rate $1/\tau_\mathrm{ee}$ arising from the many-body electron-electron Coulomb interaction, which is momentum-conserving in our theory since a doped semiconductor does not allow umklapp scattering. The inelastic electron-electron interaction scattering rate is simply given by the imaginary part of the 2D self-energy, and is conceptually qualitatively different from the $1/\tau$ transport scattering rate defining the resistivity in the transport theory (Sec.~\ref{sec:3}). We find that $1/\tau_\mathrm{ee}$ also obeys the generalized Planckian bound, remaining within a factor of $10$ of $k_\mathrm{B} T$ in general, with the peak value of the dimensionless Planckian parameter, ($\hbar/\tau_\mathrm{ee}$)/($k_\mathrm{B} T$), also occurring around $T\sim T_\mathrm{F}$ as in the transport problem. We find it intriguing that both our (energy-conserving) screened-disorder-induced transport scattering rate and our (momentum-conserving) interaction-induced inelastic scattering rate approximately obey the Planckian bound, with the super-Planckian behavior being only a modest factor of $2$--$6$, happening only at lower densities and higher temperatures, i.e., around $T\sim T_\mathrm{F}$ (a lower density implies a lower $T_\mathrm{F}$). We emphasize that this approximate Planckian behavior we discover in 2D doped semiconductor properties applies to both theory (Secs.~\ref{sec:3} and \ref{sec:4}) and experiment (Secs.~\ref{sec:2} and \ref{sec:4}).
Our Planckian analysis of the experimental data establishes an empirical applicability of the Planckian hypothesis to 2D semiconductor transport with a modest super-Planckian behavior at low densities around $T\sim T_\mathrm{F}$, and our theory, for both the momentum relaxation rate and the inelastic scattering rate, indicates that the Planckian hypothesis applies (within an order of magnitude) theoretically over essentially all densities and temperatures in the metallic regime for 2D semiconductors. Why? Why is the Planckian conjecture apparently approximately valid in our systems (involving different 2D semiconductor materials) over all metallic ranges of density/temperature?
The answer to the question of why the Planckian bound applies, even approximately, to any experimental (or even physical) properties is unclear at this point, in spite of many experimental reports of its empirical validity in the transport properties of many materials, starting with the important observation by Bruin \textit{et al.} \cite{bruinSimilarityScatteringRates2013}. If anything, the reverse is true: a compelling recent theoretical analysis by Lucas \cite{lucasOperatorSizeFinite2019} shows that no such bound should exist for the temperature-dependent resistivity.
In particular, temperature defines thermodynamic equilibrium whereas transport is necessarily a dissipative kinetic property, where temperature enters as a parameter defining the distribution functions, so why there should be a fundamental transport bound defined by $k_\mathrm{B} T$ is unclear.
It is also an obvious fact that the bound does not apply to the residual disorder-induced resistivity at $T=0$ that all metals (ignoring any superconductivity) must have. It has therefore been suggested \cite{hartnollPlanckianDissipationMetals2021a} that any Planckian bound must not include any elastic scattering, since at $T=0$ only elastic scattering (by disorder) can contribute to transport. This is, however, in sharp contrast with the most established empirical Planckian behavior, observed routinely in normal metallic resistivity ($T > 40$~K), which was already emphasized in Ref.~\cite{bruinSimilarityScatteringRates2013} and has been well known since the 1950s \cite{ziman1972principles}: a linear-in-$T$ resistivity ranging from sub-Planckian (e.g., Al) to super-Planckian (e.g., Pb) arising from quasi-elastic electron-phonon scattering in the equipartition regime above the Bloch-Gruneisen temperature \cite{hwangLinearinResistivityDilute2019}. The Planckian behavior of phonon-induced metallic resistivity follows from the simple fact that the corresponding scattering time, $\tau_\mathrm{ep}$, obeys the simple formula
\begin{equation}
\hbar/\tau_\mathrm{ep} = 2\pi \lambda k_\mathrm{B} T,
\end{equation}
in the phonon equipartition regime, with $\lambda$ the dimensionless electron-phonon Eliashberg-McMillan coupling constant. It just so happens that $\lambda$ in common metals lies typically between $0.1$ and $1.5$. Thus, even for this well-established phonon-induced linear-in-$T$ metallic resistivity, the Planckian bound applies approximately (within a factor of 10) for all metals, since the metallic electron-phonon coupling constant empirically never seems to exceed $1.5$. It has been emphasized that the phonon-induced high-temperature electronic resistivity could, in principle, be anything as long as the coupling is large \cite{hwangLinearinResistivityDilute2019}, with the transport theory imposing no bound at all, but empirically this does not happen, the electron-phonon coupling never being much larger than unity. Why such a Planckian bound applies for the electron-phonon coupling remains unclear, although the fact that it does is empirically well established. It has recently been speculated that the observed empirical limit on the electron-phonon coupling (i.e., $\lambda < 1.5$) may be related to a materials stability bound \cite{murthyStabilityBoundLinear2021}. This speculated lattice stability bound, however, has no relevance to our findings in the current work, where phonons do not play any role.
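The bound implied by the equipartition formula above is a one-line consequence of the quoted empirical range of $\lambda$: the Planckian parameter is simply $\alpha=2\pi\lambda$, independent of temperature, so $\lambda\in[0.1,1.5]$ translates to $\alpha$ between roughly $0.6$ and $9.4$. A trivial sketch (the $\lambda$ values are the range quoted in the text, not specific materials):

```python
import math

def alpha_phonon(lam):
    """Planckian parameter implied by hbar/tau_ep = 2*pi*lambda*k_B*T,
    i.e., alpha = (hbar/tau_ep)/(k_B T) = 2*pi*lambda for all T."""
    return 2.0 * math.pi * lam

# Empirical range of the Eliashberg-McMillan coupling quoted above
alphas = {lam: alpha_phonon(lam) for lam in (0.1, 0.5, 1.5)}
```

The weak-coupling end is sub-Planckian ($\alpha<1$), while even the strong-coupling end stays below $10$, consistent with the factor-of-10 generalized bound discussed in the text.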
The strongly temperature-dependent resistive scattering in doped 2D semiconductors considered in our work arises from a temperature-dependent effective disorder, although the bare disorder, arising from random quenched charged impurities, is temperature independent; the temperature dependence of the effective disorder arises from the anomalously strongly temperature-dependent finite-momentum 2D polarizability function controlling the screening of the bare disorder. The scattering in our case is an energy-conserving elastic scattering, as appropriate for quenched disorder, causing momentum relaxation in spite of the strong temperature dependence.
We emphasize that the temperature dependence disappears in our theory if we suppress the electron-electron interaction or the electronic polarizability so that the disorder is unscreened, and thus, the temperature dependence arises entirely from the indirect effects of interaction.
The significant point is that the temperature dependence of the resistivity here, in spite of arising from elastic scattering, is strong and asymptotically linear down to arbitrarily low temperatures, with nothing strange or non-Fermi-liquid about it except in the sense of a trivial non-Fermi liquid as defined in \cite{buterakosPresenceAbsenceTwodimensional2021a}. This should serve as a cautionary note to numerous experimental claims in the literature that a linear-in-$T$ resistivity extending to low temperatures necessarily implies a non-Fermi-liquid strange metal. The high-quality low-density doped 2D semiconductors are a manifest counterexample to such strange metal assertions.
The Planckian properties of our systems have some similarities to, and some differences from, the phonon-induced Planckian behavior of normal metals: (1) both resistive scattering mechanisms are energy-conserving and quasi-elastic; (2) both manifest strong temperature dependence; (3) both our systems and metals manifest sub-Planckian, Planckian, and super-Planckian behaviors, depending on the metal (i.e., the $\lambda$-dependence) or on the materials system and carrier density of the 2D semiconductor (i.e., the $q_\mathrm{TF}/2k_\mathrm{F}$-dependence); (4) the $T$-dependence in our case is induced by 2D screened disorder, in contrast to normal metals, where the $T$-dependence is induced by phonon scattering in the equipartition regime; (5) our $T$-dependence is a low-$T$ phenomenon set by the Fermi temperature ($T<T_\mathrm{F}$), whereas the $T$-dependence in normal metals is a high-$T$ ($>T_\mathrm{BG}$) phenomenon set by the Bloch-Gruneisen ($T_\mathrm{BG}$) or Debye temperature; (6) the phonon-induced $T$-dependent resistivity is essentially linear in the equipartition regime $T>T_\mathrm{BG}$, whereas our 2D resistivity is linear only for low temperatures, $T\ll T_\mathrm{F}$.
We mention that the Planckian bound, to the extent it applies, should not be constrained to metals manifesting just an approximate $T$-linear resistivity, since such a linear constraint is meaningless in solid state physics as no resistivity can really be precisely linear (the power law may be very close to unity over a limited range of $T$, but never precisely unity).
In fact, careful recent resistivity measurements in graphene layers show that the strange Planckian behavior persists for temperature power laws different from unity \cite{jaouiQuantumCriticalBehavior2022}.
Although the Planckian bound is often discussed in the context of `strange metals' manifesting a large linear-in-$T$ resistivity over a range of temperature, the two concepts are distinct and have nothing to do with each other (except perhaps in a negative sense, in that the most established generic example of Planckian behavior is found in the linear-in-$T$ resistivity of normal metals, a most non-strange situation, arising from phonon scattering in the equipartition regime \cite{wuPhononinducedGiantLinearin2019}). Thus, we define the Planckian bound as a constraint on $\tau(T)$, namely $\hbar/\tau(T) \lesssim \alpha k_\mathrm{B} T$, where $\alpha$ is a dimensionless number of order $1$--$10$, independent of whether the scattering time $\tau$ is strongly or weakly dependent on $T$.
In our case (Secs.~\ref{sec:2} and \ref{sec:3}), the resistivity is indeed linear in the asymptotic low-$T$ regime $T\ll T_\mathrm{F}$, but in reality the low-density 2D systems have rather low values of $T_\mathrm{F}$, and therefore the temperature-dependent resistivity often departs from linearity with increasing $T$ (eventually decreasing with increasing $T$ beyond the quantum-classical crossover regime). Our theory for the inelastic scattering has the explicit analytical form that $1/\tau_\mathrm{ee}$ goes as $T^2 \ln T$ ($T^{1/2}$) for $T\ll(\gg)T_\mathrm{F}$, but the Planckian hypothesis seems to apply at all temperatures approximately.
In the recent Planckian literature, a great deal of significance is placed on the coefficient $\alpha$ being unity (i.e., defining a mysterious and mystical entity, the so-called Planckian metal). We do not agree with this assertion of $\alpha=1$ being of deep significance, as our results show that $\alpha$ could vary from $\ll 1$ to $\sim 1$--$10$ in the same sample depending on the carrier density (which cannot be varied in situ in a single sample in strongly correlated materials, each sample coming with its own fixed, and essentially unknown, carrier density). If we fine-tune and choose some narrow post-selected density and temperature range, we get an effective $\alpha \sim 1$, but this is simply confirmation bias of varying parameters until one finds what is being looked for. In addition, $\alpha=1$ as a strict requirement for a Planckian metal defines a set of measure zero, since there is no claim that $\alpha$ is somehow an invariant. So any measured $\alpha$ would necessarily depart from unity, particularly since obtaining $\alpha$ from transport measurements necessitates a precise knowledge of $n/m$, which is simply unavailable in strongly correlated materials. Although we do not believe that much significance can be attached to the recent claims of $\alpha$ precisely equal to unity being special, we are surprised that our results for 2D semiconductors show that, although $\alpha$ can be much less than unity (``sub-Planckian''), it is never much larger than unity, with the super-Planckian behavior always constrained by $\alpha <10$. Why?
In this context, we mention that in direct conflict with recent fine-tuned claims of hole-doped cuprates having $\alpha\sim 1$, and thus being strange Planckian metals, older direct optical measurements of $\alpha$ give results consistent with our conclusion that generically $\alpha$ could take values in the range of $1$--$3$ in the super-Planckian regime, with nothing special about $\alpha \sim 1$ except in a fine-tuned sense. These optical measurements give $\alpha= 1.57$ (LSCO), $2.5$ (YBCO), and $1.97$ (BSCCO) \cite{gaoInfraredPropertiesEpitaxial1993, gaoQuasiparticleDampingCoherence1996, romeroQuasiparticleDampingBi1992}.
In addition, recent measurements in electron-doped cuprates have reported \cite{poniatowskiCounterexampleConjecturedPlanckian2021}
$\alpha \sim 1$--$3$ in the super-Planckian regime, again showing, consistent with our finding, that any empirical bound on $\alpha$ is $\lesssim 10$, and not 1. This is also consistent with normal metals where the largest value (for Pb) of $\alpha$ is $\alpha \sim 9$, although many metals exhibit sub-Planckian behavior with $\alpha< 1$.
We believe that our work indicates that the theoretical focus should shift to why $\alpha$ is not arbitrarily large and remains bounded approximately by $10$ rather than focusing on the misleading fine-tuned claims of $\alpha \sim 1$ being a generic strange metal of particular interest \cite{grissonnancheLinearinTemperatureResistivity2021, legrosUniversalTlinearResistivity2019,caoStrangeMetalMagicAngle2020, yuanScalingStrangemetalScattering2022a}.
The empirical validity of the generalized Planckian conjecture (with the super-Planckian violation of the bound always being less than a factor of 10) in the transport properties of our systems (Secs.~\ref{sec:2} and \ref{sec:3}), as well as in other systems \cite{hartnollPlanckianDissipationMetals2021a}, remains a mystery, and the possibility that it is a coincidence cannot be ruled out. (We have ruled out the bound being unity, i.e., $\alpha=1$, as fine-tuned confirmation bias, since $\alpha > 1$ but $<10$ appears to be the generic super-Planckian behavior in many systems, including the ones studied in the current work; of course, the sub-Planckian $\alpha < 1$ situation is the most generic one.) One theoretical bound, which has attracted considerable attention, is associated with quantum chaos and relates to a Lyapunov exponent, $1/\tau_\mathrm{L}$, for the rate of growth of chaos in thermal quantum systems with a large number of degrees of freedom \cite{maldacenaBoundChaos2016}:
\begin{equation}
\hbar/\tau_\mathrm{L} \lesssim k_\mathrm{B} T.
\end{equation}
This bound is connected with the fact that chaos can be diagnosed using an out-of-time-order correlation (OTOC) function closely related to the commutator of operators separated in time \cite{larkinQuasiclassicalMethodTheory1969}. But the OTOC and the corresponding Lyapunov exponent $1/\tau_\mathrm{L}$ do not have any established (or even speculated) relationship with the transport scattering rate, and therefore the implication of this conjectured OTOC bound for Planckian transport considerations is unclear \cite{xuButterflyEffectInteracting2019}. Thus, the empirical validity of the (approximate or generalized) Planckian transport bound is not explicable by quantum chaos considerations. There are speculations, none convincing, that the Planckian conjecture may be related to the holographic viscosity bound in quantum field theories, but the applicability of such holographic duality to concrete experimental condensed matter problems is purely speculative \cite{kovtunViscosityStronglyInteracting2005}.
One can construct artificial theoretical models which lead to linear-in-$T$ strange metal Planckian behaviors simply by embedding this physics intrinsically into the model without any microscopic rationale. The most-well-known such model is the so-called `marginal' Fermi liquid model \cite{varmaPhenomenologyNormalState1989},
where one just assumes ad hoc that the imaginary part of the electron self-energy is linear in $T$ and that all scattering is umklapp, breaking momentum conservation, so that the scattering rate goes precisely as $ k_\mathrm{B} T$ and the system is marginally not a Fermi liquid. This, however, assumes the desired result, and in spite of more than 30 years of effort since the original introduction of this marginal Fermi liquid model in the context of the hole-doped cuprates, no microscopic theoretical justification exists for how such a singular self-energy, in contrast to the well-known $T^2\ln{T}$ behavior, could arise in 2D, or why the corresponding resistivity should be given precisely by this imaginary part of the single-particle self-energy, ignoring all momentum dependence in the problem. Without any microscopic justification, such a marginal Fermi liquid model is simply an assertion, and not a theory, and cannot be taken seriously in the context of experimental findings.
There was important theoretical work establishing non-Fermi-liquid behaviors of 2D fermions coupled to charged black holes \cite{leeNonFermiLiquidCharged2009}
using AdS/CFT correspondence \cite{maldacenaLargeNLimitSuperconformal1999}
as well as 2D fermions coupled with U(1) gauge fields \cite{leeLowenergyEffectiveTheory2009}.
While these works are important proofs of principle for the theoretical existence of 2D non-Fermi liquid behavior, their connections to any physical 2D materials systems in condensed matter physics have remained completely elusive in spite of many efforts.
There is impressive recent theoretical work \cite{patelUniversalLinearResistivity2022}
establishing that special classes of theoretical quantum-critical 2D metals with completely spatially random fluctuations in the fermion-scalar Yukawa couplings may lead to linear-in-$T$ resistivity by construction, but these results do not manifest any generic Planckian behavior, with the resulting $\alpha\ll 1$ in general (without fine-tuning). Of course, linear-in-$T$ resistivity with $\alpha<1$ arises generically in metals with weak electron-phonon coupling and in our screened-disorder-coupling theory of Sec.~\ref{sec:2} at high carrier density and low temperature. Finally, the minimal Hubbard model at very high temperatures (much larger than the bandwidth) also leads to a linear-in-$T$ resistivity if one assumes completely momentum-independent umklapp scattering with local interactions, where each electron-electron scattering event contributes to the resistivity. But this is a highly fine-tuned trivial result appearing simply from the leading-order high-$T$ expansion of the thermal averaging, and cannot have any practical significance for real electronic materials. Thus, no generic theoretical arguments exist for a Planckian dissipative resistivity in electronic materials.
It has been speculated \cite{hartnollPlanckianDissipationMetals2021a} that the Planckian bound may apply to inelastic scattering rates arising from electron-electron interactions. This distinction between elastic and inelastic scattering has little significance for transport properties because transport only probes resistive scattering, which is associated with momentum relaxation, without distinguishing between elastic and inelastic scattering. In fact, as is well known, electron-electron interaction by itself cannot relax momentum in a translationally invariant system and does not contribute to resistive scattering unless an explicit momentum-conservation-breaking mechanism (e.g., umklapp scattering, Baber scattering) is invoked \cite{poniatowskiCounterexampleConjecturedPlanckian2021}. Electron-electron interaction, however, leads to a real quasiparticle damping and finite lifetime, $\tau_\mathrm{ee}$, through the imaginary part of the self-energy, as described in Sec.~\ref{sec:4}, which may be studied experimentally by measuring the quasiparticle spectral function using inelastic tunneling spectroscopy or ARPES. We emphasize that $\tau_\mathrm{ee}$ is generically different from the transport scattering time $\tau$ obtained from the resistivity, and in general, there is no simple relationship connecting the two.
Focusing entirely on the interaction-induced inelastic scattering rate (Sec.~\ref{sec:4}) from the Planckian perspective, emphasizing again that this is unrelated to the transport properties discussed in Secs.~\ref{sec:2} and \ref{sec:3}, we show that the temperature- and density-dependent inelastic scattering rate arising from electron-electron interaction also obeys the generalized Planckian bound approximately (i.e., within a factor of 10) for all temperatures and densities, with a modest super-Planckian behavior manifesting around $T\sim T_\mathrm{F}$, where the scattering rate $1/\tau_\mathrm{ee}$ crosses over from the low-temperature ($T\ll T_\mathrm{F}$) $T^2$ behavior to the high-temperature ($T\gg T_\mathrm{F}$) $T^{1/2}$ behavior. Thus, we find that for doped 2D semiconductor systems both the temperature-dependent elastic scattering rate (arising from screened disorder) and the temperature-dependent inelastic scattering rate (from electron-electron interaction) empirically obey the generalized Planckian bound approximately (i.e., are bounded, within an order of magnitude, by $k_\mathrm{B} T$).
An intuitively appealing qualitative dimensional argument for why the inelastic scattering rate may have an approximate Planckian bound is the following. At finite temperatures, the Fermi distribution develops a thermal broadening of $k_\mathrm{B} T$ around the Fermi energy, leading to an energy uncertainty of $\mathcal{O}(k_\mathrm{B} T)$. Since an inelastic scattering rate $1/\tau_\mathrm{ee}$ leads, by the uncertainty relation, to an energy uncertainty $\sim \hbar/\tau_\mathrm{ee}$, it is plausible that $\hbar/\tau_\mathrm{ee}$ should not exceed $k_\mathrm{B} T$ by a large factor. This uncertainty-relation-based qualitative argument, which is by no means rigorous, does indicate that the inelastic scattering rate $\hbar/\tau_\mathrm{ee}$ should be of $\mathcal{O}(k_\mathrm{B} T)$ or less, as we find in our detailed calculations presented in Sec.~\ref{sec:4}.
The qualitative energy uncertainty argument, however, provides no rationale whatsoever for why the transport scattering rate extracted from the resistivity, which for our systems is a temperature-dependent elastic scattering by screened disorder, should have anything at all to do with the Planckian bound of $k_\mathrm{B} T$ as established empirically for the experimental data in Sec.~\ref{sec:2}. The only clue for a Planckian bound can be discerned from our analytical theory in Sec.~\ref{sec:3} where we explicitly used the Ioffe-Regel-Mott criterion to bound the $T=0$ 2D resistivity by a maximum possible metallic resistivity of $h/e^2$, which then leads to the following constraint for our temperature dependent transport scattering rate $1/\tau$ (with the $T=0$ contribution subtracted out):
\begin{equation}
\hbar/\tau < 4k_\mathrm{B} T
\end{equation}
Our direct numerical results for the calculated 2D metallic resistivity are consistent with this analytical effective Planckian bound. Thus, at least for our problem, the Planckian transport properties appear to have an underlying deep (if somewhat indirect) connection to the Ioffe-Regel-Mott criterion for the existence of a $T=0$ metal. Since a metal with coherent quasiparticles cannot, by definition, have an arbitrarily large resistivity, the temperature-dependent scattering rate cannot be arbitrarily large either, and the natural bound at finite temperature on purely dimensional grounds can only be $k_\mathrm{B} T$. Thus, the Planckian transport bound may ultimately arise from the fact that the mean free path in a metal at finite temperatures cannot be arbitrarily short. In our analytical theory this connection between the Planckian bound on the finite-temperature scattering rate and the Ioffe-Regel-Mott criterion is explicit, but whether the same holds for all Planckian systems (many of which are also effectively 2D) is unknown, and should be investigated in future works.
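To make the extraction of the dimensionless Planckian parameter concrete, the following minimal sketch computes $\alpha = (\hbar/\tau)/(k_\mathrm{B} T)$ from a linear-in-$T$ 2D sheet resistivity via the standard Drude relation $1/\tau = n e^2 \rho/m^*$; the carrier density, effective mass, and resistivity slope are purely illustrative assumptions, not data from this work.

```python
# Minimal sketch: dimensionless Planckian parameter alpha = (hbar/tau)/(kB*T)
# from a 2D sheet resistivity, assuming the Drude relation 1/tau = n e^2 rho / m*.
# All material parameters below are illustrative assumptions, not data from this work.

hbar = 1.054571817e-34  # reduced Planck constant, J s
kB = 1.380649e-23       # Boltzmann constant, J/K
e = 1.602176634e-19     # elementary charge, C
me = 9.1093837015e-31   # electron mass, kg

def planckian_alpha(rho_ohm_per_sq, T, n_2d, m_eff):
    """alpha = (hbar/tau)/(kB*T), with 1/tau from the 2D Drude formula."""
    inv_tau = n_2d * e**2 * rho_ohm_per_sq / m_eff  # scattering rate, 1/s
    return hbar * inv_tau / (kB * T)

# Illustrative dilute 2D electron system with linear-in-T resistivity
# rho(T) = A*T (the T=0 residual resistivity already subtracted out).
n_2d = 1.0e15        # carrier density per m^2 (assumed)
m_eff = 0.19 * me    # effective mass (assumed)
A = 1000.0           # resistivity slope, ohm/sq per K (assumed)

for T in (1.0, 4.0, 10.0):
    print(f"T = {T:5.1f} K  alpha = {planckian_alpha(A * T, T, n_2d, m_eff):.2f}")
```

For a strictly linear $\rho(T)$ with the residual resistivity subtracted out, $\alpha$ is temperature independent, which is why a single dimensionless number characterizes the Planckian behavior of a given sample.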
For our problem, we establish theoretically (Sec.~\ref{sec:3}) that the Planckian conjecture holds approximately, $\hbar/\tau < 4k_\mathrm{B} T$, because of the requisite consistency with the Ioffe-Regel-Mott criterion. This is true at low temperatures, $T\ll T_\mathrm{F}$, where $\rho (T)$ is linear in $T$, but the approximate Planckian bound continues to hold all the way to $T \sim T_\mathrm{F}$ because the next-to-leading-order correction arising from the $(T/T_\mathrm{F})^{3/2}$ term remains small. Whether such a direct connection to the Ioffe-Regel-Mott metallicity condition applies to other strange Planckian metals is unknown at this point.
We provide two additional circumstantial arguments in line with our discussion above, where the Planckian bound may be connected to the very definition of a metal: (1) first, the suggestion \cite{murthyStabilityBoundLinear2021} that the electron-phonon coupling may be bounded from above (which would explain why the phonon-induced resistivity of metals obeys the Planckian conjecture) because a much larger electron-phonon coupling would lead to a lattice instability causing a metal-to-insulator transition, which is consistent with our finding that the Planckian bound may ultimately arise from the definition of metallicity itself; (2) second, the fact that the strongly correlated 2D systems saturating the Planckian bound (or manifesting super-Planckian behavior) typically have very large resistivity, again reinforcing that the bound may arise simply to preserve metallicity. Much more work is needed to convert these qualitative ideas into a general theory applying to all systems, but for our systems, we establish these ideas in Sec.~\ref{sec:3} through a detailed theory.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{table_figure.pdf}
\caption{Planckian parameters from the experimental literature plotted against temperature for many materials, showing that the Planckian conjecture is always approximately valid.}
\label{fig:table_figure}
\end{figure}
We conclude by mentioning that we have also investigated the experimentally reported finite-temperature transport properties of 2D graphene-related systems \cite{caoStrangeMetalMagicAngle2020,jaouiQuantumCriticalBehavior2022, wuPhononinducedGiantLinearin2019, dassarmaStrangeMetallicityMoir2022,hwangAcousticPhononScattering2008, efetovControllingElectronPhononInteractions2010,minChiralitydependentPhononlimitedResistivity2011,polshynLargeLinearintemperatureResistivity2019, chuPhononsQuantumCriticality2021} and 3D doped semiconductors \cite{semicondDatabase} (at high temperatures, so that elastic scattering effects from the impurities are not particularly quantitatively important), finding to our surprise that the Planckian hypothesis is always approximately empirically valid. This is true for 3D semiconductors even at very high temperatures ($\sim 1000$~K) where the resistive scattering arises entirely from inelastic optical phonon scattering, and the resistivity is essentially exponential in temperature. We show these results in Fig.~\ref{fig:table_figure}, where the dimensionless Planckian parameter is plotted against temperature for many materials, with the Planckian conjecture being always approximately valid. Why the Planckian conjecture seems to apply for quasi-elastic acoustic phonon scattering \cite{bruinSimilarityScatteringRates2013}, inelastic optical phonon scattering, temperature-dependent screened disorder scattering (Secs.~\ref{sec:2} and \ref{sec:3}), inelastic electron-electron scattering (Sec.~\ref{sec:4}), and perhaps many other situations still remains a mystery, except perhaps for its qualitative connections to the energy uncertainty relation and the stability of the metallic phase itself.
In conclusion, we find (Secs.~\ref{sec:2} and \ref{sec:3}), rather unexpectedly, that the temperature-dependent part of the resistivity in 2D doped semiconductors, arising from resistive scattering by temperature-dependent screened Coulomb disorder, approximately obeys the generalized Planckian hypothesis (in the sense $\alpha<10$). In addition, we find (Fig.~\ref{fig:table_figure}) the very surprising result that doped semiconductors approximately obey the Planckian hypothesis at very high temperatures where the resistive scattering arises entirely from optical phonon scattering. We also establish (Sec.~\ref{sec:4}) that the inelastic scattering induced by electron-electron Coulomb interactions obeys the generalized Planckian hypothesis at all temperatures and densities. Our work considerably expands the scope of the Planckian conjecture by establishing its surprising empirical validity for doped semiconductors, which are considered to be neither `strongly correlated' nor `quantum critical'--in fact, doped semiconductors are basically interacting electron liquids in a jellium background, with the lattice or narrow-band Mott-Hubbard physics playing no role in their transport. The fact that a generalized Planckian hypothesis may apply even to a temperature-dependent resistivity arising from disorder scattering (with the temperature dependence itself happening because of anomalous screening) is particularly perplexing, since there can be no Planckian constraint at $T=0$ for the residual resistivity arising from the same disorder scattering. Our finding that the generalized Planckian hypothesis does apply to the inelastic electron-electron scattering seems more plausible, but we emphasize that there is no theory predicting it, and our results simply agree empirically with the Planckian bound after the fact.
None of the various theoretical attempts \cite{hartnollTheoryUniversalIncoherent2015, lucasResistivityBoundHydrodynamic2017, hartmanUpperBoundDiffusivity2017, hanLocalityBoundDissipative2018, nussinovExactUniversalChaos2021, hanQuantumScramblingState2019, hartnollPlanckianDissipationMetals2021a, zaanenPlanckianDissipationMinimal2019, chenManybodyQuantumDynamics2020, yinBoundQuantumScrambling2020, chenFiniteSpeedQuantum2019, lucasOperatorSizeFinite2019, xuButterflyEffectInteracting2019, maldacenaBoundChaos2016} to connect the Planckian constraint to holography, hydrodynamics, quantum criticality, quantum chaos, scrambling, and other generic concepts can actually explain the unreasonable empirical validity of the generalized Planckian bound, ranging from the high-temperature resistivity in normal metals (i.e., acoustic phonon scattering) and semiconductors (i.e., optical phonon scattering) all the way to strongly correlated materials (i.e., some `unknown' mechanism), through the low-temperature transport in 2D doped semiconductors (i.e., screened disorder scattering). The Planckian hypothesis has a remarkable qualitative correspondence with the Ioffe-Regel-Mott criterion defining metallicity to be constrained by $\hbar/\tau < E_\mathrm{F}=k_\mathrm{B} T_\mathrm{F}$, which loosely describes the crossover from coherent metallic transport to disorder-induced strong localization (at $T=0$) or to incoherent transport (at finite $T$), but making this qualitative connection formally precise is a challenge.
This is particularly so because the Ioffe-Regel-Mott criterion is not really a sharply defined transition; it is an intuitively appealing crossover criterion defining a metal as a system with coherent quasiparticles carrying current, with momentum as a reasonable quantum number except for resistive scattering events changing momenta. It is therefore appealing that the Planckian hypothesis is in some sense the energy-time version of the Ioffe-Regel-Mott criterion (which is a position-momentum uncertainty argument), with $\hbar/\tau < k_\mathrm{B} T$, the philosophy being that at finite temperature $k_\mathrm{B} T$ is the only dissipative energy scale. Even after accepting this somewhat ill-defined uncertainty argument, it is unclear why the dimensionless coupling constant $\alpha$, which should be sitting in front of $k_\mathrm{B} T$ in such dimensional reasoning, should be of order unity, since no theory constrains it and in principle it could be anything. But the two theoretical examples where detailed transport calculations are possible, namely the well-understood linear-in-$T$ metallic resistivity due to acoustic phonon scattering \cite{ziman1972principles, hwangLinearinResistivityDilute2019, wuPhononinducedGiantLinearin2019, dassarmaStrangeMetallicityMoir2022} and the low-temperature approximately linear-in-$T$ resistivity in 2D doped semiconductors (as well as the inelastic scattering in an electron liquid) in the current work, seem to obey the Planckian hypothesis unreasonably well (within a factor of $10$, $\alpha<10$), indicating that the effective coupling constant $\alpha$ entering the Planckian bound multiplying the temperature (in a strictly qualitative dimensional analysis) is indeed of order unity.
Why it is so remains a mystery, and the possibility that this is merely a coincidence and that future experiments will discover strong violations of the Planckian hypothesis should not be ruled out.
Our work suggests that looking for strong violations of the Planckian bound in any materials, by much more than an order of magnitude, should be a serious goal of future experiments so that we know for sure whether the generalized bound within an order of magnitude is generically valid or not.
\section{Acknowledgement} \label{sec:acknowledgement}
This work is supported by the Laboratory for Physical Sciences.
\clearpage
\section{Introduction}
Understanding the behavior of neural networks (NNs) learned through stochastic gradient descent (SGD) is a prerequisite to revealing the source of NN's generalization in practice. Information bottleneck (IB) \citep{tishby2000information} was a promising candidate to reveal the principle of NNs through the lens of information stored in encoded representations of inputs. Drawn from the conception of representation \emph{minimality} and \emph{sufficiency} in information theory, IB describes the objective of NNs as a trade-off, where NNs are abstracted by a Markov chain $Y \leftrightarrow X \leftrightarrow T$, as
\begin{equation}
\max_T I(T;Y) - \beta I(T;X),
\end{equation}
where $I(T;X)$ and $I(T;Y)$ are the mutual information of the representation $T$ with the inputs $X$ and the labels $Y$, respectively. IB theory claimed \citep{tishby2015deep} and then empirically corroborated \citep{shwartz2017opening} that NNs trained by plain cross-entropy loss and SGD all confront an initial fitting phase and a subsequent compression phase. It also implied that representation compression is a causal factor behind the good generalization capability of NNs. The IB theorem points out the importance of representation compression and has ignited a series of follow-up works proposing new learning algorithms based on IB that explicitly take compression into account \citep{burgess2018understanding,achille2018information,dai2018compressing,li2019specializing,achille2018emergence,kolchinsky2019nonlinear,wu2020graph,pan2020disentangled,goyal2018infobot,wang2019deep,wang2020information}.
However, recent critics challenged the universality of the above claims. First, \citet{saxe2019information} argued that the representation compression phase only appears when \emph{double-sided saturating nonlinearities} like \texttt{tanh} and \texttt{sigmoid} are deployed. The boundary between the two phases fades away with other nonlinearities, e.g., \texttt{ReLU}.
Second, the claimed causality between compression and generalization was also questioned \citep{saxe2019information, goldfeld2019estimating}, i.e., sometimes networks that do not compress still generalize well, and vice versa. To alleviate this issue, \citet{goldfeld2019estimating} proposed that the clustering of hidden representations occurs concurrently with good generalization ability. However, this new proposal still lacks a solid theoretical guarantee. Third, mutual information becomes trivial in deterministic cases \citep{shwartz2020information}. Other problems encountered in deterministic cases were pointed out by \citet{kolchinsky2018caveats}. Although several techniques \citep{shwartz2017opening,goldfeld2019estimating}, e.g., binning and adding noise, are adopted to make stochastic approximations of the information term, they might either violate the principle of IB or contradict the high performance of NNs. Motivated by these developments, we focus on the following questions:
\begin{itemize}[leftmargin=*, itemsep=0pt, labelsep=5pt]
\item Does a universal two-phase training behavior of NNs exist in practice? If this claim is invalid with the previous information measure $I(T;X)$, can we achieve this two-phase property through another information-theoretic perspective?
\item As $I(T;X)$ was unable to fully explain NN's generalization, can we find another measure with a theoretical generalization guarantee? Also, how do we leverage it to amend the IB theory for deep neural networks?
\item How do we utilize our new IB for efficient training and inference of NNs in practice?
\end{itemize}
In this work, we propose to handle the above questions through the lens of the information stored in weights (IIW), i.e., $I(\mathbf{w};S)$ where $S=\{X_i,Y_i\}_{i=1}^n$ is a finite-sample dataset. Our main contributions are four-fold: (1) we propose a new information bottleneck under the umbrella of the PAC-Bayes generalization guarantee, namely the \textbf{P}AC-Bayes \textbf{I}nformation \textbf{B}ottleneck (PIB); (2) we derive an approximation of the intractable IIW; (3) we design a Bayesian inference algorithm grounded in stochastic gradient Langevin dynamics (SGLD) for sampling from the optimal weight posterior specified by PIB; and (4) we demonstrate that our new information measure covers the wide ground of NN's behavior. Thanks to the plug-and-play modularity of SGD/SGLD, we can seamlessly adapt any existing NN to a PAC-Bayes IB augmented NN. Demo code is at \url{https://github.com/RyanWangZf/PAC-Bayes-IB}.
\section{A New Bottleneck with PAC-Bayes Guarantee} \label{sec:new_bottleneck}
In this section, we present the preliminaries of PAC-Bayes theory and then introduce our new information bottleneck. A loss function $\ell(f^\mathbf{w}(X),Y)$ measures the discrepancy between the prediction $f^\mathbf{w}(X)$ and the ground-truth label $Y$. Given the ground-truth joint distribution $p(X,Y)$, the expected true risk (out-of-sample risk) is defined as
\begin{equation}\label{eq:expected_true_risk}
L(\mathbf{w}) \triangleq \mathbb{E}_{p(\mathbf{w}|S)}\mathbb{E}_{p(X,Y)}[\ell(f^\mathbf{w}(X),Y)].
\end{equation}
Note that we take an additional expectation over $p(\mathbf{w}|S)$ because we are evaluating the risk of the learned posterior instead of a specific value of the parameter $\mathbf{w}$. In addition, we call $p(\mathbf{w}|S)$ the posterior here for convenience, even though it is not the Bayesian posterior computed through Bayes' theorem, $p(\mathbf{w}|S) = \frac{p(\mathbf{w})p(S|\mathbf{w})}{p(S)}$. PAC-Bayes bounds hold even if the prior $p(\mathbf{w})$ is incorrect and the posterior $p(\mathbf{w}|S)$ is arbitrarily chosen.
In practice, we only own finite samples $S$. This gives rise to the empirical risk as
\begin{equation}\label{eq:expected_empirical_risk}
L_S(\mathbf{w}) = \mathbb{E}_{p(\mathbf{w}|S)}\left[\frac1n \sum_{i=1}^n \ell(f^\mathbf{w}(X_i),Y_i)\right].
\end{equation}
With Eqs. \eqref{eq:expected_true_risk} and \eqref{eq:expected_empirical_risk} at hand, the generalization gap of the learned posterior $p(\mathbf{w}|S)$ in out-of-sample testing is $\Delta L(\mathbf{w}) \triangleq L(\mathbf{w}) - L_S(\mathbf{w})$. \citet{xu2017information} proposed a PAC-Bayes bound based on the information contained in the weights, $I(\mathbf{w};S)$, namely
\begin{equation} \label{eq:gen_gap}
\mathbb{E}_{p(S)}[L(\mathbf{w}) - L_S(\mathbf{w})] \leq \sqrt{\frac{2\sigma^2}n I(\mathbf{w};S)},
\end{equation}
when $\ell(f^\mathbf{w}(X),Y)$ is $\sigma$-sub-Gaussian.\footnote{An easy way to fulfill this condition is to clip the loss function to $\ell \in [0,a]$, so that it satisfies $\frac{a}2$-sub-Gaussianity \citep{philippe2015high,xu2017information}.} A series of follow-ups tightened this bound and verified that it is an effective measure of the generalization capability of learning algorithms \citep{mou2018generalization,negrea2019information,pensia2018generalization,zhang2018information}. Therefore, it is natural to build a new information bottleneck grounded on this PAC-Bayes generalization measure, namely the PAC-Bayes information bottleneck (PIB), as
\begin{equation} \label{eq:pac_bayes_ib}
\min_{p(\mathbf{w}|S)} \mathcal{L}_{\text{PIB}} = L_S(\mathbf{w}) + \beta I(\mathbf{w};S).
\end{equation}
For classification tasks, the loss term $L_S(\mathbf{w})$ becomes the cross-entropy between the prediction $p(Y|X,\mathbf{w})$ and the label $p(Y|X)$, hence the PIB in Eq. \eqref{eq:pac_bayes_ib} is equivalent to
\begin{equation} \label{eq:pac_bayes_ib_max}
\max_{p(\mathbf{w}|S)} I(\mathbf{w};Y|X,S) - \beta I(\mathbf{w};S),
\end{equation}
which demonstrates a trade-off between maximizing the \emph{sufficiency} of the learned \emph{parameters} $\mathbf{w}$ (the information about the label $Y$ contained in $\mathbf{w}$) and enforcing their \emph{minimality} (minimizing the information about the dataset $S$ contained in $\mathbf{w}$). Unlike previous IBs based on \emph{representations}, our PIB is built on \emph{weights}, which are not directly influenced by the inputs and the selected activation functions. Moreover, the trade-off described by the PIB objective is more reasonable since its compression term is explicitly tied to the generalization of NNs.
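To illustrate the scale of the PAC-Bayes guarantee, the short sketch below evaluates the right-hand side of Eq.~\eqref{eq:gen_gap} for a few values of $I(\mathbf{w};S)$; the sample size and the sub-Gaussian parameter are illustrative assumptions.

```python
# Sketch: numerical scale of the PAC-Bayes generalization-gap bound
# sqrt(2 sigma^2 I(w;S) / n) for a sigma-sub-Gaussian loss.
# The sample size and information values are illustrative assumptions.
import math

def pac_bayes_gap_bound(info_nats, n, sigma):
    """Right-hand side of the bound; info_nats is I(w;S) in nats."""
    return math.sqrt(2.0 * sigma**2 * info_nats / n)

n = 50_000    # training-set size (assumed)
sigma = 0.5   # a loss clipped to [0, 1] is 1/2-sub-Gaussian
for info in (10.0, 100.0, 1000.0):
    print(f"I(w;S) = {info:6.1f} nats  gap bound = {pac_bayes_gap_bound(info, n, sigma):.4f}")
```

The bound shrinks as $1/\sqrt{n}$ and grows as $\sqrt{I(\mathbf{w};S)}$, which is exactly why penalizing the IIW in the PIB objective controls the generalization gap.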
\section{Estimating Information Stored in Weights} \label{sec:estimate_info_in_weights}
In this section, we present a new notion of IIW, $I(\mathbf{w};S)$, built on the Fisher information matrix, which relates to the flatness of the Riemannian manifold of the loss landscape. Unlike the Hessian eigenvalues of the loss function, which are used for identifying flat local minima and generalization but can be made arbitrarily large \citep{dinh2017sharp}, this notion is invariant to re-parameterization of NNs \citep{liang2019fisher}. Our measure is also invariant to the choice of activation functions because it is not directly influenced by the input $X$, unlike $I(T;X)$. We leverage it to monitor the information trajectory of NNs trained by SGD and cross-entropy loss and verify that it is capable of reproducing the two-phase transition for varying activations (e.g., \texttt{ReLU}, \texttt{linear}, \texttt{tanh}, and \texttt{sigmoid}) in \S \ref{sec:exp_non_linearity}.
\subsection{Closed-form Solution with Gaussian Assumption}
Having derived the new information bottleneck PIB, we can look into how the IIW $I(\mathbf{w};S)$ and the empirical risk $L_S(\mathbf{w})$ evolve during the learning process of NNs optimized by SGD. The key challenge is now to estimate the IIW $I(\mathbf{w};S)$, as
\begin{equation} \label{eq:def_mutual_information}
I(\mathbf{w};S) = \mathbb{E}_{p(S)}[\text{KL}(p(\mathbf{w}|S)\parallel p(\mathbf{w}))]
\end{equation}
is the expectation of the Kullback-Leibler (KL) divergence between $p(\mathbf{w}|S)$ and $p(\mathbf{w})$ over the distribution of datasets $p(S)$. Here, $p(\mathbf{w})$ is the marginal distribution of $p(\mathbf{w}|S)$, i.e., $p(\mathbf{w}) \triangleq \mathbb{E}_{p(S)}[p(\mathbf{w}|S)]$. When we assume that both $p(\mathbf{w})=\mathcal{N}(\mathbf{w}|{\rm \bm{\theta}}_0,{\rm \bm{\Sigma}}_0)$ and $p(\mathbf{w}|S)=\mathcal{N}(\mathbf{w}|{\rm \bm{\theta}}_S,{\rm \bm{\Sigma}}_S)$ are Gaussian distributions, the KL divergence term in Eq. \eqref{eq:def_mutual_information} has the closed-form solution
\begin{equation} \label{eq:gaussian_kl}
\text{KL}(p(\mathbf{w}|S)\parallel p(\mathbf{w})) =\frac12 \left [ \log \frac{\det {\rm \bm{\Sigma}}_0}{\det {\rm \bm{\Sigma}}_S} - D + ({\rm \bm{\theta}}_S - {\rm \bm{\theta}}_0)^{\top}{\rm \bm{\Sigma}}_0^{-1}({\rm \bm{\theta}}_S - {\rm \bm{\theta}}_0) + \text{tr}\left({\rm \bm{\Sigma}}_0^{-1}{\rm \bm{\Sigma}}_S\right) \right ].
\end{equation}
Here, $\det \mathbf{A}$ and $\text{tr}(\mathbf{A})$ are the determinant and trace of matrix $\mathbf{A}$, respectively; $D$ is the dimension of the parameter $\mathbf{w}$ and is a constant for a specific NN architecture; ${\rm \bm{\theta}}_S$ are the weights obtained after SGD converges on the dataset $S$. If the covariances of the prior and the posterior are proportional,\footnote{Assuming the same covariance for the Gaussian randomization of the posterior and the prior is a common practice in building PAC-Bayes bounds for simplification; see \citep{dziugaite2018data,rivasplata2018pac}.} the logarithmic and trace terms in Eq. \eqref{eq:gaussian_kl} both become constant. Therefore, the mutual information term is proportional to the quadratic term as
\begin{equation} \label{eq:information_in_weights_half}
I(\mathbf{w};S) \propto \mathbb{E}_{p(S)}\left[({\rm \bm{\theta}}_S - {\rm \bm{\theta}}_0)^{\top}{\rm \bm{\Sigma}}_0^{-1}({\rm \bm{\theta}}_S - {\rm \bm{\theta}}_0)\right]
= \mathbb{E}_{p(S)}\left[{\rm \bm{\theta}}_S^{\top} {\rm \bm{\Sigma}}_0^{-1} {\rm \bm{\theta}}_S\right] - {\rm \bm{\theta}}_0^{\top}{\rm \bm{\Sigma}}_0^{-1}{\rm \bm{\theta}}_0.
\end{equation}
In the next section, we will see how to set prior covariance ${\rm \bm{\Sigma}}_0$.
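As a quick sanity check, the closed-form KL divergence between two Gaussians (cf. Eq.~\eqref{eq:gaussian_kl}) can be evaluated directly; the snippet below is a minimal sketch on a 2-D example with illustrative parameters.

```python
# Sketch: closed-form KL divergence KL( N(theta_S, Sigma_S) || N(theta_0, Sigma_0) )
# between a Gaussian "posterior" and "prior", on a small 2-D example.
# All parameter values are illustrative.
import numpy as np

def gaussian_kl(theta_s, Sigma_s, theta_0, Sigma_0):
    D = theta_s.shape[0]
    Sigma_0_inv = np.linalg.inv(Sigma_0)
    diff = theta_s - theta_0
    return 0.5 * (np.log(np.linalg.det(Sigma_0) / np.linalg.det(Sigma_s))
                  - D
                  + diff @ Sigma_0_inv @ diff
                  + np.trace(Sigma_0_inv @ Sigma_s))

theta_0, Sigma_0 = np.zeros(2), np.eye(2)       # "prior" N(theta_0, Sigma_0)
theta_s = np.array([0.5, -0.3])                 # "posterior" mean
Sigma_s = np.array([[0.5, 0.1], [0.1, 0.4]])    # "posterior" covariance
print(f"KL = {gaussian_kl(theta_s, Sigma_s, theta_0, Sigma_0):.4f}")
```

The KL vanishes when the posterior equals the prior and grows with the quadratic term $({\rm \bm{\theta}}_S-{\rm \bm{\theta}}_0)^{\top}{\rm \bm{\Sigma}}_0^{-1}({\rm \bm{\theta}}_S-{\rm \bm{\theta}}_0)$, the piece retained in Eq.~\eqref{eq:information_in_weights_half}.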
\subsection{Bootstrap Covariance of Oracle Prior}
Since the computation of the exact oracle prior ${\rm \bm{\Sigma}}_0$ needs the knowledge of $p(S)$,\footnote{${\rm \bm{\Sigma}}_0$ is called the oracle prior because it minimizes the term $I(\mathbf{w};S) = \mathbb{E}_{p(S)}[\text{KL}(p(\mathbf{w}|S)\parallel p(\mathbf{w}))]$ when $p(\mathbf{w})=\mathbb{E}_{p(S)}[p(\mathbf{w}|S)]$ \citep{dziugaite2021role}.} we propose to approximate it by \emph{bootstrapping} from $S$ as
\begin{equation} \label{eq:covariance_zero}
{\rm \bm{\Sigma}}_0 = \mathbb{E}_{p(S)}\left[({\rm \bm{\theta}}_S - {\rm \bm{\theta}}_0)({\rm \bm{\theta}}_S - {\rm \bm{\theta}}_0)^{\top}\right] \simeq \frac1K \sum_{k} ({\rm \bm{\theta}}_{S_k} - {\rm \bm{\theta}}_S)({\rm \bm{\theta}}_{S_k} - {\rm \bm{\theta}}_S)^{\top},
\end{equation}
where $S_k$ is a bootstrap sample obtained by re-sampling from the finite data $S$, and $S_k \sim p(S)$ is still a valid sample following $p(S)$. Now we are closer to the solution, but the above term is still troublesome to calculate. To obtain $\{{\rm \bm{\theta}}_{S_k}\}_{k=1}^K$, we would need to optimize on a series of ($K$ times) bootstrapped datasets $\{S_k\}_{k=1}^K$ via SGD until convergence, which is prohibitive in deep learning practice. Therefore, we propose to approximate the difference ${\rm \bm{\theta}}_S - {\rm \bm{\theta}}_0$ by \emph{influence functions} drawn from the robust statistics literature \citep{cook1982residuals,koh2017understanding,wang2020less,wang2020finding}.
\begin{lemma}[Influence Function \citep{cook1982residuals}] \label{lemma:influence_function}
Given a dataset $S=\{(X_i,Y_i)\}_{i=1}^n$ and the parameter $\hat{{\rm \bm{\theta}}}_S \triangleq \mathop{\arg\!\min}_{{\rm \bm{\theta}}} L_S({\rm \bm{\theta}}) = \mathop{\arg\!\min}_{{\rm \bm{\theta}}} \frac1n \sum_{i=1}^n \ell_i({\rm \bm{\theta}})$\footnote{Note $L_S({\rm \bm{\theta}})$ is not the expected empirical risk $L_S(\mathbf{w})$ in Eq. \eqref{eq:expected_empirical_risk}; instead, it is the deterministic empirical risk that only relates to the mean parameter ${\rm \bm{\theta}}$. We also denote $\ell(f^{\rm \bm{\theta}}(X_i),Y_i)$ by $\ell_i({\rm \bm{\theta}})$ for the notation conciseness.} that optimizes the empirical loss function, if we drop sample $(X_j,Y_j)$ in $S$ to get a jackknife sample $S_{\setminus j}$ and retrain our model, the new parameters are $\hat{{\rm \bm{\theta}}}_{S_{\setminus j}} = \mathop{\arg\!\min}_{{\rm \bm{\theta}}} L_{S_{\setminus j}}({\rm \bm{\theta}}) = \mathop{\arg\!\min}_{{\rm \bm{\theta}}} \frac1n \sum_{i=1}^n \ell_i({\rm \bm{\theta}}) - \frac1n \ell_{j}({\rm \bm{\theta}})$. The approximation of parameter difference $\hat{{\rm \bm{\theta}}}_{S_{\setminus j}} - \hat{{\rm \bm{\theta}}}_S$ is defined by influence function $\bm{\psi}$, as
\begin{equation}
\hat{{\rm \bm{\theta}}}_{S_{\setminus j}} - \hat{{\rm \bm{\theta}}}_S \simeq -\frac1n \bm{\psi}_j, \ \text{where} \ \bm{\psi}_j = - {\mathbf{H}}_{\hat{{\rm \bm{\theta}}}_S}^{-1} \nabla_{\rm \bm{\theta}} \ell_j(\hat{{\rm \bm{\theta}}}_S) \in \mathbb{R}^D,
\end{equation}
and $\mathbf{H}_{\hat{{\rm \bm{\theta}}}_S} \triangleq \frac1n \sum_{i=1}^n \nabla^2_{{\rm \bm{\theta}}} \ell_i(\hat{{\rm \bm{\theta}}}_S) \in \mathbb{R}^{D\times D}$ is Hessian matrix.
\end{lemma}
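Lemma~\ref{lemma:influence_function} can be checked numerically on a model where leave-one-out retraining is cheap; the sketch below uses $L_2$-regularized linear regression (so the Hessian is exact and the leave-one-out optimum can be recomputed in closed form) and compares the influence-function prediction with actual retraining. The data and hyperparameters are illustrative assumptions.

```python
# Sketch: checking the influence-function approximation on ridge regression,
# with per-sample loss l_i(theta) = 0.5*(x_i^T theta - y_i)^2 + 0.5*lam*||theta||^2,
# where the leave-one-out optimum can be recomputed exactly for comparison.
# Data and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 200, 5, 0.1
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def fit(Xs, ys):
    """Exact minimizer of (1/m) sum_i l_i(theta) for the ridge loss above."""
    m = Xs.shape[0]
    return np.linalg.solve(Xs.T @ Xs / m + lam * np.eye(d), Xs.T @ ys / m)

theta = fit(X, y)
H = X.T @ X / n + lam * np.eye(d)                      # Hessian of the empirical risk

j = 0
grad_j = X[j] * (X[j] @ theta - y[j]) + lam * theta    # gradient of l_j at theta
psi_j = -np.linalg.solve(H, grad_j)                    # influence function psi_j
theta_loo_pred = theta - psi_j / n                     # the lemma's prediction
theta_loo_true = fit(np.delete(X, j, axis=0), np.delete(y, j))

print("max abs error:", np.abs(theta_loo_pred - theta_loo_true).max())
```

The approximation error is second order in $1/n$, so the influence-function prediction tracks true retraining far more closely than simply reusing $\hat{{\rm \bm{\theta}}}_S$.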
The application of influence functions can be further extended to the case when the loss function is perturbed by a vector $\bm{\xi}=(\xi_1,\xi_2,\dots,\xi_n)^{\top} \in \mathbb{R}^{n}$ as $\hat{{\rm \bm{\theta}}}_{S,\bm{\xi}} = \mathop{\arg\!\min}_{{\rm \bm{\theta}}} \frac1n \sum_{i=1}^n \xi_i \ell_i({\rm \bm{\theta}})$. In this scenario, the parameter difference can be approximated by
\begin{equation}
\hat{{\rm \bm{\theta}}}_{S,\bm{\xi}} - \hat{{\rm \bm{\theta}}}_S \simeq \frac1n \sum_{i=1}^n \left(\xi_i - 1 \right) \bm{\psi}_i = \frac1n {\rm \bm{\Psi}}^{\top} \left(\bm{\xi} - \bm{1} \right),
\end{equation}
where ${\rm \bm{\Psi}} = (\bm{\psi}_1, \bm{\psi}_2, \dots, \bm{\psi}_n)^{\top} \in \mathbb{R}^{n \times D}$ is a combination of all influence functions $\bm{\psi}$; $\bm{1} = (1,1,\dots,1)^{\top}$ is an $n$-dimensional all-one vector. Lemma \ref{lemma:influence_function} gives rise to the following lemma on approximation of oracle prior covariance in Eq. \eqref{eq:covariance_zero}:
\begin{algorithm}[t]
\caption{Efficient approximate information estimation of $I(\mathbf{w};S)$ \label{alg:2}}
\LinesNumbered
\KwData{Total number of samples $n$, batch size $B$, number of mini-batches in one epoch $T_0$, number of information estimation iterations $T_1$, learning rate $\eta$, moving average hyperparameters $\rho$ and $K$, a gradient set $\nabla \mathcal{L} = \emptyset$}
\KwResult{Calculated approximate information $\widetilde{I}(\mathbf{w};S)$}
Pretrain the model by vanilla SGD to obtain the prior mean ${\rm \bm{\theta}}_0$ \;
\For{t=1:$T_0$}{
$\nabla L_t \gets \nabla_{{\rm \bm{\theta}}}\frac1B \sum_b \ell_b(\hat{{\rm \bm{\theta}}}_{t-1})$, $\hat{{\rm \bm{\theta}}}_{t} \gets \hat{{\rm \bm{\theta}}}_{t-1} - \eta \nabla L_t $ \tcc*{Vanilla SGD}
$\nabla \mathcal{L} \gets \nabla \mathcal{L} \bigcup \{\nabla L_t \}$ \tcc*{Store gradients}
$\Bar{{\rm \bm{\theta}}}_{t} \gets \sqrt{ \rho \Bar{{\rm \bm{\theta}}}^2_{t-1} + \frac{1-\rho}{K}\sum_{k=0}^{K-1} \hat{{\rm \bm{\theta}}}^2_{t-k}}$ \tcc*{Moving average}
}
$\Delta {\rm \bm{\theta}} \gets \Bar{{\rm \bm{\theta}}}_{T_0} - {\rm \bm{\theta}}_0$, \ $\Delta {\mathbf{F}}_0 \gets 0$ \;
\For{t=1:$T_1$}{
$\Delta{{\mathbf{F}}}_t \gets \Delta{{\mathbf{F}}}_{t-1} + (\Delta {\rm \bm{\theta}}^{\top} \nabla L_t)^2$ \tcc*{Storage-friendly computation}
}
$\widetilde{I}(\mathbf{w};S) \gets \frac{n}{T_1} \Delta{{\mathbf{F}}}_{T_1}$\;
\end{algorithm}
\begin{lemma}[Approximation of Oracle Prior Covariance] \label{lemma:oracle_covariance}
Given the definition of influence functions (Lemma \ref{lemma:influence_function}) and Poisson bootstrapping (Lemma \ref{lemma:poisson}), the covariance matrix of the oracle prior can be approximated by
\begin{equation} \label{eq:prior_cov_fisher}
{\rm \bm{\Sigma}}_0 = \mathbb{E}_{p(S)}\left[({\rm \bm{\theta}}_{S}- {\rm \bm{\theta}}_0)({\rm \bm{\theta}}_{S} - {\rm \bm{\theta}}_0)^{\top}\right] \simeq \frac1K \sum_{k=1}^K \left(\hat{{\rm \bm{\theta}}}_{\bm{\xi}^k}- \hat{{\rm \bm{\theta}}}\right) \left(\hat{{\rm \bm{\theta}}}_{\bm{\xi}^k} - \hat{{\rm \bm{\theta}}}\right)^{\top}
\simeq \frac1n {\mathbf{H}}_{\hat{{\rm \bm{\theta}}}}^{-1} {\mathbf{F}}_{\hat{{\rm \bm{\theta}}}} {\mathbf{H}}_{\hat{{\rm \bm{\theta}}}}^{-1} \simeq \frac1n {\mathbf{F}}_{\hat{{\rm \bm{\theta}}}}^{-1},
\end{equation}
where ${\mathbf{F}}_{\hat{{\rm \bm{\theta}}}}$ is Fisher information matrix (FIM); we omit the subscript $S$ of $\hat{{\rm \bm{\theta}}}_S$ and $\hat{{\rm \bm{\theta}}}_{S,\bm{\xi}}$ for notation conciseness, and $\bm{\xi}^k$ is the bootstrap resampling weight in the $k$-th experiment.
\end{lemma}
Please refer to Appendix \ref{appx:proof_4_1} for the proof of this lemma.
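The chain of approximations in Lemma \ref{lemma:oracle_covariance} can be checked numerically in the simplest possible setting, a one-dimensional Gaussian mean model, where the MLE, the FIM, and the sampling distribution over $p(S)$ are all known in closed form. The sketch below is only illustrative; all constants are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, sigma = 500, 2000, 2.0
theta_0 = 1.5   # population optimum, playing the role of the oracle prior mean

# theta_S is the MLE (sample mean) of N(theta_0, sigma^2) fit on n samples;
# draw K independent datasets S ~ p(S) to estimate the oracle covariance
thetas = np.array([rng.normal(theta_0, sigma, size=n).mean() for _ in range(K)])

emp_cov = np.mean((thetas - theta_0) ** 2)   # E_{p(S)}[(theta_S - theta_0)^2]
fisher = 1.0 / sigma ** 2                    # FIM of the Gaussian mean model
approx_cov = 1.0 / (n * fisher)              # (1/n) F^{-1} from the lemma
print(emp_cov, approx_cov)                   # both close to sigma^2 / n
```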
\subsection{Efficient Information Estimation Algorithm}
After the approximation of the oracle prior covariance, we are now able to rewrite the IIW term $I(\mathbf{w};S)$ in Eq. \eqref{eq:information_in_weights_half} as
\begin{equation} \label{eq:information_final}
I(\mathbf{w};S) \propto n \mathbb{E}_{p(S)}\left[({\rm \bm{\theta}}_S - {\rm \bm{\theta}}_0)^{\top} {\mathbf{F}}_{\hat{{\rm \bm{\theta}}}} ({\rm \bm{\theta}}_S - {\rm \bm{\theta}}_0) \right] \simeq n (\Bar{{\rm \bm{\theta}}}_S - {\rm \bm{\theta}}_0)^{\top} {\mathbf{F}}_{\hat{{\rm \bm{\theta}}}} (\Bar{{\rm \bm{\theta}}}_S - {\rm \bm{\theta}}_0) = \widetilde{I}(\mathbf{w};S).
\end{equation}
We define the approximate information by $\widetilde{I}(\mathbf{w};S)$ where we approximate the expectation $\mathbb{E}_{p(S)}[{\rm \bm{\theta}}_S^{\top} F_{\hat{{\rm \bm{\theta}}}} {\rm \bm{\theta}}_S]$ by
$\Bar{{\rm \bm{\theta}}}_S = \sqrt{\frac1K \sum_{k=1}^K \hat{{\rm \bm{\theta}}}_k^2} = \left(\sqrt{\frac1K \sum_{k=1}^K \hat{\theta}_{1,k}^2}, \dots, \sqrt{\frac1K \sum_{k=1}^K \hat{\theta}_{D,k}^2} \right)^{\top}$.\footnote{The quadratic mean is closer to the true value than the arithmetic mean because of the quadratic term inside the expectation.} In Eq. \eqref{eq:information_final}, the information consists of two major components: $\Delta{{\rm \bm{\theta}}} = \Bar{{\rm \bm{\theta}}}_S - {\rm \bm{\theta}}_0 \in \mathbb{R}^D$ and ${\mathbf{F}}_{\hat{{\rm \bm{\theta}}}} \in \mathbb{R}^{D\times D}$, whose product can easily cause out-of-memory errors due to the high-dimensional matrix operations. We therefore exploit the structure of the FIM to get
\begin{equation} \label{eq:final_iiw}
\widetilde{I}(\mathbf{w};S) = n \Delta{{\rm \bm{\theta}}}^{\top} \left [\frac{1}{T} \sum_{t=1}^T \nabla_{{\rm \bm{\theta}}} \ell_t(\hat{{\rm \bm{\theta}}}) \nabla_{{\rm \bm{\theta}}} \ell^{\top}_t(\hat{{\rm \bm{\theta}}}) \right] \Delta{{\rm \bm{\theta}}}
= \frac{n}{T}\sum_{t=1}^T \left[\Delta{{\rm \bm{\theta}}}^{\top} \nabla_{{\rm \bm{\theta}}} \ell_t(\hat{{\rm \bm{\theta}}}) \right]^2,
\end{equation}
such that the high-dimensional matrix-vector product reduces to a series of vector inner products. We summarize the algorithm for estimating IIW during vanilla SGD in Algorithm \ref{alg:2}.
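The memory saving in Eq. \eqref{eq:final_iiw} rests on an exact algebraic identity: $\Delta{\rm \bm{\theta}}^{\top} (\frac1T\sum_t \nabla\ell_t \nabla\ell_t^{\top})\Delta{\rm \bm{\theta}} = \frac1T\sum_t (\Delta{\rm \bm{\theta}}^{\top}\nabla\ell_t)^2$. A minimal sketch with synthetic gradients (dimensions are arbitrary, not the paper's settings) confirms the two computations agree, while the second never materializes the $D\times D$ FIM.

```python
import numpy as np

rng = np.random.default_rng(2)
D, T, n = 1000, 50, 10000
delta = rng.normal(size=D)        # plays the role of Delta theta = bar(theta)_S - theta_0
grads = rng.normal(size=(T, D))   # plays the role of per-batch gradients nabla ell_t

# Naive: build the D x D empirical FIM explicitly (O(D^2) memory)
F = grads.T @ grads / T
iiw_naive = n * delta @ F @ delta

# Storage-friendly: accumulate scalar inner products only (O(D) memory)
iiw_fast = n * np.mean((grads @ delta) ** 2)
print(iiw_naive, iiw_fast)        # identical up to floating-point error
```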
\begin{algorithm}[t]
\caption{Optimal Gibbs posterior inference by SGLD. \label{alg:3}}
\LinesNumbered
\KwData{Total number of samples $n$, batch size $B$, learning rate $\eta$, temperature $\beta$}
\KwResult{A sequence of weights $\{\mathbf{w}_t\}_{t\geq \hat{k}}$ following $p(\mathbf{w}|S^*)$}
\Repeat{the weight sequence $\{\mathbf{w}_t\}_{t\geq \hat{k}}$ becomes stable}
{
\tcc{Mini-batch gradient of energy function}
$\nabla \widetilde{U}_{S^*}(\mathbf{w}_{t-1}) \gets \nabla \left(-\frac{B}{n} \sum_b \log p(Y_b|X_b,\mathbf{w}_{t-1}) - \beta_{t-1} \log p(\mathbf{w}_{t-1}) \right)$ \;
\tcc{SGLD by gradient plus isotropic Gaussian noise}
$\varepsilon_t \gets \mathcal{N}(\varepsilon|\mathbf{0},\mathbf{I}_D)$, $\mathbf{w}_{t} \gets \mathbf{w}_{t-1} - \eta_{t-1} \nabla \widetilde{U}_{S^*}(\mathbf{w}_{t-1}) + \sqrt{2 \eta_{t-1} \beta_{t-1} }\varepsilon_t $ \;
\tcc{Learning rate \& temperature decay}
$\eta_t \gets \phi_{\eta}(\eta_{t-1})$, $\beta_t \gets \phi_{\beta}(\beta_{t-1})$, $t \gets t+1$ \;
}
\end{algorithm}
\section{Bayesian Inference for the Optimal Posterior} \label{sec:bayes_inference_pib}
Recall that we designed a new bottleneck on the expected generalization gap drawn from PAC-Bayes theory in \S \ref{sec:new_bottleneck}, and then derived an approximation of the IIW in \S \ref{sec:estimate_info_in_weights}. The two components of PAC-Bayes IB in Eq. \eqref{eq:pac_bayes_ib_max} are hence tractable as a learning objective. We give the following lemma on utilizing it for inference.
\begin{figure}[t]
\centering
\includegraphics[width=0.99\linewidth]{figures/info_acc_activations.pdf}
\caption{IIW (top), loss and accuracy (bottom) of NNs with different activation functions (\texttt{linear}, \texttt{tanh}, \texttt{ReLU}, and \texttt{sigmoid}). There is a clear boundary between the initial \textbf{fitting} and the \textbf{compression} phases identified by IIW. Meanwhile, the training loss passes an inflection point, after which it keeps decreasing with a slower slope. Note that the learning rate is set small (1e-4), except for the \texttt{sigmoid}-NN, for a better display of the two phases. }
\label{fig:exp_nonlinearities}
\vspace{-1.5em}
\end{figure}
\begin{lemma}[Optimal Posterior for PAC-Bayes Information Bottleneck]\label{lemma:optimal_posterior}
Given an observed dataset $S^*$, the optimal posterior $p(\mathbf{w}|S^*)$ of the PAC-Bayes IB in Eq. \eqref{eq:pac_bayes_ib} takes the following form:
\begin{equation}
p(\mathbf{w}|S^*) = \frac1{Z(S)} p(\mathbf{w}) \exp \left\{-\frac{1}{\beta} \hat{L}_{S^*}(\mathbf{w}) \right\} = \frac1{Z(S)} \exp \left\{-\frac{1}{\beta} U_{S^*}(\mathbf{w}) \right\},
\end{equation}
where $U_{S^*}(\mathbf{w})$ is the energy function defined as $U_{S^*}(\mathbf{w}) = \hat{L}_{S^*}(\mathbf{w}) - \beta \log p(\mathbf{w})$, and $Z(S)$ is the normalizing constant.
\end{lemma}
Please refer to Appendix \ref{appx:proof_4_2} for the proof. The reason we write the posterior in an exponential form is that it is a typical \emph{Gibbs distribution} \citep{kittel2004elementary} (also called a Boltzmann distribution) with \emph{energy function} $ U_{S^*}(\mathbf{w})$ and \emph{temperature} $\beta$ (the same $\beta$ of PIB that appears in Eq. \eqref{eq:pac_bayes_ib}). Thanks to this formulation, we can adopt Markov chain Monte Carlo (MCMC) for rather efficient Bayesian inference. Specifically, we propose to use stochastic gradient Langevin dynamics (SGLD) \citep{welling2011bayesian}, which has proven efficient and effective in large-scale posterior inference. SGLD can be realized by a simple adaptation of SGD as
\begin{equation} \label{eq:sgld}
\mathbf{w}_{k+1} = \mathbf{w}_{k} - \eta_k \mathbf{g}_k + \sqrt{2\eta_k \beta} \varepsilon_k,
\end{equation}
where $\eta_k$ is the step size, $\varepsilon_k \sim \mathcal{N}(\varepsilon|\mathbf{0},\mathbf{I}_D)$ is a standard Gaussian noise vector, and $\mathbf{g}_k$ is an unbiased estimate of the energy function gradient $\nabla U(\mathbf{w}_k)$. SGLD can be viewed as a discretization of the Langevin diffusion described by the stochastic differential equation \citep{raginsky2017non,borkar1999strong}: $d \mathbf{w}(t) = - \nabla U(\mathbf{w}(t)) dt + \sqrt{2 \beta}d B(t)$, where $\{B(t)\}_{t\geq 0}$ is the standard Brownian motion in $\mathbb{R}^D$. The Gibbs distribution $\pi(\mathbf{w}) \propto \exp(-\frac{1}{\beta} U(\mathbf{w}))$ is the unique invariant distribution of this Langevin diffusion, and the distribution of $\mathbf{w}(t)$ converges rapidly to $\pi(\mathbf{w})$ as $t \to \infty$ for sufficiently small $\beta$ \citep{chiang1987diffusion}. Similarly, for SGLD in Eq. \eqref{eq:sgld}, under the step-size conditions $\sum_{t=1}^{\infty} \eta_t = \infty \ \text{and} \ \sum_{t=1}^{\infty} \eta_t^2 < \infty$,
and an annealing temperature $\beta$, the sequence $\{\mathbf{w}_k\}_{k\geq \hat{k}}$ converges to the Gibbs distribution for sufficiently large $\hat{k}$.
As we assume the oracle prior $p(\mathbf{w}) = \mathcal{N}(\mathbf{w} | {\rm \bm{\theta}}_0, {\rm \bm{\Sigma}}_0)$, $\log p(\mathbf{w})$ satisfies
\begin{equation} \label{eq:prior_solution}
- \log p(\mathbf{w}) \propto (\mathbf{w} - {\rm \bm{\theta}}_0)^{\top} {\rm \bm{\Sigma}}_0^{-1} (\mathbf{w} - {\rm \bm{\theta}}_0) + \log(\det {\rm \bm{\Sigma}}_0).
\end{equation}
The inference of the optimal posterior is then summarized in Algorithm \ref{alg:3}. $\phi_\eta(\cdot)$ and $\phi_\beta(\cdot)$ are the learning rate decay and temperature annealing functions (e.g., cosine decay), respectively. Our SGLD-based algorithm leverages the advantages of MCMC such that it is capable of sampling from the optimal posterior even for very complex NNs. Also, it does not need to know the ground-truth distribution of $S$ while still theoretically allowing global convergence that avoids local minima. It can be realized with a minimal adaptation of common auto-differentiation packages, e.g., PyTorch \citep{paszke2019pytorch}, by injecting isotropic noise into the SGD updates. Please refer to Appendix \ref{appx:computation} for the computational details of the PIB objective.
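The core update of Algorithm \ref{alg:3} is the SGLD step in Eq. \eqref{eq:sgld}. The following minimal sketch (a toy quadratic energy in one dimension, not the paper's NN setting; all constants are arbitrary) samples from a known Gibbs distribution and checks that the empirical variance matches the target $\mathcal{N}(0,\beta)$.

```python
import numpy as np

rng = np.random.default_rng(3)
beta, eta, T = 0.5, 1e-2, 200_000

def grad_U(w):
    # energy U(w) = w^2 / 2, so the Gibbs target exp(-U/beta) is N(0, beta)
    return w

w, samples = 0.0, []
for t in range(T):
    eps = rng.normal()
    # SGLD step: gradient descent plus scaled isotropic Gaussian noise
    w = w - eta * grad_U(w) + np.sqrt(2.0 * eta * beta) * eps
    if t > T // 2:               # discard the first half as burn-in
        samples.append(w)

print(np.var(samples))           # close to beta = 0.5
```

For a full model, `grad_U` would be the mini-batch gradient of the energy function (negative log-likelihood plus the negative log-prior scaled by $\beta$), and $\eta$, $\beta$ would be annealed as in the algorithm.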
\section{Experiments}
In this section, we aim to verify the interpretability of the proposed notion of IIW in Eq. \eqref{eq:final_iiw}. We monitor the information trajectory when training NNs \textbf{with plain cross entropy loss and SGD}, with respect to activation functions (\S \ref{sec:exp_non_linearity}), architecture (\S \ref{sec:exp_architecture}), noise ratio (\S \ref{sec:exp_random_label}), and batch size (\S \ref{sec:exp_info_batch_size}). We also substantiate the superiority of optimal Gibbs posterior inference based on the proposed Algorithm \ref{alg:3}, where PIB instead of plain cross entropy is used as the objective function (\S \ref{sec:exp_energy_function}). Finally, we summarize the empirical observations in \S \ref{sec:exp_summary}. Please refer to Appendix \ref{appx:exp_protocol} for general experimental setups regarding the datasets and NNs used.
\begin{wrapfigure}{R}{0.5\textwidth}
\centering
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/mlp_1_info.pdf}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/mlp_2_info.pdf}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/mlp_3_info.pdf}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[width=0.95\textwidth]{figures/mlp_4_info.pdf}
\end{subfigure}
\caption{Information compression with a varying number of layers (1, 2, 3, and 4) for ReLU MLPs. All follow the general fitting-to-compression phase transition, and deeper models accelerate both the fitting and the compression phases. \label{fig:exp_info_layers}}
\vspace{-2em}
\end{wrapfigure}
\subsection{Information with Different Activation Functions}\label{sec:exp_non_linearity}
We train a 2-layer MLP (784-512-10) with plain cross-entropy loss by Adam on the MNIST dataset, and meanwhile monitor the trajectory of the IIW $I(\mathbf{w};S)$. Results are illustrated in Fig. \ref{fig:exp_nonlinearities}, where different activation functions (\texttt{linear}, \texttt{tanh}, \texttt{ReLU}, and \texttt{sigmoid}) are tested. We identify a clear boundary between the fitting and compression phases for all of them. For example, for the linear activation function in the first column, the IIW $I(\mathbf{w};S)$ surges within the first several iterations, then drops slowly during the following iterations. Simultaneously, the training loss reduces sharply at the initial stage, then keeps decreasing during the information compression. In the last period of compression, we observe that the information fluctuates near zero with a recurrence of the memorization phenomenon (IIW starts to increase). This implies that further training is causing over-fitting. IIW shows great universality in representing the information compression of NNs.
\subsection{Information with Deeper and Wider Architecture}\label{sec:exp_architecture}
Having identified the phase transition of the 2-layer MLP corresponding to IIW $I(\mathbf{w};S)$, we test it under more settings: different architectures and different batch sizes. For the architecture setting, we design MLPs from 1 to 4 layers. Results are shown in Fig. \ref{fig:exp_info_layers}. The first and the last figures show the information trajectory of the 1-layer/4-layer version of MLP-Large (784-10/784-512-100-80-10) where clear two-phase transitions happen in all these MLPs.
The 1-layer MLP is actually a softmax regression model. We can identify that this model fits and compresses very slowly w.r.t. IIW compared with deeper NNs. This phenomenon demonstrates that deep models have overwhelmingly larger learning capacity than shallow models: deep layers not only boost the memorization of data but also urge the model to compress the redundant information to gain better generalization ability. Furthermore, when we add more layers, the fitting phase becomes shorter. Specifically, we observe the incidence of overfitting at the end of the 4-layer MLP training, as IIW starts to increase.
We also examine how IIW explains generalization w.r.t. the number of hidden units, a.k.a. the width of NNs, in Fig. \ref{fig:exp_units}. We train a 2-layer MLP without any regularization on MNIST. The left panel shows the training and test errors for this experiment. Notably, the difference between test and train accuracy can be seen as an indicator of the \textbf{generalization gap} in Eq. \eqref{eq:gen_gap}. IIW should be aligned with this gap by definition. While 32 units are (nearly) enough to interpolate the training set, more hidden units still achieve better generalization performance, which illustrates the effect of overparameterization. In this scenario, the weights' $\ell_2$-norm keeps increasing with more units while IIW decays, similar to the test error. We identify that more hidden units do not render much increase of IIW, which is in contrast to the intuition that wider NNs always have larger information complexity. More importantly, we find IIW is consistent with the generalization gap at each width.
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.47\textwidth}
\begin{flushleft}
\includegraphics[width=0.99\linewidth]{figures/info_width.pdf}
\caption{\textbf{Left}: Training and test accuracy w.r.t. \# units; \textbf{Right}: Complexity measure (IIW and $\ell_2$-norm) w.r.t. \# units. The blue dashed line shows the gap between train/test accuracy (the generalization gap). We find the $\ell_2$-norm keeps increasing with more hidden units. Instead, IIW keeps pace with the generalization gap: the larger the gap, the larger the IIW. \label{fig:exp_units}}
\end{flushleft}
\end{minipage}
\hspace{.3em}
\begin{minipage}[t]{0.48\textwidth}
\begin{flushright}
\includegraphics[width=0.99\linewidth]{figures/info_noise.pdf}
\caption{\textbf{Left}: IIW, train, and test accuracy when noise ratio in labels changes. IIW rises when noise ratio grows. \textbf{Right}: IIW with varying size of random-label data. Test acc keeps constant while train acc decays. Hence, more data causes lower IIW because of the shrinking gap between the train and test accuracy. \label{fig:exp_random_true_labels}}
\end{flushright}
\end{minipage}
\vspace{-1.0em}
\end{figure}
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.47\textwidth}
\begin{flushleft}
\includegraphics[width=0.99\linewidth]{figures/info_batch_size.pdf}
\caption{\textbf{Left}: Training and test accuracy w.r.t. \# batch size; \textbf{Right}: IIW w.r.t. \# batch size. We find IIW keeps pace with the generalization gap: the larger the gap, the larger the IIW. From IIW we can specify that 16 is the best which reaches the least generalization gap without the need of having the model tested. \label{fig:exp_info_batch_size}}
\end{flushleft}
\end{minipage}
\hspace{.3em}
\begin{minipage}[t]{0.48\textwidth}
\begin{center}
\includegraphics[width=0.52\linewidth]{figures/IIW_VGG.pdf}
\caption{The tracked IIW of the VGG net during the training by four ways: vanilla, $\ell_2$-norm regularization, dropout, and PIB training. We can identify that: first, all of them still follow a fitting-compressing paradigm specified by IIW; second, vanilla VGG reaches the largest IIW far above the others; third, PIB regularizes IIW directly thus yielding the smallest IIW. \label{fig:iiw_vgg}}
\end{center}
\end{minipage}
\vspace{-1.5em}
\end{figure}
\subsection{Random Labels vs. True Labels}\label{sec:exp_random_label}
According to the PAC-Bayes theorem, IIW is a promising measure to explain/predict the generalization capability of NNs. NNs are often over-parameterized and thus can perfectly fit even random labels, obviously without any generalization capability. For example, our 2-layer MLP has $15,728,640$ parameters, far more than the number of samples in CIFAR-10 (50,000). This makes the number of parameters an unreliable measure of NN complexity in overparameterization settings \citep{neyshabur2015search}. Alternatively, the $\ell_2$-norm is often used as a complexity measure imposed to regularize model training in practice.
We investigate models trained with different levels of label corruption, as shown in the left panel of Fig. \ref{fig:exp_random_true_labels}. We train a 2-layer MLP on CIFAR-10 and find that increasing noise causes a sharp test accuracy decay while the train accuracy does not change much. Meanwhile, IIW keeps growing with the fall of test accuracy and the expansion of the generalization gap. This demonstrates IIW's potential in identifying the noise level in datasets or the mismatch between the test and train data distributions.
We further build a random-label CIFAR-10; results are shown on the right of Fig. \ref{fig:exp_random_true_labels}. Although the model can still nearly interpolate the train data, with the rise of random-label data, the model keeps 10\% test accuracy but reaches lower train accuracy, which renders a larger generalization gap. This phenomenon is also captured by IIW.
\subsection{Information in weights w.r.t. Batch Size} \label{sec:exp_info_batch_size}
We also consider how batch size influences IIW and the generalization gap. Recent efforts on bounding $I(\mathbf{w};S)$ of iterative algorithms (e.g., SGD and SGLD) \citep{mou2018generalization,pensia2018generalization} imply that the variance of gradients is a crucial factor. In the extreme case where the batch size equals the full sample size, the gradient variance is zero, and the model is prone to over-fitting according to empirical observations. When the batch size equals one, the variance becomes tremendously large.
We conjecture there is an optimal batch size that reaches the minimum generalization gap, in other words, the minimum IIW. This conjecture is based on our empirical findings, displayed in Fig. \ref{fig:exp_info_batch_size}, where we test IIW on models with varying batch sizes. Each model is updated with the same total number of iterations and the same learning rate. We identify that when the batch size is 16, the model reaches the best test accuracy and the smallest generalization gap, which means the optimal batch size should fall into (4, 16) or (16, 64). On the right, the model reaches the minimum IIW when the batch size is 16.
\subsection{Bayesian Inference with Varying Energy Functions}\label{sec:exp_energy_function}
To confirm the superiority of the proposed PIB in \S \ref{sec:bayes_inference_pib}, we compare it with vanilla SGD and two widely used regularizations: $\ell_2$-norm and dropout. We train a large VGG network \citep{simonyan2014very} on four open datasets: CIFAR10/100 \citep{krizhevsky2009learning}, STL10 \citep{coates2011analysis}, and SVHN \citep{netzer2011reading}, as shown in Table \ref{tab:pib_performance}, where we find PIB consistently outperforms the baselines. We credit the improvement to the explicit consideration of information regularization during training, which forces the model to \emph{forget} the training dataset to regularize the generalization gap. This is verified by Fig. \ref{fig:iiw_vgg}, where PIB restricts IIW to the lowest level. Please refer to Appendix \ref{appx:exp_protocol} for experimental setups.
\begin{table}[t]
\centering
\caption{Test performance of the proposed PIB algorithm compared with two other common regularization techniques: $\ell_2$-norm and dropout, on VGG-net \citep{simonyan2014very}. The 95\% confidence intervals are shown in parentheses. Best values are in bold.}
\renewcommand\arraystretch{1.2}
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Test ACC (\%)} & \textbf{CIFAR10} & \textbf{CIFAR100} & \textbf{STL10} & \textbf{SVHN} \\
\hline
vanilla SGD & 77.03(0.57) & 52.07(0.44) & 54.31(0.65) & 93.57(0.67) \\
SGD+$\ell_2$-norm & 77.13(0.53) & 50.84(0.71) & 55.30(0.68) & 93.60(0.68) \\
SGD+dropout & 78.95(0.60) & 52.34(0.66) & 56.35(0.78) & 93.61(0.76) \\
SGD+PIB & \textbf{80.19(0.42)} & \textbf{56.47(0.62)} & \textbf{58.83(0.75)} & \textbf{93.88(0.88)}\\
\hline
\end{tabular}%
\label{tab:pib_performance}%
\vspace{-1em}
\end{table}%
\subsection{Summary of Experiments}\label{sec:exp_summary}
We made the following observations from our experiments:
\begin{enumerate}[leftmargin=*, itemsep=0pt, labelsep=5pt]
\item We can clearly identify the fitting-compression phase transition during training through our new information measure, i.e., information stored in weights (IIW). Unlike the representation-based information measure $I(X;Z)$, IIW applies to various activation functions including \texttt{ReLU}, \texttt{sigmoid}, \texttt{tanh}, and \texttt{linear}.
\item We further identify that the phase transition applies to deeper and wider architectures. More importantly, deeper models are shown to fit and compress faster than shallow models.
\item Unlike the $\ell_2$-norm of weights, which rises with wider models, IIW better reflects the true model complexity and its generalization gap.
\item IIW can explain the performance drop w.r.t. the degree of label noise. Also, IIW can even identify the generalization gap for models learned from random labels.
\item There might exist an optimal batch size for the minimum generalization gap, which is empirically demonstrated by our experiments.
\item Adopting SGLD based on the energy function derived from the PAC-Bayes IB enables effective inference of the optimal posterior of NNs. This works for practical large networks in the literature.
\end{enumerate}
\section{Conclusion} \label{sec:exp_chap_summary}
In this paper, we proposed PAC-Bayes information bottleneck and the corresponding algorithm for measuring information stored in weights of NNs and training NNs with information principled regularization. Empirical results show the universality of our information measure on explaining NNs, which sheds light on understanding NNs through information bottleneck. We aim to further investigate its performance and develop this into practical NNs for production in the future.
\section{Introduction}
\label{sec:intro}
Diffusion models have shown impressive performance both as generative models themselves~\cite{song2020score,dhariwal2021diffusion}, and also as unsupervised inverse problem solvers~\cite{song2020score,choi2021ilvr,chung2022come,kawar2022denoising} that do not require problem-specific training.
Specifically, given a pre-trained unconditional score function (i.e. denoiser), solving the reverse stochastic differential equation (SDE) numerically would amount to sampling from the data generating distribution~\cite{song2020score}.
For many different inverse problems (e.g. super-resolution~\cite{choi2021ilvr,chung2022come}, inpainting~\cite{song2020score,chung2022come}, compressed-sensing MRI (CS-MRI)~\cite{song2022solving,chung2022come}, sparse view CT (SV-CT)~\cite{song2022solving}, etc.), it was shown that simple incorporation of the measurement process produces satisfactory conditional samples, even when the model was not trained for the specific problem.
Nevertheless, for certain problems (e.g. inpainting), currently used algorithms often produce unsatisfactory results when implemented naively (e.g. boundary artifacts, as shown in \cref{fig:method} (b)).
The authors in \cite{lugmayr2022repaint} showed that in order to produce high quality reconstructions, one needs to iterate back and forth between the noising and the denoising steps more than 10 times {\em per iteration}. These iterations are computationally demanding and should be avoided, considering that diffusion models are already slow to sample from even without them.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{./figs/cover.jpg}
\caption{Visual schematic of the MCG correction step. (a) \textcircled{\raisebox{-0.9pt}{1}} Unconditional reverse diffusion generates ${\boldsymbol x}_i$; \textcircled{\raisebox{-0.9pt}{2}} $Q_i$ maps the noisy ${\boldsymbol x}_i$ to generate $\hat{{\boldsymbol x}}_0$; \textcircled{\raisebox{-0.9pt}{3}} Manifold Constrained Gradient (MCG) $\frac{\partial}{\partial{\boldsymbol x}_i}\|{\boldsymbol W}({\boldsymbol y} - {\boldsymbol H}\hat{{\boldsymbol x}}_0)\|_2^2$ is applied to fix the iteration on manifold; \textcircled{\raisebox{-0.9pt}{4}} Takes the orthogonal complement; \textcircled{\raisebox{-0.9pt}{5}} Samples from $p({\boldsymbol y}_i|{\boldsymbol y})$, then combines ${\boldsymbol A}{\boldsymbol x}'_{i-1}$ and ${\boldsymbol y}_i$. (b) Representative results of inpainting, compared with score-SDE~\cite{song2020score}. Reconstructions with score-SDE produce incoherent results, while our method produces high fidelity solutions.}
\label{fig:method}
\end{figure}
Recently, another type of score-based approach, called Noise2Score \cite{kim2021noisescore}, was proposed for image denoising without clean references.
In contrast to diffusion models, Noise2Score is computationally efficient, as it is a deterministic single-step approach that does not involve any stochastic sampling. Unfortunately, the performance of Noise2Score is inferior to that of diffusion models, which rely on iterative denoising. Furthermore, Noise2Score has never been applied to inverse problem applications beyond image denoising.
Given the two types of seemingly different approaches that rely on the score function, one may wonder what the relation between the two is, and whether there is a way to combine them synergistically. Accordingly, one of the main contributions of this work is to first demonstrate that the key idea of Noise2Score leads to what we call the {\em manifold constraint}, which can be combined in a complementary fashion
with existing diffusion models to significantly improve reconstruction performance across various problems, despite the simplicity of its implementation.
Moreover, we theoretically prove that the correction term from the manifold constraint enforces the sample path to stay on the plane tangent to the data manifold\footnote{We coin our method \textbf{M}anifold \textbf{C}onstrained \textbf{G}radient (MCG).}, so by combining with the reverse diffusion step, the solution becomes more stable and accurate.
\section{Related Works}
\label{sec:background}
\subsection{Diffusion Models}
\paragraph{Continuous Form}
For a continuous diffusion process ${\boldsymbol x}(t) \in {\mathbb R}^n,\,t \in [0, 1]$, we set ${\boldsymbol x}(0) \sim p_0({\boldsymbol x}) = p_{data}$, where $p_{data}$ represents the data distribution of interest, and ${\boldsymbol x}(1) \sim p_1({\boldsymbol x})$, with $p_1({\boldsymbol x})$ approximating a spherical Gaussian distribution containing no information about the data.
Here, the forward noising process is defined with the following It$\hat{\text{o}}$ stochastic differential equation (SDE)~\cite{song2020score}:
\begin{equation}
\label{eq:forward-sde}
d{\boldsymbol x} = \bar{\boldsymbol f}({\boldsymbol x}, t)dt + \bar g(t)d{\boldsymbol w},
\end{equation}
with $\bar{\boldsymbol f}: {\mathbb R}^n \mapsto {\mathbb R}^n$ defining the linear drift function, $\bar g(t):{\mathbb R} \mapsto {\mathbb R}$ defining a scalar diffusion coefficient, and ${\boldsymbol w} \in {\mathbb R}^n$ denoting the standard $n-$dimensional Wiener process. Choosing $\bar{\boldsymbol f} = 0,\,\bar g(t) = \sqrt{d[\sigma^2(t)]/dt}$ leads to the Brownian motion, called the variance exploding SDE (VE-SDE), and choosing $\bar{\boldsymbol f}({\boldsymbol x},t) = -\beta(t){\boldsymbol x}/2,\, \bar g(t) = \sqrt{\beta(t)}$ leads to the Ornstein-Uhlenbeck process~\cite{sarkka2019applied}, where the mean decays to 0 as $t \rightarrow 1$, coined the variance preserving SDE (VP-SDE). The forward SDE in \eqref{eq:forward-sde} is coupled with the following reverse SDE by Anderson's theorem~\cite{anderson1982reverse,song2020score}
\begin{align}\label{eq:reverse_SDE}
d{\boldsymbol x} &= [\bar{\boldsymbol f}({\boldsymbol x}, t) - \bar g(t)^2 \nabla_{\boldsymbol x} \log p_t({\boldsymbol x})]dt + \bar g(t) d\bar{{\boldsymbol w}},
\end{align}
with $dt$ denoting the infinitesimal negative time step, and $\bar{{\boldsymbol w}}$ defining the standard Wiener process running backward in time. Note that the reverse SDE defines the generative process through the score function $\nabla_{\boldsymbol x} p_t({\boldsymbol x})$, which is
typically trained by minimizing the following score-matching objective
\begin{equation}
\label{eq:sm}
\min_\theta {\mathbb E}_{t \sim U(\varepsilon, 1), {\boldsymbol x}(t) \sim p({\boldsymbol x}(t))}\left[\|{\boldsymbol s}_\theta({\boldsymbol x}_t, t) - \nabla_{{\boldsymbol x}_t}\log p({\boldsymbol x}(t))\|_2^2\right].
\end{equation}
Once the parameter $\theta^*$ for the score function is estimated, one can replace $\nabla_{{\boldsymbol x}_t} \log p({\boldsymbol x}(t))$ with $s_{\theta^*}({\boldsymbol x}_t, t)$ to solve the reverse SDE.
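In practice, the intractable marginal score $\nabla_{{\boldsymbol x}_t}\log p({\boldsymbol x}(t))$ in \eqref{eq:sm} is replaced by the tractable conditional (denoising) score $\nabla_{{\boldsymbol x}_t}\log p({\boldsymbol x}(t)|{\boldsymbol x}(0))$, which yields the same minimizer. The sketch below (a toy 1D Gaussian data distribution with a linear score model $s(x) = c\,x$, for which the optimal $c$ has a closed form; all constants are arbitrary assumptions) illustrates that denoising score matching recovers the true marginal score.

```python
import numpy as np

rng = np.random.default_rng(4)
N, sigma0, a, b = 200_000, 2.0, 0.8, 0.6   # forward perturbation x_t = a x_0 + b z

x0 = rng.normal(0.0, sigma0, size=N)       # data ~ N(0, sigma0^2)
z = rng.normal(size=N)
xt = a * x0 + b * z                        # forward-diffused samples

# Denoising score matching with s(x) = c * x: minimize E[(c*xt - (-z/b))^2];
# the least-squares solution for c is available in closed form
c_hat = -np.mean(xt * z) / (b * np.mean(xt ** 2))

# true marginal score of N(0, a^2 sigma0^2 + b^2) is -x / (a^2 sigma0^2 + b^2)
true_score_coef = -1.0 / (a ** 2 * sigma0 ** 2 + b ** 2)
print(c_hat, true_score_coef)              # the two coefficients agree
```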
\paragraph{Discrete Form}
Due to the linearity of $\bar{\boldsymbol f}$ and $\bar g$, the forward diffusion step can be implemented with a simple reparameterization trick~\cite{kingma2013auto}. Namely, the general form of the forward diffusion is
\begin{align}
\label{eq:forward_discrete}
{\boldsymbol x}_i = a_i {\boldsymbol x}_0 + b_i {\boldsymbol z},\, \quad {\boldsymbol z} \sim {\mathcal N}(0, {\boldsymbol I}),
\end{align}
where we have replaced the continuous index $t \in [0, 1]$ with the discrete index $i \in {\mathbb N}$.
On the other hand, the discrete reverse diffusion step is implemented as
\begin{align}\label{eq:reverse_discrete}
{\boldsymbol x}_{i-1} = {\boldsymbol f}({\boldsymbol x}_i,{\boldsymbol s}_\theta) + g({\boldsymbol x}_i){\boldsymbol z},\, \quad {\boldsymbol z} \sim {\mathcal N}(0, {\boldsymbol I}),
\end{align}
where we have replaced the ground truth score function with the trained one.
We detail the choice of $a_i, b_i, {\boldsymbol f}, g$ in \cref{sec:app-sde}.
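As a concrete illustration, the forward reparameterization in \eqref{eq:forward_discrete} can be sketched in a few lines. The linear noise schedule below is a hypothetical DDPM-style choice for the VP-SDE, not necessarily the exact $a_i, b_i$ used in the paper's appendix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear beta schedule (DDPM-style); the paper defers its
# exact choice of a_i, b_i to the appendix, so these are an assumption.
N = 1000
betas = np.linspace(1e-4, 0.02, N)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def forward_diffuse(x0, i):
    """One-shot forward diffusion x_i = a_i x_0 + b_i z."""
    a_i = np.sqrt(alpha_bars[i])
    b_i = np.sqrt(1.0 - alpha_bars[i])
    z = rng.standard_normal(x0.shape)
    return a_i * x0 + b_i * z

x0 = np.zeros(16)                    # toy "clean" signal
x_mid = forward_diffuse(x0, N // 2)  # partially noised
x_end = forward_diffuse(x0, N - 1)   # close to pure Gaussian noise
```

Here $a_i=\sqrt{\bar\alpha_i}$ and $b_i=\sqrt{1-\bar\alpha_i}$, so by the final step the sample is approximately pure Gaussian noise.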
\subsection{Conditional Diffusion for Inverse problems}
The main problem of our interest in this paper is the inverse problem of retrieving the unknown ${\boldsymbol x} \in {\mathbb R}^n$ from a measurement ${\boldsymbol y}$:
\begin{align}\label{eq:forward}
{\boldsymbol y} = {\boldsymbol H}{\boldsymbol x}+\bm{\epsilon},\, \quad {\boldsymbol y} \in {\mathbb R}^m, {\boldsymbol H} \in {\mathbb R}^{m \times n},
\end{align}
where $\bm{\epsilon} \in {\mathbb R}^{m}$ is the noise in the measurement.
Accordingly, for inverse problems,
our goal is to generate samples from the conditional distribution with respect to the measurement ${\boldsymbol y}$, i.e. $p({\boldsymbol x}|{\boldsymbol y})$. The score function $\nabla_{\boldsymbol x} \log p_t({\boldsymbol x})$
in \eqref{eq:reverse_SDE} should then be replaced by the conditional score $ \nabla_{\boldsymbol x} \log p_t({\boldsymbol x}|{\boldsymbol y})$. Unfortunately, this strictly
restricts the generalization capability of the neural network, since the conditional score must be retrained whenever the conditions change.
To address this, recent conditional diffusion models \cite{kadkhodaie2020solving,song2020score,choi2021ilvr,chung2022come} utilize the unconditional
score function $\nabla_{\boldsymbol x} \log p_t({\boldsymbol x})$ but rely on the measurement constraint to impose the conditions.
Specifically, one can apply the following:
\begin{align}
{\boldsymbol x}'_{i-1} &= {\boldsymbol f}({\boldsymbol x}_i,{\boldsymbol s}_\theta) + g({\boldsymbol x}_i){\boldsymbol z},\, \quad {\boldsymbol z} \sim {\mathcal N}(0, {\boldsymbol I}), \label{eq:reverse_discrete_ip}\\
{\boldsymbol x}_{i-1} &= {\boldsymbol A} {\boldsymbol x}'_{i-1} + {\boldsymbol b}_i \label{eq:nem},
\end{align}
where
${\boldsymbol A},{\boldsymbol b}_i$ are functions of ${\boldsymbol H},{\boldsymbol y}_0,$ and ${\boldsymbol x}_0$.
Note that \eqref{eq:reverse_discrete_ip} is identical to the unconditional reverse diffusion step in \eqref{eq:reverse_discrete}, whereas
\eqref{eq:nem} effectively imposes the condition. It was shown in~\cite{chung2022come} that any general contraction mapping (e.g. projection onto convex sets, gradient step) may be utilized as \eqref{eq:nem}
to impose the constraint.
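As a toy instance of the consistency map in \eqref{eq:nem}, consider inpainting with a pixel-selection operator: choosing ${\boldsymbol A} = {\boldsymbol I} - {\boldsymbol P}^T{\boldsymbol P}$ and ${\boldsymbol b} = {\boldsymbol P}^T{\boldsymbol y}$ gives an idempotent (hence non-expansive) affine projection. The indices and values below are made up for illustration.

```python
import numpy as np

# Hypothetical setup: n = 6 pixels, m = 3 of them observed at indices idx.
n, m = 6, 3
idx = np.array([0, 2, 5])
P = np.zeros((m, n))
P[np.arange(m), idx] = 1.0          # pixel-selection matrix

x_true = np.arange(1.0, 7.0)
y = P @ x_true                      # noiseless measurement

A = np.eye(n) - P.T @ P             # keep the unobserved coordinates
b = P.T @ y                         # overwrite the observed coordinates

x_prime = np.full(n, -1.0)          # stand-in for the diffusion output x'_{i-1}
x_next = A @ x_prime + b            # measurement-consistency step
```

The observed entries of `x_next` match the measurement exactly, while the unobserved entries are carried over from the diffusion output; since $A$ is idempotent, the map is a projection and therefore a contraction.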
\subsection{Noise2Score}
In the score matching problem \eqref{eq:sm},
$\nabla_{{\boldsymbol x}_i}\log p({\boldsymbol x}_i)$ is often replaced by $\nabla_{{\boldsymbol x}_i}\log p({\boldsymbol x}_i|{\boldsymbol x}_0)$,
a procedure often called denoising score matching (DSM)~\cite{song2020score,ho2020denoising}:
\begin{equation}
\label{eq:dsm}
\min_\theta {\mathbb E}_{i, {\boldsymbol x}_0, {\boldsymbol x}_i}\left[\|{\boldsymbol s}_\theta({\boldsymbol x}_i, i) +
({\boldsymbol x}_i - a_i{\boldsymbol x}_0) / b_i^2\|_2^2\right],
\end{equation}
where we use \eqref{eq:forward_discrete} to obtain the closed-form Gaussian distribution $p({\boldsymbol x}_i|{\boldsymbol x}_0)$.
For the estimated parameter $\theta^*$ and
a fixed sample ${\boldsymbol x}_i$,
it is straightforward to see that \eqref{eq:dsm} leads to
the one-step estimate of the clean image ${\boldsymbol x}_0$:
\begin{equation}
\label{eq:hatx0}
\hat{\boldsymbol x}_0 = ({\boldsymbol x}_i + b_i^2 {\boldsymbol s}_\theta({\boldsymbol x}_i,i)) / a_i,
\end{equation}
which is known as the denoising autoencoder \cite{ho2020denoising,song2020denoising}.
In fact, the estimate in \eqref{eq:hatx0} is shown to be Bayes optimal when the noisy measurement ${\boldsymbol x}_i$ is given by \eqref{eq:forward_discrete} \cite{stein1981estimation}.
Furthermore, this formula can be generalized to other noise models such as Poisson, Gamma, etc. using Tweedie's formula \cite{efron2011tweedie, robbins1992empirical}. The resulting denoising formula can all be represented in terms of the score function so that the algorithm is called the Noise2Score \cite{kim2021noisescore}.
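The Bayes optimality of \eqref{eq:hatx0} can be checked numerically on a toy Gaussian prior, where the marginal score is available in closed form; all numbers below are arbitrary illustrations.

```python
import numpy as np

# Toy Gaussian prior with a closed-form marginal score:
# x0 ~ N(mu0, s0^2), x_i = a*x0 + b*z  =>  x_i ~ N(a*mu0, a^2 s0^2 + b^2).
mu0, s0 = 2.0, 0.5
a, b = 0.8, 0.3
v = a**2 * s0**2 + b**2

def score(x):
    return -(x - a * mu0) / v        # exact marginal score of p(x_i)

x_i = np.array([1.0, 2.0, 3.0])

# Tweedie / Noise2Score one-step denoising estimate
x0_tweedie = (x_i + b**2 * score(x_i)) / a

# Bayes-optimal (posterior mean) estimate for this Gaussian model
x0_posterior_mean = mu0 + (a * s0**2 / v) * (x_i - a * mu0)
```

The two estimates coincide exactly, which is the content of the Bayes-optimality claim in the Gaussian case.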
\section{Conditional Diffusion using Manifold Constraints}
\subsection{Manifold Constrained Score}
Although our original motivation for using the measurement constraint step in \eqref{eq:nem} was to utilize the unconditionally trained score function in the reverse diffusion step in \eqref{eq:reverse_discrete_ip}, there is room for imposing additional constraints while still using the unconditionally trained score function.
Specifically, for a general conditioning ${\mathcal C}$, the Bayes rule $p({\boldsymbol x}|{\mathcal C})=p({\mathcal C}|{\boldsymbol x})p({\boldsymbol x})/p({\mathcal C})$ gives us
\begin{align}
\log p({\boldsymbol x}|{\mathcal C}) = \log p({\mathcal C}|{\boldsymbol x})+\log p({\boldsymbol x})-\log p({\mathcal C})
\end{align}
which leads to
\begin{align}
\label{eq:cond_score}
\nabla_{\boldsymbol x} \log p({\boldsymbol x}|{\mathcal C}) = \nabla_{{\boldsymbol x}} \log p({\boldsymbol x})+ \nabla_{\boldsymbol x}\log p({\mathcal C}|{\boldsymbol x}),
\end{align}
since $p({\mathcal C})$ does not depend on ${\boldsymbol x}$. Hence, the score function in the reverse SDE in \eqref{eq:reverse_SDE} can be replaced by \eqref{eq:cond_score}. Furthermore, the new conditioning ${\mathcal C}$ is different from the measurement constraint in \eqref{eq:nem},
so we can still use the measurement consistency step in \eqref{eq:nem} in addition
to the diffusion step in \eqref{eq:reverse_discrete_ip} with the modified score in \eqref{eq:cond_score}.
One of the important contributions of this paper is to reveal that
the Bayes optimal denoising step in \eqref{eq:hatx0} from Noise2Score leads to a preferred condition both empirically and theoretically. Specifically, we define the set and the constraint
\begin{align}
{\mathcal C} = \{({\boldsymbol x},{\boldsymbol y})| {\boldsymbol x} \in {\mathcal X}_0\},\quad&\mbox{where}\quad {\mathcal X}_0 = \{{\boldsymbol x} \in {\mathbb R}^n~|~{\boldsymbol x} = ( {\boldsymbol x}+b(t)^2 {\boldsymbol s}_\theta({\boldsymbol x},t))/a(t) \}
\end{align}
which we call the {\em manifold constraint}; the associated correction term is the manifold constrained gradient (MCG).
Under the manifold constraint, if the noise $\bm{\epsilon}$ in \eqref{eq:forward} is Gaussian, we have
\begin{align}
\log p({\mathcal C}|{\boldsymbol x}) = -\alpha\|{\boldsymbol W}({\boldsymbol y}- {\boldsymbol H}\hat{\boldsymbol x}_0)\|_2^2,\quad \hat{\boldsymbol x}_0 := ( {\boldsymbol x}+b(t)^2 {\boldsymbol s}_\theta({\boldsymbol x},t))/a(t),
\end{align}
where $\alpha$ and ${\boldsymbol W}$ depend on the noise covariance. Accordingly, the discrete reverse diffusion under the additional manifold constraint can be represented by
\begin{align}
{\boldsymbol x}'_{i-1} &= {\boldsymbol f}({\boldsymbol x}_i,{\boldsymbol s}_\theta) - \alpha\frac{\partial}{\partial{\boldsymbol x}_i} \|{\boldsymbol W}({\boldsymbol y}_0 - {\boldsymbol H}\hat{\boldsymbol x}_0)\|_2^2 + g({\boldsymbol x}_i){\boldsymbol z},\,\quad {\boldsymbol z} \sim {\mathcal N}(0, {\boldsymbol I}), \label{eq:reverse_discrete_ip_mcg}\\
{\boldsymbol x}_{i-1} &= {\boldsymbol A} {\boldsymbol x}'_{i-1} + {\boldsymbol b}_i \label{eq:nem_mcg}.
\end{align}
We illustrate our scheme visually in Fig.~\ref{fig:method} (a), specifically for the task of image inpainting. The additional step leads to a dramatic performance boost, as can be seen in Fig.~\ref{fig:method} (b).
In the following, we articulate how the proposed MCG can be incorporated into each application, and study the theoretical properties of the method. Further algorithmic details are presented in \cref{sec:app-algo}.
We note that the authors of~\cite{ho2022video} proposed a similar gradient method for the application of temporal imputation and super-resolution. Combining \eqref{eq:reverse_discrete_ip_mcg} with \eqref{eq:nem_mcg} recovers a comparable update, and hence our method can be seen as a generalization to arbitrary linear inverse problems.
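A sketch of the MCG correction in \eqref{eq:reverse_discrete_ip_mcg}, reusing the toy Gaussian prior so that $\hat{\boldsymbol x}_0$ (and hence the gradient) is available in closed form. The matrices ${\boldsymbol H}$, ${\boldsymbol y}$, the step size $\alpha$, the identity ${\boldsymbol W}$, and the omission of the denoising drift ${\boldsymbol f}$ and noise term are all simplifying assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2
mu0, s0, a_i, b_i = 0.0, 1.0, 0.9, 0.4
v = a_i**2 * s0**2 + b_i**2          # marginal variance of x_i

H = rng.standard_normal((m, n))      # made-up measurement operator
y = rng.standard_normal(m)           # made-up measurement
alpha = 0.01                         # made-up MCG step size

def hat_x0(x):
    score = -(x - a_i * mu0) / v     # exact score of the Gaussian marginal
    return (x + b_i**2 * score) / a_i  # Tweedie estimate of x_0

def mcg_loss(x):
    r = y - H @ hat_x0(x)
    return float(r @ r)

def mcg_grad(x):
    # hat_x0 is linear here, hat_x0(x) = c*x, so the chain rule is explicit
    c = (1.0 - b_i**2 / v) / a_i
    return -2.0 * c * H.T @ (y - H @ hat_x0(x))

x_i = rng.standard_normal(n)
x_prime = x_i - alpha * mcg_grad(x_i)  # MCG correction (drift/noise omitted)
```

The gradient is taken with respect to ${\boldsymbol x}_i$ through the denoiser $\hat{\boldsymbol x}_0({\boldsymbol x}_i)$, which is the key difference from a plain data-fidelity gradient on ${\boldsymbol x}_i$ itself.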
\subsection{Specific Applications}
\paragraph{Inpainting}
The forward model for inpainting is given as
\begin{align}
\label{eq:fm-inpaint}
{\boldsymbol y} = {\boldsymbol P}{\boldsymbol x} + \bm{\epsilon},\quad {\boldsymbol P} \in {\mathbb R}^{m \times n},
\end{align}
where ${\boldsymbol P} \in \{0,1\}^{m \times n}$ is the matrix whose rows are standard coordinate vectors indicating the measured indices.
For the steps in \eqref{eq:reverse_discrete_ip_mcg}, \eqref{eq:nem_mcg}, we choose the following
\begin{align}
\label{eq:choice-inpaint}
{\boldsymbol W} = {\boldsymbol I},\quad {\boldsymbol A} = {\boldsymbol I} - {\boldsymbol P}^T{\boldsymbol P},\quad {\boldsymbol b}_i = {\boldsymbol P}^T{\boldsymbol y}_i,\,\quad {\boldsymbol y}_i \sim q({\boldsymbol y}_i|{\boldsymbol y}) := {\mathcal N}({\boldsymbol y}_i|a_i{\boldsymbol y}, b_i^2 {\boldsymbol I}).
\end{align}
Specifically, ${\boldsymbol A}$ projects onto the orthogonal complement of the measurement subspace, so that the measurement subspace is corrected by ${\boldsymbol y}_i$, while the orthogonal components are carried over from ${\boldsymbol x}'_{i-1}$. Note that we use ${\boldsymbol y}_i$ sampled from ${\boldsymbol y}$ to match the noise level of the current estimate.
\paragraph{Colorization}
The forward model for colorization is specified as
\begin{equation}
\label{eq:fm-color}
{\boldsymbol y} = {\boldsymbol C}{\boldsymbol x} + \bm{\epsilon} := {\boldsymbol P}{\boldsymbol M}{\boldsymbol x} + \bm{\epsilon},\quad {\boldsymbol P} \in {\mathbb R}^{m \times n},\quad {\boldsymbol M} \in {\mathbb R}^{n \times n},
\end{equation}
where ${\boldsymbol P}$ is the same type of selection matrix used in inpainting, and ${\boldsymbol M}$ is an orthogonal matrix that couples the RGB colormaps; ${\boldsymbol M}^T$ de-couples the channels back to the original space. In other words, one can view colorization as performing imputation in some spectral space. Subsequently, for our colorization method we choose
\begin{equation}
\label{eq:choice-color}
{\boldsymbol W} = {\boldsymbol C}^T,\quad {\boldsymbol A} = {\boldsymbol I} - {\boldsymbol C}^T{\boldsymbol C},\quad {\boldsymbol b}_i = {\boldsymbol C}^T{\boldsymbol y}_i,\ \quad {\boldsymbol y}_i \sim q({\boldsymbol y}_i|{\boldsymbol y}).
\end{equation}
Again, the rows of our forward measurement matrix are orthonormal, and we choose ${\boldsymbol A}$ such that we only affect the orthogonal complement of the measurement subspace.
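This structure can be verified numerically: with an orthogonal channel coupling ${\boldsymbol M}$ (a random orthogonal matrix here, not necessarily the one used in practice), ${\boldsymbol C} = {\boldsymbol P}{\boldsymbol M}$ has orthonormal rows, and the update ${\boldsymbol A}{\boldsymbol x} + {\boldsymbol C}^T{\boldsymbol y}$ exactly restores measurement consistency.

```python
import numpy as np

rng = np.random.default_rng(2)
M_chan, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal 3x3
pix = 4                                                # tiny 4-pixel "image"
M = np.kron(M_chan, np.eye(pix))                       # couple channels per pixel
P = np.kron(np.eye(1, 3), np.eye(pix))                 # keep the 1st coupled channel
C = P @ M                                              # colorization operator C = P M

x = rng.standard_normal(3 * pix)
y = C @ x                                              # "grayscale" measurement

A = np.eye(3 * pix) - C.T @ C                          # orthogonal-complement projector
x_corrected = A @ rng.standard_normal(3 * pix) + C.T @ y
```

Since $CC^T = PP^T = I_m$, the corrected iterate satisfies $C{\boldsymbol x}_{i-1} = {\boldsymbol y}$ exactly, regardless of the diffusion output it was applied to.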
\paragraph{CT Reconstruction}
For the case of CT reconstruction, the forward model reads
\begin{equation}
\label{eq:fm-ct}
{\boldsymbol y} = {\boldsymbol R}{\boldsymbol x} + \bm{\epsilon},\quad {\boldsymbol R} \in {\mathbb R}^{m \times n},
\end{equation}
where ${\boldsymbol R}$ is the discretized Radon transform~\cite{buzug2011computed} that measures the projection images from different angles. Note that for CT applications, ${\boldsymbol R}^T$ corresponds to performing backprojection (BP), and ${\boldsymbol R}^{\dagger}$ corresponds to performing filtered backprojection (FBP). We choose
\begin{equation}
\label{eq:choice-ct}
{\boldsymbol W} = {\boldsymbol R}^\dagger,\quad {\boldsymbol A} = {\boldsymbol I} - {\boldsymbol R}^T({\boldsymbol R}\Rb^T)^\dagger {\boldsymbol R},\quad {\boldsymbol b}_i = {\boldsymbol R}^T({\boldsymbol R}\Rb^T)^\dagger{\boldsymbol y}_i,\,\quad {\boldsymbol y}_i \sim q({\boldsymbol y}_i|{\boldsymbol y}),
\end{equation}
where the choice of ${\boldsymbol A}$ reflects that the Radon transform is not orthogonal, and we need the term $({\boldsymbol R}\Rb^T)^\dagger$ as a term analogous to the filtering step. Indeed, this form of update is known as the algebraic reconstruction technique (ART), a classic technique in the context of CT~\cite{gordon1970algebraic}. We note that this choice is different from what was proposed in \cite{song2022solving}, where the authors repeatedly apply projection/FBP by explicitly replacing the sinogram in the measured locations. From our experiments, we find that repeated application of FBP is highly numerically unstable, often leading to overflow. This is especially the case when we have limited resources for training data (we use $\sim$4k images, whereas \cite{song2022solving} uses $\sim$50k), as we further show in \cref{sec:exp}.
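The ART-style choice in \eqref{eq:choice-ct} can be sanity-checked on a random full-row-rank stand-in for ${\boldsymbol R}$ (the real discretized Radon transform is far larger): the update is an affine projection that restores ${\boldsymbol R}{\boldsymbol x} = {\boldsymbol y}$ exactly.

```python
import numpy as np

# A = I - R^T (R R^T)^dagger R,  b = R^T (R R^T)^dagger y.
# R below is a made-up full-row-rank surrogate for the Radon transform.
rng = np.random.default_rng(3)
m, n = 5, 12
R = rng.standard_normal((m, n))
y = rng.standard_normal(m)

gram_pinv = np.linalg.pinv(R @ R.T)
A = np.eye(n) - R.T @ gram_pinv @ R
b = R.T @ gram_pinv @ y

x_prime = rng.standard_normal(n)     # stand-in for the diffusion output
x_next = A @ x_prime + b             # ART-style consistency step
```

Because ${\boldsymbol R}{\boldsymbol A} = 0$ and ${\boldsymbol R}{\boldsymbol b} = {\boldsymbol y}$ when ${\boldsymbol R}$ has full row rank, the step enforces exact data consistency while leaving the null-space components to the diffusion model.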
\section{Theoretical Findings}
\label{sec:theory}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.45\textwidth]{figs/manifold.png}
\caption{Geometric view of the MCG correction. Each curve represents a manifold of (noisy) data, when $a_i = 1$. By Proposition~\ref{prop:noisy}, noisy data are concentrated on the manifold that is away from the data manifold by $b_i\sqrt{n-l}$. In this viewpoint, the backward (or forward) step can be considered as a transition from ${\mathcal M}_i$ to ${\mathcal M}_{i-1}$ (or from ${\mathcal M}_{i-1}$ to ${\mathcal M}_i$, respectively). Arrows show the directions of the conventional projection onto convex sets (POCS) step (green arrow) and the MCG step (red arrow), as predicted by Theorem~\ref{thm:MCG}. Since the conventional POCS step may leave the manifold, the accuracy of sampling cannot be
guaranteed, whereas the MCG step keeps the samples on the manifold.}
\label{fig:theory}
\end{figure*}
In this section, we theoretically support the effectiveness of the proposed algorithm by showing the problematic behavior of earlier algorithms and how the proposed algorithm resolves it.
We defer all proofs to the supplementary section.
To begin with, we borrow a geometric viewpoint of the data manifold.
\textbf{Notation} For a scalar $a$, points ${\boldsymbol x},{\boldsymbol y}$, and a set $A$, we use the following notations:
$aA:= \{a {\boldsymbol x}: {\boldsymbol x} \in A\}$;
$d({\boldsymbol x},A) := \inf_{{\boldsymbol y} \in A} \|{\boldsymbol x}-{\boldsymbol y}\|_2$;
$B_r(A):= \{{\boldsymbol x}: d({\boldsymbol x}, A) < r\}$; $T_{\boldsymbol x} {\mathcal M}$: the tangent space of a manifold ${\mathcal M}$ at ${\boldsymbol x}$; ${\boldsymbol J}_f$: the Jacobian matrix of a vector-valued function $f$.
To develop the theory, we need an assumption on the data distribution, called the manifold assumption, which is widely exploited in the machine learning literature.
\begin{restatable}[Manifold assumption]{assumption}{manifoldassumption}
Suppose ${\mathcal M}\subset {\mathbb R}^n$ is the set of all data points, which we call the data manifold. Then, locally around any ${\boldsymbol x}_0 \in {\mathcal M}$, the manifold coincides with its tangent space of dimension $l\ll n$:
$$\mathcal{M} \cap B_R({\boldsymbol x}_0) = T_{{\boldsymbol x}_0} {\mathcal M} \cap B_R({\boldsymbol x}_0) \text{ and } T_{{\boldsymbol x}_0} {\mathcal M} \cong {\mathbb R}^l.$$
Moreover, the data distribution $p_0$ is the uniform distribution on the data manifold ${\mathcal M}$.
\end{restatable}
Under the assumption, the following proposition shows how the data perturbed by noise lies in the ambient space. Fig.~\ref{fig:theory} illustrates the proposition pictorially.
\begin{restatable}[Concentration of noisy data]{proposition}{noisymanifold}
\label{prop:noisy}
Consider the distribution of noisy data $p_i({\boldsymbol x}_i) = \int p({\boldsymbol x}_i|{\boldsymbol x})p_0({\boldsymbol x}) d{\boldsymbol x}$, where $p({\boldsymbol x}_i|{\boldsymbol x}) = {\mathcal N}({\boldsymbol x}_i; a_i{\boldsymbol x}, b_i^2 {\boldsymbol I})$.
Then $p_i({\boldsymbol x}_i)$ is concentrated on the $(n-1)$-dimensional manifold ${\mathcal M}_i := \{ {\boldsymbol y} \in {\mathbb R}^n : d({\boldsymbol y}, a_i {\mathcal M}) = r_i := b_i \sqrt{n-l} \}$. Rigorously, $p_i(B_{\epsilon r_i}({\mathcal M}_i)) > 1 - \delta$.
\end{restatable}
\begin{remark}
\label{rmk:score}
We can infer from the proposition that the score function is trained only with data points concentrated on the noisy data manifolds. Therefore, applying the score function to points away from the noisy data manifold might cause inaccurate inference.
\end{remark}
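Proposition~\ref{prop:noisy} is easy to verify numerically in the simplest case of a $0$-dimensional manifold (a single point, $l=0$): noisy samples concentrate at distance $r_i = b_i\sqrt{n-l}$ from $a_i{\boldsymbol x}_0$. The dimensions and coefficients below are arbitrary.

```python
import numpy as np

# Concentration check for x_i = a_i x0 + b_i z with a point "manifold" (l = 0).
rng = np.random.default_rng(4)
n, l = 4096, 0
a_i, b_i = 0.9, 0.3
x0 = rng.standard_normal(n)

z = rng.standard_normal((200, n))
dists = np.linalg.norm((a_i * x0 + b_i * z) - a_i * x0, axis=1)

r_i = b_i * np.sqrt(n - l)               # predicted concentration radius
rel_dev = np.abs(dists - r_i) / r_i      # relative deviation per sample
```

In high dimension every sampled distance falls within a few percent of $r_i$, which is the shell-concentration phenomenon the proposition formalizes.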
\begin{restatable}[score function]{proposition}{score}
\label{prop:score}
Suppose ${\boldsymbol s}_\theta$ is the minimizer of \eqref{eq:dsm}.
Let $Q_i$ be the function that maps ${\boldsymbol x}_i$ to $\hat{{\boldsymbol x}}_0$ for each $i$, $$Q_i:{\mathbb R}^n \rightarrow {\mathbb R}^n, {\boldsymbol x}_i \mapsto \frac{1}{a_i} ({\boldsymbol x}_i + b_i^2 {\boldsymbol s}_\theta ({\boldsymbol x}_i,i) ).$$
Then, $Q_i({\boldsymbol x}_i) \in {\mathcal M}$ and ${\boldsymbol J}_{Q_i}^2 = {\boldsymbol J}_{Q_i} = {\boldsymbol J}_{Q_i}^T: {\mathbb R}^n \rightarrow T_{Q_i({\boldsymbol x}_i)}{\mathcal M}$. Intuitively, $Q_i$ is locally an orthogonal projection onto ${\mathcal M}$.
\end{restatable}
According to the proposition, the score function only concerns the normal direction of the data manifold.
In other words, the score function cannot discriminate two data points whose difference is tangent to the manifold.
In solving inverse problems, however, we need to discriminate data points in order to reconstruct the original signal, and this discrimination is achievable through measurement fidelity.
To recover the original signal, the measurement plays the role of correcting the tangent component near the data manifold.
Furthermore, with regard to \cref{rmk:score}, diffusion model-based inverse problem solvers should correct the tangent component without leaving the noisy data manifold.
The following theorem shows how existing algorithms and the proposed method are different in this regard.
\begin{restatable}[Manifold constrained gradient]{theorem}{Improved}
A correction by the manifold constrained gradient does not leave the data manifold. Formally,
\begin{align*}
\frac{\partial}{\partial{\boldsymbol x}_i} \|{\boldsymbol W}({\boldsymbol y} - {\boldsymbol H}\hat{\boldsymbol x}_0)\|_2^2 =-2 {\boldsymbol J}_{Q_i}^T {\boldsymbol H}^T {\boldsymbol W}^T {\boldsymbol W} ({\boldsymbol y} - {\boldsymbol H}\hat{\boldsymbol x}_0) \in T_{\hat{\boldsymbol x}_0}{\mathcal M},
\end{align*}
i.e., the gradient is the projection of the data fidelity term onto $T_{\hat{\boldsymbol x}_0}{\mathcal M}$.
\label{thm:MCG}
\end{restatable}
This theorem suggests that in diffusion models, the naive measurement fidelity step (without considering the data manifold) pushes the inference path off the manifold and might lead to inaccurate reconstruction.
On the other hand, our correction term from the manifold constraint guides the diffusion to stay on the data manifold, leading to better reconstruction.
Such geometric views are illustrated in Fig.~\ref{fig:theory}.
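Theorem~\ref{thm:MCG} can be illustrated on a linear toy manifold, where $Q_i$ is exactly the orthogonal projection ${\boldsymbol U}{\boldsymbol U}^T$ onto $\mathrm{span}({\boldsymbol U})$ (matching Proposition~\ref{prop:score}); the MCG gradient then has no component normal to the manifold. All matrices here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(5)
n, l, m = 8, 3, 4
U, _ = np.linalg.qr(rng.standard_normal((n, l)))   # basis of a linear "manifold"
H = rng.standard_normal((m, n))                    # made-up measurement operator
W = np.eye(m)                                      # identity weighting for simplicity
y = rng.standard_normal(m)

J_Q = U @ U.T                                      # Jacobian of Q (orthogonal projection)

def mcg_gradient(x):
    hat_x0 = J_Q @ x                               # Q_i(x) for the linear toy case
    return -2.0 * J_Q.T @ H.T @ W.T @ W @ (y - H @ hat_x0)

x = rng.standard_normal(n)
g = mcg_gradient(x)
normal_component = g - U @ (U.T @ g)               # part of g outside the tangent space
```

The normal component is zero up to floating-point error, so the correction moves points only along the tangent space, never off the manifold, as the theorem states.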
\section{Experiments}
\label{sec:exp}
\begin{figure}[t]
\centering
\includegraphics[width=1.0\textwidth]{./figs/results_inpaint_main_v4.jpg}
\caption{Inpainting results on FFHQ (1st, 2nd rows) and ImageNet (3rd, 4th rows). (a) Measurement, (b) Ground truth, (c) LaMa~\cite{suvorov2022resolution}, (d) DDRM~\cite{kawar2022denoising}, (e) Score-SDE~\cite{song2020score}, (f) RePaint~\cite{lugmayr2022repaint}, (g) MCG (Ours). Out of the $256 \times 256$ images, the 1st and 3rd rows are masked with a $128 \times 128$ box. 92\% of the pixels (all RGB channels) of the images in the 2nd and 4th rows are blocked.}
\label{fig:results_inpainting}
\end{figure}
For all tasks, we aim to verify the superiority of our method against other diffusion model-based approaches, and also against strong supervised learning-based baselines. Further details can be found in \cref{sec:app_exp_detail}.
\paragraph{Datasets and Implementation}
For inpainting, we use FFHQ 256$\times$256~\cite{karras2019style}, and ImageNet 256$\times$256~\cite{deng2009imagenet} to validate our method. We utilize pre-trained models from the open-source repository based on the implementation of ADM (VP-SDE)~\cite{dhariwal2021diffusion}. We validate the performance on 1000 held-out validation images for both the FFHQ and ImageNet datasets.
For the colorization task, we use FFHQ, and LSUN-bedroom 256$\times$256~\cite{yu2015lsun}. We use pre-trained score functions from score-SDE~\cite{song2020score} based on VE-SDE. We use 300 validation images for testing the performance with respect to the LSUN-bedroom dataset.
For experiments with CT, we train our model based on \texttt{ncsnpp} as a VE-SDE from score-SDE~\cite{song2020score}, on the 2016 American Association of Physicists in Medicine (AAPM) grand challenge dataset, and we process the data as in~\cite{kang2017deep}. Specifically, the dataset contains 3839 training images resized to 256$\times$256 resolution. We simulate the CT measurement process with parallel-beam geometry, with projection angles evenly spaced over 180 degrees. Evaluation is performed on 421 held-out validation images from the AAPM challenge.
\begin{table*}[]
\centering
\setlength{\tabcolsep}{0.2em}
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{lllllll@{\hskip 15pt}llllll}
\toprule
{} & \multicolumn{6}{c}{\textbf{FFHQ} ($\bf 256 \times 256$)} & \multicolumn{6}{c}{\textbf{ImageNet} ($\bf 256 \times 256$)} \\
\cmidrule(lr){2-7}
\cmidrule(lr){8-13}
{} & \multicolumn{2}{c}{\textbf{Box}} & \multicolumn{2}{c}{\textbf{Random}} & \multicolumn{2}{c}{\textbf{Extreme}} & \multicolumn{2}{c}{\textbf{Box}} & \multicolumn{2}{c}{\textbf{Random}} & \multicolumn{2}{c}{\textbf{Wide masks}} \\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
\cmidrule(lr){6-7}
\cmidrule(lr){8-9}
\cmidrule(lr){10-11}
\cmidrule(lr){12-13}
{\textbf{Method}} & {FID $\downarrow$} & {LPIPS $\downarrow$} & {FID $\downarrow$} & {LPIPS $\downarrow$} & {FID $\downarrow$} & {LPIPS $\downarrow$} & {FID $\downarrow$} & {LPIPS $\downarrow$} & {FID $\downarrow$} & {LPIPS $\downarrow$} & {FID $\downarrow$} & {LPIPS $\downarrow$} \\
\midrule
MCG~\textcolor{trolleygrey}{(ours)} & \textbf{23.7} & \underline{0.089} & \textbf{21.4} & \textbf{0.186} & \textbf{30.6} & \textbf{0.366} & \textbf{25.4} & 0.157 & \textbf{34.8} & \textbf{0.308} & \underline{21.9} & \underline{0.148}\\
\cmidrule(l){1-13}
Score-SDE~\cite{song2020score} & 30.3 & 0.135 & 109.3 & 0.674 & 48.6 & 0.488 & 43.5 & 0.199 & 143.5 & 0.758 & 39.6 & 0.200\\
RePaint$^*$~\cite{lugmayr2022repaint} & 30.5 & 0.133 & 110.6 & 0.665 & 48.5 & 0.487 & 43.2 & 0.203 & 139.7 & 0.756 & 37.0 & 0.205\\
DDRM~\cite{kawar2022denoising} & 28.4 & 0.109 & 111.6 & 0.774 & \underline{48.1} & 0.532 & 88.8 & 0.386 & \underline{99.6} & 0.767 & 80.6 & 0.398\\
LaMa~\cite{suvorov2022resolution} & 27.7 & \textbf{0.086} & 188.7 & 0.648 & 61.7 & 0.492 & \underline{26.8} & \textbf{0.139} & 134.1 & 0.567 & \textbf{20.4} & \textbf{0.140}\\
AOT-GAN~\cite{zeng2022aggregated} & 29.2 & 0.108 & 97.2 & 0.514 & 69.5 & 0.452 & 35.3 & 0.163 & 119.6 & \underline{0.583} & 29.8 & 0.161 \\
ICT~\cite{wan2021high} & \underline{27.3} & 0.103 & \underline{91.3} & \underline{0.445} & 56.7 & \underline{0.425} & 31.9 & \underline{0.148} & 131.4 & 0.584 & 25.4 & 0.148\\
DSI~\cite{peng2021generating} & 27.9 & 0.096 & 126.4 & 0.601 & 77.5 & 0.463 & 34.5 & 0.155 & 132.9 & 0.549 & 24.3 & 0.154\\
\bottomrule
\end{tabular}
}
\vspace{0.2em}
\caption{
Quantitative evaluation (FID, LPIPS) of the inpainting task on FFHQ and ImageNet. $^*$: Re-implemented with our score function. MCG, Score-SDE, RePaint, and DDRM all share the same score function and differ only in the inference method. \textbf{Bold}: best, \underline{under}: second best.
}
\label{tab:comparison-inpainting}
\end{table*}
\paragraph{Inpainting}
Score-SDE~\cite{song2020score}, RePaint~\cite{lugmayr2022repaint}, and DDRM~\cite{kawar2022denoising} were chosen as baseline diffusion models to compare against the proposed method. For a fair comparison, we use the same score function for all methods including MCG, and only vary the inference method. We also include comparisons against supervised learning-based baselines: LaMa~\cite{suvorov2022resolution}, AOT-GAN~\cite{zeng2022aggregated}, ICT~\cite{wan2021high}, and DSI~\cite{peng2021generating}. We use various forms of inpainting masks: box (a $128 \times 128$ square region is missing), extreme (only the box region is kept), random (90-95\% of pixels are missing), and LaMa-thick. Quantitative evaluation is performed with two metrics: Fr\'echet Inception Distance (FID)-1k~\cite{NIPS2017_8a1d6947} and Learned Perceptual Image Patch Similarity (LPIPS)~\cite{zhang2018unreasonable}.
Our method outperforms the diffusion model baselines~\cite{song2020score,lugmayr2022repaint,kawar2022denoising} by a large margin. Moreover, our method is also competitive with, or even better than, the best-in-class fully supervised methods, as can be seen in \cref{tab:comparison-inpainting}. In \cref{fig:results_inpainting}, we depict representative results that show the superiority of the method. For box-type inpainting, the other diffusion model-based approaches all fail at reconstructing a feasible human face, clearly showing discrepancies inside and outside of the masked region. It is widely known that solving inverse problems on ImageNet is a much harder task due to the variability of the data, and for that matter even the SOTA supervised learning baselines fail to produce coherent reconstructions in the 3rd row. The random masking strategy that we adopt in the 2nd and the 4th rows heavily limits the information available about the ground truth image, but our method can still faithfully recover the original. In contrast, the performance of the supervised learning baselines largely deteriorates due to the bias towards masks that were used during training. Diffusion model baselines also perform poorly due to the extremely limited measurement.
\begin{figure}[t]
\centering
\includegraphics[width=0.90\textwidth]{./figs/results_color_main_v4.jpg}
\caption{Colorization results on FFHQ and LSUN-bedroom. (a) Measurement, (b) pix2pix~\cite{isola2017image} (c) cINN~\cite{ardizzone2019guided} (d) DDRM~\cite{kawar2022denoising}, (e) Score-SDE~\cite{song2020score}, (f) MCG (ours).}
\label{fig:results_color}
\end{figure}
\paragraph{Colorization}
We choose score-SDE~\cite{song2020score} and DDRM~\cite{kawar2022denoising} as diffusion model-based comparison methods, and also compare against cINN~\cite{ardizzone2019guided} and pix2pix~\cite{isola2017image}. Two metrics were used for evaluation: structural similarity index (SSIM) and LPIPS. Consistent with the findings from inpainting, we achieve much better performance than score-SDE, and also compare favorably against state-of-the-art (SOTA) supervised learning-based methods. We use the pre-trained model checkpoints whenever available, and train the model from scratch using the code from the official github repository when they are not. As can be seen in \cref{fig:results_color}, MCG tends to generate vibrant colors throughout the structure, whereas other methods are either biased towards a single tone, or fail to capture the diversity. For example, when comparing the colorization results on the FFHQ dataset, ours is the only method that colors the lips red.
In \cref{tab:comparison-color}, we see that the proposed method outperforms all other methods in terms of both SSIM and LPIPS on LSUN-bedroom, and also achieves strong performance on the colorization of the FFHQ dataset.
\begin{wraptable}[12]{r}{0.5\textwidth}
\centering
\setlength{\tabcolsep}{0.2em}
\resizebox{0.4\textwidth}{!}{
\begin{tabular}{lllll}
\toprule
{Data} & \multicolumn{2}{c}{\textbf{FFHQ(256$\times$256)}} & \multicolumn{2}{c}{\textbf{LSUN(256$\times$256)}} \\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
{\textbf{Method}} & {SSIM $\uparrow$} & {LPIPS $\downarrow$} & {SSIM $\uparrow$} & {LPIPS $\downarrow$} \\
\midrule
MCG~\textcolor{trolleygrey}{(ours)} & \underline{0.951} & \textbf{0.146} & \textbf{0.959} & \textbf{0.160}\\
\cmidrule(l){1-5}
Score-SDE~\cite{song2020score} & 0.936 & 0.180 & 0.945 & 0.199\\
DDRM~\cite{kawar2022denoising} & 0.918 & 0.326 & \underline{0.957} & \underline{0.182}\\
cINN~\cite{ardizzone2019guided} & \textbf{0.952} & \underline{0.166} & 0.952 & 0.180 \\
pix2pix~\cite{isola2017image} & 0.935 & 0.184 & 0.947 & 0.174 \\
\bottomrule
\end{tabular}
}
\vspace{0.2em}
\caption{
Quantitative evaluation (SSIM, LPIPS) of colorization task. \textbf{Bold}: best, \underline{under}: second best.
}
\label{tab:comparison-color}
\end{wraptable}
\paragraph{CT reconstruction}
To the best of our knowledge, \cite{song2022solving} is the only method that tackles CT reconstruction directly with diffusion models. We compare our method against \cite{song2022solving}, which we refer to as score-CT henceforth. We also compare with the best-in-class supervised learning methods, cGAN~\cite{ghani2018deep} and SIN-4c-PRN~\cite{wei20202}. As a compressed sensing baseline, FISTA-TV~\cite{beck2009fast} was included, along with the analytical reconstruction method, FBP. We use two standard metrics, peak signal-to-noise ratio (PSNR) and SSIM, for quantitative evaluation.
From \cref{tab:comparison-ct}, we see that the newly proposed MCG method outperforms the previous score-CT~\cite{song2022solving} by a large margin. We can observe the superiority of MCG over other methods more clearly in \cref{fig:results_CT}, where MCG reconstructs the measurement with high fidelity, closely mimicking the ground truth. All other methods including the fully supervised baselines fall behind the proposed method.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\textwidth]{./figs/results_ct_main_v4.jpg}
\caption{CT reconstruction results on AAPM dataset (30 view measurement). (a) FBP, (b) FISTA-TV~\cite{beck2009fast}, (c) SIN-4c-PRN~\cite{wei20202}, (d) Score-CT~\cite{song2022solving}, (e) MCG (ours), and (f) ground truth (GT). Yellow numbers indicate the PSNR metric.}
\label{fig:results_CT}
\end{figure}
\paragraph{Properties of MCG}
Our proposed method is fully unsupervised and is not trained on solving a specific inverse problem. For example, our box masks and random masks erase pixel values in very different ways. Nevertheless, MCG generalizes well to such different measurement conditions, while other methods show a large performance gap between the different mask shapes. We further note two appealing properties of MCG as an inverse problem solver: 1) the ability to generate multiple solutions given a condition, and 2) the ability to maintain perfect measurement consistency. The former is often lacking in supervised learning-based methods~\cite{suvorov2022resolution,wei20202}, and the latter is often not satisfied by unsupervised GAN-based solutions~\cite{daras2021intermediate,bora2017compressed}.
\begin{wraptable}[]{r}{0.5\textwidth}
\centering
\setlength{\tabcolsep}{0.2em}
\resizebox{0.4\textwidth}{!}{
\begin{tabular}{lllll}
\toprule
{} & \multicolumn{4}{c}{\textbf{AAPM} ($\bf 256 \times 256$)}\\
\cmidrule(lr){2-5}
{Views} & \multicolumn{2}{c}{\textbf{18}} & \multicolumn{2}{c}{\textbf{30}} \\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
{\textbf{Method}} & {PSNR $\uparrow$} & {SSIM $\uparrow$} & {PSNR $\uparrow$} & {SSIM $\uparrow$} \\
\midrule
MCG~\textcolor{trolleygrey}{(ours)} & \textbf{33.57} & \textbf{0.956} & \textbf{36.09} & \textbf{0.971}\\
Score-POCS~\textcolor{trolleygrey}{(ours)} & 30.77 & 0.907 & 32.68 & 0.923\\
\cmidrule(l){1-5}
Score-CT~\cite{song2022solving} & 29.85 & 0.897 & 31.97 & 0.913\\
SIN-4c-PRN~\cite{wei20202} & 26.96 & 0.850 & 30.23 & 0.917 \\
cGAN~\cite{ghani2018deep} & 24.38 & 0.823 & 27.45 & 0.927 \\
FISTA-TV~\cite{beck2009fast} & 21.57 & 0.791 & 23.92 & 0.861\\
\bottomrule
\end{tabular}
}
\vspace{0.2em}
\caption{
Quantitative evaluation (PSNR, SSIM) of CT reconstruction task. \textbf{Bold}: best.
}
\label{tab:comparison-ct}
\end{wraptable}
\section{Conclusion}
\label{sec:conclusion}
In this work, we proposed a general framework that can greatly enhance the performance of diffusion model-based solvers for inverse problems. We demonstrated several promising applications: inpainting, colorization, and sparse-view CT reconstruction, and showed that our method can outperform the current state-of-the-art methods. We analyzed our method theoretically and showed that MCG prevents the data generation process from falling off the manifold, thereby reducing the errors that might accumulate at every step. Further, we showed that MCG controls the direction tangent to the data manifold, whereas the score function controls the normal direction, such that the two components complement each other.
\paragraph{Limitations and Broader Impact}
The proposed method is inherently stochastic since the diffusion model is the main workhorse of the algorithm. When the measurement dimension $m$ is pushed to low values, our method at times fails to produce high-quality reconstructions, albeit remaining better than the other methods overall. We note that our method is slow to sample from, inheriting the existing limitations of diffusion models. This would likely benefit from leveraging recent solvers aimed at accelerating the inference speed of diffusion models. In line with the arguments of other generative model-based inverse problem solvers, our method relies heavily on the underlying diffusion model, and can thus potentially create malicious content such as deepfakes. Further, the reconstructions could intensify the social bias that is already present in the training dataset.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
\vspace{-0.14cm}
The iris recognition literature has investigated the performance of humans at tasks such as iris texture perception~\cite{stark_2010, bowyer_2010, hollingsworth_2010, hollingsworth_2011, shen_2013} and identity verification~\cite{mcginn_2013, guest_2013}.
Understanding how people perceive and analyze iris features is useful not only for inspiring the development of better solutions, but also for making them more \emph{human-intelligible}.
Human-intelligible iris recognition is particularly necessary in forensic applications, where experts often rely on the outputs of algorithms for sustaining conclusions and presenting them in a court of law.
As pointed out by Chen \etal~\cite{chen_2016}, although traditional iris recognition solutions provide nearly perfect false match rates~\cite{daugman_2006}, they are still far from being human-friendly enough to convince people who lack image processing expertise.
Moreover, human-intelligible iris recognition helps to meet the need to deploy more transparent and accountable systems~\cite{nissembaum_1994, wachter_2017}.
As stated in the new European General Data Protection Regulation (GDPR)~\cite{gdpr_2018}, citizens have the right to obtain an explanation of decisions made about them by algorithms belonging to either government or industry.
As a consequence, iris recognition solutions lacking human-intelligible decision processes may face usage hindrances.
Indeed, the accountability discussion is also present in the American scientific community, proven by the recent efforts of the National Science Foundation (NSF) in funding research on the topic~\cite{cornell_2017}.
\begin{table*}[t]
\renewcommand{\arraystretch}{1.2}
\caption{Literature of human performance in iris recognition. The present work appears in the last row.}
\label{tab:rw}
\centering
\footnotesize
\begin{tabular}{L{1.7cm} L{1.4cm} C{1.1cm} C{1.2cm} C{1.3cm} L{2.3cm} L{5.5cm}}
\hline
\rowcolor{gray!8}{\bf Reference} & {\bf Problem} & {\bf Subjects (\#)} & {\bf Trials per session (\#)} & {\bf Average session time (min)} & {\bf Used images} & {\bf Data details} \\
\hline
Stark {\it et~al.}~\cite{stark_2010} & Iris texture perception & 21 & 100 & 30 & Segmented iris only & 100~images~depicting~100~distinct~irises~from 100 distinct individuals\\
\cmidrule(lr){1-7}
Bowyer {\it et~al.}~\cite{bowyer_2010} & Iris texture perception & 55 & 210 & 10$^\dagger$ & Whole eye, segmented iris only, or periocular only & 630~images~depicting~630~distinct~irises~from 315 distinct individuals \\
\cmidrule(lr){1-7}
Hollingsworth {\it et~al.}~\cite{hollingsworth_2010} & Iris texture perception & 28 & 196 & 10$^\dagger$ & Segmented iris only or periocular only & 392 images depicting 392 distinct irises from 196 distinct individuals (including twins' pairs)\\
\cmidrule(lr){1-7}
Shen and Flynn~\cite{shen_2013} & Iris texture perception & 21$^\ddagger$ & 64$^\ddagger$ & 270 & Strip-normalized iris & 124 images depicting 62 distinct irises from 62 distinct individuals\\
\cmidrule(lr){1-7}
McGinn {\it et~al.}~\cite{mcginn_2013} & Identity verification & 22 & 190 & 26 & Whole eye & 202 images depicting 109 distinct irises from 109 distinct individuals (including twins' pairs)\\
\cmidrule(lr){1-7}
Guest {\it et~al.}~\cite{guest_2013} & Identity verification & 32 & 52 & 10 & Whole eye & 208 images depicting 104 distinct irises from 104 distinct individuals\\
\cmidrule(lr){1-7}
\textbf{This work} & {\bf Identity verification} & {\bf 114} & {\bf 30$^\ast$, 24$^\star$} & {\bf 17$^\ast$, 15$^\star$} & {\bf Segmented iris only} & {\textbf{1360 images of 512 distinct irises from 512 individuals (with varied pupil dilation, twins', disease-affected, and post-mortem samples)}}\\
\hline
\end{tabular}\\
\vspace{0.2cm}
$\dagger$ Lower-bound estimated value; each subject had three seconds to inspect each trial --- $\ddagger$ Average value of three sessions\\
$\ast$ Conducted at the University of Notre Dame --- $\star$ Conducted at the Research and Academic Computer Network (NASK)
\vspace{-0.1cm}
\end{table*}
This paper contributes to the understanding of how an everyman performs identity verification based on iris patterns.
In each of two experiments, subjects are presented with a pair of iris images and asked to decide whether both images depict the same eye.
In the first experiment, we apply a typical multiple-choice questionnaire~\cite{mcginn_2013, guest_2013}, with no request for image regions or features that justify the decisions.
In the second experiment, subjects are asked to manually annotate matching and non-matching regions in the pair of irises, which support their decisions, as an effort to make them more conscious about the task.
In the experiments, there are image pairs representing six different conditions, which are either commonplace or reportedly known to pose challenges to automated systems or to human examiners: (1)~healthy eyes that are easily handled by an example iris recognition software, (2)~healthy eyes that are challenging for the same software, (3)~disease-affected eyes, (4)~iris pairs with extensive difference in pupil dilation, (5)~irises of identical twins, and (6)~iris images acquired from deceased individuals.
This variety of conditions allowed us to observe that pairs of images depicting the same iris but with different pupil dilation, iris images of twins, and post-mortem samples are the most challenging to humans.
Also, subjects were able to improve their recognition accuracy when they were asked to manually annotate regions supporting their decision.
That was not true, however, in the case of iris images of identical twins, which were so confusing that the assessed numbers of improved and worsened decisions were similar.
In summary, this paper advances the state of the art in human examination of iris images with the following contributions:
\begin{itemize}
\item Assessment of human skills in verifying the identity of iris images presenting different conditions, including healthy eyes of unrelated individuals, of identical twins, and previously unused disease-affected and post-mortem iris samples.
\item Employment of custom software to allow subjects to annotate the image regions they rely upon to classify an iris pair, and analysis of how this helps them to provide more accurate decisions.
\item Introduction of the notion of \emph{non-matching regions}, besides the typical concept of \emph{matching regions}, in the process of matching pairs of iris images.
\end{itemize}
The remainder of this paper has four sections.
In Sec.~\ref{sec:rw}, we discuss the related work, while in Sec.~\ref{sec:es}, we detail the configuration of experiments.
In Sec.~\ref{sec:results}, in turn, we report the obtained results, followed by Sec.~\ref{sec:conclusions}, where we discuss the lessons learned from the experiments.
\section{Related Work}
\label{sec:rw}
\vspace{-0.1cm}
There are only a few works related to human examination of iris images.
Stark \etal~\cite{stark_2010} studied how people classify iris textures into categories.
They used a software tool that allowed subjects to browse a set of segmented near-infrared iris images and use a drag-and-drop scheme to organize the images into groups based on their perception of the iris textures.
They found that people consistently identify similar categories and subcategories of irises.
\begin{figure*}[t]
\centering
\includegraphics[width=17.4cm]{pipeline.pdf}
\caption{Experimental methodology overview.
Rounded rectangular boxes denote subjects' activities, solid arrows
represent their precedence, and dashed arrows denote data flow.
Experiments always begin with \emph{Session~1}, namely the annotation-less experimental part.
Subjects available for \emph{Session~2} then participate in the annotation-driven experimental part.
}
\label{fig:pipeline}
\end{figure*}
Bowyer \etal~\cite{bowyer_2010} investigated people's ability to recognize right and left irises as belonging to the same person or not.
Through experiments, they discovered that humans perceive texture similarities that are not detected by automated solutions.
As a consequence, they can correctly guess, with only three seconds viewing, if left and right eyes belong to the same individual.
When evaluating near-infrared images of the whole eye, subjects achieved an accuracy of 86\% in the task at hand.
When evaluating images with iris portions masked out, subjects achieved an accuracy of 83\%, by relying only on the periocular parts of samples.
Subjects' ratings of image pairs were collected using a five-level scale, ranging from (1)~``same individual (certain)'', (2)~``same individual (likely)'', (3)~``uncertain'', (4)~``different people (likely)'', to (5)~``different people (certain)''.
In a similar fashion, Hollingsworth \etal~\cite{hollingsworth_2010} investigated people's skills in deciding if two different iris images depict the eyes of twin siblings or not.
Contrary to the typical identity verification pipeline~\cite{daugman_2004}, which the authors reported as being useless for the task at hand, human examiners could reach an accuracy of 81\% when spending only three seconds analyzing pairs of near-infrared segmented iris images.
The accuracy dropped to 76.5\% when only periocular regions were available.
Again, subjects' responses were collected using a five-level rating.
Hollingsworth \etal~\cite{hollingsworth_2011} present the combined findings of \cite{bowyer_2010} and \cite{hollingsworth_2010}.
Shen and Flynn~\cite{shen_2013} asked people to manually annotate \emph{iris crypts}, oval-shaped iris regions with strong edges and darker interior, over near-infrared images.
Using annotation software, subjects were asked to outline the borders of the crypts, finding ``easy'' and ``challenging'' samples, depending on the clarity of crypts.
Presented images comprised strip-normalized iris images, with non-iris-texture regions masked out.
The aim of the research was to figure out the utility of crypts for developing more human-interpretable iris-based identity verification.
For that, they assessed the repeatability of annotated crypts across subjects, finding that it was possible in the case of ``easy'' samples.
McGinn \etal~\cite{mcginn_2013} assessed the performance of human examiners in iris-based identity verification.
For that, they asked subjects to classify pairs of irises as either genuine (two images depicting the same eye) or impostor (two images depicting different eyes), again with a five-level rating scale.
Presented images comprised near-infrared samples, containing whole eyes of either close-age and same-ethnicity unrelated individuals, or of identical twins.
They concluded that identical twins pose a challenge to human performance.
In spite of that, the overall accuracy was very high: 92\% of the time subjects were successful in classifying iris pairs.
Finally, results suggested that subjects improved skills as they gained experience.
Guest \etal~\cite{guest_2013}, in turn, investigated the performance of humans in deciding if two distinct infrared whole-eye images depict the same eye or not.
In the experiments, subjects presented an overall decision accuracy of 83.2\%.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{annotation-example.png}
\caption{An example of manual annotation containing matching (in green) and non-matching (in red) regions between two post-mortem irises.}
\label{fig:annotation}
\vspace{-0.2cm}
\end{figure}
Table~\ref{tab:rw} summarizes these previous works and the work described in this paper.
To our knowledge, there are no other publications about human examination of iris images.
\section{Experimental Setup}
\label{sec:es}
\vspace{-0.1cm}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.24\textwidth}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{easy_gen_06122d604.png}}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{easy_gen_06122d978.png}}
\caption{}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.24\textwidth}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{easy_imp_07019d21.png}}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{easy_imp_07020d6.png}}
\caption{}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.24\textwidth}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{difficult_gen_05857d1321.png}}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{difficult_gen_05857d1645.png}}
\caption{}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.24\textwidth}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{difficult_imp_06838d125.png}}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{difficult_imp_06840d98.png}}
\caption{}
\end{subfigure}\vskip2mm
\begin{subfigure}[t]{0.24\textwidth}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{pd_gen_66_1_375.png}}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{pd_gen_66_1_440.png}}
\caption{}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.24\textwidth}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{twins_imp_90453d1.png}}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{twins_imp_90454d1.png}}
\caption{}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.24\textwidth}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{pm_0007_L_1_1.png}}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{pm_0007_L_2_1.png}}
\caption{}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.24\textwidth}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{disease_gen_0116_R_IG_1_1.png}}
\fcolorbox{black}{black}{\includegraphics[width=0.43\textwidth]{disease_gen_0116_R_TC_1_1.png}}
\caption{}
\end{subfigure}
\caption{Examples of iris pairs presented to subjects: (a)~pair with the same iris generating {\it low} Hamming distance, (b)~different irises generating {\it high} Hamming distance, (c)~pair with the same iris generating {\it high} Hamming distance, (d)~different irises generating {\it low} Hamming distance, (e)~the same iris before and after {\it visible light stimulation}, (f)~different irises of {\it identical twins}, (g)~the same {\it post-mortem} iris captured five and 16 hours after death, (h)~the same disease-affected iris captured in two different sessions.}
\label{fig:samples:print_and_lens}
\vspace{-0.4cm}
\end{figure*}
The experimental setup is described in five parts.
In Sec.~\ref{sec:method}, we introduce the two-session experimental methodology, while in Sec.~\ref{sec:dataset} we explain the chosen categories of iris pairs, including data sources and image pre-processing.
In Sec.~\ref{sec:nd}~and~\ref{sec:wu}, respectively, we describe the experiments conducted at the University of Notre Dame and at NASK headquarters.
Finally, in Section~\ref{sec:os}, we detail the experimental setup for employing \emph{OSIRIS}~\cite{othman_2016}, which contains an open-source implementation of Daugman's method for iris recognition~\cite{daugman_2004}, and \emph{IriCore}~\cite{iricore_2018} and \emph{MIRLIN}~\cite{mirlin_2018},
two state-of-the-art solutions for iris recognition.
The idea is to provide, along with the performance of humans, the results of fully automated strategies.
\subsection{Experimental methodology}
\label{sec:method}
\vspace{-0.1cm}
We propose a two-session experimental method that allows humans to perform identity verification through the examination of iris patterns.
For that, we collect subjects' decisions on whether iris image pairs depict the same eye or not.
In the first session, subjects are expected only to provide their decision, with no need for clarification.
In the second session, subjects are asked to provide a manual annotation of the image regions that they see as matching or diverging between the pair of iris images, in order to justify their classification of the image pair.
This serves as an effort to make them more conscious about the task at hand.
Fig.~\ref{fig:pipeline} provides an overview of the proposed experimental method, with the activities that we envision for each subject.
\emph{Session~1} starts with an orientation and signing of the consent form (\emph{Session 1 Orientation} in Fig.~\ref{fig:pipeline}).
The subject then views a sequence of image pairs and judges whether or not they represent the same eye (\emph{Decision Selection} in Fig.~\ref{fig:pipeline}).
The list of trials (\emph{trial list 1} in Fig.~\ref{fig:pipeline}) is previously generated through a pre-processing step, in which irises are selected and genuine and impostor pairs are created.
Details about the selected samples are presented in Sec.~\ref{sec:dataset}.
Decisions are then recorded and stored for further analysis.
Subjects are given as much time as they need to make a decision, and the decision times are collected for each trial (explaining the chronometer icons in Fig.~\ref{fig:pipeline}).
This first session was followed in experiments at two different institutions, which allowed us to get more diverse results across distinct subject populations.
\emph{Session~2} is an annotation-driven part of the experiment that subjects are asked if they have time to complete.
Subjects who elect to do this session receive additional instructions on how to provide manual annotation.
A subset of iris image pairs used in the first session is selected for annotation in this second session (\emph{trial list 2} in Fig.~\ref{fig:pipeline}).
Details about the selected samples are given in Sec.~\ref{sec:dataset}.
For each trial, the subject is asked to annotate and connect matching and non-matching regions between the two irises.
Subjects are also given access to their decision made in \emph{Session~1}, and allowed to change such decision.
As in \emph{Session~1}, subjects can spend as much time as they need in this task, and the time intervals are recorded.
Fig.~\ref{fig:annotation} depicts an example of manual annotation of matching and non-matching regions between two post-mortem irises.
\subsection{Dataset of iris image pairs}
\label{sec:dataset}
To assess the influence of different conditions on the accuracy of human iris recognition, we conducted experiments with six categories of irises, which are either commonplace or reportedly known to pose challenges to automated systems or human examiners:
\vspace{-0.1cm}
\begin{enumerate}
\item \textit{\textbf{Healthy and easy:}} images depicting apparently healthy eyes that pose no challenge to the OSIRIS iris recognition software used in this study~\cite{othman_2016}.
That is, the case of genuine pairs generating low Hamming distances between them (Fig.~\ref{fig:samples:print_and_lens}(a)), or of impostor pairs whose iris codes yield large Hamming distances (Fig.~\ref{fig:samples:print_and_lens}(b)).
\vspace{-0.1cm}
\item \textit{\textbf{Healthy but difficult:}} apparently healthy eyes that pose challenges to the OSIRIS software.
That is, the case of genuine pairs generating unexpectedly large Hamming distances (Fig.~\ref{fig:samples:print_and_lens}(c)), or of impostor pairs generating unexpectedly small Hamming distances (Fig.~\ref{fig:samples:print_and_lens}(d)).
\vspace{-0.1cm}
\item \textit{\textbf{Large difference in pupil dilation:}} images of the same eye with a significant difference in pupil dilation, representative of the natural iris transformations that occur due to variations in environmental lighting (Fig.~\ref{fig:samples:print_and_lens}(e)).
\vspace{-0.1cm}
\item \textit{\textbf{Twins:}} images depicting different eyes, one from each of a pair of identical twins, which humans reportedly perceive as similar, whereas automated systems do not~\cite{hollingsworth_2011} (Fig.~\ref{fig:samples:print_and_lens}(f)).
\vspace{-0.1cm}
\item \textit{\textbf{Post-mortem:}} images depicting either the same or different eyes, captured from deceased individuals, which are known to be surprisingly useful for iris recognition \cite{Trokielewicz_2016} (Fig.~\ref{fig:samples:print_and_lens}(g)).
\vspace{-0.1cm}
\item \textit{\textbf{Disease-affected:}} images depicting the same eye, which suffers from varied eye diseases that may deteriorate the recognition reliability of automated systems~\cite{Trokielewicz_2015} (Fig.~\ref{fig:samples:print_and_lens}(h)).
\vspace{-0.1cm}
\end{enumerate}
Fig.~\ref{fig:easy-difficult-distro} illustrates the distributions of genuine and impostor comparison scores generated by OSIRIS for image pairs of healthy eyes.
This information was used to select ``easy'' and ``difficult'' cases.
Additionally, we generated both genuine and impostor pairs for disease-affected and post-mortem eyes.
With respect to twins' samples, it was obviously not possible to generate genuine pairs.
To balance the number of impostor and genuine trials, we did not generate impostor pairs from images presenting a large difference in pupil size.
Also, when generating the genuine and impostor pairs of healthy, post-mortem, and disease-affected irises, we neither mixed different categories, nor created pairs of images that were captured on the same day.
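For illustration, the selection of ``easy'' and ``difficult'' healthy pairs from the OSIRIS score distributions could be sketched as follows; the fixed margin around the 0.32 acceptance threshold is an assumption made for this example, as the exact selection criterion is not detailed in the text:

```python
# Sketch of splitting scored pairs into "easy" and "difficult" cases
# relative to the OSIRIS acceptance threshold of 0.32. The margin is
# an illustrative assumption, not the paper's actual criterion.
THRESHOLD = 0.32

def categorize(pairs_with_scores, is_genuine, margin=0.05):
    easy, difficult = [], []
    for pair, score in pairs_with_scores:
        if is_genuine:
            # Genuine pairs should score LOW; an unexpectedly high
            # score makes the pair "difficult".
            (easy if score < THRESHOLD - margin else difficult).append(pair)
        else:
            # Impostor pairs should score HIGH; an unexpectedly low
            # score makes the pair "difficult".
            (easy if score > THRESHOLD + margin else difficult).append(pair)
    return easy, difficult

# Hypothetical genuine comparisons with their fractional Hamming distances.
genuine_scores = [(("imgA_1", "imgA_2"), 0.18), (("imgB_1", "imgB_2"), 0.41)]
easy_gen, hard_gen = categorize(genuine_scores, is_genuine=True)
print(easy_gen)  # [('imgA_1', 'imgA_2')]
print(hard_gen)  # [('imgB_1', 'imgB_2')]
```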
Given that our intent was to focus on the iris texture and that the dataset was very diverse, we manually segmented all the images and masked out the regions that should not be used by subjects in their judgment, such as eyelashes, eyelids, specular reflections, and severe effects from disease or post-mortem deterioration (\eg corneal wrinkles).
In addition, contrast-limited adaptive histogram equalization (CLAHE~\cite{pizer_1987}) was used to enhance contrast for image display, as illustrated in Fig.~\ref{fig:samples:print_and_lens}.
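For intuition, a simplified stand-in for this enhancement step is plain global histogram equalization, sketched below in NumPy; CLAHE additionally operates on local tiles and clips the histogram to limit noise amplification, and the synthetic input image here is purely illustrative:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image.

    CLAHE, used in the paper, additionally works on local tiles and
    clips the histogram; this global version shows only the core
    intensity-remapping idea.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)  # lookup table: old -> new intensity
    return lut[img]

# Synthetic low-contrast image standing in for a near-infrared iris capture.
rng = np.random.default_rng(0)
iris = rng.normal(128, 10, (240, 320)).clip(0, 255).astype(np.uint8)
enhanced = hist_equalize(iris)
print(int(enhanced.min()), int(enhanced.max()))  # 0 255 -- full 8-bit range
```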
Images of healthy eyes were collected from the \emph{ND-CrossSensor-Iris-2013} dataset~\cite{ndcsdataset_2013}.
Disease-affected iris images were picked from \emph{Warsaw-BioBase-Disease-Iris v2.1} database \cite{Trokielewicz_2015}.
Post-mortem iris images were selected from \emph{Warsaw-BioBase-Post-Mortem-Iris v1.0} dataset \cite{Trokielewicz_2016}.
Iris~images of twins and images presenting a large difference in pupil dilation were selected from datasets of the University of Notre Dame, including the one used by Hollingsworth~\etal~\cite{hollingsworth_2011}.
\subsection{Notre Dame Experiments}
\label{sec:nd}
Custom software was prepared for both annotation-less and annotation-driven sessions.
In the annotation-less \emph{Session~1}, 86 adult individuals (between 18 and 65 years old) from the university community (including students, staff, and faculty) volunteered to participate, with no constraints related to gender and ethnicity.
All were subject to the same protocol, approved by the internal academic \emph{Human Subjects Institutional Review Board}.
Each volunteer was asked to evaluate a set of 20 iris image pairs, which always contained the following distribution of image pairs, presented in randomized order for each subject:
\begin{itemize}
\item four healthy easy pairs, with two impostor and two genuine samples;
\vspace{-0.2cm}
\item four healthy difficult pairs, with two impostor and two genuine samples;
\vspace{-0.2cm}
\item four genuine pairs of irises with large difference in pupil dilation;
\vspace{-0.2cm}
\item four impostor twins' pairs;
\vspace{-0.2cm}
\item four genuine post-mortem pairs.
\vspace{-0.2cm}
\end{itemize}
In each trial, the software displayed a pair of iris images and asked the subject to select one of the following: \emph{``1.~same person (certain)''}, \emph{``2.~same person (likely)''}, \emph{``3.~uncertain''}, \emph{``4.~different people (likely)''}, and \emph{``5.~different people (certain)''}.
Fig.~\ref{fig:session-1-printscreen} depicts the interface of the software, showing one of the 20 trials and possible answers.
Each subject could spend as much time as necessary before selecting the answer.
The following trial was only displayed after the acceptance of the current selection.
In total, the software used 20 disjoint sets of 20 image pairs, leading to 400 available trials composed from 800 manually segmented iris images.
As a consequence, each trial set was presented, on average, to four subjects, but never in the same order, since the tool randomly shuffled the 20 trials for each individual.
On average, each subject spent seven minutes participating in the annotation-less first session.
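The composition and per-subject shuffling of a trial set could be sketched as follows; the category labels stand in for actual image pairs, and the seeding scheme is only an assumption to make the example reproducible:

```python
import random

# Composition of one 20-trial set in Notre Dame Session 1 (category
# counts follow the paper; labels stand in for actual image pairs).
trial_set = (
    ["healthy_easy"] * 4
    + ["healthy_difficult"] * 4
    + ["pupil_dilation"] * 4
    + ["twins"] * 4
    + ["post_mortem"] * 4
)

def trials_for_subject(trial_set, seed):
    # Each subject receives the same pairs, but in an independently
    # shuffled order.
    order = list(trial_set)
    random.Random(seed).shuffle(order)
    return order

subject_a = trials_for_subject(trial_set, seed=1)
subject_b = trials_for_subject(trial_set, seed=2)
print(len(subject_a), sorted(subject_a) == sorted(subject_b))  # 20 True
```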
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{easy-diff-distro.eps}
\vspace{-0.2cm}
\caption{Distributions of the fractional Hamming distance between iris codes.
Codes were calculated using the OSIRIS software.
These distributions were used to select ``easy'' and ``difficult'' cases to use in experiments.}
\label{fig:easy-difficult-distro}
\vspace{-0.2cm}
\end{figure}
In the second session, 85 of the 86 subjects continued to provide manual annotations for both matching and non-matching regions between irises.
For each person, the software automatically selected 10 of the 20 trials judged in the first session, balanced across iris categories.
In addition, the tool tried to present, if possible, at least one hit and one miss of each category.
Subjects were allowed to annotate as many pairs of matching or non-matching regions as they wanted; however, annotating from two to five feature pairs was recommended. Also, they were advised to avoid using masked-out black regions.
Fig.~\ref{fig:session-2-printscreen} depicts the annotation interface of the custom software, showing an example annotation.
Subjects could freely change and update their decisions while annotating a particular pair.
Each subject spent between 10 and 20 minutes participating in the annotation-driven session.
\subsection{NASK Experiments}
\label{sec:wu}
\vspace{-0.1cm}
These experiments consisted of only the annotation-less (first) session.
In total, 28 subjects (different from those attending the experiments at Notre Dame) participated, following the exact same protocol, locally approved by the NASK data-protection office.
Each subject was asked to evaluate a set of 24 image pairs, which always contained the following setup:
\vspace{-0.1cm}
\begin{itemize}
\item five genuine post-mortem iris pairs;
\vspace{-0.16cm}
\item five impostor post-mortem iris pairs;
\vspace{-0.16cm}
\item five genuine disease-affected iris pairs;
\vspace{-0.16cm}
\item five impostor disease-affected iris pairs;
\vspace{-0.16cm}
\item four repeated pairs, each one being randomly selected from one of the above subsets.
\vspace{-0.16cm}
\end{itemize}
In each trial, a custom software displayed a pair of iris images and asked the subject to provide a binary decision on whether images depicted the same eye or not.
For the 28 subjects, the software had 10 disjoint sets of 24 trials available, leading to a total of 240 available iris pairs.
As a consequence, each trial set was presented to at least two subjects.
On average, each subject spent 15 minutes participating in this experiment.
\subsection{OSIRIS, IriCore, and MIRLIN Setup}
\label{sec:os}
\vspace{-0.1cm}
We used OSIRIS~\cite{othman_2016}, IriCore~\cite{iricore_2018}, and MIRLIN~\cite{mirlin_2018} as representatives of automated iris-matching algorithms.
OSIRIS implements a Daugman-style solution~\cite{daugman_2004}, therefore relying on Gabor filters to generate iris codes that are compared through fractional Hamming distance.
IriCore and MIRLIN, in turn, comprise two commercial iris recognition solutions, which together represent the current state of the art in this area.
All three methods generate genuine comparison scores that should be close to zero.
Since OSIRIS does not apply Daugman's score normalization~\cite{daugman_2006}, we assumed an acceptance threshold equal to 0.32, as earlier suggested in~\cite{daugman_2004}.
With respect to IriCore, we adopted an acceptance threshold of 1.1, as suggested in its documentation.
For MIRLIN, in turn, we used a threshold of 0.2, as recommended in~\cite{czajka_2016}.
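As a rough sketch of the score computation underlying these thresholds, the fractional Hamming distance between two binary iris codes, restricted to bits valid in both occlusion masks, can be written as follows (the toy 16-bit codes are illustrative; real iris codes contain thousands of bits, and Daugman-style matchers also minimize the score over relative code rotations, which is omitted here):

```python
import numpy as np

def fractional_hamming_distance(code1, code2, mask1, mask2):
    """Fractional Hamming distance between two binary iris codes.

    Only bits valid in BOTH occlusion masks (i.e., not covered by
    eyelids, eyelashes, or reflections) are compared.
    """
    valid = mask1 & mask2
    n_valid = int(valid.sum())
    if n_valid == 0:
        raise ValueError("no overlapping valid bits to compare")
    disagreements = int(((code1 ^ code2) & valid).sum())
    return disagreements / n_valid

# Toy 16-bit codes; real iris codes contain thousands of bits.
a = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=bool)
b = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1], dtype=bool)
m = np.ones(16, dtype=bool)
score = fractional_hamming_distance(a, b, m, m)
print(score)  # 0.125: two disagreeing bits out of 16 valid ones
# With the 0.32 OSIRIS threshold, this pair would be accepted as genuine.
```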
\begin{figure}[t]
\centering
\frame{\includegraphics[width=0.47\textwidth]{session-1-printscreen.png}}
\caption{An example screen of a \emph{Session~1} trial. To proceed to the next image pair, the subject had to select one decision and click ``Next''. }
\label{fig:session-1-printscreen}
\vspace{-0.1cm}
\end{figure}
\begin{figure}[t]
\centering
\frame{\includegraphics[width=0.47\textwidth]{session-2-printscreen.png}}
\caption{An example screen of a \emph{Session~2} trial.
Subjects were allowed to freely annotate and connect matching (green) or non-matching (red) features over the two presented irises, to support their decision.
They could update their previous decision by clicking on the ``change'' option.}
\label{fig:session-2-printscreen}
\vspace{-0.4cm}
\end{figure}
\section{Results}
\label{sec:results}
\vspace{-0.1cm}
Table~\ref{tab:session-1-acc} shows the performance of human subjects in assessing the comparison type (genuine or impostor) of iris pairs during the first annotation-less session of experiments, combined for both \emph{Notre Dame} and \emph{NASK} experiments.
Reported accuracy expresses the percentage of correctly classified trials, across all subjects.
A subject's response was considered correct, or a \emph{hit}, if the subject selected ``same person'', with either ``certain'' or ``likely'' as their confidence, and the image pair was in fact from the same iris.
A ``different people'' response, with either ``certain'' or ``likely'' confidence, was considered correct, or also a \emph{hit}, if the image pair was in fact of different irises.
All other responses, including the ``uncertain'' option, were treated as a mistake, \ie a \emph{miss}.
Given that people's decisions were discrete, we do not provide ROC-like curves in the results.
In addition, OSIRIS, IriCore, and MIRLIN were used according to their respective recommended operating points.
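The hit/miss mapping described above can be expressed compactly; this is a sketch, with illustrative variable names and example trials:

```python
def is_hit(response, same_eye):
    """Map a five-level response to hit/miss.

    Responses: 1-2 = "same person" (certain/likely), 3 = "uncertain",
    4-5 = "different people" (likely/certain). "Uncertain" always
    counts as a miss.
    """
    if response in (1, 2):
        return same_eye
    if response in (4, 5):
        return not same_eye
    return False  # response == 3 ("uncertain")

# Illustrative trials: (response, whether the pair is genuine).
trials = [(1, True), (5, False), (3, True), (4, True)]
accuracy = 100.0 * sum(is_hit(r, g) for r, g in trials) / len(trials)
print(accuracy)  # 50.0: hits on (1, True) and (5, False) only
```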
\begin{table}[t]
\caption{
Annotation-less performance of human subjects in iris identification.
Subjects were only asked to select their decisions.
For comparison sake, we report results of the OSIRIS, IriCore, and MIRLIN software, with acceptance thresholds equal to 0.32, 1.1, and 0.2, respectively.
}
\label{tab:session-1-acc}
\centering
\footnotesize
\begin{tabular}{C{0.26cm} R{1.9cm} C{0.94cm} C{0.9cm} C{0.82cm} C{1.0cm}}
\hline
& \multirow{2}{*}{Iris category} & \multicolumn{4}{c}{Accuracy (\%)}\\
& & Humans & OSIRIS & IriCore & MIRLIN \\
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{Genuine pairs}} & Healthy easy & 91.28 & 95.00 & 100.00 & 97.50 \\
& Healthy difficult & 79.07 & 90.00 & 97.50 & 97.50 \\
& Pupil-dynamic & 43.90 & 61.25 & 95.00 & 97.50 \\
& Post-mortem & 51.95 & 33.57 & 73.57 & 47.14 \\
& Disease-affected & 70.80 & 25.00 & 53.33 & 25.00 \\
\cmidrule(lr){2-6}
& Combined & \textbf{60.60} & \textbf{58.86} & \textbf{80.56} & \textbf{65.83}\\
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{Impostor pairs}} & Healthy easy & 84.30 & 100.00 & 100.00 & 100.00 \\
& Healthy difficult & 76.16 & 100.00 & 100.00 & 100.00 \\
& Twins & 55.81 & 100.00 & 100.00 & 100.00 \\
& Post-mortem & 83.90 & 100.00 & 100.00 & 100.00 \\
& Disease-affected & 91.00 & 100.00 & 100.00 & 100.00 \\
\cmidrule(lr){2-6}
& Combined & \textbf{74.41} & \textbf{100.00} & \textbf{100.00} & \textbf{100.00} \\
\hline
\multicolumn{2}{l}{Overall} & \textbf{70.11} & \textbf{79.43} & \textbf{89.06} & \textbf{80.78}\\
\hline
\end{tabular}
\end{table}
Overall, subjects were correct nearly 70\% of the time, while the best algorithm (IriCore) achieved a higher overall accuracy of 89.06\%.
Both human subjects and software tools were more successful in identifying impostor than genuine pairs, with all the three tools not making a single mistake in recognizing impostors.
Nonetheless, in the particular case of genuine samples, humans performed on par with OSIRIS and MIRLIN, exceeding their results in face of genuine post-mortem samples.
Moreover, people performed far better when analyzing disease-affected samples, surpassing all three tools.
Indeed, in such cases, the software was consistently biased towards classifying samples as impostors, which explains the close-to-chance (IriCore) or worse-than-chance (OSIRIS, MIRLIN) accuracies for genuine pairs, and the perfect hit rates for impostors.
Subjects performed better than chance (\ie with accuracy higher than 50\%) in most of the iris categories.
However, for the subset composed of iris images with large difference in pupil dilation, the accuracy was only 43.9\%.
Variations in pupil dilation were the most challenging cases for subjects, impairing their ability to recognize different versions of the same eye.
Subjects also had difficulty in analyzing post-mortem iris pairs.
In general, they tended to classify post-mortem samples as impostors, leading to a low accuracy in genuine cases (51.95\%, slightly better than chance), and a higher accuracy in impostor cases (83.90\%).
Similar to the observations of Bowyer \etal~\cite{bowyer_2010}, irises of twins also proved challenging for people, but easy for automated solutions.
Among impostor samples, they were the category in which subjects had the lowest accuracy (55.81\%).
\begin{figure}[t]
\centering
\includegraphics[width=8.3cm]{opinion-conviction.png}
\caption[]{
Normalized frequencies of the decisions of the 86 subjects of the \emph{Notre Dame} experiments, according to their decisions and groundtruth.
Genuine iris pairs are represented by black bars, while impostor pairs are represented by gray ones.
In an ideal classification output, black bars should happen only on the left part of the chart, while gray bars should happen only on the right side.
}
\label{fig:opinion-conviction}
\vspace{-0.4cm}
\end{figure}
Fig.~\ref{fig:opinion-conviction} shows the subjects' confidence level when classifying the iris pairs, during the first session of \emph{Notre Dame} experiments.
Bars depict the normalized frequencies of each response; as a consequence, they sum up to 1.0.
According to the adopted groundtruth color notation, black-bar regions represent genuine pairs and gray regions represent impostor pairs.
Therefore, black regions are expected to occur mostly on the left side of the chart (which corresponds to people's claims of seeing genuine pairs), while gray regions are expected to occur mostly on the right side (which corresponds to claims of seeing impostor pairs).
Gray regions on the left side and black regions on the right side are all errors, as well as any answer in the center (``uncertain'' option).
As one might observe, the ``uncertain'' option was the least selected choice (being taken in less than 8\% of the trials), agreeing with the reports of McGinn \etal~\cite{mcginn_2013}.
Among the ``same person (certain)'' answers (18\% of all answers), one in nine (nearly 11\%) was wrong.
In contrast, among the ``different people (certain)'' answers (20\% of all answers), nearly one third was wrong, revealing more errors in people's convictions of seeing impostor pairs.
In accordance with the data presented in Table~\ref{tab:session-1-acc}, this indicates that people had more problems in recognizing genuine pairs, wrongly classifying many of them as impostors with high confidence.
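These conditional error rates follow from normalizing a table of (response, ground truth) counts. A short sketch of the computation is given below; the counts are illustrative placeholders of our own, not the actual study data:

```python
import numpy as np

# Rows: ground truth (genuine, impostor); columns: the five response
# options, from "same person (certain)" to "different people (certain)".
# Illustrative counts only -- not the actual Notre Dame data.
counts = np.array([[160, 240, 40, 180, 60],    # genuine pairs
                   [ 20,  90, 35, 230, 140]])  # impostor pairs

freq = counts / counts.sum()                   # normalized frequencies sum to 1.0

# Error rate among "same person (certain)" answers: the fraction of those
# answers that were actually given to impostor pairs.
same_certain = counts[:, 0]
err_rate = same_certain[1] / same_certain.sum()
```

With these placeholder counts, `err_rate` evaluates to roughly 11\%, mirroring the ``one in nine'' figure quoted above.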
Table~\ref{tab:session-2-acc} provides a comparison of the performances of the 85 subjects who participated in the first session of the \emph{Notre Dame} experiments (without annotations) and were able to return for the second session, in which they were asked to provide annotations for the matching and non-matching regions between the irises of each presented pair.
The reported accuracy concerns the correctness of the decisions for the subset of iris pairs they had already seen in the first session.
\begin{table}[t]
\caption{
Comparison of 85 subjects' accuracy when performing iris identification without annotations versus with annotations, over exactly the same iris samples.
}
\label{tab:session-2-acc}
\centering
\footnotesize
\begin{tabular}{C{1.1cm} R{1.9cm} C{1.5cm} C{1.5cm}}
\hline
\multirow{3}{*}{Pair class} & \multirow{3}{*}{Iris category} & \multicolumn{2}{c}{Accuracy (\%)}\\
& & without annotations & with annotations\\
\hline
\multirow{5}{*}{Genuine} & Healthy easy & 87.06 &96.47\\
& Healthy difficult & 75.29 & 84.71\\
& Pupil-dynamic & 41.18 & 52.35\\
& Post-mortem & 45.29 & 54.12\\
\cmidrule(lr){2-4}
& Combined & \textbf{55.88} & \textbf{65.69}\\
\hline
\multirow{4}{*}{Impostor} & Healthy easy & 85.88 & 90.59\\
& Healthy difficult & 80.00 & 78.82\\
& Twins & 59.41 & 60.59\\
\cmidrule(lr){2-4}
& Combined & \textbf{71.18} & \textbf{72.65}\\
\hline
Overall & & \textbf{62.00} &\textbf{68.47}\\
\hline
\end{tabular}\\
\vspace{-0.2cm}
\end{table}
The annotation feature helped subjects to improve their decisions in all iris categories, except for impostor healthy difficult pairs (in which accuracy slightly dropped from 80.00\% to 78.82\%).
Iris image pairs with large difference in pupil dilation were the category that benefited the most from annotations, with an improvement in accuracy from 41.18\% to 52.35\%.
Accuracy for post-mortem cases was also significantly improved.
Fig.~\ref{fig:revised-opinions} details how decisions were revised when subjects provided manual annotation.
Black bars express the absolute number of revised decisions that were worsened (\ie, a correct decision after the first session, updated to an incorrect decision during the second session).
Conversely, gray bars express the number of revised decisions that were fixed (\ie, they were originally a miss after first session, but then were updated to a correct decision during the second one).
In general, more decisions were fixed (74 decisions) than worsened (19 decisions).
Interestingly, post-mortem samples presented only improvements (15 decisions were revised), suggesting that people perceived new details in them while providing annotations.
Twins' samples, in turn, once more revealed how confusing they are to people; 11 incorrect decisions were corrected, but 9 correct decisions were changed to incorrect ones.
Last but not least, we could not find a correlation between the time spent by subjects and their accuracy.
Fig.~\ref{fig:time-acc} depicts the distributions of the time spent by subjects to decide each trial in the first, annotation-less sessions (combining both \emph{Notre Dame} and \emph{NASK} experiments, shown on the left side of the chart), and to annotate each trial in the second, annotation-driven sessions (shown on the right side of the chart).
As one might observe, regardless of iris pairs being genuine or impostor, and of decisions being hits or misses, distributions were not significantly different.
As expected, annotation-driven trials were, on average, longer than annotation-less trials.
\begin{figure}[t]
\centering
\includegraphics[width=8.3cm]{revised-opinions.png}
\caption[]{
Numbers of revised iris pairs grouped by iris category.
While manually annotating an iris pair, subjects could change their decision, either improving it (\ie making it right, depicted in gray), or worsening it (\ie making it wrong, depicted in black).
}
\label{fig:revised-opinions}
\vspace{-0.2cm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=8.3cm]{time-accuracy.png}
\caption[]{
Times spent by subjects to answer each trial.
Left side: times spent in the annotation-less sessions.
Right side: times spent in the annotation-driven sessions.}
\label{fig:time-acc}
\vspace{-0.2cm}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
\vspace{-0.1cm}
This paper presents results of a unique study estimating the accuracy of human subjects in comparing iris images of different levels of difficulty, including healthy and disease-affected eyes, and images acquired from cadavers.
The first observation from this study is that we may expect people to be worse than automated iris-recognition methods when comparing healthy eyes.
However, they can be better in cases not yet considered in the development of automated algorithms, such as eyes suffering from diseases or post-mortem deformations.
The second observation is that human examiners on average improve their accuracy when they are asked to annotate matching and non-matching features that support their decision.
Although this improvement is larger for genuine pairs than for impostor pairs, it still suggests that a comparison of iris images performed by humans should be organized in a way that allows them to annotate the features they are using in their judgment.
As future work, this may help in the development of a method for the examination and documentation of irises that is analogous to \emph{ACE-V}~\cite{ashbaugh_1999}, originally proposed for fingerprints.
The third observation is that different categories of iris images result in significantly different performance of human subjects.
Three categories of samples seem to be particularly challenging to humans: irises of identical twins, iris images showing large differences in pupil dilation, and irises of deceased individuals.
{\small
\bibliographystyle{ieee}
\section{Introduction}
Monolayer $^{3}$He on Grafoil (exfoliated graphite) preplated with
a $^{4}$He monolayer is an ideal model system
for strongly correlated two-dimensional (2D) fermions.
The advantage of this system is that the correlation can be varied in a
wide range by changing $^{3}$He areal density ($\rho$).
When $\rho$ is relatively small, the system behaves as a 2D Fermi fluid.
As $\rho$ increases, the correlation between the particles becomes stronger,
and they localize at $\rho_{4/7}=6.80$ nm$^{-2}$ with the assistance
of the substrate potential corrugation \cite{Matsumoto_JLTP, Murakawa}.
This is attributed to the Mott-Hubbard transition \cite{Casey_PRL}.
The localized phase is a commensurate solid with 4/7 of the density of
the first-layer $^{4}$He, the so-called 4/7 phase \cite{Elser_PRL}.
The 4/7 phase has a triangular lattice structure, which causes magnetic
fluctuation among the $^{3}$He nuclear spins.
Previous heat capacity measurements by our group strongly suggest the
existence of ``zero point vacancies (ZPVs)'' in a density range of
$0.8\leq n\equiv \rho /\rho_{4/7}\leq 1$ \cite{Matsumoto_JLTP}.
The ZPV is an atomic vacancy which exists stably even in the ground state.
It is spontaneously created in quantum solids when the half band-width
of quantum-mechanically hopping ZPVs exceeds the
vacancy creation energy.
Although the possible existence of ZPVs in solid He had been
proposed by Andreev and Lifshitz in 1969 \cite{Andreev_JETP},
it has not been found experimentally yet.
However, ZPVs are expected to emerge in the 2D $^{3}$He system
because vacancies may be doped while retaining the 4/7 structure,
in order to reduce the potential energy caused by the underlayer.
In this work, we carried out pulsed-NMR measurements with the spin-echo method,
which reflect the microscopic and dynamical nature of the system.
If macroscopic phase separation happens rather than the emergence of
the ZPV phase, the NMR transverse relaxation should have two components,
each of which has its own characteristic spin-spin relaxation time ($T_{2}$).
In other words, if a single exponential relaxation is observed
in the corresponding density region
$0.8\leq n\leq 1.0$, macroscopic phase separation does not happen.
\section{Experimental}
We used Grafoil substrate which consists of micro-crystallites (platelets)
with atomically flat surfaces of about 10 nm size \cite{Niimi_PRB}.
The mosaic angle spread of platelets is about $\pm$15$^{\circ}$ \cite{Taub_PRB}.
The total surface area is measured to be 53.6 m$^{2}$
from an N$_{2}$ adsorption isotherm at 77 K. The first layer ($^{4}$He)
is adsorbed at 4.2 K and the second layer ($^{3}$He) at 2.7 K.
The NMR transverse relaxation process was monitored by the spin-echo technique
with the pulse sequence of $90^{\circ}$-\,$t$\,-$180^{\circ}$-\,$t$
in a static magnetic field of 172 mT ($f=5.5$ MHz) parallel to the graphite basal plane.
We averaged the raw echo signals 50 to 2000 times depending on the signal amplitude.
Other experimental methods have been described before \cite{Murakawa}.
\section{Results and discussions}
\begin{figure}
\begin{minipage}{14pc}
\includegraphics[width=14pc]{n=0_95.eps}
\end{minipage}
\hspace{2pc}
\begin{minipage}{20pc}
\caption{\label{fig:n=0_95}
Spin echo height as a function of $t$
in the anomalous phase of 2D $^{3}$He at $n=0.95$.
$B=172$ mT. Closed and open circles
are data at $T=20$ and 100 mK, respectively. The dash-dotted lines are
the double exponential fitting (Eq.\,(\ref{eq:decay})).
The dotted line is the single exponential behavior
at $T=100$ mK representing only the first term in Eq.\,(\ref{eq:decay}).
The dashed line is the decay at 20 mK
estimated from the macroscopic two-phase coexistence model.
The solid line shows the decay at
100 mK calculated from Eq.\,(\ref{eq:angle}).
The inset shows raw echo signals.}
\end{minipage}
\end{figure}
\begin{wrapfigure}[17]{r}{16pc}
\begin{center}
\includegraphics[width=14pc]{n=0_85.eps}
\caption{\label{fig:n=0_85}
Spin echo height as a function of $t$
in the anomalous phase of 2D $^{3}$He at $n=0.85$.
Definitions for the symbols and lines are the same as Figure
\ref{fig:n=0_95}.}
\end{center}
\end{wrapfigure}
The measured transverse relaxations at $n=0.95$ and 0.85 are shown in Figures
\ref{fig:n=0_95} and \ref{fig:n=0_85}, respectively.
We carried out the measurements at $T=100$ and 20 mK because $T_{2}$ is independent
of $T$ in the temperature range of $10\leq T\leq 700$ mK
like the exchange plateau in bulk solid $^{3}$He \cite{Guyer_RMP}.
At first glance, the relaxations have two components.
The longer $T_{2}$ component is not due to inappropriate background subtraction
(see the raw signals shown in the inset).
Within the macroscopic two-phase coexistence model, the decay of the echo amplitude $S$
should follow a double exponential function
\begin{equation}
S=A^{\mathrm{short}}\exp (-t/T_{2}^{\mathrm{short}})
+A^{\mathrm{long}}\exp (-t/T_{2}^{\mathrm{long}}).
\label{eq:decay}
\end{equation}
The $T_{2}^{\mathrm{short}}$ and $T_{2}^{\mathrm{long}}$ components are
signals from the 4/7 phase and the high density Fermi fluid, respectively.
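As an aside, a two-component decay of the form of Eq.~(\ref{eq:decay}) can be separated numerically by exponential ``peeling'': fit the slow component to the tail, subtract it, and fit the remainder. The sketch below uses arbitrary illustrative amplitudes and relaxation times of our own choosing, not the measured ones:

```python
import numpy as np

# Synthetic two-component echo decay as in Eq. (1); the amplitudes and
# relaxation times are arbitrary illustrative values, not measured ones.
t = np.linspace(0.0, 0.2, 201)                       # time in seconds
A_s, T2_s, A_l, T2_l = 0.95, 10e-3, 0.05, 80e-3
S = A_s * np.exp(-t / T2_s) + A_l * np.exp(-t / T2_l)

# "Peeling": for t >> T2_short the slow component dominates, so a
# log-linear fit of the tail recovers (A_long, T2_long); subtracting the
# fitted tail and refitting the early-time remainder gives the fast one.
tail = t > 8 * T2_s
slope_l, logA_l = np.polyfit(t[tail], np.log(S[tail]), 1)
T2_l_fit = -1.0 / slope_l                            # ~ 80 ms
rem = S - np.exp(logA_l + slope_l * t)
head = (t < 2 * T2_s) & (rem > 0)
slope_s, _ = np.polyfit(t[head], np.log(rem[head]), 1)
T2_s_fit = -1.0 / slope_s                            # ~ 10 ms
```

In practice one would fit Eq.~(\ref{eq:decay}) by nonlinear least squares; the peeling estimates above are merely convenient starting values.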
The results of fitting to Eq.\,(\ref{eq:decay}) are
the dash-dotted lines in Figures \ref{fig:n=0_95}
and \ref{fig:n=0_85}. Although the fits are seemingly good,
this model is inadequate in several respects.
Firstly, in this model, the decay at $T=20$ mK is estimated from
the $T=100$ mK data as the dashed lines.
We used here the known $T$-dependence of magnetization for
the high-density Fermi fluid
and the 4/7 phase measured in Ref.\,\cite{Murakawa}.
These estimates do not describe the 20 mK data at all.
Secondly, the ratio $A^{\mathrm{long}}/A^{\mathrm{short}}$
remains unchanged or even decreases with decreasing $n$
as shown in Figure \ref{fig:prefactor}.
If the system consists of two components,
$A^{\mathrm{long}}/A^{\mathrm{short}}$ should increase linearly
with decreasing density in the coexistence region.
Therefore, the macroscopic two-phase coexistence model is clearly excluded.
\begin{figure}[b]
\begin{minipage}{18pc}
\includegraphics[width=14pc]{prefactor.eps}
\caption{\label{fig:prefactor}Density dependence of the ratio
$A^{\mathrm{long}}/A^{\mathrm{short}}$ of prefactors
in Eq.\,(\ref{eq:decay}).}
\end{minipage}
\hspace{2pc}
\begin{minipage}{18pc}
\includegraphics[width=13.6pc]{bulk.eps}
\caption{\label{fig:bulk}Spin echo height as a function of $t$
in bulk $^{3}$He of 0 Pa at $T=100$ mK.
The dotted line is the extrapolation of the single exponential
behavior at $t\leq 30$ ms.}
\end{minipage}
\end{figure}
We also made the same measurement filling the sample cell with
liquid $^{3}$He at $P=0$.
The substrate surface is still preplated by a $^{4}$He monolayer.
As shown in Figure \ref{fig:bulk},
the slowdown of the decay (i.e. the existence of a slow component) is not observed in this case.
This means that the long $T_{2}$ component is characteristic
of the 2D samples.
The detailed structure in the decay around $t=30$ ms is probably
due to a complicated diffusion process,
since the sample occupies spaces both within and outside the Grafoil stack.
What, then, causes the slowdown of the decay?
Cowan \cite{Cowan_JPC} showed theoretically that, in 2D systems, $T_{2}$ depends strongly
on the angle ($\beta$) between the static magnetic field direction and
the vector normal to the plane.
This effect has been confirmed by the previous
NMR experiments at $1.2\leq T\leq 4.2$ K \cite{Satoh_JLTP}.
If we assume a Gaussian distribution
with a standard deviation $\sigma =15^{\circ}$
for the mosaic angle spread of the Grafoil substrate,
$S$ is calculated by
\begin{equation}
S=\int_{-90^{\circ}}^{90^{\circ}}\mathrm{d}\beta
\exp \{-tT_{2}^{-1}(\beta)\}\frac{1}{\sqrt{2\pi}\sigma}
\exp\left\{-\frac{(\beta -90^{\circ})^{2}}{2\sigma^{2}}\right\} .
\label{eq:angle}
\end{equation}
We used here the $\beta$-dependence of $T_{2}$, $T_{2}(\beta)$,
for $\omega_{0}\tau_{0}=10^{-1}$ given in
Figure 4 of Ref.\,\cite{Cowan_JPC}, where $\omega_{0}= 2\pi f$ is
the Larmor frequency. $\tau_{0}$ is defined as $a^{2}/2D$
in which $a(= 4.1\times 10^{-10}\;\mathrm{m})$ is the lattice constant and
$D(\sim 10^{-11}\;\mathrm{m}^{2}\cdot\mathrm{s}^{-1})$ is the spin diffusion constant.
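Eq.~(\ref{eq:angle}) can be evaluated by direct numerical quadrature. In the sketch below, a placeholder angular dependence stands in for the $T_{2}(\beta)$ curve of Ref.~\cite{Cowan_JPC}, so all values are purely illustrative:

```python
import numpy as np

sigma = 15.0                                   # mosaic spread in degrees
beta = np.linspace(-90.0, 90.0, 721)           # quadrature grid in degrees

def T2_of_beta(b_deg, T2_min=5e-3):
    # Placeholder for Cowan's T2(beta) curve: the secular dipolar linewidth
    # scales as (3 cos^2(beta) - 1)^2, so T2 is modeled as inversely
    # proportional to it, with a floor near the magic angle.  Illustrative only.
    b = np.deg2rad(b_deg)
    return T2_min / np.maximum((3.0 * np.cos(b) ** 2 - 1.0) ** 2 / 4.0, 1e-2)

def echo_height(t):
    # Gaussian mosaic weight centered at beta = 90 deg (field in plane),
    # discretized on the grid and normalized, as in Eq. (2).
    w = np.exp(-((beta - 90.0) ** 2) / (2.0 * sigma ** 2))
    w /= w.sum()
    return float(np.sum(np.exp(-t / T2_of_beta(beta)) * w))

S = np.array([echo_height(t) for t in np.linspace(0.0, 0.1, 6)])
```

Because the decay is a weighted mixture of exponentials with different rates, its logarithm is convex, reproducing the slowdown of the decay at long $t$.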
The fitted result is the solid lines in Figures
\ref{fig:n=0_95} and \ref{fig:n=0_85}.
Although the fitting quality is not very good because the mosaic angle distribution
of our substrate and the value of $D$ are not accurately known,
the anisotropy of $T_{2}$ explains the measured slowdown of decay semi-qualitatively.
The insensitivity of $A^{\mathrm{long}}/A^{\mathrm{short}}$ to $n$ and
$T$ can be naturally understood from this consideration,
namely that the long-tail component originates from an extrinsic effect
due to the substrate.
Therefore, the short $T_{2}$ component,
which contributes to the total magnetization by about 95 \%
(see Figure \ref{fig:prefactor}), is an intrinsic transverse relaxation,
i.e., $T_{2}^{\mathrm{short}}=T_{2}$.
In other words, the macroscopic phase separation model can be excluded.
The density dependence of $T_{2}^{-1}$ at $T=100$ mK
determined in this way is shown in Figure \ref{fig:dens_dep}.
In the region of $0.7\leq n\leq 1$,
$T_{2}^{-1}$ decreases monotonically with decreasing $n$.
This is consistent with the expectation from the ZPV model that the number of ZPVs
doped into the 4/7 phase linearly increases with decreasing $n$.
It should be noted that if the system is phase separated microscopically
on a length scale much shorter than the diffusion length $l_{D}$
within the time interval $T_{2}$, the fluid and 4/7-phase domains are
in the fast-exchange limit \cite{Hammel} and a single $T_{2}$ will be observed.
We estimate $l_{D} \sim 200$ nm from the relation $l_D \sim a \sqrt{T_{2}J}$
where $J$ is the exchange frequency. Thus, our experimental results
do not exclude the possibility of microscopic phase separation
on length scales shorter than
several tens of nm due to either intrinsic (e.g. domain wall structures)
or extrinsic (e.g. substrate heterogeneities \cite{Morhard}) effects.
\begin{figure}[t]
\begin{minipage}{18pc}
\includegraphics[width=14pc]{dens_dep_inv.eps}
\caption{\label{fig:dens_dep}Density dependence of $T_{2}^{-1}$ at $T=100$ mK.
The solid line is a guide to the eye.}
\end{minipage}
\hspace{2pc}
\begin{minipage}{18pc}
\includegraphics[width=14pc]{n=0_40.eps}
\caption{\label{fig:n=0_40}Spin echo height as a function of $t$
in the Fermi fluid phase of 2D $^{3}$He at $n=0.40$.
The closed and open circles are data at $T=20$ and
100 mK, respectively. The solid lines are guides to the eye.}
\end{minipage}
\end{figure}
Finally, we briefly discuss the data obtained in the Fermi fluid phase
($n=0.40$) shown in Figure \ref{fig:n=0_40}.
The echo signal extrapolated to $t=0$, i.e. the magnetization,
is unchanged between $T=20$ and 100 mK,
which is characteristic of a degenerate Fermi fluid.
The decay rate also decreases with increasing $t$ here.
However, this does not originate only from the $T_{2}$ anisotropy,
but could also be related to Fermi liquid effects such as the increase of $D$
with decreasing $T$, because both $T_{2}^{\mathrm{short}}$ and $T_{2}^{\mathrm{long}}$
vary with $T$.
Further measurements of the detailed $T$-dependence of the relaxation and
of $D$ values under field gradients will provide us with useful
information on the spin diffusion in 2D fermions.
This work was financially supported by Grant-in-Aid for Scientific
Research on Priority Areas (No. 17071002) from MEXT, Japan.
\section*{References}
\section{Introduction}
In many signal processing applications such as estimation of brain activity from MEG time-series \cite{phillips1997meg}, estimation of time-varying networks \cite{kolar2010estimating}, electroencephalogram (EEG) analysis \cite{nunez1995neocortical}, calcium imaging \cite{vogelstein2010fast}, functional magnetic resonance imaging (fMRI) \cite{chang2010time}, and video compression \cite{jung2010motion}, the signals often exhibit abrupt changes which are blurred through convolution with unknown kernels due to intrinsic measurement constraints. Traditionally, state-space models have been used for estimating the underlying signal given the blurred and noisy observations. Gaussian state-space models in particular are widely used to model smooth state transitions. Under normality assumptions, posterior mean filters and smoothers are optimal estimators, where the analytical solutions are given by the Kalman filter and the fixed interval smoother, respectively \cite{haykin2008adaptive}.
When applied to observations from abruptly changing states, Gaussian state-space models exhibit poor performance in recovering sharp transitions of the states due to their underlying smoothing property. Although filtering and smoothing recursions can be obtained in principle for non-Gaussian state-space models, exact calculations are no longer possible \cite{fahrmeirstate}. Apart from crude approximations like the extended Kalman filter, several methods have been proposed for state estimation including numerical methods for low-dimensional states \cite{kitagawa1998self}, Monte Carlo filters \cite{kitagawa1998self,hurzeler1998monte}, posterior mode estimation \cite{fruhwirth1994applied,fruhwirth1994data}, and fully Bayesian smoothing using Markov chain Monte Carlo simulation \cite{fahrmeirstate, knorr1999conditional, shephard1997likelihood}. In order to exploit sparsity, several dynamic compressed sensing (CS) techniques, such as the Kalman filtered CS algorithm, have been proposed which typically assume partial information about the sparse support or estimate it in a greedy and online fashion \cite{vaswani2010ls, vaswani2008kalman, carmi2010methods, ziniel2013dynamic, zhan2015time}. However, little is known about the theoretical performance guarantees of these algorithms.
In this paper, we consider the problem of estimating state dynamics from noisy observations, where the state transitions are governed by autoregressive models with compressible innovations. Motivated by the theory of CS, we employ an objective function formed by the $\ell_1$-norm of the state innovations \cite{ba2012exact}. Unlike the traditional compressed sensing setting, the sparsity is associated with the dynamics and not the states themselves. In the absence of observation noise, the CS recovery guarantees are shown to extend to this problem \cite{ba2012exact}. However, in a realistic setting in presence of observation noise, it is unclear how the CS recovery guarantees generalize to this estimation problem.
We will present stability guarantees for this estimator under a convergent state transition matrix, which confirm that the CS recovery guarantees can be extended to this problem. The corresponding optimization problem in its Lagrangian form is akin to the MAP estimator of the states in a linear state-space model where the innovations are Laplace distributed. This allows us to integrate methods from Expectation-Maximization (EM) theory and Gaussian state-space estimation to derive efficient algorithms for the estimation of states as well as the state transition matrix, which is usually unknown in practice. To this end, we construct two nested EM algorithms in order to jointly estimate the states and the transition matrix. The outer EM algorithm for state estimation is akin to the fixed interval smoother, and the inner EM algorithm uses the state estimates to update the state transition matrix \cite{shumway1982approach}. The resulting EM algorithm is recursive in time, which makes the computational complexity of our method scale linearly with temporal dimension of the problem. This provides an advantage over existing methods based on convex optimization, which typically scale super-linearly with the temporal dimension. Finally, we provide simulation results which reveal that the sparse estimates of the compressible state-space models significantly outperform the traditional basis pursuit estimator. We further apply our estimator to two-photon imaging data for deconvolution of spikes from calcium traces, which confirms the superior performance of our estimator.
The rest of this paper is organized as follows. In Section \ref{tv:formulation}, we introduce our notation and describe the problem formulation. In Section \ref{sec:tv_theory}, we present our main theoretical result and develop a fast estimator using two nested EM algorithms. We provide simulation studies and application to two-photon imaging data in Section \ref{sec:tv_sim}, followed by concluding remarks in Section \ref{sec:tv_conc}.
\section{Notation and Problem Formulation} \label{tv:formulation}
Throughout the paper we use bold lower and upper case letters for denoting vectors and matrices, respectively. We denote the support of a vector $\mathbf{x}_t \in \mathbb{R}^p$ by $\support(\mathbf{x}_t)$ and its $j$th element by $(\mathbf{x}_{t})_j$.
We use the notation $[p]:=\{1,2,\cdots,p\}$, and $\mathbf{u}_s$ to denote the best $s$-term approximation to $\mathbf{u}$ in the $\ell_1$-sense. A vector $\mathbf{u}$ of length $p$ is called $s$-sparse (resp. $(s,\xi)$--compressible) if it has $s$ non-zero elements (resp. if $\|\mathbf{u}-\mathbf{u}_s\|_1 \sim \mathcal{O}(s^{\frac{1}{2} - \frac{1}{\xi}})$ for some $\xi \in (0,1)$). We assume the state innovations to be sparse (resp. compressible), i.e. $\mathbf{x}_t-\theta \mathbf{x}_{t-1}$ is $s_t$-sparse (resp. $(s_t,\xi)$--compressible) with $s_1 \gg s_t$ for $t \in [T]\backslash\{1\}$. In the compressive regime that we are interested in, $s_t < n_t \ll p$. For simplicity of notation, we let $\mathbf{x}_0$ be the all-zero vector in $\mathbb{R}^p$. For an arbitrary set $\mathcal{M} \subset [p]$, $(\mathbf{x}_{t})_{\mathcal{M}}$ denotes the vector $\mathbf{x}_t$ restricted to $\mathcal{M}$, i.e. the vector with all components outside $\mathcal{M}$ set to zero. Given a sparsity level $s$ and a vector $\mathbf{x}$, we denote the set of its $s$ largest magnitude entries by $S$, and its best $s$-term approximation error by $\sigma_s(\mathbf{x}) := \|\mathbf{x}-\mathbf{x}_s\|_1$.
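As a concrete illustration of this notation, the best $s$-term approximation error $\sigma_s(\mathbf{u})$ keeps the $s$ largest-magnitude entries and measures the $\ell_1$-norm of what is discarded (the example vector below is arbitrary):

```python
import numpy as np

def best_s_term_error(u, s):
    # sigma_s(u) = ||u - u_s||_1, where u_s keeps the s largest-magnitude
    # entries of u and zeroes out the rest.
    keep = np.argsort(np.abs(u))[-s:]
    u_s = np.zeros_like(u)
    u_s[keep] = u[keep]
    return np.abs(u - u_s).sum()

u = np.array([5.0, -0.1, 3.0, 0.2, 0.0, -4.0])
err = best_s_term_error(u, 3)      # drops {-0.1, 0.2, 0.0}, so err ≈ 0.3
```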
We consider a linear state-space model given by
\begin{equation}
\label{eq:tv_lap_state_space}
\begin{array}{ll}
\mathbf{x}_t = \theta \mathbf{x}_{t-1}+ \mathbf{w}_t,\\
\mathbf{y}_t = \mathbf{A}_t \mathbf{x}_t + \mathbf{v}_t, &\mathbf{v}_t\sim \mathcal{N}(\mathbf{0},\sigma^2 I)
\end{array},
\end{equation}
where $(\mathbf{x}_t)_{t=1}^{T} \in \mathbb{R}^p$ denote the states to be estimated, $\theta$ is the state transition parameter satisfying $|\theta|<1$, $\mathbf{w}_t \in \mathbb{R}^p$ is the innovation sequence, $(\mathbf{y}_t)_{t=1}^T \in \mathbb{R}^{n_t}$ are the linear observations, $\mathbf{A}_t \in \mathbb{R}^{n_t \times p}$ denotes the measurement matrix, and $\mathbf{v}_t \in \mathbb{R}^{n_t}$ denotes the Gaussian measurement noise with known covariance matrix $\sigma^2 \mathbf{I}$. We assume that the innovation $\mathbf{w}_t$ is $(s_t, \xi)$-compressible, and we call this model a \emph{compressible} state-space model to highlight this fact.
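A minimal simulation of the model in (\ref{eq:tv_lap_state_space}) with exactly $s$-sparse innovations can be sketched as follows; all dimensions, the noise level, and the Gaussian choice of the measurement matrix are illustrative assumptions of our own, not prescriptions from the model:

```python
import numpy as np

rng = np.random.default_rng(1)
p, T, theta, s, sigma, n = 200, 50, 0.9, 5, 0.05, 60   # illustrative sizes

# A single Gaussian sensing matrix reused at every t for simplicity
# (the paper takes the rows of A_t as a subset of the rows of A_1).
A = rng.standard_normal((n, p)) / np.sqrt(n)

x = np.zeros(p)
xs, ys = [], []
for t in range(T):
    w = np.zeros(p)
    support = rng.choice(p, size=s, replace=False)
    w[support] = rng.standard_normal(s)        # exactly s-sparse innovation
    x = theta * x + w                          # state update
    xs.append(x.copy())
    ys.append(A @ x + sigma * rng.standard_normal(n))  # noisy observation
```

Note that the states $\mathbf{x}_t$ themselves are dense; the sparsity lives in the innovations $\mathbf{x}_t - \theta\mathbf{x}_{t-1}$.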
For a matrix $\mathbf{A}$, we denote restriction of $\mathbf{A}$ to its first $n$ rows by $(\mathbf{A})_n$ . We say that the matrix $\mathbf{A} \in \mathbb{R}^{n \times p}$ satisfies the restricted isometry property (RIP) of order $s$, if for all $s$-sparse $\mathbf{x}\in \mathbb{R}^p$, we have
\begin{equation}
\label{eq:tv_rip}
(1-\delta_s) \|\mathbf{x}\|_2^2 \leq \|\mathbf{A}\mathbf{x}\|_2^2 \leq (1+\delta_s)\|\mathbf{x}\|_2^2,
\end{equation}
where $\delta_s \in (0,1)$ is the smallest constant for which Eq. (\ref{eq:tv_rip}) holds \cite{candes2008introduction}. In order to avoid prohibitive storage, we assume that the rows of $\mathbf{A}_t$ are a subset of the rows of $\mathbf{A}_1$, i.e. $\mathbf{A}_t = (\mathbf{A}_{1})_{n_t}$, and define $\tilde{\mathbf{A}}_t = \sqrt{\frac{n_1}{n_t}}\mathbf{A}_t$. In order to promote sparsity of the state dynamics, we consider the dynamic $\ell_1$-regularization (dynamic CS from now on) problem given by
\begin{equation}
\label{eq:tv_prob_def_primal}
\minimize_{\mathbf{x}_1,\mathbf{x}_2,\cdots,\mathbf{x}_T, \theta} \quad \sum_{t=1}^T \frac{\|\mathbf{x}_t-\theta \mathbf{x}_{t-1}\|_1}{\sqrt{s_t}} \;\;\st \;\; \|\mathbf{y}_t-\mathbf{A}_t\mathbf{x}_t\|_2 \leq \sqrt{\frac{n_t}{n_1}}\epsilon.
\end{equation}
Note that this is a variant of the model used in \cite{ba2012exact}. We also consider the (modified) dual form of (\ref{eq:tv_prob_def_primal}) given by
\begin{equation}
\label{eq:tv_prob_def_dual}
\minimize_{\mathbf{x}_1,\mathbf{x}_2,\cdots,\mathbf{x}_T, \theta} \quad \lambda \sum_{t=1}^T \frac{\|\mathbf{x}_t-\theta \mathbf{x}_{t-1}\|_1}{\sqrt{s_t}} + \frac{1}{n_t}\frac{\|\mathbf{y}_t-\mathbf{A}_t\mathbf{x}_t\|_2^2}{2\sigma^2}.
\end{equation}
Note that Eq. (\ref{eq:tv_prob_def_dual}) is equivalent to the MAP estimator of the states in (\ref{eq:tv_lap_state_space}) if the innovations are given by i.i.d Laplace random variables with parameter $\lambda$. We will next describe the main theoretical results of our paper for stable recovery of the dynamic CS problem.
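For concreteness, the cost in (\ref{eq:tv_prob_def_dual}) can be evaluated directly at a candidate state sequence; the helper below is our own illustration, using the convention $\mathbf{x}_0 = \mathbf{0}$ stated above:

```python
import numpy as np

def dynamic_cs_objective(xs, ys, A_list, theta, lam, sigma, s_list):
    # Value of the dynamic CS cost in Eq. (4) at a candidate state sequence,
    # with the convention x_0 = 0.  (Our own helper, for illustration.)
    x_prev = np.zeros_like(xs[0])
    val = 0.0
    for x, y, A, s in zip(xs, ys, A_list, s_list):
        n_t = A.shape[0]
        val += lam * np.abs(x - theta * x_prev).sum() / np.sqrt(s)
        val += np.linalg.norm(y - A @ x) ** 2 / (2.0 * sigma ** 2 * n_t)
        x_prev = x
    return val
```

In the noiseless case with $\mathbf{y}_t = \mathbf{A}_t\mathbf{x}_t$, the data term vanishes and the cost reduces to the weighted $\ell_1$-norm of the innovations.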
\section{Theoretical Results and Algorithm Development} \label{sec:tv_theory}
In this section, we state the main theoretical result of our paper regarding the stability of the estimator in \ref{eq:tv_prob_def_dual} and use the EM theory and state-space estimation, to obtain a fast solution to (\ref{eq:tv_prob_def_dual}), which jointly estimates the states as well as their transitions.
\subsection{Stability Guarantees}
Uniqueness and exact recovery of the sequence $(\mathbf{x}_t)_{t=1}^T$ in the absence of noise was proved in \cite{ba2012exact} for $\theta =1$, by an inductive construction of dual certificates. Our main result on stability of the solution of \ref{eq:tv_prob_def_primal} is given in the following Theorem
\begin{thm}[Stable Recovery in the Presence of Noise]
\label{thm:tv_main}
Let $(\mathbf{x}_t)_{t=1}^T \in \mathbb{R}^p$ be a sequence of states such that $\mathbf{A}_1$ and $\tilde{\mathbf{A}}_t$, $t\geq 2$, satisfy the RIP of order $4s$ with $\delta_{4s} <1/3$. Then, for fixed known $\theta$, any solution $(\widehat{\mathbf{x}}_t)_{t=1}^{T}$ to (\ref{eq:tv_prob_def_primal}) satisfies
\begin{equation*}
\label{eq:tv_main_stable}
\resizebox{\columnwidth}{!}{$\displaystyle \frac{1}{T} \sum_{t=1}^T \|\mathbf{x}_t-\widehat{\mathbf{x}}_t\|_2 \le\!\frac{1-\theta^T}{1-\theta}\!\! \left(12.6 \left(1+\frac{\sqrt{n_1}-\sqrt{n_2}}{T\sqrt{n_2}}\right) \epsilon + \frac{3}{T}\sum_{t=1}^T \frac{ \sigma_{s_t}(\mathbf{x}_t-\theta \mathbf{x}_{t-1})}{\sqrt{s_t}}\right)$}.
\end{equation*}
\end{thm}
\noindent \textbf{Remarks:} The first term on the right-hand side of the bound in Theorem \ref{thm:tv_main} shows that the average reconstruction error of the sequence $(\mathbf{x}_t)_{t=1}^T$ is upper bounded in proportion to the noise level $\epsilon$, which implies the stability of the estimate. The second term is a measure of the compressibility of the innovation sequence and vanishes when the sparsity condition is exactly met.
\noindent \textit{\textbf{Proof Sketch.}} The proof of Theorem \ref{thm:tv_main} is based on establishing a modified cone and tube constraint for the dynamic CS problem and using the boundedness of the Frobenius operator norm of the inverse differencing operator. A full proof can be found in \cite{kazemipour_tv_paper}.
\subsection{Fast Iterative Solution via the EM Algorithm}
In order to obtain a fast solution, we use two nested instances of the EM algorithm. The full details of the algorithm development are given in \cite{kazemipour_tv_paper}, of which we will present a summary in this paper. The outer EM algorithm is also known as the Iteratively Re-weighted Least Squares (IRLS) method \cite{babadi_IRLS}, which aims at estimating the solution to (\ref{eq:tv_prob_def_dual}) in a recursive fashion and is described by iteratively alternating between the following two steps:
\noindent \textbf{Outer E-Step:} In the $(l+1)$-st iteration, given the observed values $(\mathbf{y}_t)_{t=1}^T$, an estimate $(\mathbf{x}_t^{(l)})_{t=1}^T,\theta^{(l)}$, and a small threshold $\epsilon$, the EM algorithm finds the solution to (\ref{eq:tv_prob_def_dual}) as a recursion of
\begin{align}
\label{eq:tv_prob_def_dual_irls}
\minimize_{\mathbf{x}_1,\mathbf{x}_2,\cdots,\mathbf{x}_T, \theta} \quad \frac{\lambda}{2} & \sum_{j=1}^p\sum_{t=1}^T \frac{\left( (\mathbf{x}_{t})_j-\theta (\mathbf{x}_{t-1})_j\right)^2
+\epsilon^2}{\sqrt{s_t}\sqrt{\left( (\mathbf{x}_{t}^{(l)})_j-{\theta}^{(l)} (\mathbf{x}_{t-1}^{(l)})_j\right)^2+\epsilon^2}}\\
\notag &+ \sum_{t=1}^T \frac{1}{n_t}\frac{\|\mathbf{y}_t-\mathbf{A}_t\mathbf{x}_t\|_2^2}{2\sigma^2}.
\end{align}
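The outer E-step thus amounts to recomputing per-component weights from the previous iterate, which turns the square-root penalties of the original objective into quadratic ones. A minimal sketch of this weight update (the function name, array layout, and the convention $\mathbf{x}_0=\mathbf{0}$ are our own illustration, not taken from the paper):

```python
import numpy as np

def irls_weights(X, theta, s, eps=1e-10):
    """One outer E-step: recompute the IRLS weights from the current
    state estimates.  X has shape (T, p); row t holds x_t (x_0 = 0).
    s[t] is the sparsity level s_t.  The weight for component j at
    time t is 1 / (sqrt(s_t) * sqrt((x_t - theta*x_{t-1})_j^2 + eps^2)),
    which makes the reweighted penalty quadratic in the next iterate.
    """
    T, p = X.shape
    X_prev = np.vstack([np.zeros(p), X[:-1]])   # x_{t-1}, with x_0 = 0
    innov = X - theta * X_prev                  # current innovation estimates
    return 1.0 / (np.sqrt(s)[:, None] * np.sqrt(innov**2 + eps**2))
```

Components with small current innovations receive large weights and are pushed further toward zero, which is the usual IRLS mechanism for promoting sparsity of the innovations.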
\textbf{Outer M-Step:}
Given an estimate $(\mathbf{x}_t^{(l)})_{t=1}^T,\theta^{(l)}$, the maximization step of (\ref{eq:tv_prob_def_dual_irls}) involves another instance of the EM algorithm, indexed by $m$, as follows:
\noindent \textbf{Inner E-Step:} The main difference between (\ref{eq:tv_prob_def_dual_irls}) and (\ref{eq:tv_prob_def_dual}) is the quadratic form of (\ref{eq:tv_prob_def_dual_irls}). Given an update $\theta^{(l,m)}$, equation (\ref{eq:tv_prob_def_dual_irls}) can be thought of as the MAP solution to the Gaussian state-space model given by
\begin{equation}
\label{eq:tv_dynamic}
\begin{array}{l}
\mathbf{x}_t = \theta^{(l,m)} \mathbf{x}_{t-1}+ \mathbf{w}_t, \\
\mathbf{w}_t\sim \mathcal{N}\left(\mathbf{0},{\sf diag}\left\{\frac{ {\sqrt{s_t}\sqrt{\left( (\mathbf{x}_{t}^{(l)})_j-{\theta}^{(l)} (\mathbf{x}_{t-1}^{(l)})_j\right)^2+\epsilon^2}}} {\lambda}\right\}_{j=1}^p \right),\\
\mathbf{y}_t = \mathbf{A}_t \mathbf{x}_t + \mathbf{v}_t, \quad \mathbf{v}_t\sim \mathcal{N}(\mathbf{0},n_t\sigma^2 I).
\end{array}
\end{equation}
The inner E-step involves calculation of
\begin{equation}
\label{eq:tv_inner_E}
\mathbb{E}\left\{ \log p \left((\mathbf{y}_t)_{t=1}^T,(\mathbf{x}_t^{(l,m+1)})_{t=1}^T|\theta \right)\Big|(\mathbf{y}_t)_{t=1}^T,\theta^{(l,m)} \right\},
\end{equation}
which can be done by a fixed interval smoother as a fast solution to (\ref{eq:tv_dynamic}). We denote the outputs of the smoother in the IRLS algorithm by
\begin{equation*}
\mathbf{x}^{(l, m+1)}_{{t|T}} = {\mathbb{E}}\left \{\mathbf{x}_t\Big|(\mathbf{y}_t)_{t=1}^T, {\theta}^{(l,m)}\right \},
\end{equation*}
\begin{equation*}\mathbf{\Sigma}^{(l,m+1)}_{t|T} = \operatorname{Cov}\left \{ \mathbf{x}_t\Big|(\mathbf{y}_t)_{t=1}^T, {\theta}^{(l,m)}\right \},
\end{equation*}
and
\begin{equation*}
\mathbf{\Sigma}^{(l,m+1)}_{t-1,t|T}=\mathbf{\Sigma}^{(l,m+1)}_{t,t-1|T}=\operatorname{Cov}\left \{\mathbf{x}_{t-1}, \mathbf{x}_t\Big|(\mathbf{y}_t)_{t=1}^T,{\theta}^{(l,m)}\right \}.
\end{equation*}
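These smoothed moments can be produced by a forward Kalman filter followed by a backward Rauch--Tung--Striebel pass. The following sketch assumes that variant of the fixed-interval smoother (the paper does not specify which one is used) together with a diffuse prior on $\mathbf{x}_1$:

```python
import numpy as np

def kalman_rts_smoother(Y, A, theta, Q, R):
    """Fixed-interval smoother for the state-space model above:
        x_t = theta * x_{t-1} + w_t,   w_t ~ N(0, diag(Q[t]))
        y_t = A[t] x_t + v_t,          v_t ~ N(0, R[t] * I).
    Forward Kalman filter + Rauch-Tung-Striebel backward pass.
    Returns the smoothed means x_{t|T} and covariances Sigma_{t|T}.
    """
    T, p = len(Y), A[0].shape[1]
    m_pred, P_pred, m_filt, P_filt = [], [], [], []
    m, P = np.zeros(p), 1e3 * np.eye(p)          # diffuse prior on x_1
    for t in range(T):
        if t > 0:                                # prediction step
            m = theta * m
            P = theta**2 * P + np.diag(Q[t])
        m_pred.append(m); P_pred.append(P)
        S = A[t] @ P @ A[t].T + R[t] * np.eye(A[t].shape[0])
        K = P @ A[t].T @ np.linalg.inv(S)        # Kalman gain
        m = m + K @ (Y[t] - A[t] @ m)            # measurement update
        P = P - K @ A[t] @ P
        m_filt.append(m); P_filt.append(P)
    ms, Ps = [None] * T, [None] * T              # backward RTS pass
    ms[-1], Ps[-1] = m_filt[-1], P_filt[-1]
    for t in range(T - 2, -1, -1):
        G = theta * P_filt[t] @ np.linalg.inv(P_pred[t + 1])
        ms[t] = m_filt[t] + G @ (ms[t + 1] - m_pred[t + 1])
        Ps[t] = P_filt[t] + G @ (Ps[t + 1] - P_pred[t + 1]) @ G.T
    return ms, Ps
```

The cross-covariances $\mathbf{\Sigma}_{t-1,t|T}$ needed by the M-step follow from the same smoother gains $G$; they are omitted here for brevity.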
The new estimate $(\mathbf{x}_t^{(l,m+1)})_{t=1}^T$ is then given by the output of the smoother, that is,
\begin{equation}
\mathbf{x}_t^{(l,m+1)} \leftarrow \mathbf{x}^{(l,m+1)}_{{t|T}}.
\end{equation}
The first and second moments in (\ref{eq:tv_prob_def_dual_irls}) are also replaced using these estimates.
\noindent \textbf{Inner M-Step:} The inner M-step involves maximizing the estimated expectation in (\ref{eq:tv_inner_E}) with respect to $\theta$. Given the observed values $(\mathbf{y}_t)_{t=1}^T$, and an estimate of the unobserved values $(\mathbf{x}_t^{(l,m+1)})_{t=1}^T$ and $\theta^{(l,m)}$, the update is given by \cite{kazemipour_tv_paper}:
\begin{equation}
\label{eq:tv_theta_upd}
\theta^{(l, m+1)} = \frac{\sum \limits_{t,j=1}^{T,p} { \frac{ (\mathbf{x}_{t-1|T})_j (\mathbf{x}_{t|T})_j+ \left(\mathbf{\Sigma}_{t-1,t|T}^{(l,m+1)}\right)_{(j,j)}}{\sqrt{s_t}\sqrt{\left( (\mathbf{x}_{t}^{(l)})_j-{\theta}^{(l)} (\mathbf{x}_{t-1}^{(l)})_j\right)^2+\epsilon^2}}}}{\sum \limits_{t,j=1}^{T,p} { \frac{ (\mathbf{x}_{t-1|T})_j^2 + \left(\mathbf{\Sigma}_{t-1|T}^{(l,m+1)}\right)_{(j,j)}}{\sqrt{s_t}\sqrt{\left( (\mathbf{x}_{t}^{(l)})_j-{\theta}^{(l)} (\mathbf{x}_{t-1}^{(l)})_j\right)^2+\epsilon^2}}}}.
\end{equation}
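The update in (\ref{eq:tv_theta_upd}) is a ratio of weighted lagged moments and can be evaluated directly from the smoother outputs. A sketch with hypothetical variable names (we pass the diagonals of the smoothed covariances and the IRLS weights explicitly):

```python
import numpy as np

def theta_update(m_s, Sigma_cross, Sigma_lag, W):
    """Inner M-step: closed-form update of the transition parameter.
    m_s[t]         : smoothed mean x_{t|T}                (shape (p,))
    Sigma_cross[t] : diagonal of Sigma_{t-1,t|T}          (shape (p,))
    Sigma_lag[t]   : diagonal of Sigma_{t-1|T}            (shape (p,))
    W[t]           : IRLS weights 1/(sqrt(s_t) sqrt(innov^2 + eps^2))
    Returns the ratio of weighted first/second lagged moments, as in
    the displayed update rule.
    """
    num, den = 0.0, 0.0
    for t in range(1, len(m_s)):
        num += np.sum(W[t] * (m_s[t - 1] * m_s[t] + Sigma_cross[t]))
        den += np.sum(W[t] * (m_s[t - 1] ** 2 + Sigma_lag[t]))
    return num / den
```

In the noiseless limit (vanishing smoothed covariances) this reduces to a weighted least-squares fit of $\mathbf{x}_t \approx \theta\, \mathbf{x}_{t-1}$, which recovers the true transition parameter exactly.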
This process is repeated for $M$ iterations of the inner loop and $L$ iterations of the outer loop, until a convergence criterion is met. We then have:
\begin{equation*}
{\theta}^{(l+1)} \leftarrow {\theta}^{(l,M)}, \quad ({\mathbf{x}}_t^{(l+1)})_{t=1}^{T} \leftarrow ({\mathbf{x}}_t^{(l,M)})_{t=1}^{T}.
\end{equation*}
\begin{equation*}
\widehat{\theta} \leftarrow {\theta}^{(L)}, \quad (\widehat{\mathbf{x}}_t)_{t=1}^{T} \leftarrow ({\mathbf{x}}_t^{(L)})_{t=1}^{T}.
\end{equation*}
\section{Simulations and Application to Calcium Imaging Data}
\label{sec:tv_sim}
\subsection{Application to Simulated Data}
In this section, we apply the dynamic CS algorithm to simulated data and compare its performance with basis pursuit \cite{chen2001atomic}. We used $p = 200, T = 200, s_1 =8,s_2 = 4,\epsilon = 10^{-10}$, and $\theta = 0.95$. We chose $\frac{s}{n} = \frac{\sum_{t=1}^Ts_t}{\sum_{t=1}^Tn_t}=\frac{s_t}{n_t}$, which is justified by the choice of $n_t = C s_t \log p$ for satisfying the RIP. The theory of the LASSO, and of M-estimators in general \cite{Negahban}, suggests that a good choice for $\lambda$ is given by ${\lambda} \geq 2\sqrt{2} \sigma \sqrt{\frac{s\log p}{n}}$. We tuned the choice of $\lambda$ around its theoretical value by cross-validation. Moreover, we estimated the innovation sequence (spikes) from $\widehat{\mathbf{x}}_t-\widehat{\theta} \widehat{\mathbf{x}}_{t-1}$ by thresholding. The thresholding level was chosen using the $90\%$ confidence bounds, such that for the resulting spikes the lower confidence bound of each peak is higher than the upper confidence bound of the preceding trough.
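The theoretical scaling of the regularization parameter can be computed directly; a minimal sketch (the function name is ours, and this value is only the center of the cross-validation grid described in the text):

```python
import numpy as np

def lambda_theoretical(sigma, s, n, p):
    """Theoretical choice lambda >= 2*sqrt(2)*sigma*sqrt(s*log(p)/n)
    from M-estimation theory, used as the starting point for the
    cross-validated tuning of lambda."""
    return 2.0 * np.sqrt(2.0) * sigma * np.sqrt(s * np.log(p) / n)
```

For instance, with $\sigma=1$, $s=4$, $n=100$, and $p=200$ this gives $\lambda \approx 1.30$.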
Figure \ref{fig:tv_2ksamples} shows $800$ samples of $(\mathbf{x}_t)_1$ (black trace) and its denoised version (red trace) for an SNR value of $5~\text{dB}$. The denoised signal tracks the jumps in the state sequence while significantly denoising the trace. Figure \ref{fig:tv_raster_simulated} shows the simulated and estimated states across time. The estimated states are significantly denoised while the sparsity structure is preserved.
\begin{figure}[htb!]
\vspace{-2mm}
\centering
{\includegraphics[width=70mm]{tv_sample_2k_paper}}
\caption{Performance of Dynamic CS on simulated data.}
\label{fig:tv_2ksamples}
\vspace{-2mm}
\end{figure}
\begin{figure}[htb!]
\centering
{\includegraphics[width=85mm]{imagesc_sim}}
\caption{Performance of Dynamic CS: noisy (left) vs. denoised (right) data}
\label{fig:tv_raster_simulated}
\vspace{-4mm}
\end{figure}
Figures \ref{fig:tv_ds} and \ref{fig:tv_ds_spikes} show the denoised traces and the detected spikes, respectively, for varying compression levels of $1-n/p = 0, 0.25, 0.5,$ and $0.75$. As the compression level increases, the performance of the algorithm degrades; strikingly, however, the significant spikes can still be detected at a compression level of $0.75$.
\begin{figure}[htb!]
\centering
\subfigure[State Dynamics]{\label{fig:tv_ds}\includegraphics[width=40mm]{tv_ds_paper}}
\subfigure[Ground-Truth Spikes]{\label{fig:tv_ds_spikes}\includegraphics[width=41.5mm]{tv_ds_spikes_paper}}
\caption{Reconstructed states (left) and spikes (right) using Dynamic CS with varying compression level.}
\vspace{-4mm}
\end{figure}
\begin{figure*}[htb!]
\vspace{-3mm}
\centering
\subfigure[Raw Calcium Imaging Data (Noisy)]{\label{fig:tv_cal_true}\includegraphics[width=43.5mm]{tv_4neuron_true_paper}}
\subfigure[Reconstructed States After Denoising]{\label{fig:tv_cal_denoising}\includegraphics[width=43.75mm]{tv_4neuron_denoising_paper}}
\subfigure[Reconstructed States After Denoising and Compression $n/p = 2/3$]{\label{fig:tv_cal_comp}\includegraphics[width=43.75mm]{tv_4neuron_comp_paper}}
\caption{Performance of Dynamic CS on calcium imaging data}
\label{fig:tv_cal}
\vspace{-4mm}
\end{figure*}
\begin{figure*}[htb!]
\centering
\subfigure[Estimated spikes from constrained f-oopsi algorithm]{\label{fig:tv_spikes_foopsi}\includegraphics[width=43.2mm]{tv_4neuron_sp_foopsi_paper}}
\subfigure[Estimated spikes from Dynamic CS after denoising]{\label{fig:tv_spikes_dcs}\includegraphics[width=44mm]{tv_4neuron_sp_est_denoising_paper}}
\subfigure[Estimated spikes from Dynamic CS after denoising and compression $n/p = 2/3$]{\label{fig:tv_spikes_dcs_comp}\includegraphics[width=44mm]{tv_4neuron_sp_est_comp_paper}}
\caption{Reconstructed spikes of Dynamic CS from calcium imaging data}
\label{fig:tv_cal_spikes}
\vspace{-4mm}
\end{figure*}
\subsection{Application to Calcium Imaging Data of Neural Spiking Activities}
In this section, we apply the dynamic CS algorithm to real data recordings of calcium traces of neuronal activity. Calcium imaging takes advantage of intracellular calcium flux to directly visualize calcium signaling in living neurons. This is done by using calcium indicators, fluorescent molecules that respond to the binding of calcium ions by changing their fluorescence properties, together with a fluorescence microscope and a CCD camera to record the resulting visual patterns \cite{smetters1999detecting,stosiek2003vivo}. The data were recorded from $219$ neurons at a rate of $30$ frames per second for a total time of $22$ minutes from the mouse auditory cortex using a two-photon microscope. We chose $T=2000$ samples, corresponding to $1$ minute, for analysis. In order to suppress neuropil effects, the data were spatially filtered. We chose $p=108$ spatially separated neurons by visual inspection. We estimated the measurement noise variance from an inactive period of spiking activity to be $\sigma^2 = 10^{-5}$ and used a value of $\epsilon = 10^{-10}$. Figure \ref{fig:tv_cal} shows the denoised states for four sample neurons with $90\%$ confidence bounds. The output is significantly denoised while preserving the dynamics of the data.
Figure \ref{fig:tv_cal_spikes} shows the reconstruction of the spikes in comparison to the constrained f-oopsi algorithm \cite{vogelstein2010fast}, which assumes an inhomogeneous Poisson model for spiking with an exponential approximation. As with the simulated data, the thresholding level was chosen using the confidence bounds. Note that the performance of our algorithm remains largely the same when only $2/3$ of the observations are used. Figure \ref{tv_raster_cal} shows the corresponding raster plot of the detected spikes. By comparing the performance of f-oopsi to that of our algorithm, two observations can be made. First, the f-oopsi algorithm outputs a large number of small spikes, whereas our algorithm rejects them. Second, the detected events of f-oopsi are in the form of spike clusters, whereas our algorithm outputs well-separated spikes. This difference in performance is due to the fact that we explicitly model the sparse nature of the spiking activity by going beyond the Gaussian state-space modeling paradigm. In contrast, the constrained f-oopsi algorithm assumes an exponential approximation with a log-barrier to a Poisson model of spiking activities, which results in losing the temporal resolution of the jumps.
In addition, we are able to form precise confidence bounds for our estimates, whereas the f-oopsi algorithm does not produce statistical confidence bounds. Our thresholding method is based on these confidence bounds which results in a systematic detection and rejection of the spikes.
\begin{figure}[htb!]
\vspace{-3mm}
\centering
{\includegraphics[width=85mm]{tv_raster_denoising_paper}}
\caption{Raster plot of the estimated spikes.}
\label{tv_raster_cal}
\vspace{-6mm}
\end{figure}
\section{Conclusions}
\label{sec:tv_conc}
In this paper, we considered compressible state-space models, where the state innovations are modeled by a sequence of compressible vectors. The traditional results of CS theory do not readily generalize to this setting, where the sparsity lies in the dynamics and not in the state itself, as the overall linear measurement operator does not satisfy regularity conditions such as the RIP \cite{ba2012exact}. We showed that the guarantees of CS can indeed be extended to the state estimation problem. Hence, using the state-space model, one can infer temporally global information from local measurements.
We also developed a scalable, low-complexity algorithm using two nested EM algorithms for the estimation of the states as well as the transition parameter. We further verified the validity of our theoretical results through simulation studies as well as an application to real data recordings of calcium traces of neuronal activity. In addition to its scalability and ability to track rapid dynamics in the states, and in contrast to the widely used spike deconvolution algorithm f-oopsi, our algorithm provides a systematic way to detect spike events by forming statistical confidence intervals for the state estimates. Our results suggest the possibility of using compressive measurements for the reconstruction and denoising of calcium traces, which, from a practical point of view, can allow faster data acquisition by undersampling the field of view. We consider joint spike sorting and deconvolution as future work.
\section{Acknowledgments}
This work was supported in part by the National Institutes of Health Award No. R01DC009607 and the National Science Foundation Award No. 1552946.
{
\small
\bibliographystyle{IEEEtran}
\section{Introduction}
The ongoing trend of applying \acp{NN} to signal processing tasks for communication systems has led to the demonstration of substantial improvements when compared to conventional systems for a wide range of applications \cite{honkala2020deeprx,samuel2017deep,li2018power}.
Especially when focusing on recent results of \ac{NN}-based \ac{OFDM} receivers \cite{honkala2020deeprx, aoudia2020end, Fischer_2021}, where implementations showed comparable, or sometimes even better performance than conventional state-of-the-art baselines%
, there is reason to believe that \ac{NN}-based components will play a significant role in future beyond 5G systems \cite{Toward6G_Hoydis_2021}.
Based on the assumption that trainable components will be present in future receivers, we want to discuss the opportunity of online retraining during operation to further adapt to current channel conditions.
Conventionally, receiver algorithms are designed offline, where they are optimized for best performance on comprehensive channel models, focusing on universal optimal performance.
At the same time, these channel models are optimized to mimic the expected average behavior of the real-world channel as accurately as possible.
This also holds for \ac{NN}-based receivers, which are typically trained offline on a data-set representing an ensemble of channel realizations generated by the same underlying channel model.
Training \ac{NN}-based receivers could also be done using measured data, but this entails several difficulties as the measurements must cover a wide range of different channel conditions to enable the NN to generalize to the task, and are therefore expensive.
Thus, initially training \ac{NN}-based receivers on generated data is advantageous for generalization due to the randomness introduced by stochastic channel models.
This has been done in \cite{aoudia2020end, Fischer_2021} and results in similar or even superior performance compared to conventional \ac{LMMSE}-based systems, when also evaluated on the same stochastic channel models.
\begin{figure}[t]
\begin{center}
\input{fig/channel_ensemble_v2.tikz}
\end{center}
\vspace{-2mm}
\caption{Visualization of sub-ensembles representing various channel conditions within a universal training data-set.}
\label{fig:channel_ensemble}
\vspace{-4mm}
\end{figure}
However, in an actual real-world system and within a short period of time, only a subset of these universal channel conditions occurs.
The receiver rather observes sub-ensembles of conditions, sketched schematically in Fig.~\ref{fig:channel_ensemble}, depending on the area of current operation (rural, urban, city) or situation (velocity, interference).
As these \emph{macro} conditions only change slowly, compared to signal processing from the receiver's point of view, we want to investigate the impact of retraining the initially universally optimized receiver for the actual channel conditions.
From a deep learning perspective, this approach can be seen as deliberate overfitting, since we propose to retrain the receiver with only the latest data available.
In the following, we show by using the example of \ac{NN}-based \ac{OFDM} receivers, that re-optimizing to the current channel conditions leads to gains compared to the universally optimized system in corner cases and demonstrate that retrained receivers can also adapt to initially unseen channel conditions and channel alterations like interference.
The paper is structured as follows: Sec.~\ref{sec:system_setup} introduces the channel model and \ac{OFDM} system.
In Sec.~\ref{sec:RNN} details on the applied \ac{RNN}-based \ac{OFDM} receiver and the adaptive retraining process are given.
Finally, Sec.~\ref{sec:results} presents simulation results and Sec.~\ref{sec:conclusion} concludes the main findings.
\section{System Setup}
\label{sec:system_setup}
The ideal channel data to showcase the advantages of online retraining would be temporally continuous ``in-the-field'' measurements of \ac{CSI} for \ac{UE} trajectories covering various different channel conditions.
An equally potent alternative to measured data could be ray-tracing-based \ac{CSI}, simulated for \ac{UE} trajectories within large spatially consistent areas.
Unfortunately, to the best of our knowledge, no data source satisfying these requirements is currently available.
This is why we rely on a modified Jakes/Clarke-type time-varying and frequency-selective stochastic channel model for our simulations.
By carefully manipulating the stochastic model's parameters, e.g., maximum channel delay, \ac{PDP} or \ac{UE} velocity, we can generate stochastic sub-ensembles of channel realizations representing the different channel conditions, as simplistically visualized in Fig.~\ref{fig:channel_ensemble}.
\subsection{Channel Model and OFDM System}
We consider a tapped-delay line channel model with time-varying channel impulse response $h\left(t, \tau\right)$.
The time-varying channel impulse response is defined as
\begin{equation}
h\left(t, \tau\right) = \sum_{\ell=0}^{L-1} a_{\ell}\left(t\right)\delta\left(\tau - \tau_{\ell}\right)
\end{equation}
where $L$ is the number of resolvable multipath-components, i.e., taps, $a_{\ell}$ is the complex time-varying gain of the ${\ell}$th tap, $\tau_{\ell}$ is the delay of the ${\ell}$th tap\footnote{In the following it is assumed that the delay of the first tap is \unit[0]{ns} and that the delay time is equally spaced with $\nicefrac{1}{B}=\unit[100]{ns}$.} and $\delta\left(.\right)$ is the Dirac delta function.
For each channel realization, these multipath-components $a_{\ell}$ are randomly generated to hold a certain average power $p_{\ell} = \operatorname{E}\left[|a_{\ell}|^2\right]$ while their absolute value $|a_{\ell}|$ is Rayleigh distributed. %
This average power $p_{\ell}$ of the ${\ell}$th multipath component is assumed to follow an exponentially decaying \ac{PDP}.
Each channel tap is therefore weighted during its generation with the weight $b_{\ell} = \sqrt{p_{\ell}}$ computed by
\begin{equation}
\label{eq:exp_dec}
b_{\ell} = \frac{1}{\gamma}\sqrt{1-\beta}\cdot \beta^{\nicefrac{{\ell}}{2}} \in \mathbb{R}, \qquad {\ell} = 0,1,...,L-1
\end{equation}
where the factor $\gamma$ is chosen such that $\sum_{\ell}|b_{\ell}|^2=1$ and ${0<\beta<1}$ is a variable decay parameter.
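The tap-weight profile above is straightforward to generate numerically. A sketch (the helper names are ours; `beta_for_decay` additionally maps a desired first-to-last tap power decay in dB to $\beta$, using $p_{L-1}/p_0 = \beta^{L-1}$ for this profile):

```python
import numpy as np

def pdp_weights(L, beta):
    """Tap weights b_l = (1/gamma) * sqrt(1-beta) * beta^(l/2), with
    gamma chosen such that sum_l |b_l|^2 = 1 (exponential PDP)."""
    b = np.sqrt(1.0 - beta) * beta ** (np.arange(L) / 2.0)
    return b / np.linalg.norm(b)          # normalization fixes gamma

def beta_for_decay(L, decay_db=13.0):
    """beta such that 10*log10(p_{L-1}/p_0) = -decay_db, since
    p_{L-1}/p_0 = beta^(L-1) for the exponential profile."""
    return 10.0 ** (-decay_db / (10.0 * (L - 1)))
```

With $L=8$ and a $\unit[13]{dB}$ decay this reproduces the profile used in the training and evaluation setups (Tab.~\ref{Tab:Training_Parameters}).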
The Fourier transform of the channel impulse response $h\left(t, \tau\right)$ then yields the channel transfer function $H \left( t,f \right)$.
We assume that the considered \ac{OFDM} transmission system operates on frames of $n_\mathrm{T}$ consecutive \ac{OFDM} symbols with parameters given in Tab.~\ref{Tab:Scenario}.
Each \ac{OFDM} symbol consists of $N_{\mathrm{Sub}}$ symbols -- either data-carrying or pilot-carrying -- that are transmitted in parallel over the $N_\mathrm{Sub}$ subcarriers.
The transmitted information bits $\mathbf{u}$ %
are encoded and interleaved into the sequence $\mathbf{c}$ of length $n_{\mathrm{d}}\cdot m$ using an 5G NR compliant \ac{LDPC} code \cite{5G_Code_2018} of length $n=1296$ bit. Here, $n_\mathrm{d}$ denotes the number of transmitted data-carrying symbols within a frame and each data symbol carries the information of $m$ bits (e.g., $m=4$ for a 16 \ac{QAM}).
For the simulation in the frequency domain, it is assumed that a sufficiently long \ac{CP} is applied and no \ac{ISI} is present. %
Let $\mathbf{X} \in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ be the transmitted symbols.
After the removal of the \ac{CP} the received symbols $\mathbf{Y}\in\mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ are given by
\begin{equation}
\label{eq:received_symbols}
\mathbf{Y} = \mathbf{H} \circ \mathbf{X} + \mathbf{N}
\end{equation}
where $\circ$ denotes the element-wise multiplication, $\mathbf{H}\in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ is the channel matrix and $\mathbf{N}\in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ is the \ac{AWGN} matrix.
By sampling $H\left(t,f\right)$ according to the \ac{OFDM} system parameters given in Tab.~\ref{Tab:Scenario} we end up with the channel matrix $\mathbf{H}$ of the current frame.
The elements $N_{k,n}$ of the noise matrix $\mathbf{N}$ are independent and identically complex Gaussian distributed according to
$N_{k,n}\sim \mathcal{CN}\left(0, \sigma^2\right)$ where
$\sigma^2$ denotes the noise power per element.
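The element-wise frequency-domain model above is simple to simulate. A sketch (the SNR-to-noise-power conversion assumes unit-power symbols and $\operatorname{E}[|H_{k,n}|^2]=1$, which is an assumption of this illustration):

```python
import numpy as np

def ofdm_channel(X, H, snr_db, rng=None):
    """Per-subcarrier frequency-domain model Y = H o X + N, with
    i.i.d. circularly-symmetric complex Gaussian noise whose power
    sigma^2 is set from the SNR in dB (unit-power symbols assumed)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma2 = 10.0 ** (-snr_db / 10.0)
    N = np.sqrt(sigma2 / 2.0) * (rng.standard_normal(X.shape)
                                 + 1j * rng.standard_normal(X.shape))
    return H * X + N
```

At pilot positions, dividing $\mathbf{Y}$ element-wise by the known pilots then yields the \ac{LS} estimates $\hat{\mathbf{H}}_\mathrm{p,LS}$ used as receiver input.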
The task at receiver side is to equalize and demap the received symbols $\mathbf{Y}$. %
Finally, the obtained soft bit estimates are decoded by a \ac{BP} decoder. %
\subsection{Iterative LMMSE Baseline}
As a state-of-the-art baseline system, we employ a receiver based on the \ac{IEDD} principle. %
It consists of a data-aided \ac{LMMSE} channel estimator, a (soft-decision) \ac{APP} demapper and a \ac{BP} decoder that iterates and exchanges soft bit information with the estimator and the demapper.
For further details the interested reader is referred to \cite{aoudia2020end} and the references therein.
\section{Adaptive RNN-based OFDM Receiver}
\label{sec:RNN}
To demonstrate the advantages of adaptive retraining we consider a trainable \ac{RNN}-based \ac{OFDM} receiver. %
Similar to \cite{honkala2020deeprx,aoudia2020end}, it combines the tasks of channel estimation, equalization and soft-demapping within a single \ac{NN}.%
\subsection{Neural Network Structure and Training}
\begin{figure}[t]
\begin{center}
\input{fig/RNN_Non_iter_slim.tex}
\end{center}
\vspace{-2mm}
\caption{Block diagram of the \ac{RNN}-based \ac{OFDM} receiver.}
\label{fig:RNN_structures}
\vspace{-5mm}
\end{figure}
Fig.~\ref{fig:RNN_structures} provides an overview of the applied \ac{NN} model which is based on the structure that has been used in \cite{Fischer_2021} for the task of channel estimation. %
The RNN maps the received symbols $\mathbf{Y}$ to a soft bit estimation, interpreted as \acp{LLR} $\mathbf{l}_{\mathrm{RNN}}\in \mathbb{R}^{n_{\mathrm{d}}\cdot m}$. %
Besides $\mathbf{Y}$, it also takes the transmitted pilot symbols $\mathbf{X}_\mathrm{p} \in \mathbb{C}^{ n_\mathrm{T} \times N_{\mathrm{Sub}}}$%
, the \ac{LS} channel estimates $\hat{\mathbf{H}}_\mathrm{p,LS}\in \mathbb{C}^{n_{\mathrm{T}} \times N_{\mathrm{Sub}}}$ at pilot positions and the noise standard deviation $\sigma$ into account. %
The complex-valued inputs are split into their real and imaginary parts, and the noise standard deviation is broadcast over the whole frame to match the input tensor shape, so that all inputs can be stacked into one large input tensor.
Similar to \cite{Fischer_2021}, the core element of the \ac{RNN} cell are three bidirectional \ac{LSTM} layers that primarily process the input.
The first \ac{LSTM} layer operates along the input's frequency dimension.
Next, the output's frequency and time dimension are permuted causing the second \ac{LSTM} layer to operate in time dimension.
Finally, the time dimension and the frequency dimension of the second layer's output are again permuted so that the third \ac{LSTM} layer again processes along the frequency dimension of the frame.
Subsequently, %
the \ac{RNN} cell's output is reshaped and processed by two \acp{TDDL}. %
Here, every element of the two-dimensional resource grid of the frame is processed separately by these \acp{TDDL} using shared weights. %
The \ac{LSTM} cells are applied with TensorFlow's default settings using \ac{tanh} activations, the first \ac{TDDL} uses \acp{ReLU} and the second \ac{TDDL} has no activation function. %
In this work, we use 64 units within each \ac{LSTM} layer, the first \ac{TDDL} consists of 8 neurons and the second \ac{TDDL} uses $m$ neurons, i.e., the RNN outputs $m$ values for every position in the resource grid. %
After removing the output values at pilot positions, the \ac{RNN}'s reshaped output $\mathbf{l}_{\mathrm{RNN}} \in \mathbb{R}^{n_\mathrm{d}\cdot m}$ can be de-interleaved and utilized by the outer \ac{BP} decoder. %
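A minimal TensorFlow/Keras sketch of this architecture follows. The layer sizes are taken from the text; the training loop, pilot-position removal, and the exact input packing are omitted, and the seven input planes (real/imaginary parts of $\mathbf{Y}$, $\mathbf{X}_\mathrm{p}$, $\hat{\mathbf{H}}_\mathrm{p,LS}$, plus the broadcast $\sigma$) are our assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_rnn_receiver(n_T=36, n_sub=64, n_feat=7, m=4, units=64):
    """Sketch of the RNN receiver: three bidirectional LSTMs running
    along frequency, time, and frequency again (via permutes),
    followed by two dense layers applied per resource element with
    shared weights (equivalent to time-distributed dense layers)."""
    inp = layers.Input(shape=(n_T, n_sub, n_feat))
    # LSTM along frequency: the time axis is folded in via TimeDistributed
    x = layers.TimeDistributed(
        layers.Bidirectional(layers.LSTM(units, return_sequences=True)))(inp)
    x = layers.Permute((2, 1, 3))(x)     # -> (n_sub, n_T, features)
    # LSTM along time
    x = layers.TimeDistributed(
        layers.Bidirectional(layers.LSTM(units, return_sequences=True)))(x)
    x = layers.Permute((2, 1, 3))(x)     # back to (n_T, n_sub, features)
    # LSTM along frequency again
    x = layers.TimeDistributed(
        layers.Bidirectional(layers.LSTM(units, return_sequences=True)))(x)
    # per-resource-element dense layers with shared weights
    x = layers.Dense(8, activation="relu")(x)
    out = layers.Dense(m)(x)             # m LLR outputs per resource element
    return tf.keras.Model(inp, out)
```

Outputs at pilot positions are discarded afterwards, and the remaining values are de-interleaved before \ac{BP} decoding, as described above.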
Training of the described \ac{RNN} is carried out in a supervised manner utilizing \ac{SGD} and \ac{BPTT}.
During training (initial as well as re-training) the Adam optimizer \cite{Kingma2014} with a learning rate of $\eta = 0.001$ is used to minimize the \ac{BCE} loss between estimations $\mathbf{l}_{\mathrm{RNN}}$ and labels $\vec{c}$.
The \ac{RNN}-based receiver is initially trained with universal randomly generated channel realizations from the stochastic channel model for a vast range of different channel parameters.
This kind of initial training results in an universal and robust generalization and allows the \ac{RNN}-based receiver to implicitly gather knowledge of the channel only through data-driven training \cite{Fischer_2021}.
The exact parameters used for initial training are summarized in Tab.~\ref{Tab:Training_Parameters}.
\begin{table}[t]
\centering
\vspace{0.03in}
\caption{Parameters for Initial (Universal) Training}
\vspace{-1mm}
\begin{tabular}{l|l}
\toprule
Parameter & Value \\
\midrule
Epochs / It. per epoch / BS & 100 / 1000 / 128 \\
Velocity $v$& $\unitfrac[0]{km}{h}- \unitfrac[200]{km}{h}$ \\
Signal-to-noise-ratio (SNR) & $\unit[8]{dB} - \unit[30]{dB}$\\
%
Number of channel taps $L$ & Ep. 1-50: 4-10; Ep. 51-100: 1-14\\
\ac{PDP} & Exp. decaying with $10\operatorname{log_{10}}\left(\frac{p_{L-1}}{p_0}\right)$\\& $=\unit[-13]{dB}$ and equally spaced\\%Exp. decaying with the power \\&in the last resolvable path being\\ & $\unit[13]{dB}$ lower than the power of\\& the first path and equally spaced\\ %
\bottomrule
\end{tabular}
\label{Tab:Training_Parameters}
\vspace{-5.5mm}
\end{table}
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\subsection{Adaptive Retraining via On-the-fly Label Recovery}
\label{sec:retraining}
In order to allow the \ac{RNN}-based \ac{OFDM} receiver to adapt to current channel conditions, it has to be retrained periodically.
To enable a single retraining step, a data-set consisting of multiple recorded OFDM frames (holding inputs $\mathbf{Y}$, $\mathbf{X}_\mathrm{p}$, $\hat{\mathbf{H}}_\mathrm{p,LS}$ and $\sigma$) and the corresponding labels, being the originally transmitted interleaved coded bits $\mathbf{c}$, must be collected.
As the labels $\mathbf{c}$ are required for supervised training, they must either be retrieved by the transmission of pilot-based training sequences (and are thereby known at the receiver side) or via on-the-fly label recovery, as presented in \cite{schibisch2018online}.
Whereas pilot-based training sequences would cause a rate loss, the approach proposed in \cite{schibisch2018online} recovers the labels on-the-fly via the outer \ac{FEC} after the decoder has corrected the received bits.
Thus, there is no additional rate loss and these labels usually come for free as most systems rely on \acp{FEC}.
To demonstrate the feasibility of on-the-fly label recovery for the task of RNN retraining, we only use labels recovered by the LDPC code after 20 iterations of BP decoding.
The block diagram in Fig.~\ref{fig:on_the_fly_label_recovery} depicts the individual processing steps that allow retraining with recovered labels. %
Therefore, the \ac{RNN} processes the received symbols as described above and outputs an \ac{LLR} for each transmitted bit.
These \acp{LLR} $\mathbf{l}_{\mathrm{RNN}}$ are then de-interleaved and further processed by the \ac{BP} decoder. %
In normal operation, the decoder makes a final decision on the received information bits $\hat{\mathbf{u}}$ after several iterations of \ac{BP} decoding.
However, in order to build up a labeled data-set for retraining, the decoder simultaneously outputs its information on the coded bits $\hat{\mathbf{c}}$, i.e., a hard decision on the final variable nodes.
These coded bits $\hat{\mathbf{c}}$ are then interleaved to $\tilde{\mathbf{c}}$ and stored together with the corresponding inputs.
If enough tuples of inputs and labels are recovered to form a sufficiently large retraining data-set, an update step using supervised \ac{SGD} is performed, aiming to reduce the \ac{BCE} loss.
However, one drawback of the described label recovery approach is, that even after sufficient decoding, not all labels can be recovered correctly by a \ac{FEC} code.
This is why we consider a codeword's error syndrome in combination with the current \ac{SNR} to define a threshold for labels that are stored in the retraining data-set, while samples above the threshold are discarded.
Similar to the findings in \cite{schibisch2018online}, we saw improved performance after retraining even with partly erroneous labels.
However, if the number of erroneous labels exceeded a certain level, we observed a degradation after retraining.
This can be avoided by defining the threshold conservatively.%
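A sketch of such a syndrome-based acceptance rule follows. The aggregation of parity checks over a whole batch and the function name are our reading of the described procedure, not a specification from the text:

```python
import numpy as np

def accept_batch(c_hat, H_parity, snr_db, syndrome_frac=0.82, snr_min=7.0):
    """Decide whether a batch of recovered labels enters the
    retraining set: keep it only if at least `syndrome_frac` of all
    parity checks over the batch are satisfied and the SNR exceeds
    `snr_min` dB (threshold values as quoted in Sec. IV).
    c_hat:    (batch, n) recovered binary codewords
    H_parity: (n-k, n) binary parity-check matrix
    """
    if snr_db <= snr_min:
        return False
    syndromes = (c_hat @ H_parity.T) % 2     # 0 = parity check satisfied
    return bool(np.mean(syndromes == 0) >= syndrome_frac)
```

Batches failing either criterion are discarded, so label quality is traded against the amount of retraining data.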
\begin{figure}[t]
%
\begin{center}
\input{fig/On-the-fly-label-recovery.tex}
\end{center}
\vspace{-2mm}
\caption{Block diagram of the retraining process for NN-based receiver adaptation via on-the-fly label recovery \cite{schibisch2018online}.}
\label{fig:on_the_fly_label_recovery}
\vspace{-8mm}
\end{figure}
\section{Simulation Results}
\label{sec:results}
\begin{table}[t]
\centering
\vspace{0.03in}
\caption{OFDM and Channel Model Parameters}
\vspace{-1mm}
\begin{tabular}{l|l}
\toprule
Parameter & Value \\
\midrule
Number of subcarriers $N_{\mathrm{Sub}}$ & 64 \\
Frame length $n_{\mathrm{T}}$& 36 \\
Carrier frequency $f_{\mathrm{c}}$ & $\unit[5.9]{GHz}$ \\
Symbol duration including \ac{CP} $T_{\mathrm{S}}$ & $\unit[8]{\mu s}$ \\
Length of the \ac{CP} &$\unit[1.6]{\mu s}$ \\
Bandwidth $B$ & $\unit[10]{MHz}$\\
Data symbol constellation & 16 QAM, $m=4$ bit per symbol \\
Pilot structure/arrangement & Rectangular/Grid \\
Pilot symbol distance & $d_{\mathrm{T}}=15$, $d_\mathrm{F}=5$\\
\ac{PDP} & Exp. decaying with\\ &$10\operatorname{log_{10}}\left(\frac{p_{L-1}}{p_0}\right)=\unit[-13]{dB}$\\ %
LDPC code & $R_{\mathrm{C}} = \nicefrac{1}{2}$, $n = \unit[1296]{bit}$\\ %
\bottomrule
\end{tabular}
\pgfkeysvalueof{/tikz/dsp/label}}{Tab:Scenario}
\vspace{-4mm}
\end{table}
To evaluate the effects of adaptive retraining we simulate the performance of various receiver setups in three different scenarios.
For each scenario we assume certain channel conditions, simulated by channel model parameters, to be static for a short period of time.
Within this time period, which shall represent the \emph{current} channel, we gather retraining data via on-the-fly label recovery as described in Sec.~\ref{sec:retraining}, perform a retraining step of the RNN-based receiver and then evaluate the performance on the same channel conditions.
For the following simulation results, a retraining step was executed after 32 batches with 50 frames of input-label-tuples per batch were collected.
With the general simulation parameters given in Tab.~\ref{Tab:Scenario}, this translates to a label recovery time period of $\unit[0.4608]{s}$ and, thereby, sets a lower bound (neglecting time for retraining computations) for periodic retraining steps to track channel alterations.
To limit the amount of erroneous labels within a recovered retraining data-set, we empirically defined the threshold according to the codeword's error syndrome in a way that at least $82\%$ of the parity-checks of the recovered labels have to be fulfilled by a batch to be used for retraining.
In addition, a batch is only used for retraining if the \acs{SNR} $\nicefrac{E_{\mathrm{b}}}{N_0}$ is larger than $\unit[7]{dB}$, resulting in basically no retraining in the low SNR regime.\footnote{Pilot sequence-based labels are required for retraining in the low SNR regime, as recovered labels based on FEC suffer from high error rates.}
Also, each recovered batch is only used once for an SGD weight update iteration and one retraining step is performed separately for every evaluation point at different SNR.
For each scenario the performance is measured by the \ac{BER} after forward error correction (post-FEC) and the following receiver systems are analyzed:
\begin{itemize}
\item \textbf{Universal \ac{RNN}}: Non-iterative RNN-based receiver, %
initially trained with the universal parameters summarized in Tab.~\ref{Tab:Training_Parameters}, complemented by 20 iterations of BP decoding.
\item \textbf{Adapted \ac{RNN}:} Non-iterative \ac{RNN}-based receiver, initially trained with the universal parameters in Tab.~\ref{Tab:Training_Parameters}, that is adapted to the current channel via one retraining step using on-the-fly recovered labels. Also complemented by 20 iterations of BP decoding.
\item \textbf{\ac{LMMSE} \ac{IEDD}}: Conventional \ac{LMMSE} \ac{IEDD} baseline system %
utilizing an autocorrelation matrix that is matched to the channel (genie knowledge of channel model parameters). %
The \ac{BP} decoder executes 5 iterations %
before feedback is provided to estimator and demapper. %
In total $4\times 5=20$ iterations of BP decoding are executed.
\item \textbf{Perfect Knowledge IDD}: Lower limit of the achievable \ac{BER} assuming perfect knowledge of the channel and utilizing an iterative receiver, i.e., exploiting \ac{IDD}.
Here, feedback is provided to the demapper after every iteration of \ac{BP} decoding and $\vec{H}$ is known. %
In total $20\times 1 = 20$ iterations of BP decoding are executed.
\vspace{-2mm}
\end{itemize}
%
%
%
\subsection{Corner Case (Sub-Ensemble) Scenario}
\begin{figure}[t]
\begin{center}
\input{fig/BER_Retraining_8taps_0kmh_exp_dec_16QAM}
\end{center}
\vspace{-2mm}
\caption{\ac{BER} performance of the investigated receivers in the corner case scenario of no movement and thereby no channel time-variance ($v = \unitfrac[0]{km}{h}$ and moderate $L = 8$ channel taps).}
\pgfkeysvalueof{/tikz/dsp/label}}{fig:BER_retraining_8taps}
\vspace{-5mm}
\end{figure}
The first scenario investigates the impact of adaptation to corner case conditions using the example of no \ac{UE} movement.
For this purpose we set the velocity to $v = \unitfrac[0]{km}{h}$ and choose a moderate number of $L = 8$ channel taps so that the stochastic channel model generates channel realizations that form a sub-ensemble of the universal conditions used for initial training (Tab.~\ref{Tab:Training_Parameters}).
As can be seen from the results shown in Fig.~\ref{fig:BER_retraining_8taps}, the unadapted \emph{Universal RNN} already shows a better performance than the conventional \emph{LMMSE IEDD} baseline, thus, confirming the findings of \cite{aoudia2020end, Fischer_2021}.
This gain can be justified by the fact that the RNN-based receiver can additionally exploit the expected distribution of the data-carrying symbols in $\vec{Y}$.
However, by adapting the RNN receiver to the current channel conditions, the \emph{Adapted RNN} can further gain about \unit[0.1]{dB} of BER performance compared to the \emph{Universal RNN}.
Interestingly, this gain is possible although the channel conditions of this scenario were part (sub-ensemble) of the initial universal training.
We assume that retraining to current channel conditions reinforces the RNN to lift conservative assumptions, as channel realizations with high velocity are not part of the retraining data and high velocity implications are thereby not considered for weight updates.
These gains have also been observed for various other corner cases with different parameters within the range of the universal channel ensemble, but due to paper length limits we exemplary only show this corner case.
\subsection{Out-of-Specification (Extreme) Scenario}
\begin{figure}[t]
\begin{center}
\input{fig/BER_Retraining_16taps_100kmh_exp_dec_16QAM}
\end{center}
\vspace{-2mm}
\caption{\ac{BER} performance of the investigated receivers in the extremely frequency-variant (out-of-specifications) scenario of $L = 16$ channel taps at a moderate velocity of $v = \unitfrac[100]{km}{h}$.}
\pgfkeysvalueof{/tikz/dsp/label}}{fig:BER_retraining_16taps}
\vspace{-4mm}
\end{figure}
In the second scenario, we want to focus on the benefit of adaptation in case of unforeseen and extreme channel conditions.
Therefore, the results shown in Fig.~\ref{fig:BER_retraining_16taps} were obtained at highly frequency-selective channel conditions with $L = 16$ channel taps at a moderate velocity of $v=\unitfrac[100]{km}{h}$.
The simulation results show that the performance of the conventional \emph{LMMSE IEDD} baseline system degrades heavily.
This is expected as it mainly relies on pilot symbols and the used pilot position spacing in frequency dimension is not sufficient for $L = 16$ channel taps, setting this scenario out of specification.
Likewise, this scenario is also out of specification for the \emph{Universal RNN} as initial training only covers channel conditions up to $L = 14$ channel taps.
However, the performance of the \emph{Universal RNN} does also degrade compared to the \emph{Perfect Knowledge IDD} lower limit, but not as much as the \emph{LMMSE IEDD} baseline system.
This observation is also consistent with the findings of \cite{aoudia2020end, Fischer_2021} which showed, that NN-based receivers extract further knowledge about the channel from the provided data-carrying symbols and are therefore more robust against sparse pilot spacing.
But, most interestingly, the \emph{Adapted RNN} shows significantly improved performance compared to the \emph{Universal RNN}.
While there is still a large gap between the performance of the \emph{Adapted RNN} and \emph{Perfect Knowledge IDD}, these results show that adaptation can render a NN-based receiver to significantly higher operability, even in the case of a scenario that was originally out of specifications.
\subsection{Interference Scenario}
\begin{figure}[t]
\begin{center}
\input{fig/BER_Retraining_8taps_100kmh_GuardBand_Interference_6dB}
\end{center}
\vspace{-2mm}
\caption{
\ac{BER} performance of the investigated receivers in a scenario with side channel interference, modeled by additive noise of $\unit[6]{dB}$ on the outer four subcarriers, at otherwise moderate conditions with $L = 8$ channel taps and $v = \unitfrac[100]{km}{h}$.}
\pgfkeysvalueof{/tikz/dsp/label}}{fig:BER_retraining_guard_band_interference}
\vspace{-4mm}
\end{figure}
Finally, we want to showcase a scenario that highlights the flexibility of NN-based receivers and how retraining can even enable adaptation to unseen tasks.
This is shown using the example of side channel interference, which is modeled by adding noise to the outer four subcarriers, reducing their SNR by $\unit[6]{dB}$.
As can be seen from the results shown in Fig.~\ref{fig:BER_retraining_guard_band_interference}, the \emph{LMMSE IEDD} baseline as well as the \emph{Universal RNN} suffer from the added interference, but retraining the RNN-based receiver leads to a performance gain of \unit[0.42]{dB} when we compare the \emph{Adapted RNN} with the \emph{Universal RNN}.
In this case the NN-based receiver is able to cope with the new task of incorporating the disturbance on the outer four subcarriers via retraining, while a conventional system would require additional signal processing and can not simply adapt.
\section{Conclusion}
\pgfkeysvalueof{/tikz/dsp/label}}{sec:conclusion}
We have demonstrated that \ac{NN}-based receivers benefit from continuous retraining as they can adapt to current, extreme and new unforeseen channel conditions.
For such cases, adaptation leads to a superior performance when compared to static receivers that have only been designed and optimized for a universal channel model.
Finally, we want to emphasize that these gains come without any additional signaling overhead, as on-the-fly label recovery is sufficient for the retraining process.
\bibliographystyle{IEEEtran}
\section{Introduction}
The ongoing trend of applying \acp{NN} to signal processing tasks for communication systems has led to the demonstration of substantial improvements when compared to conventional systems for a wide range of applications \cite{honkala2020deeprx,samuel2017deep,li2018power}.
Especially when focusing on recent results of \ac{NN}-based \ac{OFDM} receivers \cite{honkala2020deeprx, aoudia2020end, Fischer_2021}, where implementations showed comparable, or sometimes even better performance than conventional state-of-the-art baselines%
, there is reason to believe that \ac{NN}-based components will play a significant role in future beyond 5G systems \cite{Toward6G_Hoydis_2021}.
Based on the assumption that trainable components will be present in future receivers, we want to discuss the opportunity of online retraining during operation to further adapt to current channel conditions.
Conventionally, receiver algorithms are designed offline, where they are optimized for best performance on comprehensive channel models, focusing on universal optimal performance.
At the same time, these channel models are optimized to mimic the expected average behavior of the real-world channel as accurately as possible.
This also holds for \ac{NN}-based receivers, which are typically trained offline on a data-set representing an ensemble of channel realizations generated by the same underlying channel model.
Training \ac{NN}-based receivers could also be done using measured data, but this entails several difficulties as the measurements must cover a wide range of different channel conditions to enable the NN to generalize to the task, and are therefore expensive.
Thus, initially training \ac{NN}-based receivers on generated data is advantageous for generalization due to the randomness introduced by stochastic channel models.
This has been done in \cite{aoudia2020end, Fischer_2021} and results in similar or even superior performance compared to conventional \ac{LMMSE}-based systems, when also evaluated on the same stochastic channel models.
\begin{figure}[t]
\begin{center}
\input{fig/channel_ensemble_v2.tikz}
\end{center}
\vspace{-2mm}
\caption{Visualization of sub-ensembles representing various channel conditions within a universal training data-set.}
\pgfkeysvalueof{/tikz/dsp/label}}{fig:channel_ensemble}
\vspace{-4mm}
\end{figure}
However, in an actual real-world system and within a short period of time, only a subset of these universal channel conditions occurs.
The receiver rather observes sub-ensembles of conditions, sketched schematically in Fig.~\ref{fig:channel_ensemble}, depending on the area of current operation (rural, urban, city) or situation (velocity, interference).
As these \emph{macro} conditions only change slowly, compared to signal processing from the receiver's point of view, we want to investigate the impact of retraining the initially universally optimized receiver for the actual channel conditions.
From a deep learning perspective, this approach can be seen as a deliberate overfitting, since we propose to retrain the receiver with only the latest data available.
In the following, we show by using the example of \ac{NN}-based \ac{OFDM} receivers, that re-optimizing to the current channel conditions leads to gains compared to the universally optimized system in corner cases and demonstrate that retrained receivers can also adapt to initially unseen channel conditions and channel alterations like interference.
The paper is structured as follows: Sec.~\ref{sec:system_setup} introduces the channel model and \ac{OFDM} system.
In Sec.~\ref{sec:RNN} details on the applied \ac{RNN}-based \ac{OFDM} receiver and the adaptive retraining process are given.
Finally, Sec.~\ref{sec:results} presents simulation results and Sec.~\ref{sec:conclusion} concludes the main findings.
\section{System Setup}
\pgfkeysvalueof{/tikz/dsp/label}}{sec:system_setup}
The ideal channel data to showcase the advantages of online retraining would be temporally continuous ``in-the-field'' measurements of \ac{CSI} for \ac{UE} trajectories covering various different channel conditions.
An equally potent alternative to measured data could be ray-tracing-based \ac{CSI}, simulated for \ac{UE} trajectories within large spatially consistent areas.
Unfortunately, to the best of our knowledge, neither of both data sources satisfying these requirements are currently available.
This is why we rely on a modified Jakes' and Clarke's oriented time-varying and frequency-selective stochastic channel model for our simulations.
By sensitively manipulating the stochastic model's parameters, e.g., maximum channel delay, \ac{PDP} or \ac{UE} velocity, we can generate stochastic sub-ensembles of channel realizations representing the different channel conditions as simplistically visualized in Fig.~\ref{fig:channel_ensemble}.
\subsection{Channel Model and OFDM System}
We consider a tapped-delay line channel model with time-varying channel impulse response $h\left(t, \tau\right)$.
The time-varying channel impulse response is defined as
\begin{equation}
h\left(t, \tau\right) = \sum_{\ell=0}^{L-1} a_{\ell}\left(t\right)\delta\left(\tau - \tau_{\ell}\right)
\end{equation}
where $L$ is the number of resolvable multipath-components, i.e., taps, $a_{\ell}$ is the complex time-varying gain of the ${\ell}$th tap, $\tau_{\ell}$ is the delay of the ${\ell}$th tap\footnote{In the following it is assumed that the delay of the first tap is \unit[0]{ns} and that the delay time is equally spaced with $\nicefrac{1}{B}=\unit[100]{ns}$.} and $\delta\left(.\right)$ is the Dirac delta function.
For each channel realization, these multipath-components $a_{\ell}$ are randomly generated to hold a certain average power $p_{\ell} = \operatorname{E}\left[|a_{\ell}|^2\right]$ while their absolute value $|a_{\ell}|$ is Rayleigh distributed. %
This average power $p_{\ell}$ of the ${\ell}$th multipath-compenent is assumed to follow an exponentially decaying \ac{PDP}.
Each channel tap is therefore weighted during its generation with the weight $b_{\ell} = \sqrt{p_{\ell}}$ computed by
\begin{equation}
\pgfkeysvalueof{/tikz/dsp/label}}{eq:exp_dec}
b_{\ell} = \frac{1}{\gamma}\sqrt{1-\beta}\cdot \beta^{\nicefrac{{\ell}}{2}} \in \mathbb{R}, \qquad {\ell} = 0,1,...,L-1
\end{equation}
where the factor $\gamma$ is chosen such that $\sum_{\ell}|b_{\ell}|^2=1$ and ${0<\beta<1}$ is a variable decay parameter.
The Fourier transform of the channel impulse response $h\left(t, \tau\right)$ then yields the channel transfer function $H \left( t,f \right)$.
We assume that the considered \ac{OFDM} transmission system operates on frames of $n_\mathrm{T}$ consecutive \ac{OFDM} symbols with parameters given in Tab.~\ref{Tab:Scenario}.
Each \ac{OFDM} symbol consists of $N_{\mathrm{Sub}}$ symbols -- either data-carrying or pilot-carrying -- that are transmitted in parallel over the $N_\mathrm{Sub}$ subcarriers.
The transmitted information bits $\mathbf{u}$ %
are encoded and interleaved into the sequence $\mathbf{c}$ of length $n_{\mathrm{d}}\cdot m$ using an 5G NR compliant \ac{LDPC} code \cite{5G_Code_2018} of length $n=1296$ bit. Here, $n_\mathrm{d}$ denotes the number of transmitted data-carrying symbols within a frame and each data symbol carries the information of $m$ bits (e.g., $m=4$ for a 16 \ac{QAM}).
For the simulation in frequency domain it is assumed that a sufficiently long \ac{CP} is applied and \ac{ISI} is not present. %
Let $\mathbf{X} \in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ be the transmitted symbols.
After the removal of the \ac{CP} the received symbols $\mathbf{Y}\in\mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ are given by
\begin{equation}
\pgfkeysvalueof{/tikz/dsp/label}}{eq:received_symbols}
\mathbf{Y} = \mathbf{H} \circ \mathbf{X} + \mathbf{N}
\end{equation}
where $\circ$ denotes the element-wise multiplication, $\mathbf{H}\in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ is the channel matrix and $\mathbf{N}\in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ is the \ac{AWGN} matrix.
By sampling $H\left(t,f\right)$ according to the \ac{OFDM} system parameters given in Tab.~\ref{Tab:Scenario} we end up with the channel matrix $\mathbf{H}$ of the current frame.
The elements $N_{k,n}$ of the noise matrix $\mathbf{N}$ are independent and identically complex Gaussian distributed according to
$N_{k,n}\sim \mathcal{CN}\left(0, \sigma^2\right)$ where
$\sigma^2$ denotes the noise power per element.
The task at receiver side is to equalize and demap the received symbols $\mathbf{Y}$. %
Finally, the obtained soft bit estimates are decoded by a \ac{BP} decoder. %
\subsection{Iterative LMMSE Baseline}
As a state-of-the-art baseline system, we employ a receiver based on the \ac{IEDD} principle. %
It consists of a data-aided \ac{LMMSE} channel estimator, a (soft-decision) \ac{APP} demapper and a \ac{BP} decoder that iterates and exchanges soft bit information with the estimator and the demapper.
For further details the interested reader is referred to \cite{aoudia2020end} and the references therein.
\section{Adaptive RNN-based OFDM Receiver}
\pgfkeysvalueof{/tikz/dsp/label}}{sec:RNN}
To demonstrate the advantages of adaptive retraining we consider a trainable \ac{RNN}-based \ac{OFDM} receiver. %
Similar to \cite{honkala2020deeprx,aoudia2020end}, it combines the tasks of channel estimation, equalization and soft-demapping within a single \ac{NN}.%
\subsection{Neural Network Structure and Training}
\begin{figure}[t]
\begin{center}
\input{fig/RNN_Non_iter_slim.tex}
\end{center}
\vspace{-2mm}
\caption{Block diagram of the \ac{RNN}-based \ac{OFDM} receiver.}
\pgfkeysvalueof{/tikz/dsp/label}}{fig:RNN_structures}
\vspace{-5mm}
\end{figure}
Fig.~\ref{fig:RNN_structures} provides an overview of the applied \ac{NN} model which is based on the structure that has been used in \cite{Fischer_2021} for the task of channel estimation. %
The RNN maps the received symbols $\mathbf{Y}$ to a soft bit estimation, interpreted as \acp{LLR} $\mathbf{l}_{\mathrm{RNN}}\in \mathbb{R}^{n_{\mathrm{d}}\cdot m}$. %
Besides $\mathbf{Y}$, it also takes the transmitted pilot symbols $\mathbf{X}_\mathrm{p} \in \mathbb{C}^{ n_\mathrm{T} \times N_{\mathrm{Sub}}}$%
, the \ac{LS} channel estimates $\hat{\mathbf{H}}_\mathrm{p,LS}\in \mathbb{C}^{n_{\mathrm{T}} \times N_{\mathrm{Sub}}}$ at pilot positions and the noise standard deviation $\sigma$ into account. %
The complex-valued inputs are split into their real and imaginary parts and the noise standard deviation is broadcasted for the whole frame to match the input tensor shape, so that all inputs can be stacked to one large input tensor.
Similar to \cite{Fischer_2021}, the core element of the \ac{RNN} cell are three bidirectional \ac{LSTM} layers that primarily process the input.
The first \ac{LSTM} layer operates along the input's frequency dimension.
Next, the output's frequency and time dimension are permuted causing the second \ac{LSTM} layer to operate in time dimension.
Finally, the time dimension and the frequency dimension of the second layer's output are again permuted so that the third \ac{LSTM} layer again processes along the frequency dimension of the frame.
Subsequently, %
the \ac{RNN} cell's output is reshaped and processed by two \acp{TDDL}. %
Here, every element of the two-dimensional resource grid of the frame is processed separately by these \acp{TDDL} using shared weights. %
The \ac{LSTM} cells are applied with TensorFlow's default settings using \ac{tanh} activations, the first \ac{TDDL} uses \acp{ReLU} and the second \ac{TDDL} has no activation function. %
In this work, we use 64 units within each \ac{LSTM} layer, the first \ac{TDDL} consists of 8 neurons and the second \ac{TDDL} uses $m$ neurons, i.e., the RNN outputs $m$ values for every position in the resource grid. %
After removing the output values at pilot positions, the \ac{RNN}'s reshaped output $\mathbf{l}_{\mathrm{RNN}} \in \mathbb{R}^{n_\mathrm{d}\cdot m}$ can be de-interleaved and utilized by the outer \ac{BP} decoder. %
Training of the described \ac{RNN} is carried out in a supervised manner utilizing \ac{SGD} and \ac{BPTT}.
During training (initial as well as re-training) the Adam optimizer \cite{Kingma2014} with a learning rate of $\eta = 0.001$ is used to minimize the \ac{BCE} loss between estimations $\mathbf{l}_{\mathrm{RNN}}$ and labels $\vec{c}$.
The \ac{RNN}-based receiver is initially trained with universal randomly generated channel realizations from the stochastic channel model for a vast range of different channel parameters.
This kind of initial training results in an universal and robust generalization and allows the \ac{RNN}-based receiver to implicitly gather knowledge of the channel only through data-driven training \cite{Fischer_2021}.
The exact parameters used for initial training are summarized in Tab.~\ref{Tab:Training_Parameters}.
\begin{table}[t]
\centering
\vspace{0.03in}
\caption{Parameters for Initial (Universal) Training}
\vspace{-1mm}
\begin{tabular}{l|l}
\toprule
Parameter & Value \\
\midrule
Epochs / It. per epoch / BS & 100 / 1000 / 128 \\
Velocity $v$& $\unitfrac[0]{km}{h}- \unitfrac[200]{km}{h}$ \\
Signal-to-noise-ratio (SNR) & $\unit[8]{dB} - \unit[30]{dB}$\\
%
Number of channel taps $L$ & Ep. 1-50: 4-10; Ep. 51-100: 1-14\\
\ac{PDP} & Exp. decaying with $10\operatorname{log_{10}}\left(\frac{p_{L-1}}{p_0}\right)$\\& $=\unit[-13]{dB}$ and equally spaced\\%Exp. decaying with the power \\&in the last resolvable path being\\ & $\unit[13]{dB}$ lower than the power of\\& the first path and equally spaced\\ %
\bottomrule
\end{tabular}
\pgfkeysvalueof{/tikz/dsp/label}}{Tab:Training_Parameters}
\vspace{-5.5mm}
\end{table}
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\subsection{Adaptive Retraining via On-the-fly Label Recovery}
\pgfkeysvalueof{/tikz/dsp/label}}{sec:retraining}
In order to allow the \ac{RNN}-based \ac{OFDM} receiver to adapt to current channel conditions, it has to be retrained periodically.
To enable a single retraining step, a data-set consisting of multiple recorded OFDM frames (holding inputs $\mathbf{Y}$, $\mathbf{X}_\mathrm{p}$, $\hat{\mathbf{H}}_\mathrm{p,LS}$ and $\sigma$) and the corresponding labels, being the originally transmitted interleaved coded bits $\mathbf{c}$, must be collected.
As the labels $\mathbf{c}$ are required for supervised training, they must either be retrieved by the transmission of pilot-based training sequences (and are thereby known at the receiver side) or via on-the-fly label recovery, as presented in \cite{schibisch2018online}.
Whereas pilot-based training sequences would cause a rate loss, the approach proposed in \cite{schibisch2018online} recovers the labels on-the-fly via the outer \ac{FEC} after the decoder has corrected the received bits.
Thus, there is no additional rate loss and these labels usually come for free as most systems rely on \acp{FEC}.
To demonstrate the feasibility of on-the-fly label recovery for the task of RNN retraining, we only use labels recovered by the LDPC code after 20 iterations of BP decoding.
The block diagram in Fig.~\ref{fig:on_the_fly_label_recovery} depicts the individual processing steps that allow retraining with recovered labels. %
Therefore, the \ac{RNN} processes the received symbols as described above and outputs an \ac{LLR} for each transmitted bit.
These \acp{LLR} $\mathbf{l}_{\mathrm{RNN}}$ are then de-interleaved and further processed by the \ac{BP} decoder. %
In normal operation, the decoder makes a final decision on the received information bits $\hat{\mathbf{u}}$ after several iterations of \ac{BP} decoding.
But, in order to build up a labeled data-set for retraining, at the same time the decoder also outputs its information on the coded bits $\hat{\mathbf{c}}$, i.e., a hard decision on the final variable nodes.
These coded bits $\hat{\mathbf{c}}$ are then interleaved to $\tilde{\mathbf{c}}$ and stored together with the corresponding inputs.
If enough tuples of inputs and labels are recovered to form a sufficiently large retraining data-set, an update step using supervised \ac{SGD} is performed, aiming to reduce the \ac{BCE} loss.
However, one drawback of the described label recovery approach is, that even after sufficient decoding, not all labels can be recovered correctly by a \ac{FEC} code.
This is why we consider a codeword's error syndrome in combination with the current \ac{SNR} to define a threshold for labels that are stored in the retraining data-set, while samples above the threshold are discarded.
Similar to the findings in \cite{schibisch2018online} we saw improved performance after retraining even with partly erroneous labels.
If the number of erroneous labels exceeded a certain level we saw a degradation after retraining.
But, this can be avoided by defining the threshold conservatively.%
\begin{figure}[t]
%
\begin{center}
\input{fig/On-the-fly-label-recovery.tex}
\end{center}
\vspace{-2mm}
\caption{Block diagram of the retraining process for NN-based receiver adaptation via on-the-fly label recovery \cite{schibisch2018online}.}
\pgfkeysvalueof{/tikz/dsp/label}}{fig:on_the_fly_label_recovery}
\vspace{-8mm}
\end{figure}
\section{Simulation Results}
\pgfkeysvalueof{/tikz/dsp/label}}{sec:results}
\begin{table}[t]
\centering
\vspace{0.03in}
\caption{OFDM and Channel Model Parameters}
\vspace{-1mm}
\begin{tabular}{l|l}
\toprule
Parameter & Value \\
\midrule
Number of subcarriers $N_{\mathrm{Sub}}$ & 64 \\
Frame length $n_{\mathrm{T}}$& 36 \\
Carrier frequency $f_{\mathrm{c}}$ & $\unit[5.9]{GHz}$ \\
Symbol duration including \ac{CP} $T_{\mathrm{S}}$ & $\unit[8]{\mu s}$ \\
Length of the \ac{CP} &$\unit[1.6]{\mu s}$ \\
Bandwidth $B$ & $\unit[10]{MHz}$\\
Data symbol constellation & 16 QAM, $m=4$ bit per symbol \\
Pilot structure/arrangement & Rectangular/Grid \\
Pilot symbol distance & $d_{\mathrm{T}}=15$, $d_\mathrm{F}=5$\\
\ac{PDP} & Exp. decaying with\\ &$10\operatorname{log_{10}}\left(\frac{p_{L-1}}{p_0}\right)=\unit[-13]{dB}$\\ %
LDPC code & $R_{\mathrm{C}} = \nicefrac{1}{2}$, $n = \unit[1296]{bit}$\\ %
\bottomrule
\end{tabular}
\pgfkeysvalueof{/tikz/dsp/label}}{Tab:Scenario}
\vspace{-4mm}
\end{table}
To evaluate the effects of adaptive retraining we simulate the performance of various receiver setups in three different scenarios.
For each scenario we assume certain channel conditions, simulated by channel model parameters, to be static for a short period of time.
Within this time period, which shall represent the \emph{current} channel, we gather retraining data via on-the-fly label recovery as described in Sec.~\ref{sec:retraining}, perform a retraining step of the RNN-based receiver and then evaluate the performance on the same channel conditions.
For the following simulation results, a retraining step was executed after 32 batches with 50 frames of input-label-tuples per batch were collected.
With the general simulation parameters given in Tab.~\ref{Tab:Scenario}, this translates to a label recovery time period of $\unit[0.4608]{s}$ and, thereby, sets a lower bound (neglecting time for retraining computations) for periodic retraining steps to track channel alterations.
To limit the amount of erroneous labels within a recovered retraining data-set, we empirically defined the threshold according to the codeword's error syndrome in a way that at least $82\%$ of the parity-checks of the recovered labels have to be fulfilled by a batch to be used for retraining.
In addition, a batch is only used for retraining if the \acs{SNR} $\nicefrac{E_{\mathrm{b}}}{N_0}$ is larger than $\unit[7]{dB}$, resulting in basically no retraining in the low SNR regime.\footnote{Pilot sequence-based labels are required for retraining in the low SNR regime, as recovered labels based on FEC suffer from high error rates.}
Also, each recovered batch is only used once for an SGD weight update iteration and one retraining step is performed separately for every evaluation point at different SNR.
For each scenario the performance is measured by the \ac{BER} after forward error correction (post-FEC) and the following receiver systems are analyzed:
\begin{itemize}
\item \textbf{Universal \ac{RNN}}: Non-iterative RNN-based receiver, %
initially trained with the universal parameters summarized in Tab.~\ref{Tab:Training_Parameters}, complemented by 20 iterations of BP decoding.
\item \textbf{Adapted \ac{RNN}:} Non-iterative \ac{RNN}-based receiver, initially trained with the universal parameters in Tab.~\ref{Tab:Training_Parameters}, that is adapted to the current channel via one retraining step using on-the-fly recovered labels. Also complemented by 20 iterations of BP decoding.
\item \textbf{\ac{LMMSE} \ac{IEDD}}: Conventional \ac{LMMSE} \ac{IEDD} baseline system %
utilizing an autocorrelation matrix that is matched to the channel (genie knowledge of channel model parameters). %
The \ac{BP} decoder executes 5 iterations %
before feedback is provided to estimator and demapper. %
In total $4\times 5=20$ iterations of BP decoding are executed.
\item \textbf{Perfect Knowledge IDD}: Lower limit of the achievable \ac{BER} assuming perfect knowledge of the channel and utilizing an iterative receiver, i.e., exploiting \ac{IDD}.
Here, feedback is provided to the demapper after every iteration of \ac{BP} decoding and $\vec{H}$ is known. %
In total $20\times 1 = 20$ iterations of BP decoding are executed.
\vspace{-2mm}
\end{itemize}
%
%
%
\subsection{Corner Case (Sub-Ensemble) Scenario}
\begin{figure}[t]
\begin{center}
\input{fig/BER_Retraining_8taps_0kmh_exp_dec_16QAM}
\end{center}
\vspace{-2mm}
\caption{\ac{BER} performance of the investigated receivers in the corner case scenario of no movement and thereby no channel time-variance ($v = \unitfrac[0]{km}{h}$ and moderate $L = 8$ channel taps).}
\pgfkeysvalueof{/tikz/dsp/label}}{fig:BER_retraining_8taps}
\vspace{-5mm}
\end{figure}
The first scenario investigates the impact of adaptation to corner case conditions using the example of no \ac{UE} movement.
For this purpose we set the velocity to $v = \unitfrac[0]{km}{h}$ and choose a moderate number of $L = 8$ channel taps so that the stochastic channel model generates channel realizations that form a sub-ensemble of the universal conditions used for initial training (Tab.~\ref{Tab:Training_Parameters}).
As can be seen from the results shown in Fig.~\ref{fig:BER_retraining_8taps}, the unadapted \emph{Universal RNN} already shows a better performance than the conventional \emph{LMMSE IEDD} baseline, thus confirming the findings of \cite{aoudia2020end, Fischer_2021}.
This gain can be justified by the fact that the RNN-based receiver can additionally exploit the expected distribution of the data-carrying symbols in $\vec{Y}$.
However, by adapting the RNN receiver to the current channel conditions, the \emph{Adapted RNN} can further gain about \unit[0.1]{dB} of BER performance compared to the \emph{Universal RNN}.
Interestingly, this gain is possible although the channel conditions of this scenario were part (sub-ensemble) of the initial universal training.
We assume that retraining on the current channel conditions allows the RNN to lift conservative assumptions: channel realizations with high velocity are not part of the retraining data, so their implications no longer constrain the weight updates.
These gains have also been observed for various other corner cases with different parameters within the range of the universal channel ensemble, but due to paper length limits we only show this corner case as an example.
\subsection{Out-of-Specification (Extreme) Scenario}
\begin{figure}[t]
\begin{center}
\input{fig/BER_Retraining_16taps_100kmh_exp_dec_16QAM}
\end{center}
\vspace{-2mm}
\caption{\ac{BER} performance of the investigated receivers in the extremely frequency-variant (out-of-specifications) scenario of $L = 16$ channel taps at a moderate velocity of $v = \unitfrac[100]{km}{h}$.}
\label{fig:BER_retraining_16taps}
\vspace{-4mm}
\end{figure}
In the second scenario, we want to focus on the benefit of adaptation in case of unforeseen and extreme channel conditions.
Therefore, the results shown in Fig.~\ref{fig:BER_retraining_16taps} were obtained at highly frequency-selective channel conditions with $L = 16$ channel taps at a moderate velocity of $v=\unitfrac[100]{km}{h}$.
The simulation results show that the performance of the conventional \emph{LMMSE IEDD} baseline system degrades heavily.
This is expected as it mainly relies on pilot symbols, and the used pilot spacing in the frequency dimension is not sufficient for $L = 16$ channel taps, setting this scenario out of specification.
Likewise, this scenario is also out of specification for the \emph{Universal RNN} as initial training only covers channel conditions up to $L = 14$ channel taps.
However, while the performance of the \emph{Universal RNN} also degrades compared to the \emph{Perfect Knowledge IDD} lower limit, it does not degrade as severely as that of the \emph{LMMSE IEDD} baseline system.
This observation is also consistent with the findings of \cite{aoudia2020end, Fischer_2021}, which showed that NN-based receivers extract further knowledge about the channel from the provided data-carrying symbols and are therefore more robust against sparse pilot spacing.
But, most interestingly, the \emph{Adapted RNN} shows significantly improved performance compared to the \emph{Universal RNN}.
While there is still a large gap between the performance of the \emph{Adapted RNN} and \emph{Perfect Knowledge IDD}, these results show that adaptation can restore significant operability to an NN-based receiver, even in a scenario that was originally out of specification.
\subsection{Interference Scenario}
\begin{figure}[t]
\begin{center}
\input{fig/BER_Retraining_8taps_100kmh_GuardBand_Interference_6dB}
\end{center}
\vspace{-2mm}
\caption{
\ac{BER} performance of the investigated receivers in a scenario with side channel interference, modeled by additive noise of $\unit[6]{dB}$ on the outer four subcarriers, at otherwise moderate conditions with $L = 8$ channel taps and $v = \unitfrac[100]{km}{h}$.}
\label{fig:BER_retraining_guard_band_interference}
\vspace{-4mm}
\end{figure}
Finally, we want to showcase a scenario that highlights the flexibility of NN-based receivers and how retraining can even enable adaptation to unseen tasks.
This is shown using the example of side channel interference, which is modeled by adding noise to the outer four subcarriers, reducing their SNR by $\unit[6]{dB}$.
As can be seen from the results shown in Fig.~\ref{fig:BER_retraining_guard_band_interference}, the \emph{LMMSE IEDD} baseline as well as the \emph{Universal RNN} suffer from the added interference, but retraining the RNN-based receiver leads to a performance gain of \unit[0.42]{dB} when we compare the \emph{Adapted RNN} with the \emph{Universal RNN}.
In this case the NN-based receiver is able to cope with the new task of incorporating the disturbance on the outer four subcarriers via retraining, while a conventional system would require additional signal processing and cannot simply adapt.
\section{Conclusion}
\label{sec:conclusion}
We have demonstrated that \ac{NN}-based receivers benefit from continuous retraining as they can adapt to current, extreme, and unforeseen channel conditions.
For such cases, adaptation leads to a superior performance when compared to static receivers that have only been designed and optimized for a universal channel model.
Finally, we want to emphasize that these gains come without any additional signaling overhead, as on-the-fly label recovery is sufficient for the retraining process.
\bibliographystyle{IEEEtran}
\section{Introduction}
The interplay between superconductivity (SC) and magnetism, and
the role of correlation effects in Fe-pnictides are under debate
\cite{Johonston2010,Stewart2011,Bang2012}.
It is commonly assumed that in most of the
so-called stoichiometric
parent
compounds nesting
between electron (el) and
hole (h) Fermi surfaces is responsible for the presence of
long-range spin density waves (SDW). To get SC,
the SDW should
be suppressed by chemical doping or external pressure
\cite{Johonston2010,Stewart2011}. Therefore, it is believed that SC is
driven at least partially
by AFM spin fluctuations (sf).
In contrast,
in some other of Fe-pnictides such as LiFeAs (111) or
KFe$_2$As$_2$ (K122) there is no
nesting. Therefore, the role of magnetism in the
SC pairing
of these
compounds is less obvious, but it is still believed that remaining sf, as in Ba122 \cite{Hirano12}, or a new type of sf \cite{Zhang2010} can be responsible for SC.
In this paper we demonstrate that the physical properties of clean K122 single crystals are strongly affected by an unexpected glassy-like magnetic behavior of spin-glass (SG) and Griffiths (G) type. It is known that a SG phase
gives a nearly linear magnetic contribution to the specific heat (SH) below the freezing temperature $T_f$
\cite{Binder1986,Mydosh1993}. In some cases the SG contribution can be hardly
distinguished from the usual
electronic (Sommerfeld) contribution to the SH, since it has the same
linear $T$-dependence and only a weak maximum slightly above $T_f$.
Therefore, the Sommerfeld coefficient and the deduced strength of correlation effects can be significantly overestimated if one considers only SH data, thereby ignoring the glassy phases. Moreover, the
interplay of
superconductivity
(SC) and {\it unknown} magnetic phases can lead to confusing conclusions
concerning SC gaps \cite{Kim2011}. A clear
understanding of
various coexisting or competing forms of magnetism
should be addressed first.
\section{Experimental results and discussion}
\subsection{Samples}
K122 single crystals have been grown using a
self-flux method (FeAs-flux (S1) and KAs-flux (S2)).
The high quality of the grown single crystals was
assessed by complementary techniques. Several samples were
examined with a Scanning Electron Microscope (SEM Philips XL 30)
equipped with an electron microprobe analyzer for a
semi-quantitative elemental analysis using the energy dispersive
x-ray (EDX) mode (for details see
\cite{Hafiez2011,Aswartham2012}). Resistivity of all measured samples
showing a metallic behavior at all $T$ with a
RRR$_5=\rho({\rm300\,K})/\rho ({\rm5\,K})\approx$400-500, where
$\rho({\rm300\,K})$ and $\rho ({\rm5\,K})$ is resistivity at $T=300$ and 5~K,
respectively, comparable with the best values reported in the literature.
The resistivity data
will be published elsewhere.
Low-$T$ SH and ac susceptibility were
determined using a PPMS (Quantum Design).
The dc magnetic susceptibility has been measured in a SQUID
(Quantum Design) magnetometer. In this paper the data for two
representative single crystals are shown.
\subsubsection{Magnetization measurements}
\begin{figure}[t]
\includegraphics[width=18pc,clip]{fig1.eps}
\caption{(Color online) The temperature dependence of the volume susceptibility
of S1 and S2 in the
SC state.
Inset: Their molar susceptibilities in the normal state.}
\label{Fig:1}
\end{figure}
\paragraph{Samples with cluster glass behavior}
Fig.\ \ref{Fig:1} depicts the $T$-dependence of
the volume susceptibility ($\chi_{v}$) determined from dc magnetization
of our samples measured under both
zero-field-cooled (ZFC) and field-cooled (FC) conditions with the
field $B_{\parallel ab} = 20$\,Oe. Bulk SC of our samples
is confirmed by 'full' diamagnetic signals of the ZFC data
at low $T$. For sample S1, a clear splitting between ZFC and FC normal
state linear susceptibility $\chi_l(T)=M(T)/B$ curves is observed below 100~K
(see the inset of Fig.~\ref{Fig:1}), where $M(T)$ is a magnetization
in the field $B$.
The maximum in the ZFC $\chi_l(T)$ is attributed to the
freezing temperature of a spin glass (SG) type phase
at $T_f \approx 65$~K and $B=20$~Oe \cite{remres1}.
\begin{figure}[t]
\includegraphics[width=18pc,clip]{fig2.eps}
\caption{(Color online) (a) The molar susceptibility
$\chi_l(T)=M/B$ for S1 crystal measured at different magnetic fields $B$.
$M$ is the magnetization. Inset: field
dependence of freezing temperature for field $B_{\parallel ab}$ and
$B_{\parallel c }$.
(b) Field dependence of magnetization of S1 crystal measured after
ZFC at $T$=5K and $T$=60K ($B_{\parallel ab}$) . Inset: field dependence of
magnetization at $B_{\parallel ab}$ and $B_{\parallel c }$ and $T$=5K.}
\label{Fig:2}
\end{figure}
$T_f$ decreases with increasing field and at 5~T no
splitting was observed down to 2~K (see Fig.\ \ref{Fig:2}a).
The field dependence of $T_f$ is shown in the inset of Fig.\ \ref{Fig:2}a.
The extrapolated value of $T_f(B=0)\sim 90$~K
is relatively high. This might point to a large
concentration of the involved
magnetic moments (MM)
$\gtrsim$10\% in sample S1 \cite{Mydosh1993}. Such a high value of MM is
expected from entropy estimations, too (see section 2.2). On the other hand,
structural investigations did not reveal any impurity phase (see section 2.3).
Therefore, we speculate that the high value of $T_f$ might be caused
by a low-lying excited
incommensurate spin density wave state \cite{Overhauser1960}. For a
more detailed consideration of
this scenario see Ref.\ 13.
In addition, an upshift of the position of the maximum and a lowering of its amplitude with increasing frequency $\nu$ of the ac susceptibility, generic for a SG ordering \cite{Binder1986,Mydosh1993} (Fig.\ \ref{Fig:3}), were observed for crystal S1.
The value of the frequency shift of $T_f$ \cite{remres1}:
\begin{equation}\label{eq1}
\delta T_f=\frac{\Delta T_f}{T_f\,\Delta(\log \nu)}\sim 0.05 .
\end{equation}
is above the range 0.001-0.02
expected for canonical SG but well below $\sim$ 0.3 observed
in the case of superparamagnets \cite{Mydosh1993}. Such an
intermediate value of the frequency shift is usually related to the so-called cluster glass (CG) behavior \cite{Marcano2007,Anand2012}. The frequency dependence of $T_f$
shown in inset Fig.~\ref{Fig:3} follows a conventional power-law divergence
of critical slowing down \cite{Mydosh1993,Anand2012}:
\begin{equation}\label{eq2}
\tau=\tau_{0}[\frac{T_{f}(\nu)}{T_{f}(0)}-1]^{-z\nu^{'}} ,
\end{equation}
where $\tau=1/\nu$ is the relaxation time corresponding to the measured
frequency $\nu$, $\tau_0$ is the characteristic relaxation time of a single spin flip, $T_f(\nu=0, B=5~{\rm Oe})\approx$71~K is the spin-glass temperature as the frequency tends to zero, as adopted from dc susceptibility measurements (inset Fig.~\ref{Fig:2}a), and $z\nu^{'}$ is the dynamic critical exponent.
It is convenient to rewrite
Eq.~\ref{eq2} in the form:
\begin{equation}\label{eq2a}
\ln(\tau)=\ln(\tau_0)-z\nu^{'}\ln(t),
\end{equation}
where $t=T_f(\nu)/T_f(0)-1$. The fit of the
experimental data by a power-law
divergence Eq.~\ref{eq2a} is shown in the
inset Fig.~\ref{Fig:3}. The best fit
was obtained with $z\nu^{'}$=10(2) and $\tau_0=6.9\cdotp 10^{-11}$s.
The value of $z\nu^{'}$ is in the range of 4 to 12 observed for typical SG
\cite{Anand2012}. On the other hand the value of $\tau_0$ is large as
compared to $10^{-12} - 10^{-14}$s
observed for structureless SG systems, which is of the order of the spin-flip
time of atomic magnetic moments ($10^{-13}$s) \cite{Malinowski2011}.
This suggests a slow spin dynamics in our crystal S1, likely due to the presence of
interacting clusters rather than individual spins.
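The fit of Eq.~\ref{eq2a} amounts to a straight line in $\ln\tau$ vs.\ $\ln t$ coordinates. As a minimal numerical sketch, the quoted parameters can be recovered from synthetic, noise-free data (the reduced-temperature grid below is an illustrative choice, not measured data):

```python
import numpy as np

# Quoted fit results: z*nu' = 10, tau0 = 6.9e-11 s.
z_nu, tau0 = 10.0, 6.9e-11

# Illustrative reduced temperatures t = T_f(nu)/T_f(0) - 1
t = np.array([0.15, 0.18, 0.21, 0.24])
tau = tau0 * t ** (-z_nu)            # the power-law divergence of critical slowing down

# Linear least squares in log-log coordinates
slope, intercept = np.polyfit(np.log(t), np.log(tau), 1)
print(-slope, np.exp(intercept))     # recovers z*nu' and tau0
```

With real data the scatter of the measured $T_f(\nu)$ enters through $t$, but the fitting procedure is the same linear regression.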
Another signature of a SG-like behavior for crystal S1 is a hysteresis
in the
magnetization data below $T_f$
with a reduced ZFC susceptibility at low fields (Fig.\ \ref{Fig:2}b).
This behavior is expected in the case of SG or CG systems
\cite{Mydosh1993,Marcano2007,Coles1978}
below $T_f$ and also excludes superparamagnetic behavior in our samples
where no
hysteresis was observed \cite{Bean1959}.
On the other hand, in the case of canted antiferromagnetic (AF)
or ferromagnetic (FM) impurity phases hysteresis is expected but with
a higher susceptibility at low fields because the
clusters are at first saturated
along their local easy axis, and only after that various clusters
become fully aligned along the applied field \cite{Malinowski2011}.
Therefore, our $M(B)$ data exclude
(large) clusters of impurity phases such as FeAs, Fe$_2$As or other iron
compounds. The same conclusion can be drawn
from our SH measurements (see below).
We also observed a displacement of the ZFC magnetization compared to the magnetization after a complete hysteresis loop for a magnetic field applied parallel to the $ab$ plane (inset Fig.~\ref{Fig:2}b). For the field along the
$c$-axis no
displacement was observed. This indicates that the glassy behavior
is dominated by a magnetic interaction between moments lying in the
$ab$-plane.
\begin{figure}[t]
\includegraphics[width=18pc,clip]{fig3.eps}
\caption{(Color online) $T$-dependence of the real part of
the ac susceptibility for sample S1 measured for three
different frequencies $\nu$ at 5~Oe ac field amplitude.
Inset: the $\nu$-dependence of the
freezing
temperature plotted as $\ln(\tau)$ vs.\ $\ln (t)$, where $t$ denotes the
reduced temperature
$t=T_f(\nu)/T_f(0)-1$.}
\label{Fig:3}
\end{figure}
\paragraph{Samples with Griffiths-phase behavior}
In contrast, the $T$-dependence of the linear susceptibility $\chi_l(T)$ of some crystals (S2) does not show a difference between ZFC and FC curves above $T_c$. The $\chi_l(T)$ data of one of the S2 crystals are shown in Fig.~\ref{Fig:4}a.
At high $T>$200~K, $\chi_l(T)$ follows a Curie-Weiss behavior with an AFM
$\Theta_c$=-117~K \cite{remres2}.
At $T\lesssim$120~K $\chi_l(T)$ shows a plateau-like feature with a field
independent susceptibility above $B=1$~T. In view of the CG observed in sample S1, we relate this flattening to a tendency to form magnetic clusters in the S2 crystals, too. However, upon lowering $T$, after a weak reduction of $\chi_l(T)$, instead of forming a CG phase, crystal S2 shows a power-law increase of the susceptibility below $T^*$:
\begin{equation}\label{eq3}
\chi_{l}(T)= \chi_{l0}+\frac{Cu_{\mbox{\tiny G}}}{T^{1-\lambda_{\mbox{\tiny G}}}},
\end{equation}
where $\chi_{l0}$ is a $T$-independent background susceptibility and $Cu_{\mbox{\tiny G}}$ is a constant.
A power law with the exponent $\lambda_{\mbox{\tiny G}}$ was found up to the highest measured field of 7~T (see Fig.\ \ref{Fig:4}a).
This power-law behavior is quite similar to that reported for the weak itinerant ferromagnetic alloy Ni$_{1-x}$V$_x$
\cite{Ubaid-Kassis2010}, where
the formation of a Griffiths (G) phase with a
non-universal exponent was
observed near a FM quantum critical point. Following the analysis
proposed there, the
field and $T$-dependence of the magnetization can be scaled on a single
curve
Fig.~\ref{Fig:4}b:
\begin{equation}\label{eq3a}
M_s(T,B)= B^{1-\lambda_{\mbox{\tiny G}}}Y\left(\frac{\mu B}{k_{B}T}\right),
\end{equation}
where $\mu$ is the scaling moment and
$Y=A^{'}/(1+z^{-2})^{\lambda_{\mbox{\tiny G}}/2}$ a
scaling function with
$A^{'}=A/\mu^{\lambda_{\mbox{\tiny G}}}$ as a constant.
To scale the data using Eq.~\ref{eq3a} we have subtracted
the $T$-independent susceptibility
$\chi_{l0}$ from $\chi_{l}(T)$ and $\chi_{l0}B$ from $M(H)$, correspondingly.
For sample S2 (Fig.~\ref{Fig:4}) a scaling was observed for
$\lambda_{\mbox{\tiny G}} \approx$0.5(1)
with a scaling moment $\mu \sim$3.5$\mu_B$. According to Ref.\ 20
the obtained moment can be related
to a typical cluster size in crystal S2.
The SH data are also in agreement with the G-like
scenario (see below). Therefore, we ascribe the anomalous
power-law of $\chi(T)$ at low $T<T^*\approx 50$~K to the
formation of a
quantum G-phase.
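Once $\chi_{l0}$ is subtracted, extracting $\lambda_{\mbox{\tiny G}}$ from Eq.~\ref{eq3} is again a linear fit in log-log coordinates. A minimal sketch with synthetic data (the amplitudes and temperature grid below are made-up illustrative numbers; only $\lambda_{\mbox{\tiny G}}=0.5$ is taken from the text):

```python
import numpy as np

chi0, Cu_G, lam = 2.0e-3, 5.0e-3, 0.5     # chi0, Cu_G: illustrative values
T = np.linspace(5.0, 45.0, 30)            # K, below T* ~ 50 K
chi = chi0 + Cu_G / T ** (1.0 - lam)      # the Griffiths power law

# ln(chi - chi0) is linear in ln(T) with slope -(1 - lambda_G)
slope, _ = np.polyfit(np.log(T), np.log(chi - chi0), 1)
lam_fit = 1.0 + slope
print(lam_fit)                            # -> 0.5
```

In practice $\chi_{l0}$ must be fitted simultaneously, which is what makes the quoted uncertainty $\lambda_{\mbox{\tiny G}}\approx 0.5(1)$ comparatively large.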
\begin{figure}[t]
\includegraphics[width=18pc,clip]{fig4.eps}
\caption{(Color online) (a) The molar susceptibility
$\chi_l(T)=M/B$ for crystal S2 measured in
different magnetic fields $B$, where
$M$ is the magnetization. Fitting curves using Eq.~\ref{eq3}.
(b) The scaled magnetization $M_{s}B^{-0.5}=(M(T,B)-\chi_{l0}B)B^{-0.5}$ vs.\
$B/T$ for crystal S2 (see Eq.~\ref{eq3a}).}
\label{Fig:4}
\end{figure}
\subsection{Specific heat measurements}
\paragraph{Specific heat in the normal state}
We have found that the glassy magnetic subsystems (observed in the magnetization
measurements) do
also contribute to the
SH shown in Fig.~\ref{Fig:5}a. In the case of SG or CG phases the magnetic contribution $C_{CG}$ to the SH varies almost linearly at $T<T_f$, like the usual electronic contribution in the case of a Fermi liquid (FL)
\cite{Mydosh1993,Binder1986,Dawes1979,Martin1980}. Empirically,
this behavior can be approximated by
\begin{equation}\label{eq4a}
C_{\rm CG}\approx \gamma_{\rm CG}T+\varepsilon_{\rm CG2}T^2,
\end{equation}
or
\begin{equation}\label{eq4b}
C_{\rm CG}\approx
\varepsilon_{\rm CG1.5}T^{1.5},
\end{equation}
where $\gamma_{\rm CG}$, $\varepsilon_{\rm CG2}$ and $\varepsilon_{\rm CG1.5}$
are CG constants. The $\varepsilon_{\rm CG1.5}$ contribution can be interpreted as originating from short-range 3D ferromagnetic (FM) spin waves which can exist in FM clusters \cite{Thomson1981}; a linear contribution to the SH can also be expected for 2D FM spin waves.
Then, the normal state SH of sample S1 reads
\begin{equation}\label{eq4d}
C_p^{S1}(T)=\gamma_{\rm el}T+C_{\rm m}^{CG}+\beta_3 T^3+\beta_5 T^5 \ ,
\end{equation}
where $C_{\rm m}^{CG}$ is given by Eq.~\ref{eq4a} or Eq.~\ref{eq4b},
$\gamma_{\rm el}$ is an intrinsic electronic contribution, and
$\beta_3$, $\beta_5$ describe the lattice contribution.
In case of a G-phase (sample S2),
$C(T)_G/T \propto \chi(T)$ is expected \cite{CastroNeto1998,Stewart2001}.
Hence, for the SH we have:
\begin{equation}\label{eq5}
C_p(T)^{S2}=\gamma_{\rm el}T+\gamma_GT^{\lambda_G}+\beta_3 T^3+\beta_5 T^5 \ ,
\end{equation}
where $\lambda_{\mbox{\tiny G}} \approx 0.5(1)$ according to our magnetization
data.
\begin{figure}[t]
\includegraphics[width=21pc,clip]{fig5.eps}
\caption{(Color online) (a) The specific heat of our two K122
samples. Inset: low temperature part of the SH at zero field.
(b) The plot $[C_p^{S1}(T)-C_p^{S2}(T)]/T$.}
\label{Fig:5}
\end{figure}
To reduce the number of fitting parameters in Eqs.~\ref{eq4d} and \ref{eq5},
we analyzed the difference:
\begin{equation}\label{eq6}
[C_p^{S1}(T)-C_p^{S2}(T)]/T=
C_{\rm CG}^{S1}-\gamma_GT^{\lambda_G-1}\ .
\end{equation}
This allows us to exclude the lattice contributions $\beta_3$, $\beta_5$, as well as the linear electronic term $\gamma_{\rm el}$, which are all supposed to be nearly the same for both crystals. The fit of the
experimental data by Eq.~\ref{eq6} is shown in Fig.~\ref{Fig:5}b.
i) In the case of Eq.~\ref{eq4a} it gives:
$\gamma_{\rm CG}^{S1}=36$~mJ/mol$\cdotp$K$^2$,
$\varepsilon_{\rm CG2}=2.0$~mJ/mol$\cdotp$K$^3$ and
$\gamma_{\rm G}=104$~mJ/mol$\cdotp$K$^{1.5}$, respectively.
Then, using the obtained magnetic contribution in Eq.~\ref{eq5}, we have estimated the {\it intrinsic}
$\gamma_{\rm el}^i\approx 52$~mJ/mol$\cdotp$K$^2$ for sample S2
with the lattice terms
$\beta_3^i=0.59$~mJ/mol$\cdotp$K$^4$ and
$\beta_5^i=1.20\cdotp 10^{-3}$~mJ/mol$\cdotp$K$^6$,
respectively. The obtained $\beta_3^i$ corresponds to a
Debye temperature
$\Theta_D^i\approx 254$~K.
ii) In the case of validity of Eq.~\ref{eq4b} it gives:
$\varepsilon_{\rm CG1.5}=14.9$~mJ/mol$\cdotp$K$^3$ and
$\gamma_{\rm G}=75.3$~mJ/mol$\cdotp$K$^{1.5}$, respectively.
Then the {\it intrinsic}
$\gamma_{\rm el}^{ii}\approx 68$~mJ/mol$\cdotp$K$^2$ is the same for the S1 and S2 crystals, with slightly different lattice terms as compared to those obtained in analysis (i): $\beta_3^{ii}=0.46$~mJ/mol$\cdotp$K$^4$ and
$\beta_5^{ii}=1.86\cdotp 10^{-3}$~mJ/mol$\cdotp$K$^6$,
respectively. This value of $\beta_3^{ii}$ corresponds to a Debye temperature
$\Theta_D^{ii}\approx$276~K. Both analyses give reasonable values of the lattice
contribution (for example, in the case of Ba$_{0.68}$K$_{0.32}$Fe$_{2}$As$_{2}$
a Debye temperature of 277~K was estimated \cite{Popovich2010}) and an essentially
reduced $\gamma_{\rm el}\approx 52-68$~mJ/mol$\cdotp$K$^2$ as compared to
the nominal value $\gamma_{\rm n}\approx 100$~mJ/mol$\cdotp$K$^2$
\cite{Fukazawa2011,Kim2011} obtained without accounting for
magnetic contributions. The SH data at
$B=9$~T shown in Fig.~\ref{Fig:5}a can be considered as support for analysis (ii), since this analysis provides an essentially better agreement between the experimental data and the fitting curves at low $T$. However, we cannot exclude that a large field enhances FM order in the S1 crystals and actually changes the entropy of the SG at low temperatures.
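As a quick consistency check of the quoted Debye temperatures, the low-$T$ Debye relation $\beta_3 = 12\pi^4 n R/(5\,\Theta_D^3)$ can be inverted numerically, assuming $n=5$ atoms per formula unit of KFe$_2$As$_2$:

```python
import numpy as np

R, n = 8.314, 5  # gas constant in J/(mol K); 5 atoms per formula unit of KFe2As2

def debye_temperature(beta3):
    """Theta_D in K from the T^3 coefficient beta3, given in J/(mol K^4)."""
    return (12 * np.pi ** 4 * n * R / (5 * beta3)) ** (1.0 / 3.0)

print(debye_temperature(0.59e-3))  # analysis (i):  ~254 K
print(debye_temperature(0.46e-3))  # analysis (ii): ~276 K
```

Both quoted values, $\Theta_D^i\approx 254$~K and $\Theta_D^{ii}\approx 276$~K, are reproduced by this inversion.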
Below $T_d\sim 6$~K the data for S2 deviate from the fit curve (Fig.~\ref{Fig:5}).
At $T_{\rm CG}\sim 4$~K, slightly above $T_c$ (see Fig.\ \ref{Fig:5}a), another magnetic anomaly is well visible in the SH data. Additionally, slightly above $T_{\rm CG}$ we observed a plateau-like feature in $\chi(T)$ at relatively low fields (see Fig.~\ref{Fig:4}a).
We ascribe $T_d$ to the freezing of the large-cluster dynamics, in accord with the behavior expected for a quantum G-phase (see Ref.~\cite{Vojta2010}), followed by the final formation of a CG phase at $T_{\rm CG}$ due to the RKKY interaction between the clusters in crystal S2, too (for an illustration see also Fig.~\ref{Fig:6}).
\begin{figure}[b]
\includegraphics[width=21pc,clip]{fig6.eps}
\caption{(Color online) Schematic phase diagram of
extrinsic magnetic moments (MM) driven
quantum phase transitions.
Notation of phases: SC - superconductivity, G - Griffiths, CG - cluster glass,
S1, S2 - samples from this work.
}
\label{Fig:6}
\end{figure}
\paragraph{Specific heat in the superconducting state}
Measurements in the SC state have shown a large residual Sommerfeld coefficient for all investigated samples (see inset of Fig.~\ref{Fig:5}a). The fit below 1~K gives a residual contribution of
$\gamma_{\rm res1}^{S1}\approx$43~mJ/mol$\cdotp$K$^2$ for crystal S1 and about
$\gamma_{\rm res1}^{S2}\approx$46~mJ/mol$\cdotp$K$^2$ for S2
\cite{remminorphases}. The $\gamma_{\rm res1}^{S1}$
is close to
$\gamma_{\rm CG}^{S1}=36$~mJ/mol$\cdotp$K$^2$
estimated for the normal state using
analysis (i). The closeness of these values would indicate that
$\gamma_{\rm SG}^{S1}$ is weakly
effected by the SC transition and also excludes essentially a
non-superconducting
volume fraction for
our the samples. The latter is also supported by the large absolute value of
the SH jump at $T_c$ compared to the reported in the literature values
\cite{Kim2011,Fukazawa2011}.
In the case of Ref.~\cite{Kim2011} it was observed that $C_p/T$ of the investigated crystals tends to zero at $T=0$ after an AFM-type magnetic transition at $T\sim$0.7~K. This demonstrates that almost all itinerant quasiparticles are gapped at $T=0$. Therefore, we conclude that the large
residual $\gamma_{\rm res}$ in the SC state of our samples is mainly due
to the magnetic
contribution from a CG. On the other hand, using
$\varepsilon_{\rm CG1.5}=14.9$~mJ/mol$\cdotp$K$^3$ from analysis (ii),
we get $\gamma_{\rm res2}^{S1}\approx$36~mJ/mol$\cdotp$K$^2$. This value is
nearly a half of $\gamma_{\rm el}^{ii}\approx 68$~mJ/mol$\cdotp$K$^2$.
In contrast to the conclusion obtained from analysis (i), this would mean that the CG phase in the SC state is different from the CG in the normal state, since we exclude a large non-SC part of our samples. This is possible, since the itinerant electrons responsible for the RKKY interaction are affected by the SC transition. Thus, at this stage we cannot decide which analysis, (i) or (ii), is more appropriate. Therefore, we estimate the
{\it intrinsic} $\gamma_{\rm el}\approx 52-68$~mJ/mol$\cdotp$K$^2$.
A more detailed report of the superconducting properties
including microscopic considerations will be given elsewhere.
\subsection{Possible disorder induced quantum phase transitions}
Up to now, structural investigations of the cleaved surfaces of the samples, such as EDX, XRD and SEM, did not reveal any secondary phases \cite{Hafiez2011,Aswartham2012}.
Therefore, we are forced to adopt a 'point'-defect model such as vacancies or interstitials of Fe atoms.
To compare the amount of magnetic clusters contributing to the glassy phases of our samples, we calculated the magnetic entropy
$S_{m}=\varint(C_{m}/T)dT$ using the magnetic contributions obtained above.
$S_m^{S2}$ for crystal S2 related to CG and G phases between 0 and
$T^*\approx 50$~K
(where the quantum
Griffiths behavior appears in the
magnetization data Fig.~\ref{Fig:4}a)
is $\sim$0.074RJ/mol-Fe$\cdotp$K$^2$. The estimate for crystal S1 related to the CG phase below $T_{f}\approx 87$~K gives an essentially higher value than $S_m^{S2}$: $\sim$0.64RJ/mol-Fe$\cdotp$K$^2$ in the case of validity of analysis (i) and $\sim$0.48RJ/mol-Fe$\cdotp$K$^2$ in the case of analysis (ii), respectively.
Hence, we conclude that in the normal state crystal S1 can contain up to 10 times more magnetic clusters than S2 does.
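The entropy integral itself is straightforward to evaluate numerically; for the glassy linear term of analysis (i), $C_m=\gamma_{\rm CG}T$, it reduces to $S_m=\gamma_{\rm CG}T$, which makes a simple check (the fine grid starting just above $T=0$ is an implementation detail to avoid division by zero):

```python
import numpy as np

gamma_CG = 36e-3                     # J/(mol K^2), from analysis (i)
T = np.linspace(1e-6, 87.0, 2000)    # K, up to T_f ~ 87 K
C_m = gamma_CG * T                   # glassy linear specific-heat term

f = C_m / T                          # integrand of S_m = int (C_m/T) dT
S_m = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(T))   # trapezoidal rule

print(S_m, gamma_CG * 87.0)          # both ~ 3.13 J/(mol K)
```

With the measured $C_m(T)$ tabulated on a temperature grid, the same trapezoidal sum applies directly.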
Summarizing our experimental
observations of disordered magnetic phases in K122, we can propose
a quantum phase transition of spin glass type with strong quantum G-phase
effects (see Fig.~\ref{Fig:6}) driven by some tuning
parameter $p$ which is responsible for the formation of magnetic moments
(MM) in K122. The physical nature of $p$ should be unraveled in future
investigations such as spin and nuclear magnetic resonance and/or M\"ossbauer
spectroscopy. These techniques can be helpful to estimate the amount and the
distribution of MM in K122 single crystals.
\section{Conclusions}
To summarize, analyzing magnetization and specific heat data, we found that even in high-quality KFe$_2$As$_2$ single crystals, glassy magnetic behavior of spin-glass, cluster-glass and Griffiths-phase type may occur near superconductivity and coexist with it.
The magnetic contribution is responsible for a large value of the nominal
Sommerfeld coefficient
$\gamma_{n}\sim$100~mJ/mol$\cdotp$K$^2$ of this compound.
The analysis of the SH data has shown that the magnetic contribution amounts to up to 50$\%$ of $\gamma_{n}$.
In this way, the intrinsic value of the Sommerfeld coefficient
$\gamma_{\rm el}\approx 52-68$~mJ/mol$\cdotp$K$^2$ was estimated. We observed
that various samples exhibit different disordered magnetic contributions
depending
on the amount and distribution of magnetic moments (MM).
This suggests an
extrinsic origin of MM which can be caused by point defects such as
vacancies or Fe
interstitials. Therefore, we proposed a scenario of disorder-induced spin-glass-type quantum phase transitions accompanied by strong quantum Griffiths
effects. Further investigations are required to elucidate the physical
origin
and the distribution of such MM.
\begin{acknowledgement}
We thank
J.A.\ Mydosh,
U.\ R\"o{\ss}ler, and
D.\ Evtushinsky
for
discussions.
Our work
was supported by the DFG (SPP 1458
and the Grants No. GR3330/2, WU 595/3-1 (S.W.))
as well as the IFW-Pakt f\"ur Forschung.
\end{acknowledgement}
\section*{High quality GeV proton acceleration driven by a circularly polarized laser pulse}\suppressfloats
\title{Target shape effects on monoenergetic GeV proton acceleration}
\author{Min Chen} \affiliation{Institut f\"{u}r Theoretische
Physik I, Heinrich-Heine-Universit\"{a}t D\"{u}sseldorf, 40225
D\"{u}sseldorf, Germany}
\author{Tong-Pu Yu}
\affiliation{Institut f\"{u}r Theoretische Physik I,
Heinrich-Heine-Universit\"{a}t D\"{u}sseldorf, 40225 D\"{u}sseldorf,
Germany}\affiliation{Department of Physics, National University of
Defense Technology, Changsha 410073, China}
\author{Alexander Pukhov
\thanks{To whom correspondence should be addressed: [email protected]}}
\affiliation{Institut f\"{u}r Theoretische Physik I,
Heinrich-Heine-Universit\"{a}t D\"{u}sseldorf, 40225 D\"{u}sseldorf,
Germany}
\author{Zheng-Ming Sheng}
\affiliation{Department of Physics, Shanghai Jiao Tong University,
Shanghai 200240, China \\and Beijing National Laboratory of
Condensed Matter Physics, Institute of Physics, Beijing 100080,
China}
\begin{abstract}
When a circularly polarized laser pulse interacts with a foil
target, there are three stages: pre-hole-boring, hole-boring and the
light sail acceleration. We study the electron and ion dynamics in
the first stage and find the minimum foil thickness requirement for
a given laser intensity. Based on this analysis, we propose to use a
shaped foil for ion acceleration, whose thickness varies
transversely to match the laser intensity. Then, the target evolves
into three regions: the acceleration, transparency and deformation
regions. In the acceleration region, the target can be uniformly
accelerated producing a mono-energetic and spatially collimated ion
beam. Detailed numerical simulations are performed to check the
feasibility and robustness of this scheme, such as the influence of
shape factors and surface roughness. A GeV mono-energetic proton
beam is observed in three-dimensional particle-in-cell simulations when a laser pulse with a focused intensity of $10^{22}~{\rm W/cm^2}$ is used. The energy conversion efficiency from the laser pulse to the accelerated proton beam is more than $23\%$. Synchrotron
radiation and damping effects are also checked in the interaction.
\end{abstract}
\pacs{52.40Nk, 52.35.Mw, 52.57.Jm, 52.65.Rr}
\maketitle
\section{INTRODUCTION}
Ion acceleration is one of the most important topics in the ultrashort
ultraintense (USUI) laser-plasma accelerator
field~\cite{TNSA,shock-acce,Esirkepov2004,other-acce}. Because
the laser-accelerated proton and ion beams are highly concentrated,
ultrashort and ultraintense, they have a broad spectrum of
potential applications, such as proton therapy~\cite{therapy}, proton
imaging~\cite{ionimaging}, ion beam ignition of laser fusion
targets~\cite{fusion}, \emph{etc}.
These applications usually require a high degree of beam
monochromaticity as well.
The typical energy spectrum of ion beams produced by the mechanism
of target normal sheath acceleration~(TNSA) shows an exponential
decay up to a high energy cutoff. To overcome this, several schemes
have been proposed, such as the use of a plasma
lens~\cite{lens,lenstheory}, double-layer
targets~\cite{double-layer} and special target
shaping~\cite{Schwoerer2006,Bulanov2002,Okada2006,Robinson2007}.
Theoretical analysis and numerical simulations have recently shown
that high energy mono-energetic ion acceleration can be achieved
when a circularly polarized~(CP) pulse is used. The reason is the
absence of the oscillatory term in the ponderomotive force of a CP
pulse and thus the suppression of electron
heating~\cite{Macchi2005}. A CP laser pulse gently pushes the whole
electron fluid and generates a moving local charge separation field
that accelerates the ions. It looks as if the whole target is
accelerated by the laser radiation pressure. The mechanism is thus
similar to the radiation pressure dominated acceleration~(RPDA)
proposed earlier by Esirkepov \textit{et al.}~\cite{Esirkepov2004}.
However, the threshold laser intensity is dramatically reduced when
a CP pulse is used. In one-dimensional geometry, if the laser pulse
is long enough, the target can be accelerated in multiple stages,
and in every stage the accelerated ions gain the same energy. A
narrow width of the final energy spectrum is the
result~\cite{Zhang2007-multi}. Certainly, the mechanism is sensitive
to a laser prepulse, as it may create a pre-plasma at the target
surface and destroy the accelerating structure.
Thanks to the development of USUI laser pulse and plasma mirror
technology, this can be controlled in today's experiments.
Recently, Klimo \textit{et al.}~\cite{Klimo2008}, Robinson
\textit{et al.}~\cite{Robinson2008} and Yan \textit{et
al.}~\cite{Yan2008} have studied the ultrathin target regime.
They have shown that ions are accelerated continuously and rotate in
phase space. This can be considered a special kind of multi-stage
acceleration~\cite{Zhang2007}, in which the second stage begins
before the end of the first one. The width of the spectrum is
further reduced, and mono-energetic ion beams are easily observed.
Qiao \textit{et al.}~\cite{Qiao2009} recently studied this kind of
acceleration and separated it into two different processes: hole
boring and light sail. They noticed the importance of the transition
between these two processes for reaching the final mono-energetic
spectrum. In the hole boring process, the laser pulse propagates
through the bulk of the foil target and compresses it. Afterwards,
when the compressed layer arrives at the rear side of the foil, the
light sail process begins. At this stage, the whole target dynamics
can be well described by a ballistic
equation~\cite{Tripathi2009}. There is a minimum target thickness
for the target to stay opaque so that the process can operate. To
obtain this minimum thickness, all earlier works use the balance
between the laser pressure and the charge separation field and
assume immobile ions during the balance build-up. This is correct
when the laser intensity is relatively small and the target density
is low. However, it overestimates the minimum target thickness when
the laser intensity and target density increase.
In this paper we study the process of the balance build-up, which we
call pre-hole-boring. We take the finite ion mass into account to
calculate the minimum foil thickness for ion acceleration, which is
usually also the optimal thickness. We then suggest the use of a
shaped target to achieve a uniform acceleration. For shaped targets,
we find that the acceleration structure can be maintained for a
longer time compared with usual flat targets, and the final spectrum
contains a mono-energetic peak. We then study the robustness of our
scheme by considering the influence of the shape parameters and
surface roughness on the final spectrum. This paper provides a
detailed description of our recently published
results~\cite{Chen2009}. We also consider the radiation damping
effect during the ion acceleration. Finally, we give a discussion
and a summary.
\section{Pre-hole-boring and optimal target thickness}
\begin{figure}\suppressfloats
\includegraphics[width=9cm]{fig_1_pre-hole-boring.eps}%
\caption{\label{pre-hole-boring} Analytical results for the
pre-hole-boring process. (a) Density distributions of the electrons
and ions at the end of the numerical calculation. (b) Dependence of
the final CEL velocity at the end of the calculation on the laser
intensity, target density and ion mass. (c) Evolution of the CEL
velocity and of the forces acting on the CEL as functions of the CEL
displacement~($l_1$). (d) CEL displacements at the end of the
calculation for different ion masses and laser intensities. The
black solid line represents the minimum thickness obtained in the
immobile-ion model.}
\end{figure}
Here, we study the electron and ion dynamics while the laser pulse
impinges on the target. When the laser pulse arrives at the front
surface of the target, electrons are accelerated and piled up by the
laser pressure. Later, ions start to move, accelerated by the charge
separation field. We call this stage pre-hole-boring because during
this time the ions have not yet caught up with the compressed
electron layer~(CEL), and the hole boring process has not yet
reached its stationary stage.
For convenience we use normalized variables here. The laser electric
field is normalized as $a=eE/m{\omega}c$; spatial and temporal
coordinates are normalized to the laser wavelength $\lambda$ and the
laser period $2\pi/\omega$, respectively. The particle velocity,
mass and plasma density are normalized to the speed of light in
vacuum $c$, the electron mass $m$ and the critical density
$n_c=m\omega^2/4{\pi}e^2$, respectively.
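For readers converting these normalized quantities to physical units, the standard laser-plasma relations can be evaluated with a small helper. This is a sketch of textbook conversions, not part of the original analysis; the numerical prefactor in the amplitude formula assumes a single linearly polarized field component.

```python
import math

def critical_density(wavelength_um):
    """Critical density n_c = m*omega^2/(4*pi*e^2), in cm^-3,
    for a laser of the given wavelength in microns."""
    return 1.115e21 / wavelength_um**2

def normalized_amplitude(intensity_W_cm2, wavelength_um):
    """Normalized field a = eE/(m*omega*c) from the peak intensity.

    Uses the standard relation a ~ 0.85*sqrt(I/10^18 W cm^-2)*lambda[um]
    for one linearly polarized field component; for circular polarization
    each of the two components carries half of the total intensity.
    """
    return 0.85 * math.sqrt(intensity_W_cm2 / 1e18) * wavelength_um
```

For a $1~\mu$m pulse at $10^{22}\,\mathrm{W/cm^2}$ this gives $a\approx85$ if all of the intensity is attributed to a single component, the same order as the $a_0=100$ used throughout this paper.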
If we consider only the ponderomotive force of the laser pulse
acting on the plasma electrons (the light pressure), the dynamics of
the CEL is governed by the equations:
\begin{eqnarray}\label{CEL}
n_0l_1\frac{d\gamma\beta}{dt}+n_0\gamma\beta\frac{dl_1}{dt} &=&
\frac{a^2_0}{\pi}\frac{1-\beta}{1+\beta}-{\pi}n_0^2l_1(l_1+l_s)\\
\frac{dl_1}{dt} &=& \beta,
\end{eqnarray}
\noindent where $\beta$ is the normalized velocity of the CEL, $\gamma$ is the
relativistic factor, $l_1$ is the displacement of the CEL, $n_0$ is
the initial plasma density, $a_0$ is the laser electric field and
$l_s$ is the skin length for the laser pulse. The second term on the
left side of the first equation comes from the mass increase of the
CEL. The first term on the right side is the contribution of the
laser pressure and the second one is due to the charge separation
field.
We treat ions as a hydrodynamical fluid and solve the
following hydrodynamical equations:
\begin{eqnarray}
\frac{{\partial}n}{{\partial}t}+\frac{{\partial}(n\beta_i)}{{\partial}x}
&=& 0, \\
\frac{{\partial}\beta_i}{{\partial}t}+\beta_i\frac{{\partial}\beta_i}{{\partial}x}
&=& \frac{2\pi}{m_i}E_x, \\
\frac{{\partial}E_x}{{\partial}x} &=& 4{\pi}e(n-n_e) \label{eq:ion}
\end{eqnarray}
\noindent where $n$ is the ion fluid density, $\beta_i$ its
velocity, $m_i$ the ion mass and $E_x$ the charge separation
field. The electron density distribution~($n_e$) is calculated from
the evolution of the CEL. We simply assume that the electrons from
the front of the target are piled up, compressed and uniformly
distributed within a region of the size of the skin length
$l_s=1/\sqrt{n_0}$. We can do this because the CEL is always in
front of the accelerated ions at this early stage. The assumption
becomes invalid as soon as the ions catch up with the CEL; our
calculation therefore ends before this time, once the hydrodynamic
velocity of the ions at the density peak exceeds the CEL velocity.
The distance the CEL has moved by this time is exactly the quantity
we are looking for: it equals the minimum target thickness for ion
acceleration~($l_{hydro}$). As we will see, this usually gives a
smaller value than the commonly used one~($l_{immo}=a/{\pi}n$)
obtained in the immobile-ion model.
We solve the system of equations (\ref{CEL})--(\ref{eq:ion})
numerically. In our calculations, we vary the ion mass, target
density and laser intensity to see their effects on the minimum
target thickness. The foil is assumed to be initially thick
enough~($0.2\lambda<x<1.8\lambda$) for the ions to catch up with
the CEL. The electron and ion density distributions at the end of
the calculation are shown in Fig.~\ref{pre-hole-boring}(a). These
density distributions also illustrate one of the criteria used to
end the calculation: the time when the peak of the ion density
distribution reaches the CEL. After this, the charge separation
field acting on the CEL deviates noticeably from the one used in
Eq.~(\ref{CEL}). The distance moved by the CEL at this time is taken
as the minimum target thickness $l_{hydro}$, since the ions can
catch up with the CEL once the target is thicker than $l_{hydro}$.
We can obtain $l_{hydro}$ from the numerical calculations shown in
Fig.~\ref{pre-hole-boring}(a); it can also be obtained from the
force balance, as we show in the following.
Fig.~\ref{pre-hole-boring}(b) shows the ratio of the CEL
velocity~($\beta_e$) at the end of the calculation to the
theoretical relativistic hole boring
velocity~$\beta_h=a/(a+\sqrt{m_in_0})$~\cite{Robinson2009}. As we
see, this ratio tends to a constant that depends only on the ion
mass. With this constant value, it is easy to calculate the
displacement of the CEL at this time from the near balance between
the charge separation force and the laser pressure:
\begin{equation}\label{balance}
{\pi}n_0^2l_1(l_1+l_s) = \frac{a_0^2}{{\pi}}\frac{1-\beta_e}{1+\beta_e}
\end{equation}
\noindent The force balance can be seen in
Fig.~\ref{pre-hole-boring}(c). The black line indicates the force
due to the laser pressure, the red line the charge separation force
and the blue line the force due to the mass increase. The first two
forces balance each other very quickly, once the CEL has moved
$0.1\lambda$ into the target. The CEL displacements obtained in
Fig.~\ref{pre-hole-boring}(a) fit well with those obtained from the
force balance calculation. The latter are shown in
Fig.~\ref{pre-hole-boring}(d) for different ion masses. The commonly
used value based on the immobile-ion model~($l_{immo}$) is also
shown by the black line. As we see, when the laser electric field
amplitude exceeds $a_0=100$, the present results~($l_{hydro}$) are
smaller than the value based on immobile ions~($l_{immo}$). The
lighter the ion, the larger the difference from the immobile-ion
model. $l_{hydro}$ is the minimum foil thickness for ion
acceleration. When the target is thinner than this minimum value,
the ions cannot catch up with the electrons and neutralize them. The
electrons are then completely swept away from the ions by the light
pressure, and the target subsequently becomes transparent to the
laser. The electrons are dispersed by the pulse and the acceleration
structure disappears. From the present calculation, we see that the
ions have already caught up with the electrons before the CEL moves
a distance of $l_{immo}$, so the CEL never completely separates from
the ions. Our model thus shows that the usual value~$l_{immo}$
overestimates the minimum thickness. This finding is especially
important for the selection of the target thickness in the
multicascade ion acceleration scheme recently proposed by
Gonoskov \textit{et al.}~\cite{Gonoskov2009}.
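Since Eq.~(\ref{balance}) is quadratic in $l_1$, $l_{hydro}$ follows in closed form once $\beta_e$ is fixed. The following sketch is our own illustration, not part of the original numerical calculation; the factor $\alpha\approx0.5$ in $\beta_e=\alpha\beta_h$ is read off Fig.~\ref{pre-hole-boring}(b).

```python
import math

def hole_boring_beta(a0, n0, mi):
    """Relativistic hole-boring velocity beta_h = a/(a + sqrt(mi*n0)),
    with the ion mass mi in electron masses and n0 in units of n_c."""
    return a0 / (a0 + math.sqrt(mi * n0))

def l_hydro(a0, n0, mi, alpha=0.5):
    """Minimum foil thickness from the force balance, Eq. (6).

    Takes beta_e = alpha*beta_h (alpha read from Fig. 1(b)) and solves
    pi*n0^2*l1*(l1+ls) = (a0^2/pi)*(1-beta_e)/(1+beta_e) for l1,
    with the skin length ls = 1/sqrt(n0). Lengths are in wavelengths.
    """
    ls = 1.0 / math.sqrt(n0)
    beta_e = alpha * hole_boring_beta(a0, n0, mi)
    rhs = (a0**2 / (math.pi**2 * n0**2)) * (1.0 - beta_e) / (1.0 + beta_e)
    # positive root of l1^2 + ls*l1 - rhs = 0
    return 0.5 * (-ls + math.sqrt(ls**2 + 4.0 * rhs))

def l_immobile(a0, n0):
    """Commonly used immobile-ion estimate l_immo = a/(pi*n)."""
    return a0 / (math.pi * n0)
```

For protons ($m_i=1836$) with $n_0=100$ and $a_0=100$ this gives $l_{hydro}\approx0.24\lambda$, below $l_{immo}\approx0.32\lambda$, consistent with the trend shown in Fig.~\ref{pre-hole-boring}(d).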
\section{Ion acceleration by use of shaped foil target}\label{sec_shapetarget}
\begin{figure}
\includegraphics[width=6cm]{fig_2_layout.eps}%
\caption{\label{layout} Layout of the interaction scheme. $\sigma_T$
defines the transverse Gaussian profile of the shaped target, $l_0$
is the maximal target thickness while $l_1$ is the cutoff thickness.
A circularly polarized laser pulse is incident on the foil target from
the left boundary.}
\end{figure}
So far we have discussed the minimum target thickness for ion
acceleration in the laser pressure dominated regime. Once the target
is thicker than this minimum value, the whole target is opaque to
the laser pulse and is accelerated in a hole-boring process with the
velocity $\beta_h$. When the CEL arrives at the rear of the foil,
the acceleration changes to a light sail process. The momentum
evolution of the target then satisfies~\cite{Robinson2008,Pegoraro2007}:
\begin{equation}\label{acce}
\frac{dp}{dt} =
\frac{2I}{c}\frac{\sqrt{p^2+\sigma^2c^2}-p}{\sqrt{p^2+\sigma^2c^2}+p},
\end{equation}
\noindent where $I$ is the laser intensity and $\sigma$ is the
target areal density. For the velocity evolution of the target one obtains:
\begin{equation}\label{velocity}
\frac{d\beta}{dt} =
\frac{1}{2{\pi}n_0m_ic}\frac{E^2(t,x,r)}{l_0}\frac{1}{\gamma^3}\frac{1-\beta}{1+\beta}.
\end{equation}
\noindent Here $E$ is the laser electric field amplitude, and $n_0$
and $l_0$ are the initial target density and thickness,
respectively. Eq.~(\ref{velocity}) shows that the energy spread of
the accelerated ions depends on the transverse variation of the
local ratio of the laser intensity to the target areal density. The
distance the ions travel under the laser pressure is
$s(r){\propto}{E^2(t,x,r)}{l^{-1}_0}$. An initially flat target is
therefore inevitably deformed if the laser intensity is transversely
non-uniform. The target deformation quickly destroys the
acceleration structure and deteriorates the beam quality. From
Eq.~(\ref{velocity}) we see that a target can be uniformly
accelerated if its areal density $\sigma$ is shaped properly. For
the usual transversely Gaussian
pulse~[$a=a_0\exp(-r^2/\sigma_L^2)$], one can use a target with a
Gaussian thickness distribution, as shown in Fig.~\ref{layout}. In
the following simulations, the distribution of the target thickness
along the transverse direction is:
\begin{equation}\label{thickness}
l=\max\{l_1,l_0\exp[-(r^2/\sigma^2_T)^m]\}.
\end{equation}
\noindent Here $r$ is the transverse distance to the laser axis, and
$l_1$, $l_0$, $\sigma_T$ and $m$ are the shape factors marked in
Fig.~\ref{layout}.
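The matching idea can be checked directly: with the profile of Eq.~(\ref{thickness}), the acceleration factor $F_{acc}=a^2/l$ driving Eq.~(\ref{velocity}) stays nearly flat wherever the Gaussian branch of the target follows the Gaussian of the pulse. The sketch below is our own illustration, with shape factors taken from the parameters used later ($a_0=100$, $\sigma_L=8\lambda$, $\sigma_T=6\lambda$, $m=1$).

```python
import math

def pulse_amplitude(r, a0=100.0, sigma_L=8.0):
    """Transversely Gaussian pulse, a = a0*exp(-r^2/sigma_L^2)."""
    return a0 * math.exp(-r**2 / sigma_L**2)

def target_thickness(r, l0=0.3, l1=0.1, sigma_T=6.0, m=1):
    """Shaped-foil thickness, Eq. (8):
    l = max{l1, l0*exp[-(r^2/sigma_T^2)^m]}."""
    return max(l1, l0 * math.exp(-(r**2 / sigma_T**2)**m))

def acceleration_factor(r):
    """F_acc = a^2/l, proportional to the drive term in Eq. (7)."""
    return pulse_amplitude(r)**2 / target_thickness(r)
```

On the Gaussian branch $F_{acc}(r)/F_{acc}(0)=\exp[-r^2(2/\sigma_L^2-1/\sigma_T^2)]$, so the drive varies by only a few percent over the central few wavelengths for these parameters; exact flatness would require $\sigma_T=\sigma_L/\sqrt{2}\approx0.71\,\sigma_L$, a simple consequence of Eq.~(\ref{velocity}) that is roughly consistent with the optimum $\sigma_T/\sigma_L\approx0.8$ found in the parameter scan below.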
\begin{figure}[t]
\includegraphics[width=9cm]{fig_3_division.eps}%
\caption{\label{division} Target partitions in the cases of (a) shaped
target and (b) flat target, according to the transparency
calculation. The parameters for the shaped target are:
$l_0=0.3\lambda, l_1=0.1\lambda, m=1, \sigma_T=6\lambda$. For the
flat target we just use $l_1=l_0=0.3\lambda$. The laser pulse has a
focus of $\sigma_L=8\lambda$. The red solid line represents the
target thickness distribution along the transverse direction; the
blue dashed line represents the minimum target thickness requirement
for an opaque target. The black line represents the acceleration
factor.}
\end{figure}
Before carrying out simulations for this kind of shaped target, we
first check the target transparency for a transversely Gaussian
pulse. In Fig.~\ref{division} we show the minimum thickness
requirement obtained theoretically from Eq.~(\ref{balance}),
together with the acceleration factor $F_{acc}=a^2/l$ along the
target transverse direction. Here $\beta_e$ is a function of $a$,
$n$ and $m_i$, and we take its value from
Fig.~\ref{pre-hole-boring}(b). The shaped and flat target thickness
distributions are also shown in the figure. As we see from
Fig.~\ref{division}(a), the shaped target is thicker in the center
than the minimum value~$l_{min}$. In this region, the acceleration
factor $F_{acc}$ is also almost uniform, so the target can be
accelerated uniformly. Outside this region, the target thickness is
smaller than $l_{min}$; the target is transparent to the laser pulse
there, and the ions cannot get an effective uniform acceleration.
Further out, the target is thicker than $l_{min}$ again and
$F_{acc}$ decreases with radius, so in this region the target is
opaque and will be accelerated and deformed. For the usual flat
target [see Fig.~\ref{division}(b)], however, the target thickness
is always larger than the minimum value, so it is opaque to the
laser pulse and the whole target belongs to the deformation region.
We will demonstrate this in the following PIC simulations.
By use of Eq.~(\ref{balance}) we can also obtain the final size of
the accelerated ion bunch in the target center. For the present
target shape the calculated bunch radius is:
$r_b=\sigma_L\sigma_T\sqrt{\ln[{\pi}n_0l_0a_0^{-1}\sqrt{(1+\beta_e)/(1-\beta_e)}](\sigma_L^2-\sigma_T^2)^{-1}}$,
where $\beta_e=\alpha\beta_h$ and $\alpha$ is a function of the ion
mass and laser intensity. Theoretically, $\alpha$ can be obtained
from Fig.~\ref{pre-hole-boring}(b), which gives $\alpha\approx0.5$.
However, in the PIC simulations we find that the accelerated bunch
size is reproduced much better when we choose $\alpha\approx1$. This
difference may result from our overly strict criterion for the
minimum target thickness. From the calculated bunch size we can also
obtain requirements on the shape factors, such as
$\sigma_T<\sigma_L$ and $l_1{\leq}l_0\exp(-r_b^2/\sigma_T^2)$. In
the next section we present results of multi-dimensional PIC
simulations, compare them with the above results and use them to
obtain the optimal shape factors.
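The bunch-radius estimate above is easy to evaluate numerically; the following sketch (ours, for illustration) uses the parameters of the 3D simulation below ($a_0=100$, $n_0=100$, $l_0=0.3\lambda$, protons, $\sigma_L=8\lambda$, $\sigma_T=6\lambda$) with $\alpha=1$ as adopted in the text.

```python
import math

def bunch_radius(a0, n0, l0, mi, sigma_L, sigma_T, alpha=1.0):
    """Accelerated-bunch radius from the text formula:

    r_b = sigma_L*sigma_T*sqrt( ln[pi*n0*l0/a0 * sqrt((1+be)/(1-be))]
                                / (sigma_L^2 - sigma_T^2) ),
    with beta_e = alpha*beta_h and beta_h = a0/(a0 + sqrt(mi*n0)).
    Requires sigma_T < sigma_L and a positive logarithm argument > 1.
    """
    beta_h = a0 / (a0 + math.sqrt(mi * n0))
    be = alpha * beta_h
    arg = math.pi * n0 * l0 / a0 * math.sqrt((1.0 + be) / (1.0 - be))
    return sigma_L * sigma_T * math.sqrt(
        math.log(arg) / (sigma_L**2 - sigma_T**2))
```

With these parameters the formula gives $r_b$ of a few wavelengths, the same order as the bunch radius of about $4.5\lambda$ observed in the 3D run below.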
\subsection{Particle-in-cell simulations}
We use the VLPL code for both 2D and 3D
simulations~\cite{Pukhov-vlpl}. First, we perform a 3D simulation to
demonstrate ion acceleration from the shaped foil target, and then
we investigate in detail the effects of the target parameters on the
beam quality with less time-consuming 2D simulations.
\begin{figure}
\includegraphics[width=9cm]{fig_4_3dsim.eps}%
\caption{\label{3dsim}Proton density distributions in three
dimensional simulations at $t=15T_0$ (a) and $20T_0$ (b). The
single-headed arrow indicates the high quality proton bunch with a
higher energy and better collimation. The initial target with
$\sigma_T=6\lambda$ is located between $x=2.0\lambda$ and
$2.3\lambda$, irradiated by a CP laser pulse with
$\sigma_L=8\lambda$. (c) Proton energy spectra at $t=20T_0$. Here,
the normal flat target with $l_0=l_1=0.3\lambda$ is located at the
same position as the shaped foil target and is irradiated by the
same laser pulse.}
\end{figure}
The simulation box in the 3D-simulation is $25\lambda \times27\lambda
\times27\lambda$, which consists of $2500 \times 225 \times225$
cells. A circularly polarized laser pulse with a Gaussian profile in
space and a trapezoidal profile (linear growth - plateau - linear
decrease) in time is normally incident on the foil target. The
temporal profile satisfies:
\begin{equation} \label{eq:4}
a=\left\{ \begin{aligned}
& a_0\exp(-\frac{y^2}{\sigma_L^2})t, \quad 0\leq t<1T,\\
& a_0\exp(-\frac{y^2}{\sigma_L^2}),\quad 1T\leq t\leq6T,\\
& a_0\exp(-\frac{y^2}{\sigma_L^2})(7-t),\quad 6T<t\leq7T,
\end{aligned} \right.
\end{equation}
\noindent where $a_0=100$ is the normalized amplitude of the laser
electric field, $\sigma_L=8\lambda$ is the focal spot radius, and
$T=3.3\,$fs is the laser period for an infrared laser with $1~\mu$m
wavelength. The initial target is located between $x=2.0\lambda$ and
$2.3\lambda$, with a transversely varying thickness as shown in
Fig.~\ref{layout}. Here, the cutoff thickness is $l_1=0.15\lambda$
and the target width is $\sigma_T=6.0\lambda$. The plasma density is $n_0=100$.
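For reference, the trapezoidal envelope of Eq.~(\ref{eq:4}) can be written as a single function. This is a sketch in normalized time units (periods $T$), assuming the final ramp falls linearly to zero at $t=7T$.

```python
import math

def envelope(t, y, a0=100.0, sigma_L=8.0, t_rise=1.0, t_flat=5.0, t_fall=1.0):
    """Gaussian-in-space, trapezoidal-in-time pulse amplitude, Eq. (9).

    Linear growth over t_rise, a plateau over t_flat, then a linear
    decrease assumed to reach zero at t = t_rise + t_flat + t_fall.
    Times are in laser periods, y in wavelengths.
    """
    transverse = a0 * math.exp(-y**2 / sigma_L**2)
    t1, t2, t3 = t_rise, t_rise + t_flat, t_rise + t_flat + t_fall
    if t < 0.0 or t > t3:
        return 0.0
    if t < t1:
        return transverse * t / t1       # linear growth
    if t <= t2:
        return transverse                # plateau
    return transverse * (t3 - t) / t_fall  # linear decrease
```

The 2D runs below use the same shape with a longer plateau (1T-8T-1T), i.e. `t_flat=8.0`.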
Fig.~\ref{3dsim} presents the proton density distributions at two
times: $t=15T_0$ and $20T_0$. One sees that the center part of the
target is strongly accelerated and soon breaks away from the rest of
the target. As expected, the deformation of the target center is
well suppressed and a peak appears in the proton energy spectrum,
shown in Fig.~\ref{3dsim}(c). The number of protons with energies
above 0.65~GeV is about $8.3\times10^{11}$, and their total energy
is about 120~J. In contrast, for a usual flat target the energy
spectrum shows a typical exponential decay due to the easy heating
and deformation of the target. By diagnosing the divergence angle
distribution, we find that the average divergence of these energetic
protons from the shaped foil target is less than $5^\circ$. The
whole target can be stably accelerated until the end of the laser
irradiation, with an energy conversion efficiency as high as
$23.1\%$.
In Fig.~\ref{3dsim}(a,b), it is easy to distinguish the deformation,
acceleration and transparency regions. The center part is the
acceleration region, which forms the energy peak in the spectrum.
Around it is the transparency region, where the electron density
quickly becomes low enough that the laser pulse can penetrate and
pass through the target. This region separates the acceleration
region from the deformation region located in the outer part of the
target; it effectively suppresses the target heating and protects
the acceleration region. For a flat target, a transparency region
can also form due to the density dilution during the target
deformation. However, this process is much slower and more gradual.
Target heating is then inevitable, and no clear acceleration region
forms. The pre-shaped target makes the final isolated accelerated
bunch possible. The radius of the bunch in our simulation is around
$4.5\lambda$, which is close to the theoretical estimate for $r_b$.
Although our 3D simulations show the possibility of accelerating a
GeV mono-energetic proton beam, a well shaped target is required. In
experiments, it may be difficult to fabricate a perfectly matched
target without any deviations, and the final beam quality is
certainly related to the target shape factors. With real
applications in mind, we now demonstrate the robustness of the
shaped target scheme by discussing in detail the effects of these
factors on the final beam bunch. To save computational time, we use
2D simulations for this purpose. A series of 2D simulations has been
performed. The simulation box is $32\lambda\times32\lambda$, sampled
by $3200\times3200$ cells. The foil target is initially located
between $x=5.0\lambda$ and $5.3\lambda$. The CP laser pulse has the
same profile in time and space as in the 3D case above, except that
the pulse duration is now $\tau=10T$, corresponding to a 1T-8T-1T
temporal profile.
\subsection{Dependence on the shape factor}
\begin{figure}
\includegraphics[width=9cm]{fig_5_2dcutoff.eps}%
\caption{\label{cutoff} Proton energy distributions in the x-y plane
for different cutoff thicknesses $l_1$: (a) $0.05\lambda$, (b)
$0.15\lambda$ and (c) $0.25\lambda$. All other laser pulse and
target parameters are the same. (d) Proton energy spectra at
$t=30T_0$. Here the flat target refers to one with $n_0=100$ and
$l_0=l_1=0.3\lambda$.}
\end{figure}
We first consider the influence of the cutoff thickness $l_1$ on the
beam quality. In the simulations we fix all other parameters and
change only $l_1$. The ratio of the target width to the laser
focus~($\sigma_T/\sigma_L$) is kept at 7/8. Fig.~\ref{cutoff} shows
the simulation results. We can see that the spatial energy
distributions in Fig.~\ref{cutoff}(a) and (b) are almost the same;
the cutoff thicknesses for these two cases are $0.05\lambda$ and
$0.15\lambda$, respectively. The corresponding energy spectra are
shown in Fig.~\ref{cutoff}(d). Again, the energy distributions of
the high energy protons in these two cases coincide. However, when
$l_1$ is increased to $0.25\lambda$, the energy spectrum changes
significantly, as shown in Fig.~\ref{cutoff}(d): the peak energy
decreases, the cutoff energy increases and the spectrum tends
towards that of a flat target. This shows that there exists a
threshold value for $l_1$, above which the spectra are significantly
different. In additional simulations we find that the threshold is
about $0.20\lambda$ in the present case, close to the theoretical
value from our analysis above. When the cutoff thickness is smaller
than $l_0\exp(-r_b^2/\sigma_T^2)$, the accelerated bunch size is
almost constant, as shown in Fig.~\ref{cutoff}(a) and (b). When it
is larger, no clear transparency region separates the acceleration
and deformation regions; target deformation proceeds continuously
along the target, the effectively accelerated bunch is smaller, as
shown in Fig.~\ref{cutoff}(c), and the final spectrum is close to
the flat target case.
\begin{figure*}
\includegraphics[width=14cm]{fig_6_2dsigma.eps}
\caption{\label{2dsigma} Proton energy distributions in the x-y
plane for different $\sigma_L$ and $\sigma_T$ at $t=30T_0$: (a)
$\sigma_L=6\lambda$, $\sigma_T=6\lambda$, (b) $\sigma_L=8\lambda$,
$\sigma_T=6\lambda$, (c) $\sigma_L=12\lambda$, $\sigma_T=6\lambda$,
and (d) $\sigma_L=12\lambda$, $\sigma_T=10\lambda$. Corresponding
proton energy distributions as functions of the divergence angle
are shown in (e)-(h).}
\end{figure*}
The most important factors are the matching parameters $\sigma_T$
and $\sigma_L$. In the following we check their effects on the beam
quality. Fig.~\ref{2dsigma} shows typical simulation results for
different $\sigma_T$ and $\sigma_L$. The top four panels show the
proton energy distributions in space, while the bottom four show the
corresponding angular distributions. We fix $\sigma_T=6\lambda$ and
increase $\sigma_L$ from $6\lambda$ to $12\lambda$. When $\sigma_T$
is close to $\sigma_L$, the target deformation occurs: most protons
are located in the deformation region and the target evolves into a
natural cone. The corresponding energy-divergence distribution is
widely spread, as shown in Fig.~\ref{2dsigma}(e). With increasing
laser focus, the center part of the target is uniformly accelerated
so that it can break away from the rest of the target. The three
regions mentioned before can be easily distinguished in
Fig.~\ref{2dsigma}(b,c). A bunch of protons with higher energy and
better collimation is formed, see Fig.~\ref{2dsigma}(f,g). The
radius of the bunch decreases with $\sigma_L$, which confirms the
theoretical analysis: when $\sigma_L$ increases, the transparency
region extends towards the target center, so a larger laser focus
(or laser energy) leads to a smaller accelerated bunch. These
results show the importance of a well matched target.
Fig.~\ref{2dsigma}(d) and (h) correspond to a well matched case with
the laser focus $\sigma_L=12\lambda$. When we increase the target
width towards $\sigma_L$, the acceleration region broadens and more
ions are uniformly accelerated.
\begin{figure}[b]
\includegraphics[width=9cm]{fig_7_2dspec.eps}%
\caption{\label{2dspec} Proton energy spectra (a) and divergence
angle distributions (b) for different laser focus radii $\sigma_L$
and target widths $\sigma_T$ at $t=30T_0$.}
\end{figure}
Fig.~\ref{2dspec} shows both the energy spectra and the angular
distributions for these cases. As expected, there is a clear
quasi-monoenergetic peak both for $\sigma_T/\sigma_L=6/8$ and for
$\sigma_T/\sigma_L=10/12$; the peak energies are about 0.85~GeV and
0.80~GeV, respectively, and the corresponding
full-width-at-half-maximum divergence angles are about $6^\circ$ and
$4^\circ$. Obviously, in the well matched cases the larger the laser
focus, the more protons are accelerated. On the contrary, for the
imperfectly matched case, both the peak energy and the total number
of accelerated protons decrease. For the unmatched case, no clear
peak appears and the proton number decreases further.
\begin{table}
\caption{Available and optimum values of $\sigma_T/\sigma_L$}
\label{tab-sigma}
\begin{tabular}{|c|cccccc|c|}
\hline
&\multicolumn{7}{c}{$\sigma_T/\sigma_L$}\vline \\
\cline{2-8}
\raisebox{2.3ex}[0pt]{$\sigma_L$}&\multicolumn{6}{c}{Available values}\vline&Optimum \\
\hline
6 & 0.5 & 0.583 & 0.677 & 0.75 & 0.833 & 0.916 & 0.75\\
\hline
8 & 0.375 & 0.5 & 0.6 & 0.75 & 0.8125 & 0.875 & 0.8125\\
\hline
10 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 0.8\\
\hline
\end{tabular}
\end{table}
To obtain the optimal ratio $\sigma_T/\sigma_L$, we perform the
parameter scan shown in Table~\ref{tab-sigma}. Here, all target and
laser parameters are the same except $\sigma_T$ and $\sigma_L$. The
available values of $\sigma_T/\sigma_L$ are those for which a high
quality proton bunch with a quasi-monoenergetic peak and a low
divergence angle is observed. The optimum value indicates the best
bunch quality, i.e. the narrowest energy spread and the lowest
divergence. The tolerable values of $\sigma_T/\sigma_L$ lie around
0.50-0.90, while the optimum value is about 0.80. These simulations
supplement our analytical results, which only give the condition
$\sigma_T/\sigma_L<1$, and provide some quantitative guidance for
experiments.
\subsection{Effect of target surface roughness}
Since in our scheme the target thickness is less than the laser
wavelength, i.e. of nanometer scale, a relatively large surface
roughness of the target may be inevitable in real experiments and
may influence the final accelerated ion beam. Here we check its
effects by comparing three simulations with different surface
roughness: (a) a smooth surface, (b) $10\%$ roughness and (c) $30\%$
roughness. The initial targets are shown in
Fig.~\ref{2droughness}(a). In order to resolve the surface
roughness, both the longitudinal and transverse cell sizes must be
small enough, which leads to extremely small steps in space and time
and makes the simulations extremely time consuming. Therefore, we
only present the simulation results at an early time, $t=10T_0$.
This time is, however, already long enough to see the final effects.
Fig.~\ref{2droughness}(c) shows the proton energy spectra for these
cases. We notice that all the spectra show a clear energy peak
despite the different surface roughness. Yet, for the target with
30\% surface roughness, the peak energy is about 0.25~GeV, higher
than the value of 0.2~GeV obtained in the cases with lower
roughness. Similarly, the cutoff energy is also higher than in the
other two cases. The differences between the two lower-roughness
cases are much smaller. The main effect of the target roughness is
to increase the laser absorption and the conversion efficiency of
laser energy into hot electrons. These electrons are easily
dispersed in space and awaken TNSA. This can be seen in
Fig.~\ref{2droughness}(b), which shows the electron energy spectra:
the target with $30\%$ roughness has a much higher electron
temperature, while the other two cases are similar. In addition to
the spectra, we also check the angular distributions, shown in
Fig.~\ref{2droughness}(d). There is no obvious difference between
the smooth target and the target with $10\%$ roughness. Generally
speaking, the simulations show that a roughness of $10\%$ is
acceptable, which gives quantitative guidance for experimental
demonstrations.
\begin{figure}
\includegraphics[width=9cm]{fig_8_2droughness.eps}%
\caption{\label{2droughness} (a) Shaped foil targets with different
surface roughness. Electron energy spectra (b) and proton energy
spectra (c) for different surface roughness at $t=20T_0$. (d) Proton
divergence angle distributions at $t=20T_0$. Here, the cutoff
thickness is $0.15\lambda$ in all cases.}
\end{figure}
\subsection{Radiation damping effect}
Further, we perform simulations to check the radiation damping
effects, which were found to be very important for transparent nano
targets~\cite{Kulagin2007}. To simulate the damping effect we
suppose that, at any given moment of time, the electron radiation
spectrum is synchrotron-like~\cite{kiselev-synchrotron}. The
critical frequency $\omega_c$ is given by the relation
$\omega_c=(3/2)\gamma^2|{\Delta}P_\perp|/(mc\,dt)$, where
${\Delta}P_\perp$ is the variation of the transverse electron
momentum during the time step $dt$. In our PIC code, we
follow the trajectory of each electron and calculate the emission
during the interaction. We account for the damping effect by
considering the electron's recoil due to the emitted radiation; the
recoil force is included in the equations of electron motion. We perform
3D simulations with the same parameters as mentioned above,
with the radiation module switched on this time.
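The critical-frequency bookkeeping above can be sketched in a few lines; this is an illustrative helper, not our PIC implementation, and the unit overrides in the test are hypothetical:

```python
# Physical constants in SI units
M_E = 9.109e-31  # electron rest mass [kg]
C = 2.998e8      # speed of light [m/s]

def critical_frequency(gamma, dp_perp, dt, m=M_E, c=C):
    """Synchrotron critical frequency omega_c = (3/2) * gamma^2 * |dP_perp| / (m c dt),
    where dp_perp is the change of the transverse electron momentum over the
    time step dt and gamma is the electron Lorentz factor."""
    return 1.5 * gamma ** 2 * abs(dp_perp) / (m * c * dt)
```

In the simulation, this quantity is evaluated per electron per time step from the tracked momentum change.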
Fig.~\ref{3ddamping} shows the synchrotron radiation distribution.
Here $\theta$ is the angle between the directions of photon emission and
laser propagation, and $\phi$ is the azimuthal angle. We find that the
emitted x-ray photons are mainly radiated at angles of
$\theta=35^\circ$--$45^\circ$. In total, 8~J of energy (about $1.54\%$ of the laser
energy) is transferred to the x-ray photons, with an average photon
energy of about 1.2~MeV. Regarding electron cooling, our simulation
shows no obvious change in the final electron energy
spectrum. Only the most energetic electrons (400~MeV~$<E_k<$~800~MeV) are
slightly cooled. The total electron energy is about 0.123~J less
than in the simulation without the radiation effect. This
means the electrons quickly regain their radiated energy
from the driving pulse. For the ion acceleration, our simulation
results show almost no observable difference in the final
ion spectra between the cases with and without radiation damping.
Clearly, this slight electron cooling cannot benefit the
ion acceleration much for these laser-plasma parameters.
\begin{figure}
\includegraphics[width=7cm]{fig_9_2ddamping.eps}%
\caption{\label{3ddamping} Synchrotron radiation distribution in the
angular-energy space at $t=20T_0$. The colorbar represents the
photon number.}
\end{figure}
\section{Discussion and Summary}
We should point out that the target shaping only helps
to reduce electron heating and keeps the acceleration more
uniform over a longer time. However, instabilities still
exist~\cite{Pegoraro2007,Chen2008} in the accelerated plasmas.
Although under the perfect matching condition the proton number can be
increased by enlarging the laser focus, as Fig.~\ref{2dsigma}(d)
suggests, the surface instability will develop after some time and destroy
the acceleration structure, which limits the final proton energy.
Suppression of such instabilities is an important task
both for laser ion acceleration itself and for the fast
ignition of inertial fusion targets based on laser-accelerated ion
beams~\cite{fusion}. These are important issues for future work.
{In summary, by multi-dimensional PIC simulations, we have examined
the effects of target shape on GeV mono-energetic proton beam
acceleration. We propose to use a shaped target matched to the laser
intensity profile. With such a target, a transparency region separates
the acceleration region from the deformation region, preserving the
acceleration structure for a longer time than with a usual flat
target. The final spectrum shows a clear mono-energetic peak when a
well-shaped target is used. The effects of the shape factors and target
surface roughness are also examined, which demonstrates the
robustness of our scheme. Finally, synchrotron radiation and damping
are examined; they have little influence on the ion acceleration in
the present parameter regime.}
\begin{acknowledgments}\suppressfloats
This work is supported by the DFG programs TR18 and GRK1203. MC
acknowledges support by the Alexander von Humboldt Foundation. TPY
thanks the scholarship awarded by the China Scholarship Council (CSC No.
2008611025). ZMS is supported in part by the National Natural Science
Foundation of China (Grants No. 10674175, 60621063) and the National
Basic Research Program of China (Grant No. 2007CB815100).
\end{acknowledgments}
\section*{ACKNOWLEDGMENTS}
\small{
This work has taken place in the Learning Agents Research Group (LARG) at UT Austin. LARG research is supported in part by NSF (CPS-1739964, IIS-1724157, FAIN-2019844), ONR (N00014-18-2243), ARO (W911NF-19-2-0333), DARPA, GM, Bosch, and UT Austin's Good Systems grand challenge. Peter Stone serves as the Executive Director of Sony AI America and receives financial compensation for this work. The terms of this arrangement have been reviewed and approved by the University of Texas at Austin in accordance with its policy on objectivity in research.}
\section{APPROACH}
\label{sec::approach}
In this section, we first formulate the multi-robot hallway passing problem. We then describe our solution, Perceptual Hallucination for Hallway Passing (\textsc{phhp}).
\subsection{Problem Formulation}
We consider a hallway passing scenario in which two robots must pass each other in confined hallways that are barely wide enough to permit collision-free passing.
We specifically consider this scenario in four widely observed hallway shapes: I-, L-, T-, and Z-shaped.
In this context, let $\vec{p}^1$ and $\vec{p}^2$ denote the two-dimensional global positions of the first and second robot, respectively.
Further, let $\mathcal{C}$ denote a set of global points covering the center of the hallway.
We assume that each robot is equipped with a two-dimensional LiDAR scanner, and we denote the LiDAR measurements obtained by each robot at time $t$ as $\mathbf{l}^1_t$ and $\mathbf{l}^2_t$.
We also assume that both robots use a global planner and a collision-free local navigation system designed for static environments (e.g., ROS \texttt{move\textunderscore base}\cite{ros_move_base}).
In the context of the above scenario, we seek to investigate whether perceptual hallucination can improve the performance of an existing navigation system to reduce collisions and decrease passing delay.
Mathematically, we use $h$ to denote the \emph{hallucination function}, i.e., the sensor reading $\mathbf{l}_\mathcal{H} = h(\mathbf{l}, \mathcal{H})$ is modified by transforming a LiDAR scan $\mathbf{l}$ such that it appears as if virtual obstacles specified by an {\em obstacle field} $\mathcal{H}$ (i.e., the shape and location of hallucinated obstacles) were added to the current environment.
To assure safety, $\mathbf{l}_\mathcal{H}$ only contains \emph{additional} obstacles, i.e., to compute the depth value at any particular bearing $k$, the \emph{minimum} value between the real scan, $\mathbf{l}$, and a hallucinated scan corresponding to only obstacles in $\mathcal{H}$, $\mathbf{v}_\mathcal{H}$, is sent to the robot.
Finally, we only consider here cases in which the hallucinated obstacle field is static.
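As a minimal sketch, the per-bearing minimum described above can be written as follows; the function name and the list-of-ranges representation are our own, not taken from the \textsc{phhp} implementation:

```python
def hallucinate_scan(scan, virtual_scan):
    """Per-bearing minimum of the real LiDAR ranges and the ranges produced
    by the virtual obstacle field alone, so hallucination only ADDS
    obstacles and never hides a real one."""
    return [min(real, virtual) for real, virtual in zip(scan, virtual_scan)]
```

A real return of 5 m with a virtual obstacle at 3 m yields 3 m at that bearing, while a real wall at 2 m is never overwritten by a farther virtual reading.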
In order to use perceptual hallucination, we must determine what each robot should use for its hallucinated obstacle field $\mathcal{H}$ to enable better passing.
In general, $\mathcal{H}$ could consist of an arbitrary number of obstacles, each with an arbitrary shape.
However, to make the problem tractable, we consider only obstacle fields comprised of the union of a set of contiguous circles of some fixed radius.
We denote such obstacle fields as $\mathcal{H}_{\boldsymbol{\theta}}(\mathcal{C})$, parameterized by $\boldsymbol{\theta} = (r, {\Delta}r, k_{begin}, k_{end})$, where $r$ specifies the radius of each circle,
and $k_{begin}$ and $k_{end}$ specify where the first and last circles are placed along the hallway center points $\mathcal{C}$, expressed as fractions of the robot's detection range.
Between these starting and ending circles, $\mathcal{H}_{\boldsymbol{\theta}}(\mathcal{C})$ contains a new circle every 5cm.
Finally, $\Delta r$ specifies the distance from the center of the circle to the hallway center points, $\mathcal{C}$.
\textsc{phhp} works best when both robots use the same $\mathcal{H}_{\boldsymbol{\theta}}$, but the system can still improve multi-robot navigation with different $\mathcal{H}_{\boldsymbol{\theta}}$, as long as each $\mathcal{H}_{\boldsymbol{\theta}}$ is valid.
See Section \ref{ssec:robustness} for more detail.
An overview of the hallway passing scenario and how perceptual hallucination is applied is illustrated in Figure \ref{fig:overview}.
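The parameterization $\boldsymbol{\theta} = (r, \Delta r, k_{begin}, k_{end})$ can be illustrated with a simplified generator. The sketch below assumes a straight hallway whose center line runs along the x-axis, with the lateral offset applied along +y; the actual system follows the curved center points $\mathcal{C}$:

```python
def obstacle_field(detect_range, r, dr, k_begin, k_end, spacing=0.05):
    """Return (x, y, radius) circles placed every `spacing` meters (5 cm)
    between fractions k_begin and k_end of the detection range, offset
    laterally by dr. Simplified to a straight center line on the x-axis."""
    n = int(round((k_end - k_begin) * detect_range / spacing)) + 1
    return [(k_begin * detect_range + i * spacing, dr, r) for i in range(n)]
```

For an 8 m detection range with $k_{begin}=0.3$ and $k_{end}=0.6$, this places circles from 2.4 m to 4.8 m ahead of the robot.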
In order to quantify which $\mathcal{H}_{\boldsymbol{\theta}}$ is best for hallway passing, we define a hallway-passing cost function.
For a given hallway passing episode, we define this cost function to encourage both fast and safe passing, i.e.,
\begin{equation}
C(\mathcal{H}_{\boldsymbol{\theta}}) = \frac{\textsc{ttd}_{1}(\mathcal{H}_{\boldsymbol{\theta}}) + \textsc{ttd}_{2}(\mathcal{H}_{\boldsymbol{\theta}})}{2} + c_{coll}\cdot\mathbb{1}_{coll} \; ,
\label{eq:cost}
\end{equation}
where $\textsc{ttd}_{i}(\mathcal{H}_{\boldsymbol{\theta}})$ denotes the amount of time (in seconds) it takes for robot $i$ to reach its goal using $\mathcal{H}_{\boldsymbol{\theta}}$, and $\mathbb{1}_{coll}$ is an indicator function that is 1 if a collision occurred in the episode and 0 otherwise.
In our implementation, we set the collision penalty $c_{coll}$ to 100.
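Equation (\ref{eq:cost}) translates directly into a per-episode scoring function; the sketch below uses our own function name and boolean collision flag:

```python
def episode_cost(ttd_1, ttd_2, collided, c_coll=100.0):
    """Hallway-passing cost: mean time-to-destination of the two robots
    plus a large penalty if any collision occurred during the episode."""
    return 0.5 * (ttd_1 + ttd_2) + (c_coll if collided else 0.0)
```

The penalty of 100 dominates typical traversal times, so collision-free parameters are strongly preferred by the optimizer.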
With this setup, the problem of finding the best obstacle field to hallucinate for the hallway passing problem becomes one of finding the $\boldsymbol{\theta}$ that minimizes this cost, i.e.,
\begin{equation}
\boldsymbol{\theta}^* = \arg \min_{\boldsymbol{\theta}} C(\mathcal{H}_{\boldsymbol{\theta}}) \; .
\label{eq:phhpprogram}
\end{equation}
\subsection{Optimal Hallucination}
\label{sec::optimalhallucination}
We solve Equation (\ref{eq:phhpprogram}) and find an effective $\boldsymbol{\theta}^*$ using the Covariance Matrix Adaptation Evolution Strategy (\textsc{cma-es}) \cite{hansen2003ecj} algorithm, a population-based, black-box optimizer that selects and evaluates successive generations of samples.
In each generation, \textsc{cma-es} samples $N$ data points $\boldsymbol{\Theta}$ from a multivariate normal distribution and measures the cost of each sample.
Then, the mean of the next generation is updated by a weighted sum of the samples, in which samples with lower cost receive higher coefficients. Finally, the covariance matrix is updated with the same coefficients, subject to a decay factor.
This process is repeated for successive generations until the standard deviation of the sampling distribution falls below a threshold.
Finally, the minimum-cost sample across all generations is returned as $\boldsymbol{\theta}^*$.
To evaluate a particular sample $\boldsymbol{\theta}$ when running \textsc{cma-es}, we compute $C(\mathcal{H}_{\boldsymbol{\theta}})$ by executing a hallway passing episode with perceptual hallucination in simulation.
For each episode, two robots are initialized at each side of an I- or L-shaped hallway and given navigation goals that require them to pass one another.
Then each robot begins navigating using its base navigation system.
When the robots detect one another via communication, each robot employs \textsc{phhp} with $\mathcal{H}_{\boldsymbol{\theta}}(\mathcal{C})$, where the hallway center points $\mathcal{C}$ are approximated by the path provided by a global planner that seeks to maximize the margin of the path.
The episode ends when both robots have successfully reached their respective goal locations.
The amount of time it takes robot $i$ to reach its goal is recorded as $\textsc{ttd}_{i}$.
Collisions are defined as any time when a robot is in contact with either another robot or a wall.
In order to ensure $\boldsymbol{\theta}^*$ is robust to differences between conditions in simulation and those in the real world, we further employ \emph{domain randomization} \cite{jakobi1997, tobin2017domain}.
That is, we compute the \textsc{cma-es} objective for each sample by averaging costs obtained over several simulation episodes, each with randomized starting delay $t_i$ and detection range $D_i$ sampled from uniform random distributions, $\mathcal{U}_{[0,t_{max}]}$ and $\mathcal{U}_{[d_{min},d_{max}]}$.
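The randomization step can be sketched as below; the bounds mirror the uniform distributions above, and the helper name is our own:

```python
import random

def randomized_episode_params(t_max=2.0, d_min=7.0, d_max=9.0, rng=random):
    """Sample a starting delay and a detection range for each robot from
    U[0, t_max] and U[d_min, d_max], as done for each training episode."""
    t1, t2 = rng.uniform(0.0, t_max), rng.uniform(0.0, t_max)
    d1, d2 = rng.uniform(d_min, d_max), rng.uniform(d_min, d_max)
    return t1, t2, d1, d2
```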
The pseudocode of \emph{perceptual hallucination for hallway passing} (\textsc{phhp}) is given in Algorithm \ref{alg:train}.
\begin{algorithm}[ht]
\begin{algorithmic}[1]
\caption{ Find optimal Hallucination with \textsc{cma-es} }
\label{alg:train}
\Require $r, \Delta r, k_{begin}, k_{end}$
\State CMAES.initialize($r, \Delta r, k_{begin}, k_{end}$)
\State $\sigma$ $\gets$ 0.1
\State best\_cost $\gets \infty$
\State ${\boldsymbol{\theta}^*} \gets$ None
\While{$\sigma \geq threshold$}
\State $\boldsymbol{\Theta}$ $\gets$ CMAES.generate\_samples()
\State cost $\gets$ []
\For{$\mathbf{k} \gets 1$ to $N$}
\State $\boldsymbol{\theta}$ $\gets$ $\boldsymbol{\Theta}[k]$
\State $t_{1},t_{2} \gets \mathcal{U}_{[0,t_{max}]},\mathcal{U}_{[0,t_{max}]}$
\State $D_{1},D_{2} \gets \mathcal{U}_{[d_{min},d_{max}]},\mathcal{U}_{[d_{min},d_{max}]}$
\State $\textsc{ttd}_{1}$, $\textsc{ttd}_{2}$, coll $\gets$ episode($\boldsymbol{\theta}$, $t_1, D_1, t_2, D_2$)
\State cost[k] $\gets$ $\frac{\textsc{ttd}_{1} + \textsc{ttd}_{2}}{2}$ + $100\cdot\mathbb{1}_{coll}$
\If{best\_cost $\geq$ cost[k]}
\State best\_cost $\gets$ cost[k]
\State ${\boldsymbol{\theta}^*} \gets$ $\boldsymbol{\theta}$
\EndIf
\EndFor
\State CMAES.optimize($\boldsymbol{\Theta}$, cost)
\State $\sigma \gets$ CMAES.evaluate()
\EndWhile
\State \Return ${\boldsymbol{\theta}^*}$
\end{algorithmic}
\end{algorithm}
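The loop structure of Algorithm \ref{alg:train} can be mirrored by a toy optimizer. The sketch below is deliberately not full \textsc{cma-es}: it uses isotropic sampling, a rank-weighted mean update, and a geometric step-size decay in place of the covariance update, and it scores an arbitrary cost function instead of simulated episodes:

```python
import random

def toy_es(cost_fn, theta0, sigma0=0.5, pop=8, decay=0.9, threshold=0.01, seed=0):
    """Toy evolution strategy mirroring Algorithm 1: sample a generation,
    score it, move the mean toward low-cost samples, shrink the step
    size, and track the best sample seen so far."""
    rng = random.Random(seed)
    mean, sigma = list(theta0), sigma0
    best_cost, best_theta = float("inf"), list(theta0)
    while sigma >= threshold:
        samples = [[m + sigma * rng.gauss(0.0, 1.0) for m in mean] for _ in range(pop)]
        costs = [cost_fn(s) for s in samples]
        for s, c in zip(samples, costs):
            if c < best_cost:
                best_cost, best_theta = c, list(s)
        # rank-based weights: lower-cost samples get larger coefficients
        order = sorted(range(pop), key=lambda i: costs[i])
        weights = [max(pop // 2 - rank, 0) for rank in range(pop)]
        total = sum(weights)
        mean = [sum(weights[rank] * samples[i][d] for rank, i in enumerate(order)) / total
                for d in range(len(mean))]
        sigma *= decay  # stand-in for the covariance update of real CMA-ES
    return best_theta, best_cost
```

In our actual training, `cost_fn` corresponds to the averaged episode cost of Equation (\ref{eq:cost}) under domain randomization.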
\section{CONCLUSIONS AND FUTURE WORKS}
\label{sec::conclusions}
In this paper, we presented Perceptual Hallucination for Hallway Passing (\textsc{phhp}), a new method that enables multi-robot navigation in constrained spaces.
We showed how to find the best obstacle for \textsc{phhp} to hallucinate for a given environment and navigation policy using \textsc{cma-es}.
The simulation and real-world deployment results indicate that \textsc{phhp} achieves comparable performance against \textsc{orca}, while removing the assumption that the robot has continuous access to the other robot's position and velocity.
Moreover, \textsc{phhp} outperforms both a right-lane-following baseline and our prior work, the halting method, in terms of delay.
Additionally, real-world deployment results experimentally confirm that \textsc{phhp}, which is trained in simulation, can successfully be deployed in a wide variety of real-world settings, including those in which the size of the hallway is changed, the robots move with different velocities, their perception systems exhibit a different detection range, or the virtual obstacles used by \textsc{phhp} is different.
Despite the successes we presented, \textsc{phhp} has only been developed and evaluated here with two robots and we have only explored using a particular class of hallucinated obstacle fields comprised of a number of circles.
Therefore, an important direction for future work is to investigate how to expand \textsc{phhp} to work with multiple, arbitrarily shaped obstacles in a wider variety of settings.
\section{EXPERIMENTS}
\label{sec::experiments}
In order to evaluate the efficacy of Perceptual Hallucination for Hallway Passing (\textsc{phhp}), we measure:
\emph{(1)} the performance of \textsc{phhp} in different hallway shapes,
\emph{(2)} performance of \textsc{phhp} and alternative approaches with communication noise,
and \emph{(3)} whether \textsc{phhp} is robust enough to overcome the sim-to-real gap, as well as to handle different hallway shapes, different robot characteristics, and heterogeneous obstacle fields.
We compare \textsc{phhp} to three alternative approaches: a rule-based, right-lane-following baseline; \textsc{nh\_orca}\cite{van2011reciprocal, alonso2013optimal}; and
the \emph{\texttt{halting} method} \cite{CoRL20-Park}.\footnote{Referred to as the {\em adaptive method} in \cite{CoRL20-Park}.}
Details of each of these approaches are provided in Section \ref{ssec:alternative}.
We evaluate the performance of each method using the following metrics:
\begin{itemize}
\item $\Delta t$: The amount of delay compared to a single robot traversing the same hallway.
\item $P_{collision}$: The probability of collision.
\item $P_{failure}$: The probability that the navigation system fails to generate a valid passing plan, which typically manifests as the robot turning around.
\end{itemize}
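These metrics can be aggregated from per-episode records as follows; the record format (one dict per episode) is our own convention for illustration:

```python
def summarize(episodes, solo_time):
    """Compute (average delay, collision probability, failure probability)
    from episode records, each a dict with 'ttd', 'collision', 'failure'."""
    n = len(episodes)
    avg_delay = sum(ep["ttd"] - solo_time for ep in episodes) / n
    p_collision = sum(1 for ep in episodes if ep["collision"]) / n
    p_failure = sum(1 for ep in episodes if ep["failure"]) / n
    return avg_delay, p_collision, p_failure
```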
\subsection{Platform}
We evaluate \textsc{phhp} using BWIBots\cite{IJRR17-khandelwal}, a custom differential-drive robot atop a Segway base.
A single BWIBot is 65cm wide and has a maximum linear velocity of 1.0 m/s.
The BWIBot is equipped with a front-facing Hokuyo LiDAR sensor with a 170-degree field of view and a maximum range of 20m.
For the underlying navigation system, the BWIBot uses the E-Band planner\cite{quinlan1993elastic} as the local planner, which continually generates a sequence of motion commands that result from planning over a 4m horizon.
\subsection{Training}
We train \textsc{phhp} in the widely-used Gazebo \cite{koenig2004design} simulator since it provides a safe and fast way to collect realistic data.
Two types of hallway are used in training: {\em (a)} an ``I-shaped'' straight hallway, and {\em (b)} an ``L-shaped'' hallway corner.
Both hallways are 1.6m wide, a width at which two 65cm-wide BWIBots can barely pass each other without colliding.
Training episodes proceed as described in Section \ref{sec::optimalhallucination}, where the robots spawn at each side of the hallway, 14m apart from one another.
As described in Section \ref{sec::approach}, domain randomization is used during simulation training in an effort to ensure that the learned policy works well when deployed in the real world.
Specifically, we impose a random starting delay from 0 to 2 seconds for each robot and we randomly set the detection range to a distance between 7 and 9 meters.
We use \textsc{phhp} to find virtual obstacles for each type of hallway; I-shaped and L-shaped.
To accelerate the \textsc{cma-es} search, we used $(r, \Delta r, k_{begin}, k_{end}) = (0.5, 0.05, 0.3, 0.6)$ as an initial hypothesis, which, intuitively, represents a virtual obstacle that entirely blocks the left half of the hallway.
For each run of \textsc{cma-es}, approximately 100 generations occur before the standard deviation of all samples in a generation becomes less than our selected threshold of 0.01.
A single generation contains 8 sample configurations, and each configuration is evaluated by the cost in Equation (\ref{eq:cost}) averaged over 200 domain-randomized episodes.
Training takes about 20 hours using a distributed computing cluster with about 150 nodes.
The identified configurations of virtual obstacle fields $\mathcal{H}_{\boldsymbol{\theta}^*}$ in two types of hallways are presented in Table \ref{tbl:cmaes_results}.
\begin{table}[ht]
\centering
\caption{Learned configuration of $\mathcal{H}_{\boldsymbol{\theta^*}}$ in various hallways.}
\begin{tabular}{|c|cccc|}
\hline
train environment & radius & $\Delta r$ & $k_{begin}$ & $k_{end}$ \\ \hline
I-shape hallway & 0.7590 & 0.7888 & 0.4845 & 0.4910 \\
L-shape hallway & 0.5122 & 0.5661 & 0.4842 & 0.5001 \\
\hline
\end{tabular}
\label{tbl:cmaes_results}
\end{table}
\subsection{Alternative Approaches} \label{ssec:alternative}
We compare \textsc{phhp} with three alternative methods: a right-lane-following baseline, \textsc{nh\_orca}, and the halting method.
The right-lane-following baseline, or simply \texttt{baseline}, is inspired by the US traffic standard.
It is a rule-based algorithm that, upon detection of the other robot, moves the robot into a human-annotated right lane and proceeds in that lane until the two robots pass one another.
However, the baseline has two drawbacks: (a) it requires human effort to manually specify the lanes for each hallway in the environment, and (b) since the robot must always stay in a narrow lane even when another robot is not present, its average speed drops significantly.
\textsc{nh\_orca}\cite{alonso2013optimal} is an extension of \textsc{orca}\cite{van2011reciprocal} that can be applied to differential drive robots by approximating the trajectory of \textsc{orca} with differential drive constraints.
\textsc{orca} finds the optimal collision-avoiding velocity command with minimum deviation from the velocity command that heads directly to the goal.
As long as \textsc{orca} obtains accurate information about the environment with sufficient frequency (i.e., the precise position and velocity of the other robot), it is able to provide collision avoidance behavior with a small delay.
However, while \textsc{orca} provides excellent performance (we consider it to be an upper bound) in simulation with perfect communication channels, real robots often must rely on noisy communication channels to share position and velocity information, which we will show degrades the performance of \textsc{orca}.
\textsc{phhp}, on the other hand, only needs to observe the presence of the other robot once.
Finally, the \texttt{halting} method\cite{CoRL20-Park} is a system designed for hallway passing in which, when a robot detects a potential collision, it immediately moves to the nearest safe parking spot, waits until the other robot has completely passed, and then resumes.
While this approach can avoid collisions in some hallway settings, the halting behavior itself typically causes a high average delay.
\subsection{Testing \textsc{phhp} in Different Hallway Shapes}
\label{sec::hallwayshapes}
We trained $\textsc{phhp}$ in both I- and L-shaped hallways, and tested the resulting obstacle fields, $\mathcal{H}_{\boldsymbol{\theta^*_I}}$ and $\mathcal{H}_{\boldsymbol{\theta^*_L}}$, 300 times each in I-, L-, and T-shaped hallways.
Note that the T-shaped hallway is a test hallway that has never been exposed to the robot during training.
The results can be found in Figure \ref{fig:performance_between_phhp}, where we can see that the performance of \textsc{phhp} using $\mathcal{H}_{\boldsymbol{\theta^*_L}}$ outperforms $\mathcal{H}_{\boldsymbol{\theta^*_I}}$ in all three environments, presumably because the wings of the L-shaped hallway can be viewed as an I-shaped hallway with a shorter length.
For this reason, we use \textsc{phhp} with $\mathcal{H}_{\boldsymbol{\theta}^*_{L}}$ to represent \textsc{phhp} in further experiments unless otherwise mentioned.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{contents/figures/performance_by_train_episode.png}
\caption{
The figure shows the violin plot of \textsc{phhp} trained in the subscripted environment deployed in various hallways.
Each bar consists of 300 test episodes in the given hallway in simulation.
Both \textsc{phhp} with $\mathcal{H}_{\boldsymbol{\theta^*_{I}}}$ and with $\mathcal{H}_{\boldsymbol{\theta^*_{L}}}$ provide smooth interaction in the I- and T-shaped hallways.
However, $\mathcal{H}_{\boldsymbol{\theta^*_{I}}}$ often fails to provide a smooth solution in the L-shaped hallway, while $\mathcal{H}_{\boldsymbol{\theta^*_{L}}}$ succeeds.
Note that no collision happened over 1,800 episodes in simulation.
The obstacle field used by each \textsc{phhp} can be found in Table \ref{tbl:cmaes_results}.
}
\label{fig:performance_between_phhp}
\end{figure}
\subsection{Performance in the Presence of Communication Noise}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{contents/figures/performance.png}
\caption{
The performance analysis of each algorithm in the noisy communication channel; (left) the average delay of each algorithm, and (right) the probability of collision.
The average delay of \textsc{phhp} remains consistently stable, while the delay of \textsc{nh\_orca} increases rapidly as the noise increases.
The right-lane-following baseline and halting methods are also resistant to noise, but the average delay of both algorithms is much higher than \textsc{phhp}.
}
\label{fig:performance}
\end{figure}
We now seek to investigate the efficacy of \textsc{phhp} versus alternative approaches.
In particular, we are interested in how robust each approach is to communication noise, and so we perform our experiments here in a setting in which an artificial noisy channel is imposed using a dropout model.
Under this model, messages fail to reach their destination with a specified probability and, when such a failure occurs, each algorithm uses the last received message to perform collision avoidance.
We expect that \textsc{phhp}, the baseline method, and the halting method, all of which only require observing the other robot once before activating the corresponding collision avoidance behavior, will slowly degrade as the probability of dropout increases.
In contrast, we expect that the performance of ORCA will drop rapidly as we increase channel noise.
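The dropout model used in these experiments can be sketched as a simple stateful channel; the class name and interface are ours:

```python
import random

class DropoutChannel:
    """Drops each message with probability p_drop; the receiver falls back
    to the last successfully delivered message (None before any success)."""

    def __init__(self, p_drop, seed=0):
        self.p_drop = p_drop
        self.rng = random.Random(seed)
        self.last = None

    def send(self, msg):
        # Deliver with probability (1 - p_drop); otherwise keep the old message.
        if self.rng.random() >= self.p_drop:
            self.last = msg
        return self.last
```

Each collision-avoidance algorithm then consumes whatever `send` returns, which is exactly the "use the last received message" behavior described above.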
Experiments were conducted 30 times in the 1.6m-wide I-shaped hallway in Gazebo, and the results are shown in Figure \ref{fig:performance}.
The results indicate that, as expected, the average delay incurred by \textsc{phhp} becomes lower than that of \textsc{nh\_orca} as channel noise increases.
Additionally, about 10\% of \textsc{nh\_orca} trials ended in collision in high noise settings, while no collisions were reported for \textsc{phhp} across all noise levels.
Taken together these results suggest that \textsc{phhp} will fare better than \textsc{nh\_orca} in real-world deployment settings.
Additionally, we note that, while the \texttt{baseline} and \texttt{halting} methods are robust to noise, the raw performance of those methods is far worse than that of \textsc{phhp}.
\subsection{Robustness Analysis} \label{ssec:robustness}
\begin{figure}
\centering
\includegraphics[width=0.53\textwidth]{contents/figures/robustness.png}
\caption{
Box and whisker plot of real-world experiments with various conditions.
Details for each condition can be found in Table \ref{tbl:exp_setups}.
When the two robots use different conditions, the delay of each robot is plotted separately.
Across conditions, \textsc{phhp} incurs between just 1 and 7 seconds of average delay compared to single-robot navigation in the same hallway, and did not incur a single collision or turnaround behavior across all 192 experiments represented here.
}
\label{fig:robustness}
\end{figure}
We investigate the robustness of \textsc{phhp} in terms of sim-to-real transfer performance in different environments, different characteristics of the robot (i.e., detection range and velocity), and even with different but valid obstacle fields,
$\mathcal{H}_{\boldsymbol{\theta}^*}$,
used by \textsc{phhp}.
The test setup is as follows.
We directly deploy simulation-trained \textsc{phhp} in the real world and conduct 32 evaluation episodes per condition, each with a specific environment and robot parameters as shown in Table \ref{tbl:exp_setups}.
``ID'' defines the name of each condition, while ``Hallway Type'' represents the hallway shape and width, where L-shaped hallways are 1.8m wide on one segment and 1.6m wide on the other.
Example hallways can be seen in Figure \ref{fig:real_hallways}.
$\texttt{D}_i$ and $\texttt{V}_i$ describe the detection range and velocity of a robot, respectively, where the subscript $i$ denotes which robot the parameter pertains to. Finally, $\mathcal{H}_i$ represents the specific obstacle field used by \textsc{phhp} during the experiment as defined in Section \ref{sec::hallwayshapes}.
The robots report their locations to each other over WiFi.
The result of each experiment is shown in Figure \ref{fig:robustness}.
Regarding sim-to-real transfer performance, \textsc{phhp} fares well: the average delay of \textsc{phhp} in the real I- and L-type corridors is similar to the results obtained in simulation.
Interestingly, the Z-shaped hallway records only one second of delay, presumably because the interaction in a Z-shaped hallway happens on the wider side of the hallway.
Regarding robustness to different robot characteristics, even though the maximum linear velocity of the robots significantly differed, 1.0 m/s and 0.6 m/s, respectively, \textsc{phhp} managed to resolve the hallway passing problem without any turnaround or collision.
Finally, regarding robustness to different but valid obstacle fields, the \textbf{\texttt{DIFF}$_\mathcal{H}$} results indicate that, as long as individual obstacle fields can resolve the hallway passing problem, a combination of them can also be successful.
\textbf{\textit{Importantly, no collisions or turnarounds were observed}} during the entire set of 192 real-world episodes.
These results suggest that \textsc{phhp} is very robust to the sim-to-real gap, environmental changes, different speeds and detection ranges of the robot, and even to using obstacle fields trained in different hallways.
\begin{table}[]
\centering
\caption{The configuration used in real world experiments.}
\begin{tabular}{l|lllllll}
ID & Hallway Type & $\texttt{D}_1$ & $\texttt{D}_2$ & $\texttt{V}_1$ & $\texttt{V}_2$ & $\mathcal{H}_1$ & $\mathcal{H}_2$ \\ \hline
\textbf{I-shape} & \textbf{1.8m I-shaped} & 8.0 & 8.0 & 1.0 & 1.0 & $\mathcal{H}^*_{L}$ & $\mathcal{H}^*_{L}$ \\
\textbf{L-shape} & \textbf{L-shaped} & 8.0 & 8.0 & 1.0 & 1.0 & $\mathcal{H}^*_{L}$ & $\mathcal{H}^*_{L}$ \\
\textbf{Z-shape} & \textbf{1.6m Z-shaped} & 8.0 & 8.0 & 1.0 & 1.0 & $\mathcal{H}^*_{L}$ & $\mathcal{H}^*_{L}$ \\
\textbf{$\texttt{DIFF}_D$} & 1.8m I-shaped & \textbf{9.0} & \textbf{7.0} & 1.0 & 1.0 & $\mathcal{H}^*_{L}$ & $\mathcal{H}^*_{L}$ \\
\textbf{$\texttt{DIFF}_V$} & L-shaped & 8.0 & 8.0 & 1.0 & \textbf{0.6} & $\mathcal{H}^*_{L}$ & $\mathcal{H}^*_{L}$ \\
\textbf{$\texttt{DIFF}_\mathcal{H}$} & \textbf{1.6m I-shaped} & 8.0 & 8.0 & 1.0 & 1.0 & $\mathcal{H}^*_{L}$ & \textbf{$\mathcal{H}^*_{I}$}
\end{tabular}
\label{tbl:exp_setups}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth]{contents/figures/photo_of_hallway_compressed.png}
\caption{
The photos of hallways in which we performed the real-world \textsc{phhp} experiments; (a) 1.8m I-shaped, (b) 1.6m I-shaped, (c) 1.6m Z-shaped, and (d) L-shaped.
}
\label{fig:real_hallways}
\end{figure}
\section{INTRODUCTION}
One of the grand goals of the robotics community is to safely and reliably deploy fully-autonomous mobile robots in common environments over extended periods of time.
Indeed, many researchers have moved toward this vision and reported hundreds of hours of unsupervised, collision-free navigation by a single robot\cite{biswas20161, khandelwal2017bwibots}.
However, long-term deployment of \emph{multiple} autonomous robots in common spaces still remains a difficult task.
One reason for this difficulty is that, while conventional navigation systems are good at handling static environments, their performance deteriorates in the presence of dynamic obstacles, e.g., other moving robots.
The research community has explored some solutions to this problem, but these solutions typically rely on strict requirements such as a perfectly-controlled space (e.g., a warehouse) or perfect sensing \cite{van2011reciprocal}, and they cannot guarantee safety in novel environments without employing time-consuming movement schemes such as one robot halting while another moves past \cite{CoRL20-Park}.
To the best of our knowledge, there are no reports that claim long-term deployment of \emph{multiple} autonomous robots in uncontrolled spaces without human supervision.
Separately, recent work in the navigation community leveraging the concept of \emph{perceptual hallucination}\cite{xiao2021toward, xiao2021agile, wang2021agile} has demonstrated impressive results in allowing robots to navigate highly constrained spaces successfully.
Here, perceptual hallucination refers to the technique of forcing the robot to perceive specific additional virtual obstacles such that motion plans generated and executed in the presence of these additional obstacles will exhibit certain desired behaviors.
One intuitive motivation for such techniques is that the additional obstacles serve as a kind of blinder for the robot by concealing unnecessary (or even distracting) information.
To date, however, perceptual hallucination has not been applied in the context of multiple robots or dynamic obstacles.
In this paper, we hypothesize that perceptual hallucination can be used to improve conventional navigation systems in multi-robot and confined settings.
In particular, we posit that, by using perceptual hallucination techniques to obscure the presence of moving objects that would otherwise cause these conventional systems to generate suboptimal behavior (e.g., collision or turning around), we can enable multi-robot navigation in confined spaces such as narrow hallways.
If true, this would imply that hallucination would allow system designers to solve the multi-robot navigation problem using the same conventional navigation systems that have been thoroughly tested to be stable in static environments.
To investigate our hypothesis, we introduce and evaluate \emph{Perceptual Hallucination for Hallway Passing} (\textsc{phhp}), a hallucination-based approach to improve a given navigation policy in the setting of multi-robot navigation in narrow hallways.
\textsc{phhp} uses experience gathered in domain-randomized simulation episodes of hallway passing in order to learn the proper size and placement of virtual obstacles so as to enable successful navigation.
We investigate the performance and robustness of \textsc{phhp} in common hallways with both simulation and real-world experiments, and we find that it achieves performance comparable to that of a leading existing method, Optimal Reciprocal Collision Avoidance (\textsc{orca}) \cite{van2011reciprocal}, while relaxing its assumption of perfect information about the surroundings.
We further show that, compared to a rule-based right-lane-following method, \textsc{phhp} reduces the average delay by 59.41\%.
Finally, we show that \textsc{phhp} is robust to the sim-to-real gap, different speeds, detection ranges, and even various hallway shapes and widths.
\section{RELATED WORK}
\label{sec::related}
The \textsc{phhp} approach we present is a machine-learning-based solution to the problem of autonomous multi-robot navigation.
Therefore, we review here both conventional approaches that have been proposed to solve the multi-robot navigation problem and more recent approaches that have specifically incorporated the use of machine learning.
We also briefly review recent work on the use of perceptual hallucination in navigation.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{contents/figures/overview_v3.png}
\caption{
Overview of the proposed Perceptual Hallucination for Hallway Passing approach. (Left) multi-robot hallway passing scenario with an existing navigation system, and (right) how \textsc{phhp} improves the navigation system with hallucinated sensor readings.
\textsc{phhp} deployment proceeds as follows: (a) two robots try to pass each other in the hallway; (b) one robot detects the other and initiates the hallucination process; (c) the trained \textsc{phhp} system generates a hallucinated field $\mathcal{H}_{\boldsymbol{\theta}}$ in the global coordinate system based on the global plan; and (d) each robot uses its corresponding hallucinated scan $\mathbf{l}_{\mathcal{H}_\theta}$ and its existing navigation system to handle the rest of the scenario.
Note that the hallucination function, $h(\mathbf{l}, \mathcal{H})$, computes the depth value of hallucinated readings by taking the \emph{minimum} value between the real and virtual scans.
}
\label{fig:overview}
\end{figure*}
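The fusion step described in the figure caption can be sketched in a few lines. The helper name and the 1-D per-beam representation below are our illustrative assumptions, not the paper's implementation, which operates on full lidar scans and a learned virtual-obstacle geometry.

```python
def hallucinate(real_scan, virtual_scan):
    """h(l, H): fuse a real range scan with the ray-cast depths of a
    virtual obstacle by keeping, per beam, the minimum (nearer) reading."""
    if len(real_scan) != len(virtual_scan):
        raise ValueError("scans must have the same number of beams")
    return [min(r, v) for r, v in zip(real_scan, virtual_scan)]

# Beams where the virtual obstacle is nearer are "blinded" by it.
real = [3.0, 2.5, 4.0, 1.0]                    # real depth per beam (m)
virt = [float("inf"), 1.5, 1.5, float("inf")]  # virtual obstacle depths
print(hallucinate(real, virt))  # [3.0, 1.5, 1.5, 1.0]
```

Because the fused reading is never farther than the real one, plans made against the hallucinated scan remain conservative with respect to real obstacles.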
\subsection{Conventional Approaches}
We divide conventional approaches to multi-robot navigation into approaches based on coordination between robots and approaches that operate in a fully decentralized manner.
Approaches that require tight coordination include the centralized approach deployed by Kiva Systems (now Amazon Robotics) in controlled warehouse spaces \cite{wurman2008coordinating} and the coordinated task planning approach proposed by Jiang et al. \cite{AURO19-jiang}.
While these approaches can be used successfully, the reliance on coordination limits their use in scenarios without sufficient communication bandwidth and in environments with external dynamic obstacles such as third-party robots.
Conventional decentralized approaches \cite{fiorini1998motion}, including \textsc{orca} \cite{van2011reciprocal}, on the other hand, do not require explicit coordination and communication between agents, and have instead focused on modifying single-agent motion planners in an attempt to make them applicable in multi-robot settings.
However, these approaches have their own drawbacks, typically exhibiting high sensitivities to errors in dynamic obstacle state estimation, sensor noise, and parameter selection \cite{hennes2012multi, alonso2013optimal}.
\subsection{Learning-Based Approaches}
Inspired by recent success in machine learning, several in the community have proposed methods that use learning to enable multi-robot navigation \cite{xiao2022motion, mirsky2021prevention}.
Decentralized end-to-end approaches, i.e., approaches that learn mappings directly from sensor readings to motion commands, have been shown to be successful in limited settings \cite{long2017deep, long2018towards, lin2019end, tan2020deepmnavigate}, but they are typically less successful in novel environments and often suffer from a lack of safety guarantees, e.g., they may not prevent collisions in highly constrained narrow hallways.
There have also been some hybrid attempts to combine conventional navigation with machine learning for multi-robot navigation \cite{fan2020distributed,CoRL20-Park}, but they have thus far typically resulted in sub-optimal passing behaviors.
\subsection{Perceptual Hallucination}
Finally, the concept of perceptual hallucination, which our method also uses, has recently emerged as an effective tool for enabling navigation in highly-constrained spaces \cite{xiao2021toward, xiao2021agile, wang2021agile} by allowing robots to synthetically modify their own sensor readings for better neural network training, for simplified motion planning during deployment, or both.
Despite its success, however, the idea of hallucination has not previously been applied to dynamic scenarios, including the multi-robot scenario that we study here.
\section{Introduction}
\label{sec:intro}
In recent years, Intelligent Personal Digital Assistants (IPDAs) have become increasingly popular. Users interact with these assistants using natural language (either spoken or through text), and the conversations are generally short and chit-chat based. There has also been an emergence of assistants that interact through document-type conversations, such as emails (figure \ref{fig:email_response.}). These assistants must work with a broader context scope and with multiple sub-conversations or intents occurring at each turn of the user input.
The increased scope and complexity that come with multi-turn conversations create many challenges for assistants in this setting. Specifically, it becomes challenging to extract task-relevant entities from among non-task entities, and to ensure that dialogue state updates are driven only by entities relevant to the task. Previous works \cite{scopeit, htransformer} have shown that directly applying chit-chat methods to such document-type conversation tasks leads to sub-optimal results.
In all conversation models, collecting and manually annotating training data is a challenge and incurs significant cost. This problem is exacerbated by the distributional shift of the input data in online decision-making, meaning a supervised learning model must be retrained periodically to maintain its accuracy. Moreover, for commercial assistants, most of the user data is not available for eyes-on access, either for collection or labeling, and hence cannot be used for training or improving models via traditional supervised learning. In this paper, we ask and answer the question \textit{``Can we effectively improve assistants in an online setting without explicit feedback and without eyes-on access to the data?''}
We provide a simple yet effective method to improve the NLU components of intelligent assistants in a privacy-preserving online setting. The framework consists of two main parts. In the first part, we associate a user's emotion in his/her response to the assistant with the assistant's previous action. This is critical in document-type conversations, which have multiple intents present in each turn and in which the emotion might not be associated with the task being completed by the assistant. In the second part, the relevant detected emotion is used as a weak reward signal for the policy gradient algorithm to update the NLU models online. The signal is associated with the previous intent/action the assistant took and is used to improve its behavior for that step. It is important to ensure that users can provide feedback with minimal effort. We accomplish this by detecting the implicit emotion in the user's natural conversation with the assistant rather than requiring explicit feedback (such as ratings, which are difficult to collect and might not be reliable indicators of the assistant's performance).
\section{Related work}
\label{sec:related work}
\begin{figure*}[ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=.55\linewidth]{imgs/email_response.PNG}
\caption{Example email to document-type assistant}
\label{fig:email_response.}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[scale=0.15]{imgs/Fig_1_v3.png}
\caption{Architecture of proposed framework NARLE}
\label{fig:RL_icassp}
\end{subfigure}
\caption{Improving natural language understanding using implicit feedback}
\end{figure*}
Natural language understanding in conversations (including intent detection, entity extraction, and dialogue state tracking) are well studied problems \cite{gao2019neural, dst_review, todbert, simpletod}. Recently, there has been work in joint end-to-end models \cite{Liu+2016, Liu2017}. Jointly modeling multiple tasks in an end-to-end manner solves the model attribution problem and paves the way for directly optimizing the end-goal objective of correctly completing a task. This led to works which used a combination of supervised learning with RL \cite{li2017end-to-end, peng-etal-2017-composite, Dhingra2017EndtoEndRL}. Approaches which preferred pre-training the dialogue model before interactive tuning using RL suffered problems of dialogue state distribution mismatch. A solution to this was proposed by \cite{liu-etal-2018-dialogue}, who proposed a hybrid imitation and RL approach. The learning agent learned its policy from supervised training but used users' explicit guidance or demonstrations to learn the correct action. The feedback used in this case is either explicit feedback provided by user, or treating task completion as positive feedback.
One key component in the proposed framework is extracting emotion from document-type conversations by attributing the expressed emotion to the correct cause, as emotions not emerging from the assistant's actions should be ignored. Previous works focusing on emotion cause extraction and its variations evolve from rule-based methods \cite{chen2010emotion} to modern learning-based models \cite{gui2018event,xia2019emotion, chen2020conditional,poria2020recognizing}. Instead of relying on the above methods, we adapt the ScopeIt model \cite{scopeit} to extract not only task specific sentences but also sentences expressing emotion towards the tasks of interest to the assistant. Specifically, the ScopeIt is a neural model consisting of three parts: an intra-sentence aggregrator which contextualizes information within sentence, an inter-sentence aggregator which contextualizes information across sentences, and a classifier.
\section{Methodology}
This section contains the details of the proposed deep RL framework, with sections dedicated to the learning agent, the environment, and the learning algorithm. The overall architecture is depicted in Figure \ref{fig:RL_icassp}.
\subsection{The Learning Agent} \label{section:method:agent}
The learning agent in our case is a pipeline consisting of a ScopeIt unit \cite{scopeit} and an NLU model. The ScopeIt unit reduces the initial emails from the users to the sentences relevant to the assistant's task, which are then supplied to the NLU model. In this work, the ScopeIt module is modified to identify both task-specific sentences and sentences which provide surrounding information about the task. The agent learns a policy that maps the filtered email message to an action based on the reward received from the environment. The learning agent can either start from scratch or from a model pre-trained using supervised learning.
\subsection{The Environment} \label{section:method:env}
The environment models the dynamics of users' interactions with the learning agent. Upon receiving the actions from the agent, the workflow will automatically generate a response based on the predicted action and the existing data and knowledge base. Next, when users respond to the agent's action, their response may express implicit emotion towards the agent's action. Note that one key challenge in document-type conversations like emails is associating the expressed emotion with the correct cause, as emotions not emerging from the agent's actions should be ignored. We modify the ScopeIt module to extract sentences expressing emotion towards the tasks of interest to the assistant. Filtered sentences are embedded using BERT, generating an emotion embedding. This emotion embedding is then used to classify the implicit emotion into positive, negative, or neutral. The detected emotion is then mapped to a numerical reward and fed back to the agent for updating the policy network.
\subsection{The Learning Algorithm} \label{section:method:alg}
We optimize the NLU model by allowing the agent to interact with users and learn from user feedback. We use only implicit emotions associated with the task as the metric in designing the reward. A reward $R(x)$ is assigned to positive ($x = 1$), negative ($x = -1$), and neutral ($x = 0$) implicit feedback respectively:
\[ R(x) = \begin{cases}
+1 & x = 1 \\
-1 & x = -1 \\
0 & x = 0
\end{cases}
\]
The agent always acts on-policy in order to ensure that it acts optimally for customers. This is accomplished through a softmax policy over the actions/predictions output by the policy network. We apply the REINFORCE algorithm \cite{williams1992simple} to optimize the network parameters. Since the expectation of the sample gradient is equal to the actual gradient, we measure returns from real sample trajectories and use them to update our policy gradient. Specifically,
\begin{equation*}
\begin{aligned}
\nabla_{\theta}J(\theta) & = \mathbb{E}_{\pi}[Q^{\pi}(s,a) \nabla_{\theta}\ln(\pi_{\theta}(a|s))] \\
& = \mathbb{E}_{\pi}[\nabla_{\theta}\ln(\pi_{\theta}(a|s)) R]
\end{aligned}
\end{equation*}
where $Q^{\pi}(s,a)$ is the value of state-action pair when we follow a policy $\pi$.
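As a concrete sketch of this update, consider a linear softmax policy over intents. The feature vector, dimensions, and learning rate below are illustrative assumptions of ours, not the paper's DistilBERT-based agent; only the emotion-to-reward mapping follows the text.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# R(x) from the text: positive -> +1, negative -> -1, neutral -> 0.
EMOTION_REWARD = {"positive": 1.0, "negative": -1.0, "neutral": 0.0}

def reinforce_step(theta, features, emotion, rng, lr=0.1):
    """One REINFORCE update: sample an action from the softmax policy,
    map the user's implicit emotion to a scalar reward, and ascend
    grad_theta log pi(a|s) * R."""
    probs = softmax(theta @ features)
    action = rng.choice(len(probs), p=probs)
    reward = EMOTION_REWARD[emotion]
    grad_log_pi = -np.outer(probs, features)
    grad_log_pi[action] += features        # rows: d log pi(a|s) / d theta_j
    return theta + lr * reward * grad_log_pi, action

rng = np.random.default_rng(0)
theta = np.zeros((3, 4))                   # 3 intents, 4 features
x = np.array([1.0, 0.5, -0.2, 0.0])
theta_new, a = reinforce_step(theta, x, "positive", rng)
# Positive implicit feedback increases the sampled action's probability;
# neutral feedback (reward 0) leaves the parameters unchanged.
```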
\section{Experiments}
\subsection{Dataset}\label{sec:exp}
The emotion detection model is trained offline on a private customer dataset from our private preview service. The dataset for the emotion recognition model is built from email messages, both with and without ScopeIt filtering, and from user feedback directed toward the actions of the agent. The dataset is constructed through the following two steps.
\textit{Step 1: Identifying insertion positions.} We parse all email sentences based on a set of punctuation marks (including the comma, period, colon, question mark, and exclamation mark). All positions after these punctuation marks are then considered candidate locations for injecting emotional feedback.
\textit{Step 2: Injecting users' emotions.} We next randomly inject users' emotions at the candidate locations found in Step 1 and label those samples. We also randomly insert ``general positive or negative emotions'' and label those samples as neutral. This allows the agent to capture the nuance between emotions directed toward the agent and general emotions.
\subsection{Setup}\label{sec:setup}
We use the following transformer-based models for the emotion detection problem: BERT, DistilBERT, ALBERT, and RoBERTa. For each model, a linear layer is added on top of the pooled output to perform classification. We freeze some transformer layers of all BERT-type models, and the number of frozen layers is determined in a trial-and-error manner. The models were trained end-to-end using the Hugging Face implementation \cite{wolf2019huggingface} on 4 Nvidia K80 GPUs.
\section{Results}\label{sec:res}
\subsection{Emotion recognition}
We report both the accuracy and the overall macro F1 scores for all models in Table \ref{tab:res}. It is worth noting that the use of the ScopeIt model leads to improved performance for all models. The BERT and DistilBERT models achieve similar accuracy. Hence, DistilBERT with two frozen layers is selected due to its smaller size and lower inference latency.
\begin{table}[!t]
\small
\begin{tabular}{ccccc}
\hline
Models & ScopeIt & F1 & Accuracy & Parameters \\ \hline
ALBERT & & 89.93 & 90.36 & 11M \\
RoBERTa(-2) & & 91.22 & 91.58 & 125M \\
BERT(-1) & & 94.06 & 94.37 & 110M \\
DistilBERT(-1) & & 94.15 & 94.45 & 66M \\
ALBERT & \checkmark & 91.46 & 91.87 & 11M \\
RoBERTa(-2) & \checkmark & 91.75 & 92.03 & 125M \\
BERT(-3) & \checkmark & \textbf{94.96} & \textbf{95.42} & 110M \\
DistilBERT(-2) & \checkmark & 94.92 & 95.39 & \textbf{66M} \\ \hline
\end{tabular}
\caption{Results for emotion recognition model. All scores are in percentage and are reported at best accuracy. BERT(-1) represents the BERT classification model with one frozen transformer layer.}
\label{tab:res}
\end{table}
\begin{figure*}[ht]
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{imgs/RL_single_no_pretrain.png}
\caption{Multi-class training on random model}
\label{fig:MC_no_pretrain}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{imgs/RL_single_pretrain.png}
\caption{Multi-class training on fine-tuned model}
\label{fig:MC_pretrain}
\end{subfigure}
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{imgs/RL_multi_no_pretrain.png}
\caption{Multi-label training on random model}
\label{fig:ML_no_pretrain}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\centering
\includegraphics[width=.9\linewidth]{imgs/RL_multi_pretrain.png}
\caption{Multi-label training on fine-tuned model}
\label{fig:ML_pretrain}
\end{subfigure}
\caption{Learning curves for different model and settings}
\label{fig7}
\end{figure*}
\subsection{Multi-class Intent Detection}
We first consider an intent detection model behind a conversational assistant that helps users schedule meetings. The assistant uses a DistilBERT model to classify the intent of emails into three categories: modify a meeting, cancel a meeting, and other (not relevant to the task of meeting scheduling). We conduct experiments in learning the intent classification model from scratch using only RL and summarize the results in figure \ref{fig:MC_no_pretrain}. We can see that all DistilBERT models start from random guesses, but as the learning agent interacts more with users, the task success rates improve over time.
The type of feedback obtained is critical to NLU model training. Hence, three different feedback mechanisms are studied in this work: full feedback, partial feedback, and partial with noisy feedback. In the full feedback scenario, every customer is assumed to leave implicit feedback. In the partial feedback scenario, only 15\% of requests are assumed to have implicit feedback. In the partial with noisy feedback scenario, one third of that 15\% of implicit feedback is incorrect. These feedback scenarios provide insight into the NLU model performance and into both the quantity and quality of feedback required.
The orange curve shows the performance when every customer is assumed to leave implicit feedback. Even under partial (i.e., 15\%) feedback, the agent can achieve performance comparable to the full feedback case after a sufficient number of turns. Finally, the agent learns more slowly and has lower accuracy under the partial and noisy feedback case, where one third of the 15\% feedback is wrongly labeled.
We next demonstrate the experimental results of online RL training for a DistilBERT model fine-tuned on limited data. As shown by the blue line in figure \ref{fig:MC_pretrain}, the supervised benchmark model only has about 63\% accuracy. This is due to limited eyes-on access to data and the domain mismatch between offline training and online product scale-up. We study the same three feedback scenarios as before. Note that a supervised pre-trained model leads to faster learning rates for both the full and partial feedback cases and a better accuracy for the partial and noisy feedback case.
\subsection{Multi-label Intent Detection}
We consider a multi-label intent classification model of the same conversational assistant as before. In this case, there are a total of six distinct actions and the action space is represented by a six-dimensional vector of 0s and 1s. Out of the $2^{6}=64$ possible action vectors, only six combinations are valid. The learning agent consists of an ensemble of six separate DistilBERT binary classification models which work independently to determine whether each single action should be taken or not.
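A minimal sketch of this ensemble decision rule follows; the threshold and the set of valid combinations are hypothetical placeholders of ours (the paper does not list the six valid action vectors).

```python
# Six independent binary heads vote; the resulting 0/1 vector is accepted
# only if it is one of the valid action combinations (placeholder set).
VALID_ACTIONS = {
    (0, 0, 0, 0, 0, 0),
    (1, 0, 0, 0, 0, 0),
    (0, 1, 0, 0, 0, 0),
    (1, 1, 0, 0, 0, 0),
    (0, 0, 1, 0, 0, 0),
    (0, 0, 0, 1, 1, 0),
}

def decide(head_probs, threshold=0.5):
    """Threshold each head's probability into a 6-dim action vector and
    report whether the combination is one of the valid ones."""
    action = tuple(int(p >= threshold) for p in head_probs)
    return action, action in VALID_ACTIONS

action, ok = decide([0.9, 0.8, 0.1, 0.2, 0.3, 0.05])
print(action, ok)  # (1, 1, 0, 0, 0, 0) True
```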
We first train the learning agent from scratch, where all six DistilBERT models start from random guesses. The learning curves on accuracy for the full, partial, and partial and noisy cases are shown in figure \ref{fig:ML_no_pretrain}. We can see that the learning accuracy improves for all three scenarios over time. Specifically, the accuracy for both full and partial feedback can reach about 70\%, which is nearly 10\% higher than that of the partial and noisy case.
We next use online RL training for the six separate DistilBERT models fine-tuned on limited data. The learning curves are shown in figure \ref{fig:ML_pretrain}, where the fine-tuned benchmark is considerably improved over iterations for all scenarios. Also, compared to learning from scratch, fine-tuned models have faster learning rates and higher accuracies.
\section{Conclusion}\label{sec:conclusion}
We propose a deep RL framework, ``NARLE'', to improve the NLU models of a task-oriented conversational assistant in an online manner for document-type conversations. The proposed architecture scopes out emotion feedback relevant to the assistant's task and uses it to improve the performance of the assistant in an adaptive way. The proposed framework is evaluated on customer data, where the proposed method improves a model fine-tuned on limited data by up to 43\%. We also show that the proposed method is robust to partial and noisy feedback.
\bibliographystyle{IEEEbib}
\section{Introduction}
In this paper we derive an integral representation of the relative entropy $R(\mu || P)$, where $\mu$ is a measure on $C([0,T];\mathbbm{R}^d)$ and $P$ governs the solution to a stochastic differential equation (SDE). The relative entropy is used to quantify the distance between two measures. It has considerable applications in statistics, information theory and communications. It has been used in the long-time analysis of Fokker-Planck equations \cite{plastino-miller-etal:97,desvillettes-villani:01}, the analysis of dynamical systems \cite{yu-mehta:09} and the analysis of spectral density functions \cite{georgiou-lindquist:03}. It has been used in financial mathematics to quantify the difference between martingale measures \cite{fritelli:00,grandits-rheinlander:02}. The finiteness of $R(\mu|| P)$ has been shown to be equivalent to the invertibility of certain shifts on Wiener Space, when $P$ is the Wiener Measure \cite{ustunel:09,lassalle:12}.
One of the most important applications of the relative entropy is in the area of Large Deviations. Sanov's Theorem dictates that the empirical measure induced by independent samples governed by the same probability law $P$ converges towards its limit exponentially fast, and the constant governing the rate of convergence is the relative entropy \cite{dembo-zeitouni:97}. Large Deviations have been applied, for example, to spin glasses \cite{ben-arous-guionnet:95}, neural networks \cite{moynot-samuelides:02,faugeras-maclaurin:13b} and mean-field models of risk \cite{josselin-garnier-yang:13}. In the mean-field theory of neuroscience in particular, there has been recent interest in the modelling of `finite-size effects' \cite{baladron-fasoli-etal:11}. Large Deviations provides a mathematically rigorous tool to do this. However, when $P$ governs a diffusion process, the calculation of $R(\mu || P)$ for an arbitrary measure $\mu$ is not necessarily straightforward. In this context, the variational definition in Definition \ref{eq:relativeentropy} below is not straightforward to work with, and neither is the equivalent expression $E^{\mu}[\log\frac{d\mu}{dP}]$. The problem becomes particularly acute when one wishes to numerically calculate the rate function.
The classic paper by \cite{dawson-gartner:87} obtained a Large Deviations estimate for the empirical process corresponding to a system of diffusion processes, with coefficients themselves dependent on the empirical process. The empirical process in this paper is a map from time $t$ to the empirical measure at time $t$. They used projective limits to determine a representation of the rate function as an integral (analogously to Freidlin and Wentzell \cite{freidlin-wentzell:98}) with respect to time. This rate function is not the relative entropy because the authors' empirical process differs from the empirical measure. Various authors have extended this work. Most recently, \cite{budhiraja-dupuis-etal:12,fischer:12} have used weak convergence and stochastic optimal control techniques to obtain a Large Deviation Principle for the empirical measure corresponding to a similar system to \cite{dawson-gartner:87} (although in a more general form). This empirical measure is a measure on the space $C([0,T];\mathbbm{R}^d)$ of continuous functions; it contains more information about the system than the empirical process of \cite{dawson-gartner:87}. The rate function is in variational form, being equal to the minimum of a cost functional over a set of control processes. If one assumes that the coefficients governing the diffusion process are independent of the empirical measure in this paper, then through Sanov's Theorem one may infer that the rate function is equal to the relative entropy: in other words, one may infer a variational representation of the relative entropy from these papers. It is also of interest to consult \cite{boue-dupuis:98}, who were the first to prove such a variational representation of the relative entropy in the case where the underlying process is a Wiener process.
In this paper we derive a specific integral (with respect to time) representation of the relative entropy $R(\mu || P)$ when $P$ is a diffusion process. This $P$ is the same as in \cite[Section 4]{dawson-gartner:87}. The representation makes use of regular conditional probabilities. It ought, in many circumstances, to be more numerically tractable than Definition \ref{eq:relativeentropy}, and thus it would be of practical use in the applications listed above.
\section{Outline of Main Result}
Let $\T$ be the Banach Space $C([0,T];\mathbbm{R}^d)$ equipped with the norm
\begin{equation}
\norm{X} = \sup_{s\in [0,T]}\lbrace |X_s| \rbrace,
\end{equation}
where $\left| \cdot \right|$ is the standard Euclidean norm over $\mathbbm{R}^d$. We let $(\mathcal{F}_t)$ be the canonical filtration over $(\T,\mathcal{B}(\T))$. For some topological space $\mathcal{X}$, we let $\mathcal{B}(\mathcal{X})$ be the Borelian $\sigma$-algebra and $\mathcal{M}(\mathcal{X})$ the space of all probability measures on $(\mathcal{X},\mathcal{B}(\mathcal{X}))$. Unless otherwise indicated, we endow $\mathcal{M}(\mathcal{X})$ with the topology of weak convergence. Let $\sigma = \lbrace t_1,t_2,\ldots,t_m \rbrace$ be a finite set of elements such that $t_1\geq 0$, $t_m \leq T$ and $t_{j} < t_{j+1}$. We term $\sigma$ a partition, and denote the set of all such partitions by $\mathbf J$. The set of all partitions of the above form such that $t_1 = 0$ and $t_m = T$ is denoted $\mathbf J_*$. For some $t\in [0,T]$, we define $\underline{\sigma}(t) = \sup\lbrace s\in \sigma | s\leq t\rbrace$.
\begin{definition}\label{eq:relativeentropy}
Let $(\Omega,\mathcal{H})$ be a measurable space, and $\mu$, $\nu$ probability measures.
\begin{equation*}
R_{\mathcal{H}}(\mu||\nu) = \sup_{f\in \mathcal{E}}\left\lbrace \Exp^{\mu}[f] - \log\Exp^{\nu}[\exp(f)]\right\rbrace \in \mathbbm{R}\cup\infty,
\end{equation*}
where $\mathcal{E}$ is the set of all bounded measurable functions. If the $\sigma$-algebra is clear from the context, we omit the $\mathcal{H}$ and write $R(\mu||\nu)$. If $\Omega$ is Polish and $\mathcal{H} = \mathcal{B}(\Omega)$, then we need only take the supremum over the set of all continuous bounded functions.
\end{definition}
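For intuition on Definition \ref{eq:relativeentropy}: on a finite space the supremum is attained at $f = \log\frac{d\mu}{d\nu}$ and equals $\sum_i \mu_i \log(\mu_i/\nu_i)$. The following numerical check is illustrative only, with toy distributions of our own choosing; on the path space studied in this paper no such closed form is available.

```python
import math

def kl(mu, nu):
    """Relative entropy R(mu || nu) for discrete distributions."""
    return sum(m * math.log(m / n) for m, n in zip(mu, nu) if m > 0)

def variational_value(f, mu, nu):
    """E^mu[f] - log E^nu[exp f], the objective inside the supremum."""
    e_mu = sum(m * fi for m, fi in zip(mu, f))
    log_e_nu = math.log(sum(n * math.exp(fi) for n, fi in zip(nu, f)))
    return e_mu - log_e_nu

mu = [0.5, 0.3, 0.2]
nu = [0.2, 0.3, 0.5]
f_opt = [math.log(m / n) for m, n in zip(mu, nu)]
# The optimiser f = log(dmu/dnu) attains the relative entropy ...
assert abs(variational_value(f_opt, mu, nu) - kl(mu, nu)) < 1e-12
# ... and any other bounded f stays below it.
assert variational_value([1.0, 0.0, -1.0], mu, nu) < kl(mu, nu)
```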
Let $P\in \mathcal{M}(\T)$ be the following law governing a Markov-Feller diffusion process on $\T$. $P$ is stipulated to be a weak solution (with respect to the canonical filtration) of the local martingale problem with infinitesimal generator
\begin{equation*}
\mathcal{L}_t(f) = \frac{1}{2} \sum_{1\leq j,k\leq d} a^{jk}(t,x)\frac{\partial^2 f}{\partial x^j \partial x^k} + \sum_{1\leq j\leq d} b^j(t,x) \frac{\partial f}{\partial x^j},
\end{equation*}
for $f(x)$ in $C^2(\mathbbm{R}^{d})$, i.e. the space of twice continuously differentiable functions. The initial condition (governing $P_0$) is $\mu_I \in \mathcal{M}(\mathbbm{R}^d)$. The coefficients $a^{jk},b^j : [0,T]\times\mathbbm{R}^d \to \mathbbm{R}$ are assumed to be continuous (over $[0,T]\times\mathbbm{R}^d$), and the matrix $a(t,x)$ is strictly positive definite for all $t$ and $x$. $P$ is assumed to be the unique weak solution. We note that the above infinitesimal generator is the same as in \cite[p 269]{dawson-gartner:87} (note particularly Remark 4.4 in this paper).
Our major result is the following. Let $\mu\in\mathcal{M}(\T)$ govern a random variable $X \in \T$. For some $x\in\T$, we denote by $\mu_{|[0,s],x}$ the regular conditional probability (rcp) given $X_r = x_r$ for all $r\in [0,s]$. The marginal of $\mu_{|[0,s],x}$ at some time $t\geq s$ is denoted $\mu_{t | [0,s],x}$.
\begin{theorem}\label{thm:ratefunctionfinal}
For $\mu\in\mathcal{M}(\T)$,
\begin{multline}
R\left(\mu ||P\right) = R_{\mathcal{F}_0}(\mu || P) + \sup_{\sigma\in\mathbf J_*}E^{\mu(x)}\left[ \int_{0}^{T}\sup_{f\in\mathcal{D}}\bigg\lbrace \frac{\partial}{\partial t}E^{\mu_{t | [0,\underline{\sigma}(t)],x}}\left[f\right]\right. \\ \left.\left.-E^{\mu_{t|[0,\underline{\sigma}(t)],x}(y)} \left[\mathcal{L}_t f(y) + \frac{1}{2}\sum_{j,k=1}^d a^{jk}(t,y)\frac{\partial f}{\partial y^j}\frac{\partial f}{\partial y^k}\right]\right\rbrace dt\right].\label{eq:bigresult}
\end{multline}
Here $\mathcal{D}$ is the Schwartz Space of compactly supported functions $\mathbbm{R}^{d} \to \mathbbm{R}$, possessing continuous derivatives of all orders. If $\frac{\partial}{\partial t}E^{\mu_{t | [0,\underline{\sigma}(t)],x}}\left[f\right]$ does not exist, then we consider it to be $\infty$.
\end{theorem}
Our paper has the following format. In Section 3 we make some preliminary definitions, defining the process $P$ against which the relative entropy is taken in this paper. In Section \ref{Sect:three} we employ the projective limits approach of \cite{dawson-gartner:87} to obtain the chief result of this paper: Theorem \ref{thm:ratefunctionfinal}. This gives an explicit integral representation of the relative entropy. In Section 5 we apply the result in Theorem \ref{thm:ratefunctionfinal} to various corollaries, including the particular case when $\mu$ is the solution of a Martingale Problem. We finish by comparing our results to those of \cite{budhiraja-dupuis-etal:12} and \cite{fischer:12}.
\section{Preliminaries}
We outline some necessary definitions.
For $\sigma\in\mathbf J$ of the form $\sigma = \lbrace t_1,t_2,\ldots,t_m \rbrace$, let $\sigma_{;j} = \lbrace t_1,\ldots,t_j \rbrace$. We denote the number of elements in a partition $\sigma$ by $m(\sigma)$. We let $\mathbf J_s$ be the set of all partitions lying in $[0,s]$. For $0 < s < t \leq T$, we let $\mathbf J_{s;t}$ be the set of all partitions of the form $\sigma\cup t$, where $\sigma\in \mathbf J_s$. Let $|\sigma| = \sup_{0\leq j \leq m(\sigma)-1} (t_{j+1} - t_j)$ and, for $t\in [0,T]$, let $\underline{\sigma}(t) = \sup\lbrace s\in\sigma: s\leq t\rbrace$.
Let $\pi_\sigma : \T \to \T_\sigma := \mathbbm{R}^{d\times m(\sigma)}$ be the natural projection, i.e. such that $\pi_\sigma(x) = (x_{t_1},\ldots,x_{t_{m(\sigma)}})$. We similarly define the natural projection $\pi_{\alpha\gamma}:\T_\gamma\to\T_\alpha$ (for $\alpha\subseteq\gamma \in \mathbf J$), and we define $\pi_{[s,t]}: \T \to C([s,t];\mathbbm{R}^d)$ to be the natural restriction of $x\in \T$ to $[s,t]$. The expectation of some measurable function $f$ with respect to a measure $\mu$ is written as $\Exp^{\mu(x)}[f(x)]$, or simply $\Exp^{\mu}[f]$ when the context is clear.
For $s<t$, we write $\mathcal{F}_{s,t} = \pi_{[s,t]}^{-1}\mathcal{B}(C([s,t];\mathbbm{R}^d))$ and $\mathcal{F}_\sigma = \pi_\sigma^{-1}\mathcal{B}(\T_\sigma)$. We define $\mathcal{F}_{s;t}$ to be the $\sigma$-algebra generated by $\mathcal{F}_s$ and $\mathcal{F}_{\gamma}$ (where $\gamma = [t]$). For $\mu\in\mathcal{M}(\T)$, we denote its image laws by $\mu_{\sigma} := \mu\circ \pi_{\sigma}^{-1}\in\mathcal{M}(\T_\sigma)$ and $\mu_{[s,t]} := \mu\circ \pi_{[s,t]}^{-1} \in \mathcal{M}(C([s,t];\mathbbm{R}^d))$. Let $(X_t)$ be a continuous stochastic process, adapted to $(\mathcal{F}_t)$ and governed by $\mu\in \mathcal{M}(\T)$. For $z\in \mathbbm{R}^d$, we write the rcp given $X_s = z$ by $\mu_{|s,z}$. For $x\in C([0,s];\mathbbm{R}^d)$ or $\T$, the rcp given that $X_u = x_u$ for all $0\leq u \leq s$ is written as $\mu_{|[0,s],x}$. The rcp given that $X_u = x_u$ for all $u\leq s$, and $X_t = z$, is written as $\mu_{|s,x;t,z}$. For $\sigma\in\mathbf J_s$ and $z\in (\mathbbm{R}^d)^{m(\sigma)}$, the rcp given that $X_u = z_u$ for all $u\in\sigma$ is written as $\mu_{|\sigma,z}$. All of these measures are considered to be in $\mathcal{M}(C([s,T];\mathbbm{R}^d))$ (unless indicated otherwise in particular circumstances). The probability laws governing $X_t$ (for $t\geq s$) under the first, second and fourth of these are, respectively, $\mu_{t|s,z}$, $\mu_{t|[0,s],x}$ and $\mu_{t|\sigma,z}$. We clearly have $\mu_{s|s,z} = \delta_z$, for $\mu_s$ a.e. $z$, and similarly for the others.
\begin{remark}
See \cite[Definition 5.3.16]{karatzas-shreve:91} for a definition of an rcp. Technically, if we let $\mu^*_{|s,z}$ be the rcp given $X_s = z$ according to this definition, then $\mu_{|s,z} = \mu^*_{|s,z}\circ\pi_{[s,T]}^{-1}$ and $\mu_{t|s,z} = \mu^*_{|s,z}\circ\pi_t^{-1}$. By \cite[Theorem 3.18]{karatzas-shreve:91}, $\mu_{|s,z}$ is well-defined for $\mu_s$ a.e. $z$. Similar comments apply to the other rcp's defined above.
\end{remark}
In the definition of the relative entropy, we abbreviate $R_{\mathcal{F}_\sigma}(\mu || P)$ by $R_\sigma(\mu || P)$. If $\sigma =\lbrace t\rbrace$, we write $R_t(\mu || P)$.
\section{The Relative Entropy $R(\cdot ||P)$ using Projective Limits}\label{Sect:three}
In this section we derive an integral representation of the relative entropy $R(\mu ||P)$, for arbitrary $\mu\in\mathcal{M}(\T)$. We use Sanov's Theorem to obtain the initial expressions in Theorem \ref{thm:DG}, before adapting the projective limits approach of \cite{dawson-gartner:87} to obtain the central result (Theorem \ref{thm:ratefunctionfinal}).
We begin with a standard decomposition result for the relative entropy \cite{donsker-varadhan:83}.
\begin{lemma}\label{lem:DonskVara}
Let $X$ be a Polish Space with sub $\sigma$-algebras $\mathcal{G}\subseteq\mathcal{F} \subseteq \mathcal{B}(X)$. Let $\mu$ and $\nu$ be probability measures on $(X,\mathcal{F})$, and their regular conditional probabilities over $\mathcal{G}$ be (respectively) $\mu_\omega$ and $\nu_\omega$. Then
\begin{equation*}
R_{\mathcal{F}}(\mu||\nu) = R_\mathcal{G}(\mu||\nu) + \Exp^{\mu(\omega)}\left[R_{\mathcal{F}}(\mu_\omega||\nu_\omega)\right].
\end{equation*}
\end{lemma}
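In the discrete setting, Lemma \ref{lem:DonskVara} is the familiar chain rule for relative entropy. The following Python sketch (with arbitrary illustrative weights, not taken from the paper) verifies the decomposition numerically, with the first coordinate of a pair playing the role of the sub $\sigma$-algebra $\mathcal{G}$:

```python
import math

# Toy joint laws mu, nu on {0,1} x {0,1}; the first coordinate plays the
# role of the sub sigma-algebra G, the full pair that of F.
mu = {(0, 0): 0.20, (0, 1): 0.30, (1, 0): 0.10, (1, 1): 0.40}
nu = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

def kl(p, q):
    """Relative entropy R(p||q) for distributions given as dicts."""
    return sum(p[k] * math.log(p[k] / q[k]) for k in p if p[k] > 0)

def marginal(p):
    m = {}
    for (i, j), w in p.items():
        m[i] = m.get(i, 0.0) + w
    return m

def conditional(p, i):
    m = marginal(p)
    return {j: p[(i, j)] / m[i] for j in (0, 1)}

mu_g, nu_g = marginal(mu), marginal(nu)

# R_F(mu||nu) = R_G(mu||nu) + E^{mu(omega)}[ R_F(mu_omega||nu_omega) ]
lhs = kl(mu, nu)
rhs = kl(mu_g, nu_g) + sum(
    mu_g[i] * kl(conditional(mu, i), conditional(nu, i)) for i in (0, 1)
)
assert abs(lhs - rhs) < 1e-12
```

The identity is exact for any choice of weights; it underlies each inductive step in the proofs below.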
The following Theorem is proved in Section \ref{sect:DGproof} of the Appendix.
\begin{theorem}\label{thm:DG}
If $\alpha,\sigma \in \mathbf J$ and $\alpha\subseteq\sigma$, then $R_\alpha(\mu||P) \leq R_\sigma(\mu||P)$. Furthermore,
\begin{align}
R_{\mathcal{F}_{s,t}}(\mu||P) &= \sup_{\sigma\in\mathbf J\cap [s,t]} R_{\sigma} (\mu||P),\label{eq:projlimit1}\\
R_{\mathcal{F}_{s;t}}(\mu||P) &= \sup_{\sigma\in\mathbf J_{s;t}} R_{\sigma} (\mu||P).\label{eq:projlimit2}
\end{align}
It suffices, for the supremum in \eqref{eq:projlimit1}, to take $\sigma\subset\mathcal{Q}_{s,t}$, where $\mathcal{Q}_{s,t}$ is any countable dense subset of $[s,t]$. We may thus assume that there exists a sequence of partitions $\sigma^{(n)}\subset\mathcal{Q}_{s,t}$ such that $\sigma^{(n)}\subseteq \sigma^{(n+1)}$, $|\sigma^{(n)}| \to 0$ as $n\to \infty$ and
\begin{equation}
R_{\mathcal{F}_{s,t}}(\mu||P) = \lim_{n\to\infty} R_{\sigma^{(n)}}(\mu||P).\label{eq:projlimit3}
\end{equation}
\end{theorem}
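The monotonicity $R_\alpha(\mu||P)\leq R_\sigma(\mu||P)$ for $\alpha\subseteq\sigma$ has a simple finite-dimensional analogue: relative entropy cannot increase under projection onto a coarser partition. A minimal numerical sketch, with hypothetical weights:

```python
import math

def kl(p, q):
    """Relative entropy of probability vectors p, q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Joint laws of (X_{t1}, X_{t2}) on {0,1}^2, listed as probabilities of
# (0,0), (0,1), (1,0), (1,1); these weights are purely illustrative.
mu2 = [0.40, 0.20, 0.10, 0.30]
p2  = [0.25, 0.25, 0.25, 0.25]

# Image laws under the projection onto the first coordinate
# (the coarser partition alpha = {t1} of sigma = {t1, t2}).
mu1 = [mu2[0] + mu2[1], mu2[2] + mu2[3]]
p1  = [p2[0] + p2[1],  p2[2] + p2[3]]

# R_alpha(mu||P) <= R_sigma(mu||P): coarsening cannot increase entropy.
assert kl(mu1, p1) <= kl(mu2, p2) + 1e-12
```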
We continue with a technical lemma.
\begin{lemma}\label{lem:technical1}
Let $t>s$, $\alpha,\sigma \in \mathbf J_s$, $\sigma\subset \alpha$ and $s\in \sigma$. Then for $\mu_\sigma$ a.e. $x$,
\begin{equation*}
R_t (\mu_{|\sigma,x}||P_{|s,x_s}) = R (\mu_{t|\sigma,x}||P_{t|s,x_s}).
\end{equation*}
Secondly,
\begin{equation*}
E^{\mu_{\sigma}(x)}\left[R_t (\mu_{|\sigma,x}||P_{|s,x_s})\right] \leq E^{\mu_{\alpha}(z)}\left[R_t (\mu_{|\alpha,z}||P_{|s,z_s})\right].
\end{equation*}
\end{lemma}
\begin{proof}
The first statement is immediate from Definition \ref{eq:relativeentropy} and the Markovian nature of $P$. For the second statement, it suffices to prove this in the case that $\alpha = \sigma \cup u$, for some $u < s$. We note that, using a property of regular conditional probabilities, for $\mu_\sigma$ a.e. $x$,
\begin{equation}
\mu_{t|\sigma,x} = E^{\mu_{u|\sigma,x}(\omega)}\left[ \mu_{t|\alpha,v(x,\omega)}\right],\label{eq:rcpd1}
\end{equation}
where $v(x,\omega)\in \T_{\alpha}$, $v(x,\omega)_u = \omega$, $v(x,\omega)_r = x_r$ for all $r\in\sigma$.
We consider $\mathfrak{A}$ to be the set of all finite partitions $\mathfrak{a}\subset \mathcal{B}(\mathbbm{R}^d)$ of $\mathbbm{R}^d$ into disjoint Borel sets. The expression for the entropy in \cite[Lemma 1.4.3]{dupuis-ellis:97} yields
\begin{equation*}
E^{\mu_\sigma(x)}\left[ R \left(\mu_{t|\sigma,x}||P_{t|s,x_s}\right)\right] = E^{\mu_\sigma(x)}\left[ \sup_{\mathfrak{a}\in\mathfrak{A}}\sum_{A\in\mathfrak{a}}\mu_{t|\sigma,x}(A)\log\frac{\mu_{t|\sigma,x}(A)}{P_{t|s,x_s}(A)}\right] .
\end{equation*}
Here the summand is considered to be zero if $\mu_{t|\sigma,x}(A) = 0$, and infinite if $\mu_{t|\sigma,x}(A) > 0$ and $P_{t|s,x_s}(A)=0$. Making use of \eqref{eq:rcpd1}, we find that
\begin{align*}
& E^{\mu_\sigma(x)}\left[R \left(\mu_{t|\sigma,x}||P_{t|s,x_s}\right)\right] \\ &= E^{\mu_\sigma(x)}\left[\sup_{\mathfrak{a}\in\mathfrak{A}}\sum_{A\in\mathfrak{a}}E^{\mu_{u|\sigma,x}(\omega)}\left[\mu_{t|\alpha,v(x,\omega)}(A)\right]\log\frac{\mu_{t|\sigma,x}(A)}{P_{t|s,x_s}(A)}\right] \\
&\leq E^{\mu_\sigma(x)} E^{\mu_{u|\sigma,x}(\omega)}\left[ \sup_{\mathfrak{a}\in\mathfrak{A}}\sum_{A\in\mathfrak{a}}\mu_{t|\alpha,v(x,\omega)}(A)\log\frac{\mu_{t|\sigma,x}(A)}{P_{t|s,x_s}(A)}\right]\\
&= E^{\mu_\alpha(z)}\left[\sup_{\mathfrak{a}\in\mathfrak{A}}\sum_{A\in\mathfrak{a}}\mu_{t|\alpha,z}(A)\log\frac{\mu_{t|\sigma,\pi_{\sigma\alpha} z}(A)}{P_{t|s,z_s}(A)}\right].
\end{align*}
We note that, for $\mu_{\alpha}$ a.e. $z$, if $\mu_{t|\sigma,\pi_{\sigma\alpha} z}(A) = 0$ in this last expression, then $\mu_{t|\alpha,z}(A) =0$ and we consider the summand to be zero. To complete the proof of the lemma, it is thus sufficient to prove that for $\mu_\alpha$ a.e. $z$
\begin{equation*}
\sup_{\mathfrak{a}\in\mathfrak{A}}\sum_{A\in\mathfrak{a}}\mu_{t|\alpha,z}(A)\log\frac{\mu_{t|\alpha,z}(A)}{P_{t|s,z_s}(A)} \geq \sup_{\mathfrak{a}\in\mathfrak{A}}\sum_{A\in\mathfrak{a}}\mu_{t|\alpha,z}(A)\log\frac{\mu_{t|\sigma,\pi_{\sigma\alpha} z}(A)}{P_{t|s,z_s}(A)} .
\end{equation*}
But, in turn, the above inequality will be true if we can prove that for each partition $\mathfrak{a}$ such that $P_{t|s,z_s}(A) > 0$ and $\mu_{t|\sigma,\pi_{\sigma\alpha} z}(A) > 0$ for all $A\in\mathfrak{a}$,
\begin{equation*}
\sum_{A\in\mathfrak{a}}\mu_{t|\alpha,z}(A)\log\frac{\mu_{t|\alpha,z}(A)}{P_{t|s,z_s}(A)} -\sum_{A\in\mathfrak{a}}\mu_{t|\alpha,z}(A)\log\frac{\mu_{t|\sigma,\pi_{\sigma\alpha} z}(A)}{P_{t|s,z_s}(A)}\geq 0.
\end{equation*}
The left-hand side is equal to $\sum_{A\in\mathfrak{a}}\mu_{t|\alpha,z}(A)\log\frac{\mu_{t|\alpha,z}(A)}{\mu_{t|\sigma,\pi_{\sigma\alpha} z}(A)}$. An application of Jensen's inequality demonstrates that this is greater than or equal to zero.
\end{proof}
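The final step above is the nonnegativity of discrete relative entropy (Gibbs' inequality), obtained from Jensen's inequality applied to $-\log$. A minimal numerical sketch with made-up weights:

```python
import math

# Two probability vectors over a finite partition \mathfrak{a} (toy weights).
p = [0.5, 0.3, 0.2]   # plays the role of mu_{t|alpha,z}(A)
q = [0.4, 0.4, 0.2]   # plays the role of mu_{t|sigma,pi z}(A)

# sum_A p(A) log(p(A)/q(A)) >= 0: by Jensen, since -log is convex,
# sum p log(p/q) = -sum p log(q/p) >= -log(sum p * (q/p)) = -log(1) = 0.
gibbs = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
assert gibbs >= 0.0
```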
\begin{remark}
If, contrary to the definition, we briefly consider $\mu_{|[0,t],x}$ to be a probability measure on $\T$, such that $\mu_{|[0,t],x}(A)=1$ where $A$ is the set of all points $y$ such that $y_s = x_s$ for all $s\leq t$, then it may be seen from the definition of $R$ that
\begin{equation}
R_{\mathcal{F}_T}\left(\mu_{|[0,t],x}||P_{|[0,t],x}\right) = R_{\mathcal{F}_{t,T}}\left(\mu_{|[0,t],x}||P_{|[0,t],x}\right) = R_{\mathcal{F}_{t,T}}\left(\mu_{|[0,t],x}||P_{|t,x_t}\right). \label{eq:I2smallident}
\end{equation}
We have also made use of the Markov Property of $P$. This is why our convention, to which we now return, is to consider $\mu_{|[0,t],x}$ to be a probability measure on $(C([t,T];\mathbbm{R}^d),\mathcal{F}_{t,T})$.
\end{remark}
This leads us to two alternative expressions for $R(\mu || P)$.
\begin{lemma}\label{lem:part1}
Each $\sigma$ below is of the form $\lbrace t_1 < t_2 < \ldots < t_{m(\sigma)-1} < t_{m(\sigma)}\rbrace$ for some integer $m(\sigma)$; identity \eqref{eq:I2ident1} holds for every $\sigma\in\mathbf J_*$.
\begin{align}
&R\left(\mu||P\right) = R_0(\mu || P) + \sum_{j=1}^{m(\sigma)-1}E^{\mu_{[0,t_j]}(x)}\left[R_{\mathcal{F}_{t_j,t_{j+1}}}\left( \mu_{| [0,t_j],x}||P_{|t_j,x_{t_j}}\right)\right], \label{eq:I2ident1}\\
&R\left(\mu||P\right) = R_0(\mu || P) + \sup_{\sigma\in\mathbf J_*}\sum_{j=1}^{m(\sigma)-1}E^{\mu_{\sigma_{;j}}(x)}\left[ R_{t_{j+1}}\left( \mu_{t_{j+1} | \sigma_{;j},x}||P_{t_{j+1}|t_j,x_{t_j}}\right)\right], \label{eq:I2ident2} \\
&E^{\mu_{[0,s]}(x)}\left[R_t\left(\mu_{t|[0,s],x}||P_{t|s,x_s}\right)\right] = \sup_{\sigma\in\mathbf J_s}E^{\mu_{\sigma}(y)}\left[ R_t\left(\mu_{t|\sigma,y}||P_{t|s,y_s}\right)\right],\label{eq:I2ident4}
\end{align}
where in this last expression $0\leq s < t \leq T$.
\end{lemma}
\begin{proof}
Consider the sub $\sigma$-algebra $\mathcal{F}_{0,t_{m(\sigma)-1}}$. We then find, through an application of Lemma \ref{lem:DonskVara} and \eqref{eq:I2smallident}, that
\begin{multline*}
R\left(\mu ||P\right) = R_{\mathcal{F}_{0,t_{m(\sigma)-1}}} \left(\mu ||P\right)+ \\ E^{\mu_{[0,t_{m(\sigma)-1}]}(x)}\left[ R_{\mathcal{F}_{t_{m(\sigma)-1},t_{m(\sigma)}}}\left( \mu_{| [0,t_{m(\sigma)-1}],x}||P_{|t_{m(\sigma)-1},x_{t_{m(\sigma)-1}}}\right)\right].\end{multline*}
We may continue inductively to obtain the first identity.
We use Theorem \ref{thm:DG} to prove the second identity. It suffices to take the supremum over $\mathbf J_*$, because $R_\sigma(\mu ||P) \geq R_\gamma(\mu ||P)$ if $\gamma \subset \sigma$. It thus suffices to prove that
\begin{equation}
R_{\sigma}(\mu||P) = R_0(\mu || P) +\sum_{j=1}^{m(\sigma)-1}E^{\mu_{\sigma_{;j}}(x)}\left[ R_{t_{j+1}}\left( \mu_{t_{j+1} | \sigma_{;j},x}||P_{t_{j+1}|t_j,x_{t_j}}\right)\right].
\end{equation}
But this also follows from repeated application of Lemma \ref{lem:DonskVara}. To prove the third identity, we firstly note that
\begin{align*}
R_{\mathcal{F}_{s;t}}\left(\mu||P\right) &= R_0(\mu || P) + \sup_{\sigma\in\mathbf J_{s;t}}\sum_{j=1}^{m(\sigma)-1}E^{\mu_{\sigma_{;j}}(x)}\left[ R_{t_{j+1}}\left( \mu_{t_{j+1} | \sigma_{;j},x}||P_{t_{j+1}|t_j,x_{t_j}}\right)\right]\\
&= \sup_{\sigma\in\mathbf J_s}\left\lbrace R_{\sigma}(\mu||P) + E^{\mu_{\sigma}(x)}\left[R_t\left(\mu_{t|\sigma,x}||P_{t|s,x_s}\right)\right]\right\rbrace.
\end{align*}
The proof of this is entirely analogous to that of the second identity, except that it makes use of \eqref{eq:projlimit2} instead of \eqref{eq:projlimit1}. But, after another application of Lemma \ref{lem:DonskVara}, we also have that
\begin{equation*}
R_{\mathcal{F}_{s;t}}\left(\mu ||P\right) = R_{\mathcal{F}_{0,s}}(\mu ||P) + E^{\mu_{[0,s]}(x)}\left[ R_t(\mu_{t|[0,s],x} ||P_{t|s,x_s})\right].
\end{equation*}
On equating these two different expressions for $R_{\mathcal{F}_{s;t}}\left(\mu ||P\right)$, we obtain
\begin{multline*}
E^{\mu_{[0,s]}(x)}\left[R_t(\mu_{t|[0,s],x} ||P_{t|s,x_s})\right] = \sup_{\sigma\in\mathbf J_{s}}\left\lbrace \left(R_{\sigma}(\mu ||P) - R_{\mathcal{F}_{0,s}}(\mu ||P)\right) \right. \\ \left.+ E^{\mu_{\sigma}(x)}\left[R_t\left(\mu_{t|\sigma,x} ||P_{t|s,x_s}\right)\right]\right\rbrace.
\end{multline*}
Let $(\sigma^{(k)}) \subset \mathbf J_s$, $\sigma^{(k-1)}\subseteq \sigma^{(k)}$ be such that $\lim_{k\to\infty}R_{\sigma^{(k)}}(\mu ||P) = R_{\mathcal{F}_{0,s}}(\mu ||P)$. Such a sequence exists by \eqref{eq:projlimit1}. Similarly, let $(\gamma^{(k)})\subseteq \mathbf J_s$ be a sequence such that $E^{\mu_{\gamma^{(k)}}(x)}\left[R_t\left(\mu_{t|\gamma^{(k)},x} ||P_{t|s,x_s}\right)\right]$ is nondecreasing and converges as $k\to\infty$ to $\sup_{\sigma\in\mathbf J_s}E^{\mu_{\sigma}(x)}\left[R_t\left(\mu_{t|\sigma,x} ||P_{t|s,x_s}\right)\right]$. Lemma \ref{lem:technical1} dictates that \begin{equation*}
E^{\mu_{\sigma^{(k)}\cup\gamma^{(k)}}(x)}\left[R_t\left(\mu_{t|\sigma^{(k)}\cup\gamma^{(k)},x} ||P_{t|s,x_s}\right)\right]
\end{equation*}
converges to the same limit as well. Clearly $\lim_{k\to\infty}R_{\sigma^{(k)}\cup\gamma^{(k)}}(\mu ||P) = R_{\mathcal{F}_{0,s}}(\mu ||P)$, by the monotonicity stated at the start of Theorem \ref{thm:DG}. This yields the third identity.
\end{proof}
\subsection{Proof of Theorem \ref{thm:ratefunctionfinal}}
In this section we work towards the proof of Theorem \ref{thm:ratefunctionfinal}, making use of some results in \cite{dawson-gartner:87}. First, however, we require some further definitions.
If $K\subset \mathbbm{R}^{d}$ is compact, let $\mathcal{D}_K$ be the set of all $f\in\mathcal{D}$ whose support is contained in $K$. The corresponding space of real distributions is $\mathcal{D}^{'}$, and we denote the action of $\theta\in\mathcal{D}^{'}$ by $\langle \theta,f\rangle$. If $\theta\in\mathcal{M}(\mathbbm{R}^d)$, then clearly $\langle \theta,f\rangle = \Exp^{\theta}[f]$. We let $C^{2,1}_0(\mathbbm{R}^d)$ denote the set of all continuous functions, possessing continuous spatial derivatives of first and second order, a continuous time derivative of first order, and of compact support. For $f\in\mathcal{D}$ and $t\in[0,T]$, we define the function $\nabla_t f: \mathbbm{R}^d\to \mathbbm{R}^d$ such that $(\nabla_t f(y))^i= \sum_{j=1}^d a^{ij}(t,y)\frac{\partial f}{\partial y^j}$ (for $x\in\T$, we may also understand $\nabla_t f(x) := \nabla_t f(x_t)$). Let $a_{ij}$ be the components of the matrix inverse of $a^{ij}$. For random variables $X,Y:\T\to\mathbbm{R}^d$, we define the inner product $(X,Y)_{t,x} = \sum_{i,j=1}^d X^i(x) Y^j(x) a_{ij}(t,x_t)$, with associated norm given by $|X|_{t,x}^2 = (X(x),X(x))_{t,x}$. We note that $|\nabla_t f|_{t,x}^2 = \sum_{i,j=1}^d a^{ij}(t,x_t)\frac{\partial f}{\partial y^i}(x_t)\frac{\partial f}{\partial y^j}(x_t)$.
Let $\mathfrak{M}$ be the space of all continuous maps $[0,T] \to \mathcal{M}(\mathbbm{R}^d)$, equipped with the topology of uniform convergence. For $s\in[0,T]$, $\vartheta\in \mathfrak{M}$ and $\nu\in\mathcal{M}(\mathbbm{R}^d)$ we define $\mathfrak{n}(s,\vartheta,\nu) \geq 0$ such that
\begin{equation}
\mathfrak{n}(s,\vartheta,\nu)^2 = \sup_{f\in\mathcal{D}}\left\lbrace\langle\vartheta,f\rangle -\frac{1}{2} E^{\nu(y)}\left[\left|\nabla_s f\right|_{s,y}^2\right] \right\rbrace.\label{defn:n}
\end{equation}
This definition is taken from \cite[Eq. (4.7)]{dawson-gartner:87}; we note that $\mathfrak{n}$ is convex in $\vartheta$. For $\gamma\in \mathcal{M}(\T)$, we may naturally write $\mathfrak{n}(s,\gamma,\nu) :=\mathfrak{n}(s,\omega,\nu)$, where $\omega$ is the projection of $\gamma$ onto $\mathfrak{M}$, i.e. $\omega(s) = \gamma_s$. It is shown in \cite{dawson-gartner:87} that this projection is continuous.
The following two definitions, lemma, and two propositions are all taken (with some small modifications) from \cite{dawson-gartner:87}.
\begin{definition}
Let $I$ be an interval of the real line. A measure $\mu\in \mathcal{M}(\T)$ is called absolutely continuous over $I$ if for each compact set $K\subset \mathbbm{R}^d$ there exist a neighborhood $U_K$ of 0 in $\mathcal{D}_K$ and an absolutely continuous function $H_K :I \to \mathbbm{R}$ such that
\begin{equation*}
\left| \Exp^{\mu_u }[f] - \Exp^{\mu_v}[f] \right| \leq \left| H_K(u) - H_K(v)\right|,
\end{equation*}
for all $u,v\in I$ and $f\in U_K$.
\end{definition}
\begin{lemma}\label{lem:abscontderiv}
\cite[Lemma 4.2]{dawson-gartner:87} If $\mu$ is absolutely continuous over an interval $I$, then its derivative exists (in the distributional sense) for Lebesgue a.e. $t\in I$. That is, for Lebesgue a.e. $t\in I$, there exists $\dot{\mu}_t \in \mathcal{D}^{'}$ such that for all $f\in \mathcal{D}$
\begin{equation*}
\lim_{h\to 0}\frac{1}{h}\left( \langle \mu_{t+h},f\rangle - \langle \mu_t,f\rangle\right) = \langle \dot{\mu}_t,f\rangle.
\end{equation*}
\end{lemma}
\begin{definition}\label{defn:HilbertSpace1}
For $\nu\in\mathcal{M}(C([s,t];\mathbbm{R}^d))$, and $0\leq s < t \leq T$, let $L^{2}_{s,t}(\nu)$ be the Hilbert space of all measurable maps $h: [s,t]\times \mathbbm{R}^{d} \to \mathbbm{R}^{d}$ with inner product
\begin{equation*}
[ h_1,h_2 ] = \int_s^t\Exp^{\nu_u(x)}\left[(h_1(u,x),h_2(u,x))_{u,x}\right] du.
\end{equation*}
We denote by $L^{2}_{s,t,\nabla}(\nu)$ the closure in $L_{s,t}^{2}(\nu)$ of the linear subspace generated by maps of the form $(u,x)\mapsto \nabla_u f(x)$, where $f\in C^{2,1}_0([s,t],\mathbbm{R}^{d})$. We note that functions in $L^{2}_{s,t,\nabla}(\nu)$ need only be defined $du\otimes\nu_u(dx)$ almost everywhere.
\end{definition}
Recall that $\mathfrak{n}$ is defined in \eqref{defn:n}, and note that $\langle {}^*\mathcal{L}_t\mu_t,f\rangle := \langle \mu_t,\mathcal{L}_t f\rangle$.
\begin{proposition}\label{prop:intermediate}
Assume that $\mu\in\mathcal{M}(C([r,s];\mathbbm{R}^d))$ is such that $\mu_r = \delta_y$ for some $y\in \mathbbm{R}^d$ and $0\leq r<s\leq T$. We have that \cite[Eq. 4.9 and Lemma 4.8]{dawson-gartner:87}
\begin{multline}
\int_r^s \mathfrak{n}(t,\dot{\mu}_t-{}^*\mathcal{L}_t\mu_t,\mu_t)^2 dt = \sup_{f\in C^{2,1}_0(\mathbbm{R}^d)}\left\lbrace \Exp^{\mu_s(x)}[f(s,x)] -f(r,y) \right. \\ \left.- \int^s_r \Exp^{\mu_t(x)}\left[\left(\frac{\partial}{\partial t} + \mathcal{L}_t\right)f(t,x) + \frac{1}{2}\left|\nabla_t f(t,x)\right|^2_{t,x}\right] dt\right\rbrace .\label{eq:DG3}
\end{multline}
It clearly suffices to take the supremum over a countable dense subset. Assume now that $\int_r^s \mathfrak{n}(t,\dot{\mu}_t-{}^*\mathcal{L}_t\mu_t,\mu_t)^2 dt < \infty$. Then for Lebesgue a.e. $t$, $\dot{\mu}_t = {}^*\mathcal{K}_t\mu_t$, where \cite[Lemma 4.8(3)]{dawson-gartner:87}
\begin{equation}\label{eq:KidentityDG}
\mathcal{K}_t f(\cdot) = \mathcal{L}_t f(\cdot)+ \sum_{1\leq j\leq d}(h^{\mu}(t,\cdot))^j\frac{\partial f}{\partial x^j}(\cdot),
\end{equation}
for some $h^{\mu}\in L^{2}_{r,s,\nabla}(\mu)$ which satisfies \cite[Lemma 4.8(4)]{dawson-gartner:87}
\begin{equation}
\int_r^s \mathfrak{n}(t,\dot{\mu}_t-{}^*\mathcal{L}_t\mu_t,\mu_t)^2 dt = \frac{1}{2}\int_r^s \Exp^{\mu_t(x)}\left[\left| h^\mu(t,x)\right|_{t,x}^2\right] dt < \infty.\label{eq:hmuratefunction}
\end{equation}
\end{proposition}
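The passage from the variational expression \eqref{defn:n} to the quadratic form \eqref{eq:hmuratefunction} is, at heart, the Legendre duality $\sup_f\lbrace hf - \tfrac{1}{2}f^2\rbrace = \tfrac{1}{2}h^2$. A one-dimensional caricature (scalar $h$, with a grid search standing in for the supremum over $\mathcal{D}$):

```python
# Legendre-duality heart of eq:hmuratefunction, in one scalar dimension:
# sup_f { h*f - (1/2) f^2 } = (1/2) h^2, attained at f = h.
h = 1.7

# Brute-force supremum over the grid f in [-5, 5] with step 0.001.
candidates = [k * 0.001 for k in range(-5000, 5001)]
sup_val = max(h * f - 0.5 * f ** 2 for f in candidates)

assert abs(sup_val - 0.5 * h ** 2) < 1e-6
```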
\begin{remark}
We reach \eqref{eq:DG1} from the proof of Lemma 4.9 in \cite{dawson-gartner:87}, in particular their equation (4.10). One should note also that \cite{dawson-gartner:87} write the relative entropy $R$ as $L^{(1)}_\nu$ in (their) equation (4.10). To reach \eqref{eq:DG2}, we also use the equivalence between (4.7) and (4.8) in \cite{dawson-gartner:87}.
\end{remark}
\begin{proposition}\label{theorem:DGbig}
Assume that $\mu\in\mathcal{M}(\T)$ is such that $\mu_{r} = \delta_y$ for some $y\in \mathbbm{R}^d$ and $0\leq r<s\leq T$. If $R_{\mathcal{F}_{r,s}}(\mu ||P_{|r,y}) < \infty$, then $\mu$ is absolutely continuous on $[r,s]$, and \cite[Lemma 4.9]{dawson-gartner:87}
\begin{equation}
R_{\mathcal{F}_{r,s}}(\mu ||P_{|r,y}) \geq \int_r^s \mathfrak{n}(t,\dot{\mu}_{t}-{}^*\mathcal{L}_t\mu_{t},\mu_{t})^2 dt.\label{eq:DG1}
\end{equation}
Here the derivative $\dot{\mu}_{t}$ is defined in Lemma \ref{lem:abscontderiv}. For all $f\in\mathcal{D}$, \cite[Eq (4.35)]{dawson-gartner:87}
\begin{equation}
\Exp^{\mu_{s}}[f] -\log\Exp^{P_{s|r,y}}\left[\exp(f)\right] \leq \int_r^s \mathfrak{n}(t,\dot{\mu}_{t}-{}^*\mathcal{L}_t\mu_t,\mu_t)^2 dt.\label{eq:DG2}
\end{equation}
\end{proposition}
We are now ready to prove Theorem \ref{thm:ratefunctionfinal} (the central result).
\begin{proof}
Fix a partition $\sigma = \lbrace t_1,\ldots,t_{m}\rbrace$. We may conclude from \eqref{eq:I2ident1} and \eqref{eq:DG1} that,
\begin{multline}
R\left(\mu || P\right) \geq R_0\left(\mu || P\right) + \\ \sum_{j=1}^{m-1}E^{\mu_{[0,t_j]}(x)}\int_{t_j}^{t_{j+1}} \mathfrak{n}(t,\dot{\mu}_{t | [0,t_j],x}-{}^*\mathcal{L}_t\mu_{t|[0,t_j],x},\mu_{t|[0,t_j],x})^2 dt.
\end{multline}
The integrand on the right-hand side is a measurable function of $x$, due to the equivalent expression \eqref{eq:DG3}. We may infer from \eqref{eq:DG2} that,
\begin{multline}
E^{\mu_{[0,t_j]}(x)}\int_{t_j}^{t_{j+1}} \mathfrak{n}(t,\dot{\mu}_{t | [0,t_j],x}-{}^*\mathcal{L}_t\mu_{t|[0,t_j],x},\mu_{t|[0,t_j],x})^2 dt \\ \geq \Exp^{\mu_{[0,t_j]}(x)}\left[ \sup_{f\in\mathcal{D}}\left\lbrace \Exp^{\mu_{t_{j+1}|[0,t_j],x}}[f] -\log\Exp^{P_{t_{j+1}|t_j,x_{t_j}}}\left[\exp(f)\right]\right\rbrace\right]\\
= \Exp^{\mu_{[0,t_j]}(x)}\left[ \sup_{f\in C_b(\mathbbm{R}^d)}\left\lbrace \Exp^{\mu_{t_{j+1}|[0,t_j],x}}[f] -\log\Exp^{P_{t_{j+1}|t_j,x_{t_j}}}\left[\exp(f)\right]\right\rbrace\right].\label{eq:tmptmp}
\end{multline}
This last step follows by noting that if $\nu\in\mathcal{M}(\mathbbm{R}^d)$, $f\in C_b(\mathbbm{R}^d)$, and the expectation of $f$ with respect to $\nu$ is finite, then there exists a sequence $(K_n)$ of compact subsets of $\mathbbm{R}^d$ such that
\begin{equation*}
\int_{\mathbbm{R}^d}f(x)d\nu(x) = \lim_{n\to\infty}\int_{K_n}f(x)d\nu(x).
\end{equation*}
In turn, for each $n$ there exists a sequence $(f_n^{(m)})_{m\geq 1}\subset \mathcal{D}_{K_n}$ such that we may write
\begin{equation*}
\int_{K_n}f(x)d\nu(x) = \lim_{m\to\infty}\int_{K_n}f_n^{(m)}(x)d\nu(x).
\end{equation*}
This allows us to conclude that the two suprema are equal. The last expression in \eqref{eq:tmptmp} is merely
\begin{equation*}
E^{\mu_{[0,t_j]}(x)}\left[R_{t_{j+1}}\left( \mu_{t_{j+1} | [0,t_j],x}||P_{t_{j+1}|t_j,x_{t_j}}\right)\right].
\end{equation*}
By \eqref{eq:I2ident4}, this is greater than or equal to
\begin{equation*}
E^{\mu_{\sigma_{;j}}(y)}\left[R_{t_{j+1}}\left(\mu_{t_{j+1}|\sigma_{;j},y}||P_{t_{j+1}|t_j,y_{t_j}}\right)\right].
\end{equation*}
We thus obtain the theorem using \eqref{eq:I2ident2}.
\end{proof}
\section{Some Corollaries}\label{Sect:Particular}
We state some corollaries of Theorem \ref{thm:ratefunctionfinal}. In the course of this section we make progressively stronger assumptions on the nature of $\mu$, culminating in the elegant expression for $R(\mu || P)$ when $\mu$ is a solution of a martingale problem. We finish by comparing our work with that of \cite{budhiraja-dupuis-etal:12,fischer:12}.
\begin{corollary}
Suppose that $\mu\in\mathcal{M}(\T)$ and $R(\mu || P) < \infty$. Then for all $s$ and $\mu$ a.e. $x$, $\mu_{|[0,s],x}$ is absolutely continuous over $[s,T]$. For each $s\in [0,T]$ and $\mu$ a.e. $x\in\T$, for Lebesgue a.e. $t\geq s$
\begin{equation}
\dot{\mu}_{t|[0,s],x} = {}^*\mathcal{K}^\mu_{t|s,x}\mu_{t|[0,s],x}\label{eq:temp10}
\end{equation}
where for some $h^{\mu}_{s,x} \in L^{2}_{s,T,\nabla}(\mu_{|[0,s],x})$
\begin{equation}
\mathcal{K}^\mu_{t|s,x}f(y) = \mathcal{L}_t f(y) + \sum_{j=1}^d h^{\mu,j}_{s,x}(t,y)\frac{\partial f}{\partial y^j}(y).\label{eq:temp11}
\end{equation}
Furthermore,
\begin{equation}
R\left(\mu ||P\right) = R_0(\mu || P) +\frac{1}{2} \sup_{\sigma\in\mathbf J_*}\int_{0}^{T}E^{\mu(w)}E^{\mu_{t|[0,\underline{\sigma}(t)],w}(z)}\left[ \left| h^\mu_{\underline{\sigma}(t),w}(t,z)\right|_{t,z}^2\right] dt.
\label{eq:temp7}
\end{equation}
For any countable dense subset $\mathcal{Q}_{0,T}$ of $[0,T]$, there exists a sequence of partitions $\sigma^{(n)}\subseteq\sigma^{(n+1)}\subset\mathcal{Q}_{0,T}$ such that $|\sigma^{(n)}|\to 0$ as $n\to\infty$, and
\begin{equation}\label{eq:corollaryR}
R\left(\mu ||P\right) = R_0(\mu || P) + \frac{1}{2}\lim_{n\to\infty}\int_{0}^{T}E^{\mu(w)}E^{\mu_{t|[0,\underline{\sigma}^{(n)}(t)],w}(z)}\left[ \left| h^\mu_{\underline{\sigma}^{(n)}(t),w}(t,z)\right|_{t,z}^2\right] dt.
\end{equation}
\end{corollary}
\begin{remark}
It is not immediately clear that we may simplify \eqref{eq:temp7} further without additional assumptions. The reason for this is that we only know that \newline $E^{\mu_{|[0,\underline{\sigma}(t)],w}(z)}\left[ \left| h^\mu_{\underline{\sigma}(t),w}(t,z)\right|_{t,z}^2\right]$ is measurable as a function of $w$; it has not been proven that $h^\mu_{\underline{\sigma}(t),w}(t,z)$ itself is measurable as a function of $w$.
\end{remark}
\begin{proof}
Let $\sigma = \lbrace 0=t_1,\ldots,t_m=T\rbrace$ be an arbitrary partition. For all $j<m$, we find from Lemma \ref{lem:part1} that $R_{\mathcal{F}_{t_j,t_{j+1}}}\left( \mu_{| [0,t_j],x} || P_{|t_j,x_{t_j}}\right) < \infty$ for $\mu_{[0,t_j]}$ a.e. $x\in C([0,t_j];\mathbbm{R}^d)$. We thus find that, for all such $x$, $\mu_{|[0,t_j],x}$ is absolutely continuous on $[t_j,t_{j+1}]$ from Proposition \ref{theorem:DGbig}. We are then able to obtain \eqref{eq:temp10} and \eqref{eq:temp11} from Propositions \ref{prop:intermediate} and \ref{theorem:DGbig}.
From \eqref{eq:hmuratefunction}, \eqref{eq:bigresult} and \eqref{eq:temp10} we find that
\begin{equation}
R\left(\mu || P\right) = R_0(\mu || P) + \frac{1}{2}\sup_{\sigma\in\mathbf J_*}E^{\mu(x)}\int_{0}^{T}E^{\mu_{t|[0,\underline{\sigma}(t)],x}(z)}\left[\left| h^\mu_{\underline{\sigma}(t),x}(t,z)\right|_{t,z}^2\right] dt.
\label{eq:temp12}
\end{equation}
The above integral must be finite (since we are assuming that $R(\mu || P)$ is finite). Furthermore $E^{\mu_{t|[0,\underline{\sigma}(t)],x}(z)}\left[\left| h^\mu_{\underline{\sigma}(t),x}(t,z)\right|_{t,z}^2\right]$ is jointly measurable in $(t,x)$ as a consequence of the equivalent form \eqref{eq:DG3}. This allows us to apply Fubini's Theorem to obtain \eqref{eq:temp7}. The last statement, on the sequence of maximising partitions, follows from Theorem \ref{thm:DG}.
\end{proof}
\begin{corollary}\label{cor:last}
Suppose that $R(\mu|| P) < \infty$. Suppose that for all $s\in\mathcal{Q}_{0,T}$ (a countable dense subset of $[0,T]$), for $\mu$ a.e. $x$ and Lebesgue a.e. $t$, $h^{\mu}_{s,x}(t,x_t) = E^{\mu_{|[0,s],x;t,x_t}(w)}h^{\mu}(t,w)$ for some progressively-measurable random variable $h^{\mu}:[0,T]\times\T \to \mathbbm{R}^d$. Then
\begin{equation*}
R\left(\mu ||P\right) = R_0(\mu || P) +\frac{1}{2}\int_{0}^{T}E^{\mu(w)}\left[ \left| h^\mu(t,w)\right|_{t,w_t}^2\right] dt.
\end{equation*}
\end{corollary}
\begin{proof}
Let $\mathcal{G}^{s,x;t,y}$ be the sub $\sigma$-algebra consisting of all $B\in \mathcal{B}(\mathcal{T})$ such that for all $w\in B$, $w_r = x_r$ for all $r\leq s$ and $w_t = y$. Thus $h^{\mu}_{s,x}(t,y) = E^{\mu_{|[0,s],x;t,y}(w)}h^{\mu}(t,w) = E^{\mu}\left[h^{\mu}(t,\cdot)|\mathcal{G}^{s,x;t,y}\right]$. By \cite[Corollary 2.4]{revuz-yor:91}, since \newline$\cap_{s< t} \mathcal{G}^{s,x;t,x_t}= \mathcal{G}^{t,x;t,x_t}$ (restricting to $s\in\mathcal{Q}_{0,T}$), for $\mu$ a.e. $x$,
\begin{equation}\label{eq:revuz}
\lim_{s\to t^{-}}E^{\mu_{|[0,s],x;t,x_t}(w)}h^{\mu}(t,w)= h^{\mu}(t,x),
\end{equation}
where $s\in\mathcal{Q}_{0,T}$. By the properties of the regular conditional probability, we find from \eqref{eq:corollaryR} that
\begin{multline}
R\left(\mu ||P\right) = R_0(\mu || P) +\\ \frac{1}{2}\lim_{n\to\infty}\int_{0}^{T}E^{\mu(w)}\left[ \left| E^{\mu_{|[0,\underline{\sigma}^{(n)}(t)],w;t,w_t}(v)} \left[h^\mu(t,v) \right]\right|_{t,w_t}^2\right] dt.\label{eq:entropytmp4}
\end{multline}
By assumption, the above limit is finite. Thus by Fatou's Lemma, and using the properties of the regular conditional probability,
\begin{multline*}
R\left(\mu ||P\right) \geq R_0(\mu || P) + \\ \frac{1}{2}\int_{0}^{T}E^{\mu(w)}\left[ \linf{n}\left| E^{\mu_{|[0,\underline{\sigma}^{(n)}(t)],w;t,w_t}(v)} \left[h^\mu(t,v) \right]\right|_{t,w_t}^2\right] dt.
\end{multline*}
Through use of \eqref{eq:revuz},
\begin{equation*}
R\left(\mu ||P\right) \geq R_0(\mu || P) +\frac{1}{2}\int_{0}^{T}E^{\mu(w)}\left[ \left| h^\mu(t,w)\right|_{t,w_t}^2\right] dt.
\end{equation*}
Conversely, through an application of Jensen's Inequality to \eqref{eq:entropytmp4}
\begin{equation*}
R\left(\mu ||P\right) \leq R_0(\mu || P) + \frac{1}{2}\lim_{n\to\infty}\int_{0}^{T}E^{\mu(w)}\left[E^{\mu_{|[0,\underline{\sigma}^{(n)}(t)],w;t,w_t}(v)} \left[\left| h^\mu(t,v)\right|_{t,w_t}^2\right]\right] dt.
\end{equation*}
A property of the regular conditional probability yields
\begin{equation*}
R\left(\mu ||P\right) \leq R_0(\mu || P) +\frac{1}{2}\int_{0}^{T}E^{\mu(w)}\left[ \left| h^\mu(t,w)\right|_{t,w_t}^2\right] dt.
\end{equation*}
\end{proof}
\begin{remark}
The condition in the above corollary is satisfied when $\mu$ is a solution to a Martingale Problem - see Lemma \ref{Thm:final}.
\end{remark}
We may further simplify the expression in Theorem \ref{thm:ratefunctionfinal} when $\mu$ is a solution to the following Martingale Problem. Let $\lbrace c^{jk},e^j\rbrace$ be progressively-measurable functions $[0,T]\times \T \to\mathbbm{R}$. We suppose that $c^{jk} = c^{kj}$. For all $1\leq j,k \leq d$, $c^{jk}(t,x)$ and $e^j(t,x)$ are assumed to be bounded, uniformly for $t\in [0,T]$ and $x$ in any compact set $L$. For $f\in C_0^{2}(\mathbbm{R}^d)$ and $x\in\T$, let
\begin{equation*}
\mathcal{M}_u(f)(x) = \sum_{1\leq j,k\leq d}c^{jk}(u,x) \frac{\partial^2 f}{\partial y^j \partial y^k}(x_u) + \sum_{1\leq j\leq d}e^j(u,x)\frac{\partial f}{\partial y^j}(x_u).
\end{equation*}
We assume that for all such $f$, the following is a continuous martingale (relative to the canonical filtration) under $\mu$
\begin{equation}
f(X_t) - f(X_0) - \int^t_0\mathcal{M}_u f(X)du.\label{eq:mumartingale}
\end{equation}
The law governing $X_0$ is stipulated to be $\nu \in \mathcal{M}(\mathbbm{R}^d)$.
From now on we switch from our earlier convention and we consider $\mu_{|[0,s],x}$ to be a measure on $\T$ such that, for $\mu$ a.e. $x\in\T$, $\mu_{|[0,s],x}(A_{s,x}) = 1$ where $A_{s,x}$ is the set of all $X\in\T$ satisfying $X_t = x_t$ for all $0\leq t\leq s$. This is a property of a regular conditional probability (see \cite[Theorem 3.18]{karatzas-shreve:91}). Similarly, $\mu_{|s,x;t,y}$ is considered to be a measure on $\T$ such that for $\mu$ a.e. $x\in\T$, $\mu_{|s,x;t,y}(B_{s,x;t,y}) = 1$, where $B_{s,x;t,y}$ is the set of all $X\in A_{s,x}$ such that $X_t = y$. We may apply Fubini's Theorem (since $f$ is compactly supported and bounded) to \eqref{eq:mumartingale} to find that
\begin{equation*}
\langle \mu_{t|[0,s],x},f\rangle - f(x_s) = \int^t_s \Exp^{\mu_{|[0,s],x}}\left[\mathcal{M}_u f\right] du.
\end{equation*}
This ensures that, for $\mu$ a.e. $x$, $\mu_{|[0,s],x}$ is absolutely continuous over $[s,T]$, and that
\begin{equation}
\langle \dot{\mu}_{t|[0,s],x},f\rangle =\Exp^{\mu_{|[0,s],x}}\left[\mathcal{M}_t f\right].\label{eq:Mderivative}
\end{equation}
\begin{lemma}\label{Thm:final}
If $R(\mu || P) < \infty$ then for Lebesgue a.e. $t\in[0,T]$ and $\mu$ a.e. $x\in\T$,
\begin{equation}
a(t,x_t) = c(t,x).\label{eq:aequalc}
\end{equation}
If $R(\mu || P) < \infty$ then
\begin{equation}
R(\mu||P) = R(\nu || \mu_I) + \frac{1}{2}\Exp^{\mu(x)}\left[\int_0^T \left| b(s,x_s) -e(s,x)\right|_{s,x_s}^2ds\right].\label{eq:lastidentity}
\end{equation}
\end{lemma}
\begin{proof}
It follows from $R(\mu||P) < \infty$, \eqref{eq:temp10} and \eqref{eq:temp11} that for all $s$ and $\mu$ a.e. $x$, for Lebesgue a.e. $t\geq s$
\begin{equation}
\Exp^{\mu_{|s,x;t,x_t}}[c(t,\cdot)] = a(t,x_t).
\end{equation}
Let us take a countable dense subset $\mathcal{Q}_{0,T}$ of $[0,T]$. There thus exists a null set $\mathcal{N}\subseteq[0,T]$ such that for every $s\in\mathcal{Q}_{0,T}$, $\mu$ a.e. $x$ and every $t\notin \mathcal{N}$ the above equation holds. We may therefore conclude \eqref{eq:aequalc} using \cite[Corollary 2.4]{revuz-yor:91} and taking $s\to t^-$. From \eqref{eq:Mderivative}, we observe that for all $s\in [0,T]$ and $\mu$ a.e. $x$, for Lebesgue a.e. $t$
\begin{equation*}
h^\mu_{s,x}(t,x_t) = \Exp^{\mu_{|[0,s],x;t,x_t}}[e(t,\cdot)].
\end{equation*}
Equation \eqref{eq:lastidentity} thus follows from Corollary \ref{cor:last}.
\end{proof}
\subsection{Comparison of our Results with those of Fischer \textit{et al.} \cite{budhiraja-dupuis-etal:12,fischer:12}}
We have already noted in the introduction that one may infer a variational representation of the relative entropy from \cite{budhiraja-dupuis-etal:12,fischer:12} by assuming that the coefficients of the underlying stochastic process in these papers are independent of the empirical measure. The assumptions in \cite{fischer:12} on the underlying process $P$ are both more general and more restrictive than ours. They are more general insofar as the coefficients of the SDE may depend on the past history of the process and the diffusion coefficient is allowed to be degenerate. However, our assumptions are more general insofar as we only require $P$ to be the unique (in the sense of probability law) weak solution of the SDE, whereas \cite{fischer:12} requires $P$ to be the unique strong solution of the SDE. Of course, when both sets of assumptions are satisfied, the two expressions for the relative entropy coincide.
\section{Appendix}
\subsection{Proof of Theorem \ref{thm:DG}}\label{sect:DGproof}
The fact that, if $\alpha\subseteq\sigma$, then $R_\alpha(\mu||P) \leq R_\sigma(\mu||P)$, follows from Lemma \ref{lem:DonskVara}. We prove the first expression \eqref{eq:projlimit1} in the case $s=0$, $t=T$ (the proof of the second identity \eqref{eq:projlimit2} is analogous). We employ Large Deviations Theory to do this.
\begin{definition}
A sequence of probability laws $\Gamma^N$ on some topological space $\Omega$ equipped with its Borel $\sigma$-algebra is said to satisfy a strong Large Deviation Principle with rate function $I:\Omega\to\mathbbm{R}$ if for all open sets $O$,
\begin{equation*}
\linf{N}N^{-1}\log \Gamma^N(O) \geq - \inf_{x\in O} I(x)
\end{equation*}
and for all closed sets $F$
\begin{equation*}
\lsup{N}N^{-1}\log\Gamma^N(F) \leq -\inf_{x\in F} I(x).
\end{equation*}
If furthermore the set $\lbrace x: I(x) \leq \alpha\rbrace$ is compact for all $\alpha\geq 0$, we say that $I$ is a good rate function.
\end{definition}
We define the following empirical measures.
\begin{definition}
For $x\in \T^N, y\in \T^N_\sigma$, let
\begin{align*}
\hat{\mu}^N(x) = \frac{1}{N}\sum_{1\leq j\leq N} \delta_{x^j} \in \mathcal{M}(\T), \;\;\;\;
\hat{\mu}_\sigma^N(y) = \frac{1}{N}\sum_{1\leq j\leq N} \delta_{y^j} \in \mathcal{M}(\T_\sigma).
\end{align*}
\end{definition}
Clearly $\hat{\mu}_\sigma^N(x_\sigma) = \pi_\sigma(\hat{\mu}^N(x))$. The image law $P^{\otimes N}\circ (\hat{\mu}^N)^{-1}$ is denoted by $\Pi^N \in \mathcal{M}(\mathcal{M}(\T))$. Similarly, for $\sigma\in\mathbf J$, the image law $P^{\otimes N}_\sigma \circ (\hat{\mu}_\sigma^N)^{-1}$ on $\mathcal{M}(\T_\sigma)$ is denoted by $\Pi^N_\sigma \in \mathcal{M}(\mathcal{M}(\T_{\sigma}))$. Since $\T$ and $\T_\sigma$ are Polish spaces, we have by Sanov's Theorem (see \cite[Theorem 6.2.10]{dembo-zeitouni:97}) that
$\Pi^N$ satisfies a strong Large Deviation Principle with good rate function $R(\cdot ||P)$. Similarly, $\Pi^N_\sigma$ satisfies a strong Large Deviation Principle on $\mathcal{M}(\T_\sigma)$ with good rate function $R_{\mathcal{F}_\sigma}(\cdot ||P)$.
We now define the projective limit $\undt{\mathcal{M}}(\T)$. If $\alpha,\gamma\in\mathbf J$, $\alpha \subset \gamma$, then we may define the projection $\pi^\mathcal{M}_{\alpha\gamma}:\mathcal{M}(\T_\gamma) \to \mathcal{M}(\T_\alpha)$ as $\pi^{\mathcal{M}}_{\alpha\gamma}(\xi) := \xi\circ\pi_{\alpha\gamma}^{-1}$. An element of $\undt{\mathcal{M}}(\T)$ is then a member $\otimes_\sigma\zeta(\sigma)$ of the Cartesian product $\otimes_{\sigma\in\mathbf J}\mathcal{M}(\T_\sigma)$ satisfying the consistency condition $\pi^{\mathcal{M}}_{\alpha\gamma}(\zeta(\gamma)) = \zeta(\alpha)$ for all $\alpha\subset\gamma$. The topology on $\undt{\mathcal{M}}(\T)$ is the minimal topology necessary for the natural projection $\undt{\mathcal{M}}(\T) \to \mathcal{M}(\T_\alpha)$ to be continuous for all $\alpha\in\mathbf J$. That is, it is generated by open sets of the form
\begin{equation}
A_{\gamma,O} = \lbrace \otimes_\sigma \zeta(\sigma) \in \undt{\mathcal{M}}(\T): \zeta(\gamma) \in O \rbrace,\label{eq:generatingsets}
\end{equation}
for some $\gamma\in\mathbf J$ and open $O$ (with respect to the weak topology of $\mathcal{M}(\T_\gamma)$).
We may continuously embed $\mathcal{M}(\T)$ into the projective limit $\undt{\mathcal{M}}(\T)$ of its marginals, letting $\iota$ denote this embedding. That is, for any $\sigma\in\mathbf J$, $(\iota(\mu))(\sigma) = \mu_{\sigma}$. We note that $\iota$ is continuous because $\iota^{-1}(A_{\gamma,O})$ is open in $\mathcal{M}(\T)$, for all $A_{\gamma,O}$ of the form in \eqref{eq:generatingsets}. We equip $\undt{\mathcal{M}}(\T)$ with the Borel $\sigma$-algebra generated by this topology. The embedding $\iota$ is measurable with respect to this $\sigma$-algebra because the topology of $\mathcal{M}(\T)$ has a countable base. The embedding induces the image laws $(\Pi^N\circ\iota^{-1})$ on $\mathcal{M}(\undt{\mathcal{M}}(\T))$. For $\sigma\in\mathbf J$, it may be seen that $\Pi^N_\sigma = \Pi^N\circ\iota^{-1}\circ(\pi^\mathcal{M}_\sigma)^{-1} \in \mathcal{M}(\mathcal{M}(\T_\sigma))$, where $\pi^{\mathcal{M}}_\sigma( \otimes_\alpha \mu(\alpha)) = \mu(\sigma)$.
It follows from \cite[Thm 3.3]{dawson-gartner:87} that $\Pi^N\circ\iota^{-1}$ satisfies a Large Deviation Principle with rate function
$\sup_{\sigma\in\mathbf J} R_\sigma(\mu||P).$ However we note that $\iota$ is injective, because any two measures $\mu,\nu\in \mathcal{M}(\T)$ such that $\mu_\sigma = \nu_\sigma$ for all $\sigma\in\mathbf J$ must be equal. Furthermore $\iota$ is continuous. Because of Sanov's Theorem, $(\Pi^N)$ is exponentially tight (see \cite[Defn 1.2.17, Exercise 1.2.19]{dembo-zeitouni:97} for a definition of exponential tightness and a proof of this statement). These facts mean that we may apply the Inverse Contraction Principle \cite[Thm 4.2.4]{dembo-zeitouni:97} to infer that $\Pi^N$ satisfies an LDP with the rate function $\sup_{\sigma\in\mathbf J} R_{\sigma}(\mu||P)$. Since rate functions are unique \cite[Lemma 4.1.4]{dembo-zeitouni:97}, we obtain the first identity in conjunction with Sanov's Theorem. The second identity \eqref{eq:projlimit2} follows similarly.
We may repeat the argument above, while restricting to $\sigma\subset\mathcal{Q}_{s,t}$. We obtain the same conclusion because the $\sigma$-algebra generated by $(\mathcal{F}_{\sigma})_{\sigma\subset\mathcal{Q}_{s,t}}$ is the same as $\mathcal{F}_{s,t}$. The last identity follows from the fact that, if $\alpha\subseteq\sigma$, then $R_\alpha(\mu||P) \leq R_\sigma(\mu||P)$.
\bibliographystyle{plain}
\section{Introduction}
Let $\mu$ be a probability measure on $\mathbb{R}^d$. We are interested in minimizing
the interaction potential energy defined by
\begin{equation}\label{eq:contE}
E[\mu] = \frac12\int_{\mathbb{R}^d\times\mathbb{R}^d} W(x-y)
d\mu(x)d\mu(y)\,.
\end{equation}
Here, $W$ is a repulsive-attractive power-law potential
\begin{equation}\label{eq:kernelW}
W(z) = w(|z|) = \frac{|z|^\gamma}{\gamma}
-\frac{|z|^\alpha}{\alpha},\quad \gamma > \alpha \,,
\end{equation}
with the understanding that $\frac{|z|^\eta}{\eta}=\log |z|$ for $\eta=0$. Moreover, we define $W(0)=+\infty$ if $\alpha\leq 0$. This is
the simplest possible potential that is repulsive in the short range and attractive
in the long range. Depending on the signs of the exponents $\gamma$ and $\alpha$,
the behaviour of the potential is depicted in Figure~\ref{fig:wfun}. Since this
potential $W$ is bounded from below by $w(1)=\frac{1}{\gamma}-\frac{1}{\alpha}$,
the energy $E[\mu]$ is always well defined, possibly taking the value $+\infty$.
\begin{figure}[thp]
\begin{center}
\includegraphics[totalheight=0.175\textheight]{wfun}
\end{center}
\caption{Three different behaviours of $w(r) = \frac{r^\gamma}{\gamma}
-\frac{r^\alpha}{\alpha}$, $\gamma>\alpha$.}
\label{fig:wfun}
\end{figure}
The minimizers of the energy $E[\mu]$ are related to
stationary states for the aggregation equation $\rho_t = \nabla\cdot(\rho \nabla W*\rho)$
studied in~\cite{Carrillo-McCann-Villani03,
Carrillo-McCann-Villani06,BCL,CDFLS,BLL} with
repulsive-attractive potentials~\cite{FellnerRaoul1,FellnerRaoul2,FHK,FH,Raoul,BCLR,BCLR2}.
The set of local minimizers of the interaction energy, in both the discrete setting of
empirical measures (equal mass Dirac Deltas) and the continuum setting of general probability
measures, can exhibit a rich and complicated structure, as studied numerically in \cite{KSUB,BCLR2}.
In fact, it is shown in \cite{BCLR2} that the dimensionality of the support of local minimizers
of~\eqref{eq:contE} depends on the strength of the repulsion at zero of the potential $W$. In other
words, as the repulsion at zero gets stronger (i.e., $\alpha$ gets smaller), the support of local
minimizers gets larger in Hausdorff dimension.
From the viewpoint of applications, these models with nonlocal interactions are ubiquitous
in the literature. Convex attractive potentials appear in granular media~\cite{BenedettoCagliotiCarrilloPulvirenti98,LT,Carrillo-McCann-Villani03,Carrillo-McCann-Villani06}. More sophisticated potentials like~\eqref{eq:kernelW} are included to take into account short range repulsion and long range attraction in kinematic models of collective behaviour of animals, see \cite{Mogilner2,Dorsogna,Mogilner2003,KSUB,KCBFL} and the references therein. The minimization of the interaction energy in the discrete settings is of paramount importance for the structure of virus capsides \cite{Viral_Capside}, for self-assembly materials in chemical engineering design \cite{Wales1995,Wales2010,Rechtsman2010}, and for flock patterns in animal swarms \cite{BUKB,soccerball,CHM}.
Despite the efforts in understanding the qualitative behaviour of stationary solutions to the aggregation equation $\rho_t = \nabla\cdot(\rho \nabla W*\rho)$ and the structure of local minimizers of the interaction energy $E[\mu]$, there are no general results addressing the global minimization of $E[\mu]$ in the natural framework of probability measures.
See \cite{CFT} for a recent analysis of this question in the more restricted set of bounded or binary densities. Here, we will first try to find solutions in the restricted set of atomic measures.
The interest of understanding the global discrete minimizers of the interaction energy is not purely mathematical. The discrete global minimizers will give the spatial profile of typical flocking patterns obtained in simplified models for social interaction between individuals as in \cite{Albietal,KSUB} based on the famous 3-zones models, see for instance \cite{Huth:Wissel,lukeman}. Moreover, due to the recent nonlinear stability results in \cite{CHM}, we know now that the stability properties of the discrete global minimizer as stationary solution of the first order ODE model
$$
\dot{x}_i=-\sum_{j\neq i}^n \nabla
W\left(x_i-x_j\right), \quad i=1,\dots, n\,,
$$
lead to stability properties of the flock profiles for the second order model in swarming introduced in \cite{Dorsogna} or with additional alignment mechanisms as the Cucker-Smale interaction \cite{CS1,CS2}, see also \cite{Albietal} and the discussion therein.
Our objective is to show the existence of global minimizers of the interaction energy defined
on probability measures under some conditions on the exponents. Our approach starts with
the discrete setting by showing qualitative properties about the global minimizers in the
set of equal mass Dirac Deltas. These discrete approximations are used extensively in the literature
to show various properties of the minimizers~\cite{Dorsogna,FHK,KSUB,BUKB}, but the existence
as well as the convergence of these discrete minimizers is not justified in general. In a certain range of exponents,
we will prove that the diameter of the support of discrete minimizers does not depend on the
number of Dirac Deltas. This result, together with standard compactness arguments, yields
the desired global minimizers among probability measures.
In fact, our strategy to show the confinement of discrete minimizers is in the same spirit as
the proof of confinement of solutions of the aggregation equation in~\cite{CDFLS2,BCY}. In
our case, the ideas of the proof in Section 2 will be based on convexity-type arguments
in the range of exponents $\gamma>\alpha\geq 1$ to show the uniform bound in the diameter of
global minimizers in the discrete setting. Section 3 will be devoted to more refined results in
one dimension. We show that for very repulsive potentials, the bounds on the diameter is not uniform
in the number of Dirac Deltas, complemented by numerical simulations; in the range of exponents
$\gamma>1>\alpha$, the minimizers turn out to be unique (up to translation), analogous to the simplified
displacement convexity in 1D; in the special case $\gamma=2$ and $\alpha=1$, we can find the
minimizers and show the convergence to the continuous minimizer explicitly.
\section{Existence of Global minimizers}
We will first consider the discrete setting where $\mu$ is a convex combination of Dirac Deltas, i.e.,
\[
\mu = \frac{1}{n} \sum_{i=1}^n \delta_{x_i},\quad x_i \in \mathbb{R}^d.
\]
Setting
\begin{equation}\label{eq:sumEng}
E_n(x_1,\cdots,x_n)=\sum_{i\neq j}^n \left(\frac{|x_i-x_j|^\gamma}{\gamma}-\frac{|x_i-x_j|^\alpha}{\alpha}\right)\,,
\end{equation}
for such a $\mu$ one has
$
E[\mu] = \frac{1}{2n^2} E_n(x_1,\cdots,x_n)\,.
$
In the definition of the energy, we can include the self-interaction for non singular cases, $\alpha>0$, since both definitions coincide. Fixing $W(0)=+\infty$ for singular kernels makes $W$ upper semi-continuous, and the self-interaction must be excluded to have finite energy configurations.
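As a concrete illustration of the discrete energy \eqref{eq:sumEng}, the following Python sketch (illustrative only and not part of the paper; the exponents $\gamma=2$, $\alpha=1$ are sample values) evaluates $E_n$ over ordered pairs $i\neq j$, and checks that the aligned configuration with spacing $1/n$, which is used below in the case $0<\alpha<\gamma$, has negative energy.

```python
import math

# Minimal sketch (not part of the paper; gamma = 2, alpha = 1 are sample
# exponents): the discrete energy E_n of \eqref{eq:sumEng}, summed over
# ordered pairs i != j.
def w(r, gamma, alpha):
    return r**gamma / gamma - r**alpha / alpha

def E_n(points, gamma, alpha):
    total = 0.0
    for i, xi in enumerate(points):
        for j, xj in enumerate(points):
            if i != j:
                total += w(math.dist(xi, xj), gamma, alpha)
    return total

# Two particles at the optimal distance r = 1: E_2 = 2 w(1) = 2(1/gamma - 1/alpha).
assert math.isclose(E_n([(0.0,), (1.0,)], 2.0, 1.0), -1.0)

# The aligned configuration with spacing 1/n, used below for 0 < alpha < gamma,
# has negative energy, so that I_n < 0.
n = 10
print(E_n([(i / n,) for i in range(n)], 2.0, 1.0) < 0)  # True
```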
Let us remark that due to translational invariance of the interaction energy, minimizers of the
interaction energy $E[\mu]$ can only be expected up to translations. Moreover, when the
potential is radially symmetric, as in our case, then any isometry in $\mathbb{R}^d$ will also
leave invariant the interaction energy. These invariances are also inherited by the discrete
counterpart $E_n(x_1,\cdots,x_n)$. We will first consider the minimizers of $E_n(x)$ among
all $x=(x_1,\cdots,x_n)\in (\mathbb{R}^d)^n$, and then the convergence to the global minimizers of $E[\mu]$
as $n$ goes to infinity.
\subsection{Existence of minimizer: Discrete setting}
Let us consider for $\alpha <\gamma$, the derivative of the radial potential
\[
w'(r) = r^{\gamma-1}-r^{\alpha-1} =
r^{\alpha-1}\left(r^{\gamma-\alpha}-1\right)\,,
\]
which obviously vanishes for $r=1$, and also for $r=0$ when $\alpha>1$. We conclude from the
sign of the derivative that $w(r)$ always attains its global minimum at $r=1$. Depending on
the values of $\alpha<\gamma$, there are three types of behaviour for $w$, shown in
Figure~\ref{fig:wfun}. In all three cases, $E_n$ is bounded from below since
\[
E_n(x) \geq n^2\left(\frac{1}{\gamma}-\frac{1}{\alpha}\right)\,,
\]
with the understanding that $\frac{|x|^\eta}{\eta}=\log |x|$ for $\eta=0$. We set
\begin{equation}\label{eq:Indefn}
I_n = \inf_{x \in (\mathbb{R}^d)^n} E_n(x).
\end{equation}
Using the translational invariance of $E_n(x_1,\cdots,x_n)$, we can assume without
loss of generality that $x_1=0$, which we do throughout this subsection. First we have the following lemma
showing that $I_n$ is achieved, which can be proved by discussing different ranges of the
exponents $\gamma$ and $\alpha$.
\begin{lem} For any finite $n\geq 2$, the minimum value $I_n$ is attained by some
discrete minimizer in $(\mathbb{R}^d)^n$.
\end{lem}
\vskip 6pt
\underline{The case $0<\alpha<\gamma$.} We claim that
\begin{equation}\label{eq:Case1Bound}
n^2\left( \frac{1}{\gamma}-\frac{1}{\alpha} \right)
\leq I_n <0.
\end{equation}
Indeed consider $x = (x_1,\cdots,x_n) \in (\mathbb{R}^d)^n$ such
that $x_1,\cdots,x_n$ are aligned and $|x_{i}-x_{i+1}|=\frac{1}{n}$.
Then for any $i,j$ one has
$
0 < |x_i-x_j|\leq 1
$
and $w(|x_i-x_j|)<0$. Therefore~\eqref{eq:Case1Bound} follows.
Let us show that the infimum $I_n$ is achieved.
Let $x\in(\mathbb{R}^d)^n$. Set $R = \max_{i,j} |x_i-x_j|$. A
minimizer is sought among the points such that $E_n(x)<0$ and one
has for such a point
\[
\frac{R^\gamma}{\gamma}
\leq \sum_{i,j} \frac{|x_i-x_j|^\gamma}{\gamma}
< \sum_{i,j} \frac{|x_i-x_j|^\alpha}{\alpha}
\leq n^2 \frac{R^\alpha}{\alpha}.
\]
This implies the upper bound
\begin{equation}\label{eq:case1R}
R \leq \left(\frac{n^2\gamma}{\alpha}\right)^{\frac{1}{\gamma-\alpha}}.
\end{equation}
Thus, since $x_1=0$, all the $x_i$'s have to be in the ball
of center $0$ and radius
$\left(n^2\gamma/\alpha\right)^{\frac{1}{\gamma-\alpha}}$, i.e.,
$x$ has to be in a compact set of $(\mathbb{R}^d)^n$. Since $E_n(x)$ is
continuous, the infimum $I_n$ is achieved. Note that the bound on the radius, where
all Dirac Deltas are contained, depends a priori on $n$.
\vskip 6pt
\underline{The case $\alpha \leq 0 \leq \gamma$ and $\alpha\neq\gamma$.} In this
case $w(0^+)=+\infty$ and $w(\infty)=+\infty$. We minimize among all $x$ such that
$x_i\neq x_j$ for $i\neq j$. Note that $w$ and $I_n$ are both positive. Since $w(r)\to +\infty$
as $r\to 0$ or $r\to \infty$, there exists $a_n,b_n>0$ such that
\begin{equation*}
a_n < 1<b_n,\quad w(a_n)=w(b_n)>I_n.
\end{equation*}
Let $x\in (\mathbb{R}^d)^n$. If for a couple $i,j$, one has
\begin{equation}\label{eq:Case2bound}
|x_i-x_j| <a_n \quad \mbox{or}\quad |x_i-x_j|>b_n
\end{equation}
then one has
$
E_n(x)>I_n.
$
Thus the infimum~\eqref{eq:Indefn} will not be achieved among the points $x$
satisfying~\eqref{eq:Case2bound} but among those in
\[
\{ x\in (\mathbb{R}^d)^n\mid a_n \leq |x_i-x_j|\leq b_n \ \ \forall i\neq j \}.
\]
Since the set above is compact, being closed and contained in $(B(0,b_n))^n$
due to $x_1=0$, the infimum $I_n$ is achieved.
\vskip 6pt
\underline{The case $\alpha<\gamma<0$.} In this case $I_n <0$. Indeed it is
enough to choose
\[
|x_i-x_j|>1\qquad \forall i,j
\]
to get $E_n(x)<0$. Since $w(0^+)=+\infty$, we
minimize $E_n$ among the points $x$ such that $x_i\neq x_j$, $i\neq j$.
Thus the summation is on $n^2-n$ couples $(i,j)$. Denote by
$
x^k = (x_1^k,\cdots,x_n^k) \in (\mathbb{R}^d)^n
$
a minimizing sequence of $E_n$. Since $w(r)\to +\infty$
as $r\to 0$, there exists a number $a_n<1$ such that
\[
w(a_n) > n(n-1)\left(\frac{1}{\alpha}-\frac{1}{\gamma}\right)>0.
\]
If for a couple $(i,j)$ one has $|x_i^k-x_j^k|<a_n$ then
\[
E_n(x^k) > w(a_n)
+ ({n^2-n-1})\left(\frac{1}{\gamma}-\frac{1}{\alpha}\right)
>\left(\frac{1}{\alpha}-\frac{1}{\gamma}\right)>0
\]
and $x^k$ cannot be a minimizing sequence. So without loss of generality, we may assume that
$|x_i^k-x_j^k|\geq a_n,\, \forall i,j$. Let us denote by $y_1,\cdots,y_d$ the coordinates
in $\mathbb{R}^d$. Without loss of generality, we can assume by relabelling and isometry invariance
that for every $k$ one has
\[
x_1^k =0,\quad x_{i}^k \in \{ y=(y_1,\cdots,y_d)\mid y_d\geq 0\}\,.
\]
Suppose that
$
x_i^k = (x^k_{i,1},\cdots,x^k_{i,d})
$
and the numbering of the points is done in such a way that
$
x_{i,d}^k\leq x_{i+1,d}^k.
$
We next claim that one can assume that $x_{i+1,d}^k-x_{i,d}^k \leq 1$, $\forall i$.
Indeed if not, let $i_0$ be the first index such that
\[
x_{i_0+1,d}^k-x_{i_0,d}^k >1.
\]
Let us leave the first $x_i^k$ until $i_0$ unchanged and replace for
$i>i_0$ , $x_i^k$ by
\[
\tilde{x}_i^k = x_i^k - \{x_{i_0+1,d}^k-x_{i_0,d}^k-1\}e_d
\]
where $e_d$ is the $d$-th vector of the canonical basis of
$\mathbb{R}^d$,
i.e., we shift $x_i^k$ down in the direction $e_d$ by
$x_{i_0+1,d}^k-x_{i_0,d}^k-1$. Denote by $\tilde{x}_i^k$ the new
sequence obtained in this manner. One has
\begin{align*}
w(|\tilde{x}_i^k-\tilde{x}_j^k|)
&< w(|x_i^k-x_j^k|),\quad \forall i\leq i_0<j, \\
w(|\tilde{x}_i^k-\tilde{x}_j^k|)
&= w(|x_i^k-x_j^k|),\quad \forall i_0< i<j, \mbox{or } i<j\leq i_0
\end{align*}
and thus, iterating this procedure over every index where the gap exceeds $1$, one has obtained a
minimizing sequence with
\[
\tilde{x}_{i+1,d}^k - \tilde{x}_{i,d}^k\leq 1,\qquad \forall i,
\]
i.e., $0\leq \tilde{x}_{i,d}^k\leq n-1$, for all $i$.
Repeating this process in the other directions one can assume without loss
of generality that
\begin{equation}\label{eq:case3R}
x_i^k \in [0,n-1]^d
\end{equation}
for all $k$, i.e., that $x^k$ is in a compact subset of $(\mathbb{R}^d)^n$,
and extracting a convergent subsequence, we obtain our desired minimizer in $[0,n-1]^d$.
\subsection{Existence of minimizer: General measures}
The estimates~\eqref{eq:case1R} and~\eqref{eq:case3R} give estimates on the support
of a minimizer of~\eqref{eq:Indefn}. However, these estimates depend on $n$.
We will show now that the diameter of any minimizer of~\eqref{eq:Indefn} can sometimes
be bounded independently of $n$.
\begin{thm}\label{thm:bndxn}
Suppose that $1\leq \alpha < \gamma$. Then the diameter of any global minimizer
of $E_n$ achieving the infimum in \eqref{eq:Indefn} is bounded independently of $n$.
\end{thm}
\begin{proof}
At a point $x=(x_1,\cdots,x_n) \in (\mathbb{R}^d)^n$ where the minimum of $E_n$ is achieved one has
\[
0=\nabla_{x_k}E_n(x_1,\cdots,x_n)=\nabla_{x_k} \sum_{j\neq k}
\left(\frac{|x_{k}-x_j|^\gamma}{\gamma}-
\frac{|x_k-x_j|^\alpha}{\alpha}\right),\quad k=1,2,\cdots,n.
\]
Since $\nabla_x \big(|x|^\eta/\eta\big) = |x|^{\eta-2}x$, we obtain
\begin{equation}\label{jjj}
\sum_{j\neq k} |x_k-x_j|^{\gamma-2}(x_k-x_j)=\sum_{j\neq k}
|x_k-x_j|^{\alpha-2}(x_k-x_j),\quad k=1,\cdots,n.
\end{equation}
Suppose the points are labelled in such a way that
\[
|x_n-x_1|=\max_{i,j} |x_i-x_j|.
\]
Then for $k=1$ and $n$ in~\eqref{jjj}, we get
\begin{align*}
\sum_{j\neq 1} |x_1-x_j|^{\gamma-2}(x_1-x_j)&=\sum_{j\neq 1}
|x_1-x_j|^{\alpha-2}(x_1-x_j),\\
\sum_{j\neq n} |x_n-x_j|^{\gamma-2}(x_n-x_j)&=\sum_{j\neq n}
|x_n-x_j|^{\alpha-2}(x_n-x_j).
\end{align*}
By subtraction, this leads to
\begin{multline*}
\sum_{j\neq 1,n} \Big( |x_n-x_j|^{\gamma-2}(x_n-x_j)-
|x_1-x_j|^{\gamma-2}(x_1-x_j)\Big)
+2|x_n-x_1|^{\gamma-2}(x_n-x_1) \\
= \sum_{j\neq n}|x_n-x_j|^{\alpha-2}(x_n-x_j)
-\sum_{j\neq 1}|x_1-x_j|^{\alpha-2}(x_1-x_j).
\end{multline*}
Taking the scalar product of both sides with $x_n-x_1$ we obtain
\begin{multline*}
\sum_{j\neq 1,n} \Big( |x_n-x_j|^{\gamma-2}(x_n-x_j)-
|x_1-x_j|^{\gamma-2}(x_1-x_j),x_n-x_1\Big)
+2|x_n-x_1|^{\gamma} \\
= \sum_{j\neq n}|x_n-x_j|^{\alpha-2}(x_n-x_j,x_n-x_1)
-\sum_{j\neq 1}|x_1-x_j|^{\alpha-2}(x_1-x_j,x_n-x_1).
\end{multline*}
For $\gamma \geq 2$, there exists a constant $C_\gamma>0$ such that (see \cite{MC})
\begin{equation}\label{eq:gammaineq1}
\big( |\eta|^{\gamma-2}\eta-|\xi|^{\gamma-2}\xi,\eta-\xi\big)
\geq C_\gamma |\eta-\xi|^\gamma,\qquad
\forall \eta,\xi\in \mathbb{R}^d.
\end{equation}
Note that this is nothing else than the modulus of convexity (in the sense of~\cite{Carrillo-McCann-Villani06}) of the potential $|x|^\gamma$.
Thus estimating from above, we derive
\begin{align*}
\big( (n-2)C_\gamma+2\big)|x_n-x_1|^\gamma
&\leq \sum_{j\neq n} |x_n-x_j|^{\alpha-1}|x_n-x_1|+
\sum_{j\neq 1}|x_1-x_j|^{\alpha-1}|x_n-x_1| \cr
&\leq 2(n-1)|x_n-x_1|^\alpha.
\end{align*}
Thus if $a\wedge b$ denotes the minimum of two numbers $a$ and $b$, we
derive
\[
\big( C_\gamma\wedge 1\big) n |x_n-x_1|^\gamma
\leq 2(n-1)|x_n-x_1|^\alpha.
\]
That is
\[
|x_n-x_1| \leq \left(\frac{2}{C_\gamma\wedge
1}\frac{n-1}{n}\right)^{\frac{1}{\gamma-\alpha}}
\leq \left(\frac{2}{C_\gamma\wedge 1}\right)^{\frac{1}{\gamma-\alpha}},
\]
which proves the theorem in the case $\gamma \geq 2$. In the case where
$1<\gamma<2$, one can replace~\eqref{eq:gammaineq1} with
\begin{equation*}
\big( |\eta|^{\gamma-2}\eta-|\xi|^{\gamma-2}\xi,\eta-\xi\big)
\geq c_\gamma\big\{|\eta|+|\xi|\big\}^{\gamma-2} |\eta-\xi|^2,\qquad
\forall \eta,\xi\in \mathbb{R}^d,
\end{equation*}
for some constant $c_\gamma>0$ (see \cite{MC}). We get, arguing as above,
\[
\sum_{j\neq 1,n} c_\gamma\big\{|x_n-x_j|+|x_1-x_j|\big\}^{\gamma-2}|x_n-x_1|^2
+2|x_n-x_1|^\gamma
\leq 2(n-1)|x_n-x_1|^\alpha.
\]
Now, since $\gamma-2<0$, $|x_n-x_j|\leq|x_n-x_1|$ and $|x_1-x_j|
\leq|x_n-x_1|$, we derive that
\[
\big( (n-2)c_\gamma 2^{\gamma-2}+2\big)|x_n-x_1|^\gamma
\leq 2(n-1)|x_n-x_1|^\alpha.
\]
We thus obtain the bound
\[
|x_n-x_1| < \left\{ \frac{2}{(2^{\gamma-2}c_\gamma)\wedge
1}\right\}^{\frac{1}{\gamma-\alpha}},
\]
which completes the proof of the theorem.
\end{proof}
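The uniform bound can be checked numerically in a simple special case. The sketch below (illustrative only, not from the paper's proofs) takes $d=1$, $\gamma=2$, $\alpha=1$ and verifies that the equally spaced candidate configuration $x_k=(2k-n-1)/n$, which has zero center of mass, satisfies the critical-point equations \eqref{jjj}, with a diameter $2(n-1)/n$ below the $n$-independent bound of the theorem.

```python
# Illustrative check (not from the paper): d = 1, gamma = 2, alpha = 1.
# The equally spaced candidate x_k = (2k - n - 1)/n has zero center of mass,
# satisfies the critical-point equations \eqref{jjj}, and its diameter
# 2(n-1)/n stays below the n-independent bound of the theorem.

def gradient_k(x, k):
    # Partial derivative of E_n with respect to x_k for w(r) = r^2/2 - r;
    # the sum over ordered pairs contributes a factor 2.
    g = 0.0
    for j, xj in enumerate(x):
        if j != k:
            d = x[k] - xj
            g += 2.0 * (d - (1.0 if d > 0 else -1.0))
    return g

for n in (10, 50):
    x = [(2 * k - n - 1) / n for k in range(1, n + 1)]
    assert all(abs(gradient_k(x, k)) < 1e-9 for k in range(n))
    print(n, x[-1] - x[0])  # diameter 2(n-1)/n < 2, uniformly in n
```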
As a direct consequence of this bound independent of the number of Dirac Deltas,
we can prove existence of global minimizers in the continuous setting.
\begin{thm} Suppose that $1\leq \alpha <\gamma$. Then global minimizers associated to
the global minimum of $E_n(x)$ with zero center of mass converge, up to a subsequence, as
$n\to\infty$ toward a global minimizer of the interaction energy $E[\mu]$ in \eqref{eq:contE}
among all probability measures with bounded moments of order $\gamma$.
\end{thm}
\begin{proof}
Let $x^n\in(\mathbb{R}^d)^n$ be a minimizer of~\eqref{eq:sumEng} and
\[
\mu^n = \frac{1}{n}\sum_j^n \delta_{x^n_j}
\]
be the associated discrete measure. From Theorem~\ref{thm:bndxn}, the
radii of the supports of the measures $\mu^n$ are bounded uniformly in $n$ by some $R$, provided
that the center of mass $\int_{\mathbb{R}^d} x\,d\mu^n$ is normalized to be the origin.
By Prokhorov's theorem~\cite{Prokhorov}, $\{\mu^n\}$ is relatively compact in the weak-$*$ topology of measures and also in the metric space induced by the $\gamma$-Wasserstein distance $d_\gamma$ between probability measures (see \cite{GS,V} for the definition and basic properties). Then there is a measure $\mu^*$ supported on $B(0,R)$ such that, up to a subsequence (not relabelled),
\[
\mu^n \rightharpoonup \mu^* \qquad \mbox{and} \qquad d_\gamma(\mu^n,\mu^*)\to 0
\]
as $n$ goes to infinity. Note that the notion of convergence of a sequence of probability measures
in $d_\gamma$ is equivalent to weak convergence of the measures plus convergence of
the moments of order $\gamma$, see \cite[Chapter 9]{V}. Let $\nu$ be any probability measure
on $\mathbb{R}^d$ with bounded moment of order $\gamma$, then $E[\nu]<\infty$. Moreover, there
is a sequence of discrete measures $\nu^n$
of the form
\[
\nu^n = \frac{1}{n} \sum_{j=1}^n \delta_{y_j^n},
\]
such that $d_\gamma(\nu^n,\nu)\to 0$, and thus $\nu^n \rightharpoonup \nu$, see \cite{GS,V}.
By the minimality of $x^n$ for $E_n$ in \eqref{eq:Indefn}, we deduce
\[
E[\nu^n]= \frac12\int_{\mathbb{R}^d\times \mathbb{R}^d} W(x-y)d\nu^n(x)d\nu^n(y) \geq
\frac12\int_{\mathbb{R}^d\times \mathbb{R}^d} W(x-y)d\mu^n(x)d\mu^n(y) =
E[\mu^n].
\]
On the other hand, since
\[
d_\gamma(\mu^n \otimes \mu^n , \mu^*\otimes \mu^*)\to 0, \qquad
d_\gamma(\nu^n \otimes \nu^n , \nu\otimes \nu)\to 0\,,
\]
as $n\to\infty$, and the function $W(x-y) = |x-y|^\gamma/\gamma-|x-y|^\alpha/\alpha$ is
Lipschitz continuous on bounded sets of $\mathbb{R}^d\times \mathbb{R}^d$ with growth of order $\gamma$ at infinity, then
\begin{align*}
E[\mu^*] &= \frac12\int_{\mathbb{R}^d\times \mathbb{R}^d} W(x-y)d\mu^*(x)d\mu^*(y) = \lim_{n\to\infty}
\frac12\int_{\mathbb{R}^d\times \mathbb{R}^d} W(x-y)d\mu^n(x)d\mu^n(y) \cr
&\leq \lim_{n\to\infty} \frac12\int_{\mathbb{R}^d\times \mathbb{R}^d} W(x-y)d\nu^n(x)d\nu^n(y) = E[\nu].
\end{align*}
Therefore, $\mu^*$ must be a global minimizer of $E[\mu]$ in the set of probability measures with bounded moments of order $\gamma$.
\end{proof}
\begin{rem}
Global minimizers of the energy in the continuum setting might be a convex combination of a finite number of Dirac Deltas. Numerical experiments suggest that it is always the case in the range $2<\alpha<\gamma$. It is an open problem in this range to show that global minimizers in the discrete case do not change (except symmetries) for $n$ large enough and coincide with global minimizers of the continuum setting.
\end{rem}
\section{Further Remarks in one dimension}
In this section, we concentrate on the one dimensional case ($d=1$) for more refined properties.
\subsection{Confinement of Discrete Global Minimizers}
We check first how sharp are the conditions on the exponents of the potential to get the confinement of global discrete minimizers independently of $n$.
In fact, when the potential is very repulsive at the origin, we can show that a uniform bound in $n$ of the diameter of global minimizers of the discrete setting does not hold. If $x$ is a minimizer of $E_n(x)$, we will always
assume that the labelling of the $x_i$'s is in increasing order:
$
x_1\leq x_2 \cdots \leq x_n.
$
\begin{thm}
Suppose $\alpha<\gamma<0$ and $\alpha<-2$. If $x$ is a minimizer of $E_n$, then
there exists a constant $C_{\alpha,\gamma}$ such that for $n$ large
enough
\[
x_n-x_1 \geq C_{\alpha,\gamma}\, n^{1+\frac{2}{\alpha}}
\]
holds.
\end{thm}
\begin{proof} Set $C=\frac{1}{\alpha}-\frac{1}{\gamma}>0$. Denote by $a_n$
the unique element of $(0,1)$ such that
\begin{equation}\label{eq:wacn}
w(a_n)=Cn^2.
\end{equation}
If $x$ is a
minimizer of $E_n$, we claim that
\begin{equation}\label{eq:1ddist}
x_{i+1}-x_i \geq a_n,\qquad \forall i=1,\dots,n.
\end{equation}
Indeed otherwise,
\[
E_n(x) \geq w(x_{i+1}-x_i)-(n^2-n-1)C >
Cn^2 - (n^2-n-1)C=(n+1)C>0
\]
whereas a minimizer satisfies $E_n(x)=I_n<0$ in this case.
From~\eqref{eq:wacn} we derive
\[
Cn^2 = \frac{ {a_n}^\gamma}{\gamma}-
\frac{ {a_n}^\alpha}{\alpha}=
-\frac{ {a_n}^\alpha}{\alpha}\left(
1-\frac{\alpha}{\gamma}{a_n}^{\gamma-\alpha}
\right)\geq -\frac{ {a_n}^\alpha}{2\alpha},
\]
for $n$ large enough (recall that $a_n\to 0$ when $n\to\infty$). It follows
that
\[
a_n \geq \big(-2\alpha C n^2\big)^{\frac{1}{\alpha}}
=\big(-2\alpha C\big)^{\frac{1}{\alpha}}n^{\frac{2}{\alpha}}.
\]
Combining this with~\eqref{eq:1ddist} we get
\[
x_n-x_1 \geq (n-1)a_n \geq \frac{n}{2}\big(-2\alpha
C\big)^{\frac{1}{\alpha}} n^{\frac{2}{\alpha}}
=\frac{(-2\alpha C)^{\frac{1}{\alpha}}}{2}n^{1+\frac{2}{\alpha}}
\]
for $n$ large enough, proving the desired estimate with $C_{\alpha,\gamma}
=(-2\alpha C)^{\frac{1}{\alpha}}/2$.
\end{proof}
\begin{figure}[htp]
\begin{center}
\subfloat[$\alpha=-2.5$]{\includegraphics[keepaspectratio=true,
width=.43\textwidth]{alpham25}}
$~~$
\subfloat[$\alpha=-1.5$]{\includegraphics[keepaspectratio=true,
width=.43\textwidth]{alpham15}}
\end{center}
\caption{The dependence of the diameter $R = \max_{i,j}|x_i-x_j|$ on the number of
particles $n$.}
\label{fig:radius}
\end{figure}
This property for the minimizers of this very repulsive case is similar to H-stability in
statistical mechanics~\cite{HStable}, where the minimal distance between
two particles is expected to be constant when $n$ is large, and crystallization occurs.
This also suggests that the lower bound $O(n^{1+\frac{2}{\alpha}})$ is not sharp, which is verified in Figure~\ref{fig:radius}.
In fact, numerical experiments in \cite{BCLR2,BCY} suggest that confinement happens for $-1<\alpha <1$. It is an open problem to get a uniform bound in the support of the discrete minimizers as in Section 2 in this range. In the range $\alpha \leq -1$, our numerical simulations suggest that spreading of the support happens for all $\gamma$ with a decreasing spreading rate as $\gamma$ increases.
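The quantities appearing in the proof above can be computed directly. The sketch below (illustrative only; the sample exponents $\gamma=-0.5$, $\alpha=-2.5$ lie in the regime $\alpha<-2$, $\alpha<\gamma<0$ of the theorem) solves $w(a_n)=Cn^2$ by bisection on $(0,1)$, where $w$ is decreasing, and evaluates the resulting lower bound $(n-1)a_n$ on the diameter, which grows with $n$.

```python
# Sketch of the proof's quantities (sample exponents gamma = -0.5,
# alpha = -2.5, i.e. alpha < -2 and alpha < gamma < 0, chosen for
# illustration): a_n in (0,1) solves w(a_n) = C n^2, and (n-1) a_n
# is the resulting lower bound on the diameter x_n - x_1.
GAMMA, ALPHA = -0.5, -2.5
C = 1.0 / ALPHA - 1.0 / GAMMA          # C > 0 since alpha < gamma < 0

def w(r):
    return r**GAMMA / GAMMA - r**ALPHA / ALPHA

def a_n(n):
    target = C * n * n
    lo, hi = 1e-12, 1.0                 # w is decreasing on (0, 1)
    for _ in range(200):                # plain bisection
        mid = 0.5 * (lo + hi)
        if w(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in (10, 100, 1000):
    print(n, (n - 1) * a_n(n))          # lower bound grows with n
```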
\subsection{Uniqueness of global minimizers}
We turn now to the issue of uniqueness (up to isometry) of global discrete and
continuum minimizers. If $x$ is a minimizer of $E_n(x)$, we can always assume,
at the expense of a translation, that the center of mass is zero, that
is, $\frac{x_1+\cdots+x_n}{n}=0$. Let us recall that
\[
E_n(x) = \sum_{i\neq j}^n w(|x_i-x_j|)
\]
with the convention that $x_i\neq x_j$ for $i\neq j$ (recall that $\alpha<0$, so that $w(r)\to+\infty$ as $r\to 0^+$).
\begin{lem}\label{lem:unique1d}
Suppose that $\alpha \leq 1$, $\gamma\geq 1$, and $\alpha < \gamma$. Let $x,y$ be two points of
$\mathbb{R}^n$ such that
\begin{subequations}\label{eq:1dminxy}
\begin{equation}\label{eq:1dminx}
\frac{x_1+\cdots+x_n}{n}=0,\ x_1\leq x_2 \leq \cdots \leq x_n,
\end{equation}
\begin{equation}\label{eq:1dminy}
\frac{y_1+\cdots+y_n}{n}=0,\ y_1\leq y_2 \leq \cdots \leq y_n.
\end{equation}
\end{subequations}
Then
\[
E_n\left(\frac{x+y}{2}\right)< \frac{E_n(x)+E_n(y)}{2}
\]
unless $x=y$.
\end{lem}
\begin{proof}
One has $w''(r)=(\gamma-1)r^{\gamma-2}-(\alpha-1)r^{\alpha-2}>0$ for all $r>0$:
since $\gamma\geq 1$, $\alpha\leq 1$ and $\alpha<\gamma$, both terms are nonnegative and
at least one of them is positive. Thus $w$ is strictly convex on $(0,\infty)$.
Moreover, since $x$ and $y$ are both ordered, $x_i-x_j$ and $y_i-y_j$ have the same sign
for every pair $i\neq j$, so that
$\big|\frac{x_i-x_j}{2}+\frac{y_i-y_j}{2}\big|=\frac{|x_i-x_j|+|y_i-y_j|}{2}$.
The strict convexity of $w$ then yields
\begin{align*}
E_n\left(\frac{x+y}{2}\right)
&= \sum_{i\neq j}^n w\left(\Big|\frac{x_i+y_i}{2}-\frac{x_j+y_j}{2}\Big|\right) \cr
&= \sum_{i\neq j}^n w\left(\Big|\frac{x_i-x_j}{2}+\frac{y_i-y_j}{2}\Big|\right) \cr
&\leq \frac{1}{2}\left(
\sum_{i\neq j}^n w(|x_i-x_j|)+\sum_{i\neq j}^n w(|y_i-y_j|)
\right) =\frac{E_n(x)+E_n(y)}{2}.
\end{align*}
The inequality above is strict unless $x_i-x_j=y_i-y_j$ for all $i,j$, that is,
$x_{i+1}-x_i=y_{i+1}-y_i$ for all $i$; together with the zero center of mass,
this gives $x=y$.
\end{proof}
As a consequence, we can now state the following result regarding the uniqueness of global discrete minimizers.
\begin{thm}\label{thm:unique}
Suppose $\alpha \leq 1$, $\gamma\geq 1$, and $\alpha < \gamma$. Up to translations, the minimizer $x$
of $E_n$ is unique and symmetric with respect to its center of mass.
\end{thm}
\begin{proof} Let $x$, $y$ be two minimizers of $E_n$
satisfying~\eqref{eq:1dminxy}. If $x\neq y$, by Lemma~\ref{lem:unique1d}
one has
\[
E_n\left(\frac{x+y}{2}\right)<\frac{E_n(x)+E_n(y)}{2}=I_n,
\]
contradicting the minimality of $I_n$. This shows the uniqueness of a minimizer satisfying~\eqref{eq:1dminx}.
Denote now by $s$ the symmetry defined by $s(\xi) = -\xi$ for $\xi \in \mathbb{R}$.
If $x$ is a minimizer of $E_n(x)$ satisfying~\eqref{eq:1dminx} then $y$ defined
by
\[
y_i=s(x_{n+1-i})\qquad i=1,\cdots,n
\]
is also a minimizer satisfying~\eqref{eq:1dminy}. Thus by uniqueness
\[
x_i=-x_{n+1-i}\qquad i=1,\cdots,n,
\]
and this completes the proof of the theorem.
\end{proof}
\begin{rem}[Uniqueness and displacement convexity in one dimension]
Lemma \ref{lem:unique1d} and Theorem \ref{thm:unique} are just discrete versions
of uniqueness results for the continuum interaction functional~\eqref{eq:contE}.
In his seminal work \cite{Mc} introducing the notion of displacement
convexity, R. McCann already addressed the uniqueness (up to translation) of minimizers of the interaction
energy functional \eqref{eq:contE} using the theory of optimal transportation:
if $W$ is strictly convex in $\mathbb{R}^d$, then the global minimizer is unique
among probability measures with fixed center of mass, as the energy $E[\mu]$ is
(strictly) displacement convex. However, displacement convexity of a functional
is a less restrictive requirement in one dimension than in higher dimensions. As proven in~\cite{CFP},
to check the displacement convexity of the energy $E[\mu]$ in one dimension, it is
enough to check the convexity of the function $w(r)$ for $r>0$. Therefore, if $w(r)$
is strictly convex in $(0,\infty)$, then the energy functional \eqref{eq:contE} is strictly
displacement convex for probability measures with zero center of mass. As a consequence,
the global minimizer of \eqref{eq:contE} in the set of probability measures is unique up
to translations. Lemma \ref{lem:unique1d} shows that this condition is equivalent
to $\alpha \leq 1$, $\gamma\geq 1$, and $\alpha < \gamma$, for power-law potentials.
Finally, the convexity of $E_n$ in Lemma~\ref{lem:unique1d} is just the displacement
convexity of the energy functional \eqref{eq:contE} restricted to discrete measures.
We included the proofs of the convexity and uniqueness because they are quite straightforward
in this case, without appealing to more involved concepts in optimal transportation.
\end{rem}
\begin{rem}[Explicit convergence to uniform density] As a final example we consider the case where $\gamma=2$,
$\alpha=1$, which corresponds to quadratic attraction and Newtonian repulsion in
one dimension (see \cite{FellnerRaoul1}). When $x$ is a minimizer of $E_n(x)$, we have by \eqref{jjj} that
\[
\sum_{j\neq k} (x_k-x_j) =
\sum_{j\neq k} \mbox{sign}(x_k-x_j) = 2k-n-1,\quad \forall\ k=1,\cdots,n.
\]
Replacing the index $k$ by $k+1$, the equation becomes
\[
\sum_{j\neq {k+1}} (x_{k+1}-x_j) =
\sum_{j\neq {k+1}} \mbox{sign}(x_{k+1}-x_j) = 2k-n+1,\quad \forall\
k=1,\cdots,n-1.
\]
Subtracting the two equations above, we get
\[
n(x_{k+1}-x_k)=2,\ \forall\ k=1,\cdots,n-1,
\]
that is $x_{k+1}-x_k=\frac{2}{n}$, for all $k=1,\cdots,n-1$.
This shows that in the case $\gamma=2$ and $\alpha=1$, the points $x_i$ are uniformly
distributed; as $n$ goes to infinity, the corresponding discrete measure
$\mu=\frac{1}{n}\sum_{i=1}^n \delta_{x_i}$ converges to the uniform probability
measure on the interval $[-1,1]$. This uniform density is known to be the global
minimizer of the energy $E[\mu]$ in the continuum setting, see \cite{FellnerRaoul1,CFT}.
\end{rem}
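The explicit spacing computed above can be checked numerically. The following minimal sketch (illustrative, not part of the paper; the step size and iteration count are ad hoc choices) runs plain gradient descent on $E_n$ with $w(r)=r^2/2-r$ and recovers the uniform gap $2/n$:

```python
import numpy as np

n = 20
x = np.linspace(-0.5, 0.5, n)     # distinct, ordered starting points
lr = 0.2 / n
for _ in range(5000):
    d = x[:, None] - x[None, :]                    # x_k - x_j
    grad = 2.0 * (d - np.sign(d)).sum(axis=1)      # dE/dx_k for w(r) = r^2/2 - r
    x -= lr * grad
gaps = np.diff(np.sort(x))
print(gaps)   # every gap approaches 2/n = 0.1
```

Since the quadratic part of the energy is convex and the sign terms are constant while the ordering is preserved, the iteration converges to the exact equally spaced configuration.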
\section*{Acknowledgement}
JAC acknowledges support from projects MTM2011-27739-C04-02,
2009-SGR-345 from Ag\`encia de Gesti\'o d'Ajuts Universitaris i de Recerca-Generalitat
de Catalunya, and the Royal Society through a Wolfson Research Merit Award. JAC and YH
acknowledge support from the Engineering and Physical Sciences Research Council (UK) grant
number EP/K008404/1. MC acknowledges the support of a J. Nelder fellowship from Imperial
College where this work was initiated. The research leading to these results has received
funding from the Lithuanian-Swiss cooperation programme to reduce economic and social disparities within the enlarged European Union under project agreement No CH-3-SMM-01/0. The research of MC was also supported by the Swiss National Science Foundation under the contract $\#$ 200021-146620. Finally, the paper was completed during a visit of MC at the Newton Institute in Cambridge. The nice atmosphere of the institute is gratefully acknowledged.
\section{Introduction}
\label{intro}
In this paper, we introduce a unified formulation and analysis for linear elasticity problems
\begin{equation}\label{model}
\left\{
\begin{aligned}[rll]
A\boldsymbol{\sigma}-\epsilon (u)&=0\quad &\text{ in }{\rm\Omega} ,\\
{\rm div} \boldsymbol{\sigma}&=f\quad & \text{ in }{\rm\Omega} ,\\
u&=0\quad &\text{ on }\Gamma_D ,\\
\boldsymbol{\sigma}n&=0\quad &\text{ on }\Gamma_N,
\end{aligned}
\right.
\end{equation}
with ${\rm\Omega}\subset \mathbb{R}^n~(n=2,3)$ and $\partial {\rm\Omega}=\Gamma_D\cup \Gamma_N$, $\Gamma_D\cap \Gamma_N=\varnothing$.
Here the displacement is denoted by $u: {\rm\Omega}\rightarrow \mathbb{R}^n$ and the stress tensor is denoted by $\boldsymbol{\sigma}: {\rm\Omega}\rightarrow \mathcal{S}$, where $\mathcal{S}$ is the set of symmetric $n\times n$ tensors. The linearized strain tensor $\epsilon (u)=\frac{1}{2}(\nabla u+\nabla u^T)$. The compliance tensor $A: \mathcal{S}\rightarrow \mathcal{S}$
\begin{equation}\label{lame}
A\boldsymbol{\sigma} ={1+\nu\over E}\boldsymbol{\sigma} -{(1+\nu)\nu\over (1+(n-2)\nu) E}tr(\boldsymbol{\sigma})I
\end{equation}
is assumed to be bounded and symmetric positive definite, where $E$ and $\nu\in (0, \frac12)$ are the Young's modulus and the Poisson's ratio of the elastic material under consideration, respectively.
Finite element method (FEM) and its variants have been widely used for numerical solutions of partial differential equations. Conforming and nonconforming FEMs in primal form are two classic Galerkin methods for elasticity and structural problems \cite{hrennikoff1941solution,courant1994variational,feng1965finite}. Mixed FEMs for the elasticity problem, derived from the Hellinger-Reissner variational principle, are also popular methods since they
approximate not only the displacement but also the stress tensor. Unlike the mixed FEMs for scalar second-order elliptic problems, the strong symmetry is required for the stress tensor in the elasticity problem. This strong symmetry causes a substantial additional difficulty for developing stable mixed FEMs for the elasticity problem.
To overcome such a difficulty, it was proposed in \cite{Fraejis1975} to relax or abandon the symmetric constraint on the stress tensor by employing Lagrangian functionals. This idea was developed
in the late 1970s and 1980s \cite{amara1979equilibrium, arnold1984peers, arnold1988new, stein1990mechanical,
stenberg1986construction, stenberg1988family, stenberg1988two}, and further systematically explored in a recent work \cite{arnold2007mixed} by utilizing a constructive derivation of the elasticity complex starting from the de Rham complex \cite{eastwood2000complex} and mimicking the construction in the discrete case.
Another framework to construct stable weakly symmetric mixed finite elements was presented in \cite{boffi2009reduced}, where two approaches were proposed, the first one based on the Stokes problem and the second one based on interpolation operators. To keep the symmetry of the discrete stress, a second way is to relax the continuity of the normal components of the discrete stress across the internal edges or faces of grids. This approach leads to nonconforming mixed FEMs with strongly symmetric stress tensors \cite{yi2005nonconforming,yi2006new,man2009lower,hu2007lower,awanou2009rotated,arnold2003nonconforming,
arnold2014nonconforming,gopalakrishnan2011symmetric,wu2017interior,hu2019nonconforming}.
In 2002, based on the elasticity complex, the first family of symmetric conforming mixed elements with polynomial shape functions was proposed for the two-dimensional case in \cite{arnold2002mixed}, which was extended to the three-dimensional case in \cite{arnold2008finite}.
Recently, a family of conforming mixed elements with fewer degrees of freedom was proposed for any dimension by discovering a
crucial structure of discrete stress spaces of symmetric matrix-valued polynomials on any dimensional simplicial grids and proving two basic algebraic results in \cite{hu2014family,hu2015family,hu2016finite,hu2014finite}. Those new elements can be regarded as an improvement and a unified extension to any dimension of those from \cite{arnold2002mixed} and \cite{arnold2008finite}, without an explicit use of the elasticity complex. Besides the optimal convergence property with respect to the degrees of polynomials of discrete stresses, an advantage of those elements is that it is easy to construct their basis functions, therefore implement the elements. See stabilized mixed finite elements on simplicial grids for any dimension in \cite{chen2017stabilized}.
Discontinuous Galerkin (DG) methods were also widely used in numerical solutions for the elasticity problem, see \cite{chen2010local,hong2016robust, hong2019conservative, wang2020mixed}. DG methods offer the convenience to discretize problems in an element-by-element fashion and use numerical traces to glue each element together \cite{arnold2002unified, hong2012discontinuous, hong2016uniformly, hong2018parameter}. This advantage makes DG methods an ideal option for linear elasticity problems to preserve the strong symmetry of the stress tensor.
Various hybridizable discontinuous Galerkin (HDG) formulations with strong symmetric stress tensor were proposed and analyzed for linear elasticity problems, such as \cite{soon2008hybridizable,soon2009hybridizable,fu2015analysis,qiu2018hdg,chen2016robust}. The HDG methods for linear elasticity problems contain three variables -- stress $\boldsymbol{\sigma}_h$, displacement $u_h$ and numerical trace of displacement $\hat u_h$. In the HDG methods, the variable $\hat u_h$ is defined on element borders and can be viewed as the Lagrange multiplier for the continuity of the normal component of stress. Weak Galerkin (WG) methods were proposed and analyzed in \cite{wang2016locking,wang2018weak,wang2018hybridized,chen2016robust,yi2019lowest} for linear elasticity problems. The main feature of the WG methods is the weakly defined differential operators over weak functions.
A three-field decomposition method was discussed for linear elasticity problems in \cite{brezzi2005three}. A new hybridized mixed method for linear elasticity problems was proposed in~\cite{gong2019new}.
The virtual element method is a new Galerkin scheme for the approximation of partial differential equation problems, and admits the flexibility to deal with general polygonal and polyhedral meshes.
Virtual element methods are experiencing a growing interest in structural mechanics, and have contributed a lot to linear elasticity problems; see \cite{da2013virtual,artioli2017stress,artioli2018family,dassi2020three} and the references therein. Recently, an investigation of the possible interest in using virtual element methods for traditional decompositions was presented in \cite{brezzi2021finite}. As shown in \cite{brezzi2021finite}, the virtual element method looks promising for high-order partial differential equations as well as Stokes and linear elasticity problems. Some other interesting methods, such as the tangential-displacement normal-normal-stress method, which is robust with respect to both shear and volume locking, were considered in \cite{pechstein2011tangential,pechstein2018analysis}.
In this paper, a unified formulation is built up for linear elasticity problems following and modifying the ones in \cite{hong2020extended,hong2021extended} for scalar second-order elliptic problems. The formulation is given in terms of four discretization variables --- $\boldsymbol{\sigma}_h$, $\check{\boldsymbol{\sigma}}_h$, $u_h$, $\check u_h$. The variables $\boldsymbol{\sigma}_h$ and $u_h$ approximate the stress tensor $\boldsymbol{\sigma}$ and displacement $u$ in each element, respectively. Strong symmetry of the stress tensor is guaranteed by the symmetric shape function space of the variable $\boldsymbol{\sigma}_h$. The variables $\check{\boldsymbol{\sigma}}_h$ and $\check u_h$ are the residual corrections to the average of $\boldsymbol{\sigma}_h$ and $u_h$ along interfaces of elements, respectively. They can also be viewed as multipliers to impose the inter-element continuity property of $u_h$ and the normal component of $\boldsymbol{\sigma}_h$, respectively.
The four variables in the formulation provide feasible choices of numerical traces, and therefore, the flexibility of recovering most existing FEMs for linear elasticity problems.
There exist two different three-field formulations by eliminating the variable $\check{\boldsymbol{\sigma}}_h$ and $\check u_h$, respectively, and a two-field formulation by eliminating both.
With the same choice of discrete spaces and parameters, these four-field, three-field, and two-field formulations are equivalent.
Moreover, some particular discretizations induced from the unified formulation are hybridizable and lead to the corresponding one-field formulation.
As shown in \cite{hong2019unified, hong2018uniform, hong2020extended}, the analysis of the formulation is classified into two classes:
the $H^1$-based class and the $H({\rm div})$-based class. Polynomials of a higher degree are employed for the displacement than for the stress tensor in the $H^1$-based formulation, and the other way around in the $H({\rm div})$-based formulation.
Both classes are proved to be well-posed under natural assumptions.
Unlike scalar second-order elliptic problems, there are no stable symmetric $H({\rm div})$-conforming mixed finite elements in the literature that approximate the stress tensor by polynomials of degree not larger than $k$ with $k\leq n$.
This causes a difficulty in proving the inf-sup condition for the $H({\rm div})$-based formulation with $k\leq n$. The nonconforming element in \cite{wu2017interior} is employed here to circumvent this difficulty, with the jump of the normal component of $\boldsymbol{\sigma}_h$ embedded in the norm of the stress tensor $\boldsymbol{\sigma}_h$.
The unified formulation is closely related to some mixed element methods. As some parameters approach zero, some mixed element methods and primal methods can be proven to be limiting cases of the unified formulation. In particular, both the nonconforming mixed element method in \cite{gopalakrishnan2011symmetric} and the conforming mixed element methods in \cite{hu2014finite,hu2014family,hu2015family} are limiting cases of the formulation.
The proposed four-field formulation is also closely related to most existing methods \cite{qiu2018hdg,chen2016robust,soon2009hybridizable,fu2015analysis,chen2010local,wang2020mixed}
for linear elasticity as listed in the first three rows in Table \ref{tab:HDGexist}, and the first row in Table \ref{tab:LDG1} and Table \ref{tab:LDG2}.
More importantly, some new discretizations are derived from this formulation as listed in
Table \ref{tab:intronew}.
Under the unified analysis of the four-field formulation, all these new methods are well-posed and admit optimal error estimates.
In Table \ref{tab:intronew}, the first scheme is an $H^1$-based method and the following two schemes are $H({\rm div})$-based methods. The
last scheme is a special case of the second one with $\gamma=0$ and $\eta=\tau^{-1}$; it is hybridizable and can be
written as a one-field formulation with only one globally-coupled variable.
In fact, after the elimination of the variable $\check{\boldsymbol{\sigma}}_h$ and a transformation from the variable $\check u_h$ to the variable $\hat u_h$ in the last method of Table \ref{tab:intronew}, we obtain an optimal $H({\rm div})$-based HDG method.
The notation $\tau=\Omega(h_e^{-1})$ and $\tau=\Omega(h_e)$ in
Table \ref{tab:intronew}
means there exist constants $c_0>0, C_0>0$ such that $c_0 h_e^{-1}\le \tau\le C_0 h_e^{-1}$ and $c_0 h_e\le \tau\le C_0 h_e$, respectively. For $k\ge 0$,
\begin{equation}\label{Spaces}
\begin{aligned}
V^{k}_h&=\{v_h\in L^2(\Omega, \mathbb{R}^n): v_h|_{K}\in \mathcal{P}_k(K, \mathbb{R}^n), \forall
K\in \mathcal T_h \},\\
Q^k_h& = \{\boldsymbol{\tau}_h \in L^2(\Omega, \mathcal{S}):
\boldsymbol{\tau}_h|_K\in \mathcal{P}_k(K, \mathcal{S}), \forall K\in
\mathcal T_h \},\\
\check V^{k}_{h}&=\{\check v_h\in L^2({\mathcal{E}_h}, \mathbb{R}^n): \check v_h|_e\in \mathcal{P}_k(e, \mathbb{R}^n), \forall e\in {\mathcal{E}_h}, \ \check v_h|_{\Gamma_D}=0 \},\\
\check Q^k_{h}& = \{\check{\boldsymbol{\tau}}_h \in L^2({\mathcal{E}_h}, \mathcal{S}): \check{\boldsymbol{\tau}}_h|_e\in \mathcal{P}_k(e, \mathcal{S}), \forall e\in {\mathcal{E}_h}, \ \check{\boldsymbol{\tau}}_hn|_{\Gamma_N}=0 \},
\end{aligned}
\end{equation}
where $\mathcal{P}_k(K, \mathbb{R}^{n})$ and $\mathcal{P}_k(e, \mathbb{R}^{n})$ denote spaces of vector-valued functions in $\mathbb{R}^n$ whose components are polynomials of degree at most $k$ on $K$ and $e$, respectively, and $\mathcal{P}_k(K, \mathcal{S})$ and $\mathcal{P}_k(e, \mathcal{S})$ denote spaces of symmetric tensor-valued functions in $\mathcal{S}$ whose components are polynomials of degree at most $k$ on $K$ and $e$, respectively.
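For reference, the local dimensions of the shape function spaces above follow from $\dim \mathcal{P}_k=\binom{k+n}{n}$ per component; a small illustrative script (not part of the paper):

```python
from math import comb

def dim_Pk(k, n):
    """Dimension of scalar polynomials of total degree <= k in n variables."""
    return comb(k + n, n)

def dim_Vk(k, n):
    """Local dimension of P_k(K, R^n): n vector components."""
    return n * dim_Pk(k, n)

def dim_Qk(k, n):
    """Local dimension of P_k(K, S): n(n+1)/2 independent symmetric entries."""
    return n * (n + 1) // 2 * dim_Pk(k, n)

print(dim_Vk(1, 2), dim_Qk(1, 2))   # 6 9
print(dim_Vk(2, 3), dim_Qk(1, 3))   # 30 24
```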
\begin{table}[!htbp]
\centering
\begin{tabular}{c|cccccccccccc}
\hline
&$\eta$& $\tau$&$\gamma$& $Q_h$ & $V_h$ & $\check{Q}_h$& $\check{V}_h$
\\\hline
1 &$\mathcal{O}(h_e)$& $\mathcal{O}(h_e^{-1})$&$\mathcal{O}(1)$&$ Q_h^{k}$ & $V_h^{k+1}$& $\check{Q}_h^{r}$ &$\check{V}_h^{k}$
\\\hline
2&$\mathcal{O}(h_e^{-1})$&$\mathcal{O}(h_e)$& $\mathcal{O}(1)$ &$Q_h^{k+1}$ & $V_h^{k}$ & $\{0\}\ \text{or}\ \check{Q}_h^{m}$ &$\check{V}_h^{k+1}$
\\
3&$\tau^{-1}$&$\Omega(h_e)$&0&$Q_h^{k+1}$&$V_h^{k}$ & $\check{Q}_h^{k}$ &$\check{V}_h^{k+1}$
\\\hline
\end{tabular}
\caption{\footnotesize{Newly proposed methods with $r\ge \max (1, k)$ and $m\ge 0$. For the second and third schemes,
$\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_{\rm div,h}=\mathcal{O}(h^{k+1})$ for any $k\ge 0$ and $\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_0=\mathcal{O}(h^{k+2})$ if $k\ge n$.}}
\label{tab:intronew}
\end{table}
Throughout this paper, we shall use the letter $C$, which is independent
of the mesh-size $h$ and the stabilization parameters $\eta, \tau, \gamma$,
to denote a generic
positive constant which may stand for different values at different
occurrences. The notation $x \lesssim y$ and $x \gtrsim y$ means $x
\leq Cy$ and $x \geq Cy$, respectively. Denote $x\lesssim y\lesssim x$ by $x \eqsim y$.
The rest of the paper is organized as follows. Some notation is introduced in Section \ref{notation}.
{In Section \ref{sec:4form}, a four-field unified formulation is derived for linear elasticity problems. By proving uniform inf-sup conditions under two sets of assumptions, an optimal error analysis is provided for this unified formulation.
Section \ref{sec:variants} derives some variants of this four-field formulation, and reveals their relation with some existing methods in the literature.
Section \ref{sec:limit} illustrates two limiting cases of the unified formulation: mixed methods and primal methods.
Numerical results are provided in Section \ref{sec:numerical} to verify the theoretical analysis including the optimal convergence of the new proposed methods.
Some conclusion remarks are given in Section \ref{concl}.
}
\section{Preliminaries}\label{notation}
Given a nonnegative integer $m$ and a bounded domain $D\subset \mathbb{R}^n$, let $H^m(D)$, $\|\cdot\|_{m,D}$ and $|\cdot|_{m,D}$ be the usual Sobolev space, norm
and semi-norm, respectively. The
$L^2$-inner products on $D$ and $\partial D$ are denoted by $(\cdot,
\cdot)_{D}$ and $\langle\cdot, \cdot\rangle_{\partial D}$,
respectively. Let $\|\cdot\|_{0,D}$ and $\|\cdot\|_{0,\partial D}$ be the norms of Lebesgue spaces $L^2(D)$ and $L^2(\partial D)$, respectively. The norms $\|\cdot\|_{m,D}$ and $|\cdot|_{m,D}$ are abbreviated as $\|\cdot\|_{m}$ and
$|\cdot|_{m}$, respectively, when $D$ is chosen as $\Omega$.
Suppose that ${\rm\Omega}\subset \mathbb{R}^n$ is a bounded polygonal domain covered exactly by a shape-regular partition ${\mathcal{T}_h}$ { of polyhedra}. Let $h_K$ be the diameter of element $K\in {\mathcal{T}_h}$ and $h=\max_{K\in{\mathcal{T}_h}}h_K$. Denote the set of all interior edges/faces of ${\mathcal{T}_h}$ by ${\mathcal{E}_h^I}$, and all edges/faces on boundary $\Gamma_D$ and $\Gamma_N$ by ${\mathcal{E}_h^D}$ and ${\mathcal{E}_h^N}$, respectively. Let ${\mathcal{E}_h}={\mathcal{E}_h^I}\cup {\mathcal{E}_h^D} \cup {\mathcal{E}_h^N}$ and $h_e$ be the diameter of edge/face $e\in {\mathcal{E}_h}$. For any interior edge/face $e=K^+\cap K^-$, let $n^i$ = $n|_{\partial K^i}$ be the unit outward normal vector on $\partial K^i$ with $i = +,-$. For any vector-valued function $v_h$ and matrix-valued function $\boldsymbol{\tau}_h$, let $v_h^{\pm}$ = $v_h|_{\partial K^{\pm}}$, $\boldsymbol{\tau}_h^{\pm}$ = $\boldsymbol{\tau}_h|_{\partial K^{\pm}}$. Define the average $\{\cdot\}$ and the jump $[\cdot ]$ on interior edges/faces $e\in {\mathcal{E}_h^I}$ as
follows:
\begin{equation}\label{jumpdef}
\begin{array}{ll}
\{\boldsymbol{\tau}_h\}=\frac{1}{2}(\boldsymbol{\tau}_h^++\boldsymbol{\tau}_h^-),&[\boldsymbol{\tau}_h]=\boldsymbol{\tau}_h^+n^++\boldsymbol{\tau}_h^-n^-,\\
\{v_h\}=\frac{1}{2}(v_h^++v_h^-),&[v_h] =v_h^+\odot n^++v_h^-\odot n^- - (v_h^+\cdot n^+ + v_h^-\cdot n^-)\bm I
\end{array}
\end{equation}
where $v_h\odot n= v_hn^T+ nv_h^T$ and $\bm I$ is the identity tensor. For any boundary edge/face $e\subset \partial \Omega$, define
\begin{equation}\label{bddef}
\begin{array}{lllll}
\{\boldsymbol{\tau}_h\}=\boldsymbol{\tau}_h, & [\boldsymbol{\tau}_h]=0, &\{v_h\}=v_h,&[v_h]=v_h\odot n - (v_h\cdot n)\bm I ,& \text{on }\Gamma_D,\\
\{\boldsymbol{\tau}_h\}=\boldsymbol{\tau}_h,& [\boldsymbol{\tau}_h]=\boldsymbol{\tau}_hn, &\{v_h\}=v_h, & [v_h]=0,& \text{on }\Gamma_N.
\end{array}
\end{equation}
Note that the jump $[v_h]$ in \eqref{jumpdef} is a symmetric tensor and
\begin{equation}\label{vjumpn}
[v_h]n^+=v_h^+ - v_h^-,\qquad \forall e\in {\mathcal{E}_h^I},
\end{equation}
while $[v_h]n=v_h$ on $\Gamma_D$ and $[v_h]=0$ on $\Gamma_N$.
These properties are important for the Nitsche technique in \eqref{bdconstrain}, since the trace of the stress tensor $\boldsymbol{\sigma}_h$ should be a symmetric tensor.
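For completeness, the identity on interior edges/faces can be verified directly from \eqref{jumpdef}: using $n^-=-n^+$, $|n^+|=1$ and $(v\odot n)\,n = v + (v\cdot n)\,n$, one computes

```latex
\begin{align*}
[v_h]n^+ &= (v_h^+\odot n^+)n^+ + (v_h^-\odot n^-)n^+
            - (v_h^+\cdot n^+ + v_h^-\cdot n^-)n^+\\
&= \big(v_h^+ + (v_h^+\cdot n^+)n^+\big)
   - \big(v_h^- + (v_h^-\cdot n^+)n^+\big)
   - (v_h^+\cdot n^+)n^+ + (v_h^-\cdot n^+)n^+\\
&= v_h^+ - v_h^-.
\end{align*}
```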
Define some inner products as follows:
\begin{equation} \label{equ:inner-product}
(\cdot,\cdot)_{\mathcal{T}_h}=\sum_{K\in {\mathcal{T}_h} }(\cdot,\cdot)_{K},
\quad \langle\cdot,\cdot\rangle =\sum_{e\in {\mathcal{E}_h}}\langle\cdot,\cdot\rangle_{e},
\quad \langle\cdot,\cdot\rangle_{\partial{\mathcal{T}_h} }=\sum_{K\in{\mathcal{T}_h}}\langle\cdot,\cdot\rangle_{\partial K}.
\end{equation}
With the aforementioned definitions, there exists the following identity \cite{arnold2002unified}:
\begin{equation}\label{identities}
\langle \boldsymbol{\tau}_hn, v_h\rangle_{\partial\mathcal{T}_h} = \langle \{\boldsymbol{\tau}_h\}n, [v_h]n\rangle + \langle [\boldsymbol{\tau}_h], \{v_h\}\rangle.
\end{equation}
For any vector-valued function $v_h$ and matrix-valued function $\boldsymbol{\tau}_h$, define the piecewise gradient $\epsilon_h$ and piecewise divergence ${\rm div}_h$ by
$$
\epsilon_h (v_h)\big |_K=\epsilon (v_h|_K), \quad
{\rm div} _h \boldsymbol{\tau}_h\big |_K={\rm div} (\boldsymbol{\tau}_h |_K) \quad \forall K \in {\mathcal{T}_h}.
$$
Whenever there is no ambiguity, we simplify $(\cdot, \cdot)_{\mathcal{T}_h}$ as $(\cdot,\cdot)$.
The following crucial DG identity follows from integration by parts and \eqref{identities}
\begin{equation}\label{DGidentity}
(\boldsymbol{\tau}_h, \epsilon_h (v_h))
=-({\rm div} _h \boldsymbol{\tau}_h, v_h)
+ \langle [\boldsymbol{\tau}_h], \{v_h\}\rangle
+ \langle \{\boldsymbol{\tau}_h\}n, [v_h]n\rangle.
\end{equation}
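For clarity, \eqref{DGidentity} is obtained in two steps. Element-wise integration by parts, using the symmetry of $\boldsymbol{\tau}_h$ so that $(\boldsymbol{\tau}_h,\nabla v_h)_K=(\boldsymbol{\tau}_h,\epsilon(v_h))_K$, first gives

```latex
\[
(\boldsymbol{\tau}_h, \epsilon_h (v_h))
=-({\rm div} _h \boldsymbol{\tau}_h, v_h)
+ \langle \boldsymbol{\tau}_hn, v_h\rangle_{\partial\mathcal{T}_h},
\]
```

and applying \eqref{identities} to the boundary term then yields \eqref{DGidentity}.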
\section{A four-field formulation and unified analysis}\label{sec:4form}
Let $Q_h$ and $V_h$ be approximations to $L^2({\rm\Omega}, \mathcal{S})$ and $L^2({\rm\Omega}, \mathbb{R}^n)$, respectively, and be piecewise smooth with respect to $\mathcal{T}_h$. Let
$$
\check{Q}_{h}=\{\check{\boldsymbol{\tau}}_h \in L^2(\mathcal E_h, \mathcal{S}): \check{\boldsymbol{\tau}}_hn|_{\Gamma_N}=0 \} \quad \text{ and }\quad
\check{V}_{h}=\{\check v_h \in L^2(\mathcal E_h, \mathbb{R}^n): \check v_h|_{\Gamma_D}=0 \}.
$$
We start by multiplying the first two equations in \eqref{model} by $\boldsymbol{\tau}_h\in Q_h$ and $v_h\in V_h$, respectively. It is easy to obtain that, for any $K\in \mathcal{T}_h$,
\begin{equation}\label{XGelement}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma},\boldsymbol{\tau}_h)_{0, K}
+(u, {\rm div} _h \boldsymbol{\tau}_h)_{0, K}
-\langle u, \boldsymbol{\tau}_hn \rangle_{0,\partial K}
&=0,&\ \forall \boldsymbol{\tau}_h \in Q_h,\\
-(\boldsymbol{\sigma}, \epsilon_h (v_h))_{0, K}
+\langle \boldsymbol{\sigma}n , v_h\rangle_{0, \partial K}
&=(f,v_h)_{0, K}, &\ \forall v_h\in V_h.
\end{array}
\right.
\end{equation}
We introduce two independent discrete variables $\check{\boldsymbol{\sigma}}_h\in \check Q_{h}$ and $\check u_h\in \check V_{h}$
as
\begin{equation}\label{fluxdef}
\boldsymbol{\sigma}|_{\partial K}\approx \hat{\boldsymbol{\sigma}}_{h} :=\acute {\boldsymbol{\sigma}}_h + \check{\boldsymbol{\sigma}}_h,
\qquad
u|_{\partial K} \approx \hat u_{h} := \acute{u}_h +\check u_h,
\end{equation}
where $\acute{\boldsymbol{\sigma}}_h=\acute {\boldsymbol{\sigma}}_h(\boldsymbol{\sigma}_h,u_h)$ and $\acute {u}_h=\acute {u}_h(\boldsymbol{\sigma}_h,u_h)$ are given in terms of $\boldsymbol{\sigma}_h$ and $u_h$.
Here $\check{\boldsymbol{\sigma}}_h\in \check{Q}_{h}$ and $\check u_h\in \check{V}_{h}$ are \textit{residual corrections} to $\acute {\boldsymbol{\sigma}}_h$ and $\acute {u}_h$ along the mesh interfaces. Thus the formulation \eqref{XGelement} can be written as
\begin{equation}\label{elemhat}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)_{0, K}
+(u_h, {\rm div} _h \boldsymbol{\tau}_h)_{0, K}
-\langle \hat u_{h}, \boldsymbol{\tau}_hn \rangle_{0, \partial K}
&=0,&\ \forall \boldsymbol{\tau}_h \in Q_h,\\
-(\boldsymbol{\sigma}_h, \epsilon_h (v_h))_{0, K}
+\langle \hat{\boldsymbol{\sigma}}_{h}n, v_h\rangle_{0, \partial K}
&=(f,v_h)_{0, K}, &\ \forall v_h\in V_h.
\end{array}
\right.
\end{equation}
In order to weakly preserve the continuity of the displacement and the normal component of the stress across interfaces, we employ two further equations, following Nitsche's technique, to determine $\check{\boldsymbol{\sigma}}_h$ and $\check u_h$:
\begin{equation}\label{bdconstrain}
\left\{
\begin{array}{rll}
\langle \check{\boldsymbol{\sigma}}_h+ \tau[u_h], \check{\boldsymbol{\tau}}_h \rangle_{e }
&=0,&\forall \check{\boldsymbol{\tau}}_h \in \check{Q}_{h},
\\
\langle \check u_h + \eta[\boldsymbol{\sigma}_h], \check v_h \rangle_e
&=0,&\forall \check v_h \in \check{V}_{h}.
\end{array}
\right.
\end{equation}
The variable $\check u_h$ is not only a residual correction but also a multiplier on the jump $[\boldsymbol{\sigma}_h]$ along interfaces. Similarly, the variable $\check{\boldsymbol{\sigma}}_h$ is not only a residual correction but also a multiplier on the jump $[u_h]$ along interfaces.
In this paper, we will discuss a special case with
\begin{equation}\label{tildedef}
\acute{\boldsymbol{\sigma}}_h = \{\boldsymbol{\sigma}_h\} + [\boldsymbol{\sigma}_h]\gamma^T,
\qquad
\acute u_h= \{u_h\} - (\gamma^Tn)[u_h]n,
\end{equation}
where $\gamma\in \mathbb{R}^n$ is a column vector. Thus,
\begin{equation}\label{hatdef}
\hat{\boldsymbol{\sigma}}_h = \{\boldsymbol{\sigma}_h\} + [\boldsymbol{\sigma}_h]\gamma^T +\check{\boldsymbol{\sigma}}_h, \qquad
\hat{u}_h = \{u_h\} - (\gamma^Tn)[u_h]n +\check u_h.
\end{equation}
\begin{remark}
Note that the formulation, which seeks $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)\in Q_h\times \check Q_{h}\times V_h\times \check V_{h}$ satisfying \eqref{elemhat} and \eqref{bdconstrain}, is consistent: if $(\boldsymbol{\sigma}, u)$ is the solution to the model \eqref{model}, then $(\boldsymbol{\sigma}, 0, u, 0)$ satisfies equations \eqref{elemhat} and \eqref{bdconstrain}.
\end{remark}
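Assuming, as in the assumptions below, that $\tau$ and $\eta$ are constant on each edge, the Nitsche equations \eqref{bdconstrain} determine the residual corrections explicitly as $\check{\boldsymbol{\sigma}}_h=-\tau\check P_h^\sigma[u_h]$ and $\check u_h=-\eta\check P_h^u[\boldsymbol{\sigma}_h]$, where $\check P_h^\sigma$ and $\check P_h^u$ denote the $L^2$ projections onto $\check Q_{h}$ and $\check V_{h}$, respectively. Hence the numerical fluxes in \eqref{hatdef} admit the explicit form
$$
\hat{\boldsymbol{\sigma}}_h = \{\boldsymbol{\sigma}_h\} + [\boldsymbol{\sigma}_h]\gamma^T -\tau \check P_h^\sigma[u_h], \qquad
\hat{u}_h = \{u_h\} - (\gamma^Tn)[u_h]n -\eta \check P_h^u [\boldsymbol{\sigma}_h],
$$
cf.\ \eqref{hatjumprelu} and \eqref{hatjumprel} below.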
\subsection{$H^1$-based four-field formulation}
Let $\eta_1=\tau^{-1}$ and $\eta_2=\eta$. By the DG identity \eqref{DGidentity}, the resulting $H^1$-based four-field formulation seeks $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)\in Q_h\times \check Q_{h}\times V_h\times \check V_{h}$ such that
\begin{equation}\label{XGgrad}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)_{0, K}
- (\epsilon_h(u_h), \boldsymbol{\tau}_h)_{0, K}
-\langle \hat u_{h} - u_h, \boldsymbol{\tau}_hn \rangle_{0, \partial K}
&=0,&\ \forall \boldsymbol{\tau}_h \in Q_h,\\
-(\boldsymbol{\sigma}_h, \epsilon_h (v_h))_{0, K}
+\langle \hat{\boldsymbol{\sigma}}_{h}n, v_h\rangle_{0, \partial K}
&=(f,v_h)_{0, K}, &\ \forall v_h\in V_h,\\
\langle
\eta_1\check{\boldsymbol{\sigma}}_h+ [u_h], \check{\boldsymbol{\tau}}_h \rangle_{e }
&=0,&\forall \check{\boldsymbol{\tau}}_h \in \check{Q}_{h},
\\
\langle
\check u_h + \eta_2[\boldsymbol{\sigma}_h], \check v_h \rangle_e
&=0,&\forall \check v_h \in \check{V}_{h},
\end{array}
\right.
\end{equation}
with $(\hat{\boldsymbol{\sigma}}_h, \hat{u}_h)$ defined in \eqref{hatdef}.
{
Denote the $L^2$ projection onto $\check{Q}_{h}$ and $\check{V}_{h}$ by $\check P_h^\sigma$ and $\check P_h^u$, respectively.
Nitsche's technique in \eqref{bdconstrain} implies that
\begin{equation}\label{hatjumprelu}
\check u_h = -\eta \check P_h^u [\boldsymbol{\sigma}_h].
\end{equation}
By plugging the above equation and the identity \eqref{identities} into \eqref{elemhat}, the four-field formulation \eqref{XGgrad} for $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)$ is equivalent to the following three-field formulation, which seeks $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h)\in Q_h \times \check{Q}_{h}\times V_h $ such that
\begin{equation}\label{M}
\left\{
\begin{array}{rlr}
a_W(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h; \boldsymbol{\tau}_h, \check{\boldsymbol{\tau}}_h) + b_W(\boldsymbol{\tau}_h, \check{\boldsymbol{\tau}}_h; u_h)
&=0,
&\forall~(\boldsymbol{\tau}_h, \check{\boldsymbol{\tau}}_h)\in Q_h \times\check{Q}_{h},
\\
b_W(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h; v_h)
&=(f,v_h),
&\forall~v_h \in V_h,
\end{array}
\right.
\end{equation}
with
\begin{equation}\label{WGABC}
\left\{
\begin{array}{rl}
a_W(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h; \boldsymbol{\tau}_h, \check{\boldsymbol{\tau}}_h)&=(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)
+\langle \eta_2\check{P}_h^u[\boldsymbol{\sigma}_h] , [\boldsymbol{\tau}_h]\rangle
+ \langle \eta_1 \check{\boldsymbol{\sigma}}_h , \check{\boldsymbol{\tau}}_h \rangle,
\\
b_W(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h; v_h)&= -(\boldsymbol{\sigma}_h, \epsilon_h (v_h))
+\langle (\{\boldsymbol{\sigma}_h\} + \check{\boldsymbol{\sigma}}_h + [\boldsymbol{\sigma}_h]\gamma^T)n, [v_h]n\rangle.
\end{array}
\right.
\end{equation}
Thanks to this equivalence, we will use the well-posedness of the three-field formulation \eqref{M} to prove that of
the proposed four-field formulation \eqref{XGgrad} under the following $H^1$-based assumptions:
}
\begin{enumerate}
\item[(G1)] $\epsilon_h (V_h)\subset Q_h$, $\epsilon_h (V_h)|_{\mathcal{E}_h} \subset \check Q_h$ and $Q_hn|_{\mathcal{E}_h}\subset \check Q_h$;
\item[(G2)] $\check{Q}_h$ contains piecewise linear functions;
\item[(G3)] $\eta_1 = \rho_1 h_e$, $\eta_2=\rho_2h_e$ and there exist positive constants $C_1$, $C_2$ and $C_3$ such that
$$
0< \rho_1\leq C_1,\quad 0< \rho_2\leq C_2,\quad |\gamma|\leq C_3,
$$
namely $0< \eta\leq Ch_e$ and $\tau\ge C h_e^{-1}$ in \eqref{bdconstrain}.
\end{enumerate}
Define
\begin{equation}\label{H1norms}
\begin{array}{ll}
\|\boldsymbol{\tau}_h\|_{0, h}^2 =(A\boldsymbol{\tau}_h, \boldsymbol{\tau}_h)+\|\eta_1^{1/2}\{\boldsymbol{\tau}_h\}\|_{\mathcal{E}_h}^2+ \| \eta_2^{1/2} \check{P}_h^u[\boldsymbol{\tau}_h ]\|_{\mathcal{E}_h}^2,
&
\|\check{\boldsymbol{\tau}}_h\|_{0, h}^2 =\|\eta_1^{1/2}\check{\boldsymbol{\tau}}_h\|_{\mathcal{E}_h}^2,
\\
\|v_h\|_{1, h}^2 =\|\epsilon_h (v_h)\|_0^2 + \|\eta_1^{-1/2}\check{P}_h^\sigma[v_h]\|_{\mathcal{E}_h}^2,
&
\|\check v_h\|_{0, h}^2 =\|\eta_2^{-1/2}\check v_h\|_{\mathcal{E}_h}^2.
\end{array}
\end{equation}
Assumption (G2) guarantees that $\|v_h\|_{1, h}$ is a norm for $V_h$.
It follows from \eqref{jumpdef} that
$$
[v_h] =(v_h^+ - v_h^-)\odot n^+ - (v_h^+ - v_h^-)\cdot n^+ \bm I.
$$
Thus, by \eqref{vjumpn},
\begin{equation}
\|[v_h]\|_{0, e}\leq 2\|v_h^+ - v_h^-\|_{0, e} = 2\|[v_h] n^+\|_{0, e}.
\end{equation}
This implies that the norm $\|\eta_1^{-1/2} \check P_h^\sigma [u_h]\|_{{\mathcal{E}_h}} $ is equivalent to $\|\eta_1^{-1/2} \check P_h^\sigma [u_h]n\|_{{\mathcal{E}_h}} $, namely,
\begin{equation}\label{normaljump}
c_1\|\eta_1^{-1/2} \check P_h^\sigma [u_h]\|_{{\mathcal{E}_h}}\leq \|\eta_1^{-1/2} \check P_h^\sigma [u_h]n\|_{{\mathcal{E}_h}} \leq c_2\|\eta_1^{-1/2} \check P_h^\sigma [u_h]\|_{{\mathcal{E}_h}}.
\end{equation}
Define the lifting operators $r_Q: L^2({\mathcal{E}_h}, \mathcal{S})\rightarrow Q_h$ and $l_Q: L^2({\mathcal{E}_h}, \mathbb{R}^n)\rightarrow Q_h$ by
\begin{equation}\label{avgQ}
(r_Q(\boldsymbol{\xi}), \boldsymbol{\tau}_h)= - \langle \{\boldsymbol{\tau}_h\}n, \boldsymbol{\xi}n\rangle,\quad (l_Q(w), \boldsymbol{\tau}_h)= - \langle [\boldsymbol{\tau}_h], w\rangle,\quad \forall\boldsymbol{\tau}_h\in Q_h,
\end{equation}
respectively, and define $r_V: L^2({\mathcal{E}_h}, \mathbb{R}^n)\rightarrow V_h$ and $l_V: L^2({\mathcal{E}_h}, \mathcal{S})\rightarrow V_h$ by
\begin{equation}\label{avgV}
(r_V(w), v_h)= -\langle \{v_h\}, w\rangle,\quad (l_V(\boldsymbol{\xi}), v_h)= - \langle [v_h]n, \boldsymbol{\xi}\rangle,\qquad \forall v_h\in V_h,
\end{equation}
respectively.
If $\boldsymbol{\xi}|_e\in P_k(e, \mathcal{S})$ and $w|_e\in P_k(e, \mathbb{R}^n)$ for each $e\in\mathcal{E}_h$, the following estimates hold \cite{arnold2002unified}:
\begin{equation}\label{lift}
\|r_Q(\boldsymbol{\xi})\|_0^2\eqsim \|l_V(\boldsymbol{\xi})\|_0^2 \eqsim \|h_e^{-1/2}\boldsymbol{\xi}\|_{{\mathcal{E}_h}}^2,\quad
\|l_Q(w)\|_0^2\eqsim\|r_V(w)\|_0^2 \eqsim \|h_e^{-1/2}w\|_{{\mathcal{E}_h}}^2.
\end{equation}
\begin{theorem}\label{Th:inf-supGrad}
Under Assumptions (G1)--(G3), the formulation \eqref{XGgrad} is uniformly well-posed with respect to the mesh size, $\rho_1$ and $\rho_2$. Furthermore, the following properties hold:
\begin{enumerate}
\item Let $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)\in Q_h\times \check Q_{h}\times V_h\times \check V_{h}$ be the solution of \eqref{XGgrad}. Then the following stability estimate holds:
\begin{equation}\label{stability}
\|\boldsymbol{\sigma}_h\|_{0, h}+ \|\check{\boldsymbol{\sigma}}_h \|_{0, h} + \|u_h\|_{1, h} +\|\check u_h\|_{0, h}\lesssim \|f\|_{-1,h}
\end{equation}
with $\|f\|_{-1, h}= \sup\limits_{v_h\in V_h \setminus\{0\}} \frac{(f,
v_h)}{\|v_h\|_{1,h}}$.
\item Let $(\boldsymbol{\sigma}, u)\in H^{\frac{1}{2}+\epsilon}({\rm\Omega}, \mathcal{S})\cap H({\rm div} , {\rm\Omega}, \mathcal{S})\times H^1({\rm\Omega}, \mathbb{R}^n)$ be the solution of \eqref{model} and let $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)\in Q_h \times \check Q_{h} \times V_h \times \check V_{h} $ be the solution of the formulation \eqref{XGgrad}. Then the following quasi-optimal error estimate holds:
\begin{equation}\label{optimal_error}
\begin{aligned}
&\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_{0, h}+ \|\check{\boldsymbol{\sigma}}_h \|_{0, h} + \|u-u_h\|_{1, h} +\|\check u_h\|_{0, h}
\\
\lesssim &\inf_{\boldsymbol{\tau}_h\in Q_h, v_h\in V_h} \big ( \|\boldsymbol{\sigma}-\boldsymbol{\tau}_h\|_{0, h} + \|u-v_h\|_{1, h}\big ).
\end{aligned}
\end{equation}
\item If $\boldsymbol{\sigma}\in H^{k+1}({\rm\Omega}, \mathcal{S})$ and $u\in H^{k+2}({\rm\Omega}, \mathbb{R}^n)$ with $k\ge 0$, and $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)\in Q_h^k\times \check Q_{h}^r\times V_h^{k+1}\times \check V_{h}^k$ is the solution of \eqref{XGgrad} with $r\ge \max(1, k)$, then the following error estimate holds:
\begin{equation}\label{error}
\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_{0, h}+ \|\check{\boldsymbol{\sigma}}_h \|_{0, h} + \|u-u_h\|_{1, h} +\|\check u_h\|_{0, h}\lesssim h^{k+1}(|\boldsymbol{\sigma}|_{k+1} + |u|_{k+2}).
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
{
Since the four-field formulation \eqref{XGgrad} is equivalent to the three-field formulation \eqref{M}, it suffices to prove that \eqref{M} is well-posed under Assumptions (G1) -- (G3), namely the coercivity of $a_W(\cdot, \cdot ; \cdot, \cdot)$ and inf-sup condition for $b_W(\cdot, \cdot ; \cdot)$ in \eqref{WGABC}.
By the definitions of the bilinear form $a_W(\cdot, \cdot ; \cdot, \cdot)$ and the norms in \eqref{H1norms}, together with the trace and inverse inequalities,
\begin{equation}\label{three-aco}
a_W(\boldsymbol{\tau}_h, \check{\boldsymbol{\tau}}_h; \boldsymbol{\tau}_h, \check{\boldsymbol{\tau}}_h)\ge c\left(\|\boldsymbol{\tau}_h\|_{0, h}^2+\|\check{\boldsymbol{\tau}}_h\|^2_{0, h}\right),\quad \forall \boldsymbol{\tau}_h\in Q_h, \check{\boldsymbol{\tau}}_h \in \check Q_h,
\end{equation}
which is coercive on $Q_h\times \check Q_h$.
For any $v_h\in V_h$, take $\boldsymbol{\tau}_h=-\epsilon_h(v_h)\in Q_h$ and $\check{\boldsymbol{\tau}}_h=\eta_1^{-1} \check{P}_h^\sigma[v_h] + \{\epsilon_h(v_h)\} + [\epsilon_h(v_h)]\gamma^T$. It holds that
\begin{equation} \label{eq:grad:1}
\begin{split}
b_W(\boldsymbol{\tau}_h, \check{\boldsymbol{\tau}}_h; v_h)& = (\epsilon_h (v_h), \epsilon_h (v_h)) + \langle \eta_1^{-1} \check{P}_h^\sigma[v_h]n, \check{P}_h^\sigma[v_h]n\rangle
\gtrsim \|v_h\|_{1, h}^2.
\end{split}
\end{equation}
By the trace and inverse inequalities, we have
\begin{equation}\label{eq:grad:3}
\begin{split}
\|\boldsymbol{\tau}_h\|_{0, h}^2 + \|\check{\boldsymbol{\tau}}_h\|_{0, h}^2
=&(A\epsilon_h (v_h), \epsilon_h (v_h))+\|\eta_1^{1/2}\{\epsilon_h (v_h)\}\|_{\mathcal{E}_h}^2+ \| \eta_2^{1/2} \check{P}_h^u[\epsilon_h (v_h)]\|_{\mathcal{E}_h}^2
\\
&+\|\eta_1^{1/2}(\eta_1^{-1} \check{P}_h^\sigma[v_h] + \{\epsilon_h(v_h)\} + [\epsilon_h(v_h)]\gamma^T)\|_{\mathcal{E}_h}^2
\\
\lesssim& \|\epsilon_h (v_h)\|_0^2 + \|\eta_1^{-1/2}\check{P}_h^\sigma[v_h]\|_{\mathcal{E}_h}^2=\|v_h\|_{1, h}^2.
\end{split}
\end{equation}
It follows that
\begin{equation}\label{three-binfsup}
\inf_{v_h\in V_h}\sup_{(\boldsymbol{\tau}_h,\check{\boldsymbol{\tau}}_h)\in Q_h\times \check Q_h} \frac{b_W(\boldsymbol{\tau}_h, \check{\boldsymbol{\tau}}_h; v_h)}{(\|\boldsymbol{\tau}_h\|_{0, h}+\|\check{\boldsymbol{\tau}}_h\|_{0, h}) \|v_h\|_{1, h}}\gtrsim 1.
\end{equation}
By Theorem 4.3.1 in \cite{boffi2013mixed}, a combination of \eqref{three-aco} and \eqref{three-binfsup} completes the proof.
}
\end{proof}
\begin{remark}\label{remarkgrad}
For the case $\eta_1=0$, the third equation in \eqref{XGgrad} implies that $\check P^\sigma_h [u_h]=0$. The corresponding discrete space for $u_h$ becomes
$$
V_h^P = \{v_h\in V_h: \langle [v_h], \check{\boldsymbol{\tau}}_h \rangle_{e }=0,\ \forall \check{\boldsymbol{\tau}}_h\in \check Q_h\},
$$
and the norm for $u_h$ reduces to
$$
\|u_h\|_{1, h} =\|\epsilon_h (u_h)\|_0.
$$
For this case, $\boldsymbol{\sigma}_h$, $u_h$ and $\check u_h$ are unique for the four-field formulation \eqref{XGgrad}. The estimates \eqref{stability}, \eqref{optimal_error} and \eqref{error} in Theorem \ref{Th:inf-supGrad} also hold in this case.
For the case $\eta_2=0$, the last equation in \eqref{XGgrad} implies that $\check u_h=0$, and therefore $\|\check u_h\|_{0, h}=0$. The estimates in Theorem \ref{Th:inf-supGrad} still hold in this case.
\end{remark}
\subsection{$H({\rm div})$-based four-field formulation}
Let $\tau_1=\tau$ and $\tau_2=\eta^{-1}$. Similarly, by applying the DG identity \eqref{DGidentity} to the second equation in \eqref{elemhat}, the four-field formulation seeks $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)\in Q_h\times \check Q_{h}\times V_h\times \check V_{h}$ such that
\begin{equation}\label{XGdiv}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)_{0, K}
+(u_h, {\rm div} _h \boldsymbol{\tau}_h)_{0, K}
-\langle \hat u_{h}, \boldsymbol{\tau}_hn \rangle_{0, \partial K}
&=0,&\ \forall \boldsymbol{\tau}_h \in Q_h,\\
({\rm div} _h \boldsymbol{\sigma}_h, v_h)_{0, K}
+\langle \hat{\boldsymbol{\sigma}}_{h}n - \boldsymbol{\sigma}_hn, v_h\rangle_{0, \partial K}
&=(f,v_h)_{0, K}, &\ \forall v_h\in V_h,\\
\langle
\check{\boldsymbol{\sigma}}_h+ \tau_1[u_h], \check{\boldsymbol{\tau}}_h \rangle_{e }
&=0,&\forall \check{\boldsymbol{\tau}}_h \in \check{Q}_{h},
\\
\langle
\tau_2\check u_h + [\boldsymbol{\sigma}_h], \check v_h \rangle_e
&=0,&\forall \check v_h \in \check{V}_{h},
\end{array}
\right.
\end{equation}
with $(\hat{\boldsymbol{\sigma}}_h, \hat{u}_h)$ defined in \eqref{hatdef}.
{
Nitsche's technique in \eqref{bdconstrain} implies that
\begin{equation}\label{hatjumprel}
\check{\boldsymbol{\sigma}}_h = -\tau \check P_h^\sigma[u_h],\quad \check u_h = -\eta \check P_h^u [\boldsymbol{\sigma}_h].
\end{equation}
By plugging the above equations and the identity \eqref{identities} into \eqref{elemhat}, the four-field formulation \eqref{XGdiv} for $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)$ is equivalent to the following two-field formulation, which seeks $(\boldsymbol{\sigma}_h, u_h)\in Q_h \times V_h$ such that
\begin{equation}\label{LDGXG}
\left\{
\begin{array}{rlr}
a_D(\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h) + b_D(\boldsymbol{\tau}_h, u_h)
&=0,
&\forall \boldsymbol{\tau}_h\in Q_h,
\\
b_D(\boldsymbol{\sigma}_h, v_h) - c_D(u_h, v_h)
&=(f,v_h),
&\forall v_h \in V_h,
\end{array}
\right.
\end{equation}
with
\begin{equation}
\left\{
\begin{array}{rl}
a_D(\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)&=(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)
+\langle \eta \check{P}_h^u [\boldsymbol{\sigma}_h], [\boldsymbol{\tau}_h]\rangle,
\\
b_D(\boldsymbol{\sigma}_h, v_h)&=({\rm div} _h\boldsymbol{\sigma}_h,v_h)
-\langle [\boldsymbol{\sigma}_h] , \{v_h\}\rangle
+\langle (\gamma^Tn) [\boldsymbol{\sigma}_h], [v_h]n\rangle\\
&=-(\boldsymbol{\sigma}_h, \epsilon_h (v_h))
+\langle \{\boldsymbol{\sigma}_h\}n , [v_h]n\rangle
+ \langle (\gamma^Tn) [\boldsymbol{\sigma}_h], [v_h]n\rangle,
\\
c_D(u_h,v_h) &=\langle \tau \check{P}_h^\sigma[u_h]n, [v_h]n\rangle.
\end{array}
\right.
\end{equation}
Thanks to this equivalence, we will use the well-posedness of the two-field formulation \eqref{LDGXG} to prove that of
the proposed four-field formulation \eqref{XGdiv} under the following $H({\rm div})$-based assumptions:
}
\begin{enumerate}
\item[(D1)] $Q_h=Q_{h}^{k+1}$, ${\rm div} _h Q_h=V_h\subset V_h^k$, $k\ge 0$;
\item[(D2)] $\check V_h^{k+1}\subset \check V_h$;
\item[(D3)] $\tau_1=\rho_1 h_e $, $\tau_2=\rho_2 h_e$ and there exist positive constants $C_1$, $C_2$, $C_3$ and $C_4$ such that
$$
C_1\leq \rho_1\leq C_2,\quad 0< \rho_2\leq C_3,\quad |\gamma|\leq C_4,
$$
namely $\eta\ge Ch_e^{-1}$ and { $C_1h_e \leq\tau\leq C_2h_e$}.
\end{enumerate}
We first state a crucial estimate \cite{wu2017interior} for the analysis of $H({\rm div})$-based formulation as follows.
\begin{lemma}\label{lm:div}
For any $u_h \in V_h^k$, there exists $\boldsymbol{r}_h \in Q_{h}^{k+1}$ such that
\begin{equation}
{\rm div} _h \boldsymbol{r}_h=u_h, \qquad
\|\boldsymbol{r}_h\|_0 + \|{\rm div} _h \boldsymbol{r}_h\|_0 + \| h_e^{-1/2} [\boldsymbol{r}_h ]\|_{\mathcal{E}_h}\leq C_0 \|u_h\|_0,
\end{equation}
and
\begin{equation}
\langle [\boldsymbol{r}_h], \check v_h \rangle=0,~\forall~ \check v_h \in \check V^k_{h}.
\end{equation}
\end{lemma}
Define
\begin{equation}\label{divnormdef}
\begin{array}{ll}
{
\|\boldsymbol{\tau}_h\|_{\rm div, h}^2 =\|\boldsymbol{\tau}_h\|_0^2+ \|{\rm div} _h \boldsymbol{\tau}_h\|_0^2+ \| \tau_2^{-1/2}[\boldsymbol{\tau}_h ]\|_{\mathcal{E}_h}^2,}
&
\|\check{\boldsymbol{\tau}}_h\|_{0, h}^2 =\|\tau_1^{-1/2}\check{\boldsymbol{\tau}}_h\|_{\mathcal{E}_h}^2,\\
\|v_h\|_{0, h}^2 =\|v_h\|_0^2+ \|\tau_1^{1/2}[v_h]\|_{\mathcal{E}_h}^2+\|\tau_2^{1/2}\{v_h\}\|_{\mathcal{E}_h}^2,
&
\|\check v_h\|_{0, h}^2 =\|\tau_2^{1/2}\check v_h\|_{\mathcal{E}_h}^2.
\end{array}
\end{equation}
{
A similar result to Lemma 3.3 in \cite{gatica2015analysis} is proved below.
\begin{lemma}\label{L2:eq}
There exists a constant $C > 0$, independent of mesh size $h$, such that
\begin{equation}
(\boldsymbol{\tau}_h,\boldsymbol{\tau}_h)\le C \left((A\boldsymbol{\tau}_h,\boldsymbol{\tau}_h)+ \|{\rm div} _h \boldsymbol{\tau}_h\|_0^2+ \| \tau_2^{-1/2}[\boldsymbol{\tau}_h ]\|_{\mathcal{E}_h}^2\right),\quad \forall \boldsymbol{\tau}_h\in Q_h.
\end{equation}
\end{lemma}
\begin{proof}
Denote $A_\infty\boldsymbol{\tau}_h={1+\nu\over E}\left(\boldsymbol{\tau}_h -{1\over n} \operatorname{tr}(\boldsymbol{\tau}_h)I\right)$ and $c_\nu= {1+\nu\over E}\cdot\frac{1-2\nu}{n+n(n-2)\nu}>0$.
It is obvious that
\begin{equation}\label{Ainfty}
(A\boldsymbol{\tau}_h,\boldsymbol{\tau}_h)=(A_\infty \boldsymbol{\tau}_h+c_\nu \operatorname{tr}(\boldsymbol{\tau}_h)I , \boldsymbol{\tau}_h) =(A_\infty\boldsymbol{\tau}_h, \boldsymbol{\tau}_h) +c_\nu\|\operatorname{tr}(\boldsymbol{\tau}_h)\|_0^2\ge (A_\infty\boldsymbol{\tau}_h, \boldsymbol{\tau}_h).
\end{equation}
Following the proof of Lemma 3.3 in \cite{gatica2015analysis}, there exists a positive constant $C$ such that
\begin{equation}\label{L2Stokes}
(\boldsymbol{\tau}_h,\boldsymbol{\tau}_h)\le C \left((A_\infty\boldsymbol{\tau}_h,\boldsymbol{\tau}_h)+ \|{\rm div} _h \boldsymbol{\tau}_h\|_0^2+ \| \tau_2^{-1/2}[\boldsymbol{\tau}_h ]\|_{\mathcal{E}_h}^2\right), \quad \forall\boldsymbol{\tau}_h\in Q_h,
\end{equation}
where $C$ is independent of mesh size $h$.
Combining \eqref{Ainfty} and \eqref{L2Stokes}, we obtain the desired result.
\end{proof}
}
\begin{theorem}\label{Th:inf-supDiv}
Under Assumptions (D1)--(D3), the $H({\rm div})$-based formulation \eqref{XGdiv} is uniformly well-posed with respect to the mesh size, $\rho_1$ and $\rho_2$. Furthermore, the following properties hold:
\begin{enumerate}
\item Let $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)\in Q_h\times \check Q_{h}\times V_h\times \check V_{h}$ be the solution of \eqref{XGdiv}. Then the following stability estimate holds:
\begin{equation}\label{div_stability}
\|\boldsymbol{\sigma}_h\|_{\rm div, h}+\|\check{\boldsymbol{\sigma}}_h\|_{0, h} + \|u_h\|_{0, h} +\|\check u_h\|_{0, h}\lesssim \|f\|_0.
\end{equation}
\item Let $(\boldsymbol{\sigma}, u)\in H^{\frac{1}{2}+\epsilon}({\rm\Omega}, \mathcal{S})\cap H({\rm div} , {\rm\Omega}, \mathcal{S})\times H^1({\rm\Omega}, \mathbb{R}^n)$ be the solution of \eqref{model} and let $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)\in Q_h \times \check Q_{h} \times V_h \times \check V_{h} $ be the solution of the formulation \eqref{XGdiv}. Then the following quasi-optimal error estimate holds:
\begin{equation}\label{div_optimal_error}
\begin{aligned}
&\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_{\rm div, h}+\|\check{\boldsymbol{\sigma}}_h\|_{0, h} + \|u-u_h\|_{0, h} +\|\check u_h\|_{0, h}
\\
\lesssim &\inf_{\boldsymbol{\tau}_h\in Q_h,
v_h\in V_h} \big ( \|\boldsymbol{\sigma}-\boldsymbol{\tau}_h\|_{\rm div, h} + \|u-v_h\|_{0, h}\big ).
\end{aligned}
\end{equation}
\item
If $\boldsymbol{\sigma}\in H^{k+2}({\rm\Omega}, \mathcal{S})$ and $u\in H^{k+1}({\rm\Omega}, \mathbb{R}^n)$ with $k\ge0$, and $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)\in Q_h^{k+1}\times \check Q_{h}^k\times V_h^{k}\times \check V_{h}^{k+1}$ is the solution of \eqref{XGdiv}, then the following error estimate holds:
\begin{equation}\label{div_error}
\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_{\rm div, h}+\|\check{\boldsymbol{\sigma}}_h\|_{0, h} + \|u-u_h\|_{0, h} +\|\check u_h\|_{0, h}\lesssim h^{k+1}(|\boldsymbol{\sigma}|_{k+2} + |u|_{k+1}).
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
Since the four-field formulation \eqref{XGdiv} is equivalent to the two-field formulation \eqref{LDGXG}, it suffices to prove that \eqref{LDGXG} is well-posed under Assumptions (D1) -- (D3).
Consider the inf-sup of $b_D(\boldsymbol{\sigma}_h, v_h)=({\rm div} _h\boldsymbol{\sigma}_h,v_h)
-\langle [\boldsymbol{\sigma}_h] , \{v_h\}\rangle
+\langle (\gamma^Tn) [\boldsymbol{\sigma}_h], [v_h]n\rangle$. According to Lemma \ref{lm:div}, for any $u_h\in V_h$, there exists $\boldsymbol{\sigma}_h\in Q_h$ such that
$$
{\rm div} _h \boldsymbol{\sigma}_h=u_h, \qquad
\langle [\boldsymbol{\sigma}_h], \{u_h\} \rangle_{0, e}= \langle [\boldsymbol{\sigma}_h], [u_h]n \rangle_{0, e}=0,
$$
with $\|\boldsymbol{\sigma}_h\|_0 + \|{\rm div} _h\boldsymbol{\sigma}_h \|_0 + \|h_e^{-1/2}[\boldsymbol{\sigma}_h] \|_{\mathcal{E}_h} \lesssim \|u_h\|_0 $. Then,
\begin{equation}\label{div1}
b_D(\boldsymbol{\sigma}_h, u_h)=\|u_h\|_0^2 \ge c\|u_h\|_{0, h}\|\boldsymbol{\sigma}_h\|_{\rm div, h}
\end{equation}
which proves the inf-sup condition of $b_D(\cdot, \cdot)$.
Define
$$
{ \mathbb K}=\{\boldsymbol{\sigma}_h\in Q_h: ({\rm div} _h\boldsymbol{\sigma}_h,v_h)
-\langle [\boldsymbol{\sigma}_h] , \{v_h\}\rangle
+\langle (\gamma^Tn) [\boldsymbol{\sigma}_h], [v_h]n\rangle
=0,\ \forall v_h\in V_h\}.
$$
It follows from the definition of ${ \mathbb K}$
and the lifting operator in \eqref{avgV} that
\begin{equation*}
{\rm div} _h\boldsymbol{\sigma}_h= -r_V([\boldsymbol{\sigma}_h]) + l_V((\gamma^Tn)[\boldsymbol{\sigma}_h]),\quad \forall \boldsymbol{\sigma}_h\in { \mathbb K}.
\end{equation*}
According to Assumption (D2) and Lemma \ref{L2:eq},
\begin{equation}\label{div2}
a_D(\boldsymbol{\sigma}_h,\boldsymbol{\sigma}_h)=(A\boldsymbol{\sigma}_h,\boldsymbol{\sigma}_h)
+\langle \tau_2^{-1} [\boldsymbol{\sigma}_h], [\boldsymbol{\sigma}_h]\rangle\ge c\|\boldsymbol{\sigma}_h\|_{\rm div, h}^2.
\end{equation}
This means that $a_D(\cdot, \cdot)$ is coercive on ${ \mathbb K}$.
By Theorem 4.3.1 in \cite{boffi2013mixed}, a combination of \eqref{div1} and \eqref{div2} leads to the well-posedness of the two-field formulation \eqref{LDGXG}, and completes the proof.
\end{proof}
{
\begin{remark}
Note that the norm $\|\cdot\|_{\rm div, h}$ defined in \eqref{divnormdef} and the constants in \eqref{div1} and \eqref{div2} do not depend on
the Poisson's ratio $\nu$.
Hence by Theorem \ref{Th:inf-supDiv}, the proposed formulation \eqref{XGdiv} under Assumptions (D1)--(D3) is locking-free.
\end{remark}
}
\begin{remark}\label{remarkdiv}
For the case $\tau_1=0$, the third equation in \eqref{XGdiv} implies that $\check{\boldsymbol{\sigma}}_h=0$, and therefore $\|\check{\boldsymbol{\sigma}}_h\|_{0, h}=0$. The estimates in Theorem \ref{Th:inf-supDiv} still hold in this case.
For the case $\tau_2=0$, the last equation in \eqref{XGdiv} implies that $\check P^u_h [\boldsymbol{\sigma}_h]=0$. The corresponding discrete space for $\boldsymbol{\sigma}_h$ becomes
$$
Q_h^M = \{\boldsymbol{\tau}_h\in Q_h: \langle [\boldsymbol{\tau}_h], \check v_h \rangle_{e }=0,\ \forall \check v_h\in \check V_h\},
$$
and the norm for $\boldsymbol{\tau}_h$ reduces to
{
$$
\|\boldsymbol{\tau}_h\|_{\rm div, h}^2 =\|\boldsymbol{\tau}_h\|_0^2
+ \|{\rm div} _h \boldsymbol{\tau}_h\|_0^2.
$$
}
For this case, $\boldsymbol{\sigma}_h$, $u_h$ and $\check{\boldsymbol{\sigma}}_h$ are unique for the four-field formulation \eqref{XGdiv}. The estimates \eqref{div_stability}, \eqref{div_optimal_error} and \eqref{div_error} in Theorem \ref{Th:inf-supDiv} also hold in this case.
\end{remark}
Let $\mathbb{M}$ be the space of real matrices of size $n\times n$. Given $\boldsymbol{\sigma}_h$ and $\hat{\boldsymbol{\sigma}}_h$, define a matrix-valued function $\tilde{\boldsymbol{\sigma}}_h\in \mathcal{P}_{k+1}(K ; \mathbb{M})$ by:
\begin{equation}\label{def:PiSig}
\begin{array}{ll}
\int_{e}\left(\tilde{\boldsymbol{\sigma}}_h -\hat{\boldsymbol{\sigma}}_h\right) n \cdot p_{k+1} ds=0, & \forall p_{k+1} \in \mathcal{P}_{k+1}\left(e ; \mathbb{R}^{n}\right), \\
\int_{K}\left(\tilde{\boldsymbol{\sigma}}_h -\boldsymbol{\sigma}_h\right): \nabla p_{k} dx=0, & \forall p_{k} \in \mathcal{P}_{k}\left(K ; \mathbb{R}^{n}\right), \\
\int_{K}\left(\tilde{\boldsymbol{\sigma}}_h -\boldsymbol{\sigma}_h \right): \boldsymbol{p}_{k+1} dx=0, & \forall \boldsymbol{p}_{k+1} \in \Phi_{k+1}(K),
\end{array}
\end{equation}
where
$
\Phi_{k+1}(K)=\left\{\boldsymbol{\tau}_h \in \mathcal{P}_{k+1}(K ; \mathbb{M}): \operatorname{div} \boldsymbol{\tau}_h=0,\left.\boldsymbol{\tau}_h n\right|_{\partial K}=0\right\}.
$
Define the following space
\[
\mathrm{BDM}_{k+1}^{n \times n}:=\left\{\boldsymbol{\tau} \in H(\operatorname{div}, \Omega ; \mathbb{M}):\left.\boldsymbol{\tau}\right|_{K} \in \mathcal{P}_{k+1}(K ; \mathbb{M}),\ \forall K \in \mathcal{T}_{h}\right\},
\]
and the norm
$$
\|\boldsymbol{\tau}_h\|_A^2 = (A\boldsymbol{\tau}_h, \boldsymbol{\tau}_h),\quad \forall \boldsymbol{\tau}_h \in L^2(\Omega, \mathcal{S}).
$$
The following estimate holds \cite{wang2020mixed}.
\begin{lemma}\label{lm:l2}
The matrix-valued function $\tilde{\boldsymbol{\sigma}}_h \in \mathrm{BDM}_{k+1}^{n \times n}$ in \eqref{def:PiSig} is well defined and
\begin{equation}
\left\|\tilde{\boldsymbol{\sigma}}_h -\boldsymbol{\sigma}_h \right\|_{0, K} \lesssim h_{K}^{1 / 2} \| (\hat{\boldsymbol{\sigma}}_h -\boldsymbol{\sigma}_h)n\|_{\partial K}.
\end{equation}
Furthermore, there exists a matrix-valued function $\tilde{\boldsymbol{\tau}}_h \in \mathrm{BDM}_{k+1}^{n \times n}$ such that $\boldsymbol{\sigma}_h^\ast:= \tilde{\boldsymbol{\sigma}}_h +\tilde{\boldsymbol{\tau}}_h \in H(\operatorname{div}, \Omega, \mathcal{S}),$ and
\[
\operatorname{div} \tilde{\boldsymbol{\tau}}_h=0 \text { and }\left\|\tilde{\boldsymbol{\tau}}_h\right\|_{0} \lesssim\left\|\boldsymbol{\sigma}_h -\tilde{\boldsymbol{\sigma}}_h\right\|_{0}.
\]
\end{lemma}
Similar to the analysis in \cite{wang2020mixed}, the following $L^2$ error estimate of the discrete stress tensor holds for the XG formulation.
\begin{theorem}\label{Th:sigL2div}
Let $\boldsymbol{\sigma}\in H^{k+2}({\rm\Omega}, \mathcal{S})$ and $u\in H^{k+1}({\rm\Omega}, \mathbb{R}^n)$ with $k\ge n$ be the solution of \eqref{model}, and let $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h, \check u_h)\in Q_h^{k+1}\times \check Q_{h}^k\times V_h^{k}\times \check V_{h}^{k+1}$ be the solution of \eqref{XGdiv}. Under Assumptions (D1)--(D3), it holds that
\begin{equation}
\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_A\lesssim h^{k+2}(|\boldsymbol{\sigma}|_{k+2} + |u|_{k+1}).
\end{equation}
\end{theorem}
\begin{proof}
Recall the $H({\rm div})$-based four-field formulation \eqref{elemhat} and \eqref{bdconstrain}:
\begin{equation}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)_{0, K}
+(u_h, {\rm div} _h \boldsymbol{\tau}_h)_{0, K}
-\langle \hat u_{h}, \boldsymbol{\tau}_hn \rangle_{0, \partial K}
&=0,&\ \forall \boldsymbol{\tau}_h \in Q_h,\\
-(\boldsymbol{\sigma}_h, \epsilon_h (v_h))_{0, K}
+\langle \hat{\boldsymbol{\sigma}}_{h}n, v_h\rangle_{0, \partial K}
&=(f,v_h)_{0, K},&\ \forall v_h\in V_h,\\
\langle \check{\boldsymbol{\sigma}}_h+ \tau[u_h], \check{\boldsymbol{\tau}}_h \rangle_{e }
&=0,&\forall \check{\boldsymbol{\tau}}_h \in \check{Q}_{h},
\\
\langle \check u_h + \eta[\boldsymbol{\sigma}_h], \check v_h \rangle_e
&=0,&\forall \check v_h \in \check{V}_{h}.
\end{array}
\right.
\end{equation}
with $\hat{\boldsymbol{\sigma}}_h = \{\boldsymbol{\sigma}_h\} + [\boldsymbol{\sigma}_h]\gamma^T +\check{\boldsymbol{\sigma}}_h$ and $\hat{u}_h = \{u_h\} - (\gamma^Tn)[u_h]n +\check u_h.$ By the second equation of the above system and the definition of $\tilde{\boldsymbol{\sigma}}_h$ in \eqref{def:PiSig},
\begin{equation}\label{divsigstar}
\begin{aligned}
(f,v_h)=&-(\boldsymbol{\sigma}_h, \epsilon_h (v_h))+\langle \hat{\boldsymbol{\sigma}}_{h}n, v_h\rangle_{\partial\mathcal{T}_h}
=-(\boldsymbol{\sigma}_h, \nabla_h v_h)+\langle \hat{\boldsymbol{\sigma}}_{h}n, v_h\rangle_{\partial\mathcal{T}_h}
\\
=&-(\tilde{\boldsymbol{\sigma}}_h, \nabla_h v_h)+\langle \tilde{\boldsymbol{\sigma}}_hn, v_h\rangle_{\partial\mathcal{T}_h}
=({\rm div} \tilde{\boldsymbol{\sigma}}_h, v_h)=({\rm div} \boldsymbol{\sigma}_h^*, v_h).
\end{aligned}
\end{equation}
When $k\ge n$, there exists a projection { $\Pi_h^c: H^{1}(\Omega, \mathcal{S})\rightarrow Q_h\cap H({\rm div}, \Omega, \mathcal{S})$ (see Remark 3.1 in \cite{hu2014finite}) such that
\begin{equation}
\begin{aligned}
({\rm div} (\boldsymbol{\tau} -\Pi_{h}^{c} \boldsymbol{\tau}), v_h)_{\Omega} &=0 & & \hbox{for any}~ v_h\in V^k_h,
\\
\left\|\boldsymbol{\tau} - \Pi_{h}^{c} \boldsymbol{\tau} \right\|_{0, \Omega} & \lesssim h^{k+2}|\boldsymbol{\tau} |_{k+2, \Omega} & & \hbox{if}~ \boldsymbol{\tau} \in H^{k+2}(\Omega, \mathcal{S}).
\end{aligned}
\end{equation}
}
It follows from \eqref{divsigstar} and Lemma \ref{lm:l2} that
\begin{align}
({\rm div} (\boldsymbol{\sigma}_h^* -\Pi_{h}^{c} \boldsymbol{\sigma} ), v_h)&=({\rm div} (\tilde{\boldsymbol{\sigma}}_h - \boldsymbol{\sigma} ), v_h)+ ({\rm div} \tilde{\boldsymbol{\tau}}_h, v_h)=0.
\end{align}
Let $\boldsymbol{\tau}_h=\Pi_h^c \boldsymbol{\sigma} - \boldsymbol{\sigma}_h^*\in H(\operatorname{div}, \Omega, \mathcal{S})$. According to Assumption (D1), ${\rm div} _hQ_h\subset V_h$, so that ${\rm div} \boldsymbol{\tau}_h\in V_h$ and the above orthogonality implies
$$
{\rm div} \boldsymbol{\tau}_h=0.
$$
It follows from {\eqref{hatdef}, \eqref{XGdiv}} and $\boldsymbol{\tau}_h\in H(\operatorname{div}, \Omega, \mathcal{S})$ that
\begin{equation}
\begin{aligned}
(A(\boldsymbol{\sigma} - \boldsymbol{\sigma}_h), \boldsymbol{\tau}_h) =& \langle u - \hat{u}_h, \boldsymbol{\tau}_hn\rangle_{{\partial\mathcal{T}_h}} - (u - u_h, {\rm div} \boldsymbol{\tau}_h)= \langle {u-\hat{u}_h}, [\boldsymbol{\tau}_h]\rangle=0.
\end{aligned}
\end{equation}
Since
\begin{align}
(A(\boldsymbol{\sigma} - \boldsymbol{\sigma}_h), \boldsymbol{\sigma} - \boldsymbol{\sigma}_h) =&\nonumber(A(\boldsymbol{\sigma} - \boldsymbol{\sigma}_h), \boldsymbol{\sigma} - \Pi_h^c \boldsymbol{\sigma}) + (A(\boldsymbol{\sigma} - \boldsymbol{\sigma}_h), \boldsymbol{\tau}_h) + (A(\boldsymbol{\sigma} - \boldsymbol{\sigma}_h), \boldsymbol{\sigma}_h^* - \boldsymbol{\sigma}_h)
\\
=&(A(\boldsymbol{\sigma} - \boldsymbol{\sigma}_h), \boldsymbol{\sigma} - \Pi_h^c \boldsymbol{\sigma}) + (A(\boldsymbol{\sigma} - \boldsymbol{\sigma}_h), \boldsymbol{\sigma}_h^* - \boldsymbol{\sigma}_h),
\end{align}
we have
\begin{equation}\label{eq:l21}
\begin{aligned}
\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_A \le &\|\boldsymbol{\sigma} - \Pi_h^c \boldsymbol{\sigma}\|_A + \|\boldsymbol{\sigma}_h^* - \boldsymbol{\sigma}_h\|_A
\le \|\boldsymbol{\sigma} - \Pi_h^c \boldsymbol{\sigma}\|_A + \|\tilde{\boldsymbol{\tau}}_h\|_A + \|\tilde{\boldsymbol{\sigma}}_h - \boldsymbol{\sigma}_h\|_A
\\
\lesssim & \|\boldsymbol{\sigma} - \Pi_h^c \boldsymbol{\sigma}\|_0 + \|\boldsymbol{\sigma}_h -\tilde{\boldsymbol{\sigma}}_h\|_0.
\end{aligned}
\end{equation}
A combination of \eqref{hatjumprel} and Lemma \ref{lm:l2} leads to
\begin{equation}\label{eq:l22}
\begin{aligned}
\|\boldsymbol{\sigma}_h -\tilde{\boldsymbol{\sigma}}_h\|_0 &\lesssim \| h_{K}^{1 / 2}(\hat{\boldsymbol{\sigma}}_h -\boldsymbol{\sigma}_h) n \|_{{\partial\mathcal{T}_h}}\lesssim h^{1 / 2} {(\|\check{\boldsymbol{\sigma}}_hn \|_{{\mathcal{E}_h}} + \| [\boldsymbol{\sigma}_h]n\|_{\mathcal{E}_h})}
\\
&\lesssim h\|\check{\boldsymbol{\sigma}}_h\|_{0, h} + h\|\check u_h\|_{0, h}.
\end{aligned}
\end{equation}
It follows from \eqref{eq:l21} and \eqref{eq:l22}, together with the error estimates for $\check{\boldsymbol{\sigma}}_h$ and $\check u_h$, that
$$
\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_A \lesssim h^{k+2}(|\boldsymbol{\sigma}|_{k+2} + |u|_{k+1}),
$$
which completes the proof.
\end{proof}
It needs to be pointed out that the above two discretizations \eqref{XGgrad} and \eqref{XGdiv} are mathematically equivalent under the same choice of discrete spaces and parameters, but they behave differently under Assumptions (G1)--(G3) or (D1)--(D3). Discretizations under Assumptions (G1)--(G3) are more like $H^1$-based methods, and those under Assumptions (D1)--(D3) are more like $H({\rm div})$-based methods. According to these two sets of assumptions, the parameter $\tau$ in \eqref{bdconstrain} can tend to infinity in an $H^1$-based formulation, but not in an $H({\rm div})$-based formulation, while the parameter $\eta$ can tend to infinity in an $H({\rm div})$-based formulation, but not in an $H^1$-based formulation. In the rest of this paper, we use \eqref{XGgrad} whenever an $H^1$-based formulation is considered, and \eqref{XGdiv} for an $H({\rm div})$-based formulation.
\section{Variants of the four-field formulation}\label{sec:variants}
Note that the last two equations in \eqref{XGgrad} and \eqref{XGdiv} reveal the relations \eqref{hatjumprel} between $\check{\boldsymbol{\sigma}}_h$, $\check u_h$ and $[u_h]$, $[\boldsymbol{\sigma}_h]$, respectively. In the four-field formulations \eqref{XGgrad} and \eqref{XGdiv}, we can therefore eliminate one or more of the four variables and obtain several reduced formulations, as discussed below.
\subsection{Three-field formulation without the variable $\check{\boldsymbol{\sigma}}_h$}
The relations \eqref{hatdef} and \eqref{hatjumprel} imply that
\begin{equation}\label{SigHDG}
\hat{\boldsymbol{\sigma}}_h = \{\boldsymbol{\sigma}_h\} + [\boldsymbol{\sigma}_h]\gamma^T - \tau_1 \check{P}_h^\sigma [u_h].
\end{equation}
A substitution of \eqref{SigHDG} into the four-field formulation \eqref{XGdiv} gives the following three-field formulation without the variable $\check{\boldsymbol{\sigma}}_h$ which seeks $(\boldsymbol{\sigma}_h, u_h, \check u_h)\in Q_h \times V_h \times \check{V}_{h}$ such that
\begin{equation}\label{XGH}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)_{0, K}
+(u_h, {\rm div} _h \boldsymbol{\tau}_h)_{0, K}
-\langle \hat u_{h}, \boldsymbol{\tau}_hn \rangle_{0, \partial K}
&=0,&\ \forall \boldsymbol{\tau}_h \in Q_h,
\\
-(\boldsymbol{\sigma}_h, \epsilon_h (v_h))_{0, K}
+\langle \hat{\boldsymbol{\sigma}}_{h}n, v_h\rangle_{0, \partial K}
&=(f,v_h)_{0, K}, &\ \forall v_h\in V_h,
\\
\langle\tau_2 \check u_h + [\boldsymbol{\sigma}_h], \check v_h \rangle_e
&=0,&\forall \check v_h \in \check{V}_{h},
\end{array}
\right.
\end{equation}
with $\hat{u}_h$ and $\hat{\boldsymbol{\sigma}}_h$ defined in \eqref{hatdef} and \eqref{SigHDG}, respectively.
The equivalence between the four-field formulations \eqref{XGgrad}, \eqref{XGdiv} and the three-field formulation \eqref{XGH} gives the following optimal error estimates.
\begin{theorem}\label{Th:H}
The following properties hold:
\begin{enumerate}
\item Under Assumptions (G1)--(G3), the $H^1$-based formulation \eqref{XGH} is uniformly well-posed with respect to
mesh size, $\rho_1$ and $\rho_2$. Let $(\boldsymbol{\sigma}_h,u_h, \check u_h)\in Q_h\times V_h\times \check V_{h}$ be the solution of \eqref{XGH}. It holds that
\begin{equation}
\|\boldsymbol{\sigma}_h\|_{0, h}+ \|u_h\|_{1, h} +\|\check u_h\|_{0, h}\lesssim \|f\|_{-1,h}.
\end{equation}
If $\boldsymbol{\sigma}\in H^{k+1}({\rm\Omega}, \mathcal{S})$, $u\in H^{k+2}({\rm\Omega}, \mathbb{R}^n) ( k\ge 0 )$, let $(\boldsymbol{\sigma}_h, u_h, \check u_h)\in Q_h^k \times V_h^{k+1}\times \check V_{h}^k$ be the solution of \eqref{XGH}, then we have the following error estimate:
\begin{equation}
\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_{0, h} + \|u-u_h\|_{1, h} +\|\check u_h\|_{0, h}\lesssim h^{k+1}(|\boldsymbol{\sigma}|_{k+1} + |u|_{k+2}).
\end{equation}
\item Under Assumptions (D1)--(D3), the $H({\rm div})$-based formulation \eqref{XGH} is uniformly well-posed with respect to
mesh size, $\rho_1$ and $\rho_2$. Let $(\boldsymbol{\sigma}_h, u_h, \check u_h)\in Q_h\times V_h\times \check V_{h}$ be the solution of \eqref{XGH}. It holds that
\begin{equation}
\|\boldsymbol{\sigma}_h\|_{\rm div, h} + \|u_h\|_{0, h} +\|\check u_h\|_{0, h}\lesssim \|f\|_0.
\end{equation}
If $\boldsymbol{\sigma}\in H^{k+2}({\rm\Omega}, \mathcal{S})$, $u\in H^{k+1}({\rm\Omega}, \mathbb{R}^n) ( k\ge0 )$, let $(\boldsymbol{\sigma}_h, u_h, \check u_h)\in Q_h^{k+1} \times V_h^{k}\times \check V_{h}^{k+1}$ be the solution of \eqref{XGH}, then we have the following error estimate:
\begin{equation}\label{div_error}
\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_{\rm div, h} + \|u-u_h\|_{0, h} + \|\check u_h\|_{0, h}\lesssim h^{k+1}(|\boldsymbol{\sigma}|_{k+2} + |u|_{k+1}).
\end{equation}
Furthermore, if $k\ge n$,
\begin{equation}\label{div_error2}
\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_A\lesssim h^{k+2}(|\boldsymbol{\sigma}|_{k+2} + |u|_{k+1}).
\end{equation}
\end{enumerate}
\end{theorem}
\subsubsection{A special case of the three-field formulation without $\check{\boldsymbol{\sigma}}_h$}
Consider a special case of this three-field formulation \eqref{XGH} with
\begin{equation}\label{hdgCon}
\tau_2=4\tau_1,\quad \gamma = 0, \quad V_h|_{\mathcal{E}_h}\subset \check V_h,\quad V_h|_{\mathcal{E}_h}\subset \check Q_hn.
\end{equation}
It follows from \eqref{SigHDG} that
\begin{equation}\label{errusig}
\langle \hat{\boldsymbol{\sigma}}_h n , v_h \rangle_{\partial\mathcal{T}_h}= \langle\boldsymbol{\sigma}_h n - 2\tau_1 \check P_h^\sigma(u_h - \hat{u}_h ), v_h \rangle_{\partial\mathcal{T}_h}.
\end{equation}
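For clarity, we sketch the computation behind \eqref{errusig}; the jump and average conventions are only indicated schematically, and the projections $\check P_h^\sigma$, $\check P_h^u$ are suppressed. On a face seen from an element $K$, the relations \eqref{hatdef} and \eqref{hatjumprel} with \eqref{hdgCon} give $u_h-\hat u_h=(u_h|_K-\{u_h\})+\tau_2^{-1}[\boldsymbol{\sigma}_h]n$, and hence
\begin{equation*}
\boldsymbol{\sigma}_h n-2\tau_1(u_h-\hat u_h)
=\{\boldsymbol{\sigma}_h\}n+\tfrac12[\boldsymbol{\sigma}_h]n
-2\tau_1(u_h|_K-\{u_h\})-\tfrac{2\tau_1}{\tau_2}[\boldsymbol{\sigma}_h]n.
\end{equation*}
With $\tau_2=4\tau_1$, the two $[\boldsymbol{\sigma}_h]n$ terms cancel and the remaining terms coincide with $\hat{\boldsymbol{\sigma}}_hn$ from \eqref{SigHDG} with $\gamma=0$.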
By eliminating $\hat{\boldsymbol{\sigma}}_h$ in \eqref{XGgrad} or \eqref{XGdiv}, we obtain the three-field formulation which seeks $(\boldsymbol{\sigma}_h, u_h, \check u_h)\in Q_h \times V_h \times \check{V}_{h}$ such that
\begin{equation} \label{XGHDG0}
\left\{
\begin{array}{rlll}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)_{0, K}
+(u_h, {\rm div} _h \boldsymbol{\tau}_h)_{0, K}
-\langle\hat{u}_h , \boldsymbol{\tau}_hn\rangle_{\partial K}
=&0,
&\boldsymbol{\tau}_h\in Q_h,
\\
-(\boldsymbol{\sigma}_h, \epsilon_h(v_h)) _{0, K}
+ \langle \boldsymbol{\sigma}_h n - 2 \tau_1 \check P_h^\sigma(u_h - \hat{u}_h ), v_h\rangle_{\partial K}
=&(f,v_h),
&v_h\in V_h,
\\
\langle \boldsymbol{\sigma}_h n - 2\tau_1 \check P_h^\sigma(u_h - \hat{u}_h ), \check{v}_h \rangle_{\partial\mathcal{T}_h}
&=0,&\forall \check v_h \in \check{V}_{h}.
\end{array}
\right.
\end{equation}
This reveals the close relation between the three-field formulation \eqref{XGH} and the HDG formulations \cite{fu2015analysis,soon2009hybridizable,chen2016robust,qiu2018hdg}.
It implies that the special three-field formulation \eqref{XGH} mentioned above is also hybridizable under Assumptions (G1)--(G3). Therefore, the four-field formulation \eqref{XGgrad} or \eqref{XGdiv} with \eqref{hdgCon} can be reduced to a one-field formulation with only the variable $\hat{u}_h$.
Table \ref{tab:HDGexist} lists three HDG methods for linear elasticity problems in the literature and a new $H({\rm div})$-based method. Since the three-field formulation \eqref{XGHDG0} is equivalent to \eqref{XGgrad} and \eqref{XGdiv}, the new method in Table \ref{tab:HDGexist} is well-posed according to Theorem \ref{Th:inf-supGrad}.
\begin{table}[!htbp]
\centering
\begin{tabular}{c|ccccccccccccc}
\hline
cases&$\eta$& $\tau$& $\gamma$&$Q_h$ & $\check Q_h$ &$V_h$ & $\check{V}_h$ &
\\
\hline
1&$\tau^{-1}$& $\Omega(h_e)$ & 0&$ Q_h^k$ & $\check Q_h^k$&$V_h^{k}$ &
$\check{V}_h^{k}$ &\cite{soon2009hybridizable,fu2015analysis}
\\
2&$\tau^{-1}$& $\Omega(h_e^{-1})$& 0&$ Q_h^k$ & $\check Q_h^k$&$V_h^{k}$ &
$\check{V}_h^k$&\cite{soon2009hybridizable}
\\
3&$\tau^{-1}$& $\Omega(h_e^{-1})$& 0&$ Q_h^{k}$ &$\check Q_h^{k}$ &$V_h^{k+1}$ & $\check{V}_h^{k}$& \cite{qiu2018hdg,chen2016robust}
\\\hline
4&$\tau^{-1}$&$\Omega(h_e)$&0&$Q_h^{k+1}$& $\check{Q}_h^{k+1}$ & $V_h^{k}$ &$\check V_h^{k+1}$& new
\\\hline
\end{tabular}
\caption{Some existing HDG methods and a new HDG method.}
\label{tab:HDGexist}
\end{table}
\begin{enumerate}
\item The first two HDG methods in this table were proposed in \cite{soon2009hybridizable}, and the first one was then analyzed in \cite{fu2015analysis}. The inf-sup conditions in Theorems \ref{Th:inf-supGrad} and \ref{Th:inf-supDiv} are not optimal for these two cases since the degree of $Q_h$ equals that of $V_h$.
\item The third one is called the HDG method with reduced stabilization. It was proposed and analyzed as a locking-free scheme in \cite{qiu2018hdg,chen2016robust}. Theorem~\ref{Th:inf-supGrad} provides a new proof of the optimal error estimate for this HDG method.
\item The last one is a new three-field scheme derived from the $H({\rm div})$-based formulation \eqref{XGHDG0}. The error estimate for this locking-free scheme is given in Theorem \ref{Th:H}. Note that the divergence of the stress tensor is approximated by ${\rm div} _h \boldsymbol{\sigma}_h$ directly in this new $H({\rm div})$-based scheme, without the extra post-processing required in $H^1$-based methods.
\end{enumerate}
\subsubsection{Hybridization for the $H({\rm div})$-based formulation \eqref{XGHDG0}}
{ Similar to the hybridization in \cite{qiu2018hdg,chen2016robust},
the $H({\rm div})$-based three-field formulation \eqref{XGHDG0} is also hybridizable under Assumptions (D1)--(D3).
It can be decomposed into the following two sub-problems:}
\begin{enumerate}
\item[(I)] Local problems. For each element $K$, given $\hat{u}_h \in \check V_h$, find $(\boldsymbol{\sigma}_h^K, u_h^K)\in Q_h\times V_h$ such that
\begin{equation}\label{XGHDG1}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h^K,\boldsymbol{\tau}_h)_K
+(u_h^K, {\rm div} \boldsymbol{\tau}_h)_K
=&\langle\hat{u}_h , \boldsymbol{\tau}_hn\rangle_{\partial K},
&\boldsymbol{\tau}_h\in Q_h,
\\
({\rm div} _h\boldsymbol{\sigma}_h^K, v_h)_K
- \langle 2\tau_1 u_h^K, v_h\rangle_{\partial K}
=&(f,v_h)_K
- \langle 2\tau_1 \hat{u}_h, v_h\rangle_{\partial K},
&v_h\in V_h.
\end{array}
\right.
\end{equation}
It is easy to see that \eqref{XGHDG1} is well-posed.
Define the local solution operators $H_Q: \check V_h\rightarrow Q_h$ and $H_V:\check V_h\rightarrow V_h$ by
$$
H_Q(\hat{u}_h)|_K= \boldsymbol{\sigma}_h^K\quad\text{ and }\quad H_V(\hat{u}_h)|_K= u_h^K,
$$
respectively.
\item[(II)] Global problem. Find $\hat{u}_h\in \check V_h$ such that
\begin{equation} \label{XGHDG2}
\langle H_Q(\hat{u}_h)n - 2\tau_1 (H_V(\hat{u}_h) - \hat{u}_h), \hat{v}_h \rangle_{\partial\mathcal{T}_h}
=0,\quad \hat{v}_h \in \check V_h.
\end{equation}
It follows from \eqref{XGHDG1} that
\begin{eqnarray*}
&&( AH_Q(\hat{v}_h), H_Q(\hat{u}_h))_K
+ (H_V(\hat{v}_h), {\rm div} (H_Q(\hat{u}_h)))_K
=
\langle\hat{v}_h, H_Q(\hat{u}_h)n \rangle_{\partial K},
\\
&&\langle 2\tau_1(\hat{u}_h-H_V(\hat{u}_h)), H_V(\hat{v}_h)\rangle_{\partial K}
=(f, H_V(\hat{v}_h))_K- ({\rm div} H_Q(\hat{u}_h), H_V(\hat{v}_h))_K.
\end{eqnarray*}
The global problem \eqref{XGHDG2} can then be written in the following symmetric positive definite form
\begin{equation}\label{HDGhybrid}
\begin{split}
(AH_Q(\hat{u}_h), H_Q(\hat{v}_h))
+ \langle 2\tau_1 (\hat{u}_h - H_V(\hat{u}_h)), \hat{v}_h-H_V(\hat{v}_h)\rangle_{\partial\mathcal{T}_h}
= - (f, H_V(\hat{v}_h)).
\end{split}
\end{equation}
Since the original formulation \eqref{XGHDG0} is well-posed, the global problem \eqref{HDGhybrid} is also well-posed.
\end{enumerate}
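We remark (as an observation, not needed in the sequel) that the left-hand side of \eqref{HDGhybrid} defines the bilinear form
\begin{equation*}
a(\hat{u}_h,\hat{v}_h)=(AH_Q(\hat{u}_h), H_Q(\hat{v}_h))
+ \langle 2\tau_1 (\hat{u}_h - H_V(\hat{u}_h)), \hat{v}_h-H_V(\hat{v}_h)\rangle_{\partial\mathcal{T}_h},
\end{equation*}
which is symmetric and positive semidefinite whenever $A$ is symmetric positive definite and $\tau_1\ge 0$; its definiteness on $\check V_h$ follows from the well-posedness of \eqref{XGHDG0}.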
Suppose Assumptions (D1)--(D3) hold. If the parameter $\tau_1$ is nonzero, the formulation \eqref{XGHDG0} is an $H({\rm div})$-based HDG formulation, and it is hybridizable with only one variable $\hat{u}_h$ globally coupled in \eqref{HDGhybrid}.
If the parameter $\tau_1$ vanishes, the formulation \eqref{XGHDG0} is a hybridizable mixed formulation \cite{gopalakrishnan2011symmetric,gong2019new}.
This implies that the formulation \eqref{XGgrad} or \eqref{XGdiv} with \eqref{hdgCon} can be reduced to a one-field formulation with only the variable $\hat{u}_h$.
\subsection{Three-field formulation without the variable $\check u_h$}
The relations \eqref{hatdef} and \eqref{hatjumprel} imply that
\begin{equation}\label{uWG}
\hat{u}_h = \{u_h\} - (\gamma^Tn)[u_h]n - \eta_2 \check{P}_h^u[\boldsymbol{\sigma}_h].
\end{equation}
Another reduced formulation results from eliminating $\check u_h$ in the four-field formulation \eqref{XGgrad} by use of \eqref{uWG}. It seeks $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h)\in Q_h \times \check{Q}_{h}\times V_h $ such that
\begin{equation}\label{XGW}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)_{0, K}
-(\epsilon_h(u_h), \boldsymbol{\tau}_h)_{0, K}
+\langle u_h - \hat{u}_h, \boldsymbol{\tau}_hn \rangle_{0, \partial K}
&=0,&\ \forall \boldsymbol{\tau}_h \in Q_h,
\\
-(\boldsymbol{\sigma}_h, \epsilon_h (v_h))_{0, K}
+\langle \hat{\boldsymbol{\sigma}}_{h}n, v_h\rangle_{0, \partial K}
&=(f,v_h)_{0, K}, &\ \forall v_h\in V_h,
\\
\langle \eta_1 \check{\boldsymbol{\sigma}}_h+ [u_h], \check{\boldsymbol{\tau}}_h \rangle_{e }
&=0,&\forall \check{\boldsymbol{\tau}}_h \in \check{Q}_{h},
\end{array}
\right.
\end{equation}
with $\hat{u}_h$ and $\hat{\boldsymbol{\sigma}}_h$ defined in \eqref{uWG} and \eqref{hatdef}, respectively.
{ The variable $\check{\boldsymbol{\sigma}}_h$ weakly imposes the $H^1$-continuity of the variable $u_h$ in the formulation \eqref{XGgrad} or \eqref{XGdiv}. }This makes the three-field formulation \eqref{XGW} more like primal methods.
\begin{theorem}\label{Th:W}
The following properties hold:
\begin{enumerate}
\item Under Assumptions (G1)--(G3), the $H^1$-based formulation \eqref{XGW} is uniformly well-posed with respect to
mesh size, $\rho_1$ and $\rho_2$. Let $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h)\in Q_h\times \check Q_{h}\times V_h$ be the solution of \eqref{XGW}. It holds that
\begin{equation}
\|\boldsymbol{\sigma}_h\|_{0, h}+ \|u_h\|_{1, h} +\|\check{\boldsymbol{\sigma}}_h\|_{0, h}\lesssim \|f\|_{-1,h}.
\end{equation}
If $\boldsymbol{\sigma}\in H^{k+1}({\rm\Omega}, \mathcal{S})$, $u\in H^{k+2}({\rm\Omega}, \mathbb{R}^n) ( k\ge 0 )$, let $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h)\in Q_h^k \times \check Q_{h}^r\times V_h^{k+1}$ be the solution of \eqref{XGW} with $r=\max(1, k)$, then we have the following error estimate:
\begin{equation}
\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_{0, h} + \|u-u_h\|_{1, h} +\|\check{\boldsymbol{\sigma}}_h\|_{0, h}\lesssim h^{k+1}(|\boldsymbol{\sigma}|_{k+1} + |u|_{k+2}).
\end{equation}
\item Under Assumptions (D1)--(D3), the $H({\rm div})$-based formulation \eqref{XGW} is uniformly well-posed with respect to
mesh size, $\rho_1$ and $\rho_2$. Let $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h)\in Q_h\times \check Q_{h}\times V_h$ be the solution of \eqref{XGW}. It holds that
\begin{equation}
\|\boldsymbol{\sigma}_h\|_{\rm div, h} + \|u_h\|_{0, h} +\|\check{\boldsymbol{\sigma}}_h\|_{0, h}\lesssim \|f\|_0.
\end{equation}
If $\boldsymbol{\sigma}\in H^{k+2}({\rm\Omega}, \mathcal{S})$, $u\in H^{k+1}({\rm\Omega}, \mathbb{R}^n) ( k\ge0 )$, let $(\boldsymbol{\sigma}_h, \check{\boldsymbol{\sigma}}_h, u_h)\in Q_h^{k+1}\times \check Q_{h}^k \times V_h^{k}$ be the solution of \eqref{XGW}, then we have the following error estimate:
\begin{equation}\label{div_errorW}
\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_{\rm div, h} + \|u-u_h\|_{0, h} + \|\check{\boldsymbol{\sigma}}_h\|_{0, h}\lesssim h^{k+1}(|\boldsymbol{\sigma}|_{k+2} + |u|_{k+1}).
\end{equation}
Furthermore, if $k\ge n$,
\begin{equation}\label{div_error2W}
\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_A\lesssim h^{k+2}(|\boldsymbol{\sigma}|_{k+2} + |u|_{k+1}).
\end{equation}
\end{enumerate}
\end{theorem}
\subsubsection{A special case of the three-field formulation without $\check u_h$}
For each variable $\bar{\boldsymbol{\tau}}_h= (\boldsymbol{\tau}_h, \check{\boldsymbol{\tau}}_h)\in Q_h\times \check Q_h$, define the weak divergence ${\rm div} _w: Q_h\times \check Q_{h} \rightarrow V_h$ by
\begin{equation}
({\rm div} _w \bar{\boldsymbol{\tau}}_h, w_h)_{0, K}=-(\epsilon_h ( w_h), \boldsymbol{\tau}_h)_{0, K} + \langle (\{\boldsymbol{\tau}_h\} + \check{\boldsymbol{\tau}}_h)n, w_h\rangle_{0, \partial K},\ \forall w_h\in V_h.
\end{equation}
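As a sanity check of this definition, if $\boldsymbol{\tau}_h\in Q_h\cap H({\rm div}, \Omega, \mathcal{S})$ and $\check{\boldsymbol{\tau}}_h=0$, then $\{\boldsymbol{\tau}_h\}n=\boldsymbol{\tau}_hn$ on each interior face, and elementwise integration by parts gives
\begin{equation*}
({\rm div} _w (\boldsymbol{\tau}_h, 0), w_h)_{0, K}=({\rm div} \boldsymbol{\tau}_h, w_h)_{0, K},\quad \forall w_h\in V_h,
\end{equation*}
that is, ${\rm div} _w$ reduces to the $L^2$ projection of the usual divergence for $H({\rm div})$-conforming stresses.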
The following lemma presents the relation between a special three-field formulation \eqref{XGW} and the weak Galerkin method.
\begin{lemma}
The formulation \eqref{XGW} with $\eta_1=4\eta_2$, $\gamma=0$, $Q_hn|_{\mathcal{E}_h}\subset \check Q_{h}$ and $Q_hn|_{\mathcal{E}_h}\subset \check V_{h}$ is equivalent to the problem which seeks $\bar{\boldsymbol{\sigma}}_h\in Q_h\times \check Q_{h}$ and $u_h\in V_h$ such that
\begin{equation}\label{XGWG}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h) + ({\rm div} _w \bar{\boldsymbol{\tau}}_h, u_h)
+ s(\bar{\boldsymbol{\sigma}}_h, \bar{\boldsymbol{\tau}}_h)
&=0,
&\bar{\boldsymbol{\tau}}_h\in Q_h \times \check Q_{h},
\\
({\rm div} _w \bar{\boldsymbol{\sigma}}_h, v_h)
&
=(f,v_h),
&v_h\in V_h
\end{array}
\right.
\end{equation}
with $s(\bar{\boldsymbol{\sigma}}_h, \bar{\boldsymbol{\tau}}_h)= \langle 2\eta_2(\hat{\boldsymbol{\sigma}}_h - \boldsymbol{\sigma}_h)n, (\hat{\boldsymbol{\tau}}_h - \boldsymbol{\tau}_h)n\rangle_{{\partial\mathcal{T}_h}}$, where $\hat{u}_h$ and $\hat{\boldsymbol{\sigma}}_h$ are defined in \eqref{uWG} and \eqref{SigHDG}, respectively.
\end{lemma}
\subsubsection{Hybridization for the three-field formulation \eqref{XGWG}}
Denote
$$
Z_h = \{u_h\in V_h: \epsilon_h (u_h)=0\},
$$
$$
V_h^\perp = \{u_h\in V_h: (u_h, v_h)=0,\ \forall v_h\in Z_h\}.
$$
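Note that $\epsilon_h(u_h)=0$ on an element $K$ forces $u_h|_K$ to be an infinitesimal rigid body motion, so $Z_h$ consists of piecewise rigid body motions; in two dimensions,
\begin{equation*}
Z_h|_K=\left\{\begin{pmatrix} a_1\\ a_2\end{pmatrix}+b\begin{pmatrix} -x_2\\ x_1\end{pmatrix}: a_1, a_2, b\in \mathbb{R}\right\}.
\end{equation*}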
{ For any $\hat{\boldsymbol{\sigma}}_h\in \check Q_h$, denote $\hat{\boldsymbol{\sigma}}_{h,n}|_e=\hat{\boldsymbol{\sigma}}_hn_e$ and $\hat{\boldsymbol{\sigma}}_{h,t}|_e=\hat{\boldsymbol{\sigma}}_h t_e$, where $t_e$ is the unit tangential vector of the edge $e$.}
By \eqref{errusig}, the three-field formulation \eqref{XGWG} can be decomposed into two sub-problems as:
\begin{enumerate}
\item[(I)] Local problems. For each element $K$, given ${\hat{\boldsymbol{\sigma}}_{h, n} \in \check Q_{h}n}$, find $(\boldsymbol{\sigma}_h^K, u_h^K)\in Q_h\times V_h^\perp$ such that for any $(\boldsymbol{\tau}_h, v_h)\in Q_h\times V_h^\perp$
\begin{equation}\label{XGWG1}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h^K,\boldsymbol{\tau}_h)_K
- (\epsilon_h (u_h^K), \boldsymbol{\tau}_h)_K
+\langle 2\eta_2\boldsymbol{\sigma}_h^K n, \boldsymbol{\tau}_hn\rangle_{\partial K}
&
=\langle 2\eta_2{\hat{\boldsymbol{\sigma}}_{h, n}}, \boldsymbol{\tau}_hn\rangle_{\partial K},
\\
-(\boldsymbol{\sigma}_h^K, \epsilon_h (v_h))_K
&
=(f,v_h)_K-\langle {\hat{\boldsymbol{\sigma}}_{h, n}}, v_h\rangle_{\partial K}.
\end{array}
\right.
\end{equation}
It is easy to see that the local problem \eqref{XGWG1} is well-posed if $\epsilon_h (V_h)\subset Q_h$.
Denote $W_Q: {\check Q_hn}\rightarrow Q_h$ and $W_V: {\check Q_hn}\rightarrow V_h^\perp$ by
$$
W_Q({\hat{\boldsymbol{\sigma}}_{h, n}})|_K= \boldsymbol{\sigma}_h^K\quad\text{ and }\quad W_V({\hat{\boldsymbol{\sigma}}_{h, n}})|_K= u_h^K,
$$
respectively.
\item[(II)] Global problem. Find $\hat{\boldsymbol{\sigma}}_h$ such that $({\hat{\boldsymbol{\sigma}}_{h, n}}, u_h^0)\in \check Q_{h}n\times Z_h$ satisfies
\begin{equation} \label{XGWG2}
\left\{
\begin{array}{rll}
\langle { \hat{\boldsymbol{\sigma}}_{h, n}}, v_h^0\rangle_{\partial\mathcal{T}_h}
&
=(f,v_h^0),&\ \forall v_h^0\in Z_h,
\\
\langle 2\eta_2 ({\hat{\boldsymbol{\sigma}}_{h, n}} - W_Q({\hat{\boldsymbol{\sigma}}_{h, n}})n) + W_V({\hat{\boldsymbol{\sigma}}_{h, n}}) +u_h^0, {\hat{\boldsymbol{\tau}}_{h,n}}\rangle_{\partial\mathcal{T}_h}
&= 0,&\ \forall {\hat{\boldsymbol{\tau}}_{h,n}} \in {\check Q_{h}n},
\end{array}
\right.
\end{equation}
and { $\hat{\boldsymbol{\sigma}}_{h, t}|_{\mathcal{E}_h}=(\{W_Q({\hat{\boldsymbol{\sigma}}_{h, n}})\} -\eta_1^{-1}[W_V({\hat{\boldsymbol{\sigma}}_{h, n}})])t|_{\mathcal{E}_h}$}.
It follows from \eqref{XGWG1} that
\begin{equation}
\begin{split}
(AW_Q({\hat{\boldsymbol{\sigma}}_{h, n}}), W_Q({\hat{\boldsymbol{\tau}}_{h,n}}))_K
- (\epsilon_h (W_V({\hat{\boldsymbol{\sigma}}_{h, n}})), W_Q({\hat{\boldsymbol{\tau}}_{h,n}}))_K&
\\
=\langle 2\eta_2({\hat{\boldsymbol{\sigma}}_{h, n} }-W_Q({\hat{\boldsymbol{\sigma}}_{h, n}})n),& W_Q({\hat{\boldsymbol{\tau}}_{h,n}})n\rangle_{\partial K},
\\
\langle W_V({\hat{\boldsymbol{\sigma}}_{h, n}}), {\hat{\boldsymbol{\tau}}_{h,n}}\rangle_{\partial K}
- (W_Q({\hat{\boldsymbol{\tau}}_{h,n}}), \epsilon_h(W_V({\hat{\boldsymbol{\sigma}}_{h, n}})))_K
&=
(f, W_V({\hat{\boldsymbol{\sigma}}_{h, n}}))_K.
\end{split}
\end{equation}
Thus the second equation in \eqref{XGWG2} can be written as
\begin{equation}
\begin{split}
\langle \eta_2 ({\hat{\boldsymbol{\sigma}}_{h, n}} - W_Q({\hat{\boldsymbol{\sigma}}_{h, n}})n), {\hat{\boldsymbol{\tau}}_{h,n}}-W_Q({\hat{\boldsymbol{\tau}}_{h,n}})n \rangle_{\partial\mathcal{T}_h}
+\langle u_h^0, W_V(\hat{\boldsymbol{\tau}}_{h,n}) \rangle_{\partial\mathcal{T}_h}
= - (f, W_V(\hat{\boldsymbol{\tau}}_{h,n})).
\end{split}
\end{equation}
Therefore, the global sub-problem \eqref{XGWG2} seeks $\hat{\boldsymbol{\sigma}}_h$ such that $(\hat{\boldsymbol{\sigma}}_{h, n}, u_h^0)\in \check Q_{h}n\times Z_h$ satisfies
\begin{equation} \label{XGWG3}
\left\{
\begin{array}{rll}
\langle \eta_2 (\hat{\boldsymbol{\sigma}}_{h, n} - W_Q(\hat{\boldsymbol{\sigma}}_{h, n})n), \hat{\boldsymbol{\tau}}_{h, n}-W_Q(\hat{\boldsymbol{\tau}}_{h,n})n \rangle_{\partial\mathcal{T}_h}
+\langle u_h^0, \hat{\boldsymbol{\tau}}_{h, n} \rangle_{\partial\mathcal{T}_h}
&
=
- (f, W_V(\hat{\boldsymbol{\tau}}_{h,n})),
\\
\langle \hat{\boldsymbol{\sigma}}_{h, n}, v_h^0\rangle_{\partial\mathcal{T}_h}
&
=(f,v_h^0),
\end{array}
\right.
\end{equation}
{ for any $(\hat{\boldsymbol{\tau}}_{h,n}, v_h^0) \in \check Q_{h}n\times Z_h$, and
$\hat{\boldsymbol{\sigma}}_{h, t}|_{\mathcal{E}_h}=(\{W_Q(\hat{\boldsymbol{\sigma}}_{h, n})\} -\eta_1^{-1}[W_V(\hat{\boldsymbol{\sigma}}_{h, n})])t|_{\mathcal{E}_h}$}.
\end{enumerate}
Note that the three-field formulation \eqref{XGWG}
is hybridizable under Assumptions (G1)--(G3) or (D1)--(D3). This implies that the corresponding four-field formulation \eqref{XGgrad} or \eqref{XGdiv} is also hybridizable.
\subsection{Two-field formulation without the variables $\check{\boldsymbol{\sigma}}_h$ and $\check u_h$}
{ Recall that the two-field formulation \eqref{LDGXG} seeks $(\boldsymbol{\sigma}_h, u_h)\in Q_h \times V_h $ such that }
\begin{equation}\label{XGDG}
\left\{
\begin{array}{rlr}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)
+(u_h, {\rm div} _h \boldsymbol{\tau}_h)
-\langle \hat{u}_h, \boldsymbol{\tau}_hn\rangle_{0, \partial\mathcal{T}_h}
&
=0,&\ \forall \boldsymbol{\tau}_h \in Q_h,
\\
-(\boldsymbol{\sigma}_h, \epsilon_h (v_h))
+\langle \hat{\boldsymbol{\sigma}}_hn, v_h\rangle_{0, \partial\mathcal{T}_h}
&
=(f,v_h),&\ \forall v_h\in V_h,
\end{array}
\right.
\end{equation}
with
\begin{equation}
\begin{array}{rlr}
\hat{\boldsymbol{\sigma}}_h|_e &= \check P_h^\sigma ( \{\boldsymbol{\sigma}_h\}- \tau [u_h] + [\boldsymbol{\sigma}_h]\gamma^T)\quad &\text{on}\ {\mathcal{E}_h},\\
\hat{u}_h|_e &= \check P_h^u (\{u_h\} - \eta [\boldsymbol{\sigma}_h] - { (\gamma^Tn)[u_h]n }) \quad &\text{on}\ {\mathcal{E}_h}.
\end{array}
\end{equation}
It is a
generalization of DG methods \cite{chen2010local,cockburn2000development,arnold2002unified}.
\begin{theorem}\label{Th:D}
The following properties hold:
\begin{enumerate}
\item Under Assumptions (G1)--(G3), the $H^1$-based formulation \eqref{XGDG} is uniformly well-posed with respect to
mesh size, $\rho_1$ and $\rho_2$. Let $(\boldsymbol{\sigma}_h, u_h)\in Q_h \times V_h$ be the solution of \eqref{XGDG}. It holds that
\begin{equation}
\|\boldsymbol{\sigma}_h\|_{0, h}+ \|u_h\|_{1, h} \lesssim \|f\|_{-1,h}.
\end{equation}
If $\boldsymbol{\sigma}\in H^{k+1}({\rm\Omega}, \mathcal{S})$, $u\in H^{k+2}({\rm\Omega}, \mathbb{R}^n) ( k\ge 0 )$, let $(\boldsymbol{\sigma}_h, u_h)\in Q_h^k \times V_h^{k+1}$ be the solution of \eqref{XGDG}, then we have the following error estimate:
\begin{equation}
\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_{0, h} + \|u-u_h\|_{1, h}\lesssim h^{k+1}(|\boldsymbol{\sigma}|_{k+1} + |u|_{k+2}).
\end{equation}
\item Under Assumptions (D1)--(D3), the $H({\rm div})$-based formulation \eqref{XGDG} is uniformly well-posed with respect to
mesh size, $\rho_1$ and $\rho_2$. Let $(\boldsymbol{\sigma}_h, u_h)\in Q_h\times V_h$ be the solution of \eqref{XGDG}. It holds that
\begin{equation}
\|\boldsymbol{\sigma}_h\|_{\rm div, h} + \|u_h\|_{0, h}\lesssim \|f\|_0.
\end{equation}
If $\boldsymbol{\sigma}\in H^{k+2}({\rm\Omega}, \mathcal{S})$, $u\in H^{k+1}({\rm\Omega}, \mathbb{R}^n) ( k\ge0 )$, let $(\boldsymbol{\sigma}_h, u_h)\in Q_h^{k+1}\times V_h^{k}$ be the solution of \eqref{XGDG}, then we have the following error estimate:
\begin{equation}\label{div_errorD}
\|\boldsymbol{\sigma}-\boldsymbol{\sigma}_h\|_{\rm div, h} + \|u-u_h\|_{0, h} \lesssim h^{k+1}(|\boldsymbol{\sigma}|_{k+2} + |u|_{k+1}).
\end{equation}
Furthermore, if $k\ge n$,
\begin{equation}\label{div_error2D}
\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_A\lesssim h^{k+2}(|\boldsymbol{\sigma}|_{k+2} + |u|_{k+1}).
\end{equation}
\end{enumerate}
\end{theorem}
Table \ref{tab:LDG1} lists some well-posed $H^1$-based methods; the second one is new.
The LDG method in \cite{chen2010local} is the first method in Table \ref{tab:LDG1} with $k=1$, $\eta=\gamma=0$ and $\tau=O(h_e^{-1})$.
The comparison between the methods in Table \ref{tab:LDG1} implies that it is the vanishing parameter $\eta$ that prevents the hybridization of the method in \cite{chen2010local}.
\begin{table}[!htbp]
\centering
\begin{tabular}{c|cccccccccc}
\hline
cases&$\eta$& $\tau$& $\gamma$&$Q_h$ & $\check{Q}_h$& $V_h$ &$\check V_h$&
\\
\hline
1&0& $\Omega(h_e^{-1})$&0&$ Q_h^k$ & $\check{Q}_h^k$& $V_h^{k+1}$& $\check{V}_h^{k+1}$&\cite{chen2010local}
\\
2&$\mathcal{O}(h_e)$& $\mathcal{O}(h_e^{-1})$&$\mathcal{O}(1)$&$ Q_h^{k}$ & $\check{Q}_h^{k}$& $V_h^{k+1}$& $\check{V}_h^{k}$ & new
\\\hline
\end{tabular}
\caption{\footnotesize{$H^1$-based methods for linear elasticity problem.}}
\label{tab:LDG1}
\end{table}
Table \ref{tab:LDG2} lists the LDG method in \cite{wang2020mixed} and some new $H({\rm div})$-based methods. With the choices of parameters and discrete spaces listed there, all these methods are well-posed and admit optimal error estimates for both the displacement and the stress tensor.
The method induced from the formulation \eqref{XGDG} with $\tau=0$, $\gamma=0$ and $\eta=O(h_e^{-1})$ is equivalent to the LDG method in \cite{wang2020mixed}.
The last two cases in Table \ref{tab:LDG2} are new LDG methods.
This implies that it is the vanishing parameter $\tau$ that prevents the hybridization of the method in \cite{wang2020mixed}.
\begin{table}[!htbp]
\centering
\begin{tabular}{c|ccccccccccc}
\hline
cases&$\eta$& $\tau$&$\gamma$& $Q_h$ & $\check{Q}_h$& $V_h$ &$\check V_h$&
\\
\hline
1&$\Omega(h_e^{-1})$& 0&0& $ Q_h^{k+1}$ &
$\check{Q}_h^{k}$& $V_h^{k}$&$\check{V}_h^{k+1}$ &\cite{wang2020mixed}
\\
2&$\mathcal{O}(h_e^{-1})$& $\mathcal{O}(h_e)$& $\mathcal{O}(1)$&$ Q_h^{k+1}$ &
$\check{Q}_h^{k+1}$& $V_h^{k}$&$\check{V}_h^{k+1}$ & new
\\
3&$\tau^{-1}$& $\Omega(h_e)$& $0$&$ Q_h^{k+1}$ &
$\check{Q}_h^{k+1}$& $V_h^{k}$&$\check{V}_h^{k+1}$ &new
\\
\hline
\end{tabular}
\caption{\footnotesize{$H({\rm div})$-based methods for linear elasticity problem.}}
\label{tab:LDG2}
\end{table}
\section{Two limiting cases}\label{sec:limit}
\subsection{Mixed methods: A limiting case of the formulation \eqref{XGHDG0}}
The mixed methods \cite{gopalakrishnan2011symmetric,hu2014family,hu2015family,arnold2002mixed} for linear elasticity problems can be generalized into the following formulation which seeks $(\boldsymbol{\sigma}_h^M, u_h^M)\in Q_{h}^{M}\times V_{h}$ such that
\begin{equation}\label{mixEq}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h^M,\boldsymbol{\tau}_h^M)
+(u_h^M, {\rm div} \boldsymbol{\tau}_h^M)
&=0,&\ \forall \boldsymbol{\tau}_h^M\in Q_{h}^M,
\\
({\rm div} _h\boldsymbol{\sigma}_h^M, v_h)
&=(f,v_h),&\ \forall v_h\in V_{h},
\end{array}
\right.
\end{equation}
with
\begin{equation*}
Q_{h}^{M}=\{\boldsymbol{\tau}_h\in Q_h: \langle [\boldsymbol{\tau}_h], \check v_h \rangle=0, \ \forall \check v_h\in \check V_{h}\}.
\end{equation*}
Let $Q_h=Q_h^{k+1}$, $V_h=V_h^{k}$, $\check V_h=\check V_h^{k+1}$ for any $k\ge n$; then the formulation \eqref{mixEq} becomes the
conforming mixed element in \cite{hu2014family,hu2015family}. Let
$$
Q_h=\{\boldsymbol{\tau}_h\in Q_h^{k+2}, {\rm div} _h \boldsymbol{\tau}_h|_K\in P_{k}(K, \mathbb{R}^2)\},\quad V_h=V_h^{k},\quad \check V_h=\check V_h^{k+2}
$$
for any $k\ge 1$. The corresponding formulation \eqref{mixEq} is the conforming mixed element in \cite{arnold2002mixed}.
Consider the three-field formulation \eqref{XGH} with $\gamma=0$, $\tau_2=0$, $\check Q_h=\{0\}$ and $V_h|_{\mathcal{E}_h} \subset \check{V}_h$.
By the DG identity \eqref{DGidentity}, this three-field formulation seeks $(\boldsymbol{\sigma}_h, u_h, \check u_h)\in Q_h \times V_h \times \check{V}_{h}$ such that for any $(\boldsymbol{\tau}_h, v_h, \check v_h)\in Q_h \times V_h \times \check V_{h}$,
\begin{equation}\label{HDGmixed}
\left\{
\begin{split}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)
+(u_h, {\rm div} _h \boldsymbol{\tau}_h)
-\langle \check u_h+\{u_h\},[\boldsymbol{\tau}_h]\rangle
&= 0,
\\
({\rm div} _h\boldsymbol{\sigma}_h, v_h)
- \langle [\boldsymbol{\sigma}_h], \{ v_h\}\rangle
&=(f,v_h),
\\
\langle [\boldsymbol{\sigma}_h], \check v_h \rangle &=0,
\end{split}
\right.
\end{equation}
which is equivalent to the mixed formulation \eqref{mixEq}.
As stated in Remark \ref{remarkdiv}, the three-field formulation \eqref{HDGmixed} is well-posed, and thus \eqref{mixEq} is also well-posed with
\begin{equation}\label{mix:stability}
\|\boldsymbol{\sigma}_h^M\|_{\rm div, h} + \|u_h^M\|_{0, h}\lesssim \|f\|_0.
\end{equation}
Furthermore, a similar analysis to the one in \cite{hong2020extended} yields the following theorem.
\begin{theorem}\label{th:mix}
Assume (D1)-(D3) hold.
Let $(\boldsymbol{\sigma}_h, u_h)\in Q_h \times V_h$ be the solution of \eqref{LDGXG} and
$(\boldsymbol{\sigma}_h^M, u_h^M)\in Q_{h}^{M}\times V_{h}$ be the solution of the corresponding mixed method \eqref{mixEq}.
If $V_h|_{\mathcal{E}_h}\subset \check V_h$, the formulation \eqref{LDGXG} with $\gamma=0$ and $\rho_1 + \rho_2\rightarrow 0$ converges to the
mixed method \eqref{mixEq} and
\begin{equation}
\|\boldsymbol{\sigma}_h - \boldsymbol{\sigma}_h^M\|_0+\|{\rm div} _h(\boldsymbol{\sigma}_h - \boldsymbol{\sigma}_h^M)\|_0 + \|u_h - u_h^M\|_0 \lesssim ( \rho_1^{\frac{1}{2}} + \rho_2^{\frac{1}{2}}) \|f\|_0 .
\end{equation}
\end{theorem}
\begin{proof}
Recall the two-field formulation \eqref{LDGXG}
\begin{equation}\label{LDGXG1}
\left\{
\begin{array}{rlr}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)
+\langle \tau_2^{-1} \check{P}_h^u [\boldsymbol{\sigma}_h], [\boldsymbol{\tau}_h]\rangle + ({\rm div} _h\boldsymbol{\tau}_h,u_h)
-\langle [\boldsymbol{\tau}_h] , \{u_h\}\rangle
&=0,
&\forall \boldsymbol{\tau}_h\in Q_h,
\\
({\rm div} _h\boldsymbol{\sigma}_h,v_h)
-\langle [\boldsymbol{\sigma}_h] , \{v_h\}\rangle
- \langle \tau_1 \check{P}_h^\sigma[u_h]n, [v_h]n\rangle
&=(f,v_h),
&\forall v_h \in V_h.
\end{array}
\right.
\end{equation}
Subtracting \eqref{mixEq} from \eqref{LDGXG1}, we obtain
\begin{equation}\label{XGerroreq}
\left\{
\begin{array}{rlr}
(A(\boldsymbol{\sigma}_h-\boldsymbol{\sigma}_h^M),\boldsymbol{\tau}_h)
+\langle \tau_2^{-1} \check{P}_h^u [\boldsymbol{\sigma}_h-\boldsymbol{\sigma}_h^M], [\boldsymbol{\tau}_h]\rangle &+ ({\rm div} _h\boldsymbol{\tau}_h,u_h-u_h^M)
-\langle [\boldsymbol{\tau}_h] , \{u_h-u_h^M\}\rangle\\
= -(u_h^M, {\rm div}_h (\bm \tau_h-\bm \tau_h^M))&-(A\bm \sigma_h^M, \bm \tau_h-\bm \tau_h^M)+\langle [\boldsymbol{\tau}_h] , \{u_h^M\}\rangle
\\
({\rm div} _h(\boldsymbol{\sigma}_h-\boldsymbol{\sigma}_h^M),v_h)
-\langle [\boldsymbol{\sigma}_h-\boldsymbol{\sigma}_h^M] , \{v_h\}\rangle &- \langle\tau_1 \check{P}_h^\sigma[u_h-u_h^M]n, [v_h]n\rangle
\\
&= \langle\tau_1 \check{P}_h^\sigma[u_h^M]n, [v_h]n\rangle
\end{array}
\right.
\end{equation}
for any $(\boldsymbol{\tau}_h, v_h)\in Q_h\times V_h$.
By the stability estimate in Theorem \ref{Th:D}, the trace inequality, and the fact that $\tau_1=\rho_1 h_e $, $\tau_2=\rho_2 h_e$,
\begin{equation}\label{error:stability1}
\begin{aligned}
&\|\boldsymbol{\sigma}_h-\boldsymbol{\sigma}_h^M\|_{\rm div, h} + \|u_h-u_h^M\|_{0, h}
\\
\lesssim &\sup_{\boldsymbol{\tau}_h\in Q_h}\frac{
{ |-(u_h^M, {\rm div}_h (\bm \tau_h-\bm \tau_h^M))-(A\bm \sigma_h^M, \bm \tau_h-\bm \tau_h^M)+\langle [\boldsymbol{\tau}_h] , \{u_h^M\}\rangle | }}{\|\boldsymbol{\tau}_h\|_{\rm div, h}}
\\
&+\sup_{v_h\in V_h}\frac{ { |\langle\tau_1 \check{P}_h^\sigma[u_h^M]n, [v_h]n\rangle |} }{\|v_h\|_{0, h}}\\
\lesssim &\sup_{\boldsymbol{\tau}_h\in Q_h}\frac{
\|u_h^M\|_0 \|{\rm div}_h (\bm \tau_h-\bm \tau_h^M)\|_0
+ \|{ A}\bm \sigma_h^M\|_0 \|\bm \tau_h-\bm \tau_h^M\|_0}{\|\boldsymbol{\tau}_h\|_{\rm div, h}}+( \rho_1^{\frac{1}{2}} + \rho_2^{\frac{1}{2}})\|u_h^M\|_{0}
\end{aligned}
\end{equation}
where $\|\cdot \|_{\rm div, h}$ and $\|\cdot\|_{0, h}$ are defined in \eqref{divnormdef}.
For any given $\bm \tau_h \in Q_h$, we have
\begin{equation}
\begin{aligned}
\inf_{\bm \tau_h^M\in Q_h^M} \left(\|{\rm div}_h (\bm \tau_h-\bm \tau_h^M)\|+ \|\bm \tau_h-\bm \tau_h^M\|\right)\lesssim
\big(\sum_{e\in \mathcal{E}_h} h_e^{-1}\|[\bm
\tau_h]\|^2_{0,e}\big)^{\frac{1}{2}}\le \rho_2^{\frac12} \|\bm \tau_h\|_{{\rm div},h}.
\end{aligned}
\end{equation}
It follows from the stability estimate \eqref{mix:stability} that
\begin{equation}\label{error:stability2}
\|\boldsymbol{\sigma}_h-\boldsymbol{\sigma}_h^M\|_{\rm div, h} + \|u_h-u_h^M\|_{0, h}\lesssim ( \rho_1^{\frac{1}{2}} + \rho_2^{\frac{1}{2}}){ \left(\|u_h^M\|_{0}+ \|\bm \sigma_h^M\|_0\right)}\lesssim ( \rho_1^{\frac{1}{2}} + \rho_2^{\frac{1}{2}})\|f\|_0,
\end{equation}
which completes the proof.
\end{proof}
\subsection{Primal methods: A limiting case of the formulation \eqref{XGDG}}
The primal method for linear elasticity problems seeks $u_h^P\in V_{h}^{P}$ such that
\begin{equation}\label{non:primal}
(C\epsilon_h(u_h^P), \epsilon_h(v_h))=-(f,v_h),\ \forall v_h\in V_{h}^{P}
\end{equation}
with $C=A^{-1}$ and
\begin{equation}
V_{h}^{P}=\{u_h\in V_h: \langle [u_h], \check{\boldsymbol{\tau}}_h \rangle= 0, \forall \check{\boldsymbol{\tau}}_h\in \check{Q}_h\},
\end{equation}
where $[v_h]$ is defined in \eqref{jumpdef}.
If $\epsilon_h(V_h)\subset Q_h$, the formulation \eqref{non:primal} is equivalent to the following formulation which seeks $(\boldsymbol{\sigma}_h^P, u_h^P)\in Q_{h}\times V_{h}^{P}$ such that
\begin{equation}\label{primalEq}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h^P,\boldsymbol{\tau}_h)
-(\boldsymbol{\tau}_h, \epsilon_h (u_h^P))
&=0,&\boldsymbol{\tau}_h\in Q_{h},
\\
-(\boldsymbol{\sigma}_h^P, \epsilon_h (v_h))
&=(f,v_h),
&v_h\in V_{h}^P.
\end{array}
\right.
\end{equation}
Consider the three-field formulation \eqref{XGW} with $\gamma=0$ and $\check V_h=\{0\}$, which seeks $(\boldsymbol{\sigma}_h, u_h, \check{\boldsymbol{\sigma}}_h)\in Q_h \times V_h \times \check{Q}_{h}$ such that
\begin{equation}\label{XGconf}
\left\{
\begin{array}{rll}
(A\boldsymbol{\sigma}_h,\boldsymbol{\tau}_h)
-(\boldsymbol{\tau}_h, \epsilon_h (u_h))
+\langle \{\boldsymbol{\tau}_h\}n, [u_h]n\rangle
&= 0,&\boldsymbol{\tau}_h\in Q_h,
\\
-(\boldsymbol{\sigma}_h, \epsilon_h (v_h))
+\langle \{\boldsymbol{\sigma}_h\}n + \check{\boldsymbol{\sigma}}_hn , [v_h]n\rangle
&
=(f,v_h), &v_h\in V_{h},
\\
\langle \eta_1 \check{\boldsymbol{\sigma}}_h , \check{\boldsymbol{\tau}}_h \rangle
+ \langle [u_h], \check{\boldsymbol{\tau}}_h \rangle
&=0,& \check{\boldsymbol{\tau}}_h \in \check{Q}_{h}.
\end{array}
\right.
\end{equation}
If $V_h|_{\mathcal{E}_h} \subset \check Q_h n$, then as $\eta_1\rightarrow 0$, the resulting formulation is exactly the primal formulation \eqref{primalEq}.
Under the assumptions (G1)-(G3), Theorem \ref{Th:inf-supGrad} implies the well-posedness of \eqref{XGconf}, and
\begin{equation}
\|\boldsymbol{\sigma}_h\|_{0, h} + \|u_h\|_{1, h} +\|\check{\boldsymbol{\sigma}}_h\|_{0, h}\lesssim \|f\|_0 .
\end{equation}
By Remark \ref{remarkgrad}, the primal formulation \eqref{primalEq} is also well-posed with
\begin{equation}
\|\boldsymbol{\sigma}_h^P\|_{0, h}+\|u_h^P\|_{1, h}\lesssim
\sup_{v_h\in V_h^P} \frac{(f,v_h)}{\|v_h\|_{1, h}}.
\end{equation}
\begin{remark}
If $V_h=V_h^{k+1}$, $Q_h=Q_h^k$, $\check Q_h=\check Q_h^k$, $k\ge 1$, the formulation \eqref{XGconf} tends to a high order nonconforming discretization
\eqref{non:primal} of the elasticity problem with only one variable. The relationship between the Crouzeix-Raviart element discretization and the discontinuous Galerkin method for
linear elasticity can be found in \cite{hansbo2003discontinuous}.
\end{remark}
In addition, a similar analysis to that of Theorem \ref{th:mix} yields the following theorem.
\begin{theorem}
Assume that (G1)-(G3) hold. Let $(\boldsymbol{\sigma}_h, u_h)\in Q_h \times V_h $ be the solution of \eqref{XGDG} and $(\boldsymbol{\sigma}_h^P, u_h^P)\in Q_{h}\times V_{h}^{P}$ be the solution of the corresponding primal method \eqref{primalEq}. Then the formulation \eqref{XGDG} with $\rho_1 + \rho_2\rightarrow 0$ converges to the primal method \eqref{primalEq} and
\begin{equation}
\|\boldsymbol{\sigma}_h - \boldsymbol{\sigma}_h^P\|_0 + \|\epsilon_h (u_h - u_h^P)\|_0 + \|h_e^{-1/2}[u_h-u_h^P]\|_0
\lesssim (\rho_1^{1/2} + \rho_2^{1/2}) \|f\|_0.
\end{equation}
\end{theorem}
\section{Numerical examples}\label{sec:numerical}
In this section, we display some numerical experiments in 2D to verify the estimates provided in Theorems \ref{Th:inf-supGrad} and \ref{Th:inf-supDiv}. We consider the model problem \eqref{model} on the unit square ${\rm\Omega}=(0,1)^2$ with the exact displacement
$$
u= (\sin (\pi x)\sin(\pi y), \sin (\pi x)\sin(\pi y))^T,
$$
and choose $f$ and $g$ such that $u$ is the exact solution of \eqref{model}.
The domain is partitioned into uniform triangles. The level-one triangulation $\mathcal{T}_1$ consists of two right triangles, obtained by cutting the unit square with a north-east diagonal. Each triangulation $\mathcal{T}_i$ is uniformly refined into a half-sized triangulation to obtain the next-level triangulation $\mathcal{T}_{i+1}$. For all the numerical tests in this section, we fix the parameters $\rho_1=\rho_2=\gamma=1$ and $E=1$.
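As an aside, the mesh hierarchy described above can be generated programmatically. The following Python sketch is purely illustrative (the function names and the uniform red-refinement implementation via edge midpoints are our own choices, not part of the paper): it builds the level-one mesh of the unit square and refines it uniformly into half-sized triangles.

```python
import numpy as np

def refine_uniform(vertices, triangles):
    """One uniform (red) refinement: split each triangle into four
    half-sized triangles via its edge midpoints (illustrative sketch)."""
    vertices = list(map(tuple, vertices))
    index = {v: i for i, v in enumerate(vertices)}
    def midpoint(a, b):
        m = tuple((np.array(vertices[a]) + np.array(vertices[b])) / 2.0)
        if m not in index:
            index[m] = len(vertices)
            vertices.append(m)
        return index[m]
    new_triangles = []
    for (a, b, c) in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # three corner triangles plus the interior midpoint triangle
        new_triangles += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return [np.array(v) for v in vertices], new_triangles

# level-one mesh: unit square cut by the north-east diagonal
verts = [np.array(p, dtype=float) for p in [(0, 0), (1, 0), (1, 1), (0, 1)]]
tris = [(0, 1, 2), (0, 2, 3)]
```

Each refinement multiplies the number of triangles by four, so the levels contain $2, 8, 32, \ldots$ triangles.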
\subsection{Various methods with fixed $\nu$}
In this subsection, we fix $\nu=0.4$.
Figures \ref{fig:lowgrad1} and \ref{fig:lowgrad} plot the errors for the lowest order $H^1$-based methods mentioned in this paper.
They show that the $H^1$-based XG methods with
$$
Q_h=Q_h^0, V_h=V_h^1, \check Q_h=\check Q_h^0, \check V_h=\check V_h^i\quad \mbox{with}\quad i=0, 1
$$
are not well-posed, while those with
$$
Q_h=Q_h^0, V_h=V_h^1, \check Q_h=\check Q_h^1, \check V_h=\check V_h^i\quad \mbox{with}\quad i=0, 1
$$
are well-posed and admit the optimal convergence rate 1.00, as analyzed in Theorem \ref{Th:inf-supGrad}. The discrete spaces of the former methods satisfy Assumption (G1) but do not meet Assumption (G2). This implies that Assumption (G2) is necessary for the well-posedness of the corresponding method.
\begin{figure}[H]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[width=7cm]{XGgrad1.jpg}
\caption{\small Errors of the lowest order $H^1$-based methods with $
Q_h=Q_h^{\alpha_1}$, $V_h=V_h^{\alpha_2}$, $\check Q_h=\check Q_h^{\alpha_3}$, $\check V_h=\check V_h^{\alpha_4}$ and $\alpha=(\alpha_1, \alpha_2, \alpha_3, \alpha_4)$.}
\label{fig:lowgrad1}
\end{figure}
\begin{figure}[H]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[width=7cm]{XGgrad2.jpg}
\caption{\small Errors of the lowest order $H^1$-based methods with $
Q_h=Q_h^{\alpha_1}$, $V_h=V_h^{\alpha_2}$, $\check Q_h=\check Q_h^{\alpha_3}$, $\check V_h=\check V_h^{\alpha_4}$ and $\alpha=(\alpha_1, \alpha_2, \alpha_3, \alpha_4)$.}
\label{fig:lowgrad}
\end{figure}
\begin{figure}[H]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[width=7cm]{XGdiv1.jpg}
\caption{\small Errors of the lowest order $H({\rm div})$-based methods with $
Q_h=Q_h^{\alpha_1}$, $V_h=V_h^{\alpha_2}$, $\check Q_h=\check Q_h^{\alpha_3}$, $\check V_h=\check V_h^{\alpha_4}$ and $\alpha=(\alpha_1, \alpha_2, \alpha_3, \alpha_4)$.}
\label{fig:lowdiv1}
\end{figure}
Figures \ref{fig:lowdiv1} and \ref{fig:lowdiv} plot the errors of the solutions for the lowest order $H({\rm div})$-based methods, which are new in the literature. They show that the $H({\rm div})$-based methods with
\begin{equation}\label{test:choice0}
Q_h=Q_h^{1}, V_h=V_h^0, \check Q_h=\check Q_h^{i}, \check V_h=\check V_h^0\quad \mbox{with}\quad 0\le i\le 1
\end{equation}
are not well-posed. Although the error $\|{\rm div}_h(\boldsymbol{\sigma}-\boldsymbol{\sigma}_h)\|_0$ converges at the optimal rate $1.00$, the errors $\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_0$ and $\|u-u_h\|_0$ do not converge at all. Figures \ref{fig:lowdiv1} and \ref{fig:lowdiv} also show that the new lowest order $H({\rm div})$-based methods with a larger space $\check V_h$
\begin{equation}\label{test:choice}
Q_h=Q_h^{1}, V_h=V_h^0, \check Q_h=\check Q_h^{i}, \check V_h=\check V_h^{1}\quad \mbox{with}\quad 0\le i\le 1
\end{equation}
are well-posed, and the corresponding errors $\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_0$, $\|{\rm div} _h(\boldsymbol{\sigma} - \boldsymbol{\sigma}_h)\|_0$ and $\|u-u_h\|_0$ admit the optimal convergence rate $1.00$. This coincides with the results in Theorem \ref{Th:inf-supDiv}. The comparison between the $H({\rm div})$-based methods in \eqref{test:choice0} and \eqref{test:choice} implies that Assumption (D2) is necessary for the well-posedness of the corresponding method.
\begin{figure}[H]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[width=7cm]{XGdiv2.jpg}
\caption{\small Errors of the lowest order $H({\rm div})$-based methods with $
Q_h=Q_h^{\alpha_1}$, $V_h=V_h^{\alpha_2}$, $\check Q_h=\check Q_h^{\alpha_3}$, $\check V_h=\check V_h^{\alpha_4}$ and $\alpha=(\alpha_1, \alpha_2, \alpha_3, \alpha_4)$.}
\label{fig:lowdiv}
\end{figure}
\begin{figure}[!ht]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[width=7cm]{XGdiv3.jpg}
\caption{\small Errors for some high order $H({\rm div})$-based methods with $
Q_h=Q_h^{\alpha_1}$, $V_h=V_h^{\alpha_2}$, $\check Q_h=\check Q_h^{\alpha_3}$, $\check V_h=\check V_h^{\alpha_4}$ and $\alpha=(\alpha_1, \alpha_2, \alpha_3, \alpha_4)$.}
\label{fig:div}
\end{figure}
Consider the $L^2$ norm of the error of the stress tensor $\boldsymbol{\sigma}$. Figure \ref{fig:div} plots the errors for higher order $H({\rm div})$-based methods. For the XG formulation with $\alpha=(2, 1, 2, 2)$, $k=1$ is less than $n=2$. Theorem \ref{Th:inf-supDiv} indicates that the convergence rate of $\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_0$ is $2.00$ for this new second order $H({\rm div})$-based method, which is verified by the numerical results in Figure \ref{fig:div}. For the XG formulation with $\alpha=(3, 2, 2, 3)$, $k=n$, and the convergence rate of $\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_0$ shown in Figure \ref{fig:div} is $4.00$, which coincides with the estimate in Theorem \ref{Th:sigL2div}. This comparison indicates that the assumption $k\ge n$ in Theorem \ref{Th:sigL2div} is necessary and that the error estimate of $\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_0$ is optimal.
{\subsection{The lowest order methods with various $\nu$}
Figure \ref{fig:gradlam} plots the relative errors of the approximate solutions of the $H^1$-based method with $Q_h^{0}\times \check Q_{h}^1\times V_h^{1}\times \check V_{h}^{0}$ for different values of $\nu$ ($\nu$ tends to $\frac12$). It shows that both $\|\epsilon_h(u -u_h)\|_0$ and $\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_0$ converge at the optimal rate 1.00, and that the error $\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_0$ is almost the same for different values of $\nu$.
\begin{figure}[!ht]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[width=7cm]{XGlamGrad.jpg}
\caption{\small Errors for the lowest order $H^1$-based methods $Q_h^{0}\times \check Q_{h}^1\times V_h^{1}\times \check V_{h}^{0}$ with different $\nu$.}
\label{fig:gradlam}
\end{figure}
Figure \ref{fig:divlam} plots the relative errors of the approximate solutions of the $H({\rm div})$-based method with $Q_h^{1}\times \check Q_{h}^0\times V_h^{0}\times \check V_{h}^{1}$ for different values of $\nu$ ($\nu$ tends to $\frac12$). It shows that the errors $\|u -u_h\|_0$, $\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_0$ and $\|{\rm div} (\boldsymbol{\sigma} - \boldsymbol{\sigma}_h)\|_0$ converge at the optimal rate 1.00, and that the errors $\|\boldsymbol{\sigma} - \boldsymbol{\sigma}_h\|_0$ and $\|{\rm div} (\boldsymbol{\sigma} - \boldsymbol{\sigma}_h)\|_0$ are almost the same as $\nu$ tends to $\frac12$, which shows that the proposed schemes are locking-free.
\begin{figure}[!ht]
\setlength{\abovecaptionskip}{0pt}
\setlength{\belowcaptionskip}{0pt}
\centering
\includegraphics[width=7cm]{XGlamDiv.jpg}
\caption{\small Errors for the lowest order $H({\rm div})$-based methods $Q_h^{1}\times \check Q_{h}^0\times V_h^{0}\times \check V_{h}^{1}$ with different $\nu$.}
\label{fig:divlam}
\end{figure}
}
\section{Conclusion}\label{concl}
In this paper, a unified four-field formulation for the linear elasticity problem is presented and analyzed.
This formulation is closely related to most HDG methods, WG methods, LDG methods and mixed finite element methods in the literature,
and some new methods are proposed within this unified framework. Some particular methods are proved to be hybridizable.
In addition, uniform inf-sup conditions for the four-field formulation provide a
unified way to prove optimal error estimates under two different sets of assumptions.
These assumptions also guide the design of some well-posed formulations that are new in the literature.
\bibliographystyle{spmpsci}
\section{Introduction}
While neural networks have achieved impressive successes in domains like image
classification or machine translation, standard models still struggle with tasks
that require very long-term memory without interference and would thus benefit
from a separation of memory and computation \cite{NTM,NTM_impl}.
Neural Turing Machines (NTM) attempt to address these tasks by augmenting recurrent neural
networks with an explicit memory to which the network has read and write
access \cite{NTM,NTM_impl}. Unfortunately, such models are notoriously hard to train,
even compared to other deep learning models \cite{NTM_impl}.
In our contribution, we propose to address this training problem by replacing
the learned recurrent neural network controller of a NTM with
an echo state network (ESN) \cite{ESN}. In other words, we only learn the
controller for the read and write head of our memory access as well as the output
mapping, all of which is possible via standard linear regression. To construct
the training data for our read and write head controllers, we only require a
standard dynamic time warping alignment. We call this model a \emph{reservoir memory machine} (RMM).
Our model can also be seen as an augmentation of echo state networks with
an explicit external memory, such that input information can be stored
for arbitrarily long times without interference, whereas the maximum memory
horizon for regular echo state networks is limited to the number of neurons in
the reservoir \cite{ESN,MemCap,DeepESN}.
In the remainder of this paper, we first refresh the reader's memory regarding
standard ESNs, then formally define our own model - reservoir memory machines -,
and finally show that our proposed model is sufficient to solve three
benchmark tasks for Neural Turing Machines with much faster training.
\section{Echo state networks}
An echo state network (ESN) \cite{ESN} is a recurrent network, i.e.\
the neural activations $\vec h_t \in \R^m$ at time $t$ are computed as $\vec h_t
= \mathrm{tanh}\Big( \bm{U} \cdot \vec x_t + \bm{W} \cdot \vec h_{t-1} \Big)$,
where $\vec x_t \in \R^n$ is the input at time $t$, $\bm{U} \in \R^{m \times n}$
are the input weights, and $\bm{W} \in \R^{m \times m}$ are the recurrent weights
of the network. The output $\vec y_t \in \R^L$ of the network at time $t$ is
computed as $\vec y_t = \bm{V} \cdot \vec h_t$, where $\bm{V} \in \R^{L \times m}$
are the output weights. ESNs have two distinct characteristics.
First, $\bm{U}$ and $\bm{W}$ are not learned but kept fixed after
initialization. This means that the activations $\vec h_1, \ldots, \vec h_T$
can be seen as a nonlinear preprocessing of the input, which makes learning $\bm{V}$
a generalized linear regression problem that can be solved analytically with the
pseudo-inverse.
Second, the recurrent weights $\bm{W}$ must ensure the \emph{echo state property},
i.e.\ past influences must degrade over time \cite{ESN,CRJ}.
This property is necessary to ensure that the network's dynamic is independent of
initial conditions and always adjusts to the input time series. On the other hand,
it necessarily limits ESNs to \emph{short term} memory tasks. In particular the
memory is upper-bounded by the number of neurons $n$ \cite{ESN,MemCap}. This is the
key limitation we aim to address.
In this paper, we employ the deterministic
'cycle reservoir with jumps' scheme to initialize $\bm{U}$
and $\bm{W}$ \cite{CRJ}. In this scheme, the entries of $\bm{U}$ are set to a constant value
$u \in (-1, 1)$ with a sign determined by a fixed, aperiodic sequence
(e.g.\ the digits of pi), and $\bm{W}$ is a sparse matrix with off-diagonal
cycle connections $w_{i, i+1} = w_c \in [0, 1)$ and longer 'jump' connections
$w_{i, i+l} = w_{i+l, i} = w_l \in [0, 1)$. Note that $u$, $w_c$, $w_j$, and
$l \in \N$ are hyper-parameters of the model. Because this initialization is deterministic,
we can compare different architectures more easily. In general, however, our architecture
is agnostic regarding the initialization.
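To make this construction concrete, the following Python sketch builds the CRJ weight matrices, runs the reservoir dynamics, and fits the readout by least squares. It is a minimal illustration: the use of the digits of pi as the aperiodic sign sequence and the cycle/jump structure follow \cite{CRJ}, while the concrete values of $u$, $w_c$, $w_l$ and $l$ are our own assumed defaults.

```python
import numpy as np

def crj_reservoir(m, n, u=0.5, w_c=0.7, w_l=0.3, l=3):
    """Cycle reservoir with jumps: deterministic input and recurrent weights."""
    digits = "14159265358979323846264338327950288419716939937510"
    signs = np.array([1.0 if int(d) < 5 else -1.0 for d in digits[: m * n]])
    U = u * signs.reshape(m, n)                  # constant magnitude, aperiodic signs
    W = np.zeros((m, m))
    for i in range(m):
        W[i, (i + 1) % m] = w_c                  # cycle connections w_{i,i+1}
    for i in range(0, m - l, l):
        W[i, i + l] = W[i + l, i] = w_l          # bidirectional jumps of length l
    return U, W

def esn_states(U, W, X):
    """Reservoir dynamics h_t = tanh(U x_t + W h_{t-1}) over a sequence X (T x n)."""
    h = np.zeros(W.shape[0])
    H = []
    for x in X:
        h = np.tanh(U @ x + W @ h)
        H.append(h.copy())
    return np.stack(H)

def train_readout(H, Y):
    """Output weights V with y_t = V h_t, fitted by least squares (pseudo-inverse)."""
    return np.linalg.lstsq(H, Y, rcond=None)[0].T
```

Only `train_readout` involves learning; the reservoir matrices stay fixed after initialization, which is what makes training a generalized linear regression problem.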
\section{Reservoir memory machines}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\node at (0,0.5) {input};
\node (x) at (0,0) {$\vec x_t$};
\begin{scope}[shift={(2.5,0)}]
\node[neuron] (h1) at (180:0.6) {};
\node[neuron] (h2) at (252:0.6) {};
\node[neuron] (h3) at (324:0.6) {};
\node[neuron] (h4) at (036:0.6) {};
\node[neuron] (h5) at (108:0.6) {};
\node[left] at (0,1) {reservoir};
\path[edge]
(h1) edge[bend right] (h2)
(h2) edge[bend right] (h3)
(h3) edge[bend right] (h4)
(h4) edge[bend right] (h5)
(h5) edge[bend right] (h1)
(h1) edge[<->] (h3)
(h3) edge[<->] (h5);
\node at (0,0) {$\bm{W}$};
\node (h) at (1.1,0) {$\vec h_t$};
\end{scope}
\path[edge]
(x) edge node[above] {$\bm{U}$} (h1);
\begin{scope}[shift={(5.5,0)}]
\node[right] at (1.4,0.4) {memory};
\draw[semithick] (0,-0.25) rectangle (2.25,+0.25);
\draw[semithick] (0.75,-0.25) -- (0.75,+0.25);
\draw[semithick] (1.5,-0.25) -- (1.5,+0.25);
\node at (0.4,0) {$\vec m_{t, 1}$};
\node at (1.15,0) {$\ldots$};
\node at (1.9,0) {$\vec m_{t, K}$};
\node[right] at (1.35,0.9) {$\strut$write head};
\node[rectangle, draw=black] (write) at (1.15,0.9) {$\vec x_t$};
\node (k) at (1.15,0.15) {};
\path[edge, shorten <=0pt] (write) edge (k);
\path[edge, dashed, gray]
(x) edge[out=30,in=180, looseness=0.75] (write)
(h) edge (write);
\node[gray] at (+0.4,1.1) {$\vec u^w$};
\node[gray] at (+0.5,0.5) {$\vec v^w$};
\node[right] at (1.35,-0.9) {$\strut$read head};
\node[rectangle, draw=black] (read) at (+1.15,-0.9) {$\vec r_t$};
\node (l) at (1.15,-0.15) {};
\path[edge, shorten >=0pt] (l) edge (read);
\path[edge, dashed, gray]
(x) edge[out=-30,in=180, looseness=0.75] (read)
(h) edge (read);
\node[gray] at (+0.4,-1.1) {$\bm{U}^r$};
\node[gray] at (+0.65,-0.58) {$\bm{V}^r$};
\end{scope}
\node at (10,0.5) {output};
\node (y) at (10,0) {$\vec y_t$};
\path[edge]
(h) edge[out=-15,in=190, looseness=0.8] (y)
(read) edge (y);
\node at (9.3,0.1) {$\bm{V}$};
\node at (9.5,-0.4) {$\bm{R}$};
\end{tikzpicture}
\vspace{-0.5cm}
\end{center}
\caption{An illustration of reservoir memory machines. We first process the
input (left) with a cycle reservoir with jumps (center left). We then use input and
reservoir activations to control interaction with the memory (center right;
gray connections). Finally, we feed reservoir activations and memory
reads to the output (right).}
\label{fig:rmm}
\end{figure}
Our key contribution is an easy-to-train alternative to the Neural Turing Machine \cite{NTM}.
In particular, we propose to extend an ESN with an explicit memory,
a write head, which can copy inputs into memory, and a read head, which can
read from the memory. We call this augmented ESN version
a \emph{reservoir memory machine} (RMM). A sketch of the RMM architecture
is shown in Figure~\ref{fig:rmm}.
In more detail, the state of our system is now a quadruple $(\vec h_t, \bm{M}_t, k_t, l_t)$,
where $\vec h_t$ are the reservoir activations as before, $\bm{M}_t \in \R^{K \times n}$
is the current memory state (of size $K$), and $k_t, l_t \in \{1, \ldots, K\}$ are the
current position of the write and read head respectively.
The dynamics of the system are as follows. First, we copy the previous
memory state, i.e.\ $\bm{M}_t \gets \bm{M}_{t-1}$ (where $\bm{M}_{-1} = \bm{0}$).
Then, we control the write head with the value $c^w_t = \vec u^w \cdot \vec x_t + \vec v^w \cdot \vec h_t$,
where $\vec u^w \in \R^n$ and $\vec v^w \in \R^m$ are learnable parameters.
If $c^w_t > 0$, we write to the memory, i.e.\ $\vec m_{t, k} \gets \vec x_t$,
and increment $k_t \gets k_{t-1} + 1$ (re-setting $k_t$ to $1$ if it exceeds $K$).
Otherwise, we leave the memory and $k_t$ as is.
Similarly, in each time step we control the read head with the vector
$\vec c^r_t = \bm{U}^r \cdot \vec x_t + \bm{V}^r \cdot \vec h_t$, where
$\bm{U}^r \in \R^{3 \times n}$ and $\bm{V}^r \in \R^{3 \times m}$ are learnable parameters.
If $c^r_{t, 1} = \max\{ c^r_{t, 1}, c^r_{t, 2}, c^r_{t, 3} \} $, the read head stays in the same location, i.e.\
$l_t \gets l_{t-1}$; if $c^r_{t, 2} = \max\{ c^r_{t, 1}, c^r_{t, 2}, c^r_{t, 3} \}$, we increment $l_t \gets l_{t-1} + 1$
(re-setting $l_t$ to 1 if it exceeds $K$); otherwise, we re-set $l_t \gets 1$.
We then set the memory read at time $t$ as the $l_t$th row of $\bm{M}_t$, i.e.\
$\vec r_t \gets \vec m_{t, l_t}$.
The output of the system at time $t$ is $\vec y_t = \bm{V} \cdot
\vec h_t + \bm{R} \cdot \vec r_t$, where $\bm{V} \in \R^{L \times m}$ and $\bm{R} \in \R^{L \times n}$
are learnable parameters. Note that our proposed model is a strict extension
of an ESN because we can simply set $\bm{R} = \bm{0}$ and thus obtain a standard ESN.
However, we can potentially solve \emph{more} tasks.
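For concreteness, a single state update of these dynamics can be sketched as follows. This is a minimal Python illustration with 0-based memory indices (the text uses 1-based positions with reset to $1$); the parameter names mirror the text, while the container layout is our own choice.

```python
import numpy as np

def rmm_step(x, h, M, k, l, params):
    """One step of the reservoir memory machine: write head, read head, memory read."""
    u_w, v_w, U_r, V_r = params["u_w"], params["v_w"], params["U_r"], params["V_r"]
    M = M.copy()
    # write head: copy x into memory row k if the control value is positive
    if u_w @ x + v_w @ h > 0:
        M[k] = x
        k = (k + 1) % M.shape[0]
    # read head: stay / increment / reset, decided by the largest control entry
    c = U_r @ x + V_r @ h
    move = int(np.argmax(c))
    if move == 1:
        l = (l + 1) % M.shape[0]
    elif move == 2:
        l = 0
    r = M[l]          # memory read at time t
    return r, M, k, l
```

The output layer would then combine the reservoir state and the memory read as $\vec y_t = \bm{V} \vec h_t + \bm{R} \vec r_t$.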
\paragraph{Training:} Because the output generation depends on the memory content,
our first step is to train the write and read heads, i.e.\ the
parameters $\vec u^w$, $\vec v^w$, $\bm{U}^r$, and $\bm{V}^r$.
In more detail, we initialize
$\bm{R}$ as the identity matrix (padded with zeros whenever necessary) and then
identify for each output $\vec y_t$ the earliest input $\vec x_{\tau_t}$ that
minimizes the distance $\lVert \bm{R} \cdot \vec x_{\tau_t} - \vec y_t \rVert$.
Based on this, we generate an \emph{ideal} control sequence for the write head
$c^w_1, \ldots, c^w_T$ where $c^w_t = +1$ if $t \in \{\tau_1, \ldots, \tau_T\}$
and $c^w_t = -1$ otherwise.
This control sequence serves as our teaching signal for training
$\vec u^w$ and $\vec v^w$ via linear regression.
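This teaching-signal construction admits a compact sketch (our own illustrative code, assuming a fixed $\bm{R}$; the resulting targets are then regressed onto the concatenation of $\vec x_t$ and $\vec h_t$ to obtain $\vec u^w$ and $\vec v^w$):

```python
import numpy as np

def write_head_targets(X, Y, R):
    """Ideal write-head control sequence from input/output pairs.

    For each output y_t we find the earliest input x_tau minimizing
    ||R x_tau - y_t||; every time step that occurs as such a tau gets
    target +1 (write), all remaining steps get -1.
    X: inputs (T, n); Y: outputs (T, L); R: readout matrix (L, n).
    """
    pred = X @ R.T                                    # R x_tau, shape (T, L)
    dist = np.linalg.norm(pred[None, :, :] - Y[:, None, :], axis=2)
    taus = np.argmin(dist, axis=1)                    # earliest minimizer per t
    c = -np.ones(X.shape[0])
    c[np.unique(taus)] = 1.0
    return c
```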
Next, we generate the tensor of all memory states
$(\bm{M}_1, \ldots, \bm{M}_T) \in \R^{T \times K \times n}$ as described above.
We then align this tensor with the output time series $\vec y_1, \ldots, \vec y_T$
via a variant of dynamic time warping with the recurrence:
$d_{l, t} = \lVert \bm{R} \cdot \vec m_{t, l} - \vec y_t \rVert +
\min\{ d_{l, t+1}, d_{l+1, t+1}, d_{1, t+1} \}$, where the entries in the minimum
correspond respectively to leaving the read-head location as is, incrementing it,
or resetting it to one. The base case of this recurrence is
$d_{l, T+1} = 0$ for all $l \in \{1, \ldots, K\}$.
Note that $\min\{d_{1, 1}, d_{2, 1}\}$ then corresponds to the error we achieve by
optimally moving the read head over the memory and always predicting the output
$\bm{R} \cdot \vec m_{t, l_t}$. Accordingly, backtracing yields a teaching signal
to train the read head parameters $\bm{U}^r$ and $\bm{V}^r$ via linear regression.
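The alignment recurrence above is a small backward dynamic program. A minimal sketch of it (our own code, not the reference implementation) is:

```python
import numpy as np

def align_read_head(Ms, Y, R):
    """Backward dynamic program for the read-head alignment above.

    Ms: memory states, shape (T, K, n); Y: outputs, shape (T, L);
    R: readout matrix, shape (L, n). Returns the minimal alignment cost
    and the optimal (1-based) read-head position for each time step.
    """
    T, K, _ = Ms.shape
    # cost[t, l] = ||R m_{t,l} - y_t||
    cost = np.linalg.norm(np.einsum('ln,tkn->tkl', R, Ms)
                          - Y[:, None, :], axis=2)
    d = np.zeros((K, T + 1))                   # base case d[l, T] = 0
    move = np.zeros((K, T), dtype=int)
    for t in range(T - 1, -1, -1):
        for l in range(K):
            options = (d[l, t + 1],            # stay in place
                       d[(l + 1) % K, t + 1],  # increment (wraps to 1)
                       d[0, t + 1])            # reset to 1
            m = int(np.argmin(options))
            move[l, t] = m
            d[l, t] = cost[t, l] + options[m]
    # backtrace from the better of the two admissible start positions
    l = 0 if d[0, 0] <= d[min(1, K - 1), 0] else 1
    path = []
    for t in range(T):
        path.append(l + 1)
        m = move[l, t]
        l = l if m == 0 else ((l + 1) % K if m == 1 else 0)
    return min(d[0, 0], d[min(1, K - 1), 0]), path
```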
Finally, we compute the sequence of memory reads $\vec r_1, \ldots, \vec r_T$ as
described above, which we use to train both
$\bm{V}$ and $\bm{R}$ via linear regression. Now, because we change $\bm{R}$,
the optimal alignments in the previous steps may change as well. Accordingly,
we repeat the training process until the loss increases or until
convergence, yielding an alternating optimization algorithm.
\section{Experiments}
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{scope}[shift={(-5,-1.62)}]
\begin{axis}[title={latch task},xlabel={$t$},ylabel={amplitude}, width=5cm, height=4.62cm,
xmin=0, ymax=1.8, legend pos={north east}, legend cell align={left},
ytick={0,1}]
\addplot[class0color, curves_thick] table[x=time,y=x] {latch_example.csv};
\addlegendentry{input}
\addplot[thick, class2color, densely dashed] table[x=time,y=y] {latch_example.csv};
\addlegendentry{output}
\end{axis}
\end{scope}
\begin{groupplot}[view={0}{90}, xlabel={$t$}, ymin=0, ymax=8,
group style={group size=2 by 2,
x descriptions at=edge bottom,y descriptions at=edge left,
horizontal sep=0.4cm, vertical sep=0.2cm},
width=4cm, height=3cm,
colormap/blackwhite]
\nextgroupplot[title={copy task},ymax=9,width=3.2cm, ylabel={input}]
\addplot3[surf,mesh/ordering=y varies,mesh/rows=10,shader=flat corner] file {copy_example_input.csv};
\nextgroupplot[title={repeat copy task},ymax=9]
\addplot3[surf,mesh/ordering=y varies,mesh/rows=10,shader=flat corner] file {repeat_copy_example_input.csv};
\nextgroupplot[width=3.2cm, ylabel={output}]
\addplot3[surf,mesh/ordering=y varies,mesh/rows=9,shader=flat corner] file {copy_example_output.csv};
\nextgroupplot
\addplot3[surf,mesh/ordering=y varies,mesh/rows=9,shader=flat corner] file {repeat_copy_example_output.csv};
\end{groupplot}
\end{tikzpicture}
\vspace{-0.7cm}
\end{center}
\caption{An example input and output sequence for all three data sets.}
\label{fig:data}
\end{figure}
In our experiments, we evaluate reservoir memory machines (RMMs) on three data sets that
require storage of inputs over long times without interference:
The \textit{latch} task requires the model to produce zeros until a spike in the input appears,
after which the model should produce ones. For the next input spike, the model should switch back to zeros,
and so on (Figure~\ref{fig:data}, left). We use three spikes with random positions and random sequence lengths of up to 200 time steps.
The \textit{copy} data set \cite{NTM} consists of 1-20 time steps with 8 random bits each, followed by a sequence
end token in an additional feature. After this, the goal is to exactly copy the input while the remaining input is
zero (Figure~\ref{fig:data}, center).
The \textit{repeat copy} data set \cite{NTM} extends the copy task by requiring the network to
copy the input sequence multiple times (refer to Figure~\ref{fig:data}, right).
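As a concrete illustration of the copy-task format described above, one pair of sequences can be generated as follows (our own sketch; the exact data-generation code of the experiments may differ):

```python
import numpy as np

def copy_task(T_max=20, bits=8, rng=None):
    """Generate one (input, target) pair for the copy task described above.

    The input holds T in {1, ..., T_max} steps of random bits, then a
    sequence-end token in an extra feature; the target reproduces the
    bits after the token while the remaining input is zero.
    """
    rng = np.random.default_rng(rng)
    T = int(rng.integers(1, T_max + 1))
    seq = rng.integers(0, 2, size=(T, bits)).astype(float)
    X = np.zeros((2 * T + 1, bits + 1))
    X[:T, :bits] = seq
    X[T, bits] = 1.0                     # sequence-end token
    Y = np.zeros((2 * T + 1, bits))
    Y[T + 1:] = seq                      # copy of the input after the token
    return X, Y
```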
We compare RMMs to standard ESNs and to a novel variant which we dub echo state
gated recurrent unit (ESGRU). This model uses the dynamic equations of a gated
recurrent unit \cite{GRU} but keeps all weights fixed after initialization.
To ensure that all variance is due to memory access only, we use the same
reservoir for all networks, namely a cycle reservoir with jumps \cite{CRJ}.
We evaluate in a 20-fold crossvalidation, generating
10 sequences per fold (i.e.\ $190$ training sequences and $10$ test sequences). For each model,
we use a 3-fold nested crossvalidation for hyper-parameter optimization via random
search with 10 trials. The detailed experimental code is available at \url{https://gitlab.ub.uni-bielefeld.de/bpaassen/reservoir-memory-machines}.
\begin{table}
\caption{The average RMSE ($\pm$ standard deviation) across 20 crossvalidation
folds for all models and all data sets. NTM results are copied from \cite{NTM_impl}.}
\label{tab:results}
\begin{center}
\begin{tabular}{lccc}
\textbf{model} & \textbf{latch} & \textbf{copy} & \textbf{repeat copy} \\
\cmidrule(lr){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}
ESN & $0.309 \pm 0.049$ & $0.358 \pm 0.030$ & $0.409 \pm 0.038$ \\
ESGRU & $0.402 \pm 0.116$ & $0.331 \pm 0.011$ & $0.375 \pm 0.018$ \\
RMM & $<10^{-3}$ & $0.027 \pm 0.025$ & $0.037 \pm 0.067$ \\
NTM & n.a. & $ < 10^{-3} $ \cite{NTM_impl} & $ < 10^{-3} $ \cite{NTM_impl}
\end{tabular}
\end{center}
\end{table}
The generalization root mean square error (RMSE) of all models on all datasets
is displayed in Table~\ref{tab:results}.
For all datasets, RMMs achieve a low (albeit nonzero) error, indicating that they
are able to solve the tasks.
By contrast, both ESNs and ESGRUs are \emph{not} able to solve the
tasks, as they have significantly higher errors on all datasets
($p < 10^{-3}$ according to a Wilcoxon signed-rank test with Bonferroni correction).
Note that a Neural Turing Machine achieves zero error on all tasks \cite{NTM_impl}.
We investigate the solution strategy of the RMM model in more detail on the
latch task. For this purpose, we use a trained RMM and let it extrapolate
to a much longer sequence (see Figure~\ref{fig:latch_example}, top)
than seen in training (length 1700 vs.\ 200 with 8 vs.\ 3 spikes). We note that the RMM extrapolates
perfectly (Figure~\ref{fig:latch_example}, second row) with an error $<10^{-3}$.
In more detail, we observe that the model only writes to memory once, namely storing
a 1 at the time of the first spike (Figure~\ref{fig:latch_example}, third row),
whereas the read head switches position at every spike (except the first one;
Figure~\ref{fig:latch_example}, bottom), thus producing the desired output.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{groupplot}[xlabel={sequence length $T$},
group style={group size=2 by 1, horizontal sep=0.5cm, y descriptions at=edge left},
width=5cm, height=3.2cm,ymin=0,ymax=0.6]
\nextgroupplot[title={training time$\strut$}, ylabel={runtime [s]}]
\addplot[thick, class1color, densely dotted]
table[x=length,y=ESN_train_mean,y error=ESN_train_std] {runtimes.csv};
\addplot[thick, class0color, densely dashed]
table[x=length,y=ESGRU_train_mean,y error=ESGRU_train_std] {runtimes.csv};
\addplot[thick, class2color]
table[x=length,y=ARMM_train_mean,y error=ARMM_train_std] {runtimes.csv};
\nextgroupplot[title={prediction time$\strut$}, legend pos={outer north east}, legend cell align={left}]
\addplot[thick, class1color, densely dotted]
table[x=length,y=ESN_pred_mean,y error=ESN_pred_std] {runtimes.csv};
\addlegendentry{ESN}
\addplot[thick, class0color, densely dashed]
table[x=length,y=ESGRU_pred_mean,y error=ESGRU_pred_std] {runtimes.csv};
\addlegendentry{ESGRU}
\addplot[thick, class2color]
table[x=length,y=ARMM_pred_mean,y error=ARMM_pred_std] {runtimes.csv};
\addlegendentry{RMM}
\end{groupplot}
\end{tikzpicture}
\vspace{-0.7cm}
\end{center}
\caption{Runtime results for training (left) and prediction (right)
of standard ESNs, ESGRUs, and RMMs for varying sequence length.}
\label{fig:runtimes}
\end{figure}
To evaluate the runtime, we train ESNs, ESGRUs, and RMMs
with a reservoir of 128 neurons each on a random 8-bit input sequence
with varying length, the output sequence being shifted by one.
We measure runtime on a consumer grade laptop with core i7 CPU.
Figure~\ref{fig:runtimes} shows the runtime results. We find that RMMs take
roughly 15 times longer to train than regular ESNs, which may be due to
the additional linear regression runs and an inefficient alignment implementation.
Still, even for long sequences we maintain training times well below a second.
Prediction time is roughly comparable to a standard ESN and faster than an ESGRU.
By comparison, training an NTM using the reference implementation
\cite{NTM_impl} on the copy task took more than 30 minutes.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{groupplot}[
group style={
group size=1 by 4, horizontal sep=0.5cm, vertical sep=0.25cm,
x descriptions at=edge bottom,
y descriptions at=edge left,
},
width = 0.95\textwidth,
height = 0.2\textwidth,
enlarge x limits,
enlarge y limits,
xmin=0, xmax=1800
]
\nextgroupplot[ymin=0, ymax=1, ylabel={$x_t,y_t$}, ytick={0, 1}]
\addplot[class0color, curves_thick] table[x=time, y=x] {latch_example_signal.csv};
\addplot[class2color, thick, densely dashed] table[x=time, y=y] {latch_example_signal.csv};
\nextgroupplot[ymin=0, ymax=1, ylabel={$\hat{y_t}$}, ytick={0, 1}]
\addplot[class1color, curves] table[x=time, y=yhat] {latch_example_signal.csv};
\nextgroupplot[ymin=0, ymax=1, ylabel={$m_{t, k}$}, ytick={0, 1}]
\addplot[class0color, curves] table[x=time, y=mem1] {latch_example_signal_writesCont.csv};
\addplot[class1color, curves] table[x=time, y=mem2] {latch_example_signal_writesCont.csv};
\node (m1) at (1775,0.75) {$m_{t, 1}$};
\node (m2) at (1775,0.1) {$m_{t, 2}$};
\nextgroupplot[ymin=1, ymax=2, xlabel={$t$}, ylabel={$l_t$}, ytick={1, 2}]
\addplot[class2color, curves_thin] table[x=time, y=read_loc] {latch_example_signal_readsCont.csv};
\end{groupplot}
\end{tikzpicture}
\vspace{-0.5cm}
\end{center}
\caption{From top to bottom: A long sequence from the latch task with the input as solid, the output as dashed line;
the prediction of the RMM;
the memory entries over time; and
the read head position over time.}
\label{fig:latch_example}
\end{figure}
\section{Conclusion}
We have introduced reservoir memory machines (RMMs), which augment echo state
networks with an external memory, a write head that copies data from the
input to the memory, and a read head which couples the memory to the
output. We also provided a training algorithm for the write and read heads
based on dynamic time warping and linear regression in
an alternating optimization scheme. As such, our model retains the
training simplicity of echo state networks, but extends
its capabilities to some of the benchmark tasks of Neural Turing Machines.
We emphasize that our model is still strictly less powerful because other
benchmark tasks remain out of reach, especially those based on
content-based addressing.
Extending our model with such a mechanism is a task for future work.
Further, we still require a formal proof that our proposed model is
strictly more powerful than an ESN.
\chapter*{Abstract}
\addcontentsline{toc}{chapter}{Abstract}
Condensed matter systems in the solid state owe many of their properties to the quantum behavior of their electronic degrees of freedom. Although these dynamical degrees of freedom are relatively simple, the emergent phenomena which arise from the collective behavior of the many constituent particles can be highly non-trivial. Two such characteristics are topological phases and the phenomena which result from many strongly correlated degrees of freedom.
Presented in this thesis are a set of projects which lie at the intersection between strong correlations and topological phases of matter.
The first of these projects is a treatment of an infinite dimensional generalization of the Su-Schrieffer-Heeger model with local Coulomb interactions which is treated exactly using the technique of dynamical mean-field theory, with the numerical renormalization group as the impurity solver. Observed in the solution is power-law augmentation of the non-interacting density of states. The topological spectral pole becomes broadened into a power-law diverging spectrum, and the trivial gapped spectrum becomes a power-law vanishing pseudogap. At stronger interaction strengths we have a first-order transition to a fully gapped Mott insulator. This calculation represents an exact solution to an interacting topological insulator in the strongly correlated regime at zero temperature.
The second set of projects involves the development of methods for formulating non-interacting auxiliary models for strongly correlated systems.
These auxiliary models are able to capture the full dynamics of the original strongly correlated model, but with only completely non-interacting degrees of freedom, defined in an enlarged Hilbert space. We motivate the discussion by performing the mapping analytically for simple interacting systems using non-linear canonical transformations via a Majorana decomposition. For the nontrivial class of interacting quantum impurity models, the auxiliary mapping is established numerically exactly for finite-size systems using exact diagonalization, and for impurity models in the thermodynamic limit using the numerical renormalization group, both at zero and finite temperature. We find that the auxiliary systems take the form of generalized Su-Schrieffer-Heeger models, which inherit the topological characteristics of those models. These generalized Su-Schrieffer-Heeger models are also formalized and investigated in their own right as novel systems.
Finally, we apply the auxiliary field methodology to study the Mott transition in the Hubbard model. In terms of the auxiliary system, we find that the Mott transition can be understood as a topological phase transition, which manifests as the formation and dissociation of topological domain walls.
\chapter*{Acknowledgements}
\addcontentsline{toc}{chapter}{Acknowledgements}
I would first of all thank my supervisor Andrew Mitchell for inviting me to Dublin and into his burgeoning group and into his first batch of PhD students. His support and strong investment in my education over the past few years is what made this thesis possible.
Among the group members, I collaborated most with Dr. Sudeshna Sen, and without her expertise and contributions, several aspects of the work in this thesis would not be as high quality as it is.
I am grateful to the other group members, Emma Minarelli
and
Jonas Rigo,
for providing camaraderie in our shared situation.
Similarly, I have appreciated the company of Eoin, Eoin, and George in our shared office.
I must also thank the School of Physics management staff, particularly Bairbre Fox and John Brennan
for all their help with bureaucratic and administrative affairs, as well as the acquisition of IKEA furniture.
I would like to thank my Research Studies Panel members Profs. Vio Buchete and Peter Duffy for their additional guidance and support over the course of my studies here.
I would also like to thank my examiners, Prof. Vladimir Loboskin and Prof. Nuala Caffrey, and particularly my special external examiner Prof. C\'edric Weber, for their time in evaluating this work.
Outside of the academic sphere, I am grateful to my band mates from the Jazz Society, Ben, Marcie, Michael, Ois\'in, and Ted, for the many gigs we managed to perform together.
Finally I must acknowledge the support of my parents for innumerable aspects of life, but more directly as it pertains to this thesis, for harboring me in their shelter during a global pandemic which occurred during the progress of this work.
\chapter{Interacting SSH Model on the Bethe Lattice\label{ch:bethessh}}
\chaptermark{Interacting Bethe SSH}
In this chapter a variant of the SSH model is treated with on-site Coulomb interactions. The interactions are treated with DMFT-NRG introduced in \S\ref{sec:dmft}. A prerequisite to performing this calculation involves mapping the original $1d$ SSH model into a form which has a well-defined infinite dimensional limit, suitable for treatment with DMFT~\cite{bethessh}.
Topological phases of matter have been intensely studied in recent years~\cite{hk}, but there still remain a number of open questions regarding the interplay of topology and strong interactions in condensed matter systems~\cite{interacting}, including the role of topological invariants.
The $1d$ SSH model with on-site Coulomb interactions has previously been analyzed perturbatively~\cite{hubbardpeierls}. This analysis showed that the Hubbard interaction does not qualitatively alter the characteristics of the noninteracting SSH model for $U<U_c$. In particular, the soliton solutions of the SSH model persist in the presence of the interaction.
A similar study~\cite{interactingrm} investigated the topology of the $1d$ Rice-Mele model with nearest-neighbor interactions in the low-energy weak-interaction regime of finite system size using functional renormalization group (fRG) and density matrix renormalization group (DMRG) techniques. In this work the topological edge states are modified due to spatial variations in the self-energy close to the system boundary. Such effects are not present in the system considered here due to the local nature of the self-energy within DMFT and the absence of a boundary.
Other previous studies of the SSH model with interactions considered nearest-neighbor density-density interactions~\cite{sirker}.
The SSH model with Hubbard interaction has also been studied using exact diagonalization (ED)~\cite{guo} and density matrix renormalization group (DMRG)~\cite{manmana,barbiero} techniques. These studies were restricted to finite system sizes in $1d$. In contrast, the approach taken below is that of DMFT-NRG and is performed in the limit of infinite dimensions and system size.
To date, this exact solution in the limit of infinite dimensions has not been reported in the literature.
A distinguishing feature of the present study, in contrast to the aforementioned work, is that it captures the effect of the Mott metal-insulator transition on the topological features. The existing studies mentioned previously have not been able to touch on this element as they have worked exclusively in $1d$, where the Mott transition does not occur in the ground state~\cite{liebwu}.
\section{SSH Model on the Bethe Lattice}
\begin{figure}[ht!]
\centering
\begin{tikzpicture}[scale=0.65, line width=2pt, every node/.style={scale=0.75,inner sep=3pt}, every path/.style={scale=0.75}]
\input{bethelatt.tex}
\end{tikzpicture}
\begin{tikzpicture}[scale=0.65, every node/.style={scale=0.75,inner sep=3pt}, every path/.style={scale=0.75}]
\input{bethessh_fig.tex}
\end{tikzpicture}
\caption[Schematic of the SSH model on the Bethe lattice]{Schematic of the higher dimensional SSH model fitted to the Bethe lattice with $\tensor*{\kappa}{_A} = 3$, $\tensor*{\kappa}{_B} = 2$. The single blue links represent links with $t_A$ and the double red links represent links with $t_B$. The $\bullet$- ($\circ$-)sites are those with majority $t_A$ ($t_B$) bonds.\label{fig:sshbethe}}
\end{figure}
This chapter investigates the characteristics of an SSH-type topological insulator with local Coulomb interactions which are handled by DMFT.
DMFT becomes exact in the limit of infinite lattice coordination number. The objective in the present chapter is to exploit this limit to obtain exact results for interacting topologically non-trivial systems. In order to analyze a strongly correlated topological insulator in this framework, it is necessary to construct a model which possesses topological features in this limit. The choice here is to generalize the SSH model to the Bethe lattice, a lattice commonly utilized in DMFT calculations.
To create a model on the Bethe lattice which captures the same properties as the SSH model, it is necessary to not only define two hopping parameters $t_A$ and $t_B$, but also to partition the
coordination number $\kappa$ into $\tensor*{\kappa}{_A}$ and $\tensor*{\kappa}{_B}$, with $\kappa = \tensor*{\kappa}{_A} + \tensor*{\kappa}{_B}$.
The Hamiltonian for this model can be written as
\begin{equation}
	\op{H}{} =
	\smashoperator{\sum_{j\in\Gamma_{\textsc{bl}}}} \left( \smashoperator[r]{\sum_{\ell\in\{\tensor*{\kappa}{_A}\}}} t_A \, \opd{c}{j+\ell} \op{c}{j} + \smashoperator[r]{\sum_{\ell\in\{\tensor*{\kappa}{_B}\}}} t_B \, \opd{c}{j+\ell} \op{c}{j} + \hc \right)
\end{equation}
where $j$ runs over the sites of the Bethe lattice $\Gamma_{\textsc{bl}}$, $\ell\in\{\tensor*{\kappa}{_A}\}$ indicates summing over all nearest neighbors in the set $\{\tensor*{\kappa}{_A}\}$, and $\ell\in\{\tensor*{\kappa}{_B}\}$ similarly for $\{\tensor*{\kappa}{_B}\}$.
An example of this lattice is shown schematically in Fig.~\ref{fig:sshbethe}.
\subsection{The Green functions}
The limit of infinite coordination number is taken in a controlled way in order to ensure that parameters remain finite and the salient non-trivial features of the model are preserved.
Although the Bethe lattice is infinite in extent and therefore only possesses a bulk and no boundary, it is possible to identify through the Green functions a certain type of bulk-boundary correspondence.
The Green functions for this system may be calculated by considering the Green functions for ``surface'' sites. Since the surface of a $d$-dimensional system is $(d-1)$-dimensional, surface sites of the SSH Bethe lattice are defined as either a $\bullet$- or $\circ$-site with either a $t_A$ or $t_B$ bond removed.
\begin{subequations}
A surface $\circ$-site with a $t_A$ link removed has the Green function
\begin{align}
G^\circ_{\text{s}A}(z)
&=
\cfrac{1}
{ z - (\tensor*{\kappa}{_B} - 1) t_A^2 \ G^\bullet_{\text{s}A}(z) - \tensor*{\kappa}{_A} t_B^2 \ G^\bullet_{\text{s}B}(z) }
\end{align}
and the analogous site with a $t_B$ bond removed is
\begin{align}
G^\circ_{\text{s}B}(z)
&=
\cfrac{1}
{ z - (\tensor*{\kappa}{_A} -1) t_B^2 \ G^\bullet_{\text{s}B}(z) - \tensor*{\kappa}{_B} t_A^2 \ G^\bullet_{\text{s}A}(z) } \,.
\end{align}
The analogous surface Green functions for $\bullet$-sites are
\begin{align}
G^\bullet_{\text{s}A}(z)
&=
\cfrac{1}
{ z - (\tensor*{\kappa}{_A} - 1) t_A^2 \ G^\circ_{\text{s}A}(z) - \tensor*{\kappa}{_B} t_B^2 \ G^\circ_{\text{s}B}(z) }
\\
G^\bullet_{\text{s}B}(z)
&=
\cfrac{1}
{ z - (\tensor*{\kappa}{_B} - 1) t_B^2 \ G^\circ_{\text{s}B}(z) - \tensor*{\kappa}{_A} t_A^2 \ G^\circ_{\text{s}A}(z) } \,.
\end{align}
\end{subequations}
The bulk Green functions can be written in terms of these surface Green functions as
\begin{align}
G_{\text{b}}^\bullet
&=
\cfrac{1}
{ z - \cfrac{\tensor*{\kappa}{_A} t_A^2}{ z - (\tensor*{\kappa}{_B} - 1) t_A^2 \ G^\bullet_{\text{s}A} - \tensor*{\kappa}{_A} t_B^2 \ G^\bullet_{\text{s}B} } - \cfrac{\tensor*{\kappa}{_B} t_B^2}{ z - (\tensor*{\kappa}{_A} -1) t_B^2 \ G^\bullet_{\text{s}B} - \tensor*{\kappa}{_B} t_A^2 \ G^\bullet_{\text{s}A} }
}
\\
&=
\cfrac{1}
{ z - \tensor*{\kappa}{_A} t_A^2 \ G^\circ_{\text{s}A} - \tensor*{\kappa}{_B} t_B^2 \ G^\circ_{\text{s}B}
}
\intertext{and}
G_{\text{b}}^\circ
&=
\cfrac{1}
{ z - \cfrac{\tensor*{\kappa}{_A} t_A^2}{ z - (\tensor*{\kappa}{_A} - 1) t_A^2 \ G^\circ_{\text{s}A} - \tensor*{\kappa}{_B} t_B^2 \ G^\circ_{\text{s}B} } - \cfrac{\tensor*{\kappa}{_B} t_B^2}{ z - (\tensor*{\kappa}{_B} - 1) t_B^2 \ G^\circ_{\text{s}B} - \tensor*{\kappa}{_A} t_A^2 \ G^\circ_{\text{s}A} }
}
\\
&=
\cfrac{1}
{ z - \tensor*{\kappa}{_A} t_A^2 \ G^\bullet_{\text{s}A} - \tensor*{\kappa}{_B} t_B^2 \ G^\bullet_{\text{s}B}
} \,.
\end{align}
In the limit of infinite coordination number, $\tensor*{\kappa}{_A} - 1 \approx \tensor*{\kappa}{_A}$ and $\tensor*{\kappa}{_B} - 1 \approx \tensor*{\kappa}{_B}$. In this limit, the $A$ and $B$ surface Green functions on the $\bullet$- and $\circ$-sites become
\begin{equation}
\begin{gathered}
G^\circ_{\text{s}A}(z)
\approx
\cfrac{1}
{ z - \tensor*{\kappa}{_B} t_A^2 \ G^\bullet_{\text{s}A}(z) - \tensor*{\kappa}{_A} t_B^2 \ G^\bullet_{\text{s}B}(z) }
\approx
G^\circ_{\text{s}B}(z)
\\
G^\bullet_{\text{s}A}(z)
\approx
\cfrac{1}
{ z - \tensor*{\kappa}{_B} t_B^2 \ G^\circ_{\text{s}B}(z) - \tensor*{\kappa}{_A} t_A^2 \ G^\circ_{\text{s}A}(z) }
\approx
G^\bullet_{\text{s}B}(z) \,.
\end{gathered}
\end{equation}
The `surface' Green functions for both types of sites are thus equivalent to their corresponding bulk counterparts,
\begin{equation}
\begin{aligned}
G^\circ_{\text{s}*}(z)
&\approx
G^\circ_{\text{b}}(z)
\\
G^\bullet_{\text{s}*}(z)
&\approx
G^\bullet_{\text{b}}(z) \,,
\end{aligned}
\end{equation}
where $* \in \{A,B\}$ and `$\text{b}$' denotes ``bulk.'' The distinction between a $d$-dimensional bulk and its $(d-1)$-dimensional boundary is lost in the limit of infinite dimensions.
The equivalence of the bulk and surface Green functions demonstrates an appearance of the bulk-boundary correspondence in the Bethe SSH model.
In order for the Green functions to remain non-trivial in the limit $\kappa\to\infty$, it is necessary to rescale the system parameters appropriately. To this end, it is useful to
define the quantities
\begin{align}
\mathsf{x}^2 &\vcentcolon= \tensor*{\kappa}{_B} t_B^2
&
\mathsf{y} &\vcentcolon= \frac{t_B}{t_A}
&
\mathsf{z} &\vcentcolon= \frac{\tensor*{\kappa}{_B}}{\tensor*{\kappa}{_A}}
\end{align}
which remain fixed as the limit is taken.
The Green functions in the infinite dimensional limit then take the form of
\begin{subequations}
\begin{align}
G^\bullet_\infty(z) &= \frac{1}{z - \left( \frac{\mathsf{x}^2}{\mathsf{z}} + \frac{\mathsf{x}^2}{\mathsf{y}^2} \right) G^\circ_\infty(z)}
\intertext{and}
G^\circ_\infty(z) &= \frac{1}{z - \left( \mathsf{x}^2 + \frac{\mathsf{x}^2}{\mathsf{y}^2 \mathsf{z}} \right) G^\bullet_\infty(z)} \,.
\end{align}
\end{subequations}
These equations recover the Green functions of the standard SSH model for both the trivial and topological phases upon identifying rescaled hopping parameters as $\tilde{t}_A^2 \vcentcolon= \frac{\mathsf{x}^2}{\mathsf{z}} + \frac{\mathsf{x}^2}{\mathsf{y}^2}$ and $\tilde{t}_B^2 \vcentcolon= \mathsf{x}^2 + \frac{\mathsf{x}^2}{\mathsf{y}^2 \mathsf{z}}$. With these replacements the Green functions for the two sites take the form of
\begin{subequations}
\begin{align}
G^\circ_\infty(z) &= \frac{1}{z - \cfrac{\tilde{t}_B^2}{z - \tilde{t}_A^2 G^\circ_\infty(z)}}
\intertext{and}
G^\bullet_\infty(z) &= \cfrac{1}{z - \cfrac{\tilde{t}_A^2}{z - \tilde{t}_B^2 G^\bullet_\infty(z)}} \,,
\end{align}
\end{subequations}
which match the functional form of the edge site Green functions of the $1d$ SSH model in the trivial and topological phases respectively.
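These self-consistent continued fractions reduce to quadratic equations and can be solved in closed form. The following is a minimal numerical sketch (our own illustration, not part of the thesis) which makes the contrast between the two sites explicit:

```python
import numpy as np

def g_bethe_ssh(omega, tA2, tB2, eta=1e-2):
    """Retarded local Green functions in the infinite-coordination limit.

    The continued fraction G = 1/(z - tA2/(z - tB2*G)) for the bullet
    sites is equivalent to the quadratic
    tB2*z*G^2 - (z^2 + tB2 - tA2)*G + z = 0; the circle-site equation
    follows on exchanging tA2 <-> tB2.  The physical retarded branch is
    the root with negative imaginary part.
    """
    z = omega + 1j * eta

    def solve(a2, b2):
        a = b2 * z
        b = -(z**2 + b2 - a2)
        s = np.sqrt(b**2 - 4 * a * z)
        r1, r2 = (-b + s) / (2 * a), (-b - s) / (2 * a)
        return np.where(r1.imag < 0, r1, r2)

    return solve(tA2, tB2), solve(tB2, tA2)

# parameterization (x, y, z) = (1, 3, 2) used later in the text:
tA2 = 1 / 2 + 1 / 9                   # = 11/18
tB2 = 1 + 1 / 18                      # = 19/18
Gb, Gc = g_bethe_ssh(np.array([0.0]), tA2, tB2)
A_bullet = -Gb.imag / np.pi           # midgap pole, broadened by eta
A_circle = -Gc.imag / np.pi           # vanishing weight (order eta) in the gap
```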
By way of the comparison with the SSH chain Green functions, the $\bullet$-sites exhibit the topological configuration when the compound effective hopping parameters satisfy $\tilde{t}_A < \tilde{t}_B$. In terms of the basic parameters, this enforces the simultaneous requirement that $t_A < t_B$ and $\tensor*{\kappa}{_A} > \tensor*{\kappa}{_B}$. Inverting this requirement results in the $\circ$-sites carrying the topological configuration and the $\bullet$-sites exhibiting the trivial configuration. If either $\mathsf{y}=1$ or $\mathsf{z}=1$, then $\tilde{t}_A = \tilde{t}_B$ and $G^\bullet_\infty(z) = G^\circ_\infty(z)$, which results in the system exhibiting a semi-elliptical metallic local density of states on every site.
The following analysis is performed with the $\bullet$-sites always taking the topological character and the $\circ$-sites always trivial, \textit{i.e.} parameters are chosen such that $\tilde{t}_A < \tilde{t}_B$ from here on.
As with the $1d$ SSH model, the bandwidth is defined as $D = \tilde{t}_A + \tilde{t}_B$. This $D$ sets the energy scale for the interactions in the following.
\begin{figure}[ht!]
\subfiglabel{\includegraphics[scale=1]{hsshU0a.pdf}}{3.125,2}{fig:hsshU0a}
\subfiglabel{\includegraphics[scale=1]{hsshU0b.pdf}}{3.125,2}{fig:hsshU0b}
\vspace{-\baselineskip}
\caption{Spectral functions on the $\bullet$-sites \subref{fig:hsshU0a} and $\circ$-sites \subref{fig:hsshU0b} of the non-interacting Bethe SSH model. \label{fig:hsshU0.0}}
\end{figure}
The $\circ$- and $\bullet$-sites then play the roles of the `bulk' and `boundary' sites in the bulk-boundary correspondence\index{bulk-boundary correspondence}. The Bethe SSH lattice contains both types of sites by construction, and enforcing a topological state on one type of site forces the other type to be trivial and insulating. In this way the infinite boundary-less Bethe SSH lattice still realizes the bulk-boundary correspondence.
It is worth noting that these topological sites are still bulk sites, as the Bethe lattice has no real boundary, in contrast to the usual $1d$ model wherein the topological site only occurs on the boundary of the chain.
\subsubsection{Choice of Lattice}
The higher dimensional generalization of the SSH model presented here requires a lattice with certain properties. The structure of the lattice must be such that $\mathsf{z}\neq1$ on every site of the lattice.
It appears that no lattice exhibiting these properties also possesses a reciprocal lattice, although a mathematical proof of this statement is not known to the author at present.
In order to produce an SSH spectral pole, it is not necessary for every site to have the same set of parameters $\{\tensor*{\kappa}{_A},\tensor*{\kappa}{_B},t_A,t_B\}$, but rather it is only necessary that $\tensor*{\kappa}{_A} > \tensor*{\kappa}{_B}$ and $t_A < t_B$ on alternating sites. For randomized parameters restricted to fit the requirements on each site, the spectral function takes a form which is analogous to the $1d$ SSH model with disordered hopping amplitudes, \textit{cf.} Fig.~\ref{fig:dtsshspec}.
\section{Effect of Interactions}
An interacting version of this model may be constructed by adding a standard on-site four-fermion Hubbard interaction term between opposite spins.
\begin{figure}[ht!]
\centering
\scalebox{1}[0.65]
{
\begin{tikzpicture}[scale=0.65, every node/.style={scale=0.75,inner sep=3pt}, every path/.style={scale=0.75}]
\input{bethessh_fig.tex}
\end{tikzpicture}
}\\
\vspace{-7.5\baselineskip}
\scalebox{1}[0.65]
{
\begin{tikzpicture}[scale=0.65, every node/.style={scale=0.75,inner sep=3pt}, every path/.style={scale=0.75}]
\input{bethessh_fig.tex}
\end{tikzpicture}
}\\
\vspace{\baselineskip}
\caption[Schematic of lattice for Hubbard-SSH model]{Schematic of lattice for Hubbard-SSH model, which consists of one copy of the Bethe lattice for each spin and local interactions between corresponding sites.\label{fig:doublebethessh}}
\end{figure}
This involves two copies of the Bethe lattice, one for each spin, with the local Hubbard interaction taking place between corresponding sites, illustrated in Fig.~\ref{fig:doublebethessh}.
The Hamiltonian of this Bethe lattice Hubbard-SSH model is given by
\begin{equation}
\op{H}{\textsc{hssh}}
=
	\smashoperator{\sum_{j\in\Gamma_{\textsc{bl}},\sigma}}\ \left[
	\varepsilon \, \opd{c}{j,\sigma} \op{c}{j,\sigma} + \left(
	t_A \smashoperator{\sum_{\ell\in\{\tensor*{\kappa}{_A}\}}} \opd{c}{j+\ell,\sigma} \op{c}{j,\sigma} +
	t_B \smashoperator{\sum_{\ell\in\{\tensor*{\kappa}{_B}\}}} \opd{c}{j+\ell,\sigma} \op{c}{j,\sigma} + \hc\! \right) \right]
	+ U \smashoperator{\sum_{j\in\Gamma_{\textsc{bl}}}} \opd{c}{j,\uparrow} \op{c}{j,\uparrow} \opd{c}{j,\downarrow} \op{c}{j,\downarrow}
\end{equation}
where $j$ indexes the sites on the Bethe lattice, $\ell$ runs over nearest neighbors either in the set $\{\tensor*{\kappa}{_A}\}$ or $\{\tensor*{\kappa}{_B}\}$, and $\sigma \in \{\uparrow,\downarrow\}$ is the spin index.
The system is solved using DMFT-NRG as described in \S\ref{sec:dmft}.
The full interacting Green functions for the two types of sites are given by
\begin{subequations}
\begin{equation}
\begin{aligned}[b]
G^\bullet_\sigma(z)
&= \cfrac{1}{z-\varepsilon - \cfrac{\tilde{t}_A^2}{z-\varepsilon - \tilde{t}_B^2 G^\bullet_\sigma(z) - \Sigma^\circ_\sigma(z)} - \Sigma^\bullet_\sigma(z)}
\\
&= \frac{1}{z-\varepsilon - K^\circ_\sigma(z) - \Sigma^\bullet_\sigma(z)}
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}[b]
G^\circ_\sigma(z)
&= \cfrac{1}{z-\varepsilon - \cfrac{\tilde{t}_B^2}{z-\varepsilon - \tilde{t}_A^2 G^\circ_\sigma(z) - \Sigma^\bullet_\sigma(z)} - \Sigma^\circ_\sigma(z)}
\\
&= \frac{1}{z-\varepsilon - K^\bullet_\sigma(z) - \Sigma^\circ_\sigma(z)} \,.
\end{aligned}
\end{equation}
\label{eq:hsshbethegreenfunctions}
\end{subequations}
In the following analysis, parameters are chosen with
\begin{equation}
( \mathsf{x} , \mathsf{y} , \mathsf{z} ) = ( 1.0,\, 3.0,\, 2 )
\label{eq:hsshparam}
\end{equation}
such that $\tilde{t}_A^2 = \frac{11}{18} \approx 0.61$ and $\tilde{t}_B^2 = \frac{19}{18} \approx 1.06$. Since $\tilde{t}_A < \tilde{t}_B$, the $\bullet$-sites express the topological feature and the $\circ$-sites express the trivial spectrum.
With this parameterization choice the effective band gap is $2\lvert \tilde{t}_B - \tilde{t}_A \rvert \approx 0.49$ and the effective full band width is $2D \equiv 2\lvert \tilde{t}_B + \tilde{t}_A \rvert \approx 3.62$.
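These numerical values follow directly from the quoted effective hoppings; a quick arithmetic check (using only $\tilde{t}_A^2 = 11/18$ and $\tilde{t}_B^2 = 19/18$ as stated above):

```python
import numpy as np

# Arithmetic check of the quoted effective band gap and band width,
# using only the effective hoppings stated in the text.
tA = np.sqrt(11/18)       # tilde-t_A, approximately 0.78
tB = np.sqrt(19/18)       # tilde-t_B, approximately 1.03
gap = 2*abs(tB - tA)      # effective band gap,        ~0.49
width = 2*abs(tB + tA)    # effective band width 2D,   ~3.62
```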
As the calculation takes place in the infinite dimensional limit, the solution presented here is numerically exact and not an approximation.
The NRG is performed with $\Lambda=2.0$ keeping 3000 states in each iteration at a temperature of $T/D = 10^{-6}$, which is considered to be effectively zero temperature.
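For orientation, the non-interacting, particle-hole symmetric limit ($\Sigma = 0$, $\varepsilon = 0$) of the coupled equations~\eqref{eq:hsshbethegreenfunctions} can be solved by damped fixed-point iteration. The following Python sketch is an illustration only, not the DMFT-NRG code used for the results here; the broadening $\eta$, grid, and iteration count are arbitrary choices. It reproduces the midgap pole on the $\bullet$-sites and the hard gap on the $\circ$-sites:

```python
import numpy as np

# Damped fixed-point iteration of the coupled Bethe-SSH Green functions
# in the non-interacting, particle-hole symmetric limit (Sigma = 0, eps = 0).
tA2, tB2 = 11/18, 19/18            # effective hoppings squared

def greens(omega, eta=1e-2, n_iter=5000):
    z = omega + 1j*eta             # retarded frequency just above the real axis
    Gb = -1j*np.ones_like(z)       # bullet-site Green function, initial guess
    Gc = -1j*np.ones_like(z)       # circ-site Green function, initial guess
    for _ in range(n_iter):
        Gb = 0.5*Gb + 0.5/(z - tA2/(z - tB2*Gb))   # bullet-site self-consistency
        Gc = 0.5*Gc + 0.5/(z - tB2/(z - tA2*Gc))   # circ-site self-consistency
    return Gb, Gc

w = np.linspace(-3, 3, 1201)
Gb, Gc = greens(w)
Ab = -Gb.imag/np.pi                # bullet spectrum: midgap pole at w = 0
Ac = -Gc.imag/np.pi                # circ spectrum: hard gap around w = 0
```

The midgap pole on the $\bullet$-sites carries weight $1 - \tilde{t}_A^2/\tilde{t}_B^2 = 8/19$ in this limit, smeared here over the artificial width $\eta$.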
The system is first analyzed in the case of small but finite interaction strength, $0<U \ll D$. Although this lies in the perturbative regime, the solution presented here is obtained from the full non-perturbative DMFT-NRG calculation. The reason for studying this regime is to ascertain how the interactions affect the individual elements of the SSH spectrum without the effects interfering with one another.
At higher interaction strengths the spectral features become heavily broadened, and it is difficult to determine exactly which spectral features are augmented in which way by the interactions.
The spectral functions for the $\bullet$- and $\circ$-sites at $U/D = 0.01$ are plotted on a linear scale in Fig.~\ref{fig:hsshU0.01linear}. The contrasting features of these spectra compared to the non-interacting case ($U=0.0$, \textit{cf.} Fig.~\ref{fig:hsshU0.0}) are not clearly seen as they occur at logarithmically small scales. It is therefore advantageous to examine the spectral functions on a logarithmic scale, as in Fig.~\ref{fig:hsshUsmalllog}, where the spectral functions for the $\bullet$- and $\circ$-sites are plotted for $U/D=0.001$ and $U/D=0.01$.
The DMFT-NRG solution shows that the zero-energy pole on the topological $\bullet$-sites is broadened into a power-law feature which onsets at $\omega \sim \mathcal{O}(U)$ and decays exponentially into the preformed gap. On the logarithmic scale plots in Fig.~\ref{fig:hsshUsmalllog}, the low energy power-law appears linear. At higher energy the power-law feature develops a shoulder, which is the broadening, decaying into the gap. The continuum bands outside the gap are not meaningfully affected by interactions of this strength. This is similar to the case of the Hubbard model as solved in \S\ref{sec:hubbardsolution}: in Fig.~\ref{fig:hubbardsolution}, the difference in the continuum band of the Hubbard model spectrum between $U/D = 0$ and $U/D = 1.0$ is minimal, and the interaction strength considered for the HSSH model at present is orders of magnitude smaller than that.
For $\omega < U$, the spectral function exhibits the behavior $\mathcal{A}^{\bullet/\circ}(\omega)\sim|\omega|^{\mp r}$ where the exponent is $-r$ for $\bullet$-sites and $+r$ for $\circ$-sites ($r>0$).
The value of the power-law scaling $r$ is dependent upon the system parameters $\tilde{t}_A$ and $\tilde{t}_B$.
For the parameterization~\eqref{eq:hsshparam}, the exponent is $r\approx0.41$.
This value changes based on the choice of parameters $(\mathsf{x},\mathsf{y},\mathsf{z})$.
For $(\mathsf{x},\mathsf{y},\mathsf{z}) = ( 1.0, 2.0, 2 )$, the non-interacting band gap is $2 |\tilde{t}_B - \tilde{t}_A| \approx 0.39$ and $r \approx 0.43$.
For $(\mathsf{x},\mathsf{y},\mathsf{z}) = ( 1.0, 3.0, 3 )$, the non-interacting band gap is $2 |\tilde{t}_B - \tilde{t}_A| \approx 0.70$ and $r \approx 0.37$.
From this comparison it is seen to be coincidental that the power-law exponent $r$ is approximately equal to the band gap for the parameterization~\eqref{eq:hsshparam}, as this does not hold for other choices of parameterization.
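For reference, an exponent such as $r \approx 0.41$ is read off from the spectra by a straight-line fit on log-log axes. A schematic illustration with synthetic power-law data (the amplitude and frequency window here are arbitrary stand-ins, not values from the DMFT-NRG calculation):

```python
import numpy as np

# Schematic extraction of a power-law exponent r from log-log data.
r_true = 0.41
w = np.logspace(-6, -3, 60)              # window below the interaction scale
A_bullet = 0.3*np.abs(w)**(-r_true)      # synthetic A(w) ~ |w|^{-r}
slope, _ = np.polyfit(np.log(w), np.log(A_bullet), 1)
r_fit = -slope                           # recovers the exponent r
```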
\begin{figure}[ht!]
\includegraphics[scale=1]{hsshU0_01a.pdf}
\includegraphics[scale=1]{hsshU0_01b.pdf}
\caption[Spectral functions for the $\bullet$- and $\circ$-sites at $U/D = 0.01$ on a linear scale]{Spectral functions for the $\bullet$- and $\circ$-sites at $U/D = 0.01$ on a linear scale. On this scale the effects of the interactions are difficult to visualize. See the logarithmic scale plots in Fig.~\ref{fig:hsshUsmalllog} for a better visualization.\label{fig:hsshU0.01linear}}
\end{figure}
\begin{figure}[ht!]
\begin{subfigure}[c]{0.5\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=\linewidth]{hsshU0_001alog.pdf}};
\node at (3,1.875) {\subref*{fig:hsshU0_001alog}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:hsshU0_001alog}}
\end{subfigure}
\begin{subfigure}[c]{0.5\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=\linewidth]{hsshU0_001blog.pdf}};
\node at (3,1.875) {\subref*{fig:hsshU0_001blog}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:hsshU0_001blog}}
\end{subfigure}
\begin{subfigure}[c]{0.5\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=\linewidth]{hsshU0_01alog.pdf}};
\node at (3,1.875) {\subref*{fig:hsshU0_01alog}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:hsshU0_01alog}}
\end{subfigure}
\begin{subfigure}[c]{0.5\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width=\linewidth]{hsshU0_01blog.pdf}};
\node at (3,1.875) {\subref*{fig:hsshU0_01blog}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:hsshU0_01blog}}
\end{subfigure}
\caption[Appearance of power-law features at low $U$ on the interacting Bethe-SSH model]{Appearance of power-law features at low $U$ on the interacting Bethe-SSH model. The feature on the right in each plot is the continuum SSH band. The space between the band and the low energy feature is a hard gap. The low energy spectral functions scale as $|\omega|^{\pm r}$, where the power is $-r$ in \subref{fig:hsshU0_001alog} and \subref{fig:hsshU0_01alog}, and $+r$ in \subref{fig:hsshU0_001blog} and \subref{fig:hsshU0_01blog}. Note that the power-law feature onsets at $\omega \sim \mathcal{O}(U/D)$.\label{fig:hsshUsmalllog}}
\end{figure}
As interactions are adiabatically increased into the non-perturbative regime, the low energy spectral features of both $\bullet$- and $\circ$-sites retain their power-law behavior with the same constant power $r$.
Both the low energy feature and the continuum bands exhibit exponential decay into the hard gap until the broadened low energy feature encounters the band. The power-law feature continues to onset at $\omega \sim \mathcal{O}(U)$, such that the broadened low energy feature encounters the band at $U \sim \lvert \tilde{t}_B - \tilde{t}_A \rvert/2$. Once this point is reached, the power-law feature occupies the entire low energy region, but does not extend beyond the former gap width. This can be seen by comparing the spectra at $U/D = 3.0$ in Fig.~\ref{fig:hsshU3log} with the spectra at $U \ll \lvert \tilde{t}_B - \tilde{t}_A \rvert$ in Fig.~\ref{fig:hsshUsmalllog}. The spectra in Fig.~\ref{fig:hsshUsmalllog} show that the inner band edge appears at $\omega/D \sim 10^{-1}$. The spectra at $U/D = 3.0$ in Fig.~\ref{fig:hsshU3} are heavily broadened by the interactions, but the power-law feature is still confined to $|\omega| \lesssim \lvert \tilde{t}_B - \tilde{t}_A \rvert$.
As in the Hubbard model, when interactions are further increased the central features of the spectrum become compressed towards the Fermi level; similarly, the power-law features here are pushed away from the bands.
\begin{figure}[ht!]
\includegraphics[scale=1]{hsshU3a.pdf}
\includegraphics[scale=1]{hsshU3b.pdf}
\caption{Spectral functions for the $\bullet$- and $\circ$-sites at $U/D=3.0$. The original spectral features are now heavily broadened.\label{fig:hsshU3}}
\end{figure}
\begin{figure}[ht!]
\includegraphics[scale=1]{hsshU3alog.pdf}
\includegraphics[scale=1]{hsshU3blog.pdf}
\caption[Spectral functions for the $\bullet$- and $\circ$-sites at $U/D=3.0$ on a logarithmic scale]{Spectral functions for the $\bullet$- and $\circ$-sites at $U/D=3.0$ on a logarithmic scale. Observed here is the low energy power-law feature which is retained from the perturbative regime. The region of the power-law is confined to the width of the band gap of the non-interacting system.\label{fig:hsshU3log}}
\end{figure}
\begin{figure}[ht!]
\includegraphics[scale=1]{hsshU5_4a.pdf}
\includegraphics[scale=1]{hsshU5_4b.pdf}
\caption[Spectral functions for the $\bullet$- and $\circ$-sites at $U/D=5.4$. The spectral functions here are hard gapped and are in a Mott insulating phase.]{Spectral functions for the $\bullet$- and $\circ$-sites at $U/D=5.4$. The spectral functions here are hard gapped and are in a Mott insulating phase. The characteristic Mott pole in their self-energies can be seen in Fig.~\ref{fig:hsshmottse}.\label{fig:hsshMottphase}}
\end{figure}
The main result here is that for the Bethe-SSH model in the presence of local Coulomb interactions, sites in the topological configuration exhibit a Kondo resonance situated at zero energy, while sites in the trivial configuration exhibit a pseudogap with power-law behavior. The solution is presented in Fig.~\ref{fig:hsshsolution}.
\begin{figure}[htp!]
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU0a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU0b.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU0_1a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU0_1b.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU0_5a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU0_5b.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU1a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU1b.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU2a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU2b.pdf}
\end{subfigure}
\caption[DMFT solution to the Bethe lattice Hubbard-SSH model]{DMFT solution to the Bethe lattice Hubbard-SSH model for the $\bullet$ (left) and $\circ$ (right) sites.\label{fig:hsshsolution}}
\end{figure}
\begin{figure}[htp!]\ContinuedFloat
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU3a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU3b.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU4a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU4b.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU5a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU5b.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU5_3a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU5_3b.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU5_4a.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{seqhsshU5_4b.pdf}
\end{subfigure}
\caption[DMFT solution to the Bethe lattice Hubbard-SSH model]{DMFT solution to the Bethe lattice Hubbard-SSH model for the $\bullet$ (left) and $\circ$ (right) sites.}
\end{figure}
Above a critical interaction strength $U_{c}$, both types of sites exhibit a Mott insulating phase with a hard gap.
Under the parameterization \eqref{eq:hsshparam} the Mott transition\index{Mott transition} occurs at $5.3 < U_{c2}/D < 5.4$. The spectrum on both sites in the Mott insulating phase is shown in Fig.~\ref{fig:hsshMottphase}. The phase with $U>U_{c}$ can unambiguously be called Mott insulating, but the phase with $U<U_{c}$ cannot uniformly be called \textit{e.g.} topological, as the topological states only exist on one sublattice. With respect to the Mott phase transition, it may informally be called the `metallic' phase by analogy with the Hubbard phase diagram, but the system does not take on a uniform phase in this region: one sublattice has a spectrum adiabatically connected to the boundary spectrum of a topological insulator, while the other sublattice is adiabatically connected to a conventional band insulator.
As in the basic Hubbard model~\eqref{eq:hubbard}, this system exhibits a hysteresis region in the phase diagram (\textit{cf.} Fig.~\ref{fig:hubbardcoexist} and the surrounding discussion).
Initializing the DMFT algorithm with a Green function of the Mott insulating phase, such as $U/D > 5.4$, and iterating the calculation with progressively decreasing $U$, it is found that $4.2 < U_{c1}/D < 4.3$. This means that the hysteresis region lies in the parameter region $4.3 \lesssim U/D \lesssim 5.4$. The comparison of spectral functions within the coexistence regime is plotted in Fig.~\ref{fig:hsshcoexistence}.
Within the hysteresis region, the system with interaction strength tuned down adiabatically from $U > U_{c2}$ can be described as having interaction strength $U_+$. Conversely, the system with interaction strength tuned up adiabatically from $U < U_{c1}$ can be described as having interaction strength $U_-$.
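The branch-tracking logic behind such sweeps can be illustrated with a toy bistable self-consistency (a scalar stand-in for the DMFT loop, not the actual equations of this chapter; the map $x = \tanh(ax+h)$ and all numbers below are arbitrary choices). Sweeping a control parameter while seeding each solve from the previous solution lands on different branches inside the coexistence window:

```python
import numpy as np

# Toy hysteresis loop: x = tanh(a*x + h) is bistable around h = 0 for a > 1.
# Sweeping h while seeding each solve from the previous converged solution
# tracks a branch until it disappears, mimicking the U_-/U_+ sweeps above.
def solve(h, x0, a=2.0, n=4000):
    x = x0
    for _ in range(n):                 # plain fixed-point iteration
        x = np.tanh(a*x + h)
    return x

hs = np.linspace(-1.0, 1.0, 81)
up, x = [], -1.0
for h in hs:                           # sweep the control parameter upward
    x = solve(h, x); up.append(x)
down, x = [], up[-1]
for h in hs[::-1]:                     # sweep back down from the other branch
    x = solve(h, x); down.append(x)
down = down[::-1]
```

At the midpoint of the sweep ($h=0$) the two sweep directions converge to different solutions, which is the signature of the coexistence region.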
\begin{comment}
\begin{subequations}
\begin{align}
G^\bullet(z)
&= \cfrac{1}{z-\varepsilon - \Sigma_b(z) - \cfrac{t_A^2}{z-\varepsilon - \Sigma_a(z) - t_B^2 G^\bullet(z)}}
\\
&= \cfrac{1}{z-\varepsilon - \Sigma_b(z) - K_b(z)}
\end{align}
\begin{align*}
\Delta_b(z) = t_A^2 G^\circ(z)
\end{align*}
\begin{align}
G^\circ(z)
&= \cfrac{1}{z-\varepsilon - \Sigma_a(z) - \cfrac{t_A^2}{z-\varepsilon - \Sigma_b(z) - t_A^2 G^\circ(z)}}
\\
&= \cfrac{1}{z-\varepsilon - \Sigma_a(z) - K_a(z)}
\end{align}
\begin{align*}
\Delta_a(z) = t_B^2 G^\bullet(z)
\end{align*}
\end{subequations}
\end{comment}
\begin{figure}[ht!]
\subfiglabel{\includegraphics[scale=1]{hsshU4_5a.pdf}}{3,1.875}{fig:hsshU4_5a}
\subfiglabel{\includegraphics[scale=1]{hsshU4_5b.pdf}}{3,1.875}{fig:hsshU4_5b}
\subfiglabel{\includegraphics[scale=1]{hsshUc1U4_5a.pdf}}{3,1.875}{fig:hsshUc1U4_5a}
\subfiglabel{\includegraphics[scale=1]{hsshUc1U4_5b.pdf}}{3,1.875}{fig:hsshUc1U4_5b}
\caption[Spectral functions on the $\bullet$- and $\circ$-sites at $U/D=4.5$ demonstrating the coexistence hysteresis region.]{Spectral functions on the $\bullet$- and $\circ$-sites at $U/D=4.5$. Panels \subref{fig:hsshU4_5a},\subref{fig:hsshU4_5b} are evaluated at $U_{-}/D = 4.5$, and panels \subref{fig:hsshUc1U4_5a},\subref{fig:hsshUc1U4_5b} are evaluated at $U_{+}/D = 4.5$, where $U_{-}$ ($U_{+}$) refers to approaching the interaction strength adiabatically from the metallic (insulating) phase, respectively. This demonstrates the coexistence hysteresis region $U_{c1} < U < U_{c2}$.\label{fig:hsshcoexistence}}
\end{figure}
A remark worth making is that the situation described by the Hubbard-SSH model presented here is different from the class of interacting topological insulators known as `topological Mott insulators'\index{topological Mott insulator} in the literature~\cite{interacting}. Topological Mott insulators are analogous to ordinary non-interacting topological insulators in the sense that they host topological states on their boundaries with an insulating bulk; in a topological Mott insulator the bulk is a Mott insulator.
The Hubbard-SSH model here exhibits Mott insulating behavior on both of its sublattices in the $U>U_c$ regime, and in the $U < U_c$ regime where sites of one sublattice exhibit topological features, the sites of the other sublattice are not Mott insulating. As such, it does not present itself as a model of a topological Mott insulator.
\subsection{Self-Energy Analysis}\label{sec:hsshse}
\begin{figure}[ht!]
\subfiglabel{\includegraphics{hsshSU3a.pdf}}{3.125,2}{fig:hsshSU3a}
\subfiglabel{\includegraphics{hsshSU3b.pdf}}{3.125,2}{fig:hsshSU3b}
\caption{$-\Im\Sigma(\omega)$ of the HSSH model at $U/D = 3.0$. At low energy the self-energy scales as $\omega^2$, which is indicative of a Fermi liquid.\label{fig:hsshse}}
\end{figure}
As in the full Green functions, the distinguishing characteristics of the self-energies occur at low energy. In the $U<U_c$ phase, the self-energy on the $\circ$-sites scales as $\omega^2$, shown in Fig.~\ref{fig:hsshSU3b}, which is indicative of a Fermi liquid. This is shown explicitly in Fig.~\ref{fig:hsshtrivse}. On the $\bullet$-sites, the self-energy possesses power-law behavior $|\omega|^{r}$, shown in Figs.~\ref{fig:hsshSU3a} and \ref{fig:hsshtopse}. The power $r$ is the same as that which appears in the Green functions. From the Dyson equation
\begin{equation}
G^\bullet(z) = \frac{1}{z - \varepsilon - \tilde{t}^2_A G^\circ(z) - \Sigma^\bullet(z) \strut}
\end{equation}
it can be observed that the power-law features of $\Im\Sigma^\bullet(\omega)$ and $G^\circ(\omega)$ contribute constructively to the opposite sign power-law which appears in $G^\bullet(\omega)$. The analogous Dyson equation for $G^\circ(\omega)$ shows that at low energy only the power-law from $G^\bullet(\omega)$ affects $G^\circ(\omega)$ since $\omega^2 \ll |\omega|^{-r}$.
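This argument can be made concrete with a toy calculation: inserting schematic low-energy forms $\Im\Sigma^\bullet(\omega) \sim -|\omega|^{r}$ and $G^\circ(\omega) \sim |\omega|^{r}$ into the Dyson equation indeed yields $\mathcal{A}^\bullet(\omega) \sim |\omega|^{-r}$. The amplitudes below are arbitrary placeholders, not fitted values:

```python
import numpy as np

# Toy check: power-law inputs in the Dyson equation produce the inverse
# power-law in the bullet-site spectral function (eps = 0 at ph symmetry).
r, tA2 = 0.41, 11/18
w = np.logspace(-8, -5, 40)
Sigma_b = -0.5j*w**r                 # schematic Im Sigma_bullet ~ -|w|^r
G_c = (0.2 - 0.7j)*w**r              # schematic circ-site Green function ~ |w|^r
G_b = 1.0/(w - tA2*G_c - Sigma_b)    # Dyson equation for the bullet site
A_b = -G_b.imag/np.pi
slope, _ = np.polyfit(np.log(w), np.log(A_b), 1)   # slope ~ -r
```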
\begin{figure}[ht!]
\includegraphics{SU3bzoom.pdf}
\includegraphics{SU3logb.pdf}
\vspace{-\baselineskip}
\caption{The low energy feature of $-\Im\Sigma^{\circ}(\omega)$ at $U/D = 3.0 < U_c$. The low energy feature goes as $-\Im\Sigma^{\circ}(\omega) \sim \omega^2$, which is indicative of a Fermi liquid state.\label{fig:hsshtrivse}}
\end{figure}
\begin{figure}[ht!]
\includegraphics{SU3azoom.pdf}
\includegraphics{SU3loga.pdf}
\vspace{-\baselineskip}
\caption{The low energy feature of $-\Im\Sigma^{\bullet}(\omega)$ at $U/D = 3.0 < U_c$. The power-law feature onsets at $|\omega| \sim 10^{-1}$. This power-law behavior is indicative that the $\bullet$-site is a non-Fermi liquid.\label{fig:hsshtopse}}
\end{figure}
This low energy behavior persists throughout the $U<U_c$ phase, even though higher energy features change with increasing $U$, with the self-energies of both sites following similar behavior as the Hubbard model self-energy as $U\to U_c^-$.
The power $r$ does not depend on the strength of interaction $U$, but the scale at which $\mathcal{A}^{\bullet/\circ}(\omega) \sim |\omega|^{\pm r}$ does.
\begin{figure}[ht!]
\subfiglabel{\includegraphics{hsshSU5_4a.pdf}}{3.125,2}{fig:hsshSU5_4a}
\subfiglabel{\includegraphics{hsshSU5_4b.pdf}}{3.125,2}{fig:hsshSU5_4b}
\subfiglabel{\includegraphics{SU5_4loga.pdf}}{3.125,2}{fig:hsshSU5_4loga}
\subfiglabel{\includegraphics{SU5_4logb.pdf}}{3.125,2}{fig:hsshSU5_4logb}
\caption{$-\Im\Sigma(\omega)$ on the $\bullet$- and $\circ$-sites at $U/D = 5.4 > U_c$. The continuum bands decay exponentially into the gap \subref{fig:hsshSU5_4loga},\subref{fig:hsshSU5_4logb}. The Mott pole is not visible on the log-scale plots.\label{fig:hsshmottse}}
\end{figure}
Above the critical interaction strength, the self-energy on both sites features a Mott pole located in a gap. The gap can be identified as a hard gap as the bands of the imaginary part of the self-energy decay exponentially into the gap. This is shown in Fig.~\ref{fig:hsshmottse}.
Qualitative information about the two sites in the two phases can be gathered from examining the Luttinger integral\index{Luttinger integral}. The Luttinger integral is given by
\begin{equation}
I^{\bullet/\circ}_{L} = \frac{2}{\pi} \Im \int_{-\infty}^{0} \text{d}\omega\, G^{\bullet/\circ}(\omega) \frac{\partial \Sigma^{\bullet/\circ}(\omega)}{\partial \omega}
\end{equation}
for each of the $\bullet$- and $\circ$-sites.
The Luttinger integral has a functional form similar to that of the Volovik-Essin-Gurarie invariant~\eqref{eq:volovikessengurarie}, as $\Sigma(\omega) \sim G^{-1}(\omega)$.
For $U> U_c$, the Luttinger integral takes the value $I_{L} = 1$ on both sites. This is indicative of both sites existing in a Mott insulating phase, which is corroborated by the observation that the self-energy on both sites features a Mott pole.
On the $\circ$-sites with $U<U_c$, the Luttinger integral evaluates to $I^\circ_{L} = 0$, which implies that the sites which are adiabatically connected to the topologically trivial configuration take the form of a Fermi liquid in the presence of interactions. The behavior of the self-energy of these sites in this regime is $\Im\Sigma(\omega) \overset{|\omega| \ll 1}{\sim} \omega^2$, which is consistent with the Fermi liquid picture.
For the parameters used in the above analysis, the $\bullet$-sites with $U<U_c$ feature a Luttinger integral which evaluates to $I^\bullet_{L} \approx 0.17$, which is a non-trivial value. This indicates that the $\bullet$-sites in the $U<U_c$ phase cannot be considered to be Fermi liquids, but also are not simple local moments as in the Mott insulating case.
The Luttinger integral depends on the effective non-interacting gap scale $\lvert \tilde{t}_B - \tilde{t}_A \rvert$, which controls the power-law exponent $r$, and its value is therefore not a unique identifier of the system's phase. It does however indicate non-Fermi liquid correlations in the system.
\section{Classification of the Topological Phases}
A standard method of classifying the topological phases of a system is to calculate a topological invariant such as the Chern number. For the SSH model, the appropriate invariant is the Zak phase\index{Zak phase}
\begin{equation}
\gamma_{\textsc{z}} = \smashoperator{\oint_{\textsc{bz}}} {A}(k) = \begin{cases} 0 & t_A \geq t_B \\ \pi & t_A < t_B \end{cases}
\end{equation}
where ${A}(k) = -\text{i}\langle \psi(k) | \partial_k \psi(k) \rangle\,\text{d}k$ is the $U(1)$ connection 1-form on the Bloch bundle over the first Brillouin zone and $|\psi(k)\rangle$ is the eigenvector of the lower band. Unlike the higher dimensional Berry phase, which is the integral of the curvature 2-form, the Zak phase is not gauge invariant.
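As an illustrative cross-check of this quantization (standard for the $1d$ SSH chain, not specific to the Bethe lattice construction of this chapter), the Zak phase of the lower band can be computed numerically from a discretized Wilson loop:

```python
import numpy as np

def zak_phase(tA, tB, N=400):
    """Zak phase of the 1d SSH lower band from a discretized Wilson loop.
    The closed-loop product of overlaps is gauge invariant."""
    ks = np.linspace(0, 2*np.pi, N, endpoint=False)
    us = []
    for k in ks:
        h = tA + tB*np.exp(-1j*k)               # off-diagonal Bloch element
        H = np.array([[0, h], [np.conj(h), 0]])
        _, v = np.linalg.eigh(H)
        us.append(v[:, 0])                       # lower-band eigenvector
    us.append(us[0])                             # close the loop
    prod = 1.0 + 0j
    for a, b in zip(us[:-1], us[1:]):
        prod *= np.vdot(a, b)                    # overlap <u_k|u_{k+dk}>
    return -np.angle(prod) % (2*np.pi)

p_top = zak_phase(1.0, 2.0)    # t_A < t_B: topological, Zak phase pi
p_triv = zak_phase(2.0, 1.0)   # t_A > t_B: trivial, Zak phase 0
```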
An alternative method, which is particularly suited to interacting systems, is given by the Volovik-Essin-Gurarie invariant~\cite{volovik,gurarie,essingurarie}, which is defined in terms of the momentum space Green functions as
\begin{equation}
\mathcal{N} = \int \bigwedge_{k}^{d} G(\omega,k) \, \d_{k} G^{-1}(\omega,k)
\label{eq:volovikessengurarie}
\end{equation}
where $\bigwedge$ denotes the exterior products of the differential 1-form of the momentum space Green functions $G \d G^{-1}$ over the $d$-dimensions of the system.
A further additional method is by using the topological Hamiltonian~\cite{simpinv,topham}\index{topological Hamiltonian}
\begin{equation}
H_{\text{T}}(k) = H_0(k) + \Sigma(\omega=0,k) ,
\label{eq:topham}
\end{equation}
which comprises the non-interacting part of the Hamiltonian, $H_0(k)$, and the self-energy of the interacting system evaluated at zero frequency, $\Sigma(0,k)$. Topological invariants are then obtained in the usual manner, as if $H_{\text{T}}(k)$ were the Hamiltonian of a free fermion system, for example via the Chern number or the Volovik-Essin-Gurarie functional \eqref{eq:volovikessengurarie}.
While the Bethe lattice does not have a defined dual, a non-interacting tight-binding model on the Bethe lattice (\textit{e.g.} of the form Eq.~\eqref{eq:basickineticham}) can still be diagonalized in terms of a polarization $\theta$ which plays the role of an effective momentum~\cite{diagonalbethe}.
Since all sites on the Bethe lattice for this model are identical, an arbitrary site is chosen as a `center' and the state on this site is labelled $\lvert0\rangle$. Additional states $\lvert1\rangle$, $\lvert2\rangle$, $\lvert3\rangle$, $\ldots$, $\lvert n \rangle$, $\ldots$ are constructed such that the state $\lvert n \rangle$ comprises all sites a distance $n$ away from $\lvert0\rangle$.
The action of the Hamiltonian on a given state $\lvert n \rangle$ with $n>1$ is
\begin{equation}
\hat{H} \lvert n \rangle = \sqrt{\kappa-1} \, t \left( \lvert n-1 \rangle + \lvert n+1 \rangle \right) \,.
\end{equation}
The nature of state $\lvert0\rangle$ as a reference state means that transitioning between states $\lvert0\rangle$ and $\lvert1\rangle$ involves $\kappa$ bonds, as opposed to the $\kappa-1$ bonds involved in transitions between states with $n>1$.
The dispersion relation is
\begin{equation}
\varepsilon(\theta) = 2 \sqrt{\kappa-1}\, t \cos\theta
\end{equation}
for finite $\kappa$ and
\begin{equation}
\varepsilon(\theta) = 2 \tilde{t} \cos\theta
\end{equation}
in the limit of $\kappa\to\infty$.
In the infinite dimensional limit, this scheme recovers the same dispersion relation as for a $1d$ homogeneous chain.
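This dispersion can be verified numerically for finite $\kappa$: the effective chain with uniform hopping $\sqrt{\kappa-1}\,t$ is a tridiagonal matrix whose spectrum is exactly $2\sqrt{\kappa-1}\,t\cos\theta$ sampled at $\theta_m = m\pi/(N+1)$. The sketch below uses open boundary conditions and an arbitrary chain length $N$, and ignores the special $\lvert0\rangle \to \lvert1\rangle$ bond noted above:

```python
import numpy as np

# Effective chain for the Bethe lattice with branching kappa - 1:
# uniform hopping sqrt(kappa-1)*t gives eps(theta) = 2*sqrt(kappa-1)*t*cos(theta).
kappa, t, N = 3, 1.0, 400
teff = np.sqrt(kappa - 1)*t
H = np.diag(np.full(N - 1, teff), 1) + np.diag(np.full(N - 1, teff), -1)
evals = np.sort(np.linalg.eigvalsh(H))

# Exact eigenvalues of the open tridiagonal chain: theta_m = m*pi/(N+1)
theta = np.arange(1, N + 1)*np.pi/(N + 1)
theory = np.sort(2*teff*np.cos(theta))
```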
An analogous procedure may be applied to the case of the Bethe lattice SSH model. The difference in this case is the inhomogeneity in the hoppings from states $\lvert n \rangle$ to $\lvert n+1 \rangle$. This inhomogeneity can be cured by taking at each successive step the average over all hoppings between states $\lvert n \rangle$ and $\lvert n+1 \rangle$.
Taking a $\bullet$-site for $\lvert0\rangle$, the average of all hoppings to state $\lvert1\rangle$ is less than $t_0 \equiv \frac12(t_A + t_B)$, as by definition more $t_A$ bonds than $t_B$ bonds connect to a $\bullet$-site. Similarly, the average of the hoppings from state $\lvert1\rangle$ to state $\lvert2\rangle$ is always greater than $t_0$. For the hoppings involving states $\lvert n \rangle$ with $n>0$, only $\tensor*{\kappa}{_A}-1$ or $\tensor*{\kappa}{_B}-1$ bonds participate, as only sites further away from $\lvert0\rangle$ are taken into account. This means that for some sites involved in the transition from $\lvert n \rangle$ to $\lvert n+1 \rangle$ it may happen that $\tensor*{\kappa}{_B}-1 = \tensor*{\kappa}{_A}$ or $\tensor*{\kappa}{_A}-1 = \tensor*{\kappa}{_B}$. These links however constitute only a subset of all links between the sites of $\lvert n \rangle$ and $\lvert n+1 \rangle$; the remainder will have a majority of either $t_B$ or $t_A$, thereby skewing the average in the appropriate way.
The hopping amplitude for the transition $\lvert0\rangle \to \lvert1\rangle$ is unique as all bonds from the 0-site are involved in the transition. For all other transitions $\lvert n \rangle \to \lvert n+1 \rangle$, some bonds do not participate as only sites further away from $\lvert0\rangle$ are taken into account. However, the multiplicity of each bond involved in the transition is the same.
For the finite dimensional example given in Fig.~\ref{fig:sshbethe}, the alternating hopping amplitudes of the diagonalized model are $\tilde{t}_A \approx 0.87$ and $\tilde{t}_B \approx 0.94$.
Since the self-energy of the DMFT solution is purely local, it possesses no momentum, or effective momentum, dependence.
However, defining a topological invariant from the topological Hamiltonian in terms of the Green functions requires the calculation of the momentum space Green function $G(\theta)$, with $\theta$ the effective momentum described above. The Green function obtained from the DMFT solution is not this quantity, so additional work must be done in order to apply the topological Hamiltonian formalism to the interacting SSH system investigated above.
A conjecture is that states which are adiabatically connected to non-interacting topological phases retain their topological classification. While reasonable, based on the general notion that topological phases are robust to adiabatic deformations, this conjecture has not been proven to extend to the interacting regime~\cite{interacting}.
It is however known that topological invariants break down under the phase transition to $U>U_c$~\cite{breakdownii} since this phase is not adiabatically connected to states for which a well-defined topological invariant can be calculated. The topological invariant given by the topological Hamiltonian can therefore not be used in the Mott insulating regime. This is particularly relevant for topological Mott insulators.
\section{The Particle-Hole Asymmetric Case}
The preceding analysis was performed in the regime of particle-hole symmetry at half-filling, where $\varepsilon = -U/2$.
The DMFT analysis can easily be extended to the situation away from particle-hole symmetry, as in the case of a doped system.
The asymmetry can be parameterized as $\varepsilon \to \varepsilon = \varepsilon_0 - \mu$, where $\varepsilon_0 = -U/2$ and $\mu$ is the doping parameter.
While relatively straightforward to implement in the DMFT-NRG framework, this calculation is computationally intensive. For high resolution the calculation was performed at a temperature of $T/D = 10^{-12}$ requiring $\sim90$ iterations of NRG at $\Lambda = 2.0$ keeping 6000 states.
The main result of this calculation is that there exists in this model a doping induced quantum phase transition. A sequence of spectral functions on both sublattices depicting this phase transition as $\mu$ is increased is shown in Fig.~\ref{fig:phasymmsequence}.
Near $\varepsilon = -U$, the spectrum develops a hard gap around $\omega = 0$ on both sublattices. This gap is depicted in detail in Fig.~\ref{fig:dopingpt}.
Past this point, on the $\bullet$-sublattice the Kondo resonance reenters the upper band and the system again becomes metallic.
For $U/D = 3.6$ this reentrant phase transition occurs in the vicinity of $\mu/D \approx 1.5$.
On the $\circ$-sites, at $\mu_c$ the doping destroys the pseudogap state and produces a hard gap in the spectrum. As the doping is increased further, the gap fills in and the spectrum acquires finite weight between the satellite bands. This is seen in the $\mu/D = 1.5$ panel of Fig.~\ref{fig:phasymmsequence}.
The phase transition to the hard gap is difficult to capture accurately at high resolution due to the strong competition between the Kondo resonance and the topological state, which no longer coincide for $\mu \neq 0$. Due to the intertwined nature of the Green functions, \textit{cf.} Eq.~\eqref{eq:hsshbethegreenfunctions}, this affects the calculation of the pseudogapped spectra as well. For values of $\mu$ near the phase transition, the DMFT-NRG calculation does not converge, with each DMFT iteration producing a drastically different solution from the previous one, even after very many DMFT iterations ($>200$) and for small changes in parameters from the initialization ($\Delta \mu/D < 0.05$). Past the transition, $\mu > \mu_c$, the DMFT-NRG calculation does converge.
\begin{figure}[htp!]
\centering
\begin{subfigure}{\linewidth}
\caption*{\vspace{-\baselineskip} \qquad\qquad $\mu/D = 0.0$}
\includegraphics[scale=1]{U3_6a.pdf}
\includegraphics[scale=1]{U3_6b.pdf}
\vspace{-\baselineskip}
\end{subfigure}
\begin{subfigure}{\linewidth}
\caption*{\vspace{-\baselineskip} \qquad\qquad $\mu/D = 0.4$}
\includegraphics[scale=1]{dU3_6e-2_2a.pdf}
\includegraphics[scale=1]{dU3_6e-2_2b.pdf}
\vspace{-\baselineskip}
\end{subfigure}
\begin{subfigure}{\linewidth}
\caption*{\vspace{-\baselineskip} \qquad\qquad $\mu/D = 0.6$}
\includegraphics[scale=1]{dU3_6e-2_4a.pdf}
\includegraphics[scale=1]{dU3_6e-2_4b.pdf}
\vspace{-\baselineskip}
\end{subfigure}
\begin{subfigure}{\linewidth}
\caption*{\vspace{-\baselineskip} \qquad\qquad $\mu/D = 0.9$}
\includegraphics[scale=1]{dU3_6e-2_7a.pdf}
\includegraphics[scale=1]{dU3_6e-2_7b.pdf}
\vspace{-\baselineskip}
\end{subfigure}
\caption[Spectral functions on the $\bullet$- and $\circ$-sites of the HSSH model at $U/D=3.6$ with doping $\varepsilon = \varepsilon_0 - \mu$ where $\varepsilon_0/D = -1.8$]{Spectral functions on the $\bullet$- and $\circ$-sites of the HSSH model at $U/D=3.6$ with doping $\varepsilon = \varepsilon_0 - \mu$ where $\varepsilon_0/D = -1.8$}
\end{figure}
\begin{figure}[htp!]\ContinuedFloat
\centering
\begin{subfigure}{\linewidth}
\caption*{\vspace{-\baselineskip} \qquad\qquad $\mu/D = 1.0$}
\includegraphics[scale=1]{dU3_6e-2_8a.pdf}
\includegraphics[scale=1]{dU3_6e-2_8b.pdf}
\vspace{-\baselineskip}
\end{subfigure}
\begin{subfigure}{\linewidth}
\caption*{\vspace{-\baselineskip} \qquad\qquad $\mu/D = 1.1$}
\includegraphics[scale=1]{dU3_6e-2_9a.pdf}
\includegraphics[scale=1]{dU3_6e-2_9b.pdf}
\vspace{-\baselineskip}
\end{subfigure}
\begin{subfigure}{\linewidth}
\caption*{\vspace{-\baselineskip} \qquad\qquad $\mu/D = 1.3$}
\includegraphics[scale=1]{dU3_6e-3_1a.pdf}
\includegraphics[scale=1]{dU3_6e-3_1b.pdf}
\vspace{-\baselineskip}
\end{subfigure}
\begin{subfigure}{\linewidth}
\caption*{\vspace{-\baselineskip} \qquad\qquad $\mu/D = 1.5$}
\includegraphics[scale=1]{dU3_6e-3_3a.pdf}
\includegraphics[scale=1]{dU3_6e-3_3b.pdf}
\vspace{-\baselineskip}
\end{subfigure}
\caption[Spectral functions on the $\bullet$- and $\circ$-sites of the HSSH model at $U/D=3.6$ with doping $\varepsilon = \varepsilon_0 - \mu$ where $\varepsilon_0/D = -1.8$]{Spectral functions on the $\bullet$- and $\circ$-sites of the HSSH model at $U/D=3.6$ with doping $\varepsilon = \varepsilon_0 - \mu$ where $\varepsilon_0/D = -1.8$. Note the doping induced phase transition at $\mu/D\approx 1.3$. Exploded detail plots shown in Fig.~\ref{fig:dopingpt}.\label{fig:phasymmsequence}}
\end{figure}
The calculation presented here is only for a single value of $U$ characteristic of the $U<U_c$ phase. In principle this work could straightforwardly be extended to capture other regions of the parameter space to develop a full phase diagram of the system in the $U$--$\mu$-plane.
\begin{figure}[ht!]
\includegraphics[scale=1]{dU3_6e-3_1azoom.pdf}
\includegraphics[scale=1]{dU3_6e-3_1bzoom.pdf}
\caption{Doping induced gap in spectrum on both sites at $\mu/D=1.3$.\label{fig:dopingpt}}
\end{figure}
\section{Outlook}
This chapter developed a scheme for treating the topological SSH model with interactions in the strongly correlated limit. New to the literature here is the reformulation of a topological insulator in infinite dimensions whose characteristics retain the distinction between topological and trivial configurations, and the subsequent exact solution in the non-perturbative strongly correlated case by means of DMFT+NRG. In particular, this calculation shows that a Mott transition may occur in a topological insulator.
The infinite dimensional Bethe SSH model devised for this calculation may be a platform for future experimental work. The standard $1d$ SSH model has been replicated by various quantum simulators, such as in cold atom experiments~\cite{zakobs}. Similar experiments could be engineered to replicate the Bethe SSH model and confirm the appearance of topological states on the alternating shells of the Bethe lattice.
The SSH model is a prototypical example of a topologically non-trivial system in $1d$.
A second prototypical example of a $1d$ topological system is the Kitaev superconducting wire~\cite{kitaev}\index{Kitaev superconductor}, of class $BD$I, which is similar to the class $A$III of the SSH model. A higher dimensional case of this model fitted to the Bethe lattice also exists and has not previously been reported in the literature.\footnote{The name ``Kitaev superconductor'' is employed here rather than the more generic ``Kitaev model'' to preserve the distinction between the superconducting wire and another ``Kitaev model'' which is a spin liquid model on a honeycomb lattice.}
The basic structure of the Kitaev superconductor is that of a $1d$ $p$-wave superconductor described by the Hamiltonian
\begin{equation}
\hat{H}_{\textsc{k}}
=
\sum_{j} \left[ \tensor*{\varepsilon}{_{j}} \opd{c}{j} \op{c}{j} + \tensor*{t}{_{j}} \left( \opd{c}{j+1} \op{c}{j} + \opd{c}{j} \op{c}{j+1} \right) + \tensor*{\Delta}{_{j}} \opd{c}{j+1} \opd{c}{j} + \tensor*{\Delta}{^*_{j}} \op{c}{j} \op{c}{j+1} \right]
\end{equation}
where $\tensor*{\Delta}{_j}$ is the superconducting order parameter.
In the Majorana\index{Majorana} representation, the Hamiltonian for this model reads
\begin{equation}
H_{\textsc{k}\gamma} =
\i \sum_{j} \Big\{ \tensor*{\varepsilon}{_j} \gamma_{[j,1} \gamma_{j,2]} + \tensor*{t}{_j} \gamma_{[j+1,1} \gamma_{j,2]}
- \tensor*{\Delta}{^*_j} \gamma_{j,(2|} \gamma_{j+1,|1)} + \tensor*{\Delta}{_j} \gamma_{j+1,(2|} \gamma_{j,|1)} \Big\}
\end{equation}
where $c_j = \frac{1}{2}(\gamma_{j,1} - \text{i} \gamma_{j,2})$.
The conventional index notations~\cite{schutz} $A_{[i} B_{j]} = \frac12 (A_i B_j - A_j B_i )$ and $A_{(i|j} B_{k|l)} = \frac12 (A_{ij} B_{kl} + A_{lj} B_{ki} )$ have been employed in this expression.
The superconducting order parameter $\Delta_j$ can be described by $\Delta_j = \lvert \Delta \rvert e^{\text{i}\varphi(j)}$.
On the $1d$ Kitaev superconductor, Majorana zero modes can be generated on every site provided that $\Delta_j$ alternates in sign from bond to bond,
\begin{equation}
\Delta_j = \begin{cases} +t_j & j \text{ odd} \\ -t_j & j \text{ even} \end{cases} \,.
\end{equation}
This is tantamount to the phase $\varphi$ oscillating rapidly between $0$ and $\pi$ such that $\Delta_j = -\Delta_{j+1}$. In the conventional parameterization of the Kitaev superconductor, the Majorana zero modes manifest only on the boundary sites of the wire.
These Majorana modes, however, only manifest themselves at the special point $\varepsilon_j = 0$. This is in contrast to the standard Kitaev superconductor in $1d$ where the Majorana modes manifest for the entire parameter regime $\lvert \varepsilon \rvert < \lvert 2 t \rvert$. In a similar manner to the SSH model discussed above, this configuration for the Kitaev superconductor can be mapped onto a Bethe lattice and the limit taken to infinite coordination number.
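The decoupling of the boundary Majoranas at this special point can be confirmed at the single-particle level. The sketch below (a minimal check, with a Bogoliubov--de Gennes basis and sign convention that are one common choice and not necessarily those of the text) diagonalizes the open Kitaev chain at $\varepsilon_j = 0$, $\lvert\Delta_j\rvert = t_j$ and finds the doubly counted pair of exact zero-energy eigenvalues separated from the bulk:

```python
import numpy as np

# BdG check of the Kitaev wire: at the sweet spot eps = 0, |Delta| = t the two
# boundary Majoranas decouple, giving one fermionic zero mode, i.e. a doubly
# counted pair of exact zero eigenvalues of the BdG matrix.
N, t, Delta, eps = 20, 1.0, 1.0, 0.0

h = np.zeros((N, N))                  # normal (hopping) block
d = np.zeros((N, N))                  # anomalous (pairing) block, antisymmetric
for j in range(N - 1):
    h[j + 1, j] = h[j, j + 1] = t
    d[j + 1, j], d[j, j + 1] = Delta, -Delta
h += eps * np.eye(N)

# BdG Hamiltonian in the (c, c^dagger) Nambu basis
H = np.block([[h, d], [-d.conj(), -h.conj()]])
E = np.sort(np.abs(np.linalg.eigvalsh(H)))

n_zero = int(np.sum(E < 1e-10))       # number of (numerically) exact zero modes
print(n_zero, E[2])                   # bulk gap sits above the zero modes
```

At the sweet spot every bulk Majorana pair is dimerized, so the finite-energy spectrum is flat and the two zero eigenvalues are exact even at finite $N$.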
A detailed DMFT analysis of this system is left as an avenue for future work. However, the Majorana decomposition briefly discussed here will be generalized and taken into a new context in the next chapter.
The approach for treating an interacting topological insulator presented here was based on constructing the infinite dimensional counterpart of a $1d$ model in class $A$III. A complementary approach which has appeared in the literature has treated the infinite dimensional limit of the $2d$ Chern insulator~\cite{interactingchern}, which is a topological insulator in the Cartan class $A$. Due to Bott periodicity, systems of this class are topologically non-trivial in dimensions $d \equiv 0 \pmod{2}$, that is, in all even dimensions.
This approach makes use of the Clifford algebra valuedness of the Hamiltonian to iteratively construct Hamiltonians in successively larger even dimensions. The limit to infinite dimensions is accomplished by taking the limit through only even dimensions. This strategy allows the application of DMFT to $2d$ Chern insulators.
This scheme is complementary to the approach developed in this chapter. It is well known that the topological phases of matter are dimension dependent~\cite{hk,tenfoldclassification}. The construction of this chapter can be viewed as a method of obtaining a well-defined infinite dimensional limit of a $1d$ topological insulator. The method of~\cite{interactingchern} on the other hand, can be viewed as a systematic way of taking the infinite dimensional limit of a $2d$ topological insulator.
This chapter studied the effects of strong interactions on a system whose non-interacting counterpart is topological. In contrast, the following chapters investigate strongly correlated systems and reveal how a notion of topology can be found within them.
\chapter{Conclusion\label{ch:conclusion}}
The work of this thesis centered around two paradigms in theoretical physics: models of topologically non-trivial electronic states and the construction of effective models for complex interacting many-body systems.
With regards to topological phenomena in condensed matter, a novel result developed here (\S\ref{ch:bethessh}) was the
adaptation of a prototypical $1d$ topological insulator to infinite dimensions thereby allowing the application of DMFT to produce an exact solution when strong correlations are included.
It was found that the topological state becomes broadened to a power-law diverging spectrum and that the system experiences a metal-insulator transition to a Mott insulator above a critical interaction strength.
The SSH model is also a prototypical, if somewhat tautological, example of a crystalline topological insulator. A potential generalization of the technique developed here is to other crystalline topological insulators to treat interactions with DMFT. This treatment may involve extensions of the base DMFT as well, such as multi-band or cluster DMFT.
Also within the topological paradigm, this thesis developed several classes of generalizations of the SSH model and demonstrated the cases in which these models do or do not exhibit topological features (\S\ref{ch:genssh}). The lessons learned here provide the basis of a toolkit for engineering $1d$ tight-binding chains with nearest-neighbor kinetics which possess a wide variety of desired spectral features. Given the similarities between the SSH model and the Kitaev superconductor, it is possible that generalizations of the Kitaev superconductor similar to the generalizations of the SSH model developed here also exist, with possible phases exhibiting Majorana zero modes. Such generalized models may be of interest to experiments attempting to utilize Majorana modes as qubits for quantum computation.
The second part of this thesis involved the development of auxiliary models for strongly correlated systems which replicate the dynamics of the correlations, but are themselves fully non-interacting (\S\ref{ch:aux}).
An application of this type of effective model was demonstrated for computing quantum transport through quantum dots.
This use of effective models has the potential to simplify transport calculations, as non-interacting transport formulas can be used rather than the more complicated transport formulas for fully interacting systems.
In principle it could be possible to circumvent the requirement of exactly solving the quantum dot impurity problem by instead employing specifically designed auxiliary models. The auxiliary chains as developed in this thesis are well understood in terms of the relation between their parameterization and their spectral output. Estimating the desired features of the dot self-energy would be sufficient to engineer an auxiliary system modeling the desired interactions. For more sophisticated quantum dots, such as multiorbital impurities, multilegged ladders would need to be employed, as referenced in \S\ref{sec:motttopologyoutlook}. By developing these types of auxiliary models and benchmarking them in a manner similar to the development of the toy model in \S\ref{sec:mttoy}, such multilegged models could also be employed to circumvent the necessity of exact solutions.
It was also found that these auxiliary models have features that make them interesting systems themselves, and their properties can lead to interesting interpretations of the original system. This was demonstrated in the construction of the auxiliary models for the Hubbard model, where it was revealed that the auxiliary models take the form of generalized SSH chains and that the Mott metal-insulator transition manifests itself as a topological phase transition in the auxiliary model.
Using features of the self-energy of the Hubbard model near the transition, a toy model was constructed which replicated the relevant qualitative features of this spectrum. The construction of this toy model built upon the understanding of generalized SSH models gathered from \S\ref{ch:genssh}.
Another type of effective model developed here was based on the use of novel Majorana decompositions of fermions to generate non-linear canonical transformations (\S\ref{ch:aux}). The key property exploited here was the observation that various compound Majorana operators can be combined to form objects which obey the usual fermionic commutation relations.
In addition to forming the basis of non-interacting auxiliary models for strongly correlated systems, these decompositions may lead to more interesting Majorana models and novel appearances of Majorana degrees of freedom in condensed matter systems. In the conventional Majorana framework, the Kitaev superconducting wire is a model on which Majorana degrees of freedom can be identified. The generalized Majorana framework developed here may facilitate the demonstration of Majorana degrees of freedom on other models for which the standard decomposition is not applicable. In particular, the polynomial type decomposition briefly referenced may have interesting applicability to interacting models where quartic terms appear in the Hamiltonian.
The various effective models developed in this thesis have ample opportunity to be developed to an even more sophisticated level and the case study applications investigated here may prove to be only a shallow foray into their full potential.
\chapter{Impurity Effective Models\label{ch:aux}}
The difficulties in dealing with strongly correlated systems motivate the development of effective models which capture the dynamics of the full interacting problem, but are themselves noninteracting.
The DMFT described and used in preceding chapters is itself an example of an effective model for treating strongly correlated systems. Rather than treating the entire interacting system, it treats an approximation of the system where interactions only occur locally at a single point within a larger non-interacting background.
This chapter is devoted to the development of two such similar effective models, the first being based on non-linear canonical transformations and Majorana\index{Majorana} degrees of freedom, and the second based on a novel auxiliary field mapping.
The generalized Majorana decomposition developed here is similar to previous work involving their use in non-linear canonical transformations of strongly correlated fermionic systems \cite{bazzanella,bazzanellakondo,bazzanellanlct,bazzanellamott}. In this previous work, Majoranas were employed to form representations of the symmetry groups of specific models, such as the Hubbard model~\cite{bazzanellamott} and Kondo lattice~\cite{bazzanellakondo}.
The approach to Majorana degrees of freedom taken in this thesis is conceptually similar to the geometric algebra framework of mathematical physics~\cite{hestenes,hestenessobczyk,doranlasenby}. In contrast to standard vector calculus in physics where vectors are taken as elements living in sections of the tangent bundle, in geometric algebra vectors representing physical quantities are taken to be elements of sections of a Clifford bundle. The inspiration here comes from using such a framework to manipulate Majorana degrees of freedom as they behave as Clifford algebra valued vectors, as well as providing a framework for manipulating higher-order tuples of Majoranas. In contrast to the work of~\cite{bazzanella,bazzanellakondo,bazzanellanlct,bazzanellamott}, the transformation defined in this thesis aims to be more general and not dependent on the symmetries of the specific model under consideration.
The second section of this chapter is devoted to the development and basic application of an auxiliary field mapping scheme which systematically maps interacting lattice models onto fully non-interacting lattice models that still reproduce the original dynamics.
The non-interacting equivalent model is not only simpler to treat, but offers insights into the underlying structure and dynamics of such systems.
An application showcasing the utility of this effective model is given for the case of calculating quantum transport through an interacting impurity.
This auxiliary field mapping will play a central role in the subsequent chapter, where it is applied to the Mott transition in the Hubbard model.
\section{Non-Linear Canonical Transformations\label{sec:majorana}}
The idea of exploiting non-linear canonical transformations as a method of treating strongly correlated systems in~\cite{bazzanella,bazzanellanlct} was inspired by the observation that the local symmetry group of the Hubbard model possesses a non-linear $U(1)$ symmetry~\cite{oestlundmele}. In~\cite{oestlundmele} it was noted that the local symmetry group of the Hubbard model is $G = SU(2)_S \otimes SU(2)_I \otimes U(1)_{NL} \otimes \mathbbm{Z}_2$ where the $SU(2)$ groups correspond to spin and isospin symmetries, and the $\mathbbm{Z}_2$ group is a parity symmetry which exchanges spin and isospin. The electromagnetic charge symmetry $U(1)_Q$ is a subgroup of the isospin group.
The Hubbard model also admits a local non-linear $U(1)_{NL}$ symmetry which acts as
\begin{align}
\opd{c}{j,\sigma} &\mapsto \opd{c}{j,\sigma} ( 1 - \op{n}{j,-\sigma} ) + \e^{2\i\chi} \opd{c}{j,\sigma} \op{n}{j,-\sigma} \,.
\end{align}
The action of this transformation is to change the phase of the doubly occupied state by $\e^{-2\i\chi}$ while preserving the zero and singly occupied states.
This transformation can be generated by $\opd{\tilde{c}}{\sigma} = R^\dagger(\chi) \opd{c}{\sigma} R(\chi)$ where
\begin{equation}
R(\chi) = \e^{2 \i \chi \left( \hat{n}_{\uparrow} - \frac12 \right) \left( \hat{n}_{\downarrow} - \frac12 \right)} \,.
\end{equation}
Under the conventional Majorana decomposition
\begin{align}
\opd{c}{\uparrow} &= \frac{\gamma_1 + \i \gamma_2}{2}
&
\opd{c}{\downarrow} &= \frac{\gamma_3 + \i \gamma_4}{2}
\,,
\end{align}
the generator of the non-linear transformation becomes
\begin{equation}
R(\chi) = \e^{- \i \frac{\chi}{2} \gamma_1 \gamma_2 \gamma_3 \gamma_4} \,.
\end{equation}
This generator may be applied directly to the Majorana basis as $\tilde{\gamma}_{i} = R^\dagger(\chi) \gamma_{i} R(\chi)$ yielding Majoranas in the transformed basis as
\begin{equation}
\tilde{\gamma}_{i} = \gamma_{i} \cos\chi + \sgn(\pi) \i \gamma_{j} \gamma_{k} \gamma_{l} \sin\chi
\end{equation}
where $i \neq j,k,l$ and the sign depends on the permutation $\pi$ of the Majoranas. As evidenced by this transformation, non-linear canonical transformations can be implemented in a Majorana basis in a straightforward manner, and such transformations also lead to the introduction of higher-order Majorana tuples as degrees of freedom.
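These identities can be checked mechanically in a matrix representation. The sketch below (a Jordan--Wigner construction; the overall signs and phases are convention dependent and may differ from those in the text) verifies that the fermionic and Majorana forms of $R(\chi)$ coincide, that $R^\dagger \gamma_1 R$ reproduces the rotated form with $\sgn(\pi) = -1$, and that only the double-occupancy sector acquires a pure phase $\e^{\pm 2\i\chi}$:

```python
import numpy as np

# Single Hubbard site: two fermions (up, down) built by Jordan-Wigner,
# four Majoranas gamma_1..gamma_4 in the convention c^dag = (g1 + i g2)/2.
a  = np.array([[0., 1.], [0., 0.]])          # annihilator in the |0>,|1> basis
I2, I4 = np.eye(2), np.eye(4)
Z  = np.diag([1., -1.])                      # Jordan-Wigner string

c_up, c_dn = np.kron(a, I2), np.kron(Z, a)
n_up = c_up.conj().T @ c_up
n_dn = c_dn.conj().T @ c_dn
g = [c_up + c_up.conj().T, 1j * (c_up - c_up.conj().T),
     c_dn + c_dn.conj().T, 1j * (c_dn - c_dn.conj().T)]

def expmi(A):                                # e^{iA} for Hermitian A
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

chi = 0.37
R_fermi = expmi(2 * chi * (n_up - I4 / 2) @ (n_dn - I4 / 2))
P = g[0] @ g[1] @ g[2] @ g[3]                # gamma_1 gamma_2 gamma_3 gamma_4
R_major = expmi(-(chi / 2) * P)              # Majorana form of the generator

# rotated Majorana and phase structure of the transformed creation operator
g1_rot = R_fermi.conj().T @ g[0] @ R_fermi
T = R_fermi.conj().T @ c_up.conj().T @ R_fermi
phase = T[3, 1] / T[2, 0]                    # |dn> -> |up dn> vs |0> -> |up>
```

The ratio `phase` compares the amplitude for creating the doubly occupied state with that for creating a singly occupied one; whether it comes out as $\e^{+2\i\chi}$ or $\e^{-2\i\chi}$ depends on the ordering convention for $R$ and $R^\dagger$.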
\subsection{General Majorana Decomposition}
Previous approaches to non-linear canonical transformations using Majorana degrees of freedom~\cite{bazzanella} were based on realizing representations of relevant symmetry groups in terms of Majoranas. For example, the transformations in~\cite{bazzanella} were based on realizing representations of $SU(2)_{S} \otimes SU(2)_{I}$, or more generally, $SU(2^n) \otimes SU(2^n)$.
The approach taken here is more abstract and less model dependent, and is conceptually similar to, and motivated by, the geometric algebra\index{geometric algebra} framework of mathematical physics~\cite{hestenes,hestenessobczyk,doranlasenby}. As a primer for the generalized Majorana formalism developed below, presented here is a brief review of geometric algebra. Geometric algebra and the associated geometric calculus form a framework which replaces the standard Gibbs-Heaviside vector algebra, where functions on a space(time) manifold $M$ are taken to be sections of the tangent bundle $TM$, with functions which are sections of the real Clifford bundle over $M$, a vector bundle whose fibers are a real Clifford algebra. Formally, a Clifford algebra is defined for a given vector space $V$ and a quadratic form $Q$. The present analysis of Majoranas takes the vector space to be over the field of real numbers $\mathbbm{R}$ and the quadratic form to be the Euclidean metric. An orthonormal basis for the real $k$-dimensional Clifford algebra $\text{Cl}_{k}(\mathbbm{R})$ is given by $\{e_i\}$ such that a Clifford vector $a$ can be written in components as $a = a^1 e_1 + \cdots + a^k e_k$.
The product of two Clifford vectors $a$ and $b$ is described by~\cite{hestenes,hestenessobczyk,doranlasenby}
\begin{equation}
a \, b = a \cdot b + a \extp b
\label{eq:geometricproduct}
\end{equation}
where $a \cdot b \vcentcolon= \frac12(a \, b + b \, a)$ is the symmetric part and $a \extp b \vcentcolon= \frac12(a \, b - b \, a)$ is the antisymmetric part.
The antisymmetric subalgebra is an exterior algebra, which allows the construction of higher rank vectors with bases given by exterior products of the vector basis. To illustrate, the graded basis elements of $\text{Cl}_{3}(\mathbbm{R})$ are
\begin{equation}
\begin{gathered}
1
\\
e_1 \qquad e_2 \qquad e_3
\\
e_1 \extp e_2 \qquad e_2 \extp e_3 \qquad e_3 \extp e_1
\\
e_1 \extp e_2 \extp e_3
\end{gathered}
\end{equation}
which are termed the scalar, vector, bivector, and trivector basis elements respectively. These vectors are bases of contravariant vector spaces. Given a metric, there exists an isomorphism between these higher rank vectors and exterior forms of the same rank. Since the basis elements are orthogonal to each other, $e_i \cdot e_j = \delta_{ij}$, the higher grade elements can be written directly in terms of the Clifford product rather than the wedge product, \textit{e.g.} $e_1 \extp e_2 \equiv e_1 e_2$. The highest rank vector of a Clifford algebra is also called the pseudoscalar as it functions as a scalar quantity which is odd under parity transformations. The unit pseudoscalar also functions as an operator in $\text{Cl}_k$ as the generator of duality transformations, analogously to the Hodge star operator in the algebra of exterior forms. In $d$ dimensions, the Hodge star operator is a map $\star \vcentcolon \bigwedge_n \to \bigwedge_{d-n}$ which maps $n$-forms to $(d-n)$-forms. This is accomplished in geometric algebra by means of taking the geometric product of an element with the unit pseudoscalar and contracting terms with the inner product.
\begin{align}
e_1 \, e_1 e_2 e_3 &= e_2 e_3
&
e_2 \, e_1 e_2 e_3 &= e_3 e_1
&
e_3 \, e_1 e_2 e_3 &= e_1 e_2
\end{align}
This shows that the unit bivectors in $\text{Cl}_{3}(\mathbbm{R})$ are dual to the unit vectors. A historical prototype of the vector algebra of $3d$ space was that of the quaternions $\mathbbm{H}$, the four-dimensional normed division algebra over the reals invented by William Rowan Hamilton in 1843. A quaternion $q \in \mathbbm{H}$ is defined as
\begin{equation}
q \vcentcolon= \{ a + b \boldsymbol{i} + c \boldsymbol{j} + d \boldsymbol{k} \;\vert\; a,b,c,d \in \mathbbm{R} ;\; \boldsymbol{i}^2 = \boldsymbol{j}^2 = \boldsymbol{k}^2 = \boldsymbol{i} \boldsymbol{j} \boldsymbol{k} = -1 \} \,.
\end{equation}
In terms of representing vector objects in $\mathbbm{R}^3$, the elements $\boldsymbol{i}$, $\boldsymbol{j}$, and $\boldsymbol{k}$ can be taken as an orthonormal basis. Within this vectorial approach, the complex quaternionic units do not actually correspond to the unit vectors $e_a$, but rather to the unit bivectors $e_a e_b$.
Historically, the use of quaternions to describe physical objects in space was superseded by the Gibbs-Heaviside algebra, which was computationally simpler and more visually intuitive. However, quaternions have proven useful in the computer graphics industry for implementing rotations in $3d$ graphics, where they have advantages over Euler angles, for instance by avoiding the issue of gimbal lock.
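Both of these statements, the duality relations above and the identification of the quaternion units with unit bivectors, can be spot checked in a faithful matrix representation of $\text{Cl}_3(\mathbbm{R})$, taking $e_a \mapsto \sigma_a$. The sign convention $\boldsymbol{i} = -e_2 e_3$, etc., is an assumption made here so that $\boldsymbol{i}\boldsymbol{j} = \boldsymbol{k}$ comes out with the standard orientation:

```python
import numpy as np

# Cl_3(R) represented faithfully by the Pauli matrices: e_a -> sigma_a.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]])
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
pseudo = e1 @ e2 @ e3                  # unit pseudoscalar of Cl_3

# duality: a vector times the pseudoscalar gives the dual bivector
dual_ok = (np.allclose(e1 @ pseudo, e2 @ e3)
           and np.allclose(e2 @ pseudo, e3 @ e1)
           and np.allclose(e3 @ pseudo, e1 @ e2))

# quaternion units realized as (sign-flipped) unit bivectors
qi, qj, qk = -(e2 @ e3), -(e3 @ e1), -(e1 @ e2)
```

With this choice one recovers the defining relations $\boldsymbol{i}^2 = \boldsymbol{j}^2 = \boldsymbol{k}^2 = \boldsymbol{i}\boldsymbol{j}\boldsymbol{k} = -1$ directly from matrix multiplication.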
Examples of Clifford algebras known in conventional physics are the algebras of the Pauli matrices and of the Dirac matrices.
Indeed the Clifford product Eq.~\eqref{eq:geometricproduct} is already known in conventional physics by means of the Pauli matrix identity
\begin{equation}
{\sigma}_a \; {\sigma}_b = \left(\vec{a} \cdot \vec{b}\right) \mathbbm{1} + \i \, \left(\vec{a} \times \vec{b}\right) \cdot \vec{\sigma}
\end{equation}
where ${\sigma}_a = \vec{a} \cdot \vec{\sigma}$ with Euclidean metric.
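The identity holds for arbitrary real vectors and is easily confirmed numerically; the check below uses a fixed, arbitrarily chosen pair $\vec{a}$, $\vec{b}$:

```python
import numpy as np

# Spot check of sigma_a sigma_b = (a.b) 1 + i (a x b).sigma
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

a = np.array([0.3, -1.2, 0.7])
b = np.array([1.1, 0.4, -0.5])

sa = np.einsum('i,ijk->jk', a, sigma)        # sigma_a = a . sigma
sb = np.einsum('i,ijk->jk', b, sigma)

lhs = sa @ sb
rhs = np.dot(a, b) * np.eye(2) + 1j * np.einsum('i,ijk->jk', np.cross(a, b), sigma)
```

The symmetric part reproduces the inner product and the antisymmetric part the cross product, mirroring the split of the Clifford product in Eq.~\eqref{eq:geometricproduct}.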
The use of Clifford algebras is also well known in the description of relativistic spinors~\cite{pdg}. In this context the internal Clifford metric possesses Lorentzian signature.
Interest in Majorana degrees of freedom in contemporary condensed matter literature is dominated by their potential application as qubits in quantum computation~\cite{kitaev,tqcreview}.
The conventional decomposition of fermionic degrees of freedom to Majorana degrees of freedom within the context of second-quantized operators is given by
\begin{align}
\opd{c}{a} &= \frac{\gamma_{1,a} + \i \gamma_{2,a}}{2}
&
\op{c}{a} &= \frac{\gamma_{1,a} - \i \gamma_{2,a}}{2}
\label{eq:stdmajorana}
\end{align}
where the Majorana operators obey the Clifford algebra relation
\begin{subequations}
\begin{align}
\{ \tensor*{\gamma}{_\mu} , \tensor*{\gamma}{_\nu} \} = 2 \tensor*{\delta}{_{\mu,\nu}}
\intertext{and are self-conjugate,}
\gamma_\mu^\dagger = \gamma_\mu \;.
\end{align}
\end{subequations}
The indices $\mu$ and $\nu$ represent all relevant degrees of freedom, such as lattice site and spin.
Immediate consequences of these relations are that Majorana operators square to the identity, $\gamma_\mu^2 = 1$, and obey antisymmetric statistics, $\tensor*{\gamma}{_\mu} \tensor*{\gamma}{_\nu} = - \tensor*{\gamma}{_\nu} \tensor*{\gamma}{_\mu}$ for $\mu \neq \nu$. A further consequence is that there is no notion of a Fock space for Majorana operators: since $\gamma^2 = 1$, there can be no vacuum state such that $\gamma \lvert \Omega \rangle = 0$.
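A concrete realization of these relations is the Jordan--Wigner construction (an implementation choice, not prescribed by the text), which builds $2N$ Majorana operators from $N$ complex fermions. The sketch below checks the Clifford relation, and hence $\gamma_\mu^2 = 1$ and the antisymmetric statistics, together with self-conjugacy, for $N = 3$:

```python
import numpy as np

# 2N Majoranas from N Jordan-Wigner fermions; verify {g_mu, g_nu} = 2 delta.
N = 3
a  = np.array([[0., 1.], [0., 0.]])
Z  = np.diag([1., -1.])
I2 = np.eye(2)

def site_op(op, j):                     # operator on site j with JW string
    mats = [Z] * j + [op] + [I2] * (N - j - 1)
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

gammas = []
for j in range(N):
    c = site_op(a, j)
    gammas += [c + c.conj().T, 1j * (c - c.conj().T)]

dim = 2**N
ok_clifford = all(np.allclose(gammas[m] @ gammas[n] + gammas[n] @ gammas[m],
                              2 * (m == n) * np.eye(dim))
                  for m in range(2 * N) for n in range(2 * N))
ok_selfconj = all(np.allclose(gm, gm.conj().T) for gm in gammas)
```

The same construction is reused below whenever explicit matrix Majoranas are needed.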
Majorana degrees of freedom as described here are sometimes referred to as ``Majorana fermions'' in the condensed matter literature due to their similarity with the Majorana fermions that appear in relativistic field theory which are their own anti-particle, a property which manifests as the relation $\gamma^\dagger = \gamma$. However, the Majorana degrees of freedom which appear in condensed matter contexts, such as in the Kitaev wire, do not generally obey fermionic statistics, but rather non-Abelian anyonic statistics. Although this property does not feature in this work, the characterization of Majorana degrees of freedom as ``fermions'' will be eschewed for this reason as well as to more easily distinguish between Majorana degrees of freedom and the complex fermion degrees of freedom.
The Clifford algebraic properties of the Majorana operators also allow for generalizations of this basic decomposition to operators which go beyond the single Majorana operator $\gamma$. Taking inspiration from the geometric algebra formalism discussed above, the following develops a new generalized formalism for decomposing fermions into Majorana degrees of freedom. This generalized decomposition is expressed in the form
\begin{align}
\opd{c}{} &= \frac{\Gamma^\dagger_\Re + \Gamma^\dagger_\Im}{2}
&
\op{c}{} &= \frac{\Gamma_\Re + \Gamma_\Im}{2}
\label{eq:genmajorana}
\end{align}
where $\Gamma_\Re$, $\Gamma_\Im$ are now arbitrary polynomial functions of the basic Majorana operators $\gamma$ and indices associated to the fermions and internal degrees of freedom have been suppressed for clarity.
In order to preserve the fermionic anticommutation relation $\{ \opd{c}{a} , \op{c}{b} \} = \tensor{\delta}{_{a,b}}$, the generalized Majorana operators must obey the conditions
\begin{align}
\tensor*{\Gamma}{_\Re^\dagger} &= \tensor*{\Gamma}{_\Re} \,,
&
\Gamma_\Re^2 &= +1 \,,
&
\tensor*{\Gamma}{_\Im^\dagger} &= -\tensor*{\Gamma}{_\Im} \,,
&
\Gamma_\Im^2 &= -1 \,,
&
&\text{and}
&
\{ \tensor*{\Gamma}{_\Re} , \tensor*{\Gamma}{_\Im} \} &= 0 \,.
\label{eq:genmajoranarelations}
\end{align}
The conjugation ($\dagger$) operation involves reversing the order of constituent elementary Majorana operators $\gamma$ as well as taking the complex conjugate. For example, for $\Gamma = \i \gamma_{\mu_1} \gamma_{\mu_2} \cdots \gamma_{\mu_n}$, conjugation returns $\Gamma^\dagger = -\i \gamma_{\mu_n} \gamma_{\mu_{n-1}} \cdots \gamma_{\mu_1}$.
Observe that the generalized decomposition Eq.~\eqref{eq:genmajorana} does not exhibit an explicit dependence on the imaginary unit, but rather it appears implicitly through the definition of $\Gamma_\Re$ and $\Gamma_\Im$ due to the algebraic properties of the $\dagger$ operation. In the case of the standard two-fold decomposition Eq.~\eqref{eq:stdmajorana}, $\Gamma_\Re = \gamma_1$ and $\Gamma_\Im = -\i \gamma_2$. In this case the imaginary unit appears within the definition of the object $\Gamma_\Im$. The imaginary unit however is not restricted to lie within the definition of $\Gamma_\Im$, nor must it appear there.
A first example of a generalized Majorana operator is $\Gamma_\Re = \i \gamma_1 \gamma_2 \gamma_3$. It can be confirmed to obey the relations \eqref{eq:genmajoranarelations}:
\begin{align*}
&\begin{aligned}[t]
\Gamma_\Re^2
&= (\i \gamma_1 \gamma_2 \gamma_3)(\i \gamma_1 \gamma_2 \gamma_3)
\\
&= - \gamma_1 \gamma_2 \gamma_3 \gamma_1 \gamma_2 \gamma_3
\\
&= - (-1)^3
\\
&= 1 \,,
\end{aligned}
&
&\begin{aligned}[t]
\Gamma_\Re^\dagger
&= -\i \gamma_3 \gamma_2 \gamma_1
\\
&= -\i (-1)^3 \gamma_1 \gamma_2 \gamma_3
\\
&= \Gamma_\Re \,.
\end{aligned}
\end{align*}
This calculation shows that a triplet of Majorana operators times the imaginary unit is algebraically equivalent to a single Majorana operator. As demonstrated here the imaginary unit may appear in the definition of $\Gamma_\Re$, even though $\Gamma_\Re$ captures the real part of the original complex fermion degree of freedom.
Similarly, a Majorana which obeys the properties of $\Gamma_\Im$ can be obtained from $-\i\Gamma_\Re = \gamma_1 \gamma_2 \gamma_3$.
In accordance with the definition \eqref{eq:genmajorana}, a fermion constructed from these operators is
\begin{align}
\opd{c}{} &= \frac{\i \gamma_1 \gamma_2 \gamma_3 + \gamma_4 \gamma_5 \gamma_6}{2} \,,
&
\op{c}{} &= \frac{\i \gamma_1 \gamma_2 \gamma_3 - \gamma_4 \gamma_5 \gamma_6}{2} \,.
\end{align}
This example demonstrates the rationale of not explicitly including the imaginary unit in the definition \eqref{eq:genmajorana}. The definition avoids the potentially misleading association of the postfactor of the imaginary unit with the component which has odd parity under Hermitian conjugation.
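The fermion built from the two Majorana triplets can be verified numerically. The sketch below realizes six elementary Majoranas via a Jordan--Wigner construction (an implementation choice, not part of the text) and checks that $\opd{c}{}$ so defined is nilpotent, obeys $\{\op{c}{}, \opd{c}{}\} = 1$, and that its adjoint reproduces the stated form of $\op{c}{}$:

```python
import numpy as np

# Six Majoranas from three Jordan-Wigner fermions (8-dimensional Fock space).
a, Z, I2 = np.array([[0., 1.], [0., 0.]]), np.diag([1., -1.]), np.eye(2)
fermions = [np.kron(np.kron(a, I2), I2),
            np.kron(np.kron(Z, a), I2),
            np.kron(np.kron(Z, Z), a)]
g = []
for c in fermions:
    g += [c + c.conj().T, 1j * (c - c.conj().T)]
g1, g2, g3, g4, g5, g6 = g

# trilinear fermion: c^dag = (i g1 g2 g3 + g4 g5 g6)/2
c_dag = (1j * (g1 @ g2 @ g3) + g4 @ g5 @ g6) / 2
c_ann = c_dag.conj().T
anti  = c_ann @ c_dag + c_dag @ c_ann
```

Nilpotency follows because the cross terms between the two triplets cancel, while the anticommutator collapses to the identity since one triplet squares to $+1$ and the other to $-1$.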
A pair of generalized Majoranas which lead to an unconventional decomposition are
\begin{align}
\Gamma_\Re &= \gamma_1 \,,
&
\Gamma_\Im &= \gamma_2 \gamma_3 \gamma_4 \,.
\end{align}
This pair yields an unusual representation, as it manifests as a purely $\mathbbm{R}$-valued decomposition
\begin{align}
\opd{c}{} &= \frac{\gamma_1 + \gamma_2 \gamma_3 \gamma_4}{2} \,,
&
\op{c}{} &= \frac{\gamma_1 + \gamma_4 \gamma_3 \gamma_2}{2} = \frac{\gamma_1 - \gamma_2 \gamma_3 \gamma_4}{2} \,.
\end{align}
The adjoint property manifests not as complex conjugation, but rather as the reversal of the Majorana polynomial.
A more elaborate example of this generalized transformation involves
\begin{align}
\Gamma_\Re &= \frac{\gamma_1 + \gamma_2}{\sqrt{2}} \,,
&
\Gamma_\Im &= \frac{\gamma_3 \gamma_4 \gamma_5 - \i \gamma_6}{\sqrt{2}} \,.
\label{eq:majoranapolynomial}
\end{align}
This example demonstrates that the fermions need not be Majorana monomials, but can in general be polynomials of higher-order Majorana terms. To preserve the correct anticommutation relations, the $\Gamma$ are restricted to be polynomials of odd degree.
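These algebraic properties can be checked in a concrete matrix representation. The sketch below (a standard Jordan-Wigner construction of the Majorana matrices, included purely for illustration; the helper names are hypothetical) verifies that $\i \gamma_1 \gamma_2 \gamma_3$ is Hermitian and squares to the identity, that $\gamma_4 \gamma_5 \gamma_6$ is anti-Hermitian, and that the composite fermion built from them obeys the canonical anticommutation relations.

```python
import numpy as np

# Pauli matrices for the Jordan-Wigner construction
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def majoranas(n_modes):
    """Return 2*n_modes Majorana matrices obeying {g_i, g_j} = 2 delta_ij."""
    out = []
    for k in range(n_modes):
        for P in (X, Y):
            factors = [Z] * k + [P] + [I2] * (n_modes - k - 1)
            m = np.array([[1.0 + 0j]])
            for f in factors:
                m = np.kron(m, f)
            out.append(m)
    return out

g = majoranas(3)                 # six Majoranas acting on an 8-dim space
GR = 1j * g[0] @ g[1] @ g[2]     # candidate Gamma_Re: Hermitian, squares to 1
GI = g[3] @ g[4] @ g[5]          # candidate Gamma_Im: anti-Hermitian, squares to -1
cdag = (GR + GI) / 2             # composite fermion creation operator
c = cdag.conj().T
Id = np.eye(8)
```

The reversal-as-adjoint property discussed above is what makes `GI` anti-Hermitian even though no imaginary unit appears in it.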
A key aspect of the Majorana operators that will be exploited in this work is the geometric interpretation of their Clifford algebraic structure.
In the Majorana representation, the Clifford algebra structure allows canonical transformations to be expressed as non-linear $U(1)$ transformations~\cite{bazzanella}. Such a transformation can be implemented by means of the generalized Euler identity
\begin{equation}
\e^{\pm \mathcal{I} N \theta} = \mathbbm{1} \cos(N\theta) \pm \mathcal{I} \sin(N\theta)
\label{eq:generaleuler}
\end{equation}
where $N \in \mathbbm{R}$, $\theta \in \mathbbm{R}/2\pi\mathbbm{Z}$, $\mathcal{I}^2 = -\mathbbm{1}$, and $\mathcal{I}$ must be power associative. This transformation amounts to a rotation in the $k$-dimensional Clifford algebra $\mathrm{Cl}_k$. This expression generalizes the conventional Euler identity, which appears as a special case for $\mathcal{I} = \pm\i$.
A unitary transformation operator acting on a Majorana operator takes the form of a rotor\index{rotor}~\cite{hestenes,hestenessobczyk,doranlasenby}
\begin{equation}
R(\theta) = \e^{\mathcal{I} \theta / 2}
\label{eq:rotor}
\end{equation}
whose actions are implemented by the transformation
\begin{equation}
\gamma' = R(\theta) \gamma R^\dagger(\theta)
\end{equation}
with $R^\dagger(\theta) = \e^{\mathcal{I}^\dagger \theta / 2} = \e^{-\mathcal{I} \theta / 2}$.
With a Hamiltonian in its Majorana representation, it is possible to perform a non-linear canonical transformation in Clifford algebra space to a new Majorana representation, which can then be recombined into a new fermion representation.
\subsubsection{Hubbard Atom}\label{nlctha}
An example of the utility of these non-linear canonical transformations is that of application to the Hubbard atom.
This case serves as an example of how a non-linear canonical transformation can transform an interacting model into a non-interacting one. This transformation relies on mixing the physical degrees of freedom of the Hubbard atom with auxiliary fermion degrees of freedom which are initially decoupled from the physical system. The auxiliary part of the system consists of two species of spinless fermions independently isolated from each other and the physical spinful fermions.
The Hamiltonian of the total system is
\begin{equation}
\hat{H}
= \underbrace{
\varepsilon \left( \opd{c}{\uparrow} \op{c}{\uparrow} + \opd{c}{\downarrow} \op{c}{\downarrow} \right) + U \opd{c}{\uparrow} \op{c}{\uparrow} \opd{c}{\downarrow} \op{c}{\downarrow}
}_{\text{Physical}}
+
\underbrace{
\varepsilon_f \hat{f}^\dagger \hat{f} + \varepsilon_g \hat{g}^\dagger \hat{g}
}_{\text{Auxiliary}}
\end{equation}
where $\op{c}{\uparrow}$, $\op{c}{\downarrow}$, $\hat{f}$, and $\hat{g}$ obey the usual fermionic anticommutation relations.
The Hilbert space may be broken into a physical component and an auxiliary component, the latter factorizing further into the two auxiliary species. The total Hilbert space for the combined system is $\mathcal{H} = \mathcal{H}_{\textsc{ha}} \otimes \mathcal{H}_{f} \otimes \mathcal{H}_{g}$ where $\mathcal{H}_{f}$ and $\mathcal{H}_{g}$ are the Hilbert spaces of the auxiliary fermions and $\dim\mathcal{H} = 16$.
The physical and auxiliary subspaces each support four Majorana operators, denoted in the following by $\gamma$ and $\mu$ respectively. The fermionic degrees of freedom of the original model can be transformed into a Majorana representation as
\begin{align}
\tensor*{c}{^\dagger_\uparrow} &= \frac{\gamma_1 + \i \gamma_2}{2} \,,
&
\tensor*{c}{^\dagger_\downarrow} &= \frac{\gamma_3 + \i \gamma_4}{2} \,,
\\
f^\dagger &= \frac{\mu_1 + \i \mu_2}{2} \,,
&
g^\dagger &= \frac{\mu_3 + \i \mu_4}{2} \,.
\end{align}
These $\gamma$ and $\mu$ degrees of freedom obey the characteristic Clifford algebra relations of Majorana operators
\begin{align}
\{ \gamma_i , \gamma_j \} &= 2 \delta_{ij}
=
\{ \mu_i , \mu_j \} \,,
&
\{ \gamma_i , \mu_j \} &= 0 \,.
\end{align}
Under this transformation the original Hamiltonian becomes
\begin{equation}\begin{aligned}[b]
H
&=
\begin{multlined}[t]
\frac{\varepsilon}{2} \left( 2 - \i \gamma_1 \gamma_2 - \i \gamma_3 \gamma_4 \right) + \frac{U}{4} \left( - \gamma_1 \gamma_2 \gamma_3 \gamma_4 - \i \gamma_1 \gamma_2 - \i \gamma_3 \gamma_4 + 1 \right) \\+ \frac{\varepsilon_f}{2} \left( 1 - \i \mu_1 \mu_2 \right) + \frac{\varepsilon_g}{2} \left( 1 - \i \mu_3 \mu_4 \right)
\end{multlined}
\\
H
&= -\i \Delta \left( \gamma_1 \gamma_2 + \gamma_3 \gamma_4 \right) - \frac{U}{4} \gamma_1 \gamma_2 \gamma_3 \gamma_4 - \i \frac{\varepsilon_f}{2} \mu_1 \mu_2 - \i \frac{\varepsilon_g}{2} \mu_3 \mu_4
+ \varepsilon + \frac{U}{4} + \frac{\varepsilon_f}{2} + \frac{\varepsilon_g}{2}
\end{aligned}\end{equation}
with \( \Delta = \frac{\varepsilon}{2} + \frac{U}{4} \) a factor which parameterizes the doping on the Hubbard atom.
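The Majorana rewriting above can be verified numerically. The following sketch (a Jordan-Wigner matrix representation of the physical site, used purely for illustration) checks that the fermionic and Majorana forms of the physical part of the Hamiltonian coincide as matrices for arbitrary $\varepsilon$ and $U$.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def majoranas(n_modes):
    """2*n_modes Majorana matrices obeying {g_i, g_j} = 2 delta_ij."""
    out = []
    for k in range(n_modes):
        for P in (X, Y):
            factors = [Z] * k + [P] + [I2] * (n_modes - k - 1)
            m = np.array([[1.0 + 0j]])
            for f in factors:
                m = np.kron(m, f)
            out.append(m)
    return out

g1, g2, g3, g4 = majoranas(2)     # four Majoranas of the spinful site
eps, U = -0.37, 1.0               # arbitrary test parameters
Id = np.eye(4)

cdag_up = (g1 + 1j * g2) / 2      # c^dag_up = (gamma_1 + i gamma_2)/2
cdag_dn = (g3 + 1j * g4) / 2
n_up = cdag_up @ cdag_up.conj().T.conj().T @ Id @ Id  # placeholder, replaced below
n_up = cdag_up @ cdag_up.conj().T
n_dn = cdag_dn @ cdag_dn.conj().T

# fermionic form of the physical Hamiltonian
H_fermi = eps * (n_up + n_dn) + U * n_up @ n_dn

# Majorana form, term by term as in the text
H_major = (eps / 2) * (2 * Id - 1j * g1 @ g2 - 1j * g3 @ g4) \
        + (U / 4) * (-g1 @ g2 @ g3 @ g4 - 1j * g1 @ g2 - 1j * g3 @ g4 + Id)
```

The identity $\opd{c}{\sigma}\op{c}{\sigma} = \tfrac12(1 - \i\gamma\gamma')$ is a representation-independent operator statement, so the matrix check suffices.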
The Hamiltonian is now of a form to which a non-linear canonical transformation can be applied based on \eqref{eq:generaleuler}.
To achieve this, it is useful to define the unit pseudoscalar $P$ for each Majorana subspace,
\begin{align}
P_\gamma &= \gamma_1 \gamma_2 \gamma_3 \gamma_4
&
&\text{and}
&
P_\mu &= \mu_1 \mu_2 \mu_3 \mu_4 \,,
\end{align}
which have the properties
\begin{align}
P_\gamma^2 &= +1 = P_\mu^2 \,,
&
\{ \gamma_j , P_\gamma \} &= 0 = \{ \mu_j , P_\mu \} \,,
&
[ \gamma_j , P_\mu ] &= 0 = [ \mu_j , P_\gamma ] \,,
&
[ P_\gamma , P_\mu ] &= 0 \,.
\end{align}
The unit pseudoscalar parameterizes a $U(1)$ symmetry within each subspace with transformation generator $\e^{\i P \theta}$.
The unit pseudoscalars enter into the definition of a rotation kernel which serves as the basis for the element $\mathcal{I}$ in Eq.~\eqref{eq:generaleuler},
\begin{subequations}
\begin{align}
\mathcal{I}_{\gamma,j} &= \i ( \gamma_j \mu_j ) \gamma_1 \gamma_2 \gamma_3 \gamma_4 = \i (\gamma_j \mu_j ) P_\gamma \,,
\\
\mathcal{I}_{\mu,j} &= \i ( \gamma_j \mu_j ) \mu_1 \mu_2 \mu_3 \mu_4 = \i ( \gamma_j \mu_j ) P_\mu \,.
\end{align}
\end{subequations}
A distinct rotation kernel is defined for each Majorana of each subspace. A specific example of one of the kernels is
\begin{equation}
\begin{aligned}
\mathcal{I}_{\mu,1}
&= \i (\gamma_1 \mu_1) \mu_1 \mu_2 \mu_3 \mu_4
\\
&= \i \gamma_1 \mu_2 \mu_3 \mu_4
\end{aligned}
\end{equation}
The principle behind this definition is that when a Majorana is acted upon by the operator \eqref{eq:generaleuler}, it contracts with elements of the rotation kernel such that the resulting expression, for a given $\theta$, is a properly normalized Majorana in a new basis.
A Majorana of the $\gamma$-basis can for example be transformed into a product of Majoranas of the $\mu$-basis by
\begin{equation}
\begin{aligned}[b]
\gamma_1 \mapsto \gamma^\prime_1(\theta)
&= \e^{\mathcal{I}_{\mu,1} \theta/2} \gamma_1 \e^{-\mathcal{I}_{\mu,1} \theta/2}
\\
&= \left( \cos\tfrac\theta2 + \i \gamma_1 \mu_2 \mu_3 \mu_4 \sin\tfrac\theta2 \right) \gamma_1 \left( \cos\tfrac\theta2 - \i \gamma_1 \mu_2 \mu_3 \mu_4 \sin\tfrac\theta2 \right)
\\
&= \gamma_1 \cos\theta - \i \mu_2 \mu_3 \mu_4 \sin\theta
\\
\gamma^\prime_1(\tfrac\pi2)
&= - \i \mu_2 \mu_3 \mu_4 \,.
\end{aligned}
\end{equation}
In this way, a Majorana degree of freedom from the physical subspace may be canonically transformed to a function of Majorana degrees of freedom of the auxiliary subspace.
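This rotation can be checked explicitly. The sketch below (an assumed Jordan-Wigner representation of the eight Majoranas; helper names are hypothetical) builds the rotor directly from the generalized Euler identity \eqref{eq:generaleuler}, which is legitimate because the kernel squares to $-\mathbbm{1}$, and verifies the transformation of $\gamma_1$.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def majoranas(n_modes):
    """2*n_modes Majorana matrices obeying {g_i, g_j} = 2 delta_ij."""
    out = []
    for k in range(n_modes):
        for P in (X, Y):
            factors = [Z] * k + [P] + [I2] * (n_modes - k - 1)
            m = np.array([[1.0 + 0j]])
            for f in factors:
                m = np.kron(m, f)
            out.append(m)
    return out

m = majoranas(4)                 # sixteen-dim space, eight Majoranas
gam, mu = m[:4], m[4:]           # physical (gamma) and auxiliary (mu) sets
Id = np.eye(16)

# rotation kernel I_{mu,1} = i gamma_1 mu_2 mu_3 mu_4
K = 1j * gam[0] @ mu[1] @ mu[2] @ mu[3]
theta = 0.7

# generalized Euler identity: exp(K theta/2) = cos(theta/2) + K sin(theta/2)
R = np.cos(theta / 2) * Id + np.sin(theta / 2) * K
gp = R @ gam[0] @ R.conj().T
expected = np.cos(theta) * gam[0] - 1j * np.sin(theta) * mu[1] @ mu[2] @ mu[3]
```

At $\theta = \pi/2$ the physical Majorana is mapped entirely into the auxiliary subspace, as in the text.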
With this rotation kernel the original Hamiltonian at half-filling, $\varepsilon = - U/2$, dropping the overall constant terms,
\begin{align}
H &= - \frac{U}{4} \gamma_1 \gamma_2 \gamma_3 \gamma_4 - \i \frac{\varepsilon_f}{2} \mu_1 \mu_2 - \i \frac{\varepsilon_g}{2} \mu_3 \mu_4
\end{align}
can be put into a new form with respect to a new Majorana basis.
Each Majorana of the phase space can be canonically transformed using the rotor operator\index{rotor} with a specified angle for each Majorana
\begin{subequations}
\begin{align}
\gamma'_j &= R_{\mu,j}(\theta_{\mu,j}) \, \gamma_j \, R_{\mu,j}^\dagger(\theta_{\mu,j})
\\
\mu'_j &= R_{\gamma,j}(\theta_{\gamma,j}) \, \mu_j \, R_{\gamma,j}^\dagger(\theta_{\gamma,j})
\end{align}
\end{subequations}
The angles for each rotor are chosen to be
\begin{align}
\left. \begin{matrix} \theta_{\gamma,1} \\ \theta_{\mu,1} \end{matrix} \right\}
&= \frac\pi2
&
\left. \begin{matrix} \theta_{\gamma,2} ,\, \theta_{\gamma,3} ,\, \theta_{\gamma,4} \\ \theta_{\mu,2} ,\, \theta_{\mu,3} ,\, \theta_{\mu,4} \end{matrix} \right\}
&= 0
\end{align}
Following this parameterization, the Majoranas in the new rotated basis are
\begin{align}
&\begin{aligned}
\gamma'_1 &= \gamma_1
\\
\gamma'_2 &= \i \gamma_3 \gamma_4 \mu_1
\\
\gamma'_3 &= -\i \gamma_2 \gamma_4 \mu_1
\\
\gamma'_4 &= \i \gamma_2 \gamma_3 \mu_1
\end{aligned}
&
&\begin{aligned}
\mu'_1 &= -\i \gamma_2 \gamma_3 \gamma_4
\\
\mu'_2 &= \mu_2
\\
\mu'_3 &= \mu_3
\\
\mu'_4 &= \mu_4
\end{aligned}
\label{eq:nlctangles}
\end{align}
In the rotated basis, the Hamiltonian at half-filling takes the form
\begin{equation}
H' = -\i\frac{U}{4} \gamma'_1 \mu'_1 + \frac{\varepsilon_f}{2} \gamma'_2 \gamma'_3 \gamma'_4 \mu'_2 - \i\frac{\varepsilon_g}{2} \mu'_3 \mu'_4 \,.
\end{equation}
As the $f$ and $g$-sites are not coupled to the original physical degrees of freedom of the Hubbard atom, they can be considered to be ``gauge'' degrees of freedom, meaning that the physics is independent of the parameterization of the $f$ and $g$ orbitals. A convenient gauge choice is $\varepsilon_f = 0$ and $\varepsilon_g = \frac{U}{2}$. This leads to the Hamiltonian taking the form
\begin{equation}
H' = -\i \frac{U}{4} \left( \gamma'_1 \mu'_1 + \mu'_3 \mu'_4 \right) \,.
\label{eq:dimerinmajoranans}
\end{equation}
In this form, the remaining Majoranas can be recombined into a new set of fermions,
\begin{align}
a^\dagger &= \frac{\mu'_3 + \i \gamma'_1}{2}
&
b^\dagger &= \frac{\mu'_1 + \i \mu'_4}{2} \,,
\end{align}
and the Hamiltonian is then recast into the form
\begin{equation}
H' = t \left( a^\dagger b + b^\dagger a \right)
\label{eq:nlctha}
\end{equation}
where $\displaystyle t = \frac{U}{2}$. This is the form of a non-interacting dimer. This overall process of transforming the Hubbard atom with gauge fermions to a non-interacting dimer is illustrated in Fig.~\ref{fig:hubbardatom}.
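The equivalence can also be confirmed at the level of spectra. In the sketch below (an assumed Jordan-Wigner encoding of the four modes $\uparrow, \downarrow, f, g$; placing the dimer on two of the modes is purely for comparing eigenvalue multisets), the interacting Hamiltonian with the gauge choice $\varepsilon_f = 0$, $\varepsilon_g = U/2$ and the non-interacting dimer with $t = U/2$, accompanied by two decoupled zero-energy modes, have identical spectra.

```python
import numpy as np

sm = np.array([[0, 1], [0, 0]], dtype=complex)   # fermionic lowering operator
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def annihilator(k, n_modes=4):
    """Jordan-Wigner annihilation operator for mode k."""
    factors = [Z] * k + [sm] + [I2] * (n_modes - k - 1)
    m = np.array([[1.0 + 0j]])
    for f in factors:
        m = np.kron(m, f)
    return m

U = 1.0
a = [annihilator(k) for k in range(4)]           # modes: up, dn, f, g
n = [x.conj().T @ x for x in a]

# Hubbard atom at half filling plus free gauge fermions (eps_f = 0, eps_g = U/2)
H_int = -U / 2 * (n[0] + n[1]) + U * n[0] @ n[1] + (U / 2) * n[3]

# non-interacting dimer with t = U/2 on modes 2 and 3;
# modes 0 and 1 are left as decoupled zero-energy modes
t = U / 2
H_dimer = t * (a[2].conj().T @ a[3] + a[3].conj().T @ a[2])

spec_int = np.sort(np.linalg.eigvalsh(H_int))
spec_dimer = np.sort(np.linalg.eigvalsh(H_dimer))
```

Both spectra consist of the levels $\{-U/2, 0, +U/2\}$ with multiplicities $\{4, 8, 4\}$.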
\begin{figure}[htp!]
\centering
\begin{tikzpicture}[every path/.style={in=60,out=120,looseness=6}]
\node[circle,draw=black,fill=black!10,thick,inner sep=6pt] (H) at (-4,0){};
\node[circle,draw=black,thick,inner sep=6pt] (a) at (2.5,0){};
\node[circle,draw=black,thick,inner sep=6pt] (b) at (4,0){};
%
\node[circle,draw=black,thick,inner sep=6pt] (f) at (-2.5,0.75){};
\node[circle,draw=black,thick,inner sep=6pt] (g) at (-2.5,-0.75){};
%
\node[below=8pt] at (f) {$\varepsilon_f$};
\node[below=8pt] at (g) {$\varepsilon_g$};
%
\node at (H) {$\uparrow\downarrow$};
\node[below=8pt] at (H) {$\varepsilon$};
\node[below=8pt] at (a) {$\phantom{b}a\phantom{b}$};
\node[below=8pt] at (b) {$b$};
%
\path (H) edge[-latex,line width=0.75pt,double distance=0.5pt] node[above] {$U$} (H);
%
\node at (0,0) {$\Rightarrow \; \begin{Bmatrix} \gamma \\ \mu \end{Bmatrix} \overset{\textsc{nlct}}{\mapsto} \begin{Bmatrix} \gamma' \\ \mu' \end{Bmatrix} \;\Rightarrow$};
%
\draw[-,line width=1pt] (a)--(b) node[midway,above] {$t$};
\end{tikzpicture}
\caption[Non-linear canonical transformation of the interacting Hubbard atom to a non-interacting dimer using gauge degrees of freedom]{Non-linear canonical transformation of the interacting Hubbard atom to a non-interacting dimer using gauge degrees of freedom ($f$ and $g$).\label{fig:hubbardatom}}
\end{figure}
It can be shown that the Green functions for the original physical system and the auxiliary system coincide:
\begin{equation}
\Green{\op{c}{\uparrow}}{\opd{c}{\uparrow}}_z = \Green{\op{a}{}}{\opd{a}{}}_z \,.
\end{equation}
The above calculation was performed on the Hubbard atom with particle-hole symmetry at half-filling at zero temperature. It is possible to extend this calculation to more general cases, although the necessary algebraic manipulations become much more involved. For the particle-hole symmetric case, the angles for the rotors in Eq.~\eqref{eq:nlctangles} could be chosen by inspection of the desired form of the final Hamiltonian \eqref{eq:dimerinmajoranans}. For the particle-hole asymmetric case, the choice of angles in principle involves a system of coupled equations for the eight rotor angles. A solution to this set of equations which transforms the particle-hole asymmetric Hubbard atom to a non-interacting system remains to be found.
The preceding discussion shows how the concept of decomposing fermions into Majorana\index{Majorana} degrees of freedom can be generalized into a more elaborate scheme.
Within this scheme, nontrivial and non-linear canonical transformations may be applied which manifest as rotations in the Clifford algebra space spanned by the Majorana degrees of freedom, generated by the generalized Euler identity~\eqref{eq:generaleuler}. The generalized Euler identity opens up the possibility of even more elaborate transformations than those considered in the previous discussion. Following the example in Eq.~\eqref{eq:majoranapolynomial}, it is possible to find a representation of the rotation kernel $\mathcal{I}$ which is a Majorana polynomial, such as $\mathcal{I} = \frac{\gamma_3 \gamma_4 \gamma_5 - \i \gamma_6}{\sqrt{2}}$.
An additional potential aspect which could be generalized is the internal metric within the Majorana algebra. The metric appears in the anticommutation relation $\{ \gamma_a , \gamma_b \} = 2 g_{ab}$ where $g$ is the metric tensor which is usually taken to be Euclidean. This could be modified to be a Riemannian metric. It remains to be shown whether such a scheme would preserve the appropriate Hermitian fermion statistics, or if this generalization would only be applicable to non-Hermitian systems.
A potential continuation of exploiting the Clifford algebraic properties obeyed by the Majorana degrees of freedom is to develop a Majorana calculus constructed along the lines of geometric calculus~\cite{hestenessobczyk,doranlasenby}, where the existence of a geometric object which functions algebraically as the imaginary unit $\sqrt{-1}$ facilitates a generalization of complex analysis to higher dimensions. Within the framework of geometric calculus the notion of a complex number $z = x + \i y$ is generalized to $\mathcal{Z} = x + \mathcal{I} y$ where as in the above $\mathcal{I}$ has some internal structure with a geometric interpretation and has the property that $\mathcal{I}^2 = -1$. The standard complex analysis in geometric calculus can be reproduced using the Clifford algebra space $\text{Cl}_{2}(\mathbbm{R})$ with the ``imaginary'' unit being the unit pseudoscalar $\mathcal{I} = e_1 \extp e_2$. A first generalization which goes beyond this is in the $3d$ case where $\mathcal{I} = e_1 \extp e_2 \extp e_3$. It is in this sense that geometric calculus can extend complex analysis to higher dimensions. Conceptually, this calculus could be interpreted as a variant of the Gra{\ss}mann calculus used in the calculation of fermionic path integrals, although it is not immediately obvious what a physically meaningful integrand might be.
This non-linear canonical transformation is an analytic transformation which maps interacting degrees of freedom to non-interacting ones. It suffers, however, from considerable algebraic complexity.
As such, it is unwieldy for practical purposes, but serves as an illustration showing that formally an analytic canonical transformation from interacting degrees of freedom to non-interacting degrees of freedom does exist.
\section{Auxiliary Chain Mapping\label{sec:finitemapping}}
The preceding framework presented a method of converting an interacting system to a non-interacting one by means of non-linear canonical transformations and the introduction of gauge degrees of freedom. For simple cases such as the Hubbard atom with particle-hole symmetry, an analytic transformation is derivable. However, even for cases as simple as the Hubbard atom without particle-hole symmetry, the method quickly becomes drastically more complicated to implement, and at this stage of its development it serves more as an illustrative proof-of-concept than as a generally applicable tool for delivering quantitative results.
This section presents a mapping scheme which likewise transforms an interacting system to a non-interacting one, but which can be applied to a much wider range of systems. The method is similar in spirit to the previous scheme, but it is founded on a different technical basis, that of Green functions.
This section builds up the formalism of constructing auxiliary systems starting from the simplest cases with finite degrees of freedom and treating systems of gradually increasing complexity, including finite temperature effects, before then describing the method for treating systems of infinite degrees of freedom in the thermodynamic limit.
The effective models constructed here take the form of $1d$ tight-binding chains with nearest-neighbor dynamics. This form is chosen for its computational simplicity, but in principle the mapping could be performed onto other prescribed systems. The auxiliary mapping is not itself a solution strategy for solving strongly correlated systems, but rather enables a reinterpretation of their solutions: a prerequisite for constructing the auxiliary model is a solution to the original physical system. Nevertheless, the solutions derived below may serve as a starting point for approximate toy-model solutions to certain interacting systems.
\subsection{Exact Mappings for Finite Systems}
The auxiliary chain effective models are first constructed here for finite systems of modest size which can be solved using exact diagonalization.
For a system of finite size, the spectral function consists of a finite number of discrete poles. In terms of the effective chain model, this corresponds to a Green function described by a continued fraction of finite depth.
\subsubsection{Hubbard Atom}
The simplest example of an application of this mapping is the Hubbard model in the atomic limit of vanishing hopping. This system reduces to a set of isolated Hubbard atoms, each of which consists of a single site of interacting fermions with the Hamiltonian given by
\begin{equation}
\hat{H}_{\textsc{h.a.}} = \varepsilon \left( \opd{c}{\uparrow} \op{c}{\uparrow} + \opd{c}{\downarrow} \op{c}{\downarrow} \right) + U \opd{c}{\uparrow} \op{c}{\uparrow} \opd{c}{\downarrow} \op{c}{\downarrow} \,.
\end{equation}
As the sites are decoupled, each site can be analyzed independently.
As calculated by the Green function equation-of-motion approach in \S\ref{sec:hubbardatomgf}, the Green function for the Hubbard atom is
\begin{align}
\Green{\op{c}{\sigma}}{\opd{c}{\sigma}}_z
&=
\frac{1 - \langle \op{n}{-\sigma} \rangle}{z - \varepsilon} + \frac{\langle \op{n}{-\sigma} \rangle}{z - \varepsilon - U} \,.
\intertext{This Green function can be written in continued fraction form as}
\Green{\op{c}{\sigma}}{\opd{c}{\sigma}}_z
&=
\cfrac{1}{z - \varepsilon - \langle \op{n}{-\sigma} \rangle U - \cfrac{( 1 - \langle \op{n}{-\sigma} \rangle ) \langle \op{n}{-\sigma} \rangle U^2}{z - \varepsilon - ( 1 - \langle \op{n}{-\sigma} \rangle ) U}} \,. \label{eq:hacontfrac}
\end{align}
This Green function can be reinterpreted as the Green function for an auxiliary system as
\begin{equation}
\Green{\op{a}{1}}{\opd{a}{1}}_z
=
\cfrac{1}{z - \epsilon_1 - \cfrac{\tilde{V}^2}{z - \epsilon_2}}
\end{equation}
which is precisely the Green function of a tight-binding dimer with on-site potentials $\epsilon_1$ and $\epsilon_2$ and hopping amplitude $\tilde{V}$ between the two sites.
In terms of the degrees of freedom of the original physical system, the parameters of the effective system are
\begin{align}
\epsilon_1 &= \varepsilon + \langle \op{n}{-\sigma} \rangle U \,,&
\epsilon_2 &= \varepsilon + ( 1 - \langle \op{n}{-\sigma} \rangle ) U
\,,&
\tilde{V}^2 &= ( 1 - \langle \op{n}{-\sigma} \rangle ) \langle \op{n}{-\sigma} \rangle U^2
\,,
\intertext{or in the particle-hole symmetric case $\varepsilon = -\frac{U}{2}$ with $T \ll U$, where $\langle \op{n}{-\sigma} \rangle = \frac12 = \langle \op{n}{+\sigma} \rangle$, the parameters are}
\epsilon_1 &= 0 \,,&
\epsilon_2 &= 0 \,,
&
\tilde{V} &= \frac{U}{2} \,.
\end{align}
As in \S\ref{nlctha} (\textit{cf.} the calculation leading to Eq.~\eqref{eq:nlctha}), this shows again that there exists a formal mapping from the single-site interacting Hubbard atom to a non-interacting tight-binding dimer. This mapping is depicted schematically in Fig.~\ref{fig:haauxmap}.
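The equivalence of the two-pole Lehmann form and the continued fraction \eqref{eq:hacontfrac} holds for arbitrary filling and can be confirmed numerically (a minimal sketch; the parameter values are arbitrary):

```python
import numpy as np

eps, U, n = -0.4, 1.0, 0.3          # arbitrary level, interaction, filling
zs = np.array([0.2 + 0.1j, -0.9 + 0.3j, 1.5 + 2.0j])  # test frequencies

# two-pole (Lehmann) form of the Hubbard-atom Green function
G_poles = (1 - n) / (zs - eps) + n / (zs - eps - U)

# continued-fraction form of the same Green function
G_cf = 1 / (zs - eps - n * U
             - (1 - n) * n * U**2 / (zs - eps - (1 - n) * U))
```

Rationalizing the continued fraction shows the two expressions are identical as rational functions of $z$, not merely close at sample points.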
Following again the discussion in \S\ref{sec:hubbardatomgf}, in more general circumstances the expectation value of the fermion number is given by
\begin{equation}
\langle \op{n}{-\sigma} \rangle = \int \d\omega f(\omega) \mathcal{A}_{-\sigma}(\omega)
\end{equation}
where
$f(\omega)$ is the Fermi-Dirac distribution. With $-U < \varepsilon < 0$, the expectation value of the number operator evaluates to
\begin{equation}
\begin{aligned}[b]
\langle \op{n}{\pm\sigma} \rangle
&= f(\varepsilon) (1-\langle \op{n}{\mp\sigma} \rangle) + f(\varepsilon+U) \langle \op{n}{\mp\sigma} \rangle
\\
\langle \op{n}{\pm\sigma} \rangle
&= \frac{f(\varepsilon)}{1-f(\varepsilon+U)+f(\varepsilon)} \,.
\end{aligned}
\end{equation}
This shows that the functional form of the auxiliary dimer remains the same at finite temperature, but the exact values of the parameters change.
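The closed form above can be checked numerically (a minimal sketch with arbitrary parameters): it solves the self-consistency condition at any temperature, and for $T \ll U, |\varepsilon|$ it reduces to $\langle \op{n}{\sigma} \rangle = \tfrac12$, recovering the zero-temperature values quoted earlier.

```python
import numpy as np

def fermi(w, T):
    """Fermi-Dirac distribution at temperature T."""
    return 1.0 / (np.exp(w / T) + 1.0)

eps, U = -0.3, 1.0                  # a point with -U < eps < 0

def occupation(T):
    """Closed-form <n> for the Hubbard atom at temperature T."""
    return fermi(eps, T) / (1 - fermi(eps + U, T) + fermi(eps, T))

T = 0.2
n = occupation(T)
# self-consistency: n = f(eps)(1 - n) + f(eps + U) n
residual = n - (fermi(eps, T) * (1 - n) + fermi(eps + U, T) * n)
```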
\begin{figure}[htp!]
\centering
\begin{tikzpicture}[every path/.style={in=60,out=120,looseness=6}]
\node[circle,draw=black,fill=black!10,thick,inner sep=1pt] (h) at (-2,0){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt] (a1) at (2,0){$\phantom{\uparrow\downarrow}$};
\node[rectangle,draw=black,thick,inner sep=1pt] (a2) at (4,0){$\phantom{\uparrow\downarrow}$};
%
\node[below=9pt] at (h) {$\varepsilon$};
%
\draw[line width=1.2pt](a1)--(a2) node[midway,above] {$\tilde{V}$};
%
\path (h) edge[-latex,line width=0.75pt,double distance=0.5pt] node[above] {$U$} (h);
\end{tikzpicture}
\caption{Mapping of the Hubbard atom (left) to a non-interacting auxiliary system (right).\label{fig:haauxmap}}
\end{figure}
While the continued fraction form of the Hubbard atom Green function \eqref{eq:hacontfrac} is well-known in the literature, its equivalent interpretation as the Green function of a non-interacting dimer is new.
\subsubsection{Anderson Dimer}
The next simplest system to consider is that of the Anderson dimer, a model which consists of a single interacting site hybridized to a non-interacting bath comprising a single lattice site. For simplicity, the analysis will first be performed at particle-hole symmetry at zero temperature. The finite-temperature case of the Anderson dimer will be analyzed in a following subsection, as will Anderson models with larger bath sizes. The Hamiltonian of the Anderson dimer at half-filling, $\varepsilon_d = -U/2$, is given by
\begin{equation}
\op{H}{\text{dimer}} = U \left( \opd{d}{\uparrow} \op{d}{\uparrow} - \tfrac12 \right) \left( \opd{d}{\downarrow} \op{d}{\downarrow} - \tfrac12 \right) + V \smashoperator{\sum_{\sigma\in\{\uparrow,\downarrow\}}}\ \left( \opd{d}{\sigma} \op{c}{\sigma} + \opd{c}{\sigma} \op{d}{\sigma} \right)
\end{equation}
where the $\op{d}{}$ operators act on the impurity site and the $\op{c}{}$ operators on the bath.
\begin{figure}[htp!]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\centering
\begin{tikzpicture}[every path/.style={in=60,out=120,looseness=6},scale=0.75]
\node[circle,draw=black,fill=black!10,thick,inner sep=1pt] (3) at (0,0){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt] (4) at (2,0){$\phantom{\uparrow\downarrow}$};
%
\node[below=9pt] at (3) {$\varepsilon_d$};
%
\draw[line width=1.2pt](3)--(4) node[midway,above] {$V$};
%
\path (3) edge[-latex,line width=0.75pt,double distance=0.5pt] node[above] {$U$} (3);
%
\node[draw=none,thick,rectangle,inner sep=6pt] (a1) at (0,-1.5) {};
\node[draw=none,thick,rectangle,inner sep=6pt] (a2) at (0,-3) {};
\draw[draw=none,line width=1.2pt](3)--(a1) node[midway,left] {};
\draw[draw=none,line width=1.2pt](a1)--(a2) node[midway,left] {};
%
\node at ($(4)+(2,1)$) {\subref*{fig:andersondimerschematic}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:andersondimerschematic}}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\centering
\begin{tikzpicture}[every path/.style={in=60,out=120,looseness=6},scale=0.75]
\node[circle,draw=black,thick,inner sep=1pt] (3) at (0,0){$\phantom{\uparrow\downarrow}$};
\node[circle,draw=black,thick,inner sep=1pt] (4) at (2,0){$\phantom{\uparrow\downarrow}$};
%
\draw[line width=1.2pt](3)--(4) node[midway,above] {$V$};
%
\node[draw=black,thick,rectangle,inner sep=6pt] (a1) at (0,-1.5) {};
\node[draw=black,thick,rectangle,inner sep=6pt] (a2) at (0,-3) {};
\draw[line width=1.2pt](3)--(a1) node[midway,left] {$\tilde{V}$};
\draw[line width=1.2pt](a1)--(a2) node[midway,left] {$\tilde{t}_1$};
%
\node at ($(4)+(2,1)$) {\subref*{fig:dimerauxlatt}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:dimerauxlatt}}
\end{subfigure}
\caption{Mapping of the Anderson dimer \subref{fig:andersondimerschematic} to a non-interacting auxiliary system \subref{fig:dimerauxlatt}.}
\end{figure}
The ground state of the Anderson dimer lies in the subspace with quantum numbers $Q=2$ and $S_z=0$, where the Hamiltonian is
\begin{equation}
\boldsymbol{H}_{2,0} = \begin{pmatrix*}[r] -\frac{U}{2} & 0 & V & V \\ 0 & -\frac{U}{2} & -V & -V \\ V & -V & 0 & 0 \\ V & -V & 0 & 0 \end{pmatrix*}
\end{equation}
in terms of the basis
\begin{equation}
\left\lvert \phi \right\rangle_{2,0}
=
\begin{pmatrix}
\lvert \uparrow , \downarrow \rangle
\\
\lvert \downarrow , \uparrow \rangle
\\
\lvert \uparrow\!\downarrow , - \rangle
\\
\lvert - , \uparrow\!\downarrow \rangle
\end{pmatrix} \,.
\end{equation}
The Schr\"odinger equation for the Hamiltonian of this subspace is
\begin{equation}
\boldsymbol{H}_{2,0} \left\lvert 2 , 0 ; j \right\rangle = E^j_{2,0} \left\lvert 2 , 0 ; j \right\rangle
\end{equation}
with eigenenergies $E^j_{2,0}$ and eigenstates $\left\lvert 2 , 0 ; j \right\rangle$ with $j\in\{1,\ldots,4\}$. The ground state energy of this subspace is $E^1_{2,0} = -\frac14 ( U + \sqrt{U^2 + 64 V^2} )$.
The system also allows excited states with quantum numbers $Q=3$ and $S_z=1/2$ where the Hamiltonian in this subspace is
\begin{equation}
\boldsymbol{H}_{3,\frac12} = \begin{pmatrix*}[r] 0 & -V \\ -V & -\frac{U}{2} \end{pmatrix*}
\end{equation}
in terms of the basis
\begin{equation}
\left\lvert \phi \right\rangle_{3,\frac12}
=
\begin{pmatrix}
\lvert \uparrow\!\downarrow , \uparrow \rangle
\\
\lvert \uparrow , \uparrow\!\downarrow \rangle
\end{pmatrix} \,.
\end{equation}
The Schr\"odinger equation for this subspace is
\begin{equation}
\boldsymbol{H}_{3,\frac12} \left\lvert 3, \tfrac12 ; j \right\rangle = E^j_{3,\frac12} \left\lvert 3, \tfrac12 ; j \right\rangle
\end{equation}
with eigenenergies $E^j_{3,\frac12}$ and eigenstates $\left\lvert 3, \tfrac12 ; j \right\rangle$ with $j\in\{1,2\}$. The eigenenergies of this subspace are $E^1_{3,\frac12} = -\frac14 ( U + \sqrt{U^2 + 16 V^2} )$ and $E^2_{3,\frac12} = -\frac14 ( U - \sqrt{U^2 + 16 V^2} )$.
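These eigenenergies are easily verified numerically for arbitrary $U$ and $V$ (a minimal sketch; the chosen values are illustrative):

```python
import numpy as np

U, V = 1.0, 0.7

# Q = 2, S_z = 0 subspace Hamiltonian
H20 = np.array([[-U/2,    0,  V,  V],
                [   0, -U/2, -V, -V],
                [   V,   -V,  0,  0],
                [   V,   -V,  0,  0]])

# Q = 3, S_z = 1/2 subspace Hamiltonian
H312 = np.array([[ 0, -V],
                 [-V, -U/2]])

E20 = np.sort(np.linalg.eigvalsh(H20))
E312 = np.sort(np.linalg.eigvalsh(H312))

# closed-form expressions quoted in the text
E20_gs = -(U + np.sqrt(U**2 + 64 * V**2)) / 4
E312_pm = [-(U + np.sqrt(U**2 + 16 * V**2)) / 4,
           -(U - np.sqrt(U**2 + 16 * V**2)) / 4]
```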
The Green function on the impurity can be obtained from the Lehmann representation\index{Lehmann representation} as
\begin{equation}
G(z) \equiv
\Green{\op{d}{\uparrow}}{\opd{d}{\uparrow}}_z
=
\sum_{j=1}^{2} \left\lvert \left\langle 3,\tfrac12 ; j \left\lvert \opd{d}{\uparrow} \right\rvert 2,0 ; 1 \right\rangle \right\rvert^2 \left[ \frac{1}{z - E_{2,0}^{1} + E_{3,\frac12}^{j}} + \frac{1}{z + E_{2,0}^{1} - E_{3,\frac12}^{j}} \right] \,.
\label{eq:dimergreenfunction}
\end{equation}
Accounting for degeneracies, the $T=0$ Green function obtained by Eq.~\eqref{eq:dimergreenfunction} consists of four distinct poles. This implies that the interacting two-site system can be mapped onto a non-interacting four-site system.
The parameters of the effective chain can be obtained from the finite set of poles of the Lehmann representation by applying the Lanczos algorithm described in \S\ref{sec:lanczos}. The Lehmann representation of the Green function delivers the set of weights $\{ w_j \}$ and positions $\{ \omega_j \}$ of the spectral poles, with the spectral function taking the form $\mathcal{A}(\omega) = \sum_{j} w_j \delta(\omega-\omega_j)$. These define the Hamiltonian in a diagonal representation $\boldsymbol{H}_{\text{D}}$. This Hamiltonian is then processed by the Lanczos algorithm to produce the corresponding Hamiltonian in tridiagonal form $\boldsymbol{H}_{\text{T}}$, whose off-diagonal elements define the set of parameters $\{ \tilde{V} , \tilde{t}_1 \}$. These parameters define a tight-binding system in the configuration shown in Fig.~\ref{fig:dimerauxlatt} with Hamiltonian
\begin{equation}
\op{\widetilde{H}}{\text{dimer}} = \sum_{\sigma\in\{\uparrow,\downarrow\}} \left[ V \left( \opd{d}{\sigma} \op{c}{\sigma} + \opd{c}{\sigma} \op{d}{\sigma} \right) + \tilde{V} \left( \opd{d}{\sigma} \op{f}{\sigma;1} + \opd{f}{\sigma;1} \op{d}{\sigma} \right) + \tilde{t}_1 \left( \opd{f}{\sigma;1} \op{f}{\sigma;2} + \opd{f}{\sigma;2} \op{f}{\sigma;1} \right) \right]
\end{equation}
where the operators $\op{f}{\sigma;n}$ act on the auxiliary sites (square sites in Fig.~\ref{fig:dimerauxlatt}). Note that there are technically two copies of the auxiliary lattice in Fig.~\ref{fig:dimerauxlatt}, one for each spin. Since there is no coupling between different spins in the auxiliary model, each spin sector can be analyzed independently.
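The tridiagonalization step can be sketched as follows (a textbook Lanczos recursion seeded with the square roots of the pole weights, following my reading of \S\ref{sec:lanczos}; the function name is hypothetical). As a check, feeding in the two poles of the Hubbard atom reproduces the chain parameters derived for it earlier.

```python
import numpy as np

def chain_from_poles(weights, poles, depth):
    """Lanczos recursion on H_D = diag(poles), seeded with sqrt(weights).

    Returns on-site energies a_n and hoppings b_n of the tridiagonal chain."""
    HD = np.diag(np.asarray(poles, dtype=float))
    v = np.sqrt(np.asarray(weights, dtype=float))
    v /= np.linalg.norm(v)
    a, b = [], []
    v_prev, b_prev = np.zeros_like(v), 0.0
    for _ in range(depth):
        w = HD @ v
        a.append(v @ w)                       # diagonal element a_n
        w = w - a[-1] * v - b_prev * v_prev   # orthogonalize
        b_norm = np.linalg.norm(w)
        b.append(b_norm)                      # off-diagonal element b_n
        if b_norm < 1e-12:                    # Krylov space exhausted
            break
        v_prev, v, b_prev = v, w / b_norm, b_norm
    return a, b

# check against the Hubbard atom: poles at eps and eps + U
eps, U, n = -0.4, 1.0, 0.3
a, b = chain_from_poles([1 - n, n], [eps, eps + U], depth=2)
```

The recursion returns $a_1 = \varepsilon + \langle n \rangle U$, $b_1 = U\sqrt{\langle n \rangle(1-\langle n \rangle)}$, and $a_2 = \varepsilon + (1-\langle n \rangle)U$, matching the dimer parameters of the Hubbard atom mapping.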
The Green function on the $d$-site of the auxiliary model takes the form
\begin{equation}
\widetilde{G}(z) \equiv \Green{\op{d}{\sigma}}{\opd{d}{\sigma}}_z = \cfrac{1}{z - \cfrac{\tilde{V}^2}{z - \cfrac{\tilde{t}_1^2}{z}} - \cfrac{V^2}{z}} \,,
\label{eq:auxdimergreenfunction}
\end{equation}
such that the spectral function corresponds to the original physical system:
\begin{equation}
-\frac1\pi \Im \widetilde{G}(z) \overset{!}{=} -\frac1\pi \Im G(z) \,.
\end{equation}
Given Eq.~\eqref{eq:auxdimergreenfunction} and Eq.~\eqref{eq:dimergreenfunction}, this is an algebraic expression which relates the auxiliary parameters $\tilde{V}$ and $\tilde{t}_1$ to the physical parameters $U$ and $V$.
Performing the analysis,\footnote{While it may appear that this is a single algebraic equation for two unknowns, the fractions involved can be rationalized, where the numerators of the rationalized fractions are polynomials in $z$. Matching coefficients for different powers of $z$ yields a set of algebraic equations which can be simultaneously solved for $\tilde{V}$ and $\tilde{t}_1$ by making use of the fact that monomials of different powers are linearly independent.} it is found that $\tilde{V} = U/2$ and $\tilde{t}_1 = 3 V$.
\begin{comment}
\subsubsection{Anderson Trimer}
\begin{equation}
\hat{H} = U \left( \opd{d}{\uparrow} \op{d}{\uparrow} - \tfrac12 \right) \left( \opd{d}{\downarrow} \op{d}{\downarrow} - \tfrac12 \right) + V \sum_{\sigma} \left( \opd{d}{\sigma} \op{f}{1,\sigma} + \opd{f}{1,\sigma} \op{d}{\sigma} \right) + t \sum_{\sigma} \left( \opd{f}{1,\sigma} \op{f}{2,\sigma} + \opd{f}{2,\sigma} \op{f}{1,\sigma} \right)
\end{equation}
\begin{figure}[htp!]
\centering
\begin{tikzpicture}[every path/.style={in=60,out=120,looseness=6}]
\node[circle,draw=black,thick,inner sep=1pt] (3) at (0,0){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt] (4) at (2,0){$\phantom{\uparrow\downarrow}$};
\node[circle,draw=black,thick,inner sep=1pt] (5) at (4,0){$\phantom{\uparrow\downarrow}$};
%
\node[below=9pt] at (3) {$\varepsilon$};
%
\draw[line width=1.2pt](3)--(4) node[midway,above] {$V$};
\draw[line width=1.2pt](4)--(5) node[midway,above] {$t$};
%
\path (3) edge[-stealth,line width=0.75pt,double distance=0.5pt] node[above] {$U$} (3);
\end{tikzpicture}
\end{figure}
$Q=3$ $S_z=-1/2$
\begin{equation}
H_{3,-\frac12} =
\begin{pmatrix*}[r]
-\frac{U}{2} & 0 & 0 & -t & -t & 0 & 0 & 0 & 0
\\
0 & -\frac{U}{2} & 0 & -t & -t & 0 & 0 & -V & -V
\\
0 & 0 & -\frac{U}{2} & 0 & 0 & 0 & 0 & V & V
\\
-t & t & 0 & -\frac{U}{2} & 0 & 0 & -V & 0 & 0
\\
-t & t & 0 & 0 & -\frac{U}{2} & V & 0 & 0 & 0
\\
0 & 0 & 0 & 0 & V & 0 & 0 & -t & 0
\\
0 & 0 & 0 & -V & 0 & 0 & 0 & 0 & t
\\
0 & -V & V & 0 & 0 & -t & 0 & 0 & 0
\\
0 & -V & V & 0 & 0 & 0 & t & 0 & 0
\end{pmatrix*}
\end{equation}
$Q=4$ $S_z=0$
\begin{equation}
H_{4,0} =
\begin{pmatrix*}[r]
0 & 0 & 0 & t & -t & 0 & 0 & 0 & 0
\\
0 & 0 & 0 & t & -t & 0 & 0 & V & -V
\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & V & -V
\\
t & t & 0 & 0 & 0 & -V & 0 & 0 & 0
\\
-t & -t & 0 & 0 & 0 & 0 & -V & 0 & 0
\\
0 & 0 & 0 & -V & 0 & -\frac{U}{2} & 0 & -t & 0
\\
0 & 0 & 0 & 0 & -V & 0 & -\frac{U}{2} & 0 & -t
\\
0 & V & V & 0 & 0 & -t & 0 & -\frac{U}{2} & 0
\\
0 & -V & -V & 0 & 0 & 0 & -t & 0 & -\frac{U}{2}
\end{pmatrix*}
\end{equation}
$Q=3$ $S_z=+1/2$
\begin{equation}
|\phi\rangle_{3,\frac12}
=
\begin{pmatrix}
| \uparrow , \uparrow , \downarrow \rangle
\\
| \uparrow , \downarrow , \uparrow \rangle
\\
| \downarrow , \uparrow , \uparrow \rangle
\\
| \uparrow , \uparrow\!\downarrow , - \rangle
\\
| \uparrow , - , \uparrow\!\downarrow \rangle
\\
| \uparrow\!\downarrow , \uparrow , - \rangle
\\
| - , \uparrow , \uparrow\!\downarrow \rangle
\\
| \uparrow\!\downarrow , - , \uparrow \rangle
\\
| - , \uparrow\!\downarrow , \uparrow \rangle
\end{pmatrix}
\end{equation}
\begin{equation}
H_{3,+\frac12} =
\begin{pmatrix*}[r]
-\frac{U}{2} & 0 & 0 & t & t & 0 & 0 & 0 & 0
\\
0 & -\frac{U}{2} & 0 & -t & -t & 0 & 0 & V & V
\\
0 & 0 & -\frac{U}{2} & 0 & 0 & 0 & 0 & -V & -V
\\
t & -t & 0 & -\frac{U}{2} & 0 & -V & 0 & 0 & 0
\\
t & -t & 0 & 0 & -\frac{U}{2} & 0 & V & 0 & 0
\\
0 & 0 & 0 & -V & 0 & 0 & 0 & t & 0
\\
0 & 0 & 0 & 0 & V & 0 & 0 & 0 & -t
\\
0 & V & -V & 0 & 0 & t & 0 & 0 & 0
\\
0 & V & -V & 0 & 0 & 0 & -t & 0 & 0
\end{pmatrix*}
\end{equation}
$Q=4$ $S_z=1$
\begin{equation}
H_{4,1} = \begin{pmatrix*}[r]
0 & -V & 0
\\
-V & -\frac{U}{2} & -t
\\
0 & -t & -\frac{U}{2}
\end{pmatrix*}
\end{equation}
\end{comment}
\subsection{Anderson $\boldsymbol{N}$-mers}
This mapping to an auxiliary system generalizes naturally to $1d$ Anderson models of longer finite length, or Anderson $N$-mers, where $N$ is the length of the non-interacting $1d$ bath.\footnote{The nomenclature is more properly ``$(N+1)$-mer'' as for $N=1$ the system is described as being the Anderson \textit{di}mer and so on. The nomenclature is chosen to preserve the convention that the end of the chain is an impurity coupled to a bath via a hybridization $V$ and the bath parameterized by $t_n$ with $n=1,2,\ldots,N$.}
The setup is an interacting impurity with $U=1$ and $\varepsilon=-U/2$, coupled to a chain of $N=1,3,5,7,9$ bath sites. As a concrete example, the hybridization between impurity and bath is $V=0.2$ and the hopping between the other bath sites is uniformly $t=0.4$. The Green functions for these systems are evaluated at $T=0$. The spectrum is obtained as in the preceding dimer example by means of exact diagonalization to construct the Lehmann representation of the Green function.
$N$-mers of odd $N$ are taken such that the ground state is a singlet. For even $N$ the ground state is a doublet which then leads to more complicated transitions between the ground state and the excited states due to quantum number degeneracies. It is not necessary to entertain this complication to obtain results which appropriately illustrate the construction of the auxiliary chain for $N$-mers of various lengths (the results are qualitatively similar).
As a precursor to mapping to the auxiliary system, the spectrum is truncated by discarding all elements of the Lehmann sum whose weight is $< 10^{-6}$. For auxiliary chains associated with $N$-mers of length $N \geq 3$, the auxiliary systems become very long. The truncation cutoff limits the length of the auxiliary chain to a modest number of sites while still recovering the primary features of the desired spectrum. Lowering the truncation threshold significantly increases the dimension of the auxiliary system Hilbert space, but the gain in accuracy of reproducing the desired spectrum is minimal. For few bath sites, the parameters of the constructed auxiliary system are not meaningfully affected by the truncation, and the spectrum reconstructed from the auxiliary chain is nearly identical to the original exact spectrum. However, the truncation begins to produce errors as the number of bath sites is increased: the spectrum of the larger $N$-mers contains a proliferation of poles with slightly different energies and very small weights. Ultimately these poles form the continuum spectrum in the thermodynamic limit, where the auxiliary chain is infinitely long. The construction of an auxiliary system for an impurity model in the thermodynamic limit will be discussed in the following section.
With the truncation, the reconstructed spectrum for systems with a larger number of bath sites develops deviations from the true spectrum near the outer band edges. These small errors can be seen in the spectral plots for $N=7$ and $N=9$ in Fig.~\ref{fig:Nmers}. They are analogous to the errors encountered when modeling a continuum, or a system of infinite extent, with a finite number of degrees of freedom.
The Lehmann representation of the spectral functions yields a form of the Hamiltonian in the diagonal basis $\boldsymbol{H}_{\text{D}}$.
The auxiliary chain parameters are obtained by transforming the Hamiltonian into a tridiagonal basis, which results in a Green function of continued fraction form.
The Hamiltonian in its tridiagonal basis is obtained from the Lanczos algorithm as described in \S\ref{sec:lanczos}. The tridiagonal Hamiltonian produced by the Lanczos algorithm consists of a set of parameters $\{ \tilde{V} , \tilde{t}_n \}$ where $n = 1,\ldots,N$ ranges over the number of bath sites in the original $N$-mer.
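The truncation and tridiagonalization steps can be sketched as follows (a minimal illustrative implementation; the function name and the two-pole test spectrum are chosen for demonstration and are not taken from the actual calculation). Starting from the Lehmann poles and weights, the Lanczos recursion is run on the diagonal Hamiltonian with the weighted start vector:

```python
import numpy as np

def lanczos_chain(poles, weights, nmax, cutoff=1e-6):
    """Map a Lehmann spectrum {poles, weights} onto tridiagonal chain
    parameters (eps_n, t_n) via the Lanczos algorithm. Weights below
    `cutoff` are discarded and the remainder renormalized."""
    keep = weights > cutoff
    poles, weights = poles[keep], weights[keep]
    weights = weights / weights.sum()     # restore spectral normalization
    v = np.sqrt(weights)                  # start vector: spectrum is its
    v_prev = np.zeros_like(v)             # local density of states
    t_prev = 0.0
    eps, t = [], []
    for _ in range(min(nmax, len(poles))):
        w_vec = poles * v                 # apply diagonal H_D
        a = np.dot(v, w_vec)              # eps_n = <v_n| H |v_n>
        w_vec = w_vec - a * v - t_prev * v_prev
        b = np.linalg.norm(w_vec)         # t_n = |residual|
        eps.append(a)
        if b < 1e-12:                     # chain terminates
            break
        t.append(b)
        v_prev, v, t_prev = v, w_vec / b, b
    return np.array(eps), np.array(t)

# sanity check: two poles at +/-1 with equal weight correspond to a
# single-link chain with eps = 0 and t = 1
eps, t = lanczos_chain(np.array([-1.0, 1.0]), np.array([0.5, 0.5]), nmax=5)
```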
\begin{figure}[htp!]
\centering
\begin{tikzpicture}[every path/.style={in=60,out=120,looseness=6},scale=0.75]
\node[circle,draw=black,fill=black!10,thick,inner sep=1pt] (0) at (0,0){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt] (1) at (2,0){$\phantom{\uparrow\downarrow}$};
\node[circle,draw=black,thick,inner sep=1pt] (2) at (4,0){$\phantom{\uparrow\downarrow}$};
\node[circle,inner sep=3pt] (int) at (6,0){};
\node[circle,draw=black,thick,inner sep=1pt] (N) at (8,0){$\phantom{\uparrow\downarrow}$};
%
\node[below=9pt] at (0) {$\varepsilon$};
%
\draw[line width=1.2pt](0)--(1) node[midway,above] {$V$};
\draw[line width=1.2pt](1)--(2) node[midway,above] {$t$};
\draw[line width=1.2pt](2)--($(2)+(1,0)$) node[above] {$t$};
\draw[line width=1.2pt,dashed]($(2)+(1,0)$)--(int);
\draw[line width=1.2pt](N)--($(N)-(1,0)$) node[above] {$t$};
\draw[line width=1.2pt,dashed](int)--($(N)-(1,0)$);
%
\path (0) edge[-latex,line width=0.75pt,double distance=0.5pt] node[above] {$U$} (0);
\end{tikzpicture}
\\
\begin{tikzpicture}[every path/.style={in=60,out=120,looseness=6},scale=0.75]
\node[circle,draw=black,thick,inner sep=1pt] (0) at (0,0){$\phantom{\uparrow\downarrow}$};
\node[circle,draw=black,thick,inner sep=1pt] (1) at (2,0){$\phantom{\uparrow\downarrow}$};
\node[circle,draw=black,thick,inner sep=1pt] (2) at (4,0){$\phantom{\uparrow\downarrow}$};
\node[circle,inner sep=3pt] (int) at (6,0){};
\node[circle,draw=black,thick,inner sep=1pt] (N) at (8,0){$\phantom{\uparrow\downarrow}$};
%
\draw[line width=1.2pt](0)--(1) node[midway,above] {$V$};
\draw[line width=1.2pt](1)--(2) node[midway,above] {$t$};
\draw[line width=1.2pt](2)--($(2)+(1,0)$) node[above] {$t$};
\draw[line width=1.2pt,dashed]($(2)+(1,0)$)--(int);
\draw[line width=1.2pt](N)--($(N)-(1,0)$) node[above] {$t$};
\draw[line width=1.2pt,dashed](int)--($(N)-(1,0)$);
%
\node[draw=black,thick,rectangle,inner sep=6pt] (a1) at (0,-1.5) {};
\node[draw=black,thick,rectangle,inner sep=6pt] (a2) at (0,-3) {};
\node[inner sep=6pt] (ai) at (0,-4.5) {};
\node[draw=black,thick,rectangle,inner sep=6pt] (an) at (0,-6) {};
%
\draw[line width=1.2pt](0)--(a1) node[midway,left] {$\tilde{V}$};
\draw[line width=1.2pt](a1)--(a2) node[midway,left] {$\tilde{t}_1$};
\draw[line width=1.2pt,dashed](ai)--($(a2)-(0,0.75)$);
\draw[line width=1.2pt](a2)--($(a2)-(0,0.75)$) node[left] {$\tilde{t}_2$};
\draw[line width=1.2pt,dashed](ai)--($(an)+(0,0.75)$);
\draw[line width=1.2pt](an)--($(an)+(0,0.75)$) node[left] {$\tilde{t}_N$};
\end{tikzpicture}
\caption[Schematic illustrating the auxiliary field mapping from an interacting Anderson $N$-mer to a non-interacting system]{Schematic illustrating the auxiliary field mapping from an interacting Anderson $N$-mer (top) to a non-interacting system hybridized to an auxiliary system (bottom).\label{fig:Nmerschematic}}
\end{figure}
\begin{figure}[htp!]
\includegraphics{finiteA_N1.pdf}
\includegraphics{tn_aux_N1.pdf}
\includegraphics{finiteA_N3.pdf}
\includegraphics{tn_aux_N3.pdf}
\includegraphics{finiteA_N5.pdf}
\includegraphics{tn_aux_N5.pdf}
\includegraphics{finiteA_N7.pdf}
\includegraphics{tn_aux_N7.pdf}
\includegraphics{finiteA_N9.pdf}
\includegraphics{tn_aux_N9.pdf}
\caption[Spectral functions and auxiliary chain hopping parameters for Anderson $N$-mers]{Spectral functions and auxiliary chain hopping parameters for Anderson $N$-mers with $N = 1, 3, 5, 7, 9$ at $T=0$. The true spectral function is plotted in solid lines and the reconstructed spectral function from the auxiliary chain is plotted in dashed teal lines.\label{fig:Nmers}}
\end{figure}
The auxiliary chains calculated for $N$-mers of various lengths are shown in Fig.~\ref{fig:Nmers}. The right-hand panels show the values of the parameters $\{ \tilde{t}_n \}$ and the left-hand panels show the reconstructed spectrum $\mathcal{A}(\omega) = -\frac1\pi\Im\widetilde{G}(\omega)$ compared to the original spectrum calculated directly from exact diagonalization of the Anderson model via the Lehmann sum.
Note that $\tilde{V} = U/2$ in all cases. For $n>2$, the chain parameters $\tilde{t}_n$ develop structure which encodes the complexity of the spectrum.
\subsubsection{Finite Temperature}
At finite temperature there exist transitions between excited states, which increase the multiplicity of finite matrix elements, and hence poles, in the Green function. This results in an auxiliary chain with a greater number of sites than in the zero temperature case. The system considered in this analysis is the Anderson dimer, although the results generalize to larger systems. Unlike the case of the Hubbard atom, the effective model for the Anderson dimer at finite temperature is not simply a reparameterization of the effective model at zero temperature.
In the following calculations the Anderson impurity model is parameterized by $U = 1.0$ and $V = 0.2$.
As in the previous examples of finite chains, the hybridization to the effective auxiliary system is fixed to $\tilde{V} = U/2$. Numerical results are shown in Fig.~\ref{fig:nmertns}.
\begin{figure}[htp!]
\includegraphics{Aimp_N1_T0_01.pdf}
\includegraphics{aux_N1_T0_01.pdf}
\vspace{-0.5\baselineskip}
\includegraphics{Aimp_N1_T0_04.pdf}
\includegraphics{aux_N1_T0_04.pdf}
\vspace{-0.5\baselineskip}
\includegraphics{Aimp_N1_T0_08.pdf}
\includegraphics{aux_N1_T0_08.pdf}
\vspace{-0.5\baselineskip}
\includegraphics{Aimp_N1_T0_12.pdf}
\includegraphics{aux_N1_T0_12.pdf}
\vspace{-0.5\baselineskip}
\includegraphics{Aimp_N1_T0_16.pdf}
\includegraphics{aux_N1_T0_16.pdf}
\vspace{-0.5\baselineskip}
\includegraphics{Aimp_N1_T0_2.pdf}
\includegraphics{aux_N1_T0_2.pdf}
\caption[Auxiliary chain parameters and their associated reconstructed spectrum of the Anderson dimer at finite temperature]{Auxiliary chain parameters and their associated reconstructed spectrum of the Anderson dimer at finite temperature for $T = 0.01, 0.04, 0.08, 0.12, 0.16, 0.20$. The $T=0$ spectrum is plotted in teal dashed lines for comparison.\label{fig:nmertns}}
\end{figure}
\begin{figure}[htp!]
\centering
\includegraphics{dimertT.pdf}
\caption{Temperature dependence of the auxiliary chain hopping parameters for the Anderson dimer.\label{fig:dimertT}}
\end{figure}
At the low but non-zero temperature $T=0.01$, the value of $\tilde{t}_2$ is small but finite ($\tilde{t}_2 \approx 4.73\times10^{-4}$). Owing to this finite value the Lanczos algorithm does not truncate at this stage and continues to produce finite values for further hopping amplitudes down the chain. However, at the level of the Green function this small value essentially splits the auxiliary chain into two disjoint parts, so that the spectral function on the end of the chain does not see the entire chain, just the first two auxiliary sites. $\tilde{t}_2$ remains small as the temperature is raised slightly, up to a certain threshold. As the temperature increases beyond this threshold, the amplitude of $\tilde{t}_2$ grows to $\tilde{t}_2 \sim \mathcal{O}(\tilde{V})$, thereby opening the hybridization to sites deeper in the auxiliary chain, and the Green function is no longer effectively truncated. This is shown by the filled red circles in Fig.~\ref{fig:dimertT}, where the value of $\tilde{t}_2$ is negligible until $T \sim 0.02$, where it begins to take on appreciable values. The meaning of this transition stems from the correspondence of the sites of the auxiliary chain to the states which are accessible to the original system. At low temperature the number of accessible states is limited. As temperature is increased, a greater number of excited states become available. This increase in the availability of excited states is captured by the increase in amplitude of $\tilde{t}_2$, which governs the hybridization to the additional sites of the auxiliary chain replicating these excited states.
With increasing temperature, the values of the hopping amplitudes $\tilde{t}_3$, $\tilde{t}_4$, and $\tilde{t}_5$ do not significantly change from their values at low temperature. The temperature dependence of the chain parameters is plotted in Fig.~\ref{fig:dimertT}.
Extrapolating this analysis to $N$-mers of longer length, it can be inferred that finite temperature effects in an auxiliary chain can effectively be modeled by a chain partitioned into two parts. The head of the chain captures the properties of the physical system at low temperature. The tail of the chain consists of hopping amplitudes that remain essentially fixed over a range of temperature scales. The hybridization between the two partitions is negligible at very low temperatures, but becomes finite when the temperature increases beyond a particular threshold. The parameters of the head of the chain, however, do change over different temperature scales.
Empirical analysis of the auxiliary chain parameters shows that the hybridization from the impurity site to the auxiliary chain is fixed to $\tilde{V} = U/2$, but how the remaining parameters of the head of the chain modulate with temperature remains an open question.
\section{Auxiliary Field Mapping for Infinite Systems\label{sec:mapping}}
So far the systems considered for the effective field mapping were of finite size. For these systems, the spectrum consisted of a finite number of poles which could be mapped exactly to an auxiliary chain, also of finite depth. For systems of infinite size with a continuous spectrum, a more sophisticated method for generating the parameters of the auxiliary chain is needed.
Rather than mapping the spectrum of the infinite system onto an auxiliary system, here only the self-energy is mapped. The mapping is performed such that the physical Green function possesses the same dynamics.
\subsection{Recursion Algorithm}
For an infinite system, the parameters of the auxiliary chain are generated recursively from an input interaction self-energy, rather than by exact diagonalization of the physical system.
For an impurity system with a general bath, the Green function on the impurity is
\begin{equation}
G(z) \equiv \Green{\op{d}{\sigma}}{\opd{d}{\sigma}}_z = \cfrac{1}{z - \varepsilon - \Gamma(z) - \Sigma(z)}
\end{equation}
where $\Gamma(z)$ represents the hybridization of the impurity to the physical bath and $\Sigma(z)$ is the local interaction self-energy on the impurity. Since the Green function is analytic, it follows that the self-energy is also an analytic function~\cite{analyticse}.
This can be seen from inverting the Dyson equation
\begin{align}
\Sigma(z) &= z - \varepsilon - \Gamma(z) - \frac{1}{G(z)}
\end{align}
and noting that the self-energy obeys the Kramers-Kronig relations\index{Kramers-Kronig relations} just as the Green function does. This also implies that the self-energy is causal.
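This inversion is straightforward to carry out numerically. As a minimal sanity check (an illustrative sketch with a hypothetical single bath level and hybridization strength $V^2 = 0.04$), applying the inverted Dyson equation to a non-interacting Green function must return $\Sigma(\omega) = 0$ identically:

```python
import numpy as np

eta = 1e-3
w = np.linspace(-3.0, 3.0, 6001)
z = w + 1j * eta

eps = 0.0
Gamma = 0.04 / z                    # hybridization to a single bath level at 0
G = 1.0 / (z - eps - Gamma)         # non-interacting impurity Green function

Sigma = z - eps - Gamma - 1.0 / G   # inverted Dyson equation: vanishes here
```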
Since the self-energy is analytic and causal, it is possible for a hybridization function, which is also analytic and causal, to play the same role as the self-energy in the Green function as the mathematical structure of the Green function would be preserved. The self-energy may then be replaced by a hybridization $\Sigma(\omega) \equiv \Delta_0(\omega)$ to auxiliary degrees of freedom described by some noninteracting Hamiltonian $\hat{H}_{\text{aux}}$. The full single-particle dynamics can therefore be reproduced by replacing $\hat{H}_{\text{int}} \mapsto \hat{H}_{\text{aux}} + \hat{H}_{\text{hyb}}$. Specifically, $\hat{H}_{\text{aux}}$ is taken to be a noninteracting semi-infinite tight-binding chain coupled at one end to the physical lattice degrees of freedom. The general form of such an auxiliary Hamiltonian is
\begin{equation}
\hat{H}_{\text{aux}} = \sum_{\substack{n=m \\ \sigma \in \{\uparrow,\downarrow\}}}^{\infty} \left[ {\epsilon}_{n} \opd{f}{\sigma;n} \op{f}{\sigma;n} + {t}_{n} \left( \opd{f}{\sigma;n} \op{f}{\sigma;n+1} + \hc \right) \right]
\label{eq:impHaux}
\end{equation}
where the $n$ index labels sites within each auxiliary chain. In $\hat{H}_{\text{aux}}$ the $m$ index is fixed to $m=1$, but will be considered as a dummy index for illustrative purposes in the following explanation.
The coupling of the auxiliary systems to the physical impurity site is given by the hybridization Hamiltonian
\begin{equation}
\hat{H}_{\text{hyb}} = {t}_0 \sum_{ \sigma \in \{\uparrow,\downarrow\}} \left( \opd{d}{\sigma} \op{f}{\sigma;1} + \opd{f}{\sigma;1} \op{d}{\sigma} \right) \,.
\label{eq:impHhyb}
\end{equation}
The mapping onto this auxiliary chain is analogous to the mapping onto the Wilson chain in NRG, but there are some key differences.
Both schemes involve the mapping of an analytic function onto the parameters of a tight-binding chain. In NRG it is the non-interacting bath of the impurity which is mapped to the Wilson chain.
In the recursion algorithm here it is the local self-energy of an interacting site which is mapped to the auxiliary chain.
A key difference between the two mappings is that the Wilson chain involves the mapping of a discretized spectrum, whereas the auxiliary chain here is constructed from the full continuous spectrum of the self-energy without approximations.
Similarly to the Wilson chain, features at low energy are captured by parameters further down into the chain.
\begin{figure}[htp!]
\centering
\begin{tikzpicture}[every node/.style={line width=1.2pt,inner sep=6pt,scale=1.0}, every path/.style={line width=2pt,scale=1.5}]
\input{effectivehybs.tex}
\end{tikzpicture}
\caption[Hierarchy of the effective hybridization Green functions]{Hierarchy of the effective hybridization Green functions $\tensor*{\widetilde{G}}{^{(m)}_{1,1}}(z)$. The effective Green functions only take into account sites down the chain (shown in black).}
\end{figure}
The Green function $\tensor*{\widetilde{G}}{^{(m)}_{1,1}}(z) = \Greenline{\op{f}{\sigma;m}}{\opd{f}{\sigma;m}}_{z}$ is the Green function on the $m$-th site of the auxiliary chain with all sites $<m$ removed. It is the Green function on the boundary of the truncated chain, hence the $1,1$ subscript.
The effective Green functions $\widetilde{G}$ are obtained from the self-energy of the physical model and the hybridization onto the auxiliary chain ${t}_0$,
\begin{equation}
\Sigma(\omega) \mapsto \Delta_0(\omega) \equiv {t}_0^2 \widetilde{G}^{(1)}_{1,1}(\omega) ,
\end{equation}
where ${t}_0$ serves as a normalization factor ensuring that $\widetilde{G}^{(1)}_{1,1}$ satisfies the sum rule required of a Green function,
\begin{equation}
-\frac1\pi \int \Im \widetilde{G}^{(1)}_{1,1}(\omega)\text{d}\omega \overset{!}{=} 1 .
\end{equation}
The self-energy $\Sigma(\omega)$ is not an object which is solved for using the auxiliary system; rather, it is an input which initializes the mapping scheme. As in the rest of this work, the solution for the self-energy is taken from the NRG calculation of a defined quantum impurity model.
\begin{comment}
\begin{equation}
\hat{H}_{\text{aux}} = \sum_{\substack{j \in \Gamma \\ \sigma \in \{\uparrow,\downarrow\}}} \sum_{n=m}^{\infty} \left[ \tensor{e}{_{n}} \opd{f}{j,\sigma;n} \op{f}{j,\sigma;n} + \tensor{t}{_{n}} \left( \opd{f}{j,\sigma;n} \op{f}{j,\sigma;n+1} + \hc \right) \right]
\end{equation}
The outer sum sums over all sites $j$ of the physical lattice $\Gamma$ for each spin $\sigma$. The inner sum captures the dynamics of the auxiliary degrees of freedom where the $n$ index labels sites within each auxiliary chain. In $\hat{H}_{\text{aux}}$ the $m$ index is fixed to $m=1$, but will be considered as a dummy index for illustrative purposes in the following explanation.
The hybridization from the physical site to the auxiliary system is described by the Hamiltonian
\begin{equation}
\hat{H}_{\text{hyb}} = V \smashoperator[r]{\sum_{\substack{j \in \Gamma \\ \sigma \in \{\uparrow,\downarrow\}}}} \left( \opd{c}{j,\sigma} \op{f}{j,\sigma;1} + \opd{f}{j,\sigma;1} \op{c}{j,\sigma} \right)
\end{equation}
\end{comment}
From the equations of motion for Green functions of a non-interacting chain, the Green function for the edge site of a chain beginning from site $n$ of the effective chain is
\begin{equation}
\begin{aligned}[b]
{\widetilde{G}}^{(n)}_{1,1}(z) &\equiv \cfrac{1}{z - {\epsilon}_{n} - \cfrac{{t}_{n}^2}{z - {\epsilon}_{n+1} - \cfrac{{t}_{n+1}^2}{z - \ddots}}}
\\
&=
\cfrac{1}{z - {\epsilon}_{n} - {t}_{n}^2 {\widetilde{G}}{^{(n+1)}_{1,1}}(z)}
\\
\widetilde{G}^{(n+1)}_{1,1}(z)
&=
\frac{1}{{t}_n^2} \left[ z - {\epsilon}_{n} - \frac{1}{\widetilde{G}^{(n)}_{1,1}(z)} \right] \,.
\end{aligned}
\label{eq:greenrecursion}
\end{equation}
Since $\tensor*{\widetilde{G}}{^{(n)}_{1,1}}(z)$ is a Green function for any $n$, it holds that $-\frac1\pi \int \Im \tensor*{\widetilde{G}}{^{(n)}_{1,1}}(\omega)\text{d}\omega = 1$ for any $n$. Using this fact and the previous equation, an expression for ${t}_n$ is obtained as
\begin{equation}
{t}_{n}^2 = \frac1\pi \int_{-\infty}^{\infty} \Im \frac{1}{\widetilde{G}^{(n)}_{1,1}(\omega)} \text{d}\omega \,.
\end{equation}
The effective Green functions are related to the hybridization functions by $\Delta_{n}(\omega) = {t}_{n}^2 \widetilde{G}^{(n+1)}_{1,1}(\omega)$.
In terms of the hybridization functions, the auxiliary field mapping may be cast as
\begin{equation}
\Sigma(z)
\equiv \Delta_{0}(z)
= \cfrac{{t}_0^2}{z - {\epsilon}_{1} - \cfrac{{t}_1^2}{z - {\epsilon}_{2} - \cfrac{{t}_2^2}{z - \ddots}}}
\label{eq:continuedfraction}
\end{equation}
with the recursion relation
\begin{equation}
\begin{aligned}[b]
\Delta_{n+1}(z) &= z - {\epsilon}_{n+1} - \frac{{t}_{n}^2}{\Delta_{n}(z)} \,.
\end{aligned}
\label{eq:recursion}
\end{equation}
From the normalization of the Green functions, the $\{{t}_n\}$ in terms of the hybridization are
\begin{equation}
{t}_n^2 = -\frac1\pi \Im\!\int \d\omega\ \Delta_n(\omega)
\end{equation}
and the on-site potentials can be constructed as
\begin{equation}
{\epsilon}_n = -\frac{1}{\pi {t}_{n-1}^2} \Im \int \d\omega \, \omega \Delta_{n-1}(\omega) \,.
\end{equation}
At particle-hole symmetry, the hybridization functions $\Delta_{n}(\omega)$ are even functions of $\omega$. This means that the integrand of the formula for the ${\epsilon}_{n}$ is odd. Therefore the auxiliary chain of a particle-hole symmetric system features ${\epsilon}_{n} = 0$ $\forall n$.
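A minimal sketch of the recursion at particle-hole symmetry (so that all ${\epsilon}_n = 0$) is given below. This is an illustrative implementation, not the production code used here: as a test input, the role of $\Delta_0$ is played by the hybridization of a semi-infinite uniform chain with hopping $t = 1$, whose boundary Green function is semicircular, so the recursion should return $t_n \approx 1$ for every $n$:

```python
import numpy as np

eta = 1e-6
w = np.linspace(-4.0, 4.0, 200001)
dw = w[1] - w[0]
z = w + 1j * eta

# Test input: boundary Green function of a semi-infinite uniform chain
# with hopping t = 1 (semicircular spectrum).  The branch combination
# sqrt(z - 2) * sqrt(z + 2) yields G(z) ~ 1/z in the upper half plane.
G = (z - np.sqrt(z - 2.0) * np.sqrt(z + 2.0)) / 2.0

Delta = 1.0 * G        # Delta_0 = t_0^2 * G, choosing t_0 = 1
t2 = []
for n in range(6):
    # sum rule: t_n^2 = -(1/pi) * integral of Im Delta_n
    t2.append(-Delta.imag.sum() * dw / np.pi)
    # recursion at particle-hole symmetry (all eps_n = 0)
    Delta = z - t2[-1] / Delta
```

For this input every iterate reproduces the semicircular hybridization, so the extracted $t_n^2$ stay at unity up to discretization error.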
A high-energy cutoff $D$ for the domain of $\omega$ is enforced such that $\Im \Sigma(\omega) \propto \theta(D - |\omega|)$. Small numerical errors can produce large perturbations in the real part at high energies, and the recursive nature of the algorithm compounds these errors into both the real and imaginary parts of the iterated hybridization functions, leading to inaccurate results. Such errors also break the Kramers-Kronig relationship between the real and imaginary parts of the auxiliary Green functions, which further contributes to the breakdown of the recursion algorithm.
Introducing the cutoff $D$ stabilizes the algorithm, but is found not to affect physical results for $\lvert \omega \rvert < D$.
\label{sec:cfetechnicalities}
Before proceeding to applications it is worthwhile to first mention some technicalities associated with the mapping in the case where the self-energy exhibits vanishing power-law behavior near the Fermi level~\cite{motttopology}. This is the case which includes the Fermi liquid, a state which arises in such relevant systems as the Anderson impurity model and the Hubbard model~\cite{hewson,dmft}.
In mapping the self-energy of a Fermi liquid there exists a technical complexity in determining the $\{{t}_n\}$ numerically due to the low energy form of the input self-energy.
Following from Eq.~\eqref{eq:lowese}, the self-energy of a Fermi liquid at low energy takes the form
\begin{equation}
\Sigma(\omega) \overset{\omega\to0}{\sim} a_0 \omega + \i b_0 \omega^2
\end{equation}
with $a_0 , b_0 < 0$.
After making the initial association of $\Sigma(\omega) \equiv \Delta_0(\omega) = t_0^2 \tensor*{\widetilde{G}}{^{(1)}_{1,1}}(\omega)$, the first step in the recursion algorithm is the calculation of $\Delta_1(\omega)$ as
\begin{align*}
\Delta_1(\omega)=t_1^2 \tensor*{\widetilde{G}}{^{(2)}_{1,1}}(\omega) = \omega^+ - \frac{1}{\tensor*{\widetilde{G}}{^{(1)}_{1,1}}(\omega)} \,.
\end{align*}
Since both the real and imaginary parts of $\tensor*{\widetilde{G}}{^{(1)}_{1,1}}(\omega)$ are vanishingly small as $\omega\to 0$ and are equal to zero at $\omega=0$, this leads to a non-analyticity in $\Delta_1(\omega=0)$ and hence a singular part in the corresponding Green function $\tensor*{\widetilde{G}}{^{(2)}_{1,1}}(\omega)$. The Green function can be written as $\tensor*{\widetilde{G}}{^{(2)}_{1,1}}(\omega) = \tensor*{\widetilde{G}}{^{(2)}_{1,1}}^{\text{reg}}(\omega)+ \tensor*{\widetilde{G}}{^{(2)}_{1,1}}^{\text{sing}}(\omega)$, where $\tensor*{\widetilde{G}}{^{(2)}_{1,1}}^{\text{reg}}(\omega)$ denotes the regular (continuum) part, and $\tensor*{\widetilde{G}}{^{(2)}_{1,1}}^{\text{sing}}(\omega)$ denotes the singular part. More precisely,
\begin{align*}
\Delta_1(\omega\to0)
&= \omega^+ - \frac{t_0^2}{a_0 \omega + \i b_0 \omega^2}
\\
&= \omega^+ - \frac{t_0^2}{a_0} \left[ \frac{1}{\omega^+} - \frac{\i b_0}{a_0 + \i b_0\omega}\right] \,.
\end{align*}
Therefore,
\[
\Delta_1^{\text{reg}}(\omega\to 0) = \omega + \frac{t_0^2 b_0^2 \omega}{a_0(a_0^2 + b_0^2 \omega^2)} + \i \frac{t_0^2 b_0}{a_0^2 + b_0^2 \omega^2} \;,
\]
such that
\begin{equation}
\Delta_1(\omega\to 0) = \Delta_1^{\text{reg}}(\omega\to0) - \frac{t_0^2}{a_0 \omega^+} \,.\label{eq:sigma1}
\end{equation}
The second term on the right-hand side of Eq.~\eqref{eq:sigma1} corresponds to a pole in the imaginary part concomitant with a diverging real part of $\Delta_1(\omega)$ at $\omega=0$. Furthermore, this pole resides on top of a background function, $\Delta_1^{\text{reg}}(\omega)$, such that $\Delta_1^{\text{reg}}(0)=\i\beta_1$. The residue of this pole is $\alpha_1=\frac{t_0^2}{|a_0|}$. To fix $t_1^2$ the spectral normalization is used,
\begin{align}
t_1^2 = \int \mathcal{A}_{1}^{\text{reg}}(\omega) \d\omega +\alpha_1 \,.
\label{eq:t1sq}
\end{align}
Proceeding to the next step of the recursion, the low energy spectral behavior of $\Delta_2(\omega)$ is
\begin{align}
\Delta_2(\omega)=\omega-\frac{\omega t_1^2}{\omega \Delta_1^{\text{reg}}(\omega)+\alpha_1}
\end{align}
whose imaginary part is
\begin{align}
\Im\Delta_2(\omega)&=-\omega t_1^2 \Im \frac{1}{\omega\Delta_1^{\text{reg}}(\omega)+\alpha_1}\nonumber \\
&=\frac{\omega^2 t_1^2 \Im\Delta_1^{\text{reg}}}{(\omega \Re\Delta_1^{\text{reg}} + \alpha_1)^2 + (\omega \Im\Delta_1^{\text{reg}})^2} \,.
\label{eq:sigma_even_site}
\end{align}
Substituting in the respective low energy dependencies of $\Delta_1^{\text{reg}}(\omega)$ it is found that ${\Im}\Delta_2(\omega)\overset{\omega\to0}{\sim} b_2\omega^2$. Similarly, from the evaluation of $\Re\Delta_2(\omega\to 0)$ it follows from the presence of a non-zero $\alpha_1$ that $\Re\Delta_2(\omega) \overset{\omega\to0}{\sim} a_2\omega$, just as in a Fermi liquid. The advantage of separating the regular and singular parts of the odd-site chain hybridization functions is clear from the structure of Eq.~\eqref{eq:sigma_even_site}, where the information about the underlying pole from the previous iteration is embedded via its weight, and allows the recursion algorithm to deal with a regular function numerically.
Based on the above, it is evident that at every odd recursion step $\Delta_{2n+1}(\omega)$ has a pole structure similar to $\Delta_1(\omega)$ with pole weight $\alpha_{2n+1}=\frac{t_{2n}^2}{|a_{2n}|}$ and subsequently the Fermi liquid character will follow for every even numbered site in the chain, $\Delta_{2n}(\omega)$. In summary, the following recursion relations describe the flow of the low-energy expansion coefficients,
\begin{subequations}
\begin{align}
\Delta_{2n}(\omega\to 0) &= a_{2n}\omega + \i b_{2n}\omega^2 & (a_{2n},b_{2n}&<0) \,,
\\
\Delta_{2n+1}(\omega\to 0) &= \frac{\alpha_{2n+1}}{\omega^+} + \i\beta_{2n+1} \,,
\end{align}
\label{eq:flhybparams}
\end{subequations}
where
\begin{subequations}
\begin{align}
a_{2n} &= 1-\frac{t_{2n-1}^2}{\alpha_{2n-1}} \,,
&
b_{2n} &= \frac{t_{2n-1}^2 \beta_{2n-1}}{\alpha_{2n-1}^2} \,,
\\
\alpha_{2n+1} &= \frac{t_{2n}^2}{|a_{2n}|} \,,
&
\beta_{2n+1} &= \frac{t_{2n}^2 b_{2n}}{a_{2n}^2} \,.
\end{align}
\end{subequations}
The asymptotic properties of the $\Delta_n(\omega)$ are therefore completely determined by the initialized values $a_0$, $b_0$ and the $\{t_n\}$ determined by the above continued fraction expansion.
Note that the above analytic structure is characteristic of a Fermi liquid: All $\Im\Delta_n(\omega)$ for {even} $n$ have low-energy quadratic behavior, while all {odd} $n$ functions have zero-energy poles.
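The flow of these low-energy coefficients is simple to iterate. The sketch below implements only the bookkeeping of the recursion relations above; the starting values and the $t_n^2$ are illustrative numbers chosen for the demonstration, not values obtained from an actual self-energy (in a real calculation the $t_n^2$ come from the spectral normalization at each step):

```python
def low_energy_flow(a0, b0, t2, nsteps):
    """Iterate the low-energy expansion coefficients of Delta_n:
    even sites carry (a, b), odd sites carry (alpha, beta).
    Indexing: a[n] is a_{2n}, alpha[n] is alpha_{2n+1}."""
    a, b = [a0], [b0]
    alpha, beta = [], []
    for n in range(nsteps):
        alpha.append(t2[2 * n] / abs(a[-1]))           # pole weight
        beta.append(t2[2 * n] * b[-1] / a[-1] ** 2)    # background height
        a.append(1.0 - t2[2 * n + 1] / alpha[-1])      # next even site
        b.append(t2[2 * n + 1] * beta[-1] / alpha[-1] ** 2)
    return a, b, alpha, beta

a, b, alpha, beta = low_energy_flow(-2.0, -1.0, [1.0] * 4, nsteps=2)
# e.g. alpha_1 = t_0^2/|a_0| = 0.5, a_2 = 1 - t_1^2/alpha_1 = -1,
# and b_2 = t_1^2 beta_1 / alpha_1^2 = -1
```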
On the practical level of the numerical calculations, for every odd iteration the singular pole feature is cut from $\mathcal{A}_{2n+1}(\omega)$ and a Hilbert transform is then performed to obtain the correct corresponding regular real part, and hence $\Delta_{2n+1}^{\text{reg}}(\omega)$. The even or odd $t_{n}$ are subsequently evaluated at each iteration from the normalization of $\mathcal{A}_{n}(\omega)$ as above.
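The principal-value Hilbert transform of the pole-free (regular) spectrum can be implemented directly. Below is a minimal sketch (the function name and test parameters are illustrative), tested on a Lorentzian whose real part is known in closed form:

```python
import numpy as np

def kk_real_part(w, rho):
    """Re F(w) = P-integral of rho(w')/(w - w') dw' on a uniform grid.
    The principal-value singularity is tamed by subtracting rho(w) under
    the integral; the two grid endpoints are left untreated (set to 0)."""
    dw = w[1] - w[0]
    re = np.zeros_like(rho)
    for i in range(1, len(w) - 1):
        den = w[i] - w
        den[i] = 1.0                      # integrand -> 0 there anyway
        re[i] = ((rho - rho[i]) / den).sum() * dw
        # analytic PV integral of the subtracted constant over the window
        re[i] += rho[i] * np.log(abs((w[i] - w[0]) / (w[-1] - w[i])))
    return re

# test: F(w) = 1/(w - w0 + i*gamma), so rho = -Im F / pi is a Lorentzian
gamma, w0 = 0.5, 0.0
w = np.linspace(-20.0, 20.0, 4001)
rho = (gamma / np.pi) / ((w - w0) ** 2 + gamma ** 2)
re_num = kk_real_part(w, rho)
re_exact = (w - w0) / ((w - w0) ** 2 + gamma ** 2)
```

In the actual algorithm this transform is applied to $\mathcal{A}_{2n+1}(\omega)$ after the pole has been cut, yielding $\Re\Delta_{2n+1}^{\text{reg}}(\omega)$.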
This recursion scheme is highly sensitive to the precision of the input.
In order to capture the appropriate low energy behavior of the imaginary part of the self-energy in the chain parameters it is typically necessary to manually correct some of the numerical artifacts which appear. The types of artifacts which must be corrected are described in~\S\ref{sec:seproblems}. The region where the irregularities appear is relatively small, so it is generally easy to extrapolate the correct behavior of $\Im\Sigma(\omega)$.
\subsection{Single Impurity Anderson Model\label{sec:auxsiam}}
As a first case study, this section details how this auxiliary field mapping can be applied to the single impurity Anderson model.
The starting point of the auxiliary field mapping is a solution for the self-energy of the Anderson model, here obtained from NRG analysis of the single impurity Anderson model.
From \S\ref{ch:methods} the Hamiltonian for the single-impurity Anderson model is
\begin{equation}
\hat{H}_{\textsc{aim}} = \sum_{k,\sigma} \tensor*{\varepsilon}{_{k}} \opd{c}{k,\sigma} \op{c}{k,\sigma} + \sum_{k,\sigma} \left( \tensor*{V}{_{k,\sigma}} \opd{c}{k,\sigma} \op{d}{\sigma} + \tensor*{V}{^*_{k,\sigma}} \opd{d}{\sigma} \op{c}{k,\sigma} \right) + \sum_{\sigma} \tensor*{\varepsilon}{_{d}} \opd{d}{\sigma} \op{d}{\sigma} + U \opd{d}{\uparrow} \op{d}{\uparrow} \opd{d}{\downarrow} \op{d}{\downarrow} \tag{\ref*{eq:siam}}
\end{equation}
The initial condition for the impurity model is taken as a flat band hybridization of bandwidth $D$
\begin{equation}
\Im\Delta(\omega) = -\frac{V^2 \pi}{2 D} \Theta(D-|\omega|) \,.
\end{equation}
The real part of the hybridization is obtained by a Hilbert transform and is found to be
\begin{equation}
\Re \Delta(\omega) = \frac{V^2}{2D} \ln\left\lvert \frac{\omega+D}{\omega-D} \right\rvert \,.
\end{equation}
This ensures that the hybridization function obeys the Kramers-Kronig relations\index{Kramers-Kronig relations} and possesses the correct analytic structure.
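Both parts descend from the single analytic function $\Delta(z) \propto \ln[(z+D)/(z-D)]$ evaluated just above the real axis. A minimal numerical check of this analytic structure (parameter values as used below; the small positive $\eta$ is a numerical stand-in for $i0^+$):

```python
import cmath
import math

V, D = 0.1, 1.0
eta = 1e-9  # positive infinitesimal selecting the retarded branch

def delta(w):
    """Retarded flat-band hybridization Delta(w + i0+),
    whose imaginary part is -(pi V^2 / 2D) inside the band."""
    z = complex(w, eta)
    return (V * V / (2.0 * D)) * cmath.log((z + D) / (z - D))

inside = delta(0.3)   # within the band: finite negative imaginary part
outside = delta(1.5)  # outside the band: purely real
```

The real part is odd in $\omega$ and the imaginary part is flat inside the band, consistent with the Kramers-Kronig relations.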
In the following, the parameters are chosen to be $V/D=0.1$ and $D=1.0$.
This flat band hybridization is plotted in Fig.~\ref{fig:inputflathyb}.
\begin{figure}[htp!]
\centering
\includegraphics{flatband.pdf}
\caption[Anderson model flat band hybridization function]{Anderson model flat band hybridization function as the input of the NRG self-energy calculation.\label{fig:inputflathyb}}
\end{figure}
The impurity is treated with particle-hole symmetry at half-filling with $U/D = 0.3$ and $\varepsilon_d/D = -0.15$. The resulting self-energy as obtained from NRG at different temperatures is plotted in Fig.~\ref{fig:siamV0.1}.
As will be shown below, the auxiliary chains take on the form of generalized SSH models which exhibit features belonging to the same class of generalizations discussed in \S\ref{ch:genssh}. The present chapter will discuss primarily the qualitative features of these auxiliary chains. A more detailed quantitative analysis of the auxiliary chains which arise in infinite systems will be postponed until the following chapter, \S\ref{ch:motttopology}, where a more elaborate story will be told.
The primary focus of the analysis here is in the temperature dependence of the auxiliary parameters.
Before analyzing the finite temperature case, it is worth understanding the form of the auxiliary chain for the $T=0$ Fermi liquid self-energy, shown in Fig.~\ref{fig:siamV0.1T0}. The auxiliary chains exhibit a uniform stagger, taking the form of $t_n = \overline{t} \pm \delta t_n$. In this way the auxiliary chains are said to be of generalized SSH form.
Plotted in Fig.~\ref{fig:T1e-5V1tn} are the chain parameters generated by the auxiliary field mapping, and plotted in Fig.~\ref{fig:finiteTchainzoom1e-5} are the odd (blue) and even (red) hopping parameters in the asymptotic regime. Plotted here are the $\delta t_n$'s only, with the normalization $\overline{t} = 0$. The overall form of the chain parameters is that of $1/n$ decay, which is consistent with the pattern uncovered in \S\ref{sec:pseudogapssh}.
At $T=0$, the long distance behavior of the auxiliary chain hopping parameters takes the form
\begin{equation}
t_n \sim \frac{1}{2}\sqrt{1-(-1)^n\frac{2}{n+d}} \,,
\label{eq:amsetn}
\end{equation}
where $d \sim 1/Z$ is related to the quasiparticle weight $Z$,
\begin{equation}
Z = \left( 1 - \left. \frac{\d \Re \Sigma}{\d \omega} \right\rvert_{\omega=0} \right)^{-1} \,.
\end{equation}
The factor of 2 in Eq.~\eqref{eq:amsetn} is understood to arise from $-\Im\Sigma(\omega) \overset{|\omega| \ll D}{\sim} \omega^2$, which matches the expected behavior from the generalized SSH models constructed from Eq.~\eqref{eq:powerlawtn}.
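The asymptotic form of Eq.~\eqref{eq:amsetn} can be checked directly: the staggered deviation $\delta t_n = t_n - \tfrac{1}{2}$ decays as $1/n$ with opposite signs on even and odd sites. A short illustration (the offset $d$ here is a hypothetical value, not a fitted $1/Z$):

```python
import math

d = 10.0  # hypothetical offset, d ~ 1/Z
t = [0.5 * math.sqrt(1.0 - (-1) ** n * 2.0 / (n + d)) for n in range(1, 2001)]

# staggered deviation from the uniform value 1/2;
# expanding the square root gives |dt_n| ~ 0.5 / (n + d) at large n
dt = [tn - 0.5 for tn in t]
```

The sign of $\delta t_n$ alternates with $n$ and the envelope $|\delta t_n|(n+d)$ tends to $\tfrac{1}{2}$, which is the $1/n$ decay referred to in \S\ref{sec:pseudogapssh}.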
At zero temperature the imaginary part of the self-energy exhibits Fermi liquid behavior at low energy, $-\Im\Sigma(\omega) \overset{|\omega| \ll D}{\sim} \omega^2$,
while at finite temperatures the self-energy plateaus to a finite value at zero frequency. The high energy ($|\omega|\gg T$) features of $-\Im\Sigma(\omega)$ have no significant temperature dependence.
\begin{figure}[htp!]
\centering
\begin{subfigure}{\linewidth}
\centering
\phantomsubcaption{\label{fig:siamV0.1T0}}
\vspace{-\baselineskip}
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{siam_V01_T1e-10.pdf}};
\node at (5.25,1.5) {\footnotesize{\subref*{fig:siamV0.1T0}}};
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{siam_V01_T1e-4.pdf}};
\node at (5.25,1.5) {\footnotesize{\subref*{fig:siamV0.1T1e-4}}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:siamV0.1T1e-4}}
\vspace{-\baselineskip}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{siam_V01_T1e-3.pdf}};
\node at (5.25,1.5) {\footnotesize{\subref*{fig:siamV0.1T1e-3}}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:siamV0.1T1e-3}}
\vspace{-\baselineskip}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{siam_V01_T1e-2.pdf}};
\node at (5.25,1.5) {\footnotesize{\subref*{fig:siamV0.1T1e-2}}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:siamV0.1T1e-2}}
\end{subfigure}
\caption[$-\Im\Sigma(\omega)$ for the single impurity Anderson model at finite temperatures]{$-\Im\Sigma(\omega)$ for the single impurity Anderson model with $V=0.1$ and $U=0.3$ at half-filling evaluated at \subref{fig:siamV0.1T0} $T=0$, \subref{fig:siamV0.1T1e-4} $T=10^{-4}$, \subref{fig:siamV0.1T1e-3} $T=10^{-3}$, and \subref{fig:siamV0.1T1e-2} $T=10^{-2}$.
At $T=0$ $-\Im\Sigma(\omega) \overset{|\omega| \ll D}{\sim} \omega^2$, but at finite temperatures $-\Im\Sigma(\omega)$ plateaus to a finite value. Note that only low energy parts of $-\Im\Sigma(\omega)$ are meaningfully affected by temperature; the high energy ($|\omega|\gg T$) features are essentially unchanged across the parameter regime.
\label{fig:siamV0.1}}
\end{figure}
As seen in Fig.~\ref{fig:finiteTchain}, the head of the chain corresponding to various temperatures takes on a similar form, but the envelope which follows changes with the temperature. Since the parameters at the head of the chain determine the high energy features of the spectrum, it is expected that these would be nearly identical for the various temperatures from visual inspection of the self-energies in Fig.~\ref{fig:siamV0.1}. The detailed effects of temperature on the envelope of the chain are illustrated in Fig.~\ref{fig:finiteTchainzoom}.
A prominent characteristic of these auxiliary systems is the appearance of a thermal length scale $\xi_T$ in the auxiliary chains.
\begin{figure}[htp!]
\begin{subfigure}{0.49\linewidth}
\phantomsubcaption{\label{fig:T1e-5V1tn}}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{T1e-5V1tn.pdf}};
\node at (3.125,2) {\footnotesize \subref*{fig:T1e-5V1tn}};
\end{tikzpicture}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\phantomsubcaption{\label{fig:T1e-4V1tn}}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{T1e-4V1tn.pdf}};
\node at (3.125,2) {\footnotesize \subref*{fig:T1e-4V1tn}};
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\phantomsubcaption{\label{fig:T1e-3V1tn}}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{T1e-3V1tn.pdf}};
\node at (3.125,2) {\footnotesize \subref*{fig:T1e-3V1tn}};
\end{tikzpicture}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\phantomsubcaption{\label{fig:T1e-2V1tn}}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{T1e-2V1tn.pdf}};
\node at (3.125,2) {\footnotesize \subref*{fig:T1e-2V1tn}};
\end{tikzpicture}
\end{subfigure}
\caption[Auxiliary chain parameters for the self-energy of the single impurity Anderson model at finite temperatures]{Auxiliary chain parameters for the self-energy of the single impurity Anderson model at \subref{fig:T1e-5V1tn} $T=0$, \subref{fig:T1e-4V1tn} $T=10^{-4}$, \subref{fig:T1e-3V1tn} $T=10^{-3}$, and \subref{fig:T1e-2V1tn} $T=10^{-2}$. \label{fig:finiteTchain}}
\end{figure}
The $t_n$ at large $n$ correspond to features approximately at $\omega \sim 1/n$. The fact that at finite temperature the $t_n$'s reach a fixed value without oscillations, $t_{2n} \simeq t_{2n-1}$, is expected since at finite $T$, $-\Im\Sigma(\omega)$ does not tend to zero, but rather plateaus at a finite value.
\begin{figure}[htp!]
\begin{subfigure}{\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{T1e-5V1tnzoom.pdf}};
\node at (7.25,2) {\footnotesize\subref*{fig:finiteTchainzoom1e-5}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:finiteTchainzoom1e-5}}
\vspace{-2\baselineskip}
\end{subfigure}
\begin{subfigure}{\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{T1e-4V1tnzoom.pdf}};
\node at (7.25,2) {\footnotesize\subref*{fig:finiteTchainzoom1e-4}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:finiteTchainzoom1e-4}}
\vspace{-2\baselineskip}
\end{subfigure}
\begin{subfigure}{\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{T1e-3V1tnzoom.pdf}};
\node at (7.25,2) {\footnotesize\subref*{fig:finiteTchainzoom1e-3}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:finiteTchainzoom1e-3}}
\vspace{-2\baselineskip}
\end{subfigure}
\begin{subfigure}{\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{T1e-2V1tnzoom.pdf}};
\node at (7.25,2) {\footnotesize\subref*{fig:finiteTchainzoom1e-2}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:finiteTchainzoom1e-2}}
\vspace{-\baselineskip}
\end{subfigure}
\caption[Auxiliary chain parameters for the Anderson model at finite temperatures]{Envelopes of the auxiliary chain hopping parameters $t_n = \overline{t} + \delta t_n$ for the single-impurity Anderson model at temperatures of \subref{fig:finiteTchainzoom1e-5} $T=0$, \subref{fig:finiteTchainzoom1e-4} $T=10^{-4}$, \subref{fig:finiteTchainzoom1e-3} $T=10^{-3}$, and \subref{fig:finiteTchainzoom1e-2} $T=10^{-2}$. The even $t_n$ are plotted in red and the odd $t_n$ are plotted in blue. The chain parameters exhibit an enveloped behavior at the front of the chain before settling down to a constant value at a thermal length scale $\xi_T \sim 1/T$. The $T=0$ case does not settle to a constant, reflecting $\xi_T \to \infty$. Points plotted are a subset of the total data set to best illustrate the envelopes.\label{fig:finiteTchainzoom}}
\end{figure}
At finite temperature, the $\{t_n\}$ exhibit $\sim1/n$ decay until settling to a constant value (within systematic numerical noise) at a chain site on the order of $\xi_{T}$.
Analysis of the auxiliary chain parameters in Fig.~\ref{fig:finiteTchainzoom} reveals that the order of the thermal length scale is
\begin{equation}
\xi_{T} \sim \mathcal{O}(T^{-1}) \,.
\end{equation}
A chain parameterized by constant $t_n$'s corresponds to a boundary spectral function which is metallic and has finite value at the Fermi level.
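This suggests a simple operational way of reading off $\xi_T$ from a chain: locate the site beyond which the even-odd stagger drops below a tolerance. A sketch on synthetic data (the crossover form below is a hypothetical illustration built from the $T=0$ asymptotics, not NRG output):

```python
import math

def stagger_length(t, tol):
    """Return the last site at which the even-odd stagger
    |t_{n+1} - t_n| still exceeds tol."""
    last = 0
    for n in range(len(t) - 1):
        if abs(t[n + 1] - t[n]) >= tol:
            last = n
    return last + 1

# synthetic chain: 1/n staggered decay cut off at a site ~ xi_T ~ 1/T
T = 1e-3
xi = int(1.0 / T)
t = [0.5 * math.sqrt(1.0 - (-1) ** n * 2.0 / (n + 2)) if n < xi else 0.5
     for n in range(1, 3 * xi)]

est = stagger_length(t, tol=1.5e-3)
```

On such data the estimate comes out on the order of $1/T$, consistent with the scale extracted from Fig.~\ref{fig:finiteTchainzoom}.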
\subsection{Hubbard-SSH Revisited}
The above analysis can also naturally be applied to the interacting Hubbard-SSH model on the Bethe lattice seen in \S\ref{ch:bethessh}. Recall from \S\ref{sec:hsshse} that the self-energy on the $\circ$-sites for $U < U_{c}$ is a Fermi liquid with $-\Im\Sigma^{\circ}(\omega) \sim \omega^2$, just as in the Anderson model above. The auxiliary chain parameters for $\Sigma^{\circ}(\omega)$ are also of the form of Eq.~\eqref{eq:amsetn}.
\begin{figure}[htp!]
\begin{subfigure}[t]{0.49\linewidth}
\includegraphics{hsshSU3a.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.49\linewidth}
\includegraphics{SU3loga.pdf}
\end{subfigure}
\caption{Self-energy of the HSSH model as obtained by the DMFT-NRG calculation of \S\ref{ch:bethessh}. \label{fig:hsshseredux}}
\end{figure}
As observed in Fig.~\ref{fig:hsshseredux}, the self-energy of the $\bullet$-sites for $U < U_{c}$ has low energy power-law behavior of $ -\Im\Sigma^{\bullet}(\omega) \sim \lvert \omega \rvert^{r_\bullet}$, but it is non-Fermi liquid with $r_\bullet \neq 2$.
Here again it is revealed that the long distance behavior of the auxiliary chains take the form of
\begin{equation}
t_n \overset{nZ\gg1}{\sim} \frac{D}{2} \sqrt{1 + (-1)^{n} \frac{r}{n+d}} \,,
\end{equation}
where now $r$ no longer takes the Fermi liquid value of 2. Instead, $r = r_\bullet$, the power-law exponent of the self-energy at $\lvert\omega\rvert\ll1$. This analysis reaffirms the assumptions leading into the toy model in the previous section, as well as the characteristics determined in the generalized SSH models discussed in \S\ref{ch:genssh}.
\section{Quantum Transport\label{sec:transport}}
The auxiliary field mapping presented in the previous section maps an interacting system to a non-interacting auxiliary system. This mapping is not a solution technique, as a solution for the self-energy is needed to initialize the auxiliary field mapping in the first place. A practical utility of this scheme is in the calculation of transport properties of quantum impurity models.
A standard method of calculating conductance is by means of the Kubo formula~\cite{bruusflensberg,kubotransport}, which gives the linear response conductance for interacting systems in arbitrary configurations at arbitrary temperature. A difficulty in the application of the Kubo formula is that the current-current correlator is difficult to compute compared to the Green functions.
There exist numerous methods of calculating transport via Green functions, however these formulae are typically of limited scope and can only be applied in certain circumstances. A basic example is the Landauer formula~\cite{landauer,landauer2,greenlandauer}, which is applicable to transport through a non-interacting impurity with two terminals. This can be generalized to the multi-terminal case in the form of the Landauer-B\"{u}ttiker formula~\cite{landauerbuttiker}.
An electric transport equation which incorporates fully interacting impurities in non-equilibrium with two terminals is the Meir-Wingreen formula~\cite{meirwingreen}. However, the formulation in linear response is in terms of equilibrium retarded Green functions and is limited to the proportionate coupling case.
For non-proportionate coupling systems, Meir-Wingreen requires explicit non-equilibrium Green functions, even in linear response. These are notoriously challenging to calculate for interacting systems, and so these calculations typically resort to the Kubo formula in linear response.
However for a Fermi liquid, as in the case of the Anderson impurity model, the linear response at zero temperature in terms of the Green functions can be obtained by the Oguri formula~\cite{oguri}.
It would therefore be useful to have an equilibrium Green function based formulation of finite temperature linear response transport for interacting models without the proportionate coupling condition. No such formulation is known.
The principal concept of the application here is to map an interacting quantum dot system to an auxiliary system without interactions, thereby avoiding the incorporation of interactions in the calculation of quantum transport expressions. This potentially simplifies the calculation of transport properties of quantum impurity models.
With an effective non-interacting model at hand, quantum transport can be calculated using standard Green function methods within the Landauer-B\"uttiker framework.
For physical concreteness, in this section (\ref{sec:transport}) the Planck constant $h$ and the unit electric charge $e$ are included explicitly. This is done so that physically observable quantities have prefactors of meaningful dimension. The electrical conductance $\mathfrak{G}_C$ for instance has the appropriate dimensionful prefactor of $e^2/h$.
\subsubsection{Single Quantum Dot}
In this section the consequences of the mapping described above are explored in the context of quantum transport. A first application is to the case of transport through a single quantum dot, where the dot is represented by the paradigmatic single-impurity Anderson model,
\begin{equation}\label{eq:aim}
\hat{H}_{\text{SQD}} = \hat{H}_{\text{leads}} + \varepsilon_d ( \hat{n}_{\uparrow} + \hat{n}_{\downarrow} ) + \sum_{\alpha,\sigma} \left( \tensor{V}{_{\alpha}} \opd{d}{\sigma} \op{c}{\alpha \sigma} + \hc \right) + \hat{H}_{\text{int}} \;,
\end{equation}
where $\hat{n}_{\sigma} = \opd{d}{\sigma} \op{d}{\sigma}$, $\hat{H}_{\text{int}} = U_d \hat{n}_{\uparrow}\hat{n}_{\downarrow}$, and
\begin{equation}
\hat{H}_{\text{leads}} = \sum_{\alpha} \hat{H}_{\text{leads}}^{\alpha} = \sum_{\alpha,k,\sigma} \tensor*{\varepsilon}{_k} \opd{c}{\alpha k \sigma} \op{c}{\alpha k \sigma}
\label{eq:transportleads}
\end{equation}
with $\alpha = \text{s}, \text{d}$ for source and drain. The leads are characterized by their free Green functions, denoted $\mathcal{G}^0_{\alpha\alpha}(\omega) \equiv \Green{\op{c}{\sigma}}{\opd{c}{\sigma}}_{\omega}$, with corresponding free density of states $\varrho(\omega)=-\tfrac{1}{\pi} \Im \mathcal{G}^0_{\alpha\alpha}(\omega)$, which is taken to be the same for both source and drain leads.
As described in the previous section, the auxiliary field mapping is an exact representation of the single-particle dynamics for an interacting system in terms of a completely non-interacting one. The Dyson equation for the Anderson impurity reads as
\begin{equation}\label{eq:Gaim}
G_{\sigma}(\omega)\equiv \Green{\op{d}{\sigma}}{\opd{d}{\sigma}}_{\omega} = \frac{1}{\omega - \varepsilon_d/\hslash -\sum\limits_{\alpha}\Delta^{\alpha}(\omega)- \Sigma(\omega)} \,,
\end{equation}
where $\Delta^{\alpha}(\omega)=V_{\alpha}^2 \mathcal{G}^0_{\alpha\alpha}(\omega)$ is the hybridization between the impurity and the physical lead $\alpha \in \{ \text{s}, \text{d} \}$ and $\Sigma(\omega)$ is the interaction self-energy. For convenience the static contribution to the self-energy is absorbed into the definition of the renormalized level $\varepsilon_d^* = \varepsilon_d + \hslash\,\Re \Sigma(0)$, and the calculation proceeds with the dynamical part of the self-energy $\tilde{\Sigma}(\omega) = \Sigma(\omega) - \Re \Sigma(0)$.
\begin{figure}[htp!]
\begin{subfigure}[c]{0.49\linewidth}
\begin{tikzpicture}[{thick}]
\node[circle,draw=black,fill=black!10,inner sep=1pt] (imp) at (0,0) {$\uparrow\downarrow$};
%
\def1.25{1.25}
\def3{2.5}
\def0.4{0.5}
\coordinate (s) at (-1.25,0);
\coordinate (d) at (1.25,0);
\draw (d) arc(0:-90:-3 cm and 0.4 cm);
\draw (d) arc(0:-90:-3 cm and -0.4 cm);
%
\draw (s) arc(0:-90:3 cm and 0.4 cm);
\draw (s) arc(0:-90:3 cm and -0.4 cm);
%
\node at ($(s)+(-1.5,0)$) {source};
\node at ($(d)+(1.5,0)$) {drain};
%
\draw[-,line width=1.5pt] (s)--(imp) node[midway,above] {$V_{\text{s}}$};
\draw[-,line width=1.5pt] (d)--(imp) node[midway,above] {$V_{\text{d}}$};
%
\path[in=60,out=120,looseness=6] (imp) edge[-latex,line width=0.5pt,double distance=0.5pt] node[above] {$U_d$} (imp);
%
\begin{scope}[scale=0.67,draw=none]
\def2{1}
\def3{3}
\def2{2}
\foreach[evaluate=\s as \sc using (\s+0.5)] \s in {2,...,3}
{
\node[rectangle,inner sep=5pt] (d\s) at (0,-\sc) {};
}
\draw[draw=none,line width=1.25pt](imp)--(d2);
\foreach[evaluate=\s as \n using int(\s+1)] \s in {2,...,2}
{
\draw[draw=none,line width=1.25pt](d\s)--(d\n) node[midway,above] {};
}
\draw[draw=none,line width=1.25pt, dashed, line cap=round] (d3)--+(0,-1) {};
\end{scope}
\node at (3.75,1.4) {\footnotesize\subref*{fig:singlei}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:singlei}}
\end{subfigure}
\begin{subfigure}[c]{0.49\linewidth}
\begin{tikzpicture}[{thick}]
\node[circle,draw=black,inner sep=6pt] (imp) at (0,0) {};
%
\def1.25{1.25}
\def3{2.5}
\def0.4{0.5}
\coordinate (s) at (-1.25,0);
\coordinate (d) at (1.25,0);
\draw (d) arc(0:-90:-3 cm and 0.4 cm);
\draw (d) arc(0:-90:-3 cm and -0.4 cm);
%
\draw (s) arc(0:-90:3 cm and 0.4 cm);
\draw (s) arc(0:-90:3 cm and -0.4 cm);
%
\node at ($(s)+(-1.5,0)$) {source};
\node at ($(d)+(1.5,0)$) {drain};
%
\draw[-,line width=1.5pt] (s)--(imp) node[midway,above] {$V_{\text{s}}$};
\draw[-,line width=1.5pt] (d)--(imp) node[midway,above] {$V_{\text{d}}$};
\begin{scope}[scale=0.67]
\def2{1}
\def3{3}
\def2{2}
\foreach[evaluate=\s as \sc using (\s+0.5)] \s in {2,...,3}
{
\node[rectangle,draw=red,inner sep=5pt] (d\s) at (0,-\sc) {};
}
\draw[red,line width=1.25pt](imp)--(d2);
\foreach[evaluate=\s as \n using int(\s+1)] \s in {2,...,2}
{
\draw[red,line width=1.25pt](d\s)--(d\n) node[midway,above] {};
}
\draw[red,line width=1.25pt, dashed, line cap=round] (d3)--+(0,-1) {};
\end{scope}
\node at (0.45,-0.6) {$V_{\text{aux}}$};
\node at (3.75,1.4) {\footnotesize\subref*{fig:singleimpaux}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:singleimpaux}}
\end{subfigure}
\caption[Auxiliary field mapping of transport through single quantum dot]{Schematic of the auxiliary field mapping. The interaction self-energy $\tilde{\Sigma}(\omega)$ of the single-impurity Anderson model \subref{fig:singlei} is mapped to an auxiliary non-interacting tight-binding chain \subref{fig:singleimpaux}. The current between physical source and drain leads due to a bias voltage in the interacting system is reproduced in the mapped non-interacting system through a zero-current constraint for the auxiliary `lead' in the 3-terminal Landauer-B\"uttiker formula.
}
\label{fig:singleimp}
\end{figure}
Following the mapping described previously in \S\ref{sec:mapping}, the dynamical part of the self-energy is interpreted as a hybridization to a bath of auxiliary non-interacting degrees of freedom $\tilde{\Sigma}(\omega) \mapsto \Delta^{\text{aux}}(\omega)$.
The effect of the electronic scattering due to the Coulomb interaction on the impurity is reproduced exactly by a proper choice of the auxiliary bath. For simplicity it is assumed that the impurity is at particle-hole symmetry, with $\varepsilon_d = -U_d/2$, so that $\varepsilon_d^*=0$ and Eq.~\eqref{eq:impHaux} does not require inclusion of onsite potentials, $\tensor*{\epsilon}{_{n}} = 0$ $\forall n$. For notational clarity, the hybridization parameter from the impurity site to the auxiliary chain in Eq.~\eqref{eq:impHhyb} will be denoted $\tensor{V}{_{\text{aux}}}$ to avoid any conflation with hybridization to the leads.
The mapping is illustrated schematically in Fig.~\ref{fig:singleimp}.
\begin{comment}
Here we describe the auxiliary system as a semi-infinite linear chain,
\begin{eqnarray}\label{eq:Haux}
\hat{H}_{\text{aux}} = \sum_{n=0}^{\infty}\sum_{\sigma} \left( t_n^{\phantom{\dagger}} \opd{f}{n\sigma} \op{f}{(n+1)\sigma} + \hc \right) \,,
\end{eqnarray}
where for simplicity we have now assumed particle-hole symmetry, $\epsilon_d=-U_d/2$ (such that $\epsilon_d^*=0$ and Eq.~\eqref{eq:impHaux} does not require inclusion of onsite potentials). The auxiliary chain is coupled at one end to the impurity,
\begin{eqnarray}\label{eq:Hhyb_aux}
\hat{H}_{\text{imp-aux}}= \sum_{\sigma} \left( \tensor{V}{_{\text{aux}}} \opd{d}{\sigma} \op{f}{0\sigma} + \hc \right) \,.
\end{eqnarray}
The impurity-auxiliary chain hybridization function is thus given by $\Delta^{\text{aux}}(\omega)=V_{\text{aux}}^2 \tensor*{\widetilde{G}}{^{(0)}_{1,1}}(\omega)$, where
$\tensor*{\widetilde{G}}{^{(0)}_{1,1}}(\omega) = \Green{\op{f}{0\sigma}}{\opd{f}{0\sigma}}^0_\omega$
is the boundary Green function of the isolated auxiliary system. The latter can be expressed simply as a continued fraction in terms of the tight-binding parameters $\{t_n\}$ in Eq.~\eqref{eq:impHaux} as $\tensor*{\widetilde{G}}{^{(0)}_{1,1}}(\omega)=1/[\omega^+ -\Delta^{\text{aux}}_0(\omega)]$ with $\Delta^{\text{aux}}_n(\omega)=t_n^2/[\omega^+-\Delta^{\text{aux}}_{n+1}(\omega)]$.
\end{comment}
As per the auxiliary field mapping, the auxiliary parameters $V_{\text{aux}}$ and $\{t_n\}$ are uniquely determined by setting $\Sigma(\omega) \mapsto \Delta^{\text{aux}}(\omega) =V_{\text{aux}}^2 \tensor*{\widetilde{G}}{^{(1)}_{1,1}}(\omega)$.
At half-filling, $V_{\text{aux}} = U_d/2$. The other auxiliary parameters are regular and well-behaved, with the recursion being efficient and numerically stable.
Since the original self-energy $\Sigma(\omega)$ is a continuous function, the recursion does not terminate and the auxiliary chain is semi-infinite; however, the $t_n$ settle down to a regular pattern after a finite number of steps.
The key point for the present discussion is not the specific form of these parameters for a given model, but rather the fact that this mapping exists and is unique.
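In practice, the hybridization supplied by the auxiliary chain can always be reconstructed from the $\{t_n\}$ by the standard continued-fraction evaluation of the boundary Green function of a semi-infinite tight-binding chain. A minimal sketch, truncating the chain at finite depth and specializing to a uniform chain where the boundary spectral value is known exactly ($-\Im \widetilde{G}(0) \to 1/t$ as the broadening vanishes):

```python
def boundary_green(ts, w, eta):
    """Boundary Green function of a semi-infinite tight-binding chain with
    hoppings ts, via a truncated continued fraction:
    G(w) = 1 / (w+ - t_0^2 / (w+ - t_1^2 / (...)))."""
    z = complex(w, eta)
    g = 0.0 + 0.0j
    for t in reversed(ts):  # build the fraction up from the truncated deep end
        g = 1.0 / (z - t * t * g)
    return g

# uniform chain t_n = t: boundary spectrum is a semicircle of half-bandwidth 2t
t = 0.5
g0 = boundary_green([t] * 2000, 0.0, eta=0.01)
```

The small broadening `eta` and truncation depth are numerical conveniences; the staggered $\{t_n\}$ produced by the mapping are used in exactly the same way to evaluate $\Delta^{\text{aux}}(\omega)$.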
For the mapped non-interacting system (Fig.~\ref{fig:singleimpaux}) there are {three} effective leads, with a resonant level impurity Green function,
\begin{equation}\label{eq:auxG}
G_{\sigma}(\omega)= \frac{1}{\omega -\varepsilon_d^*/\hslash -\sum\limits_{\gamma} \tensor{\Delta}{^{\gamma}}(\omega)} \,,
\end{equation}
where $\gamma \in \{ \text{s}, \text{d}, \text{aux} \}$.
To obtain the conductance through the dot it is now necessary to calculate the current $I^{\text{d}}$ flowing from source lead to drain lead due to a source-drain bias voltage $\Delta V_{\text{b}}$. In the physical interacting system (Fig.~\ref{fig:singlei}), the linear response conductance $\mathfrak{G}_C$ follows from the Meir-Wingreen formula~\cite{meirwingreen},
\begin{equation}\label{eq:MW_cond}
\mathfrak{G}_C(T) = \frac{e^2}{h} \int \d\omega \left[- \frac{\partial f_{\text{eq}}(\omega)}{\partial \omega} \right] \mathcal{T}(\omega,T) \;,
\end{equation}
where $f_{\text{eq}}(\omega) =1/(1+\e^{\hslash\omega/k_{\text{B}} T})$ is the equilibrium Fermi function, and $\mathcal{T}$ is the effective transmission function
\begin{equation}\label{eq:trans}
\mathcal{T}(\omega,T) = \frac{4\pi V_{\text{s}}^2 V_{\text{d}}^2 \varrho(\omega)}{V_{\text{s}}^2+V_{\text{d}}^2} \sum_{\sigma} \left [ -\Im G_{\sigma}(\omega,T) \right]
\end{equation}
in terms of the lead-coupled interacting impurity Green function Eq.~\eqref{eq:Gaim}.
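To illustrate Eqs.~\eqref{eq:MW_cond} and \eqref{eq:trans}, the sketch below evaluates the conductance (in units of $e^2/h$, with $\hslash = k_{\text{B}} = 1$) for the simplest limit of a non-interacting resonant level ($\Sigma = 0$) with symmetric flat-band leads, where the unitarity value $\mathfrak{G}_C \to 2e^2/h$ is recovered at low temperature. All numerical parameter values are illustrative:

```python
import math

# illustrative parameters: symmetric couplings and a flat lead density of states
Vs = Vd = 0.1
rho = 1.0
Gs, Gd = math.pi * Vs ** 2 * rho, math.pi * Vd ** 2 * rho
Gamma = Gs + Gd        # total level width
T = 1e-4               # temperature well below Gamma

def neg_df(w):
    """-df_eq/dw, the thermal window of width ~ T."""
    return 1.0 / (4.0 * T * math.cosh(w / (2.0 * T)) ** 2)

def transmission(w):
    """Eq. (trans) for a resonant level: G(w) = 1/(w + i*Gamma); factor 2 is the spin sum."""
    neg_im_g = Gamma / (w * w + Gamma * Gamma)
    return 4.0 * math.pi * Vs ** 2 * Vd ** 2 * rho / (Vs ** 2 + Vd ** 2) * 2.0 * neg_im_g

# numerical integration of Eq. (MW_cond) over the thermal window
N, W = 4001, 20.0 * T
h = 2.0 * W / (N - 1)
g = sum(neg_df(-W + i * h) * transmission(-W + i * h) for i in range(N)) * h
```

Since $T \ll \Gamma$, the thermal window samples only $\mathcal{T}(0)$ and the conductance saturates at the unitarity limit.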
For the mapped non-interacting system (Fig.~\ref{fig:singleimpaux}), it is instead possible to use the 3-terminal linearized Landauer-B\"uttiker formula~\cite{landauerbuttiker} for the current into lead $\gamma$,
\begin{equation}\label{eq:LB}
I^{\gamma}(T) = \frac{e}{h} \sum_{\beta\ne\gamma}\int \d\omega \left[ \frac{\partial f_{\text{eq}}(\omega)}{\partial \omega} \right](\mu_{\gamma}-\mu_{\beta})\mathcal{T}_{\gamma\beta}(\omega) \;,
\end{equation}
where $\mu_{\gamma}$ is the chemical potential of lead $\gamma$ and $\mathcal{T}_{\gamma\beta}(\omega)=4\Gamma_{\gamma}(\omega)\Gamma_{\beta}(\omega)\sum_{\sigma}|G_{\sigma}(\omega)|^2$,
with $G_{\sigma}(\omega)$ the effective non-interacting Green function given in Eq.~\eqref{eq:auxG}, and $\Gamma_{\gamma}(\omega) = - \Im \Delta^{\gamma}(\omega)$. Eq.~\eqref{eq:LB} is a generalization of the usual Landauer-B\"uttiker formula to the case with inequivalent leads with arbitrary density of states. This is important because the auxiliary `lead' has a specific form that must be accounted for. Here it is assumed for simplicity that the source and drain leads are equivalent, such that $\Gamma_{\text{s}}(\omega)=\pi V_{\text{s}}^2\varrho(\omega)$ and $\Gamma_{\text{d}}(\omega)=\pi V_{\text{d}}^2\varrho(\omega)$ with the same, but otherwise arbitrary, density of states $\varrho(\omega)$.
The auxiliary lead is not a physical lead and so there is no voltage applied to it ($\mu_{\text{aux}} = 0$), and therefore no current flows into or out of it, $I^{\text{aux}} = 0$. The latter property is also required by current conservation in the physical system, $I^{\text{s}} = -I^{\text{d}}$. From Eq.~\eqref{eq:LB} these constraints imply that $V^2_{\text{s}} \tensor*{\mu}{_{\text{s}}} + V^2_{\text{d}} \tensor*{\mu}{_{\text{d}}}=0$. The voltage bias $e \Delta V_{\text{b}} \equiv \mu_{\text{s}} - \mu_{\text{d}}$ must be split across source and drain leads in a specific way to satisfy this constraint, with $\mu_{\text{s}} = e \Delta V_{\text{b}} (1+V^2_{\text{s}}/V^2_{\text{d}})^{-1}$ and $\mu_{\text{d}} = - e \Delta V_{\text{b}} (1 + V^2_{\text{d}} / V^2_{\text{s}})^{-1}$. Note however that the $\Delta V_{\text{b}} \to 0$ linear response conductance does not depend on the details of this splitting.
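The stated splitting can be verified directly; a short numerical check with illustrative couplings (hypothetical values, in units where $e=1$):

```python
# illustrative squared couplings V_s^2, V_d^2 and bias e*dV_b
Vs2, Vd2 = 0.04, 0.01
dV = 1e-3

# splitting of the bias across source and drain as stated in the text
mu_s = dV / (1.0 + Vs2 / Vd2)
mu_d = -dV / (1.0 + Vd2 / Vs2)
```

By construction $\mu_{\text{s}} - \mu_{\text{d}} = e\Delta V_{\text{b}}$ while $V_{\text{s}}^2\mu_{\text{s}} + V_{\text{d}}^2\mu_{\text{d}} = 0$, enforcing the zero-current constraint on the auxiliary lead.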
Substituting these expressions into Eq.~\eqref{eq:LB} results in the current into the drain lead being
\begin{equation}\label{eq:LBaux}
I^{\text{d}}(T) = \Delta V_{\text{b}} \frac{e^2 }{h} ~\frac{4\pi V^2_{\text{d}} V^2_{\text{s}}}{V^2_{\text{d}} + V^2_{\text{s}}} \int \d\omega \left[ -\frac{\partial f_{\text{eq}}(\omega)}{\partial \omega} \right] \varrho(\omega) \sum_{\gamma,\sigma}\Gamma_{\gamma}(\omega)|G_{\sigma}(\omega)|^2 \;.
\end{equation}
This reduces correctly to Eqs.~\eqref{eq:MW_cond} and \eqref{eq:trans} since $\sum\limits_{\gamma}\Gamma_{\gamma}(\omega)|G_{\sigma}(\omega)|^2 = - \Im G_{\sigma}(\omega)$ from the Dyson equation~\eqref{eq:auxG}.
These arguments generalize trivially to any multi-orbital two-lead system in proportionate coupling. The Green function for the effective orbital $\op{\overline{d}}{\sigma}$ coupling to the leads can always be expressed as
\begin{equation}\label{eq:G_PC_aux}
\overline{G}_{\sigma}(\omega)\equiv \Green{\op{\overline{d}}{\sigma}}{\opd{\overline{d}}{\sigma}}_{\omega} = \frac{1}{\omega - \varepsilon_d/\hslash - \sum\limits_{\alpha}\Delta^{\alpha}(\omega)- \Sigma'_{\sigma}(\omega)} \,,
\end{equation}
where $\Sigma'_{\sigma}(\omega)$ includes the effect of scattering from coupling of $\op{\overline{d}}{\sigma}$ to the other impurity degrees of freedom, as well as accounting for electronic interactions. Following the same steps as before, $\Sigma'_{\sigma}(\omega)$ is mapped to a single non-interacting auxiliary chain \eqref{eq:impHaux} coupled at one end to a single resonant level \eqref{eq:impHhyb}, which is also coupled to the physical source and drain leads. Schematically, the mapped system is identical to that depicted in Fig.~\ref{fig:singleimpaux}. Such an example will be illustrated for the triple quantum dot in a following subsection.
The non-proportionate coupling case is more subtle, since the equivalent non-interacting form of the transmission function for use in Eq.~\eqref{eq:LB} must be determined. In general this requires mapping the effective self-energies to {two} auxiliary chains. The mapping for the case of the serial two-impurity Anderson model is illustrated in the following subsection.
To summarize this subsection, quantum transport for interacting systems can be understood in terms of the non-interacting Landauer-B\"uttiker formula~\cite{landauerbuttiker}, in which the self-energy plays the role of an additional fictitious lead, subject to a zero-current constraint.
This formulation provides a simple way of viewing the correction to quantum transport due to interactions.
Furthermore, the auxiliary chain representation may provide a route to simple approximations, given its convenient structure and well-defined asymptotic form. For example, at $T=0$ in the metallic Kondo screened case,
\begin{equation}
t_n \sim \frac{D}{2}\sqrt{1-(-1)^n\frac{2}{n+d}} \,,
\end{equation}
for large $n$, where $D$ is the effective bandwidth and $d \sim 1/Z$ is related to the quasiparticle weight $Z$. This is the same asymptotic form as the single impurity Anderson model analyzed previously in \S\ref{sec:auxsiam}. The conductance formulae can be expressed in terms of the auxiliary chain parameters.
\subsubsection{Double Quantum Dot}
The calculation of transport properties is simplified by using transport equations for non-interacting leads.
This is made possible by mapping the interaction self-energy to an auxiliary non-interacting tight-binding chain that is then regarded as an additional lead.
The next more general system which can be analyzed within this framework is transport through an impurity consisting of two quantum dots in series, symmetrically hybridized to the source and drain leads as illustrated in Fig.~\ref{fig:dqdp}.
\begin{figure}[htp!]
\centering
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}[thick]
\node[circle,draw=black,fill=black!10,inner sep=1pt] (imp1) at (-0.75,0) {$\uparrow\downarrow$};
\node[circle,draw=black,fill=black!10,inner sep=1pt] (imp2) at (0.75,0) {$\uparrow\downarrow$};
\node[above=0.5cm] at (imp1) {$\footnotesize1$};
\node[above=0.5cm] at (imp2) {$\footnotesize2$};
%
\draw[-,line width=1.5pt] (imp1)--(imp2) node[midway,above] {$t$};
%
\def1.25{2}
\def3{3}
\def0.4{0.5}
\coordinate (s) at (-1.25,0);
\coordinate (d) at (1.25,0);
\draw (d) arc(0:-90:-3 cm and 0.4 cm);
\draw (d) arc(0:-90:-3 cm and -0.4 cm);
%
\draw (s) arc(0:-90:3 cm and 0.4 cm);
\draw (s) arc(0:-90:3 cm and -0.4 cm);
%
\node at ($(s)+(-1.5,0)$) {source};
\node at ($(d)+(1.5,0)$) {drain};
%
\draw[-,line width=1.5pt] (s)--(imp1) node[midway,below] {$V$};
\draw[-,line width=1.5pt] (d)--(imp2) node[midway,below] {$V$};
%
\node at ($(d)+(3,1)$) {\footnotesize \subref*{fig:dqdp}};
\end{tikzpicture}
\\
\phantomsubcaption{\label{fig:dqdp}}
\end{subfigure}
\\
\vspace{\baselineskip}
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}[thick]
\node[circle,draw=black,fill=black!10,inner sep=1pt] (imp1) at (0,0.75) {$\uparrow\downarrow$};
\node[circle,draw=black,fill=black!10,inner sep=1pt] (imp2) at (0,-0.75) {$\uparrow\downarrow$};
\node[above=0.5cm] at (imp1) {e};
\node[below=0.5cm] at (imp2) {o};
%
%
\def1.25{2}
\def3{3}
\def0.4{0.5}
\coordinate (s) at (-1.25,0);
\coordinate (d) at (1.25,0);
\draw (d) arc(0:-90:-3 cm and 0.4 cm);
\draw (d) arc(0:-90:-3 cm and -0.4 cm);
%
\draw (s) arc(0:-90:3 cm and 0.4 cm);
\draw (s) arc(0:-90:3 cm and -0.4 cm);
%
\node at ($(s)+(-1.5,0)$) {source};
\node at ($(d)+(1.5,0)$) {drain};
%
\draw[line width=1.5pt](imp1)--(d) node[midway,above] {$\frac{V}{\sqrt{2}}$};
\draw[line width=1.5pt](imp1)--(s) node[midway,above] {$\frac{V}{\sqrt{2}}$};
\draw[line width=1.5pt](imp2)--(d) node[midway,below] {$-\frac{V}{\sqrt{2}}$};
\draw[line width=1.5pt](imp2)--(s) node[midway,below] {$\frac{V}{\sqrt{2}}$};
%
\node at ($(d)+(3,1.5)$) {\footnotesize \subref*{fig:dqdeo}};
\end{tikzpicture}
\\
\phantomsubcaption{\label{fig:dqdeo}}
\end{subfigure}
\caption[Interacting double quantum dot system in the physical and even/odd bases]{Interacting double quantum dot system in the physical~\subref{fig:dqdp} and even/odd~\subref{fig:dqdeo} bases.}
\end{figure}
The impurity Green function is
$\boldsymbol{G}(\omega) = \left[ \left[\boldsymbol{G}^0(\omega)\right]^{-1} - \boldsymbol{\Sigma}(\omega) \right]^{-1}$
where the free Green function $\boldsymbol{G}^0(\omega)$ is given by
\begin{equation}
\left[ \boldsymbol{G}^{0}(\omega) \right]^{-1} = \begin{pmatrix} \omega - \varepsilon/\hslash - \Delta(\omega) & t/\hslash \\ t/\hslash & \omega - \varepsilon/\hslash - \Delta(\omega) \end{pmatrix}
\end{equation}
where $\Delta(\omega)$ is the physical hybridization function for the leads.
The auxiliary field mapping cannot be directly implemented in this configuration as the self-energy is not a local scalar function, but rather a matrix function. The impurity problem can, however, be brought into a form where the self-energy is a local scalar function by performing a decomposition into an even/odd parity basis as $\op{d}{\text{e}/\text{o}} \vcentcolon= \frac{1}{\sqrt{2}} ( \op{d}{1} \pm \op{d}{2} )$. In this basis the Green function becomes diagonal as
\begin{equation}
\boldsymbol{G}(\omega)
=
\left[ \left[\boldsymbol{G}^0(\omega)\right]^{-1} - \boldsymbol{\Sigma}(\omega) \right]^{-1} = \begin{pmatrix} G_{\text{e}}(\omega) & 0 \\ 0 & G_{\text{o}}(\omega) \end{pmatrix}
\end{equation}
where
\begin{equation}
G_{\text{e}/\text{o}}(\omega) = \frac{1}{\omega - (\varepsilon \mp t)/\hslash - \Delta(\omega) - \Sigma_{\text{e}/\text{o}}(\omega)} \,.
\end{equation}
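As a quick sanity check (with arbitrary illustrative parameter values and $\hslash = 1$), one can verify numerically that the even/odd rotation indeed diagonalizes the $2\times2$ inverse Green function:

```python
import numpy as np

w, eps, t, Delta = 0.3, -0.2, 0.05, -0.1j   # hypothetical test values, hbar = 1
Ginv = np.array([[w - eps - Delta, t],
                 [t, w - eps - Delta]])

# Even/odd rotation d_{e/o} = (d_1 +/- d_2)/sqrt(2)
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
Ginv_eo = U @ Ginv @ U.T

# Off-diagonal elements vanish; the diagonal entries are
# w - (eps - t) - Delta (even) and w - (eps + t) - Delta (odd).
```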
With the self-energy diagonalized in the even/odd basis, where $\Sigma_{\text{e}/\text{o}}(\omega)$ are scalar functions, it is now possible to apply the auxiliary chain mapping of substituting the self-energies with hybridizations to auxiliary subsystems by making the identification $\Delta_{\text{e}/\text{o}}(\omega) \equiv \Sigma_{\text{e}/\text{o}}(\omega)$. This auxiliary model for the double quantum dot in the even/odd basis is shown in Fig.~\ref{fig:evenodddqd}.
\begin{figure}[htp!]
\centering
\begin{tikzpicture}[thick]
\node[circle,draw=black,inner sep=4pt] (imp1) at (0,0.75) {e};
\node[circle,draw=black,inner sep=4pt] (imp2) at (0,-0.75) {o};
%
%
\def1.25{2}
\def3{3}
\def0.4{0.5}
\coordinate (s) at (-1.25,0);
\coordinate (d) at (1.25,0);
\draw (d) arc(0:-90:-3 cm and 0.4 cm);
\draw (d) arc(0:-90:-3 cm and -0.4 cm);
%
\draw (s) arc(0:-90:3 cm and 0.4 cm);
\draw (s) arc(0:-90:3 cm and -0.4 cm);
%
\node at ($(s)+(-1.5,0)$) {source};
\node at ($(d)+(1.5,0)$) {drain};
%
\draw[line width=1.5pt](imp1)--(d) node[midway,above] {$\frac{V}{\sqrt{2}}$};
\draw[line width=1.5pt](imp1)--(s) node[midway,above] {$\frac{V}{\sqrt{2}}$};
\draw[line width=1.5pt](imp2)--(d) node[midway,below] {$-\frac{V}{\sqrt{2}}$};
\draw[line width=1.5pt](imp2)--(s) node[midway,below] {$\frac{V}{\sqrt{2}}$};
%
\begin{scope}[scale=0.67]
\def2{2}
\def3{3}
\def2{2}
\foreach[evaluate=\s as \sc using (\s+0.5)] \s in {2,...,3}
{
\node[rectangle,draw=red,inner sep=5pt] (u\s) at (0,\sc) {};
\node[rectangle,draw=red,inner sep=5pt] (d\s) at (0,-\sc) {};
}
\draw[red,line width=1pt](imp1)--(u2);
\draw[red,line width=1pt](imp2)--(d2);
\foreach[evaluate=\s as \n using int(\s+1)] \s in {2,...,2}
{
\draw[red,line width=1pt](u\s)--(u\n) node[midway,above] {};
\draw[red,line width=1pt](d\s)--(d\n) node[midway,above] {};
}
\draw[red,line width=1pt, dashed, line cap=round] (u3)--+(0,1) {};
\draw[red,line width=1pt, dashed, line cap=round] (d3)--+(0,-1) {};
\end{scope}
\end{tikzpicture}
\caption[Auxiliary field representation of a double quantum dot in the even/odd basis]{Auxiliary field representation of a double quantum dot in the even/odd basis. The impurities in the even/odd basis are now non-interacting sites which are hybridized to auxiliary chains (in red), which are also non-interacting. The total system is now a four-lead system without interactions.\label{fig:evenodddqd}}
\end{figure}
The resulting system is now a non-interacting four-lead system with effective Green functions given by
\begin{equation}
G_{\text{e}/\text{o}}(\omega) = \cfrac{1}{\omega - (\varepsilon \mp t)/\hslash - \Delta(\omega) - \Delta_{\text{e}/\text{o}}(\omega)} \,.
\end{equation}
Transport through the dots can now be calculated via the Landauer-B\"uttiker formula for non-interacting multi-terminal transport
\begin{equation}
I^\gamma = \frac{e}{h} \sum_{\beta \neq \gamma} \int \d\omega \left[ -\frac{\partial f(\omega)}{\partial \omega} \right] ( \mu_\beta - \mu_\gamma ) \mathcal{T}_{\gamma\beta}(\omega)
\end{equation}
where the voltage biases are
$\mu_{\text{s}} = +\frac12 e V_{\text{b}}$, $\mu_{\text{d}} = -\frac12 e V_{\text{b}}$, $\mu_{\text{e}/\text{o}} = 0$.
The
transmission function $\mathcal{T}_{\gamma\beta}(\omega)$
is given by
\begin{equation}
\mathcal{T}_{\gamma\beta}(\omega) = 4 \sum_{\sigma} \Tr \left[ \boldsymbol{G}_\sigma(\omega) \boldsymbol{\Gamma}_\gamma(\omega) \boldsymbol{G}^*_\sigma(\omega) \boldsymbol{\Gamma}_\beta(\omega) \right]
\end{equation}
with
$\boldsymbol{\Gamma}_{\alpha}(\omega) = -\Im \boldsymbol{\Delta}_{\alpha}(\omega)$ and $\beta,\gamma \in \{\text{s}, \text{d}, \text{e}, \text{o}\}$. In the even/odd basis the
$\boldsymbol{\Gamma}$ are now frequency dependent and the leads need not be equivalent. The physical source and drain leads contribute
\begin{equation}
\begin{aligned}
\boldsymbol{\Gamma}_{\text{source}} &= \begin{pmatrix} \frac12 & \frac12 \\[0.25em] \frac12 & \frac12 \end{pmatrix} \Gamma \,,
&
\boldsymbol{\Gamma}_{\text{drain}} &= \begin{pmatrix*}[r] \frac12 & -\frac12 \\[0.25em] -\frac12 & \frac12\end{pmatrix*} \Gamma
\end{aligned}
\end{equation}
where $\Gamma = \pi V^2 \varrho_0$, while the auxiliary even and odd leads contribute
\begin{equation}
\begin{aligned}
\boldsymbol{\Gamma}_{\text{e}} &= \begin{pmatrix} -\Im\Sigma_{\text{e}}(\omega) & 0 \\ 0 & 0 \end{pmatrix} \,,
&
\boldsymbol{\Gamma}_{\text{o}} &= \begin{pmatrix} 0 & 0 \\ 0 & -\Im\Sigma_{\text{o}}(\omega) \end{pmatrix} \,.
\end{aligned}
\end{equation}
The linear response conductance is
\begin{equation}
\mathfrak{G}_C \equiv \frac{I^{\text{d}}}{V_{\text{b}}} = \frac{2e^2}{h} \int \d\omega \left[ -\frac{\partial f(\omega)}{\partial \omega} \right] \mathcal{T}_{\text{eff}}(\omega)
\end{equation}
with transmission function
\begin{equation}
\mathcal{T}_{\text{eff}}(\omega) = \Gamma^2 \left\lvert G_{\text{e}}(\omega) - G_{\text{o}}(\omega) \right\rvert^2 - \Gamma \, \Im \Sigma_{\text{e}}(\omega) \left\lvert G_{\text{e}}(\omega) \right\rvert^2 - \Gamma \, \Im \Sigma_{\text{o}}(\omega) \left\lvert G_{\text{o}}(\omega) \right\rvert^2 \,.
\end{equation}
The transmission function
reduces to the Oguri formula~\cite{oguri} at $\omega = 0$,
\begin{equation}
\begin{aligned}[b]
\mathcal{T}_{\text{eff}}(0)
&= \Gamma^2 \left\lvert G_{\text{e}}(0) - G_{\text{o}}(0) \right\rvert^2
\\
&\equiv \Gamma^2 \left\lvert G_{12}(0) \right\rvert^2 \,.
\end{aligned}
\end{equation}
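The linear-response conductance integral above can be sketched numerically. In the following toy calculation the Lorentzian forms of $G_{\text{e}/\text{o}}$ and all parameter values are hypothetical stand-ins for actual NRG results, and the $\Im\Sigma$ terms of the transmission function are dropped for simplicity.

```python
import numpy as np

def fermi_deriv(w, T):
    """-df/dw for the Fermi function f(w) = 1/(exp(w/T) + 1)."""
    return 1.0 / (4.0 * T * np.cosh(w / (2.0 * T)) ** 2)

Gamma, T = 0.1, 0.01                       # hypothetical lead coupling, temperature
w = np.linspace(-2.0, 2.0, 4001)
dw = w[1] - w[0]

G_e = 1.0 / (w + 0.05 + 1j * Gamma)        # toy even-channel Green function
G_o = 1.0 / (w - 0.05 + 1j * Gamma)        # toy odd-channel Green function

# Effective transmission, keeping only the |G_e - G_o|^2 term
T_eff = Gamma**2 * np.abs(G_e - G_o) ** 2

# Conductance in units of e^2/h (the factor 2 is the spin sum)
G_C = 2.0 * np.sum(fermi_deriv(w, T) * T_eff) * dw
```

At $T \to 0$ the thermal window collapses onto $\omega = 0$ and the integral reduces to the zero-frequency transmission, consistent with the Oguri form above.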
In order to match the behavior of the original physical system it is required that
$I^{\text{e}/\text{o}} = 0$ and $I^{\text{s}} = -I^{\text{d}}$.
There does, however, appear to be a complication with this implementation. In making the transformation to the even/odd basis and decomposing the local even/odd self-energy into the auxiliary chain form, the voltage bias applied to the source and drain leads does not appear to be the same as that applied to the physical source and drain leads of the original system. Simply imposing $\mu_{\text{e}} = 0 = \mu_{\text{o}}$ is apparently insufficient to fully reproduce the exact conductance. Further analysis is left for future investigation.
\subsubsection{Triple Quantum Dot}
\begin{figure}[htp!]
\centering
\begin{tikzpicture}[{thick}]
\node[circle,draw=black,fill=black!10,inner sep=1pt] (1) at (0,0) {$\uparrow\downarrow$};
\node[circle,draw=black,fill=black!10,inner sep=1pt] (2) at ($(1)+(-0.707,1.125)$) {$\uparrow\downarrow$};
\node[circle,draw=black,fill=black!10,inner sep=1pt] (3) at ($(1)+(0.707,1.125)$) {$\uparrow\downarrow$};
%
\node[below=0.25cm] at (1) {$1$};
\node[above left=0.125cm] at (2) {$2$};
\node[above right=0.125cm] at (3) {$3$};
%
\draw[-] (1)--(2) node[midway,left] {$t$};
\draw[-] (3)--(2) node[midway,above] {$t'$};
\draw[-] (1)--(3) node[midway,right] {$t$};
%
\def1.25{1.25}
\def3{3}
\def0.4{0.5}
\coordinate (s) at (-1.25,0);
\coordinate (d) at (1.25,0);
\draw (d) arc(0:-90:-3 cm and 0.4 cm);
\draw (d) arc(0:-90:-3 cm and -0.4 cm);
%
\draw (s) arc(0:-90:3 cm and 0.4 cm);
\draw (s) arc(0:-90:3 cm and -0.4 cm);
%
\node at ($(s)+(-1.5,0)$) {source};
\node at ($(d)+(1.5,0)$) {drain};
%
\draw[-,line width=1.5pt] (s)--(1) node[midway,below] {$V$};
\draw[-,line width=1.5pt] (d)--(1) node[midway,below] {$V$};
\end{tikzpicture}
\caption{Triple quantum dot in symmetric proportionate coupling.\label{fig:tqd}}
\end{figure}
The triple quantum dot consists of three sites with local interactions all of which are hybridized to each other with amplitudes $t_{ij}$. The case taken under consideration here is with the triple quantum dot in the proportionate coupling configuration, where the source and drain leads are symmetrically hybridized to the same element of the dot (site $1$ of the dot). The configuration will also be taken to be the symmetric coupling case where the couplings from site $1$ to the sites $2$ and $3$ are equal, $t_{12} = t = t_{13}$, but the coupling between sites $2$ and $3$ is different, $t_{23} = t' \neq t$. This configuration is diagrammed in Fig.~\ref{fig:tqd}.
This system is described by the Hamiltonian
$\hat{H} = \hat{H}_{\text{TQD}} + \hat{H}_{\text{hyb}} + \hat{H}_{\text{leads}}$
where
\begin{subequations}
\begin{align}
\hat{H}_{\text{TQD}} &=
\begin{multlined}[t]
\sum_{j=1,2,3} \left[ \varepsilon \left( \hat{n}_{j\uparrow} + \hat{n}_{j\downarrow} \right) + U \hat{n}_{j\uparrow} \hat{n}_{j\downarrow} \right]
\\
+ t \sum_{\sigma} \left( \opd{d}{1\sigma} \op{d}{2\sigma} + \opd{d}{1\sigma} \op{d}{3\sigma} + \hc \right)
+ t' \sum_{\sigma} \left( \opd{d}{2\sigma} \op{d}{3\sigma} + \hc \right) \,,
\end{multlined}
\\
\hat{H}_{\text{hyb}} &= V \sum_{\sigma} \left( \opd{d}{1\sigma} \op{c}{\sigma} + \hc \right) \,,
\end{align}
\end{subequations}
with the leads Hamiltonian taking the same form as Eq.~\eqref{eq:transportleads} and $\op{c}{\sigma} = \frac{1}{\sqrt{2\pi}} \sum_k \tensor{V}{_k} \op{c}{k}$.
The matrix Green function on the quantum dot is given by
\begin{align}
\boldsymbol{G}_{\text{TQD}}(z) &= \left[ z \mathbbm{1} - \boldsymbol{h} - \boldsymbol{\Delta}(z) - \boldsymbol{\Sigma}(z) \right]^{-1}
\intertext{with}
\boldsymbol{h} &= \frac1\hslash \begin{pmatrix} \varepsilon & t & t \\ t & \varepsilon & t' \\ t & t' & \varepsilon \end{pmatrix}
\intertext{and}
\boldsymbol{\Delta}(z) &= \begin{pmatrix} V^2 G^0_{\text{bath}}(z) & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \,,
\end{align}
where $-\frac1\pi \Im G^0_{\text{bath}}(\omega) = \rho(\omega)$ as before.
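At the non-interacting level ($\boldsymbol{\Sigma} = 0$) this matrix Green function is straightforward to evaluate. The sketch below (with $\hslash = 1$, a flat conduction band of half-width $D$, and illustrative parameter values) shows the structure, with the hybridization entering only on site $1$:

```python
import numpy as np

eps, t, tp, V, D = -0.2, 0.0005, 0.01, 0.1, 1.0   # illustrative values, hbar = 1

h = np.array([[eps, t,   t],
              [t,   eps, tp],
              [t,   tp,  eps]])

def G_tqd(w, eta=1e-8):
    """Non-interacting (Sigma = 0) retarded matrix Green function of the TQD."""
    z = w + 1j * eta
    g_bath = np.log((z + D) / (z - D)) / (2.0 * D)   # flat band: rho = 1/(2D)
    Delta = np.zeros((3, 3), dtype=complex)
    Delta[0, 0] = V ** 2 * g_bath                    # only site 1 sees the leads
    return np.linalg.inv(z * np.eye(3) - h - Delta)
```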
The impurity problem for this quantum dot is solved using NRG, yielding a $\Sigma(\omega)$ which may then serve as the input for the auxiliary field mapping. The noninteracting auxiliary system for the triple quantum dot is of the same form as that used for the single impurity system as shown schematically in Fig.~\ref{fig:singleimpaux}. Analysis reveals that the triple dot exhibits a local moment phase when $t' < t$, and exhibits a strong coupling phase when $t' > t$~\cite{akmtqd}.
For very low energy scales, corresponding to very long distances down the auxiliary chain, the recursion algorithm no longer returns accurate results. However, important characteristics of the self-energy appear at very low energy scales of $\omega \lesssim 10^{-6}$. At such low energy scales the recursion algorithm is unable to properly capture the resolution of the input data and the reconstructed spectrum no longer matches the self-energy. In order to construct an auxiliary chain which captures these features, it is necessary to perform the recursion calculation on a data set which has a rescaled frequency axis. This is done by rescaling the $\omega$ of the input data by a factor of $10^{4}$, such that the breakdown scale of $10^{-6}$ in the rescaled data corresponds to a scale of $10^{-10}$ in the original unscaled data. The resulting $t_n$ parameters of the auxiliary chain as calculated by the recursion algorithm are then also scaled by a factor of $10^{4}$. This rescaled calculation is shown in the right-hand panel of Fig.~\ref{fig:tqdparameters} with the unscaled calculation for $\omega \gtrsim 10^{-6}$ plotted in the left-hand panel.
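A minimal sketch of this workflow is given below. The `chain_parameters` routine is a generic Lanczos tridiagonalization of a discretized spectral measure, standing in for the actual recursion algorithm used here; the factor $s$ mimics the $10^4$ frequency stretch described above.

```python
import numpy as np

def chain_parameters(w, rho, n_chain):
    """Lanczos tridiagonalization of the measure rho(w)dw: returns the onsite
    energies e_n and hoppings t_n of an equivalent semi-infinite chain."""
    weight = rho * np.gradient(w)
    v = np.sqrt(weight / weight.sum())      # normalized starting vector |0>
    v_prev = np.zeros_like(v)
    e, t, beta = [], [], 0.0
    for _ in range(n_chain):
        hv = w * v                           # H is diagonal in the w basis
        a = np.dot(v, hv)
        e.append(a)
        r = hv - a * v - beta * v_prev
        beta = np.linalg.norm(r)
        t.append(beta)
        v_prev, v = v, r / beta
    return np.array(e), np.array(t)

# Rescaling workaround: stretch the frequency axis by s before the recursion,
# then undo the scaling on the resulting chain parameters afterwards:
#   e_scaled, t_scaled = chain_parameters(s * w, rho, n)
#   e_n, t_n = e_scaled / s, t_scaled / s
```

For a semicircular spectrum of half-width $D$ this recursion reproduces, for example, the familiar constant hoppings $t_n = D/2$.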
The system parameters for the calculation are chosen to be $U = 0.4$, $\varepsilon = -U/2$, $V = 0.1$, and $t = 0.0005$ with conduction bandwidth $D=1.0$. The remaining parameter $t'$ is tuned to alter the phase of the triple dot impurity: $t' = 0 < t$ for the local moment phase, and $t' = 0.01 > t$ for the strong coupling regime.
\begin{figure}[htp!]
\centering
\includegraphics[width=0.67\linewidth]{auxchainTQD.pdf}
\caption[Auxiliary chain parameters for the triple quantum dot]{Auxiliary chain parameters for the triple quantum dot. To clearly show the envelope structure only the values of $\delta t_n$ are plotted; the $t_n$ still exhibit the characteristic alternating pattern. $\Sigma'_{\sigma}(\omega)$ in the inset denotes the self-energy including the effect of scattering from coupling to the other impurity degrees of freedom. Figure reproduced from~\cite{multiorbitaltransport}.\label{fig:tqdparameters}}
\end{figure}
Fig.~\ref{fig:tqdparameters} shows that at higher energies where the self-energies of the two models agree, the corresponding $t_n$ are also essentially identical for $n<100$. At lower energies, the self-energies become very different, with $-\Im\Sigma(\omega) \sim \omega^2$ in the Kondo strong coupling phase and $-\Im\Sigma(\omega) \sim \frac{1}{\ln\lvert\omega\rvert}$ in the local moment phase. In this regime the $t_n$ also become different at large $n$, $n \gg 100$. In particular there is a crossing point in the envelope of the $\delta t_n$ at $n_c$ in the local moment phase, corresponding to the crossover behavior of the self-energy at $\omega_c \sim 1/n_c$.
Importantly, the rather simple form of the $\delta t_n$ may motivate the development of simple approximate toy models for the self-energy in interacting systems.
\section{Outlook}
This chapter presented the development of two novel methods for mapping interacting systems to non-interacting systems while preserving the original system's dynamics.
The first of these methods involved the introduction of gauge degrees of freedom and implementation of non-linear canonical transformations applied to fermionic systems in their Majorana basis. The Majorana modes exhibited by the standard decomposition spurred a great deal of theoretical and experimental research due to their appearance in the Kitaev superconducting wire. It is possible the generalized decomposition developed here could also lead to physically interesting models where Majorana modes appear nontrivially.
The second scheme presented here was based on generating an effective model from the original system's spectrum or self-energy involving auxiliary non-interacting degrees of freedom.
While the choice of auxiliary system here was taken to be a $1d$ tight-binding model, other systems may be chosen as well. One example might be a semi-infinite square lattice, where the impurity site is hybridized to the corner of the lattice.
It is also plausible to extend the scheme constructed here to the case where the self-energy is not purely local and possesses momentum dependence. For this situation it is envisioned that the auxiliary systems would take the form of interconnected ladders rather than simple disjoint chains.
The difficulty in analyzing strongly correlated systems motivates the development of approaches which aim to reduce the complexity of the calculations involved. The auxiliary field mapping developed in this chapter is one such example which simplifies the system to be analyzed by mapping it to a fully non-interacting system which preserves the dynamics of the original strongly interacting system. As this mapping requires already the solution to the self-energy, it is not a method for solving strongly interacting systems, but serves as a platform for determining characteristics and behavior of such systems, as demonstrated by the example of its application in quantum transport situations above.
Even though it is a powerful tool, DMFT is constrained computationally primarily by the need to solve the local impurity problem. While NRG is the impurity solver employed throughout this thesis, another common solver is exact diagonalization. Exact diagonalization has the advantage that the impurity problem can be solved exactly, in equilibrium as well as in non-equilibrium situations.
In the auxiliary mappings developed in this chapter, additional (non-interacting) degrees of freedom are added to the system to take into account the dynamics of the interactions. A method which takes a converse approach was developed in~\cite{weberembedding}. The central concept of this approach is to emulate a non-interacting bath with many degrees of freedom with an interacting bath of few degrees of freedom.
The context of this work is that of the utilizing exact diagonalization as the impurity solver within DMFT. While this solver allows the production of the exact impurity model spectrum, also in the full non-equilibrium situation, it is computationally limited by the size of the bath and produces a severely discretized spectrum.
The key idea presented in~\cite{weberembedding} is that the computational cost for calculating the spectrum of an Anderson impurity model with a non-interacting bath is the same as for that with an interacting bath. Compared to the non-interacting bath, diagonalization of the interacting bath results in a much higher density of poles in the discrete spectrum. Use of an interacting bath is therefore analogous to using a much larger non-interacting bath, but at the same computational cost.
There is however no guarantee that the system with the interacting bath generates the same physics as the original system with the non-interacting bath. The discrepancy here is due to the fact that in the non-interacting case the self-energy is a scalar function which exists only on the impurity site, whereas for an interacting bath the self-energy is a matrix function existing also on the bath.
In~\cite{weberembedding} this is partially compensated for by tuning the bath interactions such that the self-energy takes a block diagonal structure. This limits the non-local contributions to the self-energy which then leads to the interacting bath producing more accurately the physics of the non-interacting bath.
This strategy provides another method for treating strongly correlated systems with a reduced computational cost.
There are clear conceptual similarities with the auxiliary field approach developed in this chapter.
The auxiliary field mapping constructed in \S\ref{sec:mapping} provides a numerical method of producing an effective model of strongly correlated systems which is fully non-interacting. Up to now analysis of the form of the auxiliary systems has been brief. The next chapter presents a detailed study of these auxiliary systems within the context of the Mott transition in the Hubbard model.
\chapter{Elements of Condensed Matter Physics\label{ch:methods}}
This chapter introduces the technical apparatuses utilized throughout this thesis.
We begin by reviewing the basic theoretical framework employed to study many-body quantum systems.
A central object introduced is that of Green functions, from which a variety of physical quantities can be derived. The primary quantity examined in this thesis is the local spectral density of states.
The Green functions also play a central role in numerical computational schemes employed throughout this thesis.
Two in particular which are described in the following are the numerical renormalization group (NRG)~\cite{nrg} and dynamical mean-field theory (DMFT)~\cite{dmft}.
This chapter ends with an introduction to topology as it relates to condensed matter systems, and the Su-Schrieffer-Heeger model~\cite{ssh,shortcourse} is presented as an example system which exhibits these topological features.
\section{Many-Body Quantum Theory}
A many-body system consisting of $Q$ electrons, with positions $r_q$, and $P$ nuclei, with atomic numbers $Z_p$ and positions $R_p$, in the absence of external electromagnetic fields and ignoring nuclear and relativistic effects can be described by the Hamiltonian
\begin{equation}
\begin{aligned}
\hat{H} =
&-\sum_{q=1}^{Q} \frac{\hslash^2}{2 m_{e}} \triangle_{q}
+ \frac12 \sum_{\substack{q,q'=1\\ q'\neq q}}^{Q} \frac{e^2}{|r_{q} - r_{q'}|}
-\sum_{p=1}^{P} \sum_{q=1}^{Q} \frac{e^2 Z_p}{|R_p - r_q|}
\\
&-\sum_{p=1}^{P} \frac{\hslash^2}{2 M_{p}} \triangle_{p}
+ \frac12 \sum_{\substack{p,p'=1 \\ p'\neq p}}^{P} \frac{e^2 Z_{p} Z_{p'}}{|R_{p} - R_{p'}|}
\end{aligned}
\label{eq:toe}
\end{equation}
in Gau{\ss}ian units where $e$ is the unit electric charge, $m_e$ the mass of the electron and $M_p$ the mass of the $p^{\text{th}}$ nucleus.
The many-body wavefunction for this system $\mathnormal{\Psi} = \mathnormal{\Psi}(t, \{r\}, \{R\})$ obeys the Schr\"odinger equation
\begin{equation}
-\frac{\hslash}{\i} \frac{\partial}{\partial t} \mathnormal{\Psi} = \hat{H} \mathnormal{\Psi} \,.
\label{eq:toeschroedinger}
\end{equation}
Such a wavefunction in principle describes most electronic condensed matter phenomena. This system however is in general not solvable, so approximations need to be made in order to obtain useful results for physical systems.
The physical systems modeled in this work are those in the solid state, where the atomic nuclei are arranged in the structure of a crystal lattice with negligible kinetic energy. The rigidity of the lattice also means the internuclear potential contributes only a constant, rendering both terms in the second line of \eqref{eq:toe} irrelevant. This case can be described more formally by the Born-Oppenheimer approximation~\cite{bornoppenheimer}. Nonetheless, the Schr\"odinger equation \eqref{eq:toeschroedinger} is still typically computationally intractable as $Q \sim \mathcal{O}(10^{23})$ in the solid state.
A many electron wavefunction is anti-symmetric under exchange of electrons. This means that the algebra of single electron wavefunctions is an exterior algebra $\bigwedge(V)$.
A many body wavefunction of $Q$ electrons $\mathnormal{\Psi}_Q$ can be described by a linear combination of the exterior products of the single electron wavefunctions $\psi_{q_i}$ as
\begin{equation}
\mathnormal\Psi_Q = \sum_{\pi} a_{q_1,q_2,\cdots,q_Q} \psi_{q_1} \extp \psi_{q_2} \extp \cdots \extp \psi_{q_Q}
\label{eq:antisymmetricstatistics}
\end{equation}
with the sum over all permutations $\pi$.
The total $Q$-body wavefunction may also be written more traditionally in terms of the Slater determinant
\begin{equation}
\mathnormal\Psi(r_1,\ldots,r_Q) = \frac{1}{\sqrt{Q!}} \begin{vmatrix} \psi_{1}(r_1) & \psi_{2}(r_1) & \cdots & \psi_{Q}(r_1) \\ \psi_{1}(r_2) & \psi_{2}(r_2) & \cdots & \psi_{Q}(r_2) \\ \vdots & \vdots & \ddots & \vdots \\ \psi_{1}(r_Q) & \psi_{2}(r_Q) & \cdots & \psi_{Q}(r_Q) \end{vmatrix} \,.
\end{equation}
The alternating nature of the determinant automatically satisfies the sign change of exchanging a pair of fermionic particles.
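The antisymmetry built into the determinant can be illustrated with a small numerical example; the orbitals below are arbitrary illustrative functions, not eigenstates of any particular Hamiltonian.

```python
import numpy as np
from math import factorial, sqrt

# Three arbitrary single-particle orbitals of one coordinate
orbitals = [lambda r: np.exp(-r ** 2),
            lambda r: r * np.exp(-r ** 2),
            lambda r: (2.0 * r ** 2 - 1.0) * np.exp(-r ** 2)]

def slater(rs):
    """Evaluate the Q-particle Slater determinant wavefunction at coordinates rs."""
    Q = len(rs)
    M = np.array([[orbitals[j](rs[i]) for j in range(Q)] for i in range(Q)])
    return np.linalg.det(M) / sqrt(factorial(Q))

# Exchanging two particle coordinates flips the sign of the wavefunction
psi = slater([0.1, 0.7, -0.4])
psi_swapped = slater([0.7, 0.1, -0.4])
```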
Dealing with such a wavefunction for $Q>2$ quickly becomes highly computationally intensive. It is therefore desirable to use an alternative computational strategy which is more computationally tractable. Such a computational strategy may be found in the formalism of field quantization.
In statistical mechanics, systems with dynamical particle number are treated with the grand canonical ensemble, with partition function $\mathcal{Z}_{\textsc{GC}} = \Tr \e^{-\beta (\hat{H}-\mu\hat{N})}$, rather than the canonical ensemble, whose partition function is $\mathcal{Z}_{\textsc{C}} = \Tr \e^{-\beta \hat{H}}$. The relevant Hamiltonian is then $\hat{\mathcal{H}} = \hat{H} - \mu \hat{N}$ rather than simply $\hat{H}$. In this work the Fermi level is normalized such that $\mu = 0$, with the understanding that there exist filled states at energies $\varepsilon < 0$. The grand canonical Hamiltonian will then be notated simply as $\hat{H}$.
\subsection{Field Quantization}
The methods of ordinary quantum mechanics are suitable for treating systems comprised of a small number of particles with the particle number fixed.
When dealing with many particles in which the particle number may not be conserved, it is useful to treat the quantum particles instead as excitations of a quantum field.
Many-body degrees of freedom may be quantized according to field quantization~\cite{fradkinqft,altlandsimons,feynmanstatmech}.
Field quantization also goes by the descriptive name of ``occupation number formalism'' of quantum mechanics, and for historical reasons it also goes by the less descriptive nomenclature of ``second quantization''\index{second quantization}.
Historically the quantum field approach was developed in the context of quantizing the electromagnetic field and to solve paradoxes associated with early approaches to relativistic quantum mechanics, but it is now understood more generally as a method for treating many identical particles, which inherently includes the case where particle number is a dynamical quantity, unlike ordinary quantum mechanics where particle number is fixed. As previously mentioned, this amounts to using the grand canonical ensemble versus the canonical ensemble in statistical mechanics.
A difference between the relativistic case and the non-relativistic many-body case is that the many-body case can always, in principle, be treated with a many-body Schr\"odinger equation, whereas such a framework is inherently unavailable for relativistic theories.
The necessary conclusion that relativistic quantum mechanics is inherently a many-body quantum theory may be seen from the Dirac equation of relativistic fermions~\cite{feynmanqed}: The Dirac equation possesses an unbounded negative energy spectrum. This would imply that there exists finite amplitude for a positive energy Dirac fermion to transition to an arbitrary negative energy state releasing an arbitrary amount of radiation energy. This issue can be cured by postulating that all negative energy states are already filled, thereby preventing positive energy states from transitioning to those states. Therefore, in order to consider a single relativistic fermion, it is actually necessary to consider the infinite body case and the field formalism is inevitable.
In the non-relativistic condensed matter scenario the number of states is finite but typically rather large, $\mathcal{O}(10^{23})$.
Another distinguishing feature of relativistic Dirac fermions in vacuum and fermions in condensed matter is that the vacuum is defined by the Fermi level, so that the `negative energy' states are populated by actual fermions whose energies are below the Fermi level.
The space of states of a quantum system with dynamical particle number is given by the direct sum of fixed number Hilbert spaces. This algebraic space is known as a Fock space\index{Fock space},
\begin{equation}
\mathcal{F} = \mathcal{H}_{0} \oplus \mathcal{H}_{1} \oplus \cdots \oplus \mathcal{H}_{N} \oplus \cdots
\end{equation}
where $\mathcal{H}_j$ is the $j$-particle Hilbert space.
An element of Fock space takes the form of a state vector
\begin{equation}
\left\lvert n_0 , n_1 , \ldots, n_N , \ldots \right\rangle = \lvert n_0 \rangle \oplus \lvert n_1 \rangle \oplus \cdots \oplus \lvert n_N \rangle \oplus \cdots
\label{eq:fockvector}
\end{equation}
where $\lvert n_i \rangle$ denotes the occupation of a single-particle state $n_i$. In the fermionic case, the occupation is either 0 or 1 for each state with unique quantum numbers.
Operators acting on Fock space vectors take the form of $\opd{a}{i}$ and $\op{a}{i}$, which are maps ${\hat{a}}{^{(\dagger)}_i} : \mathcal{F} \to \mathcal{F}$, with $\opd{a}{i} : \mathcal{H}_{N} \to \mathcal{H}_{N+1}$ and $\op{a}{i} : \mathcal{H}_{N} \to \mathcal{H}_{N-1}$. They function as creating or annihilating a state $\lvert n_i \rangle$ by increasing or decreasing that state's occupation. Fock space operators are formally taken to be operator valued distributions, where the operators are smeared by a localized function, such as an $f(k) \in L^2$.
An operator valued distribution takes the form of~\cite{macroqft}
\begin{equation}
\opd{a}{j,s} = \frac{1}{(2\pi)^{d/2}} \int \d^dk\, f_j(k) \opd{a}{k,s}
\end{equation}
with spatially localized wave packet $f_j(x)$
\begin{equation}
f_j(x) = \frac{1}{(2\pi)^{d/2}} \int \d^dk\, f_j(k) \e^{\i k \cdot x}
\end{equation}
which belongs to an orthonormal set,
\begin{equation}
( f_i , f_j ) = \int \d^dk\, f^*_i(k) f_j(k) = \delta_{ij} \,.
\end{equation}
The action of the operator valued distribution $\opd{a}{j,s}$ on the vacuum state is the creation of a state localized at $j$ with quantum number(s) $s$ and wavefunction $f_j$.
The necessity of employing operator valued distributions rather than Hilbert space operators directly is due to the fact that position is not a well-defined quantum number, and to the fact that plane wave states are non-normalizable.
The operators arising in field quantization respect the exchange statistics of their respective fields. For fermionic fields, the operators inherit the exterior algebra of the single particle fermionic wavefunctions \eqref{eq:antisymmetricstatistics}.
The space of physical states can be constructed by first defining a vacuum state $\lvert 0 \rangle$ which satisfies
\begin{equation}
\op{a}{j} \lvert 0 \rangle = 0 \,.
\end{equation}
Finitely occupied states can be constructed by application of the $\opd{a}{j}$ operator to this vacuum state as
\begin{equation}
\lvert j \rangle \vcentcolon= \opd{a}{j} \lvert 0 \rangle = \int \d^dk\, f_{j}(k) \opd{a}{k} \lvert 0 \rangle
\end{equation}
where the second equality emphasizes the distributional nature of the operator.
These states then form the basis for constructing the Fock space.
The action of a non-relativistic quantum field is given by
\begin{equation}
S[\hat{\psi}^\dagger,\hat{\psi}]
=
\int \d t \int \d^d r \left[ -\frac\hslash\i \opd{\psi}{}(r) \frac{\partial}{\partial t} \op{\psi}{}(r) - \frac{\hslash^2}{2m} \nabla \opd{\psi}{}(r) \cdot \nabla \op{\psi}{}(r) - V(r) \opd{\psi}{}(r) \op{\psi}{}(r) \right]
\end{equation}
where $\hat{\psi}^{(\dagger)}(r)$ are field operators which create or annihilate a field excitation at point $r$. The canonical momenta for $\op{\psi}{}$ and $\opd{\psi}{}$ are
\begin{align}
\op{\Pi}{\psi} &= \i \hslash \opd{\psi}{}
&
\op{\Pi}{\psi^\dagger} &= -\i \hslash \op{\psi}{}
\end{align}
which obey the canonical commutation relations.
For fermionic fields, the canonical commutation relation involves the anticommutator,
\begin{equation}
\{ \op{\psi}{}(r) , \op{\Pi}{\psi}(r') \} = \i\hslash \delta(r-r') \,.
\end{equation}
It follows from this that the field $\hat{\psi}$ obeys the anticommutation algebra
\begin{align}
\{ \op{\psi}{a}(r) , \opd{\psi}{b}(r') \} &= \delta_{ab} \delta(r-r')
&
\{ \op{\psi}{a}(r) , \op{\psi}{b}(r') \} &= 0 = \{ \opd{\psi}{a}(r) , \opd{\psi}{b}(r') \}
\label{eq:car}
\end{align}
for general quantum numbers $a$ and $b$. The resulting equations of motion of this action yield the field equivalent of the Schr\"odinger equation.
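As an illustrative aside, the anticommutation algebra \eqref{eq:car} can be verified explicitly for a finite number of modes, where the operators become finite matrices. The sketch below uses the standard Jordan--Wigner construction (a choice of representation made purely for this illustration, not part of the formalism of this text):

```python
import numpy as np

def fermion_ops(n):
    """Annihilation operators c_0 .. c_{n-1} for n fermionic modes,
    built as 2^n x 2^n matrices via the Jordan-Wigner construction."""
    a = np.array([[0., 1.], [0., 0.]])   # lowers the occupation of one mode
    z = np.diag([1., -1.])               # parity string factor (-1)^n
    ops = []
    for i in range(n):
        factors = [z] * i + [a] + [np.eye(2)] * (n - i - 1)
        m = factors[0]
        for f in factors[1:]:
            m = np.kron(m, f)
        ops.append(m)
    return ops

def anticomm(x, y):
    return x @ y + y @ x

n = 3
c = fermion_ops(n)
dim = 2 ** n
for i in range(n):
    for j in range(n):
        # {c_i, c_j^dag} = delta_ij  and  {c_i, c_j} = 0
        expected = np.eye(dim) if i == j else np.zeros((dim, dim))
        assert np.allclose(anticomm(c[i], c[j].conj().T), expected)
        assert np.allclose(anticomm(c[i], c[j]), 0.0)
```

The parity string of $z$ factors is what enforces the minus signs between different modes; without it the matrices would commute rather than anticommute.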
The corresponding Hamiltonian may be derived from the action as
\begin{equation}
\hat{H} = \int \d^d r \left[ -\frac{\hslash^2}{2m} \opd{\psi}{}(r) \triangle \op{\psi}{}(r) + \opd{\psi}{}(r) \hat{V}(r) \op{\psi}{}(r) \right] \,.
\end{equation}
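As an aside, discretizing the kinetic term of this Hamiltonian on a grid of spacing $h$ already produces the hopping structure of the tight-binding models discussed below, with effective amplitude $t = \hslash^2/(2mh^2)$. A minimal single-particle sketch (units $\hslash = m = 1$ and grid parameters chosen arbitrarily):

```python
import numpy as np

# Kinetic term -(1/2) d^2/dx^2 discretized on N grid points of spacing h,
# with Dirichlet (hard-wall) boundaries
N, h = 50, 0.1
t = 1.0 / (2 * h**2)                   # effective hopping amplitude
K = (2 * t * np.eye(N)
     - t * np.diag(np.ones(N - 1), 1)
     - t * np.diag(np.ones(N - 1), -1))

evals = np.sort(np.linalg.eigvalsh(K))
L = (N + 1) * h                        # box length for Dirichlet boundaries

# Lowest eigenvalue approximates the particle-in-a-box ground state pi^2/(2 L^2)
assert np.isclose(evals[0], np.pi**2 / (2 * L**2), rtol=1e-3)
```

The off-diagonal entries of $K$ are precisely nearest-neighbor hopping matrix elements, up to the constant diagonal shift $2t$.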
The discussion here only involved fermionic fields, which obey antisymmetric exchange statistics, \eqref{eq:car}. Bosonic fields on the other hand obey symmetric exchange statistics, and therefore require a slightly different quantization treatment. Since bosonic fields do not feature in this thesis their treatment will be omitted here, but can be found in any standard field theory text, such as~\cite{altlandsimons,fetterwalecka,abrikosov}.
The physical systems which will be discussed in the following will be models of solids whose atomic nuclei are arranged into crystalline lattices with the localized orbitals around these atomic nuclei being the relevant dynamical degrees of freedom.
The interpretation of the quantum many-body system in condensed matter on a lattice is not of many independent indistinguishable electrons orbiting atoms at various lattice sites, but rather that of a single electron quantum field permeating the entire system with excitations of this field occurring at the various lattice sites.
\subsection{Tight-Binding Models}
A large class of materials in the solid state takes the form, at the microscopic level, of a periodic crystalline arrangement of atoms. The repeating periodic structures are the unit cells of the system. Basic phenomenological models often involve unit cells with just one, or only a few, atomic sites. Real materials, particularly elaborate compounds, can consist of unit cells containing a dozen or more atoms.
While any real material is of course of finite size, the bulk of a material sample contains such a large number of unit cells that the bulk can be said to consist of an infinite number of them. The atomic potential can then be described as possessing translational symmetry which manifests in the relation
\begin{equation}
V(r + \ell) = V(r)
\end{equation}
where $\ell$ is a displacement vector between the position $r$ in a unit cell with the corresponding position in a neighboring unit cell.
By Bloch's theorem, the electronic wavefunction $\psi(r)$ respects this symmetry up to a phase, $\psi(r+\ell) = \e^{\i k \cdot \ell} \psi(r)$, so that the density is periodic: $\lvert \psi(r+\ell) \rvert^2 = \lvert \psi(r) \rvert^2$.
This symmetry means that the wavefunctions may be written in terms of Bloch functions
\begin{equation}
\psi_{k}(r) = u_k(r) \e^{\i k \cdot r}
\end{equation}
where $u_k(r+\ell) = u_k(r)$ and $k$ is $2\pi$ periodic.
Wavefunctions which are localized on each lattice site are Wannier functions, which are formed from the Fourier transform of the Bloch functions as
\begin{equation}
\phi(r-R_j) = \frac{1}{\sqrt{\mathcal{V}}}\int \d^dk\, \e^{-\i k \cdot R_j} \psi_k(r)
\end{equation}
where $\mathcal{V}$ is the volume of the unit cell.
Wannier functions form an orthogonal basis of wavefunctions,
\begin{equation}
\int \d^dr\, \tensor*{\phi}{^*_a}(r-R_i) \tensor*{\phi}{_b}(r-R_j) = \delta_{ij} \delta_{ab} \,,
\end{equation}
taking into account quantum numbers $a$ and $b$.
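On a finite ring of $N$ sites this orthonormality reduces to unitarity of the discrete Fourier transform relating Bloch and Wannier functions. A minimal numerical check, using the idealized gauge $u_k = \text{const}$ (plane-wave Bloch states, chosen purely for illustration):

```python
import numpy as np

N = 8                      # number of unit cells on a ring (lattice constant 1)
x = np.arange(N)           # site positions
k = 2 * np.pi * np.arange(N) / N

# Bloch states psi_k(x) = e^{ikx}/sqrt(N), i.e. u_k = const (simplest gauge)
psi = np.exp(1j * np.outer(k, x)) / np.sqrt(N)     # rows: k, columns: x

# Wannier functions phi_j(x) = (1/sqrt(N)) sum_k e^{-i k R_j} psi_k(x)
phase = np.exp(-1j * np.outer(k, x)) / np.sqrt(N)  # phase[k, j] = e^{-i k R_j}
phi = phase.T @ psi                                # rows: j, columns: x

# Orthonormality: <phi_i | phi_j> = delta_ij
overlap = phi.conj() @ phi.T
assert np.allclose(overlap, np.eye(N))
```

In this idealized gauge the Wannier functions come out perfectly localized, $\phi_j(x) = \delta_{x,R_j}$; for a nontrivial $u_k$ they acquire exponentially decaying tails but remain orthonormal.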
\begin{figure}[h]
\centering
\begin{tikzpicture}[yscale=0.075,xscale=1.25]
\begin{scope}
\clip (-0.6,2) rectangle (6.6,-35);
\draw[blue] (-2,-32)--++(11,0);
\draw[blue] (-2,-19)--++(11,0);
\draw[blue] (-2,-11)--++(11,0);
\draw[blue] (-2,-6)--++(11,0);
\def0.4{0.4}
\foreach \a in {-1,0,1,2,3,4,5,6,7,8}
\draw[scale=1,domain=((\a*10+1)/10):((\a*10+9)/10),smooth,variable=\x,purple,fill=white] plot ({\x},{
-0.4/((\x+2)*(\x+2))
-0.4/((\x+1)*(\x+1))
-0.4/((\x)*(\x))
-0.4/((\x-1)*(\x-1))
-0.4/((\x-2)*(\x-2))
-0.4/((\x-3)*(\x-3))
-0.4/((\x-4)*(\x-4))
-0.4/((\x-5)*(\x-5))
-0.4/((\x-6)*(\x-6))
-0.4/((\x-7)*(\x-7))
-0.4/((\x-8)*(\x-8))
-0.4/((\x-9)*(\x-9))
});
\end{scope}
\end{tikzpicture}
\caption[Schematic of potential well of condensed matter lattice]{Schematic of potential well of condensed matter lattice with energy levels shown. In the tight-binding paradigm all energy states except the highest are ``tightly bound'' to the local atomic well, meaning that only an electron occupying the highest energy bound state possesses enough kinetic energy to have finite amplitude to tunnel into a neighboring well.}
\end{figure}
The approximation of the tight-binding model is that only the highest bound state has finite amplitude to tunnel through the potential to an available bound state on a neighboring site. The tight-binding model can also be used to treat the case where multiple orbitals are dynamical, or where there exists the possibility of long range tunneling to next-nearest neighbor (NNN) sites, or N$^n$NN sites in general. Systems in which the orbitals are highly localized, which is the situation idealized by the tight-binding approximation, include the cases of partially filled $d$- or $f$-shell orbitals in transition metals and rare earth elements.
In the tight-binding approximation, the kinetic term of the electrons in Eq.~\eqref{eq:toe} may be obtained from analyzing the overlap between the localized wavefunction on neighboring sites.
\begin{equation}
t_{ij,\sigma} = \int \d^d r\, \tensor*{\phi}{^*_{\sigma}}(r-R_i) \left( -\frac{\hslash^2}{2m} \triangle + V(r,R) \right) \tensor*{\phi}{_{\sigma}}(r-R_j)
\label{eq:hoppingterm}
\end{equation}
where $V(r,R)$ is the potential produced by the underlying atomic lattice. Hermiticity of the Hamiltonian requires $t_{ij,\sigma} = \tensor*{t}{^*_{ji,\sigma}}$, so that for real hopping amplitudes $t_{ij,\sigma}$ is symmetric in $i$ and $j$.
This calculation is necessary for generating a tight-binding model based on a specific material, where the particular interatomic potential $V$ can in principle be determined from \textit{ab initio} methods, such as density functional theory~\cite{dft,kohnsham}.
The models considered in this thesis are abstract models designed to capture features of certain broad classes of systems and not intended to obtain numerical data for specific materials. As such, the work which follows defines the $t$'s at the level of the tight-binding model and treats them as free model parameters.
The $t$ parameter describes the kinetics of a quantum field excitation propagating from one lattice site to another and it is conventionally called the hopping parameter, although the `$t$' notation originates from the terminology of these excitations `tunneling' through the lattice potential to different sites.
The inter-electron Coulomb interactions in the tight-binding formalism may be parameterized as
\begin{equation}
U_{ijkl,\sigma\sigma'} = \frac12 \int \d^d r \int \d^d r' \, \tensor*{\phi}{^*_{\sigma}}(r-R_i) \tensor*{\phi}{^*_{\sigma'}}(r'-R_j) \frac{e^2}{\lvert r-r' \rvert} \tensor*{\phi}{_{\sigma'}}(r'-R_k) \tensor*{\phi}{_{\sigma}}(r-R_l) \,.
\end{equation}
In the cases which will be considered in the following, the Coulomb interaction will be considered to be a local interaction only, so only electrons occupying the same atomic site experience the interaction. This means that the interaction parameter reduces to
\begin{equation}
U_{ijkl,\sigma\sigma'} \simeq U_{\sigma\sigma'} \delta(R_i - R_j) \delta(R_j - R_k) \delta(R_k - R_l) \equiv U \,.
\label{eq:localU}
\end{equation}
Since electrons obey the Pauli exclusion principle, the electrons involved in the local $U$ interaction must possess different quantum numbers. For electron quasiparticles considered here, the relevant quantum number is the spin projection, $\sigma \in \{\uparrow,\downarrow\}$, such that $U_{\sigma\sigma'} \equiv U_{\uparrow\downarrow}$.
The field theory formulation of quantum theory on a lattice can be constructed as described in the previous section where the smearing functions involved in the definition of the operator valued distributions are the Wannier functions,
\begin{equation}
\opd{c}{i\sigma} = \int \d^d r \, \tensor*{\phi}{_\sigma}(r-R_i) \opd{\psi}{\sigma}(r) \,.
\end{equation}
This operator creates a state with wavefunction $\phi_\sigma(r-R_i)$. These operators obey the fermionic commutation algebra
\begin{align}
\{ \op{c}{i,r} , \opd{c}{j,s} \} &= \delta_{ij} \delta_{rs}
&
\{ \op{c}{i,r} , \op{c}{j,s} \} &= 0 = \{ \opd{c}{i,r} , \opd{c}{j,s} \}
\end{align}
where $i$ and $j$ are lattice sites and $r$ and $s$ are quantum number labels.
In terms of the full microscopic description of solid state materials, the dynamical degrees of freedom may not be individual single electrons, but composite objects consisting of many particles. These composite objects are termed quasiparticles\index{quasiparticle} and can still be treated as individual quantum objects. In terms of the preceding discussion, the wavefunctions $\phi(r)$ now represent the localized wavefunction of the quasiparticle, the kinetic term in Eq.~\eqref{eq:hoppingterm} is the kinetic energy of the quasiparticle, and the potential $V$ in Eq.~\eqref{eq:hoppingterm} is the effective potential affecting the quasiparticle, which may be a composite result from the atomic nuclei as well as the electrons. The overall formalism remains the same as quasiparticles are treated as single quantum particles. These quasiparticles typically can be labelled with the same quantum numbers as elementary electrons, but they can exhibit some differences. An example is in the fractional quantum Hall effect~\cite{fqhexp} where the quasiparticles are observed to possess fractional electric charge~\cite{laughlinfqh,fracchargef,fracchargei}.
The simplest dynamical tight-binding model is that of fermions propagating on a regular lattice $\Gamma$,
\begin{equation}
\hat{H} = \sum_{\boldsymbol{j}\in\Gamma} \tensor*{\varepsilon}{_{\boldsymbol{j}}} \opd{c}{\boldsymbol{j}} \op{c}{\boldsymbol{j}} + \sum_{\boldsymbol{j}\in\Gamma} \sum_{\boldsymbol{r}} \left( \tensor*{t}{_{\boldsymbol{j},\boldsymbol{j}+\boldsymbol{r}}} \opd{c}{\boldsymbol{j}} \op{c}{\boldsymbol{j}+\boldsymbol{r}} + \hc \right)
\label{eq:tightbindingchain}
\end{equation}
where $\boldsymbol{j}$ labels the lattice sites and $\boldsymbol{j}+\boldsymbol{r}$ parameterizes a Manhattan metric displacement from site $\boldsymbol{j}$ by $\boldsymbol{r}$ sites. $\varepsilon_{\boldsymbol{j}}$ is the energy of the single-particle state $\lvert \boldsymbol{j} \rangle$. Note that these energies are not eigenenergies of the Hamiltonian as single-particle quantum numbers are not good quantum numbers for a many-body Hamiltonian. The models this thesis is concerned with are of nearest neighbor kinetics, where $\boldsymbol{j}+\boldsymbol{r}$ is a displacement of 1 lattice site away from site $\boldsymbol{j}$. In general $\tensor*{t}{_{\boldsymbol{j},\boldsymbol{j}+\boldsymbol{r}}} \in \mathbbm{C}$ with $\tensor*{t}{_{\boldsymbol{j},\boldsymbol{j}+\boldsymbol{r}}} = \tensor*{t}{^*_{\boldsymbol{j}+\boldsymbol{r},\boldsymbol{j}}}$ required by Hermiticity, but for the following discussion and in subsequent chapters it will be assumed that $\tensor*{t}{_{\boldsymbol{j},\boldsymbol{j}+\boldsymbol{r}}} = \tensor*{t}{_{\boldsymbol{j}+\boldsymbol{r},\boldsymbol{j}}} \in \mathbbm{R}$ without loss of generality.
As an example, consider a $d$-dimensional homogeneous system such that $\tensor*{t}{_{\boldsymbol{j},\boldsymbol{j}+1}} = t$ and $\tensor*{\varepsilon}{_{\boldsymbol{j}}} = \varepsilon \,$ $\forall\, \boldsymbol{j} \in \Gamma$, with $\Gamma$ the hypercubic lattice.
The eigenenergies for this system can be found by diagonalizing the Hamiltonian \eqref{eq:tightbindingchain} by Fourier transformation into momentum space,
\begin{align}
\op{c}{\boldsymbol{j}} &= \frac{1}{\sqrt{2\pi}} \sum_{\boldsymbol{k}} \op{c}{\boldsymbol{k}} \e^{\i \boldsymbol{k} \cdot \boldsymbol{j}} \,,
&
\opd{c}{\boldsymbol{j}} &= \frac{1}{\sqrt{2\pi}} \sum_{\boldsymbol{k}} \opd{c}{\boldsymbol{k}} \e^{-\i \boldsymbol{k} \cdot \boldsymbol{j}} \,,
\end{align}
which results in the Hamiltonian taking the form of a free electron gas
\begin{equation}
\hat{H} = \sum_{\boldsymbol{k}} \tensor{\varepsilon}{_{\boldsymbol{k}}} \opd{c}{\boldsymbol{k}} \op{c}{\boldsymbol{k}}
\end{equation}
which has dispersion
\begin{equation}
\varepsilon_{\boldsymbol{k}} = \varepsilon + 2t \sum_{n=1}^{d} \cos(k_n) \,.
\label{eq:squaredispersion}
\end{equation}
This transformation also made use of the identity for the discrete delta function
\begin{equation}
\sum_{\boldsymbol{j}} \e^{\i ( \boldsymbol{k} - \boldsymbol{k}' ) \boldsymbol{j}} = 2\pi \tensor{\delta}{_{\boldsymbol{k},\boldsymbol{k}'}} \,.
\end{equation}
In the $1d$ case where $\Gamma$ is described as a chain, the dispersion relation is $\tensor{\varepsilon}{_k} = \varepsilon + 2 t \cos(k)$. The half-bandwidth $D$ can be identified as $D = 2 t$.
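This dispersion is easily checked numerically by diagonalizing the real-space hopping matrix on a ring; the sketch below uses arbitrary parameter values:

```python
import numpy as np

N, eps, t = 12, 0.0, 1.0

# 1d tight-binding Hamiltonian on a ring (periodic boundary conditions)
H = eps * np.eye(N)
for j in range(N):
    H[j, (j + 1) % N] = t
    H[(j + 1) % N, j] = t

evals = np.sort(np.linalg.eigvalsh(H))

# Analytic dispersion: eps + 2 t cos(k) with k = 2 pi m / N
k = 2 * np.pi * np.arange(N) / N
analytic = np.sort(eps + 2 * t * np.cos(k))
assert np.allclose(evals, analytic)
```

The spectrum lies in $[\varepsilon - 2t, \varepsilon + 2t]$, consistent with the half-bandwidth $D = 2t$.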
It is also convenient to write the Hamiltonian in other representations than in terms of the operators $\hat{c}^{(\dagger)}_{j}$. One representation is in terms of state vectors, or in Dirac notation.
In Dirac notation the Hamiltonian \eqref{eq:tightbindingchain} is expressed as
\begin{equation}
\hat{H} = \sum_{j} \left( \left\lvert j \right\rangle \varepsilon \left\langle j \right\rvert + \left\lvert j+1 \right\rangle t \left\langle j \right\rvert + \left\lvert j \right\rangle t \left\langle j+1 \right\rvert \right)
\end{equation}
where the state vectors label single-particle states: $\lvert j \rangle$ denotes the state in which only the $j^{\text{th}}$ single-particle state is occupied, i.e. $\lvert j \rangle \equiv \lvert 0, \ldots, n_j = 1, \ldots, 0 \rangle$ in the notation of Eq.~\eqref{eq:fockvector}. Restricted to this single-particle sector, the hopping operators act as $\opd{c}{j+1} \op{c}{j} = \lvert j+1 \rangle \langle j \rvert$ and $\opd{c}{j} \op{c}{j+1} = \lvert j \rangle \langle j+1 \rvert$.
A Hamiltonian matrix can be formed from the matrix elements of the Hamiltonian as
\begin{equation}
[\boldsymbol{H}]_{ij} = \langle i \lvert \hat{H} \rvert j \rangle \,.
\end{equation}
For the Hamiltonian \eqref{eq:tightbindingchain} with $\Gamma \simeq \mathbbm{Z}^+$, the Hamiltonian matrix is
\begin{equation}
\boldsymbol{H} =
\begin{pmatrix}
\varepsilon_{1} & t_{12} & & O \\
t_{21} & \varepsilon_{2} & t_{23} & \\
& t_{32} & \varepsilon_{3} & \ddots \\
O & & \ddots & \ddots
\end{pmatrix} \,,
\end{equation}
with the corresponding operator reconstructed as
\begin{equation}
\hat{H} = \lvert \vec\psi \rangle \boldsymbol{H} \langle \vec\psi \rvert \,.
\end{equation}
This matrix representation is particularly useful for describing systems with internal degrees of freedom, where states with $S$ internal degrees of freedom can be written as
\begin{equation}
\tensor*{\vec{\psi}}{^\dagger_j} = \adjvec{\opd{c}{j_1} & \opd{c}{j_2} & \cdots & \opd{c}{j_S}}
\end{equation}
with the Hamiltonian taking the form of
\begin{equation}
\hat{H} = \sum_{j\in\Gamma} \left( \tensor*{\vec{\psi}}{^\dagger_j} \tensor*{\boldsymbol{h}}{_0} \tensor*{\vec{\psi}}{_j} + \tensor*{\vec{\psi}}{^\dagger_{j+r}} \tensor*{\boldsymbol{h}}{_1} \tensor*{\vec{\psi}}{_j} + \tensor*{\vec{\psi}}{^\dagger_j} \tensor*{\boldsymbol{h}}{^\dagger_1} \tensor*{\vec{\psi}}{_{j+r}} \right)
\end{equation}
where $\boldsymbol{h}_0$ and $\boldsymbol{h}_1$ are $S\times S$ matrices describing dynamics of orbitals within unit cells and orbitals between unit cells respectively.
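For a finite chain, this block form assembles into a block-tridiagonal matrix. The sketch below (with randomly chosen example blocks, purely for illustration) confirms that Hermiticity of the total Hamiltonian requires only $\boldsymbol{h}_0 = \boldsymbol{h}^\dagger_0$, while $\boldsymbol{h}_1$ may be an arbitrary complex matrix:

```python
import numpy as np

L, S = 5, 2                       # unit cells, internal states per cell
rng = np.random.default_rng(0)

# Intra-cell block: must be Hermitian
h0 = rng.normal(size=(S, S)) + 1j * rng.normal(size=(S, S))
h0 = h0 + h0.conj().T
# Inter-cell block: arbitrary complex matrix
h1 = rng.normal(size=(S, S)) + 1j * rng.normal(size=(S, S))

H = np.zeros((L * S, L * S), dtype=complex)
for j in range(L):
    H[j*S:(j+1)*S, j*S:(j+1)*S] = h0
    if j + 1 < L:
        H[(j+1)*S:(j+2)*S, j*S:(j+1)*S] = h1           # psi^dag_{j+1} h1 psi_j
        H[j*S:(j+1)*S, (j+1)*S:(j+2)*S] = h1.conj().T  # Hermitian conjugate term

# The assembled Hamiltonian is Hermitian, so its spectrum is real
assert np.allclose(H, H.conj().T)
assert np.allclose(np.linalg.eigvalsh(H).imag, 0)
```

This block structure is the natural starting point for the multi-orbital models with internal degrees of freedom used in later chapters.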
\section{Physical Models}
This section presents some models in the tight-binding formalism which serve as base starting points for modeling various electronic phenomena in materials stemming from inter-electron interactions. An example of a non-interacting tight-binding model displaying non-trivial phenomena will be discussed in detail in \S\ref{sec:sshmodel}.
\subsection{Anderson Impurity Model}
One of the most basic models which incorporates interactions is the impurity model, where the system consists of free non-interacting fermions with interactions occurring only at a finite number of specific locations. These locations, the impurities, may in general possess many, but finitely many, internal degrees of freedom, such as a small cluster of sites whose states may carry spin or isospin quantum numbers.
A basic impurity model is the single impurity Anderson model~\cite{andersonmodel,hewson,kondo}.
This was originally conceived as a model describing localized magnetic moments on magnetic ions dissolved in non-magnetic metals. As the name suggests, the model consists of a single impurity in a non-interacting bath. The Anderson model corresponds to the dilute limit where the density of the magnetic impurities is sufficiently low that each of the impurities can be treated independently of the others. An example of such a system is Mo-Nb alloys doped with Fe atoms~\cite{andersonmodel}.
The Hamiltonian of the single impurity Anderson model is given by
\begin{equation}
\hat{H}_{\textsc{siam}}
=
\underbrace{\sum_{k,\sigma} \tensor*{\varepsilon}{_{k}} \opd{c}{k,\sigma} \op{c}{k,\sigma}}_{\text{bath}}
+ \underbrace{\sum_{k,\sigma} \left( \tensor*{V}{_{k,\sigma}} \opd{c}{k,\sigma} \op{d}{\sigma} + \tensor*{V}{^*_{k,\sigma}} \opd{d}{\sigma} \op{c}{k,\sigma} \right)}_{\text{hybridization}}
+ \underbrace{\sum_{\phantom{,}\sigma\phantom{,}} \tensor*{\varepsilon}{_{d}} \opd{d}{\sigma} \op{d}{\sigma} + U \opd{d}{\uparrow} \op{d}{\uparrow} \opd{d}{\downarrow} \op{d}{\downarrow}}_{\text{impurity}}
\label{eq:siam}
\end{equation}
which consists of three parts: the bath, the impurity, and the hybridization between them. The $\op{c}{}$ operators act only on the bath, the $\op{d}{}$ operators act only on the impurity, and $\sigma \in \{\uparrow,\downarrow\}$ labels their spin.
Since the bath is non-interacting, it can be diagonalized in momentum space.
From the hybridization amplitude $V_{k}$, it is typical to define a hybridization function $\Delta(z)$ as
\begin{equation}
\Delta(z) \vcentcolon= \sum_{k} \frac{\left\lvert V_k \right\rvert^2}{z - \varepsilon_k}
\label{eq:siamDelta}
\end{equation}
where $z$ is complex frequency.\footnote{The necessity of using complex frequency and the relation to real frequency $\omega$ will be introduced in \S\ref{sec:complexfrequency}.}
This function describes the dynamics between the bath and impurity. In the literature~\cite{nrg,hewson}, the quantity $\pi \sum_{k} \lvert V_{k} \rvert^2 \delta(\omega - \varepsilon_{k}) = -\Im\Delta(\omega)$ is often taken as the definition of the hybridization function as it represents coupling of amplitude $V_{k}$ from the impurity to the bath density of states $\sum_{k} \delta(\omega - \varepsilon_{k})$. $\Delta(z)$ is an analytic function and so can be completely specified by its imaginary part. The form of Eq.~{\eqref{eq:siamDelta}} can also be obtained from the functional integral form of {\eqref{eq:siam}} where the bath degrees of freedom $\op{c}{}$ are integrated out, leaving a term of the form $\sum_k \tensor*{V}{^*_k} \opd{d}{} \frac{1}{z - \varepsilon_k} \tensor*{V}{_k} \op{d}{} = \opd{d}{} \Delta(z) \op{d}{}$, which contributes to the non-interacting propagator for the impurity degrees of freedom $\op{d}{}$.
The hybridization function essentially defines the characteristics of the impurity model: both the bath and impurity individually have elementary characteristics (as will be shown below in \S\ref{sec:gfeom} and \S\ref{sec:hubbardatomgf} respectively). It is the coupling between the two which results in non-trivial behavior.
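These properties can be illustrated by evaluating Eq.~\eqref{eq:siamDelta} for a discretized flat band (an arbitrary choice of bath made for this sketch): just above the real axis, $-\Im\Delta(\omega + \i\eta)$ approaches $\pi$ times the coupling density, here $\pi V^2$, and $\Delta$ obeys the reflection property $\Delta(z^*) = \Delta(z)^*$ of an analytic function:

```python
import numpy as np

# Flat bath discretized into Nb levels on [-D, D] with uniform coupling V;
# |V_k|^2 is scaled so the coupling density sum_k |V_k|^2 delta(w - eps_k) -> V^2
Nb, D, V = 2000, 1.0, 0.2
eps_k = np.linspace(-D, D, Nb)
Vk2 = V**2 * (2 * D / Nb)

def Delta(z):
    """Hybridization function Delta(z) = sum_k |V_k|^2 / (z - eps_k)."""
    return np.sum(Vk2 / (z - eps_k))

z = 0.0 + 0.01j
# Just above the real axis, -Im Delta approximates pi * V^2 inside the band
assert abs(-Delta(z).imag - np.pi * V**2) < 5e-3
# Schwarz reflection of an analytic function: Delta(z*) = Delta(z)*
assert np.isclose(Delta(np.conj(z)), np.conj(Delta(z)))
```

The small broadening $\eta = 0.01$ must exceed the bath level spacing $2D/N_b$ for the discrete sum to approximate the continuum result.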
\begin{figure}[h]
\centering
\begin{tikzpicture}[{thick}]
\node[circle,draw=black,fill=black!10,inner sep=1pt] (1) at (0,0) {$\uparrow\downarrow$};
%
\node[below=0.25cm] at (1) {$\varepsilon_d$};
%
\def1.25{1.25}
\def3{3}
\def0.4{0.4}
\coordinate (s) at (-1.25,0);
\coordinate (d) at (1.25,0);
\draw (d) arc(0:-90:-3 cm and 0.4 cm);
\draw (d) arc(0:-90:-3 cm and -0.4 cm);
%
\node at ($(d)+(1.5,0)$) {bath};
%
\draw[-,line width=1.5pt] (d)--(1) node[midway,above] {$V$};
%
\path (1) edge[-latex',line width=0.5pt,double distance=0.5pt,in=60,out=120,looseness=6] node[above] {$U$} (1);
\end{tikzpicture}
\caption[Schematic of the Anderson impurity model]{Schematic of the Anderson impurity model.\label{fig:anderson}}
\end{figure}
More general impurity systems may consist of additional internal degrees of freedom on the impurity, as well as hybridizations onto multiple different baths. In contemporary literature these more general impurity systems are commonly termed quantum dots. They are studied particularly in the non-equilibrium context where the various baths may have external voltage biases applied, thereby driving a current through the dot. Such a system is realized, for example, in semiconductor nanoribbon devices~\cite{emma2cck}.
\subsection{Hubbard Model}
The Hubbard model~\cite{gutzwiller,kanamori,hubbard} is a minimal model of a system with interacting fermions on a lattice.
Its Hamiltonian is
\begin{equation}
\hat{H}_{\textsc{h}} = \underbrace{\varepsilon \sum_{j,\sigma} \opd{c}{j,\sigma} \op{c}{j,\sigma} + t \sum_{j,\ell,\sigma} \left( \opd{c}{j,\sigma} \op{c}{j+\ell,\sigma} + \opd{c}{j+\ell,\sigma} \op{c}{j,\sigma} \right)}_{\op{H}{0}} + \underbrace{U \sum_{j} \opd{c}{j,\uparrow} \op{c}{j,\uparrow} \opd{c}{j,\downarrow} \op{c}{j,\downarrow}}_{\op{H}{I}}
\label{eq:hubbard}
\end{equation}
where lattice sites are labeled by $j$, $\ell$ is a displacement between sites on a given lattice $\Gamma$, and $\sigma \in \{\uparrow,\downarrow\}$ labels the spin. The $\op{H}{0}$ and $\op{H}{I}$ are the free (kinetic) and interacting parts of the Hamiltonian respectively. The original motivation of this model was to provide an explanation of the itinerant ferromagnetism of transition metals, such as Fe and Ni, but its use cases far exceed that context. Indeed, the Hamiltonian \eqref{eq:hubbard} is the tight-binding model corresponding to the field quantized Eq.~\eqref{eq:toe} under the approximation \eqref{eq:localU}.
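The smallest nontrivial instance of Eq.~\eqref{eq:hubbard} is two sites at half filling with $\varepsilon = 0$, where exact diagonalization gives the singlet ground-state energy $E_0 = U/2 - \sqrt{(U/2)^2 + 4t^2}$. The sketch below verifies this using a Jordan--Wigner matrix representation of the four fermionic modes (an illustrative construction, not a method used elsewhere in this text):

```python
import numpy as np

def fermion_ops(n):
    """Annihilation operators for n fermionic modes (Jordan-Wigner)."""
    a = np.array([[0., 1.], [0., 0.]])
    z = np.diag([1., -1.])
    ops = []
    for i in range(n):
        factors = [z] * i + [a] + [np.eye(2)] * (n - i - 1)
        m = factors[0]
        for f in factors[1:]:
            m = np.kron(m, f)
        ops.append(m)
    return ops

t, U = 1.0, 4.0
# Modes: (site 0, up), (site 0, dn), (site 1, up), (site 1, dn)
c = fermion_ops(4)
up, dn = [c[0], c[2]], [c[1], c[3]]

# Local interaction U n_up n_dn on each site
H = U * sum((up[j].conj().T @ up[j]) @ (dn[j].conj().T @ dn[j]) for j in range(2))
# Spin-conserving hopping between the two sites
for s in (up, dn):
    H += t * (s[0].conj().T @ s[1] + s[1].conj().T @ s[0])

# Project onto the half-filled (N = 2) sector and diagonalize
Ntot = sum(ci.conj().T @ ci for ci in c)
sector = np.round(np.diag(Ntot).real).astype(int) == 2
E0 = np.min(np.linalg.eigvalsh(H[np.ix_(sector, sector)]))

exact = U / 2 - np.sqrt((U / 2)**2 + 4 * t**2)
assert np.isclose(E0, exact)
```

In the large-$U$ limit this ground-state energy approaches $-4t^2/U$, the superexchange scale of the effective Heisenberg model.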
\begin{comment}
\begin{figure}[h]
\centering
\begin{tikzpicture}[every path/.style={in=60,out=120,looseness=6}, site/.style={circle,draw,line width=1pt,fill=black!10,inner sep=0},scale=0.7]
\def25{25}
\def6{6}
\def5{5}
\node[site] (00) at (0,0){$\uparrow\downarrow$};
\path (00) edge[-latex,line width=0.5pt,double distance=0.5pt] (00)
\foreach \a in {-1,1}
{
\node[site] (1\a) at ({\a*25}:2.5){$\uparrow\downarrow$};
\path (1\a) edge[-latex,line width=0.5pt,double distance=0.5pt] (1\a)
\draw[-,line width=1.5pt] (00)--(1\a);
\foreach \b in {-3,0,2}
{
\node[site] (2\b) at ({\a*(25+5)+\b*6}:{5+(3+\b)*(2-\b)*(1/5)}){$\uparrow\downarrow$};\draw[-,line width=1.5pt] (1\a)--(2\b);
\path (2\b) edge[-latex,line width=0.5pt,double distance=0.5pt] (2\b)
}
\node[site] (1\a) at ({\a*25+180}:2.5){$\uparrow\downarrow$};
\path (1\a) edge[-latex,line width=0.5pt,double distance=0.5pt] (1\a)
\draw[-,line width=1.5pt] (00)--(1\a);
\foreach \b in {-2,0,3}
{
\node[site] (2\b) at ({\a*(25+5)+180+\b*6}:{5+(2+\b)*(3-\b)*(1/5)}){$\uparrow\downarrow$};\draw[-,line width=1.5pt] (1\a)--(2\b);
\path (2\b) edge[-latex,line width=0.5pt,double distance=0.5pt] (2\b)
}
}
\end{tikzpicture}
\caption{Schematic of the Hubbard model on the Bethe lattice with $\kappa = 4$. Only a cluster subset of the (infinite) Bethe lattice is illustrated. The $\uparrow\downarrow$ is indicative of the allowed states on each site, not necessarily that every site is doubly occupied.\label{fig:hubbardbl}}
\end{figure}
\end{comment}
\begin{comment}
\begin{figure}[h]
\centering
\begin{tikzpicture}[every path/.style={in=60,out=120,looseness=6}]
\node[circle,draw=black,thick,inner sep=1pt] (1) at (-4,0){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt] (2) at (-2,0){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt] (3) at (0,0){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt] (4) at (2,0){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt] (5) at (4,0){$\uparrow\downarrow$};
%
\node at (-4,-0.6) {$\varepsilon$};
\node at (-2,-0.6) {$\varepsilon$};
\node at (0,-0.6) {$\varepsilon$};
\node at (2,-0.6) {$\varepsilon$};
\node at (4,-0.6) {$\varepsilon$};
%
\draw[line width=1.2pt](1)--(2) node[midway,above] {$t$};
\draw[line width=1.2pt](2)--(3) node[midway,above] {$t$};
\draw[line width=1.2pt](3)--(4) node[midway,above] {$t$};
\draw[line width=1.2pt](4)--(5) node[midway,above] {$t$};
\draw[line width=1.2pt, dashed, line cap=round] (1)--(-6,0) {};
\draw[line width=1.2pt, dashed, line cap=round] (5)--(6,0) {};
%
\path (1) edge[-latex,line width=0.5pt,double distance=0.5pt] node[above] {$U$} (1);
\path (2) edge[-latex',line width=0.5pt,double distance=0.5pt] node[above] {$U$} (2);
\path (3) edge[-stealth,line width=0.75pt,double distance=0.5pt] node[above] {$U$} (3);
\path (4) edge[-stealth',line width=0.5pt,double distance=0.5pt] node[above] {$U$} (4);
\path (5) edge[-angle 45,line width=0.5pt,double distance=0.5pt] node[above] {$U$} (5);
\end{tikzpicture}
\caption{Schematic of the Hubbard model in $1d$. The $\uparrow\downarrow$ is indicative of the allowed states on each site, not necessarily that every site is doubly occupied.\label{fig:hubbard1d}}
\end{figure}
\end{comment}
\begin{comment}
\begin{figure}[h]
\centering
\begin{tikzpicture}[every path/.style={line width=1.5pt,scale=0.75},
every node/.style={scale=0.81}]
\node[circle,draw=black,thick,inner sep=1pt,scale=0.8] (l1) at (-3,1){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt,scale=0.8] (l2) at (-2.5,-1){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt,scale=0.8] (l0) at (-0.9,0){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt,scale=0.8] (r0) at (0.9,0){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt,scale=0.8] (r1) at (2.5,1){$\uparrow\downarrow$};
\node[circle,draw=black,thick,inner sep=1pt,scale=0.8] (r2) at (3,-1){$\uparrow\downarrow$};
%
\draw[line width=1.2pt](l0)--(r0) node[midway,above] {$t$};
\draw[line width=1.2pt](l0)--(l1) node[midway,above] {$t$};
\draw[line width=1.2pt](l0)--(l2) node[midway,below] {$t$};
\draw[line width=1.2pt](r0)--(r1) node[midway,above] {$t$};
\draw[line width=1.2pt](r0)--(r2) node[midway,above] {$t$};
\draw[line width=1.2pt, dashed] (l1)--($(l1)-(+0.75,+0.5)$) {};
\draw[line width=1.2pt, dashed] (l1)--($(l1)-(+0.75,-0.5)$) {};
\draw[line width=1.2pt, dashed] (l2)--($(l2)-(+0.5,+0.75)$) {};
\draw[line width=1.2pt, dashed] (l2)--($(l2)-(+0.75,-0.25)$) {};
\draw[line width=1.2pt, dashed] (r1)--($(r1)+(+0.85,+0.35)$) {};
\draw[line width=1.2pt, dashed] (r1)--($(r1)+(+0.85,-0.35)$) {};
\draw[line width=1.2pt, dashed] (r2)--($(r2)+(+0.5,-0.75)$) {};
\draw[line width=1.2pt, dashed] (r2)--($(r2)+(+0.85,-0.15)$) {};
%
\path (l0) edge[-latex',line width=0.5pt,double distance=0.5pt,in=60,out=120,looseness=6] node[above] {$U$} (l0);
\path (l1) edge[-latex',line width=0.5pt,double distance=0.5pt,in=60,out=120,looseness=6] node[above] {$U$} (l1);
\path (l2) edge[-latex',line width=0.5pt,double distance=0.5pt,in=60,out=120,looseness=6] node[above] {$U$} (l2);
\path (r0) edge[-latex',line width=0.5pt,double distance=0.5pt,in=60,out=120,looseness=6] node[above] {$U$} (r0);
\path (r1) edge[-latex',line width=0.5pt,double distance=0.5pt,in=60,out=120,looseness=6] node[above] {$U$} (r1);
\path (r2) edge[-latex',line width=0.5pt,double distance=0.5pt,in=60,out=120,looseness=6] node[above] {$U$} (r2);
\end{tikzpicture}
\end{figure}
\end{comment}
A characteristic of the Hubbard model is the Mott metal-insulator transition~\cite{mott,mottreview,mitreview}\index{Mott transition}, where the ground state of the system transitions from being metallic to being insulating upon increase of the interaction strength $U$. A system in this insulating phase is called a Mott insulator. This phase transition can occur without symmetry breaking. Although the ground state of many Mott insulators is magnetically ordered, this is not a requirement, and paramagnetic Mott insulators are known~\cite{qptcusc}.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\linewidth]{hubbardphasediagram.jpeg}
\caption[Phase diagram of the Hubbard model]{Phase diagram of the Hubbard model (figure reproduced from~\cite{physicstodaydmft}). $U_{\text{c}1}$ marks the critical $U$ for the transition from insulator to metal (decreasing $U$), and $U_{\text{c}2}$ marks the critical $U$ for the metal to insulator transition (increasing $U$). The relation between $W$ in the figure and $D$ in this text is $W = 2D$.\label{fig:hubbardphasediagram}}
\end{figure}
A schematic phase diagram of the Hubbard model is shown in Fig.~\ref{fig:hubbardphasediagram}. At low temperatures the Hubbard model undergoes a first-order transition between metallic and insulating phases, which may occur at some critical interaction strength, $U_{c1}$ or $U_{c2}$. The transition occurs at $U_{c2}$ when the system is adiabatically evolved from the metallic phase with increasing $U$. In contrast, the transition occurs at $U_{c1}$ when the system is adiabatically evolved with decreasing $U$ from the insulating phase. The parameter space region $U_{c1} < U < U_{c2}$ is a hysteresis coexistence region. The coexistence region, as well as the discontinuous nature of the transition, is characteristic of a first-order phase transition. Additionally, as mentioned previously the Mott transition is possible without symmetry breaking, which is another characteristic of a first-order transition. At higher temperatures, the $U_{c1}$ and $U_{c2}$ transition curves merge to a second-order critical point. Above this point the metal-insulator transition becomes continuous, and therefore second-order.
The Hubbard model \eqref{eq:hubbard} as employed in this thesis is a single band model with local Coulomb interactions. A type of interaction not covered by this model is the Hund interaction, conventionally parameterized by $J$, which is a coupling between spins in different orbitals. As this thesis only deals with the single band Hubbard model, the Hund interaction will not be considered in the following.
The Mott insulating phase is the result of strong Coulomb interactions between electrons on the atomic sites. This is in contrast to conventional band insulators where the electronic energy gap is the result of the lattice. A Mott insulator is also distinguished from a system whose insulating behavior arises from Anderson localization, where the insulating characteristic is due to the presence of disorder~\cite{andersonlocalization}. A Mott insulator still possesses translation invariance on its lattice and is non-disordered. A further distinction between a Mott insulator and an Anderson insulator is that the Mott phase is inherently a many-body phenomenon, as it manifests from the interactions between particles. Anderson localization on the other hand is due to the characteristics of the background potential and is therefore not dependent on many dynamical degrees of freedom.
The Hubbard model admits an exact solution in $1d$~\cite{liebwu} which may be obtained from Bethe ansatz methods, more generally known as the quantum inverse scattering method. While this method produces an exact solution of the eigenstates and eigenspectrum, it does not provide the dynamics.
A major avenue of research involving the Hubbard model is its $2d$ form. The $2d$ Hubbard model on a square lattice is representative of the CuO$_2$ planes which exist in cuprate superconductors~\cite{qptcusc,htcsc}.
It is suspected that the doped $2d$ Hubbard model on the square lattice is the appropriate model for high-$T_c$ superconductors, although this remains an open question~\cite{qptcusc,htcsc}.
\section{Green Functions\label{sec:greenfunctions}}
The concept of Green functions was introduced by George Green for the solution of linear differential equations arising in classical electrodynamics~\cite{green}. Classical Green functions have since been abstracted into a general method to formally solve inhomogeneous linear differential equations~\cite{mathewswalker, byronfuller}.
An example of such an equation can be given by
\begin{equation}
\hat{L}[\partial] u(x) = f(x)
\end{equation}
where $\hat{L}[\partial]$ is a linear functional of the differentiation operator $\partial$. This equation admits the formal solution
\begin{equation}
u(x) = u_0(x) + \int \d^d x'\, G(x,x') f(x')
\end{equation}
where $u_0(x)$ is the homogeneous solution and $G(x,x')$ satisfies the relation
\begin{equation}
\hat{L} G(x,x') = \delta(x-x') \,.
\label{eq:gislinverse}
\end{equation}
The object $G(x,x')$ is the Green function for the operator $\hat{L}$. The form of Eq.~\eqref{eq:gislinverse} implies that the Green function can be interpreted as the inverse of the operator $\hat{L}$.
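As a concrete numerical illustration of Eq.~\eqref{eq:gislinverse}, the operator $\hat{L} = -\d^2/\d x^2$ with Dirichlet boundary conditions on $(0,1)$ can be discretized and inverted directly; its inverse reproduces the well-known continuum kernel $G(x,x') = x_<(1-x_>)$. A minimal sketch (the grid size and the choice of operator are illustrative, not taken from the text):

```python
import numpy as np

N = 99                      # interior grid points on (0, 1)
h = 1.0 / (N + 1)
x = h * np.arange(1, N + 1)

# Discretized operator L = -d^2/dx^2 with Dirichlet boundaries
L = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2

G = np.linalg.inv(L)        # discrete Green function: L G = identity

# L G = identity, the discrete analogue of L G(x,x') = delta(x-x')
assert np.allclose(L @ G, np.eye(N), atol=1e-8)

# G/h reproduces the known continuum kernel x_<(1 - x_>)
G_exact = np.minimum.outer(x, x) * (1.0 - np.maximum.outer(x, x))
assert np.allclose(G / h, G_exact, atol=1e-8)
```

The factor $1/h$ converts the discrete matrix inverse into an approximation of the continuum integral kernel.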
The notion of Green functions in quantum theory\footnote{The quantum Green function is alternatively referred to as the ``propagator'' or ``kernel'' (of a path integral) in the literature.} originally appeared in the context of the elementary processes of quantum electrodynamics~\cite{feynman,schwingergreen} and was first applied in condensed matter systems in the context of superconductivity~\cite{salam}.
Green function techniques were systematically developed for quantum many body systems in
\cite{galitskiimigdal,martinschwinger} following the formal developments of quantum field theory.
The systems analyzed in this work are primarily characterized and understood in terms of their Green functions\index{Green function}~\cite{abrikosov,fetterwalecka,bruusflensberg,mattuck,economou}. These are single-particle correlation functions in the many-body quantum system. In particular, the equation of motion approach~\cite{martinschwinger,zubarev} is an important tool. A reason for studying the Green functions of a system is that the elementary excitation spectrum is given by the poles of the Green function. As will be shown, signatures of topological states are also evident in the boundary Green functions.
\begin{comment}
For the eigenvalue equation
\begin{equation}
\hat{L} u_n = \lambda_n u_n
\end{equation}
it follows that
\begin{equation}
f(\hat{L}) u_n = f(\lambda_n) u_n
\end{equation}
holds for any well-behaved function $f$.
Proof of this statement follows from a power series expansion of $f(\hat{L})$. From this property it follows that
\begin{equation*}
\e^{\i \hat{H} t} \lvert\psi_n\rangle = \e^{\i E_n t} |\psi_n\rangle
\end{equation*}
and
\begin{equation*}
\frac{\1}{E_m - \hat{H}} \lvert \psi_n \rangle = \frac{1}{E_m - E_n} \lvert \psi_n \rangle
\end{equation*}
\end{comment}
\begin{comment}
state $\lvert \Psi(t') \rangle$
insert fermion at time $t'$ $\opd{c}{a}(t') \lvert \Psi(t') \rangle$
evolve in time to $t>t'$ $\hat{U}(t,t') \opd{c}{a}(t') \lvert \Psi(t') \rangle$
calculate overlap with $\opd{c}{a}(t) \lvert \Psi(t) \rangle$
\begin{align}
\langle \Psi(t) \rvert \op{c}{a}(t) \hat{U}(t,t') \opd{c}{a}(t') \lvert \Psi(t') \rangle
&= \langle \Psi \rvert \op{c}{a}(t) \opd{c}{a}(t') \lvert \Psi \rangle
\end{align}
\end{comment}
The concept of the Green function in quantum theory is as follows. From an initial state $\lvert \mathnormal\Psi \rangle$, act with the operator $\hat{B}(t')$ at time $t'$ and calculate the overlap with the state $\hat{A}(t) \lvert \mathnormal\Psi \rangle$ at some time $t$.
The Green function is the
quantum thermal expectation value over all such processes,\footnote{$\displaystyle
\left\langle \cdots \right\rangle \vcentcolon= \mathcal{Z}^{-1} \sum_{n} \left\langle n \left\lvert \cdots \, \e^{-\beta\hat{H}} \right\rvert n \right\rangle
$ and $\mathcal{Z}$ is the (grand canonical) partition function and $\{\lvert n \rangle\}$ is a complete basis of eigenstates.}
\begin{equation}
G^c(t,t')
\vcentcolon=
-\i \left\langle \mathsf{T} \hat{A}(t) \hat{B}(t') \right\rangle \,.
\end{equation}
A Green function of this form is called
the causal double-time Green function\index{Green function!double-time} where $\mathsf{T}$ denotes causal time ordering of the operators. Such a Green function is distinguished from many-time Green functions that arise, for example, in elementary particle physics calculations which involve many intermediary processes. In condensed matter and statistical mechanics, the double-time Green functions contain sufficiently complete information about the many-body system for most purposes~\cite{zubarev}.
Green functions come in the varieties of causal\index{Green function!causal}, retarded\index{Green function!retarded}, and advanced\index{Green function!advanced}, defined as
\begin{subequations}
\begin{align}
G^c(t,t') &\vcentcolon= -\i \left\langle \mathsf{T} \hat{A}(t) \hat{B}(t') \right\rangle
\\
G^r(t,t') &\vcentcolon= -\i \theta(t-t') \left\langle \{ \hat{A}(t) , \hat{B}(t') \} \right\rangle
\\
G^a(t,t') &\vcentcolon= \phantom{-}\i \theta(t'-t) \left\langle \{ \hat{A}(t) , \hat{B}(t') \} \right\rangle
\end{align}
\end{subequations}
where $\{\cdot\,,\cdot\}$ is the anticommutator associated with Fermi statistics.\footnote{While the anticommutator is used exclusively in this work, the Green function may also be defined using the commutator instead. The use of either the commutator or anticommutator in the Green function is not defined \textit{a priori} and is in general given by $[\hat{A},\hat{B}]_{\zeta} = \hat{A} \hat{B} + \zeta \hat{B} \hat{A}$ where $\zeta = \pm$. The choice of $\zeta$ is determined by the properties of $\hat{A}$ and $\hat{B}$, usually the commutator ($-$) for bosonic operators and the anticommutator ($+$) for fermionic operators. The $\hat{A}$ and $\hat{B}$ may also in general be compound operators which potentially obey more complicated statistics than those of single Bose or Fermi operators, and in these cases the choice of commutator is based on convenience for the problem and operators involved~\cite{zubarev}.}
\begin{comment}
$\hat{A}(t) = \e^{\i \mathcal{H} t} \hat{A} \e^{-\i \mathcal{H} t}$
time ordering
\begin{equation}
\mathsf{T} \hat{A}(t) \hat{B}(t') = \theta(t-t') \hat{A}(t) \hat{B}(t') - \theta(t'-t) \hat{B}(t') \hat{A}(t)
\end{equation}
\begin{align}
G_r(t-t') &= \int_{-\infty}^{\infty} \d E \, \e^{- \i E (t-t')} G_r(E)
\\
G_r(E) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} \d t \, \e^{\i E t} G_r(t)
\end{align}
\begin{align}
G_r(E) &= \frac{1}{2\pi \i} \int_{-\infty}^{\infty} \d t \, \e^{\i E (t-t')} \theta(t'-t) \left\langle \{ \hat{A}(t) , \hat{B}(t') \} \right\rangle
\end{align}
\end{comment}
The retarded Green function is analytic in the upper half plane and the
advanced Green function is analytic in the lower half plane.
The retarded Green function is generally the most useful for extracting physically relevant quantities of a system, and will therefore be the only type of Green function considered in the following and notated with the superscript omitted.
\begin{comment}
\begin{figure}[h]
\centering
\begin{tikzpicture}[decoration={markings, mark=at position 0.425 with {\arrow[xshift=3pt]{Stealth[length=6pt,width=4pt]}}, mark=at position 0.9 with {\arrow[xshift=3pt]{Stealth[length=6pt,width=4pt]}}},scale=0.75]
\draw[->] (-2.5,0)--(2.5,0);
\draw[->] (0,-2.5)--(0,2.5);
\draw[-,line width=1pt,postaction={decorate}] (2,0) arc (0:180:2cm and 2cm)--(-2,0)--(2,0)--cycle;
\foreach \a in {0.2,0.4,...,1.99}
{
\node[scale=0.67] at (-\a,0.15) {$\times$};
}
\end{tikzpicture}
\hspace{5em}
\begin{tikzpicture}[decoration={markings, mark=at position 0.15 with {\arrow[xshift=3pt]{Stealth[length=6pt,width=4pt]}}, mark=at position 0.72 with {\arrow[xshift=3pt]{Stealth[length=6pt,width=4pt]}}},scale=0.75]
\draw[->] (-2.5,0)--(2.5,0);
\draw[->] (0,-2.5)--(0,2.5);
\draw[-,line width=1pt,postaction={decorate}] (2,0) arc (0:180:2cm and -2cm)--(-2,0)--(2,0)--cycle;
\foreach \a in {0.2,0.4,...,1.99}
{
\node[scale=0.67] at (\a,-0.15) {$\times$};
}
\end{tikzpicture}
\end{figure}
\end{comment}
Using the Heisenberg representation of the operators, $\hat{A}(t) = \e^{\i \hat{H} t} \hat{A} \e^{-\i \hat{H} t}$, a useful property of the Green function for time-independent Hamiltonians is
\begin{equation}
\begin{aligned}[b]
G(t,t')
&= -\i \theta(t-t') \left\langle \{ \hat{A}(t) , \hat{B}(t') \} \right\rangle
\\
&= -\i \theta(t-t') \left\langle \hat{A} \e^{-\i (\hat{H}-E) (t-t')} \hat{B} + \hat{B} \e^{+\i (\hat{H}-E) (t-t')} \hat{A} \right\rangle \,,
\end{aligned}
\end{equation}
which means that $G(t,t') \equiv G(t-t')$.
This dependence of the Green function only on the interval means that the Green function can be found in frequency space by taking the Fourier transform as
\begin{equation}
G(\omega) = \int_{-\infty}^{\infty} \d t \, \e^{\i \omega t} G(t) \,.
\end{equation}
This transform, however, is only well-defined if the integral converges.
This can be ensured by imposing a convergence factor of $\e^{\pm\delta t}$ as
\begin{equation}
G(\omega) = \int_{-\infty}^{0} \d t \, \e^{\i \omega t + \delta t} G(t) + \int_{0}^{+\infty} \d t \, \e^{\i \omega t - \delta t} G(t)
\end{equation}
with $\delta >0$ and evaluated with the limit $\lim_{\delta\to0^+}$.\label{sec:complexfrequency}
The transform to frequency then involves a complex Fourier transform, or Laplace transform, with complex frequency $z \in \mathbbm{C}$ rather than real frequency $\omega \in \mathbbm{R}$. The frequency space Green function is then notated by $G(z)$.\footnote{The general complex frequency $z$ is also useful as it allows the definition of the single $G(z)$ for both real- and imaginary-time formulations. The imaginary-time Green function can be obtained as the case where $z = \i \omega_n$ where $\omega_n$ are the Matsubara frequencies~\cite{bruusflensberg}.}
The momentum space Green function $G(\boldsymbol{k})$ can be obtained by performing a Fourier transform on position. This can only be defined in the case of translation invariant Hamiltonians, which will not generally hold for the systems under consideration in the following, so the momentum space Green function will not be discussed further.
It is useful to introduce for the Green functions the alternative notation of Ref.~\cite{zubarev},
\begin{equation}
\Greenline{\hat{A}(t)}{\hat{B}(t')}\index{$\Green{\cdot\,}{\cdot}$}\index{Green function!Zubarev notation|see {$\Greenline{\cdot\,}{\cdot}$}}
\vcentcolon= G(t,t')\,,
\end{equation}
as $\hat{A}$ and $\hat{B}$ may in general be compound operators, which arises in the case of higher-order Green functions.\footnote{Note that this `order' terminology refers to the multiplicity of the operators $\hat{A}$ and/or $\hat{B}$, and not \textit{e.g.} the order of perturbation theory. What is referred to as `higher-order Green functions' still arise in non-perturbative calculations.} This notation additionally allows for transparency of which basis of the Hamiltonian the operators $\hat{A}$ and $\hat{B}$ are elements of.\footnote{An example of such a utility will be demonstrated in \S\ref{sec:domwalls}} The notation is also well suited for the equation of motion method for calculating Green functions, as will be shown in the subsequent section.
With this notation, the complex frequency Green function can be expressed as
\begin{equation}
\Green{\hat{A}}{\hat{B}}_z \vcentcolon= \i\int_{0}^{\infty} \d t\, \e^{\i z t} \Green{\hat{A}(t)}{\hat{B}(0)} \,.
\end{equation}
An important quantity which can be obtained from the retarded Green function is the spectral function, or density of states.
This can be obtained from the Lehmann representation\index{Lehmann representation} of the Green function,
\begin{align}
\Green{\op{c}{i}(t)}{\opd{c}{j}(0)}
&= \frac{1}{\mathcal{Z}} \sum_{n,m} \e^{-\beta E_n} \left( \langle n \lvert \op{c}{i} \rvert m \rangle \langle m \lvert \opd{c}{j} \rvert n \rangle \e^{\i (E_n - E_m) t} + \langle n \lvert \opd{c}{j} \rvert m \rangle \langle m \lvert \op{c}{i} \rvert n \rangle \e^{-\i (E_n - E_m) t} \right) \notag
\intertext{which in the energy representation is}
\Green{\op{c}{i}}{\opd{c}{j}}_z
&= \frac{1}{\mathcal{Z}} \sum_{n,m} \frac{\langle n \lvert \op{c}{i} \rvert m \rangle \langle m \lvert \opd{c}{j} \rvert n \rangle}{z + E_n - E_m} \left( \e^{-\beta E_n} + \e^{-\beta E_m} \right) \,. \label{eq:greenlehmann}
\end{align}
Here the states $\lvert n \rangle$ and $\lvert m \rangle$ label a complete basis of many-particle eigenstates which satisfy $\hat{H} \lvert n \rangle = E_n \lvert n \rangle$, and the identity $\sum_m \lvert m \rangle \langle m \rvert = 1$ has been used. The Lehmann representation is a convenient form from which Green functions can be calculated. However, obtaining the exact expression for the Green function requires knowing the complete basis of eigenstates and eigenenergies of the system in question. Its utility for exact calculations is therefore limited in many-body systems due to the exponential growth of the Hilbert space dimension. From the form \eqref{eq:greenlehmann} it is apparent that the Fourier transform of the Green function cannot be taken at real frequency, and the frequency-space (causal) Green function must be analytically continued into the complex plane, such that the transform to frequency space becomes a Laplace transform. The eigenenergies appearing in the denominator lie on the real axis, since they are the eigenvalues of a Hermitian operator, and the contour in the Laplace transform must be shifted to avoid these poles. The retarded Green function is obtained from the analytic continuation $z = \omega + \i\delta$.
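For a non-interacting (quadratic) Hamiltonian $\hat{H} = \sum_{ij} h_{ij} \opd{c}{i} \op{c}{j}$, the many-body Lehmann sum collapses to the single-particle resolvent, $G_{ij}(z) = [(z-h)^{-1}]_{ij} = \sum_n U_{in} U^*_{jn}/(z - \varepsilon_n)$, independent of temperature. A quick numerical sketch of this reduction (the random matrix is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random single-particle Hamiltonian matrix h_ij (quadratic model)
h = rng.normal(size=(n, n))
h = (h + h.T) / 2                     # Hermitian (real symmetric here)

eps, U = np.linalg.eigh(h)            # h = U diag(eps) U^dagger

z = 0.3 + 0.05j                       # a point in the upper half plane

# Lehmann-type spectral sum: G_ij(z) = sum_n U_in U*_jn / (z - eps_n)
G_lehmann = (U / (z - eps)) @ U.conj().T

# Resolvent form: G(z) = (z - h)^(-1)
G_resolvent = np.linalg.inv(z * np.eye(n) - h)

assert np.allclose(G_lehmann, G_resolvent)
```

For an interacting Hamiltonian no such reduction exists, and the full sum over many-body eigenstates in Eq.~\eqref{eq:greenlehmann} is required.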
By means of the Sokhotski-Plemelj theorem \eqref{eq:sokhotskiplemelj},
\begin{equation}
\lim_{\epsilon\to0} \int_{a}^{b} \frac{f(x)}{x - x_0 \pm \i \epsilon} \d x = \mp \i \pi \int_{a}^{b} f(x) \delta(x-x_0) \d x + \fint_{a}^{b} \frac{f(x)}{x-x_0} \d x \,,
\tag{\ref*{eq:sokhotskiplemelj}}
\end{equation}
the Green function can be decomposed into real and imaginary parts.
As the retarded Green function is analytic, it follows that the part involving the Cauchy principal value is the real part
\begin{equation}
\Re G(\omega) = \frac1\pi \fint \frac{\Im G(\omega')}{\omega' - \omega} \d\omega' \,.
\end{equation}
The remaining imaginary part of the Green function can be identified with the spectral function, or local density of states. The spectral function\index{spectral function} for site $j$ can be defined as
\begin{equation}
\mathcal{A}_{j}(\omega)
= \frac{1}{\mathcal{Z}} \sum_{m,n} \lvert u_{nm;j} \rvert^2 \delta\left( \omega - ( E_m - E_n ) \right) \left( \e^{-\beta E_n} + \e^{-\beta E_m} \right)
\end{equation}
which consists of a set of delta poles with weights $\lvert u_{nm;j} \rvert^2$ at positions $\omega = E_m-E_n$, where $u_{nm;j} = \langle n \lvert \op{c}{j} \rvert m \rangle$ and $u_{nm;j}^*= \langle m \lvert \opd{c}{j} \rvert n \rangle$. The spectral function measures the number of states in the many-body spectrum at a given energy $\omega$. In this thesis the normalization of spectral functions $\mathcal{A}(\omega)$ is chosen such that
\begin{equation}
\int_{-\infty}^{\infty} \d\omega\, \mathcal{A}(\omega) = 1
\end{equation}
although other normalization choices exist in the literature.\footnote{A common alternative definition of the spectral function from the retarded Green function is $\mathcal{A}(\omega) = - 2 \Im \Greenline{\op{c}{j}}{\opd{c}{j}}_{\omega+\i0^+}$ \cite{altlandsimons,bruusflensberg} such that the integration measure is $\displaystyle \frac{\d\omega}{2\pi}$ rather than $\d\omega$, with $\displaystyle \int \frac{\d\omega}{2\pi} \mathcal{A}(\omega) = 1$.}
It follows from Eq.~\eqref{eq:sokhotskiplemelj} that the spectral function can be obtained from the local retarded Green function on site $j$ as
\begin{equation}
\mathcal{A}_{j}(\omega) = -\frac1\pi \lim_{\delta\to0^+} \Im \Green{\op{c}{j}}{\opd{c}{j}}_{\omega+\i\delta} \,.
\end{equation}
The spectral function is a scalar function of the real variable $\omega$, so the notation $\mathcal{A}(\omega)$ is unambiguous. For complex analytic functions, such as the retarded single-particle Green function $G(z)$, it is useful to adopt the convention that for functions $f$ of complex variable $z$,
\begin{equation}
f(\omega) \equiv f(\omega^+) \equiv f(\omega+\i0^+) \vcentcolon= \lim_{\delta\to0^+} f(\omega+\i \delta)
\label{eq:omeganotation}
\end{equation}
so that the notation $G(\omega)$ has a well defined meaning.
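As a numerical check of this relation and of the chosen normalization, the boundary spectral function of a finite tight-binding chain can be computed directly from the resolvent. A sketch (chain length, broadening $\delta$, and frequency grid are illustrative choices):

```python
import numpy as np

# Nearest-neighbor tight-binding chain (eps = 0, t = 1), open boundaries
N, t = 40, 1.0
H = t * (np.eye(N, k=1) + np.eye(N, k=-1))

delta = 0.05                                   # small positive imaginary part
omega = np.linspace(-8, 8, 4001)

# Local spectral function on the first site:
# A_1(w) = -(1/pi) Im G_11(w + i delta)
A1 = np.empty_like(omega)
for k, w in enumerate(omega):
    G = np.linalg.inv((w + 1j * delta) * np.eye(N) - H)
    A1[k] = -G[0, 0].imag / np.pi

# With the normalization used here, A integrates to 1
norm = np.sum((A1[1:] + A1[:-1]) * np.diff(omega)) / 2   # trapezoid rule
assert abs(norm - 1.0) < 1e-2
```

The small deviation from unity comes from the Lorentzian tails cut off by the finite frequency window.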
For systems with repeating unit cells or internal degrees of freedom, it is useful to express the Green function in terms of a larger matrix, with each matrix element being the Green function over specific elements of these internal degrees of freedom.
The matrix Green function for systems with $S$ internal degrees of freedom is
\begin{equation}
\boldsymbol{G}_{\mu,\nu}(z) = \begin{pmatrix} \Green{\op{c}{\mu,n_1}}{\opd{c}{\nu,n_1}}_z & \cdots & \Green{\op{c}{\mu,n_1}}{\opd{c}{\nu,n_S}}_z \\ \vdots & \ddots & \vdots \\ \Green{\op{c}{\mu,n_S}}{\opd{c}{\nu,n_1}}_z & \cdots & \Green{\op{c}{\mu,n_S}}{\opd{c}{\nu,n_S}}_z \end{pmatrix}
\end{equation}
where $\mu$ and $\nu$ are indices enumerating the unit cells and the $n_i$ enumerate the internal degrees of freedom.
For a finite system, the spectrum consists of a set of discrete delta poles. However, these poles can be broadened under certain circumstances, such as finite lifetime of the quasiparticles or scattering from electron-phonon or electron-electron interactions.
Consider a Green function of the form
\begin{equation}
G(z) = \frac{1}{z - E + \i \Gamma} \,.
\end{equation}
The spectrum can be shown to be
\begin{equation}
\mathcal{A}(\omega) = \frac1\pi \frac{\Gamma}{(\omega - E)^2 + \Gamma^2}
\label{eq:broadA}
\end{equation}
which is not a delta function, but rather a Lorentzian peak of finite width. An example of such a $\Gamma$ is the inverse lifetime of a decaying quasiparticle.
In this circumstance the retarded Green function takes the form of $G(t) \approx -\i\theta(t) \e^{-\i E t} \e^{-t/\tau}$ where $\tau$ is the quasiparticle lifetime. This essentially amounts to a shift in the energy $E$ to $E - \i/\tau$. The spectrum then takes the form of Eq.~\eqref{eq:broadA} with $\Gamma = 1/\tau$.
A finite lifetime for quasiparticle states can arise, for example, due to coupling with an external bath, or contributions from the imaginary part of the self-energy, as will be seen later. The similar way in which both processes enter the Green functions suggests that the effect of interactions can be viewed as coupling to external baths. The self-energy arises from particle interactions and the broadening is interpreted as a spread in the energy spectrum due to scattering.
From the Lorentzian form of the delta function \eqref{eq:lorentzian}
\begin{equation}
\delta(x-x_0) = -\frac1\pi \lim_{\eta\to0^+} \Im \frac{1}{x-x_0+\i\eta} = \frac1\pi \lim_{\eta\to0^+} \frac{\eta}{(x-x_0)^2 + \eta^2} \,,
\end{equation}
it can be seen that for $\Gamma \to 0$ (or $\tau \to \infty$), the spectrum becomes a pole.
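A short numerical sketch confirming that the pole broadened by $\Gamma$ carries unit spectral weight (the parameter values are illustrative):

```python
import numpy as np

E, Gamma = 0.5, 0.1
omega = np.linspace(-50, 50, 10001)    # wide grid to capture the slow tails

G = 1.0 / (omega - E + 1j * Gamma)     # retarded pole shifted below the real axis
A = -G.imag / np.pi                    # spectral function

# A is the Lorentzian (1/pi) Gamma / ((w-E)^2 + Gamma^2) ...
A_exact = Gamma / np.pi / ((omega - E) ** 2 + Gamma ** 2)
assert np.allclose(A, A_exact)

# ... with (approximately) unit total weight
norm = np.sum((A[1:] + A[:-1]) * np.diff(omega)) / 2   # trapezoid rule
assert abs(norm - 1.0) < 1e-2
```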
\subsection{Calculation Methods\label{sec:calcmeth}}
\subsubsection{Green Function Equations of Motion}\label{sec:gfeom}
The Green function equation of motion can be applied as a method to solve for the Green function algebraically.
\begin{comment}
For a general causal Green function with operators $\hat{A}$ and $\hat{B}$ with commutator $[\hat{A} , \hat{B}]_\zeta$, the equation of motion is obtained from applying the time derivative operator
\begin{equation}
\begin{aligned}[b]
\Green{\hat{A}(t)}{\hat{B}(t')}^{c}
&= - \i \left\langle \mathsf{T} \hat{A}(t) \hat{B}(t') \right\rangle
\\
&= - \i \theta(t-t') \left\langle \hat{A}(t) \hat{B}(t') \right\rangle - \zeta \i \theta(t'-t) \left\langle \hat{B}(t') \hat{A}(t) \right\rangle
\\
-\frac{\hslash}{\i} \frac{\partial}{\partial t} \Green{\hat{A}(t)}{\hat{B}(t')}^{c}
&= \hslash \delta(t-t') \left\langle [ \hat{A}(t) , \hat{B}(t') ]_\zeta \right\rangle - \i \left\langle \mathsf{T} [ \hat{A}(t) , \hat{H} ] , \hat{B}(t') ]_\zeta \right\rangle
\\
&= \hslash \delta(t-t') \left\langle [ \hat{A}(t) , \hat{B}(t') ]_\zeta \right\rangle + \Green{[ \hat{A}(t) , \hat{H} ]}{\hat{B}(t')}^{c}
\end{aligned}
\end{equation}
The Green function equation of motion is particularly useful as it takes the same form for the causal, retarded, and advanced forms. For example, the equation of motion for the retarded Green function is derived as
\end{comment}
For the retarded Green function, the equation of motion can be computed as
\begin{equation}
\begin{aligned}
\frac{\d}{\d t'} \Green{\hat{A}(t)}{\hat{B}(t')}
&= \frac{\d}{\d t'} \left( - \i \theta(t-t') \left\langle \{ \hat{A}(t) , \hat{B}(t') \}\right\rangle \right)
\\
&= \i \delta(t-t') \left\langle \{ \hat{A}(t) , \hat{B}(t') \}\right\rangle - \i \theta(t-t') \left\langle \{ \hat{A}(t) , \tfrac{\d}{\d t'} \hat{B}(t') \}\right\rangle
\\
&= \i \delta(t-t') \left\langle \{ \hat{A}(t) , \hat{B}(t') \}\right\rangle - \theta(t-t') \left\langle \{ \hat{A}(t) , [ \hat{H} , \hat{B}(t') ] \}\right\rangle
\\
&= \i \delta(t-t') \left\langle \{ \hat{A}(t) , \hat{B}(t') \}\right\rangle + \Green{ \hat{A}(t) }{ [ \hat{H} , \hat{B}(t') ] } \,.
\end{aligned}
\end{equation}
The Green function equation of motion is particularly useful as it takes the same form for the causal, retarded, and advanced forms, as can be easily shown~\cite{zubarev}.\footnote{
To see this it is also helpful to note the identity $\Greenline{[ \hat{A}(t) , \hat{H} ]}{\hat{B}(t')}\, = \Greenline{ \hat{A}(t) }{ [ \hat{H} , \hat{B}(t') ] }\,$.}
The retarded single-particle Green function is given by $\Greenline{\op{c}{i}(t)}{\opd{c}{j}(t')}\,$,
which means that the Green function equation of motion takes the form of
\begin{equation}
\left( \frac{\hslash}{\i} \frac{\d}{\d t'} - \hat{H} \right) \Green{\op{c}{i}(t)}{\opd{c}{j}(t')} = \delta_{i,j} \delta(t-t') \,.
\end{equation}
From this expression it is seen that the single-particle quantum Green function is functionally analogous to the classical Green function for the Schr\"odinger operator, which motivates the nomenclature for this quantity.
It is also illuminating to consider an interacting Hamiltonian of the form $\hat{H} = \hat{H}_0 + \hat{H}_I$ where $\hat{H}_0$ is the free kinetic part and $\hat{H}_I$ contains the interactions. Then the Green function equation of motion can be written in the form
\begin{equation}
\left( \frac{\hslash}{\i} \frac{\d}{\d t'} - \hat{H}_0 \right) \Green{\op{c}{i}(t)}{\opd{c}{j}(t')}
=
\delta_{i,j} \delta(t-t') + \Green{\op{c}{i}(t)}{[\hat{H}_I , \opd{c}{j}(t')]} \,.
\end{equation}
In this form it is clear that unlike the classical Green function, the quantum Green function is in principle a non-linear entity due to the presence of the interaction term.
Performing a Laplace transform of the equation of motion to complex frequency\index{Green function!equation of motion} yields
\begin{align}
\i \int_{0}^{\infty} \d t\, \e^{\i z (t-t')} \frac{\d}{\d t'} \Green{\hat{A}(t)}{\hat{B}(t')}
&= \i \int_{0}^{\infty} \d t\, \e^{\i z (t-t')}
\begin{multlined}[t][0.125\linewidth]
\Big[
\i \delta(t-t') \left\langle \{ \hat{A}(t) , \hat{B}(t') \}\right\rangle \\- \theta(t-t') \left\langle \{ \hat{A}(t) , [ \hat{H} , \hat{B}(t') ] \}\right\rangle
\Big]
\end{multlined}\notag
\\
z \Green{\hat{A}}{\hat{B}}_z
&= \left\langle \{ \hat{A} , \hat{B} \} \right\rangle + \Green{\hat{A}}{[\hat{H} , \hat{B}]}_z \label{eq:greenzeom}\,.
\end{align}
As before, this equation of motion holds for causal, advanced, and retarded varieties. The retarded Green function is obtained from $z = \omega + \i \delta$.
This form is a highly convenient algebraic form from which the Green function can be solved for, either analytically or numerically. The Green function equation of motion Eq.~\eqref{eq:greenzeom} will appear as a central calculational tool throughout this thesis.
These equations of motion generally lead to an infinite hierarchy of Green functions of increasing order in the field operators.
From such an infinite hierarchy, a solution for the Green function can be found by resumming the hierarchy or by truncating it with an appropriate approximation at some level. An approximate solution for the Green function may also be found by applying perturbation theory to the Green function equations of motion.
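Before turning to lattice examples, it is instructive to note a minimal case in which the hierarchy closes immediately. For a single fermionic mode, $\hat{H} = \varepsilon\, \hat{c}^{\dagger} \hat{c}$, one has $[\hat{H} , \hat{c}^{\dagger}] = \varepsilon\, \hat{c}^{\dagger}$ and $\{ \hat{c} , \hat{c}^{\dagger} \} = 1$, so Eq.~\eqref{eq:greenzeom} gives
\begin{equation}
z \Green{\hat{c}}{\hat{c}^{\dagger}}_z = 1 + \varepsilon \Green{\hat{c}}{\hat{c}^{\dagger}}_z
\qquad\Longrightarrow\qquad
\Green{\hat{c}}{\hat{c}^{\dagger}}_z = \frac{1}{z - \varepsilon} \,,
\end{equation}
a single pole at the mode energy, as expected.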
With the basic formalism of the Green function equations of motion set up, it is now possible to examine a few examples. Consider the tight-binding model of a $1d$ chain with nearest-neighbor kinetics,
\begin{equation}
\hat{H} = \sum_{j\in\Gamma} \left[ \tensor*{\varepsilon}{_j} \opd{c}{j} \op{c}{j} + \tensor*{t}{_j} \left( \opd{c}{j+1} \op{c}{j} + \opd{c}{j} \op{c}{j+1} \right) \right] .
\label{eq:basickineticham}
\end{equation}
The Green function equation of motion for the correlation between arbitrary sites $i$ and $j$ reads as
\begin{equation}
( z - \varepsilon_{j} ) G_{i,j}(z) = \delta_{i,j} + t_{j-1} G_{i,j-1}(z) + t_{j} G_{i,j+1}(z)
\label{eq:simplegreeneom}
\end{equation}
where $G_{i,j}(z) \equiv \Greenline{\op{c}{i}}{\opd{c}{j}}_z$.
A typical quantity of interest is the density of states on a particular site of the system, such as the boundary site.
The general method for solving the equation of motion in $1d$ is to put the equation into a continued fraction form. For a local Green function such a continued fraction takes the form
\begin{equation}
\begin{aligned}[b]
( z - \varepsilon_{j} ) G_{j,j}(z)
&= 1 + t_{j-1} G_{j,j-1}(z) + t_{j} G_{j,j+1}(z)
\\
\left( z - \varepsilon_{j} - t_{j-1} \frac{G_{j,j-1}(z)}{G_{j,j}(z)} - t_{j} \frac{G_{j,j+1}(z)}{G_{j,j}(z)} \right) G_{j,j}(z)
&= 1
\\
G_{j,j}(z)
&= \cfrac{1}{z - \varepsilon_{j} - t_{j-1} \frac{G_{j,j-1}(z)}{G_{j,j}(z)} - t_{j} \frac{G_{j,j+1}(z)}{G_{j,j}(z)}}
\label{eq:greencontinuedfraction}
\end{aligned}
\end{equation}
The fractions of Green functions in the denominator can be obtained by computing the equation of motion for $G_{i,j\pm1}(z)$, to yield
\begin{equation}
\begin{aligned}[b]
z G_{i,j+1}(z)
&= \varepsilon_{j+1} G_{i,j+1}(z) + t_{j} G_{i,j}(z) + t_{j+1} G_{i,j+2}(z)
\\
t_{j} \frac{G_{i,j+1}(z)}{G_{i,j}(z)}
&= \cfrac{t_{j}^2}{z - \varepsilon_{j+1} - t_{j+1} \frac{G_{i,j+2}(z)}{G_{i,j+1}(z)}} .
\label{eq:green12}
\end{aligned}
\end{equation}
Iterating the equations of motion yields the continued fraction
\begin{equation}
G_{j,j}(z)
= \cfrac{1}{z - \varepsilon_{j} - \cfrac{t_{j-1}^2}{z - \varepsilon_{j-1} - \cfrac{t_{j-2}^2}{z - \ddots}} - \cfrac{t_{j}^2}{z - \varepsilon_{j+1} - \cfrac{t_{j+1}^2}{z - \ddots}}} .
\end{equation}
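Numerically, the two branches of this continued fraction amount to two simple recursions, one built up from each end of the chain. A sketch for a finite chain with arbitrary (here randomly chosen, purely illustrative) parameters, cross-checked against direct inversion of the tridiagonal resolvent:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
eps = rng.normal(scale=0.3, size=N)              # on-site energies eps_j
t = rng.normal(loc=1.0, scale=0.1, size=N - 1)   # hoppings t_j (site j <-> j+1)

z = 0.2 + 0.05j
j = 25                                           # local Green function on site j

# Left branch of the continued fraction (sites 0 .. j-1)
sigma_L = 0.0
for m in range(j):
    sigma_L = t[m] ** 2 / (z - eps[m] - sigma_L)

# Right branch of the continued fraction (sites j+1 .. N-1)
sigma_R = 0.0
for m in range(N - 2, j - 1, -1):
    sigma_R = t[m] ** 2 / (z - eps[m + 1] - sigma_R)

G_jj = 1.0 / (z - eps[j] - sigma_L - sigma_R)

# Cross-check against the resolvent of the full tridiagonal Hamiltonian
H = np.diag(eps) + np.diag(t, 1) + np.diag(t, -1)
G_exact = np.linalg.inv(z * np.eye(N) - H)[j, j]
assert np.isclose(G_jj, G_exact)
```

For tridiagonal Hamiltonians the continued fraction is exact, so the recursion reproduces the matrix inverse at far lower cost.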
In general this continued fraction can only be treated numerically, but
for a homogeneous system where $\varepsilon_{j} = \varepsilon$ and $t_{j} = t$ $\forall j$, the equation of motion for a particular site $j$ can be solved analytically.
For a semi-infinite system where the chain has a boundary at $j=1$, the continued fraction \eqref{eq:greencontinuedfraction} for the boundary site $j=1$ is
\begin{equation}
G_{1,1}(z)
= \cfrac{1}{z - \varepsilon - t \frac{G_{1,2}(z)}{G_{1,1}(z)}}
\label{eq:green11fraction}
\end{equation}
which now requires the solution of $G_{1,2}(z)$. Making use of the homogeneity of the chain parameters in Eq.~\eqref{eq:green12},
\begin{equation}
t \frac{G_{1,2}(z)}{G_{1,1}(z)} = t^2 G_{1,1}(z) \,.
\end{equation}
Inserting this expression back into the equation of motion for $G_{1,1}(z)$ produces
\begin{equation}
G_{1,1}(z)
= \cfrac{1}{z - \varepsilon - t^2 G_{1,1}(z)}
\tag{\ref*{eq:green11fraction}$^\prime$}
\end{equation}
which may be resummed to obtain the solution
\begin{equation}
G_{1,1}(z) = \frac{z - \varepsilon - \sqrt{(z - \varepsilon)^2 - 4 t^2}}{2 t^2} \,.
\label{eq:green11}
\end{equation}
This demonstrates a procedure for computing the Green functions of $1d$ lattice Hamiltonians from their equation of motion. The situation in $d>1$ is generally much more complicated. The Green function for an analogous system on the $2d$ square lattice for instance does not take the form of a continued fraction.
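The self-consistency relation $G_{1,1}(z) = 1/(z - \varepsilon - t^2 G_{1,1}(z))$ can also be solved by simple iteration, which selects the physical branch automatically for $z$ off the real axis; a minimal check against the closed form (the value of $z$ is illustrative):

```python
import numpy as np

eps, t = 0.0, 1.0
z = 0.7 + 0.1j            # complex frequency in the upper half plane

# Iterate the self-consistency G = 1/(z - eps - t^2 G) to its fixed point
g = 0.0
for _ in range(2000):
    g = 1.0 / (z - eps - t ** 2 * g)

# Closed-form boundary Green function (branch decaying as 1/z at infinity)
g_closed = (z - eps - np.sqrt((z - eps) ** 2 - 4 * t ** 2)) / (2 * t ** 2)
assert np.isclose(g, g_closed)
```

The iteration converges to the attracting fixed point, which coincides with the retarded branch of the square root for this choice of $z$.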
For the bulk site of an infinite homogeneous $1d$ chain, it can be observed that the Green function equation of motion \eqref{eq:greencontinuedfraction} takes the form of an arbitrary site in the bulk coupled to two semi-infinite chains. The Green function for the bulk site is
\begin{equation}
G_{\text{bulk}}(z) = \cfrac{1}{z - \varepsilon - 2 t^2 G_{1,1}(z)}
\end{equation}
where $G_{1,1}(z)$ is given by \eqref{eq:green11}. The final expression for the bulk Green function is then
\begin{equation}
G_{\text{bulk}}(z) = \frac{1}{\sqrt{(z - \varepsilon)^2 - 4 t^2}} \,.
\end{equation}
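As a consistency check, the bulk spectral function obtained this way can be compared with the local spectrum at the center of a long but finite chain, where the Green function poles are broadened by the same $\delta$ (chain length and broadening are illustrative):

```python
import numpy as np

eps, t, delta = 0.0, 1.0, 0.05
omega = np.linspace(-3, 3, 1201)
z = omega + 1j * delta

# Boundary Green function of each semi-infinite half-chain, by iteration
g = np.zeros_like(z)
for _ in range(5000):
    g = 1.0 / (z - eps - t ** 2 * g)

# Bulk site coupled to two semi-infinite chains
G_bulk = 1.0 / (z - eps - 2 * t ** 2 * g)
A_bulk = -G_bulk.imag / np.pi

# Compare with the local spectrum at the center of a long finite chain
N = 1501
H = eps * np.eye(N) + t * (np.eye(N, k=1) + np.eye(N, k=-1))
E, U = np.linalg.eigh(H)
mid = N // 2
weights = np.abs(U[mid, :]) ** 2          # |<mid|n>|^2
# Lorentzian-broadened pole sum, same delta as above
A_chain = (delta / np.pi * weights
           / ((omega[:, None] - E) ** 2 + delta ** 2)).sum(axis=1)

assert np.max(np.abs(A_bulk - A_chain)) < 1e-2
```

The agreement holds because the broadening $\delta$ exceeds the finite-size level spacing while suppressing boundary reflections.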
This procedure may also be applied to Hamiltonians defined on higher dimensional lattices. A notable example which is important in this thesis is the Bethe lattice\cite{bethe}\index{Bethe lattice}, a cluster of which is illustrated in Fig.~\ref{bethelatt}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.65, line width=2pt, every node/.style={scale=0.75,inner sep=3pt}, every path/.style={scale=0.75}]
\input{bethelatt.tex}
\end{tikzpicture}
\caption[Schematic of a Bethe lattice]{Schematic of the Bethe lattice with $\kappa = 5$. Shown here is a cluster subset (or Cayley tree). The true Bethe lattice has infinite extent.\label{bethelatt}}
\end{figure}
The Bethe lattice is particularly useful as a basis for finding exact solutions to problems in statistical physics~\cite{baxter}.
The Bethe lattice $\Gamma_{\textsc{bl}}$ is defined as a simple connected undirected regular acyclic graph parameterized only by a coordination number (number of each site's nearest neighbors) $\kappa$. This means that there exists exactly one path between any two lattice sites. The Bethe lattice is defined to have an infinite number of sites. A Bethe lattice with a finite number of sites is known as a Bethe lattice cluster or a Cayley tree.
The infinite $1d$ chain can be interpreted as the minimal example of a Bethe lattice with $\kappa = 2$.
Since the Bethe lattice is infinite in extent with each site having the same coordination number, the Green functions on all sites of a homogeneous Hamiltonian are equivalent, $G_{j,j}(z) \equiv G_{\textsc{bl}}(z)$ $\forall j \in \Gamma_{\textsc{bl}}$. This Green function may be computed from the equation of motion in a similar manner to the previous example to yield
\begin{align}
G_{\textsc{bl}}(z)
&= \cfrac{1}{z-\varepsilon - \cfrac{\kappa}{2(\kappa-1)} \left( z-\varepsilon - \sqrt{(z-\varepsilon)^2 - 4(\kappa-1)t^2} \right)} \,.
\end{align}
For the case $\kappa=2$, the Bethe lattice takes the form of the $1d$ chain. Substituting this value into the Bethe lattice Green function readily reproduces the Green function for the bulk of an infinite $1d$ tight-binding model, as expected.
A limit of the Bethe lattice which will be important later is the limit of infinite coordination number, $\lim_{\kappa\to\infty}\Gamma_{\textsc{bl}}$. Such a limit often proves useful for producing exact solutions in statistical mechanics~\cite{baxter}.
\label{infinitelimitbethe}
In this limit, the Green function is
\begin{align}
G_{\textsc{bl}}(z)
&= \lim_{\kappa\to\infty} \cfrac{1}{z-\varepsilon - \cfrac{\kappa}{2(\kappa-1)} \left( z-\varepsilon - \sqrt{(z-\varepsilon)^2 - 4(\kappa-1)t^2} \right)} \,.
\end{align}
It is seen that the Green function, and therefore the density of states, vanishes in the infinite coordination limit. In order to keep the density of states finite and non-trivial, it is necessary to scale the kinetic term by
\footnote{On the Bethe lattice it is sometimes taken in the literature that
$\displaystyle \widetilde{t} = \frac{t}{\sqrt{\kappa-1}}$ to more exactly cancel the $\kappa$-dependent prefactor. The two choices result in the same limit as $\kappa\to\infty$.
}
\begin{equation}
t \mapsto \frac{\widetilde{t}}{\sqrt{\kappa}} \,.
\end{equation}
With this scaling, the Green function becomes finite
\begin{align}
G_{\textsc{bl}}(z)
&= \lim_{\kappa\to\infty} \cfrac{1}{z-\varepsilon - \cfrac{\kappa}{2(\kappa-1)} \left( z-\varepsilon - \sqrt{(z-\varepsilon)^2 - 4(\kappa-1) \left(\frac{\widetilde{t}}{\sqrt{\kappa}}\right)^2} \right)}
\\
&= \frac{2}{z-\varepsilon + \sqrt{(z-\varepsilon)^2 - 4 \widetilde{t}^2}}
\intertext{or more conventionally,}
&= \frac{z - \varepsilon - \sqrt{(z-\varepsilon)^2 - 4 \widetilde{t}^2}}{2 \widetilde{t}^2} \,,
\end{align}
which is equivalent to the Green function on the boundary of a semi-infinite $1d$ homogeneous chain \eqref{eq:green11} with hopping parameter $\widetilde{t}$.
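This limit can likewise be verified numerically. The short Python sketch below (illustrative; the square root is written as a product of principal square roots so that the branch cut lies on the band and $G(z) \sim 1/z$ at large $|z|$ in the upper half-plane) recovers the semicircular density of states $\mathcal{A}(\omega) = \sqrt{4\widetilde{t}^2 - \omega^2}/(2\pi\widetilde{t}^2)$:

```python
import cmath
import math

def g_bethe_inf(z, eps, tt):
    """Infinite-coordination Bethe lattice Green function.
    The product of principal square roots places the branch cut on the band
    [eps - 2 tt, eps + 2 tt] and gives G ~ 1/z at large |z|."""
    root = cmath.sqrt(z - eps - 2 * tt) * cmath.sqrt(z - eps + 2 * tt)
    return (z - eps - root) / (2 * tt * tt)

eta, tt = 1e-6, 1.0  # small positive imaginary part for the retarded function
for w in (-1.5, 0.0, 0.7):
    a = -g_bethe_inf(w + 1j * eta, 0.0, tt).imag / math.pi
    a_exact = math.sqrt(4 * tt * tt - w * w) / (2 * math.pi * tt * tt)
    assert abs(a - a_exact) < 1e-4  # semicircular density of states
```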
Another way in which the equation of motion method can be applied to lattices of dimension $d>1$ is via a partial Fourier transformation~\cite{convolutionmethod}. A semi-infinite $3d$ system with a $2d$ boundary is an example of an applicable situation. In this case the degrees of freedom of the infinite (or periodic) $2d$ boundary are Fourier transformed to a diagonal momentum space representation. The system then takes the form of a set of decoupled $1d$ chains, one for each quasimomentum $k_n$, extending into the bulk of the original system. A Green function for each $k_n$ can then be obtained from the equations of motion approach for the $1d$ chains as described above.
\subsubsection{Numerical Fast Recursion Method}
As shown above, for simple non-interacting systems it is possible to obtain an analytic solution for the Green function. However, often the desired end result is not the Green function itself, but rather objects which are obtained from the Green function, such as the spectral function. This involves taking the imaginary component of the analytically continued Green function, which may be non-trivial, for example requiring the appropriate branch cut to be taken in the complex plane. Furthermore, in the case of more involved systems which are periodic but with a large unit cell, the solution for the Green function from the equations of motion would involve the roots of high-order polynomials and take a very complicated form.
These issues can be avoided by instead calculating the Green functions by numerically iterating the equations of motion.
Solving the Green function equation of motion for a non-interacting one dimensional system results in a solution in the form of a continued fraction with depth equal to the length of the system~\cite{cfrac}. For high resolution, it is desirable to use a very large system. Directly computing the Green function from such a large continued fraction is, however, computationally inefficient. This necessitates the implementation of a more efficient computational strategy. The strategy adopted here is that of a recursion algorithm where each iteration increases the effective system size exponentially by exploiting self-similarity of the system down the $1d$ chain~\cite{thing,otherthing}.
The typical form of a $1d$ non-interacting tight-binding Hamiltonian in condensed matter physics is
\begin{equation}
\hat{H} = \sum_{j} \left( | \psi_j \rangle \boldsymbol{h}_0 \langle \psi_j | + | \psi_{j+1} \rangle \boldsymbol{h}_1 \langle \psi_{j} | + | \psi_{j} \rangle \boldsymbol{h}^\dagger_1 \langle \psi_{j+1} | \right)
\end{equation}
where $\boldsymbol{h}_0$ represents dynamics within a unit cell and $\boldsymbol{h}_1$ represents dynamics between unit cells with $| \psi_j \rangle$ an $L$-dimensional vector representing the $j$-th unit cell consisting of $L$ degrees of freedom. The submatrices $\boldsymbol{h}_0$ and $\boldsymbol{h}_1$ are of dimension $L \times L$.
The Green function equation of motion for this Hamiltonian can be adapted as a matrix variant of Eq.~\eqref{eq:simplegreeneom},
\begin{align}
\left( z\mathbbm{1} - \tensor*{\boldsymbol{h}}{_0} \right) \boldsymbol{G}_{j,0}(z) &= \tensor*{\boldsymbol{h}}{^\dagger_1} \boldsymbol{G}_{j-1,0}(z) + \tensor*{\boldsymbol{h}}{_1} \boldsymbol{G}_{j+1,0}(z) \,. \label{eq:greenitereom}
\\
\intertext{This equation of motion can be re-expressed as}
\boldsymbol{G}_{j,0}(z) &= \boldsymbol{\tau}_0 \boldsymbol{G}_{j-1,0}(z) + \widetilde{\boldsymbol{\tau}}_0 \boldsymbol{G}_{j+1,0}(z) \tag{\ref*{eq:greenitereom}$^\prime$} \label{eq:greeniter}
\end{align}
with the introduction of auxiliary transfer matrices
\begin{subequations}
\begin{align}
\boldsymbol{\tau}_0 &= \left(z\mathbbm{1} - \boldsymbol{h}_0\right)^{-1} \boldsymbol{h}_1^\dagger
\\
\widetilde{\boldsymbol{\tau}}_0 &= \left(z\mathbbm{1} - \boldsymbol{h}_0\right)^{-1} \tensor{\boldsymbol{h}}{_1}
\end{align}
\end{subequations}
The equation of motion Eq.~\eqref{eq:greeniter} can be iterated to produce
\begin{align*}
\boldsymbol{G}_{j,0}(z)
&= \boldsymbol{\tau}_0 \left( \boldsymbol{\tau}_0 \boldsymbol{G}_{j-2,0}(z) + \widetilde{\boldsymbol{\tau}}_0 \boldsymbol{G}_{j,0}(z) \right) + \widetilde{\boldsymbol{\tau}}_0 \left( \boldsymbol{\tau}_0 \boldsymbol{G}_{j,0}(z) + \widetilde{\boldsymbol{\tau}}_0 \boldsymbol{G}_{j+2,0}(z) \right)
\\
&= (\mathbbm{1} - \boldsymbol{\tau}_0 \widetilde{\boldsymbol{\tau}}_0 - \widetilde{\boldsymbol{\tau}}_0 \boldsymbol{\tau}_0)^{-1} \boldsymbol{\tau}_0^2 \boldsymbol{G}_{j-2,0}(z) + (\mathbbm{1} - \boldsymbol{\tau}_0 \widetilde{\boldsymbol{\tau}}_0 - \widetilde{\boldsymbol{\tau}}_0 \boldsymbol{\tau}_0)^{-1} \widetilde{\boldsymbol{\tau}}_0^2 \boldsymbol{G}_{j+2,0}(z)
\\
&\equiv \boldsymbol{\tau}_1 \boldsymbol{G}_{j-2,0}(z) + \widetilde{\boldsymbol{\tau}}_1 \boldsymbol{G}_{j+2,0}(z) \tag{\ref*{eq:greeniter}$^\prime$}\label{eq:greeniterprime}
\end{align*}
with an iterated pair of auxiliary transfer matrices $\boldsymbol{\tau}_1$ and $\widetilde{\boldsymbol{\tau}}_1$. Each additional iteration of Eq.~\eqref{eq:greeniterprime} doubles the separation of the unit cells being related, so that the $n$-th iteration connects Green functions for cells $2^n$ apart. The $n$-th iteration is
\begin{align}
\boldsymbol{G}_{j,0}(z) &= \boldsymbol{\tau}_n \boldsymbol{G}_{j-2^n,0}(z) + \widetilde{\boldsymbol{\tau}}_n \boldsymbol{G}_{j+2^n,0}(z)
\end{align}
where the auxiliary transfer matrices are recursively obtained following Eq.~\eqref{eq:greeniterprime}
\begin{subequations}
\begin{align}
{\boldsymbol{\tau}}_{n+1} &= \left[ \mathbbm{1} - {\boldsymbol{\tau}}_n \widetilde{\boldsymbol{\tau}}_n - \widetilde{\boldsymbol{\tau}}_n {\boldsymbol{\tau}}_n \right]^{-1} {\boldsymbol{\tau}}_n^2
\\
\widetilde{\boldsymbol{\tau}}_{n+1} &= \left[ \mathbbm{1} - {\boldsymbol{\tau}}_n \widetilde{\boldsymbol{\tau}}_n - \widetilde{\boldsymbol{\tau}}_n {\boldsymbol{\tau}}_n \right]^{-1} \widetilde{\boldsymbol{\tau}}_n^2
\end{align}
\end{subequations}
The $\boldsymbol{\tau}$ and $\widetilde{\boldsymbol{\tau}}$ matrices have the interpretation of being a generalization of the interstitial hopping amplitude for cells $2^n$ apart.
In order to relate the non-local Green functions to a local one, the $n$-th order iteration may be taken with $j = 2^n$
\begin{align}
\boldsymbol{G}_{2^n,0}(z) &= \boldsymbol{\tau}_n \boldsymbol{G}_{0,0}(z) + \widetilde{\boldsymbol{\tau}}_n \boldsymbol{G}_{2^{n+1},0}(z)
\label{eq:poweriter}
\end{align}
and building a new iteration series now based on the $n$ index and iterating with $n=0,1,2,\ldots,N$ as
\begin{equation}
\begin{aligned}[b]
\boldsymbol{G}_{1,0}(z)
&= \boldsymbol{\tau}_0 \boldsymbol{G}_{0,0}(z) + \widetilde{\boldsymbol{\tau}}_0 \boldsymbol{G}_{2,0}(z)
\\
&= \left( \boldsymbol{\tau}_0 + \widetilde{\boldsymbol{\tau}}_0 \boldsymbol{\tau}_1 \right) \boldsymbol{G}_{0,0}(z) + \widetilde{\boldsymbol{\tau}}_0 \widetilde{\boldsymbol{\tau}}_1 \boldsymbol{G}_{4,0}(z)
\\
&= \left( \boldsymbol{\tau}_0 + \widetilde{\boldsymbol{\tau}}_0 \boldsymbol{\tau}_1 + \widetilde{\boldsymbol{\tau}}_0 \widetilde{\boldsymbol{\tau}}_1 \boldsymbol{\tau}_2 \right) \boldsymbol{G}_{0,0}(z) + \widetilde{\boldsymbol{\tau}}_0 \widetilde{\boldsymbol{\tau}}_1 \widetilde{\boldsymbol{\tau}}_2 \boldsymbol{G}_{8,0}(z)
\\
&\vdotswithin{=}
\\
&\equiv \boldsymbol{T} \boldsymbol{G}_{0,0}(z) + \left( \prod_{m=0}^{N} \widetilde{\boldsymbol{\tau}}_m \right) \boldsymbol{G}_{2^{N+1},0}(z)
\end{aligned}
\end{equation}
where each step is iterated from Eq.~\eqref{eq:poweriter} with each $\boldsymbol{G}_{2^n,0}$ term downfolded to $\boldsymbol{G}_{0,0}$ and $\boldsymbol{G}_{1,0}$ terms from the preceding iterations, which produces each term of the $\boldsymbol{\tau}$, $\widetilde{\boldsymbol{\tau}}$-polynomial $\boldsymbol{T}$. This matrix $\boldsymbol{T}$ takes the form for general $N$
\begin{equation}
\boldsymbol{T} = \boldsymbol{\tau}_0 + \sum_{n=1}^{N} \left[ \left(\prod_{m=0}^{n-1} \widetilde{\boldsymbol{\tau}}_{m}\right) \boldsymbol{\tau}_{n} \right] \,.
\end{equation}
The recursion is truncated with the approximation $\boldsymbol{G}_{2^{N+1},0}(z) \simeq 0$, leaving $\boldsymbol{G}_{1,0}(z) \simeq \boldsymbol{T} \boldsymbol{G}_{0,0}(z)$.
Returning to the original equation of motion for the boundary unit cell Green function,
\begin{equation}
\begin{aligned}[b]
\left( z\mathbbm{1} - \tensor*{\boldsymbol{h}}{_0} \right) \boldsymbol{G}_{0,0}(z) &= \mathbbm{1} + \tensor*{\boldsymbol{h}}{_1} \boldsymbol{G}_{1,0}(z)
\\
&\simeq \mathbbm{1} + \tensor*{\boldsymbol{h}}{_1} \boldsymbol{T} \boldsymbol{G}_{0,0}(z)
\\
\boldsymbol{G}_{0,0}(z) &= \left[ z \mathbbm{1} - \boldsymbol{h}_0 - \boldsymbol{h}_1 \boldsymbol{T} \right]^{-1} \,.\label{eq:recursiongreen}
\end{aligned}
\end{equation}
This recursion scheme, resulting in the final expression Eq.~\eqref{eq:recursiongreen}, therefore allows for the computation of boundary-cell Green functions for very long systems of length $2^{N+1}$ directly from the elementary subblocks of the Hamiltonian, accurately capturing the properties of semi-infinite systems at low computational cost. The algorithm may also be adapted to produce the Green function for the second unit cell from the boundary~\cite{thing,otherthing}.
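For the scalar case $L = 1$ (a homogeneous chain with $\boldsymbol{h}_0 = \varepsilon$ and $\boldsymbol{h}_1 = t$), the full algorithm reduces to a few lines. The following Python sketch (a minimal illustration, not an optimized implementation) iterates the transfer matrices, accumulates $\boldsymbol{T}$, and checks the result against the analytic boundary Green function of the semi-infinite chain:

```python
import cmath

def boundary_g_exact(z, eps, t):
    """Boundary Green function of the semi-infinite homogeneous chain."""
    root = cmath.sqrt(z - eps - 2 * t) * cmath.sqrt(z - eps + 2 * t)
    return (z - eps - root) / (2 * t * t)

def boundary_g_recursive(z, eps, t, niter=40):
    """Doubling recursion for the boundary Green function (scalar case, L = 1)."""
    tau = t / (z - eps)    # tau_0 = (z - h0)^-1 h1^dagger
    taut = t / (z - eps)   # tilde tau_0 = (z - h0)^-1 h1
    T = tau                # n = 0 term of the tau-polynomial
    prod = taut            # running product of the tilde tau_m
    for _ in range(niter):
        denom = 1 - 2 * tau * taut
        tau, taut = tau * tau / denom, taut * taut / denom
        T += prod * tau    # (prod_{m<n} tilde tau_m) tau_n
        prod *= taut
    return 1 / (z - eps - t * T)   # G_00 = [z - h0 - h1 T]^-1

z = 0.5 + 0.1j  # frequency inside the band, finite broadening
assert abs(boundary_g_recursive(z, 0.0, 1.0) - boundary_g_exact(z, 0.0, 1.0)) < 1e-8
```

Each pass through the loop doubles the effective chain length, so forty iterations already represent a chain of $2^{40}$ sites.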
\subsubsection{Interacting Green functions}
\label{sec:hubbardatomgf}
The Green function for an interacting model generally cannot be computed exactly, but it is possible to obtain exact solutions for models which are sufficiently simple. Such a model is the Hubbard atom, which consists of a single interacting site. This model may be interpreted as the Hubbard model in the limit $U/t \to \infty$. In this limit the dynamics are dominated by the on-site interaction and hybridization between lattice sites becomes negligible. Each site may therefore be analyzed independently of the others.
This system is described by the Hamiltonian
\begin{equation}
\op{H}{\textsc{ha}} = \varepsilon \left( \opd{c}{\uparrow} \op{c}{\uparrow} + \opd{c}{\downarrow} \op{c}{\downarrow} \right) + U \opd{c}{\uparrow} \op{c}{\uparrow} \opd{c}{\downarrow} \op{c}{\downarrow} \,.
\end{equation}
The Green function is calculated from the equation of motion
\begin{align}
z \Green{\op{c}{\sigma}}{\opd{c}{\sigma}}_z
&= \langle \{ \op{c}{\sigma} , \opd{c}{\sigma} \} \rangle + \Green{\op{c}{\sigma}}{[\op{H}{\textsc{ha}},\opd{c}{\sigma}]}_z
\intertext{to yield}
&= 1 + \varepsilon \Green{\op{c}{\sigma}}{\opd{c}{\sigma}}_z + U \Green{\op{c}{\sigma}}{\tensor*{\hat{n}}{_{-\sigma}}\opd{c}{\sigma}}_z
\end{align}
which makes use of the commutator
$
[ \op{H}{\textsc{ha}} , \opd{c}{\sigma} ] = \varepsilon \opd{c}{\sigma} + U \tensor*{\hat{n}}{_{-\sigma}} \opd{c}{\sigma} \,.
$
It is now necessary to compute the Green function $\Greenline{\op{c}{\sigma}}{\tensor*{\hat{n}}{_{-\sigma}}\opd{c}{\sigma}}_z$. This involves the commutator
\begin{equation}
\begin{aligned}
[ \op{H}{\textsc{ha}} , \tensor*{\hat{n}}{_{-\sigma}} \opd{c}{\sigma} ] &= \varepsilon \tensor*{\hat{n}}{_{-\sigma}^2} \opd{c}{\sigma} + U \tensor*{\hat{n}}{_{-\sigma}^2} \opd{c}{\sigma} \\&= \varepsilon \tensor*{\hat{n}}{_{-\sigma}} \opd{c}{\sigma} + U \tensor*{\hat{n}}{_{-\sigma}} \opd{c}{\sigma} \,,
\end{aligned}
\end{equation}
where the second equality follows from the fact that the fermion number operator is idempotent, $\tensor*{\hat{n}}{_{-\sigma}^2} = \tensor*{\hat{n}}{_{-\sigma}}$.
This Green function is now calculated to be
\begin{equation}
\begin{aligned}[b]
z \Green{\op{c}{\sigma}}{\tensor*{\hat{n}}{_{-\sigma}}\opd{c}{\sigma}}_z
&= \langle \{ \op{c}{\sigma} , \tensor*{\hat{n}}{_{-\sigma}} \opd{c}{\sigma} \} \rangle + \Green{\op{c}{\sigma}}{[ \op{H}{\textsc{ha}} , \tensor*{\hat{n}}{_{-\sigma}}\opd{c}{\sigma} ]}_z
\\
&= \langle \tensor*{\hat{n}}{_{-\sigma}} \rangle + \varepsilon \Green{\op{c}{\sigma}}{\tensor*{\hat{n}}{_{-\sigma}}\opd{c}{\sigma}}_z + U \Green{\op{c}{\sigma}}{\tensor*{\hat{n}}{_{-\sigma}}\opd{c}{\sigma}}_z
\\
\Green{\op{c}{\sigma}}{\tensor*{\hat{n}}{_{-\sigma}}\opd{c}{\sigma}}_z
&= \frac{\langle \tensor*{\hat{n}}{_{-\sigma}} \rangle}{z - \varepsilon - U} \,.
\end{aligned}
\end{equation}
With this expression the hierarchy of equations of motion closes, as no further unknown Green functions are generated.
It is now possible to write the Green function in closed form as
\begin{equation}
\begin{aligned}[b]
(z-\varepsilon) \Green{\op{c}{\sigma}}{\opd{c}{\sigma}}_z
&= 1 + U \Green{\op{c}{\sigma}}{\tensor*{\hat{n}}{_{-\sigma}}\opd{c}{\sigma}}_z
\\
\Green{\op{c}{\sigma}}{\opd{c}{\sigma}}_z
&= \frac{1}{z - \varepsilon} + \frac{U}{z-\varepsilon} \frac{\langle \tensor*{\hat{n}}{_{-\sigma}} \rangle}{z - \varepsilon - U}
\\
&=
\frac{1 - \langle \tensor*{\hat{n}}{_{-\sigma}} \rangle}{z - \varepsilon} + \frac{\langle \tensor*{\hat{n}}{_{-\sigma}} \rangle}{z - \varepsilon - U} \,.
\end{aligned}
\end{equation}
This leads to a spectral function which consists of two poles at $\omega_{p_1} = \varepsilon$ and $\omega_{p_2} = \varepsilon + U$, with weights $1 - \langle \op{n}{-\sigma} \rangle$ and $\langle \op{n}{-\sigma} \rangle$ respectively. As the Hamiltonian is symmetric under $\sigma \leftrightarrow -\sigma$, the Green functions are similarly symmetric under such an exchange.
Following from the equilibrium fluctuation-dissipation theorem, the general temperature dependent filling of the atom is described by
\begin{equation}
\langle \op{n}{\sigma} \rangle = \int \d\omega f(\omega) \mathcal{A}_{\sigma}(\omega)
\end{equation}
where $f(\omega) = 1/(1 + \e^{\beta \omega})$ is the Fermi function. Given the form of the spectral function, the filling of the atom can be computed as
\begin{equation}
\langle \op{n}{\pm\sigma} \rangle
=
\frac{f(\varepsilon)}{1 - f(\varepsilon+U) + f(\varepsilon)} \,.
\label{eq:hafilling}
\end{equation}
At sufficiently low temperature and for $-U < \varepsilon < 0$, it follows that $\langle \op{n}{-\sigma} \rangle \approx \frac12$.
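A quick numerical check of Eq.~\eqref{eq:hafilling} (a Python sketch with illustrative parameter values) confirms both the self-consistency with the two-pole spectral function and the low-temperature filling of $\approx \frac12$:

```python
import math

def fermi(w, beta):
    """Fermi function, written to avoid overflow at large beta."""
    if w >= 0:
        x = math.exp(-beta * w)
        return x / (1 + x)
    return 1 / (1 + math.exp(beta * w))

eps, U, beta = -0.3, 1.0, 200.0  # illustrative values with -U < eps < 0
n = fermi(eps, beta) / (1 - fermi(eps + U, beta) + fermi(eps, beta))

# self-consistency with the two-pole spectral function: n = (1-n) f(eps) + n f(eps+U)
assert abs(n - ((1 - n) * fermi(eps, beta) + n * fermi(eps + U, beta))) < 1e-12
# the low-temperature filling is approximately one half
assert abs(n - 0.5) < 1e-6
```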
Imposing particle-hole symmetry, where $\op{H}{\textsc{ha}}$ is symmetric under $\opd{c}{\sigma} \leftrightarrow \op{c}{\sigma}$, leads to the condition that $\varepsilon = -U/2$. In this case, $\langle \op{n}{\sigma} \rangle = \frac12$ exactly, by the particle-hole
symmetry, independent of temperature.
The Green function at particle-hole symmetry is then
\begin{equation}
\Green{\op{c}{\sigma}}{\opd{c}{\sigma}}_z
= \frac{\frac12}{z + \frac{U}{2}} + \frac{\frac12}{z - \frac{U}{2}}
\end{equation}
with spectral function
\begin{equation}
\mathcal{A}(\omega) = \frac12 \delta\left( \omega + \tfrac{U}{2} \right) + \frac12 \delta\left( \omega - \tfrac{U}{2} \right) \,,
\end{equation}
which consists of two poles situated at $\omega = \pm \frac{U}{2}$ each with weight $\frac12$. This is plotted in Fig.~\ref{fig:haspec}.
\begin{figure}[h]
\centering
\includegraphics{hubatspec.pdf}
\caption{Spectrum of the Hubbard atom consisting of two poles at $\omega=-\frac{U}{2}$ and $\omega=+\frac{U}{2}$. The shaded curve is the Fermi function $f(\omega)$.\label{fig:haspec}}
\end{figure}
It is observed that $\mathcal{A}_{\sigma}(\omega) = \mathcal{A}_{\sigma}(-\omega)$, which corresponds to the system's particle-hole symmetry. For $\varepsilon \neq -U/2$ with $-U < \varepsilon < 0$, the low-temperature filling remains $\langle \op{n}{\sigma} \rangle \approx \frac12$, meaning the system is still essentially at half-filling; however, the system no longer possesses particle-hole symmetry. Parameterizing the particle-hole asymmetry as $\eta \vcentcolon= 1 + 2\varepsilon/U$, the spectrum becomes $\mathcal{A}_{\sigma}(\omega) = \frac12 \delta\left( \omega + \tfrac{U}{2}(1-\eta) \right) + \frac12 \delta\left( \omega - \tfrac{U}{2}(1+\eta) \right)$. It follows that the system is only particle-hole symmetric for $\eta = 0$, or $\varepsilon = -U/2$, as $\mathcal{A}_{\sigma}(\omega) \neq \mathcal{A}_{\sigma}(-\omega)$ for $\eta \neq 0$.
While the single site Hubbard atom is a relatively trivial example of determining an interacting system's Green functions from the equations of motion method, the size of the hierarchy can scale exponentially with increasing system size. A Hubbard model consisting of only two sites, a Hubbard dimer, can also be solved from the Green function equations of motion; however, this involves the calculation of 105 Green functions in order to close the hierarchy~\cite{hubbarddimereom}. The Hubbard dimer is more easily solved by exact diagonalization, with the spectrum then recovered via the Lehmann representation.
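As an illustration of this route, the sketch below exactly diagonalizes the half-filled $S_z = 0$ sector of the Hubbard dimer (four basis states) with a small pure-Python Jacobi eigensolver. The sign convention chosen here for the hopping matrix elements affects the eigenvectors but not the spectrum, which is $\{0,\, U,\, (U \pm \sqrt{U^2 + 16t^2})/2\}$:

```python
import math

def jacobi_eigenvalues(a, sweeps=100, tol=1e-12):
    """Eigenvalues of a real symmetric matrix by cyclic Jacobi rotations."""
    n = len(a)
    a = [[float(x) for x in row] for row in a]
    for _ in range(sweeps):
        off = max(abs(a[i][j]) for i in range(n) for j in range(n) if i != j)
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p][q]) < tol:
                    continue
                # rotation angle chosen to zero the (p, q) element
                theta = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):  # rotate rows p and q
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k] = c * apk - s * aqk
                    a[q][k] = s * apk + c * aqk
                for k in range(n):  # rotate columns p and q
                    akp, akq = a[k][p], a[k][q]
                    a[k][p] = c * akp - s * akq
                    a[k][q] = s * akp + c * akq
    return sorted(a[i][i] for i in range(n))

t, U = 1.0, 4.0
# basis: |up,dn>, |dn,up>, |updn,0>, |0,updn>
H = [[0, 0, t, t],
     [0, 0, t, t],
     [t, t, U, 0],
     [t, t, 0, U]]
evals = jacobi_eigenvalues(H)
m = math.sqrt(U * U + 16 * t * t)
expected = sorted([0.0, U, (U - m) / 2, (U + m) / 2])
assert max(abs(x - y) for x, y in zip(evals, expected)) < 1e-8
```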
For the full Hubbard model on a fully interacting lattice, the local Green Function equation of motion is
\begin{equation}
\begin{aligned}
z \Green{\op{c}{j,\sigma}}{\opd{c}{j,\sigma}}_z
&= \langle \{ \op{c}{j,\sigma} , \opd{c}{j,\sigma} \} \rangle + \Green{\op{c}{j,\sigma}}{[\op{H}{\textsc{h}} , \opd{c}{j,\sigma}]}_z
\\ &= 1 + \Green{\op{c}{j,\sigma}}{[\op{H}{0} , \opd{c}{j,\sigma}]}_z + \Green{\op{c}{j,\sigma}}{[\op{H}{I} , \opd{c}{j,\sigma}]}_z
\end{aligned}
\end{equation}
where
\begin{subequations}
\begin{equation}
\begin{aligned}
\Green{\op{c}{j,\sigma}}{[\op{H}{0} , \opd{c}{j,\sigma}]}_z
&= \epsilon \Green{\op{c}{j,\sigma}}{\opd{c}{j,\sigma}}_z + t \Green{\op{c}{j,\sigma}}{\opd{c}{j+1,\sigma}}_z + t \Green{\op{c}{j,\sigma}}{\opd{c}{j-1,\sigma}}_z
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}[b]
\Green{\op{c}{j,\sigma}}{[\op{H}{I} , \opd{c}{j,\sigma'}]}_z
&= U \Green{\op{c}{j,\sigma}}{\opd{c}{j,\sigma'} \opd{c}{j,-\sigma'} \op{c}{j,-\sigma'}}_z \,.
\end{aligned}
\end{equation}
\end{subequations}
This Green function can then be written in the compact form of
\begin{equation}
\begin{aligned}[b]
\Green{\op{c}{j,\sigma}}{\opd{c}{j,\sigma}}_z
&=
\cfrac{1}{z - \epsilon - K_{\sigma}(z) - \Sigma_{\sigma}(z)}
\end{aligned}
\end{equation}
with
\begin{subequations}
\begin{align}
& K_{\sigma}(z) = t \sum_{r} \cfrac{\Green{\op{c}{j,\sigma}}{\opd{c}{j+r,\sigma}}_z}{\Green{\op{c}{j,\sigma}}{\opd{c}{j,\sigma}}_z}
\\
& \Sigma_{\sigma}(z) = U \cfrac{\Green{\op{c}{j,\sigma}}{\opd{c}{j,\sigma} \opd{c}{j,-\sigma} \op{c}{j,-\sigma}}_z}{\Green{\op{c}{j,\sigma}}{\opd{c}{j,\sigma}}_z} \label{eq:foverggreen}
\end{align}
\end{subequations}
where $\Sigma_\sigma(z)$ is the local self-energy. Explicit forms for the higher order Green functions obtained from the equations of motion of the elements in $K_\sigma(z)$ and $\Sigma_\sigma(z)$ are found in the appendix \S\ref{appendixeom}.
It is apparent from the Green functions and their equations of motion that the two particle expectation value does not factor into single particle expectation values
\begin{equation}
\langle \tensor*{\hat{n}}{_\uparrow} \tensor*{\hat{n}}{_\downarrow} \rangle \neq \langle \tensor*{\hat{n}}{_\uparrow} \rangle \langle \tensor*{\hat{n}}{_\downarrow} \rangle
\end{equation}
which demonstrates the inherent strongly correlated nature of the Hubbard model.
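This non-factorization is easily illustrated in the atomic limit, where the thermal averages over the four Fock states can be written down directly (a Python sketch with illustrative parameters; at the particle-hole symmetric point and low temperature, $\langle \tensor*{\hat{n}}{_\uparrow} \tensor*{\hat{n}}{_\downarrow} \rangle \to 0$ while $\langle \tensor*{\hat{n}}{_\uparrow} \rangle \langle \tensor*{\hat{n}}{_\downarrow} \rangle \to \frac14$):

```python
import math

def atomic_averages(eps, U, beta):
    """Thermal averages over the four Fock states {|0>, |up>, |dn>, |updn>}."""
    # energies: |0> -> 0, |up>, |dn> -> eps, |updn> -> 2 eps + U
    w = [1.0, math.exp(-beta * eps), math.exp(-beta * eps),
         math.exp(-beta * (2 * eps + U))]  # Boltzmann weights
    Z = sum(w)
    n_up = (w[1] + w[3]) / Z   # <n_up> (= <n_dn> by spin symmetry)
    n_dbl = w[3] / Z           # <n_up n_dn>
    return n_up, n_dbl

n_up, n_dbl = atomic_averages(-2.0, 4.0, beta=50.0)  # eps = -U/2, low temperature
assert abs(n_up - 0.5) < 1e-9          # half filling
assert n_dbl < 1e-10                   # double occupancy thermally suppressed
assert abs(n_up * n_up - 0.25) < 1e-9  # but the uncorrelated estimate is 1/4
```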
The Green functions for strongly correlated systems are not easily obtained. Unlike for non-interacting models, the hierarchy of Green function equations of motion generally does not close, even in $1d$. For weak coupling, the Green functions can be computed in a perturbative expansion, but for strong coupling, non-perturbative methods or well-defined approximations are needed.
\section{Numerical Renormalization Group}
The numerical renormalization group (NRG) is a fully non-perturbative renormalization group\footnote{The renormalization group is not technically a group in the formal mathematical sense, as there is no notion of an inverse operation contained in renormalization group actions. The set of renormalization group actions is rather a monoid. Alternatively it may be thought of as a semigroup, as the presence of an identity operation is superfluous.} transformation which can solve quantum impurity models numerically exactly~\cite{wilson,kww1,kww2,nrg}. NRG was originally developed to treat the Kondo model, which, similar to the aforementioned Anderson impurity model, describes dilute magnetic alloys, an example being single Fe atom magnetic impurities embedded in a non-magnetic host such as Au. NRG has since been generalized to a broader class of physical systems which may have multiple, but still few, interacting degrees of freedom, such as quantum dots coupled to one or more non-interacting fermionic baths.
Quantum impurity models are essentially of the form of the Anderson impurity model, with an interacting impurity hybridized to a non-interacting bath. The NRG calculation is first initialized by forming the integral representation of the impurity model as
\begin{subequations}
\begin{align}
\hat{H}^{\int}_{\text{bath}} &= \sum_{\sigma} \int_{-D}^{D} \d \epsilon \, g(\epsilon) \opd{a}{\epsilon \sigma} \op{a}{\epsilon \sigma}
\\
\hat{H}^{\int}_{\text{hyb}} &= \sum_{\sigma} \int_{-D}^{D} \d \epsilon \, h(\epsilon) \left( \opd{d}{\sigma} \op{a}{\epsilon \sigma} + \opd{a}{\epsilon \sigma} \op{d}{\sigma} \right)
\end{align}
\label{eq:nrgintegralrep}
\end{subequations}
where $D$ is the bandwidth of the hybridization and the bath operators $\tensor*{\hat{a}}{^{(\dagger)}_{\epsilon \sigma}}$ satisfy the usual fermionic anticommutation relations. The function $g(\epsilon)$ is the dispersion of the band and $h(\epsilon)$ is the hybridization between the impurity and the band states. These functions are related to the impurity hybridization function $\Delta(z)$, \textit{e.g.} \eqref{eq:siamDelta}, as
\begin{equation}
\Im \Delta(\omega) = \int_{-D}^{D} \d\epsilon\, h(\epsilon)^2 \delta(\omega - g(\epsilon)) = h[g^{-1}(\omega)]^2 \frac{\d}{\d\omega} g^{-1}(\omega) \,.
\end{equation}
The key to NRG is the logarithmic separation of energy scales. Quantum impurity models are typically characterized by features at energy scales much lower than that of the bare Hamiltonian due to the Kondo effect~\cite{hewson,kondo}. The use of logarithmic scaling enables the use of exact diagonalization to resolve the low energy features, which would be impossible if the scaling was linear.
The domain of energy support of the bath spectral function is divided into a set of logarithmic intervals $\left[ \Lambda^{-n-1} , \Lambda^{-n} \right]$ with $\Lambda > 1$. The spectral function is then discretized by replacing the continuous spectrum with a discrete pole of the same total weight, as illustrated in Fig.~\ref{fig:nrgdisc}.
\begin{figure}[h!]
\centering
\input{nrglogdisc.tex}
\caption[Logarithmic discretization of the hybridization function in NRG]{Logarithmic discretization of the hybridization function in NRG. The hybridization function $\Delta(\omega)$ is the shaded region with the discretization bins marked by dashed lines. The orange peaks are poles centered in each bin with weight equal to the spectral weight of that bin.\label{fig:nrgdisc}}
\end{figure}
Within each discretization band, an orthonormal set of functions is defined
\begin{equation}
\tensor*{\psi}{^{\pm}_{np}}(\epsilon) = \begin{cases} \frac{\Lambda^{n/2}}{(1 - \Lambda^{-1})^{1/2}} \e^{\pm \i \omega_n p \epsilon} & |\epsilon| \in \left( \Lambda^{-n-1} , \Lambda^{-n} \right] \\ \hfil 0 & |\epsilon| \notin \left( \Lambda^{-n-1} , \Lambda^{-n} \right] \end{cases}
\end{equation}
with $p \in \mathbbm{Z}$ and
\begin{equation}
\omega_n \equiv \frac{2\pi}{\Lambda^{-n} - \Lambda^{-(n+1)}} \,.
\end{equation}
With these basis functions, the bath operators are expressed in a Fourier expansion as
\begin{equation}
\op{a}{\epsilon \sigma} = \sum_{n,p} \left[ \op{a}{np\sigma} \tensor*{\psi}{^+_{np}}(\epsilon) + \op{b}{np\sigma} \tensor*{\psi}{^-_{np}}(\epsilon) \right]
\end{equation}
with the inverse Fourier expressions being
\begin{align}
\op{a}{np\sigma} &= \int_{-D}^{D} \d\epsilon \left[ \tensor*{\psi}{^+_{np}}(\epsilon) \right]^* \op{a}{\epsilon\sigma} \,,
&
\op{b}{np\sigma} &= \int_{-D}^{D} \d\epsilon \left[ \tensor*{\psi}{^-_{np}}(\epsilon) \right]^* \op{a}{\epsilon\sigma} \,,
\end{align}
and $\op{a}{np\sigma}$ and $\op{b}{np\sigma}$ satisfy the usual fermionic anticommutation relations.
The elements of the Hamiltonian are then re-expressed with these operators lying in these discretization intervals.
For $h(\epsilon) = h = \text{const.}$, the
impurity couples only to the $p=0$ orbital. For general $\Delta(\omega)$, this can be accomplished by setting $h(\epsilon) = \text{const.}$ within each interval where the constant value is the average of the hybridization within that interval.
The discretized Hamiltonian for the impurity model is now
\begin{equation}
\begin{aligned}
\frac{\hat{H}}{D} = \hat{H}_{\text{imp}}
&+ \sum_{n,\sigma} \left( \xi^+_n \opd{a}{n0\sigma} \op{a}{n0\sigma} + \xi^-_n \opd{b}{n0\sigma} \op{b}{n0\sigma} \right)
\\&+ \frac{1}{\sqrt{\pi}} \sum_{n,\sigma} \left( \opd{d}{\sigma} \left( \gamma^+_n \op{a}{n0\sigma} + \gamma^-_n \op{b}{n0\sigma} \right)
+ \left( \gamma^+_n \opd{a}{n0\sigma} + \gamma^-_n \opd{b}{n0\sigma} \right) \op{d}{\sigma} \right)
\end{aligned}
\end{equation}
with the coefficients $\xi^\pm_n$ and $\gamma^\pm_n$ defined as
\begin{align}
\xi^\pm_n &= \frac{\int_{\pm\mathcal{I}_n} \d\epsilon\, \epsilon\Delta(\epsilon)}{\int_{\pm\mathcal{I}_n} \d\epsilon\, \Delta(\epsilon)} \,,
&
( \gamma^\pm_n )^2 &= \int_{\pm\mathcal{I}_n} \d\epsilon\, \Delta(\epsilon) \,,
\end{align}
and the integral domains are the intervals $+\mathcal{I}_n = [\Lambda^{-(n+1)},\Lambda^{-n}]$, $-\mathcal{I}_n = [-\Lambda^{-n},-\Lambda^{-(n+1)}]$.
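In practice the coefficients $\xi^\pm_n$ and $(\gamma^\pm_n)^2$ are obtained by quadrature of these defining integrals over each bin. A minimal Python sketch (midpoint rule; the flat-band closed forms used in the check are the bin midpoint and the bin weight):

```python
def bin_coeffs(delta, n, lam, steps=20000):
    """xi_n^+ and (gamma_n^+)^2 from their defining integrals over the
    positive bin [lam^-(n+1), lam^-n], using the midpoint rule."""
    lo, hi = lam ** -(n + 1), lam ** -n
    de = (hi - lo) / steps
    weight = 0.0   # integral of Delta over the bin -> (gamma_n^+)^2
    first = 0.0    # integral of eps * Delta over the bin
    for i in range(steps):
        e = lo + (i + 0.5) * de
        weight += delta(e) * de
        first += e * delta(e) * de
    return first / weight, weight

# flat band: closed forms are the bin midpoint and the bin width
lam, n = 2.0, 3
xi, g2 = bin_coeffs(lambda e: 1.0, n, lam)
lo, hi = lam ** -(n + 1), lam ** -n
assert abs(xi - (hi + lo) / 2) < 1e-10
assert abs(g2 - (hi - lo)) < 1e-10
```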
An issue with the discretization is that it systematically underestimates the hybridization function. This can be compensated for by a correction factor $A_\Lambda$, whose exact expression depends on the functional form of the hybridization function.
For quantum impurity models the hybridization function input to NRG often takes the form of a flat band, as illustrated in Fig.~\ref{fig:nrgdisc}. However, in other contexts, such as those which will be seen to appear in dynamical mean-field theory, non-trivial initial hybridization functions generally need to be considered. A particularly notable case is where the low energy behavior of the hybridization function exhibits power-law behavior, $\Delta(\omega) \sim |\omega|^r$.
For this case, the scale factor takes the form~\cite{ingersent.PhysRevB.57.14254}
\begin{equation}
A_{\Lambda,r} = \left[ \frac{1-\Lambda^{-(2+r)}}{2+r} \right]^{1+r} \left[ \frac{1+r}{1-\Lambda^{-(1+r)}} \right]^{2+r} \ln\Lambda
\end{equation}
which includes the flat-band case, $r=0$.
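Two limits of this expression serve as sanity checks, sketched below in Python: for $r=0$ it reduces algebraically to the flat-band factor $A_\Lambda = \frac{\Lambda+1}{2(\Lambda-1)}\ln\Lambda$, and $A_{\Lambda,r} \to 1$ in the continuum limit $\Lambda \to 1$, where no correction is needed:

```python
import math

def a_factor(lam, r):
    """Discretization correction factor A_{Lambda,r} for Delta(w) ~ |w|^r."""
    return (((1 - lam ** -(2 + r)) / (2 + r)) ** (1 + r)
            * ((1 + r) / (1 - lam ** -(1 + r))) ** (2 + r)
            * math.log(lam))

# r = 0 reduces to A_Lambda = (Lambda + 1) ln(Lambda) / (2 (Lambda - 1))
lam = 2.0
assert abs(a_factor(lam, 0) - (lam + 1) * math.log(lam) / (2 * (lam - 1))) < 1e-12
# continuum limit Lambda -> 1: the correction factor approaches unity
assert abs(a_factor(1.0001, 0.5) - 1.0) < 1e-3
```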
This discretized Hamiltonian is now mapped onto a semi-infinite chain, known as the Wilson chain.
The model now consists of the impurity, located on site $-1$ of the semi-infinite chain, hybridized to the single site $0$ of the chain~\cite{kww1}. The conduction operators on this chain for $n\geq0$ are given by
\begin{equation}
\op{c}{0\sigma} = \frac{1}{\sqrt{\xi_0}} \sum_{n} \left( \gamma^+_n \op{a}{n0\sigma} + \gamma^-_n \op{b}{n0\sigma} \right)
\end{equation}
which results in the Hamiltonian taking the form
\begin{equation}
\begin{multlined}[c][0.75\linewidth]
\frac{\hat{H}}{D} = \hat{H}_{\text{imp}} + \sqrt{\frac{\xi_0}{\pi}} \sum_{\sigma}\left( \opd{d}{\sigma} \op{c}{0,\sigma} + \opd{c}{0,\sigma} \op{d}{\sigma} \right) \\+ \sum_{\sigma,n=0}^{\infty} \left[ \tensor{\varepsilon}{_{n}} \opd{c}{n,\sigma} \op{c}{n,\sigma} + \tensor{t}{_n} \left( \opd{c}{n,\sigma} \op{c}{n+1,\sigma} + \opd{c}{n+1,\sigma} \op{c}{n,\sigma} \right) \right]
\end{multlined}
\end{equation}
where $\xi_0 = \int_{-D}^D \d\epsilon\, \Delta(\epsilon)$.
The parameters of the chain Hamiltonian are obtained recursively with the initialization
\begin{equation}
\begin{aligned}
\varepsilon_0 &= \frac{1}{\xi_0} \int_{-D}^{D} \d\varepsilon \; \varepsilon \Delta(\varepsilon) \,,
\\
t_0^2 &= \frac{1}{\xi_0} \sum_{m} \left[ (\xi^+_m - \varepsilon_0)^2 (\gamma^+_m)^2 + (\xi^-_m - \varepsilon_0)^2 (\gamma^-_m)^2 \right] \,,
\\
u_{1,m} &= \frac{1}{t_0} (\xi^+_m - \varepsilon_0) \frac{\gamma^+_m}{\sqrt{\xi_0}} \,,
\\
v_{1,m} &= \frac{1}{t_0} (\xi^-_m - \varepsilon_0) \frac{\gamma^-_m}{\sqrt{\xi_0}} \,,
\end{aligned}
\end{equation}
and the iteration proceeding for $n\geq1$ as
\begin{equation}
\begin{aligned}
\varepsilon_n &= \sum_{m} \left( \xi^+_m u_{n,m}^2 + \xi^-_m v_{n,m}^2 \right) \,,
\\
t_n^2 &= \frac{1}{\xi_0} \sum_{m} \left[ (\xi^+_m)^2 u_{n,m}^2 + (\xi^-_m)^2 v_{n,m}^2 \right] - t_{n-1}^2 - \varepsilon_n^2 \,,
\\
u_{n+1,m} &= \frac{1}{t_n} \left[ (\xi^+_m - \varepsilon_n) u_{n,m} - t_{n-1} u_{n-1,m} \right] \,,
\\
v_{n+1,m} &= \frac{1}{t_n} \left[ (\xi^-_m - \varepsilon_n) v_{n,m} - t_{n-1} v_{n-1,m} \right] \,.
\end{aligned}
\end{equation}
It is worth noting that the $1d$ chain form of the discretized bath, the Wilson chain, is independent of the actual physical geometry of the bath. The interpretation of successive sites are as orbitals successively further away from the impurity. Physical details of the actual bath enter only through the effective Wilson chain parameters $t_n$ and $\varepsilon_n$. The Wilson chain is shown schematically in Fig.~\ref{fig:wilsonchain}.
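For the particle-hole symmetric flat band the chain parameters are known in closed form (a standard result of the NRG literature, with $\varepsilon_n = 0$ by symmetry): in units of $D$, $t_n = \frac{(1+\Lambda^{-1})(1-\Lambda^{-n-1})\Lambda^{-n/2}}{2\sqrt{1-\Lambda^{-2n-1}}\sqrt{1-\Lambda^{-2n-3}}}$. The Python sketch below checks the advertised logarithmic decay of the hoppings, $t_n/t_{n+1} \to \sqrt{\Lambda}$:

```python
import math

def wilson_t(n, lam):
    """Closed-form flat-band Wilson chain hopping t_n (in units of D)."""
    return ((1 + 1 / lam) * (1 - lam ** -(n + 1)) * lam ** (-n / 2)
            / (2 * math.sqrt(1 - lam ** -(2 * n + 1))
                 * math.sqrt(1 - lam ** -(2 * n + 3))))

lam = 2.5
# successive hoppings decay as lam^{-n/2}: the ratio approaches sqrt(lam)
ratios = [wilson_t(n, lam) / wilson_t(n + 1, lam) for n in range(25, 30)]
assert all(abs(r - math.sqrt(lam)) < 1e-6 for r in ratios)
```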
The next procedure in the calculation is an iterative diagonalization.
This is the step in which
the renormalization group character of NRG appears. The Hamiltonian is considered as a limit of Hamiltonians
\begin{equation}
\begin{aligned}
\hat{H}_N
= \Lambda^{\frac{N-1}{2}} \Bigg[ \hat{H}_{\text{imp}} &+ \sqrt{\frac{\xi_0}{\pi}} \sum_{\sigma} \left( \opd{d}{\sigma} \op{c}{0,\sigma} + \opd{c}{0,\sigma} \op{d}{\sigma} \right) \\
&+ \left. \sum_{\sigma,n=0}^{N} \tensor{\varepsilon}{_{n}} \opd{c}{n,\sigma} \op{c}{n,\sigma} + \sum_{\sigma,n=0}^{N-1} \tensor{t}{_n} \left( \opd{c}{n,\sigma} \op{c}{n+1,\sigma} + \opd{c}{n+1,\sigma} \op{c}{n,\sigma} \right) \right]
\end{aligned}
\end{equation}
with
\begin{equation}
\frac{\hat{H}}{D} = \lim_{N\to\infty} \Lambda^{-\frac{N-1}{2}} \hat{H}_{N} \,.
\end{equation}
Successive Hamiltonians are constructed iteratively as
\begin{equation}
\hat{H}_{N+1} = \Lambda^{\frac{1}{2}} \hat{H}_{N} + \Lambda^{\frac{N}{2}} \sum_{\sigma} \tensor{\varepsilon}{_{N+1}} \opd{c}{N+1,\sigma} \op{c}{N+1,\sigma} + \Lambda^{\frac{N}{2}} \sum_{\sigma} \tensor{t}{_{N}} \left( \opd{c}{N,\sigma} \op{c}{N+1,\sigma} + \opd{c}{N+1,\sigma} \op{c}{N,\sigma} \right) \,.
\end{equation}
This construction of successive Hamiltonians is the renormalization operation.
At each iteration the Hamiltonian $\hat{H}_{N}$ is diagonalized to obtain its energy spectrum and eigenbasis.
The states take the form of
\begin{equation}
\lvert Q ,\, S_z ;\, r \rangle
\end{equation}
with charge and spin projection quantum numbers, $Q$ and $S_z$; and an index $r$ labelling states with the same $Q$ and $S_z$.
The basis at successive iterations $N+1$ is constructed from the previous iteration. Since this basis would grow exponentially, thereby hindering the exact diagonalization of the Hamiltonian, only a truncated set of states is used to build the basis for the next iteration. The truncation is determined by a set energy cutoff, with only the lowest energy states retained at each step. This process is diagrammed schematically in Fig.~\ref{fig:nrgstates}.
A complete basis of eigenstates, the Anders-Schiller basis, can be formed from the total set of discarded states~\cite{andersschiller}. This basis is needed, rather than the basis produced from the kept states, because the kept states possess non-zero overlap with each other, which leads to an overcounting of contributions to the Lehmann sum~\cite{fdmnrg}. Using the Anders-Schiller basis, which is complete, avoids this problem.
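To make the iterative diagonalization and truncation concrete, the following sketch applies the procedure to a spinless Wilson chain in Python. This is a simplified toy: the flat-band hoppings $t_n = \Lambda^{-n/2}$, the cutoff of $64$ kept states, and the omission of the $\Lambda^{N/2}$ rescaling and quantum-number bookkeeping are illustrative choices, not those of a production NRG code.

```python
import numpy as np

# Toy iterative diagonalization of a spinless Wilson chain.
Lam, n_keep, n_sites = 2.5, 64, 20
f = np.array([[0.0, 1.0], [0.0, 0.0]])    # site annihilator in basis {|0>, |1>}
I2, Z2 = np.eye(2), np.diag([1.0, -1.0])  # identity and fermion parity

H = np.zeros((2, 2))   # site 0 alone, eps_0 = 0 (particle-hole symmetric chain)
c = f.copy()           # annihilator of the current last site, in the current basis
P = Z2.copy()          # parity operator (Jordan-Wigner string), current basis
E0 = 0.0               # accumulated ground-state energy

for n in range(n_sites):
    t_n = Lam ** (-n / 2)                  # hoppings decay exponentially along the chain
    c_new = np.kron(P, f)                  # annihilator of the newly added site
    hop = t_n * (np.kron(c, I2).T @ c_new) # t_n c_n^dag c_{n+1}
    H_big = np.kron(H, I2) + hop + hop.T   # enlarged Hamiltonian
    E, U = np.linalg.eigh(H_big)           # eigenvalues in ascending order
    keep = min(n_keep, len(E))             # truncate: keep the lowest-energy states
    U = U[:, :keep]
    E0 += E[0]
    H = np.diag(E[:keep] - E[0])           # spectrum relative to the new ground state
    c = U.T @ c_new @ U                    # project operators into the kept basis
    P = U.T @ np.kron(P, Z2) @ U

# Free chain: exact ground energy is the sum of negative single-particle levels.
T = np.diag([Lam ** (-n / 2) for n in range(n_sites)], k=1)
eps = np.linalg.eigvalsh(T + T.T)
E_exact = eps[eps < 0].sum()
```

For this non-interacting chain the truncated ground-state energy can be checked against the exact sum of negative single-particle levels; the truncation is accurate precisely because the chain couplings fall off exponentially, so the discarded high-energy states couple only weakly to later sites.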
\begin{figure}[ht!]
\centering
\begin{tikzpicture}[every node/.style={line width=1pt,inner sep=4pt,scale=1.},scale=1.5]
\node[circle,draw=black] (o) at (0,0){$\phantom{o}$};
\foreach \n [count=\m] in {0,...,4}
{
\node[rectangle,draw=black] (\m) at ($(\n,0)+(1,0)$){$\phantom{o}$};
\node[below=2pt,inner sep=8pt] at (\m) {$\tensor*[_{\phantom{\n}}]{\varepsilon}{_{\m}}$};
\ifnum \n>0
\draw[black,line width=10pt/\m] (\n)--(\m) node[midway,above] {$t_\n$};
\fi
}
\node[] (end) at (6,0){};
\draw[-,line width=1pt,double distance = 1pt] (o)--(1) node[midway,above] {$V$};
\draw[black,dashed,line cap=round,line width=1.5pt] (5)--(end);
\end{tikzpicture}
\caption[Schematic of the NRG Wilson chain]{Schematic of the NRG Wilson chain. The amplitude of the $t_n$'s decays exponentially with increasing $n$, as a consequence of the logarithmic discretization.\label{fig:wilsonchain}}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale=1]{nrgtruncation.pdf}
\caption[Truncation of states in NRG]{Truncation of states in NRG. At each iteration, only a certain number of lowest energy states are kept (black lines). The higher energy states are discarded (gray dashed lines). The discarded states however are kept to form the complete Anders-Schiller basis~\cite{andersschiller} used for calculating spectral quantities.\label{fig:nrgstates}}
\end{figure}
A particularly relevant quantity which will need to be calculated is the Green function. Green functions are constructed in NRG through their Lehmann representation\index{Lehmann representation} from the eigenstates obtained in the iterative diagonalization~\cite{fdmnrg,nrggreen}. Of particular relevance is the self-energy due to the interactions on the impurity. It is obtained from the auxiliary Green function
\begin{equation}
\begin{aligned}[b]
F_\sigma(z)
&\vcentcolon= \Green{\op{d}{\sigma}}{[\hat{H}_{\text{Int}},\opd{d}{\sigma}]}_z
\end{aligned}
\end{equation}
which appears from the contribution of the interaction Hamiltonian in the single particle equations of motion.
From the Green function equations of motion the self-energy can be shown to be~\cite{bullaFG}
\begin{equation}
\Sigma_{\sigma}(z) = \frac{F_\sigma(z)}{G_\sigma(z)} \,.
\label{eq:FoverG}
\end{equation}
For the single impurity Anderson model,
\begin{equation}
F_{\sigma}(z) = U \Green{\op{d}{\sigma}}{\opd{d}{\sigma} \opd{d}{-\sigma} \op{d}{-\sigma}}_z
\end{equation}
and note the comparison with Eq.~\eqref{eq:foverggreen}.
The imaginary parts of $F_\sigma(z)$ and $G_\sigma(z)$ are calculated in NRG via the Lehmann representation.
Analogously to $G_\sigma(z)$, it is possible to write a spectral function for $F_\sigma(z)$ as
\begin{equation}
\mathcal{B}_{\sigma}(\omega) = -\frac1\pi \Im F_\sigma(\omega+\i0^+)
\end{equation}
which in the Lehmann representation is
\begin{equation}
\mathcal{B}_{\sigma}(\omega)
= \frac{1}{\mathcal{Z}} \sum_{m,n} \langle n \lvert \op{d}{\sigma} \rvert m \rangle \langle m \lvert \opd{d}{\sigma} \opd{d}{-\sigma} \op{d}{-\sigma} \rvert n \rangle \delta\left( \omega - ( E_m - E_n ) \right) \left( \e^{-\beta E_n} + \e^{-\beta E_m} \right)
\end{equation}
where $m,n$ span a complete basis.
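The structure of such a Lehmann sum can be illustrated with a toy exact diagonalization. The sketch below evaluates the analogous sum for the ordinary spectral function $\mathcal{A}_\sigma(\omega)$ of a single Hubbard atom; the parameters and inverse temperature are illustrative.

```python
import numpy as np

# Lehmann spectral weights for a toy Hubbard atom, H = eps*(n_up+n_dn) + U*n_up*n_dn.
U, eps, beta = 0.3, -0.15, 50.0        # particle-hole symmetric: eps = -U/2
f = np.array([[0.0, 1.0], [0.0, 0.0]])
I2, Z2 = np.eye(2), np.diag([1.0, -1.0])
d_up, d_dn = np.kron(f, I2), np.kron(Z2, f)   # Jordan-Wigner ordering (up, down)
n_up, n_dn = d_up.T @ d_up, d_dn.T @ d_dn
H = eps * (n_up + n_dn) + U * (n_up @ n_dn)

E, V = np.linalg.eigh(H)
dU = V.T @ d_up @ V                    # matrix elements <n| d_up |m> in the eigenbasis
boltz = np.exp(-beta * E)
Zf = boltz.sum()                       # partition function

# Lehmann sum: pole at E_m - E_n with weight |<n|d|m>|^2 (e^{-beta E_n}+e^{-beta E_m})/Z
poles, weights = [], []
for n_i in range(4):
    for m in range(4):
        w = abs(dU[n_i, m]) ** 2 * (boltz[n_i] + boltz[m]) / Zf
        if w > 1e-12:
            poles.append(E[m] - E[n_i])
            weights.append(w)
```

The total weight sums to $\langle \{ d_{\sigma}, d^{\dagger}_{\sigma} \} \rangle = 1$, and at particle-hole symmetry the poles sit at $\pm U/2$ with weight $1/2$ each.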
The discrete spectral poles from the Lehmann representation calculation are broadened to form a continuous spectrum.
This gives the imaginary part of $F_{\sigma}(\omega+\i0^+)$.
In cases where poles are distributed linearly, poles at $\omega_p$ of weight $w_p$ can be broadened using, for example, a Lorentzian distribution
\begin{align}
w_p \delta(\omega-\omega_p) &\mapsto w_p \frac1\pi \frac{\eta}{(\omega-\omega_p)^2 + \eta^2}
\label{eq:lorentzianpole}
\intertext{with $0 < \eta \ll 1$.
As the pole positions $\{\omega_p\}$ are distributed on a logarithmic scale in NRG, it is more appropriate to broaden the poles instead using a logarithmic distribution, as~\cite{bullaFG}}
w_p \delta(\omega-\omega_p) &\mapsto w_p \frac{\e^{-b^2/4}}{b \omega_p \sqrt{\pi}} \e^{-\left(\frac{\ln(\omega/\omega_p)}{b}\right)^2} \,.
\tag{\ref*{eq:lorentzianpole}$^{\prime}$}
\end{align}
where $b$ is a broadening parameter, with values typically in the range $0.3 \leq b \leq 0.6$.
This kernel then gives the appropriate broadening of the spectral poles to produce an approximate continuous spectrum.
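A minimal implementation of the two broadening kernels might look as follows; the pole positions, weights, grids, and the values of $b$ and $\eta$ are illustrative. Both kernels are normalized, so the integrated broadened spectrum reproduces the pole weight $w_p$.

```python
import numpy as np

def broaden_lorentz(poles, weights, omega, eta=0.01):
    """Lorentzian broadening, suited to linearly spaced poles."""
    A = np.zeros_like(omega)
    for wp, w in zip(poles, weights):
        A += w * (eta / np.pi) / ((omega - wp) ** 2 + eta ** 2)
    return A

def broaden_log(poles, weights, omega, b=0.4):
    """Log-Gaussian broadening of positive-frequency poles (NRG kernel above)."""
    A = np.zeros_like(omega)
    for wp, w in zip(poles, weights):
        A += (w * np.exp(-b ** 2 / 4) / (b * wp * np.sqrt(np.pi))
              * np.exp(-(np.log(omega / wp) / b) ** 2))
    return A

# Integrated weight check: both kernels preserve the pole weight.
omega_log = np.logspace(-5, 1, 6000)
A_log = broaden_log([0.1], [0.7], omega_log)
I_log = np.sum(0.5 * (A_log[1:] + A_log[:-1]) * np.diff(omega_log))

omega_lin = np.linspace(-50.0, 50.0, 100001)
A_lor = broaden_lorentz([0.1], [1.0], omega_lin)
I_lor = np.sum(0.5 * (A_lor[1:] + A_lor[:-1]) * np.diff(omega_lin))
```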
A Hilbert transform is applied to the continuous imaginary parts to compute the corresponding real part such that the full function satisfies the Kramers-Kronig relations\index{Kramers-Kronig relations}.
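The Kramers-Kronig step can be sketched as a principal-value sum on a uniform grid. Here the method is checked against a Lorentzian spectral function, for which $G(\omega) = 1/(\omega + \i\eta)$ and hence $\Re G(\omega) = \omega/(\omega^2+\eta^2)$ are known analytically; the grid and $\eta$ are illustrative.

```python
import numpy as np

# Principal-value Kramers-Kronig transform: Re G(w) = PV int dw' A(w')/(w - w').
omega = np.linspace(-20.0, 20.0, 2001)
eta = 0.5
A = (eta / np.pi) / (omega ** 2 + eta ** 2)   # A(w) for G(w) = 1/(w + i*eta)
dw = omega[1] - omega[0]

reG = np.zeros_like(omega)
for i, w in enumerate(omega):
    diff = w - omega
    diff[i] = np.inf                          # omit the singular point (PV prescription)
    reG[i] = np.sum(A / diff) * dw
```

Omitting the singular grid point realizes the principal value on a uniform grid, since the leading $1/(\omega-\omega')$ contributions cancel pairwise around it.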
The self-energy is then calculated according to Eq.~\eqref{eq:FoverG}.
The self-energy may also be calculated directly from the Dyson equation
\begin{equation}
\tensor*{\Sigma}{_{\sigma}}(z) = \tensor*{G}{^{(0)}_{\sigma}}(z)^{-1} - \tensor*{G}{_{\sigma}}(z)^{-1}
\label{eq:SEfromDyson}
\end{equation}
where
\begin{equation}
\tensor*{G}{^{(0)}_{\sigma}}(z) = \frac{1}{z - \varepsilon_f - \Delta_{\sigma}(z)}
\end{equation}
is the free impurity Green function and $\Delta_\sigma(z)$ is the hybridization function as before. While this equation is exact analytically, it is prone to numerical errors. Conversely, in \eqref{eq:FoverG}, since $F_{\sigma}(z)$ and $G_{\sigma}(z)$ are divided, only relative errors propagate to the solution for $\Sigma_{\sigma}(z)$. This means that \eqref{eq:FoverG} produces a numerically more stable result than the inverted Dyson expression \eqref{eq:SEfromDyson}.
\label{sec:seproblems}
Numerical errors can produce regions where $\Im\Sigma(\omega) > 0$ at low temperatures near the Fermi level $\omega=0$. In the context of the NRG-DMFT presented in this thesis, this error is corrected by a sign flip, \textit{i.e.} $\Im\Sigma(\omega) \mapsto -\Im\Sigma(\omega)$ wherever $\Im\Sigma(\omega) > 0$. The errors introduced by this correction do not generally affect the convergence of the DMFT solution, as the magnitude and frequency domain of the flipped self-energy are small compared to the self-energy overall. An exception arises when the self-energy is calculated for very weak interactions. In that case the flipped portion is of the same order of magnitude as the rest of the self-energy and extends over a large part of its frequency domain. This causes catastrophic errors to accumulate in the resulting Green functions calculated from NRG, and the solution cannot be taken as accurate.
Alternative resolutions of the positive-overshoot error include setting $\Im\Sigma(\omega) = 0$ in the overshoot region, or shifting the whole imaginary part of the self-energy such that $\Im\Sigma(\omega) < 0$ over its entire range. Numerical errors can, however, also ruin the calculation under these correction schemes. In some instances there are negative divergences of $\mathcal{O}(1)$ in the self-energy at low frequency. While these divergences may be comparable in magnitude to the self-energy itself, they are small compared to the hybridization function and occur only at exponentially low frequencies. Such errors therefore do not dramatically affect the Green function as computed from the Dyson equation in the cases encountered in this thesis. Performing the shift procedure, however, can dramatically increase the entire self-energy over the whole frequency range, which then does significantly affect the Green function solution.
The non-analytic errors in the self-energy arising from the $F/G$ prescription appear to be a systematic error in NRG. This work does not attempt to solve this issue, but identifies it as an area to be addressed in the NRG community. A recent proposal for a new self-energy estimator which does not suffer from the aforementioned issues can be found in~\cite{newnrgse}.
\subsection{Solution to the Anderson Model\label{sec:amsolution}}
The numerical renormalization group was originally devised to solve the Kondo model and the resistance minimum problem of dilute magnetic impurities in metals~\cite{wilson}.
The method was subsequently expanded upon to solve the full single impurity Anderson model~\cite{kww1,kww2}. The impurity spectral function takes the form of a three-peak structure, with the central peak at particle-hole symmetry attaining a maximum value of $1/\pi\Gamma$, where $\Gamma = -\frac1\pi\Im\Delta(0)$. The spectral function and self-energy of the impurity in the Anderson model as obtained from NRG are plotted in Fig.~\ref{fig:siamsolution}.
\begin{figure}[h]
\begin{subfigure}{0.49\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{siamG.pdf}};
\node at (3.125,2) {\footnotesize\subref*{fig:siamG}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:siamG}}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{siamS.pdf}};
\node at (3.125,2) {\footnotesize\subref*{fig:siamS}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:siamS}}
\end{subfigure}
\vspace{-\baselineskip}
\caption[NRG solution to the single impurity Anderson model]{NRG solution to the single impurity Anderson model parameterized by $V/D = 0.1$, $U/D = 0.3$, and $\epsilon/D = -0.15$. Note that the peak of the spectral function is pinned to $1/\pi\Gamma$ at particle-hole symmetry.\label{fig:siamsolution}}
\end{figure}
The Anderson model has the characteristics of a Fermi liquid, where in particular, the low energy behavior of the self-energy goes as~\cite{hewson,mattuck}
\begin{subequations}
\begin{align}
\Re\Sigma(\omega) &\sim \omega + \text{const.}
\\
\Im\Sigma(\omega) &\sim \omega^2 \,,
\end{align}
\label{eq:lowese}
\end{subequations}
where the constant term in the real part is $\frac{U}{2}$ for a system at particle-hole symmetry, but is in general unknown. A Fermi liquid also has the property~\cite{hewson,logangalpin}
\begin{equation}
\Im\int_{-\infty}^{0} \d\omega\, G(\omega) \frac{\partial \Sigma(\omega)}{\partial \omega} = 0 \,.
\end{equation}
These characteristics will become relevant in later chapters.
Another feature to make note of is the central peak of the spectral function. From the Dyson equation, the impurity Green function is
\begin{align}
G_{\sigma}(z)
&= \frac{1}{z - \varepsilon_f - \Sigma_\sigma(z)}
\\
&= G^0(z - \Sigma(z))
\end{align}
where $G^0(z)$ is the non-interacting Green function on the impurity with $U=0$. A renormalized energy level can be defined as $\tensor*{\varepsilon}{^*_f} = \tensor*{\varepsilon}{_f} + \Re\Sigma(0)$. At particle-hole symmetry $\tensor*{\varepsilon}{^*_f} = 0$. Evaluating the impurity spectral function at $\omega = 0$ and taking into account the low energy behavior of the self-energy \eqref{eq:lowese} leads to the result that
\begin{equation}
\mathcal{A}_\sigma(0) = -\frac1\pi \Im G^0(0) \,,
\label{eq:pinned}
\end{equation}
which means that the impurity spectral function at finite $U$ is fixed to the value of the non-interacting density of states at zero energy.
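This pinning can be verified numerically for a flat (wide-band) hybridization $\Delta(z) = -\i\Gamma$ together with a model Fermi-liquid self-energy; the linear and quadratic coefficients in $\Sigma(\omega)$ below are arbitrary illustrative values, chosen only to satisfy the low-energy form \eqref{eq:lowese}.

```python
import numpy as np

# Pinning check: A(0) with interactions equals the non-interacting A(0).
Gamma, U = 0.1, 0.3
eps_f = -U / 2                                # particle-hole symmetric level

def Sigma(w):                                 # model Fermi liquid: Re ~ U/2 + a*w, Im ~ -b*w^2
    return U / 2 + 0.4 * w - 1.5j * w ** 2

def G(w):                                     # interacting impurity Green function
    return 1.0 / (w - eps_f + 1j * Gamma - Sigma(w))

def G0(w):                                    # non-interacting, renormalized eps_f* = 0
    return 1.0 / (w + 1j * Gamma)

A0_int = -G(0.0).imag / np.pi
A0_free = -G0(0.0).imag / np.pi
```

At $\omega = 0$ the level shift $\Re\Sigma(0) = U/2$ exactly cancels $\varepsilon_f$ and $\Im\Sigma(0) = 0$, so $\mathcal{A}(0) = 1/\pi\Gamma$ regardless of $U$.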
\section{Dynamical Mean-Field Theory\label{sec:dmft}}
While a system with a single interacting site can essentially be solved exactly, as shown in the previous section, a fully interacting system in general cannot. The treatment of such systems inevitably involves making approximations or treating effective models.
One computational method of such an effective theory is
dynamical mean-field theory (DMFT).
The concept of DMFT is that it treats non-local physics not directly, as in \textit{e.g.} perturbation theory, but rather through auxiliary degrees of freedom which are self-consistently determined. Local interactions, however, are treated directly. This restriction to purely local interactions is the core of the DMFT approximation.
Ordinary mean-field theory models a system by treating its dynamics only locally, with the behavior of the rest of the system modeled by a static background, the mean-field. A classic example is the mean-field treatment of the Ising model of magnetic spins, where the mean-field represents the overall background magnetization of the entire system: correlations are computed not between independent dynamical spins, but between a single dynamical spin and the static mean-field average.
Dynamical mean-field theory on the other hand employs a similar concept, but allows for the implementation of a non-static mean-field, \textit{i.e.} a mean-field which in principle has energy and momentum dependence (although momentum dependence is suppressed in DMFT, as explained below).
A central concept which facilitates DMFT is that in infinite spatial dimensions the self energy becomes a purely local quantity~\cite{metznervollhardt,infinitehubbard,muellerhartmann}. This allows interactions to be treated only locally and non-local correlations can be ignored.
This can be seen from the diagrammatic perturbative expansion of the self-energy. Here considered are the self-energy
skeleton diagrams with internal lines consisting of the full interacting Green function~\cite{jarrellhubbard,schweitzerczycholl}.
Diagrammatically, the self-energy can be expressed as~\cite{mattuck,dmft}
\begin{equation}
\boldsymbol{\Sigma}_{\sigma}(z) =
\;
\begin{tikzpicture}[baseline={(current bounding box.center)},decoration={markings, mark= at position 0.5 with {\arrow[xshift=3pt]{Stealth[length=6pt,width=4pt]}}}, scale=0.75]
\coordinate (i0) at (0,0) node[below] {$i$};
\coordinate (i) at (0,1);
\draw[postaction={decorate}] (i) arc (90:450:0.5cm and -0.5cm) node[midway,below=0.125cm] {$-\sigma$};
\draw[dashed] (i)--(i0);
\draw[-] (-0.5,0)--(0.5,0);
\end{tikzpicture}
\;+\;
\begin{tikzpicture}[baseline={(current bounding box.center)}, decoration={markings, mark= at position 0.5 with {\arrow[xshift=3pt]{Stealth[length=6pt,width=4pt]}}}, scale=0.75]
\def1.5{1.5}
\coordinate (i1) at (-1,1.5);
\draw[postaction={decorate}] (i1) arc (30:150:-2cm and 1cm) coordinate (j1) node[midway,below=0.125cm] {$-\sigma\phantom{-}$};
\draw[postaction={decorate}] (j1) arc (30:150:2cm and -1cm);
\draw[dashed] (i1)--($(i1)+(0,-1.5)$) node[below] {$i$};
\draw[dashed] (j1)--($(j1)+(0,-1.5)$) node[below] {$j$};
\draw[postaction={decorate}] ($(i1)+(-0.5,-1.5)$)--($(j1)+(0.5,-1.5)$) node[midway,below] {$\sigma$};
\end{tikzpicture}
\;+\;
\cdots
\label{eq:sediagrams}
\end{equation}
where only the first terms are shown.
The non-local Green functions describing propagation from site $i$ to a nearest-neighbor site $j$ scale as $1/\sqrt{d}$ under the rescaling $t \mapsto \widetilde{t}/\sqrt{d}$.
The multiplicity of internal lines in the non-local self-energy diagrams scales as $d$.
As shown in \eqref{eq:sediagrams}, the diagrams for the non-local contributions to the self-energy contain at least three such lines and therefore scale at least as $(\widetilde{t}/\sqrt{d})^3$.
Including the multiplicity factor $d$, the overall scaling of the non-local self-energy contributions is then at least $1/\sqrt{d}$, which vanishes in the $d\to\infty$ limit.
The vanishing of these terms in the $d\to\infty$ limit thereby eliminates all non-local contributions to the self-energy.
The locality of the self-energy in the limit of infinite dimensions holds in both real as well as momentum space:
\begin{equation}
\begin{aligned}
\Sigma_{i,j}(z)
&= \Sigma(z,r_j - r_i)
\\
\Sigma(z) \delta_{i,j}
&= \frac{1}{(2\pi)^d} \int \d^d k\, \Sigma(z,k) \e^{\i k \cdot (r_j - r_i)}
\\
\Sigma(z) \frac{1}{(2\pi)^d} \int \d^d k\, \e^{\i k \cdot (r_j - r_i)}
&= \frac{1}{(2\pi)^d} \int \d^d k\, \Sigma(z,k) \e^{\i k \cdot (r_j - r_i)}
\\
\Sigma(z) &= \Sigma(z,k) \,.
\end{aligned}
\end{equation}
The locality of the self-energy implies that the Hubbard model in infinite dimensions can be mapped onto an Anderson Impurity model \eqref{eq:siam}~\cite{infinitehubbard,jarrellhubbard}.
From the Green function equations of motion, the Hubbard model self-energy can be found to be
\begin{equation}
\tensor*{\Sigma}{_{ij,\sigma}}(z) = U \frac{\Green{\op{c}{i,\sigma}}{\op{n}{j,-\sigma} \opd{c}{j,\sigma}}_z}{\Green{\op{c}{i,\sigma}}{\opd{c}{j,\sigma}}_z} \,.
\end{equation}
Such terms with $i\neq j$ arise when calculating the non-local $\Greenline{\op{c}{i,\sigma}}{\opd{c}{j,\sigma}}_z$ Green function.
Computing the Green function $\Greenline{\op{c}{i,\sigma}}{\op{n}{j,-\sigma} \opd{c}{j,\sigma}}_z$ involves evaluating the commutator $[ \hat{H}_{I} , \op{n}{j,-\sigma} \opd{c}{j,\sigma} ] = U \sum_{m} [ \op{n}{m,\uparrow} \op{n}{m,\downarrow} , \op{n}{j,-\sigma} \opd{c}{j,\sigma} ] \propto \sum_{m} \delta_{mj}$ where $\sum_{m}$ sums over all sites with the interaction term present. For the Anderson impurity model this trivially restricts the self-energy to be local as only the impurity site features the interaction term. In the Hubbard model the sum $\sum_{m}$ extends over all sites in the system, but since $\Sigma_{ij,\sigma}(z) = \Sigma_{\sigma}(z) \delta_{ij}$ in infinite dimensions, the Green function for the Hubbard model coincides with the Green function for the single impurity Anderson model.
In order to obtain finite and non-trivial models in the infinite dimensional limit, it is necessary to rescale the parameters of the Hamiltonian such that every term of the Hamiltonian remains extensive.
Recall the form of the dispersion relation for a $d$ dimensional square lattice Eq.~\eqref{eq:squaredispersion}. The $d$-dimensional sum over $\cos(k_i)$ essentially amounts to a sum over random numbers distributed in $[-1,1]$. By the central limit theorem in the $d\to\infty$ limit, this leads to a density of states with Gau{\ss}ian distribution form
\begin{equation}
\mathcal{A}(\omega) = \frac{1}{2 t \sqrt{\pi d}} \e^{-\left(\frac{\omega - \varepsilon}{2 t \sqrt{d}}\right)^2} \,.
\end{equation}
In order for this quantity to be finite, it is necessary to scale the kinetic hopping parameter $t$ as $t\mapsto\widetilde{t}/\sqrt{d}$. In the Hubbard model \eqref{eq:hubbard}, the interaction term acts only locally, which means it does not scale with $d$. The scaling $t\mapsto\widetilde{t}/\sqrt{d}$ ensures that the relative energy scales of the kinetic and interaction terms remain comparable, even in the infinite dimensional limit, $\displaystyle \lim_{d\to\infty}\mathcal{O}(U/t) \sim 1$.
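The central-limit argument is easy to verify by Monte Carlo sampling of the hypercubic dispersion $\varepsilon_k = -2t\sum_i \cos k_i$; the dimension and sample count below are illustrative, and $t = 1/\sqrt{2d}$ is chosen so that the variance $2t^2 d$ equals one.

```python
import numpy as np

# Monte Carlo check that the hypercubic dispersion becomes Gaussian as d grows.
rng = np.random.default_rng(0)
d, n_samples = 100, 20_000
t = 1.0 / np.sqrt(2 * d)                    # scaling t -> t~/sqrt(d) keeps the DOS finite
k = rng.uniform(-np.pi, np.pi, size=(n_samples, d))
eps_k = -2 * t * np.cos(k).sum(axis=1)      # dispersion samples; variance 2*t^2*d = 1

frac_1sigma = np.mean(np.abs(eps_k) < 1.0)  # ~0.683 for a unit Gaussian
```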
In the continuum, the density of states for free fermions in $d$ dimensions is proportional to $\varepsilon^{d/2 - 1}$. This expression cannot be scaled in such a way that it remains finite in the limit $d\to\infty$. It is therefore necessary to study the behavior of such systems regularized by a lattice~\cite{muellerhartmanninfinite}.
As mentioned in \S\ref{infinitelimitbethe}, the Bethe lattice is a lattice with a simple well defined infinite dimensional limit. It is therefore an appropriate lattice for building models to be treated by DMFT, and is the lattice of choice used throughout this work.
Other lattices which possess a well-defined limit to infinite coordination number are the $2d$ hexagonal lattice and $3d$ diamond lattice \cite{santoro}.
The coordination number of the Bethe lattice is not directly related to any spatial dimension. To consider the relation of the infinite coordination number to real-space dimension, consider the (hyper)cubic lattice whose unit lattice vectors span the real-space volume. The coordination number $\kappa$ for the hypercubic lattice is directly related to the spatial dimension $d$ by $\kappa = 2d$. In the infinite coordination number limit, the hopping amplitude then scales as
\begin{subequations}
\begin{equation}
t \to \frac{t}{\sqrt{\kappa}}
\end{equation}
or
\begin{equation}
t \to \frac{t}{\sqrt{2d}}
\end{equation}
\end{subequations}
which then yields the equivalence between the infinite coordination number limit and the infinite dimensional limit.
The infinite dimensional limit of DMFT represents a spatial mean-field approximation that may appear far removed from the $d=2$ or $d=3$ dimensionality of real systems. However, for lattice models the coordination number may be much higher than the real-space dimensionality: in $d=3$ the face-centered cubic lattice has coordination number $\kappa=12$, which is already relatively ``large''.
It is nevertheless necessary for non-local contributions to be taken into account for precise comparison with real systems. Some methods for incorporating non-local contributions into DMFT are reviewed in \cite{nonlocaldmft}.
One example is that of cluster-DMFT~\cite{clusterdmft}, where instead of a single site impurity, the impurity model is taken to be a finite size cluster of interacting sites.
Another example is that of coupling DMFT with the functional renormalization group (FRG), producing the DMF$^2$RG theory~\cite{dmf2rg}. This scheme solves a model using DMFT as usual in the infinite dimensional limit, but then uses this solution to initialize an FRG calculation for the ``true'' finite dimensional system. The spatial dependence is incorporated through the FRG flow.
There also exist a variety of schemes incorporating DMFT into electronic structure calculations for real materials~\cite{electronicstructuredmft}.
A common method of such calculations involve coupling DMFT with density functional theory (DFT)~\cite{weberdmftdft}.
The DMFT algorithm is initiated by choosing an ansatz for the hybridization from impurity to the bath lattice,
\begin{equation}
\Delta(z) = \sum_k \frac{\left\lvert V_k \right\rvert^2}{z - \varepsilon_k} \,.
\label{eq:initialhyb}
\end{equation}
This hybridization $\Delta(z)$ then specifies a quantum impurity model for which the self-energy of the impurity $\Sigma_{\text{imp}}(z)$ may be calculated using an impurity solver. This self-energy can then be used to calculate the local lattice Green function\footnote{The hybrid notation $\phi[J_m;y_n)$ indicates that $\phi$ is a functional of functions $\{J_m\}$ and a function of variables $\{y_n\}$.}
\begin{equation}
G_{\text{latt}}[\Sigma_{\text{latt}};z) = \int \d\omega' \frac{\mathcal{A}_0(\omega')}{z - \varepsilon - \omega' - \Sigma_{\text{latt}}(z)}
\end{equation}
where $\mathcal{A}_0(\omega)$ is the non-interacting density of states for the system under consideration.
As an example, for the Hubbard model on an infinite dimensional Bethe lattice the lattice Green function takes the form
\begin{equation}
\begin{aligned}[b]
G_{\text{latt}}[\Sigma_{\text{latt}};z)
&= \cfrac{1}{z - \varepsilon - \Sigma_{\text{latt}}(z) - \cfrac{\kappa t^2}{z - \varepsilon - \Sigma_{\text{latt}}(z) - \cfrac{\kappa t^2}{\ddots}}}
\\
&= \cfrac{1}{z - \varepsilon - \Sigma_{\text{latt}}(z) - G_{\text{latt}}[\Sigma_{\text{latt}};z)} \,.
\end{aligned}
\end{equation}
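This continued fraction is simply the iteration of the fixed-point equation $G = [z - \varepsilon - \Sigma_{\text{latt}}(z) - \kappa t^2 G]^{-1}$. A short sketch with $\Sigma_{\text{latt}} = 0$ and $\kappa t^2 = 1$ (illustrative values) converges to the physical root of the equivalent quadratic, the one with $\Im G < 0$ in the upper half plane:

```python
import numpy as np

# Iterate the Bethe-lattice continued fraction G = 1/(z - eps - kt2*G).
kt2, eps = 1.0, 0.0
z = 0.3 + 0.05j                        # frequency just above the real axis

G = 0.0 - 0.1j                         # starting guess with Im G < 0
for _ in range(5000):
    G = 1.0 / (z - eps - kt2 * G)

# Physical root of the quadratic kt2*G^2 - (z - eps)*G + 1 = 0.
roots = np.roots([kt2, -(z - eps), 1.0])
G_root = roots[np.argmin(roots.imag)]  # pick the root with negative imaginary part
```

The iteration is a Möbius map whose attracting fixed point is the physical root, so successive truncations of the continued fraction converge to the retarded Green function.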
At this stage of the DMFT cycle, it is necessary to compute the local self-energy. This can be accomplished with a number of techniques, such as iterated perturbation theory (IPT)~\cite{infinitehubbard}, exact diagonalization~\cite{eddmft,webered}, continuous-time quantum Monte Carlo (CT-QMC)~\cite{ctqmc,weberctqmcdmft}, the density matrix renormalization group (DMRG)~\cite{dmrgdmft}, or the numerical renormalization group (NRG)~\cite{bullahubbard,nrg}.
The lattice Green function is then used to self-consistently generate a new hybridization function, restarting the DMFT cycle.
The local lattice self-energy is taken to be the impurity self-energy of the impurity model,
\begin{equation}
\Sigma_{\text{latt}}(z) \,\hat{=}\, \Sigma_{\text{imp}}(z) \,,
\end{equation}
such that the lattice Green function is a functional of the impurity self-energy
\begin{equation}
G_{\text{latt}}[\Sigma;z) \,\hat{=}\, G_{\text{latt}}[\Sigma_{\text{imp}};z) \,.
\end{equation}
This then produces a result for the full interacting Green function of the lattice.
This Green function is then used to construct a new hybridization function $\Delta'(z)$ which defines a new quantum impurity model for the next cycle in the DMFT algorithm by
\begin{equation}
G_{\text{latt}}[\Sigma_{\text{imp}};z) \,\hat{=}\, G_{\text{imp}}[\Delta',\Sigma_{\text{imp}};z)
\end{equation}
where $G^{-1}_{\text{imp}}[\Delta,\Sigma_{\text{imp}};z) = {z - \varepsilon - \Delta(z) - \Sigma_{\text{imp}}(z)}$. This reinitializes the DMFT calculation starting from \eqref{eq:initialhyb} and the calculation proceeds again to calculate a new lattice Green function.
The cycle is run until the Green function solution reaches convergence, \textit{i.e.} when the $n^{\text{th}}$ cycle's hybridization function $\Delta^{(n)}(\omega)$ can be said to satisfy $\lvert \Delta^{(n)}(\omega) - \Delta^{(n-1)}(\omega) \rvert < \delta$, $\forall \omega$ with $\delta \ll 1$.
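The full cycle can be summarized in a short sketch for the Bethe lattice, where the self-consistency update is simply $\Delta(z) = t^2 G_{\text{latt}}(z)$. The impurity solver below is a trivial $\Sigma = 0$ stub standing in for NRG, and the grid, broadening, mixing, and tolerance are illustrative choices.

```python
import numpy as np

# Skeleton of the DMFT self-consistency cycle for the Bethe lattice (eps = 0).
t, eta, mix, tol = 1.0, 0.1, 0.5, 1e-10
omega = np.linspace(-4.0, 4.0, 1601)
z = omega + 1j * eta

def solve_impurity(delta):
    """Placeholder impurity solver: returns Sigma(z) = 0 (stands in for NRG)."""
    return np.zeros_like(delta)

delta = -0.3j * np.ones_like(z)           # initial hybridization ansatz
for it in range(2000):
    sigma = solve_impurity(delta)
    G = 1.0 / (z - delta - sigma)         # local lattice / impurity Green function
    delta_new = t ** 2 * G                # Bethe-lattice self-consistency update
    if np.max(np.abs(delta_new - delta)) < tol:
        break
    delta = (1 - mix) * delta + mix * delta_new   # linear mixing for stability

A = -G.imag / np.pi
weight = np.sum(0.5 * (A[1:] + A[:-1]) * np.diff(omega))
```

With the $\Sigma=0$ stub the converged spectral function is the semi-elliptic Bethe-lattice density of states (broadened by $\eta$); replacing the stub by a genuine impurity solver yields the interacting solution.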
The DMFT result is exact in the limit of infinite dimensions, although it can also be applied to finite dimensional cases. In the case of finite dimensions, the DMFT self-consistency condition is an approximation (as its name suggests, a mean-field approximation). A physical interpretation of the DMFT approximation may be obtained from analyzing the Luttinger-Ward functional $\Phi[G]$~\cite{luttingerward,potthofffunctional}. The physical self-energy can be obtained as a functional derivative of the Luttinger-Ward functional
\begin{equation}
\Phi[G] = \Omega[G^{-1}+\Sigma[G]] - \ln G + \Sigma[G] G
\end{equation}
as
\begin{equation}
\Sigma = \frac{\delta \Phi[G]}{\delta G} \,.
\end{equation}
Here $\Omega$ is the grand potential functional, $\Omega = \Omega[G]$, whose stationarity condition $\frac{\delta \Omega}{\delta G} = 0$ is satisfied exactly by the physical $G$.
DMFT approximates the functional $\Phi[G]$ as the sum over local skeleton and two-particle irreducible (2PI) diagrams, as opposed to all diagrams~\cite{dmft,potthofffunctional}. The skeleton self-energy diagrams are those which contain no self-energy insertions on their internal lines, which consist of fully dressed Green functions. This provides some formal justification for applying DMFT to finite-dimensional systems. In this context the purely local self-energy calculated from DMFT is taken as an approximation.
\subsection{Solution to the Hubbard Model\label{sec:hubbardsolution}}
The previous section introduced a set of powerful numerical methods which can be used to solve systems which are otherwise intractable with analytical tools or less sophisticated computational methods. DMFT coupled with NRG as the impurity solver is able to compute the exact solution to the Hubbard model in the limit of infinite dimensions, with spectral functions evaluated directly on the real axis, down to $T=0$~\cite{bullahubbard}.
An example of a system which can easily be solved by DMFT but is intractable by other means is the Hubbard model \eqref{eq:hubbard}.
The DMFT solution to the Hubbard model on the Bethe lattice in infinite dimensions over a range of $U$ is shown in Fig.~\ref{fig:hubbardsolution}.
Without interactions, $U=0$, the spectral function takes on a semi-elliptic form. As interactions are adiabatically increased, the spectrum begins to broaden and develops a characteristic three-peak structure, with the central peak of the spectral function pinned to $\frac1\pi$ following the discussion in \S\ref{sec:amsolution}. The self-energy takes the functional form of~\cite{analyticse}
\begin{equation}
\Sigma(z) = z - \varepsilon - \frac{1}{G(z)}
\end{equation}
which implies an inverse relationship between the spectral function and the self-energy. As the spectral function develops a three peak structure, the self-energy therefore takes on a two peak form. This is well illustrated in the $U/t=4$ panel of Fig.~\ref{fig:hubbardsolution}. The locations of the peaks are at $\omega \sim \pm \sqrt{Z}$ where $Z$ is the quasiparticle weight\index{quasiparticle weight} $Z = \left( 1 - \left. \partial \Re\Sigma / \partial \omega \right\rvert_{\omega=0} \right)^{-1}$.
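In practice the quasiparticle weight $Z$ is extracted from the numerical slope of $\Re\Sigma$ at the Fermi level. A sketch with a model Fermi-liquid self-energy of slope $-\alpha$ (the value of $\alpha$ and the grid are illustrative; in an actual calculation $\Sigma$ comes from the DMFT solution):

```python
import numpy as np

# Quasiparticle weight Z = 1/(1 - dReSigma/dw |_{w=0}) by central finite difference.
alpha = 3.0
omega = np.linspace(-0.1, 0.1, 2001)
sigma = -alpha * omega - 2.0j * omega ** 2    # model Fermi liquid: Re ~ -alpha*w, Im ~ -w^2

i0 = len(omega) // 2                          # index of omega = 0
dw = omega[1] - omega[0]
dReS = (sigma.real[i0 + 1] - sigma.real[i0 - 1]) / (2 * dw)
Z = 1.0 / (1.0 - dReS)
```

For this model the slope is exactly $-\alpha$, so $Z = 1/(1+\alpha) = 0.25$.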
A metal-to-insulator phase transition occurs in the vicinity of $U/t \sim 5.9$ as the interaction strength is adiabatically increased from $U=0$. At this point, the density of states at the Fermi level collapses leaving a hard gap. This transition in the spectral function is accompanied by the appearance of a pole at zero energy in the self-energy, which is referred to as the `Mott pole'. The self-energy of the Mott insulating phase can be written as
\begin{equation}
\Sigma(z) = \frac{\alpha}{z} + \tilde{\Sigma}(z)
\end{equation}
where $\alpha$ is the weight of the Mott pole and $\tilde{\Sigma}(z)$ represents the portion of the self-energy not including the pole. The pole weight is determined by~\cite{bullahubbard}
\begin{equation}
\frac{1}{\alpha} = \int \d\omega \frac{\mathcal{A}(\omega)}{\omega^2} \,.
\end{equation}
The appearance of the zero energy Mott pole can be interpreted qualitatively as the result of the merging of the two peaks of the self-energy in the metallic phase. As mentioned above the two peaks are located at $\omega \sim \pm \sqrt{Z}$. The quasiparticle weight at the Fermi level $Z$ decreases with increasing $U$, and vanishes at the metal-insulator phase transition.
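The pole-weight formula can be checked numerically with a model insulating spectrum of two narrow bands at $\pm U/2$; the Gaussian widths below are illustrative. In the atomic limit of $\delta$-function bands the formula gives $1/\alpha = 2 \cdot \frac{1}{2} (U/2)^{-2}$, \textit{i.e.} $\alpha = U^2/4$.

```python
import numpy as np

# Numerical check of 1/alpha = int dw A(w)/w^2 for a model Mott-insulating spectrum.
U, width = 4.0, 0.05
omega = np.linspace(-6.0, 6.0, 120001)
dw = omega[1] - omega[0]

def gauss(w, w0, s):
    return np.exp(-((w - w0) / s) ** 2 / 2) / (s * np.sqrt(2 * np.pi))

# Two normalized bands of weight 1/2 each, centered at +/- U/2.
A = 0.5 * gauss(omega, U / 2, width) + 0.5 * gauss(omega, -U / 2, width)
norm = np.sum(A) * dw

mask = np.abs(omega) > 1e-12                  # exclude w = 0 from the 1/w^2 sum
inv_alpha = np.sum(A[mask] / omega[mask] ** 2) * dw
alpha = 1.0 / inv_alpha
```

For narrow bands the result approaches the atomic-limit value $U^2/4 = 4$, with a small downward correction of order $(2\,\mathrm{width}/U)^2$ from the finite band width.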
\begin{figure}[htp!]
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU0G.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU0S.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU1G.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU1S.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU2G.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU2S.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU3G.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU3S.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU4G.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU4S.pdf}
\end{subfigure}
\caption[DMFT solution to the Hubbard model]{DMFT solution to the Hubbard model on the Bethe lattice with bandwidth $D/t=2$.\label{fig:hubbardsolution}}
\end{figure}
\begin{figure}[htp!]\ContinuedFloat
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU5G.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU5S.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU5_3G.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU5_3S.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU5_8G.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU5_8S.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU6G.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU6S.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU8G.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HU8S.pdf}
\end{subfigure}
\caption[DMFT solution to the Hubbard model on the Bethe lattice]{DMFT solution to the Hubbard model on the Bethe lattice with bandwidth $D/t=2$. Note the collapse of the density of states at the Fermi level and the simultaneous appearance of a pole at zero energy in the self-energy at $U/t \sim 5.9$, indicative of the Mott metal-insulator transition.}
\end{figure}
The large $U$ insulating regime where $U \gg t$ can be considered as the many-body extension of the Hubbard atom case analyzed in \S\ref{sec:hubbardatomgf}. Rather than two poles, the spectral function features two bands centered around $\pm\frac{U}{2}$ whose broadened width is due to the many-body interactions generated by weak intersite tunneling $t$.
In addition to the metal-to-insulator phase transition, there also exists an insulator-to-metal phase transition which occurs at a different $U$ than the metal-to-insulator transition. This critical interaction strength is denoted $U_{c1}$ with the metal-to-insulator critical interaction strength termed $U_{c2}$ as $U_{c1} < U_{c2}$. The DMFT solution of the Hubbard model on the Bethe lattice in infinite dimensions reveals that $U_{c1}/t \approx 4.6$ and $U_{c2}/t \approx 5.9$. The region where $U_{c1} < U < U_{c2}$ is the hysteresis region shown in Fig.~\ref{fig:hubbardphasediagram}.
Within the hysteresis region, the system with interaction strength tuned down adiabatically from $U > U_{c2}$ can be described as having interaction strength $U_+$. Conversely, the system with interaction strength tuned up adiabatically from $U < U_{c1}$ can be described as having interaction strength $U_-$.
\begin{figure}[h]
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HUc2U5.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{HUc1U5.pdf}
\end{subfigure}
\caption[Coexistence hysteresis region of the Hubbard model]{Comparison between the spectral function of the Hubbard model at $U/t = 5$ with $D/t=2$ approaching from the metallic phase (left) and from the insulating phase (right). This demonstrates the hysteresis coexistence phase of the Hubbard model.\label{fig:hubbardcoexist}}
\end{figure}
Within this region, when the system is adiabatically tuned to this interaction strength from below ($U_-$), it is metallic; when adiabatically tuned from above ($U_+$), it is a Mott insulator. An example of the system in this coexistence region is depicted in Fig.~\ref{fig:hubbardcoexist}. As demonstrated by the sequence in Fig.~\ref{fig:hubbardsolution}, as the interaction strength is increased, the outer satellite bands move progressively further away from the Fermi level. Conversely, as the interaction strength is decreased, the bands converge towards the Fermi energy. The critical point $U_{c1}$ occurs when the two bands touch.
Although the DMFT solution is exact only in the infinite dimensional limit, it captures the physics of some real materials reasonably well. In particular, the phase diagram of $\mathrm{V}_2\mathrm{O}_3$ is well captured by the DMFT solution to the Hubbard model~\cite{dmft}.
\section{Topology}\label{sec:topology}
The use of topology in condensed matter physics has been steadily growing over the past few decades~\cite{mermin,avronseilersimon,volovik,horava,hk,fruchartcarpentier,shortcourse,altlandsimons}.
Before turning to its manifestations in physics, presented here first will be a discussion of the mathematical form of topology which appears in the contexts of interest~\cite{nakahara,baezmuniain,eguchi,frankel,schutz}.
The primary element taken into account is whether a certain system is topologically trivial or non-trivial. This distinction can be illustrated by the difference between the torus $\mathbbm{T}^2$, whose tangent bundle is topologically trivial, and the sphere $\mathbbm{S}^2$, whose tangent bundle is topologically non-trivial (as evidenced by the hairy ball theorem).
The topology of topological materials generally refers to non-trivial topology of their momentum space. For example, a non-topological $2d$ system has a first Brillouin zone which takes the form of a 2-torus $\mathbbm{T}^2$.\footnote{The momenta in the Brillouin zone are $2\pi$ periodic in $k_x$ and $k_y$, which leads to this space taking the form of a torus.} The momentum-space states then form a bundle over this torus.
The notion of what is meant by topology can be illustrated in the difference between the M\"obius band and the cylinder, as shown in Fig.~\ref{moebius}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.5,line width=1.25]
\coordinate (C) at (-3,0);
\coordinate (M) at (3,0);
\node[] at (0,-3) {$M = \mathbbm{T}^1$};
\node[] at (0,1) {$V$};
\node (Cmid) at ($(C)+(-4,1)$) {};
\draw (C) arc (180:0:-4cm and -1cm);
\draw[dashed] (C) arc (0:61:4cm and 1cm);
\draw[dashed] ($(C)+(-8,0)$) arc (0:61:-4cm and 1cm);
\draw (Cmid) arc (90:119:4cm and 1cm) coordinate (int);
\draw (Cmid) arc (90:119:-4cm and 1cm);
\draw (C)--($(C)+(0,1.75)$);
\draw ($(C)+(-8,0)$)--($(C)+(-8,1.75)$);
\draw[black] ($(C)+(0,1.75)$) arc (360:0:4cm and -1cm);
\draw ($(C)+(0,-3)$) arc (360:0:4cm and -1cm);
\coordinate (Mcent) at ($(M)+(4,0.5)$);
\node (Mmid) at ($(M)+(4,1)$) {};
\draw ($(M)+(8,0)$)--($(M)+(8,1.75)$);
\draw[dashed,black] (M) arc (0:60:-4cm and 1cm);
\draw[dashed] ($(M)+(8,0)$) arc (0:60:4cm and 1cm);
\draw[black] (Mmid) arc (90:120:4cm and 1cm);
\draw (Mmid) arc (90:120:-4cm and 1cm);
\draw[black] (M) arc (180:140:4cm and -1cm) coordinate (bl);
\draw ($(M)+(8,0)$) arc (0:40:4cm and -1cm) coordinate (br);
\draw ($(M)+(0,1.75)$) arc (0:90:-4cm and 1cm);
\draw[black] ($(M)+(8,1.75)$) arc (0:90:4cm and 1cm);
\draw ($(M)+(0,1.75)$) arc (180:140:4cm and -1cm) coordinate (tl);
\draw[black] ($(M)+(8,1.75)$) arc (0:40:4cm and -1cm) coordinate (tr);
\draw[black] plot[smooth] coordinates { (tr) ($(Mcent)+(1.25,0.1)$) ($(Mcent)+(-1.25,-1.175)$) (bl)};
\draw plot[smooth] coordinates { (tl) ($(Mcent)+(-1.25,0.1)$) ($(Mcent)+(1.25,-1.175)$) (br)};
\draw (M)--($(M)+(0,1.75)$);
\draw ($(M)+(0,-3)$) arc (360:0:-4cm and -1cm);
\end{tikzpicture}
\caption[Cylinder vs. M\"obius band]{Comparison between the topologies of the cylinder (left) and M\"obius band (right). Locally, both have the structure of $\mathbbm{T}^1\times{V}$, but their global topologies differ.\label{moebius}}
\end{figure}
The difference in topologies of the cylinder and M\"obius band may be quantified by analyzing them through the language of fiber bundles\index{fiber bundle}. A fiber bundle is a triplet $\{E,M,\pi\}$ consisting of a total space $E$, a base space $M$, and a projection map $\pi : E \to M$. The total space is comprised of fibers $E_p$, $E = \bigcup_p E_p$, defined as $E_p = \{\, q \in E \mid \pi(q) = p \,\}$ for each $p \in M$.
A fiber bundle is termed trivial if the total space takes the form of a direct product of the fiber with the base manifold $E = M \times V$.
In general, fiber bundles are only locally trivial, meaning that they are trivial only over a local coordinate neighborhood $U_{i} \subset M$. In the overlap region of neighborhoods $U_{\alpha} \cap U_{\beta}$ it is necessary to define a transition function $g_{\alpha\beta}$ which appropriately identifies the local trivializations over each neighborhood with one another.
The cylinder and M\"obius band can be described by a fiber bundle whose fibers are a 1-dimensional vector space $V$ modeled\footnote{Since $x \mapsto -x$ is not a symmetry of $\mathbbm{R}$ the fiber $V$ cannot be taken to be $\mathbbm{R}$ itself, but is instead said to be ``modeled'' on $\mathbbm{R}$~\cite{penrose}.} on the real line $\mathbbm{R}$.
An example of a fiber for the cylinder and M\"obius band is the line segment $V = [-1,1]$.
An important class of fiber bundle which frequently appears in physics is a $G$-bundle, where the transition functions are elements of a group $G$, $g_{\alpha\beta} \in G$.
For the M\"obius band $G$ can be taken to be the (multiplicative) group $\mathbbm{Z}_2$ which means that the transition functions take values $g 25 \{-1,1\}$.
A $G$-bundle whose fibers are $G$ itself is called a principal-$G$ bundle.
A M\"obius bundle where the fiber is taken to be the boundary of a finite M\"obius band (\textit{i.e.} a fiber consisting of the points $\{-1,1\}$) can be described as a principal-$\mathbbm{Z}_2$ bundle.
The distinction between the cylinder and the M\"obius band can be obtained quantitatively from the transition functions for a point $p \in U_{\alpha} \cap U_{\beta}$. On the cylinder, the transition functions map elements of a fiber over overlapping neighborhoods trivially. On the M\"obius band, in the presence of the twist, the transition functions perform the map
$U_{\alpha}\times[-1,1] \to U_{\beta}\times[1,-1]$, which indicates the non-trivial topology of the fiber bundle.
A fiber bundle whose fibers have the structure of a Hilbert space is termed a Hilbert bundle.
A fiber bundle which is important in condensed matter physics is the Bloch bundle, which is the direct sum of all momentum space Hilbert bundles belonging to bands below the Fermi level of a given system, $\displaystyle \bigoplus_{n<n_F} \mathcal{H}_n$. Topological band insulators can be identified mathematically from the appearance of non-trivial topology in their Bloch bundle. The total Bloch bundle, defined as the direct sum over all bands of the system, is always trivial~\cite{fruchartcarpentier}.
An important element in the construction of fiber bundles is the connection. A connection on a fiber bundle facilitates the comparison between points on different fibers, which is needed, for example, in the operation of differentiation on sections of a bundle. A section of a fiber bundle is a map $s : M \to E$ which assigns to each point $p \in M$ an element of its fiber, $s(p) \in \pi^{-1}(p)$.
A classical field in physics can generically be considered mathematically as a section of some vector bundle (a fiber bundle whose fibers are some vector space). Since fiber bundle sections are ubiquitous in physics, the need for a connection becomes apparent. A connection can be understood geometrically as follows: For a vector field $\vec{v} = v^j \hat{e}_j$ with components $v^j$ in basis $\hat{e}_j$ parameterized by the path $\vec{\lambda}$, the derivative of $\vec{v}$ along the component $\lambda^k$ is
\begin{align*}
\frac{\d \vec{v}}{\d \lambda^k} &= \frac{\d v^j}{\d \lambda^k} \hat{e}_j + v^j \frac{\d \hat{e}_j}{\d \lambda^k} \,,
\intertext{whose $i^{\text{th}}$ component is}
\left( \frac{\d \vec{v}}{\d \lambda^k} \right)^i &= \frac{\d v^i}{\d \lambda^k} + v^j \frac{\d \hat{e}_j}{\d \lambda^k} \cdot \hat{e}^i = \frac{\d v^i}{\d \lambda^k} + v^j \tensor*{\Gamma}{^i_{jk}}
\end{align*}
where $\tensor*{\Gamma}{^i_{jk}}$ is the infamous Levi-Civita connection (Christoffel symbol). This illustrates how the connection measures the change of a local coordinate basis along a parameterized path. The non-uniformity of the basis along the path is analogous to the fiber bundle picture, where there is no canonical isomorphism between different fibers.
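As a concrete example of these coefficients, consider the Euclidean plane in polar coordinates $(r,\theta)$, where the coordinate basis vectors expressed in Cartesian components are $\hat{e}_r = (\cos\theta,\,\sin\theta)$ and $\hat{e}_\theta = r\,(-\sin\theta,\,\cos\theta)$. Differentiating along $\theta$ gives $\partial_\theta \hat{e}_r = \tfrac{1}{r}\hat{e}_\theta$ and $\partial_\theta \hat{e}_\theta = -r\,\hat{e}_r$, so the non-vanishing connection coefficients are
\begin{equation*}
\tensor*{\Gamma}{^\theta_{r\theta}} = \tensor*{\Gamma}{^\theta_{\theta r}} = \frac{1}{r} \,, \qquad \tensor*{\Gamma}{^r_{\theta\theta}} = -r \,,
\end{equation*}
even though the underlying space is flat: the connection here records only the rotation of the basis vectors along the path, not any intrinsic curvature.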
On the Hilbert bundle, a natural connection is the Berry connection, $A$, which is a $U(1)$ connection 1-form, $A = -\i \left\langle \psi | \d \psi \right\rangle$~\cite{berry,fruchartcarpentier,nakahara}. The $U(1)$ arises as the group of transition functions on the Hilbert bundle.
The topology of the M\"obius band can also be determined through its winding number, the closed loop integral of the connection on the band over the circle
\begin{equation}
\gamma = \oint_{\Gamma} A ,
\end{equation}
which measures the number of twists in the band. In the context of physics, $\gamma$ can be related to the geometric phases of Aharonov and Bohm~\cite{abphase,baezmuniain}, where $A$ is the electromagnetic vector potential, or aforementioned Berry connection.
A measure of topology for a two dimensional manifold $M$ is the first Chern number
\begin{equation}
\mathrm{Ch}_1 = \frac{1}{2\pi} \iint_M F
\label{chern}
\end{equation}
which is the integral of the curvature 2-form, $F$, on $M$. This number takes on integer values due to the Gau\ss-Bonnet theorem~\cite{frankel,nakahara}.
For a quantum system, $F$ can be given by the Berry curvature, which is obtained from the Berry connection as
\begin{equation}
\begin{aligned}
F &= \d A
\\&= -\i \left\langle \d \psi | \extp | \d \psi \right\rangle \,,
\end{aligned}
\end{equation}
which follows from applying Stokes' Theorem to the expression for the winding number $\gamma$~\cite{fruchartcarpentier}.
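In practice, the integral \eqref{chern} may be evaluated numerically by discretizing the Brillouin zone into plaquettes of $U(1)$ link variables built from the valence band eigenvectors (the Fukui--Hatsugai construction). A minimal sketch in Python; the two-band Hamiltonian $\vec{d}(\vec{k})\cdot\boldsymbol{\sigma}$ used here is a hypothetical illustration, not a model discussed in this text:

```python
import numpy as np

def h2band(kx, ky, m):
    # Illustrative two-band Bloch Hamiltonian h(k) = d(k).sigma;
    # the choice of d(k) is hypothetical, not a model from the text.
    dx, dy, dz = np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)
    return np.array([[dz, dx - 1j*dy],
                     [dx + 1j*dy, -dz]])

def chern_number(m, N=24):
    # Fukui-Hatsugai lattice evaluation of Ch_1 = (1/2pi) \int F
    ks = 2*np.pi*np.arange(N)/N
    u = np.empty((N, N, 2), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, v = np.linalg.eigh(h2band(kx, ky, m))
            u[i, j] = v[:, 0]              # lower (valence) band eigenvector
    link = lambda a, b: np.vdot(a, b)/abs(np.vdot(a, b))   # U(1) link variable
    F = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            # Berry flux through one plaquette from the four link variables
            F += np.angle(link(u[i, j], u[ip, j]) * link(u[ip, j], u[ip, jp])
                          * link(u[ip, jp], u[i, jp]) * link(u[i, jp], u[i, j]))
    return F/(2*np.pi)
```

For this choice of $\vec{d}(\vec{k})$ the summed plaquette flux is quantized to an integer up to floating-point error, jumping between non-trivial and trivial values as the mass parameter $m$ is tuned through the gap-closing points.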
For the connection of a principal $G$-bundle, the curvature takes values in the adjoint representation of the algebra $\mathfrak{g}$ associated to $G$~\cite{frankel,baezmuniain,nakahara}. This discussion of fiber bundles and connections and curvature on bundles has been rather brief and informal; further details may be found in the standard references~\cite{frankel,nakahara,eguchi}, with Ref.~\cite{fruchartcarpentier} being of particular relevance to modern condensed matter physics.
In the case of topological insulators, the integral \eqref{chern} is taken over the first Brillouin zone of the valence band. The \index{Chern number}Chern number gives a measure of how topologically non-trivial the valence band Hilbert space is. A valence band with non-trivial topology results in the appearance of topological edge modes in the spectrum.
In condensed matter physics, one of the first recognized instances of a topological state was the quantized Hall conductance of the integer quantum Hall effect~\cite{tknn,niuthoulesswu,avronseilersimon}. In this situation, it was found that the quantized Hall conductance, as computed from the Kubo formula, was proportional to $\mathrm{Ch}_1$.
The winding number and the Chern number are examples of a
topological invariant, a quantity which captures the global topology of the system and is robust to perturbations in its parameters. For the M\"obius band, the characterizing twist can only be added or removed through a destructive process, such as cutting and re-gluing the band.
For the Chern number, it can be shown that the variation of the functional \eqref{chern} vanishes.
A characteristic of topological insulators is the ``bulk-boundary correspondence''\index{bulk-boundary correspondence} wherein features of the boundary can be determined by properties of the bulk and vice-versa. The presence of topological states on the boundary of a topological insulator can be determined \textit{e.g.} by the Chern number of the bulk. In the case of the integer quantum Hall effect, the Chern number is calculated with respect to the band structure of the (insulating) bulk of the system, but this in turn yields information about the quantized conductance on the boundary of the system.
It should be noted that the aspects of topology discussed here and applied to topological phases and topological states of quantum condensed matter in this thesis are categorically distinct from what are known as topologically \textit{ordered}\index{topological order} phases of matter in the literature~\cite{wen}. A distinguishing characteristic between the two is that topological states exhibit short range ground state entanglement whereas topologically ordered states exhibit long range ground state entanglement. In some cases, such as the integer quantum Hall effect, there is an overlap between the two, but in general they are separate and distinct concepts.
\section{Su-Schrieffer-Heeger Model\label{sec:sshmodel}}
An elementary condensed matter model which exhibits topological characteristics is the Su--Schrieffer--Heeger (SSH) model~\cite{ssh,solitons,shortcourse}\index{SSH model}.
The SSH model is a $1d$ non-interacting single-particle tight-binding model of spinless fermions which exhibits the characteristics of a topological insulator.
Originally devised to model \textit{trans}-polyacetylene (\textit{trans}-(CH)$_\text{x}$) as a prototypical system of conducting polymers and $\pi$-conjugated polymers in general, it is now known to be an example of a $1d$ topological insulator~\cite{shortcourse,altlandsimons,marino}.
In the tight-binding formalism the Hamiltonian of the SSH model is given by
\begin{equation}
\hat{H}_{\textsc{ssh}}
=
\sum_{j\in\mathbbm{Z}^+} \left[ \varepsilon\, \opd{c}{j} \op{c}{j} + \left( \tensor{t}{_A} \opd{c}{2j-1} \op{c}{2j} + \tensor{t}{_B} \opd{c}{2j} \op{c}{2j+1} + \textsc{h.}\text{c.} \right) \right]
\end{equation}
with on-site energies $\varepsilon$ and tunneling amplitudes $t_A$ and $t_B$.
These amplitudes can alternatively be expressed as $t_A = t_0 - \delta t$ and $t_B = t_0 + \delta t$ where $4 t_0$ is the full band width and $4 \delta t$ is the band gap. In the original model of polyacetylene the $\pm\delta t$ originates from the Peierls mechanism of electron-phonon coupling, but here $\delta t$ is taken to be a fundamental parameter of the theory.
The retarded Green function on the boundary is easily obtained from the equation of motion method and takes the continued fraction form
\begin{equation}
G_{1,1}(z) = \cfrac{1}{z - \varepsilon - \cfrac{t_A^2}{z - \varepsilon - t_B^2 G_{1,1}(z)}}
\end{equation}
from which the explicit expression follows as
\begin{equation}
G_{1,1}(z) = \frac{(z-\varepsilon)^2 + t_B^2 - t_A^2 + \sqrt{((z-\varepsilon)^2 - t_A^2 + t_B^2)^2 - 4 (z-\varepsilon)^2 t_B^2}}{2 (z-\varepsilon) t_B^2} \,.
\label{eq:sshgreen}
\end{equation}
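As a consistency check, this closed form can be verified numerically against the continued fraction it resums; a brief Python sketch (the branch of the square root is selected so that the Green function is retarded):

```python
import numpy as np

def g_boundary(z, tA, tB, eps=0.0):
    # Closed-form boundary Green function, with the retarded branch of the
    # square root chosen such that Im G < 0 in the upper half plane
    w = z - eps
    r = np.sqrt((w**2 - tA**2 + tB**2)**2 - 4*w**2*tB**2 + 0j)
    g = (w**2 + tB**2 - tA**2 + r) / (2*w*tB**2)
    if g.imag > 0:
        g = (w**2 + tB**2 - tA**2 - r) / (2*w*tB**2)
    return g
```

Both roots of the underlying quadratic satisfy the continued-fraction self-consistency; the physical (retarded) root is singled out by the sign of its imaginary part.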
In the following the local potential $\varepsilon$ is normalized to $\varepsilon = 0$ as the SSH spectrum is symmetric about $\omega = \varepsilon$.
The SSH model's simplicity has enabled it to be realized in a variety of experimental setups, such as in
photonic systems~\cite{zakobs},
mechanical acoustic lattices~\cite{acoustic,acousticrev},
and
graphene nanoribbons~\cite{gnano1,gnano2}.
The simplicity of the SSH model also makes it a good starting point for generalizations to examine topology in more sophisticated examples.
Generalizations of the SSH model previously considered in the literature include the addition of long-range kinetic terms~\cite{extendedrm,longrange2}
and non-Hermitian generalizations~\cite{nonherm}.
The SSH model has also previously been generalized into a tripartite form~\cite{trimer} as well as to a four dimensional unit cell of the SSH$_4$ model~\cite{ssh4}, which has been also realized in ultracold atom experiments~\cite{ssh4exp}.
Other work has involved generalizing the SSH model to quasi-$1d$ systems, such as SSH ladders~\cite{sshladder}.
Additional generalizations of the SSH model are contained in \S\ref{ch:genssh}.
A separate type of topological insulator is that of a crystalline topological insulator, which differs from the conventional formalism as the protecting symmetry group is the spatial symmetry group of the system's lattice.
The chiral symmetry of the SSH model can be interpreted as an inversion symmetry on its lattice, and therefore the SSH model can also be used as a prototypical crystalline topological insulator. Similarly, the SSH model can be used as a basis for generalization to higher-order topological insulators~\cite{hoti}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1,site/.style={circle,draw=black,line width=1pt,font=\normalsize,inner sep=1pt}]
\node[site] (1) at (0,0){$A$};
\node[site] (2) at ($(1)+(2,0)$){$B$};
\node[site] (3) at ($(2)+(2,0)$){$A$};
\node[site] (4) at ($(3)+(2,0)$){$B$};
\node[site] (5) at ($(4)+(2,0)$){$A$};
%
\draw[line width=1.75pt, dashed, line cap=round]($(5)+(1,0)$)--+(1,0);
\draw[line width=1.75pt,color=blue](1)--(2) node[midway,above,blue] {${t_A}$};
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](2)--(3) node[midway,above,red] {${t_B}$};
\draw[line width=1.75pt,color=blue](3)--(4);
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](4)--(5);
\draw[line width=2pt,color=blue](5)--+(1,0);
%
\draw[dashed,line width=1pt] ($(1)+(-0.5,-1)$)rectangle($(2)+(0.5,1)$) node[midway,below=12pt] {$1$};
\draw[dashed,line width=1pt] ($(3)+(-0.5,-1)$)rectangle($(4)+(0.5,1)$) node[midway,below=12pt] {$2$};
\draw[dashed,line width=1pt] ($(5)+(1,-1)$)--($(5)+(-0.5,-1)$)--($(5)+(-0.5,1)$)--($(5)+(1,1)$);
\end{tikzpicture}
\caption[Unit cell grouping of the SSH model]{Unit cell grouping of the SSH model. In this picture $t_A$ represents hopping within the unit cell and $t_B$ represents hopping between unit cells.\label{fig:sshunitcell}}
\end{figure}
It is also instructive to express the Hamiltonian in terms of its unit cells as shown in Fig.~\ref{fig:sshunitcell}. In this form, the Hamiltonian is recast as
\begin{equation}
\hat{H}_{\textsc{ssh}}
= t_A \sum_{\mu\in\mathbbm{Z}^+} \opd{\chi}{\mu} \boldsymbol{\sigma}_1 \op{\chi}{\mu}
+ t_B \sum_{\mu\in\mathbbm{Z}^+} \left( \opd{\chi}{\mu} \frac{\boldsymbol{\sigma}_1 - \i \boldsymbol{\sigma}_2}{2} \op{\chi}{\mu+1} + \opd{\chi}{\mu+1} \frac{\boldsymbol{\sigma}_1 + \i \boldsymbol{\sigma}_2}{2} \op{\chi}{\mu} \right)
\end{equation}
where $\mu$ labels the unit cell and $\frac12\boldsymbol{\sigma}_a$ are the fundamental representation of $SU(2)$.\footnote{Conventionally also known as the Pauli matrices: $\displaystyle \boldsymbol{\sigma}_1=\begin{pmatrix}0&1\\1&0\end{pmatrix}\,, \boldsymbol{\sigma}_2=\begin{pmatrix}0&-\i\\\i&0\end{pmatrix}\,, \boldsymbol{\sigma}_3=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$.} The operators acting on the unit cells are given by
\begin{equation}
\op{\chi}{\mu} = \begin{pmatrix} \op{c}{A} \\ \op{c}{B} \end{pmatrix}_{\mu}
\end{equation}
where the $\op{c}{A/B}$ act on $A$ or $B$ sites respectively.
With respect to the matrix fast recursion algorithm described in \S\ref{sec:calcmeth}, the elements of the Hamiltonian representing dynamics within and between unit cells are
\begin{align}
\boldsymbol{h}_0 &= \begin{pmatrix} 0 & t_A \\ t_A & 0 \end{pmatrix}
&
\boldsymbol{h}_1 &= \begin{pmatrix} 0 & 0 \\ t_B & 0 \end{pmatrix}
\end{align}
In matrix form, the Green function equation of motion for the SSH model yields
\begin{equation}
\boldsymbol{G}_{1,1}(z) = \left[ z \mathbbm{1} - t_A \boldsymbol{\sigma}_1 - t_B^2 \boldsymbol{\sigma}_- \boldsymbol{G}_{1,1}(z) \boldsymbol{\sigma}_+ \right]^{-1}
\label{eq:sshgreenmatrix}
\end{equation}
where
\begin{equation}
\boldsymbol{G}_{\mu,\nu}(z) = \begin{pmatrix} G_{\mu_A \nu_A}(z) & G_{\mu_B \nu_A}(z) \\ G_{\mu_A \nu_B}(z) & G_{\mu_B \nu_B}(z) \end{pmatrix}
\end{equation}
with $\mu$ and $\nu$ indexing unit cells and $A$ and $B$ labeling sites within the unit cell.
The boundary Green function \eqref{eq:sshgreen} can be recovered from Eq.~\eqref{eq:sshgreenmatrix} by taking the $AA$ component
\begin{equation}
\left[ \boldsymbol{G}_{1,1}(z) \right]_{A,A} = \Green{\op{\chi}{1_A}}{\opd{\chi}{1_A}}_z = \Green{\op{c}{1}}{\opd{c}{1}}_z \,.
\end{equation}
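The matrix self-consistency Eq.~\eqref{eq:sshgreenmatrix} can be solved by simple fixed-point iteration; a minimal Python sketch (converging for $z$ off the real axis), whose $AA$ component reproduces the scalar continued fraction:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_1
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # (sigma_1 + i sigma_2)/2
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # (sigma_1 - i sigma_2)/2

def G11_matrix(z, tA, tB, n_iter=200):
    # Fixed-point iteration of the 2x2 matrix self-consistency condition
    G = np.zeros((2, 2), dtype=complex)
    for _ in range(n_iter):
        G = np.linalg.inv(z*np.eye(2) - tA*s1 - tB**2 * sm @ G @ sp)
    return G
```

Each iteration corresponds to attaching one further unit cell to the chain, so the iteration converges geometrically for $\Im z > 0$.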
The unit cell representation of the Hamiltonian is also useful for obtaining the momentum space representation.
Performing a Fourier transformation into momentum space yields the Hamiltonian
\begin{equation}
\begin{aligned}
\hat{H}_{\textsc{ssh}}(k)
&= \sum_k \opd{\chi}{k} \begin{pmatrix} 0 & t_A + t_B e^{\i k} \\ t_A + t_B e^{-\i k} & 0 \end{pmatrix} \op{\chi}{k}
\\
&= \sum_k \opd{\chi}{k} \left[ \left( t_A + t_B \cos(k) \right) \boldsymbol{\sigma}_1 - t_B \sin(k) \boldsymbol{\sigma}_2 \strut \right] \op{\chi}{k}
\\
&\equiv \sum_k \opd{\chi}{k}~\boldsymbol{h}_{\textsc{ssh}}(k)~\op{\chi}{k}
\end{aligned}
\label{eq:sshhamkspace}
\end{equation}
The matrix $\boldsymbol{h}_{\textsc{ssh}}(k)$ is an element of the Gra{\ss}mannian $U(2)/(U(1)\times U(1))$, which puts the SSH model in class AIII in the classification scheme of topological insulators~\cite{tenfoldclassification}.
The group structure of $U(2)/(U(1)\times U(1))$ can be understood in more concrete terms as follows. As demonstrated by Eq.~\eqref{eq:sshhamkspace}, the Hamiltonian matrix $\boldsymbol{h}(k)$ is a $U(2)$ vector (this is clear from the interpretation of the $SU(2)$ matrices $\boldsymbol{\sigma}_a$ as basis unit vectors). The Hamiltonian is invariant under $U(1)\times U(1)$ gauge transformations, which are transformations that take the form of
\begin{equation}
\boldsymbol{U} = \begin{pmatrix} \e^{-\i n k} & 0 \\ 0 & 1 \end{pmatrix}
\end{equation}
The spectrum of the Hamiltonian is invariant under this gauge transformation,
\begin{equation}
\boldsymbol{h}_{\textsc{ssh}}(k) \mapsto \boldsymbol{U}\, \boldsymbol{h}_{\textsc{ssh}}(k)\, \boldsymbol{U}^{\dagger} \,.
\end{equation}
Models of topological insulators are typically classified based on their spatial dimensions and their symmetries according to the Cartan classification scheme of Riemannian symmetric spaces~\cite{cartan1,cartan2,tenfoldclassification}.
This classification is based on the behavior of a Hamiltonian under chiral, time reversal, and charge conjugation symmetries.
The chiral symmetry generator is
\begin{equation}
\Gamma = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
\end{equation}
which acts on the SSH Hamiltonian as
\begin{equation}
\Gamma \boldsymbol{h}(k) \Gamma^{-1} = -\boldsymbol{h}(k) \,.
\end{equation}
Time reversal symmetry is generated by complex conjugation $T = \mathcal{K}$. The SSH model transforms under this operation as
\begin{equation}
T \boldsymbol{h}(k) T^{-1} = \boldsymbol{h}(k)^* = \boldsymbol{h}(-k) \,.
\end{equation}
Charge conjugation symmetry can be constructed as a composition of time-reversal and chiral symmetries as $C = T\circ\Gamma$, with the SSH model transforming as
\begin{equation}
C \boldsymbol{h}(k) C^{-1} = -\boldsymbol{h}(-k) \,.
\end{equation}
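These three relations are simple to verify numerically for the Bloch matrix of Eq.~\eqref{eq:sshhamkspace}; a brief Python sketch:

```python
import numpy as np

def h_ssh(k, tA, tB):
    # Bloch matrix of the SSH model
    off = tA + tB*np.exp(1j*k)
    return np.array([[0, off], [np.conj(off), 0]])

Gamma = np.diag([1.0, -1.0])   # chiral symmetry generator
```

With these definitions, chiral symmetry anticommutes $\boldsymbol{h}(k)$ with itself, complex conjugation implements $k \mapsto -k$, and their composition realizes charge conjugation.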
The momentum space Hamiltonian Eq.~\eqref{eq:sshhamkspace} yields the Bloch energy bands of $\boldsymbol{h}_{\textsc{ssh}}(k)$ as
\begin{equation}
E_\pm(k) = \pm \sqrt{t_A^2 + t_B^2 + 2 t_A t_B \cos(k)} \,.
\end{equation}
Reparameterizing the hopping amplitudes as $t_A = t_0 - \delta t$ and $t_B = t_0 + \delta t$,
the expression under the radical can be rewritten as $2 \left[ t_0^2 + \delta t^2 + (t_0^2 - \delta t^2) \cos(k) \right]$. From here it is clear that the spectrum is gapped unless $\delta t = 0$, with the band crossing occurring at $k = \pi$.
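This gap structure is straightforward to check numerically; a brief Python sketch confirming that the gap equals $4|\delta t|$ (consistent with the parameterization above) and closes at $\delta t = 0$:

```python
import numpy as np

def bands(k, t0, dt):
    # Bloch bands E_(+/-)(k) with tA = t0 - dt, tB = t0 + dt
    tA, tB = t0 - dt, t0 + dt
    E = np.sqrt(tA**2 + tB**2 + 2*tA*tB*np.cos(k))
    return -E, E

k = np.linspace(-np.pi, np.pi, 2001)

def gap(dt, t0=1.0):
    # Band gap = twice the minimum of the upper band, attained at k = pi
    return 2*np.min(bands(k, t0, dt)[1])
```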
The topology of the SSH model can be measured by the winding of the Zak phase~\cite{berry,zak}\index{Zak phase} around the first Brillouin zone
\begin{equation}
\gamma = \frac\i\pi \oint_{\textsc{bz}} \langle \psi(k) \vert \d \psi(k) \rangle
\end{equation}
where
\begin{equation}
\lvert \psi(k) \rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ \e^{\i\varphi} \end{pmatrix}
\end{equation}
is the eigenvector of the valence band and $\varphi$ is defined from
\begin{equation}
\tan\varphi = \frac{t_B \sin(k)}{t_A + t_B \cos(k)} \,.
\end{equation}
For the SSH model initialized with a $t_A < t_B$ stagger, the Zak phase is quantized to unity.
The Zak phase of the SSH model has been observed experimentally in cold atom experiments~\cite{zakobs}. Since the Zak phase is technically not gauge invariant, this experiment measured the difference in Zak phases between the trivial and topological phases of the SSH model.
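Since $\varphi$ is the argument of the off-diagonal Bloch matrix element $t_A + t_B\,\e^{\i k}$, the quantization of the Zak phase can be checked numerically by counting the winding of this complex number around the origin as $k$ traverses the Brillouin zone; a minimal Python sketch:

```python
import numpy as np

def zak_winding(tA, tB, N=1000):
    # Winding number of h(k) = tA + tB e^{ik} around the origin over the BZ
    k = 2*np.pi*np.arange(N + 1)/N
    h = tA + tB*np.exp(1j*k)
    dphi = np.angle(h[1:]/h[:-1])    # principal-branch phase increments
    return np.sum(dphi)/(2*np.pi)
```

Geometrically, $t_A + t_B\,\e^{\i k}$ traces a circle of radius $t_B$ centered at $t_A$, which encloses the origin precisely when $t_B > t_A$.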
In position space the topology of the SSH model can be seen from its boundary Green function.
This Green function can be calculated numerically using the fast recursion methods described in~\S\ref{sec:calcmeth}.
The boundary site spectral function is obtained from the Green function
\begin{equation}
\mathcal{A}(\omega) = -\frac1\pi \Im\left[\Green{\op{c}{1}}{\opd{c}{1}}_{\omega + \i0^+}\right]
\end{equation}
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{sshbandstop.pdf}};
\node at (3.125,2.) {\footnotesize (a)};
\end{tikzpicture}
\phantomsubcaption{\label{fig:sshtop}}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{sshbandstriv.pdf}};
\node at (3.125,2.) {\footnotesize (b)};
\end{tikzpicture}
\phantomsubcaption{\label{fig:sshtriv}}
\end{subfigure}
\vspace{-\baselineskip}
\caption[Spectral function of the SSH model]{Spectral function on the boundary site of the SSH model in \subref{fig:sshtop} the topological phase and \subref{fig:sshtriv} the trivial phase. Note that both spectra are normalized to unity, with the difference in magnitude of the valence and conduction bands indicative of the pole weight.\label{fig:sshbands}}
\end{figure}
The primary notable feature of the SSH model spectral function in the topological phase is the presence of a pole at zero energy within a hard gap. This is the topological boundary state. In the trivial phase the spectral function is gapped with no midgap states. The bulk is similarly gapped in both phases.
The spectral weight of the topological zero energy pole is
\begin{equation}
w_p = \frac{t_B^2 - t_A^2}{t_B^2} \,.
\end{equation}
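This weight follows from the residue of Eq.~\eqref{eq:sshgreen} at $z = 0$ (with $\varepsilon = 0$), and can be checked numerically by evaluating $z\,G_{1,1}(z)$ at small $z$; a brief sketch:

```python
import numpy as np

def pole_weight(tA, tB, z=1e-6):
    # w_p = lim_{z -> 0} z G_{1,1}(z), evaluated at small but finite z
    num = z**2 + tB**2 - tA**2 + np.sqrt(
        (z**2 - tA**2 + tB**2)**2 - 4*z**2*tB**2 + 0j)
    return (num/(2*tB**2)).real      # z G_{1,1}(z) = num/(2 tB^2)
```

In the trivial phase $t_A > t_B$ the numerator vanishes as $z^2$, so the midgap pole carries no weight there.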
Analysis of the zero energy wavefunction reveals that this state is exponentially localized at the boundary. This will be demonstrated in the next section.
The zero energy topological state cannot be removed by local deformations and generally requires a drastic change to the system, such as closing and reopening the band gap or breaking the chiral symmetry. The appearance of a topological characteristic on the system's boundary and a topological characteristic from the bulk, in this case from the winding of the Zak phase, is a manifestation of the bulk-boundary correspondence\index{bulk-boundary correspondence}.
The spectrum of the bulk of the system is also worth examining.
The Green function on any bulk site can similarly be obtained from the equation of motion as
\begin{equation}
\begin{aligned}[b]
G_{\text{bulk}}(z)
&= \cfrac{1}{z - \cfrac{t_A^2}{z - \cfrac{t_B^2}{z - \cfrac{t_A^2}{z - \ddots}}} - \cfrac{t_B^2}{z - \cfrac{t_A^2}{z - \cfrac{t_B^2}{z - \ddots}}}}
\\
&= \cfrac{1}{z - t_A^2 G_{t_B,t_A}(z) - t_B^2 G_{t_A,t_B}(z)}
\label{eq:sshbulkgreen}
\end{aligned}
\end{equation}
where the two branches of the continued fraction have been resummed as
\begin{equation}
G_{t_1,t_2}(z) = \frac{z^2 + t_2^2 - t_1^2 \pm \sqrt{(z^2 - t_1^2 + t_2^2)^2 - 4 z^2 t_2^2}}{2 z t_2^2}
\label{eq:sshgreenfunctionform}
\end{equation}
The bulk SSH Green function is the same regardless of the stagger $t_A < t_B$ or $t_A > t_B$. This change affects the Green function on the boundary site, but for the bulk Green function the change can be seen as swapping the two branches of the continued fraction in \eqref{eq:sshbulkgreen}, which appear on equal footing. Swapping them therefore makes no change to the overall function. Physically, any given site in the bulk is coupled by a strong bond to one neighbor and by a weak bond to its other neighbor.
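This invariance under swapping $t_A \leftrightarrow t_B$ can be confirmed numerically by iterating the two branch continued fractions to their fixed points; a minimal Python sketch:

```python
import numpy as np

def g_bulk(z, tA, tB, n_iter=300):
    # The two semi-infinite branches of Eq. (sshbulkgreen), iterated to
    # convergence; requires Im z > 0 for the iteration to converge
    gA = gB = 0.0 + 0.0j
    for _ in range(n_iter):
        gA = 1/(z - tB**2/(z - tA**2*gA))   # branch entered via a tA bond
        gB = 1/(z - tA**2/(z - tB**2*gB))   # branch entered via a tB bond
    return 1/(z - tA**2*gA - tB**2*gB)
```

Swapping $t_A \leftrightarrow t_B$ simply exchanges the roles of the two branches, leaving the bulk Green function unchanged.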
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.75,site/.style={circle,draw=black,inner sep=4pt,line width=1pt}]
\node[site,fill=gray] (0) at (0,0){};
\node[below=3pt] (arrow) at (0) {$\uparrow$};
\node[below] at (arrow) {$G_{\text{bulk}}(z)$};
%
\node[site] (r1) at (2,0){};
\node[site] (r2) at (4,0){};
\node[site] (rend) at (6,0){};
%
\draw[line width=1.75pt, dashed, line cap=round]($(rend)+(1,0)$)--($(rend)+(2,0)$);
\draw[line width=1.75pt,color=blue](0)--(r1) node[midway,above,blue] {${t_A}$};
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](r1)--(r2) node[midway,above,red] {${t_B}$};
\draw[line width=1.75pt,color=blue](r2)--(rend);
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](rend)--($(rend)+(1,0)$);
%
\node[site] (l1) at (-2,0){};
\node[site] (l2) at (-4,0){};
\node[site] (lend) at (-6,0){};
%
\draw[line width=1.75pt, dashed, line cap=round]($(lend)+(-1,0)$)--($(lend)+(-2,0)$);
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](0)--(l1) node[midway,above,red] {${t_B}$};
\draw[line width=1.75pt,color=blue](l1)--(l2) node[midway,above,blue] {${t_A}$};
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](l2)--(lend);
\draw[line width=1.75pt,color=blue](lend)--($(lend)+(-1,0)$);
\end{tikzpicture}
\caption[Bulk Green function of the SSH model]{Configuration for the bulk Green function of the SSH model: the site of interest (shaded) couples to two semi-infinite alternating chains, one through a $t_A$ bond and the other through a $t_B$ bond.}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[scale=1]{sshbandsbulk.pdf}
\caption[Bulk spectrum of the SSH model]{Bulk spectrum of the SSH model given by $\mathcal{A}(\omega)=-\frac1\pi \Im G_{\text{bulk}}(\omega)$ where $G_{\text{bulk}}(z)$ is given by Eq.~\eqref{eq:sshbulkgreen}. The spectrum is always gapped $\forall t_A, t_B$; $t_A \neq t_B$.\label{fig:sshbulk}}
\end{figure}
The appearance of a topological state on the boundary of the system and a bulk which is insulating is the primary defining phenomenological characteristic of a topological insulator.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.75,site/.style={circle,draw=black,inner sep=4pt,line width=1pt}]
\coordinate (t) at (0,1.5);
%
\node[site] (t1) at (t){};
\node[site] (t2) at ($(t1)+(2,0)$){};
\node[site] (t3) at ($(t2)+(2,0)$){};
\node[site] (t4) at ($(t3)+(2,0)$){};
\node[site] (t5) at ($(t4)+(2,0)$){};
%
\draw[line width=1.75pt, dashed, line cap=round]($(t5)+(1,0)$)--+(1,0);
\draw[line width=1.75pt,color=blue](t1)--(t2) node[midway,above,blue] {${t_A}$};
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](t2)--(t3) node[midway,above,red] {${t_B}$};
\draw[line width=1.75pt,color=blue](t3)--(t4);
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](t4)--(t5);
\draw[line width=2pt,color=blue](t5)--+(1,0);
%
\coordinate (b) at ($(0,0)-(t)$);
%
\node[site] (b1) at (b){};
\node[site] (b2) at ($(b1)+(2,0)$){};
\node[site] (b3) at ($(b2)+(2,0)$){};
\node[site] (b4) at ($(b3)+(2,0)$){};
\node[site] (b5) at ($(b4)+(2,0)$){};
%
\draw[line width=1.75pt, dashed, line cap=round]($(b5)+(1,0)$)--+(1,0);
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](b1)--(b2) node[midway,above,red] {${t_A}$};
\draw[line width=1.75pt,color=blue](b2)--(b3) node[midway,above,blue] {${t_B}$};
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](b3)--(b4);
\draw[line width=1.75pt,color=blue](b4)--(b5);
\draw[line width=1.5pt,color=red,double,double distance=1.5pt](b5)--+(1,0);
%
\node (topgraph) at ($(t5)+(7,0)$) {\includegraphics[scale=1]{sshttop.pdf}};
\node at ($(topgraph)+(-3.5,1.5)$) {$t_n$};
%
\node (trivgraph) at ($(b5)+(7,0)$) {\includegraphics[scale=1]{sshttriv.pdf}};
\node at ($(trivgraph)+(3.5,-1.5)$) {$n$};
\end{tikzpicture}
\caption[Parameterizations of the SSH model]{Parameterizations of the SSH model. The two possible configurations are $t_A < t_B$ (top, topological) and $t_A > t_B$ (bottom, trivial).}
\end{figure}
Intuition for the existence of the localized state in the topological phase can be obtained by analyzing the extreme limits of the model's parameterization.
The trivial phase, which hosts no localized boundary state, has as its extreme limit $t_B \ll t_A$ with $t_B \approx 0$. In this limit the chain reduces to a collection of decoupled dimers.
The trivial phase of the SSH model can then be said to be adiabatically connected to an atomic insulator.
The topological phase of the SSH model is adiabatically connected to the opposite limit of $t_A \ll t_B$ such that $t_A \approx 0$.
A fermion located on the boundary of the system must be completely localized on that site as there is no hybridization to the remainder of the system. As the hybridization to the rest of the system is turned on, tuning $t_A$ finite while keeping $t_A < t_B$, the boundary mode remains exponentially localized, as demonstrated below.
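The two extreme limits are easily checked numerically. The following sketch (all parameter values here are arbitrary illustrative choices, not taken from the text) diagonalizes the two fully dimerized chains: for $t_B = 0$ the spectrum collapses onto $\pm t_A$, while for $t_A = 0$ the two decoupled end sites produce exactly two zero-energy modes.

```python
import numpy as np

t_A, t_B, N = 0.7, 1.0, 6  # illustrative values; N unit cells, 2N sites

# Trivial extreme limit t_B = 0: decoupled dimers, spectrum is just +-t_A.
hop_triv = [t_A if i % 2 == 0 else 0.0 for i in range(2 * N - 1)]
E_triv = np.linalg.eigvalsh(np.diag(hop_triv, 1) + np.diag(hop_triv, -1))

# Topological extreme limit t_A = 0: the two end sites decouple completely,
# each hosting a perfectly localized zero-energy state.
hop_top = [0.0 if i % 2 == 0 else t_B for i in range(2 * N - 1)]
E_top = np.linalg.eigvalsh(np.diag(hop_top, 1) + np.diag(hop_top, -1))

n_zero = np.sum(np.abs(E_top) < 1e-12)  # number of exact zero modes
```

Counting `n_zero` confirms that exactly two zero modes appear in the dimerized topological limit, one per boundary.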
\subsection{Topological State\label{sec:sshtransfer}}
Features of the zero-energy eigenfunction can be obtained from the transfer matrix\index{transfer matrix}~\cite{shortcourse}.
For this analysis it is useful to express the SSH model in Dirac notation. The single particle states can be written as
\begin{subequations}
\begin{align}
\lvert \psi_m^{(A)} \rangle &\vcentcolon= \opd{c}{2m-1} \lvert 0 \rangle
\\
\lvert \psi_m^{(B)} \rangle &\vcentcolon= \opd{c}{2m} \lvert 0 \rangle
\end{align}
\end{subequations}
where $\lvert 0 \rangle$ denotes the zero excitation vacuum state. The labels $(A)$ and $(B)$ of the state vectors correspond to states on the $A$ and $B$ sites of the unit cell as shown in Fig.~\ref{fig:sshunitcell}.
In the Dirac bra-ket state vector notation, the SSH Hamiltonian is written as
\begin{equation}
\hat{H}_{\textsc{ssh}} = \sum_{m=1}^{\infty} \left[ t_A \left( | \psi_{m}^{(A)} \rangle \langle \psi_{m}^{(B)} | + | \psi_{m}^{(B)} \rangle \langle \psi_{m}^{(A)} | \right) + t_B \left( | \psi_{m+1}^{(A)} \rangle \langle \psi_{m}^{(B)} | + | \tensor*{\psi}{_{m}^{(B)}} \rangle \langle \tensor*{\psi}{_{m+1}^{(A)}} | \right) \right]
\end{equation}
The zero-energy eigenstate may be defined with an ansatz of the form
\begin{equation}
| \mathnormal{\Psi}_0 \rangle = \sum_{n=1}^{\infty} \left[ u_{n}^{(A)} | \psi_{n}^{(A)} \rangle + u_{n}^{(B)} | \psi_{n}^{(B)} \rangle \right]
\end{equation}
which obeys the Schr\"odinger equation
$\hat{H}_{\textsc{ssh}} | \mathnormal{\Psi}_0 \rangle = E_0 | \mathnormal{\Psi}_0 \rangle$. The action of the Hamiltonian on this state is
\begin{align*}
\hat{H}_{\textsc{ssh}} | \mathnormal{\Psi}_0 \rangle
&= \sum_{n=1}^{\infty} \left[ \left( t_A u_{n}^{(B)} \right) | \psi_{n}^{(A)} \rangle + \left( t_A u_{n}^{(A)} \right) | \psi_{n}^{(B)} \rangle + t_B u_{n}^{(A)} | \psi_{n-1}^{(B)} \rangle + t_B u_{n}^{(B)} | \psi_{n+1}^{(A)} \rangle \right]
\end{align*}
Taking into account the zero eigenenergy, $E_0 = 0$, this leads to a set of algebraic equations for the wavefunction amplitudes
\begin{subequations}
\begin{align}
t_A u_{n}^{(A)} + t_B u_{n+1}^{(A)} &= 0
\\
t_A \tensor*{u}{_{n}^{(B)}} + t_B \tensor*{u}{_{n-1}^{(B)}} &= 0
\end{align}
\end{subequations}
As the system is taken to be semi-infinite, it follows that $u_{n}^{(B)} = 0$ for all $n$: the boundary means that there are no cells with $n \leq 0$, so $\tensor*{u}{_{0}^{(B)}}$ vanishes and the recursion forces all subsequent $\tensor*{u}{_{n}^{(B)}}$ to vanish as well.
The equation for $u^{(A)}_{n}$ can be iterated to form a solution in terms of $u^{(A)}_{1}$.
For an SSH model of finite system size with $N$ unit cells ($2N$ sites), edge modes appear on both boundaries. For $t_A < t_B$, the left boundary hosts an edge mode with support only on the odd sublattice, while the right boundary hosts an edge mode with support only on the even sublattice. The existence of the right-boundary edge state follows from the preceding argument applied at cell $n=N$: it is now the $u_{n}^{(A)}$ which all vanish identically, as the absence of cells $n>N$ enforces $u_{N+1}^{(A)} = 0$. These edge modes decay exponentially into the bulk. A semi-infinite system may be obtained by sending the right-side boundary to infinity. The right edge mode is then infinitely far away, and the total wavefunction vanishes on the even sublattice, being suppressed by a factor of $\e^{-\infty}$. A localized state on a boundary site hybridized to the rest of the chain by a strong $t_B$ bond is therefore only present in an SSH model of finite length.
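The finite-chain picture above can be illustrated by exact diagonalization. In this sketch (hopping values and chain length are arbitrary illustrative choices) the two edge modes hybridize into a pair of midgap levels split symmetrically about zero energy, and their symmetric/antisymmetric combinations are polarized entirely on one sublattice each:

```python
import numpy as np

t_A, t_B, N = 0.4, 1.0, 10  # topological configuration t_A < t_B
hop = [t_A if i % 2 == 0 else t_B for i in range(2 * N - 1)]
H = np.diag(hop, 1) + np.diag(hop, -1)
E, V = np.linalg.eigh(H)

# The two midgap levels sit at +-eps, with a splitting ~ (t_A/t_B)^N
# induced by the hybridization of the left and right edge modes.
mid = np.argsort(np.abs(E))[:2]
eps_split = np.abs(E[mid]).max()

# Chiral symmetry implies the sum/difference of the two midgap eigenstates
# is sublattice polarized: one combination on odd sites, the other on even.
psi = (V[:, mid[0]] + V[:, mid[1]]) / np.sqrt(2)
wt_odd = np.sum(psi[0::2] ** 2)  # weight on the odd sublattice
```

Here `wt_odd` comes out essentially $0$ or $1$ (which of the two depends on arbitrary eigenvector signs), confirming the sublattice structure of the boundary modes.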
The localization length $\xi$ of the wavefunction can be determined by prescribing the ansatz $u^{(A)}_{n} = e^{-{n}/{\xi}} u^{(A)}_{1}$ for the value of the wavefunction on the $A$ site in unit cell $n$ in terms of the value on the boundary site and iterating the value of the wavefunction from the transfer matrix as
\begin{align*}
\left| u^{(A)}_{n} \right|^2 &= \left| e^{-{n}/{\xi}} u^{(A)}_{1} \right|^2
\\
\left(-\frac{t_A}{t_B}\right)^{2n-2} \left| u^{(A)}_{1} \right|^2 &= e^{-{2n}/{\xi}} \left| u^{(A)}_{1} \right|^2
\\
e^{-{2n}/{\xi}} &= \left(\frac{t_A}{t_B}\right)^{2n-2}
\\
\Rightarrow \quad \xi &= \frac{n}{1-n} \frac{1}{\ln\frac{t_A}{t_B}}
\end{align*}
For sites deep into the bulk,
\begin{equation}\begin{aligned}
\xi &\underset{n\gg1}{\approx} \frac{1}{\ln\frac{t_B}{t_A}}
\label{eq:localization}
\end{aligned}\end{equation}
It is apparent that for $t_A < t_B$ the zero-energy state decays exponentially into the bulk. The topological state is therefore exponentially localized on the boundary of the system.
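The localization length of Eq.~\eqref{eq:localization} can be verified numerically. The sketch below (parameters are arbitrary illustrative choices) builds the projector onto the two numerically degenerate midgap states of a long finite chain, which is insensitive to the arbitrary mixing of the left and right edge modes, and fits the decay of the zero-mode weight on the odd sublattice:

```python
import numpy as np

t_A, t_B, N = 0.4, 1.0, 50  # topological configuration, N unit cells
hop = [t_A if i % 2 == 0 else t_B for i in range(2 * N - 1)]
H = np.diag(hop, 1) + np.diag(hop, -1)
E, V = np.linalg.eigh(H)

# Basis-independent projector onto the two (numerically degenerate)
# midgap states, avoiding any ambiguity in the eigenvector mixing.
mid = np.argsort(np.abs(E))[:2]
rho = V[:, mid] @ V[:, mid].T

# |u_n^(A)|^2 is the diagonal of rho on the odd sublattice; it should
# decay as (t_A/t_B)^(2(n-1)), i.e. with xi = 1/ln(t_B/t_A).
uA2 = np.diag(rho)[0::2]
ncell = np.arange(1, 11)
slope = np.polyfit(ncell, np.log(uA2[:10]), 1)[0]
xi_fit = -2.0 / slope
xi_exact = 1.0 / np.log(t_B / t_A)
```

For these parameters both `xi_fit` and `xi_exact` come out at $\xi \approx 1.09$ unit cells, in agreement with Eq.~\eqref{eq:localization}.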
\subsubsection{Disorder}
It is conventionally understood that the topological states of topological insulators are robust to adiabatic perturbations. Strictly speaking, exact topological protection holds only for perturbations which preserve the symmetry protecting the topological state; against symmetry-breaking perturbations, the topological state is only approximately robust~\cite{sshbulkboundary}.
Disorder which respects the chiral symmetry of the SSH model in momentum space is tantamount to reparameterizing the tunneling amplitudes with $t_A \to t'_A$ and $t_B \to t'_B$ such that $t'_A < t'_B$ still holds.
Disorder applied to the hopping amplitudes can be expressed as $t_A \to t_A + \delta w_j$ and $t_B \to t_B + \delta w_j$, where the $\delta w_j$ are randomly sampled, independently for each bond, from the interval $[-W,W]$, and $W$ is termed the disorder width.
The spectrum for the disordered SSH model can be obtained directly from the Green function computed from its continued fraction form
\begin{equation}
\begin{aligned}[c]
G_{1,1}(z)
= \cfrac{1}{
z - \varepsilon_1 - \cfrac{t_{A_1}^2}{
z - \varepsilon_2 - \cfrac{t_{B_2}^2}{
z - \varepsilon_3 - \cfrac{t_{A_3}^2}{
\ddots - \cfrac{t_{A_{N-2}}^2}{
z - \varepsilon_{N-1} - \cfrac{t_{B_{N-1}}^2}{
z - \varepsilon_N
}
}
}
}
}
} \,.
\end{aligned}
\end{equation}
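The continued fraction is evaluated numerically by iterating from the chain end inward. The following sketch implements this recursion for a hopping-disordered SSH chain; the chain length, disorder width, and broadening $\eta$ are illustrative choices, not the values used for the figures.

```python
import numpy as np

rng = np.random.default_rng(0)
N, t_A, t_B, W, eta = 2000, 0.4, 1.0, 1e-2, 1e-3  # illustrative parameters

eps = np.zeros(N)                 # clean on-site energies
t = np.empty(N - 1)
t[0::2], t[1::2] = t_A, t_B       # alternating bonds t_{A_1}, t_{B_2}, ...
t += rng.uniform(-W, W, N - 1)    # hopping disorder of width W

def G11(z):
    """Evaluate the continued fraction for G_{1,1}(z) from the end inward."""
    g = 1.0 / (z - eps[-1])
    for n in range(N - 2, -1, -1):
        g = 1.0 / (z - eps[n] - t[n] ** 2 * g)
    return g

# Local spectral function A(w) = -(1/pi) Im G_{1,1}(w + i*eta);
# the midgap pole at w = 0 signals the topological boundary state.
A0 = -G11(0.0 + 1j * eta).imag / np.pi
```

Evaluating `A0` shows a large spectral weight at zero frequency, the midgap pole surviving the (chiral-symmetry-preserving) hopping disorder, while frequencies elsewhere in the gap carry negligible weight.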
\begin{figure}[h]
\subfiglabel{\includegraphics[scale=1]{sshbandstop.pdf}}{3.125,2}{fig:dtsshspec0}
\subfiglabel{\includegraphics[scale=1]{ssht1e-4eps.pdf}}{3.125,2}{fig:dtsshspec1e-4}
\subfiglabel{\includegraphics[scale=1]{ssht1e-3eps.pdf}}{3.125,2}{fig:dtsshspec1e-3}
\subfiglabel{\includegraphics[scale=1]{ssht1e-2eps.pdf}}{3.125,2}{fig:dtsshspec1e-2}
\caption[Spectral function of the SSH model with hopping parameter disorder]{Spectral function of the SSH model in its topological configuration with disorder on the hopping parameters with disorder width of \subref{fig:dtsshspec0} $W = 0$, \subref{fig:dtsshspec1e-4} $W = 10^{-4}$, \subref{fig:dtsshspec1e-3} $W = 10^{-3}$, and \subref{fig:dtsshspec1e-2} $W = 10^{-2}$. Observe that even with heavy distortion, the midgap topological pole remains. Calculation performed with a finite chain of $N=\num{5000000}$ sites.\label{fig:dtsshspec}}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=1]{sshwft1e-3eps.pdf}
\includegraphics[scale=1]{sshwflogt1e-3eps.pdf}
\caption[Wavefunction of the zero-energy boundary state with hopping parameter disorder]{Wavefunction of the zero-energy boundary state for disorder width of $10^{-3}$ on the hopping parameters. \label{fig:dtsshwfn}}
\end{figure}
In order for the midgap state to collapse, it is necessary to alter the system by more than perturbations of the hopping parameters: collapsing the topological state generally requires a drastic change in system parameters, such as closing and reopening the band gap.
The boundary state also remains localized for disorder in the on-site potential, $\varepsilon_j \to \delta\varepsilon_j$, where the $\delta\varepsilon_j$ are randomly sampled from $[-W,W]$ as in the previous case with disordered hoppings. The effect on the SSH spectrum is shown in Fig.~\ref{fig:desshspec}.
\begin{figure}[h]
\subfiglabel{\includegraphics[scale=1]{sshbandstop.pdf}}{3.125,2}{fig:desshspec0}
\subfiglabel{\includegraphics[scale=1]{sshtceps1e-4.pdf}}{3.125,2}{fig:desshspec1e-4}
\subfiglabel{\includegraphics[scale=1]{sshtceps1e-3.pdf}}{3.125,2}{fig:desshspec1e-3}
\subfiglabel{\includegraphics[scale=1]{sshtceps1e-2.pdf}}{3.125,2} {fig:desshspec1e-2}
\caption[Spectral function of the SSH model with on-site potential disorder]{Spectral function of the SSH model in its topological configuration with on-site potential disorder with disorder width of \subref{fig:desshspec0} $W = 0$, \subref{fig:desshspec1e-4} $W = 10^{-4}$, \subref{fig:desshspec1e-3} $W = 10^{-3}$, and \subref{fig:desshspec1e-2} $W = 10^{-2}$. Calculation performed with a finite chain of $N=\num{5000000}$ sites.\label{fig:desshspec}}
\end{figure}
\begin{figure}[h]
\includegraphics[scale=1]{sshwftceps1e-3.pdf}
\includegraphics[scale=1]{sshwflogtceps1e-3.pdf}
\caption[Wavefunction of the zero-energy boundary state with on-site potential disorder]{Wavefunction of the zero-energy boundary state for disorder width of $10^{-3}$ on the potentials.\label{fig:desshwfn}}
\end{figure}
Even in the presence of symmetry-breaking disorder, the zero-energy state remains exponentially localized on the boundary of the system, as shown in Fig.~\ref{fig:desshwfn}. Since the chiral symmetry is broken, the wavefunction now has finite support on the even sublattice, but its amplitude there remains orders of magnitude smaller than on the odd sublattice. With potential disorder, the boundary site potential may shift from zero and hence the localized boundary mode may shift away from the Fermi level. However, the state remains robust and topologically protected, provided that it stays within the spectral gap.
Sufficiently strong potential disorder will destroy the zero-energy state.
Very strong perturbations are required to destroy such states by moving them into the band continuum.
This argument regarding robustness in the presence of symmetry-breaking perturbations will become important in the discussion of topology in the Mott transition in the particle-hole asymmetric Hubbard model in \S\ref{sec:phasymmmap}.
\begin{comment}
\subsection{Topological Superconductor\label{sec:kitaevmodel}}
\cite{kitaev}
\begin{equation}
\hat{H} = \sum_{j} \left[ \epsilon \opd{c}{j} \op{c}{j} + t \left( \opd{c}{j} \op{c}{j+1} + \opd{c}{j+1} \op{c}{j} \right) - \Delta \left( \op{c}{j} \op{c}{j+1} + \opd{c}{j} \opd{c}{j+1} \right) \right]
\end{equation}
\begin{equation}
\hat{H}(k) = \sum_{k} \left[ \epsilon(k) \opd{c}{k} \op{c}{k} + 2 \i \Delta \left( \op{c}{k} \op{c}{-k} + \opd{c}{-k} \opd{c}{k} \right) \sin(k) \right]
\end{equation}
$\epsilon(k) = \epsilon + 2 t \cos(k)$
Nambu spinor $\opd{\psi}{k} = \adjvec{ \opd{c}{k} & \op{c}{-k} }$
\begin{equation}
\hat{H}(k) = \sum_{k} \opd{\psi}{k} \left[ ( \epsilon + 2 t \cos(k) ) \boldsymbol{\sigma}_3 - 2 \Delta \sin(k) \boldsymbol{\sigma}_2 \right] \op{\psi}{k}
\end{equation}
Bogoliubov--de Gennes Hamiltonian
\begin{equation}
E_k = \pm \sqrt{ ( \epsilon + 2 t \cos(k) )^2 + 4 \Delta^2 \sin^2(k)}
\end{equation}
anomalous Green function
\begin{align}
F_{i,j}(z) &= \Green{\op{c}{i}}{\op{c}{j}}_z
&
F^\dagger_{i,j}(z) &= \Green{\opd{c}{i}}{\opd{c}{j}}_z
\end{align}
\begin{equation}
\mathcal{G}_{\mu,\nu}
=
\begin{pmatrix} G & F \\ F^\dagger & G^\dagger \end{pmatrix}_{\mu,\nu}
\end{equation}
\begin{equation}
\boldsymbol{\tau}_+ \mathcal{G}_{1,1} \boldsymbol{\tau}_- \mathcal{G}_{1,1} - ( z\mathbbm{1} - \epsilon \boldsymbol{\sigma}_3 ) \mathcal{G}_{1,1} + \mathbbm{1} = 0
\end{equation}
Majorana
\begin{align}
\opd{c}{j} &= \frac{\gamma_{j,\textsc{a}} + \i \gamma_{j,\textsc{b}}}{2}
&
\op{c}{j} &= \frac{\gamma_{j,a} - \i \gamma_{j,b}}{2}
\end{align}
\begin{equation}
\{ \gamma_m , \gamma_n \} = 2 \delta_{m,n}
\end{equation}
It is conventional to map the Majorana representation onto a superlattice of the original fermionic lattice, such that
\begin{align}
\gamma_{j,a} &\equiv \gamma_{2n-1}
&&\text{and}
&
\gamma_{j,b} &\equiv \gamma_{2n}
\end{align}
\begin{equation}
H = \sum_{j} \left[ \frac{\epsilon}{2} \left( 1 - \i \gamma_{j,a} \gamma_{j,b} \right) + \i \frac{t}{2} \left( \gamma_{j,a} \gamma_{j+1,b} - \gamma_{j,b} \gamma_{j+1,a} \right) - \i \frac{\Delta}{2} \left( \gamma_{j,a} \gamma_{j+1,b} + \gamma_{j,b} \gamma_{j+1,a} \right) \right]
\end{equation}
\end{comment}
\chapter{Introduction}
Condensed matter physics is the area of physics concerned with aggregates of many constituents.
Aggregates which exist on macroscopic scales are composed of electrons and atomic nuclei at the nanoscale. Their phenomenology is therefore dictated by quantum mechanical behavior.
A major area of study within condensed matter physics is then describing the macroscopic electronic behavior of solid materials in terms of their microscopic properties.
One of the basic classifications within this realm is that of electronic phases of matter. The primary classification is whether or not a substance conducts electrical current, i.e., whether it is in a metallic or an insulating phase.
In substances which are metallic, the electrons in the outer shells of the atomic constituents are free to propagate through the material.
However, there are various mechanisms which might lead a system to exhibit insulating behavior.
The most basic is that of the atomic insulator, where the local atomic potential is greater than the kinetic energy of the outermost valence electrons. An example of such an insulator is solid Ar.
A type of system in which the valence electrons do possess meaningful kinetic energy, but which nevertheless exhibits insulating behavior, is the band insulator, where there exists a large energy gap at the Fermi level. This means that there are no electronic states available at the Fermi level for charge carriers to propagate through the material.
Another manner in which electrons may be prohibited from propagating is through the presence of disorder which takes the form of randomized local potentials on the lattice sites. This disorder can be the result of a high density of impurities or defects in the system. This type of insulator is known as an Anderson insulator.
There also exist insulators which might otherwise be concluded to be metals from elementary band theory. An example of such an insulator is that of a Mott insulator. In a Mott insulator the insulating behavior arises as a result of a strong Coulomb repulsion between electrons. In a system where electronic interactions play a meaningful role, an otherwise metallic phase may become insulating if the interaction strength is sufficiently strong to inhibit propagation of the charge carriers.
The Mott insulating phenomenon is an example of the non-trivial phenomenology which can arise from strongly correlated electrons. The level of correlation between electrons in a material can dramatically affect its macroscopic characteristics. As such, these correlations cannot generally be ignored in the analysis of these systems as their inherent properties can be dramatically altered by them, and completely new phenomena may arise due to them as well.
A special type of band insulator which has been discovered in recent decades is that of the
topological insulator~\cite{hk}. A topological insulator is a band insulator in which the momentum space energy bands possess non-trivial topology. Mathematically, topology refers to the global properties of a manifold, properties which are robust under smooth deformations of its geometry. A sphere and an ellipsoid are considered topologically equivalent, but topologically distinct from the torus. Quantitatively, this difference can be seen from the `hairy ball' theorem: any continuous tangent vector field on the sphere or ellipsoid must possess a singularity, whereas the torus admits tangent vector fields which are everywhere non-vanishing.
Phenomenologically, topological insulators are systems whose bulk possesses an insulating spectral gap, but on the boundary possess localized gapless electronic states.
Early notions of this sort of topology in condensed matter arose in semiconducting wells~\cite{volkovpankratov} and the integer quantum Hall effect~\cite{originalqhe}. In the integer quantum Hall effect, a $2d$ material with an insulating bulk was observed to exhibit quantized Hall conductance on its boundary in the presence of a strong external magnetic field.
A later discovery was the
quantum spin Hall effect~\cite{kanemele,bernevigzhang}
which exhibited similar properties, but did not require the external magnetic field.
\begin{comment}
quantum anomalous Hall effect
topological insulators~\cite{hk} and topological superconductors~\cite{kitaev}
MOSFET\footnote{metal-oxide-semiconductor field-effect transistor} device
$\mathrm{Ga}\mathrm{As}\mathrm{-}\mathrm{Al}_{x}\mathrm{Ga}_{1-x}\mathrm{As}$ heterostructure
\end{comment}
When developing a theory describing a particular system, it is tempting to start from the most foundational elements of that system as the basis of the theory.
Nearly all known physical phenomena can be described, at a reductive level, by the quantum field theory arising from the action functional\footnote{The dynamics of the spacetime background is given by the \textit{classical} action $S_{\text{GR}}[\vartheta] = \frac{1}{2c\varkappa}\int \left[ \star (\vartheta \extp \vartheta) \extp R \right]$ which at time of writing does not yet have a satisfactory quantum description. Some elements of this open question are addressed in~\cite{mscthesis}.}
\begin{equation}
S_{\text{SM}}[\psi,A,\Phi] = \int \left[ -\frac14 F \extp \star F + \overline{\psi} \left( \i \slashed{\mathrm{D}} - \star m \right) \psi + \mathrm{D} \Phi \extp \star \mathrm{D} \Phi - V(|\Phi|^2) + \mathcal{L}(\psi,\Phi) \right]
\end{equation}
where the integrand is the Standard Model of particle physics Lagrangian 4-form.
While this quantum field theory is considered to be fundamental to nearly all observed physical phenomena, it is not possible to directly reproduce all such phenomena from it. Even for relatively high-energy subnuclear physics at energy scales $\sim \SI{1}{\giga\electronvolt/c^2}$ it is often necessary to work with effective theories rather than the underlying fundamental one.
One of the main reasons for evaluating effective models is the computational intractability of the more fundamental theory.
Additionally, the fundamental products of reductionism cannot in turn capture emergent phenomena which appear at different parameter scales~\cite{moreisdifferent}.
This leads to another reason for developing effective theories. Only at this higher effective level is it possible to construct a theory which correctly describes the relevant phenomena.
In terms of constructing effective theories, the fundamental constituents of a system are not necessarily the appropriate objects to model in order to produce a theory which describes a particular phenomenon.
It is even not uncommon for models of particular phenomena to appear drastically different from the actual systems they intend to represent.
An example of a system whose dynamical degrees of freedom are non-trivially related to its elementary constituents is one exhibiting the fractional quantum Hall effect~\cite{fqhexp}. In the fractional quantum Hall effect, the system's charge-carrying excitations have an electric charge which is a fraction of the unit charge of the electron~\cite{laughlinfqh,fracchargef,fracchargei}. In terms of elementary constituents, systems exhibiting the fractional quantum Hall effect are composed only of electrons and atomic nuclei, objects which possess a precisely quantized integer electric charge. None of the elementary constituents of these systems possesses fractional charge. The macroscopic phenomenon of fractional charge carriers is therefore an emergent effect arising from non-trivial interplay between the system's elementary constituents.
This provides an example of a quantum quasiparticle. Quasiparticles are emergent degrees of freedom which arise from the collective dynamics of the elementary constituents of a system, but nonetheless may behave as the primary dynamical degrees of freedom of their system.
While quasiparticles are typically only thought of in quantum systems, the notion of a classical quasiparticle also exists~\cite{mattuck}. Such a classical quasiparticle may be thought of as a particle existing at the center of mass of a two-body problem, such as two orbiting asteroids in space, whose mass is the sum of the masses of the two bodies. There is no actual physical object with the total mass at the center of mass, but the system can be modeled as if there were such a body there.
A particularly notable type of quasiparticle in recent condensed matter studies is the Majorana degree of freedom~\cite{kitaev}. Majorana fermions were first derived as real solutions of the relativistic Dirac equation, their primary characteristic being that they are their own antiparticles. While there are at present no experimental signatures of elementary particles which are Majorana fermions~\cite{pdg}, Majorana quasiparticles may appear in condensed matter systems, a classic example manifesting as zero-bias conductance peaks in the spectra of electrons tunneling across Josephson junctions~\cite{kitaev,majoranaforqc}.
The first way in which the notion of effective theories enter into the work of this thesis is in the construction of basic models of condensed matter systems.
These are toy models that capture the basic physics of interest in a simplified setting. One such example is the Kitaev superconducting wire\index{Kitaev superconductor}, which is a model which is able to realize Majorana degrees of freedom~\cite{kitaev}.
The second manner in which the notion of effective theories enter is in the construction of auxiliary models, particularly in the second half of the thesis.
These auxiliary models are on an additional layer of effective theory in that their relevant degrees of freedom do not correspond to the actual physical degrees of freedom of the system. Nevertheless, these auxiliary models with their non-physical auxiliary degrees of freedom enable manipulation of the model into a form in which the physical properties of the model are still derivable.
\section{Outline of the Thesis}
The methods of quantum condensed matter physics employed in the work of this thesis are reviewed in \S\ref{ch:methods}.
In particular the theory of Green functions is discussed. Green functions in many-body quantum theory are much more convenient to work with than the many-particle wavefunctions of the dynamical degrees of freedom, and numerous physically meaningful quantities can be derived from them.
Also covered in this chapter is a more detailed quantitative analysis of the models described in the introduction above. Surveyed here is a collection of models which illustrate certain classes of physically interesting systems and form the platform for the studies undertaken in this text.
A particularly relevant model discussed is the Su-Schrieffer-Heeger (SSH) model which describes a $1d$ topological insulator. In \S\ref{ch:genssh}, this model is generalized in various manners to other $1d$ models which exhibit topological features. The types of generalizations developed here serve as reference models for the auxiliary systems which appear in the later chapters of this thesis.
The first half of the thesis culminates in \S\ref{ch:bethessh}, which analyzes a system exhibiting the topological features of the SSH model while incorporating strong electronic correlations. The system is solved numerically exactly in the limit of infinite dimensions using the DMFT-NRG method established in \S\ref{ch:methods}.
The second half of the thesis begins with \S\ref{ch:aux} and emphasizes the second theme of this work. It presents a set of novel auxiliary field mappings for strongly correlated electron systems which can map fully interacting systems to completely non-interacting ones.
The first of these methods is based on nonlinear canonical transformations of interacting fermionic systems based on their decomposition into Majorana degrees of freedom. This method makes extensive use of the Clifford algebra structure obeyed by Majorana degrees of freedom and the geometric structures which appear. While there are some complete calculations of specific applications given in \S\ref{sec:majorana}, the primary function of this section is a proof of principle that an analytic transformation of an interacting system to a non-interacting system exists. For the type of systems considered in this thesis this method of nonlinear canonical transformations has limited applicability due to its algebraic complexity for all but the simplest systems. However, the method in general may be able to achieve goals different from those of this thesis or have interesting applications to other types of systems not considered here. As such, the method is presented with some generality with numerous points of outlook provided to serve as a starting point for future work.
The second auxiliary field mapping presented in \S\ref{ch:aux} is the method which sees most application in the products of this thesis.
The mapping is first developed for systems of finite Hilbert space before being adapted for systems with continuous spectrum. In both cases the mapping is applied to impurity models both at zero temperature as well as at finite temperature. In particular the effects of temperature on the parameters of the auxiliary system are discussed.
While the general methodology of the mapping is presented in \S\ref{ch:aux}, a more detailed study is reserved for the next chapter, \S\ref{ch:motttopology}, where a richer spectrum of results is obtained, along with a connection to the work of the first part of the thesis on topological phases of matter. This chapter represents an in-depth case study of the auxiliary field mapping for the Hubbard model across the Mott metal-insulator transition.
The thesis concludes in \S\ref{ch:conclusion} where the primary novel results of the thesis are recapitulated and elements of potential future work are highlighted.
\subsubsection{Notation \& Conventions}
Except where noted, \textit{e.g.} in \S\ref{sec:transport}, units are chosen such that $\hslash = 1$, $k_{\text{B}} = 1$, and any lattice spacing constants are set to unity, meaning that momentum is likewise dimensionless, $[k] = 1$. The inverse thermodynamic temperature is notated as $\beta = 1/T$.
With this choice of units, the remaining free dimension is energy. Energies will generally be quoted in terms of the bandwidth $D$ of the system under consideration, in arbitrary units.
Matrix objects are notated in bold and $\mathbbm{1}$ denotes the identity matrix with its dimension only prescribed where context alone is insufficient.
The abbreviation $\hc$ stands for Hermitian conjugate.
Additional conventions and notation are defined as introduced as well as in the index.
\chapter{Some Mathematics \& Computations}\label{ch:appendix}
\section{Elements of Complex Analysis}
\subsubsection{Dirac delta distribution}
\begin{align}
\delta(x-x_0) &= \frac1\pi \lim_{\eta\to0^+} \frac{\eta}{(x-x_0)^2 + \eta^2}
\label{eq:lorentzian}
\\
\delta(x-x_0) &= \frac{1}{2\pi} \int_{-\infty}^{+\infty} \d s\, \e^{-\i (x-x_0) s}
\end{align}
Being a distribution, the Dirac delta is technically only well-defined as the argument of a functional, i.e., under an integral.
\subsubsection{Heaviside step function}
\begin{align}
\theta(t-t') &= \smashoperator{\int_{-\infty}^{t}} \d s\, \delta(s-t')
\\
\theta(t-t') &= \frac{\i}{2\pi} \lim_{\eta\to0^+} \int_{-\infty}^{+\infty} \d s \frac{\e^{-\i s (t-t')}}{s + \i \eta}
\end{align}
\subsubsection{Cauchy principal value}\index{Cauchy principal value|see {$\fint$}}\index{$\displaystyle\fint$}
\begin{equation}
\fint_a^b \frac{F(\xi)}{\xi - \xi_0} \d\xi
\equiv \lim_{\eta\to0} \left[ \int_a^{\xi_0-\eta} \frac{F(\xi)}{\xi - \xi_0} \d\xi + \int_{\xi_0+\eta}^b \frac{F(\xi)}{\xi - \xi_0} \d\xi \right]
\end{equation}
\subsubsection{Sokhotski--Plemelj theorem}\index{Sokhotski-Plemelj theorem} Over the real line, the theorem reads
\begin{equation}
\lim_{\epsilon\to0} \int_{a}^{b} \frac{f(x)}{x - x_0 \pm \i \epsilon} \d x = \mp \i \pi \int_{a}^{b} f(x) \delta(x-x_0) \d x + \fint_{a}^{b} \frac{f(x)}{x-x_0} \d x
\label{eq:sokhotskiplemelj}
\end{equation}
\subsubsection{Fourier transform}
\begin{align}
f(q) &= \mathfrak{F} f(p) = \frac{1}{\sqrt{2\pi}} \int \d p\, f(p) \e^{-\i p q}
&
f(p) &= \mathfrak{F}^{-1} f(q) = \frac{1}{\sqrt{2\pi}} \int \d q\, f(q) \e^{\i p q}
\end{align}
\subsubsection{Kramers-Kronig relations}\index{Kramers-Kronig relations}
\begin{align}
\Re G_k(\omega) &= \phantom{-}\frac1\pi \fint_{-\infty}^{\infty} \frac{\Im G_k(\omega')}{\omega'-\omega} \d\omega'
= -\fint_{-\infty}^{\infty} \frac{\mathcal{A}_k(\omega')}{\omega'-\omega}\d\omega'
= \mathfrak{F}^{-1}\left[ \mathfrak{F}\mathcal{A}_k(\omega) \cdot \mathfrak{F}\frac1\omega \right]
\\
\Im G_k(\omega) &= -\frac1\pi \fint_{-\infty}^{\infty} \frac{\Re G_k(\omega')}{\omega' - \omega} \d\omega'
\end{align}
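The Kramers-Kronig relation above can be checked numerically on a closed-form example. In the sketch below (pole position, broadening, and grid are illustrative choices), the spectral function of the resolvent $G(z) = 1/(z - \varepsilon_0 + \i\eta)$ is integrated with a symmetric-grid principal value and compared against the known real part:

```python
import numpy as np

# Re G(w) = -P int A(w')/(w'-w) dw' for G(z) = 1/(z - e0 + i*eta),
# whose spectral function is a Lorentzian of width eta centered at e0.
e0, eta = 0.3, 0.05
wp = np.linspace(-40.0, 40.0, 400_001)   # broad, fine grid for the integral
dw = wp[1] - wp[0]
A = (eta / np.pi) / ((wp - e0) ** 2 + eta ** 2)   # A = -(1/pi) Im G

def reG_kk(w):
    """Principal-value evaluation: excluding the singular grid point makes
    the neighboring large contributions cancel pairwise for smooth A."""
    i = np.argmin(np.abs(wp - w))        # snap w to the nearest grid point
    kern = np.zeros_like(wp)
    mask = wp != wp[i]
    kern[mask] = A[mask] / (wp[mask] - wp[i])
    return -np.sum(kern) * dw

w0 = 0.8
exact = (w0 - e0) / ((w0 - e0) ** 2 + eta ** 2)   # Re G(w0) in closed form
approx = reG_kk(w0)
```

The grid-based principal value reproduces the analytic real part to well within a percent, the residual error coming from the finite grid spacing and the truncated tails.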
\section{Equations of Motion Identities}\label{appendixeom}
Here collected are several commutator identities which are useful in the computation of the Green function equations of motion.
\begin{align}
[ \hat{A} , \hat{B} \hat{C} ]
&= \hat{B} [ \hat{A} , \hat{C} ] + [ \hat{A} , \hat{B} ] \hat{C}
\end{align}
\begin{align}
[ \hat{A} \hat{B} , \hat{C} ]
&= \hat{A} [ \hat{B} , \hat{C} ] + [ \hat{A} , \hat{C} ] \hat{B}
\end{align}
\begin{align}
\{ \hat{A} , \hat{B} \hat{C} \}
&= \{ \hat{A} , \hat{B} \} \hat{C} + \hat{B} [ \hat{C} , \hat{A} ]
\end{align}
\begin{align}
\{ \hat{A} \hat{B} , \hat{C} \}
&= \hat{A} \{ \hat{B} , \hat{C} \} + [ \hat{C} , \hat{A} ] \hat{B}
\end{align}
For Hermitian $\hat{A}$, $[\hat{A},\opd{c}{}] = -[\hat{A},\op{c}{}]^\dagger$.
The hopping operator $\op{T}{ir,s} \vcentcolon= \opd{c}{i,s} \op{c}{i+r,s} + \opd{c}{i+r,s} \op{c}{i,s}$ will also be used below.
\begin{equation}
\{ \op{c}{A} , \opd{c}{B} \} = \delta_{AB}
\end{equation}
\begin{equation}
\{ \op{c}{A} , \op{c}{B} \} = 0 = \{ \opd{c}{A} , \opd{c}{B} \}
\end{equation}
\begin{equation}
\{ \op{n}{A} , \op{n}{B} \} = 2 \op{n}{A} \op{n}{B}
\end{equation}
\begin{align}
[ \op{n}{i,s} , \opd{c}{j,\sigma} ] &= \delta_{ij,s\sigma} \opd{c}{i,s}
&
[ \op{n}{i,s} , \op{n}{j,\sigma} ] &= 0
&
[ \op{n}{i,\uparrow} \op{n}{i,\downarrow} , \opd{c}{j,\sigma} ] &= \delta_{ij} n_{i,-\sigma} \opd{c}{j,\sigma}
&
[ \op{n}{i,\uparrow} \op{n}{i,\downarrow} , \op{n}{k,\sigma'} \opd{c}{j,\sigma} ] &= \delta_{ij} \op{n}{k,\sigma'} \op{n}{i,-\sigma} \opd{c}{j,\sigma}
\end{align}
\begin{align}
[ \opd{c}{i,s} , \opd{c}{j,\sigma} ] &= 2 \opd{c}{i,s} \opd{c}{j,\sigma}
&
[ \op{c}{i,s} , \op{c}{j,\sigma} ] &= 2 \op{c}{i,s} \op{c}{j,\sigma}
&
[ \op{c}{i,s} , \opd{c}{j,\sigma} ] &= \delta_{ij,s\sigma} - 2 \opd{c}{j,\sigma} \op{c}{i,s}
\end{align}
\begin{align}
[ \opd{c}{A,s} , \opd{c}{a,\sigma} \opd{c}{b,\sigma'} \op{c}{c,\sigma''} ] &= \opd{c}{a,\sigma} \opd{c}{b,\sigma'} \left( \delta_{Ac,s\sigma''} - 2 \opd{c}{A,s} \op{c}{c,\sigma''} \right)
\\
[ \op{c}{B,s'} , \opd{c}{a,\sigma} \opd{c}{b,\sigma'} \op{c}{c,\sigma''} ] &= -\opd{c}{a,\sigma} \opd{c}{c,\sigma''} \delta_{Bb,s'\sigma'} + \opd{c}{b,\sigma'} \opd{c}{c,\sigma''} \delta_{Ba,s'\sigma} + 2 \opd{c}{a,\sigma} \opd{c}{b,\sigma'} \op{c}{B,s'} \op{c}{c,\sigma''}
\end{align}
\begin{align}
[ \opd{c}{A,s} , \opd{c}{a,\sigma} \op{c}{b,\sigma'} \op{c}{c,\sigma''} ]
&= - \opd{c}{a,\sigma} \op{c}{c,\sigma''} \delta_{Ab,s\sigma'} + \opd{c}{a,\sigma} \op{c}{b,\sigma'} \delta_{Ac,s\sigma''} + 2 \opd{c}{A,s} \opd{c}{a,\sigma} \op{c}{b,\sigma'} \op{c}{c,\sigma''}
\\
[ \op{c}{B,s'} , \opd{c}{a,\sigma} \op{c}{b,\sigma'} \op{c}{c,\sigma''} ] &= \left( \delta_{Ba,s'\sigma} - 2 \opd{c}{a,\sigma} \op{c}{B,s'} \right) \op{c}{b,\sigma'} \op{c}{c,\sigma''}
\end{align}
\begin{align}
[ \op{T}{ir,s} , \hat{A} ]
&= \opd{c}{i,s} [ \op{c}{i+r,s} , \hat{A} ] + [ \opd{c}{i,s} , \hat{A} ] \op{c}{i+r,s} + \opd{c}{i+r,s} [ \op{c}{i,s} , \hat{A} ] + [ \opd{c}{i+r,s} , \hat{A} ] \op{c}{i,s}
\end{align}
\begin{align}
[ \op{T}{ir,s} , \opd{c}{j,\sigma} ]
&= \delta_{i+r,j;s\sigma} \opd{c}{i,s} + \delta_{i,j;s\sigma} \opd{c}{i+r,s}
\\
\sum_{i,s} [ \op{T}{ir,s} , \opd{c}{j,\sigma} ]
&= \opd{c}{j-r,\sigma} + \opd{c}{j+r,\sigma}
\end{align}
\begin{align}
[ \tensor*{\hat{T}}{_{ir,s}} , \tensor*{\hat{n}}{_{j,\sigma}} ]
&= \delta_{i+r,j;s\sigma} \opd{c}{i,s} \op{c}{j,\sigma} - \delta_{i,j;s\sigma} \opd{c}{j,\sigma} \op{c}{i+r,s} + \delta_{i,j;s\sigma} \opd{c}{i+r,s} \op{c}{j,\sigma} - \delta_{i+r,j;s\sigma} \opd{c}{j,\sigma} \op{c}{i,s}
\\
\sum_{i,s} [ \tensor*{\hat{T}}{_{ir,s}} , \tensor*{\hat{n}}{_{j,\sigma}} ]
&= \opd{c}{j-r,\sigma} \op{c}{j,\sigma} - \opd{c}{j,\sigma} \op{c}{j+r,\sigma} + \opd{c}{j+r,\sigma} \op{c}{j,\sigma} - \opd{c}{j,\sigma} \op{c}{j-r,\sigma}
\end{align}
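Since these identities are purely algebraic, they can be spot-checked numerically by representing the fermionic operators as matrices via a Jordan--Wigner construction. The following sketch, a verification aid rather than part of any derivation, checks the canonical anticommutators and the number-operator commutator above for three spinless modes:

```python
import numpy as np

def jordan_wigner(n_modes):
    """Fermionic annihilation operators as dense 2^n x 2^n matrices."""
    Z = np.diag([1.0, -1.0])
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # |0><1|: lowers the occupation
    I2 = np.eye(2)
    ops = []
    for k in range(n_modes):
        factors = [Z] * k + [sm] + [I2] * (n_modes - k - 1)
        op = np.array([[1.0]])
        for fac in factors:
            op = np.kron(op, fac)
        ops.append(op)
    return ops

M = 3                                   # three fermionic modes (8-dim Fock space)
c = jordan_wigner(M)
cd = [op.conj().T for op in c]
n = [cd[i] @ c[i] for i in range(M)]
I = np.eye(2 ** M)

for i in range(M):
    for j in range(M):
        d = 1.0 if i == j else 0.0
        assert np.allclose(c[i] @ cd[j] + cd[j] @ c[i], d * I)      # {c_A, c_B^+}
        assert np.allclose(c[i] @ c[j] + c[j] @ c[i], 0.0)          # {c_A, c_B}
        assert np.allclose(n[i] @ cd[j] - cd[j] @ n[i], d * cd[i])  # [n_i, c_j^+]
        assert np.allclose(c[i] @ cd[j] - cd[j] @ c[i],
                           d * I - 2 * cd[j] @ c[i])                # [c_i, c_j^+]
print("all identities verified")
```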
\section{Non-Linear Canonical Transformations}
Collected here are some useful identities for the computations involved in the non-linear canonical transformations performed in the main text.
\begin{align}
S_{\gamma,i}^2 &= +1 = S_{\mu,j}^2
&
( \pm \i S_{\gamma,j} )^2 &= -1 = ( \pm \i S_{\mu,j} )^2
\end{align}
\begin{align}
\left[ S_{\gamma,i} , S_{\gamma,j} \right] &= 0 = \left[ S_{\mu,i} , S_{\mu,j} \right]
&
\left[ S_{\gamma,i} , S_{\mu,j} \right] &= 0
&
\left[ S_{\gamma} , S_{\mu} \right] &= 0
\end{align}
\begin{align}
\{ \gamma_k , S_{\gamma,j} \} &= 2 \delta_{jk} \gamma_j S_{\gamma,j}
&
\{ \mu_k , S_{\mu,j} \} &= 2 \delta_{jk} \mu_j S_{\mu,j}
\\
[ \gamma_k , S_{\mu,j} ] &= 2 \delta_{jk} \gamma_j S_{\mu,j}
&
[ \mu_k , S_{\gamma,j} ] &= 2 \delta_{jk} \mu_j S_{\gamma,j}
\end{align}
\begin{align}
&\begin{aligned}
P_\gamma S_{\gamma,j}
&= - P_\gamma \gamma_j \mu_j P_\gamma
\\
&= \gamma_j \mu_j P_\gamma P_\gamma
\\
&= - S_{\gamma,j} P_\gamma
\end{aligned}
&
&\begin{aligned}
P_\gamma S_{\mu,j}
&= - P_\gamma \gamma_j \mu_j P_\mu
\\
&= \gamma_j \mu_j P_\gamma P_\mu
\\
&= - S_{\mu,j} P_\gamma
\end{aligned}
\end{align}
The Baker-Campbell-Hausdorff formula relates a product of exponentials to a single exponential of nested commutators,
\begin{align}
\e^{X+Y + \sum_n f_n\left([X,Y]^{(n)}\right)} &= \e^{X} \e^{Y}
\end{align}
\begin{align}
V_{1,j} &= \e^{- \i S_{\gamma,j} \theta_{1,j}/2}
&
V_{2,j} &= \e^{- \i S_{\mu,j} \theta_{2,j}/2}
\\
&= \cos\left(\tfrac{\theta_{1,j}}{2}\right) - \i S_{\gamma,j} \sin\left(\tfrac{\theta_{1,j}}{2}\right)
&
&= \cos\!\left(\tfrac{\theta_{2,j}}{2}\right) - \i S_{\mu,j} \sin\!\left(\tfrac{\theta_{2,j}}{2}\right)
\\
V_{1} &\vcentcolon= \e^{- \i \sum_j S_{\gamma,j} \theta_{1,j}/2} = \prod_{j=1}^{4} V_{1,j}
&
V_{2} &\vcentcolon= \e^{- \i \sum_j S_{\mu,j} \theta_{2,j}/2} = \prod_{j=1}^{4} V_{2,j}
\end{align}
\begin{align}
V_{\gamma,j} V_{\mu,k}
&= \e^{-\i S_{\gamma,j} \theta_{\gamma,j}/2} \e^{-\i S_{\mu,k} \theta_{\mu,k}/2}
\\
&= \left( \1 \cos\left(\tfrac{\theta_{\gamma,j}}{2}\right) - \i S_{\gamma,j} \sin\left(\tfrac{\theta_{\gamma,j}}{2}\right) \right)
\left( \1 \cos\left(\tfrac{\theta_{\mu,k}}{2}\right) - \i S_{\mu,k} \sin\left(\tfrac{\theta_{\mu,k}}{2}\right) \right)
\\
&= \left( \1 \cos\left(\tfrac{\theta_{\mu,k}}{2}\right) - \i S_{\mu,k} \sin\left(\tfrac{\theta_{\mu,k}}{2}\right) \right)
\left( \1 \cos\left(\tfrac{\theta_{\gamma,j}}{2}\right) - \i S_{\gamma,j} \sin\left(\tfrac{\theta_{\gamma,j}}{2}\right) \right)
\\
&= V_{\mu,k} V_{\gamma,j}
\end{align}
\begin{align}
\left[ V_{\gamma,i} , V_{\gamma,j} \right] &= 0 = \left[ V_{\mu,i} , V_{\mu,j} \right]
&
\left[ V_{\gamma,i} , V_{\mu,j} \right] &= 0
&
\left[ V_{\gamma} , V_{\mu} \right] &= 0
\end{align}
\begin{align}
\{ \gamma_k , V_{\gamma,j} \} &= 2 \delta_{jk} \gamma_j S_{\gamma,j}
&
\{ \mu_k , V_{\mu,j} \} &= 2 \delta_{jk} \mu_j S_{\mu,j}
\\
[ \gamma_k , V_{\mu,j} ] &= 2 \delta_{jk} \gamma_j S_{\mu,j}
&
[ \mu_k , V_{\gamma,j} ] &= 2 \delta_{jk} \mu_j S_{\gamma,j}
\end{align}
\chapter{The Mott Transition as a Topological Phase Transition\label{ch:motttopology}}
\chaptermark{Mott Topology}
This chapter presents a more elaborate application of the auxiliary field mapping developed in the previous chapter. In doing so, it uncovers a relationship between the Mott metal-insulator transition in the Hubbard model and the topological phase transition in the SSH model. The auxiliary chains take the form of the generalized SSH models discussed in~\S\ref{ch:genssh}. The cases of mid-gap bands of finite width and of power-law spectral functions, which were only briefly discussed in~\S\ref{ch:genssh}, are examined here in more detail within the context of the Hubbard model's auxiliary field mapping.
This chapter is based on Ref.~\cite{motttopology}.
To reiterate from \S\ref{ch:methods}, the one band Hubbard model is described by the Hamiltonian
\begin{equation}
\hat{H}_{\textsc{h}} = \varepsilon \sum_{j,\sigma} \opd{c}{j,\sigma} \op{c}{j,\sigma} + {t} \sum_{j,\ell,\sigma} \left( \opd{c}{j,\sigma} \op{c}{j+\ell,\sigma} + \opd{c}{j+\ell,\sigma} \op{c}{j,\sigma} \right) + U \sum_{j} \opd{c}{j,\uparrow} \op{c}{j,\uparrow} \opd{c}{j,\downarrow} \op{c}{j,\downarrow}
\tag{\ref*{eq:hubbard}}
\label{eq:hubbard0}
\end{equation}
where $j \in \Gamma_{\textsc{bl}}$ labels the sites of the lattice, taken to be the Bethe lattice $\Gamma_{\textsc{bl}}$ in the following.
One of the initial motivations for this work lies in the observation that the character of the Hubbard model self-energy as the system undergoes the Mott metal-insulator transition is qualitatively similar to the SSH model as it undergoes its topological phase transition. Compare Fig.~\ref{fig:analogy} for the Hubbard model self-energy with the SSH model spectrum in its topological and trivial phases, Fig.~\ref{fig:sshbands}.
\begin{figure}[h]
\subfiglabel{\includegraphics{SU9.pdf}}{3.125,2}{fig:SU9}
\subfiglabel{\includegraphics{SU3.pdf}}{3.125,2}{fig:SU3}
\caption{Self-energy of the Hubbard model at half-filling in the Mott insulator ($U/t=9$ with bandwidth $D=4$) \subref{fig:SU9} and metallic phases ($U/t=3$ with bandwidth $D=3$) \subref{fig:SU3}.\label{fig:analogy}}
\end{figure}
As shown in Fig.~\ref{fig:analogy}, the self-energy of the Hubbard model possesses a phase which is (pseudo)gapped, $-\Im\Sigma(\omega) \sim \omega^2$ for $|\omega|\ll D$, and another phase in which there exists a localized spectral pole within a hard gap, $-\Im\Sigma(\omega) \sim \delta(\omega)$ for $|\omega|\ll D$. This is qualitatively similar to the SSH model which features a gapped phase, and a phase involving a pole within a spectral gap.
In this chapter the critical interaction strength for the Mott transition is taken to be $U_{c} = U_{c2}$, and the physics across this transition is explored.
\section{Mapping to Effective Model}
\sectionmark{The Effective Model}
Following the procedure developed in \S\ref{sec:mapping}, the local self-energy of the infinite-dimensional Hubbard model can be mapped to an effective $1d$ tight-binding chain.
For the case of the Hubbard model on the Bethe lattice, this mapping is shown schematically in Fig.~\ref{fig:hubbardmapping}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[every node/.style={line width=1pt,scale=1.0}, every path/.style={line width=1pt,scale=1.0}]
\input{mapping.tex}
\end{tikzpicture}
\caption[Auxiliary chain mapping of the Hubbard model]{Auxiliary chain mapping of the Hubbard model on the Bethe lattice. The original fully interacting system (left) with on-site interactions (filled circles) is mapped to a system (right) where the self-energy from the local interactions $\Sigma$ has been substituted by hybridization $\Delta$ to auxiliary degrees of freedom (red boxes), resulting in a fully non-interacting system (open circles). Figure adapted from~\cite{motttopology}.\label{fig:hubbardmapping}}
\end{figure}
As a preliminary step, the input self-energy for the effective mapping is calculated for the infinite-dimensional Hubbard model on the Bethe lattice using DMFT-NRG. The procedure as developed in \S\ref{sec:mapping} was for impurity models where the self-energy is localized on the impurity site. In application to the Hubbard model, it is necessary to perform the mapping within the context of DMFT in the infinite dimensional limit to ensure a momentum independent self-energy. The DMFT equations are solved self-consistently so that the local self-energy $\Sigma(\omega)$ at convergence is the proper lattice self-energy, which is local for the infinite dimensional Bethe lattice. In the case of finite systems with a momentum dependent self-energy, the mapping method needs to be modified. This is left for future investigation.
The mapping is performed first for the Hubbard model at $T=0$ and particle-hole symmetry with
the $T=0$ particle-hole asymmetric case discussed later in \S\ref{sec:phasymmmap}.
The complete Hamiltonian of the effective model shown in Fig.~\ref{fig:hubbardmapping} is given by
\begin{subequations}
\begin{align}
\op{H}{\text{eff}} &= \varepsilon \sum_{j,\sigma} \opd{c}{j,\sigma} \op{c}{j,\sigma} + \tilde{t} \sum_{j,\ell,\sigma} \left( \opd{c}{j,\sigma} \op{c}{j+\ell,\sigma} + \hc \right) + \hat{H}_{\text{hyb}} + \hat{H}_{\text{aux}}
\\
\op{H}{\text{hyb}} &= V \smashoperator[lr]{\sum_{\substack{j \in \Gamma_{\textsc{bl}} \\ \sigma \in \{\uparrow,\downarrow\}}}} \left( \opd{c}{j,\sigma} \op{f}{j,\sigma;1} + \hc \right) \label{eq:Hhyb}
\\
\op{H}{\text{aux}} &= \sum_{\substack{j \in \Gamma_{\textsc{bl}} \\ \sigma \in \{\uparrow,\downarrow\}}} \sum_{n=1}^{\infty} \left[ \tensor{e}{_{n}} \opd{f}{j,\sigma;n} \op{f}{j,\sigma;n} + \tensor{t}{_{n}} \left( \opd{f}{j,\sigma;n} \op{f}{j,\sigma;n+1} + \hc \right) \right] \,. \label{eq:Haux}
\end{align}
\end{subequations}
The outer sums run over all sites $j$ of the physical Bethe lattice $\Gamma_{\textsc{bl}}$ for each spin $\sigma$. $\tilde{t}$ is the regularized hopping amplitude between sites on the Bethe lattice, $\tilde{t} = {t}/\sqrt{\kappa}$, with $t$ the bare hopping amplitude (without index). In $\hat{H}_{\text{aux}}$ the inner sum captures the dynamics of the auxiliary degrees of freedom, where the index $n$ labels sites within each auxiliary chain. Within this model there is one auxiliary chain per physical lattice site; the auxiliary chains are decoupled due to the local nature of the self-energy in the infinite dimensional limit.
Before proceeding to the results, a short technical remark is in order with respect to the treatment of the Mott pole in the self-energy of the Mott insulating phase. These remarks are analogous to those discussed in \S\ref{sec:cfetechnicalities} for Fermi liquid inputs to the auxiliary field mapping. The notation here follows that discussion.
For a Mott insulator, $\Delta_0(\omega)$ is hard-gapped for $\omega \in [-\delta,\delta]$, where $2\delta$ is the size of the Mott gap. Inside the gap resides the zero-energy Mott pole, such that $\Delta_0(\omega\to 0)=\frac{\alpha_0}{\omega^+}$, \textit{i.e.} the only low energy contribution to the hybridization comes from the singular Mott pole. Based on the analysis for the Fermi liquid self-energy in \S\ref{sec:cfetechnicalities}, it is readily observed that in the Mott insulating case, the roles of odd and even chain sites are interchanged.
Upon iteration of the recursion algorithm $\Delta_{n+1}(z) = z - \frac{t_n^2}{\Delta_{n}(z)}$, the next hybridization function is found to be $\Delta_1(\omega\to0) = \omega^+ - \frac{t_0^2}{\alpha_0} \omega^+$.
Note that, with reference to the form of the even and odd hybridizations in Eq.~\eqref{eq:flhybparams}, $\beta_n = 0$ for all (even) $n$ in the Mott insulator because the pole sits inside the Mott gap, and the $b_n$ coefficients are likewise zero for all odd $n$, as observed from the form of $\Delta_1(\omega\to0)$. The low-energy asymptotic behavior and coefficient recursion (for $n\ge1$) follow,
\begin{subequations}
\begin{align}
\Delta_{2n-1}(\omega\to 0) &= a_{2n-1}\omega & (a_{2n-1}&<0) \,,
\\
\Delta_{2n}(\omega\to 0) &= \frac{\alpha_{2n}}{\omega^+}\,,
\end{align}
\end{subequations}
where
\begin{subequations}
\begin{align}
a_{2n-1} &= 1-\frac{t_{2n-2}^2}{|\alpha_{2n-2}|} \,,
&
b_{2n-1} &= 0\,,
\\
\alpha_{2n} &= \frac{t_{2n-1}^2}{|a_{2n-1}|} \,,
&
\beta_{2n} &= 0 \,.
\end{align}
\end{subequations}
Thus the pole structure of the Mott insulator $\Delta_n(\omega)$ is reversed with respect to the Fermi liquid.
Importantly, the imaginary part of the hybridizations $\Delta_n(\omega)$ is hard gapped for all $n$, but contains a mid-gap zero energy pole for all \textit{even} $n$.
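The low-energy coefficient recursion above is easily iterated numerically. The sketch below assumes illustrative values $D=1$, $\delta=0.2$ and a Mott-pole weight $\alpha_0=0.1$, with the $t_n$ taken from the asymptotic form of the Mott-insulator chain; it confirms that the negative odd-site slopes $a_{2n-1}$ and the positive even-site pole weights $\alpha_{2n}$ persist along the chain:

```python
import numpy as np

# Coefficient recursion for the Mott insulator (illustrative parameters).
D, delta, alpha0 = 1.0, 0.2, 0.1
t = lambda n: 0.5 * (D + (-1)**n * delta)   # asymptotic SSH-type hoppings

alpha = alpha0
coeffs = []
for n in range(1, 11):
    a = 1.0 - t(2 * n - 2)**2 / abs(alpha)  # odd-site slope a_{2n-1}
    alpha = t(2 * n - 1)**2 / abs(a)        # even-site pole weight alpha_{2n}
    coeffs.append((a, alpha))

# Every odd hybridization is linear with negative slope; every even one
# retains a zero-energy pole with positive weight.
assert all(a < 0 and al > 0 for a, al in coeffs)
print(coeffs[-1])
```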
\section{Topological Phases of the Effective Models}
\sectionmark{Topological Phases}
\subsection{Mott Insulator Regime}
For interaction strength $U>U_c$, the Hubbard model Eq.~\ref{eq:hubbard0} describes a Mott insulator, with two Hubbard bands separated by a hard spectral gap of width $2\delta$. The corresponding self-energy at zero temperature is shown in Fig.~\ref{fig:SU9}, obtained by DMFT-NRG for $U/t=9$. The imaginary part of the self-energy features a mid-gap `Mott pole'\index{Mott pole} throughout the Mott insulating phase, pinned at $\omega_{\textsc{mp}}=0$ (and with finite weight at the transition).
\begin{comment}
\begin{figure}[h]
\includegraphics[width=1.0\columnwidth]{fig2.pdf}
\caption{Lattice self-energy at $T=0$ obtained from DMFT-NRG [panels (a,d)] and corresponding $t_n$ of the auxiliary chain [panels (b,e)]. Left panels show results for the MI ($U/t=9$, $D=4$): the hard gap in $\Im\Sigma(\omega)$ and the Mott pole at $\omega=0$ produce an SSH-type chain in the topological phase, hosting an exponentially-localized boundary zero mode, panel (c). Right panels show the metallic FL ($U/t=3$, $D=3$): the low-energy $\omega^2$ pseudogap in $\Im\Sigma(\omega)$ produces a generalized SSH chain with $1/n$ decay, in the trivial phase.}
\label{fig:2}
\end{figure}
\end{comment}
Mapping to the auxiliary non-interacting chain, Eq.~\ref{eq:Haux}, leads to a model of modified SSH type, as exhibited in Fig.~\ref{fig:mit}. In particular, the hard gap in $\Im\Sigma(\omega)$ generates an alternating sequence of $t_n$ in $\hat{H}_{\text{aux}}$ at large distances from the physical degrees of freedom,
\begin{equation}
\label{eq:tn_MI}
t_n ~~ \overset{\frac{n \delta}{D} \gg 1}{\sim} ~~ \tfrac{1}{2}[D+(-1)^n\delta] \,.
\end{equation}
In the Mott insulator phase, the auxiliary chain parameters are alternating for all $n$, starting from a \textit{weak} bond ($t_1<t_2$). It is this feature that produces the Mott mid-gap pole at $\omega=0$. Additional structure in the Hubbard bands merely gives rise to transient structure in the $t_n$ for small $n$, but importantly the parity of the alternation, $t_{2n-1}/t_{2n}<1$, is preserved for all $n$.
The SSH model in its topological phase (Eq.~\ref{eq:Haux} with $t_n$ given by Eq.~\ref{eq:tn_MI} for all $n\ge 1$) hosts an exponentially-localized boundary zero-mode that is robust to parity-preserving perturbations~\cite{ssh,shortcourse}. Similarly, the zero-energy Mott pole corresponds to a robust and exponentially-localized state living at the end of the auxiliary chain (on its boundary with the physical degrees of freedom of the original lattice). This can be readily seen from the transfer matrix method, which gives the wavefunction amplitude of the zero-energy state at odd sites $(2n-1)$ of $\hat{H}_{\text{aux}}$ as
\begin{equation}
|\psi_0(2n-1)|^2\sim \prod_{x=1}^{n} \frac{t_{2x-1}}{t_{2x}} ,
\end{equation}
which at large $n$ decays exponentially as $\exp(-n/\xi)$ with $\xi \approx D/2\delta$ for small $\delta$
(while $|\psi_0(2n)|^2=0$ for all $n$)~\cite{ssh,shortcourse}. The boundary-localized nature of this zero-mode state is confirmed by exact diagonalization of $\hat{H}_{\text{aux}}$, shown in Fig.~\ref{fig:miwavefunction}.
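Such a diagonalization is straightforward to sketch with the asymptotic hoppings of Eq.~\eqref{eq:tn_MI} and illustrative values $D=1$, $\delta=0.2$; an odd chain length is chosen so that exactly one zero mode is present. The check below reproduces the vanishing support on even sites and the boundary weight $1-(t_1/t_2)^2$ predicted by the transfer-matrix product:

```python
import numpy as np

# Exact diagonalization of a finite auxiliary SSH chain (illustrative values).
D, delta, N = 1.0, 0.2, 201                    # odd length: one exact zero mode
t = np.array([0.5 * (D + (-1.0)**n * delta) for n in range(1, N)])
H = np.diag(t, 1) + np.diag(t, -1)
evals, evecs = np.linalg.eigh(H)

i0 = np.argmin(np.abs(evals))
psi0 = evecs[:, i0]
assert abs(evals[i0]) < 1e-10                  # zero mode pinned by chiral symmetry
assert np.allclose(psi0[1::2], 0.0, atol=1e-8) # no support on even sites

# Transfer-matrix prediction: |psi(3)/psi(1)|^2 = (t_1/t_2)^2 = r^2
r = t[0] / t[1]                                # = (D - delta)/(D + delta) < 1
print(psi0[0]**2, 1.0 - r**2)                  # boundary weight matches 1 - r^2
```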
\begin{figure}[h]
\begin{subfigure}{0.36\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{mis.pdf}};
\node at (2.2,2) {\footnotesize\subref*{fig:mis}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:mis}}
\end{subfigure}
\begin{subfigure}{0.69\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{mit.pdf}};
\node at (4.4,2) {\footnotesize\subref*{fig:mit}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:mit}}
\end{subfigure}
\caption{Self-energy \subref{fig:mis} and effective chain parameters \subref{fig:mit} corresponding to the Mott insulating phase, $U/t = 9$. Analogously to the topological phase of the SSH model, the auxiliary chain is initialized with a weak bond.\label{fig:miresult}}
\end{figure}
\begin{figure}[h]
\subfiglabel{\includegraphics[scale=1]{miwf.pdf}}{3.125,2}{fig:miwf}
\subfiglabel{\includegraphics[scale=1]{miwflog.pdf}}{3.125,2}{fig:miwflog}
\caption[Wavefunction amplitude]{Zero energy wavefunction amplitude of the effective chain, exhibiting exponential localization. Only the amplitude on odd sites is plotted, as the wavefunction has zero support on the even sites.\label{fig:miwavefunction}}
\end{figure}
\subsection{Fermi Liquid Regime\label{sec:flregime}}
For $U<U_c$, the Hubbard Hamiltonian describes a correlated metal, with low-energy Fermi liquid properties characterized by a quadratic dependence of the self-energy, $-t\Im\Sigma(\omega\rightarrow 0) \sim (\omega/Z)^2$, in terms of the quasiparticle weight $Z$~\cite{hewson,bullahubbard}\index{quasiparticle weight}. The quasiparticle weight is obtained from the real part of the self-energy and is given by~\cite{marino,bullahubbard}
\begin{equation}
Z = \left( 1 - \left. \frac{\d \Re \Sigma}{\d \omega} \right\rvert_{\omega=0} \right)^{-1} \,.
\end{equation}
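As a minimal worked example of this formula, the derivative can be evaluated by a central finite difference on a mock Fermi-liquid self-energy $\Re\Sigma(\omega) = \alpha\omega + \beta\omega^3$ (the values of $\alpha$ and $\beta$ below are arbitrary illustrations, not DMFT output):

```python
import numpy as np

# Quasiparticle weight from a numerical derivative of Re Sigma at omega = 0.
alpha, beta = -3.0, 0.5                          # mock Fermi-liquid coefficients
re_sigma = lambda w: alpha * w + beta * w**3

h = 1e-4
dSig = (re_sigma(h) - re_sigma(-h)) / (2 * h)    # central difference
Z = 1.0 / (1.0 - dSig)
print(Z)   # = 1/(1 - alpha) = 0.25 up to O(h^2) error
```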
Fig.~\ref{fig:fls} shows the $T=0$ self-energy deep in the Fermi liquid phase, obtained by DMFT-NRG for $U/t=3$.
A distinctive form for the auxiliary chain hopping parameters is obtained from the continued fraction expansion, arising due to the low-energy pseudogap in $\Im\Sigma(\omega)$,
\begin{equation}
t_n \overset{nZ\gg1}{\sim} \frac{D}{2} \sqrt{1 - (-1)^{n} \frac{r}{n+d}}
\label{eq:tn_FL}
\end{equation}
where $r=2$ is the exponent of the low-energy spectral power-law, and $d\sim 1/Z$.
This result follows from the scaling behavior observed in \S\ref{sec:pseudogapssh}.
Eq.~\ref{eq:Haux} with hopping parameters $t_n$ given by Eq.~\ref{eq:tn_FL} generalizes the standard hard-gapped SSH model to the pseudogapped case: the alternating sequence of $t_n$ again has a definite parity, but with a decaying $1/n$ envelope. Since $t_{2n-1}/t_{2n}>1$ for all $n$ (the chain starting this time from a \textit{strong} bond), the analogous SSH model would be in its trivial phase; likewise here, the Fermi liquid phase of the Hubbard model may be regarded as trivial. There is no localized boundary state of the auxiliary chain in the Fermi liquid phase.
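The parity structure implied by Eq.~\eqref{eq:tn_FL} is easily checked numerically; the sketch below uses $r=2$ and an illustrative $d=10$ (recall $d\sim 1/Z$) and verifies that every odd bond is strong and every even bond weak, with the alternation decaying towards the envelope $D/2$:

```python
import numpy as np

# Hoppings of the trivial-parity auxiliary chain (illustrative d).
D, r, d = 1.0, 2.0, 10.0
n = np.arange(1, 2001)
t = 0.5 * D * np.sqrt(1.0 - (-1.0)**n * r / (n + d))

assert np.all(t[0::2] > t[1::2])     # t_{2n-1}/t_{2n} > 1 for all n
assert abs(t[-1] - 0.5 * D) < 1e-3   # 1/n envelope approaches D/2
print(t[:4])
```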
\begin{figure}[h]
\begin{subfigure}{0.36\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{fls.pdf}};
\node at (2.2,2) {\footnotesize\subref*{fig:fls}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:fls}}
\end{subfigure}
\begin{subfigure}{0.69\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{flt.pdf}};
\node at (4.4,2) {\footnotesize\subref*{fig:flt}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:flt}}
\end{subfigure}
\caption{Self-energy \subref{fig:fls} and corresponding chain parameters \subref{fig:flt} at $U/t = 3$, characteristic of the metallic Fermi liquid phase. Analogously to the trivial phase of the SSH model, the auxiliary chain is initialized with a strong bond.\label{fig:flresult}}
\end{figure}
In both the metallic and Mott insulating regimes, the auxiliary chain takes the form of staggered hopping parameters with period $2$. This is due to the global appearance of two bands separated by an energy gap. From the analysis shown in \S\ref{ch:genssh}, there is a relationship between the multiplicity of bands and gaps and the period of the stagger in the hopping parameters.
\section{Mid-Gap Peaks and Domain Walls\label{sec:mottbands}}
This section elaborates on the analysis of \S\ref{sec:singledw} on the effect of domain walls within the unit cell of generalized SSH models. There it was seen that a single domain wall in a repeated unit cell produces a mid-gap band, as opposed to a mid-gap pole as in a domain wall-free SSH model. The reasoning behind this phenomenon is that the domain wall hosts a localized state which possesses finite overlap with the states localized on neighboring domain walls.
\begin{figure}[h]
\centering
\begin{subfigure}{\linewidth}
\phantomsubcaption{\label{fig:s2a}}
\end{subfigure}
\begin{subfigure}{\linewidth}
\phantomsubcaption{\label{fig:s2b}}
\end{subfigure}
\begin{subfigure}{\linewidth}
\phantomsubcaption{\label{fig:s2c}}
\end{subfigure}
\begin{subfigure}{\linewidth}
\phantomsubcaption{\label{fig:s2d}}
\end{subfigure}
\includegraphics[width=0.625\linewidth]{fig_S2.pdf}
\caption[Generation of mid-gap bands in SSH models from domain wall superlattice.]{Beating in auxiliary chain hopping parameters $\{t_n\}$ and their relation to the width of sharp mid-gap peaks. This model is generated first by prescribing the spectrum in \subref{fig:s2a} and then generating the associated $\{t_n\}$ in \subref{fig:s2b},\subref{fig:s2c} from a moment analysis. The width of the mid-gap bands in \subref{fig:s2a} is exaggerated for illustrative purposes to emphasize that these are not poles. On a linear plot the peaks corresponding to the parameters in \subref{fig:s2b},\subref{fig:s2c} are much narrower than what is illustrated.
Figure reproduced from~\cite{motttopology}.}
\label{fig:s2}
\end{figure}
In Fig.~\ref{fig:s2a} such a model spectrum of full bandwidth $2D$ is shown, consisting of two outer SSH bands separated by a gap $2\delta$. Inside this spectral gap there exist two inner SSH bands of full bandwidth $2D_p$ and gap $2\delta_p$, centered around $\omega_p$, such that the width of each inner SSH band is given by $\Delta_p = D_p-\delta_p$.
In order to understand the effect of the mid-gap features, $\omega_p$ is kept fixed and $\Delta_p$ is varied. The moment expansion technique, discussed in \S\ref{sec:moments}, is employed to determine the respective tight-binding chain represented by $\{t_n\}$ for $\Delta_p/D = 10^{-3}$ (see Fig.~\ref{fig:s2b}) and $\Delta_p/D = 10^{-5}$ (see Fig.~\ref{fig:s2c}) at fixed $\delta/D = 0.2$ and $\omega_p/D = 10^{-2}$. As seen in Figs.~\ref{fig:s2b} and \ref{fig:s2c}, the chain represented by $\{t_n\}$ exhibits a periodic beat pattern with multiple domain walls (highlighted with red squares), such that each domain wall contributes a localized topological state, and these states hybridize amongst themselves. These domain walls form a periodic pattern; by contrast, the additional domain walls are completely absent in Fig.~\ref{fig:dwdistance}, where $\Delta_p=0$ because the spectrum consists of two mid-gap poles at $\pm\omega_p$.
Recalling the analysis of SSH models with a single domain wall in \S\ref{ch:genssh}, particularly Fig.~\ref{fig:dwdistance}, it can be inferred that the location of the first domain wall, and hence the length $n_1$, is pinned by the value of $\omega_p/D$; $n_1$ grows in magnitude as $\omega_p \to 0$. The additional beating is a manifestation of the presence of a band of topological states, represented as a mid-gap peak of width $\Delta_p$; $\Delta_p$ determines the length $n_2$, which grows in size as $\Delta_p \to 0$. In other words, the original SSH medium of length $n_2$ is now interrupted by multiple domains of length $n_1$, due to the presence of a band of topological \textit{defects} instead of just two topological excitations, such that the hybridization $\Delta_p$ is determined by $n_2$, the real-space separation between these \textit{defects}. Indeed, these defects, being topological in nature, hybridize with each other with an exponentially small amplitude $\Delta_p\sim D \exp(-n_2\delta/D)$. This is shown in Fig.~\ref{fig:s2d}, where $\Delta_p$ is varied in the calculations and the respective $n_2$ is determined (squares); the data confirm that $\Delta_p$ decays exponentially in $n_2$ (solid lines).
This is essentially the many-domain wall equivalent to the analysis presented in \S\ref{sec:singledw}. There it was seen that a pair of domain walls produces a pair of mid-gap states (\textit{cf.} Fig.~\ref{fig:tanhdw} where the topological boundary is considered as a domain wall). The distance between the mid-gap poles was seen to be inversely proportional to the distance between the domain walls in the chain (Fig.~\ref{fig:dwdistance}). In the case here, rather than single poles, the mid-gap peaks are bands of finite width. As there are still many domain walls in the system, the states localized on these domain walls are still able to hybridize with each other and maintain the bands.
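The exponential dependence of the hybridization on the wall separation can be demonstrated with a minimal two-wall model: a finite chain with trivial dimerization whose phase is flipped on a stretch of $L$ bonds, creating one wall at each end of the stretch. The parameters below ($D=1$, $\delta=0.2$, $240$ sites) are illustrative:

```python
import numpy as np

def chain_with_walls(N, w1, w2, D=1.0, delta=0.2):
    """Bond list with the dimerization phase flipped on bonds w1 <= n < w2,
    creating one domain wall at each end of the flipped stretch."""
    t = np.empty(N - 1)
    for n in range(1, N):
        phase = 0 if w1 <= n < w2 else 1       # phase 1 outside: trivial ends
        t[n - 1] = 0.5 * (D + (-1.0)**(n + phase) * delta)
    return t

def midgap_splitting(L, N=240, w1=80):
    t = chain_with_walls(N, w1, w1 + L)
    evals = np.linalg.eigvalsh(np.diag(t, 1) + np.diag(t, -1))
    return np.sort(np.abs(evals))[:2].mean()   # the +/- wall-bound pair

e20, e40 = midgap_splitting(20), midgap_splitting(40)
print(e20, e40)   # splitting shrinks sharply as the walls are separated
assert e40 < 0.2 * e20
```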
In the terminology of a continuum model such as those discussed in \S\ref{sec:continuumfieldtheory}, this configuration of domain walls could be termed a `soliton gas', or `soliton ice' as the solitons are immobile. While the domain walls may informally be described as `propagating' through the chain, the auxiliary chains constructed here have fixed parameters; an auxiliary chain and its set of parameters $\{t_n\}$ are specific to one (self-energy) spectrum of the physical model for a specific set of physical system parameters. The physical system parameters are adjusted adiabatically to produce a new input for the auxiliary field mapping. The auxiliary models are not obtained from dynamical changes in the auxiliary system parameters, and therefore the domain walls do not propagate in a dynamical sense.
\section{The Mott Transition}\label{sec:motttransition}
Deep in either the Mott insulator or Fermi liquid phases of the Hubbard model, the auxiliary chains are of generalized SSH model type, with the Mott insulator phase being topologically non-trivial. A robust and exponentially localized zero-energy state lives on the boundary between the auxiliary and physical systems throughout the Mott insulating phase, corresponding to the Mott pole. However, richer physics is observed on approaching the Mott transition\index{Mott transition} from the Fermi liquid phase. In particular, the Mott transition occurs \textit{without} bulk gap closing of the Hubbard bands. Such a transition is unusual for a topological phase transition.
\begin{figure}[h!]
\centering
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics{SU5_82zoomlog.pdf}};
\node at (5.375,1.875) {\footnotesize\subref*{fig:SU5_82zoomlog}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-1.5\baselineskip}\label{fig:SU5_82zoomlog}}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics{tnU5_82.pdf}};
\node at (5.375,1.75) {\footnotesize\subref*{fig:tnU5_82}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:tnU5_82}}
\end{subfigure}
\caption{Auxiliary field mapping for $U/t = 5.82$. The position of the peaks (inset of \subref{fig:SU5_82zoomlog}) is inversely proportional to the distance between the domain walls in the auxiliary chain, shown in \subref{fig:tnU5_82}. Compare the situation to Fig.~\ref{fig:U5_86}.\label{fig:U5_82}}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics{SU5_86zoomlog.pdf}};
\node at (5.375,1.75) {\footnotesize\subref*{fig:SU5_86zoomlog}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-1.5\baselineskip}\label{fig:SU5_86zoomlog}}
\end{subfigure}
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics{tnU5_86.pdf}};
\node at (5.375,1.875) {\footnotesize\subref*{fig:tnU5_86}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:tnU5_86}}
\end{subfigure}
\caption{Auxiliary field mapping for $U/t = 5.86$. The position of the peaks (inset of \subref{fig:SU5_86zoomlog}) is inversely proportional to the distance between the domain walls in the auxiliary chain, shown in \subref{fig:tnU5_86}. Compare the situation to Fig.~\ref{fig:U5_82}.\label{fig:U5_86}}
\end{figure}
In the vicinity of the transition on the Fermi liquid side, the self-energy develops a preformed gap, inside which are peaks of finite width centered at $\pm\omega_p$ with $\omega_p \propto t \sqrt{Z}$~\cite{bullahubbard}, while quadratic pseudogap behavior sets in on the lowest-energy scales $\lvert \omega \rvert \ll \omega_p$. The transition corresponds to $Z\to0$.
These characteristics can be interpreted as a composite of features which appeared in the generalized SSH models in \S\ref{ch:genssh}.
As shown in the previous section \S\ref{sec:flregime}, deep in the Fermi liquid phase the auxiliary chain begins with a strong bond; however, close to the transition the auxiliary chains in this phase feature an initial \textit{weak} bond. In the SSH model such a change in the system parameters necessitates a topological phase transition, but in the Hubbard model there is no such transition within the metallic phase.
The explanation for this behavior in the auxiliary system is that as $U$ is increased,
a domain wall pair forms at the end of the chain, thereby flipping the parity of the first bond. One domain wall remains fixed to the end of the chain, and the other propagates into the chain with increasing $U$, approaching $U_c$ from below.
Analogously to the case in \S\ref{sec:domwalls}, increasing $U$ produces a superlattice of domain walls in $\hat{H}_{\text{aux}}$, each hosting a localized state which hybridize together to form low energy bands of finite width.
This mechanism is reminiscent of the vortex-pair dissociation in the Berezinskii-Kosterlitz-Thouless transition~\cite{berezinskii1,berezinskii2,kosterlitzthouless1,kosterlitzthouless2}.
The topological phase transition occurs without bulk gap closing.
\subsection{Toy Model}\label{sec:mttoy}
The preceding analysis of the auxiliary chains can be checked by engineering a toy model with the intent of reproducing a spectral function with specific features. The toy model to be generated is completely determined by a set of hopping terms $\{t_n\}$ with a given parameterization. The choice of parameterization is informed both by the analysis in the preceding section \S\ref{sec:mottbands} as well as the analysis of extended SSH-type models presented in \S\ref{ch:genssh}.
As shown in the previous section, the self-energy in the metallic phase near $U_{c}$ exhibits two prominent low-energy features: a Fermi liquid pseudogap, and two large spectral peaks outside the pseudogap region but inside the main gap. A parameterization of a set $\{t_n\}$ for a tight-binding chain whose spectrum exhibits these features of the Hubbard model self-energy in this regime is given by
\begin{equation}
t_n = \frac{D}{2} \sqrt{ 1 - (-1)^{n} \frac{r}{n+d} \left[ 1 - \beta \cos\left( \tfrac{2\pi n}{\lambda} + \phi \right) \right] } \,.
\label{eq:tntoy}
\end{equation}
The structure of this parameterization can be understood as a composition of a part which produces a low energy power-law component
and a part which generates periodic domain walls. The $1/n$ part imposes an asymptotic decay envelope which results in power-law spectral features at low energy. As in \S\ref{sec:powerlawssh}, $r$ and $d$ parameterize the decay envelope. The domain walls are prescribed by the cosine term, with $\beta$, $\lambda$, and $\phi$ parameters determining the domain wall envelopes and their distribution in the chain.
It may be recalled that the enveloping function for domain walls in \S\ref{sec:domwalls} was a $\tanh$ profile rather than a (co)sine. This turns out not to be a significant issue here, as the density of domain walls is such that the resulting spectrum is insensitive to this difference in parameterization.
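As an illustration of Eq.~\eqref{eq:tntoy}, the hoppings can be generated numerically and the domain walls counted. The following Python sketch is illustrative only and is not part of the thesis calculations; in particular, the decay scale $r$ used for the plotted figures is not quoted above, so the values $r=1$ and $D=2$ below are hypothetical choices.

```python
import numpy as np

# Sketch of the toy hoppings t_n of the parameterization above.  The decay
# scale r for the plotted data is NOT quoted in the text: r = 1.0 and the
# half-bandwidth D = 2.0 are hypothetical choices; (beta, d, phi, lam)
# follow the quoted set (3, 15, 0.1, 30).
D = 2.0
r = 1.0
beta, d, phi, lam = 3.0, 15.0, 0.1, 30.0

def t_toy(n):
    """Hopping t_n from the toy parameterization."""
    env = (r / (n + d)) * (1.0 - beta * np.cos(2.0 * np.pi * n / lam + phi))
    return 0.5 * D * np.sqrt(1.0 - (-1.0) ** n * env)

n = np.arange(1, 2001)
t = t_toy(n)

# Staggering relative to the uniform value D/2: with beta > 1 its sign flips
# whenever the bracket 1 - beta*cos(...) changes sign, i.e. at domain walls
# (two per period lam).
s = ((-1.0) ** n) * (t - 0.5 * D)
walls = int(np.sum(s[:-1] * s[1:] < 0))
```

With $\beta>1$ the cosine bracket changes sign twice per period $\lambda$, so the parity of the bond alternation flips regularly down the chain, and the $1/(n+d)$ factor makes $t_n \to D/2$ asymptotically.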
\begin{figure}[h!]
\centering
\includegraphics{tn_toy.pdf}
\caption{Hopping parameters for toy model prescribed by Eq.~\eqref{eq:tntoy} with parameterization $(\beta,d,\phi,\lambda)=(3,15,0.1,30)$. This set of hoppings yields the spectrum plotted in Fig.~\ref{fig:toyspec}.\label{fig:tn_toy}}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics{toycomp.pdf}};
\node at (5.375,2.375) {\footnotesize\subref*{fig:toyspec}};
\node at (5.375,-0.25) {\footnotesize\subref*{fig:hsetoy}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:toyspec}}
\phantomsubcaption{\label{fig:hsetoy}}
\end{subfigure}
\caption[Comparison of the toy model with Hubbard data]{Spectrum of the toy model Eq.~\eqref{eq:tntoy} parameterized by $(\beta,d,\phi,\lambda)=(3,15,0.1,30)$ \subref{fig:toyspec} compared to a corresponding self-energy of the Hubbard model at $U/t = 5.6$ with $D=3$ \subref{fig:hsetoy}.}
\end{figure}
In this model, the Mott transition\index{Mott transition} occurs in the limit $\lambda,\,d \to \infty$. In this limit the domain walls become increasingly diffuse in the chain. This can be understood in terms of the discussion above in \S\ref{sec:mottbands}. The relation to the Mott transition is that the positions of the peaks in the self-energy are proportional to the quasiparticle weight $Z$, which vanishes across the transition. As in the above, this can be manifested in the toy model by prescribing that the domain walls become infinitely delocalized.
This toy model demonstrates that the lessons learned from \S\ref{ch:genssh} can be applied to generate models whose spectra possess specific features.
\section{Particle-Hole Asymmetry\label{sec:phasymmmap}}
The preceding analysis can also be applied to the more complex situation away from particle-hole (\textit{ph}) symmetry. The \textit{ph}-asymmetry is quantified by the parameter $\eta \vcentcolon= 1-2\mu/U$, which involves the chemical potential $\mu$ and local Coulomb interaction strength $U$. At \textit{ph}-symmetry, $\mu = U/2$ and $\eta = 0$. For $\eta\neq 0$, the system is not \textit{ph}-symmetric, and $\Im\Sigma(\omega)\neq \Im\Sigma(-\omega)$. In the Mott insulating phase in particular, the Mott pole is not located precisely at the Fermi energy. These features lead to some differences in the mapping to the auxiliary chain and the subsequent analysis in terms of an emergent topology. However, the important conclusion, as explained below, is that the topological classification (Fermi liquid trivial, Mott insulator topologically non-trivial) carries over to the \textit{ph}-asymmetric case, and holds for the Mott transition\index{Mott transition} arising at any $\eta$.
First, it is important to distinguish the \textit{ph}-asymmetry $\eta$ from the average filling $\langle \op{n}{} \rangle$, which must be determined self-consistently. At \textit{ph}-symmetry $\eta=0$, the system is exactly half-filled, $\langle \op{n}{} \rangle = 1$. However, note that $\langle \op{n}{} \rangle = 1$ pertains throughout the Mott insulator phase for any $\eta$, and also $\langle \op{n}{} \rangle \rightarrow 1$ as $U \rightarrow U_c^-$ in the metallic phase for any $\eta$~\cite{logangalpin}. The immediate vicinity of the Mott metal-insulator transition is in fact always at half-filling, for any $\eta$.
In the metallic phase, the lattice self-energy takes the usual Fermi liquid form, $-t\Im\Sigma(\omega)\sim (\omega/Z)^2$ at low energies $|\omega| \ll \omega_c$, where $\omega_c \sim Z t$ is the lattice coherence scale and $Z$ is the quasiparticle weight\index{quasiparticle weight}. This holds for \textit{any} asymmetry $\eta$, and as such there is an emergent low-energy \textit{ph}-symmetry in the precise sense that $\Im\Sigma(\omega) = \Im\Sigma(-\omega)$ for all $\lvert \omega \rvert \ll \omega_c$, independent of $\eta$. Since low energies in $\Sigma(\omega)$ roughly correspond to large $n$ down the auxiliary chain, one therefore expects the `bulk' of the Fermi liquid auxiliary chain to be the same as in the \textit{ph}-symmetric case already studied. At higher energies, \textit{ph}-asymmetry shows up in an asymmetry between the upper and lower Hubbard bands; in general this leads to $\Re\Sigma(\omega \rightarrow 0) \neq 0$ for $\eta \neq 0$. However, also note that $\Re\Sigma(\omega \rightarrow 0) \rightarrow 0$ as $U\rightarrow U_c^-$ approaching the Mott transition from the Fermi liquid side, independent of the asymmetry $\eta$; this provides a stronger notion of emergent \textit{ph}-symmetry in the close vicinity of the Mott transition (see Ref.~\cite{logangalpin} for details). Aspects of the generic Mott transition at $\eta \neq 0$ might therefore be expected to be related to the \textit{ph}-symmetric case at $\eta = 0$.
The auxiliary field mapping of \S\ref{sec:mapping} requires some generalizations to be applicable to the \textit{ph}-asymmetric case~\cite{motttopology}. The auxiliary chain takes the form of Eq.~\eqref{eq:Haux}, with finite on-site potentials, $\{e_n\}$ in general. The continued fraction expansion of the self-energy follows as,
\begin{align}
\Sigma(\omega)\equiv \Delta_0(\omega)=\cfrac{t_0^2}{\omega^+-e_1-\cfrac{t_1^2}{\omega^+-e_2-\cfrac{t_2^2}{\phantom{a}\ddots}}} \;.
\label{eq:G_CFE_ph_asymm}
\end{align}
Similar to the discussion in \S\ref{sec:mapping}, the initial bond of the auxiliary chain
is obtained from the relation $\int \d\omega \mathcal{A}_0(\omega) \equiv -\tfrac{1}{\pi}\int \d\omega \Im\Delta_0(\omega) = t_0^2$. Subsequently, all $t_n$ and $e_n$ for $n>0$ can be determined recursively using the relations
\begin{subequations}
\begin{align}
\int \d\omega \mathcal{A}_n(\omega) \equiv -\frac{1}{\pi}\int \d\omega\, \Im\Delta_n(\omega) &= t_n^2
\intertext{and}
-\frac{1}{\pi}\int \d\omega\, \frac{\omega{\Im}\Delta_{n-1}(\omega)}{t_{n-1}^2} &= e_n \,,
\end{align}
\end{subequations}
where $e_1$ is the on-site energy of the boundary site in the isolated $\op{H}{\text{aux}}$ coupled to the physical degrees of freedom via $\op{H}{\text{hyb}}$ \eqref{eq:Hhyb}. Note that in the \textit{ph}-symmetric case, $e_n=0$ for all $n$.
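This recursion can be illustrated numerically. The moment relations above are mathematically equivalent to a Lanczos (Stieltjes orthogonal-polynomial) tridiagonalization of the spectral measure of $\Delta_0$; the following Python sketch, written under that equivalence and not taken from the thesis code, recovers the $\{t_n,e_n\}$ of a synthetic chain exactly from the boundary spectral measure.

```python
import numpy as np

# Sketch (not the thesis code): the moment relations are equivalent to a
# Lanczos / Stieltjes three-term recurrence on the spectral measure of
# Delta_0.  Here {e_n, t_n} of a synthetic chain are recovered exactly.
rng = np.random.default_rng(0)
N = 8
e = rng.uniform(-0.5, 0.5, N)       # synthetic on-site energies e_1..e_N
t = rng.uniform(0.5, 1.5, N - 1)    # synthetic hoppings t_1..t_{N-1}
t0 = 0.8                            # coupling to the physical site

# Spectral measure of Delta_0 = t0^2 G_11: weights t0^2 |<1|m>|^2 at E_m.
H = np.diag(e) + np.diag(t, 1) + np.diag(t, -1)
E, V = np.linalg.eigh(H)
w = t0**2 * V[0, :] ** 2

t0_rec = np.sqrt(w.sum())           # zeroth moment of A_0 gives t0^2
p_prev = np.zeros_like(E)
p = np.ones_like(E) / np.sqrt(w.sum())   # normalized w.r.t. the measure
e_rec, t_rec = [], []
for n in range(N):
    a = np.sum(w * E * p * p)       # e_{n+1}: first moment at this level
    e_rec.append(a)
    if n < N - 1:
        q = (E - a) * p - (t_rec[-1] if t_rec else 0.0) * p_prev
        b = np.sqrt(np.sum(w * q * q))   # t_{n+1}
        t_rec.append(b)
        p_prev, p = p, q / b
```

Starting the recurrence from the boundary spectral weights reproduces the Jacobi (tridiagonal) matrix of the chain exactly, which is the content of the recursion relations above.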
\begin{comment}
\begin{figure}[h]
\caption{Lattice self-energy at $T=0$ and particle-hole asymmetry, $\eta=1/4$ obtained using DMFT-NRG [panels (a,e)] and corresponding auxiliary chain hopping parameters $\{t_n\}$ and onsite potential energies $\{e_n\}$ [panels (b,c) (MI), panels (f,g) (FL)]. Left panels show results for MI ($U/t=9.0,\,D=4.625$): the hard gap and pole at $E_{MP}\approx 0.12D$ produces a generalized SSH model in the topological phase, hosting a mode $\Psi_{\text{MP}}$ satisfying $H_{\text{aux}}\Psi_{\text{MP}}=E_{MP}\Psi_{\text{MP}}$ which is exponentially-localized on the boundary [panel (d)]. Right panels show results for FL ($U/t=3.0,\,D=4.5$). The low energy $\omega^2$ pseudogap in $\Im\Sigma$ results in an asymptotic $(-1)^n/n$ decay of the respective $t_n$ and $e_n$. Furthermore, the $\{e_n\}$ oscillate around zero (and tend to zero) at large $n$, indicative of a low energy emergent \textit{ph}-symmetric dynamics.}
\label{fig:MI_FL_ph_asymm}
\end{figure}
\end{comment}
The self-energy $\Im\Sigma(\omega)$ of a \textit{ph}-asymmetric Mott insulator consists of two high energy Hubbard bands centered on some high energy positive $(\omega_+)$ and negative $(\omega_-)$ values respectively, shown in Fig.~\ref{fig:S_eta0_25}. There also exists an insulating charge gap between the Hubbard bands of width $2\delta = | \delta_+ - \delta_- |$ where $\delta_\pm$ denotes the inner band edge on the positive (negative) energy side. Here $|\delta_+| \neq |\delta_-|$ and $|\omega_+ | \neq | \omega_-|$, unlike in the \textit{ph}-symmetric case. Additionally, in the \textit{ph}-asymmetric case the Mott pole inside the insulating gap is located away from the Fermi level at $\omega = e_0 \equiv E_{\text{MP}}$.
Thus the \textit{ph}-asymmetric Mott insulator self energy is of the form $\Sigma(\omega) \equiv \Delta_0(\omega) = \Delta_0^{\text{reg}}(\omega) + \frac{\alpha_0}{\omega^+ - e_0}$ where $\Im\Delta_0(\omega) = 0$ for $\delta_- < \omega < \delta_+$. Since the Mott pole of weight $\alpha_0$ resides in the gap, $\delta_- < e_0 < \delta_+$ and
$\alpha_0^{-1}=-\frac{1}{\pi}\int_{-\infty}^{\infty} \d\omega\frac{\Im G(\omega)}{(\omega - e_0)^2}$, where $G(\omega)$ is the local lattice Green function of the Mott insulator.
The continued fraction expansion is set up via
\begin{align}
\Delta_{n-1}(\omega) &= t_{n-1}^2 \tensor*{\widetilde{G}}{^{(n)}_{1,1}}(\omega),\\
\tensor*{\widetilde{G}}{^{(n)}_{1,1}}(\omega) &= \frac{1}{\omega^+ - e_n - \Delta_n(\omega)}.
\end{align}
Following the same logic based on the analytic structure of the complex $\Delta_n$'s as in the \textit{ph}-symmetric case,
\begin{align}
\Delta_{2n-1}(\omega)&=\omega^+-e_{2n-1}-\frac{t_{2n-2}^2}{\Delta_{2n-2}^{\text{reg}}(\omega)+\frac{\alpha_{2n-2}}{\omega^+ - e_{2n-2}}},\\
\Delta_{2n}(\omega)&=\omega^+ - e_{2n}-\frac{t_{2n-1}^2}{\Delta_{2n-1}(\omega)} \notag\\
&=\Delta_{2n}^{\text{reg}}(\omega)+\frac{\alpha_{2n}}{\omega^+ - e_{2n}},
\end{align}
where $n\ge1$ and every \textit{even} $-\Im\Delta_{2n}(\omega)$ is gapped at low energies with a pole of weight $\alpha_{2n}$ located inside this gap at $\omega = e_{2n}$. Every \textit{odd} $-\Im\Delta_{2n-1}(\omega)$ is gapped, and $\Delta_{2n-1}$ is regular (analytic) in the complex plane. Since $\Delta_{2n-1}(\omega)$ is purely real for $\delta_-<\omega<\delta_+$, an isolated pole exists inside the gap in $\Delta_{2n}(\omega)$ at $e_{2n}$, where $\Re\Delta_{2n-1} \rvert_{\omega=e_{2n}}=0$.
The following relations are used to obtain $\{t_n\}$ and $\{e_n\}$ $\forall n\ge1$:
\begin{align}
&e_n = - \frac{1}{\pi t_{n-1}^2} \int \d\omega \, \omega\Im\Delta_{n-1}^{\text{reg}}(\omega)+\frac{\alpha_{n-1} e_{n-1}}{t_{n-1}^2},\\
&t_n^2 = - \frac{1}{\pi} \int \d\omega \Im\Delta_{n}^{\text{reg}}(\omega)+\alpha_n,
\end{align}
where $\Im\Delta_{n}^{\text{reg}} = \Im\Delta_{n}$ for \textit{odd} $n$, and the respective pole weight $\alpha_n=0$ for all \textit{odd} $n$.
The pole weight $\alpha_{2n}$ on the \textit{even} sites is obtained using the relation
\begin{align}
\alpha_{2n}^{-1} = -\frac{1}{\pi t_{2n-1}^2} \int_{-\infty}^{\infty} \d\omega\frac{\Im\Delta_{2n-1}(\omega)}{(\omega - e_{2n})^2},
\end{align}
where $e_{2n}$ is obtained by numerically locating the $\omega$ at which $\Re\Delta_{2n-1}=0$ inside the gap. Since $\Delta_0(\omega)$ contains a pole inside the gap, it is inevitable that $\Delta_2(\omega)$ will also contain a pole. However, unlike the \textit{ph}-symmetric case, the location of the pole varies along the recursion, albeit remaining inside the gap, until the $\{t_n,e_n\}$ of the auxiliary chain settle down to a staggered alternating form without any attenuation, as shown in Fig.~\ref{fig:phasymmtnen}.
It is important to note that the numerical determination of the $\{\alpha_{2n}\}$ and $\{e_n\}$ is prone to numerical errors due to grid resolution, spectral kinks and/or Hilbert transformation. These errors may propagate down the chain in the initial stages of the recursion leading to spurious features in the auxiliary chain parameters. Therefore care must be taken in the numerical evaluation.
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{SU9_eta0_25.pdf}};
\node at (3.125,2) {\footnotesize \subref*{fig:SU9_eta0_25}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:SU9_eta0_25}}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{SU3_eta0_25.pdf}};
\node at (3.125,2) {\footnotesize \subref*{fig:SU3_eta0_25}};
\end{tikzpicture}
\phantomsubcaption{\vspace{-\baselineskip}\label{fig:SU3_eta0_25}}
\end{subfigure}
\caption{Self-energy of the particle-hole asymmetric Hubbard model for $\eta = \frac14$ at \subref{fig:SU9_eta0_25} $U/t = 9$ with $D = 4.625$, and \subref{fig:SU3_eta0_25} $U/t = 3$ with $D = 4.5$.\label{fig:S_eta0_25}}
\end{figure}
\begin{figure}
\begin{subfigure}{0.49\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{tnenU9_eta0_25.pdf}};
\node at (3.375,3.375) {\footnotesize \subref*{fig:tnU9_eta0_25}};
\node at (3.375,-0.25) {\footnotesize \subref*{fig:enU9_eta0_25}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:tnU9_eta0_25}}
\phantomsubcaption{\label{fig:enU9_eta0_25}}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{tnenU3_eta0_25.pdf}};
\node at (3.375,3.375) {\footnotesize \subref*{fig:tnU3_eta0_25}};
\node at (3.375,-0.25) {\footnotesize \subref*{fig:enU3_eta0_25}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:tnU3_eta0_25}}
\phantomsubcaption{\label{fig:enU3_eta0_25}}
\end{subfigure}
\vspace{-\baselineskip}
\caption{Parameters $t_n$ and $e_n$ for the auxiliary chain corresponding to $-\Im\Sigma(\omega)$ of the \textit{ph}-asymmetric Hubbard model with $\eta = \frac14$ at $U/t=9$ \subref{fig:tnU9_eta0_25},\subref{fig:enU9_eta0_25} and at $U/t=3$ \subref{fig:tnU3_eta0_25},\subref{fig:enU3_eta0_25}.\label{fig:phasymmtnen}}
\end{figure}
Characteristic self-energies for the \textit{ph}-asymmetric Hubbard model are shown in Fig.~\ref{fig:S_eta0_25}. The parameters chosen for analysis are $\eta = \frac14$, with $U/t = 9$ and $D = 4.625$ for the Mott insulator (self-energy plotted in Fig.~\ref{fig:SU9_eta0_25}), and $U/t = 3$ and $D = 4.5$ for the Fermi liquid (self-energy plotted in Fig.~\ref{fig:SU3_eta0_25}). The notable characteristic of these spectra is that the Mott pole at $\eta \neq 0$ no longer sits at the Fermi level, but rather is shifted to $\omega_{\textsc{mp}} \approx 0.12 D$. On the other hand, the vertex of the pseudogap in the Fermi liquid still sits at $\omega = 0$, although there is an asymmetry in the weight of the spectral bands. Employing the modified auxiliary field mapping from above, auxiliary chains can now be constructed for each of these self-energies.
The auxiliary chain parameters for the $\eta = \frac14$ Mott insulator and Fermi liquid are shown in Fig.~\ref{fig:phasymmtnen}. It is seen that
away from particle-hole symmetry, the on-site energies $e_n$ no longer vanish in the effective chain. Instead, these parameters take on an alternating pattern analogous to that of the $t_n$. The resulting effective model is then more appropriately a generalized Rice-Mele model~\cite{ricemele} rather than an SSH model. A Rice-Mele model has the same alternating hopping parameters as the SSH model, but additionally features alternating on-site potentials.
The auxiliary chain parameters for the Mott insulator are shown in Figs.~\ref{fig:tnU9_eta0_25} and \subref{fig:enU9_eta0_25}. The hoppings exhibit the characteristic strict alternating pattern initialized with a weak bond that is to be expected for the production of a spectral pole within a gap. The on-site potentials also exhibit a strict alternating form, with the chain initialized with $e_0/D \approx 0.12$, giving the position of the spectral pole. The spectral pole is also not centered in the gap. The staggered potentials alternate around a non-zero value, which shifts the center of the gap away from the Fermi level.
The auxiliary chain parameters for the Fermi liquid are shown in Figs.~\ref{fig:tnU3_eta0_25} and \subref{fig:enU3_eta0_25}. The asymptotic behavior of the hopping parameters again has a $1/n$ envelope, as in the \textit{ph}-symmetric case for a power-law spectrum. This $1/n$ envelope is also present in the potentials.
Unlike in the Mott insulating case, the potentials alternate about zero, reflecting the fact that the vertex of the pseudogap still lies at the Fermi level.
In both cases, all the auxiliary chain parameters exhibit a strong asymmetry in the initial head of the chain, reflecting the strong asymmetry present in the higher energy parts of the self-energies.
\subsection{Asymmetric Topology}\label{sec:phasymmtopology}
Based on the analysis in Ref.~\cite{extendedrm}, the following shows that this auxiliary system is indeed topological.
The original model considered in Ref.~\cite{extendedrm} was an SSH model with parameterized next-nearest neighbor (NNN) dynamics. The model considered here instead incorporates on-site potentials. For systems consisting of a two-site unit cell, as is the case here, the two schemes result in the same effective Hamiltonian in momentum space, as both NNN and potential terms enter into the diagonal entries of the momentum space Hamiltonian.
As shown in Figs.~\ref{fig:tnU9_eta0_25} and \subref{fig:enU9_eta0_25} the bulk of the auxiliary chain in the Mott insulator has strictly alternating hoppings $t_A$ and $t_B$ as well as alternating
on-site potentials $\epsilon_A$ and $\epsilon_B$ on well-defined $A$ and $B$ sublattices. This implies that the bulk can be taken to be uniformly periodic, meaning that momentum is a good quantum number. Importantly, it allows the analysis to take place in momentum space where traditional methods for studying band topology can be employed, \S\ref{sec:topology}~\cite{fruchartcarpentier,shortcourse,altlandsimons}. Fourier transforming to momentum space yields $\hat{H} = \sum_{k} \tensor*{\vec{f}}{_k^\dagger} \boldsymbol{h}(k) \tensor*{\vec{f}}{_k}$, where $\tensor*{\vec{f}}{_k^\dagger} = \adjvec{\opd{f}{Ak} & \opd{f}{Bk}}\,$, and
\begin{align}
\boldsymbol{h}(k) =
\begin{pmatrix}
\epsilon_A & t_A + t_B e^{\i k} \\
t_A + t_B e^{-\i k} & \epsilon_B
\end{pmatrix}.
\label{eq:nnn}
\end{align}
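The bulk band structure of $\boldsymbol{h}(k)$ can be checked directly. The sketch below is illustrative rather than the thesis code; it uses the chain parameters quoted later in the text for the $U/t=9$, $\eta=\frac14$ Mott insulator, and verifies that the direct band gap is minimal at $k=\pi$.

```python
import numpy as np

# Illustrative check (not the thesis code) of the bulk bands of h(k), using
# the chain parameters quoted in the text for the U/t = 9, eta = 1/4 Mott
# insulator: tA = 1.85, tB = 2.78, epsA = 0.24, epsB = 0.52.
tA, tB, epsA, epsB = 1.85, 2.78, 0.24, 0.52

def bands(k):
    h = np.array([[epsA, tA + tB * np.exp(1j * k)],
                  [tA + tB * np.exp(-1j * k), epsB]])
    return np.linalg.eigvalsh(h)          # ascending: [E_minus, E_plus]

ks = np.linspace(0.0, np.pi, 2001)
E = np.array([bands(k) for k in ks])
gap = E[:, 1] - E[:, 0]

# Analytic direct gap: 2 sqrt(dz^2 + |tA + tB e^{ik}|^2), dz = (epsA-epsB)/2.
gap_formula = 2.0 * np.sqrt(((epsA - epsB) / 2) ** 2
                            + tA**2 + tB**2 + 2 * tA * tB * np.cos(ks))
k_min = ks[int(np.argmin(gap))]
```

The minimum of the direct gap sits at $k=\pi$, where $\lvert t_A + t_B e^{\i k}\rvert = \lvert t_A - t_B \rvert$ is smallest, consistent with the statement later in the text that the minimal gap always occurs at $k=\pi$.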
Following Ref.~\cite{extendedrm} the coefficients of the Hamiltonian are re-parametrized as
\begin{equation}
t_A = t_0 (1 - \delta t \cos\theta) \,,\;
t_B = t_0 (1 + \delta t \cos\theta) \,,\;
\epsilon_{A} = q \cos(\theta + \phi) \,,\;
\epsilon_{B} = q \cos(\theta - \phi) \,.
\end{equation}
The introduction of a periodically modulated parameter such as $\theta$ as a means of determining the topology of a system is well-known in the literature in the context of the Thouless pump~\cite{thoulesspump,shortcourse}. This concept has also been generalized to the study of topological insulators with synthetic dimensions, with a prominent example being the $4d$ quantum Hall effect~\cite{4dqhe,4dqheuca,syntheticdimensions}.
These methods for endowing systems with synthetic dimensions have also been used to engineer systems with effective magnetic fields or effective gauge fields~\cite{lightgauge,syntheticdimensions}. In addition to being an intriguing theoretical construct, such ideas have been implemented in various experimental setups~\cite{lightgauge,syntheticdimensions,synthetichallribbons,bosegasqhe,4dcircuit}.
The effective model takes the form of a system with one real spatial dimension and one synthetic dimension. In momentum space the two dimensions appear on an equal footing, resulting in a system which is $2d$ in momentum space. The topology of this effective model can then be measured using the Berry curvature and the Chern number.
In the momentum space representation, the cyclic parameter $\theta$ plays the role of the momentum in a synthetic dimension alongside the usual quasimomentum $k$ to produce an effective two-dimensional Brillouin zone. The topological invariant is given by the Chern number\index{Chern number}
\begin{equation}
\mathrm{Ch} = \frac{1}{2\pi}\oint_{\textsc{bz}} F,
\end{equation}
with the usual notation of exterior forms
\begin{equation}
\begin{aligned}
F &= \d A
\\
&= \left( \partial_k A_\theta - \partial_\theta A_k \right) \d k \extp \d\theta,
\end{aligned}
\end{equation}
where $A_{k}$ is the $k$ component of the Berry connection and $A_\theta$ is the $\theta$ component. For $\theta \in [\frac{\pi}{2},\frac{3\pi}{2}]$, as is the case for the Mott insulator self-energy considered here, the Chern number is explicitly given by~\cite{extendedrm,kaufmann}
\begin{equation}
\mathrm{Ch} = \frac12[\sgn(2 q \sin\phi) - \sgn(-2 q \sin\phi)]=\pm 1 \,,
\end{equation}
meaning the system is topological.
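This quantization can be verified numerically. The sketch below computes the Chern number of the lower band on a discretized $(k,\theta)$ torus using the Fukui--Hatsugai--Suzuki lattice method (a standard method chosen here for illustration, not necessarily the computation used in the thesis); the parameters are those obtained from the mapping for the $U/t=9$, $\eta=\frac14$ Mott insulator.

```python
import numpy as np

# Sketch: Chern number of the lower band of the extended Rice-Mele model on
# the effective (k, theta) torus, via the Fukui-Hatsugai-Suzuki method.
# Parameters as obtained from the mapping for U/t = 9, eta = 1/4.
q, t0, dt, phi = -0.97, 2.31, 0.50, -0.16

def h(k, th):
    tA = t0 * (1.0 - dt * np.cos(th))
    tB = t0 * (1.0 + dt * np.cos(th))
    eA = q * np.cos(th + phi)
    eB = q * np.cos(th - phi)
    off = tA + tB * np.exp(1j * k)
    return np.array([[eA, off], [np.conj(off), eB]])

def chern(N=48):
    """Sum of lattice field strengths over the (k, theta) torus (lower band)."""
    grid = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    u = np.empty((N, N, 2), dtype=complex)
    for i, k in enumerate(grid):
        for j, th in enumerate(grid):
            u[i, j] = np.linalg.eigh(h(k, th))[1][:, 0]   # lower-band state
    c = 0.0
    for i in range(N):
        for j in range(N):
            # gauge-invariant plaquette flux from link variables
            U1 = np.vdot(u[i, j], u[(i + 1) % N, j])
            U2 = np.vdot(u[(i + 1) % N, j], u[(i + 1) % N, (j + 1) % N])
            U3 = np.vdot(u[(i + 1) % N, (j + 1) % N], u[i, (j + 1) % N])
            U4 = np.vdot(u[i, (j + 1) % N], u[i, j])
            c += np.angle(U1 * U2 * U3 * U4)
    return c / (2.0 * np.pi)

Ch = chern()
```

For these parameters the spectrum is gapped everywhere on the torus ($q\sin\phi \neq 0$, $\delta t \neq 0$), so the lattice Chern number converges to an integer of unit magnitude, in agreement with the sign formula above.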
Note that at \textit{ph}-symmetry $t_0=D/2$ exactly, whereas here only $t_0\approx D/2$ is observed; the small deviation could be a numerical artifact. For the mapping demonstrated here the numerical values are used.
Furthermore, unlike the \textit{ph}-symmetric case, the high energy cutoffs $D_+$ and $D_-$ on the positive and negative sides differ, and $D=(D_++|D_-|)/2$ is chosen.
The respective parameters for this system, as obtained from the continued fraction expansion mapping of the Mott insulator self-energy for $U/t=9$ and $\eta=\frac14$ plotted in Fig.~\ref{fig:SU9_eta0_25}, are on-site potentials $\epsilon_A = 0.24$ and $\epsilon_B = 0.52$, and hopping amplitudes $t_A = 1.85$ and $t_B = 2.78$. To cast this set of parameters in terms of $\delta t$, $q$, $\theta$, and $\phi$, the value $t_0=2.31$ is taken from the calculation, and $\delta t = 0.5$ is chosen since it is a free parameter. This yields $\{q,\,\theta,\,\phi\}\approx\{-0.97,\, 1.98,\,-0.16\}$. The momentum $k$ is chosen to be the point where the band gap in $\hat{H}_{\text{aux}}$ is minimal and equal to the spectral gap in $-\Im\Sigma(\omega)$; this always occurs at $k=\pi$.
\begin{figure}[h]
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[scale=1]{phasymbands.pdf}
\end{subfigure}
\hfill
\begin{subfigure}{0.4\linewidth}
\centering
\includegraphics[scale=1]{phasymspec.pdf}
\end{subfigure}
\caption[Topological band structure of particle-hole asymmetric chain]{Band structure of the extended Rice-Mele model in the $(E,\theta)$-plane at $k=\pi$, evaluated with the parameters $\{q,t_0,\delta t,\phi\} \approx \{-0.97,2.31,0.50,-0.16\}$ corresponding to the Mott insulator. The solid vertical line marks the cut $\theta = 1.99$, which reproduces exactly the appropriate spectral function of the chain boundary (right). The two intragap bands correspond to localized states on the left and right boundaries; only the left boundary is physical in the semi-infinite auxiliary chain (lower intragap band in $\theta<\pi$ region).\label{fig:phasymbands}}
\end{figure}
Using the above parametrization, it follows that the Chern number $\mathrm{Ch} = 1$ is quantized and the system is in the topological phase. The topological character of the parametrized Mott insulating phase shown here is further exemplified by the intragap band crossing shown in Fig.~\ref{fig:phasymbands}. Within this auxiliary model, the system is topological only if the intragap bands cross~\cite{extendedrm}. This occurs if there exists a $\theta$ such that $h \sin\theta \sin\phi = 0$. This is always the case for $\theta \in [\frac\pi2,\frac{3\pi}{2}]$, which is when $t_A < t_B$, exactly as in the standard SSH model. Therefore, it can be concluded that the topological state is robust to perturbations in the hopping amplitudes as well as the on-site potentials.
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{wfnU9_eta0_25.pdf}};
\node at (3.125,2) {\footnotesize \subref*{fig:wfnU9_eta0_25}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:wfnU9_eta0_25}}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=1]{wfnlogU9_eta0_25.pdf}};
\node at (3.125,2) {\footnotesize \subref*{fig:wfnlogU9_eta0_25}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:wfnlogU9_eta0_25}}
\end{subfigure}
\vspace{-2\baselineskip}
\caption[Asymmetric wavefunction amplitude]{Wavefunction amplitude of the in-gap Mott pole state of the \textit{ph}-asymmetric effective chain. The wavefunction exhibits exponential localization. Note that unlike the symmetric case, the wavefunction is not zero on all even sites, but rather takes finite value on all sites due to the broken symmetry.\label{fig:asymmwfn}}
\end{figure}
Finally, the mid-gap Mott pole, which arises here at $E_{\textsc{mp}} \vcentcolon= e_0\approx0.12D$, corresponds to a bound state that is exponentially localized on the boundary between the auxiliary system and the physical degrees of freedom of the lattice. This state is denoted the `Mott pole state'
$\mathnormal{\Psi}_{\textsc{mp}}$, and it satisfies $\hat{H}_{\text{aux}}\mathnormal{\Psi}_{\textsc{mp}}=E_{\textsc{mp}}\mathnormal{\Psi}_{\textsc{mp}}$.
Numerical diagonalization of $\hat{H}_{\text{aux}}$ allows the wavefunction amplitude of this state to be plotted as a function of chain site index $n$. This is plotted in Fig.~\ref{fig:asymmwfn} and shows that $\mathnormal{\Psi}_{\textsc{mp}}$ is exponentially-localized on the edge of the chain. This is a result of the strict alternation of the chain parameters, with the coupling $t_1$ at the start of the chain being a \textit{weak} bond. Even though the chain parameters near the boundary exhibit variations with respect to the bulk of the chain, the strict alternation guarantees that the Mott pole state is localized.
This is confirmed by the transfer matrix method (described in \S\ref{sec:calcmeth}), which yields explicitly
\begin{equation}
\lvert \mathnormal\Psi_{\textsc{mp}}(n) \rvert^2 \sim \prod_{m=1}^{n-1} \frac{E_{\textsc{mp}} - e_{m} - t_{m-1}}{t_{m}} .
\end{equation}
This expression gives a stringent condition connecting the parameters $t_n$, $e_n$ and the pole position $E_{\textsc{mp}}$ for a localized state. The state shows considerable robustness against perturbations to the chain parameters. In particular, if the Mott pole lies inside the gap (as it must in the insulator, by definition) then the corresponding state in the auxiliary chain is exponentially-localized on the boundary. Inverting the parity of the chain oscillations for all sites involves bulk gap closing, while local parity flips down the chain generate additional domain wall states (i.e.~multiple mid-gap poles, which are not seen in the Mott insulator). Although the Mott pole position $E_{\textsc{mp}}$ is affected by the boundary potential $e_1$, removing the boundary state by some boundary potential perturbation is equivalent to shifting the pole out of the gap, in which case the spectrum no longer represents a Mott insulator.
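The exponential localization can be illustrated on an idealized, strictly alternating chain built from the bulk parameters quoted above ($t_A=1.85$ weak, $t_B=2.78$, $\epsilon_A=0.24$, $\epsilon_B=0.52$). The Python sketch below is a caricature of the bulk only: the real auxiliary chain has a non-uniform head (which is why the true $\mathnormal{\Psi}_{\textsc{mp}}$ is finite on all sites and sits at $E_{\textsc{mp}}\approx 0.12D$ rather than exactly at $\epsilon_A$).

```python
import numpy as np

# Idealized strictly alternating Rice-Mele-type chain with the bulk values
# quoted in the text for the U/t = 9, eta = 1/4 Mott insulator.  This is a
# caricature: the real auxiliary chain has a non-uniform head, and its Mott
# pole state sits at E_MP ~ 0.12 D rather than exactly at eps_A.
tA, tB = 1.85, 2.78            # weak first bond, tA < tB
epsA, epsB = 0.24, 0.52
N = 101                        # odd: the right end terminates in a strong
                               # bond, so only the left end hosts an in-gap mode
eps = np.where(np.arange(N) % 2 == 0, epsA, epsB)
hop = np.where(np.arange(N - 1) % 2 == 0, tA, tB)
H = np.diag(eps) + np.diag(hop, 1) + np.diag(hop, -1)
E, V = np.linalg.eigh(H)

# For the semi-infinite strictly alternating chain the boundary mode lies
# exactly at E = epsA, with weight on the even sublattice decaying by a
# factor tA/tB per unit cell (and zero weight on the odd sublattice).
i = int(np.argmin(np.abs(E - epsA)))
E_mp = E[i]
psi = np.abs(V[:, i])
```

The weak first bond pins an in-gap state to the left boundary whose amplitude decays geometrically by $t_A/t_B$ per unit cell, illustrating the exponential localization discussed above.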
When analyzing the robustness of the boundary-localized state to perturbations, the perturbations which must be considered are those which are to the original physical degrees of freedom of the Hubbard model, not unphysical perturbations to the auxiliary chain. Perturbations to the Hubbard model constitute variations in the values $U/t$ and $\eta$. However, provided such perturbations are not so drastic as to cause the system to cross into a different phase, the Mott insulator will always have a Mott pole inside the gap, and hence a corresponding boundary-localized state. It is concluded that a Mott insulator, by definition, has an exponentially-localized state on the boundary of its auxiliary system, which is robust to the physically-relevant perturbations to the underlying Hubbard model.
The auxiliary system is therefore concluded to be topologically non-trivial, even on breaking particle-hole symmetry.
\section{Topological Indicator}
While the auxiliary chains constructed here take the form of generalized topological SSH models, they are spatially inhomogeneous and have a boundary, and therefore momentum is not a good quantum number. This means it is not possible to perform a transform to momentum space and evaluate a standard topological invariant, such as the Zak phase\index{Zak phase}, as in \S\ref{ch:genssh}.
However, it has been shown recently for the Hubbard model at $T=0$ that the Luttinger integral\index{Luttinger integral} takes distinct constant values in the Fermi liquid and Mott insulator phases for any $\eta \neq 0$~\cite{logangalpin}
\footnote{For a detailed discussion of the Luttinger integral in non-Fermi liquid phases see \cite{logantuckergalpin}.},
\begin{equation}
\label{eq:lutt}
\begin{aligned}
I_{\textsc{L}}
&= \frac{2}{\pi} \Im \int_{-\infty}^{0} \d\omega\, G(\omega) \frac{\d \Sigma(\omega)}{\d \omega}
\\
&= \begin{cases} 0 & \text{Fermi Liquid} \\ 1 & \text{Mott Insulator} \end{cases}
\end{aligned}
\end{equation}
The finite value of $I_{\textsc{L}}$ for the generic Mott insulator can be traced to the Mott pole, which is identified here as the topological feature of the Mott insulator. $\eta=0$ is a special point at $T=0$ where it is found that $I_{\textsc{L}}=0$~\cite{logangalpin}. This appears to be an order-of-limits issue, and $I_{\textsc{L}}=1$ in the Mott insulator phase is expected if the $\eta\rightarrow 0$ limit is taken before the $T\rightarrow 0$ limit~\cite{galpinprivate}. The generic Mott insulator has $I_{\textsc{L}} = 1$.
Since the evolution of the self-energy with interaction strength drives the Mott transition\index{Mott transition}, the Luttinger integral is a natural quantity to characterize the distinct topologies of the Fermi liquid and Mott insulator phases, and may be regarded as a topological invariant.
\begin{comment}
A further signature of the topological nature of the Mott insulator self-energy, is a quantized fictitious $T=0$ `conductance' through the end of the auxiliary chain
\begin{equation}
\mathfrak{G}_{C} = \frac{e^2}{\hslash}\Gamma \mathcal{A}^{\text{aux}}(\omega=0) = 1
\end{equation}
where $\Gamma$ is the hybridization to fictitious electrodes, and $\mathcal{A}^{\text{aux}}(\omega)$ is the spectral function at the end of the electrode-coupled auxiliary chain. By contrast with the Mott insulator, the fictitious $T=0$ conductance through the end of the auxiliary chain precisely vanishes in the Fermi liquid phase,
\begin{equation}
\mathfrak{G}_{C} = \frac{e^2}{\hslash} \Gamma \mathcal{A}^{\text{aux}}(\omega=0) = 0 .
\end{equation}
\end{comment}
To reiterate, the signatures of topology from the Luttinger integral described above are \textit{not} being employed as a topological invariant of the Hubbard model, but rather they serve as an indicator for the topology of the auxiliary chain constructed for the effective system. The topology here lies in the auxiliary degrees of freedom of the effective system, and not in the physical degrees of freedom of the Hubbard model.
\section{Outlook}\label{sec:motttopologyoutlook}
Presented here was an interpretation of the classic Mott transition in the infinite dimensional one band Hubbard model as a topological phase transition. The lattice self-energy, determined here by DMFT-NRG, is mapped to an auxiliary tight-binding chain, which is found to be of generalized SSH model type. The Mott insulator is the topological phase, with a boundary-localized state corresponding to the Mott pole. The transition from Fermi liquid to Mott insulator involves domain wall dissociation.
It is concluded that any system with such a pole in its local self-energy may be regarded as topological. The analysis could also be extended to multiband models, where the auxiliary chains become multilegged ladders. A speculation is that a superconducting Hubbard model may map to auxiliary Kitaev chains\index{Kitaev superconductor} involving Majorana\index{Majorana} modes. For a fully momentum-dependent self-energy of a $d$-dimensional lattice, the mapping generalizes to an auxiliary lattice in $d+1$ dimensions. For a Mott insulator, such an auxiliary lattice may be a topological insulator with a localized boundary state.
This chapter's detailed study of treating the Hubbard model with the auxiliary field mapping developed in this thesis serves as a robust proof of concept of the method and demonstrates that non-trivial interpretations of conventional results may emerge as a product of the mapping. The auxiliary models for the proposed extensions above may similarly possess interesting features.
\chapter{Generalized SSH Models\label{ch:genssh}}
The SSH model described at the end of the previous chapter is simple enough to provide the basis of a wide variety of extensions which explore realizations of quantum topology in $1d$ systems. Several such extensions are the subject of the present chapter. As mentioned in \S\ref{sec:sshmodel}, many generalizations of the SSH model exist in the literature. The generalizations contained in this chapter are similar in concept to the SSH trimer and SSH$_4$ models\cite{trimer,ssh4} in that they are based on extending the size of the unit cell in $1d$. In contrast to the previous literature, the generalized SSH models here extend the unit cell to arbitrary size with the unit cell hopping parameters given by a particular functional form.
The generalized SSH models presented here also make appearances in the auxiliary systems which are constructed in \S\ref{ch:aux} and \S\ref{ch:motttopology}. This chapter explores $1d$ models which generalize the SSH model both at the level of the Hamiltonian and at the level of the spectral function.
Discussed first in this chapter are the characteristics of domain walls in the semi-infinite SSH model. The discussion of SSH model domain walls in the literature generally considers only a single domain wall, or a pair of domain walls, in a fully infinite system~\cite{solitons}. These configurations are reviewed below before turning to the semi-infinite case with arbitrary numbers of domain walls, which has not previously been discussed in the literature. Additionally, the method of reverse engineering the system parameters from the spectrum of topological SSH-type models, performed via a spectral moment recursion as well as the Lanczos algorithm, is new to the literature.
This chapter is based on the paper~\cite{generalizedssh}. Elements of this work also appear in~\cite{motttopology}.
\section{Domain Walls\label{sec:domwalls}}
A simple non-trivial generalization of the SSH spectrum is to consider the case where there are two localized mid-gap states. These manifest as poles located at $\pm\omega_p$ due to the system's chiral symmetry.
The number of mid-gap poles is equal to the number of domain walls.
In the absence of domain walls, the bulk of an infinite SSH model possesses an insulating gapped spectral function.
\begin{figure}[h]
\subfiglabel{\includegraphics[scale=1]{sshbandsbulk.pdf}}{3.2,2}{fig:sshspecbulk}
\subfiglabel{\includegraphics[scale=1]{sshbandsbulksoliton.pdf}}{3.2,2}{fig:sshspecdw}
\caption[Bulk spectrum in the SSH model without \subref{fig:sshspecbulk} and with \subref{fig:sshspecdw} a domain wall]{Bulk spectrum in the SSH model without \subref{fig:sshspecbulk} and with \subref{fig:sshspecdw} a domain wall. Note that the presence of a domain wall yields the same qualitative spectrum as the boundary in the topological phase.}
\end{figure}
A domain wall may be introduced by reversing the parity of the alternating hopping pattern at one site, as demonstrated in Fig.~\ref{fig:simpledwschematic}.
On the domain-wall site the spectral function is no longer insulating, but takes a form similar to that of the SSH boundary in the topological phase, with a zero energy spectral pole. This result shows that domain walls host localized states in a similar way as the boundary of the SSH model does in its topological configuration.
In fact, the boundary of a semi-infinite SSH chain may itself be interpreted as being a domain wall between the system and the vacuum, which is topologically trivial.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.67,site/.style={circle,draw=black,inner sep=4pt,line width=1pt}]
\node[site,fill=gray] (0) at (0,0){};
\node[below=3pt] (arrow) at (0) {$\uparrow$};
\node[below] at (arrow) {$G_{\text{dw}}(z)$};
%
\node[site] (r1) at (2,0){};
\node[site] (r2) at (4,0){};
\node[site] (rend) at (6,0){};
%
\draw[line width=1.75pt, dashed, line cap=round]($(rend)+(1,0)$)--($(rend)+(2,0)$);
\draw[line width=1.75pt,color=blue](0)--(r1) node[midway,above,blue] {${t_A}$};
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](r1)--(r2) node[midway,above,red] {${t_B}$};
\draw[line width=1.75pt,color=blue](r2)--(rend);
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](rend)--($(rend)+(1,0)$);
%
\node[site] (l1) at (-2,0){};
\node[site] (l2) at (-4,0){};
\node[site] (lend) at (-6,0){};
%
\draw[line width=1.75pt, dashed, line cap=round]($(lend)+(-1,0)$)--($(lend)+(-2,0)$);
\draw[line width=1.75pt,color=blue](0)--(l1) node[midway,above,blue] {${t_A}$};
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](l1)--(l2) node[midway,above,red] {${t_B}$};
\draw[line width=1.75pt,color=blue](l2)--(lend);
\draw[line width=1.25pt,color=red,double,double distance=1.25pt](lend)--($(lend)+(-1,0)$);
%
\node at (-0.725,-5) {\includegraphics{sshdwt.pdf}};
\end{tikzpicture}
\caption{Schematic of an elementary domain wall in an infinite SSH model.\label{fig:simpledwschematic}}
\end{figure}
The Green function on an elementary domain wall is given by
\begin{equation}
G_{\text{dw}}(z) = \cfrac{1}{z - 2 t_A^2 G_{t_B,t_A}(z)}
\end{equation}
where $G_{t_B,t_A}(z)$ is in the notation of Eq.~\eqref{eq:sshgreenfunctionform}. This Green function produces the spectrum shown in Fig.~\ref{fig:sshspecdw}.
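As a numerical cross-check of this mid-gap pole, the semi-infinite-chain Green function can be evaluated from its two-site continued-fraction representation rather than the closed form. The sketch below is illustrative only: the hopping values and the broadening $\eta$ are arbitrary choices, and the fixed-point iteration is a standard numerical device, not the method used in the text.

```python
import numpy as np

def ssh_boundary_gf(z, t_first, t_second, n_iter=4000):
    """Boundary Green function of a semi-infinite chain with alternating
    hoppings t_first, t_second, t_first, ..., obtained by iterating the
    two-site continued fraction G = 1/(z - t_first^2/(z - t_second^2 G))."""
    G = 0.0 + 0.0j
    for _ in range(n_iter):
        G = 1.0 / (z - t_first**2 / (z - t_second**2 * G))
    return G

def domain_wall_spectral(w, tA, tB, eta=1e-3):
    """Spectral function on the domain-wall site: the wall couples by the
    weak hopping tA to two semi-infinite chains whose first hopping is tB."""
    z = w + 1j * eta
    G_side = ssh_boundary_gf(z, tB, tA)        # G_{t_B, t_A}(z)
    G_dw = 1.0 / (z - 2.0 * tA**2 * G_side)
    return -G_dw.imag / np.pi

tA, tB = 0.7, 1.3                 # illustrative weak / strong hoppings
A0 = domain_wall_spectral(0.0, tA, tB)    # eta-broadened mid-gap pole
Agap = domain_wall_spectral(0.4, tA, tB)  # elsewhere in the gap (edge at 0.6)
```

With $t_A < t_B$ the spectral weight at $\omega = 0$ is limited only by the broadening $\eta$, signalling the mid-gap pole, while elsewhere inside the gap the spectral function is of order $\eta$.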
Now consider the case of a semi-infinite SSH model with the boundary in the topological configuration and a single domain wall located a finite distance into the chain, as shown in Fig.~\ref{fig:simpledw}.\footnote{As mentioned above, this setup is equivalent to an infinite SSH model with \textit{two} domain walls in the bulk, as the topological boundary can be considered a domain wall with the vacuum.} The spectral function now features two mid-gap poles instead of one.
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{simpledwG.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{sshsimpledw.pdf}
\end{subfigure}
\caption[An SSH model with a simple domain wall formed by a change in parity of the stagger in the hopping parameters]{An SSH model with a simple domain wall formed by a change in parity of the stagger in the hopping parameters. Note that this produces a pair of gapped poles in the spectrum, and that the outer bands take on a distorted form but remain continuous.\label{fig:simpledw}}
\end{figure}
However, in this case the SSH bands in the spectral function, while still continuous, become distorted, as shown in Fig.~\ref{fig:simpledw}.
It is therefore instructive to investigate which parameterizations of the hopping amplitudes $t_n$ yield multiple mid-gap states while preserving the form of the SSH satellite bands, so that the spectrum of an SSH model with a single intragap state can be directly compared with one hosting multiple intragap states. The motivation here is to produce models whose only difference from the conventional SSH model is the number of localized states in the gap. The objective of preserving the outer bands will become relevant in \S\ref{ch:motttopology} where effective models designed to reproduce specific spectral functions are engineered.
The appropriate parameterization can be found from investigating the continuum quantum field theory associated to the SSH lattice model.
Rather than focusing on the structure of the chain parameters and examining the resulting spectrum, it is useful to construct the spectrum first and then reverse engineer the chain parameters.
\label{sec:moments}
A method which can uncover this parameterization numerically is the analysis of the moments of the spectral function. Consider first that a composite spectrum may be written as
\begin{equation}
\mathcal{A}(\omega) = \frac{1}{\mathcal{N}} \sum_{i} w_{i} \mathcal{A}_{i}(\omega)
\end{equation}
where $w_i$ are the relative weights of the spectral elements $\mathcal{A}_{i}(\omega)$ with normalization $\mathcal{N} = \sum_{i} w_{i}$.
The $k$-th spectral moment of spectral element $i$ is given by
\begin{equation}
\mu_{i,k} \vcentcolon= \int \omega^k \mathcal{A}_{i}(\omega) \d\omega \,.
\end{equation}
The $k$-th spectral moment for the total spectrum is given by
\begin{equation}
\mu_{k} = \frac{1}{\mathcal{N}} \sum_{i} w_{i} \mu_{i,k} \,.
\end{equation}
Since moments are additive, the moments of the outer bands of SSH form may simply be added to the moments contributed by the spectral poles.
For a particle-hole symmetric system, the spectral function obeys $\mathcal{A}(\omega) = \mathcal{A}(-\omega)$, and therefore only the even moments survive, $\mu_{2n+1} = 0$. Particle-hole symmetry holds for the cases considered in this chapter, although generalizations of this method also exist~\cite{recursionmethod}.
From a set of spectral moments $\{ \mu_0, \mu_2, \ldots, \mu_{2N} \}$, it is possible to construct the first $N$ elements of a chain's hopping parameters $t_n$. The $n$-th chain parameter $t_n$ can be calculated by introducing a set of auxiliary variables $X_k(n)$ obeying the recursion relation~\cite{recursionmethod}
\begin{equation}
X_{2k}(n) = \frac{X_{2k}(n-1)}{t_{n-1}^2} - \frac{X_{2k-2}(n-2)}{t_{n-2}^2}
\end{equation}
initialized with $X_{2k}(0) = \mu_{2k}$, $X_{2k}(-1) = 0$, and $t_{-1}^2 = 1 = t_{0}^2$.
The $n$-th chain parameter is recovered from $t_{n}^2 = X_{2n}(n)$. While this moment analysis gives exact results for the $t_n$, the scheme is known to be numerically unstable~\cite{gaspard,gautschi}. Even using arbitrary-precision numerics, the maximum number of parameters that can be calculated in practice is about $n_{\text{max}}\sim$120; beyond this the calculations deliver unphysical results, such as returning $t_{n>n_{\text{max}}}^2 < 0$.
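A minimal sketch of this recursion, in ordinary double precision and with illustrative chain parameters: the boundary moments $\mu_{2k} = \langle 0 \rvert H^{2k} \lvert 0 \rangle$ are generated directly from powers of a finite chain Hamiltonian, and the recursion then recovers the hoppings that produced them.

```python
import numpy as np

def chain_hamiltonian(t):
    """Tridiagonal Hamiltonian of an open chain with hoppings t."""
    return np.diag(t, 1) + np.diag(t, -1)

def boundary_moments(H, kmax):
    """Even boundary moments mu_{2k} = <0|H^{2k}|0> for k = 0..kmax."""
    H2, M = H @ H, np.eye(H.shape[0])
    mus = [1.0]
    for _ in range(kmax):
        M = M @ H2
        mus.append(M[0, 0])
    return mus

def moments_to_hoppings(mu, N):
    """Recover t_n^2 for n = 1..N from the even moments mu[k] = mu_{2k} via
    X_{2k}(n) = X_{2k}(n-1)/t_{n-1}^2 - X_{2k-2}(n-2)/t_{n-2}^2,
    initialized with X_{2k}(0) = mu_{2k}, X_{2k}(-1) = 0, t_{-1}^2 = 1 = t_0^2."""
    X_prev = {k: m for k, m in enumerate(mu)}    # X(0)
    X_prev2 = {k: 0.0 for k in range(len(mu))}   # X(-1)
    t2 = {-1: 1.0, 0: 1.0}
    out = []
    for n in range(1, N + 1):
        X = {k: X_prev[k] / t2[n - 1] - X_prev2.get(k - 1, 0.0) / t2[n - 2]
             for k in range(n, len(mu))}
        t2[n] = X[n]                             # t_n^2 = X_{2n}(n)
        out.append(X[n])
        X_prev2, X_prev = X_prev, X
    return out

t = [1.0, 0.4, 0.9, 0.5]                     # hypothetical chain parameters
mu = boundary_moments(chain_hamiltonian(t), kmax=4)
t2_rec = moments_to_hoppings(mu, N=4)        # recovers the squared hoppings
```

For this short chain the recovered sequence is $[1.0,\, 0.16,\, 0.81,\, 0.25]$, i.e.\ the $t_n^2$ of the input chain; the numerical instability mentioned above only becomes relevant at much larger depths.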
An application of the moment analysis is showing that the addition of a zero energy pole of appropriate weight to the spectrum of the SSH model in the trivial phase has the effect of flipping the parity of all the chain parameters of the system.
That is, $t_n = t_0 - (-1)^{n}\delta t \to t_0 - (-1)^{n+1}\delta t$, on adding the zero-energy pole.
In the trivial phase, the SSH model parameters obey $t_{2j-1} > t_{2j}$ for $j \in \mathbbm{Z}^+$ and the spectral function features two bands separated by a hard gap without any mid-gap pole.
In this phase the moments are given by
\begin{equation}
\mu_{2n} = \int \omega^{2n} \mathcal{A}(\omega) \d\omega
\end{equation}
where the spectral function is
\begin{equation}
\mathcal{A}(\omega) = -\frac1\pi \Im\left[ \frac{z^2 + t_1^2 - t_2^2 \pm \sqrt{(z^2 - t_2^2 + t_1^2)^2 - 4 z^2 t_1^2}}{2 z t_1^2} \right]
\end{equation}
where $z = \omega + \i 0^+$, and all odd moments vanish by symmetry of the spectral function, $\mathcal{A}(-\omega) = \mathcal{A}(\omega)$.
The spectrum is then altered with the addition of a pole at $\omega=0$ of weight $w_p$ and the weight of the bands is lowered by the same amount to preserve the spectral sum rule. The spectral weight of the added pole is chosen to match the weight of the SSH pole,
\begin{equation}
w_p = \frac{t_1^2 - t_2^2}{t_1^2} \,.
\end{equation}
The moments for this new spectral function are the sum of the moments of the constituent parts, so that the total moment for the new spectrum is obtained from
\begin{equation}
\twiddle{\mu}_{2n} = \twiddle{\mu}_{2n}^{p} + \twiddle{\mu}_{2n}^{b}
\end{equation}
where the constituent moments are
\begin{subequations}
\begin{align}
\twiddle{\mu}_{2n}^{p}
&= \int \omega^{2n} w_p \delta(\omega) \d\omega
\intertext{for the pole, and}
\twiddle{\mu}_{2n}^{b}
&= \int \omega^{2n} (1-w_p) \mathcal{A}(\omega) \d\omega\end{align}
\end{subequations}
for the bands, where $\mathcal{A}(\omega)$ is the trivial SSH spectral function. The factor of $(1-w_p)$ accounts for the weight of the SSH bands decreasing by the weight of the added zero-energy pole. The bands of the SSH model in the topological phase are the same as those of the trivial phase scaled by the factor $(1-w_p)$.
Being at $\omega=0$, the pole only possesses a zeroth moment,
\begin{equation}
\twiddle{\mu}_0^p = w_p \,.
\end{equation}
The zeroth moment of the bands is simply $1-w_p$ due to the normalization of the spectral function. The zeroth moment of the total spectrum is then
\begin{equation}
\twiddle{\mu}_0 = 1 \,.
\end{equation}
The higher moments of the composite spectrum $\twiddle{\mu}$ are related to the moments of the initial trivial spectrum $\mu$ by
\begin{equation}
\begin{aligned}[b]
\twiddle{\mu}_{2n} &= (1-w_p) \mu_{2n}
\\
&= \frac{t_2^2}{t_1^2} \mu_{2n} \,.
\end{aligned}
\end{equation}
The hopping parameters associated to the composite spectrum moments can now be obtained from the moment recursion as
\begin{subequations}
\begin{align}
&\begin{aligned}[b]
\twiddle{t}_{1}^2
&= \twiddle{X}_{2}(1)
\\
&= \twiddle{\mu}_2
\\
&= \frac{t_2^2}{t_1^2} \mu_{2}
\\
&= \frac{t_2^2}{t_1^2} X_{2}(1)
\\
&= \frac{t_2^2}{t_1^2} t_1^2
\\
&= t_2^2
\end{aligned}
\intertext{and}
&\begin{aligned}[b]
\twiddle{t}_2^2
&= \twiddle{X}_{4}(2)
\\
&= \frac{\twiddle{X}_{4}(1)}{\twiddle{t}_1^2} - \frac{\twiddle{X}_{2}(0)}{\twiddle{t}_0^2}
\\
&= \frac{\twiddle{\mu}_{4}}{\twiddle{t}_1^2} - \twiddle{\mu}_2
\\
&= \frac{\frac{t_2^2}{t_1^2} {\mu}_{4}}{{t}_2^2} - \frac{t_2^2}{t_1^2} {\mu}_2
\\
&= t_2^2 + t_1^2 - \frac{t_2^2}{t_1^2} t_1^2
\\
&= t_1^2 \,.
\end{aligned}
\end{align}
\end{subequations}
As this calculation shows, the parity of the system hopping parameters is reversed, so that the system is now an SSH model in its topological phase.
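The parity flip can also be checked numerically: generate the trivial-phase boundary moments from a long finite chain (exact for the semi-infinite chain at these low orders), rescale all higher moments by $(1-w_p)$ while keeping $\twiddle{\mu}_0 = 1$, and run the moment recursion of \S\ref{sec:moments} on the result. The sketch below uses illustrative values $t_1 = 1.3$, $t_2 = 0.7$.

```python
import numpy as np

def boundary_moments(t, kmax):
    """Even boundary moments mu_{2k} = <0|H^{2k}|0> of an open chain with
    hoppings t; exact for the semi-infinite chain while 2k < len(t)."""
    H = np.diag(t, 1) + np.diag(t, -1)
    H2, M = H @ H, np.eye(len(t) + 1)
    mus = [1.0]
    for _ in range(kmax):
        M = M @ H2
        mus.append(M[0, 0])
    return mus

def moments_to_hoppings(mu, N):
    """Moment recursion X_{2k}(n) = X_{2k}(n-1)/t_{n-1}^2 - X_{2k-2}(n-2)/t_{n-2}^2
    with X_{2k}(0) = mu_{2k}, X_{2k}(-1) = 0 and t_{-1}^2 = 1 = t_0^2."""
    X_prev = {k: m for k, m in enumerate(mu)}
    X_prev2 = {k: 0.0 for k in range(len(mu))}
    t2 = {-1: 1.0, 0: 1.0}
    out = []
    for n in range(1, N + 1):
        X = {k: X_prev[k] / t2[n - 1] - X_prev2.get(k - 1, 0.0) / t2[n - 2]
             for k in range(n, len(mu))}
        t2[n] = X[n]
        out.append(X[n])
        X_prev2, X_prev = X_prev, X
    return out

t1, t2 = 1.3, 0.7                         # trivial phase: strong bond first
mu = boundary_moments([t1, t2] * 12, kmax=5)

wp = (t1**2 - t2**2) / t1**2              # weight of the added zero-energy pole
mu_new = [1.0] + [(1.0 - wp) * m for m in mu[1:]]

t2_flipped = moments_to_hoppings(mu_new, N=4)
# the recovered sequence now starts with the weak bond: parity is reversed
```

The recursion applied to the unmodified moments returns the trivial sequence $t_1^2, t_2^2, t_1^2, \ldots$, while the pole-augmented moments return $t_2^2, t_1^2, t_2^2, \ldots$, in agreement with the calculation above.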
A similar calculation can be done to insert two poles at $\pm\omega_p$ into the trivial SSH spectrum, as shown later.
\subsection{Continuum Field Theory}\label{sec:continuumfieldtheory}
Information on the nature of domain walls in SSH systems can be gathered from considering the solutions to the continuum quantum field theory corresponding to the SSH lattice model.
The continuum quantum field theory of the SSH model, also known as the Takayama--Lin-Liu--Maki (TLM) model~\cite{tlm}, can be obtained by linearizing the Hamiltonian around the low-energy near-metallic limit. The metallic phase occurs at the phase space point where $t_B = t_A$, or with the parameterization $t_A = t_0 - t$ and $t_B = t_0 + t$, where $t = 0$. At this point the band crossing occurs at $k = \pi$ in the Brillouin zone. Expanding the Hamiltonian around $k = \pi$ to linear order in $k$ and $t$,
\begin{equation}
\begin{aligned}[b]
h(k)
&= \left( t_A + t_B \cos(k) \right) \boldsymbol{\sigma}_1 - t_B \sin(k) \boldsymbol{\sigma}_2
\\
h(k-\pi)
&= t_0 \boldsymbol{\sigma}_2 k - 2 t \boldsymbol{\sigma}_1 + \mathcal{O}(k^2)
\\ &\approx \i \hslash t_0 \boldsymbol{\sigma}_2 \partial_x - m(x) \boldsymbol{\sigma}_1
\end{aligned}
\end{equation}
where $m(x) \vcentcolon= t_B - t_A$ is promoted to a slowly varying function of position.
This is of the form of a Dirac Hamiltonian with a position-dependent mass. This Hamiltonian may be analyzed by considering a domain wall at $x=0$. In this case, the mass term interpolates between the two topologically distinct vacua of $t_B > t_A$ and $t_B < t_A$, parameterized by $m(x\to\pm\infty) = \pm m_0$ where $m_0 = 2 t$.
The Schr\"{o}dinger equation for the zero energy eigenstate reads as
\begin{equation}
\begin{aligned}[b]
h \psi &= 0\cdot \psi
\\
\i \hslash t_0 \frac{\d}{\d x} \psi(x) - \i m(x) \boldsymbol{\sigma}_3 \psi(x) &= 0
\\
\i \hslash t_0 \frac{\d}{\d x} \begin{pmatrix} \psi_+(x) \\ \psi_-(x) \end{pmatrix} - \i m(x) \begin{pmatrix} \psi_+(x) \\ -\psi_-(x) \end{pmatrix} &= 0 \,.
\end{aligned}
\end{equation}
The second line follows from multiplying through by $\boldsymbol{\sigma}_2$.
This equation admits the formal solution
\begin{equation}
\psi(x) = \psi(0) \e^{-\frac{1}{\hslash t_0} \int_0^x m(x') \d x'} \,.
\end{equation}
In the topological configuration, the boundary of the system may be interpreted as a domain wall with the vacuum. Since the vacuum is regarded as topologically trivial, the mass profile $m(x)$ then interpolates between topologically trivial and non-trivial phases. A function which smoothly interpolates between the two topological configurations is
\begin{equation}
m(x) \sim \tanh\left( x \right)
\end{equation}
which, using $\int_0^x \tanh(x') \d x' = \ln\cosh(x)$ and working in units where $\hslash t_0 = 1$, yields the formal solution for the wavefunction as
\begin{equation}
\psi(x) = \psi(0) \sech\left( x \right) \,.
\end{equation}
$\lvert \psi (x)\rvert^2 \sim \sech^2(x)$ decays exponentially, in agreement with the transfer matrix calculation derived in \S\ref{sec:sshmodel}.
The preceding discussion can be made more quantitatively precise by beginning with an alternative formulation of the continuum model.
Rather than starting from the lattice SSH model directly, the starting point of the continuum model can instead be taken to be the Peierls-Fr\"{o}hlich Hamiltonian
\begin{equation}
\hat{H} = \sum_{n} (t_0 + \alpha(u_{n} - u_{n+1})) ( \opd{c}{n+1} \op{c}{n} + \opd{c}{n} \op{c}{n+1} ) + \frac{K}{2} \sum_{n} (u_{n+1} - u_{n})^2
\end{equation}
which represents a lattice model with electron-phonon coupling. Here $K$ parameterizes the harmonic potential between neighboring lattice displacements and $u_{n}$ represents the displacement of each atom from its equilibrium position. In this model, the staggered tunneling amplitudes of the SSH model manifest from the Peierls distortion. For physical \textit{trans}-polyacetylene, the values of these parameters are $t_0 = \SI{2.5}{\electronvolt}$, $\alpha = \SI{4.1}{\electronvolt/\angstrom}$, and $K = \SI{21.0}{\electronvolt/\angstrom^2}$. For the present situation the model is taken to be in the adiabatic limit, such that the phonon momentum $p_u$ is negligible, $p_u \approx 0$.
The continuum model is obtained from linearization around the Fermi surface, $k_F = \frac\pi2$ in the Brillouin zone.
The continuum Hamiltonian is
\begin{equation}
H = -\i \psi^\dagger \boldsymbol{\sigma}_3 \partial_x \psi + g \Delta \psi^\dagger \boldsymbol{\sigma}_1 \psi + \frac12 \Delta^2
\end{equation}
where $\Delta$ represents the phonon field.
This continuum SSH model is formally equivalent to the static semiclassical field equations of the $N_{f}=2$ \index{Gross-Neveu model}Gross-Neveu model~\cite{gn}
\begin{equation}
\mathcal{L}_{\textsc{gn}} = \overline{\psi} \i \slashed{\partial} \psi + \frac{g}{2} \left( \overline{\psi} \psi \right)^2
\end{equation}
which is a relativistic quantum field theory defined in $1+1d$. The Gross-Neveu model can be subjected to a Hubbard-Stratonovich transformation, which brings it into the form
\begin{equation}
\mathcal{L}_{\textsc{gn}} = \overline{\psi} \i \slashed{\partial} \psi - g \Delta \overline{\psi} \psi - \frac12 \Delta^2 \,.
\end{equation}
This Lagrangian yields the massive Dirac equation
\begin{align}
\left( \i \slashed{\partial} - g \Delta \right) \psi &= 0
\end{align}
with the self-consistency relation
\begin{equation}
\Delta = -g \overline{\psi} \psi \,.
\end{equation}
The equivalence between the Gross-Neveu model and the preceding SSH continuum theory can be seen by changing from the chiral basis to the Dirac basis and performing a chiral rotation of $\theta=\pi/4$.
An effective potential for the $\Delta$ field can be determined through semi-classical methods.
The $\Delta$ field possesses a vacuum expectation value $\langle \Delta \rangle_{0} \neq 0$.
For the soliton with a zero-energy bound state and the boundary conditions
\begin{align}
\Delta(+\infty) = \pm\Delta_0 = -\Delta(-\infty) \,,
\end{align}
the semi-classical methods yield the solution~\cite{gnsoliton,gnsemiclassical1,gnsemiclassical2,gnboundstate}
\begin{equation}
\Delta(x) = \Delta_0 \tanh(x) \,.
\end{equation}
This solution allows the field equations to be solved exactly. The zero energy bound state takes the form
\begin{equation}
\psi(x) = N_0 \sech(x) \,.
\end{equation}
For $x\gg1$, the analytic soliton solution for $\psi(x)$ exhibits the same asymptotic exponential decay as was obtained from the transfer matrix calculation.
Another admissible solution is that of a polaron, which is essentially a soliton--anti-soliton bound state,
\begin{equation}
\Delta(x) = \Delta_0 \left( 1 + \tanh(x-x_0) - \tanh(x+x_0) \right) \,.
\end{equation}
This solution arises from the boundary conditions
\begin{equation}
\lim_{|x|\to\infty} \Delta(x,t) = |\Delta_0| \,.
\end{equation}
These boundary conditions state that the $\Delta$ field takes on the same asymptotic value in both the $x\to-\infty$ and $x\to+\infty$ regimes. In other words, the polaron solution connects a vacuum to another vacuum of the same type.
Unlike the soliton, this is a non-topological state.
The electronic bound states appear in pairs at energies $\pm\omega_0 \neq 0$.
At a formal level, the relationship between the topological configuration of the phonon field and the presence of fermion zero modes can be obtained from the Atiyah-Singer index theorem~\cite{atiyahsinger}. The full technicalities of this theorem and its proof are beyond the scope of this thesis, so here the theorem will be stated in a form relevant to the current discussion and the proof omitted.
The theorem states that for an elliptic differential operator $D$ on a compact oriented differentiable manifold $X$, it follows that
\begin{equation}
\mathrm{Analytic\ Index\ of\ } D = \mathrm{Topological\ Index\ of\ } X
\label{eq:atiyahsinger}
\end{equation}
where the analytical index is given by $\dim(\ker(D)) - \dim(\ker(D^*))$, \textit{i.e.} the difference in dimensions of the kernel and cokernel of the operator $D$, and the topological index parameterizes the topological (non-)triviality of $X$, where an index of zero implies trivial topology and a non-zero index implies non-triviality. Recall that the kernel of a linear operator $L$ is the set of elements $v$ which satisfy $L v = 0$.
In the case of the SSH or Gross-Neveu models, the elliptic differential operator $D$ and its adjoint $D^*$ are
\begin{align}
D &= -\i \hslash t_0 \boldsymbol{\sigma}_2 \partial_x + \boldsymbol{\sigma}_1 \Delta(x)
&
&\text{and}
&
D^* &= \i \hslash t_0 \boldsymbol{\sigma}_2 \partial_x + \boldsymbol{\sigma}_1 \Delta(x)
\end{align}
which act as
\begin{align}
\left[ -\i \hslash t_0 \boldsymbol{\sigma}_2 \partial_x + \boldsymbol{\sigma}_1 \Delta(x) \right] \psi(x) &= 0
&
&\text{and}
&
\left[ \i \hslash t_0 \boldsymbol{\sigma}_2 \partial_x + \boldsymbol{\sigma}_1 \Delta(x) \right] \psi(x) &= 0 \,.
\end{align}
For $\displaystyle \psi(x) \sim \begin{pmatrix} 1 \\ 0 \end{pmatrix}$, which represents localization on only one sublattice, it follows that $\boldsymbol{\sigma}_3 \psi = + \psi$ and
\begin{equation}
\psi(x) = \mathcal{N} \psi(0) \exp\left[-\frac{1}{t_0 \hslash} \int_{0}^{x} \Delta(x') \d x'\right]
\in
\ker(D) \,.
\label{eq:psiinkernel}
\end{equation}
For localization on only one sublattice, $\psi$ is unnormalizable for $D^*$, and therefore $\dim(\ker(D^*)) = 0$. From \eqref{eq:psiinkernel}, $\dim(\ker(D))$ is non-zero, and therefore the analytic index of $D$ is non-zero. By the Atiyah-Singer index theorem \eqref{eq:atiyahsinger}, the underlying topology is then non-trivial.
On the other hand, in the case of two domain walls, there is no zero energy state for either $D$ or $D^*$. This means that $\ker(D)$ and $\ker(D^*)$ are both trivial, so the analytic index is zero. This is the situation encountered in the polaron solution mentioned previously. The Atiyah-Singer index theorem \eqref{eq:atiyahsinger} then implies that this configuration is topologically trivial.
The topology can also be identified from the topological charge~\cite{marino} as
\begin{align}
Q &= \frac12 \int_{-\infty}^{\infty} \partial_x \Delta(x) \d x
\\
&= \frac12 \left[ \Delta(+\infty) - \Delta(-\infty) \right]
\end{align}
which is quantized in terms of the asymptotic vacuum expectation values of the field $\Delta$.
The topological charge is the integral over the zeroth component of the topological current $j = \star \d \Delta$, which in the present case lives in $1+1d$.\footnote{In vector index notation, the topological current is expressed as $j^\mu = \frac12\epsilon^{\mu\nu} \partial_\nu \Delta$. It follows that $j^0 = \frac12 \epsilon^{01} \partial_1 \Delta$.}
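The quantization of $Q$ can be illustrated numerically. The sketch below integrates $\partial_x \Delta$ by trapezoidal quadrature for a soliton profile and for a schematic polaron-like profile shifted so that both asymptotes sit at $+\Delta_0$; the profiles, units, and parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

def topological_charge(delta, x):
    """Q = (1/2) * integral of d(Delta)/dx over the line,
    evaluated by trapezoidal quadrature of the numerical derivative."""
    ddx = np.gradient(delta, x)
    return 0.5 * np.sum(0.5 * (ddx[1:] + ddx[:-1]) * np.diff(x))

x = np.linspace(-40.0, 40.0, 40001)
d0, x0 = 1.0, 5.0      # illustrative vacuum value Delta_0 and wall positions

# soliton: interpolates between the two vacua, Q = Delta_0
Q_soliton = topological_charge(d0 * np.tanh(x), x)

# polaron-like profile: returns to the same vacuum on both sides, Q = 0
Q_polaron = topological_charge(d0 * (1 + np.tanh(x - x0) - np.tanh(x + x0)), x)
```

The soliton carries unit topological charge (in units of $\Delta_0$) while the polaron carries none, in line with the index-theorem discussion above.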
The main result of this section is that the phonon field configuration yielding a domain wall which minimizes the energy takes on a spatial $\tanh$ form. Interpreting the Peierls-Fr\"ohlich Hamiltonian as a Hamiltonian of SSH-type reveals that the width of the $\tanh$ spans several lattice sites. While the field theoretic derivation of the $\tanh$ form of the domain wall is well-known in the literature~\cite{solitons,marino,gnsoliton,gnsemiclassical1,gnboundstate}, its employment as the envelope of a domain wall in the lattice SSH model has not previously been reported.
\subsection{Single Domain Wall}
\label{sec:singledw}
The derivation that a bulk domain wall within an SSH-type system is parameterized by a $\tanh$ envelope enables the construction of a Hamiltonian for such a system of the form
\begin{equation}
\hat{H}_{\textsc{dw}} = \sum_{n} \tensor*{t}{_n} \opd{c}{n+1} \op{c}{n} + \hc
\end{equation}
where
\begin{equation}
t_n = t_0 + (-1)^{n} \delta t~\tanh\left(\frac{n + \phi}{\alpha}\right) \,.
\label{eq:singledwtn}
\end{equation}
Here $\phi$ is the parameter which determines the location of the domain wall within the chain. Envelopes of other functional forms still result in a localized state on the domain wall; however, the nonzero energy bands in the spectral function then become distorted and do not take the form of the SSH bands, whereas their shape is preserved with the $\tanh$ envelope. An envelope which is not a finely-tuned $\tanh$ function produces a `fractal' spectrum, in which a multitude of bands form with exponentially small gaps between them and the overall spectrum of these bands forms a distinct shape. Hamiltonians with non-$\tanh$ envelopes without domain walls will be discussed in \S\ref{sec:largeunitcells}.
The node of the $\tanh$ envelope is in general incommensurate with the integer-indexed hopping amplitudes of the lattice, meaning that in general $\phi\notin\mathbbm{Z}$.
The distance of the domain wall from the boundary of the chain is proportional to $-\ln(\Delta \omega_p)$, where $\Delta \omega_p$ is the distance between the two mid-gap poles.
This system can be deformed adiabatically to shift the position of the domain wall by smoothly tuning $\phi$.
By taking the limit $\phi\to\infty$, the domain wall can be propagated to infinity, which in effect results in a topological phase transition without bulk gap closing. The system is then left with only a simple domain wall on the boundary with the vacuum. As $\phi\to\infty$, $\Delta \omega_p \to 0$, so that a simple pole at zero emerges.
A different example of a topological phase transition without gap closing involves symmetry breaking~\cite{ezawa}.
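The configuration can be verified by direct diagonalization. In the sketch below the parameter values are illustrative (in particular, $\phi$ is taken negative so that the node $n = -\phi$ of the envelope lies well inside a chain indexed by $n \geq 0$, which may differ from the sign convention of the figures):

```python
import numpy as np

# Open chain with a single tanh domain wall, Eq. (eq:singledwtn).
# Illustrative parameters; phi = -20 places the wall near site 20.
t0, dt = 0.5, 0.2
alpha = t0 / dt            # tanh width, cf. Eq. (eq:tanhparams)
phi = -20.0
N = 200                    # chain length

n = np.arange(N - 1)
t = t0 + (-1.0) ** n * dt * np.tanh((n + phi) / alpha)

# Chiral (hopping-only) tight-binding Hamiltonian with open boundaries.
H = np.diag(t, 1) + np.diag(t, -1)
E = np.sort(np.abs(np.linalg.eigvalsh(H)))

# Two states deep inside the gap: one bound to the left boundary, one
# bound to the domain wall; their tiny splitting is the Delta-omega_p
# of the text. The rest of the spectrum lies much higher.
print(E[:2], E[2])
```

The splitting of the two near-zero eigenvalues shrinks exponentially as $\phi$ is made more negative, consistent with the $-\ln(\Delta\omega_p)$ relation above.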
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{tanhdwspec.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{tanhdwtn.pdf}
\end{subfigure}
\caption{Plotted here is the spectral function and hopping parameters for an SSH model with a domain wall parameterized according to Eq.~\eqref{eq:singledwtn} with $\phi=9.35$ and $\alpha=\frac{t_0}{\delta t} = 5.0$.\label{fig:tanhdw}}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}{0.47\linewidth}
\includegraphics{G_head01.pdf}
\end{subfigure}
\begin{subfigure}{0.52\linewidth}
\includegraphics{tn_head01.pdf}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics{G_head001.pdf}
\end{subfigure}
\begin{subfigure}{0.52\linewidth}
\includegraphics{tn_head001.pdf}
\end{subfigure}
\begin{subfigure}{0.47\linewidth}
\includegraphics{G_head0001.pdf}
\end{subfigure}
\begin{subfigure}{0.52\linewidth}
\includegraphics{tn_head0001.pdf}
\end{subfigure}
\caption{Correspondence between the position of the mid-gap poles and the depth from the boundary of the domain wall.\label{fig:dwdistance}}
\end{figure}
\begin{figure}[h]
\includegraphics{polegap1e-2.pdf}
\includegraphics{polegap1e-4.pdf}
\caption{Wavefunction of the boundary state localized on the odd sites (blue), and the wavefunction localized on the domain wall with support on the even sites (red).}
\end{figure}
An SSH model in the topological phase with a single domain wall will have two poles in its spectral gap. This configuration is equivalent to the soliton--anti-soliton bound state solution obtained in the continuum field theory.
\subsection{Repeated Domain Walls}\label{sec:repeateddws}
Adding an additional domain wall is equivalent to adding an additional pole in the gap of the spectral function.
It is instructive to analyze this situation in reverse, by constructing a system which exhibits the gap and satellite bands of the SSH model, but which has an arbitrary configuration of poles lying within the gap.
The task is then to generate a set of parameters $\{\tilde{t}_n\}$ which produces the appropriate tight-binding chain with such a spectral function.
Using the relationship between the presence of a domain wall in the chain and the mid-gap poles, an ansatz can be chosen for the form of the $\{\tilde{t}_n\}$ with some parameterized dependence on the location of the added spectral poles.
\subsubsection{Effective Hamiltonian}
\label{sec:lanczos}
The Hamiltonian describing the semi-infinite generalized SSH model with multiple mid-gap poles requires knowing the positions of the domain walls. These positions can be obtained by first analyzing an effective Hamiltonian which describes only the mid-gap states, i.e.\ one whose spectral function consists solely of the intragap poles.
This effective Hamiltonian can be written in a diagonal representation as
\begin{equation}
\boldsymbol{H}_{\text{D}} = \boldsymbol{H}_{\text{D}}(\{\omega_{p}\}) = \begin{pmatrix} \omega_{p_1} & \phantom{\ddots} & O \\ \phantom{\ddots} & \omega_{p_2} \\ O & & \ddots \end{pmatrix}
\label{eq:diag}
\end{equation}
where the $\omega_{p_j}$ label the position of the intragap poles. To preserve chiral symmetry it is required that $\omega_{p_{2j}} = - \omega_{p_{2j-1}}$.
The Green function for each of these uncoupled sites is simply
\begin{equation}
\Green{\op{c}{j}}{\opd{c}{j}}_z = \frac{1}{z - \omega_{p_j}}
\end{equation}
with associated spectral function $\mathcal{A}_j(\omega) = \delta(\omega-\omega_{p_j})$.
On the other hand, the spectral function describing the total collection of mid-gap poles is given by
\begin{equation}
\widetilde{\mathcal{A}}(\omega) = \sum_{j=1}^{N} \lvert u_{1p_j} \rvert^2 \delta(\omega-\omega_{p_j})
\end{equation}
where the sum runs over the total number of mid-gap poles, $N = \dim(\boldsymbol{H}_D)$, and $\lvert u_{1p_j} \rvert^2$ determines the weight of the pole at $\omega_{p_j}$.
The weight of these poles sums to the weight of the mid-gap spectral pole of the SSH model in the topological phase,
\begin{equation}
\sum_{j=1}^{N} \lvert u_{1p_j} \rvert^2 = \frac{t_B^2 - t_A^2}{t_B^2} \equiv w_p \,.
\label{eq:totalpoleweight}
\end{equation}
The spectral function $\mathcal{A}(\omega)$ can be regarded as being obtained from some Green function as $\mathcal{A}(\omega) = -\frac1\pi \Im \Greenline{\op{f}{1}}{\opd{f}{1}}_{\omega+\i0^+}$. In order to ensure that $\Greenline{\op{f}{1}}{\opd{f}{1}}_{z}$ can properly be considered a Green function of a system, its spectral function must be appropriately normalized, $\int\d\omega \mathcal{A}(\omega) \overset{!}{=} 1$. This requires scaling the Green function by the total weight of the poles, $w_p$, from Eq.~\eqref{eq:totalpoleweight}.
From the diagonal representation \eqref{eq:diag}, this Green function may be written as
\begin{equation}
\Green{\op{f}{1}}{\opd{f}{1}}_{z} = \frac{1}{w_p} \sum_{j=1}^{N} \frac{\lvert u_{1p_j} \rvert^2}{z - \omega_{p_j}} \,.
\end{equation}
This Green function can also be expressed in terms of a Hamiltonian in tridiagonal form as
\begin{equation}
\boldsymbol{H}_{\text{T}} = \boldsymbol{H}_{\text{T}}(\{\tilde{t}_n\}) = \begin{pmatrix} 0 & \tilde{t}_1 & \phantom{\ddots} & O \\ \tilde{t}_1 & 0 & \tilde{t}_2 & \phantom{\ddots} \\ & \tilde{t}_2 & 0 & \ddots \\ O & \phantom{\ddots} & \ddots & \ddots \end{pmatrix} \,,
\label{eq:tridiag}
\end{equation}
which corresponds to a $1d$ chain model for the mid-gap states.
The relation between Eq.~\eqref{eq:diag} and Eq.~\eqref{eq:tridiag} is
\begin{equation}
\boldsymbol{U} \, \boldsymbol{H}_{\text{D}} \, \boldsymbol{U}^\dagger = \boldsymbol{H}_{\text{T}}
\end{equation}
where $\boldsymbol{U}$ is a unitary matrix with the form
\begin{equation}
\boldsymbol{U} = \begin{pmatrix} u_{1p_1} & u_{1p_2} & \cdots \\ u_{2p_1} & u_{2p_2} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \,.
\end{equation}
From the equations of motion, the Green function in the tridiagonal basis takes the form of a continued fraction
\begin{equation}
\Green{\op{f}{1}}{\opd{f}{1}}_z = \cfrac{1}{z - \cfrac{\tilde{t}_1^2}{z - \cfrac{\tilde{t}_2^2}{z - \ddots}}} \,.
\end{equation}
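The continued fraction is conveniently evaluated numerically from the bottom up. A minimal sketch, cross-checked against the matrix resolvent (the hopping values and function name here are illustrative assumptions):

```python
import numpy as np

def green_cf(z, t_list):
    """Boundary Green function from the continued fraction,
    evaluated bottom-up for a finite set of hoppings t~_1..t~_N."""
    g = 0.0 + 0.0j
    for t in reversed(t_list):
        g = t ** 2 / (z - g)
    return 1.0 / (z - g)

# Cross-check against the resolvent element [(z - H_T)^{-1}]_{11}
# of the corresponding tridiagonal Hamiltonian.
ts = [0.3, 0.7, 0.3]
H_T = np.diag(ts, 1) + np.diag(ts, -1)
z = 0.2 + 0.1j
G_direct = np.linalg.inv(z * np.eye(len(ts) + 1) - H_T)[0, 0]
print(abs(green_cf(z, ts) - G_direct))   # -> approx 0
```

The bottom-up evaluation terminates the fraction after the last hopping, which corresponds to a finite chain; for a semi-infinite chain the tail would instead be replaced by its self-consistent fixed point.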
The tridiagonal basis of the Hamiltonian can be constructed from the diagonal basis by means of the Lanczos algorithm~\cite{lanczos}\index{Lanczos algorithm}.
The Lanczos algorithm is a procedure which iteratively constructs an orthogonal basis $\{|\psi_k\rangle\}$ such that the transition elements $\langle \psi_{m} \rvert \hat{H} \lvert \psi_{n} \rangle$ yield the desired amplitudes.
The space of these orthonormal states for an $L$-dimensional Hamiltonian is a Krylov space $\mathcal{K}^L(|\psi_0\rangle) = \mbox{span}\left\{|\psi_0\rangle,\hat{H}|\psi_0\rangle,\hat{H}^2|\psi_0\rangle,\ldots,\hat{H}^{L-1}|\psi_0\rangle\right\}$.
The Krylov space is initialized with the normalized state
\begin{equation}
|\psi_0\rangle = \frac{1}{\sqrt{w_p}} \sum_{i} u_{1p_i} \opd{c}{i} \lvert 0 \rangle \,.
\end{equation}
After this initialization, the first step of the algorithm is to construct a normalized state $\lvert \psi_{1} \rangle$ which is orthogonal to the initial state $\lvert \psi_{0} \rangle$, $\langle \psi_{0} \vert \psi_{1} \rangle \overset{!}{=} 0$. This can be achieved with the ansatz
\begin{equation}
b_1 \lvert \psi_{1} \rangle = \hat{H} \lvert \psi_{0} \rangle - \lvert \psi_{0} \rangle \langle \psi_{0} \rvert \hat{H} \lvert \psi_{0} \rangle
\label{eq:lanczos1}
\end{equation}
which trivially yields orthogonality as observed from contraction with $\langle \psi_{0} \rvert$ and the normalization of the initial state $\langle \psi_{0} \vert \psi_{0} \rangle = 1$. The coefficient $b_1$ is introduced to ensure that the state $\lvert \psi_1 \rangle$ is normalized, $\langle \psi_1 \vert \psi_1 \rangle = 1$. It can be determined by contraction of \eqref{eq:lanczos1} with $\langle \psi_{1} \rvert$, which yields
\begin{equation}
b_1 = \langle \psi_{1} \rvert \hat{H} \lvert \psi_{0} \rangle
\end{equation}
due to the orthogonality of $\lvert \psi_{0} \rangle$ and $\lvert \psi_{1} \rangle$.
The second state to be constructed, $\lvert \psi_{2} \rangle$, is required to be orthogonal to both the previous states, $\langle \psi_{0} \vert \psi_{2} \rangle \overset{!}{=} 0$ and $\langle \psi_{1} \vert \psi_{2} \rangle \overset{!}{=} 0$. This state can be constructed in a manner analogous to the previous step by defining $\lvert \psi_{2} \rangle$ as
\begin{equation}
b_2 \lvert \psi_{2} \rangle = \hat{H} \lvert \psi_{1} \rangle - \lvert \psi_{1} \rangle \langle \psi_{1} \rvert \hat{H} \lvert \psi_{1} \rangle - \lvert \psi_{0} \rangle \langle \psi_{0} \rvert \hat{H} \lvert \psi_{1} \rangle \,.
\end{equation}
Once again the coefficient $b_2$ facilitates $\langle \psi_2 \vert \psi_2 \rangle = 1$ and is determined from the matrix element
\begin{equation}
b_2 = \langle \psi_{2} \rvert \hat{H} \lvert \psi_{1} \rangle \,.
\end{equation}
The remainder of the Krylov space is produced in a similar fashion. The general form for the production of each state $n\geq1$ is
\begin{equation}
b_{n} \lvert \psi_{n} \rangle = \hat{H} \lvert \psi_{n-1} \rangle - \sum_{m=0}^{n-1} \lvert \psi_{m} \rangle \langle \psi_{m} \rvert \hat{H} \lvert \psi_{n-1} \rangle \,.
\label{eq:lanczosrecursion}
\end{equation}
The general $b_n$ is given by $b_n = \langle \psi_{n} \rvert \hat{H} \lvert \psi_{n-1} \rangle$. For completeness, a set of parameters $a_n$ can be defined as $a_n = \langle \psi_n \rvert \hat{H} \lvert \psi_n \rangle$. In the present situation only nearest neighbor hopping is considered, so the terms $\langle \psi_{m} \rvert \hat{H} \lvert \psi_{n} \rangle \overset{!}{=} 0$ for $\lvert n - m \rvert > 1$.
The resulting set of parameters $b_{n}$ can be seen to be the hopping amplitudes between nearest neighbor sites.
The tridiagonal Hamiltonian matrix constructed from the Lanczos algorithm is
\begin{equation}
\boldsymbol{H}_T =
\begin{pmatrix}
a_0 & b_1 & & & O \\
b_1 & a_1 & b_2 \\
& b_2 & a_2 & b_3 \\
& & b_3 & a_3 & \ddots \\
O & & & \ddots & \ddots
\end{pmatrix} \,.
\end{equation}
For the particular case at hand, the Hamiltonian employed in the construction of the Krylov space \eqref{eq:lanczosrecursion} is $\hat{H}_D$. The resulting Hamiltonian constructed from the parameters $\{ a_n , b_n \}$ is the tridiagonal Hamiltonian $\hat{H}_T$.
For a Hamiltonian $\hat{H}_D$ of dimension $N$, the Green function on the boundary of the system defined by the corresponding $\hat{H}_T$ produced by the Lanczos algorithm satisfies
\begin{equation}
\Green{\op{f}{1}}{\opd{f}{1}}_z = \sum_{j=1}^{N} \left| u_{1j} \right|^2 \Green{\op{c}{j}}{\opd{c}{j}}_z \,.
\end{equation}
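The iteration above can be sketched compactly in Python (the pole positions and equal weights anticipate the four-pole example of the next subsection; variable names are ad hoc):

```python
import numpy as np

# Lanczos tridiagonalization of the diagonal mid-gap Hamiltonian H_D.
omega_p = np.array([0.001, -0.001, 0.0001, -0.0001])
H_D = np.diag(omega_p)

# Normalized seed state |psi_0> with equal pole weights u_{1p_j}.
psi = np.ones(len(omega_p)) / np.sqrt(len(omega_p))

a, b, basis = [], [], [psi]
for k in range(len(omega_p)):
    w = H_D @ basis[-1]
    a.append(basis[-1] @ w)            # a_n = <psi_n|H|psi_n>
    w = w - a[-1] * basis[-1]
    if len(basis) > 1:
        w = w - b[-1] * basis[-2]      # three-term recurrence
    bn = np.linalg.norm(w)
    if k == len(omega_p) - 1 or bn < 1e-14:
        break
    b.append(bn)                        # b_n = <psi_n|H|psi_{n-1}>
    basis.append(w / bn)

# H_T is tridiagonal and shares the spectrum of H_D by construction.
H_T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
print(np.round(b, 9))
```

For chiral-symmetric pole configurations the $a_n$ vanish (up to round-off), so only the off-diagonal $b_n$ survive, in agreement with the form \eqref{eq:tridiag}.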
The Lanczos method is an iterative procedure which crucially depends on the correct normalization and orthogonality of all states at each iteration. As such it potentially suffers from compounding numerical errors as the iteration number increases. In the present context the number of iterations is equal to the number of poles to be included in the spectral gap. For most cases considered in this analysis this number is small, $L<10$. However, the algorithm was also tested for cases with $L \sim \mathcal{O}(100)$ mid-gap poles and found to be numerically stable, successfully transforming the Hamiltonian $\boldsymbol{H}_D(\omega_1 , \omega_2, \ldots, \omega_L)$ to $\boldsymbol{H}_T(\tilde{t}_1, \tilde{t}_2, \ldots, \tilde{t}_{L-1})$ with the corresponding spectral functions matching as expected. Prescribing this many poles within the spectral gap is an extremely high density which verges on the limit of infinitely many domain walls producing a full band of states in the gap. The main conclusion of this test is that for a modest number of mid-gap poles there are no significant numerical errors in the construction of $\boldsymbol{H}_T$, as the algorithm is stable even for iteration counts orders of magnitude greater than what is needed.
The utility of the Hamiltonian in tridiagonal form $\boldsymbol{H}_{\text{T}}$ is that its parameters $\{\tilde{t}_n\}$ relate to the positions of the domain walls in the full Hamiltonian.
\subsubsection{Construction of Chain Parameters}
For $M$ intragap poles, the chain parameters can be obtained from the expression
\begin{equation}
t_n = t_0 + (-1)^{n} \delta t \left[\sum_{m=1}^{M-1} \tanh\left(\frac{n + \phi_m}{(-1)^{m+1} \alpha_m}\right) - \frac{1+(-1)^{M-1}}{2} \right]
\label{eq:tntanh}
\end{equation}
where the $m$ sum produces $M-1$ domain walls. Each domain wall lies at the node of a $\tanh$ envelope. The $(-1)^{m+1}$ term in the $\tanh$ argument ensures that each domain wall flips the parity of the alternating hopping parameters. The last term in \eqref{eq:tntanh} is a constant shift necessary to ensure that the alternating parameters are centered about $t_0$.
The parameter $\alpha_m$ is defined as
\begin{equation}
\alpha_m = \frac{t_0}{\delta t} \,.
\label{eq:tanhparams}
\end{equation}
The specification of the $\alpha_m$ parameter was formulated empirically based on analyzing the numerical values of available input parameters. The index is left explicit as it in principle could be different for each domain wall, as will be seen in Eq.~\eqref{eq:4dwrescale}.
For a chain initialized with a weak bond, there is an additional pole at the end of the chain which therefore results in a final spectral function with $M$ poles. The use of a $\tanh$ envelope is prescribed according to the previous discussion on domain walls in the Gross-Neveu model.
Domain wall states in SSH chains have localization length $\xi$. Two such states overlap and hybridize, which can be modeled as a tight-binding tunneling $\tilde{t}$ between domain wall states, with $\tilde{t} \sim \e^{-d/\xi}$ where $d$ is the separation between the domain walls. Inverting this relationship gives $d \sim -\xi \ln \tilde{t}$.
For a single domain wall and a topological boundary, the location of the domain wall in the effective chain therefore goes as $\ln \tilde{t}$.
The positions of the domain walls are obtained from the hopping parameters $\tilde{t}_n$ of the effective Hamiltonian describing the intragap poles,
\begin{equation}
\phi_m = \frac{\alpha_m}{\xi} \sum_{j=1}^m \ln \tilde{t}_j
\label{eq:phifromt}
\end{equation}
which assumes the domain walls are well-separated and their contributions additive.
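For the effective hoppings of the four-pole example worked out below, Eq.~\eqref{eq:phifromt} can be evaluated directly. In this sketch $\xi = 1/\ln(t_B/t_A)$ is assumed for the SSH localization length:

```python
import numpy as np

# Domain-wall positions phi_m from the effective hoppings t~_j,
# Eq. (eq:phifromt); numbers match the 4-pole example below.
t0, dt = 0.5, 0.2
tA, tB = t0 - dt, t0 + dt
alpha = t0 / dt                     # Eq. (eq:tanhparams)
xi = 1.0 / np.log(tB / tA)          # assumed SSH localization length

t_eff = np.array([0.000710634, 0.000696562, 0.00014072])
phi = (alpha / xi) * np.cumsum(np.log(t_eff))
print(np.round(phi, 4))   # -> [-15.3559 -30.7542 -49.5403]
```

The cumulative sum implements the assumption of well-separated, additive domain-wall contributions stated above.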
The expression for the $t_n$ describing the $\tanh$ envelope of domain walls, Eq.~\eqref{eq:tntanh}, is only approximate. For a single domain wall, the functional form of the envelope is exactly a $\tanh$. For multiple domain walls, these $\tanh$ functions overlap, and the $t_n$ which lie in the overlap region must be tuned very precisely in order to preserve the smooth continuous form of the high energy bands. This issue is amplified when the domain walls occur close to each other in the chain. Eq.~\eqref{eq:tntanh} should therefore be interpreted as an approximate form which is valid when the bandwidth is small and the domain walls are a reasonable distance from each other.
This result is a generalization of the single domain wall case, wherein a domain wall in the lattice corresponds to a soliton in the field theory.
This $\tanh$ envelope produces an approximate analytical expression for the Hamiltonian parameters $\{t_n\}$.
For cases where the domain walls appear nearby in the chain, when the distance between $\phi_{m+1}$ and $\phi_m$ is small, care must be taken to ensure that their respective $\tanh$ envelopes do not affect each other.
An exact set of parameters can be obtained from a moment analysis of the spectral function. Such a calculation will appear in \S\ref{ch:motttopology}.
In addition to constructing SSH models with a multitude of mid-gap poles, it is also possible to construct models whose spectral function exhibits mid-gap bands.
Analogously to the cases of many mid-gap poles, a generalized SSH model with infinitely repeating domain walls can result in a spectrum with multiple mid-gap bands. The domain walls host localized states which can be considered to be localized states on a superlattice.
Further discussion of this scenario will be postponed until \S\ref{sec:motttransition}, where systems of this type play a central role.
\subsubsection{Example: 4 Poles}
An example of this calculation strategy is the case where there are $N=4$ poles lying within the SSH band gap. In this example the SSH model parameters $t_A = t_0 - \delta t$ and $t_B = t_0 + \delta t$ will be parameterized with $t_0 = 0.5$ and $\delta t = 0.2$. For this parameterization the SSH pole weight is
\begin{equation}
w_p = \frac{t_B^2 - t_A^2}{t_B^2} \approx 0.82 \,.
\end{equation}
The intragap poles for this example will be chosen to lie at $\omega_{p_1} = -\omega_{p_2} = 0.001$ and $\omega_{p_3} = -\omega_{p_4} = 0.0001$. The poles will be taken to have equal weight. The Hamiltonian of this system in the diagonal basis is
\begin{equation}
\boldsymbol{H}_{\text{D}} = \begin{pmatrix} \omega_{p_1} & & & O \\ & \omega_{p_2} & & \\ & & \omega_{p_3} & \\ O & & & \omega_{p_4} \end{pmatrix} \,.
\end{equation}
The system described by this Hamiltonian is that of four decoupled sites with on-site potentials $\omega_{p_j}$.
Following the notation of \S\ref{sec:lanczos}, the Green function on each site $j=1,\ldots,4$ is given by
\begin{equation}
\Green{\op{c}{j}}{\opd{c}{j}}_{z} = \cfrac{1}{z - \omega_{p_j}}
\end{equation}
with total spectral function given by $\mathcal{A}(\omega) = \sum_{j=1}^{4} \lvert u_{1p_j} \rvert^2 \delta(\omega - \omega_{p_j})$
where $u_{1p_j} = \sqrt{\frac{w_p}{4}}$. Applying the Lanczos algorithm yields $N-1=3$ parameters in the tridiagonal $\hat{f}$ basis, with the numerical values $\tilde{t}_1 = 0.000710634$, $\tilde{t}_2 = 0.000696562$, and $\tilde{t}_3 = 0.00014072$. This represents a system which still has four sites, but now with hybridization between the sites and vanishing on-site potentials.
In this basis the Hamiltonian is
\begin{equation}
\boldsymbol{H}_{\text{T}} = \begin{pmatrix} 0 & \tilde{t}_1 & & O \\ \tilde{t}_1 & 0 & \tilde{t}_2 & \\ & \tilde{t}_2 & 0 & \tilde{t}_3 \\ O & & \tilde{t}_3 & 0 \end{pmatrix}
\end{equation}
and the Green function for the left edge of the system is
\begin{equation}
\begin{aligned}[c]
\Green{\op{f}{1}}{\opd{f}{1}}_z = \cfrac{1}{z - \cfrac{\tilde{t}_1^2}{z - \cfrac{\tilde{t}_2^2}{z - \cfrac{\tilde{t}_3^2}{z}}}}
\end{aligned}
\end{equation}
which recovers the correct spectral function $\mathcal{A}(\omega)$.
The $\tanh$ envelope of the hopping amplitudes can now be constructed according to Eq.~\eqref{eq:tntanh}. The position of the domain walls is found from Eq.~\eqref{eq:phifromt}, which are
$\phi_1 = -15.3559$, $\phi_2 = -30.7542$, $\phi_3 = -49.5403$.
The functional form of the hopping parameters now can be expressed in the form
\begin{equation}
t_n = t_0 + (-1)^{n} \delta t
\left[
\tanh\left(\frac{n + \phi_1}{\alpha_1}\right)
- \tanh\left(\frac{n + \phi_2}{\alpha_2}\right)
+ \tanh\left(\frac{n + \phi_3}{\alpha_3}\right)
\right] \,.
\label{eq:tntanhanalytic}
\end{equation}
The parameters are first chosen such that $\alpha_m$ take the constant form of Eq.~\eqref{eq:tanhparams}.
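The resulting toy-model chain can be diagonalized directly. In the sketch below a finite chain of a few hundred sites stands in for the semi-infinite system (an assumption), and only the orders of magnitude of the mid-gap poles are checked:

```python
import numpy as np

# Hopping amplitudes from Eq. (eq:tntanhanalytic) with the phi_m
# computed above; a finite open chain approximates the semi-infinite one.
t0, dt = 0.5, 0.2
alpha = t0 / dt
phis = [-15.3559, -30.7542, -49.5403]
N = 400

n = np.arange(N - 1)
env = (np.tanh((n + phis[0]) / alpha)
       - np.tanh((n + phis[1]) / alpha)
       + np.tanh((n + phis[2]) / alpha))
t = t0 + (-1.0) ** n * dt * env

H = np.diag(t, 1) + np.diag(t, -1)
E = np.sort(np.abs(np.linalg.eigvalsh(H)))

# Four mid-gap poles: the inner and outer pairs differ by roughly an
# order of magnitude, as in the prescribed spectrum.
print(E[:4])
```

The chain begins on a weak bond (boundary pole) and hosts three domain walls, giving the four mid-gap poles in total, consistent with the discussion above.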
\begin{figure}[h]
\centering
\includegraphics[scale=1]{tn_4poles.pdf}
\caption{Structure of the hopping amplitudes for mid-gap poles situated at $\omega = \pm 0.001, \pm 0.0001$ as derived from the moment expansion, which should be considered the exact result.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=1]{tn_4poles_toy.pdf}
\caption{Structure of the hopping amplitudes for mid-gap poles situated at $\omega = \pm 0.001, \pm 0.0001$ as prescribed by the toy model Eq.~\eqref{eq:tntanhanalytic}.}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{G_4poles_me.pdf}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{G_4poles_mezoom.pdf}
\end{subfigure}
\caption{Spectrum of a generalized SSH model with a head consisting of three domain walls. The positions of the poles at $\pm 10^{-4}$ and $\pm 10^{-3}$ were prescribed as an initial condition and the chain parameters were calculated using the moment expansion.\label{fig:4polesme}}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{G_4poles_toy.pdf}
\phantomsubcaption{\label{fig:4polestoy}}
\end{subfigure}
\begin{subfigure}{0.49\linewidth}
\includegraphics[scale=1]{G_4poles_toyzoom.pdf}
\phantomsubcaption{\label{fig:4polestoyzoom}}
\end{subfigure}
\vspace{-\baselineskip}
\caption{Spectrum of the generalized SSH model generated from the model \eqref{eq:tntanh}. The model captures the main features of the desired spectrum with reasonable accuracy. Panel \subref{fig:4polestoyzoom} shows the comparison between the positions of the mid-gap poles generated by the toy model (black) and the true positions of the poles (dashed).\label{fig:4polestoyall}}
\end{figure}
As generated by the toy model prescribed in Eq.~\eqref{eq:tntanh}, the poles in the resulting spectrum are approximately located at $\omega = \pm 0.0001975$ and $\omega = \pm 0.001725$.
While the model specified by Eq.~\eqref{eq:tntanh} does not reproduce the location of the mid-gap poles exactly, it does reproduce the correct order of magnitude, with the location of the outer pair of poles being roughly one order of magnitude further out than the inner pair.
The outer bands of the spectrum display some spurious microstructure features coming from the inexactness of the domain wall $\tanh$ profiles, shown in Fig.~\ref{fig:4polestoy}, but otherwise accurately capture the correct shape of the SSH bands.
Some numerical massaging of the parameters yields an essentially exact result:
\begin{align}
\alpha'_m &= \left\{ \frac{\alpha_1}{1.01} , \frac{\alpha_2}{1.02} , \frac{\alpha_3}{1.03} \right\}
\label{eq:4dwrescale}
\end{align}
This rescaling of the $\alpha_m$ is performed by hand. The physically-motivated approximation to the true chain parameters is thus seen to be within $\sim1\%$ error. The advantage over the exact moment expansion is that it is simple and provides physical insight into the mechanism behind the formulation of domain walls and their relationship to mid-gap states.
\subsubsection{Unit Cells With Domain Walls}
\label{sec:ucdw}
A natural generalization of the insertion of domain walls into an SSH lattice is that of
an extended SSH model where the unit cell spans many sites and there is a domain wall situated within the unit cell.
An example of such a set of $\{t_n\}$ is shown in Fig.~\ref{fig:tn_singledwunitcell}. The corresponding spectral function is shown in Fig.~\ref{fig:G_singledwunitcell}. This spectral function is qualitatively similar to that of the topological phase of the SSH model, with a zero energy feature located within a spectral gap.
Note however that the unit cell is initialized with a strong bond. In the absence of domain walls, this system would be adiabatically connected to the SSH model in its trivial phase.
\begin{figure}[h]
\centering
\begin{subfigure}{0.47\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{G_singledwunitcell.pdf}};
\node at (3.125,2) {\footnotesize\subref*{fig:G_singledwunitcell}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:G_singledwunitcell}}
\end{subfigure}
\begin{subfigure}{0.52\linewidth}
\begin{tikzpicture}
\node at (0,0) {\includegraphics{tn_singledwunitcell.pdf}};
\node at (3.5,2) {\footnotesize\subref*{fig:tn_singledwunitcell}};
\end{tikzpicture}
\phantomsubcaption{\label{fig:tn_singledwunitcell}}
\end{subfigure}
\vspace{-2\baselineskip}
\caption{Spectral function of a generalized SSH model \subref{fig:G_singledwunitcell} whose unit cell contains a domain wall, shown in \subref{fig:tn_singledwunitcell}.\label{fig:singledwunitcell}}
\end{figure}
The spectral function exhibits a mid-gap feature which is not a pole, but rather a band of finite width and spectral height. This characteristic is shown clearly in Fig.~\ref{fig:dwband}. The initial strong bond precludes the presence of a topological zero pole. The band is due to states localized on the domain walls hybridizing to each other.
\begin{figure}[h]
\includegraphics{G_singledwunitcellzoom.pdf}
\includegraphics{G_singledwunitcelllog.pdf}
\caption{Close up of the mid-gap feature in Fig.~\ref{fig:singledwunitcell}. As shown, the mid-gap feature is not a pole, but a well-defined band.\label{fig:dwband}}
\end{figure}
Since the domain walls host localized states, the domain walls can be considered to form a superlattice where the domain walls are the lattice sites and the overlap between wavefunctions localized on each domain wall parameterizes an effective superlattice hopping amplitude. Recall from \S\ref{sec:calcmeth} that for a homogeneous tight binding model the bandwidth is equal to $2t$. The very small width of the mid-gap band is indicative that the hopping amplitude of the superlattice is exponentially small, which is to be expected from the exponentially small overlap between wavefunctions localized on the domain walls.
As in the previous section, the effective tunneling is estimated as $\tilde{t} \sim \delta t \, \e^{-d/\xi}$, with $d$ the separation between domain walls (or equivalently here, the unit cell size). This leads to an effective model for the domain wall states
\begin{equation}
\op{H}{\textsc{dw}} = \tilde{t} \sum_{j} \opd{f}{j} \op{f}{j+1} + \hc
\end{equation}
where $\op{f}{j}$ is an operator for the $j^{\text{th}}$ domain wall state. This produces a narrow mid-gap band of width $2 \tilde{t} \ll \delta t$.
Further details of these domain wall structures generating spectra with mid-gap bands will be postponed to \S\ref{sec:mottbands} where such models arise within a particularly rich context.
\section{Extended Unit Cells\label{sec:largeunitcells}}
In contrast to the previous sections which have explored generalizations of the SSH model based on modifying its spectrum, in this section extensions of the SSH model are generated by modifying the SSH Hamiltonian directly. Here, the functional form of the SSH model's Hamiltonian is generalized to incorporate unit cells of arbitrary size and structure.
Similar studies considered SSH-type models with three-site unit cells~\cite{trimer} and four-site unit cells~\cite{ssh4}.
Due to the limited size of the unit cells considered there, no functional form of the hopping parameters was prescribed, but rather the analysis was performed with permutations of the relative magnitude of the various hopping parameters with each other. With the extension to unit cells of arbitrary size considered here, a more systematic approach to the strength of the hopping parameters is needed. A variety of parameterizations will be discussed below.
For a unit cell of size $L$, the Hamiltonian of an extended SSH model may be written as
\begin{equation}
\hat{H} = \sum_{n\in\mathbbm{Z}^+} \sum_{m=0}^{L-1} \tensor{t}{_m}~\opd{c}{L n + m} \op{c}{L n + m+1} + \hc
\end{equation}
where chiral symmetry is enforced by setting all on-site energies $\varepsilon_n = 0$.
As in \S\ref{sec:sshmodel} the Hamiltonian can be rewritten using the canonical basis with operators $\hat{\chi}$ which act on an entire unit cell $\alpha$
\begin{equation}
\hat\chi_{\alpha} = \begin{pmatrix} \left\{ \op{c}{m} \right\}_{A} \\ \left\{ \op{c}{m} \right\}_{B} \end{pmatrix}_{\alpha}
\end{equation}
where sublattice $A$ consists of the odd sites within the unit cell, sublattice $B$ consists of the even sites within the unit cell, and $\alpha$ labels the unit cell.
The Hamiltonian then assumes the chiral form of
\begin{equation}
\hat{H}(k) = \hat\chi^\dagger(k) \begin{pmatrix} 0 & \boldsymbol{h}(k) \\ \boldsymbol{h}^\dagger(k) & 0 \end{pmatrix} \hat\chi(k)
\end{equation}
where $\boldsymbol{h}(k)$ is an $\frac{L}{2}\times\frac{L}{2}$ submatrix of the form
\begin{equation}
\boldsymbol{h}(k) = \begin{pmatrix} t_1 & t_2 & 0 & \cdots & t_{L} \e^{\i k} \\ 0 & t_3 & t_4 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots \\ \vdots & \ddots & \ddots & t_{L-3} & t_{L-2} \\ 0 & \cdots & \cdots & 0 & t_{L-1} \end{pmatrix} \,.
\end{equation}
In this basis the Hamiltonian anti-commutes with the chiral symmetry generator $\Gamma$
\begin{equation}
\{ \hat{H}(k) , \Gamma \} = 0
\end{equation}
where the chiral symmetry generator is given by $\Gamma = \boldsymbol{\sigma}_3 \otimes \mathbbm{1}_{N}$.
Like the SSH model, a $2N$-dimensional momentum space Hamiltonian of this form falls in the $A$III symmetry class and is an element of $U(2N) / (U(N) \times U(N))$. The standard SSH model can be recovered for $N=1=\frac{L}{2}$.
The $U(N) \times U(N)$ gauge transformations take the form of
\begin{equation}
\boldsymbol{U} = \begin{pmatrix} \mathbbm{1}_{N} \e^{-\i n k} & 0 \\ 0 & \mathbbm{1}_{N} \end{pmatrix} \,.
\end{equation}
As an extension of the method of \S\ref{sec:sshtransfer}, the transfer matrix\index{transfer matrix} can be employed to calculate eigenstates of general $1d$ Hamiltonians with unit cells of arbitrary size $L$. The zero-energy eigenstate is then given by the ansatz
\begin{equation}
| \Psi \rangle = \sum_{n=1}^{N} \sum_{a=1}^{L} u^{(a)}_n | \psi^{(a)}_n \rangle
\end{equation}
with the normalization condition
\begin{equation}
\sum_{n=1}^{N} \sum_{a=1}^{L} \left\lvert u^{(a)}_n \right\rvert^2 = 1
\end{equation}
where $a$ indexes each site within the unit cell and $n$ indexes each of the $N$ unit cells along the chain.
For zero energy, the Schr\"odinger equation is
\begin{equation}
\hat{H} | \Psi_0 \rangle = 0 \cdot | \Psi_0 \rangle
\end{equation}
and the above transfer matrix method results in an asymptotic localization length of
\begin{equation}
\xi \approx \cfrac{1}{\ln\cfrac{\prod_{k\text{ even}} t_k}{\prod_{k\text{ odd}} t_k}}
\end{equation}
which is obtained by methods analogous to those of \S\ref{sec:sshtransfer}.
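The localization length formula can be evaluated directly from a list of hoppings. A short sketch (NumPy assumed; illustrative only), which reduces to the familiar SSH result $\xi = 1/\ln(t_2/t_1)$ for two alternating bonds:

```python
import numpy as np

def localization_length(ts):
    """xi = 1 / ln( prod_{k even} t_k / prod_{k odd} t_k ) for a list
    of hoppings ts = [t_1, ..., t_L] (one unit cell)."""
    ts = np.asarray(ts, dtype=float)
    return 1.0 / np.log(np.prod(ts[1::2]) / np.prod(ts[0::2]))

# standard SSH limit: weak bond first, xi = 1 / ln(t2 / t1)
xi_ssh = localization_length([0.4, 0.8])
```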
In practice, it is often necessary to calculate the Zak phase numerically. This can be accomplished by adapting a method developed in~\cite{numericalchern} for computing the Chern number. Recall that the Zak phase for a particular band $n$ is given by
\begin{equation}
\gamma_{n\mathcal{C}} = \oint_{\mathcal{C}} A_n
\end{equation}
where
\begin{equation}
A_n = \i \tensor*{\psi}{^\dagger_n}(k) \tensor{\partial}{_{k}} \tensor*{\psi}{_n}(k) \d k
\end{equation}
is the Berry potential 1-form and $\tensor*{\psi}{_n}(k)$ is the momentum space Bloch eigenfunction of the $n$-th band. Since the Zak phase is defined only modulo $2\pi$, its exponential is a natural quantity to analyze. The Zak phase can then be written as
\begin{equation}
\begin{aligned}[b]
\e^{-\i \gamma_{n \mathcal{C}}}
&= \e^{-\i \oint_{\mathcal{C}} A_n}
\\ &= \e^{-\i \oint_{\mathcal{C}} A_{n}(k) \d k}
\\ &\approx \e^{-\i \sum_a A_{n}(k_a) \Delta k}
\\ &= \prod_{a} \e^{-\i A_{n}(k_a) \Delta k}
\end{aligned}
\end{equation}
where the integral has now been approximated by a sum over a discretized Brillouin zone with momentum labelled by $k_a$.
Since $\Delta k$ is small, each factor of the product can be expanded as
\begin{equation}
\begin{aligned}[b]
\e^{-\i A_{n}(k_a) \Delta k}
&= 1 - \i A_{n}(k_a) \Delta k + \mathcal{O}(\Delta k^2)
\\ &\approx 1 + \psi^\dagger_n(k_a) \tensor{\partial}{_{k}} \psi_n(k_a) \Delta k
\\ &= \psi^\dagger_n(k_a) \left[ \psi_n(k_a) + \tensor{\partial}{_{k}} \psi_n(k_a) \Delta k \right]
\\ &\approx \psi^\dagger_n(k_a) \psi_n(k_a + \Delta k)
\\ &= \psi^\dagger_n(k_a) \psi_n(k_{a+1})
\end{aligned}
\end{equation}
where the discrete derivative has been employed, as well as the unit normalization of the eigenvectors.
The discrete form of the Zak phase for the $n^{\text{th}}$ band can then be obtained as
\begin{equation}
\gamma_{n\mathcal{C}} = \frac\i\pi \ln \left( \prod_a \psi^\dagger_n(k_a) \psi_n(k_{a+1}) \right)
\label{eq:numericalzak}
\end{equation}
where the principal value of the complex logarithm is taken\footnote{The phase of the principal value of the complex logarithm $\ln(z)$ is given by the two-argument arctangent function, $\tan^{-1}(y,x)$, where $z = x+\i y$.} to ensure that only the phase of the argument contributes to $\gamma$.
The contour of integration $\mathcal{C}$ is over the whole Brillouin zone, so the product over $a$ is over all $k_a \in [0 , 2\pi]$.
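Eq.~\eqref{eq:numericalzak} translates directly into code. The sketch below (NumPy assumed; illustrative) evaluates the discrete Zak phase for the two-band SSH Bloch Hamiltonian with off-diagonal element $t_1 + t_2 \e^{-\i k}$, returning the phase in units of $\pi$:

```python
import numpy as np

def zak_phase(t1, t2, n_k=2001):
    """Discrete Zak phase of Eq. (numericalzak) for the two-band SSH
    Bloch Hamiltonian with f(k) = t1 + t2 exp(-ik), in units of pi."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_k, endpoint=False)
    fs = t1 + t2 * np.exp(-1j * ks)
    # normalized lower-band eigenvector of [[0, f], [f*, 0]]
    psis = np.stack([-fs / np.abs(fs), np.ones_like(fs)], axis=1) / np.sqrt(2.0)
    # Wilson-loop product of overlaps psi^dagger(k_a) psi(k_{a+1})
    overlaps = np.einsum('ai,ai->a', psis.conj(), np.roll(psis, -1, axis=0))
    gamma = (1j * np.log(np.prod(overlaps))) / np.pi
    return float(np.round(gamma.real)) % 2.0   # quantized: 1 or 0

g_topo = zak_phase(0.4, 0.8)   # weak intra-cell bond: topological
g_triv = zak_phase(0.8, 0.4)   # strong intra-cell bond: trivial
```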
\subsection{Periodic Envelopes}
A specific parameterization of the $\{t_n\}$ can be taken in the form of a periodic envelope:
\begin{equation}
t_n = t_0 + (-1)^{n} \delta t \left\lvert\cos\left( \tfrac{(n-1) \pi}{q} - \phi \right)\right\rvert
\label{eq:tnenvelope}
\end{equation}
The factor $n-1$ in the argument of the $\cos$ envelope ensures that the first bond in the chain is indexed by $t_1$, with the modulation initialized at $\cos(-\phi)$ for $n=1$.
In the absence of domain walls, these models are adiabatically connected to the standard SSH model, which is recovered in the limit $q\to\infty$. Taking the absolute value of the envelope in Eq.~\eqref{eq:tnenvelope} ensures there are no domain walls in the parameterization of the $\{t_n\}$.
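For concreteness, Eq.~\eqref{eq:tnenvelope} can be implemented in a few lines (NumPy assumed; illustrative):

```python
import numpy as np

def envelope_hoppings(n_bonds, t0=0.5, dt=0.1, q=30, phi=0.0):
    """t_n = t0 + (-1)^n dt |cos((n-1) pi / q - phi)|, Eq. (tnenvelope).
    The absolute value forbids domain walls in the {t_n}."""
    n = np.arange(1, n_bonds + 1)
    return t0 + (-1.0) ** n * dt * np.abs(np.cos((n - 1) * np.pi / q - phi))

ts = envelope_hoppings(8)   # phi = 0: t_1 = t0 - dt, a weak first bond
```

Odd bonds stay at or below $t_0$ and even bonds at or above it, so the alternation is strict, as required for adiabatic connection to the SSH model.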
\begin{figure}[htp!]
\begin{subfigure}{\linewidth}
\begin{subsubcaption}
\subfiglabel{\includegraphics[scale=1]{GenvN30phi0.pdf}}{3,2}{GenvN30phi0}
\subfiglabel{\includegraphics[scale=1]{tnenvN30phi0.pdf}}{3,2}{tnenvN30phi0}
\end{subsubcaption}
\phantomsubcaption{\vspace{-\baselineskip}\addtocounter{subfigure}{-1}\label{fig:envN30phi0}}
\end{subfigure}
\begin{subfigure}{\linewidth}
\begin{subsubcaption}
\subfiglabel{\includegraphics[scale=1]{GenvN30phi2.pdf}}{3,2}{GenvN30phi2}
\subfiglabel{\includegraphics[scale=1]{tnenvN30phi2.pdf}}{3,2}{tnenvN30phi2}
\end{subsubcaption}
\phantomsubcaption{\vspace{-\baselineskip}\addtocounter{subfigure}{-1}\label{fig:envN30phi2}}
\end{subfigure}
\begin{subfigure}{\linewidth}
\begin{subsubcaption}
\subfiglabel{\includegraphics[scale=1]{GenvN30phi4.pdf}}{3,2}{GenvN30phi4}
\subfiglabel{\includegraphics[scale=1]{tnenvN30phi4.pdf}}{3,2}{tnenvN30phi4}
\end{subsubcaption}
\phantomsubcaption{\vspace{-\baselineskip}\addtocounter{subfigure}{-1}\label{fig:envN30phi4}}
\end{subfigure}
\begin{subfigure}{\linewidth}
\begin{subsubcaption}
\subfiglabel{\includegraphics[scale=1]{GenvN30phi6.pdf}}{3,2}{GenvN30phi6}
\subfiglabel{\includegraphics[scale=1]{tnenvN30phi6.pdf}}{3,2}{tnenvN30phi6}
\end{subsubcaption}
\phantomsubcaption{\vspace{-\baselineskip}\addtocounter{subfigure}{-1}\label{fig:envN30phi6}}
\end{subfigure}
\caption{Boundary-site spectral functions (left) corresponding to generalized SSH models with unit cells given by $t_n$'s following Eq.~\eqref{eq:tnenvelope} (right). The chain is initialized with a weak bond with $N=30$ and $\phi=0$ \subref{fig:envN30phi0}, $\phi=\pi/4$ \subref{fig:envN30phi2}, $\phi=\pi/2$ \subref{fig:envN30phi4}, and $\phi=3\pi/4$ \subref{fig:envN30phi6}.\label{fig:envN30}}
\end{figure}
These periodic envelopes can also be interpreted as a kind of periodic spatial disorder which respects the SSH model's chiral symmetry, but over a longer period than the standard two-site unit cell.
\begin{figure}[ht!]
\includegraphics[scale=1]{G-envN30phi4.pdf}
\includegraphics[scale=1]{tn-envN30phi4.pdf}
\caption{Boundary-site spectral function (left) corresponding to a generalized SSH model with unit cells given by $t_n$'s following Eq.~\eqref{eq:tnenvelope} (right). The chain is initialized with a strong bond with $N=30$ and $\phi=\pi/4$. This is the parity reversed case of Fig.~\ref{fig:envN30phi4}.\label{fig:envN30phi4s}}
\end{figure}
An envelope of only half a cosine band can also be implemented. In contrast to the previous envelope, this unit cell repeats its pattern after only half the period of a cosine envelope.
\begin{figure}[htp!]
\includegraphics[scale=1]{GenvN60phi0.pdf}
\includegraphics[scale=1]{tnenvN60phi0.pdf}
\caption{Boundary-site spectral function (left) corresponding to a generalized SSH model with unit cells given by $t_n$'s following Eq.~\eqref{eq:tnenvelope} (right). The chain is initialized with a weak bond with $N=60$ and $\phi=0$.\label{fig:envN60phi0}}
\end{figure}
Since these models are adiabatically connected to the SSH model, at low energy they feature a hard gap, with or without a spectral pole at zero energy depending on whether the chain is initialized with $t_1 < t_2$ or $t_1 > t_2$.
A difference from the SSH model is that these models are characterized by spectral functions with fractured outer bands.
For these systems the Zak phase as computed by Eq.~\eqref{eq:numericalzak} is quantized, and their topological classification is clear.
Examples of generalized SSH models with the hopping parameters given by Eq.~\eqref{eq:tnenvelope} are shown in Figs.~\ref{fig:envN30}, \ref{fig:envN30phi4s}, and \ref{fig:envN60phi0}. The shading of the spectral function (blue/red) corresponds to the system being topological or trivial, respectively. The topology is quantified by the Zak phase, evaluated numerically according to Eq.~\eqref{eq:numericalzak}: it is integer quantized for the topological spectra and zero for the trivial spectra. The sequence of spectral functions shown in Fig.~\ref{fig:envN30} has unit cells initialized with a weak bond and chooses $N=30$, $t_0 = 0.5$, $\delta t = 0.1$, with varying values of $\phi$. The calculation shows that, compared to the standard SSH model, the spectrum still features a zero energy mid-gap pole sitting in a hard gap. The SSH bands, however, take on a fractal form, with many bands separated by exponentially small gaps.
Fig.~\ref{fig:envN30phi4s} shows an example of a generalized SSH model initialized with a strong bond. The parameters are chosen to be $N=30$, $t_0 = 0.5$, $\delta t = 0.1$, and $\phi = \pi/4$. This is the parity reversed case of Fig.~\ref{fig:envN30phi4}. As shown, the spectrum takes on a similar form, but with the topological pole absent, as expected by analogy with the trivial phase of the SSH model.
Shown in Fig.~\ref{fig:envN60phi0} is a model parameterized with $N=60$, $t_0 = 0.5$, $\delta t = 0.1$, and $\phi = 0$. Like the previous examples, this model has a unit cell which is 30 sites in length, but the period of the cosine envelope is now 60 sites, meaning that only half the wavelength is captured by the unit cell.
Tight-binding models parameterized by Eq.~\eqref{eq:tnenvelope} are adiabatically connected to the SSH model since the hopping parameters exhibit a strict alternating behavior. It is therefore expected that the topological phase and presence or absence of a zero energy pole depends on whether the semi-infinite system is initialized with a weak bond or a strong bond.
\subsection{Long Wavelength Modulations}
The models described in the previous section involved generalizations of the SSH model where a long wavelength envelope is superimposed over the hopping amplitudes. The opposite case can also be considered, where the alternating behavior of the hopping amplitudes does not take place over successive bonds, but rather over several bonds.
A parameterization of $\{t_n\}$ which realizes this concept is
\begin{equation}
t_n = t_0 + (-1)^n \delta t \cos\left( \tfrac{(n-1) 2\pi}{N} - \phi \right)
\label{eq:tnoscillating}
\end{equation}
This parameterization recovers the standard SSH model for a two-site unit cell, $N=2$, and $\phi \neq \pi/2 \mod\pi$.
This parameterization includes as subcases configurations considered in analyses of the SSH$_4$ model~\cite{ssh4}. In contrast to these previous studies, the parameterization \eqref{eq:tnoscillating} specifies a functional form for the $\{t_n\}$.
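A sketch of Eq.~\eqref{eq:tnoscillating} (NumPy assumed; illustrative), here evaluated for the $N=8$, $\phi=\pi/2$ parameters used in the figures below:

```python
import numpy as np

def modulated_hoppings(n_bonds, N, t0=0.5, dt=0.4, phi=np.pi / 2):
    """t_n = t0 + (-1)^n dt cos((n-1) 2 pi / N - phi), Eq. (tnoscillating):
    the bond alternation is modulated over a unit cell of N sites."""
    n = np.arange(1, n_bonds + 1)
    return t0 + (-1.0) ** n * dt * np.cos((n - 1) * 2.0 * np.pi / N - phi)

ts = modulated_hoppings(16, N=8)   # the N = 8, phi = pi/2 case of the figures
```

The generated pattern repeats with the period $N$ of the unit cell, as it must.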
The general features of these models can be classified based on their parameterization. This classification is shown in Table~\ref{tab:oscillating}. The primary spectral feature of concern in these models is whether at zero energy the spectrum is gapped ($\mathcal{A}(0)=0$), metallic ($\mathcal{A}(0)=\text{const.}$), or contains a pole ($\mathcal{A}(0) \sim \delta(0)$).
\begin{table}[h]
\centering
\caption[Classification of long wavelength SSH variants]{Classification of long wavelength SSH variants. The parameters are the phase shift $\phi$, the unit cell size $N$, and the qualitative feature of the spectral function at zero frequency $\mathcal{A}(0)$. The notation $X \setminus Y$ denotes the set $X$ with elements of the set $Y$ removed. \textit{E.g.} $2\mathbbm{N} \setminus 4\mathbbm{N}$ denotes even cell lengths $N$ excluding those which are multiples of 4.\label{tab:oscillating}}
\begin{tabular}{ccc}
$\phi \mod 2\pi$ & $N$ & $\mathcal{A}(0)$
\\\hline
0 & $4\mathbbm{N}$ & Pole \\
0 & $2\mathbbm{N} \setminus 4\mathbbm{N}$ & Gap \\
$\pi$ & $2\mathbbm{N}$ & Pole \\
\\
$\pm\pi/2$ & $8\mathbbm{N}$ & Pole \\
$\pm\pi/2$ & $4\mathbbm{N} \setminus 8\mathbbm{N}$ & Gap \\
$\pm\pi/2$ & $2\mathbbm{N} \setminus 4\mathbbm{N}$ & Metal \\
\end{tabular}
\end{table}
The Zak phase for models of this type is in general not quantized.
Within this class of generalizations, only systems with an even number of sites in the unit cell were analyzed; systems with an odd number of sites in the unit cell cannot possess chiral symmetry.
A sequence of tight-binding chains constructed from the parameterization Eq.~\eqref{eq:tnoscillating} is shown in Fig.~\ref{fig:oscphi4}. The parameters chosen are $t_0 = 0.5$, $\delta t = 0.4$, $\phi = \pi/2$, with $N$ varied as 8, 10, 12, 14, 16. These values of $N$ demonstrate the variety of spectra which can be generated by the parameterization Eq.~\eqref{eq:tnoscillating}. These choices of $N$ cover all the cases of $\phi = \pi/2~\mod 2\pi$ shown in Table~\ref{tab:oscillating}: $8,16 \in 8\mathbbm{N}$; $12 \in 4\mathbbm{N} \setminus 8\mathbbm{N}$; and $10, 14 \in 2\mathbbm{N} \setminus 4\mathbbm{N}$.
Note that in Fig.~\ref{fig:oscphi4} only the low energy features of the spectral function are plotted.
So, for example, while the $N=8$ case appears to be identical to the standard SSH configuration, there are bands and gaps at higher energies beyond the plot region; the bandwidth here is $D=1.0$, so bands and gaps appear throughout the entire bandwidth.
\begin{figure}[htp!]
\includegraphics[scale=1]{GoscN8phi4.pdf}
\includegraphics[scale=1]{tnoscN8phi4.pdf}
\includegraphics[scale=1]{GoscN10phi4.pdf}
\includegraphics[scale=1]{tnoscN10phi4.pdf}
\includegraphics[scale=1]{GoscN12phi4.pdf}
\includegraphics[scale=1]{tnoscN12phi4.pdf}
\includegraphics[scale=1]{GoscN14phi4.pdf}
\includegraphics[scale=1]{tnoscN14phi4.pdf}
\includegraphics[scale=1]{GoscN16phi4.pdf}
\includegraphics[scale=1]{tnoscN16phi4.pdf}
\caption[Example spectra of long wavelength SSH variants]{Sequence demonstrating the dependence of spectra based on Eq.~\eqref{eq:tnoscillating} on $N$. Note that only the low energy region of the spectral functions is plotted. The spectral functions obey the categorization of Table~\ref{tab:oscillating}.\label{fig:oscphi4}}
\end{figure}
The classification of the long wavelength SSH models presented in Table~\ref{tab:oscillating} was found by systematically sampling the parameter space of $\phi$ and $N$. A full theoretical understanding of how this pattern emerges is not yet available and is left for future work.
\section{Power-Law Spectral Functions\label{sec:powerlawssh}}
While one of the main characteristics of the SSH model is the presence of a hard gap about the Fermi energy, the model can be modified to allow cases where the low energy spectrum obeys power-law behavior
\begin{equation}
\mathcal{A}(\omega) \sim \lvert \omega \rvert^{\pm r}
\label{eq:pmr}
\end{equation}
in the region $\lvert \omega \rvert \ll 1$, with $r>0$.
To obtain a spectrum with power $\pm r$, the chain parameters can be generated by a form of
\begin{equation}
t_n = t_0 \sqrt{ 1 - \zeta (-1)^{n} \frac{r}{n + d} }
\label{eq:powerlawtn}
\end{equation}
where $\zeta = \pm 1$ aligns with the sign of the exponent in Eq.~\eqref{eq:pmr}.
The positive sign yields a power-law vanishing spectrum, and the negative sign yields a power-law diverging spectrum, corresponding to the trivial and topological phases of the SSH model, respectively. The full bandwidth is $4 t_0$.
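Eq.~\eqref{eq:powerlawtn} is equally simple to implement. The sketch below (NumPy assumed; illustrative) generates both signs of $\zeta$ and confirms the bond parities discussed in the following subsections:

```python
import numpy as np

def powerlaw_hoppings(n_bonds, r, d, zeta, t0=0.5):
    """t_n = t0 sqrt(1 - zeta (-1)^n r / (n + d)), Eq. (powerlawtn).
    zeta = +1: power-law vanishing (pseudogap); zeta = -1: diverging."""
    n = np.arange(1, n_bonds + 1)
    return t0 * np.sqrt(1.0 - zeta * (-1.0) ** n * r / (n + d))

ts_vanishing = powerlaw_hoppings(100, r=0.5, d=2, zeta=+1)   # t1 > t2
ts_diverging = powerlaw_hoppings(100, r=0.5, d=2, zeta=-1)   # t1 < t2
```

Deep in the chain the hoppings approach the uniform value $t_0$, which is what generates spectral weight arbitrarily close to zero energy.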
\subsection{Power-Law Vanishing Spectra\label{sec:pseudogapssh}}
A spectrum which is power-law vanishing at low energies is also called a pseudogap.
Pseudogap spectral functions appear in such contexts as cuprate high-$T_c$ superconductors and also in the self-energy of the Hubbard model in infinite dimensions in the metallic phase, as seen in \S\ref{sec:hubbardsolution}.
The functional form of the hopping parameters follows Eq.~\eqref{eq:powerlawtn} with $\zeta = +$.
It is clear that $t_1 > t_2$, so this configuration starts on a strong bond, like the trivial phase of the SSH model.
A set of spectra corresponding to models generated from the parameterization Eq.~\eqref{eq:powerlawtn} with $\zeta = +$ for a range of values of $d$ is shown in Figs.~\ref{fig:fraction_r05}, with $r= 0.5$, and \ref{fig:fraction_r2}, with $r = 2$. Note the similarity between the spectra for $r = 2$ in Fig.~\ref{fig:fraction_r2} and the self-energy for the Anderson impurity model, Fig.~\ref{fig:siamS}. In both cases the very low energy features scale as $\omega^2$.
At low energy these spectra exhibit behavior described by $\mathcal{A}(\omega) = \alpha |\omega|^r$. From constructing chains from the ansatz Eq.~\eqref{eq:powerlawtn} and analyzing the resulting spectrum, it is found that for fixed $r$ the coefficient $\alpha$ increases monotonically with $d$.
\begin{table}[h]
\centering
\caption{Values of $\alpha$ for varying values of $d$ for fixed $r$.}
\begin{tabular}{|c||c|c|c|c|}\hline
\diagbox[height=0.67cm]{$r$}{$d$} & 2 & 5 & 8 & 20 \\\hline\hline
0.5 & 4.65 & 6.74 & 8.67 & 13.08 \\\hline
2 & 11.80 & 58.26 & 141.99 & 705.58 \\\hline
\end{tabular}
\end{table}
\begin{figure}[htp!]
\includegraphics[scale=1]{green_fraction_r05d2.pdf}
\includegraphics[scale=1]{tn_fraction_r05d2.pdf}
\includegraphics[scale=1]{green_fraction_r05d5.pdf}
\includegraphics[scale=1]{tn_fraction_r05d5.pdf}
\includegraphics[scale=1]{green_fraction_r05d8.pdf}
\includegraphics[scale=1]{tn_fraction_r05d8.pdf}
\includegraphics[scale=1]{green_fraction_r05d20.pdf}
\includegraphics[scale=1]{tn_fraction_r05d20.pdf}
\caption{Spectral functions (left) corresponding to models with $t_n$'s generated by Eq.~\eqref{eq:powerlawtn} with $\zeta = +$ (right) at various values of $d$ with $r = 0.5$.\label{fig:fraction_r05}}
\end{figure}
\begin{figure}[htp!]
\includegraphics[scale=1]{green_fraction_r2d2.pdf}
\includegraphics[scale=1]{tn_fraction_r2d2.pdf}
\includegraphics[scale=1]{green_fraction_r2d5.pdf}
\includegraphics[scale=1]{tn_fraction_r2d5.pdf}
\includegraphics[scale=1]{green_fraction_r2d8.pdf}
\includegraphics[scale=1]{tn_fraction_r2d8.pdf}
\includegraphics[scale=1]{green_fraction_r2d20.pdf}
\includegraphics[scale=1]{tn_fraction_r2d20.pdf}
\caption{Spectral functions (left) corresponding to models with $t_n$'s generated by Eq.~\eqref{eq:powerlawtn} with $\zeta = +$ (right) at various values of $d$ with $r = 2$.\label{fig:fraction_r2}}
\end{figure}
An interpretation of the $1/{n}$ envelope lies in noting that the spectral gap of the SSH model is directly proportional to the difference of neighboring hoppings, and that features at large $n$ correspond to spectral features at low $\omega$. For hoppings parameterized by Eq.~\eqref{eq:powerlawtn}, hopping amplitudes at large $n$ become asymptotically close in magnitude, such that their difference becomes very small.
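This interpretation can be made quantitative with a short numerical check. The sketch below (NumPy assumed; parameters are the $\zeta=+$, $r=0.5$, $d=2$ example) computes the local dimerization $|t_{2m-1}-t_{2m}|$, which decays towards zero along the chain so that the local SSH-like gap closes at large $n$:

```python
import numpy as np

# local dimerization of Eq. (powerlawtn) with zeta = +1, r = 0.5, d = 2:
# the bond alternation |t_{2m-1} - t_{2m}| dies off along the chain,
# so the local SSH-like gap closes slowly at large n
t0, r, d = 0.5, 0.5, 2.0
n = np.arange(1, 401)
ts = t0 * np.sqrt(1.0 - (-1.0) ** n * r / (n + d))
delta = np.abs(ts[0::2] - ts[1::2])   # one value per pair of bonds
```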
In the parameterization Eq.~\eqref{eq:powerlawtn}, the power-law features in the resulting spectral functions typically set in at very low energies, $|\omega| \ll 1$. As such the power-law feature can be difficult to identify on linear scale plots. Spectral functions for $r = 0.5$ and $r = 2$ at $d = 20$ are plotted on a log scale in Fig.~\ref{fig:powerlogd20}. The powers of $r = 0.5$ and $r = 2$ are clearly seen in Figs.~\ref{fig:green_fraction_r05d20_log} and \subref{fig:green_fraction_r2d20_log} respectively.
\begin{figure}[htp!]
\subfiglabel{\includegraphics{green_fraction_r05d20_log.pdf}}{3.125,2}{fig:green_fraction_r05d20_log}
\subfiglabel{\includegraphics{green_fraction_r2d20_log.pdf}}{3.125,2}{fig:green_fraction_r2d20_log}
\caption{Spectral functions for the power-law vanishing SSH model from Eq.~\eqref{eq:powerlawtn} with $\zeta = +$ and $d=20$ at \subref{fig:green_fraction_r05d20_log} $r=0.5$ and \subref{fig:green_fraction_r2d20_log} $r=2$. The log scale clearly shows the low energy power law $\mathcal{A}(\omega) \sim |\omega|^{r}$.\label{fig:powerlogd20}}
\end{figure}
\subsection{Power-Law Diverging Spectra\label{sec:powerdivergencessh}}
The opposite case to a pseudogap is where the spectral function exhibits a power-law divergence at low energy. This case is made possible with the parameterization of the $\{t_n\}$ initialized with a weak bond, following Eq.~\eqref{eq:powerlawtn} with $\zeta = -$.
This configuration has $t_1 < t_2$, and therefore the chain parity starts with a weak bond like the topological phase of the SSH model.
Similarly, a diverging state at zero energy can be identified. As in the SSH model, the wavefunction of this zero energy state is exponentially localized on the boundary site; its localization is obtained from a transfer matrix calculation.
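The one-step transfer relation can be sketched numerically. Assuming the standard chiral-chain recursion $\psi_{2m+1} = -(t_{2m-1}/t_{2m})\,\psi_{2m-1}$ for the zero mode (an illustrative sketch in the spirit of \S\ref{sec:sshtransfer}), the boundary localization follows directly:

```python
import numpy as np

def zero_mode(ts):
    """Zero-energy wavefunction on the odd sublattice from the chiral
    one-step recursion psi_{2m+1} = -(t_{2m-1}/t_{2m}) psi_{2m-1}
    (an illustrative transfer-matrix sketch)."""
    psi = [1.0]
    for m in range(len(ts) // 2):
        psi.append(-ts[2 * m] / ts[2 * m + 1] * psi[-1])
    psi = np.asarray(psi)
    return psi / np.linalg.norm(psi)

# power-law chain started on a weak bond (zeta = -1): all ratios < 1,
# so the zero mode decays monotonically away from the boundary
n = np.arange(1, 201)
ts = 0.5 * np.sqrt(1.0 + (-1.0) ** n * 0.5 / (n + 2.0))
psi = zero_mode(ts)
```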
A set of spectra corresponding to models generated from the parameterization Eq.~\eqref{eq:powerlawtn} with $\zeta = -$ for a range of values of $d$ is shown in Figs.~\ref{fig:fractiond_r05}, with $r = 0.5$, and \ref{fig:fractiond_r2}, with $r = 2$.
\begin{figure}[htp]
\includegraphics[scale=1]{green_fractiond_r05d2.pdf}
\includegraphics[scale=1]{tn_fractiond_r05d2.pdf}
\includegraphics[scale=1]{green_fractiond_r05d5.pdf}
\includegraphics[scale=1]{tn_fractiond_r05d5.pdf}
\includegraphics[scale=1]{green_fractiond_r05d8.pdf}
\includegraphics[scale=1]{tn_fractiond_r05d8.pdf}
\includegraphics[scale=1]{green_fractiond_r05d20.pdf}
\includegraphics[scale=1]{tn_fractiond_r05d20.pdf}
\caption{Spectral functions (left) corresponding to models with $t_n$'s generated by Eq.~\eqref{eq:powerlawtn} with $\zeta = -$ (right) at various values of $d$ with $r = 0.5$.\label{fig:fractiond_r05}}
\end{figure}
\begin{figure}[htp]
\includegraphics[scale=1]{green_fractiond_r2d2.pdf}
\includegraphics[scale=1]{tn_fractiond_r2d2.pdf}
\includegraphics[scale=1]{green_fractiond_r2d5.pdf}
\includegraphics[scale=1]{tn_fractiond_r2d5.pdf}
\includegraphics[scale=1]{green_fractiond_r2d8.pdf}
\includegraphics[scale=1]{tn_fractiond_r2d8.pdf}
\includegraphics[scale=1]{green_fractiond_r2d20.pdf}
\includegraphics[scale=1]{tn_fractiond_r2d20.pdf}
\caption{Spectral functions (left) corresponding to models with $t_n$'s generated by Eq.~\eqref{eq:powerlawtn} with $\zeta = -$ (right) at various values of $d$ with $r = 2$.\label{fig:fractiond_r2}}
\end{figure}
In the parameterization Eq.~\eqref{eq:powerlawtn}, the power-law features in the resulting spectral functions typically set in at very low energies, $|\omega| \ll 1$. As such the power-law feature can be difficult to identify on linear scale plots. Spectral functions for $r = 0.5$ and $r = 2$ at $d = 20$ are plotted on a log scale in Fig.~\ref{fig:powerlogdd20}.
On the linear scale plot, the spectrum for $r=2$, $d=20$ in Fig.~\ref{fig:fractiond_r2} appears similar to a pole lying within the continuum, but on the log scale plot shown in Fig.~\ref{fig:green_fractiond_r2d20_log} the peak is clearly a power-law feature.
\begin{figure}[htp!]
\subfiglabel{\includegraphics{green_fractiond_r05d20_log.pdf}}{3.125,2}{fig:green_fractiond_r05d20_log}
\subfiglabel{\includegraphics{green_fractiond_r2d20_log.pdf}}{3.125,2}{fig:green_fractiond_r2d20_log}
\caption{Spectral functions for the power-law diverging SSH model from Eq.~\eqref{eq:powerlawtn} with $\zeta = -$ and $d=20$ at \subref{fig:green_fractiond_r05d20_log} $r=0.5$ and \subref{fig:green_fractiond_r2d20_log} $r=2$. The log scale clearly shows the low energy power law $\mathcal{A}(\omega) \sim |\omega|^{-r}$.\label{fig:powerlogdd20}}
\end{figure}
The spectrum does not feature a gap, and there exist finite energy states infinitesimally close to zero energy. The wavefunctions of these low energy states exhibit a node near the chain boundary, \textit{i.e.} a point $n_p$ in the chain where $\lvert \psi(n_p) \rvert^2 = 0$, effectively partitioning the wavefunction into separate parts.
These wavefunctions exhibit exponential decay between the boundary and the node, and beyond the node the wavefunction is finite and delocalized. The nature of these low energy wavefunctions can be analyzed by considering the entanglement between the two partitions on either side of the node, a portion which can be considered the boundary and a portion which can be considered the bulk. The entanglement entropy is given by
\begin{equation}
S_{\textsc{e}} = -\text{Tr} \left[\rho_A \ln \rho_A\right]
\end{equation}
where $\rho_A$ is the reduced density matrix of partition $A$. It is found that the portion of the wavefunction near the boundary has significant entanglement with the bulk state. Because of this, the non-zero energy states should not be interpreted as topological, as they are extended into the bulk.
Only the state precisely at zero energy should be regarded as topological in the power-law diverging case. Note however that it is not separated from the continua by a gap as in the standard SSH case.
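For a single-particle state the entanglement entropy across a cut reduces to a two-level formula, since $\rho_A$ then has eigenvalues $p$ and $1-p$, with $p$ the weight of the wavefunction in partition $A$. A minimal sketch (NumPy assumed; the numbers are illustrative):

```python
import numpy as np

def bipartite_entropy(psi, cut):
    """S_E = -Tr[rho_A ln rho_A] for a single-particle state
    |Psi> = sum_i psi_i |i>, bipartitioned at site index `cut`:
    rho_A has eigenvalues p and 1 - p, with p the weight in A."""
    p = float(np.sum(np.abs(psi[:cut]) ** 2))
    return -sum(q * np.log(q) for q in (p, 1.0 - p) if q > 0.0)

# a state with weight on both sides of the node is entangled with the bulk
psi = np.array([0.6, 0.8])            # 36% boundary weight, 64% bulk weight
S = bipartite_entropy(psi, cut=1)     # > 0: boundary part entangled
```

A state fully confined to one side of the cut gives $S_{\textsc{e}}=0$, which is the sense in which the boundary-entangled low energy states here are distinguished from a genuinely localized topological mode.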
\section{Outlook}
Several generalizations of the SSH model were developed in this chapter. Previous generalizations of the SSH model considered unit cells of three and four sites, but the constructions here go further, devising models whose unit cell may possess an arbitrary number of sites. These previous studies arise as special cases of the more general models developed in this chapter.
Another novel generalization constructed here comprises the non-translation-invariant models with power-law suppressed hopping amplitudes. Generalizations of the SSH model in the literature are generally concerned with the system's momentum space representation, and therefore miss such non-periodic models entirely.
The SSH model is sometimes taken to be the simplest model of a crystalline topological insulator, or higher-order topological insulator~\cite{hoti}. The lessons learned about the generalized SSH models analyzed in this chapter may provide the basis for generalizing crystalline topological insulators in higher dimensions.
While the generalized SSH models developed here are interesting in their own right, many features of these generalizations will appear again in the context of the auxiliary field mapping in \S\ref{ch:aux} and \S\ref{ch:motttopology}.
\chapter*{Statement of Original Authorship}
\addcontentsline{toc}{chapter}{Statement of Original Authorship}
I hereby certify that the submitted work is my own work, was completed while registered as a candidate for the degree stated on the Title Page, and I have not obtained a degree elsewhere on the basis of the research presented in this submitted work.
\vfill
This thesis is based on the following publications:
\begin{itemize}
\item[\cite{motttopology}] S. Sen, P. J. Wong, A. K. Mitchell, ``The Mott transition as a topological phase transition,'' \textit{Phys. Rev. B} \textbf{102} 081110(R) \textbf{Editors' Suggestion} (2020). \href{https://www.arxiv.org/abs/2001.10526}{arXiv:2001.10526 [cond-mat.str-el]}
\item[\cite{bethessh}] P. J. Wong, A. K. Mitchell, ``Topological phases of the interacting SSH model on the Bethe lattice,'' (\textit{In Progress}).
\item[\cite{generalizedssh}] P. J. Wong, A. K. Mitchell, ``Extended SSH tight-binding models,'' (\textit{In Progress}).
\item[\cite{auximp}] P. J. Wong, S. Sen, A. K. Mitchell, ``Effective theories for quantum impurity models'' (\textit{In Progress}).
\item[\cite{phasymmotttopology}] P. J. Wong, S. Sen, A. K. Mitchell, ``Effective Topology in the $ph$-asymmetric Hubbard Model,'' (\textit{In Progress}).
\end{itemize}
\vfill
\phantom{.}
\input{acknowledgements_arxiv.tex}
\cleardoublepage
\pagenumbering{arabic}
\setcounter{page}{1}
{
\onehalfspacing
\input{introduction_arxiv.tex}
\input{green_arxiv.tex}
\input{sshmodel_arxiv.tex}
\input{bethessh_arxiv.tex}
\input{correlations_arxiv.tex}
\input{motttopology_arxiv.tex}
\input{conclusion_arxiv.tex}
\section{Introduction}
\label{se1}
\subsection{Scope of this paper}
Motivated by the huge success of deep neural networks
in applications (see, e.g., Schmidhuber (2015),
Rawat and Wang (2017),
Hewamalage, Bergmeir and Bandara (2020)
and the literature cited therein) there is nowadays a
strong interest in showing theoretical properties
of such estimates. In the last years many new results
concerning deep feedforward neural network estimates
have been derived (cf., e.g.,
Eldan and Shamir (2016),
Lu et al. (2020),
Yarotsky (2018) and
Yarotsky and Zhevnerchuk (2019)
concerning approximation properties or
Kohler and Krzy\.zak (2017), Bauer and Kohler
(2019) and Schmidt-Hieber (2020) concerning statistical
properties of these estimates).
But basically no theoretical convergence results are known about
the recurrent neural network estimates, which are among those
neural network estimates which have been successfully
applied in practice for time series forecasting (Smyl (2020), Mas and Carre (2020) and
Makridakis, Spiliotis and Assimakopoulos (2018)),
handwriting recognition (Graves et al. (2008),
Graves and Schmidhuber (2009)),
speech recognition (Graves and Schmidhuber (2005),
Graves, Mohamed and Hinton (2013)) and natural language processing
(Pennington, Socher and Manning (2014)). For a survey of recent advances on
recurrent neural networks see Salehinejad et al. (2018). In this paper
we introduce a special class of deep recurrent neural
network estimates and analyze their statistical properties
in the context of a regression estimation problem with dependent data.
\subsection{A regression problem with dependent data}
In order to motivate our regression estimation problem with dependent
data, we start by considering
a general time series prediction problem with exogenous variables
described
as follows: Let $(X_t,Y_t)$ $(t \in \mathbb{Z})$ be $\mathbb{R}^d \times \mathbb{R}$--valued
random variables which satisfy
\begin{equation}
\label{se1eq1}
Y_t = F(X_t, (X_{t-1},Y_{t-1}), (X_{t-2},Y_{t-2}), \dots) + \epsilon_t
\end{equation}
for some measurable function
$F: \mathbb{R}^d \times (\mathbb{R}^d \times \mathbb{R})^\mathbb{N} \rightarrow \mathbb{R}$ and
some real-valued random variables $\epsilon_t$ with the property
\begin{equation}
\label{se1eq2}
{\mathbf E}\{\epsilon_t|X_t, (X_{t-1},Y_{t-1}), (X_{t-2},Y_{t-2}), \dots\}=0
\quad a.s.,
\end{equation}
where $\mathbb{R}$ and $\mathbb{N}$ denote the sets of real numbers and positive integers, respectively.
Given the data
\begin{equation}
\label{se1eq3}
{\mathcal{D}}_n=\{(X_1,Y_1), \dots, (X_n,Y_n)\}
\end{equation}
the aim is to construct an estimate $m_n(\cdot)=m_n(\cdot,{\mathcal{D}}_n):\mathbb{R}^d
\rightarrow \mathbb{R}$ such that the mean squared prediction error
\[
{\mathbf E} \left\{
\left|
Y_{n+1}-m_n(X_{n+1},{\mathcal{D}}_n)
\right|^2
\right\}
\]
is as small as possible.
In this article we simplify the above general model by imposing five main
constraints.
Firstly, we assume that
$F(X_t, (X_{t-1},Y_{t-1}), (X_{t-2},Y_{t-2}), \dots) $
does not depend on the complete infinite past
$(X_{t-1},Y_{t-1}), (X_{t-2},Y_{t-2}), \dots$
but only on the last $k$ times steps, where $k \in \mathbb{N}$.
Secondly, we assume that $F(X_t, (X_{t-1},Y_{t-1}), (X_{t-2},Y_{t-2}), \dots) $
depends only on the $x$-values.
Thirdly, we assume that $F$ has, in addition,
a special recursive structure:
\begin{eqnarray*}
&&
F(X_t, (X_{t-1},Y_{t-1}), (X_{t-2},Y_{t-2}), \dots)
=
G(X_t, H_k(X_{t-1}, X_{t-2}, \dots, X_{t-k}))
\end{eqnarray*}
where
\begin{eqnarray}
&&
H_k(x_{t-1}, x_{t-2}, \dots, x_{t-k})
=
H(x_{t-1}, H_{k-1}(x_{t-2}, x_{t-3}, \dots, x_{t-k}))
\label{se1eq10}
\end{eqnarray}
and
\begin{equation}
\label{se1eq11}
H_1(x_{t-1})=H(x_{t-1},0).
\end{equation}
Here $G:\mathbb{R}^d \times \mathbb{R} \rightarrow \mathbb{R}$ and $H:\mathbb{R}^d \times \mathbb{R}
\rightarrow \mathbb{R}$ are smooth functions.
Fourthly, we assume that $\epsilon_t$ are independent
and identically distributed
random
variables with mean zero satisfying the following sub-Gaussian
assumption:
\begin{equation}
\label{se1eq12}
{\mathbf E} \left\{
e^{c_1 \cdot \epsilon_t^2}
\right\}
<
\infty.
\end{equation}
And finally we simplify our model further by assuming
that $X_1$, $X_2$, \dots are independent and identically distributed.
In this way we get the following regression problem: Let $(X_t)_{t \in \mathbb{Z}}$
be independent identically distributed random variables with values in $\mathbb{R}^d$ and
let $(\epsilon_t)_{t \in \mathbb{Z}}$
be independent identically distributed random variables with values in $\mathbb{R}$, which are
independent of $(X_t)_{t \in \mathbb{Z}}$.
Assume ${\mathbf E} \{\epsilon_t\}=0$ and (\ref{se1eq12}).
Set
\[
Y_t = G(X_t, H_k(X_{t-1}, \dots, X_{t-k})) + \epsilon_t
\]
for some (measurable) $G: \mathbb{R}^d \times \mathbb{R} \rightarrow \mathbb{R}$ and $H_k$ defined by
(\ref{se1eq10}) and (\ref{se1eq11}) for some (measurable)
$H:\mathbb{R}^d \times \mathbb{R} \rightarrow \mathbb{R}$.
Given the data (\ref{se1eq3})
we want to construct an estimate
\[
m_n(\cdot)=m_n(\cdot, {\mathcal{D}}_n): \mathbb{R}^d \times (\mathbb{R}^d)^k \rightarrow \mathbb{R}
\]
such that
\[
{\mathbf E} \left\{
\left|
Y_{n+1}-m_n(X_{n+1},X_n, \dots, X_{n-(k-1)})
\right|^2
\right\}
\]
is as small as possible.
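To make the data-generating model concrete, the following sketch simulates one observation $Y_t$. The functions $G$ and $H$ below are our own hypothetical choices; the paper only assumes smoothness and a Lipschitz condition.

```python
import numpy as np

def H(x, z):
    # hypothetical smooth H: R^d x R -> R (illustration only)
    return np.tanh(x.sum() + 0.5 * z)

def G(x, z):
    # hypothetical smooth G: R^d x R -> R (illustration only)
    return x.mean() + z ** 2

def H_k(xs):
    # implements the recursion
    #   H_j(x_{t-1},...,x_{t-j}) = H(x_{t-1}, H_{j-1}(x_{t-2},...,x_{t-j})),
    #   H_1(x) = H(x, 0);
    # xs is given in chronological order [x_{t-k}, ..., x_{t-1}]
    z = 0.0
    for x in xs:
        z = H(x, z)
    return z

rng = np.random.default_rng(0)
d, k = 3, 5
X = [rng.uniform(size=d) for _ in range(k + 1)]   # X_{t-k}, ..., X_t
eps = rng.normal(scale=0.1)                        # mean-zero noise
Y = G(X[-1], H_k(X[:-1])) + eps                    # Y_t
```

Note that $H_k$ is simply a fold of $H$ over the past $k$ observations, which is exactly the kind of computation a recurrent network with shared weights performs.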
In the above model we have
\[
{\mathbf E}\{Y_t | X_t=x_t, \dots, X_{t-k}=x_{t-k}\}
=
G( x_t, H_k(x_{t-1}, \dots, x_{t-k})),
\]
i.e.,
\begin{equation}
\label{se1eq*1}
m(x_1,\dots,x_{k+1})=G(x_{k+1},H_k(x_k,\dots,x_1))
\end{equation}
is the regression function on
\[
((X_1, \dots, X_{k+1}),Y_{k+1}),
\]
and our estimation problem above is a standard regression
estimation problem where we try to estimate (\ref{se1eq*1})
from the data
\begin{equation}
\label{se1eq*2}
((X_1, \dots, X_{k+1}),Y_{k+1}),
((X_2, \dots, X_{k+2}),Y_{k+2}),
\dots,
((X_{n-k}, \dots, X_{n}),Y_{n}).
\end{equation}
Note that the data (\ref{se1eq*2}) are not independent, because
each of the variables $X_2$, \dots, $X_{n-1}$ occurs in several of
the data points.
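Constructing the (dependent) sample (\ref{se1eq*2}) from an observed sequence can be sketched as follows; this is a minimal illustration and the helper name is ours.

```python
import numpy as np

def lagged_dataset(X, Y, k):
    """Build the sample ((X_t, X_{t-1}, ..., X_{t-k}), Y_t), t = k+1, ..., n.

    X has shape (n, d), Y has shape (n,). Returns predictors of shape
    (n - k, (k + 1) * d) and responses of shape (n - k,). Consecutive
    rows share k of their k + 1 lagged X-values, so the sample is not
    independent.
    """
    n, d = X.shape
    rows = [np.concatenate([X[t - j] for j in range(k + 1)])
            for t in range(k, n)]          # 0-indexed: t = k, ..., n-1
    return np.stack(rows), Y[k:]

rng = np.random.default_rng(1)
n, d, k = 10, 2, 3
X = rng.uniform(size=(n, d))
Y = rng.normal(size=n)
Z, Yk = lagged_dataset(X, Y, k)   # Z.shape == (7, 8)
```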
\subsection{A recurrent neural network estimate}
We construct a recurrent neural network estimate
as follows: Below we define a suitable class ${\cal F}_n$ of recurrent
neural networks and use the least squares principle to define
our estimate by
\begin{equation}
\label{se1eq4}
\tilde{m}_n=
\arg \min_{f \in {\cal F}_n}
\frac{1}{n-k}
\sum_{t=k+1}^n
| Y_t
-
f(X_t,X_{t-1}, \dots, X_{t-k}) |^2
\end{equation}
and
\begin{equation}
\label{se1eq5}
m_n(X_{n+1}, {\mathcal{D}}_n)
=
T_{\beta_n} \tilde{m}_n(X_{n+1},X_{n}, \dots, X_{n-(k-1)} )
\end{equation}
where $T_L z=\min\{\max\{z,-L\},L\}$ (for $L >0$ and $z\in \mathbb{R}$)
is a truncation operator and $\beta_n= c_2 \cdot \log n$.
So it remains to define the class ${\cal F}_n$ of recurrent neural networks.
Here we use standard feedforward neural networks with additional
feedback loops.
We start by defining our artificial neural network
by choosing the so--called activation function $\sigma:\mathbb{R} \rightarrow
\mathbb{R}$, for which we select the ReLU activation function
\begin{equation}
\label{se1eq9}
\sigma(z)=\max\{z,0\} \quad (z \in \mathbb{R}).
\end{equation}
Our neural network consists of $L$ layers of hidden neurons
with $k_l$ neurons in layer $l$. It
depends on a vector of weights
$w_{i,j}^{(l)}$ and $\bar{w}_{j,(r,\bar{l})}^{(l)}$, where
$w_{i,j}^{(l)}$ is the weight between neuron $j$ in layer $l-1$ and
neuron
$i$ in layer $l$, and where
$\bar{w}_{j,(r,\bar{l})}^{(l)}$
is the recurrent weight between neuron $r$ in layer $\bar{l}$
and neuron $j$ in layer $l$.
For each neuron $j$ in layer $l$ the index set $I_j^{(l)}$ describes
the neurons in the neural network from which there exists
a recurrent connection to this neuron.
The function corresponding to this network
evaluated at $x_1,\dots,x_t$
is defined recursively as follows:
\begin{equation}
\label{se1eq6}
f_{net,\mathbf{w}}(t)=
\sum_{j=1}^{k_L} w_{1,j}^{(L)} \cdot f_j^{(L)}(t),
\end{equation}
where
\begin{equation}
\label{se1eq7}
f_j^{(l)}(t)
=
\sigma
\left(
\sum_{s=1}^{k_{l-1}}
w_{j,s}^{(l)}
\cdot
f_s^{(l-1)}(t)
+
I_{\{t>1\}}
\cdot
\sum_{(r,\bar{l}) \in I_j^{(l)}}
\bar{w}_{j,(r,\bar{l})}^{(l)}
\cdot
f_r^{(\bar{l})}(t-1)
\right)
\end{equation}
for $l=2, \dots, L$ and
\begin{equation}
\label{se1eq8}
f_j^{(1)}(t)
=
\sigma
\left(
\sum_{s=0}^{d}
w_{j,s}^{(1)}
\cdot
x_t^{(s)}
+
I_{\{t>1\}}
\cdot
\sum_{(r,\bar{l}) \in I_j^{(1)}}
\bar{w}_{j,(r,\bar{l})}^{(1)}
\cdot
f_r^{(\bar{l})}(t-1)
\right).
\end{equation}
Here we have set $x_t^{(0)}=1$.
When $f_{net,\mathbf{w}}$ is computed as above, we
define
\[
f_{net,\mathbf{w}}(X_t,X_{t-1}, \dots, X_{t-k})
\]
as $f_{net,\mathbf{w}}(k+1)$, where the function is evaluated
at
$X_{t-k}$,
$X_{t-k+1}$,
\dots,
$X_t$, i.e., where we set
$x_{k+1}=X_t$, $x_k=X_{t-1}$, \dots, $x_1=X_{t-k}$ in (\ref{se1eq8}).
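A minimal sketch of the forward recursion (\ref{se1eq6})--(\ref{se1eq8}) is given below. For simplicity we let only the first layer receive recurrent input, namely from the last hidden layer of the previous time step; all sizes and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
d, L = 2, 3
k_l = [d + 1, 4, 4, 3]    # k_0 = d+1 inputs (including the constant x^(0) = 1)
W = [0.3 * rng.normal(size=(k_l[l], k_l[l - 1])) for l in range(1, L + 1)]
W_rec = 0.3 * rng.normal(size=(k_l[1], k_l[L]))  # layer L -> layer 1 feedback
w_out = 0.3 * rng.normal(size=k_l[L])            # output weights w_{1,j}^{(L)}

relu = lambda z: np.maximum(z, 0.0)

def f_net(xs):
    """Evaluate the recurrent network at x_1, ..., x_T and return f_net(T)."""
    prev_last = np.zeros(k_l[L])              # f_r^(L)(t-1); unused for t = 1
    for t, x in enumerate(xs, start=1):
        a = np.concatenate(([1.0], x))        # prepend x_t^(0) = 1
        h = relu(W[0] @ a + (W_rec @ prev_last if t > 1 else 0.0))
        for l in range(1, L):
            h = relu(W[l] @ h)
        prev_last = h
    return w_out @ prev_last                  # linear output layer

k = 3
xs = [rng.uniform(size=d) for _ in range(k + 1)]  # X_{t-k}, ..., X_t
value = f_net(xs)
```

The same weight matrices are reused at every time step, which is the weight sharing exploited later when bounding the covering number.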
In order to describe the above neural networks completely
(up to the weights, which are chosen in data-dependent way by the
least squares principle as described
in (\ref{se1eq4})) we have to choose the number $L$ of hidden
layers, the numbers of hidden neurons $k_1$, $k_2$, \dots, $k_L$
in layers $1$, $2$, \dots, $L$, and the location of the recurrent
connections described by the index sets $I_{j}^{(l)}$. We set
$L=L_{n,1}+L_{n,2}$, $k_l=K_{n,1}$ for $l=1, \dots, L_{n,1}$
and $k_l=K_{n,2}$ for $l=L_{n,1}+1$, \dots, $L_{n,1}+L_{n,2}$,
where $L_{n,1}$, $L_{n,2}$, $K_{n,1}$, $K_{n,2}$ are parameters
of the estimate chosen in Theorem \ref{th1} below.
The location of the recurrent connections is described
in Figure \ref{fig1}, which sketches
the architecture of the recurrent network
(see Section \ref{se2} for a formal
definition).
\begin{figure}[!ht]
\centering
{
\vspace*{-2cm}
\includegraphics[width=5cm]{recnet_fig1.pdf}
}
\caption{\label{fig1}
The structure of the recurrent neural networks. The solid
arrows are standard feedforward connections, the dashed
arrows represent the recurrent connections. The two boxes
represent the parts of the network which approximate
functions $G$ and $H$. Here $H$ is approximately computed in layers
$1, \dots, L_{n,1}$, and function $G$ is approximately computed in layers $L_{n,1}+1, \dots, L_{n,1}+L_{n,2}$.
}
\end{figure}
\subsection{Main result}
In Theorem \ref{th1} we show that the recurrent neural network
regression estimate
(\ref{se1eq4}) and (\ref{se1eq5}) with the above class
of recurrent neural networks satisfies in case that
$G$ and $H$ are $(p_G,C_G)$ and $(p_H,C_H)$--smooth the
error bound
\begin{eqnarray*}
&&
\hspace*{-0.3cm}
{\mathbf E} \left\{
\left|
Y_{n+1}-m_n(X_{n+1},X_n, \dots, X_{n-(k-1)})
\right|^2
\right\}
\\
&&
\hspace*{-0.3cm}
\leq \min_{g: (\mathbb{R}^d)^{k+1} \rightarrow \mathbb{R}}
{\mathbf E} \left\{
\left|
Y_{n+1}-g(X_{n+1}, \dots, X_{n-(k-1)})
\right|^2
\right\}
+
c_{3} \cdot
(\log n)^6 \cdot
n^{- \frac{2 \cdot \min\{p_G,p_H\}}{2 \cdot \min\{p_G,p_H\}+(d+1)}}.
\end{eqnarray*}
Here the derived rate of convergence depends on $d+1$ and
not on the dimension $(k+1) \cdot d$ of the predictors
in the data set (\ref{se1eq*2}). This shows that, under
the above assumptions on the structure of $H_k$, recurrent
neural networks achieve a rate of convergence
which does not depend on $k$ (and hence circumvent the
curse of dimensionality in this setting).
\subsection{Discussion of related results}
Recurrent neural networks (RNNs) are artificial neural networks which can be described by a directed graph that may contain cycles and which exhibit temporal dynamic behaviour. Such networks can implement time delays and feedback loops, and they are able to learn long-term dependencies from sequential and time-series data. In particular, a properly trained RNN can model an arbitrary dynamical system.

The most popular RNN architectures are Hopfield networks, in which all connections are symmetric (Bruck (1990)); Bidirectional Associative Memory, which stores associative data as vectors (Kosko (1988)); Recursive Neural Networks, in which the same set of weights is applied recursively over a structured input (Socher et al. (2011)); Long Short-Term Memory (LSTM), a network able to model long-term dependencies which has been very popular in natural language processing and speech recognition (Hochreiter and Schmidhuber (1997)) and which is more robust to vanishing gradients than the classical RNN; and Gated Recurrent Units, which are derived from RNNs by adding gating units (Cho et al. (2014)) and which are also better able to learn long-term dependencies and more robust to vanishing gradients than the classical RNN. Deep RNNs have been surveyed by Schmidhuber (2015), and recent advances on RNNs have been discussed in Salehinejad et al. (2018).

The main problems when training RNNs by backpropagation are overfitting and vanishing gradients. Overfitting has generally been controlled by regularization, dropout, activation stabilization and hidden activation preservation, see Srivastava et al. (2014) and Krueger et al. (2016). A theoretical analysis of RNN learning has been lacking to date.
\subsection{Notation}
\label{se1sub7}
Throughout the paper, the following notation is used:
The sets of natural numbers, natural numbers including $0$,
integers
and real numbers
are denoted by $\mathbb{N}$, $\mathbb{N}_0$, $\mathbb{Z}$ and $\mathbb{R}$, respectively.
For $z \in \mathbb{R}$, we denote
the greatest integer smaller than or equal to $z$ by
$\lfloor z \rfloor$, and
$\lceil z \rceil$
is the smallest
integer greater than or equal to $z$.
Let $D \subseteq \mathbb{R}^d$ and let $f:\mathbb{R}^d \rightarrow \mathbb{R}$ be a real-valued
function defined on $\mathbb{R}^d$.
We write $x = \arg \min_{z \in D} f(z)$ if
$\min_{z \in D} f(z)$ exists and if
$x$ satisfies
$x \in D$ and $f(x) = \min_{z \in D} f(z)$.
For $f:\mathbb{R}^d \rightarrow \mathbb{R}$
\[
\|f\|_\infty = \sup_{x \in \mathbb{R}^d} |f(x)|
\]
is its supremum norm, and the supremum norm of $f$
on a set $A \subseteq \mathbb{R}^d$ is denoted by
\[
\|f\|_{\infty,A} = \sup_{x \in A} |f(x)|.
\]
Let $p=q+s$ for some $q \in \mathbb{N}_0$ and $0< s \leq 1$.
A function $f:\mathbb{R}^d \rightarrow \mathbb{R}$ is called
$(p,C)$-smooth, if for every $\alpha=(\alpha_1, \dots, \alpha_d) \in
\mathbb{N}_0^d$
with $\sum_{j=1}^d \alpha_j = q$ the partial derivative
$\frac{
\partial^q f
}{
\partial x_1^{\alpha_1}
\dots
\partial x_d^{\alpha_d}
}$
exists and satisfies
\[
\left|
\frac{
\partial^q f
}{
\partial x_1^{\alpha_1}
\dots
\partial x_d^{\alpha_d}
}
(x)
-
\frac{
\partial^q f
}{
\partial x_1^{\alpha_1}
\dots
\partial x_d^{\alpha_d}
}
(z)
\right|
\leq
C
\cdot
\| x-z \|^s
\]
for all $x,z \in \mathbb{R}^d$.
Let ${\cal F}$ be a set of functions $f:\mathbb{R}^d \rightarrow \mathbb{R}$,
let $x_1, \dots, x_n \in \mathbb{R}^d$ and set $x_1^n=(x_1,\dots,x_n)$.
A finite collection $f_1, \dots, f_N:\mathbb{R}^d \rightarrow \mathbb{R}$
is called an $L_2$ $\varepsilon$--cover of ${\cal F}$ on $x_1^n$
if for any $f \in {\cal F}$ there exists $i \in \{1, \dots, N\}$
such that
\[
\left(
\frac{1}{n} \sum_{k=1}^n |f(x_k)-f_i(x_k)|^2
\right)^{1/2}< \varepsilon.
\]
The $L_2$ $\varepsilon$--covering number of ${\cal F}$ on $x_1^n$
is the size $N$ of the smallest $L_2$ $\varepsilon$--cover
of ${\cal F}$ on $x_1^n$ and is denoted by ${\mathcal{N}}_2(\varepsilon,{\cal F},x_1^n)$.
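For a finite function class, an $L_2$ $\varepsilon$--cover can be computed greedily; the following toy sketch (our own example class of threshold functions) also yields an upper bound on ${\mathcal{N}}_2(\varepsilon,{\cal F},x_1^n)$.

```python
import numpy as np

def greedy_l2_cover(fs, xs, eps):
    """Greedily pick a subset of fs whose empirical L2 balls of radius
    eps cover fs on the points xs; its size upper-bounds N_2(eps, fs, xs)."""
    vals = np.array([[f(x) for x in xs] for f in fs])
    dist = lambda u, v: np.sqrt(np.mean((u - v) ** 2))
    chosen = []
    for i in range(len(fs)):
        # keep f_i only if it is eps-far from everything chosen so far;
        # otherwise it is already covered by an earlier pick
        if all(dist(vals[i], vals[j]) >= eps for j in chosen):
            chosen.append(i)
    return [fs[i] for i in chosen]

# toy class: threshold functions f_c(x) = 1_{[c, infty)}(x)
fs = [lambda x, c=c: float(x >= c) for c in np.linspace(0.0, 1.0, 21)]
xs = np.linspace(0.0, 1.0, 50)
cover = greedy_l2_cover(fs, xs, eps=0.3)
```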
For $z \in \mathbb{R}$ and $\beta>0$ we define
$T_\beta z = \max\{-\beta, \min\{\beta,z\}\}$. If $f:\mathbb{R}^d \rightarrow
\mathbb{R}$
is a function and ${\cal F}$ is a set of such functions, then we set
$
(T_{\beta} f)(x)=
T_{\beta} \left( f(x) \right)$.
\subsection{Outline of the paper}
\label{se1sub8}
In Section \ref{se2} the deep recurrent
neural network estimates used in this paper
are defined.
The main result is presented in Section \ref{se3} and proven
in Section \ref{se4}.
\section{A recurrent neural network estimate}
\label{se2}
We start with the definition of our class
of the recurrent neural networks. It depends on parameters
$k$, $L_{n,1}$, $L_{n,2}$, $K_{n,1}$ and $K_{n,2}$.
As activation function
we use the ReLU activation function
defined in (\ref{se1eq9}). Depending on a weight vector $\mathbf{w}$
which consists of weights
$w_{i,j}^{(l)}$ and $\bar{w}_{j,(r,\bar{l})}^{(l)}$ we define
our recurrent neural network
\[
f_{net,\mathbf{w}}: (\mathbb{R}^d)^{k+1} \rightarrow \mathbb{R}
\]
by
\[
f_{net,\mathbf{w}}(x_{k+1},x_{k}, \dots, x_1)
=
\sum_{j=1}^{K_{n,2}} w_{1,j}^{(L_{n,1}+L_{n,2})} \cdot f_j^{(L_{n,1}+L_{n,2})}(k+1),
\]
where the functions $f_j^{(l)}(t)$ are recursively defined as follows:
\begin{equation}
\label{se2eq1}
f_j^{(l)}(t)
=
\sigma
\left(
\sum_{s=1}^{K_{n,2}}
w_{j,s}^{(l)}
\cdot
f_s^{(l-1)}(t)
\right)
\end{equation}
for $l=L_{n,1}+2, \dots, L_{n,1}+L_{n,2}$,
\begin{equation}
\label{se2eq2}
f_j^{(L_{n,1}+1)}(t)
=
\sigma
\left(
\sum_{s=1}^{K_{n,1}}
w_{j,s}^{(L_{n,1}+1)}
\cdot
f_s^{(L_{n,1})}(t)
+
I_{\{t>1\}}
\cdot
\sum_{s=1}^{K_{n,1}}
\bar{w}_{j,(s,L_{n,1})}^{(L_{n,1}+1)}
\cdot
f_s^{(L_{n,1})}(t-1)
\right),
\end{equation}
\begin{equation}
\label{se2eq3}
f_j^{(l)}(t)
=
\sigma
\left(
\sum_{s=1}^{K_{n,1}}
w_{j,s}^{(l)}
\cdot
f_s^{(l-1)}(t)
\right)
\end{equation}
for $l=2, \dots, L_{n,1}$
and
\begin{equation}
\label{se2eq4}
f_j^{(1)}(t)
=
\sigma
\left(
\sum_{s=0}^{d}
w_{j,s}^{(1)}
\cdot
x_t^{(s)}
+
I_{\{t>1\}}
\cdot
\sum_{s=1}^{K_{n,1}}
\bar{w}_{j,(s,L_{n,1})}^{(1)}
\cdot
f_s^{(L_{n,1})}(t-1)
\right).
\end{equation}
Let ${\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2})$ be the
class of all such recurrent deep networks. Observe
that here we implement the networks in a slightly different
way than in Figure \ref{fig1}: we do not use
a direct connection from the input to the part
of the network which computes $G$; instead, the subnetwork
which implements $H$ also feeds the input forward to $G$ (cf.\ Figure \ref{fig2}).
\begin{figure}[!ht]
\centering
{
\vspace*{-2cm}
\includegraphics[width=5cm]{recnet_fig2.pdf}
}
\caption{\label{fig2}
The structure of the recurrent neural networks in
${\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2})$. The solid
arrows are standard feedforward connections, the dashed
arrows represent the recurrent connections. The two boxes
represent the parts of the network which implement approximations
of functions $G$ and $H$. Here the network which approximates $H$
also feeds the input to $G$.
}
\end{figure}
Then our estimate is defined by
\begin{equation}
\label{se2eq6}
\tilde{m}_n=
\arg \min_{f \in {\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2})}
\frac{1}{n-k}
\sum_{t=k+1}^n
| Y_t
-
f(X_t,X_{t-1}, \dots, X_{t-k}) |^2
\end{equation}
and
\begin{equation}
\label{se2eq7}
m_n(X_{n+1}, {\mathcal{D}}_n)
=
T_{\beta_n} \tilde{m}_n(X_{n+1},X_{n}, \dots, X_{n-(k-1)} ).
\end{equation}
\section{Main result}
\label{se3}
Our main result is given in the following theorem.
\begin{theorem}
\label{th1}
Let $X_t$ $(t \in \mathbb{Z})$ be independent
and identically distributed
$[0,1]^d$--valued
random variables, and let $\epsilon_t$ $(t \in \mathbb{Z})$ be
independent and identically distributed
$\mathbb{R}$-valued random variables with
${\mathbf E}\{\epsilon_t\}=0$ which satisfy (\ref{se1eq12})
and which are independent from $(X_t)_{t \in \mathbb{Z}}$.
Let $G, H: \mathbb{R}^d \times \mathbb{R} \rightarrow \mathbb{R}$ be
$(p_G,C_G)$-- and $(p_H,C_H)$--smooth functions which
satisfy
\begin{equation}
\label{th1eq1}
|G(x,z_1)-G(x,z_2)| \leq C \cdot |z_1-z_2|
\quad \mbox{and} \quad
|H(x,z_1)-H(x,z_2)| \leq C \cdot |z_1-z_2|
\end{equation}
$ (x \in \mathbb{R}^d, z_1,z_2 \in \mathbb{R})$
for some constant $C >1$.
Let $k \in \mathbb{N}$ and define
\[
Y_t = G(X_t, H_k(X_{t-1},\dots,X_{t-k}))+\epsilon_t
\]
for $H_k$ recursively defined by (\ref{se1eq10}) and (\ref{se1eq11}).
Set
\[
K_{n,1}= \lceil c_4 \rceil, \quad K_{n,2}= \lceil c_5 \rceil,
\quad
L_{n,1}=\left\lceil
c_6 \cdot n^{\frac{d+1}{2 \cdot (2 p_H + d+1)}} \right\rceil
\]
and
\[
L_{n,2}=
\left\lceil
c_7 \cdot n^{\frac{d+1}{2 \cdot (2 p_G + d+1)}}
\right\rceil
\]
and define the estimate $m_n$ as in Section \ref{se2}.
Then we have for
$c_4$, \dots, $c_7>0$ sufficiently large and for
any $n \geq 2 \cdot k+2$:
\begin{eqnarray*}
&&
\hspace*{-0.3cm}
{\mathbf E} \left\{
\left|
Y_{n+1}-m_n(X_{n+1},X_n, \dots, X_{n-(k-1)})
\right|^2
\right\}
\\
&&
\hspace*{-0.3cm}
\leq \min_{g: (\mathbb{R}^d)^{k+1} \rightarrow \mathbb{R}}
{\mathbf E} \left\{
\left|
Y_{n+1}-g(X_{n+1}, \dots, X_{n-(k-1)})
\right|^2
\right\}
+
c_{8} \cdot
(\log n)^6 \cdot
n^{- \frac{2 \cdot \min\{p_G,p_H\}}{2 \cdot \min\{p_G,p_H\}+(d+1)}}.
\end{eqnarray*}
\end{theorem}
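For illustration, the depths $L_{n,1}$ and $L_{n,2}$ prescribed by Theorem \ref{th1} can be evaluated numerically; we set the unspecified constants $c_6=c_7=1$ purely for this sketch.

```python
import math

def layer_depths(n, d, p_G, p_H, c6=1.0, c7=1.0):
    # L_{n,1} = ceil(c6 * n^{(d+1) / (2 * (2 * p_H + d + 1))})
    # L_{n,2} = ceil(c7 * n^{(d+1) / (2 * (2 * p_G + d + 1))})
    L1 = math.ceil(c6 * n ** ((d + 1) / (2 * (2 * p_H + d + 1))))
    L2 = math.ceil(c7 * n ** ((d + 1) / (2 * (2 * p_G + d + 1))))
    return L1, L2

L1, L2 = layer_depths(n=5000, d=3, p_G=2.0, p_H=1.0)  # -> (18, 9)
```

The smoother function (here $G$ with $p_G = 2$) is assigned the shallower subnetwork, in line with the exponents in the theorem.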
\noindent
{\bf Remark 1.}
Our estimation problem can be considered as a regression
problem with independent variable
\[
(X_t,X_{t-1},\dots,X_{t-k}),
\]
which has dimension $(k+1) \cdot d$.
The rate of convergence in Theorem \ref{th1} corresponds
to the optimal minimax rate of convergence for a regression problem
of dimension $d+1$ (cf., Stone (1982)),
hence our assumption on the
structure of $H_k$ enables us to obtain a rate of convergence
which is independent of $k$.
\section{Proofs}
\label{se4}
\subsection{Auxiliary results from empirical process theory}
\label{se4sub1}
In our proofs we apply well-known techniques from empirical
process theory, as described, for instance, in van de Geer (2000).
We reformulate the results needed there in the following two
auxiliary lemmas.
Let
\[
Y_i = m(x_i) + W_i
\quad
(i=1, \dots, n)
\]
for some
$x_1, \dots, x_n \in \mathbb{R}^d$, $m:\mathbb{R}^d \rightarrow \mathbb{R}$ and some
random variables
$W_1$, \dots, $W_n$ which are independent and have expectation zero.
We assume that the $W_i$'s are sub-Gaussian in the sense that
\begin{equation}
\label{se52eq1}
\max_{i=1, \dots, n} K^2 {\mathbf E} \{ e^{W_i^2 / K^2} -1 \} \leq \sigma_0^2
\end{equation}
for some $K, \sigma_0 >0$.
Our goal is to estimate $m$ from
$(x_1, Y_1), \dots, (x_n, Y_n)$.
Let ${\cal F}_n$ be a set of functions $f:\mathbb{R}^d \rightarrow \mathbb{R}$
and consider the least squares estimate
\begin{equation}
\label{se52eq2}
\tilde{m}_n(\cdot)
=
\arg \min_{f \in {\cal F}_n}
\frac{1}{n} \sum_{i=1}^n |f(x_i)- Y_{i}|^2
\quad \mbox{and} \quad
m_n = T_{\beta_n} \tilde{m}_n,
\end{equation}
where
$\beta_n = c_2 \cdot \log n$.
\begin{lemma}
\label{le1}
Assume that the sub-Gaussian condition (\ref{se52eq1})
and
\[
|m(x_i)| \leq \beta_n/2 \quad (i=1, \dots,n)
\]
hold,
and let the estimate be defined by (\ref{se52eq2}). Then
there exist constants
$c_{9},c_{10}>0$
which depend only on $\sigma_0$ and $K$
such that for any
$\delta_n > c_{9} /n$ with
\begin{eqnarray}
\label{le1eq1}
&&
\sqrt{n} \cdot \delta
\geq
c_{9}
\int_{ \delta / (12 \sigma_0)}^{\sqrt{48 \delta}}
\Bigg(
\log {\mathcal{N}}_2 \Bigg(
u
,
\{ T_{\beta_n} f-g : f \in {\cal F}_n, \nonumber \\
&&
\hspace*{2.6cm}
\frac{1}{n} \sum_{i=1}^n
|T_{\beta_n} f(x_i)-g(x_i)|^2 \leq 4 \delta \}
,
x_1^n
\Bigg)
\Bigg)^{1/2} du
\end{eqnarray}
for all $\delta \geq \delta_n / 6$
and all $g \in {\cal F}_n$
we have
\begin{eqnarray*}
&&
{\mathbf P} \Bigg\{
\frac{1}{n} \sum_{i=1}^n | m_n(x_i)-m(x_i)|^2
>
c_{10} \left(
\delta_n
+
\min_{f \in {\cal F}_n}
\frac{1}{n} \sum_{i=1}^n | f(x_i)-m(x_i)|^2
\right)
\Bigg\}
\\
&&
\leq
c_{10} \cdot \exp \left(
- \frac{n \cdot \min \{ \delta_n, \sigma_0^2 \} }{c_{10}}
\right) + \frac{c_{10}}{n}.
\end{eqnarray*}
\end{lemma}
\noindent
{\bf Proof.}
Lemma \ref{le1} follows from the proof of
Lemma 3 in Kohler and Krzy\.zak (2020).
For the sake of completeness a complete proof
can be found in the Appendix.
\hfill $\Box$
In order to formulate our next auxiliary result we
let $(X,Y), (X_1,Y_1),\ldots$ be independent and identically
distributed $\mathbb{R}^d\times\mathbb{R}$ valued random variables with
${\mathbf E} Y^2 <\infty$, and we let
$m(x) = {\mathbf E}\{Y|X=x\}$ be the corresponding
regression function.
\begin{lemma}
\label{le2}
Let $\beta_n \geq L \geq 1$ and assume that
$m$ is bounded in absolute value by $L$. Let $n, N \in \mathbb{N}$, let
${\cal F}_n$ be a set of functions $f:\mathbb{R}^d \rightarrow
\mathbb{R}$,
let
\[
\tilde{m}_n(\cdot)=\tilde{m}_n(\cdot, (X_1,Y_1),
\dots, (X_{n+N},Y_{n+N})) \in {\cal F}_n
\]
and set $m_n=T_{\beta_n} \tilde{m}_n$.
Then
there exist constants $c_{11},c_{12},c_{13},c_{14}>0$
such that for any $\delta_n>0$
which satisfies
\[
\delta_n > c_{11} \cdot \frac{\beta_n^2}{n}
\]
and
\begin{eqnarray}
\label{le2eq1}
&& c_{12} \cdot \frac{\sqrt{n} \delta}{\beta_n^2}
\geq
\int_{ c_{13} \cdot \delta / \beta_n^2
}^{\sqrt{\delta}} \Bigg( \log {\mathcal{N}}_2 \Bigg( u , \{
(T_{\beta_n} f-m)^2 \, : \, f \in {\cal F}_n
\} , x_1^n \Bigg)
\Bigg)^{1/2} du
\end{eqnarray}
for all $\delta \geq \delta_n$ and all $x_1, \dots, x_n \in \mathbb{R}^d$,
we have for $n \in \mathbb{N} \setminus \{1\}$
\begin{eqnarray*}
&&
{\mathbf P} \left\{
\int |m_n(x)-m(x)|^2 {\mathbf P}_X(dx)
>
\delta_n
+
3
\frac{1}{n}
\sum_{i=1}^n
|m_n(X_i)-m(X_i)|^2
\right\}
\\
&&
\leq
c_{14} \cdot
\exp \left(
- \frac{n \cdot \delta_n }{c_{14} \cdot \beta_n^2}
\right)
.
\end{eqnarray*}
\end{lemma}
\noindent
{\bf Proof.} The result follows from the
proof of Lemma 4 in Kohler and Krzy\.zak (2020).
For the sake of completeness a complete proof
can be found in the Appendix.
\hfill $\Box$
\subsection{Approximation results for neural networks}
\begin{lemma}
\label{le3}
Let $d \in \mathbb{N}$,
let $f:\mathbb{R}^d \rightarrow \mathbb{R}$ be $(p,C)$--smooth for some $p=q+s$,
$q \in \mathbb{N}_0$ and $s \in (0,1]$, and $C>0$. Let $A \geq 1$
and $M \in \mathbb{N}$ sufficiently large (independent of the size of $A$, but
\begin{align*}
M \geq 2 \ \mbox{and} \ M^{2p} \geq c_{15} \cdot \left(\max\left\{A, \|f\|_{C^q([-A,A]^d)}
\right\}\right)^{4(q+1)},
\end{align*}
where
\[
\|f\|_{C^q([-A,A]^d)}
=
\max_{\alpha_1, \dots, \alpha_d \in \mathbb{N}_0, \atop \alpha_1 + \dots + \alpha_d \leq q}
\left\|
\frac{
\partial^q f
}{
\partial x_1^{\alpha_1}
\dots
\partial x_d^{\alpha_d}
}
\right\|_{\infty, [-A,A]^d}
,
\]
must hold for some sufficiently large constant $c_{15} \geq 1$).
\\
a) Let $L, r \in \mathbb{N}$ be such that
\begin{enumerate}
\item $L \geq 5+\lceil \log_4(M^{2p})\rceil \cdot \left(\lceil \log_2(\max\{q, d\} + 1)\rceil+1\right)$
\item $r \geq 2^d \cdot 64 \cdot \binom{d+q}{d} \cdot d^2 \cdot (q+1) \cdot M^d$
\end{enumerate}
hold.
There exists a feedforward neural network
$f_{net, wide}$ with ReLU activation function, $L$ hidden layers
and $r$ neurons per hidden layer such that
\begin{align}
\| f-f_{net, wide}\|_{\infty, [-A,A]^d} \leq
c_{16} \cdot \left(\max\left\{A, \|f\|_{C^q([-A,A]^d)}\right\} \right)^{4(q+1)} \cdot M^{-2p}.
\label{le3eq1}
\end{align}
b) Let $L, r \in \mathbb{N}$ be such that
\begin{enumerate}
\item $L \geq 5M^d+\left\lceil \log_4\left(M^{2p+4 \cdot d \cdot (q+1)} \cdot e^{4 \cdot (q+1) \cdot (M^d-1)}\right)\right\rceil \\
\hspace*{4cm}
\cdot \lceil \log_2(\max\{q,d\}+1)\rceil+\lceil \log_4(M^{2p})\rceil$
\item $r \geq 132 \cdot 2^d\cdot \lceil e^d\rceil
\cdot \binom{d+q}{d} \cdot \max\{ q+1, d^2\}$
\end{enumerate}
hold.
There exists a feedforward neural network
$f_{net, deep}$ with ReLU activation function, $L$ hidden layers
and $r$ neurons per hidden layer such that
(\ref{le3eq1}) holds with
$f_{net,wide}$ replaced by $f_{net,deep}$.
\end{lemma}
\noindent
{\bf Proof.} See Theorem 2 in Kohler and Langer (2020).
\hfill $\Box$
\begin{lemma}
\label{le4}
Let $k \in \mathbb{N}$, $x_1, \dots, x_{k+1} \in [0,1]^d$, $A \geq 1$,
$g, \hat{g}:\mathbb{R}^d \times \mathbb{R} \rightarrow \mathbb{R}$,
$h: \mathbb{R}^d \times \mathbb{R} \rightarrow [-A,A]$,
$\hat{h}: \mathbb{R}^d \times \mathbb{R} \rightarrow \mathbb{R}$
and assume
\[
|g(x,z) - g(x,\bar{z})| \leq C_{Lip,g} \cdot |z-\bar{z}|
\quad \mbox{and} \quad
|h(x,z) - h(x,\bar{z})| \leq C_{Lip,h} \cdot |z-\bar{z}|
\]
for some $ C_{Lip,g} , C_{Lip,h} >1$. Set $z_0=\hat{z}_0=0$,
\[
z_t = h(x_t,z_{t-1}) \quad \mbox{and} \quad
\hat{z}_t = \hat{h}(x_t,\hat{z}_{t-1})
\]
for $t=1, \dots, k$. Assume
\[
\frac{C_{Lip,h}^k-1}{C_{Lip,h}-1} \cdot \|h - \hat{h}\|_{\infty,[-2A,2A]^{d+1}} \leq 1.
\]
Then we have
\begin{eqnarray*}
&&
|g(x_{k+1},z_k)
-
\hat{g}(x_{k+1}, \hat{z}_{k})|\\
&&
\leq
\|g - \hat{g}\|_{\infty, [-2A,2A]^{d+1}} + C_{Lip,g} \cdot \frac{C_{Lip,h}^k-1}{C_{Lip,h}-1}
\cdot \|h - \hat{h}\|_{\infty, [-2A,2A]^{d+1}}.
\end{eqnarray*}
\end{lemma}
\noindent
{\bf Proof.}
For $t \in \{1, \dots, k\}$, $z_{t-1} \in [-A,A]$ and
$\hat{z}_{t-1} \in [-2A,2A]$ we have
\begin{eqnarray*}
|z_{t} - \hat{z}_{t}|
&=&
| h(x_t,z_{t-1}) - \hat{h}(x_t,\hat{z}_{t-1})|
\\
&
\leq&
|h(x_t,z_{t-1}) - h(x_t,\hat{z}_{t-1})|
+
| h(x_t,\hat{z}_{t-1})- \hat{h}(x_t,\hat{z}_{t-1})|
\\
&\leq&
C_{Lip,h} \cdot |z_{t-1} - \hat{z}_{t-1}|
+
\|h - \hat{h}\|_{\infty, [-2A,2A]^{d+1}}.
\end{eqnarray*}
In case $z_s \in [-A,A]$ and
$\hat{z}_{s} \in [-2A,2A]$ for $s \in \{0,1,\dots,t-1\}$
we can conclude
\begin{eqnarray*}
&& | z_{t} - \hat{z}_{t}| \\
&&\leq
\|h - \hat{h}\|_{\infty, [-2A,2A]^{d+1}} \cdot (1+ C_{Lip,h} + C_{Lip,h}^2 +
\dots + C_{Lip,h}^{k-1}) + C_{Lip,h}^{k}\cdot |z_0-\hat{z}_0|
\\
&&=
\|h - \hat{h}\|_{\infty, [-2A,2A]^{d+1}} \cdot
\frac{C_{Lip,h}^k-1}{C_{Lip,h}-1}
+ 0
\leq 1
\end{eqnarray*}
(where the last equality follows from $z_0=\hat{z}_0=0$),
which implies
\[
|\hat{z}_{t}|
\leq
|z_t|+
| z_{t} - \hat{z}_{t}|
\leq
A + 1 \leq 2A.
\]
Via induction we can conclude that
we have
$z_{s} \in [-A,A]$ and
$\hat{z}_{s} \in [-2A,2A]$
for $s \in \{0,1,\dots,k\}$ and consequently
we get
\[
| z_{k} - \hat{z}_{k}|
\leq
\|h - \hat{h}\|_{\infty,[-2A,2A]^{d+1}} \cdot
\frac{C_{Lip,h}^k-1}{C_{Lip,h}-1}.
\]
This together with
\begin{eqnarray*}
|g(x_{k+1},z_{k})
-
\hat{g}(x_{k+1},\hat{z}_{k})|
&\leq&
|g(x_{k+1},z_{k})
-
g(x_{k+1},\hat{z}_{k})
|
+
|g(x_{k+1},\hat{z}_{k}) -
\hat{g}(x_{k+1},\hat{z}_{k})| \\
& \leq &
C_{Lip,g} \cdot |z_{k} - \hat{z}_{k}| + \| \hat{g}-g\|_{\infty, [-2A,2A]^{d+1}}
\end{eqnarray*}
implies the assertion.
\hfill $\Box$
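The bound of Lemma \ref{le4} can be checked numerically. The pair $h$, $\hat{h}$ below is our own toy choice, with Lipschitz constant $C_{Lip,h} = 1.2$ in the second argument and $\|h-\hat{h}\|_\infty \leq 0.01$.

```python
import numpy as np

rng = np.random.default_rng(3)
C = 1.2        # Lipschitz constant of h in z (tanh is 1-Lipschitz)
delta = 0.01   # sup-norm distance between h and h_hat
h     = lambda x, z: np.cos(x.sum()) + C * np.tanh(z)
h_hat = lambda x, z: h(x, z) + delta * np.cos(z)

k, d = 8, 2
xs = [rng.uniform(size=d) for _ in range(k)]
z = z_hat = 0.0                        # z_0 = z_hat_0 = 0
for x in xs:                           # z_t = h(x_t, z_{t-1}), likewise for h_hat
    z, z_hat = h(x, z), h_hat(x, z_hat)

bound = delta * (C ** k - 1) / (C - 1)   # geometric-sum bound from the lemma
```

Here `bound` is about $0.165 \leq 1$, so the assumption of the lemma is satisfied, and `abs(z - z_hat)` indeed stays below it.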
\begin{lemma}
\label{le5}
Let $k \in \mathbb{N}$ and $A \geq 1$.
Assume that $g$ and $h$
are $(p_G,C_G)$-- and $(p_H,C_H)$--smooth functions which
satisfy the assumptions
of Lemma \ref{le4}, and define
\[
h_t(x_{t}, x_{t-1}, \dots, x_1) =
h(x_t, h_{t-1}(x_{t-1}, x_{t-2},\dots, x_1))
\]
for $t=2, \dots, k$ and
\[
h_1(x_1)=h(x_1,0).
\]
Let $h_{net}$ be a feedforward neural network with
$L_{n,1}$ hidden layers and $K_{n,1}$ hidden neurons
in each layer and let
$g_{net}$ be a feedforward neural network with
$L_{n,2}$ hidden layers and $K_{n,2}$ hidden neurons
in each layer, which approximate $h$ and $g$, respectively.
Let $x_1, \dots, x_n \in [0,1]^d$ be arbitrary and assume
\[
\|h_{net} - h\|_{\infty, [-2A,2A]^{d+1}} \cdot
\frac{C_{Lip,h}^k-1}{C_{Lip,h}-1}
\leq 1.
\]
Then there exists $f_{net,rec} \in {\cal F}(k,K_{n,1}+ 2 \cdot d,K_{n,2},L_{n,1},L_{n,2})$
such that
\begin{eqnarray*}
&&
| g(x_{k+1},h_k(x_k,\dots,x_1)) - f_{net,rec}(x_{k+1},\dots,x_1)|
\\
&&
\leq
c_{17} \cdot \max\{
\|g_{net} - g\|_{\infty, [-2A,2A]^{d+1}},
\|h_{net} - h\|_{\infty, [-2A,2A]^{d+1}}
\}
\end{eqnarray*}
holds for any $x_{k+1}, \dots, x_1 \in [0,1]^d$.
\end{lemma}
\noindent
{\bf Proof.}
We construct our recurrent neural network as follows:
In layers $1, \dots, L_{n,1}$ it computes in neurons $1, \dots, K_{n,1}$
$h_{net}(x,z)$, where $x$ is the input of the recurrent neural
network and $z$ is the output of layer $L_{n,1}$ of the
network in the previous time step propagated
by the recurrent connections. In the same layer
it uses
\[
f_{id}(x)=x=\sigma(x)-\sigma(-x)
\]
in order to propagate in the neurons
$K_{n,1}+1$, \dots, $K_{n,1}+2 \cdot d$
the input value of $x$ to the next layer.
In layers $L_{n,1}+1, \dots, L_{n,1}+L_{n,2}$ it computes
in the neurons $1, \dots, K_{n,2}$ the function
$g_{net}(x,z)$. Here the layer $L_{n,1}+1$
gets as input the value of $x$ propagated to the layer $L_{n,1}$
in the neurons $K_{n,1}+1$, \dots, $K_{n,1}+2 \cdot d$ in the
previous layers, and (via a recurrent connection) the
output $z$ of the network $h_{net}$ computed in the layers
$1, \dots, L_{n,1}$ in the previous time step.
The output of our recurrent network is the output of $g_{net}$
computed in layer $L_{n,1}+L_{n,2}$.
By construction, this recurrent neural network computes
\[
f_{net,rec}(x_{k+1},\dots,x_1)
=
g_{net}(x_{k+1},\hat{z}_k),
\]
where
$\hat{z}_k$
is recursively defined by
\[
\hat{z}_t=h_{net}(x_t,\hat{z}_{t-1})
\]
for $t=2, \dots, k$ and
\[
\hat{z}_1=h_{net}(x_1,0).
\]
From this we get the assertion by applying Lemma \ref{le4}.
\hfill $\Box$
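The ReLU identity $f_{id}(x)=\sigma(x)-\sigma(-x)$ used above to propagate the input unchanged through the $H$-part of the network (at the cost of $2d$ extra neurons per layer) is easily verified:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
f_id = lambda x: relu(x) - relu(-x)   # equals x for every real x

xs = np.linspace(-3.0, 3.0, 13)
propagated = f_id(f_id(f_id(xs)))     # the identity survives repeated layers
```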
\subsection{A bound on the covering number}
\begin{lemma}
\label{le6}
Let ${\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2})$ be the
class of deep recurrent networks introduced
in Section \ref{se2} and assume
\[
\max\{L_{n,1},L_{n,2}\} \leq L_n \leq n^{c_{18}}
\quad \mbox{and} \quad
\max\{K_{n,1},K_{n,2}\} \leq K_n.
\]
Then we have for any
$z_1^s \in ((\mathbb{R}^d)^{k+1})^s$ and any $1/n^{c_{19}} < \epsilon < c_2 \cdot (\log n) / 4$
\begin{eqnarray*}
&&
\log \left(
{\mathcal{N}}_2 \left(\epsilon,
\{ T_{\beta_n} f \, : \, f \in {\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2}) \},
z_1^s
\right)
\right)
\\
&&
\leq
c_{20} \cdot k \cdot L_n^2 \cdot K_n^2 \cdot (\log n)^2.
\end{eqnarray*}
\end{lemma}
\noindent
{\bf Proof.} By unfolding the recurrent neural networks
in ${\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2})$ in time it is easy
to see that ${\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2})$ is contained
in a class of standard feedforward neural networks with
\[
(k+1) \cdot (L_{n,1}+L_{n,2})
\]
layers having at most
\[
\max\{K_{n,1},K_{n,2}\}+2d+2
\]
neurons per layer. In this
unfolded feedforward neural network there are at most
\[
c_{22} \cdot (L_{n,1} \cdot K_{n,1}^2 + L_{n,2} \cdot K_{n,2}^2)
\]
different weights (since we share the same weights at all time
points). By Theorem 6 in Bartlett et al. (2019) we
can conclude that the VC dimension of the set
of all subgraphs from ${\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2})$
(cf., e.g., Definition 9.6 in Gy\"orfi et al. (2002))
and hence also the VC dimension
of the set of all subgraphs
from
\[
\{ T_{\beta_n} f \, : \, f \in {\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2}) \}
\]
is bounded above by
\begin{eqnarray*}
&&
c_{22} \cdot (L_{n,1} \cdot K_{n,1}^2 + L_{n,2} \cdot K_{n,2}^2)
\cdot (k+1) \cdot
(L_{n,1} + L_{n,2}) \cdot \log( (k+1) \cdot (L_{n,1} + L_{n,2}))
\\
&&
\leq
c_{23} \cdot k \cdot L_n^2 \cdot K_n^2 \cdot \log(n).
\end{eqnarray*}
From this together with Lemma 9.2 and Theorem 9.4
in Gy\"orfi et al. (2002) we can conclude
\begin{eqnarray*}
&&
{\mathcal{N}}_2 \left(\epsilon,
\{ T_{\beta_n} f \, : \, f \in {\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2}) \},
z_1^s
\right)
\\
&&
\leq
3 \cdot \left(
\frac{4 e \cdot (c_{2} \cdot \log n)^2}{\epsilon^2}
\cdot
\log
\frac{6 e \cdot (c_{2} \cdot \log n)^2}{\epsilon^2}
\right)^{ c_{23} \cdot k \cdot L_n^2 \cdot K_n^2 \cdot \log(n)
},
\end{eqnarray*}
which implies the assertion. \hfill $\Box$
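The counting argument in this proof can be made explicit: unfolding over the $k+1$ time steps multiplies the depth but not the number of distinct weights, since the weights are shared across time. The helper below is our own small sketch of this bookkeeping.

```python
def unfolded_size(k, K1, K2, L1, L2, d):
    """Depth, width and an order-of-magnitude count of distinct (shared)
    weights for the recurrent network of Section 2, unfolded in time."""
    depth = (k + 1) * (L1 + L2)           # (k+1) * (L_{n,1} + L_{n,2}) layers
    width = max(K1, K2) + 2 * d + 2       # at most this many neurons per layer
    weights = L1 * K1 ** 2 + L2 * K2 ** 2 # shared across all time steps
    return depth, width, weights

# doubling k doubles the unfolded depth but leaves the weight count unchanged
d1, w1, p1 = unfolded_size(k=3, K1=4, K2=4, L1=2, L2=2, d=2)
d2, w2, p2 = unfolded_size(k=7, K1=4, K2=4, L1=2, L2=2, d=2)
```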
\subsection{Proof of Theorem \ref{th1}}
{\it In the first step of the proof} we show that the
assertion follows from
\begin{eqnarray}
\label{pth1eq1}
&&
{\mathbf E} \int
| m_n(u,v)- G(u,H_k(v))|^2
{\mathbf P}_{X_{n+1}}(du) {\mathbf P}_{(X_{n},...,X_{n-k+1})}(dv)
\nonumber \\
&&
\leq
(\log n)^3 \cdot
n^{- \frac{2 \cdot \min\{p_G,p_H\}}{2 \cdot \min\{p_G,p_H\}+(d+1)}}.
\end{eqnarray}
Let
\[
m(x_{k+1},x_k,\dots,x_1)={\mathbf E}\{Y_{k+1}|X_{k+1}=x_{k+1}, \dots,X_{1}=x_{1}\}
\]
be the regression function to $((X_{k+1},\dots,X_1),Y_{k+1})$.
By the assumptions on $(X_t,Y_t)$ we have
\[
m(x_{k+1},x_k,\dots,x_1)=G(x_{k+1},H_k(x_k,\dots,x_1))
\]
and
\[
m(x_{n+1},x_n,\dots,x_{n-(k-1)})={\mathbf E}\{Y_{n+1}|X_{n+1}=x_{n+1}, \dots,X_{n-(k-1)}
=x_{n-(k-1)}\},
\]
from which we can conclude by a standard decomposition of the
$L_2$ risk in nonparametric regression (cf., e.g., Section 1.1 in Gy\"orfi et
al. (2002))
\begin{eqnarray*}
&&
{\mathbf E} \left\{
\left|
Y_{n+1}-m_n(X_{n+1},X_n, \dots, X_{n-(k-1)})
\right|^2
\right\}
\\
&&
= {\mathbf E} \Bigg\{
\bigg|
(Y_{n+1}-m(X_{n+1},X_n, \dots, X_{n-(k-1)}))
\\
&&
\hspace*{2cm}
+(m(X_{n+1},X_n, \dots, X_{n-(k-1)})
-m_n(X_{n+1},X_n, \dots, X_{n-(k-1)}))
\bigg|^2
\Bigg\}
\\
&&
=
{\mathbf E} \left\{
\left|
Y_{n+1}-m(X_{n+1},X_n, \dots, X_{n-(k-1)})
\right|^2
\right\}
\\
&&
\hspace*{1cm}
+
{\mathbf E} \left\{ \left|
m(X_{n+1},X_n, \dots, X_{n-(k-1)})
-m_n(X_{n+1},X_n, \dots, X_{n-(k-1)})
\right|^2
\right\}
\\
&&
=
\min_{g: (\mathbb{R}^d)^{k+1} \rightarrow \mathbb{R}}
{\mathbf E} \left\{
\left|
Y_{n+1}-g(X_{n+1},X_n, \dots, X_{n-(k-1)})
\right|^2
\right\}
\\
&&
\hspace*{1cm}
+
{\mathbf E} \int
| m_n(u,v)- G(u,H_k(v))|^2
{\mathbf P}_{X_{n+1}}(du) {\mathbf P}_{(X_{n},...,X_{n-k+1})}(dv).
\end{eqnarray*}
{\it In the second step of the proof} we show
\begin{eqnarray}
\label{pth1eq2}
&&
{\mathbf E} \Bigg\{
\int
| m_n(u,v)- G(u,H_k(v))|^2
{\mathbf P}_{X_{n+1}}(du) {\mathbf P}_{(X_{n},...,X_{n-k+1})}(dv)
\nonumber \\
&&
\hspace*{2cm}
-
\frac{6 \cdot k+6}{n-k} \sum_{i=k+1}^n
|m_n(X_i,X_{i-1},\dots,X_{i-k})-m(X_i,X_{i-1},\dots,X_{i-k})
|^2
\Bigg\}
\nonumber \\
&&
\leq
c_{24} \cdot (\log n)^6 \cdot
n^{-\frac{2 \cdot \min\{p_G,p_H\}}{2 \cdot \min\{p_G,p_H\}+d+1}}.
\end{eqnarray}
Set
\begin{eqnarray*}
&&
T_n=
\int
| m_n(u,v)- G(u,H_k(v))|^2
{\mathbf P}_{X_{n+1}}(du) {\mathbf P}_{(X_{n},...,X_{n-k+1})}(dv)
\\
&&
\hspace*{0.5cm}
-
\frac{6 \cdot k+6}{n-k} \sum_{i=k+1}^n
|m_n(X_i,X_{i-1},\dots,X_{i-k})-m(X_i,X_{i-1},\dots,X_{i-k})
|^2.
\end{eqnarray*}
Then
\begin{eqnarray*}
&&
T_n \leq
\int
| m_n(u,v)- G(u,H_k(v))|^2
{\mathbf P}_{X_{n+1}}(du) {\mathbf P}_{(X_{n},...,X_{n-k+1})}(dv)
\\
&&
\hspace*{0.1cm}
-
\frac{6 \cdot k + 6}{n-k} \sum_{i= k+1+l \cdot (k+1), \atop
l \in \mathbb{N}_0, k+1+l \cdot (k+1) \leq n}
|m_n(X_i,X_{i-1},\dots,X_{i-k})-m(X_i,X_{i-1},\dots,X_{i-k})
|^2
\\
&&
=
\int
| m_n(u,v)- G(u,H_k(v))|^2
{\mathbf P}_{X_{n+1}}(du) {\mathbf P}_{(X_{n},...,X_{n-k+1})}(dv)
\\
&&
\hspace*{0.5cm}
-
\frac{n_k \cdot (6 \cdot k + 6)}{3 \cdot (n-k)} \cdot \frac{3}{n_k}\sum_{i= k+1+l \cdot (k+1), \atop
l \in \mathbb{N}_0, k+1+l \cdot (k+1) \leq n}
\bigg|m_n(X_i,X_{i-1},\dots,X_{i-k})
\\
&&
\hspace*{8cm}
-m(X_i,X_{i-1},\dots,X_{i-k})
\bigg|^2
\end{eqnarray*}
where
\[
n_k=| \{ l \in \mathbb{N}_0 \, : \,
k+1+l \cdot (k+1) \leq n \} |
\geq
\left\lfloor \frac{n}{k+1} \right\rfloor
\geq \frac{n}{2k+2}
\]
is the number of terms in the sum on the right-hand side
above and consequently
\begin{eqnarray*}
&&
T_n
\leq
\int
| m_n(u,v)- G(u,H_k(v))|^2
{\mathbf P}_{X_{n+1}}(du) {\mathbf P}_{(X_{n},...,X_{n-k+1})}(dv)
\\
&&
\hspace*{1cm}
-
\frac{3}{n_k}\sum_{i= k+1+l \cdot (k+1), \atop
l \in \mathbb{N}_0, k+1+l \cdot (k+1) \leq n}
|m_n(X_i,X_{i-1},\dots,X_{i-k})-m(X_i,X_{i-1},\dots,X_{i-k})
|^2.
\end{eqnarray*}
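The sum above is thinned to every $(k+1)$-st time point $i = k+1 + l \cdot (k+1)$, which is what makes the retained blocks (conditionally) independent. The counting bound $n_k = \lfloor n/(k+1) \rfloor \geq n/(2k+2)$ is easy to confirm numerically; the following sketch (with illustrative values of $n$ and $k$, not taken from the text) does exactly that.

```python
# Numerical check of the counting bound for the thinned index set
# { i = k+1 + l*(k+1) : l in N_0, i <= n } used in the proof above.
# The ranges of n and k below are illustrative only.

def thinned_indices(n, k):
    """Every (k+1)-st index i = k+1 + l*(k+1), l = 0, 1, ..., with i <= n."""
    return list(range(k + 1, n + 1, k + 1))

def bound_holds(n, k):
    n_k = len(thinned_indices(n, k))
    # n_k equals floor(n/(k+1)), which is at least n/(2k+2) once n >= k+1
    return n_k == n // (k + 1) and n_k >= n / (2 * k + 2)

assert all(bound_holds(n, k) for k in range(1, 9) for n in range(k + 1, 300))
```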
Let
$\delta_n \geq c_{25}/n$.
Then
\[
{\mathbf E}\{ T_n\} \leq \delta_n + \int_{\delta_n}^{4 \beta_n^2} {\mathbf P}\{ T_n > t\} \, dt.
\]
We will apply Lemma \ref{le2} in order to bound
${\mathbf P}\{ T_n > t\}$ for $t>\delta_n$. Here we will
replace $n$ by $n_k$ and ${\cal F}_n$ by
${\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2})$.
By Lemma \ref{le6}
we know for $u > c_{25} \cdot \delta_n/\beta_n^2$
\begin{eqnarray*}
&&
\log {\mathcal{N}}_2 \left( u , \left\{
(T_{\beta_n} f-m)^2 \, : \, f \in {\cal F}_n,
\frac{1}{n} \sum_{i=1}^n
|T_{\beta_n} f(x_i)-m(x_i)|^2 \leq \frac{\delta}{\beta_n^2}
\right\} , x_1^n \right)
\\
&&
\leq
\log {\mathcal{N}}_2 \left( u , \left\{
(T_{\beta_n} f-m)^2 \, : \, f \in {\cal F}_n
\right\} , x_1^n \right)
\\
&&
\leq
\log {\mathcal{N}}_2 \left( \frac{u}{4 \beta_n} , \left\{
T_{\beta_n} f \, : \, f \in {\cal F}_n
\right\} , x_1^n \right)
\\
&&
\leq
c_{26} \cdot k \cdot (\max\{L_{n,1}, L_{n,2}\})^2
\cdot (\max\{K_{n,1}, K_{n,2}\})^2 \cdot (\log n)^2
\\
&&
\leq
c_{27} \cdot n^{\frac{(d+1)}{2 \cdot \min\{p_G,p_H\}+d+1}} \cdot (\log n)^2.
\end{eqnarray*}
Consequently (\ref{le2eq1}) is satisfied for
\[
\delta_n=c_{28} \cdot \frac{n^{\frac{d+1}{2 \cdot \min\{p_G,p_H\}+d+1}} \cdot (\log n)^6}{n}
=
c_{28} \cdot (\log n)^6 \cdot n^{-\frac{2 \cdot \min\{p_G,p_H\}}{2 \cdot \min\{p_G,p_H\}+d+1}}.
\]
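The simplification of $\delta_n$ is pure exponent arithmetic: with $p = \min\{p_G,p_H\}$ one has $\frac{d+1}{2p+d+1} - 1 = -\frac{2p}{2p+d+1}$. A quick check in exact rational arithmetic (integer values of $p$ and $d$ are used only for illustration; the identity holds for all positive $p$ and $d$):

```python
from fractions import Fraction

def exponent_after_division(p, d):
    """Exponent of n in n^{(d+1)/(2p+d+1)} / n, computed exactly."""
    return Fraction(d + 1, 2 * p + d + 1) - 1

def claimed_exponent(p, d):
    """The exponent -2p/(2p+d+1) appearing in the formula for delta_n."""
    return Fraction(-2 * p, 2 * p + d + 1)

# spot-check the identity on a grid of integer (p, d) values
assert all(exponent_after_division(p, d) == claimed_exponent(p, d)
           for p in range(1, 12) for d in range(1, 12))
```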
Application of Lemma \ref{le2} yields
\[
{\mathbf E}\{ T_n\} \leq \delta_n + c_{29} \cdot \frac{\beta_n^2}{n} \cdot
\exp\left( - \frac{n \cdot \delta_n}{c_{14} \cdot \beta_n^2}\right),
\]
which implies (\ref{pth1eq2}).
{\it In the third step of the proof} we show
\begin{eqnarray}
&& \label{pth1eq3}
{\mathbf E} \left\{
\frac{1}{n-k} \sum_{i=k+1}^n
|m_n(X_i,X_{i-1},\dots,X_{i-k})-m(X_i,X_{i-1},\dots,X_{i-k})
|^2
\right\}
\nonumber \\
&&
\leq
c_{30} \cdot (\log n)^6 \cdot
n^{-\frac{ 2 \cdot \min\{p_G,p_H\}}{2 \cdot \min\{p_G,p_H\}+d+1}}.
\end{eqnarray}
To do this, we set
\[
\bar{T}_n=
\frac{1}{n-k} \sum_{i=k+1}^n
|m_n(X_i,X_{i-1},\dots,X_{i-k})-m(X_i,X_{i-1},\dots,X_{i-k})
|^2
\]
and define $\delta_n$ as in the second step of the proof
(for $c_{28}$ sufficiently large).
Then
\[
{\mathbf E}\{ \bar{T}_n \}
\leq \delta_n + \int_{\delta_n}^{4 \beta_n^2} {\mathbf P}\{ \bar{T}_n > t\} \, dt.
\]
To bound $ {\mathbf P}\{ \bar{T}_n > t\}$ for $t \geq \delta_n$,
we apply Lemma \ref{le1} conditioned on $X_1, \dots, X_n$
and with sample size $n-k$ instead of $n$
and with
${\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2})$
instead of ${\cal F}_n$. As in the proof of
the second step we see that (\ref{le1eq1}) holds for $\delta \geq
\delta_n/12$. Furthermore, we get by application of
Lemma \ref{le3} b) and
Lemma \ref{le5}
\begin{eqnarray*}
&&
\min_{f \in {\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2})}
\frac{1}{n-k} \sum_{i=k+1}^n
|f(X_i,X_{i-1},\dots,X_{i-k})-m(X_i,X_{i-1},\dots,X_{i-k})
|^2
\\
&&
\leq
\left(
\min_{f \in {\cal F}(k,K_{n,1},K_{n,2},L_{n,1},L_{n,2})} \|f-m\|_{\infty, [0,1]^d}
\right)^2
\\
&&
\leq
c_{31} \cdot \max\{L_{n,1}^{-\frac{2 p_H}{d+1}},
L_{n,2}^{-\frac{2 p_G}{d+1}} \}
\leq
c_{28}/2 \cdot (\log n)^6 \cdot
n^{-\frac{ 2 \cdot \min\{p_G,p_H\}}{2 \cdot \min\{p_G,p_H\}+d+1}}=\delta_n/2.
\end{eqnarray*}
Consequently, we get by Lemma \ref{le1}
\begin{eqnarray*}
{\mathbf E}\{ \bar{T}_n \}
&\leq& \delta_n + \int_{\delta_n}^{4 \beta_n^2} {\mathbf P}\{ \bar{T}_n > t\} \, dt
\\
&\leq& \delta_n +4 \beta_n^2 \cdot
{\mathbf P}\{ \bar{T}_n > \delta_n/2 + \delta_n/2\}
\\
&\leq&
\delta_n +4 \beta_n^2 \cdot c_{32} \cdot \exp( - c_{33} \cdot (n-k) \cdot
\frac{\delta_n}{2}) + 4 \beta_n^2 \cdot \frac{c_{34}}{n},
\end{eqnarray*}
which implies (\ref{pth1eq3}).
{\it In the fourth step of the proof} we conclude the proof of
Theorem \ref{th1} by showing (\ref{pth1eq1}). Define $T_n$ and
$\bar{T}_n$ as in the second and in the third step of
the proof, resp. Then (\ref{pth1eq2}) and (\ref{pth1eq3})
imply
\begin{eqnarray*}
&&
{\mathbf E} \int
| m_n(u,v)- G(u,H_k(v))|^2
{\mathbf P}_{X_{n+1}}(du) {\mathbf P}_{(X_{n},...,X_{n-k+1})}(dv)
\\
&&
\leq {\mathbf E}\{ T_{n} \}
+
(6 \cdot k + 6) \cdot
{\mathbf E}\{ \bar{T}_{n} \}
\leq
c_{35} \cdot (\log n)^6 \cdot
n^{-\frac{ 2 \cdot \min\{p_G,p_H\}}{2 \cdot \min\{p_G,p_H\}+d+1}}.
\end{eqnarray*}
\hfill $\Box$
\section{Introduction}
\subsection{Entropy and unital stochastic maps}
When a pure state, represented by a density matrix $\rho$, is
transmitted along a noisy channel, it is mapped into a mixed
state $\Phi(\rho)$. The entropy of the initial pure state
is necessarily zero, i.e.,
$S(\rho) \equiv - \hbox{Tr} \, \rho \, \log \rho = 0$ since
${\rho}^2 = \rho$ and so the only
eigenvalues of $\rho$ are $0$ and $1$. However, the
entropy $S[\Phi(\rho)]$ of the mixed state which emerges need
not be zero. One seeks states $\rho$ which minimize the
effect of the noise in the sense of minimizing the entropy
$S[\Phi(\rho)]$ of the state that emerges from the channel.
There are a number of reasons for studying such states,
most notably the connection between minimizing entropy
and maximizing channel capacity, which will be
discussed in Section \ref{subsect:prelim.cap}. However, in this paper
we focus attention on the entropy.
The noise, which results from interactions between the states
in a Hilbert space ${\cal H}$
and the environment, is represented by the action of a
completely positive, trace-preserving map $\Phi$ on the
trace class operators in ${\cal B}({\cal H})$.
We use the term {\em stochastic} to describe such maps.
(Following a similar use by Alberti and Uhlmann \cite{AH},
this terminology was used by Petz,
\cite{Pz} and reflects the fact that $\Phi$ is the non-commutative
analogue of the action of a column stochastic matrix on a probability
vector.) We restrict attention to two-level quantum systems in which
case ${\cal H} = {\bf C}^2$ or tensor products of ${\bf C}^2$. A stochastic
map $\Phi$ acting on states in ${\bf C}^2$, can be naturally extended to
tensor products, e.g., $\Phi \otimes \Phi$ acting on states
on ${\bf C}^2 \otimes {\bf C}^2 $ etc., and leads to questions about
the additivity of the minimal entropy and capacity of product
channels.
In particular, Shor has conjectured that the minimal
entropy is additive. We were led independently to this conjecture
because it would imply additivity of channel capacity for {\em unital}
stochastic maps, i.e., maps which take the identity operator
to itself so that $\Phi(I) = I$. Although we have a convincing argument
that our results for entropy, unlike those for channel capacity,
extend to non-unital maps, we focus most of our attention on
unital maps. In the last section we briefly consider non-unital maps.
\bigskip
Recall that every completely positive map $\Phi$ can be
represented (non-uniquely) in the Kraus form
\begin{eqnarray} \label{eq:kraus}
\Phi(\rho) = \sum_k A_{k}^{\dagger} \rho A_{k}.
\end{eqnarray}
Every map $\Phi$ representing noisy evolution in a quantum channel
must preserve the trace of $\rho$, since $\Phi(\rho)$ is also
a state. In terms of the Kraus operators,
the condition that $\Phi$ be stochastic, that is
completely positive and trace preserving, is
\begin{eqnarray} \label{eq:cond.tracepres}
\hbox{Tr} \Phi(\rho) = \hbox{Tr} \rho \quad \forall \rho & \Leftrightarrow & \sum_{k=1}^{n} A_{k}
A_{k}^{\dagger} = I .
\end{eqnarray}
The map $\Phi$ is {\it unital} if $\Phi(I) = I$, that is
if $\Phi$ maps the identity operator to itself.
In terms of the Kraus operators,
the condition for $\Phi$ to be completely positive and unital is
\begin{eqnarray} \label{eq:cond.unital}
\Phi(I) = I ~~ & \Leftrightarrow &
\sum_{k=1}^{n} A_{k}^{\dagger} A_{k} = I .
\end{eqnarray}
A sufficient condition that a stochastic map be unital is that the
Kraus operators are self-adjoint, i.e., $A_k = A_k^{\dagger} ~ \forall k$.
This condition is not necessary; for example, a
doubly stochastic matrix which is not symmetric corresponds to a
unital stochastic map which is not self-adjoint.
Henceforth in this paper, ``unital map'' will mean ``unital stochastic map''
unless otherwise stated.
It is worth noting that the Kraus operators are self-adjoint if
and only if $\Phi$ is self-adjoint with respect to the
Hilbert-Schmidt inner product
$\langle P, Q \rangle = \hbox{Tr} P^{\dagger} Q$. The
dual or adjoint map $\hat{\Phi}$ is then defined by the
condition $\hbox{Tr} [\hat{\Phi}(P)]^{\dagger} Q = \hbox{Tr} P^{\dagger} \Phi(Q)$.
It is easy to see that if $\Phi$ is given by (\ref{eq:kraus}) then
$\hat{\Phi}(\rho) = \sum_k A_{k}\rho A_{k}^{\dagger} $.
In addition, since the dual of any trace-preserving map
satisfies $\hat{\Phi}(I) = I$, any stochastic map (considered as
a linear map on the space of Hilbert-Schmidt operators) has
eigenvalue $1$, and hence a fixed point $P$ such that $\Phi(P) = P$.
For unital maps, the identity is an eigenvector whose orthogonal
complement is the set of operators with trace zero. Hence, a
unital map also defines a linear map on the traceless part
of a density matrix. By contrast, a non-unital map is only affine
on the set of traceless matrices. This distinction is easily
seen for the ${\bf C}^2$ case when the Bloch sphere representation
is used as described in section \ref{sect:notation}.
Appendix C contains a list of examples of unital and non-unital maps.
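The duality relation $\hbox{Tr} [\hat{\Phi}(P)]^{\dagger} Q = \hbox{Tr} P^{\dagger} \Phi(Q)$ with $\hat{\Phi}(\rho) = \sum_k A_k \rho A_k^{\dagger}$ is a purely linear-algebraic identity and can be verified numerically for arbitrary operators $A_k$, with no normalization required. A minimal sketch with randomly generated matrices (our own illustration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed; the matrices are arbitrary

def rand_mats(count, d=2):
    return [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
            for _ in range(count)]

A = rand_mats(3)   # generic operators A_k; no Kraus normalization needed here

def Phi(rho):      # Phi(rho)    = sum_k A_k^dagger rho A_k, as in (1)
    return sum(a.conj().T @ rho @ a for a in A)

def PhiHat(rho):   # PhiHat(rho) = sum_k A_k rho A_k^dagger, the proposed dual
    return sum(a @ rho @ a.conj().T for a in A)

P, Q = rand_mats(2)
lhs = np.trace(PhiHat(P).conj().T @ Q)   # Tr [PhiHat(P)]^dagger Q
rhs = np.trace(P.conj().T @ Phi(Q))      # Tr P^dagger Phi(Q)
assert np.isclose(lhs, rhs)
```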
\bigskip
Recall that the entropy of a density matrix can be written in
terms of its eigenvalues $\lambda_k$, namely
$S(\rho) = -\sum_k \lambda_k \log \lambda_k$. The minimal
entropy $S(\rho) = 0$ occurs if and only if one eigenvalue of $\rho$ is
$1$ and all others $0$; the maximal entropy (in $d$ dimensions)
of $S(\rho) = \log d$ occurs if and only if all eigenvalues
are $1/d$ so that $\rho = \frac{1}{d} I$. Thus, if
$S(\rho) \approx 0$, one must have one eigenvalue close to $1$
and the others near $0$. Hence, states with small entropy
are those for which $\| \rho \| \approx 1$. Thus, in seeking
pure states $\rho$ which have minimal entropy $S[\Phi(\rho)]$
after emerging from a noisy channel, we are led to seek states
for which $\| \Phi(\rho) \|$ is maximal. In section \ref{sect:norm.bnd}
we give a precise definition of the maximal norm and show that
it is multiplicative when (at least) one channel is a unital map on
${\bf C}^2$.
Our results suggest that multiplicativity of the maximal norm may
hold for general channels; in fact, we can extend our result to some
non-unital channels (see Remarks at the end of Section \ref{sect:norm.bnd}).
In any case, our results provide
strong support for the additivity of minimal entropy for
unital channels.
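The eigenvalue formula $S(\rho) = -\sum_k \lambda_k \log \lambda_k$ and its two extremes are easy to illustrate numerically (a sketch; the example states below are ours, not taken from the text):

```python
import numpy as np

def entropy(rho):
    """S(rho) = -sum_k lambda_k log lambda_k, with the convention 0 log 0 = 0."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

pure    = np.diag([1.0, 0.0])   # eigenvalues (1, 0):      S = 0
maximal = np.eye(2) / 2         # eigenvalues (1/2, 1/2):  S = log 2

assert abs(entropy(pure)) < 1e-10
assert abs(entropy(maximal) - np.log(2)) < 1e-10
assert abs(np.linalg.norm(pure, 2) - 1.0) < 1e-10   # ||rho|| = 1 when S = 0
```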
Another property of unital channels is that the entropy
of a state is non-decreasing under the action of a unital
stochastic map.
This follows easily from the fact that the relative entropy
\begin{eqnarray}
H(P,Q) = \hbox{Tr} P [\log P - \log Q]
\end{eqnarray}
decreases under stochastic maps, i.e.,
\begin{eqnarray} \label{eq:relent.dec}
H[\Phi(P),\Phi(Q)] \leq H(P,Q).
\end{eqnarray}
Since $S(\rho) = -H(\rho, \frac{1}{d} I) + \log d$
if $\Phi$ is unital, it follows from (\ref{eq:relent.dec})
that $S[\Phi(\rho)] \geq S(\rho)$.
For a non-unital map, the entropy of a pure state cannot decrease;
however, one can have mixed states for which the entropy actually
decreases.
\subsection{Channel Capacity}
We now discuss the information
capacity of a noisy quantum channel \cite{Ho1}, \cite{Ho2} used for
what is sometimes called ``classical'' communication, i.e.,
communications in which signals are sent using quantum particles
but without additional or prior entanglement between sender
and receiver.
In the simplest case where no entanglement
is used in either the transmission or the measurement,
each letter $i$ from the source alphabet is
represented by a pure state which we represent by its
density matrix ${\rho}_i$ on a quantum Hilbert space.
During transmission
the channel transforms this state into ${\tilde \rho}_i \equiv
\Phi({\rho}_i)$, where $\Phi$ implements the noisy interaction
between states and the environment. The map $\Phi$
is a completely positive, trace-preserving map on the set of states.
The resulting state ${\tilde \rho}_i$ is measured, and the outcome
determines a letter from the output alphabet. In the general case this
measurement is effected
by a positive operator-valued measure (POVM) -- namely, there is a positive
operator $E_j$ assigned to each letter $j$ of the output alphabet, which
together
satisfy the constraint $\sum_{j} E_j = I$. When the measurement is
performed on a state
$\rho$, the result will be $j$ with
probability ${\rm Tr}\big(\rho E_j\big)$.
Several definitions of channel capacity have been proposed,
corresponding to whether or not entangled states are used for
transmission, and whether or not entangled measurements are made by the
receiver. Bennett and Shor \cite{BS} identify four possibilities,
which we denote $C_{PP}, C_{PE}, C_{EP}$ and $C_{EE}$ where
the subscripts $P$ and $E$ refer to product and entangled
processes respectively.
\bigskip
In the process described above, with no entanglement at either end,
the channel is equivalent to a classical
noisy channel with transition probabilities
$\{p_{ij} = \hbox{Tr} \big({\tilde \rho}_i E_j\big)\}$.
Therefore its
maximum rate of information transmission is given by
\begin{eqnarray}\label{Shannon}
C_{PP}(\Phi) = C_{\rm Shan}(\Phi) = \sup_{\pi, \rho, E}
\sum_{i} \sum_{j} {\pi}_i p_{ij} \log \bigg(
{p_{ij} \over \sum_{k} {\pi}_k p_{kj}} \bigg),
\end{eqnarray}
where we take the $\sup$ over all probability distributions $\{{\pi}_i\}$
for the
input alphabet, as well as all choices of input states and measurements.
We call this the {\it Shannon capacity} of the channel, since it bears closest
resemblance to the classical case.
It is reasonable to expect that by transmitting entangled states and by
using entangled
measurements it may be possible to exceed the Shannon capacity for a noisy
channel. The Holevo-Schumacher-Westmoreland Theorem \cite{Ho1}, \cite{SW}
provides a closed form expression for the capacity in the case
where product states are transmitted at the input
and entangled measurements of arbitrary length are allowed at the output:
\begin{eqnarray}\label{HSW}
C_{PE} = C_{{\rm Holv}}(\Phi) .
\end{eqnarray}
Here $C_{{\rm Holv}}(\Phi)$ is the {\it Holevo capacity} of the channel:
\begin{eqnarray}\label{Holevo}
C_{{\rm Holv}}(\Phi) = \sup_{\pi, \rho}
\bigg(S({\tilde \rho}) - \sum_{i} {\pi}_i S({\tilde \rho}_i)\bigg),
\end{eqnarray}
where $\rho = \sum_i {\pi}_i {\rho}_i$ and
${\tilde \rho} = \Phi(\rho)$. The well-known Holevo bound states that
\begin{eqnarray}
C_{{\rm Shan}}(\Phi) \leq C_{{\rm Holv}}(\Phi).
\end{eqnarray}
Holevo \cite{Ho2, Ho3} provided examples of channels in which this
inequality is strict, i.e., $C_{{\rm Shan}}(\Phi) < C_{{\rm Holv}}(\Phi)$.
Furthermore, it has been shown \cite {FC,OPW} that a
necessary and sufficient condition for strict inequality is
that the output states $\{{\tilde \rho}_i\}$ do not commute.
\bigskip
One important open question is whether or not the
Holevo capacity can be exceeded when
entangled states are used at the input, that is whether
$C_{EE}$ exceeds $C_{PE}$. This would be equivalent to the
superadditivity of the Holevo capacity.
In this paper we address this question in the case of messages
which are entangled over {\it two inputs} only. This
is equivalent to the question whether
$C_{\rm Holv}(\Phi \otimes \Phi)$ exceeds $2 C_{\rm Holv}(\Phi)$.
Holevo \cite{Ho2} has shown that $C_{{\rm Shan}}(\Phi \otimes \Phi) > 2
C_{{\rm Shan}}(\Phi)$ for the quantum binary channel, but to the
best of the authors' knowledge there is no known example
of a superadditive channel for the Holevo capacity.
Bruss et al.\ \cite{BFMP} showed that
$C_{\rm Holv}(\Phi \otimes \Phi) = 2 C_{\rm Holv}(\Phi)$ for the
depolarising channel, which is an example of a unital channel.
As we will show, our results strongly suggest that {\it if} the
Holevo capacity is superadditive then the channel must be non-unital.
\subsection{Summary of Results}\label{sect:summary}
We prove several
theorems about the minimal entropy and the maximal norm for
states of the form ${(\Phi \otimes \Omega) ({\rho}_{12})}$, where
$\Phi$ and $\Omega$ are unital stochastic maps on ${\bf C}^{2\times 2}$
and
${\rho}_{12}$ is an entangled state. In addition, we explain how
these results provide evidence for
the conjecture that {\it minimal entropy is additive} for all stochastic
maps on ${\bf C}^{2\times 2}$. We also show that this conjecture has
important implications for the capacity of unital quantum channels. In
particular, we show that the conjecture implies that if $\Phi$ is
unital, then the Holevo capacity is additive over two inputs, that is
${C_{{\rm Holv}}(\Phi \otimes \Phi) = 2 C_{{\rm Holv}}(\Phi)}$.
\medskip
Our first theorem concerns the maximal value of $\|\Phi(\rho)\|$
as $\rho$ varies over states on ${\bf C}^{2}$. We will
consider the general possibility of two
stochastic maps $\Phi$ and $\Omega$ on ${\bf C}^{2\times 2}$
and denote their maximal values by $M_{\Phi}$ and $M_{\Omega}$
respectively. In Theorem \ref{thm:normbnd} we
prove, under mild conditions on one of these maps, that the maximal value
of
$\|(\Phi \otimes\Omega)({\rho}_{12})\|$ is $M_{\Phi} M_{\Omega}$, as
${\rho}_{12}$ varies over states on ${\bf C}^{2\times 2}$.
That is, the norm
of $(\Phi \otimes \Omega)({\rho}_{12})$ achieves its maximal value on product
states ${\rho}_{12} = {\rho}_{1}
\otimes {\rho}_{2}$, rather than on entangled states.
In Theorem \ref{thm:mixing}, we prove a similar, though slightly weaker,
result for the minimal entropy of $(\Phi \otimes \Omega)({\rho}_{12})$.
Namely, we restrict ${\rho}_{12}$ to the family of entangled states whose
reduced density matrices ${\rho}_{1}$ and ${\rho}_{2}$ are such that
$\Phi({\rho}_{1})$ and $\Omega({\rho}_{2})$ have minimal entropy.
Then we prove that the minimal entropy of $(\Phi \otimes
\Omega)({\rho}_{12})$, as
${\rho}_{12}$ varies over this family, is
the sum of the minimal entropies
of $\Phi({\rho}_{1})$ and $\Omega({\rho}_{2})$. That is, the entropy also
achieves its minimal value on product states.
It seems extremely unlikely that we could further decrease the entropy of
$(\Phi \otimes \Omega)({\rho}_{12})$
by using an entangled state whose reduced density matrices do not have
minimal entropy themselves.
Therefore we believe that the conclusion of Theorem \ref{thm:mixing}
also holds as ${\rho}_{12}$ varies over all entangled states on
${\bf C}^{2\times 2}$. In fact, similar
arguments and numerical evidence support an even stronger conclusion,
namely that
minimal entropy is additive for {\it all} stochastic maps on
${\bf C}^{2\times 2}$. This is the content of the conjecture below.
\begin{conj} \label{conj:minent}
If $\Phi$ and $\Omega$ are stochastic maps on ${\bf C}^{2\times 2}$, then
\begin{eqnarray}
\inf_{\rho~ {\rm pure}} S\Big[(\Phi \otimes \Omega)(\rho)\Big] =
\inf_{\rho~ {\rm pure}} S\Big[\Phi(\rho)\Big] +
\inf_{\rho~ {\rm pure}} S\Big[\Omega(\rho)\Big].
\end{eqnarray}
\end{conj}
We will discuss the evidence for this conjecture in detail in
Section \ref{sect:entanal} and Section \ref{subsect:nonu.min.ent}.
We note that
P. Shor \cite{Shor} earlier made a similar conjecture, and
together with J. Smolin
obtained numerical evidence which supports it.
\bigskip
The unital case is particularly important because it yields the
following results as immediate corollaries.
\begin{cor}
If $\Phi$ is unital, then the Holevo capacity is
additive, i.e.,\linebreak ${C_{{\rm Holv}}(\Phi \otimes \Phi) = 2
C_{{\rm Holv}}(\Phi)}$.
\end{cor}
\begin{cor} If $\Phi$ is unital, then the Holevo capacity
can be achieved with orthogonal states.
\end{cor}
\begin{cor}
If $\Phi$ is unital, then $C_{{\rm Holv}}(\Phi) = C_{{\rm Shan}}(\Phi)$ and
$C_{{\rm Holv}}(\Phi \otimes \Phi) = C_{{\rm Shan}}(\Phi \otimes \Phi)$.
\end{cor}
\begin{cor} If $\Phi$ is unital, then the Shannon capacity is also
additive, i.e., \linebreak ${C_{{\rm Shan}}(\Phi \otimes \Phi) = 2
C_{{\rm Shan}}(\Phi)}$.
\end{cor}
In Section \ref{subsect:prelim.cap} we will explain in detail how these
Corollaries follow if Conjecture \ref{conj:minent} holds for unital maps.
\bigskip
This paper is organized as follows. In Section \ref{sect:prelim} we
introduce the notation we will use for the Stokes parametrization
for representing both
states and maps in a basis consisting of the Identity and
Pauli matrices, and show how the various conditions of
unital, trace-preserving, and complete positivity can be
expressed in this representation. With this background,
we conclude Section \ref{sect:prelim} by presenting
the arguments leading
to the Corollaries above.
In Section \ref{sect:norm.bnd} we prove the multiplicativity of
the maximum value of $\| \Phi(\rho) \|$.
Section \ref{sect:entanal} contains the heart of the paper in which we
give the details of the proof of our theorem about additivity
of minimal entropy.
In Section \ref{sect:non.unit} we discuss some of the features of
non-unital maps using a special subclass and then present the evidence for
additivity of minimal entropy in general.
Finally, in Section \ref{sect:conc} we summarize our results and discuss
their implications for channel design.
We also include three Appendices. Appendix A gives some
important background on singular value decompositions and
the details needed for the diagonal representation
introduced in Section \ref{sect:notation}. Appendix B gives the details
needed to verify the complete positivity conditions
of Section \ref{sect:singv.cond}. In Appendix C we provide a number of
examples of different types of channels and show how some familiar
examples appear in the representation and notation we use.
\section{Preliminaries}\label{sect:prelim}
\subsection{Stokes parametrization and Bloch sphere} \label{sect:notation}
Recall that the identity and Pauli matrices form a basis
for ${\bf C}^{2 \times 2}$ so that any $2 \times 2$ matrix $C$
can be written as $w_0 I + {\bf w} {\mathbf \cdot \sigma} $ where
${\bf \sigma}$ denotes the vector of Pauli matrices
and ${\bf w} \in {\bf C}^3$. Then, for $C = w_0 I + {\bf w} {\mathbf \cdot \sigma}$:
\begin{itemize}
\item[a)] $C$ is self-adjoint $\Leftrightarrow~~(w_0, {\bf w})$ is real, i.e.,
$w_0 \in {\bf R}$ and $ {\bf w} \in {\bf R}^3$;
\item[b)] $\hbox{Tr} C = 1 \Leftrightarrow w_0 = {\textstyle \frac{1}{2}}$; and
\item[c)] $ C \geq 0 \Leftrightarrow | {\bf w}| \leq w_0.$
\end{itemize}
Thus, $\{ I, \sigma \}$ also form a basis for the {\em real}
vector space of self-adjoint matrices in ${\bf C}^{2 \times 2}$
and every density matrix can be written in this basis
as $\rho = {{\textstyle \frac{1}{2}}} [I + {\bf w} \cdot {\bf \sigma}]$ with
$ {\bf w} \in {\bf R}^3$ and $| {\bf w}| \leq 1$. Furthermore
\begin{itemize}
\item[d)] $\rho$ is a one-dimensional projection (or pure state)
$\Leftrightarrow~~ | {\bf w}| = 1.$
\end{itemize}
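Properties a)--d) can be confirmed numerically for sample Bloch vectors. A minimal sketch (the vectors chosen are illustrative):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),   # Pauli matrices
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def rho_of(w):
    """Density matrix rho = (1/2)[I + w . sigma] for a real Bloch vector w."""
    return 0.5 * (np.eye(2) + sum(wi * s for wi, s in zip(w, sigma)))

w_pure  = np.array([0.6, 0.0, 0.8])   # |w| = 1: boundary of the Bloch ball
w_mixed = np.array([0.3, 0.1, 0.2])   # |w| < 1: interior point

r = rho_of(w_pure)
assert np.allclose(r @ r, r)                             # d): one-dim projection
assert np.isclose(np.trace(rho_of(w_mixed)).real, 1.0)   # b): w_0 = 1/2
assert np.linalg.eigvalsh(rho_of(w_mixed)).min() >= 0    # c): rho >= 0
```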
Every linear map
$\Phi : {\bf C}^{2 \times 2} \rightarrow {\bf C}^{2 \times 2}$
can be represented in this basis by a unique $4 \times 4$
matrix ${{\Bbb T}}$, and $\Phi$ is trace-preserving if
and only if the first row satisfies $t_{1k} = \delta_{1k}$, i.e.,
${{\Bbb T}} = \left( \begin{array} {cc}
1 & {\bf 0} \\ {\bf t} & {{\rm T}} \end{array} \right)$
where ${\rm T}$ is a $3 \times 3$ matrix (and ${\bf 0}$ and ${\bf t}$
are row and column vectors respectively) so that
\begin{eqnarray} \label{eq:Trep}
\Phi(w_0I + {\bf w} \cdot \sigma) =
w_0I + ({\bf t} + {{\rm T}} {\bf w}) \cdot \sigma .
\end{eqnarray}
The matrix ${{\Bbb T}}$ is self-adjoint if and only if $\Phi$ is
self-adjoint as an operator on ${\bf C}^{2 \times 2}$
with respect to the Hilbert-Schmidt inner product. We are
interested in those $\Phi$ which map the real vector space of
self-adjoint matrices into itself, which holds if and
only if ${\Bbb T}$ is real.
The map $\Phi$ is unital if and only if ${\bf t} = {\bf 0}$. Thus,
any unital stochastic map $\Phi$ acting on density matrices on ${\bf C}^2$
can be written in the form
\begin{eqnarray} \label{eq:T3rep}
\Phi\big( {\textstyle \frac{1}{2}} [I + {\bf w} {\mathbf \cdot \sigma}] \big) =
{\textstyle \frac{1}{2}} [ I + ({\rm T} {\bf w}) {\mathbf \cdot \sigma} ],
\end{eqnarray}
where ${\rm T}$ is a real $3 \times 3$ matrix. Using the singular value
decomposition (see Appendix A), we can write
\begin{eqnarray}\label{SVD}
{\rm T} = R S
\end{eqnarray}
where $R$ is a rotation and $S$ is self-adjoint.
Define the map ${\Phi}_{S}$ by
\begin{eqnarray}
{\Phi}_{S}\big( {\textstyle \frac{1}{2}} [I + {\bf w} {\mathbf \cdot \sigma}] \big) =
{\textstyle \frac{1}{2}} [ I + (S {\bf w}) {\mathbf \cdot \sigma} ]
\end{eqnarray}
As explained in Appendix A, the rotation $R$ defines a unitary
operator $U$ such that for any state $\rho$,
\begin{eqnarray}\label{eq:def.U}
\Phi(\rho) = U \Big[{\Phi}_{S} \big( \rho \big)
\Big]U^{\dagger}
\end{eqnarray}
In this paper we are interested only in the critical
values of certain functions of the spectrum of $\Phi(\rho)$,
as $\rho$ varies over the space of states -- the maximum value of
the norm, the minimum value of the entropy. Since a unitary
transformation leaves the spectrum unchanged, these are the same for
$\Phi$ and ${\Phi}_{S}$. Also, since $S$ is self-adjoint it can be
diagonalized by a change of basis.
Hence without loss of generality we need henceforth
consider only unital stochastic maps whose matrix ${\rm T}$ defined in
(\ref{eq:Trep}) is diagonal, with eigenvalues
$({\lambda}_1, {\lambda}_2, {\lambda}_3)$. As a shorthand, we will denote
this diagonal map by $\Phi[{\lambda}_1, {\lambda}_2, {\lambda}_3]$.
The image of the set of pure state density matrices
$\rho = {\textstyle \frac{1}{2}}[I + {\bf w} {\mathbf \cdot \sigma} ]$ (with $| {\bf w}| = 1$) under the action of
$\Phi[{\lambda}_1, {\lambda}_2, {\lambda}_3]$ is the ellipsoid
\begin{eqnarray} \label{eq:ellip.unit}
\left( \frac{ x_1 }{\lambda_1} \right)^2 +
\left( \frac{ x_2 }{\lambda_2} \right)^2 +
\left( \frac{ x_3 }{\lambda_3} \right)^2 = 1,
\end{eqnarray}
and the image under the action of $\Phi$ is obtained by a
further rotation of the ellipsoid, corresponding to
the operator $U$ in (\ref{eq:def.U}).
Similar reasoning applies when $\Phi$ is non-unital. Using
(\ref{eq:T.SVD.nonpos}) and (\ref{Phi.selfadj}) from Appendix A, the map
$\Phi$ can be written in the form
$\Phi(\rho) = U \Phi_D (V \rho V^{\dagger}) U^{\dagger}$ where $U,V$
are unitary, $D$ is diagonal and
$\Phi_D$ is represented by the matrix
\begin{eqnarray} \label{eq:T.nonunit}
{{\Bbb T}} = \pmatrix{ 1 & 0 & 0 &0 \cr {t'}_1 & \lambda_1 & 0 & 0
\cr {t'}_2 & 0 & \lambda_2 & 0 \cr {t'}_3 & 0& 0 & \lambda_3 }
\end{eqnarray}
The vector ${\bf t'} = ({t'}_1, {t'}_2, {t'}_3)$ is equal to
$R_2 {R_1}^{T} {\bf t}$ in the notation of (\ref{eq:T.SVD.nonpos}).
In this case, the image of the set of pure state density matrices
$\rho = {\textstyle \frac{1}{2}}[I + {\bf w} {\mathbf \cdot \sigma} ]$ (with $| {\bf w}| = 1$) under the action of
$\Phi_D$ is the translated ellipsoid
\begin{eqnarray} \label{eq:ellip.nonunit}
\left( \frac{ x_1 - {t'}_1}{\lambda_1} \right)^2 +
\left( \frac{ x_2 - {t'}_2}{\lambda_2} \right)^2 +
\left( \frac{ x_3 - {t'}_3}{\lambda_3} \right)^2 = 1,
\end{eqnarray}
and again the image under $\Phi$ is a rotation of this.
\bigskip
It will be useful to write out explicitly the action of
the diagonal unital map
$\Phi[{\lambda}_1, {\lambda}_2, {\lambda}_3]$ on a density matrix
in the form
\begin{eqnarray} \label{Phi}
\Phi[{\lambda}_1, {\lambda}_2, {\lambda}_3](\rho)
& = & \Phi[{\lambda}_1, {\lambda}_2, {\lambda}_3]
\pmatrix{a & b \cr b^{\dagger} & c}
\\ & = & \nonumber
{\textstyle \frac{1}{2}} \pmatrix{ (a+c) + {\lambda}_{3}(a-c) & {\lambda}_1(b+b^{\dagger})
+ {\lambda}_2(b-b^{\dagger}) \cr {\lambda}_1(b+b^{\dagger}) -
{\lambda}_2(b-b^{\dagger}) &
(a+c)- {\lambda}_{3}(a-c)}.
\end{eqnarray}
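The entrywise formula (\ref{Phi}) can be checked against the Bloch-vector description (\ref{eq:T3rep}): scaling the Stokes coefficients by $(\lambda_1,\lambda_2,\lambda_3)$ must give the same output state. A sketch, with eigenvalues chosen (illustratively) so that the map is completely positive:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def phi_bloch(lam, rho):
    """Diagonal unital map: scale each Bloch component w_i by lambda_i."""
    w = [np.trace(rho @ s).real for s in sigma]   # Stokes coefficients of rho
    return 0.5 * (np.eye(2) + sum(l * wi * s for l, wi, s in zip(lam, w, sigma)))

def phi_entrywise(lam, rho):
    """Entrywise form from the displayed formula, with rho = [[a, b], [b*, c]]."""
    a, b, c = rho[0, 0], rho[0, 1], rho[1, 1]
    bd = np.conj(b)
    return 0.5 * np.array(
        [[(a + c) + lam[2] * (a - c), lam[0] * (b + bd) + lam[1] * (b - bd)],
         [lam[0] * (b + bd) - lam[1] * (b - bd), (a + c) - lam[2] * (a - c)]])

lam = (0.5, 0.3, 0.7)                 # satisfies |l1 +/- l2| <= |1 +/- l3|
w = np.array([0.3, 0.4, 0.5])
rho = 0.5 * (np.eye(2) + sum(wi * s for wi, s in zip(w, sigma)))
assert np.allclose(phi_bloch(lam, rho), phi_entrywise(lam, rho))
```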
\subsection{Complete Positivity Conditions}
\label{sect:singv.cond}
\bigskip
The requirement that $\Phi$ be stochastic imposes a number of
constraints on the matrix ${{\Bbb T}}$. We describe these in
Appendix B in which we give explicit formulas for the matrix
elements of ${{\Bbb T}}$ in terms of the Stokes parameterization of
the operators $A_k$. These formulas in turn imply constraints
on the eigenvalues $({\lambda}_1, {\lambda}_2, {\lambda}_3)$
described in the previous section.
Let $T_{jk}$ denote the elements of ${\Bbb T}$ using the convention
that $j,k \in 0\ldots 3$. Then the point with coordinates
$(T_{11}, T_{22}, T_{33}) $ must lie inside a tetrahedron
with corners at
$(1,1,1), (1,-1,-1), (-1,1,-1), (-1,-1,1)$.
These conditions are equivalent to four linear inequalities
which can be written compactly as
\begin{eqnarray}\label{eq:fuji.dcond}
|T_{11} \pm T_{22} | & \leq & |1 \pm T_{33}|.
\end{eqnarray}
(Note that we always have $T_{00} = 1$.)
In the special case where $\Phi$ is unital,
(\ref{eq:fuji.dcond}) implies
that the eigenvalues (which are necessarily real) satisfy
\begin{eqnarray}\label{eq:fuji.cond}
|\lambda_1 \pm {\lambda}_2| & \leq & |1 \pm {\lambda}_3| .
\end{eqnarray}
In fact, for unital $\Phi$ the condition
(\ref{eq:fuji.cond}) is a necessary and sufficient condition
for the numbers $({\lambda}_1, {\lambda}_2, {\lambda}_3)$ to arise
as eigenvalues of the self-adjoint part of a unital stochastic map.
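The equivalence between (\ref{eq:fuji.cond}) and complete positivity can also be tested numerically via the Choi matrix $\sum_{ij} E_{ij} \otimes \Phi(E_{ij})$, which is positive semidefinite if and only if $\Phi$ is completely positive. The Choi-matrix criterion is a standard fact we bring in here for illustration; the sketch below scans a grid of $({\lambda}_1, {\lambda}_2, {\lambda}_3)$ values in $[-1,1]^3$:

```python
import numpy as np
from itertools import product

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def phi_diag(lam, X):
    """Linear extension of Phi[l1, l2, l3] to an arbitrary 2x2 matrix X."""
    out = 0.5 * np.trace(X) * np.eye(2, dtype=complex)
    for l, s in zip(lam, sigma):
        out = out + 0.5 * l * np.trace(X @ s) * s
    return out

def choi_positive(lam):
    """Complete positivity <=> Choi matrix sum_ij E_ij (x) Phi(E_ij) >= 0."""
    C = np.zeros((4, 4), dtype=complex)
    for i, j in product(range(2), repeat=2):
        E = np.zeros((2, 2), dtype=complex)
        E[i, j] = 1.0
        C += np.kron(E, phi_diag(lam, E))
    return np.linalg.eigvalsh(C).min() >= -1e-10

def tetrahedron(lam):
    """The inequalities |l1 +/- l2| <= |1 +/- l3| from the text."""
    l1, l2, l3 = lam
    return (abs(l1 + l2) <= abs(1 + l3) + 1e-10
            and abs(l1 - l2) <= abs(1 - l3) + 1e-10)

grid = np.linspace(-1, 1, 9)
assert all(choi_positive((a, b, c)) == tetrahedron((a, b, c))
           for a in grid for b in grid for c in grid)
```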
These conditions were discussed earlier by Algoet and Fujiwara
\cite{AF}. In addition they gave conditions for complete positivity
of some non-unital maps. In particular, for the special case of
(\ref{eq:T.nonunit}) with the form
\begin{eqnarray} \label{eq:AF.nonunit}
\Phi\big( {\textstyle \frac{1}{2}}[I + {\bf w} {\mathbf \cdot \sigma}]\big) = {\textstyle \frac{1}{2}}
\left[I + w_1 \lambda_1 \sigma_1 + (t + w_3 \lambda_3) \sigma_3 \right],
\end{eqnarray}
they showed that the necessary and sufficient condition for
complete positivity is
\begin{eqnarray} \label{eq:cpmcond.nonunit}
\lambda_1^2 + t^2 \leq (1 - |\lambda_3 |)^2
\end{eqnarray}
\subsection{Relation between capacity and minimal entropy for unital maps}
\label{subsect:prelim.cap}
We now assume wlog that $\Phi$ is self-adjoint and written in the
diagonal form $\Phi[{\lambda}_1, {\lambda}_2, {\lambda}_3]$.
Let $\mu = \max
(|{\lambda}_1|, |{\lambda}_2|, |{\lambda}_3|)$, and let
${ {\bf w}}_{\mu}$ be a unit vector satisfying ${\rm T} { {\bf w}}_{\mu}= \pm \mu
{ {\bf w}}_{\mu}$. Then it is easy to show that
\begin{eqnarray}
\inf_{\rho} S[\Phi(\rho)] =
S\Big( \Phi \big( {\textstyle \frac{1}{2}} [I + { {\bf w}}_{\mu} {\mathbf \cdot \sigma}] \big) \Big) =
h( \mu )
\end{eqnarray}
where
\begin{eqnarray}
h( \mu ) = - {\textstyle \frac{1}{2}} (1 + \mu) \ln {\textstyle \frac{1}{2}} (1 + \mu)
- {\textstyle \frac{1}{2}} (1 - \mu) \ln {\textstyle \frac{1}{2}} (1 - \mu).
\end{eqnarray}
Consider now the question of computing the Holevo capacity $C_{\rm Holv}(\Phi)$
defined in (\ref{Holevo}).
The choice ${\rho}_1 = {\textstyle \frac{1}{2}} [I + { {\bf w}}_{\mu} {\mathbf \cdot \sigma}]$,
${\rho}_2 = {\textstyle \frac{1}{2}} [I - { {\bf w}}_{\mu} {\mathbf \cdot \sigma}]$, and ${\pi}_1 = {\pi}_2 =
{\textstyle \frac{1}{2}}$ simultaneously maximizes the first term $S(\rho) = S({\textstyle \frac{1}{2}} I) =
\ln 2$ and minimizes the second term $\sum {\pi}_i S({\rho}_i) =
h(\mu)$. Hence it also maximizes their difference, which gives
$C_{\rm Hol}(\Phi) = \ln2 - h(\mu)$.
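A quick numerical sanity check of these two formulas (a sketch we add for illustration; the diagonal unital channel is represented by its action on the Bloch vector, and the parameter values are arbitrary examples):

```python
import numpy as np

def h(mu):
    # h(mu) = -(1+mu)/2 ln((1+mu)/2) - (1-mu)/2 ln((1-mu)/2)
    p = (1 + mu) / 2
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def out_entropy(lams, w):
    # For rho = (I + w.sigma)/2, a diagonal unital Phi shrinks the Bloch
    # vector coordinatewise, so S(Phi(rho)) = h(|diag(lams) w|).
    r = np.linalg.norm(np.asarray(lams) * np.asarray(w))
    return h(r)

lams = (0.3, 0.5, 0.7)                  # example eigenvalues, mu = 0.7
mu = max(abs(l) for l in lams)
rng = np.random.default_rng(0)
w = rng.normal(size=(2000, 3))
w /= np.linalg.norm(w, axis=1, keepdims=True)   # random pure-state Bloch vectors
min_ent = min(out_entropy(lams, v) for v in w)  # bounded below by h(mu)
holevo = np.log(2) - h(mu)              # C_Hol = ln 2 - h(mu)
```

The sampled minimum output entropy never drops below $h(\mu)$, and it is attained on the eigendirection of $\mu$.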
\medskip
In Section \ref{sect:summary} we stated several corollaries of
Conjecture \ref{conj:minent}. Here we will show how these
corollaries follow from
the assumption that minimal entropy is additive for unital maps.
\medskip
So suppose that Conjecture \ref{conj:minent} holds for unital maps, that is
suppose that the minimum value of $S(\Phi \otimes \Phi({\rho}_{12}))$ over
states
${\rho}_{12}$ on ${\bf C}^{2 \times 2} \otimes {\bf C}^{2 \times 2}$ is $2 h(\mu)$. Then
there are four product states, namely ${\rho}_i = {\textstyle \frac{1}{2}} [I \pm { {\bf w}}_{\mu}
{\mathbf \cdot \sigma}]
\otimes {\textstyle \frac{1}{2}} [I \pm { {\bf w}}_{\mu} {\mathbf \cdot \sigma}]$, such that $S(\Phi \otimes
\Phi({\rho}_{i}))$ achieves this minimum value for each $i$. If we take
${\pi}_i = 1/4$ for each $i$, then $\rho = 1/4 I \otimes I$ and
$S(\Phi \otimes \Phi(\rho)) = \ln 4$ achieves its maximum possible value.
Hence with these choices, we can separately maximize each term on the right
side of (\ref{Holevo}) and therefore maximize the Holevo capacity. Therefore,
for unital maps the equality $C_{\rm Hol}(\Phi \otimes \Phi) =
2 C_{\rm Hol}(\Phi)$ is implied by the minimal entropy conjecture.
This demonstrates Corollary 2. Furthermore,
the two minimal entropy states ${\textstyle \frac{1}{2}} [I \pm { {\bf w}}_{\mu} {\mathbf \cdot \sigma}]$ are
orthogonal. Hence both $C_{\rm Hol}(\Phi)$ and
$C_{\rm Hol}(\Phi \otimes \Phi)$ are achieved with orthogonal states,
and this establishes Corollary 3. Also, a simple calculation shows that
the expression inside the $\sup$ in the definition of
$C_{{\rm Shan}}(\Phi)$ in (\ref{Shannon}) equals
$C_{\rm Hol}(\Phi)$ when we choose the input states to be
${\rho}_{i} = {\textstyle \frac{1}{2}} [I \pm { {\bf w}}_{\mu} {\mathbf \cdot \sigma}]$ with ${\pi}_{i}={\textstyle \frac{1}{2}}$, and
the POVM to be $E_{i} = {\textstyle \frac{1}{2}} [I \pm { {\bf w}}_{\mu} {\mathbf \cdot \sigma}]$. This shows the
first statement of Corollary 4, and the second statement follows
immediately. Then Corollary 5 is a direct consequence.
\medskip
The essential observation in this argument is that we can find a
partition of unity in terms of a set of
{\it orthogonal} input states which are mapped into a level set of
minimal entropy. For such inputs,
uniform averaging yields the maximally mixed state, whose output
under a unital map is again maximally mixed and has maximal entropy. Hence
both terms in the Holevo capacity are simultaneously maximised.
On ${\bf C}^2$ orthogonal inputs have the form
${\textstyle \frac{1}{2}} [I \pm { {\bf w}} {\mathbf \cdot \sigma}]$, and the corresponding output states
have the same entropy if and only if $\Phi$ is unital.
In that case, the products of these states form a set of
orthogonal inputs on ${\bf C}^4$ which map onto a level set of
entropy under the product map $\Phi \otimes \Phi$. If the
minimal entropy is additive, one such set of product states
will be mapped onto a set of minimal entropy.
For {\it non-unital} maps on ${\bf C}^2$, and more general
(non-product) maps on ${\bf C}^3$ or ${\bf C}^4$, this
need not hold. (Fuchs and Shor \cite{Shor} have found an explicit
example of a map on ${\bf C}^3$ which does not have such a set
of orthogonal inputs.) Hence the
above argument is quite special and does not provide a direct
link between the additivity of minimal entropy and additivity
of the Holevo capacity.
\section{Upper Bound on Norm} \label{sect:norm.bnd}
For any linear map $\Omega$ define
\begin{eqnarray} \label{eq:phi.norm}
M_{\Omega} \equiv \sup_{\rho \in {\rm DenMat}} \|\Omega(\rho)\|
= \sup_{Q > 0} \frac{\|\Omega(Q)\|}{\hbox{Tr} Q}
\end{eqnarray}
so that for any $\varrho > 0$, $\|\Omega(\varrho)\| \leq
M_{\Omega} \hbox{Tr} \varrho$. Since the matrix norm $\| ~ \cdot ~\| $
used in (\ref{eq:phi.norm}) is convex,
it suffices to consider the supremum over pure states or,
equivalently, one-dimensional projections $\rho = |\psi \ket \bra \psi|$.
Then $M_{\Omega}$ can be rewritten using the representation
(\ref{eq:kraus})
\begin{eqnarray}
M_{\Omega} = \sup_{\psi, \chi} \sum_k
\left| \langle \chi , A_k \psi \rangle \right|^2
\end{eqnarray}
where the supremum is taken over those vectors
satisfying $|\psi| = |\chi| = 1$.
We restrict attention now to unital maps.
As discussed in Section \ref{sect:notation}, wlog we assume
that $\Phi$ is diagonal of the form
$\Phi[{\lambda}_1, {\lambda}_2, {\lambda}_3]$.
Then it follows from the discussion in section \ref{sect:notation}
that
\begin{eqnarray}
M_{\Phi} = {\textstyle \frac{1}{2}} \big(1 + {\max}_k|{\lambda}_k|\big) .
\end{eqnarray}
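This formula can be spot-checked by sampling pure input states (our sketch, using the Bloch-vector representation of the diagonal map; the eigenvalues are arbitrary examples):

```python
import numpy as np

def out_norm(lams, w):
    # rho = (I + w.sigma)/2 maps to (I + (diag(lams) w).sigma)/2, whose
    # operator norm is (1 + |diag(lams) w|)/2.
    r = np.linalg.norm(np.asarray(lams) * np.asarray(w))
    return 0.5 * (1 + r)

lams = (0.3, 0.5, 0.7)
M = 0.5 * (1 + max(abs(l) for l in lams))   # claimed value of M_Phi

rng = np.random.default_rng(1)
w = rng.normal(size=(2000, 3))
w /= np.linalg.norm(w, axis=1, keepdims=True)
sup_sampled = max(out_norm(lams, v) for v in w)
```

The supremum is attained on the eigendirection of the largest $|\lambda_k|$, here $(0,0,1)$.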
In this section
we will show that for unital maps on
${\bf C}^{2 \times 2}$, the norm $M_{\Phi}$ is multiplicative, i.e.,
$M_{\Phi \otimes \Omega} = M_{\Phi} M_{\Omega}.$ In fact, we will
show a slightly stronger result.
\begin{thm} \label{thm:normbnd}
Let $\Omega$ be any 2-positive map on
${\bf C}^{n \times n}$ and let
$\Phi$ be a unital stochastic map on ${\bf C}^{2 \times 2}$.
Then $M_{\Phi \otimes \Omega} = M_{\Phi} M_{\Omega}$.
\end{thm}
Notice that Theorem \ref{thm:normbnd} implies that
$||(\Phi \otimes \Omega)({\rho})||$ is maximised on product
states of the form $\rho = {\rho}_{1} \otimes {\rho}_2$,
where $||\Phi({\rho}_1)|| = M_{\Phi}$ and $||\Omega({\rho}_2)|| = M_{\Omega}$.
\medskip
Our proof will need the following
well-known result. (See, e.g., \cite{HJ1}.)
\begin{lemma} \label{lemma:poscond}
Let $S = \pmatrix{A & B \cr B^{\dagger} & C}$ be a matrix in block form
with $A, C > 0$.
Then $S$ is (strictly) positive definite (i.e. $S > 0$) if and only if
$ A > B C^{-1} B^{\dagger}$ if and only if $C > B^{\dagger} A^{-1} B$.
\end{lemma}
As immediate corollaries we find
\begin{cor}
Let $S = \pmatrix{A & B \cr B^{\dagger} & C}$ be a matrix in block form
with $A, C$ positive semi-definite. Then $S$ is positive semi-definite
if and only if for all $u > 0$ one of the following
two equivalent conditions holds
\begin{eqnarray*}
A + uI & > & B(C +uI)^{-1} B^{\dagger} \\
C + uI & > & B^{\dagger}(A +uI)^{-1} B .
\end{eqnarray*}
\end{cor}
\begin{cor} \label{cor:schwarz}
If $S = \pmatrix{A & B \cr B^{\dagger} & C} \geq 0$, then
\begin{eqnarray} \label{eq:block.ineq}
\| B \|^2 = \|B B^{\dagger}\| \leq \|A\| \, \|C\|.
\end{eqnarray}
\end{cor}
To prove this note that
\begin{eqnarray*}
\langle v, BB^{\dagger} v \rangle & \leq &
\| C + u I \| \, \langle v, B(C+uI)^{-1}B^{\dagger} v \rangle \\
& < & \| C + u I \| \, \langle v, (A+uI) v \rangle
~ \leq ~ \| C + u I \| \, \| A + u I \| \, \|v\|^2.
\end{eqnarray*}
Choosing $v$ an eigenvector of $BB^{\dagger}$ and letting $u \rightarrow 0$
proves (\ref{eq:block.ineq}).
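Corollary \ref{cor:schwarz} is easy to test on random positive semi-definite matrices (an illustrative sketch we add; \texttt{op} denotes the operator norm):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
X = rng.normal(size=(2 * n, 2 * n)) + 1j * rng.normal(size=(2 * n, 2 * n))
S = X @ X.conj().T                      # random positive semi-definite matrix
A, B, C = S[:n, :n], S[:n, n:], S[n:, n:]

op = lambda M: np.linalg.norm(M, 2)     # operator (largest singular value) norm
lhs, rhs = op(B) ** 2, op(A) * op(C)    # the inequality ||B||^2 <= ||A|| ||C||
```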
\bigskip
Returning to the proof of Theorem \ref{thm:normbnd},
let $\rho = \pmatrix{\rho_1 & \gamma \cr \gamma^{\dagger} & \rho_2}$
be a density matrix on ${\bf C}^2 \otimes {\bf C}^n$ written in
block form with
$\rho_1, \rho_2, \gamma$ each $n \times n$ matrices.
First observe that
the two-positivity of $\Omega$ implies that
\begin{eqnarray}
\Big( I \otimes \Omega \Big) (\rho) =
\pmatrix{\Omega(\rho_1) & \Omega(\gamma) \cr
\Omega(\gamma)^{\dagger} & \Omega(\rho_2)} \geq 0
\end{eqnarray}
Hence, it follows from (\ref{eq:block.ineq}) that
\begin{eqnarray} \label{ineq:bnd1}
\| \Omega(\gamma) \|^2 = \| \Omega(\gamma) \Omega(\gamma)^{\dagger} \|
& \leq & \| \Omega(\rho_1) \|\, \| \Omega(\rho_2) \| \nonumber \\
& \leq & \hbox{Tr} \rho_1 \, \hbox{Tr} \rho_2 \, M_\Omega^2
\end{eqnarray}
Now use the form (\ref{Phi}) and the linearity
of $\Omega$ to write
\begin{eqnarray}
\lefteqn{ \Big( \Phi \otimes \Omega \Big) (\rho) =
\pmatrix{ P & L \cr L^{\dagger} & Q }} \\
& = & \nonumber {\textstyle \frac{1}{2}}
\pmatrix{\Omega[\rho_1 + \rho_2 + \lambda_3(\rho_1 -\rho_2) ] &
(\lambda_1 + \lambda_2) \Omega(\gamma) +
(\lambda_1 - \lambda_2) \Omega(\gamma)^{\dagger}
\cr (\lambda_1 + \lambda_2) \Omega(\gamma)^{\dagger} +
(\lambda_1 - \lambda_2) \Omega(\gamma) &
\Omega[\rho_1 + \rho_2 - \lambda_3(\rho_1 -\rho_2) ]}
\end{eqnarray}
Note that the complete positivity of $\Phi$ implies that
$\rho_1 + \rho_2 + \lambda_3(\rho_1 -\rho_2) > 0$ . Thus
if $x = \hbox{Tr} \rho_1$
\begin{eqnarray}
\| P \| & \leq & \nonumber
M_{\Omega}\, {\textstyle \frac{1}{2}} \hbox{Tr}[ \rho_1 + \rho_2 + \lambda_3(\rho_1 -\rho_2)] \\
& = &M_{\Omega} \Big[ {\textstyle \frac{1}{2}} + \lambda_3 (x - {\textstyle \frac{1}{2}}) \Big], ~~~
\hbox{and} \label{eq:Pnorm}
\\ \label{eq:Qnorm}
\| Q \| & \leq & M_{\Omega} \Big[ {\textstyle \frac{1}{2}} - \lambda_3 (x - {\textstyle \frac{1}{2}}) \Big].
\end{eqnarray}
Now we can assume wlog that $\lambda_3 = \max_k |\lambda_k|$ so
that $M_{\Phi} = {\textstyle \frac{1}{2}} (1+ \lambda_3)$.
Then to prove Theorem \ref{thm:normbnd}, it suffices to show that
\begin{eqnarray} \label{eq:zcond}
z > {\textstyle \frac{1}{2}} (1+ \lambda_3) M_{\Omega} \Rightarrow
zI - \Big( \Phi \otimes \Omega \Big) (\rho) > 0.
\end{eqnarray}
Note that
\begin{eqnarray} \label{ineq: Lnorm}
\| L L^{\dagger} \| \leq (z - \|P\|)(z - \|Q\|)
\end{eqnarray}
and the general property $P \leq \|P \|$ imply
\begin{eqnarray*}
L (zI - P)^{-1} L^{\dagger} & \leq & L (z - \|P\|)^{-1} L^{\dagger}
\leq \| L L^{\dagger} \| (z - \|P\|)^{-1} \\
& \leq & (z - \|Q\|)\, I ~ \leq ~ zI - Q.
\end{eqnarray*}
Therefore, by Lemma \ref{lemma:poscond}, to verify (\ref{eq:zcond}),
it suffices to show (\ref{ineq: Lnorm}). But it follows from
(\ref{ineq:bnd1}) that
\begin{eqnarray*}
\| L L^{\dagger} \| & = & {\textstyle \frac{1}{4}} \Big\| \Big[
(\lambda_1+\lambda_2) \Omega(\gamma) + (\lambda_1-\lambda_2)
\Omega(\gamma)^{\dagger} \Big] \Big[
(\lambda_1+\lambda_2) \Omega(\gamma)^{\dagger} + (\lambda_1-\lambda_2)
\Omega(\gamma) \Big] \Big\| \\
& \leq & {\textstyle \frac{1}{4}} \left[(\lambda_1+\lambda_2)^2 + 2
|\lambda_1+\lambda_2| \, |\lambda_1-\lambda_2| + (\lambda_1-\lambda_2)^2
\right] \| \Omega(\gamma) \|^2 \\
& \leq & \lambda_1^2 x(1-x) M_{\Omega}^2.
\end{eqnarray*}
where we have used $\| \Omega(\gamma) \|^2 = \| \Omega(\gamma)^{\dagger} \|^2
= \| \Omega(\gamma) \Omega(\gamma)^{\dagger} \| $, and assumed wlog that
$|\lambda_1| \geq |\lambda_2|$.
However, (\ref{eq:Pnorm}) and (\ref{eq:Qnorm}) also imply
\begin{eqnarray*}
(z - \|P\|) (z - \|Q\|) \geq \Big( \lambda_3(1-x) M_{\Omega} \Big)
\Big( x \lambda_3 M_{\Omega} \Big) =
x(1-x) \lambda_3^2 M_{\Omega}^2.
\end{eqnarray*}
Since we have assumed $ \lambda_3^2 \geq \lambda_1^2$, these inequalities
imply (\ref{ineq: Lnorm}).
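Theorem \ref{thm:normbnd} can also be probed numerically when $\Omega$ is itself a qubit channel. The sketch below (our construction, not part of the proof; the product of diagonal maps is applied by rescaling Pauli coefficients) samples entangled pure states on ${\bf C}^2 \otimes {\bf C}^2$ and checks that none exceeds the product bound, which is attained on a product state:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]

def product_map(rho, lam, omg):
    # (Phi (x) Omega)(rho) for diagonal qubit maps Phi[lam], Omega[omg]:
    # scale the Pauli coefficients Tr[(s_j (x) s_k) rho]/4 by lam_j * omg_k.
    l, w = (1,) + tuple(lam), (1,) + tuple(omg)
    out = np.zeros((4, 4), dtype=complex)
    for j in range(4):
        for k in range(4):
            P = np.kron(paulis[j], paulis[k])
            out += l[j] * w[k] * (np.trace(P @ rho) / 4) * P
    return out

lam, omg = (0.3, 0.5, 0.7), (0.2, 0.4, 0.6)
M_product = 0.25 * (1 + 0.7) * (1 + 0.6)        # M_Phi * M_Omega

rng = np.random.default_rng(3)
worst = 0.0
for _ in range(300):
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)
    worst = max(worst, np.linalg.norm(product_map(np.outer(psi, psi.conj()),
                                                  lam, omg), 2))
rho_prod = np.kron((I2 + sz) / 2, (I2 + sz) / 2)  # product of norm-attaining states
```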
\bigskip
\noindent{\bf Remark:} At this point the {\em only} use we made of the
unital character of $\Phi$ was
\newline (a) to give a specific formula for $M_\Phi$, and
\newline (b) to use the special form (\ref{Phi}) of representing $\Phi$.
It is possible to generalize these formulas to some non-unital
stochastic maps. Any stochastic map $\Phi$ can be written in the form
(\ref{eq:T.nonunit}), where ${\Bbb T}$ is the $4 \times 4$ matrix which
represents its action on $(I, {\sigma}_{1}, {\sigma}_{2}, {\sigma}_{3})$.
Suppose $|\lambda_1|$ is the smallest of the diagonal entries $|\lambda_k|$ of ${\Bbb T}$.
If $t_1=0$, then the above method can be extended
in a straightforward way to deduce that
$\Phi$ also satisfies the conclusion of Theorem \ref{thm:normbnd},
for any values of $t_2$, $t_3$ allowed by complete positivity.
That is, as long as we do not translate the ellipsoid in the
direction of its shortest major axis, the conclusion
still holds. This is consistent with the conclusions of
Section \ref{sect:non.unit}, where we argue
that this is the hardest case to analyse. The difficulty
occurs when the two other major axes have equal lengths,
so that the ellipsoid is a `flying saucer'. This produces
a circle of states of maximal norm and minimal entropy in the ellipsoid.
It is necessary to show that no entanglement of these minimal
entropy states can increase the norm above the product bound,
or can lower the entropy below the product sum. We discuss this
situation further in Section \ref{subsect:nonu.min.ent}.
\section{Minimal Entropy Analysis} \label{sect:entanal}
\subsection{Reduction via Convexity} \label{sect:convex}
\begin{lemma}
Let $\Phi, \Omega$ be unital stochastic maps with
$M_{\Phi}$ and $M_{\Omega}$ equal to $\mu$
and $\nu$, respectively. Then
\begin{eqnarray}
\inf_{\rho~ {\rm pure}} S(\Phi \otimes \Omega)(\rho) \geq
\inf_{\rho ~{\rm pure}} S\Big(\Phi[\mu,u,\mu] \otimes
\Omega[\nu,v,\nu]\Big)(\rho)
\end{eqnarray}
where $|u| \leq \mu$ and $|v| \leq \nu.$
\end{lemma}
\bigskip
\noindent{\bf Proof of Lemma:} By the results of sections
\ref{sect:notation} and \ref{sect:singv.cond}, we can assume
wlog that $\Phi$ and
$\Omega$ are self-adjoint and diagonal, with eigenvalues
$(\lambda_1, \lambda_2, \lambda_3)$ and $(\omega_1, \omega_2, \omega_3)$
respectively, where $\lambda_3 = \mu > 0$ and
$\omega_3 = \nu > 0.$
We first consider the case $\mu, \nu > 1/3$.
It follows
from (\ref{eq:fuji.cond}) that the eigenvalues
$\lambda_1 , \lambda_2$ lie in a convex set with extreme points
\begin{eqnarray*}
(\mu,\mu),~ (\mu,2\mu-1),~ (2\mu-1, \mu),~ (-\mu,-\mu),~ (-\mu,1-2\mu),
~ (1-2\mu, -\mu)
\end{eqnarray*}
If we let
$\Phi_1 \equiv \Phi[\mu,\mu,\mu],~ \Phi_2 \equiv \Phi[\mu,2\mu-1,\mu]$
etc. so that $\Phi_j ~~ (j=1 \ldots 6)$ denote the stochastic maps
corresponding to these six points, we can write
$\Phi = \sum_{j=1}^6 a_j \Phi_j$ as a convex combination of these six maps
and similarly for $\Omega = \sum_{k=1}^6 b_k \Omega_k$. Then, since the
entropy is concave we find
\begin{eqnarray} \label{eq:conv.prearg}
S(\Phi \otimes \Omega)(\rho) & = &
S\left( \Big[\sum_{j=1}^6 a_j \Phi_j \Big] \otimes
\Big[\sum_{k=1}^6 b_k \Omega_k \Big]\right)(\rho) \nonumber \\
& = & S \left( \sum_j \sum_k a_j b_k \Phi_j \otimes \Omega_k \right)
(\rho) \nonumber \\
& \geq & \sum_j \sum_k a_j b_k S \Big( \Phi_j \otimes \Omega_k
\Big) (\rho) \\ \nonumber
& \geq & \min \{ S \Big( \Phi_j \otimes \Omega_k \Big) (\rho) :
j = 1 \ldots 6,~ k=1\ldots 6 \}
\end{eqnarray}
But now we note that
$\Phi_4 = {\Upsilon}_{3} \circ \Phi_1$ and $\Phi_5 = {\Upsilon}_{3} \circ
\Phi_2$, where ${\Upsilon}_{3} (\rho) = \sigma_z \rho \sigma_z$.
Hence, e.g.,
\begin{eqnarray*}
(\Phi_5 \otimes \Omega_4 )(\rho) =
(\sigma_z \otimes \sigma_z) \Big[ (\Phi_2 \otimes \Omega_1 )(\rho)
\Big] (\sigma_z \otimes \sigma_z)
\end{eqnarray*}
so that
\begin{eqnarray*}
S \Big( \Phi_5 \otimes \Omega_4 \Big )(\rho) =
S \Big( \Phi_2 \otimes \Omega_1 \Big )(\rho),
\end{eqnarray*}
and similarly $S \Big( \Phi_1 \otimes \Omega_4 \Big )(\rho)
= S \Big( \Phi_1 \otimes \Omega_1 \Big )(\rho)$, etc.
Hence we can replace (\ref{eq:conv.prearg}) by
\begin{eqnarray} \label{eq:conv.arg}
S(\Phi \otimes \Omega)(\rho) \geq
\min \{ S \Big( \Phi_j \otimes \Omega_k \Big) (\rho) :
j,k=1,2,3 \}.
\end{eqnarray}
Since we also have
$ \inf_{\rho ~{\rm pure}} S\Big((\Phi[\mu,u,\mu] \otimes \Omega) (\rho)\Big) =
\inf_{\rho ~{\rm pure}} S\Big((\Phi[u, \mu,\mu] \otimes \Omega) (\rho)\Big)$
for any $\Omega$, and
since $\Phi_j,~ j=1,2,3$ and $\Omega_k,~ k=1,2,3$ have the form
given in the lemma, the result follows.
For $\mu < 1/3$ we proceed similarly, but with the convex set
given by the rectangle with corners $(\pm \mu, \pm \mu)$.
\subsection{Special Form of Pure State} \label{sect:entropy}
As shown in Section \ref{sect:convex}, to show additivity of
minimal entropy for unital stochastic maps it is sufficient to
consider self-adjoint maps of the special form
$\Phi[\mu, u, \mu]$ and $\Omega[\nu, v,\nu]$. In this section
we prove additivity for these maps over a special class of entangled
states.
\begin{thm} \label{thm:main}
Let $\Phi[\mu, u, \mu]$ and $\Omega[\nu, v,\nu]$ be diagonal
stochastic maps, satisfying
$\mu \geq |u|$ and $\nu \geq |v|$, so that
$\mu$ and $\nu$ are the largest eigenvalues of $\Phi$ and $\Omega$
respectively. Let $| \psi \rangle $ be a pure state
of the form
$| \psi \rangle = a |00 \rangle +
e^{i \theta} d |11 \rangle$. Then
\begin{eqnarray}
S\big(\Phi \otimes \Omega\big) \big( | \psi \ket \bra \psi | \big) \geq
h(\mu) \,+\, h(\nu)
= \inf_{\rho} S[\Phi(\rho)] \,+\, \inf_{\gamma} S[\Omega(\gamma)].
\end{eqnarray}
\end{thm}
We prove Theorem \ref{thm:main} in the next section. Here we derive
some intermediate results which will be used in the proof.
For generality we consider diagonal maps
$\Phi[{\lambda}_1, {\lambda}_2, {\lambda}_3]$ and
$\Omega[\omega_1, \omega_2,\omega_3]$, and for definiteness we
also assume that
$|\lambda_3|$ and $|\omega_3|$ are their largest singular
values.
\bigskip
We find the density
matrix for a pure state of the form
$ | \psi \rangle = a |00 \rangle + e^{i \theta} d | 11 \rangle $
with $a, d$ real and $a^2 + d^2 = 1$.
Then if $\rho = | \psi \ket \bra \psi | $
\begin{eqnarray} \label{eq:rho.diag}
\rho & = & \left( \begin{array}{cccc}
a^2 & 0 & 0 & ad e^{i \theta} \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
ad e^{-i \theta} & 0 & 0 & d^2
\end{array} \right) =
\left( \begin{array}{cccc}
\alpha & 0 & 0 & e^{i \theta} \sqrt{t}/2 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
e^{-i \theta} \sqrt{t}/2 & 0 & 0 & 1 - \alpha
\end{array} \right) \\
& = & {\textstyle \frac{1}{4}} \alpha ~ \nonumber
[ I \otimes I + \sigma_z \otimes \sigma_z + I \otimes \sigma_z + \sigma_z \otimes I ] \\
& ~ & + {\textstyle \frac{1}{4}} (1 - \alpha )~
[ I \otimes I + \sigma_z \otimes \sigma_z - I \otimes \sigma_z - \sigma_z \otimes I ] \nonumber \\
& ~ & + {\textstyle \frac{1}{4}} \sqrt{t} ~ \big[ \cos \theta \,
(\sigma_x \otimes \sigma_x - \sigma_y \otimes \sigma_y )
- \sin\theta \, (\sigma_x \otimes \sigma_y + \sigma_y \otimes \sigma_x )
\big] \nonumber
\end{eqnarray}
where $\alpha = a^2$ and $t = 4 \alpha (1 - \alpha)$, so that
$t \in [0,1]$.
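The Pauli expansion (\ref{eq:rho.diag}) can be verified directly (an illustrative sketch we add; we take the phase convention $|\psi\rangle = a|00\rangle + e^{-i\theta} d |11\rangle$, for which $\rho = |\psi\ket\bra\psi|$ has top-right entry $ad\, e^{i\theta}$ as displayed; since only $\cos 2\theta$ enters the entropy, the sign of $\theta$ is immaterial):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

alpha, theta = 0.7, 0.3
a, d = np.sqrt(alpha), np.sqrt(1 - alpha)
t = 4 * alpha * (1 - alpha)

psi = np.array([a, 0, 0, np.exp(-1j * theta) * d])  # so that rho[0,3] = a d e^{i theta}
rho = np.outer(psi, psi.conj())

rho_pauli = (
    (alpha / 4) * (kron(I2, I2) + kron(sz, sz) + kron(I2, sz) + kron(sz, I2))
    + ((1 - alpha) / 4) * (kron(I2, I2) + kron(sz, sz) - kron(I2, sz) - kron(sz, I2))
    + (np.sqrt(t) / 4) * (np.cos(theta) * (kron(sx, sx) - kron(sy, sy))
                          - np.sin(theta) * (kron(sx, sy) + kron(sy, sx)))
)
```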
Applying the stochastic maps gives
\begin{eqnarray}
[\Phi \otimes \Omega] (\rho)
& = & {\textstyle \frac{1}{4}} \alpha ~ [ I \otimes I + \lambda_3 \omega_3 \sigma_z \otimes
\sigma_z + \omega_3 I \otimes \sigma_z + \lambda_3 \sigma_z \otimes I ] \\
& ~ & + {\textstyle \frac{1}{4}} (1 - \alpha )~ [ I \otimes I + \lambda_3 \omega_3 \sigma_z \otimes
\sigma_z - \omega_3 I \otimes \sigma_z - \lambda_3 \sigma_z \otimes I ]
\nonumber \\
& ~ & + {\textstyle \frac{1}{4}} \sqrt{t} ~ \big[ \cos \theta \,
~(\lambda_1 \omega_1 \sigma_x \otimes \sigma_x - \lambda_2 \omega_2 \sigma_y
\otimes \sigma_y) \nonumber \\
& ~ & \quad\quad\quad\quad - \sin\theta \,
(\lambda_1
\omega_2 \sigma_x \otimes \sigma_y +
\lambda_2 \omega_1 \sigma_y \otimes \sigma_x ) \big]. \nonumber
\end{eqnarray}
Notice that because $\Phi$ and $\Omega$ are diagonal, the result remains a
linear combination of terms of the form
$\sigma_k \otimes \sigma_k ~~ (k = 0 \ldots 3)$ and
$I \otimes \sigma_z$ and $\sigma_z \otimes I$; no cross terms of the form
$\sigma_x \otimes \sigma_z$ etc arise. Therefore, the only non-zero
terms in $[\Phi \otimes \Omega] (\rho)$ lie along the diagonal or
skew diagonal. Thus $ [\Phi \otimes \Omega](\rho)$ can be written
in the form
\begin{eqnarray*}
\pmatrix{ X & 0 & 0 & X \cr 0 & X & X & 0 \cr
0 & X & X & 0 \cr X & 0 & 0 & X }
\end{eqnarray*}
where $X$ denotes a possibly non-zero matrix element. Thus,
$[\Phi \otimes \Omega](\rho)$ is equivalent, after a permutation of the
basis, to a block diagonal matrix.
A straightforward computation shows that these blocks, which
we refer to as ``outer'' and ``inner'' can be written respectively as
\begin{eqnarray}\label{block:out}
{\textstyle \frac{1}{4}} \left( \begin{array}{cc}
~ 1 + \lambda_3 \omega_3 + \sqrt{1-t}(\lambda_3 + \omega_3) ~ &
~ {\textstyle \frac{1}{2}} \sqrt{t}(e^{i \theta} \lambda_+ \omega_+ + e^{- i \theta}
\lambda_{-} \omega_{-}) ~ \\
{\textstyle \frac{1}{2}} \sqrt{t}(e^{i \theta} \lambda_{-} \omega_{-} + e^{- i \theta}
\lambda_{+} \omega_{+}) &
1 + \lambda_3 \omega_3 - \sqrt{1-t}(\lambda_3 + \omega_3)
\end{array} \right)
\end{eqnarray}
and
\begin{eqnarray}\label{block:in}
{\textstyle \frac{1}{4}} \left( \begin{array}{cc}
1 - \lambda_3 \omega_3 + \sqrt{1-t}(\lambda_3 - \omega_3) &
{\textstyle \frac{1}{2}} \sqrt{t}(e^{i \theta} \lambda_+ \omega_{-} + e^{- i \theta}
\lambda_{-} \omega_{+}) \\
{\textstyle \frac{1}{2}} \sqrt{t}(e^{i \theta} \lambda_{-} \omega_+ + e^{- i \theta}
\lambda_{+} \omega_{-}) &
1 - \lambda_3 \omega_3 - \sqrt{1-t}(\lambda_3 - \omega_3)
\end{array} \right) .
\end{eqnarray}
where $\lambda_{\pm} = \lambda_1 \pm \lambda_2$ and similarly for
$\omega_{\pm}$.
The first has eigenvalues
\begin{eqnarray}\label{eq:eval.out}
{\textstyle \frac{1}{4}}
\Big[ 1 + \lambda_3 \omega_3\Big]
\pm {\textstyle \frac{1}{4}}\Big[ { (1-t)(\lambda_3 + \omega_3)^2
+ {\textstyle \frac{1}{4}} t(\lambda_{+}^2 \omega_{+}^2 + \lambda_{-}^2 \omega_{-}^2
+ 2 \cos (2 \theta) \gamma) } ~\Big]^{1/2}
\end{eqnarray}
while the second has eigenvalues
\begin{eqnarray}\label{eq:eval.in}
{\textstyle \frac{1}{4}} \Big[ 1 - \lambda_3 \omega_3\Big]
\pm {\textstyle \frac{1}{4}}\Big[ { (1-t)(\lambda_3 - \omega_3)^2
+ {\textstyle \frac{1}{4}} t(\lambda_{+}^2 \omega_{-}^2 + \lambda_{-}^2 \omega_{+}^2
+ 2 \cos (2 \theta) \gamma) } ~\Big]^{1/2}
\end{eqnarray}
where $\gamma = \lambda_{+} \lambda_{-} \omega_{+} \omega_{-}$.
Minimum entropy occurs when both pairs of eigenvalues are spread out as far as
possible. This happens either at $\theta=0$ (if $\gamma \geq 0$)
or at $\theta=\pi/2$ (if $\gamma \leq 0$).
We will be interested in the case $\lambda_1 \geq |\lambda_2|$ and
$\omega_1 \geq |\omega_2|$, which means that $\gamma \geq 0$,
so we assume that $\theta=0$ henceforth.
We need more compact notation for the eigenvalues. Define
\begin{eqnarray}\label{def:f(t)}
f(t) = \Big((\lambda_3 + \omega_3)^2 - t [(\lambda_3 + \omega_3)^2
-(\lambda_1 \omega_1 + \lambda_2 \omega_2)^2] \Big)^{1/2}
\end{eqnarray}
and
\begin{eqnarray}\label{def:g(t)}
g(t) = \Big((\lambda_3 - \omega_3)^2 - t [(\lambda_3 - \omega_3)^2
-(\lambda_1 \omega_1 - \lambda_2 \omega_2)^2] \Big)^{1/2}
\end{eqnarray}
Also define
\begin{eqnarray}
A = 1 + \lambda_3 \omega_3, \quad\quad
B = 1 - \lambda_3 \omega_3
\end{eqnarray}
Then the four eigenvalues of $[\Phi \otimes \Omega](\rho)$ at $\theta=0$ are simply
\begin{eqnarray}
{\textstyle \frac{1}{4}} [A \pm f(t)], \quad \quad {\textstyle \frac{1}{4}} [B \pm g(t)]
\end{eqnarray}
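These closed-form eigenvalues can be checked against a direct numerical diagonalization (our sketch; the product channel is applied by rescaling Pauli coefficients, and the parameter values are arbitrary admissible examples with $\lambda_3, \omega_3$ maximal):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, sx, sy, sz]

def product_map(rho, lam, omg):
    # (Phi[lam] (x) Omega[omg])(rho) via the Pauli basis of C^{4x4}
    l, w = (1,) + tuple(lam), (1,) + tuple(omg)
    out = np.zeros((4, 4), dtype=complex)
    for j in range(4):
        for k in range(4):
            P = np.kron(paulis[j], paulis[k])
            out += l[j] * w[k] * (np.trace(P @ rho) / 4) * P
    return out

lam, omg = (0.5, 0.3, 0.7), (0.6, 0.2, 0.8)
alpha = 0.7
t = 4 * alpha * (1 - alpha)
psi = np.array([np.sqrt(alpha), 0, 0, np.sqrt(1 - alpha)])  # theta = 0
rho = np.outer(psi, psi)

A, B = 1 + lam[2] * omg[2], 1 - lam[2] * omg[2]
f = np.sqrt((lam[2] + omg[2]) ** 2
            - t * ((lam[2] + omg[2]) ** 2 - (lam[0]*omg[0] + lam[1]*omg[1]) ** 2))
g = np.sqrt((lam[2] - omg[2]) ** 2
            - t * ((lam[2] - omg[2]) ** 2 - (lam[0]*omg[0] - lam[1]*omg[1]) ** 2))
predicted = np.sort([(A + f) / 4, (A - f) / 4, (B + g) / 4, (B - g) / 4])
computed = np.sort(np.linalg.eigvalsh(product_map(rho, lam, omg)))
```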
The state $| \psi \rangle $ is unentangled when $t=0$ and maximally
entangled for $t=1$. We want
to show that the entropy is minimized at $t=0$.
We will let $S(t)$ denote the entropy of $[\Phi \otimes \Omega](\rho)$
considered as a function of $t.$
To analyze its behavior,
it is convenient to use the function
\begin{eqnarray}
{\eta}(\alpha,x) = - (\alpha+x) \log(\alpha+x) - (\alpha-x)
\log(\alpha-x).
\end{eqnarray}
It follows that
\begin{eqnarray}
S(t) = {\textstyle \frac{1}{4}} {\eta}[A, f(t)] + {\textstyle \frac{1}{4}} {\eta}[B, g(t)] + \log 4,
\end{eqnarray}
where we have used the fact that ${\textstyle \frac{1}{2}}(A+B) = 1$. To find the minimum
of $S(t)$ it suffices to analyze the behavior of $ {\eta}[A, f(t)]$ and
$ {\eta}[B, g(t)].$ First observe that
\begin{eqnarray} \label{eq:1st.deriv}
\frac{d~}{dt} {\eta}[A, f(t)] = f^{\prime}(t)
\log \frac{ A- f(t)}{A+f(t)}
\end{eqnarray}
and
\begin{eqnarray} \label{eq:2nd.deriv}
\frac{d^2~}{dt^2} {\eta}[A, f(t)] & = & f^{\prime\prime}(t)
\log \frac{ A- f(t)}{A+f(t)} - |f^{\prime}(t)|^2
\frac{2A}{A^2 - [f(t)]^2} \nonumber \\
& \leq & \frac{2A}{A^2 - [f(t)]^2}
\big[ |f^{\prime\prime}(t)| f(t) - |f^{\prime}(t)|^2 \big]
\end{eqnarray}
if $f^{\prime\prime}(t) \leq 0$ and $0 \leq f(t) < A$.
This follows from the elementary inequality
$ \log \left( \frac{1+x}{1-x} \right) \leq \frac{2x}{1-x^2}$
(which holds for $ x \in [0,1)$) applied to $x = f(t)/A$.
Now $f(t)$ is a function of the form $\sqrt{a - bt}$ for which one
easily checks that $f^{\prime\prime}(t) < 0$ and
$|f^{\prime\prime}(t)| f(t) - |f^{\prime}(t)|^2 = 0$.
Therefore, it follows immediately from (\ref{eq:2nd.deriv}) that
$\frac{d^2~}{dt^2} {\eta}[A, f(t)] \leq 0 $. Since $g(t)$ also
has the form $\sqrt{a - bt}$, a similar argument
holds for $ {\eta}[B, g(t)].$
Hence $S^{\prime\prime}(t) \leq 0$, from which we
conclude that $S(t)$ is a concave function on $[0,1]$, and therefore
attains its minimum at either $t=0$ or $t=1$.
Hence to prove that $S$ attains its minimal value at $t=0$,
it is necessary and sufficient to show that $S(1) \geq S(0)$.
In essence, we have shown that for a state of the form
given in Theorem \ref{thm:main} {\em the minimal entropy is attained
for either a maximally entangled state or a simple product
state.}
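The concavity argument can be illustrated numerically with the closed form $S(t) = {\textstyle \frac{1}{4}}\eta[A,f(t)] + {\textstyle \frac{1}{4}}\eta[B,g(t)] + \log 4$ (our sketch; natural logarithms throughout, parameters are arbitrary admissible examples):

```python
import numpy as np

def eta(alpha, x):
    # eta(alpha, x) = -(alpha+x)ln(alpha+x) - (alpha-x)ln(alpha-x)
    return -(alpha + x) * np.log(alpha + x) - (alpha - x) * np.log(alpha - x)

def S(t, lam, omg):
    # Entropy of (Phi[lam] (x) Omega[omg])(rho) at theta = 0, as a function of t.
    A, B = 1 + lam[2] * omg[2], 1 - lam[2] * omg[2]
    f = np.sqrt((lam[2] + omg[2]) ** 2
                - t * ((lam[2] + omg[2]) ** 2 - (lam[0]*omg[0] + lam[1]*omg[1]) ** 2))
    g = np.sqrt((lam[2] - omg[2]) ** 2
                - t * ((lam[2] - omg[2]) ** 2 - (lam[0]*omg[0] - lam[1]*omg[1]) ** 2))
    return 0.25 * eta(A, f) + 0.25 * eta(B, g) + np.log(4)

lam, omg = (0.5, 0.3, 0.7), (0.6, 0.2, 0.8)
ts = np.linspace(0.0, 1.0, 101)
vals = np.array([S(t, lam, omg) for t in ts])
second_diff = vals[:-2] - 2 * vals[1:-1] + vals[2:]   # <= 0 for a concave function
```

The second differences are all nonpositive, and the grid minimum is at an endpoint, as the concavity argument predicts.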
If we again think of $f(t)$ in the form $\sqrt{a - bt}$, then
(\ref{def:f(t)}) (together with our assumption that $\lambda_3$
and $\omega_3$ are the maximal singular values)
implies that $b > 0$. Combined with
(\ref{eq:1st.deriv}) this implies that ${\eta}[A, f(t)] $ is increasing.
However, this need not be true for $g(t)$. For example, when
$\lambda_3 = \omega_3$ and $t=1$,
$g^{\prime}(1) = {\textstyle \frac{1}{2}} \, |\lambda_1 \omega_1 - \lambda_2 \omega_2|$
which implies that $g(t)$ is increasing and ${\eta}[B, g(t)] $
decreasing. Thus, the general situation is that the
entropy $S(t)$ is a linear combination of two concave functions
corresponding to the contributions from the ``outer'' and ``inner''
eigenvalues respectively. The former is always increasing,
while the latter can be decreasing as $t$ goes from $0$ to $1$.
To illustrate this, Figure \ref{fig:eigenmove} shows how
the eigenvalues of the product state move in the case where
${\lambda}_3={\omega}_3=\mu$. The ``inner'' eigenvalues are
both ${\textstyle \frac{1}{4}}(1-{\mu}^2)$, and they move apart as $t$ increases
away from $0$, which lowers their contribution to the entropy.
The ``outer'' eigenvalues ${\textstyle \frac{1}{4}}(1\pm{\mu})^2$ move
together as $t$ increases, which raises their contribution
to the entropy.
To examine the difference $S(1) - S(0)$, we note that
\begin{eqnarray}
S(1) & = & \log 4 +
{\textstyle \frac{1}{4}} {\eta}(1 + \lambda_3 \omega_3, |\lambda_1 \omega_1 + \lambda_2
\omega_2|) + {\textstyle \frac{1}{4}} {\eta}(1 - \lambda_3 \omega_3, |\lambda_1 \omega_1 -
\lambda_2 \omega_2|)
~~~ \\ S(0) & = & \log 4 +
{\textstyle \frac{1}{4}} {\eta}(1 + \lambda_3 \omega_3, \lambda_3 + \omega_3)
~~~~~~ + {\textstyle \frac{1}{4}} {\eta}(1 - \lambda_3 \omega_3, |\lambda_3 - \omega_3|)
\end{eqnarray}
so that
\begin{eqnarray} \label{eq:entdiff}
4[ S(1) - S(0)] & = &
{\eta}(1 + \lambda_3 \omega_3, |\lambda_1 \omega_1 + \lambda_2 \omega_2|)
- {\eta}(1 + \lambda_3 \omega_3, \lambda_3 + \omega_3) \\
& ~ & + {\eta}(1 - \lambda_3 \omega_3,
|\lambda_1 \omega_1 - \lambda_2 \omega_2|)
- {\eta}(1 - \lambda_3 \omega_3, |\lambda_3 - \omega_3|). \nonumber
\end{eqnarray}
Since $\frac{\partial ~}{\partial x} {\eta}(\alpha,x) =
\log \frac{\alpha-x}{\alpha+x} < 0 $ if $x > 0$, ${\eta}(\alpha,x)$ is
decreasing in $x$. By our assumptions,
$ \lambda_3 \geq |\lambda_1 | \geq |\lambda_1 \omega_1 | $ and
$ \omega_3 \geq |\omega_2 | \geq |\lambda_2 \omega_2 | $
so that
\begin{eqnarray*}
\lambda_3 + \omega_3 > |\lambda_1 \omega_1 + \lambda_2 \omega_2|,
\end{eqnarray*}
and hence the difference of the first two terms in (\ref{eq:entdiff})
(which corresponds to the change in entropy from the ``outer'' eigenvalues)
is always positive. The change from the inner eigenvalues need not
be positive however; indeed, when $\lambda_3 = \omega_3$ it must be
negative. Thus we need to show that the contribution from the
inner eigenvalues cannot dominate.
We gain some intuition from an elementary analysis of ${\eta}({\textstyle \frac{1}{2}}, x)$.
This function is largest near $x = 0$, where it is flat, but has its
largest derivative near the endpoints $x = \pm {\textstyle \frac{1}{2}}$. Hence one expects the change
from the larger ``outer'' eigenvalues to dominate. Explicit analysis
of the extreme points in the next section verifies this.
\bigskip
For the proof of Theorem \ref{thm:main} we will restrict to the
values $\lambda_1 = \lambda_3 = \mu$ and $\lambda_2 = u$ with $|u| \leq
\mu$, and
$\omega_1 = \omega_3 = \nu$ and $\omega_2 = v$ with $|v| \leq \nu$.
In this case (\ref{eq:entdiff}) becomes
\begin{eqnarray} \label{eq:entdiffspec}
4[ S(1) - S(0)] & = &
{\eta}(1 + \mu \nu, \mu \nu + u v)
- {\eta}(1 + \mu \nu, \mu + \nu) \\
& ~ & + {\eta}(1 - \mu \nu,
\mu \nu - u v)
- {\eta}(1 - \mu \nu, \mu - \nu). \nonumber
\end{eqnarray}
\subsection{Analysis of Extreme Points}
In this section we will complete the proof of Theorem \ref{thm:main}.
By the argument in section \ref{sect:convex}, it suffices to
consider either
$u = \pm \mu $ with $\mu \in [0,{\textstyle \frac{1}{3}}]$, or $u = \mu $ with
$\mu \in [{\textstyle \frac{1}{3}},1]$, or $u = 2\mu-1$
with $\mu \in [{\textstyle \frac{1}{3}}, 1]$. Similarly for $\Omega$: either
$v = \pm \nu$ with $\nu\in [0,{\textstyle \frac{1}{3}}]$, or $v = \nu$
with $\nu \in [{\textstyle \frac{1}{3}}, 1]$, or $v = 2 \nu-1$
with $\nu \in [{\textstyle \frac{1}{3}}, 1]$.
So we wish to prove the positivity of $S(1) - S(0)$ for these values of
the parameters.
The simplest case occurs when $u = \mu$, $v = \nu$.
In this case (\ref{eq:entdiffspec}) becomes simply
\begin{eqnarray} \label{eq:entdiffspec2}
4[ S(1) - S(0)] & = &
{\eta}(1 + \mu \nu, 2 \mu \nu)
- {\eta}(1 + \mu \nu, \mu + \nu) \\
& ~ & + {\eta}(1 - \mu \nu,
0)
- {\eta}(1 - \mu \nu, \mu - \nu). \nonumber
\end{eqnarray}
Since ${\eta}(\alpha, x)$ is decreasing in $|x|$, the first term
dominates the second, and the third term dominates the fourth,
hence $ S(1) - S(0) > 0$ in this case.
The remaining cases are handled numerically. It is useful to first consider
a special case, namely $\mu = \nu$.
This arises when the convexity argument is applied to the product
channel $\Phi \otimes \Phi$, which is the situation of most interest to us.
Then (\ref{eq:entdiffspec}) yields
\begin{eqnarray} \label{eq:entdiff:Phi=Omega}
4[ S(1) - S(0)] & = &
- (1 + 2{\mu}^2 + uv) \log (1 + 2{\mu}^2 + uv) \nonumber \\
& ~ & - (1 - 2{\mu}^2 + uv) \log (1 - 2{\mu}^2 + uv)
- 2(1 - uv) \log (1 - uv) \nonumber \\
& ~ & + 4(1 + {\mu}) \log (1 + {\mu}) +
4(1 - {\mu}) \log (1 - { \mu}).
\end{eqnarray}
Graphing verifies that (\ref{eq:entdiff:Phi=Omega}) is positive for
the two extreme values $uv = \pm {\mu}^2$ in the range $0 \leq {\mu} \leq
{\textstyle \frac{1}{3}}$ (see Figure \ref{fig:entdiff1}),
and for the three extreme values $uv = {\mu}^2$, $uv = {\mu}(2{\mu}-1)$
and $uv=(2{\mu}-1)^2$ in the range ${\textstyle \frac{1}{3}} \leq {\mu} \leq 1$
(see Figure \ref{fig:entdiff2}).
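The graphs can be reproduced with a few lines (our sketch; it evaluates (\ref{eq:entdiff:Phi=Omega}) with natural logarithms on grids over the stated ranges of $\mu$ and the extreme values of $uv$):

```python
import numpy as np

def ent_diff(mu, uv):
    # 4[S(1) - S(0)] for Phi (x) Phi, eq. (entdiff:Phi=Omega)
    xlogx = lambda z: z * np.log(np.maximum(z, 1e-300))
    return (-xlogx(1 + 2 * mu**2 + uv) - xlogx(1 - 2 * mu**2 + uv)
            - 2 * xlogx(1 - uv)
            + 4 * xlogx(1 + mu) + 4 * xlogx(1 - mu))

small = np.linspace(1e-3, 1/3, 200)        # 0 <= mu <= 1/3
large = np.linspace(1/3, 1 - 1e-3, 200)    # 1/3 <= mu <= 1
curves = [ent_diff(small, small**2),
          ent_diff(small, -small**2),
          ent_diff(large, large**2),
          ent_diff(large, large * (2 * large - 1)),
          ent_diff(large, (2 * large - 1)**2)]
```

Each curve is strictly positive on the interior of its range, in agreement with the figures.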
The graphs show that (\ref{eq:entdiff:Phi=Omega}) is
smallest when $\mu \simeq 0$ or $\mu \simeq 1$, so we analyze these regions more
carefully. In the first case, when $0 \leq {\mu} \leq {\textstyle \frac{1}{3}}$ and $uv = -
{\mu}^2$,
we expand around $\mu = 0$. This gives
\begin{eqnarray} \label{eq:entdiff:x=0}
4[ S(1) - S(0)] \simeq
4 {\mu}^2. \nonumber
\end{eqnarray}
In the second case when ${\textstyle \frac{1}{3}} \leq {\mu} \leq 1$ and $uv = {\mu}(2{\mu}-1)$,
write $x = 1 -\mu$, and expand in $x$:
\begin{eqnarray}
4S(1)-4S(0) & \simeq & 3x \log {1 \over x} + x
\Big[7(1+\log 4)-6 \log 3 -4(1 + \log 2)\Big]\nonumber\\
& \simeq & 3x \log {1 \over x} + 3.34 x.
\end{eqnarray}
This is manifestly positive for $x$ small. Similarly in the case
$uv=(2{\mu}-1)^2$ we have
\begin{eqnarray}
4S(1)-4S(0) & \simeq & 4x\log {1 \over x} + 4 x (1-\log 2)\nonumber\\
& \simeq & 4x\log {1 \over x} + 1.23 x.
\end{eqnarray}
\bigskip
The general case $\mu \neq \nu$ is handled similarly.
We have
\begin{eqnarray} \label{eq:entdiff:Phi!=Omega}
4[ S(1) - S(0)] & = &
- (1 + 2 {\mu} \nu + uv) \log (1 + 2 {\mu} \nu + uv) \nonumber\\
& ~ & - (1 - 2 {\mu} \nu + uv) \log (1 - 2 {\mu} \nu + uv)
- 2(1 - uv) \log (1 - uv) \nonumber\\
& ~ & + 2(1 + {\mu}) \log (1 + {\mu}) +
2(1 - {\mu}) \log (1 - { \mu}) \nonumber\\
& ~ & + 2(1 + {\nu}) \log (1 + {\nu}) +
2(1 - {\nu}) \log (1 - { \nu}).
\end{eqnarray}
By symmetry it suffices to assume that $\nu \leq \mu$. For
$0 \leq \mu \leq {\textstyle \frac{1}{3}}$ and $0 \leq \nu \leq \mu$ we have two
extreme values $uv = \pm \mu \nu$.
Graphing (\ref{eq:entdiff:Phi!=Omega}) shows that it is positive
in both of these cases. Again the smallest values occur near
$\mu = \nu =0$, so we expand (\ref{eq:entdiff:Phi!=Omega}) around
this point. For both values $uv = \pm \mu \nu$ this gives
\begin{eqnarray} \label{eq:entdiff:Phi!=Omega:case1}
4[ S(1) - S(0)] \simeq
2 ({\mu}^2 + {\nu}^2).
\end{eqnarray}
For ${\textstyle \frac{1}{3}} \leq \mu \leq 1$ and $0 \leq \nu \leq {\textstyle \frac{1}{3}}$,
there are four extreme values $uv = \mu \nu$, $uv = - \mu \nu$,
$uv = (2 \mu -1) \nu$ and $uv = - (2 \mu -1) \nu$.
The graph of (\ref{eq:entdiff:Phi!=Omega}) is positive in all cases, with
smallest values around $\mu = {\textstyle \frac{1}{3}}$.
For ${\textstyle \frac{1}{3}} \leq \mu \leq 1$ and ${\textstyle \frac{1}{3}} \leq \nu \leq \mu$,
there are also four extreme values, $uv = \mu \nu$,
$uv = {\mu}(2{\nu}-1)$, $uv = (2{\mu}-1){\nu}$ and
$uv = (2{\mu}-1)(2{\nu}-1)$
(see Figure \ref{fig:entdiff3} for the last of these).
The graphs of (\ref{eq:entdiff:Phi!=Omega}) are positive in each case,
and the smallest values occur near $\mu=\nu=1$. This region can be
analyzed more carefully by expanding the functions to leading order in
$x=1-{\mu}$ and $y=1-{\nu}$. For example, when
$uv = (2{\mu}-1)(2{\nu}-1)$ the expansion of (\ref{eq:entdiff:Phi!=Omega})
yields
\begin{eqnarray}\label{approx}
4S(1)-4S(0) & \simeq & 2 [x \log x + y \log y - 2(x+y) \log (x+y)]
\nonumber \\
& & + x(2 + 2 \log 2) + y(2 + 2 \log 2)
\end{eqnarray}
Using convexity of the function $x \log x$, we can bound (\ref{approx})
from below by $2(x + y)$, which demonstrates positivity for
$x,y$ small. Similar results are obtained for the other cases.
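The convexity bound can be made explicit. Since $t \mapsto t \log t$ is
convex,
\begin{eqnarray*}
x \log x + y \log y & \geq & (x+y) \log {\textstyle \frac{x+y}{2}} \,,
\end{eqnarray*}
so the bracketed term in (\ref{approx}) is at least
$-(x+y)[\log (x+y) + \log 2]$, and hence
\begin{eqnarray*}
4S(1)-4S(0) & \gtrsim & 2(x+y)\big[1 - \log (x+y)\big] \; \geq \; 2(x+y)
\end{eqnarray*}
whenever $x + y \leq 1$.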
\subsection{Mixing Discussion} \label{sect:mix}
In this section we extend Theorem \ref{thm:main} to pure states
$|\psi \rangle$ formed from any
entanglement of states of minimal
entropy. After a precise statement and proof of this
extension, we discuss its interpretation and the evidence
for more general validity of Conjecture \ref{conj:minent}.
Let $\Phi$ be a unital stochastic map. As explained in
Appendix A, we can write $\Phi = U {\Phi}_{S} U^\dagger$ where
$U$ is a unitary operator and ${\Phi}_{S}$ is self-adjoint.
Let $\mu = ||{\Phi}_{S}||$. Define
\begin{eqnarray}\label{def:L}
{\cal L}(\Phi) = \{ \rho = {\textstyle \frac{1}{2}} (I + N) \,|\, {\Phi}_{S}(N) = \pm \mu
N \}.
\end{eqnarray}
In words, ${\cal L}(\Phi)$ is the collection of density matrices which lie
in the direction of the largest eigenvalue of ${\Phi}_{S}$. If this largest
eigenvalue is non-degenerate, then ${\cal L}(\Phi)$
is a line segment between antipodal points on the Bloch sphere. In case of
degeneracy it may
be a disk, or even the entire Bloch sphere.
If ${\rho}_{12}$ is a density matrix on ${\bf C}^2 \otimes {\bf C}^2$,
we denote by
${\rho}_1 = T_2({\rho}_{12})$ and ${\rho}_2 = T_1({\rho}_{12})$
the reduced density matrices on
${\bf C}^2$ obtained by taking the indicated partial traces.
\bigskip
\begin{thm} \label{thm:mixing}
Let $\Phi$ and $\Omega$ be unital
stochastic maps.
Let ${\rho}_{12}$ be a density matrix on
${\bf C}^2 \otimes {\bf C}^2$, such that ${\rho}_1$ lies in
${\cal L}(\Phi)$, and ${\rho}_2$ lies in
${\cal L}(\Omega)$. Then
\begin{eqnarray}
S\Big[ \big(\Phi \otimes \Omega\big) \big( {\rho}_{12} \big) \Big] \geq
\inf_{\rho} S[\Phi(\rho)] \, +\, \inf_{\gamma} S[\Omega(\gamma)].
\end{eqnarray}
\end{thm}
\noindent {\bf Proof}:
We assume wlog that $\Phi$ and $\Omega$ are diagonal maps in the form
${\Phi}[ {\lambda}_1, {\lambda}_2, {\lambda}_3] $ and
${\Omega}[ {\omega}_1, {\omega}_2, {\omega}_3] $. Furthermore we
can arrange that ${\lambda}_3 = \mu$ is the largest eigenvalue,
so that ${\textstyle \frac{1}{2}} [ I \pm {\sigma}_3]$
lies in ${\cal L}(\Phi)$. Similarly we can arrange that
$\omega_3 = \nu$ is the largest eigenvalue, so that
${\textstyle \frac{1}{2}} [ I \pm {\sigma}_3]$ also
lies in ${\cal L}(\Omega)$.
Wlog we can assume that ${\rho}_{12} = | \psi \ket \bra \psi |$ is a pure state.
In the bases
which diagonalize $\Phi$ and $\Omega$, we have
\begin{eqnarray}
| \psi \rangle = a_{00} | 00 \rangle + a_{01} | 01 \rangle + a_{10} | 10 \rangle +
a_{11} | 11 \rangle
\end{eqnarray}
Define the matrix $A$ to be
\begin{eqnarray}
A = \left(\matrix{a_{00} & a_{01} \cr a_{10} & a_{11}\cr}\right)
\end{eqnarray}
Then as shown in Appendix A, the reduced density matrices are
\begin{eqnarray}
{\rho}_1 = A A^{\dagger}, \qquad\qquad
{\rho}_2 = \Big( A^{\dagger} A \Big)^{\rm T}
\end{eqnarray}
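This can also be verified directly: writing
$| \psi \rangle = \sum_{jk} a_{jk} | jk \rangle$, the partial traces give
\begin{eqnarray*}
({\rho}_1)_{jj'} = \sum_k a_{jk} \overline{a_{j'k}} = (A A^{\dagger})_{jj'},
\qquad
({\rho}_2)_{kk'} = \sum_j a_{jk} \overline{a_{jk'}}
= (A^{\dagger} A)_{k'k} = \Big[ \big( A^{\dagger} A \big)^{\rm T} \Big]_{kk'}.
\end{eqnarray*}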
We obtain the ``Schmidt decomposition'' by applying the singular
value decomposition to $A$. The result is a new basis in which $| \psi \rangle$
has the diagonal form assumed in Theorem \ref{thm:main}.
This decomposition is obtained by finding unitary operators
$U_1, U_2$ which diagonalize $A A^{\dagger}$ and $A^{\dagger} A$
respectively, so that $U_1 A U_2^{\dagger}$ is also diagonal. By assumption
${\rho}_1$ lies in ${\cal L}(\Phi)$, and hence so does
$A A^{\dagger}$. If $\mu$ is non-degenerate, then ${\cal L}(\Phi)$
is the line segment consisting of the diagonal density matrices.
Therefore
$A A^{\dagger}$ is also diagonal, so $U_1$ is equal to the identity,
up to a phase.
If $\nu$ is also non-degenerate then $U_2$ is also proportional to
the
identity, and hence ${\rho}_{12}$ is already in diagonal form. The
result follows immediately by applying Theorem \ref{thm:main}.
In general either or both of $\mu$ and $\nu$ may be degenerate.
For example, if $\mu$ is 2-fold degenerate, then ${\cal L}(\Phi)$ is a disk
containing the $z$-axis in the Bloch sphere. So $A A^{\dagger}$ lies in this
disk, and hence it is diagonalized by a rotation of the Bloch sphere which
preserves this disk. By definition, such a rotation commutes with the action
of $\Phi$, since the plane which contains this disk is an eigenspace of
$\Phi$. Hence the unitary operator $U_1$ commutes with $\Phi$.
Similarly if $\mu$ is 3-fold
degenerate, then every unitary operator commutes with $\Phi$.
To apply the argument to $\Omega$, note that by assumption ${\rho}_2$ lies
in ${\cal L}(\Omega)$. If $\nu$ is non-degenerate, or is 3-fold
degenerate, the same argument applies. If $\nu$ is 2-fold degenerate,
then ${\cal L}(\Omega)$ is a disk.
The transpose operation on the Bloch sphere
is the reflection in the xz-plane, and this does not in general
preserve a disk containing the z-axis. However we have assumed that
$\Omega$ is diagonal, and hence ${\cal L}(\Omega)$ either lies in
the xz-plane or the yz-plane. In both cases the transpose leaves
${\cal L}(\Omega)$ invariant, and hence the same argument can be applied
to deduce that $U_2$ also commutes with
$\Omega$. It follows that $(\Phi \otimes \Omega)(| \psi \ket \bra \psi |)$ is
unitarily equivalent to $(\Phi \otimes \Omega)(| \psi' \ket \bra \psi' |)$
where $| \psi' \rangle$ has the form assumed in Theorem \ref{thm:main},
and hence the result follows.
\bigskip
\bigskip
\noindent{\bf Remarks:}
\begin{enumerate}
\item
The set ${\cal L}(\Phi)$ contains the states of minimal entropy for $\Phi$.
Theorem \ref{thm:mixing} shows that by entangling the minimal entropy
states of the individual channels we cannot decrease the entropy of the product
channel. Since it seems unlikely that entangling states of higher
entropy will improve the situation, we present this as strong
evidence for our conjecture.
\item
We illustrate Theorem \ref{thm:mixing} in the case where
$\Phi[{\lambda}_1, {\lambda}_2, {\lambda}_3]$ is self-adjoint and diagonal,
with ${\lambda}_1 = {\lambda}_3$.
If the state $ |\psi \rangle $ is real, i.e., the matrix $A = (a_{jk})$ is real, then
the $2 \times 2$ unitary matrices $U_1$ and $U_2$ which diagonalize
$A A^{\dagger}$ and $A^{\dagger} A$ can be chosen real and orthogonal,
in which case we emphasize this by writing them as ${\cal O}_1$ and ${\cal O}_2$.
Now suppose that our original orthogonal basis
$ |0\rangle, |1\rangle $ on ${\bf C}^2$ corresponds to the eigenvectors of
$\sigma_z$ so that the corresponding pure state projections are
${\textstyle \frac{1}{2}} [I + {\bf w} \cdot \sigma]$, with
$ {\bf w} = (0,0,1)$ corresponding to the ``North pole'' of a sphere.
Each unitary $2 \times 2$ matrix $U$ can be associated with a
real orthogonal $3 \times 3$ matrix, and the effect of $U$ on the
basis vectors corresponds to a rotation on the sphere or the
action of a real orthogonal $3 \times 3$ matrix on $ {\bf w}.$
When the original $2 \times 2$ matrix is real orthogonal, the
corresponding rotation on the sphere reduces to a rotation
in the xz-plane.
If we now write the unital stochastic map $\Phi$ in the form
\begin{eqnarray*}
\Phi : \rho = {\textstyle \frac{1}{2}} {[I + {\bf w} \cdot \sigma]} \rightarrow
\Phi(\rho) = {\textstyle \frac{1}{2}} [I + T {\bf w} \cdot \sigma] ,
\end{eqnarray*}
then the change of basis
is equivalent to replacing $T$ by $\widehat{{\cal O}} T \widehat{{\cal O}}^{-1}$
where $\widehat{{\cal O}}$ denotes the $3 \times 3$ orthogonal matrix
associated with ${\cal O}$. Thus, for
example, when
${\cal O} = \pmatrix{ ~\cos \, \theta/2 & \sin \, \theta/2 \cr
- \sin \, \theta/2 & \cos \, \theta/2 } $, ~~
$\widehat{{\cal O}} = \pmatrix{ ~\cos \theta & 0 & \sin \theta \cr
0 & 1 & 0 \cr - \sin \theta & 0 & \cos \theta }$.
If $T$ is diagonal with eigenvalues $\lambda_x = \lambda_z$, then
$\widehat{{\cal O}} T \widehat{{\cal O}}^{-1} = T.$ Hence every state
$ |\psi \rangle $ with real coefficients can be diagonalized, and the
matrix $T$ is unchanged.
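This last identity can be seen directly: writing $\lambda = \lambda_x =
\lambda_z$,
\begin{eqnarray*}
\widehat{{\cal O}} \, T \, \widehat{{\cal O}}^{-1} =
\widehat{{\cal O}} \pmatrix{ \lambda & 0 & 0 \cr 0 & \lambda_y & 0 \cr
0 & 0 & \lambda } \widehat{{\cal O}}^{-1} = T,
\end{eqnarray*}
since $T$ acts as the scalar $\lambda$ on the xz-plane, which
$\widehat{{\cal O}}$ preserves, while $\widehat{{\cal O}}$ fixes the
$y$-axis.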
\item
Suppose that $\Phi$ has one very large
singular value and two small ones. Then the unit sphere corresponding
to the set of density matrices is mapped into an ellipsoid
shaped like a football, and the states of minimal entropy lie at
the ends of the football (see Figure \ref{fig:ellip1}). We can interpret Theorem
\ref{thm:main} as saying that entangling these minimal entropy states will
not decrease the entropy below the sum of the minimum entropies.
Now suppose that we always keep two eigenvalues equal, but vary the
parameters so that the ends of the football move in until it
becomes a sphere and then a ``flying saucer'' (see Figure \ref{fig:ellip2}).
The
ends of the football have moved in to states corresponding to maximal
entropy. The minimal entropy states now form a great circle.
As we explained above, our special form for $\psi$ allows
a general entanglement of states
corresponding to these great circles of minimal entropy.
Yet even this more general entanglement does not decrease
the entropy below that of product states.
\item
The discussion above shows that, at least in the case of unital
maps, if Conjecture \ref{conj:minent} does not hold, then the
entanglements which use states of higher
entropy would achieve a lower entropy on the product space than
entanglements of states of minimal entropy. In addition,
such entanglements would have to lower the entropy without
increasing the largest eigenvalue of $\Phi(\rho_{12})$ beyond
the product value given in Theorem \ref{thm:normbnd}. We do
not find this plausible.
\end{enumerate}
\section{Non-unital Maps}\label{sect:non.unit}
In this section we give some heuristic evidence to support
Conjecture \ref{conj:minent}. Before doing so, we illustrate
the differences between unital and non-unital maps by discussing
some of the properties of a special class of maps on
${\bf C}^{2 \times 2}$.
A non-unital map is one for which $\Phi(I) \neq I$. This means
that it takes a randomly distributed alphabet to a non-random
distribution. One would intuitively expect that the maximum
capacity would then be achieved for alphabets which are {\em not}
evenly distributed. Although this is true classically, it need
not be true for quantum stochastic maps as shown by the example
below.
It follows from equation (\ref{eq:Trep}) that a unital stochastic
map defines a linear map on the subspace of matrices with trace
zero. However, a non-unital map is affine when restricted to this
subspace.
\subsection{Special subclass} \label{sect:nonunit.sub}
We now consider non-unital maps which correspond, in the
notation of Section \ref{sect:notation} to the matrix
\begin{eqnarray} \label{eq:phi.fuchs}
{{\Bbb T}} = \pmatrix{ 1 & 0 & 0 &0 \cr 0 & \lambda_1 & 0 & 0
\cr 0 & 0 & 0 & 0 \cr t & 0& 0 & \lambda_3 }
\end{eqnarray}
with $t \neq 0$. This is easily seen to yield the map
\begin{eqnarray} \label{eq:spec.nonunit}
\Phi\big( {\textstyle \frac{1}{2}}[I + {\bf w} {\mathbf \cdot \sigma}]\big) = {\textstyle \frac{1}{2}}
\left[I + w_1 \lambda_1 \sigma_1 + (t + w_3 \lambda_3) \sigma_3 \right].
\end{eqnarray}
If $\lambda_1 = \frac{1}{\sqrt{3}}$ and $t = \lambda_3 = {\textstyle \frac{1}{3}}$,
this is equivalent to the ``splaying'' channel introduced by Fuchs
\cite{Fu1} to demonstrate that there exist stochastic maps for
which the Holevo capacity (\ref{Holevo}) is achieved only by
non-orthogonal states. The case $\lambda_3 = 0$ was considered in
\cite{LesRu} in a different context.
The case $\lambda_1 = 0$ is essentially classical, i.e., if
$\rho$ is restricted to the subset of states of the form
${\textstyle \frac{1}{2}}[I \pm w \sigma_3]$, the action of $\Phi$ is equivalent
to the action of the column stochastic matrix
${\textstyle \frac{1}{2}} \pmatrix{1 + t + \lambda_3 & 1 + t - \lambda_3 \cr
1 - t - \lambda_3 & 1 - t + \lambda_3}$
on the probability vector ${\textstyle \frac{1}{2}} \pmatrix{ 1 + w \cr 1 - w}.$
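In terms of classical transition probabilities this is immediate: the
inputs $w = \pm 1$ are mapped to
${\textstyle \frac{1}{2}}[I + (t \pm \lambda_3) \sigma_3]$, so that
\begin{eqnarray*}
P(0|0) = {\textstyle \frac{1}{2}}(1 + t + \lambda_3), & \quad &
P(0|1) = {\textstyle \frac{1}{2}}(1 + t - \lambda_3), \\
P(1|0) = {\textstyle \frac{1}{2}}(1 - t - \lambda_3), & \quad &
P(1|1) = {\textstyle \frac{1}{2}}(1 - t + \lambda_3),
\end{eqnarray*}
and the induced action on the parameter $w$ is the affine map
$w \mapsto t + \lambda_3 w$.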
Since equality in (\ref{eq:AF.nonunit}) holds for
Fuchs example, it can be regarded as an extreme case. Because $\lambda_2
= 0$ for this class of maps, they map the unit sphere of density matrices
into the ellipse in the x-z plane satisfying the equation
\begin{eqnarray} \label{eq:ellipse}
\frac{w_1^2}{\lambda_1^2} + \frac{(w_3 - t)^2}{\lambda_3^2} = 1
\end{eqnarray}
In the special cases, $\lambda_1 = 0$ and $\lambda_3 = 0$, these
ellipses become vertical and horizontal line segments respectively.
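Equation (\ref{eq:ellipse}) follows at once from
(\ref{eq:spec.nonunit}): the image of
${\textstyle \frac{1}{2}}[I + {\bf w} {\mathbf \cdot \sigma}]$ has
coordinates
\begin{eqnarray*}
w_1' = \lambda_1 w_1, \qquad w_2' = 0, \qquad w_3' = t + \lambda_3 w_3,
\end{eqnarray*}
so that
\begin{eqnarray*}
\frac{(w_1')^2}{\lambda_1^2} + \frac{(w_3'-t)^2}{\lambda_3^2} =
w_1^2 + w_3^2 \leq 1,
\end{eqnarray*}
with equality exactly for the pure states in the x-z plane.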
\bigskip
In the classical case ($\lambda_1 = 0$) it is not hard to
show that the maximal capacity is
{\em never} achieved for $\pi = {\textstyle \frac{1}{2}}$. One has only two
pure states $\rho_{\pm} = {\textstyle \frac{1}{2}} [I \pm \sigma_3]$
(which {\em are} orthogonal)
for which $S[\Phi(\rho_{\pm})]$ is not identical.
\bigskip
By contrast, for the non-unital quantum case with
${\lambda_1} > {\lambda_3}$, it appears, in general, that maximal
capacity is achieved at $\pi = {\textstyle \frac{1}{2}}$ and with non-orthogonal
states. Moreover, these non-orthogonal states need not
correspond to the minimal entropy states. Some insight
into these observations can be obtained by looking at the
ellipse (\ref{eq:ellipse}) in Figure \ref{fig:Fuchs} corresponding to Fuchs
channel. (Fuchs \cite{Fu1} showed explictly that non-orthogonal
states are required to achieve maximum capacity, and that
the maximal capacity achievable with orthogonal states occurs
for $\pi = {\textstyle \frac{1}{2}}$.)
The endpoints of the ellipse (denoted $A_{\pm}$) correspond to
$\Phi \big( {\textstyle \frac{1}{2}} [I \pm \sigma_1] \big) $ and have
entropy $h \big[ {\textstyle \frac{1}{2}} (1 + 2/3)\big]$ while the minimal
entropy states (denoted $C_{\pm}$) correspond to
$\Phi \left( {\textstyle \frac{1}{2}} \Big[ I +
\big( \pm \frac{\sqrt{3}}{2}, 0, {\textstyle \frac{1}{2}} \big) {\mathbf \cdot \sigma} \Big] \right)$
and have entropy $h \big[ {\textstyle \frac{1}{2}} (1 + 1/\sqrt{2})\big]$.
Note that this is the point at which the ellipse meets the
circle $x^2 + z^2 = {\textstyle \frac{1}{2}}$, which is a level set for
the entropy on the Bloch sphere.
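These values can be verified directly from (\ref{eq:spec.nonunit}) with
$\lambda_1 = \frac{1}{\sqrt{3}}$ and $t = \lambda_3 = {\textstyle \frac{1}{3}}$:
\begin{eqnarray*}
A_{\pm}: \quad (\pm 1, 0, 0) & \mapsto &
\big( \pm {\textstyle \frac{1}{\sqrt{3}}}, 0, {\textstyle \frac{1}{3}} \big),
\qquad \sqrt{{\textstyle \frac{1}{3}} + {\textstyle \frac{1}{9}}} =
{\textstyle \frac{2}{3}}, \\
C_{\pm}: \quad \big( \pm {\textstyle \frac{\sqrt{3}}{2}}, 0,
{\textstyle \frac{1}{2}} \big) & \mapsto &
\big( \pm {\textstyle \frac{1}{2}}, 0, {\textstyle \frac{1}{2}} \big),
\qquad \sqrt{{\textstyle \frac{1}{4}} + {\textstyle \frac{1}{4}}} =
{\textstyle \frac{1}{\sqrt{2}}},
\end{eqnarray*}
where the square roots give the lengths of the image Bloch vectors, and
indeed $({\textstyle \frac{1}{2}})^2 + ({\textstyle \frac{1}{2}})^2 =
{\textstyle \frac{1}{2}}$ places $C_{\pm}$ on the level circle
$x^2 + z^2 = {\textstyle \frac{1}{2}}$.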
\bigskip
The states ${\textstyle \frac{1}{2}} [I \pm \sigma_1]$ are the only pair of
orthogonal states with identical entropy. If one tries to
move from $A_+$ toward $C_+$ to lower the entropy from one
of a pair of orthogonal states, the other orthogonal state
must move along the ellipse away from $A_-$, down and closer
to the origin and hence has a higher entropy than $A_-$.
Explicit computation shows that the entropy price paid by moving
away from $A_-$ is greater than that gained by moving
$A_+$ toward $C_+$. For any pair of states $\rho_i ~~ (i = 1,2)$,
the state $\tilde{\rho} = \pi \tilde{\rho}_1 + (1-\pi) \tilde{\rho}_2$ (with $\tilde{\rho}_i = \Phi(\rho_i)$)
which occurs in (\ref{Holevo}) lies along the line segment between
$\tilde{\rho}_1 $ and $\tilde{\rho}_2$. For the points $A_{\pm}$
it is easy to see that the maximum capacity will occur for
$\pi = {\textstyle \frac{1}{2}}$. If one does not require orthogonal states, then
simultaneously moving both $A_{\pm}$ toward $C_{\pm}$ to decrease
the entropy seems advantageous. However, this will also decrease
the entropy of the convex combination $\tilde{\rho}$ which
{\em increases} the capacity. Hence, maximal capacity does not occur
at the minimal entropy states but at states on the ellipse which
lie between $C_{\pm}$ and $A_{\pm}$. (As long as the move is
symmetric, maximal capacity will occur at $\pi = {\textstyle \frac{1}{2}}$.
Symmetry suggests this is optimal, but that does not seem to have
been proved explicitly.)
One expects similar behavior for any map $\Phi$ of the form
(\ref{eq:phi.fuchs}) for which $\lambda_1 > \lambda_3$.
When $\lambda_1 < \lambda_3$ the major axis of the ellipse
lies along the x-axis, there is only one state of minimal entropy
at the ``top'' of the ellipse, and one expects the channel to
behave more and more like a classical channel as the ratio
$\lambda_1/\lambda_3$ increases.
When $\lambda_3 = 0$, the ellipse becomes a horizontal line so that the
minimal entropy states and the endpoints of the ellipse
coincide. Hence, the maximal capacity is again achieved
for orthogonal states. In the limiting case
$t^2 + \lambda_1^2 = 1$, the endpoints of the ellipse lie
on the unit circle of pure states and hence, have entropy
zero. However, the capacity does {\em not} achieve its
maximum value of $\log 2$ but instead has the value
$h(t)$.
As was noted earlier, $\Phi$ always has a fixed point.
For channels of the form (\ref{eq:phi.fuchs}), this fixed point
is at $(0, 0, \frac{t}{1 - \lambda_3})$. However, we have
been unable to attach any significance to the fixed point.
(For Fuchs channel, the fixed point is at ${\textstyle \frac{1}{2}} [C_+ + C_-]$;
however, this seems coincidental.)
\bigskip
At first glance, it might seem that the price paid for the
versatility of non-unital channels is too great. If we
fix $\lambda_2$ and $\lambda_3$, then when $t \neq 0$
the requirement (\ref{eq:cpmcond.nonunit}) implies
that $\lambda_1$ must be smaller (i.e., ``noisier'') than for
the corresponding unital channel. For example, using
Fuchs values $t = \lambda_3 = {\textstyle \frac{1}{3}}, ~ \lambda_2 = 0 $,
one finds that $\lambda_1 = \frac{1}{\sqrt{3}} \approx 0.577 $ is
optimal and corresponds to the least ``noisy'' direction. However,
if $t = 0$, one could increase this to $\lambda_1 = 2/3 \approx 0.667$
with corresponding decrease in noise so that the minimal
entropy (which comes from the states ${\textstyle \frac{1}{2}}[I \pm \sigma_3]$)
is $h(0.667)$. For the non-unital channels these same states
would yield an entropy of only $h(0.577)$. However, the
states ${\textstyle \frac{1}{2}}
\Big[ I + \big( \pm \frac{\sqrt{3}}{2}, 0, {\textstyle \frac{1}{2}} \big) {\mathbf \cdot \sigma}\ \Big] $
will emerge with entropy $h(0.707)$. The decrease in eigenvalues
of the part of $\Phi$ corresponding to the restriction to matrices of
trace zero, is overcome by the contribution to the emerging state
of the non-unital part of the map. We expect that this behavior is
generic for non-unital maps.
\subsection{Minimal entropy}\label{subsect:nonu.min.ent}
Recall the discussion in Section \ref{sect:notation}.
For unital maps,
the generic situation is that there are two states of minimal
entropy corresponding to the endpoints of the major axis.
In the generic situation for a non-unital map,
unless ${\mathbf t}$ is perpendicular to the major axis, the
ellipsoid in (\ref{eq:ellip.nonunit}) will have {\em only one} state
of minimal entropy. Hence {\em any} entanglement will
require mixing with a state of higher entropy. In the case
of a map (such as the Fuchs map (\ref{eq:spec.nonunit}) with
$|\lambda_3| > |\lambda_1|$) for which the translation ${\mathbf t}$
is perpendicular to the (non-degenerate) major axis of the ellipsoid,
there will be two states of minimal entropy. However, because the
translated ellipsoid is not centered at the origin,
these two states will not be the images of two orthogonal states,
but rather the images of two non-orthogonal pure states corresponding
to vectors $| \psi_1 \rangle$ and $| \psi_2 \rangle$.
This suggests that the best candidate for a minimal entropy entangled state
should have the form
$\Psi = \sum_{jk} a_{jk} | \psi_j \otimes \psi_k \rangle $.
However, after changing to orthogonal bases and using the singular
value decomposition (\ref{eq:schmidt}), this can be rewritten in the form
$\Psi = a | \chi_1 \otimes \chi_3 \rangle + d | \chi_2 \otimes \chi_4 \rangle$
where $ \langle \chi_{1} , \chi_{2} \rangle = \langle \chi_{3} , \chi_{4} \rangle = 0$
and, at most, only {\em one} state in each of the pairs $\chi_1,
\chi_2$ and $\chi_3, \chi_4$
can equal either $\psi_1$ or $\psi_2$. Hence any entanglement
must include states which are mapped into states of higher entropy
under the action of $\Phi_S$.
Thus one expects states of minimal entropy under the action of
$\Phi_S \otimes \Phi_S$ to have the product form
$| \psi_j \otimes \psi_k \rangle $.
In the case of two-fold degeneracy (of the form
$|\lambda_j| = |\lambda_k|$), a shift orthogonal to both
major axes will yield a circle of
states of minimal entropy; however, that circle will {\em not} correspond
to a ``great circle'' but rather a circle of constant latitude. Such a circle
never includes the image of two orthogonal states and hence the argument
above still holds. In the case of three-fold degeneracy, the image will
be a sphere and {\em any} shift will again yield only a single state of
minimal entropy.
Since the maps $\Phi$ and $\Phi_S$ differ only by rotations of the
Bloch sphere, the same conclusions hold for $\Phi$.
Moreover, our analysis suggests that for {\em any} pair of maps $\Phi$ and
$\Omega$,
the states which yield minimum entropy under $\Phi \otimes \Omega$
will be simple product states, regardless of whether one or both
are non-unital.
\section{Conclusion}\label{sect:conc}
\bigskip
Our main result in this paper is that channel capacity is
additive for unital channels, where {\it unital} means that
the channel maps the totally random input state (whose density matrix
is proportional to the identity) into itself (for example, every
self-adjoint channel is unital). Specifically, we present strong evidence
that both the Shannon capacity (no entanglement for input states
or output measurements) and the Holevo capacity (inputs unentangled,
but output measurements may be entangled) are additive over two uses of
a two-dimensional unital channel. This is the first result
that establishes additivity of channel capacity for a broad
class of quantum channels.
We show that the problem reduces to finding the states of
{\it minimal entropy} for two uses of the channel. If these
minimal entropy states are {\it product
states}, then this implies additivity of capacity. We then prove
that one of two possibilities occurs:
either these minimal entropy states are product states,
or else they are entangled states whose reduced density matrices
have greater than minimal entropy. We argue that the latter
case is very unlikely (numerical experiments confirm this),
and so conclude that the former is true. As further
supporting evidence, we prove that the maximal norm of
states which emerge from the channel is multiplicative over two
channel uses.
\bigskip
Our results rely heavily on the Stokes parametrization and
the properties of the image of the Bloch sphere under
stochastic maps. In order to extend them to higher dimensions,
we would need an effective parameterization of the subspace
of matrices of trace zero in higher dimensions. One can always
write a density matrix in ${\bf C}^{d \times d}$ as
$\rho = \frac{1}{d} [I + N]$ where $\hbox{Tr} N = 0$. However,
for $d = 4$, we do not know what the analogue of the Bloch
sphere looks like. We know only that its boundary
corresponds to those $N$ with eigenvalues $+3, -1, -1, -1$,
which is not the analogue of the surface of a sphere.
Without knowing the geometry of this region, we can hardly hope
to answer the important question of how it transforms under
maps of the form $\Phi \otimes \Phi$. Thus, we have been
forced to use indirect methods to reach conclusions about
the states of minimal entropy emerging from $\Phi \otimes \Phi$.
\bigskip
Our results have implications for the design of communication
channels. Ideally, one wants to eliminate all noise. However,
this will not be practical and one wants to know how best to
allocate resources.
In the case of unital channels, minimal entropy and maximal
capacity are achieved if signal codes are chosen to correspond
to the least noisy ``direction'' or polarization. (Here, we use
``direction'' in the sense of the Bloch sphere or maximum $\lambda_k$
in our notation. This is unrelated to direction of signal
transmission.) Hence, if maximizing capacity is the primary goal, then
it would seem sufficient to minimize noise in only one direction.
Even if the orthogonal directions are extremely noisy, signals
sent using optimal codes will not be affected. However, in this
case ``classical communication'' becomes truly classical. If
codes are restricted to one direction, then one is back in the
classical situation with one choice for encoding $0$ and $1$.
One has effectively lost the versatility of rotating the code basis
as a tool for such purposes as signal encryption.
Non-unital channels have far more versatility, some aspects of
which were discussed briefly in section \ref{sect:nonunit.sub}.
Much more work needs to be done analyzing the properties of
non-unital channels. Thus far, most authors have looked for
examples of particular maps which
illustrate particular facets of stochastic maps
(such as Fuchs \cite{Fu1} example demonstrating the possibility
of maximizing capacity with non-orthogonal states). Our approach
has been to try to find parameters which characterize subclasses
of stochastic maps with certain properties. As summarized in
Appendix C, most of the
known examples of noisy channels can easily be shown to belong
to one of the groups discussed in this paper. A complete
analysis of non-unital maps would seem to require an
extension of conditions of the type (\ref{eq:AF.nonunit})
to general maps of the form (\ref{eq:T.nonunit}) with
$t_k, \lambda_k \neq 0$.
\section{Introduction}
Our motivation for this paper comes from the problem of turbulent mixing. However, instead of studying the motion of fluids, which can be mathematically described by trajectories in the group of diffeomorphisms of the domain containing the fluid (as pointed out by V.~I.~Arnold~\cite{Arnold}), we will study its finite-dimensional version when the diffeomorphism group is replaced by a finite-dimensional Lie group $G$. We equip $G$ with a left-invariant metric, and consider stochastically perturbed geodesic flows. This situation may admittedly be somewhat removed from important phenomena in the real flows related to very high (or even infinite) dimensionality of the phase-spaces relevant there (at least if we do not take the dimension of $G$ as a large parameter), but it does retain an important feature: the amplification of the stochastic effects by the non-linearity. A well-known example of this effect in the context of 2d incompressible fluids has been established in a seminal paper by Hairer-Mattingly~\cite{HM}, where ergodicity for the stochastically forced 2d Navier-Stokes equation was proved for degenerate forcing, under optimal assumptions. There are at least two important themes involved in this result. One might perhaps be called algebraic, and involves calculations of Lie algebra hulls related to the H\"ormander hypoellipticity condition~\cite{Hormander, Hairer-Hormander}. The other one belongs to Analysis/Probability and deals with consequences of the H\"ormander condition (which are of course already of great interest in finite dimension) in the infinite-dimensional setting (under suitable assumptions). In the finite-dimensional models we consider in this paper the analysis component is much simpler, although there still are many non-trivial and interesting issues related to various aspects of hypoelliptic operators, such as the domains of positivity of the fundamental solutions and convergence to equilibria.
Our focus here will be on the algebraic part. Roughly speaking, we will be interested in algebraic conditions which imply the H\"ormander condition, ergodicity and convergence to equilibria. The stochastic forces will be essential for this, but it is interesting to try in some sense to minimize the ``amount'' of stochasticity which is needed.
One can of course also study the ergodicity of the non-stochastic dynamics, but we have nothing new to say about this notoriously difficult problem.
We consider two different types of models. The first one might be called the Langevin-type perturbation of the geodesic flow. It is related to the stochastic equation
\be\la{i1}
\ddot x +\nu \dot x = \xi\,,
\ee
where $\xi$ is a random force, which is ``degenerate", in the sense that it acts only in a few directions. On a group $G$ with a left-invariant metric (and under suitable assumptions on~$\xi$) one can employ symplectic reduction and obtain an equation
\be\la{i2}
\dot z^k = q^k(z,z)-\nu z^k + \sigma^k_l \dot w^l\,,
\ee
in the Lie algebra $\gl$ of the group, where we sum over repeated indices, $k$ runs from $1$ to the dimension of the group, $l$ runs from $1$ to the dimension of the noise (which can be $1$), $w^l$ are independent standard Wiener processes, and the equation
\be\la{i3}
\dot z = q(z,z)
\ee
is the Euler-Arnold equation in $\gl $ as established in~\cite{Arnold}.
For this model we determine an algebraic condition on $q$ which is necessary and sufficient for the H\"ormander condition for the corresponding Fokker-Planck equation to be satisfied in the cotangent space $T^*G$, see Theorem~\ref{thm1}. For a compact group $G$ this condition implies ergodicity, and the projection of the ergodic measure to $G$ is the Haar measure. This means that the (stochastically perturbed) geodesic flow will visit all points on the group with the same probability (with respect to the Haar measure). We note that in the setting of the left-invariant metrics on a group this can never be the case without forcing, due to known conserved quantities one gets from Noether's theorem.
For our next group of models we take a compact manifold $Z\subset \gl$ which is invariant under the flow of~\rf{i3} and consider
\be\la{i4}
\dot z = q(z,z)+\xi\,,
\ee
where $\xi$ schematically stands for random forcing induced by the Brownian motion in $Z$ with respect to a natural Riemannian metric. One example we have in mind - in the co-tangent bundle $T^*G$ picture\footnote{We can of course go back and forth between $TG$ and $T^*G$ with the help of the metric.} - is the intersection of a co-adjoint orbit and an energy level. The manifold $Z$ can have much lower dimension than $G$. This situation may in fact be a fairly realistic description of a motion with random perturbations in which the quantities defining $Z$ are monitored and kept close to constant values by some control mechanism. When combined with random perturbations, such control might easily induce random drift along the surface defined by specified values of the controlled quantities. (A more concrete mathematical process is described in subsection~\ref{constr33}.)
Together with the equation
\be\la{kinematic}
a^{-1}\dot a =z,
\ee
the stochastic equation~\rf{i4} gives a stochastic equation in $G\times Z$. In this situation we again determine an algebraic condition on $Z$ which is necessary and sufficient for the Fokker-Planck equation in $G\times Z$ associated to~\rf{i4} to satisfy the H\"ormander condition, see Theorem~\ref{thm2}, and, when the condition is satisfied, establish ergodicity and convergence to equilibrium. For compact $G$ the ergodic measure is given by a product of the Haar measure on $G$ and an invariant measure on $Z$.
In the case of a non-compact $G$ and a compactly supported initial condition for the Fokker-Planck equation the behavior will of course be different, and we illustrate what one might expect by an explicit calculation for $G=\R^n$ and a one-dimensional manifold $Z$, see Proposition~\ref{prop1}.
The themes above have strong connections to control theory. In addition to the remark above about interpreting $Z$ as a ``control surface", there is another connection via the Stroock-Varadhan Theorem~\cite{StroockVaradhan}. Roughly speaking, instead of the random forcing $\xi$ one can consider forcing by a control and ask which states can be reached (and how efficiently). For an introduction to control theory see for example~\cite{Jurdjevic}.
\section{Preliminaries}
\subsection{Basic notation and setup}
Let $G$ be a Lie group. Its elements will be denoted by $a, b, \dots$ We will denote by $\gl$ and $\gls$ respectively its Lie algebra and its dual. Let $e_1,\dots, e_n$ be a basis of $\gl$ and let $e^1,\dots,e^n$ be its dual basis in $\gls$, determined by $\langle e^i,e_j\rangle=\delta^i_j$. We assume that a metric tensor with coordinates $g_{ij}$ in our basis is given on $\gl$. In what follows we will mostly use the standard formalism of orthonormal frames and assume that $g_{ij}=\delta_{ij}$, which can of course always be achieved by a suitable choice of the original basis, although sometimes it may be useful not to normalize $g_{ij}$ this way, so that other objects can be normalized instead. When $g_{ij}=\delta_{ij}$, we can identify vectors with co-vectors without too much notation and write $|x|^2$ for the square of the norm of an $x\in\gl$ or $x\in \gls$ given by the metric tensor.
However, we will try to avoid relying on this normalization too much and many of our formulae will be independent of it. In such situations we will use the classical convention of upper indices for vectors and lower indices for co-vectors, with the usual conventions $y_k=g_{kl}y^l$ and $y^k=g^{kl}y_l$, where $g^{kl}$ is the inverse matrix of $g_{kl}$. In this notation we can for example write $|y|^2=y_ky^k$.
The various objects on $\gl$ and $\gls$ can be transported to $T_aG$ and $T_a^*G$ for any $a\in G$ in the standard way, by using the left translation $b\to ab$. The resulting frame of vectors fields on $G$ (or 1-forms), will still be denoted by $e_1,\dots, e_n$.
We can then consider $G$ as a Riemannian manifold. The left translations $b\to ab$ are more or less by definition isometries of the manifold. They obviously act transitively on $G$, and hence $G$ is a homogeneous Riemannian manifold.
The relevance of this construction for the mechanics of fluids and rigid bodies was pointed out in Arnold's paper~\cite{Arnold} already mentioned above. The main point is that for fluids and rigid bodies the configuration space of the corresponding physical system is naturally given by a group (which however is infinite-dimensional for fluids), and the kinetic energy gives a natural metric tensor on it. We refer the reader to the book by Arnold and Khesin~\cite{ArnoldKhesin} for a deeper exposition of these topics and additional developments.
\subsection{The symplectic structure in $T^*G$ in left-invariant frames}
The cotangent space $T^*G$ is the canonical phase space for describing the geodesic flow in $G$ via the Hamiltonian formalism. For a group $G$ with a left-invariant metric the space $T^*G$ can be identified with $G\times \gls$ by using the frame $e_1,\dots,e_n$ on $G$:
\be\la{1}
(a,y)\in G\times \gls \qquad\rightarrow\qquad y_k e^k(a)\in T^*_aG\,,
\ee
where $e^1,\dots, e^n$ is the frame in $T^*G$ which is dual to $e_1,\dots,e_n$.
Here and in what follows we use the standard convention of summing over repeated indices. The ``coordinates" in $T^*G$ given by $(a,y)$ are convenient for calculations, and will be freely used in what follows. Note that the prolongation of the action $a\to ba$ of $G$ on itself to $T^*G$ has a very simple form in the $(a,y)$ coordinates:
\be\la{1b}
(a,y)\to (ba,y)\,,
\ee
i.\ e.\ the $y$ coordinate stays unchanged. This is exactly because the frame $e^k$ is left-invariant.
As the cotangent space of any smooth manifold, the space $T^*G$ carries a natural symplectic structure. We start with the canonical 1-form on $T^*G$, which is given by
\be\la{2}
\alpha=y_k e^k(a)\,.
\ee
The symplectic form $\om$ is then given by
\be\la{3}
\om=d\alpha\,.
\ee
We have
\be\la{4}
d\alpha=dy_k\wedge e^k+y_kde^k\,.
\ee
The calculation of $de^k$ is standard. First, we introduce the structure constants of $\gl$ (with respect to the basis $e_k$) by
\be\la{5}
[e_i,e_j]=c^k_{ij}e_k\,.
\ee
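For example, for $G=SO(3)$ the Lie algebra $\gl=\mathfrak{so}(3)$ can be identified with $\R^3$ equipped with the cross product, and for the standard basis one has $[e_i,e_j]=\epsilon_{ijk}e_k$, i.\ e.\ $c^k_{ij}=\epsilon_{ijk}$, where $\epsilon_{ijk}$ is the Levi-Civita symbol.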
Next, we apply Cartan's formula for the exterior differentiation:
\be\la{cartan}
de^k(e_i,e_j)=e_i\cdot e^k(e_j)-e_j\cdot e^k(e_i)-e^k([e_i,e_j])\,.
\ee
Combining~\rf{5} and~\rf{cartan}, together with the fact that the first two terms on the right-hand side of~\rf{cartan} vanish due to left-invariance of the objects involved, we obtain
\be\la{8}
\om\,\,=\,\,d\alpha\,\,=\,\,dy_k\wedge e^k-\pul \,y_k\,c^k_{ij}\,e^i\wedge e^j\,.
\ee
In other words, in the local frame on $T^*G$ given by $e_1,\dots, e_n,e^1\sim \frac{\partial}{\partial y_1},\dots, e^n\sim\frac{\partial}{\partial y_n}$, the form $\omega$ is given by the block matrix
\footnote{We use the usual identifications: if $f, g$ are two co-vectors with coordinates $f_i, g_j$ respectively, then the two-form $f\wedge g$ is identified with
the antisymmetric matrix $\om_{ij}=f_ig_j-f_jg_i$ and $(f\wedge g)(\xi,\eta)=
\om_{ij}\xi^i\eta^j$ for any two vectors $\xi,\eta$.}
\be\la{9}
\left(\begin{array}{cc}
-C(y) & -I \\ I & 0
\end{array}\right)\,,
\ee
where $C(y)$ denotes the matrix $y_kc^k_{ij}$. The inverse of the matrix~\rf{9} is
\be\la{10}
\left(\begin{array}{cc}
0 & I \\ -I & -C(y)
\end{array}\right)\,,
\ee
and for any function $H=H(a,y)$ on $T^*G$ the corresponding Hamiltonian equations are
\be\la{H}
\begin{array}{rcl}
(a^{-1}\dot a)^k & = & \frac {\partial H}{\partial y_k}\,,\\
\dot y_k & = & -e_k H +y_lc^l_{jk}\frac{\partial H}{\partial y_j}\,,
\end{array}
\ee
where $(a^{-1}\dot a)^k$ denotes the $k-$th coordinate of the vector $a^{-1}\dot a\in \gl$, and the expression $e_k H$ denotes the derivative along the $e_k$ direction in the variable $a$. The last term on the right-hand side of the second equation represents the Poisson bracket $\{H,y_k\}$ with $H$ considered as a function of $y$ (and $a$ considered as fixed when calculating the bracket). The bracket is uniquely given by its usual properties and the relations
\be\la{PB}
\{y_i,y_j\}=y_kc^k_{ij}\,.
\ee
It can be obtained by applying the standard Poisson bracket on the symplectic manifold $T^*G$ to functions independent of $a$ in the above coordinates $(a,y)$.
Note that the equations~\rf{H} do not depend on the metric; they depend only on the structure of the Lie algebra.
\subsection{The symplectic reduction to $\gls$ and the Euler-Arnold equation}
When $H$ is invariant under the prolongation to $T^*G$ of the action of $G$ on itself by left multiplication, which is equivalent to $H$ not depending on $a$ in the above coordinates $(a,y)$, i.\ e.\ $H=H(y)$, the second equation of~\rf{H} does not contain $a$ and is simply
\be\la{EA}
\dot y_k=\{H,y_k\}\,.
\ee
This is a form of the Euler-Arnold equation, originally formulated in $\gl$ in~\cite{Arnold}. This equation represents one form of the reduction of the equations on $T^*G$ to $\gls$ by the symmetries of the left action of $G$ on itself, see for example~\cite{MarsdenWeinstein}. The space $\gls$ has a natural structure of a Poisson manifold (with the Poisson bracket given by~\rf{PB}) and is foliated into ``symplectic leaves", which are given by the orbits of the co-adjoint representation, see for example~\cite{ArnoldKhesin, MarsdenWeinstein}. The orbits are given by
\be\la{orb}
\OO _{\bar y}=\{(\Ad a)^* \bar y, \,\,a\in G\}
\ee
where $\bar y$ is a fixed vector in $\gls$ and $\Ad a$ is defined below, and they have a natural structure of a symplectic manifold (with the maps $(\Ad a)^*$ acting as symplectic diffeomorphisms).
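In the classical example of a rigid body one has $G=SO(3)$ and $H(y)=\pul\left(\frac{y_1^2}{I_1}+\frac{y_2^2}{I_2}+\frac{y_3^2}{I_3}\right)$, where $I_1,I_2,I_3$ are the principal moments of inertia. With the conventions $c^k_{ij}=\epsilon_{ijk}$, equation~\rf{EA} becomes (up to an overall sign depending on the orientation conventions) Euler's equations
\[
\dot y_1=\Big(\frac1{I_3}-\frac1{I_2}\Big)y_2y_3\,,\qquad
\dot y_2=\Big(\frac1{I_1}-\frac1{I_3}\Big)y_3y_1\,,\qquad
\dot y_3=\Big(\frac1{I_2}-\frac1{I_1}\Big)y_1y_2\,,
\]
and the co-adjoint orbits~\rf{orb} are the spheres $\{|y|={\rm const}\}$.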
\subsection{Conserved quantities, the moment map, and Noether's theorem}
The Killing fields associated with the symmetries of the Riemannian structure on $G$ with the left-invariant metric given by left multiplications $b\to ab$ are easily seen to be given by {\it right-invariant} vector fields $e(a)=\xi a$ (where $\xi\in \gl$) on $G$. By Noether's theorem there should be a conserved quantity associated to any such field. It is easy to see that the quantity is given by
\be\la{moment}
(a,y)\to (({\rm Ad} \,a^{-1})\,\xi\, ,\, y)=(\xi\,,\,({\rm Ad}\, a^{-1})^* y)\,,
\ee
where the operator $\Ad a$ is defined as usual by
\be\la{ad}
\Ad a \,\xi = a\xi a^{-1}\,.
\ee
The map $M\colon T^*G\to \gls$ given in the $(a,y)$ coordinates by
\be\la{M}
M(a,y)= (\Ad a^{-1})^*\, y
\ee
is the usual moment map associated with the (symplectic) action of $G$ on $T^*G$ (given by the prolongation of the left multiplication). The vector $M(a,y)$ is conserved under the Hamiltonian evolution, and the quantities $(\xi, M)$ are the conserved quantities from Noether's theorem applied to our situation. In particular, the Hamiltonian equations~\rf{H} obtained from taking the Hamiltonian as
\be\la{NH}
H(a,y)=(\xi, M(a,y))
\ee
are
\be\la{NHE}
\begin{array}{rcl}
(\dot a)a^{-1} & = & \xi \\
\dot y & = & 0\,.
\end{array}
\ee
The conservation of $M$ also has a geometric interpretation: if $x(s)$ is a geodesic (parametrized by arc length) on a Riemannian manifold and $X$ is a Killing field (infinitesimal symmetry), then the scalar product $(\dot x, X)$ is constant. This is of course just another way to state Noether's theorem in this particular case, but it can also be interpreted in terms of properties of Jacobi fields along our geodesics.
In the context of rotating rigid bodies, the quantity $M$ corresponds to the conservation of angular momentum, see~\cite{Arnold}. In the context of ideal fluids, the conservation of $M$ corresponds to the Kelvin-Helmholtz laws for vorticity, as observed by many authors.
It is easy to check the following fact: when $H$ is independent of $a$, i.\ e.\ $H=H(y)$, then for a curve $(a(t), y(t))$ in $T^*G$ satisfying the ``kinematic" equation
\be\la{kinem}
(a^{-1}\dot a)^k= \frac {\partial H}{\partial y_k}
\ee
the ``dynamical" equation
\be\la{dyn}
\dot y_k = \{H,y_k\}
\ee
is equivalent to the (generalized) momentum conservation
\be\la{mc}
M(a,y)={\rm const.}
\ee
Also, if $(a(t),y(t))$ is a solution of the equations of motion and $H=H(y)$, then $y(t)$ is given by
\be\la{yt}
y(t)=(\Ad a(t))^*\,\, \bar y
\ee
for some fixed co-vector $\bar y\in\gls$.
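Indeed, applying $(\Ad a(t)^{-1})^*$ to~\rf{yt} gives
\[
M(a(t),y(t))=(\Ad a(t)^{-1})^*(\Ad a(t))^*\,\bar y=\big(\Ad a(t)\,\Ad a(t)^{-1}\big)^*\,\bar y=\bar y\,,
\]
so~\rf{yt} is just the momentum conservation~\rf{mc} solved for $y(t)$.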
\section{Perturbations by random forces}
\subsection{Langevin equation}
A very natural random perturbation of the geodesic flow is the Langevin equation, which can be symbolically written as
\be\la{L1}
\ddot a = - \nu \dot a + \ve \dot w\,,
\ee
for some parameters $\nu>0$ and $\ve >0$, which for a given $t>0$ and $a(t)$ is considered as an equation in $T_{a(t)} G$, with $\ddot a$ interpreted as the covariant derivative of $\dot a$ along the curve $a(t)$, and $w$ a suitable form of Brownian motion in the Riemannian manifold $G$. Of course, the expression $\dot w$ is somewhat ambiguous and there are some subtle points in writing things in the correct way from the point of view of rigorous stochastic calculus. In particular, one has to distinguish carefully between the It\^o and Stratonovich integrals.
Here we will mostly avoid the subtleties of the right interpretation of the stochastic equations such as~\rf{L1} by working instead with the Fokker-Planck equation, and we can define the transition probabilities for our processes via that equation.
A good starting point for writing the Fokker-Planck equation associated with~\rf{L1} is the Liouville equation in $T^*G$. This equation describes the evolution of a density $f(a,y)$ with respect to the volume element given by the natural extension of the Riemannian metric from $G$ to $T^*G$, which is proportional to the volume element given by the $n-$th power $\om\wedge\om\wedge\dots\wedge \om$ ($n$ times) of the canonical symplectic form $\om$ above. The Liouville equation is
\be\la{Liouville}
f_t+ v^ke_kf+b_k\frac{\partial f}{\partial y_k}=0\,,
\ee
where
\be\la{vb}
v^k=\frac{\partial H}{\partial y_k}\,,\qquad b_k=\{H,y_k\}\,,
\ee
and $e_kf$ denotes the differentiation of $f(a,y)$ as a function of $a$ in the direction of the field $e_k$ defined earlier.
The vector field $X=v^ke_k+b_k\frac{\partial}{\partial y_k}$ is div-free (with respect to our volume form in $T^*G$), as follows from the Liouville theorem in Hamiltonian mechanics.
Hence equation~\rf{Liouville} is the same as
\be\la{L2}
f_t+\div (X f)=0\,,
\ee
where $\div$ is taken in our metric on $T^*G$. Our Fokker-Planck equation should then be
\be\la{FP}
f_t+v^ke_kf+b_k\frac{\partial f}{\partial y_k}+\frac{\partial}{\partial y_k}\left(-\nu v_k f -\frac{\ve^2}{2} \frac{\partial f}{\partial y^k}\right)=0 \,,
\ee
where $v_k$ is as above.
It can be considered as a combination of the Liouville transport with an Ornstein-Uhlenbeck process along the linear fibers of $T^*G$. To have an exact correspondence to~\rf{L1}, we have to take $H(y)=\pul|y|^2$, the Hamiltonian of the geodesic flow.
This equation can then be interpreted as describing a ``physical Brownian motion" of a particle in $G$. (We can for example think of $G$ being filled with an incompressible fluid which is at rest, with the Brownian particle suspended in the fluid and subject to random ``kicks" from the fluid molecules and friction due to viscosity of the fluid, in the spirit of Einstein's paper~\cite{Einstein1905}.)
The symmetry reduction for~\rf{FP} corresponding to the symmetry reduction for~\rf{L1} is very simple: we consider the equation only for functions depending on $y$, which results in dropping the term $v^ke_kf$. The symplectic reduction of~\rf{H} to~\rf{EA} corresponds to the same procedure applied to the Liouville equation~\rf{Liouville}.
There is an explicit steady solution of~\rf{FP} given by
\be\la{sol1}
f(a,y)=Ce^{-\beta H }\,,\qquad \beta=\frac{2\nu}{\ve^2}\,,
\ee
where $C$ is any constant. The formula is the same as in the flat space. The approach to equilibrium will however be influenced by the term $b_k\frac{\partial }{\partial y_k}$ which is absent in the flat case. Strictly speaking, the last statement applies unambiguously only to compact groups $G$, where the equilibrium~\rf{sol1} is easily seen to be unique among probability densities (for a suitable $C$, under some natural assumptions on $H$). We will discuss this point in some detail below in the more difficult case of degenerate forcing.
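That~\rf{sol1} is indeed steady can be checked directly: $e_kf=0$ since $f$ is independent of $a$; the transport term vanishes because
\[
b_k\frac{\partial f}{\partial y_k}=-\beta f\,\{H,y_k\}\frac{\partial H}{\partial y_k}=-\beta f\,y_l\,c^l_{jk}\,v^jv^k=0
\]
by the antisymmetry of $c^l_{jk}$ in $j$ and $k$; and for $H=\pul|y|^2$ the Ornstein-Uhlenbeck term vanishes because
\[
-\nu v_k f-\frac{\ve^2}{2}\frac{\partial f}{\partial y^k}=\Big(-\nu+\frac{\ve^2\beta}{2}\Big)\,y_kf=0
\]
precisely when $\beta=\frac{2\nu}{\ve^2}$.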
Given the conservation of $M(a,y)$, from the point of view of Statistical Mechanics it is natural to consider (at least when $G$ is compact) distributions in the phase space $T^*G$ given by
\be\la{equilibria}
f(a,y)=C e^{-\beta H(y) + (\xi\,,\, M(a,y))}=Ce^{-\beta H(y) + ((\Ad a^{-1})\xi\,,\, y)}
\ee
for $\beta>0$ and $\xi\in\gl$. In fact, if we replaced the Langevin equation by the Boltzmann equation
\be\la{Boltzmann}
f_t+v^ke_kf+b_k\frac{\partial f}{\partial y_k}=Q(f,f) \,,
\ee
for an appropriate ``collision operator" $Q$ (defined on each fiber $T_a^*G$ in the same way as in the flat case), densities~\rf{equilibria} should be among the equilibria
(the set of which could possibly be larger due to symmetries other than those generated by the left shifts).
The large degeneracy of the set of equilibria is an important feature of the Boltzmann equation which is crucial for fluid mechanics. It is not shared by the Langevin equation, for which the equilibrium is unique (under reasonable assumptions). This is related to the hypoellipticity of the differential operator in~\rf{FP}, which we will discuss in some detail for more general operators in the next subsection.
We remark that one can modify the Langevin equation and get~\rf{equilibria} as equilibria for the modified equation. For this we simply change the Hamiltonian in~\rf{FP} to the expression
\be\la{nH}
\tilde H(a,y)= H(y)-((\Ad a^{-1})\xi\,, \,y)
\ee
This corresponds to watching the Brownian motion of a particle in an incompressible fluid which moves in $G$ as a rigid body along the Killing field $\xi a$. (This is a steady solution of the equations of motion for an incompressible fluid.) The term
$((\Ad a^{-1})\xi\,, \,y)$ in the Hamiltonian then produces the analogues of the centrifugal and Coriolis forces which we encounter in rotating coordinate frames.
\subsection{Langevin equation with degenerate forcing}\label{sslang}
In PDEs of fluid mechanics one sometimes considers forcing through low spatial Fourier modes which is ``white noise" in time. See for example~\cite{HM, Kuksin}. In our context here this is akin to considering the system
\be\la{DL1}
\begin{array}{rcl}
(a^{-1}\dot a)^k & = & \frac{\partial H}{\partial y_k}\,\,,\\
\dot y_k & = & -e_k H + y_lc^l_{jk}\frac{\partial H}{\partial y_j}-\nu \frac{\partial H}{\partial y_k} + \sum_{i=1}^r \ve\dot w_i \tilde f^i_k\,\,,
\end{array}
\ee
where $\tilde f^1,\dots \tilde f^r$ are some fixed vectors in $\gls$ and $w_i$ are standard independent Wiener processes.
The term $-\nu \frac{\partial H}{\partial y_k}$ represents friction. One could consider more general forms of friction, but here we will be content with the above special form. Many of the results below hold for more general friction forces.
The main complication in~\rf{DL1} as compared to the previous section is that $r$ can be less than the dimension $n$ of $\gls$.
In the remainder of this subsection we will assume that
\be\la{HH}
H=H(y)=\pul |y|^2=\pul y_ky^k\,,
\ee
which corresponds to the geodesic flow, or the kinetic energy in classical mechanics.
Also, below we will need to do some Lie bracket calculations for which some formulae seem to be easier when we work in $\gl$ rather than in $\gls$. This amounts to ``raising the indices" in the old-fashioned language, i.\ e.\ working in the coordinates $y^k$ rather than $y_k$. We note that with these assumptions we have
\be\la{vey}
y^k=v^k\,.
\ee
Equation~\rf{DL1} then becomes
\be\la{DL2}
\begin{array}{rcl}
(a^{-1}\dot a)^k & = & y^k\,\,,\\
\dot y^k & = & \tilde q^k_{ij}y^iy^j-\nu y^k + \sum_{i=1}^r\ve \dot w^i f_i^k \,\,,\,
\end{array}
\ee
where the notation is self-explanatory, perhaps with the exception of the term
$\tilde q^k_{ij}y^iy^j$, in which the coefficients are not uniquely determined by the function $y\to \tilde q(y,y)$.
A~straightforward ``raising of indices" gives the definition
\be\la{qdef}
([x,y],z)=(\tilde q(z,x),y)\,,\qquad x,y,z\in \gl\,\,,
\ee
which coincides with the Arnold form $B$ from~\cite{Arnold}. In what follows it will be advantageous to work with the symmetrization of $\tilde q$, which will be denoted by $q$:
\be\la{symq}
q(x,y)=\pul(\tilde q(x,y) + \tilde q(y,x))\,.
\ee
In equation~\rf{DL2} it does not matter whether we use $\tilde q$ or $q$, of course.
Instead of~\rf{DL2} we can write
\be\la{dl3}
\begin{array}{rcl}
\dot a & = & az\,\,,\\
\dot z & = &q(z,z)-\nu z + \ve \sigma \dot w\,\,,
\end{array}
\ee
where we use $z$ to emphasize that the equations are considered in $\gl$ (the variable $y$ was used to denote elements of $\gls$), $w$ is the standard Wiener process in $\R^r$ and $\sigma$ is a suitable $n\times r$ matrix.
The corresponding Fokker-Planck equation for $f=f(a,z;t)$ then is
\be\la{fpd}
f_t+z^ke_k f + q^k(z,z)\frac{\partial f}{\partial z^k} + \frac{\partial}{\partial z^k} \left(-\nu z^k f- \frac{\ve^2}2 h^{kl} \frac{\partial f}{\partial z^l} \right)=0\,\,,
\ee
for a suitable symmetric positive semi-definite matrix $h$ (which is constant in $z$).
This is clearly a degenerate parabolic operator and we will study the classical parabolic H\"ormander condition for hypoellipticity, formulated in terms of the Lie brackets generated by the vector fields relevant for the operator, see~\cite{Hairer-Hormander}.
In our context here the condition can be formulated in terms of the ``Lie algebra hull" of the vector fields
\be\la{add1}
\XX_k=\sigma_k^l\frac{\partial}{\partial z^l}
\ee
which satisfies the crucial additional condition that it is closed under the operation
\be\la{add2}
{\rm ad}\,\XX_0\colon \XX\to [\XX_0,\XX]\,,
\ee
where the last bracket is the Lie bracket of vector fields and $\XX_0$ is the ``Euler-Arnold component" of our operator as specified below.
The coordinates on $TG$ we will use are $(a,z)$, which correspond to $z^ke_k(a)$. The vector fields on $TG$ which will be relevant for our purposes will be of the form $A^k(z)e_k(a) + X^k(z)\frac{\partial}{\partial z^k}$\,. We will write
\be\la{fields}
A^k(z)e_k(a) + X^k(z)\frac{\partial}{\partial z^k} = \vek A X = \vek {A(z)}{X(z)}\,\,.
\ee
In these coordinates the Lie bracket is
\be\la{lbc}
\left[\,\,\vek A X \,\,,\,\,\vek B Y \,\,\right]
= \vek{A\wedge B + D_X B-D_Y A}{{[X,Y]}}\,\,,
\ee
where we use $A\wedge B$ to denote the function of $z$ obtained from $A(z)$ and $B(z)$ by taking the Lie bracket in $\gl$ pointwise, as opposed to $[X,Y]$, which denotes the Lie bracket of $X, Y$ considered as vector fields in $\gl$. The notation $D_XB$ has the usual meaning: the derivative of $B=B(z)$ (at $z$) in the direction of $X=X(z)$.
Let us write $Q=Q(z,z)$ for the vector field in $\gl$ given by the vector field
$q^k(z,z)\frac{\partial}{\partial z^k}$.
For simplicity we will work out the case when $h^{kl}$ is of rank one, which means that the random forcing is applied only in one direction, which will be denoted by $F$ (and considered as a constant vector field in $\gl$). Hence
\be\la{F}
h^{kl}=F^kF^l\,.
\ee
In this case the vector fields for the H\"ormander condition calculation can be taken as
\be\la{21}
\vek 0 F \,\,,\quad \mbox{and}\quad \XX_0=\pul\vek z {Q-\nu z}\,\,.
\ee
We have
\be\la{22}
\left[\vek 0 F, \vek{z}{Q-\nu z}\right]=\vek F {D_F Q - \nu F}\,\,
\ee
and
\be\la{23}
\left[\vek 0 F\,,\left[ \vek 0 F \,, \,\vek z {Q-\nu z}\right]\right]=\vek 0 {D^2_F Q}\,\,.
\ee
This means that we have extended our list of vector fields by the field
\be\la{24}
\vek 0 G \,\,,\,\,G=\pul D^2_F Q= Q(F, F)\,.
\ee
We can now take
\be\la{25}
\left[\vek 0 G\,,\left[ \vek 0 F \,, \,\vek z {Q-\nu z}\right]\right]=\vek 0 {D_GD_F Q}\,\,.
\ee
and extend our list of fields by
\be\la{26}
\vek 0 {Q(F,G)}\,.
\ee
We note that the new fields obtained in this way are ``constant" (in the coordinates we use), so the procedure can be easily iterated.
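The structure of the iteration can be summarized as follows: if we set $V_0={\rm span}\{F\}$ and $V_{m+1}=V_m+{\rm span}\{Q(u,v)\,,\,\,u,v\in V_m\}$, then the brackets above generate all constant fields $\vek 0 X$ with $X\in V_m$ for every $m$, and the sequence $V_m$ stabilizes at the smallest subspace of $\gl$ which contains $F$ and is invariant under $Q$. This motivates the following definition.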
\begin{definition}
We will say that $Q$ is non-degenerate with respect to a set $\mathcal F\subset \gl$ if there is no non-trivial proper subspace $M\subset \gl$ containing $\mathcal F$ which is invariant under $Q$, in the sense that $Q(z,z')\in M$ whenever $z,z'\in M$.
\end{definition}
\noindent
We can now formulate the main result of this subsection:
\begin{theorem}\label{thm1}
{The operator of the Fokker-Planck equation~\rf{fpd} satisfies the H\"ormander condition if and only if $Q$ is non-degenerate with respect to the range of the matrix $h$ (considered as a map from $\gl$ to $\gl$).}
\end{theorem}
\noindent
{\it Proof:} The necessity of the condition can be seen when we consider functions depending only on $z$. If there is a non-trivial proper linear subspace $M$ containing the range of $h$ which is invariant under both $Q$ and the diffusion, then particle trajectories starting in $M$ clearly cannot leave $M$, and therefore the operator cannot satisfy the H\"ormander condition.
On the other hand, if $Q$ is non-degenerate with respect to the range of $h$, then the above calculation shows that the Lie brackets of the fields~\rf{21} (with perhaps several fields of the same form as the first one) generate the fields of the form
\be\la{27}
\vek 0 {X_j}\,,\quad j=1,\dots n\,,
\ee
where $X_1,\dots, X_n\in \gl$ form a basis of $\gl$.
The computation~\rf{22} now shows that the fields of the form
\be\la{28}
\vek {X_j} {Y_j(z)}
\ee
can also be generated. Together with the fields~\rf{27} they clearly form a basis of $T(TG)$ at each point $(a,z)$, and the proof is finished.
\bb
\noindent
{\it Remarks\,\,} 1. If one replaces the damping term $-\nu z$ in~\rf{dl3} by a more general expression $-\nu Dz$, where $D$ is a positive-definite matrix, interesting new questions arise. We plan to address these in a future work.\\
2. Very recently we learned about the paper~\cite{HerzogMattingly}. The results there could be used (modulo simple modifications) to prove the above theorem and also to say more about the set where the solutions of the Fokker-Planck equation are positive.
\begin{corollary}
When $G$ is compact and $Q$ is non-degenerate with respect to the diffusion matrix $h$ in the sense above, the process~\rf{dl3} (and hence also~\rf{DL2}) is ergodic with respect to a distribution density given by a function which is independent of $a$. In other words, the Lagrangian positions of the ``particles" are uniformly distributed (with respect to the Haar measure) in the limit of infinite time.
\end{corollary}
In our situation this is not hard to prove once the H\"ormander condition is established by following methods in~\cite{Hairer-ergodic, Hairer-Hormander} and~\cite{Khasminskii}.
\bigskip
\noindent
{\it Remark.}
One should also be able to prove convergence to the equilibrium measure following the methods of~\cite{Villani}, but we will not pursue this direction here. It is perhaps worth recalling that in general there is a difference between uniqueness of the ergodic measure and convergence to equilibrium. A simple example in our context here is provided by the equation
\be\la{ce1}
f_t+f_{x_1}=\pul f_{x_2 x_2}\,,
\ee
considered in the 2d torus.
Note that this equation does not satisfy the parabolic H\"ormander condition, while its spatial part satisfies the elliptic H\"ormander condition.
\subsection{Constrained diffusion in the momentum space}\label{constr33}
The Euler-Arnold equation~\rf{EA} leaves invariant the co-adjoint orbits~\rf{orb} and also the energy levels $\{H=E\}$. It is therefore of interest to consider perturbations by noise which ``respects" some of the constraints. For example, one can add noise respecting the co-adjoint orbit, but not the energy levels. An example of this situation (in the presence of non-holonomic constraints) is considered in~\cite{Ratiu}. It is closely related to stochastic processes on co-adjoint orbits introduced by Bismut~\cite{Bismut}. One can also consider noise which preserves the energy levels but not necessarily the co-adjoint orbits, or noise which preserves both the co-adjoint orbits and the energy levels.
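As a simple numerical illustration of these invariance properties (a sketch not taken from the text, assuming $G=SO(3)$ with a diagonal inertia tensor $I$; in the identification of $\mathfrak{so}(3)^*$ with $\R^3$ the Euler-Arnold equation reads $\dot y=y\times I^{-1}y$), one can check that both the Casimir $|y|^2$, which labels the co-adjoint orbit, and the energy $H(y)=\pul\, y\cdot I^{-1}y$ are conserved up to discretization error:

```python
# Numerical sketch (illustration only): rigid-body Euler-Arnold equation
# dy/dt = y x (I^{-1} y) on so(3)* ~ R^3, with a diagonal inertia tensor I.
# Both |y|^2 (the co-adjoint orbit) and H = (1/2) y . I^{-1} y are invariant.

I = (1.0, 2.0, 3.0)                       # principal moments of inertia (assumed)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def rhs(y):
    w = tuple(y[k] / I[k] for k in range(3))   # angular velocity I^{-1} y
    return cross(y, w)

def rk4_step(y, h):
    ax = lambda u, c, v: tuple(u[k] + c*v[k] for k in range(3))
    k1 = rhs(y); k2 = rhs(ax(y, 0.5*h, k1))
    k3 = rhs(ax(y, 0.5*h, k2)); k4 = rhs(ax(y, h, k3))
    return tuple(y[k] + (h/6.0)*(k1[k] + 2*k2[k] + 2*k3[k] + k4[k]) for k in range(3))

def casimir(y):                           # |y|^2, constant on co-adjoint orbits
    return sum(v*v for v in y)

def energy(y):                            # H(y) = (1/2) y . I^{-1} y
    return 0.5 * sum(y[k]*y[k]/I[k] for k in range(3))

y0 = (0.3, -0.7, 1.1)
y = y0
for _ in range(10000):                    # integrate up to t = 10
    y = rk4_step(y, 1e-3)

print(abs(casimir(y) - casimir(y0)) < 1e-6,
      abs(energy(y) - energy(y0)) < 1e-6)
```

Noise respecting both constraints would then correspond to a diffusion along the curves $\{|y|^2={\rm const}\}\cap\{H={\rm const}\}$.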
We wish to include all these situations in our considerations, and therefore we will consider the following scenario. We assume that we are given a Hamiltonian $H=H(y)$ and a manifold $M\subset\gls$ which is invariant under the evolution by the Euler-Arnold equation~\rf{EA}. If $M$ is given locally as a non-degenerate level set of some conserved quantities $\phi_1,\dots, \phi_r$ (in the sense that $\{H,\phi_k\}=0\,,\,\,k=1,\dots,r\,$),
\be\la{4-1}
M=\{y\in\gls\,\,,\,\,\phi_1(y)=c_1,\phi_2(y)=c_2,\dots \phi_r(y)=c_r\}\qquad \hbox{locally in $y$},
\ee
there is a natural measure $m$ on $M$ (invariant for the Hamiltonian flow) which is generated by the volume in $\gls$ (given by our metric there) and the conserved quantities by first restricting the volume measure in $\gls$ to
\be\la{4-2}
M_\ve=\{y\in\gls\,,\,\phi_1(y)\in(c_1-\ve,c_1+\ve),\phi_2(y)\in(c_2-\ve,c_2+\ve),\dots, \phi_r(y)\in(c_r-\ve,c_r+\ve)\}
\ee
then normalizing the restricted measure by the factor $\frac 1{(2\ve)^r}$ and taking the limit $\ve\to 0_+$. In the case $r=1$ we have
\be\la{4-3}
m= \frac{1}{|\nabla\phi_1|}\,\,\HH^{n-1}|_M\,,
\ee
where $\HH^{n-1}$ is the $n-1$ dimensional Hausdorff measure generated by our metric, and the gradient and its norm in the formula are also calculated with our given metric. For general $r$ we have similar formulae; the corresponding expression can be seen easily from the co-area formula, for example. However, the above definition via the limit $\ve\to0_+$ is perhaps more natural, as it relies only on the objects which are ``intrinsic" from the point of view of the definition of $m$: the underlying measure in $\gls$ and the constraints $\phi_k$. (The proof that the limit as $\ve\to 0_+$ is well-defined is standard and is left to the interested reader.)
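For example, when $r=1$ and $\phi_1(y)=\pul|y|^2$, so that $M$ is the sphere of radius $\sqrt{2c_1}$, formula~\rf{4-3} gives
\[
m=\frac{1}{\sqrt{2c_1}}\,\HH^{n-1}|_M\,,
\]
i.\ e.\ a constant multiple of the surface measure, which is the familiar microcanonical measure on the energy level.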
As the Hamiltonian evolution in the phase-space $T^*G\sim G\times \gls$ preserves the Liouville measure, which, in the $(a,y)$ coordinates defined by~\rf{1}, is the product of the Haar measure on $G$ and the canonical volume measure in $\gls$,
we see that the product of the (left) Haar measure $h_G$ on $G$ and $m$ is an invariant measure for the Hamiltonian evolution in the subset of $T^*G$ given by $G\times M$ in the $(a,y)$ coordinates. If the group $G$ is not unimodular\footnote{Recall that a group is unimodular if the notions of left invariant and right invariant Haar measures coincide. This is the same as demanding that the maps $y\to (\Ad a)^* y$ preserve the volume in $\gls$, i.\ e.\ have determinant $1$.}, the measure $m$ may not be preserved by the Euler-Arnold equation~\rf{EA} in $\gls$, which represents the symplectic reduction of the original full system. This is because while the
vector field
\be\la{4-4}
v^ke_k+b_k\frac{\partial}{\partial y_k}
\ee
in the Liouville equation~\rf{Liouville} is div-free in $T^*G$, its two parts may not be div-free in $G$ or $\gls$ respectively, unless the group is unimodular.
The Liouville equation for the evolution in $G\times M$ is the same as~\rf{Liouville}:
\be\la{LiouvilleM}
f_t+ v^ke_kf+b_k\frac{\partial f}{\partial y_k}=0\,,
\ee
where $f=f(a,y)$ now denotes the density with respect to the measure $h_G\times m$ (where $h_G$ is again the left Haar measure on $G$).
We now consider stochastic perturbations of the Liouville equation~\rf{LiouvilleM} on $G\times M$. As in the Langevin-type equations, the random forces will act only in the $y-$component, so that the kinematic equation $(a^{-1}\dot a)^k=v^k$ is left unchanged.
We will demand that the stochastic term will also leave invariant the measure $h_G\times m$, and as it acts only in the $y-$variable, it then must leave invariant the measure $m$.
There is more than one way in which noise can be introduced in a reasonable way into~\rf{LiouvilleM}. For example, if $V$ is a vector field (with coordinates $V_k$) tangent to $M$ which generates a flux on $M$ preserving the measure $m$, one can replace the equation~\rf{H} by
\be\la{H2}
d y_k = \{H,y_k\}\,dt+\ve \,V_k\circ dW\,,
\ee
where $W$ is the standard $1d$ Wiener process and $\circ$ indicates, as usual, that the corresponding stochastic integrals should be taken in the Stratonovich sense.\footnote{Note that with It\^o integration the particle trajectories might not stay in $M$.}
The corresponding Fokker-Planck equation is given by
\be\la{fpm1}
f_t+ v^ke_kf+b_k\frac{\partial f}{\partial y_k}=\frac{\ve^2} 2(D_V^*)^2f\,,
\ee
where $D_V^*$ is adjoint to $D_V=V_k\frac{\partial}{\partial y_k} $ with respect to the measure $m$.
As the flux by $V$ preserves $m$, we in fact have $D_V^*=-D_V$. In this case the operators $D_V^2$ and $(D_V^*)^2$ coincide and arise from the functional
\be\la{funcV}
\int_M \pul |D_V f|^2\, d\,m\,.
\ee
This is in some sense the ``minimal non-trivial noise" model, and it might be of interest in some situations.
Here we will consider the situation when the noise is non-degenerate in $M$, leaving the interesting case of degenerate noise in $M$ to future studies. Our motivation is the following. For $\ve>0$ we consider the usual Brownian motion in $\gls$, but restricted to the set $M_\ve$ above, with the understanding that the trajectories ``reflect back" at the boundary (we can think of the action of some control mechanism); this corresponds to the Neumann condition at the boundary for the corresponding Fokker-Planck equation, which is just the heat equation in this case. As $\ve\to 0_+$, a good model for the limiting process on $M$ is given by an operator constructed as follows.
First, we take the metric induced on $M$ by the given metric in $\gls$. Assume the metric is given by $\tilde g_{ij}$ in some local coordinates. Assume the measure $m$ is given as $m(x)\,dx$ in these coordinates, where $m(x)$ is a (smooth) function. Denoting by $\tilde g$ the determinant of $\tilde g_{ij}$, the volume element given by $\tilde g_{ij}$ in our coordinates is $\sqrt{\tilde g}\,dx$. We then define a new metric
\be\la{newh}
h_{ij}=\varkappa \tilde g_{ij}
\ee
so that the volume element $\sqrt{h}\,dx$ satisfies
\be\la{hdef}
\sqrt{h}\,dx=m(x)\,dx\,\,.
\ee
Explicitly, since $\sqrt h=\varkappa^{d/2}\sqrt{\tilde g}$ with $d=\dim M$, this amounts to taking $\varkappa=\left(m(x)/\sqrt{\tilde g(x)}\right)^{2/d}$.
Then we take the generator of our process to be the Laplace operator on $M$ with respect to the metric $h$. We will denote this operator by $L_M$.
Our Fokker-Planck equation then will be
\be\la{fpm2}
f_t+ v^ke_kf+b_k\frac{\partial f}{\partial y_k}=\frac{\ve^2}2 L_M f\,.
\ee
We will be interested in ergodicity properties of the process given by this equation.
\bb
In the remainder of this subsection we will assume again~\rf{HH}, i.\ e.\ the Hamiltonian $H$ is quadratic (and positive definite). We can then ``lower the indices'' and work with $TG$ and $\gl$ rather than with $T^*G$ and $\gls$. We will denote by $Z$ the image of $M$ in $\gl$ under the ``lowering indices'' map, and will denote the elements of $Z\subset\gl$ by $z$, with coordinates $z^k$. Similarly to~\rf{vey} we have
$z^k=v^k$.
The Fokker-Planck equation~\rf{fpm2}, now considered on $G\times Z$ becomes
\be\la{fpz}
f_t+z^ke_k f + q^k(z,z)\frac{\partial f}{\partial z^k} = \frac{\ve^2}{2}Lf \,\,,
\ee
where $q^k$ is defined by~\rf{symq}, and $L$ is the operator on $Z$ corresponding to $L_M$. It is of course again a Laplacian for some metric on $Z$ (which is conformally equivalent to the metric on $Z$ induced by the underlying metric in $\gl$).
Let us now consider conditions under which the operators corresponding to \rf{fpz} or \rf{fpm2} satisfy the usual H\"ormander commutator condition for hypoellipticity.
\begin{definition}\label{def1}
A p-hull\,\,\footnote{Here p stands for parabolic, as the definition is tied to the parabolic H\"ormander condition.} of a subset $S\subset \gl$ is the smallest Lie sub-algebra $\mathfrak h\subset\gl$ with the following properties:
\begin{itemize}
\item[(i)] $\mathfrak h$ contains the set $S-S=\{s_1-s_2\,:\,\,s_1,s_2\in S\}$\,,
\item [(ii)] $\mathfrak h$ is invariant under the mappings $\Ad s\colon \,\,z\to [s,z]$ for each $s\in S$.
\end{itemize}
\end{definition}
\noindent
Remarks:
\sm
\noindent
1. The p-hull will be relevant in the context of the evolution equation~\rf{fpz}. For the ``spatial part" of the operator~\rf{fpz}, obtained by omitting the term $f_t$, the relevant ``hull" is simply the Lie algebra generated by $S$.
\sm
\noindent
2. Condition (i) in the definition already implies that $\mathfrak h$ is invariant under the mapping $\Ad (s_1-s_2)$ for any $s_1,s_2\in S$. Therefore in (ii) we can require invariance of $\mathfrak h$ under $\Ad s_0$ for just one fixed $s_0\in S$ (and, given (i), the definition will be independent of the choice of $s_0$).
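To make Definition~\ref{def1} concrete, the following sketch (our illustration; the choice $\gl=\mathfrak{so}(3)$, realized by $3\times 3$ matrices, and $S=\{L_x,L_y\}$ is ours) computes the p-hull by a closure loop: starting from a spanning set of $S-S$, it repeatedly adds brackets among the current generators and brackets with elements of $S$ until the dimension stabilizes:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def bracket(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

def add_to_span(basis, A, tol=1e-9):
    # Gram-Schmidt on the 9 coordinates of a 3x3 matrix; returns True if A
    # enlarges the span of `basis`
    v = [A[i][j] for i in range(3) for j in range(3)]
    for b in basis:
        c = sum(x * y for x, y in zip(v, b))
        v = [x - c * y for x, y in zip(v, b)]
    n = sum(x * x for x in v) ** 0.5
    if n > tol:
        basis.append([x / n for x in v])
        return True
    return False

def p_hull_dim(S):
    # smallest subspace containing S - S and closed under brackets among
    # its elements and under ad(s) for every s in S (Definition 1)
    basis, gens = [], []
    for s1 in S:
        for s2 in S:
            d = [[s1[i][j] - s2[i][j] for j in range(3)] for i in range(3)]
            if add_to_span(basis, d):
                gens.append(d)
    changed = True
    while changed:
        changed = False
        cands = [bracket(a, b) for a in gens for b in gens]
        cands += [bracket(s, g) for s in S for g in gens]
        for c in cands:
            if add_to_span(basis, c):
                gens.append(c)
                changed = True
    return len(basis)

Lx = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]   # so(3) rotation generators
Ly = [[0, 0, 1], [0, 0, 0], [-1, 0, 0]]
dim = p_hull_dim([Lx, Ly])
```

Here the loop produces $L_x-L_y$, then $[L_x,L_x-L_y]=-L_z$, then $[L_x-L_y,L_z]$, after which the span is all of the three-dimensional algebra $\mathfrak{so}(3)$, so the p-hull of this $S$ is the whole Lie algebra.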
The main result of this section is the following:
\begin{theorem}\label{thm2}
In the notation introduced above, assume that $M$ is a smooth analytic submanifold of $\gls$. Then the operator on $G\times M$ corresponding to~\rf{fpm2} (or, equivalently, the operator on $G\times Z$ corresponding to~\rf{fpz}) satisfies the H\"ormander condition if and only if the p-hull of $Z$ coincides with $\gl$.
\end{theorem}
\noindent
{\it Proof:}
\noindent
Let us first show that the p-hull condition is necessary for the H\"ormander condition. One can see this from the Lie bracket calculations below, but it is instructive to verify it directly.
Assume $\mathfrak h\,$ is a proper Lie subalgebra of $\gl$ containing $Z-Z$ for which we can find $z_0\in Z\setminus\hl$ such that $\hl$ is invariant under $\Ad z_0$. Let us set
\be\la{edef}
e=z_0^ke_k\,\,.
\ee
The Lie algebra $\hl$ defines (locally) a foliation $\mathcal F$ of $G$ into cosets $aH$, where $H$ is the (local) Lie subgroup of $G$ corresponding to $\hl$. The main point now is that the invariance of $\hl$ under $\Ad z_0$ implies that the flow given by the equation
\be\la{eq1}
a^{-1}\dot a =e
\ee
preserves the foliation. (Another formulation of this statement could be that the equation~\rf{eq1} ``descends" to $G/H$.)
This means that the perturbations given by the stochastic terms in~\rf{fpz} will still preserve the foliation (e.\ g.\ by the Stroock-Varadhan Theorem~\cite{StroockVaradhan}) and it is not hard to conclude that the set of points reachable by the corresponding process cannot be open.
For the proof that the p-hull condition is sufficient, we write our operator (locally) in the form
\be\la{op}
f_t+ \XX_0 f- \sum_{j=1}^m \XX_j^2 f\,,
\ee
where $m$ is the dimension of $Z$ (which is of course the same as the dimension of $M$) and $\XX_j$ are suitable vector fields on $G\times Z$. All these fields will be of the form~\rf{fields}, and we will use the same notation as in~\rf{fields} in what follows.
We will be working locally near a point $(a,z_0)\in G\times Z$.
We choose $\XX_j\,,j=1,\dots, m$ so that
\be\la{f1}
\XX_j=\vek 0 {Y_j}\,,\qquad j=1,\dots, m\,.
\ee
where $Y_j$ are analytic near $z_0$ and $Y_j(z)$ form a basis of $T_z Z$ for each $z$ close to $z_0$. The field $\XX_0$ will be of the form
\be\la{f2}
\XX_0=\vek z V\,,
\ee
where $V$ is a vector field on $Z$ (analytic near $z_0$).
Let us consider local analytic vector fields on $G\times Z$ near $(a,z_0)$ of the form
\be\la{ourfields}
\XX(a,z)=\vek {X(z)}{Y(z)}
\ee
as a (Lie) module $\AAA$ over the set of analytic functions of $z\in Z$. (Recall that we assume that $Z$ is analytic.)
Let $\MM$ be the minimal (Lie) submodule of $\AAA$ satisfying the following requirements:
\begin{itemize}
\item[(a)] $\MM$ contains $\XX_1,\XX_2,\dots, \XX_m$\,, and
\item[(b)] $\MM$ is invariant under the map $\Ad \XX_0\colon \XX\to [\XX_0, \XX]$\,, where $[\,\cdot\,,\,\cdot\,]$ denotes the Lie bracket for vector fields.
\end{itemize}
The parabolic H\"ormander condition at $(a,z_0)$ for the fields $\XX_0,\XX_1,\dots, \XX_m$ then is that
\be\la{pH}
\{\XX(a,z_0)\,,\,\,\XX\in \MM\} = T_{(a,z_0)}( G\times Z)\,\,.
\ee
For $\XX\in \AAA$ we will denote by $\pi \XX\in\gl$ the projection to the first
component in the notation~\rf{fields}, i.\ e.\
\be\la{pidef}
\pi\vek X Y = X\,.
\ee
Let
\be\la{Mdef}
M=\pi \MM\,.
\ee
As $\MM$ contains the vector fields~\rf{f1}, the condition~\rf{pH} is equivalent to
\be\la{pH1}
M_{z_0}=\{X(z_0)\,,\, X\in M\}= \gl\,.
\ee
Using~\rf{lbc} and the fact that the fields $\XX_1,\dots, \XX_m$ belong to $\MM$, we see that $M$ has the following properties.
\be\la{p0}
\hbox{\sl If $Y$ is an analytic vector field on $Z$ (defined locally near $z_0$), then $Y\in M$.}
\ee
This follows by taking the Lie bracket of $\vek 0 Y$ and $\XX_0$.
\be\la{p1}
\begin{array}{c}
\hbox{\sl If $A\in M$ and $Y$ is an analytic vector field on $Z$ (defined locally near $z_0$),} \\ \hbox{\sl then $D_Y A$ is in $M$.}
\end{array}
\ee
This follows by taking the Lie bracket of $\vek 0 Y$ and an $\XX$ with $\pi \XX=A$.
\be\la{p2}
\hbox{\sl If $A,B\in M$, then $A\wedge B$ is in $M$.}
\ee
This follows by taking the Lie bracket of the fields $\XX$ and $\YY$ with $\pi \XX=A$ and $\pi\YY=B$ and then using~\rf{p1}.
\be\la{p3}
\hbox{\sl If $A\in M$, then $z\wedge A$ is in $M$.}
\ee
This follows by taking the Lie bracket of $\XX$ with $\pi \XX= A$ with $\XX_0$ and using~\rf{p0}, \rf{p1}, and \rf{p2}.
Taking these properties of $M$ into account, it is clear that the proof of the theorem will be finished if we show that
\be\la{ZmZ}
Z-Z\subset M_{z_0}\,.
\ee
Let $l$ be a linear function in $\gl$ which vanishes on $M_{z_0}$. As $Z$ is analytic, the function $l$ considered as a function on the manifold $Z$ will be analytic. The property~\rf{p1} of $M$ implies that the derivatives of all orders $\ge 1$ of $l$ at $z_0$ vanish, and therefore $l$ must be constant on $Z$. In particular, $l$ must vanish on $Z-Z$. We see that no point of $Z-Z$ can be separated from the subspace $M_{z_0}$ by a linear function, and~\rf{ZmZ} follows. This finishes the proof of the theorem.
\begin{corollary}
If the assumptions of Theorem~\ref{thm2} are satisfied and the group $G$ is compact, then any solution of the Fokker-Planck equation~\rf{fpm2} approaches a constant. In particular, the system is ergodic for the (stochastic) dynamics, with the unique ergodic measure given by the constant density $f$.
\end{corollary}
\noindent
{\it Proof:}
We note that
\be\la{decay1}
\frac{d}{d\, t}\int_{G\times Z} f^2(a,z,t) \,da\,m(dz)=-\ve^2\int_{G\times Z}|\nabla_z f(a,z,t)|^2\,da\,m(dz)\,,
\ee
where we take $Z$ with the metric defining $L$. By regularity which follows from the H\"ormander condition we can consider the $\Omega$-limit set $\Omega(f_0)$ of the evolution starting with $f_0$, and it consists of smooth functions. Moreover the integral on the right of~\rf{decay1} has to vanish identically for each function in $\Omega(f_0)$, by the usual Lyapunov-function-type arguments. This means that any function in $\Omega(f_0)$ is constant in $z$ and hence solves the equation
\be\la{decay2}
f_t+z^ke_kf=0\,.
\ee
It is now easy to see that our assumptions imply that such $f$ is constant also in $a$.
\subsection{A calculation for a non-compact group}
We now consider the situation in the previous subsection for the special case $G=\R^n$ and a one-dimensional manifold $Z\subset \gl\sim\R^n$. In other words, $Z$ will be an analytic curve in $\R^n$. We will see that analyticity is not really needed for the calculation below, but we keep it as an assumption, so that we have the H\"ormander condition for the Fokker-Planck equation under the assumptions of Theorem~\ref{thm2}. We will assume that $Z$ is equipped with a measure $m$, the density of which is also analytic with respect to the parameter which gives an analytic parametrization of $Z$. We will re-parametrize $Z$ so that it is given by an analytic periodic function
\be\la{nc1}
\gamma\colon \R \to Z\subset \R^n
\ee
with minimal period $l$ and, in addition, the measure (as measured by $m$) of a segment on the curve between $\gamma(s_1)$ and $\gamma(s_2)$ for some $0\le s_1<s_2<l$ will be given by $s_2-s_1$. Sometimes we will also write
\be\la{nc2}
\gamma(s)=z(s)\,,
\ee
with slight abuse of notation which will hopefully not cause any confusion.
In this special case the Fokker-Planck equation discussed in the previous section, written in the variables $a=(a^1,\dots, a^n)\in G$ and $s$ (which parametrizes $Z$), is
\be\la{fps}
f_t+z^k(s)\frac{\partial f}{\partial a^k} = \frac{\ve^2}2 \frac{\partial ^2 f}{\partial s^2}\,,
\ee
where $f=f(a^1,\dots,a^n,s,t)\,$ is periodic in $s$, with period $l$.
The p-hull condition from Definition~\ref{def1} is that $Z-Z$ generates $\R^n$.
We are interested in the long-time behavior of the solutions of~\rf{fps}. We will assume that the p-hull condition is satisfied. It is easy to see that the case when the condition is not satisfied can be reduced to this case by a suitable choice of variable.\footnote{Here and below this is of course meant only in the context of the example we are considering in this subsection. }
We note that the change of variables $a^k\to a^k - z^k_0t$ for some $z_0\in\R^n$ is equivalent to shifting $Z$ to $Z-z_0$. We can therefore assume without loss of generality that
\be\la{nc3}
\int_0^l \gamma(s)\,ds = \int_Z z\,m(dz)=0\,.
\ee
This condition enables us to write
\be\la{phidef}
\gamma(s)=\vf''(s)
\ee
for some periodic (analytic) $\vf\colon \R\to\R^n$.
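Concretely, $\vf$ can be obtained from $\gamma$ by dividing Fourier coefficients by $-k^2$; the zero mode is absent by~\rf{nc3}, and the remaining affine freedom is fixed by normalizing $\vf$ to zero mean. A short numerical sketch (our illustration; the sample curve below is an arbitrary choice):

```python
import numpy as np

def second_antiderivative(gamma, period):
    # gamma: (n, d) samples of a zero-mean periodic curve on an equispaced
    # grid; returns the zero-mean periodic phi with phi'' = gamma
    n = gamma.shape[0]
    k = np.fft.fftfreq(n, d=period / n) * 2 * np.pi   # wave numbers
    c = np.fft.fft(gamma, axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        cphi = c / (-k[:, None] ** 2)
    cphi[0] = 0.0               # zero mode: fixes the free constant of phi
    return np.real(np.fft.ifft(cphi, axis=0))

period = 2 * np.pi
s = np.linspace(0.0, period, 256, endpoint=False)
gamma = np.stack([np.cos(s), np.sin(2 * s)], axis=1)  # zero mean over a period
phi = second_antiderivative(gamma, period)
```

For the sample curve $\gamma(s)=(\cos s,\sin 2s)$ this returns $\vf(s)=(-\cos s,\,-\tfrac14\sin 2s)$ up to round-off.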
An important role will be played by the matrix
\be\la{covm}
\Sigma_{kl}=\frac 1 l\int_0^l \vf'_k(s)\vf'_l(s)\,ds\,.
\ee
\vbox{
\begin{proposition}\label{prop1}
Assume~\rf{nc3} (which can always be achieved by a change of variables $a\to a-z_0t$) and let $\Sigma_{kl}$ be defined by~\rf{covm}. For any compactly supported initial density $f_0=f_0(a,s)$ (normalized to total mass one) the quantity
\be\la{limf}
a\mapsto {t^{\frac n2}} \int_0^l f(\sqrt{t}\, a, s,t)\,ds
\ee
converges as $t\to\infty$ (in distribution) to the density of the normal distribution with average $0$ and covariance matrix $\frac{4}{\ve^2}\Sigma_{kl}$.
In other words, the distribution of the positions at time $t$ of trajectories starting in some compact region will approach (after re-scaling) the same distribution as the diffusion with covariance matrix $\frac{4}{\ve^2}\Sigma_{kl}$.
\end{proposition}
}
\noindent{\it Proof:}
\noindent
We will work with the corresponding stochastic ODE
\be\la{sODE}
\begin{array}{rcl}
\dot a & = & \gamma(s)\\
\dot s & = & \ve \dot w\,,
\end{array}
\ee
where $w(t)$ is the standard one-dimensional Wiener process starting at the origin.
Our task reduces to evaluating
\be\la{integral}
a(t)-a(0)=\int_0^t \gamma(\ve w(t'))\,dt'=\int_0^t\vf''(\ve w(t'))\,dt'\,.
\ee
We will evaluate the integral by a standard procedure based on the martingale version of the central limit theorem. We only sketch the main steps.
By It\^o formula we have
\be\la{nc4}
\vf(\ve w(t))-\vf(\ve w(0))=\int_0^t \ve\vf'(\ve w(t'))dw(t') +\int_0^t\frac{\ve^2}2\vf''(\ve w(t'))\,dt'\,.
\ee
We re-write this as
\be\la{nc5}
\frac1{\sqrt{t}} \int_0^t \gamma(\ve w(t'))\,dt'=\int_0^t\frac {2}{\ve\sqrt {t}}\vf'(\ve w(t'))(-dw(t'))+\frac2{\ve^2\sqrt{t}} \left(\vf(\ve w(t))-\vf(\ve w(0))\right)\,.
\ee
The last term on the right clearly approaches zero for $t\to\infty$, as $\vf$ is bounded.
The key point now is to use a martingale version of the central limit theorem (such as, for example, Theorem 3.2 on page 58 in~\cite{Hall}) to get a good asymptotics for the integral on the right. The covariance matrix for that integral generated along a trajectory $w(t')$ is
\be\la{var}
\frac {4}{\ve^2 t} \int_0^t \vf'_k(\ve w(t'))\vf'_l(\ve w(t'))\,dt'\,.
\ee
For large times $t'$ the distribution of the variable $\ve w(t')$ taken mod $l$ will be approaching the uniform distribution in $[0,l)$ and therefore it is not hard to see that for the purposes of our calculation we can replace the random quantity~\rf{var} by a deterministic quantity given by
\be\la{var2}
\frac {4}{\ve^2l}\int_0^l \vf'_k(s)\vf'_l(s)\,ds= \frac{4}{\ve^2}\Sigma_{kl}\,.
\ee
The claim of the proposition now essentially follows from the central limit theorem.
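Proposition~\ref{prop1} can also be checked by direct simulation of~\rf{sODE}. The sketch below (our illustration) takes the unit circle $\gamma(s)=(\cos s,\sin s)$ in $\R^2$, so $l=2\pi$, $\vf=-\gamma$ and $\Sigma=\frac12 I$, and with $\ve=1$ the predicted limit covariance of $a(t)/\sqrt t$ is $\frac{4}{\ve^2}\Sigma=2I$; all numerical parameters are our choices:

```python
import numpy as np

rng = np.random.default_rng(0)
eps, dt, nsteps, npaths = 1.0, 0.1, 4000, 400        # total time t = 400

# gamma(s) = (cos s, sin s): here phi = -gamma solves phi'' = gamma, so
# Sigma = (1/(2 pi)) int diag(sin^2 s, cos^2 s) ds = I/2 and the predicted
# covariance of a(t)/sqrt(t) is (4/eps^2) Sigma = 2 I
grid = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
sigma11 = np.mean(np.sin(grid) ** 2)                 # quadrature for Sigma_11

dW = rng.normal(0.0, np.sqrt(dt), size=(npaths, nsteps))
sv = eps * np.cumsum(dW, axis=1)                     # s(t) = eps w(t), s(0) = 0
av = np.stack([np.cos(sv).sum(axis=1),
               np.sin(sv).sum(axis=1)], axis=1) * dt # Euler sum for a(t)
t_total = nsteps * dt
cov = np.cov((av / np.sqrt(t_total)).T)              # empirical 2x2 covariance
```

With these parameters the empirical covariance matrix comes out close to $2I$, in agreement with the proposition.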
\bigskip
\bigskip
\centerline{\bf Acknowledgement}
\smallskip
\noindent
We thank Jonathan Mattingly for an illuminating discussion.\\
The research was supported in part by grants DMS 1362467 and DMS 1159376 from the National Science Foundation.
\bigskip
In \cite{Wirth2008} the second author considered the linear Cauchy problem for a damped wave equation with time-periodic dissipation term
$b(t)$,
\begin{equation}
\label{eq:CPWirth}
\begin{cases}
u_{tt}-\Delta u + 2b(t)u_t=0,\\
u(0,x)=u_0(x), \quad u_t(0,x)=u_1(x),
\end{cases}
\end{equation}
and proved that the solution to \eqref{eq:CPWirth} satisfies the well-known Matsumura-type estimate obtained for constant dissipation by A.~Matsumura in \cite{Mat76}, that is
\begin{equation}
\label{eq:matzumuraestimate}
\norm{\partial_t^k\nabla^j u(t,\cdot)}_{L^2}\leq C (1+t)^{-\frac{j}{2}-k}(\|u_0\|_{H^{j+k}}+\|u_1\|_{H^{j+k-1}}),
\end{equation}
for $j,\,k =0,\,1$ and $C$ a positive constant independent of the initial data. In this paper we generalise these results and consider the Cauchy problem
\begin{equation}
\label{eq:CPmain}
\begin{cases}
u_{tt}-\Delta u + 2b(t)u_t+m^2(t)u=0,\\
u(0,x)=u_0(x), \quad u_t(0,x)=u_1(x)
\end{cases}
\end{equation}
with positive time-periodic dissipation $b(t)$ and mass $m(t)$. We study how the presence of a periodic mass term influences the decay estimates for the solution to \eqref{eq:CPmain}.
Let us first explain why such a problem is interesting and how it relates to known results from the literature.
There exist many papers in which decay estimates for the solution to wave models of the form
\eqref{eq:CPmain} are investigated under different assumptions on the coefficients $b(t)$ and $m(t)$.
The survey articles \cite{Rei04} and \cite{Wir10} provide an overview of results; moreover, we refer to the works of M.~Reissig and K.~Yagdjian~\cite{ReiYa00}, of F.~Hirosawa and M.~Reissig~\cite{HiRe06}, of M.~Reissig and J.~Smith~\cite{ReiSm05}, as well as the papers of the second author~\cite{Wirth2006}, \cite{Wirth2007}. In the latter two papers a classification of dissipation terms as \textit{non-effective} or \textit{effective} is introduced, which distinguishes the dissipation terms according to their strength and influence on the large-time behaviour of solutions. In all these results a control on the amount of oscillations present in the coefficients is essential.
To understand this and the meaning of this classification we consider the Cauchy problem \eqref{eq:CPWirth} with the coefficient $b$ assumed to be a bounded, non-negative, sufficiently smooth function satisfying a condition of the form
\begin{equation}
\label{eq:oscillations}
|\partial_t^k b(t)|\leq C_k \frac{b(t)}{(1+t)^k}\qquad \text{ for } k=1,\,2.
\end{equation}
Then, we distinguish between two cases. First, if
\begin{equation}
\limsup_{t\to\infty} tb(t)<1,
\end{equation}
we say that $b$ is \textit{non-effective}, meaning that the solution behaves asymptotically like a free wave multiplied by a decay factor, that is, there exists a solution $v=v(t,x)$ to the wave equation $v_{tt} - \Delta v = 0$ such that
\[ \begin{pmatrix}
\nabla u(t,x) \\ u_t(t,x)
\end{pmatrix} \sim \frac{1}{\lambda(t)}\begin{pmatrix}
\nabla v(t,x) \\ v_t(t,x)
\end{pmatrix}, \qquad t\to \infty, \]
the asymptotic equivalence understood in an appropriate $L^p$-sense and with $\lambda=\lambda(t)$ given as
\[\lambda(t)=\exp\Big(\frac{1}{2}\int_0^t 2b(\tau)\,d\tau\Big)=\exp\Big(\int_0^t b(\tau)\,d\tau\Big).\]
The initial data to the free wave $v=v(t,x)$ are uniquely determined by the solution $u=u(t,x)$ and thus by the initial data $u_0$ and $u_1$. Thus, a modified form of scattering is valid. On the other hand, if
\[ \lim_{t\to \infty} tb(t)=\infty\]
holds true, we say that the dissipation $b$ is \textit{effective}; in this case solutions to the damped wave equation are asymptotically related to solutions $w= w(t,x)$ of the parabolic heat equation $w_t=\Delta w$, i.e.
\[ u(t,x)\sim w(t,x)\]
holds true again in an appropriate $L^p$-sense. This can be made precise in the form of the so-called \textit{diffusion phenomenon} for damped waves; see \cite{Wirth07} for the time-dependent dissipation case or the papers of Nishihara \cite{Nish03} and Narazaki \cite{Nar04} for the case of constant dissipation.
Wave models with mass and dissipation of the form \eqref{eq:CPmain} were considered by the second author and Nunes in \cite{NunesWirth}. This paper provides in particular $L^p$--$L^q$ decay estimates in the non-effective case.
In \cite{DAGR18} the first author considered with M.~D'Abbicco and M.~Reissig the Cauchy problem \eqref{eq:CPmain} in the case in which the damping term is effective and dominates the mass term, i.e. $m(t)=o(b(t))$ as $t\to \infty$, again under control assumptions on the oscillations of the coefficients. In that paper it is shown that under a simple condition on the interaction between~$b(t)$ and~$m(t)$, one can prove that the solution to \eqref{eq:CPmain} satisfies the estimate
\begin{align}
\norm{u(t,\cdot)}_{L^2}
\label{eq:estimateDAGR} & \leq C\,\gamma(t)\,\norm{(u_0,u_1)}_{H^1\times L^2},\\
\intertext{where we define}
\gamma(t)
\label{eq:gamma} & = \exp \left(-\int_0^t \frac{m^2(\tau)}{b(\tau)}\,d\tau \right).
\end{align}
Thus, the decreasing function $\gamma=\gamma(t)$ in~\eqref{eq:gamma} represents the influence on the estimates of the mass term with respect to the damping term. In particular, estimate \eqref{eq:estimateDAGR} shows that the presence of the mass term produces an additional decay which becomes faster as the mass term becomes more influential. In fact, in \cite{GIRNTNP2019} the first author proved an exponential decay in the case of dominant mass, that is
\begin{align}
\norm{u(t,\cdot)}_{L^2}
\label{eq:estimateGir2019} & \leq C\,\exp \left(-\delta\int_0^t b(\tau)\,d\tau \right)\,\norm{(u_0,u_1)}_{H^1\times L^2},
\end{align}
provided that $\liminf_{t\to\infty} m(t)/b(t) > 1/4$.
This latter estimate is almost the same as for the solution to the Cauchy problem for the damped Klein--Gordon model with constant coefficients $b(t)\equiv 1$ and $m(t)\equiv 1$, that is
\begin{equation}
\label{dampedKG}
\begin{cases}
u_{tt}-\Delta u+u_t+u=0,\\
u(0,x)=u_0(x), \quad u_t(0,x)=u_1(x).
\end{cases}
\end{equation}
All these cited papers have in common that they use assumptions on derivatives of the coefficients as in \eqref{eq:oscillations}
to avoid a bad influence of oscillations. That oscillations may have deteriorating influences was shown
for example by K.~Yagdjian in \cite{Yag01} for a wave equation with time-periodic speed of
propagation. In this case (many) solutions have exponentially growing energy. Controlling oscillations is done by requiring estimates for derivatives of the coefficients.
It is clear that for dissipative wave equations oscillations in the positive dissipation term cannot lead to solutions with increasing energy. Therefore, it is interesting to ask whether conditions on derivatives of the coefficient are indeed necessary for proving large-time decay estimates for solutions of \eqref{eq:CPWirth}. A first step to look into that was done in \cite{Wirth2008}, where the author proved that the solution to \eqref{eq:CPWirth} satisfies estimate \eqref{eq:matzumuraestimate} without any condition on the oscillations of $b=b(t)$ provided that $b$ is periodic. This led to the conjecture that estimate \eqref{eq:matzumuraestimate} can be obtained with a general dissipation term $b=b(t)$, with $tb(t)\to\infty$, without further assumptions on derivatives. However, it is still an open problem how to prove such a result.
In the present paper we also avoid assumptions on the derivatives of the coefficients $b(t)$ and $m(t)$, assuming only that they are positive, periodic and of bounded variation. We are going to prove an exponential decay by using the same technique as in \cite{Wirth2008} combined with a perturbation argument for the mass term. We remark that the presence of the mass term simplifies the study of the estimates at small frequencies; in fact, in this zone it is not necessary to use tools of Floquet theory as in the case of vanishing mass: we use only a contradiction argument together with some results of spectral theory of matrices.
The study of decay estimates for the solution to the linear problem \eqref{eq:CPmain} has an important application in the study of global (in time) existence results for the corresponding nonlinear problem
\begin{equation*}
\begin{cases}
u_{tt}-\Delta u + 2b(t)u_t+m^2(t)u=h(t,u),\\
u(0,x)=u_0(x), \quad u_t(0,x)=u_1(x)
\end{cases}
\end{equation*}
with nonlinearity $h(t,u)=(1+\int_0^t 1/b(\tau)\,d\tau)^\omega |u(t,\cdot)|^p$ for some $\omega \in[-1,\infty)$. Such applications can be found
for example in \cite{DA13}, \cite{DALR13} in the purely dissipative case and in \cite{DAGR18,GIR2019,GIRNTNP2019} for equations including mass terms.
The paper is organized as follows: In Section \ref{sec:main results} we give the basic assumptions on the Cauchy problem and we state our main results that are Theorem \ref{th:Thconstant} and Theorem \ref{th:Thperturbed}; in Section \ref{sec:representationsolution} we make considerations and discuss properties of the fundamental solution to \eqref{eq:CPmain} and the associated monodromy operator. In Section \ref{sec:smallfreqconstant} we treat the case of constant mass for small frequencies and we prove a fundamental lemma useful for the proof of the main theorems. Finally in Section \ref{sec:Proofofresults} the main theorems are proved.
\section{Main results}\label{sec:main results}
In this paper we suppose that the coefficient $b=b(t)$ is a non-negative and continuous periodic function of bounded variation, i.e., we assume that its weak derivative is essentially bounded, $b'\in L^{\infty}$. We further suppose that the coefficient $m=m(t)$ is measurable and periodic with the same period. We denote the period of both coefficients by $T$. The first result concerns constant mass terms and provides an exponential decay result.
\begin{theorem}
\label{th:Thconstant}
Suppose $m\equiv m_0\in\R$, $m_0\neq 0$, is constant. There exists $\delta>0$ such that the solution $u=u(t,x)$ to the Cauchy problem \eqref{eq:CPmain} satisfies
\begin{align*}
\| u(t,\cdot)\|_{L^2}&\leq Ce^{-\delta t}(\|u_0\|_{L^2}+\|u_1\|_{H^{-1}}),\\
\| \nabla u(t,\cdot)\|_{L^2}&\leq Ce^{-\delta t}(\|u_0\|_{H^1}+\|u_1\|_{L^2}),\\
\|u_t(t,\cdot)\|_{L^2}&\leq Ce^{-\delta t}(\|u_0\|_{H^1}+\|u_1\|_{L^2}),
\end{align*}
where $\delta$ and $C$ are positive constants depending on the coefficient $b$ and on $m_0$.
\end{theorem}
If the mass term is non-constant, the exponential decay is obtained under a smallness condition for the deviation of the mass-term from a constant.
\begin{theorem}
\label{th:Thperturbed}
Let $m_0\in \R$, $m_0\neq 0$, and let $m_1=m_1(t)$ be a measurable $T$-periodic function such that $\sup_{t\geq 0} |m_1(t)|=1$. Then, there exists $\epsilon>0$ sufficiently small such that the solution to
\begin{equation}
\label{eq:CPepsilon}
\begin{cases}
u_{tt}-\Delta u + 2b(t)u_t+m_\epsilon^2(t)u=0,\\
u(0,x)=u_0(x), \quad u_t(0,x)=u_1(x)
\end{cases}
\end{equation}
with $m_\epsilon^2(t)=m_0^2+\epsilon m_1(t)$ satisfies
\begin{align*}
\| u(t,\cdot)\|_{L^2}&\leq Ce^{-\sigma t}(\|u_0\|_{L^2}+\|u_1\|_{H^{-1}}),\\
\| \nabla u(t,\cdot)\|_{L^2}&\leq Ce^{-\sigma t}(\|u_0\|_{H^1}+\|u_1\|_{L^2}),\\
\|u_t(t,\cdot)\|_{L^2}&\leq Ce^{-\sigma t}(\|u_0\|_{H^1}+\|u_1\|_{L^2}),
\end{align*}
where $\sigma$ and $C$ are positive constants depending on $m_0,\, m_1$, $b$ and $\epsilon$.
\end{theorem}
\begin{remark}
It is still an open problem to determine how large $\epsilon$ can be chosen so that an exponential decay of the energy is still guaranteed. A possible estimate of $\epsilon$ is given in the proof of Theorem \ref{th:Thperturbed}: from estimate \eqref{eq:epsilon} it is clear that the value of $\epsilon$ depends on how large we choose $N$, such that the line $|\xi|=N$ divides the phase space into small and large frequencies. In particular, the value of $N$ depends only on the dissipation and does not depend on the mass term.
\end{remark}
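Both theorems can be observed on a single Fourier mode of \eqref{eq:CPepsilon}, that is, on the ordinary differential equation $\hat u_{tt}+(|\xi|^2+m_\epsilon^2(t))\hat u+2b(t)\hat u_t=0$. The sketch below (our illustration; the sample choices $b(t)=1+\frac12\cos(2\pi t)$, $m_0=1$, $m_1(t)=\cos(2\pi t)$, $\epsilon=0.1$ and $|\xi|=1$ are ours) integrates one mode by a classical Runge--Kutta scheme and exhibits the exponential decay of its energy:

```python
import math

def rhs(t, u, v, xi=1.0, eps=0.1):
    b = 1.0 + 0.5 * math.cos(2 * math.pi * t)        # T-periodic dissipation
    m2 = 1.0 + eps * math.cos(2 * math.pi * t)       # m_eps^2 = m_0^2 + eps m_1
    return v, -(xi * xi + m2) * u - 2.0 * b * v

def solve(t_end=20.0, dt=1e-3):
    u, v, t = 1.0, 0.0, 0.0                          # data u(0)=1, u_t(0)=0
    for _ in range(int(round(t_end / dt))):          # classical RK4 steps
        k1 = rhs(t, u, v)
        k2 = rhs(t + dt / 2, u + dt / 2 * k1[0], v + dt / 2 * k1[1])
        k3 = rhs(t + dt / 2, u + dt / 2 * k2[0], v + dt / 2 * k2[1])
        k4 = rhs(t + dt, u + dt * k3[0], v + dt * k3[1])
        u += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return u, v

uT, vT = solve()
energy0 = 0.0 ** 2 + (1.0 + 1.0) * 1.0 ** 2          # |u_t|^2 + (xi^2+m_0^2)|u|^2
energyT = vT ** 2 + 2.0 * uT ** 2
```

The mode energy drops by many orders of magnitude over a few multiples of the period, consistent with the exponential rates asserted in Theorems~\ref{th:Thconstant} and~\ref{th:Thperturbed}.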
\section{Representation of solution} \label{sec:representationsolution}
In a first step we derive properties of the representation of solutions for the Cauchy problem
\begin{equation}
\label{eq:CPconstant}
u_{tt}-\Delta u + 2b(t)u_t+m^2(t)u=0, \qquad u(0,x)=u_0(x), \qquad u_t(0,x)=u_1(x),
\end{equation}
with $b=b(t)\geq 0$ and $m=m(t)\geq 0$ both periodic of period $T$. We denote the mean value of $b(t)$ as
\[ \beta=\frac{1}{T}\int_0^T b(t)\,dt. \]
A partial Fourier transform with respect to the spatial variables reduces the problem to an ordinary differential equation
\begin{equation}
\label{eq:fouriertransformed}
\hat{u}_{tt}+\xii^2 \hat{u}+2b(t)\hat{u}_t+m^2(t)\hat{u}=0,
\end{equation}
parameterised by $\xii\in\R$. To reformulate this as a first-order system, we introduce the symbol $\langle \xi \rangle_{m(t)}:=\sqrt{\xii^2+m^2(t)}\,$ and we define the new variable $V=(\langle \xi \rangle_{m(t)} \hat{u}, D_t \hat{u})^T$. Then
we obtain the system $D_tV=A(t,\xi)V$ with
\begin{equation}
\label{eq:mainsystem}
A(t,\xi)= \begin{pmatrix}
0 &\langle \xi \rangle_{m(t)} \\ \langle \xi \rangle_{m(t)}& 2ib(t)
\end{pmatrix},
\end{equation}
using the Fourier derivative $D_t=-i\partial_t$. We want to study the fundamental solution $\mathcal{E}=\mathcal{E}(t,s,\xi)$ to \eqref{eq:mainsystem}, that is the matrix-valued solution to the Cauchy problem
\begin{equation}
\label{eq:system}
D_t \mathcal{E}(t,s,\xi)=A(t,\xi)\mathcal{E}(t,s,\xi), \qquad \mathcal{E}(s,s,\xi)=I.
\end{equation}
In particular, we consider the family of monodromy matrices $\mathcal{M}(t,\xi)=\mathcal{E}(t+T,t,\xi)$.
The fundamental solution to \eqref{eq:system} can be represented by the Peano--Baker series
\begin{equation}
\label{eq:Erepresntation}
\mathcal{E}(t,s,\xi)=I+\sum_{\ell=1}^\infty i^\ell \int_s^tA(t_1,\xi)\int_s^{t_1}A(t_2,\xi)\cdots \int_s^{t_{\ell-1}}A(t_\ell,\xi)\,dt_\ell\cdots dt_1.
\end{equation}
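At a fixed frequency the series can be summed numerically: the $N$-th Picard iterate of the integral equation $\mathcal{E}(t,s,\xi)=I+i\int_s^t A(\tau,\xi)\mathcal{E}(\tau,s,\xi)\,d\tau$ is exactly the $N$-th partial sum of the Peano--Baker series. The sketch below (our illustration; the sample coefficients $b(t)=1+\frac12\cos(2\pi t)$, $m\equiv 1$, $|\xi|=1$, $T=1$ and all discretization parameters are our choices) compares the summed series at $t=T$ with a Runge--Kutta solution of $\partial_t\mathcal{E}=iA(t,\xi)\mathcal{E}$ and with the Liouville identity $\det\mathcal{E}(T,0,\xi)=e^{-2\int_0^T b(\tau)\,d\tau}$:

```python
import numpy as np

def A(t, xi=1.0, m=1.0):
    # symbol of the first-order system, with sample coefficient b(t)
    b = 1.0 + 0.5 * np.cos(2 * np.pi * t)
    br = np.sqrt(xi ** 2 + m ** 2)
    return np.array([[0.0, br], [br, 2j * b]])

n = 4001                                   # grid on [0, T] with T = 1
ts = np.linspace(0.0, 1.0, n)
h = ts[1] - ts[0]
As = np.stack([A(t) for t in ts])

# Picard iteration: the N-th iterate is the N-th partial sum of the
# Peano-Baker series (D_t E = A E, i.e. E' = i A E, E(0) = I)
E = np.broadcast_to(np.eye(2, dtype=complex), (n, 2, 2)).copy()
for _ in range(30):
    F = 1j * np.matmul(As, E)
    cum = np.concatenate([np.zeros((1, 2, 2), complex),
                          np.cumsum((F[1:] + F[:-1]) * (h / 2), axis=0)])
    E = np.eye(2, dtype=complex) + cum     # cumulative trapezoidal integral

# RK4 reference solution of the same matrix ODE
M = np.eye(2, dtype=complex)
for t in ts[:-1]:
    k1 = 1j * A(t) @ M
    k2 = 1j * A(t + h / 2) @ (M + h / 2 * k1)
    k3 = 1j * A(t + h / 2) @ (M + h / 2 * k2)
    k4 = 1j * A(t + h) @ (M + h * k3)
    M = M + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

series_err = np.abs(E[-1] - M).max()
det_err = abs(np.linalg.det(M) - np.exp(-2.0))   # Liouville: det = e^{-2 int b}
```

Thirty iterations suffice here since the partial sums converge factorially on a bounded time interval.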
The $T$-periodicity of coefficients implies periodicity of the matrix $A(t,\xi)$ and hence the $T$-translation invariance of the fundamental solution, i.e. $\mathcal{E}(t+T,s+T,\xi)=\mathcal{E}(t,s,\xi)$. Thus, the monodromy matrix $\mathcal{M}(t,\xi)$ is $T$-periodic.
Moreover, since $\mathcal E(t,s,\xi)\mathcal E(s,t,\xi) = I$ it follows that $\mathcal{E}(t,s,\xi)$ satisfies $D_s \mathcal{E}(t,s,\xi)=-\mathcal{E}(t,s,\xi)A(s,\xi)$, and, therefore, $\mathcal{M}(t,\xi)$ satisfies the equation
\begin{equation*}
D_t\mathcal{M}(t,\xi)=[A(t,\xi), \mathcal{M}(t,\xi)], \qquad \mathcal{M}(T,\xi)= \mathcal{M}(0,\xi).
\end{equation*}
In what follows we will distinguish between small and large frequencies and provide estimates for $\mathcal M$.
\subsection{Large Frequencies} \label{sec:largefrequencies}
For large frequencies we want to prove that the monodromy matrix is uniformly contractive, i.e.
\begin{equation}
\label{eq:contractive}
\|\mathcal{M}(t,\xi)\|<1
\end{equation}
holds true uniformly in $t\in [0,T]$ and $|\xi|\ge N$ for a constant $N$ chosen large enough. The choice of $N$ does not depend on the coefficient $m=m(t)$. In order to prove \eqref{eq:contractive} we apply two steps of diagonalization. We consider the unitary matrices
\[ M= \frac{1}{\sqrt{2}}\begin{pmatrix}
1&-1\\1& 1
\end{pmatrix} \qquad M^{-1}=\frac{1}{\sqrt{2}}\begin{pmatrix}
1&1\\-1& 1
\end{pmatrix} \]
and define the new variable $V^{(0)}=M^{-1}V$, which satisfies
\[D_tV^{(0)}= (D(t,\xi)+R(t,\xi))V^{(0)} \]
with
\[ D(t, \xi)=\begin{pmatrix}
\langle \xi \rangle_{m(t)} & 0 \\
0 & -\langle \xi \rangle_{m(t)}
\end{pmatrix},\qquad
R(t,\xi)=ib(t)\begin{pmatrix}
1 & 1 \\
1 & 1
\end{pmatrix}.\]
Next, we define $D_1=D+\diag R$ and $R_1=R-\diag R$ and construct a matrix $N_1=N_1(t,\xi)$ with
\begin{equation}
\label{eq:N1}
D_tN_1= [D_1,N_1]+R_1,
\end{equation}
and $N_1(0,\xi)=I$.
Thus, the requirement for $N_1$ is equivalent to the operator identity
\[ (D_t-D_1-R_1)N_1-N_1(D_t-D_1)= D_t N_1- [D_1,N_1]-R_1N_1=R_1(I-N_1).\]
Hence by denoting $R_2=-N_1^{-1}R_1(I-N_1)$ we obtain
\[ (D_t-D_1-R_1)N_1= N_1(D_t-D_1-R_2) \]
and as a consequence, provided that $N_1$ is invertible, we obtain that the new unknown $V^{(1)}=N_1^{-1}V^{(0)}$ satisfies the transformed equation
\[ D_tV^{(1)}= (D_1+R_2)V^{(1)}\]
with improved remainder allowing us later on to prove \eqref{eq:contractive}.
Since $N_1=N_1(t,\xi)$ satisfies equation \eqref{eq:N1} and $D_1$ is diagonal, we find $D_t\diag N_1=0$. Thus, we can use a matrix $N_1$ of the form
\[N_1=\begin{pmatrix}
1 & n^-\\ n^+ & 1
\end{pmatrix},\]
with
\[ D_t n^\pm(t,\xi)= \mp\langle \xi \rangle_{m(t)}n^\pm(t,\xi)+i b(t).\]
The initial conditions $n^\pm(0,\xi)=0$ giving $N_1(0,\xi)=I$ imply
\[ n^\pm(t,\xi)=\int_0^t e^{\mp i\int_s^t\langle \xi \rangle_{m(r)}\,dr}b(s)\,ds. \]
Integrating by parts, we obtain
\[ |n^\pm(t,\xi)|=\Big|\Big[ \frac{\mp i}{\langle \xi \rangle_{m(s)}}e^{\mp i\int_s^t\langle \xi \rangle_{m(r)}\,dr}b(s)\Big]_0^t-\int_0^t \frac{\mp i}{\langle \xi \rangle_{m(s)}}e^{\mp i\int_s^t\langle \xi \rangle_{m(r)}\,dr}b'(s)ds \Big| \]
and using that $b=b(t)$ is of bounded variation we find a constant $C>0$ such that
\[ |n^\pm(t,\xi)|\leq C(1+t)|\xi|^{-1}.\]
Thus we get that $n^\pm(t,\xi)\to 0$ when $\xi\to \infty$, uniformly in $[0,2T]$. Then we can conclude that $N_1(t,\xi)\to I$ and therefore $N_1^{-1}(t,\xi)\to I$ uniformly in $t\in [0,2T]$ as $|\xi|\to\infty$. Hence $\|R_2(t,\xi)\|\to 0$ as $|\xi|\to\infty$ uniformly in $t\in[0,2T]$. Thus the supremum on the left hand side in the following formula tends to $1$ as $N\to\infty$ and we fix $N$ such that
\begin{equation}
\label{eq:suplargefreq}
\sup_{|\xi|\ge N} \sup_{t\in [0,T]} \| N_1(t+T,\xi)\| e^{\int_t^{t+T}\|R_2(s,\xi)\|ds}\|N_1^{-1}(t,\xi)\|\leq e^{\beta T/2}
\end{equation}
holds true. Note that this choice of $N$ can be made independent of the coefficient $m=m(t)$; in fact $R_1=R_1(t,\xi)$ does not depend on $m(t)$, and $N_1(t,\xi),\, N_1^{-1}(t,\xi)$ both tend to $I$ uniformly with respect to $m(t)$.
In order to prove the desired estimate \eqref{eq:contractive} we go back to the original problem. We define $\lambda(t):=\exp( \int_0^t b(\tau)d\tau)$. Then, for each $|\xi|>N$ the fundamental solution $\mathcal{E}(t,s,\xi)$ to $D_tV=A(t,\xi)V$ with $A$ defined in \eqref{eq:mainsystem} is given by
\begin{equation}
\label{eq:Erepresentation}
\mathcal{E}(t,s,\xi)=\frac{\lambda(s)}{\lambda(t)}M N_1(t,\xi)\tilde{\mathcal{E}}_0(t,s,\xi)Q(t,s,\xi)N_1^{-1}(t,\xi)M^{-1},
\end{equation}
for all $t\in [0,T]$, where
\[ \tilde{\mathcal{E}}_0(t,s,\xi)= \begin{pmatrix}
e^{i\int_s^t \langle \xi \rangle_{m(\tau)}d\tau} & 0 \\0 & e^{-i\int_s^t \langle \xi \rangle_{m(\tau)}d\tau}
\end{pmatrix}\]
and $Q=Q(t,s,\xi)$ is the solution to the Cauchy problem
\[D_tQ(t,s,\xi)=\tilde{\mathcal{E}}_0(s,t,\xi)R_2(t,\xi)\tilde{\mathcal{E}}_0(t,s,\xi)Q(t,s,\xi), \qquad Q(s,s,\xi)=I.\]
Let $ \mathcal{R}_2(t,s,\xi)=\tilde{\mathcal{E}}_0(s,t,\xi)R_2(t,\xi)\tilde{\mathcal{E}}_0(t,s,\xi)$. Then by using the Peano-Baker formula again we can represent $Q(t,s,\xi)$ as
\[ Q(t,s,\xi)=I+\sum_{k=1}^\infty i^k \int_s^t \mathcal{R}_2(t_1,s,\xi)\int_s^{t_1} \mathcal{R}_2(t_2,s,\xi)\cdots \int_s^{t_{k-1}} \mathcal{R}_2(t_k,s,\xi)\, dt_k\dots dt_1. \]
Since $\| \mathcal{R}_2(t,s,\xi)\|=\|R_2(t,\xi)\|$ we conclude
\begin{equation}
\label{eq:Qestimate}
\|Q(t,s,\xi)\|\leq \exp\Big( \int_s^t \|R_2(\tau,\xi)\| d\tau \Big).
\end{equation}
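For completeness, \eqref{eq:Qestimate} follows by bounding the series term by term: the $k$-th iterated integral runs over the ordered simplex $s\leq t_k\leq \dots\leq t_1\leq t$, and symmetrizing the integration domain gives
\[
\|Q(t,s,\xi)\|\leq \sum_{k=0}^\infty \frac{1}{k!}\Big( \int_s^t \|R_2(\tau,\xi)\|\, d\tau\Big)^{k}= \exp\Big( \int_s^t \|R_2(\tau,\xi)\| \,d\tau \Big).
\]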
By \eqref{eq:Erepresentation} we can represent the monodromy matrix $\mathcal{M}(t,\xi)=\mathcal{E}(t+T,t,\xi)$ as
\[ \mathcal{M}(t,\xi)= \frac{\lambda(t)}{\lambda(t+T)}M N_1(t+T,\xi)\tilde{\mathcal{E}}_0(t+T,t,\xi)Q(t+T,t,\xi)N_1^{-1}(t+T,\xi)M^{-1}.\]
Since $\lambda(t)/\lambda(t+T)= e^{-\beta T}$ the desired result $\|\mathcal{M}(t,\xi)\|\le e^{-\beta T/2}<1$ for each $t\in [0,T]$ and each $|\xi|\geq N$ follows by \eqref{eq:suplargefreq} and \eqref{eq:Qestimate}. Hence we obtain
\begin{lemma}\label{lem1}
There exists a constant $N$ depending only on $T$, $\|b'\|_\infty$ and $\|b\|_\infty$ such that the monodromy matrix $\mathcal M(t,\xi)$ satisfies
\[
\| \mathcal M(t,\xi) \| \le e^{-\beta T/2}
\]
uniformly in $t\in\R$ and $|\xi|\ge N$, independently of the mass term $m(t)$.
\end{lemma}
\section{Small frequencies: constant mass} \label{sec:smallfreqconstant}
In this section we want to prove that there exists $k\in \N$ such that
\begin{equation}
\label{eq:middlefreq}
\| \mathcal{M}^k(t,\xi)\|<1
\end{equation}
uniformly in $ |\xi|\leq N$ and $t\in[0,T]$ provided that the mass term is constant. Thus, in this section, we restrict our study to the Cauchy problem
\begin{equation}
\label{eq:CPconstantmass}
v_{tt}-\Delta v +2b(t)v_t + m_0^2v=0
\qquad v(0,x)=v_0(x), \quad v_t(0,x)=v_1(x).
\end{equation}
In particular, we denote by $\mathcal{E}_0(t,s,\xi)$ the fundamental solution associated to the system $D_tV=A_0(t,\xi)V$ with
\begin{equation}
\label{eq:systemconstantmass}
A_0(t,\xi)= \begin{pmatrix}
0 &\langle \xi \rangle_{m_0} \\ \langle \xi \rangle_{m_0}& 2ib(t)
\end{pmatrix}, \qquad \langle \xi \rangle_{m_0}=\sqrt{|\xi|^2+m_0^2}.
\end{equation}
Let $\mathcal{M}_0(t,\xi)=\mathcal{E}_0(t+T,0,\xi)$ be the corresponding family of monodromy matrices.
In order to get our aim we will prove at first that the spectrum $\spec \mathcal{M}_0(t,\xi)$ is contained in the open ball $\{ \eta\in \C \vert |\eta|<1 \}$.
Since it holds $$\mathcal{M}_0(t,\xi)\mathcal{E}_0(t,0,\xi)=\mathcal{E}_0(t+T,0,\xi)=\mathcal{E}_0(t+T,T,\xi)\mathcal{E}_0(T,0,\xi)=\mathcal{E}_0(t,0,\xi)\mathcal{M}_0(0,\xi),$$ we conclude that for each $t\in [0,T]$ the monodromy matrix $\mathcal{M}_0(t,\xi)$ is similar to $\mathcal{M}_0(0,\xi)$ and, hence, has the same spectrum. Moreover, as both $b(t)$ and $m(t)$ are real, the equation \eqref{eq:fouriertransformed} has real solutions, and it follows that $\mathcal{M}_0(t,\xi)$ is similar to a real-valued matrix. Furthermore, by the Liouville theorem we know that
\begin{equation}
\det \mathcal{M}_0(0,\xi)= e^{i\int_0^T \tr A_0(\tau,\xi)d\tau }= e^{-2\beta T}.
\end{equation}
Hence, for each $\xi\in \R^n$ the eigenvalues $\eta_1(\xi),\,\eta_2(\xi)$ of $\mathcal{M}_0(0,\xi)$ are either real, in the form $\eta_2(\xi)= \eta_1^{-1}(\xi)e^{-2\beta T}$, or complex conjugate with $|\eta_1(\xi)|=|\eta_2(\xi)|=e^{-\beta T}$. In the latter case it is clear that $\spec \mathcal{M}_0(0,\xi)\subset \{ \eta\in \C \,|\, |\eta|=e^{-\beta T}\}$. In the case in which the eigenvalues are real we need to prove that for each $\xi \in \R^n$ both $\eta_1(\xi)$ and $\eta_2(\xi)$ have modulus less than $1$. We will prove this by contradiction.
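This dichotomy can be read off from the characteristic polynomial: since $\mathcal{M}_0(0,\xi)$ is similar to a real-valued matrix, its trace is real, and the eigenvalues solve
\[
\eta^2-\big(\tr \mathcal{M}_0(0,\xi)\big)\,\eta+e^{-2\beta T}=0.
\]
If the discriminant is non-negative, both roots are real with product $e^{-2\beta T}$; otherwise they form a complex conjugate pair and $|\eta_1|^2=\eta_1\overline{\eta_1}=\eta_1\eta_2=e^{-2\beta T}$.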
Suppose that there exists $\bar{\xi}\in \R^n$ such that the monodromy matrix $\mathcal{M}_0(0,\bar{\xi})$ has an eigenvalue of modulus $1$, i.e., $\eta_1(\bar{\xi})= \pm 1$ and so $\eta_2(\bar{\xi})= \pm e^{-2\beta T}$. Let $\vec{c}=(c_1,c_2)$ be an eigenvector corresponding to $\eta_1(\bar{\xi})$. Then, we can find a domain $\Omega_R=\{ x\in \R^n \,|\, |x|\leq R \}$ (with $R$ depending on $m_0$) and a function $\Phi=\Phi(x)$ defined on $\Omega_R$ such that $-|\bar{\xi}|^2-m_0^2$ is an eigenvalue for the Dirichlet Laplacian with normalized eigenfunction $\Phi=\Phi(x)$, i.e.
\begin{equation}
\label{eq:laplacedirichlet}
-\Delta \Phi(x)= (|\bar{\xi}|^2+m_0^2)\Phi(x), \qquad \Phi(x)=0 \text{ on } \partial \Omega_R.
\end{equation}
Let us consider $v=v(t,x)$ the solution to the Cauchy problem, with Dirichlet boundary condition on $\Omega_R$
\begin{equation}
\label{eq:dampedwavedirichlet}
\begin{cases} v_{tt}-\Delta v+2b(t)v_t=0, \\ v(0,x)=c_1\langle \bar{\xi}\rangle_{m_0}^{-1}\Phi(x), \quad v_t(0,x)=ic_2 \Phi(x), \\ v(t,\cdot)\equiv 0 \quad \text{ on } \partial\Omega_R \text{ for each } t\geq 0. \end{cases}
\end{equation}
In particular, we look for a solution in the form
$$v(t,x)=f(t)\Phi(x),$$
and we show that $f=f(t)$ is $T$-periodic (or $2T$-periodic). Since $\Phi=\Phi(x)$ satisfies the Dirichlet problem \eqref{eq:laplacedirichlet}, the partial differential equation $v_{tt}-\Delta v+2b(t)v_t=0$ turns into the ordinary differential equation $v_{tt}+(|\bar{\xi}|^2+m_0^2) v+2b(t)v_t=0$, with $x$ regarded as a parameter. In particular, $f=f(t)$ satisfies the ordinary differential equation
\begin{equation}
\label{eq:odef}
f''(t)+2b(t)f'(t)+(|\bar\xi |^2+m_0^2)f(t)=0.
\end{equation}
Moreover, the corresponding solution $v(t,x)=f(t)\Phi(x),$ satisfies the Cauchy problem
\begin{align*}
D_t \begin{pmatrix}
\langle \bar\xi \rangle_{m_0} v(t,x)\\
D_t v(t,x)\end{pmatrix}&= \begin{pmatrix}
0 &\langle \bar\xi \rangle_{m_0} \\ \langle \bar\xi \rangle_{m_0}& 2ib(t)
\end{pmatrix}\begin{pmatrix}
\langle \bar\xi \rangle_{m_0} v(t,x)\\
D_t v(t,x)\end{pmatrix}\\
\begin{pmatrix}
\langle \bar\xi \rangle_{m_0} v(t,x)\\
D_t v(t,x)\end{pmatrix}\Big| _{t=0}&= \begin{pmatrix}
c_1\\
c_2\end{pmatrix}\Phi(x).
\end{align*}
This system can be solved by using the fundamental solution $\mathcal{E}_0(t,0,\bar\xi)$; in particular, we have that
\[ \begin{pmatrix}
\langle \bar\xi \rangle_{m_0} v(t,x)\\
D_t v(t,x)\end{pmatrix}\Big|_{t=T}= \mathcal{M}_0(0,\bar\xi)\begin{pmatrix}
c_1\\
c_2\end{pmatrix}\Phi(x)=\pm \begin{pmatrix}
c_1\\
c_2\end{pmatrix}\Phi(x). \]
We conclude that $f=f(t)$ is $T$-periodic (or $2T$-periodic) and $f(0)=c_1\langle \bar{\xi}\rangle_{m_0}^{-1}$. This gives a contradiction: if we denote the energy of this solution as
\[
E(v,t)=\frac{1}{2}\|v_t(t,\cdot)\|_{L^2(\Omega_R)}^2+\frac{1}{2}\|\nabla v(t,\cdot)\|_{L^2(\Omega_R)}^2,\]
we obtain
\[ \frac{d}{dt}E(v,t)= -2b(t)\|v_t(t,\cdot)\|_{L^2(\Omega_R)}^2=-2b(t)|f'(t)|^2, \]
where we used that $\Phi$ is normalized in $L^2(\Omega_R)$.
But $f=f(t)$ is $T$-periodic (or $2T$-periodic), so integrating the previous identity over a period gives
$$ \int_0^T b(t)|f'(t)|^2 \,dt=0,$$
which is impossible: since $b(t)>0$, it forces $f'\equiv 0$, i.e. $f$ constant, and this is excluded by equation \eqref{eq:odef} because $(|\bar\xi|^2+m^2_0)>0$ for each $\bar\xi\in \R^n$. Thus, $\pm 1\notin \spec\mathcal{M}_0(t,\xi)$ for each $\xi\in \R^n$, and therefore the spectral radius satisfies $\rho(\mathcal{M}_0(t,\xi))<1$ for all $\xi \in \R^n$. \\
By the spectral radius formula, we know that $$\lim_{k\to \infty}\|\mathcal{M}_0^k(t,\xi)\|^{\frac{1}{k}}=\rho(\mathcal{M}_0(t,\xi))<1.$$ Thus, we conclude that for each $t\in [0,T]$ and $\xi\in\R^n$ there exists $k=k(t,\xi)\in \mathbb{N}$ such that
\begin{equation}
\label{eq:Mk}
\|\mathcal{M}_0^k(t,\xi)\|<1.
\end{equation}
We want to show that we can find a number $k$ such that the condition \eqref{eq:Mk} holds uniformly with respect to $t\in [0,T]$ and $|\xi|\in [0,N]$.\\
Let us define for each $k\in \mathbb{N}$ the set $\mathcal{U}_k=\{(t,\xi)\in \R_+\times \R^n\,|\,\|\mathcal{M}_0^k(t,\xi)\|<1 \}$. It is open due to the continuity of the monodromy matrix $\mathcal{M}_0^k(t,\xi)$; moreover, it holds $\mathcal{U}_k \subset \mathcal{U}_\ell$ for $k\leq \ell$. Then, by \eqref{eq:Mk} the compact set $\mathcal{C}=\{(t,\xi)\,|\, 0\leq t\leq T,\ |\xi|\le N\}$ is contained in $\bigcup_{k}\mathcal{U}_k$. By compactness we find $k\in \mathbb{N}$ such that $\mathcal{C}\subset \mathcal{U}_k$; since $\mathcal M_0^k(t,\xi)$ is continuous in both variables on $\mathcal{C}$, the resulting bound is uniform. This concludes the proof of estimate \eqref{eq:middlefreq}. Hence we obtain
\begin{lemma}\label{lem2}
For constant mass term $m_0$ and fixed $N>0$ there exists a number $k$ such that the monodromy matrix for the problem with constant mass satisfies
\[ \sup_{|\xi|\le N} \sup_{t\in [0,T]}\|\mathcal{M}_0^k(t,\xi)\|<1.\]
\end{lemma}
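As an illustration of Lemma~\ref{lem2}, the contraction property can be checked numerically for sample data. The following sketch is not part of the proof: the coefficient $b(t)=\beta\,(1+\tfrac12\cos(2\pi t/T))$ and all parameter values are illustrative choices. It integrates $D_tV=A_0(t,\xi)V$, i.e.\ $\partial_t V=iA_0(t,\xi)V$, over one period with a classical Runge-Kutta scheme, assembles the monodromy matrix column by column, and verifies the Liouville identity $|\det\mathcal{M}_0(0,\xi)|=e^{-2\beta T}$ together with $\rho(\mathcal{M}_0(0,\xi))<1$.

```python
# Illustrative numerical check of Lemma 2 (sample data, not from the paper):
# propagate D_t V = A_0(t, xi) V, i.e. V' = i A_0(t, xi) V, over one period T
# with classical RK4 and inspect the monodromy matrix M_0(0, xi).
import cmath
import math

def monodromy(xi, m0, beta, T, steps=4000):
    """Monodromy matrix M_0(0, xi) for a sample T-periodic b with mean beta."""
    def b(t):
        return beta * (1.0 + 0.5 * math.cos(2.0 * math.pi * t / T))
    xm = math.sqrt(xi * xi + m0 * m0)  # <xi>_{m0}
    def rhs(t, v1, v2):
        # V' = i A_0 V with A_0 = [[0, xm], [xm, 2 i b(t)]]
        return 1j * xm * v2, 1j * xm * v1 - 2.0 * b(t) * v2
    h = T / steps
    cols = []
    for v1, v2 in ((1.0 + 0j, 0j), (0j, 1.0 + 0j)):  # propagate basis vectors
        t = 0.0
        for _ in range(steps):
            a1, a2 = rhs(t, v1, v2)
            b1, b2 = rhs(t + h / 2, v1 + h / 2 * a1, v2 + h / 2 * a2)
            c1, c2 = rhs(t + h / 2, v1 + h / 2 * b1, v2 + h / 2 * b2)
            d1, d2 = rhs(t + h, v1 + h * c1, v2 + h * c2)
            v1 += h / 6 * (a1 + 2 * b1 + 2 * c1 + d1)
            v2 += h / 6 * (a2 + 2 * b2 + 2 * c2 + d2)
            t += h
        cols.append((v1, v2))
    # columns of M_0 are the propagated basis vectors
    return ((cols[0][0], cols[1][0]), (cols[0][1], cols[1][1]))

def spectral_radius(M):
    """Largest eigenvalue modulus of a 2x2 (complex) matrix."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))
```

For instance, with $\xi=1$, $m_0=1$, $\beta=0.3$, $T=1$ the computed $|\det\mathcal{M}_0|$ agrees with $e^{-0.6}$ to the accuracy of the integrator, and the spectral radius lies strictly below $1$, in accordance with the lemma.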
\section{Proof of the main theorems} \label{sec:Proofofresults}
\subsection{Proof of Theorem \ref{th:Thconstant}}
In order to prove Theorem \ref{th:Thconstant} we distinguish between small and large frequencies.
Let $|\xi|\geq N$. Then the monodromy matrix $\mathcal{M}(t,\xi)$ is estimated in Lemma~\ref{lem1}. Let $t\geq 0$, $t=\ell T+s$, with $\ell\in \mathbb{N}$ and $s\in [0,T]$. Then, we obtain
\[ \|\mathcal{E}(t,0,\xi)\|=\|\mathcal{M}^\ell(s,\xi)\mathcal{E}(s,0,\xi)\|\leq e^{-\ell \beta T/2} \|\mathcal{E}(s,0,\xi)\|. \]
Moreover, since $b(t)>0$ we know that $\|\mathcal{E}(s,0,\xi)\|\leq 1$ and therefore we find
\[ \|\mathcal{E}(t,0,\xi)\|\leq e^{-\delta_0(t-T)},\]
by defining $\delta_0:=\beta/2>0$. We remark that this estimate for large frequencies is valid for arbitrary periodic mass terms.
For the remainder of the proof assume that $m^2(t)\equiv m_0^2$ constant and $|\xi|\le N$. By Lemma~\ref{lem2} there exists $k\in \mathbb{N}$ depending only on $m_0$ such that the matrix $\mathcal{M}_0^k(t,\xi)$ is a contraction uniform in $t$ and $\xi$. Let $t=\ell k T+s \ge 0$ for some $\ell \in \N$ and $s\in [0,kT]$. Then, we obtain the exponential decay
\begin{equation}
\label{eq:mainsmallconstant}
\|\mathcal{E}_0(t,0,\xi)\| = \| \mathcal M_0^{k\ell} (s,\xi) \mathcal E_0(s,0,\xi)\| \leq e^{-\delta_1(t-kT)},
\end{equation}
where we set $\delta_1:=(kT)^{-1}\log(c_1(N)^{-1})>0$ and
\begin{equation}\label{c1def}
c_1(N):=\sup_{|\xi|\le N} \sup_{t\in [0,T]} \|\mathcal{M}_0^k(t,\xi)\|<1.
\end{equation}
Going back to the original problem \eqref{eq:CPmain}, we find
\[ \begin{pmatrix}
\langle \xi \rangle_{m_0} \hat{u}(t,\xi) \\D_t \hat{u}(t,\xi)
\end{pmatrix}= \mathcal{E}_0(t,0,\xi)\begin{pmatrix}
\langle \xi \rangle_{m_0} \hat{u}_0(\xi) \\ \hat{u}_1(\xi)
\end{pmatrix}.\]
Thus, we find that
\begin{align*}
\| u(t,\cdot)\|_{L^2}&\leq \sup_{\xi\in\R^n} \|\mathcal{E}_0(t,0,\xi)\|(\|u_0\|_{L^2}+\|u_1\|_{H^{-1}} ),\\
\| \nabla u(t,\cdot)\|_{L^2}&\leq \sup_{\xi\in\R^n} \|\mathcal{E}_0(t,0,\xi)\|(\|u_0\|_{H^1}+\|u_1\|_{L^2}),\\
\|u_t(t,\cdot)\|_{L^2}&\leq \sup_{\xi\in\R^n} \|\mathcal{E}_0(t,0,\xi)\|(\|u_0\|_{H^1}+\|u_1\|_{L^2}).
\end{align*}
The proof of Theorem \ref{th:Thconstant} with $C=e^{\delta_1 k T}$ follows immediately by estimate \eqref{eq:mainsmallconstant}.
\subsection{Proof of Theorem \ref{th:Thperturbed}}
Let $u=u(t,x)$ be the solution to \eqref{eq:CPepsilon},
where $m_\epsilon^2(t)=m_0^2+\epsilon m_1(t)$, with $m_1(t)$ periodic of period $T$ and $m_0$ a sufficiently large constant such that $m_0^2+\epsilon m_1(t)>0$.
The corresponding system is
\begin{equation}
\label{eq:perturbedsystem}
D_tV_\epsilon=A_\epsilon(t,\xi)V_\epsilon= \begin{pmatrix}
0 &\langle \xi \rangle_{m_\epsilon(t)} \\ \langle \xi \rangle_{m_\epsilon(t)}& 2ib(t)
\end{pmatrix}V_\epsilon,
\end{equation}
where $V_\epsilon=(\langle\xi\rangle_{m_\epsilon(t)}\hat{u}^\epsilon, D_t \hat{u}^\epsilon )$.
In order to obtain our result we need to estimate $\| \mathcal{E}_\epsilon(t,0,\xi)\|$, where we denoted by $\mathcal{E}_\epsilon$ the fundamental solution to the system \eqref{eq:perturbedsystem}. In particular, $\mathcal{E}_0$ solves $D_tV_0=A_0(t,\xi)V_0$ where
\begin{equation}
\label{eq:constantsystem}
D_tV_0=A_0(t,\xi)V_0= \begin{pmatrix}
0 &\langle \xi \rangle_{m_0} \\ \langle \xi \rangle_{m_0}& 2ib(t)
\end{pmatrix}V_0.
\end{equation}
We again distinguish between small and large frequencies. If $|\xi|\geq N$, as in the case of constant mass we conclude
$$\|\mathcal{E}_\epsilon(t,0,\xi)\|\leq e^{-\delta_0(t-T)},$$
where we recall $\delta_0=\beta/2>0$ by making use of Lemma~\ref{lem1}.
If $|\xi|\leq N$, there exists $k\in \mathbb{N}$ given by Lemma~\ref{lem2} such that the matrix $\mathcal{M}_0^k(t,\xi)$ is a contraction uniformly in $t\in [0,T]$ and $|\xi|\in [0,N]$. We write $t=\ell k T+s\ge0$ for some $\ell \in \mathbb N$ and $s\in [0,kT]$; then, we have
\begin{equation}
\label{eq:mainsmallperiodic}
\mathcal{E}_\epsilon(t,0,\xi) = \mathcal M_\epsilon^{k\ell} (s,\xi) \mathcal E_\epsilon(s,0,\xi);
\end{equation}
we can treat the fundamental solution as a perturbation of the constant-mass case:
\[\begin{split}
\|\mathcal{E}_\epsilon(t,s,\xi)\|&\leq \|\mathcal{E}_\epsilon(t,s,\xi)-\mathcal{E}_0(t,s,\xi)\|+\|\mathcal{E}_0(t,s,\xi)\|\\&\leq \|\mathcal{E}_\epsilon(t,s,\xi)-\mathcal{E}_0(t,s,\xi)\|+ e^{-\delta_1(t-s-kT)},
\end{split}\]
where we recall $\delta_1=(kT)^{-1}\log(c_1(N)^{-1})>0$ and $c_1(N)$ as in \eqref{c1def}.
In order to estimate the difference $\|\mathcal{E}_\epsilon(t,s,\xi)-\mathcal{E}_0(t,s,\xi)\|$ we use that for each $\epsilon\geq 0$ the fundamental solution $\mathcal{E}_\epsilon$ satisfies the integral equation
\[\mathcal{E}_\epsilon(t,s,\xi)=I+i\int_s^t A_\epsilon(\tau,\xi) \mathcal{E}_\epsilon(\tau,s,\xi)\,d\tau,\]
such that
\begin{align*}
\mathcal{E}_\epsilon(t,s,\xi)-\mathcal{E}_0(t,s,\xi)=&\,i\int_s^t A_\epsilon(\tau,\xi)(\mathcal{E}_\epsilon(\tau,s,\xi)-\mathcal{E}_0(\tau,s,\xi))\,d\tau\\&+i\int_s^t (A_\epsilon(\tau,\xi)-A_0(\tau,\xi))\mathcal{E}_0(\tau,s,\xi)\,d\tau.
\end{align*}
By using the Gronwall inequality we get
\[\|\mathcal{E}_\epsilon(t,s,\xi)-\mathcal{E}_0(t,s,\xi)\|\leq \int_s^t \|\mathcal{E}_0(\tau,s,\xi)\|\,\|A_\epsilon(\tau,\xi)-A_0(\tau,\xi)\|\,d\tau\cdot e^{\int_s^t \|A_\epsilon(\tau,\xi)\|\,d\tau}; \]
here, for any $\tau>0$ and $\xi\in \R^n$, since we are assuming $\displaystyle{\sup_{t\geq0} |m_1(t)|=1}$ we can estimate $$ \|A_\epsilon(\tau,\xi)-A_0(\tau,\xi)\|\leq \frac{\epsilon}{\langle\xi\rangle_{m_0}},\qquad \|A_0(\tau,\xi)\|\leq \langle \xi \rangle_{m_0}+2b(\tau),$$
and so $$ \|A_\epsilon(\tau,\xi)\|\leq C_\epsilon(\xi)+\langle \xi \rangle_{m_0}+2b(\tau), \qquad C_\epsilon(\xi)=\frac{\epsilon}{\langle\xi\rangle_{m_0}}.$$
Thus, recalling that $\mathcal{M}^k_\epsilon(s,\xi)= \mathcal{E}_\epsilon(s+kT,s,\xi)$, we find
\begin{align*} \| \mathcal{M}^k_\epsilon(s,\xi)-\mathcal{M}^k_0(s,\xi)\|&\leq C_\epsilon(\xi)e^{C_\epsilon(\xi) kT}e^{(\langle\xi\rangle_{m_0}+2\beta)k T}\int_s^{s+kT}\|\mathcal{E}_0(\tau,s,\xi)\|\,d\tau \\ &\leq C_\epsilon(\xi)e^{C_\epsilon(\xi) kT}e^{(\langle\xi\rangle_{m_0}+2\beta)k T}\int_s^{s+kT} e^{-\delta_1(\tau-s-kT)}\,d\tau \\
&\leq \frac{C_\epsilon(\xi)}{\delta_1}e^{C_\epsilon(\xi) kT}e^{(\langle\xi\rangle_{m_0}+2\beta)k T}(e^{\delta_1 kT}-1).
\end{align*}
Therefore, recalling that $\exp(\delta_1kT)=c_1(N)^{-1}$, we can conclude
\begin{align*}
\sup_{\xii\leq N}\sup_{s\in [0,T]}\| \mathcal{M}^k_\epsilon(s,\xi)\|&\leq \sup_{\xii\leq N}\sup_{s\in [0,T]}\|\mathcal{M}^k_0(s,\xi)\|\\
&\qquad + \sup_{\xii\leq N}\Big\{
\frac{C_\epsilon(\xi)}{\delta_1}e^{C_\epsilon(\xi) kT}e^{(\langle\xi\rangle_{m_0}+2\beta)k T}(c_1(N)^{-1}-1)\Big\}\\&=c_1(N)+ \sup_{\xii\leq N}\Big\{ \frac{C_\epsilon(\xi)}{\delta_1}e^{C_\epsilon(\xi) kT}e^{(\langle\xi\rangle_{m_0}+2\beta)k T}(c_1(N)^{-1}-1)\Big\}.
\end{align*}
By \eqref{eq:mainsmallperiodic} we get the desired result
\[\sup_{\xii\leq N}\sup_{s\in [0,T]}\| \mathcal{M}^k_\epsilon(s,\xi)\|<1,\]
by choosing $\epsilon$ sufficiently small such that
\begin{equation}
\label{eq:epsilonrough}
\frac{C_\epsilon(\xi)}{\delta_1}e^{C_\epsilon(\xi) kT}e^{(\langle\xi\rangle_{m_0}+2\beta)k T}(c_1(N)^{-1}-1)<1-c_1(N).
\end{equation}
Let us introduce the Lambert function $W=W(x)$, defined on the set $\R_+:=\{x\in\R\,|\, x\geq 0\}$ by the relation $x=W(x)e^{W(x)}$. The function $W$ is increasing (see \cite{WLambert} for more details); thus, recalling the definition of $\delta_1$, estimate \eqref{eq:epsilonrough} is equivalent to requiring
\[ \epsilon \leq \frac{\langle \xi \rangle_{m_0}}{kT}W\Big(c_1(N)\log(c_1(N)^{-1})e^{-(\langle\xi\rangle_{m_0}+2\beta)kT}\Big)\]
for any $|\xi| \leq N$; a sufficient condition is therefore
\begin{equation}
\label{eq:epsilon}
\epsilon \leq \frac{m_0}{kT}W\Big(c_1(N)\log(c_1(N)^{-1})e^{-(\langle N \rangle_{m_0}+2\beta)kT}\Big).
\end{equation}
\begin{acknowledgement}
The paper is based on discussions the authors had during the stay of Giovanni Girardi at the University of Stuttgart in spring 2019. G.G. is grateful for the hospitality of the Department of Mathematics during his stay.
\end{acknowledgement}
\input{references}
\end{document}
There are many approaches to formulating a discrete theory of quantum
gravity. In this talk I will focus on the dynamical triangulations
(DT) formulation of simplicial gravity. This was first developed in
the context of string theory and two-dimensional quantum gravity
($QG_2$) where the discrete approach has proven extremely powerful and
actually preceded continuum treatments. By now we have considerable
confidence in the validity of the assumptions underlying the DT model
in the case of coupling of matter with central charge c less than
one. This confidence stems from the agreement of discrete, continuum
and numerical results.
The situation for central charge c greater than one was clarified in
the last year by David \cite{Davi97}. After the Introduction a
discussion of this work will be the first part of this review. This
is followed by a discussion of recent numerical tests of David's
proposal \cite{ThPe97}.
I will then move on to the fascinating issue of the intrinsic fractal
geometry of $QG_2$ and $QG_2$ coupled to conformal matter. In any
theory of gravity it is natural to ask what effect quantum
fluctuations have on the structure of spacetime. The most basic
question we could ask of any spacetime with a given topology is:
``What is its Hausdorff dimension?'' Our current knowledge of the
answer to this question will be reviewed, with emphasis on a
comparison of analytic predictions with recent numerical simulations.
The success of the DT approach to simplicial gravity in two dimensions
has inspired several groups to tackle large scale simulations of the
DT discretization of Euclidean Einstein-Hilbert gravity in 3 and 4
dimensions. Here the situation is much cloudier, both theoretically
and numerically. The current status will be summarized in the third
part of my talk.
Any rich idea usually has many unsuspected spin-offs. The field of
random surfaces and non-critical string theory is intimately connected
with the physics of membranes. Some recent numerical results on new
phases of anisotropic membranes will be discussed as an example of the
fascinating interdisciplinary nature of the subject of random
surfaces.
Finally I present a list of challenging future problems that I think
are essential for progress in the field. I hope that some of them
will be solved by the time of Lattice 98.
\section{Introduction}
String theories may be viewed as 2-dimensional Euclidean quantum
field theories with particular actions and matter content. From this
viewpoint they may also be considered as 2d{-}statistical mechanical
models on lattices with dynamical geometry. Statistical mechanical
models on fixed lattices often possess special critical points where
they are scale invariant. Correlation functions of generic matter
fields $\Phi(x)$ reflect this scale invariance by transforming very
simply under scale transformations: $\langle
\Phi\left(\vec{r}\right)\Phi(0) \rangle \sim r^{-2\Delta^o_\Phi}$,
where $\Delta^{o}_{\Phi}$ is the scaling dimension of $\Phi$, or
equivalently its anomalous dimension in field theory. Combining scale
invariance with locality leads to the much larger symmetry of
conformal invariance. In two dimensions conformal invariance and
unitarity restrict the possible values of critical exponents because
the unitarisable representations of the associated Virasoro algebra
form a discrete series parameterized by a single real number {---} the
central charge $c$. The central charge determines the effect of scalar
curvature on the free energy of the model \cite{Gin88}.
For $c<1$ the only allowed values are
\begin{equation}
\label{eq:minseries}
c = 1 - \frac{6}{m(m+1)}\quad \mbox{with}\quad m = 2,3,4 \ldots
\end{equation}
It is also understood, through the classic work of KPZ \cite{KPZ88},
how scaling dimensions of fields are modified when the lattice becomes
dynamical i.e. when the model is coupled to two-dimensional gravity.
The dressed scaling dimensions $\Delta_\Phi$ are determined solely by
$\Delta^{0}_{\Phi}$ and the central charge $c$ via
\begin{equation}
\Delta_\Phi - \Delta^o_{\Phi} = \frac{\Delta_{\Phi}(1-\Delta_\Phi)}
{1-\gamma_s\left(0\right)},
\end{equation}
where the string susceptibility exponent $\gamma_s$ describes the
entropy of random surfaces of fixed area $A$ {\em viz}:
\begin{equation}
\label{eq:fixedaz}
Z(A)={\rm e}^{\Lambda_{c}A} A^{\gamma_{s}-3}
\end{equation}
and
\begin{equation}
\label{eq:gams}
\gamma_s(h) = 2 - \frac{1-h}{12}\Big\{25-c+\sqrt{(1-c)(25-c)}\,\Big\}
\end{equation}
for a Riemann surface of genus $h$.
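As a quick check, for pure gravity ($c=0$) on the sphere ($h=0$) this gives
\[ \gamma_s(0)=2-\frac{1}{12}\big(25+\sqrt{25}\,\big)=2-\frac{30}{12}=-\frac{1}{2}, \]
the familiar pure-gravity value, while at $c=1$ the square root vanishes and $\gamma_s(0)=0$.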
Eq.(\ref{eq:fixedaz}) implies the singularity structure
\begin{equation}
\label{eq:fixedcz}
Z(\Lambda)=\int^{\infty}_0 dA e^{-\Lambda A} Z(A) \sim
(\Lambda_c-\Lambda)^{2-\gamma_s}
\end{equation}
as $\Lambda \rightarrow \Lambda_c$.
The renormalized continuum theory is obtained by tuning the
cosmological constant $\Lambda$ to the critical cosmological constant
$\Lambda_c$. In this limit the mean area $\langle A \rangle$ diverges like
$1/(\Lambda-\Lambda_c)$.
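The divergence of the mean area follows directly from \eqref{eq:fixedcz}:
\[ \langle A\rangle=-\frac{\partial}{\partial\Lambda}\log Z(\Lambda)\simeq \frac{2-\gamma_s}{\Lambda_c-\Lambda}\qquad (\Lambda\to\Lambda_c). \]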
The linearity of $\gamma_s$ with genus $h$ given by Eq.(\ref{eq:gams})
has the remarkable consequence that the partition function including the sum
over genus
\begin{equation}
\label{eq:topsum}
Z(\Lambda,\mu) = \sum^{\infty}_{h=0} e^{-\mu h} \int^{\infty}_0 dA\; e^{-\Lambda A}
Z(A)
\end{equation}
is actually a function not of two variables $\Lambda$ and $\mu$ but only
of the single scaling combination
\begin{equation}
\label{eq:scalvar}
x=(\Lambda-\Lambda_c)\;{\rm exp}
\Biggl\lbrack\frac{\mu}{2}\Biggl{(}1-\sqrt{\frac{1-c}{25-c}}\;\Biggr{)}\Biggr\rbrack \; .
\end{equation}
For the minimal models of Eq.(\ref{eq:minseries}) the scaling variable is
\begin{equation}
\label{eq:minscalvar}
x=(\Lambda-\Lambda_c)\;{\rm exp}
\Biggl\lbrack\frac{m}{2m+1} \; \mu \Biggr\rbrack \; .
\end{equation}
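Indeed, inserting $c=1-6/m(m+1)$ from \eqref{eq:minseries} into \eqref{eq:scalvar} one finds
\[ 1-c=\frac{6}{m(m+1)},\qquad 25-c=\frac{6(2m+1)^2}{m(m+1)},\qquad \sqrt{\frac{1-c}{25-c}}=\frac{1}{2m+1}, \]
so that $\frac{\mu}{2}\big(1-\frac{1}{2m+1}\big)=\frac{m}{2m+1}\,\mu$, reproducing \eqref{eq:minscalvar}.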
The discrete formulation of $QG_2$ dates to 1982 and has proven to
be even more powerful than the continuum approach \cite{DisQG2}. It
is very rare in conventional field theories for the discrete formulation
to be more flexible than the continuum treatment and
the fact that it is so here certainly deserves notice.
In discrete $QG_2$ one replaces the 2d-manifold $\Sigma(\xi_1,\xi_2)$
by a set of $n$ discrete nodes $\{i\}$ and the metric $g_{ab}(\xi_1,\xi_2)$
by the adjacency matrix $C_{ij}$, where $C_{ij}=1$ if $i$ is connected to $j$
and $C_{ij}=0$ otherwise.
The connectivity $q_i$ of node $i$ is
\begin{equation}
\label{eq:conn}
q_i = \sum_j C_{ij} \; .
\end{equation}
The scalar curvature $R_i$ is
\begin{equation}
\label{eq:discurv}
R_i = 2\pi\frac{6-q_i}{q_i} \; .
\end{equation}
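These definitions satisfy a discrete Gauss-Bonnet identity: for a closed triangulation of genus $h$, Euler's relation $V-E+F=2-2h$ together with $3F=2E$ and $\sum_i q_i=2E$ gives
\[ \sum_i (6-q_i)=6V-2E=6(V-E+F)=12(1-h), \]
the combinatorial counterpart of the continuum Gauss-Bonnet theorem $\frac{1}{4\pi}\int d^2\xi\,\sqrt{g}\,R=2(1-h)$.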
The gravity partition function becomes
\begin{equation}
\label{eq:disz}
Z(\Lambda,\mu) = \sum^{\infty}_{h=0} e^{-\mu h} \sum^{\infty}_{n=0}
e^{-\Lambda n} Z_{h,n} \; ,
\end{equation}
where $Z_{h,n}$ for pure gravity is the number of distinct triangulations
(${\cal T}$) of $n$ nodes with genus $h$ and $Z_{h,n}$ for models with matter
is symbolically
\begin{equation}
\label{eq:dismatt}
Z_{h,n} = \sum_{{\cal T}} \int {\cal D}\lbrack \Phi \rbrack e^{-S_{MG}}
\end{equation}
for a generic matter field $\Phi$ coupled to gravity with action $S_{MG}$.
The density of states $Z_{h,n}$ is best computed by dualizing the
triangulation to a $\Phi^3$ graph
and counting the number of distinct such connected graphs with $n$ vertices
that can be drawn, without crossing, on a surface of genus $h$ or greater.
To incorporate the topology of the graph one must promote $\Phi$ to an
$N \times N$ matrix \`{a} la 't Hooft.
The full partition function $Z(\Lambda,\mu)$ is, indeed, simply related
to the free energy of the corresponding matrix model:
\begin{equation}
\label{eq:matrixmodel}
Z(\Lambda,\mu) \equiv \frac{1}{N^2} {\rm log} \; \zeta(N,g) \; ,
\end{equation}
where
\begin{equation}
\label{eq:matrixdef}
\zeta(N,g) = \int d^{N^2} \Phi \;{\rm exp} \lbrack -N\;{\rm Tr}\; ( \frac{1}{2}
\Phi^2 - \frac{g}{3} \Phi^3)\rbrack
\end{equation}
with the identification
\begin{equation}
\label{eq:ident}
\frac{1}{N^2}={\rm e}^{-\mu} \quad \mbox{and} \quad g = {\rm e}^{-\Lambda}\; .
\end{equation}
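This identification is just 't Hooft's genus expansion: the free energy of the matrix model organizes itself as
\[ \log \zeta(N,g)=\sum_{h=0}^{\infty}N^{2-2h}F_h(g), \]
so that $N^{-2}\log\zeta=\sum_h (N^{-2})^{h}F_h(g)=\sum_h e^{-\mu h}F_h(g)$, with each $F_h(g)$ generating the triangulations of genus $h$ weighted by $g^n={\rm e}^{-\Lambda n}$.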
\section{The $c=1$ Barrier}
As we have seen in the Introduction there is a good understanding of
how scaling dimensions of operators are modified by the coupling
of gravity to conformal field theories characterized by a central charge $c$
less than one. In several cases the models are exactly solvable and the
sum over topologies is even possible in the double scaling limit $N
\rightarrow \infty$ and $\Lambda \rightarrow \Lambda_c$ with the scaling
variable $x$ of Eq.(\ref{eq:scalvar}) fixed \cite{dosc90}.
When the central charge $c$
exceeds one the string susceptibility exponent $\gamma_s$, according to
KPZ \cite{KPZ88}, becomes complex. This unphysical prediction is indicative
of an instability in the model. What is the true character of the
theory for $c>1$? The $c=1$ point is analogous to the upper
critical dimension $d=4$ in the theory of phase transitions and, in fact,
there are known logarithmic violations of scaling at $c=1$ \cite{log}.
From numerical simulations for $c>1$ the following picture has emerged.
Two basic classes of models have been carefully investigated.
The first consists of bosonic matter fields on dynamical
triangulations. In these models one finds a
branched polymer {\bf BP} phase for $c$ large (typically $c\ge5$).
The {\bf BP} phase is characterized by
$\gamma_s=1/2$ and the proliferation of minimal neck baby universes
(mimbus). For smaller values of the central charge ($1<c<4$) the
simulations indicate an intermediate phase with $0<\gamma_s<1/2$.
There is no apparent discontinuity in the vicinity of $c=1$.
The second class of models is multiple Ising models on
dynamical triangulations. The distinct species of spins couple
through the dynamical connectivity of the lattice.
For $n$ copies of Ising model there is a spin-ordering transition at a
critical temperature. At the critical point one recovers a $c=n/2$
conformal field theory.
For $n>2$, but not too large, $\gamma_s$ increases smoothly from $0$ to $1/2$
with, again, no discontinuity near $n=2$ ($c=1$). For large $n$ the ordered and
disordered spin phases (both with pure gravity exponents $\gamma_s=-1/2$)
are separated by an intermediate {\bf BP} phase with $\gamma_s=1/2$ and
vanishing magnetization. The transition from the magnetized pure gravity
phase to the disordered {\bf BP} phase is a branching transition with
$\gamma_s=1/2$. This branching is one sign of the expected
instability discussed above.
Similar phenomena are found for the q-state Potts model for q large.
A plot of $\gamma_s$ as a function of the central charge $c$ from existing
simulations for two classes of triangulations (degenerate ${\cal T_D}$ and
combinatorial ${\cal T_C}$) is shown in Fig.\ref{fig:gamc}.
\begin{figure}[htb]
\vspace{9pt}
\epsfxsize=2.8in
\epsfbox{gamcblack.eps}
\caption{$\gamma_s$ vs c for both degenerate ${\cal T_D}$ and combinatorial
${\cal T_C}$ triangulations.}
\label{fig:gamc}
\end{figure}
Could there be a nontrivial infrared fixed point governing the
critical properties of the $c\ge1$ models? A renormalization group
scheme for matrix models, roughly analogous to the $\epsilon${-}expansion
about the upper critical dimension, was developed sometime ago by
Br\'{e}zin and Zinn-Justin (BZ) \cite{BrZi92}. A typical example
is given by the $2^n$-matrix model formulation of the $n$-Ising spins
coupled to gravity. The BZ technique
consists of integrating out one row and one column of the matrix to
obtain an $N-1$ dimensional matrix. Changing $N$ is equivalent to changing
the string coupling constant $e^{-\mu}$ of Eq.(\ref{eq:ident}).
This induces a flow in the matrix model coupling
which leads to definite renormalization group flows.
One can then look for fixed points.
The method is only qualitatively correct for the well-understood case
of $c<1$ minimal models but has the advantage it can be extended
to $c>1$. What does it tell us about $c>1$?
David's idea was to apply the BZ scheme to matrix models that include
{\em branching} interactions. These interactions generate microscopic
wormholes that connect two macroscopic pieces of a Riemann surface. The
action for such models is given by
\begin{equation}
S_{N} (\Phi) = N\;{\rm Tr} \left(\frac{\Phi^2}{2}-g\frac{\Phi^4}{4}\right)
-\frac{x}{2} {\rm Tr}^2\left(\frac{\Phi^2}{2}\right)
\end{equation}
The trace-squared (gluing) interaction (with coupling $x$) corresponds to
branching and is naturally generated at second order in perturbation
theory within a renormalization-group framework. Such models were first
treated by Das et al. \cite{Das90}. For pure gravity it was found that
increasing the coupling $x$ leads to a new critical point $(g_c, x_c)$ in the
$(g,x)${--}plane separating a large-$x$ {\bf BP} phase ($\gamma=1/2$) from
the pure gravity phase ($\gamma=-1/2$).
At the (branching) transition $\gamma_s=1/3$.
This result extends to the case of the $m-th$ minimal model. In this case
the branching critical point has $\gamma_s=1/(m+1)$. Applying the
BZ RG method to this case David found the RG flows
\begin{equation}
\begin{array}{l}
\beta_{g} = g-6g^{2} - 2g x \\
\beta_{x} = 2x - 3x^{2} - 6g x
\end{array}
\label{eq:rgflow}
\end{equation}
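Since both beta functions in \eqref{eq:rgflow} factorize, $\beta_g=g\,(1-6g-2x)$ and $\beta_x=x\,(2-3x-6g)$, their common zeros can be listed by direct algebra: besides the Gaussian point $(0,0)$ one finds $(1/6,0)$, $(0,2/3)$ and the mixed zero $(-1/6,1)$. The following check is purely illustrative; the physical identification of the fixed points is the one discussed in the text.

```python
# Common zeros of the beta functions in the RG flow above; purely a
# numerical sanity check of the factorized form quoted in the text.
def beta_functions(g, x):
    return (g - 6 * g**2 - 2 * g * x, 2 * x - 3 * x**2 - 6 * g * x)

# Gaussian, pure-gravity-type, branched-polymer-type and mixed zeros.
candidate_zeros = [(0.0, 0.0), (1.0 / 6.0, 0.0), (0.0, 2.0 / 3.0), (-1.0 / 6.0, 1.0)]
```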
For $c>1$ the only true fixed point is the {\bf BP} fixed point.
But the influence of
the $c<1$ fixed points is still felt for $c$ in the neighbourhood of $1$.
For $c<1$ the scaling dimension of the field $x$ (corresponding to the
renormalized coupling $x$) is positive and wormholes are irrelevant.
At $c=1$ the branching interaction is marginal and the matter-gravity fixed
point merges with
the branching fixed point. For $c>1$ wormholes are relevant and the
matter-gravity and branching fixed points become complex conjugate pairs
off the real
axis. As a result there are exponentially enhanced crossover effects which
imply that an exponential fine-tuning of couplings
\begin{figure}[htb]
\vspace{9pt}
\epsfxsize=2.8in
\epsfbox{fig10.eps}
\caption{Schematic RG flows for $n>2$ ($c>1$) multiple-Ising models from
\protect\cite{Davi97}.}
\label{fig:rgflow}
\end{figure}
\begin{equation}
\label{eq:finetune}
|g-g{_c}| \sim {\rm exp} \left(-\frac{1}{\sqrt{c-1}}\right)
\end{equation}
is necessary to see the flow to the true {\bf BP} fixed point.
Without this fine-tuning flows appear similar to those for $c<1$.
Fig.~\ref{fig:rgflow} shows a schematic RG flow for the generalized
$n>2$ ($c>1$) multiple-Ising model with branching interactions.
In the shaded region flows are similar to the
case $c<1$ unless the temperature is fine-tuned. {\bf A'} and {\bf A''}
denote pure gravity fixed points and {\bf C'} and {\bf C''} denote
branching critical points. The regions $O$ and $D$ are ordered and disordered
spin phases respectively. The line $x=0$ corresponds to the original
discretized model of $n$-Ising spins on a dynamically triangulated lattice.
On this line there is no spin-ordering transition {--} only the branching
transition to the {\bf BP} phase.
\begin{figure}[htb]
\vspace{9pt}
\epsfxsize=2.8in
\epsfbox{gam0pr.eps}
\caption{$\gamma_s$ vs $\rho$ for $c=0$ from \protect\cite{ThPe97}.}
\label{fig:gamc0}
\end{figure}
\begin{figure}[htb]
\vspace{9pt}
\epsfxsize=2.8in
\epsfbox{gam1logpr.eps}
\caption{$\gamma_s$ vs $\rho$ for $c=1$ from \protect\cite{ThPe97}.}
\label{fig:gamc1}
\end{figure}
\begin{figure}[htb]
\vspace{9pt}
\epsfxsize=2.8in
\epsfbox{gam2logpr.eps}
\caption{$\gamma_s$ vs $\rho$ for $c=2$ from \protect\cite{ThPe97}.}
\label{fig:gamc2}
\end{figure}
This appealing picture is consistent with the existing numerical results
and can be tested further numerically.
A first step was reported at this meeting \cite{ThPe97}. These authors
study the partition function
\begin{equation}
Z_A=\sum_{T\in {\cal T}} {\rm e}^{\rho n_m}Z_M
\end{equation}
where $\rho$ is a chemical potential, $n_m$ is the number of minimal necks
and $Z_M$ is a standard matter action for multiple Gaussian fields. The cases
studied are zero ($c=0$), one ($c=1$) and two ($c=2$) scalar fields. They look
for a transition to the {\bf BP} phase at a finite critical chemical potential
$\rho_c$. The clearest results come from an analysis of $\gamma_s$ and are
shown in Figs.~\ref{fig:gamc0}{--}\ref{fig:gamc2}. For
$c=0$ and $c=1$, $\gamma_s$ changes sharply from $\gamma_s\sim -1/2$ to
$\gamma_s\sim 1/2$ at a definite $\rho_c$, with $\rho_c \sim 0.6$ ($c=0$)
and $\rho_c \sim 0.4$ ($c=1$). For $c=2$ one finds instead a smooth
volume-dependent cross-over to the {\bf BP} phase.
Furthermore the results are consistent with the
model being only in the {\bf BP} phase in the infinite volume limit. These
interpretations are supported by an analysis of the specific heat. For $c=0$
and $c=1$ one sees a definite phase transition but for $c=2$ no
self-consistent critical exponents can be extracted from finite-size
scaling of the specific heat curves.
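In such simulations $\gamma_s$ is commonly extracted from the distribution of minimal-neck baby universes, which for fixed volume $N$ behaves as $n_N(B)\propto [B(N-B)]^{\gamma_s-2}$. The sketch below fits this form on synthetic data with a known exponent; it is a schematic illustration of the method, not the actual analysis of \cite{ThPe97}:

```python
import math
import random

random.seed(1)
N = 10_000
gamma_true = -0.5   # assumed input value (pure-gravity gamma_s = -1/2)

# Synthetic log-counts: log n(B) = (gamma - 2) log[B(N-B)] + noise,
# so a straight-line fit has slope gamma - 2.
Bs = range(10, N // 2, 10)
xs = [math.log(B * (N - B)) for B in Bs]
ys = [(gamma_true - 2.0) * x + random.gauss(0.0, 0.01) for x in xs]

# One-parameter least-squares slope (no external libraries needed).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
gamma_est = slope + 2.0
print(f"gamma_s estimate: {gamma_est:.3f}")
```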
\section{Fractal Dimension}
\begin{table*}[hbt]
\setlength{\tabcolsep}{1.5pc}
\newlength{\digitwidth} \settowidth{\digitwidth}{\rm 0}
\catcode`?=\active \def?{\kern\digitwidth}
\caption{The fractal dimension for $c\le1$ models: theory and simulations.}
\label{table:dh}
\begin{tabular*}{\textwidth}{@{}l@{\extracolsep{\fill}}rrrrrr}
\hline
\multicolumn{6}{c}{$d_h$}\\
\hline
\multicolumn{1}{l}{$c=-2$} &
\multicolumn{1}{r}{$c=0$} &
\multicolumn{1}{r}{$c=1/2$} &
\multicolumn{1}{r}{$c=4/5$} &
\multicolumn{1}{r}{$c=1$} &
\multicolumn{1}{c}{Method} \\
\hline
2 & 4 & 6 & 10 & $\infty$ &
Theory:$\;$Eq.(\ref{eq:dhformone}) \\
3.562 & 4 & 4.21 & 4.42 & 4.83 & Theory:$\;$Eq.(\ref{eq:dhformtwo}) \\
3.58(4) & 3.58{--}4.20 & 3.95{--}4.35 & 4.00{--}4.55 & 3.8{--}4.4
& Simulations\\
\hline
\end{tabular*}
\end{table*}
Our current understanding of the purely spacetime aspects of $QG_2$ coupled
to matter is much less complete than our knowledge of the effects of gravity
on the matter fields and critical behaviour. Of basic interest is the
intrinsic Hausdorff dimension of the typical surface appearing in the
ensemble of random surfaces. The Hausdorff dimension is defined by
\begin{equation}
d_H=\frac{\log A}{\log r} \; ,
\end{equation}
where $A$ is the area of the surface and $r$ is some appropriate measure
of the geodesic size of the surface. There is a considerable variety of
alternative definitions of $d_H$. In fact $A$ and $r$ here may be replaced by
any reparameterization-invariant observables with dimensions of area and
distance respectively.
For the case of pure $QG_2$ $d_H$ is known to be 4 \cite{KKMW93,AmWa95}.
This result employs a transfer matrix formalism to study the evolution of
loops on the surface. One may also use a string field theory for non-critical
strings \cite{IsKa93} to calculate $d_H$. Perhaps the simplest approach is to
formulate the theory on a disk with boundary length $\ell$ and to use $\ell$ as
a ruler for determining the scaling of both $A$ and $r$. In the string field
theory approach an ADM-type gauge is chosen in which geodesic distance plays
the role of time. Together with a choice of the string field theory
Hamiltonian this determines the scaling of geodesic distance $r$ with
boundary length $\ell$ to be
\begin{equation}
r \sim \ell^{1/m} \; ,
\end{equation}
for the $m$-th minimal model coupled to $QG_2$. The scaling of $A$ vs. $\ell$
may be determined by standard matrix model calculations. If we assume that the
area $A$ scales canonically as $\ell^2$ we conclude that
\begin{equation}
A \sim r^{2m},
\end{equation}
implying $d_H = 2m$. This result may be equivalently written as
\begin{equation}
\label{eq:dhformone}
d_H = \frac{2}{|\gamma_s|} = \frac{24}{1-c+{\sqrt{(25-c)(1-c)}}}\; .
\end{equation}
A completely different result is obtained by studying the diffusion of a
fermion with the methods of Liouville theory \cite{KSW93}. This gives
\begin{equation}
\label{eq:dhformtwo}
d_H = -2\frac{\alpha_1}{\alpha_{-1}}=2 \times
\frac{\sqrt{25-c}+\sqrt{49-c}}{\sqrt{25-c}+{\sqrt{1-c}}} \; ,
\end{equation}
where $\alpha_n$ is the gravitational dressing of a $(n+1,n+1)$ primary
spinless conformal field.
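The two predictions are easy to tabulate; the following sketch reproduces the theory rows of Table~\ref{table:dh} from the closed-form expressions above:

```python
import math

def dh_transfer(c):
    """Eq. (dhformone): d_H = 24 / (1 - c + sqrt((25-c)(1-c)))."""
    denom = 1.0 - c + math.sqrt((25.0 - c) * (1.0 - c))
    return math.inf if denom == 0.0 else 24.0 / denom

def dh_diffusion(c):
    """Eq. (dhformtwo): d_H = 2 (sqrt(25-c) + sqrt(49-c)) / (sqrt(25-c) + sqrt(1-c))."""
    return 2.0 * (math.sqrt(25.0 - c) + math.sqrt(49.0 - c)) \
               / (math.sqrt(25.0 - c) + math.sqrt(1.0 - c))

# Central charges of the table: c = -2, 0, 1/2, 4/5, 1.
for c in (-2.0, 0.0, 0.5, 0.8, 1.0):
    print(f"c = {c:4}: Eq.(1) -> {dh_transfer(c)}, Eq.(2) -> {dh_diffusion(c):.3f}")
```

Both formulae give $d_H=4$ at $c=0$, in agreement with the exact pure-gravity result.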
In Table~\ref{table:dh} we list the predictions from these two formulae
together with the results from numerical simulations \cite{Nudh95,Anag97}.
Both the analytic predictions discussed above, as well as the exact solution,
agree on $d_H=4$ for pure $QG_2$. A detailed numerical investigation of
the fractal dimension, determining both the scaling of two-point functions
defined in terms of geodesic distance and the behavior of the loop length
distribution function, was presented in this meeting \cite{Anag97}.
For $0<c\leq{1}$ $d_H$ is found to be very close to 4, in agreement with
earlier simulations \cite{Nudh95}.
An example of the excellent scaling curves obtained
is shown in Fig.~\ref{fig:dhscal}.
\begin{figure}[htb]
\vspace{9pt}
\epsfxsize=2.8in
\epsfbox{twpopt.eps}
\caption{Scaling fits for the two point function for $QG_2$ from
\protect\cite{Anag97}.}
\label{fig:dhscal}
\end{figure}
The current best numerical results therefore agree with neither of the
analytic predictions discussed above. The various methods of determining
$d_H$ do not yet agree to high precision, however, so neither theory can
be ruled out with confidence. Further work
is highly desirable to consolidate the result that $d_H$ for $0<c<1$
is independent of the matter.
The subtlety of the problem when Ising matter is included may be
seen in \cite{BJT97}. The scaling of area versus boundary length depends on
the precise order in which the infinite-volume limit is taken with respect
to the tuning to the Ising critical temperature. It is possible to obtain
$d_H=4$ if the infinite-volume thermodynamic limit is taken with $T\neq{T}_{c}$
followed by tuning to the spin-ordering transition. Finally large-scale
numerical results for $c=-2$ \cite{AAIJKWY97}, made possible by recursive
sampling of the space of graphs for this topological model, are in
excellent agreement with the Liouville prediction $d_H=3.56$.
\section{Higher Dimensional Simplicial Gravity}
Since the DT approach to quantum gravity is very successful in two
dimensions it is natural to explore its implications in
higher dimensions. There has, in fact, been considerable effort
in this direction in recent years.
Consider, to begin, the case of pure Einstein-Hilbert gravity
in $D$ dimensions for $D=3$ or $4$.
The functional integral to be evaluated is
\begin{equation}
\label{eq:part}
Z = \sum_{{\cal M} \in {\rm Top}} \int_{\cal M} \frac{D \lbrack g \rbrack} {{\rm Vol}({\rm Diff})}
\,{\rm e}^{-S(g)},
\end{equation}
where ${\cal M}$ is the spacetime manifold with topology chosen from the class
{\em Top}
and the action $S(g)$ is given by
\begin{equation}
\label{eq:ehaction}
S(g) = \int d^Dx\;\sqrt{g}\;(\Lambda - \frac{R}{16\pi G}).
\end{equation}
In the following we will restrict ourselves to fixed topology since
there is little rigorous understanding of the meaning of the
sum over topologies in the general higher dimensional case.
The continuum integral over diffeomorphism-inequivalent
metrics is replaced in the DT approach by a discrete sum over all
possible cellular decompositions (gluings) of
$D$-simplices along their $(D-1)$-faces, with the simplicial manifold
requirement that the neighbourhood of each vertex is a $D$-ball and the
DT constraint that all edge lengths are fixed. So far the majority
of numerical results have been limited to the fixed topology
of the sphere. To be specific let us concentrate on the $4$-dimensional
case of the four-sphere $S^4$.
\begin{figure}[htb]
\centerline{\psfig{file=n0_6.ps,width=2.8in,angle=270}}
\caption{4d Monte Carlo time history of $N_0$ for $N_4=64,000$ and
$k_2=1.278$ from \protect\cite{BdeB96}. The horizontal units are 100 sweeps.}
\label{fig:firstorder}
\end{figure}
\begin{figure}[htb]
\vspace{9pt}
\epsfxsize=2.5in
\epsfbox{corr.eps}
\caption{Correlation of vertex number $N_0$ with singular vertex volume from
\protect\cite{CRK97}.}
\label{fig:singvertex}
\end{figure}
The free global variables at our disposal are the numbers $N_i({\cal T})$ of
$i$-dimensional simplices in a given triangulation ${\cal T}$ ($i=0,1,..,4$).
These 5 parameters are constrained by two Dehn-Sommerville relations
\begin{equation}
\label{eq:dehn}
\sum^4_{i=2k-1} (-1)^i \pmatrix{i+1\cr 2k-1} N_i({\cal T}) = 0 \quad k=1,2
\end{equation}
together with the Euler relation
\begin{equation}
\label{eq:euler}
\chi\left(S^4\right) = \sum_{i=0}^4 (-1)^i \; N_i({\cal T}) = 2 \; .
\end{equation}
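These constraints can be verified on the simplest triangulation of $S^4$, the boundary of the 5-simplex, which has $N_i = {6 \choose i+1}$, i.e. $(N_0,\dots,N_4) = (6,15,20,15,6)$. The check below is illustrative and not part of the original text:

```python
from math import comb

# Boundary of the 5-simplex: N_i = C(6, i+1) simplices of dimension i.
N = [comb(6, i + 1) for i in range(5)]        # [6, 15, 20, 15, 6]

# Euler relation: chi(S^4) = sum (-1)^i N_i = 2.
chi = sum((-1) ** i * N[i] for i in range(5))
assert chi == 2

# Dehn-Sommerville relations for k = 1, 2:
#   sum_{i=2k-1}^{4} (-1)^i C(i+1, 2k-1) N_i = 0.
for k in (1, 2):
    s = sum((-1) ** i * comb(i + 1, 2 * k - 1) * N[i]
            for i in range(2 * k - 1, 5))
    assert s == 0
```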
These three relations leave two independent variables which we may take to be
$N_2$ and $N_4$. These are the discrete analogues of the mean scalar
curvature and the volume. In terms of these, the discretized
Einstein-Hilbert action Eq.(\ref{eq:ehaction}) yields the partition function
\begin{equation}
\label{eq:discreteeh}
Z\lbrack k_2,k_4 \rbrack = \sum_{{\cal T}(S^4)} {\rm exp} \lbrack -k_4
N_4({\cal T}) + k_2 N_2({\cal T}) \rbrack
\end{equation}
where $k_4$ is the discrete cosmological constant and $k_2$ is the
discrete inverse Newton's constant.
In practice (almost) fixed $N_4$ (volume) systems are usually simulated
by adding a constraint that restricts the volume to be near a target volume.
One is then really approximating the fixed volume partition function
\begin{equation}
\label{eq:fixedvol}
Z\lbrack k_2,N_4 \rbrack = \sum_{{\cal T}(S^4)} {\rm exp} \lbrack k_2
N_2({\cal T}) \rbrack \; .
\end{equation}
From extensive numerical simulations it has emerged that the
system described by the partition function of Eq.(\ref{eq:fixedvol})
has two distinct phases. For $k_2$ small (strong coupling) the system
is crumpled (infinite Hausdorff dimension) and the mean scalar curvature
$\langle R \rangle$ is negative. For $k_2$ large (weak coupling)
the system is elongated (branched-polymer like) with Hausdorff dimension 2
and positive mean scalar curvature of order the volume. In the crumpled
phase generic triangulations contain one singular one-simplex with
two singular vertices \cite{HIN95,CTKR96}.
These singular simplices are connected to an extensive
fraction of the volume of the simplicial manifold. The local volume associated
with the singular one-simplex grows like $V^{2/3}$ while the local volume
associated with the singular vertices grows like $V$ itself.
The appearance of singular structures is generic to simplicial DT
gravity in dimensions $D\ge4$.
For $D\ge4$ one finds a singular $(D-3)$-simplex (with
local volume of order $V^{2/3}$) and singular sub-simplices
(with local volume of order $V$).
It is now clearly established that there is a {\em first order}
phase transition connecting the crumpled phase with the branched polymer phase
in both 3 \cite{QG3} and 4 dimensions \cite{BBKP96,BdeB96}.
This is seen dramatically in the time series of Monte Carlo sweeps shown in
Fig.~\ref{fig:firstorder} \cite{BdeB96}.
A finite-size scaling analysis of the variance of the $N_2$
(or equivalently $N_0$) fluctuations also reveals a maximum which
grows linearly with the system volume {---} a classic signal
of a first-order transition.
It requires both large volumes (order $64,000$) and long simulations (order
$10^6$ sweeps) to see the first order nature of the transition emerge clearly
in four dimensions.
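The linear growth of the fluctuation maximum with volume follows from a simple two-state picture of a first-order transition: at the pseudo-critical coupling the system flips between two phases whose node densities $N_0/N_4$ differ by a fixed gap. A minimal caricature (the density values are purely illustrative):

```python
# Two-state caricature of a first-order transition: the system spends half
# its time in a crumpled phase with node density n_minus and half in an
# elongated phase with node density n_plus (illustrative values).
n_minus, n_plus = 0.10, 0.25

def var_N0(V):
    """Variance of N0 when N0 = n*V with n in {n_minus, n_plus}, each w.p. 1/2."""
    mean = 0.5 * (n_minus + n_plus) * V
    second = 0.5 * ((n_minus * V) ** 2 + (n_plus * V) ** 2)
    return second - mean ** 2

# The normalized fluctuation var(N0)/V grows linearly with the volume V,
# the classic finite-size signal of a first-order transition.
for V in (8_000, 16_000, 32_000):
    print(f"V = {V}: var(N0)/V = {var_N0(V) / V:.1f}")
```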
It is also becoming clear that the transition itself is closely
connected to the formation of the singular simplices \cite{CRK97}.
This is demonstrated in Fig.~\ref{fig:singvertex}.
Many characteristics of the elongated phase are analogous to those of
branched-polymers \cite{AmJu95} and may be reproduced by simple and elegant
statistical mechanical models of branching \cite{BrPo}.
With a suitable choice of ensemble this class of models also
possesses a discontinuous transition \cite{BBJ97}.
Thus the weak coupling phase of the theory may reflect more about the
combinatorial nature of the simplicial DT action than about the nature of
gravity itself. This remains to be seen.
The lack of a continuous transition for DT simplicial gravity in
higher dimensions has been a deterrent for recent
progress in the field. From a traditional point of view
this absence of a critical point implies that the model has no continuum
limit we could call continuum quantum gravity.
At least three viewpoints are possible at this stage.
It may be that the theory is fundamentally discrete and that
{\em no} continuum limit should be taken. This viewpoint is advocated
in different ways in several other approaches to quantum gravity such as the
causal sets formulation of Sorkin \cite{Sorkin} and the theory of spin
networks \cite{Smolin}.
Alternatively the theory might only have an interpretation as an effective
theory valid over a limited range of length scales.
Finally it may be that pure DT simplicial gravity is ill-defined but that
models with appropriate matter or modified measures \cite{BrMa93}
possess critical points and admit a suitable continuum limit.
This latter approach was discussed in this meeting by Izubuchi
for $QG_3$ \cite{Izu97}. One modifies the action by adding terms
that effectively enhance higher curvature fluctuations {---} these
correspond to changing the measure locally by powers of $\sqrt{g}$.
There is some evidence that by tuning this extra term one can
soften the transition to a continuous one at finite volumes.
It is very likely though that the effect disappears in the infinite-volume
limit. This issue needs further clarification.
\section{Membranes}
The theory of random surfaces with the addition of an extrinsic curvature
to the action has a direct connection with the statistical
mechanics of flexible membranes \cite{MemRev}.
Physical membranes are two-dimensional
surfaces fluctuating in three embedding dimensions.
The simplest examples of 2-dimensional surfaces are strictly planar systems
called films or monolayers. These alone are surprisingly complex systems.
But when they also have the freedom to fluctuate, or bend,
in the embedding space it is a considerable challenge to determine
their physical properties. Two broad classes of membranes have been
extensively studied. Membranes with fixed connectivity (bonds are never
broken) are known as crystalline or tethered membranes. Membranes
with dynamical connectivity (relatively weak bonds which are free to break
and rejoin) are known as fluid membranes.
The simplest action for self-intersecting (non-self-avoiding)
fluid membranes resembles that
of the Polyakov string with extrinsic curvature (bending energy).
Since the beta function for the inverse-bending rigidity
(where the bending rigidity is the coupling to the extrinsic curvature)
is well-known to be asymptotically free at one loop, the bending
rigidity is irrelevant in the infrared and self-intersecting
fluid membranes are commonly believed to exist only in a crumpled phase.
Crystalline membranes, on the other hand, have a non-linear
coupling between elastic (phonon) interactions and bending
fluctuations which drives a phase transition from a high-temperature
crumpled phase to a low-temperature orientationally ordered (flat) phase.
Much can be learned about membranes by applying the techniques and experience
of lattice gravity simulations to these condensed matter/biological
problems. This is illustrated by some recent \cite{BFT97}
large-scale Monte Carlo simulations of anisotropic crystalline membranes
{---} these are systems in which the
bending or elastic energies are different in different directions.
We were able to verify for the first time the predicted existence
\cite{RT95} of a novel {\em tubular} phase in this class of membranes.
This phase is intermediate between the flat and crumpled phases {---}
it is extended (flat) in one direction but crumpled in the transverse
direction. Correspondingly there are two phase transitions: the
crumpled-to-tubular transition and the tubular-to-flat transition.
Thermalized configurations from the three phases of anisotropic crystalline
membranes are shown in Fig.~\ref{fig:tubules}.
These simulations employed a variety of improved Monte Carlo algorithms
such as hybrid overrelaxation and unigrid methods \cite{ThFa97}.
An even more challenging problem is to incorporate the self-avoidance found
in realistic membranes \cite{BG97,RT297}.
\begin{figure}[htb]
\vspace{9pt}
\centerline{\psfig{file=tubuleconfigs.eps,width=2.8in,clip=}}
\caption{The three phases of anisotropic membranes: (a) tubular, (b) crumpled
and (c) flat.}
\label{fig:tubules}
\end{figure}
Finally the full physics of fluid membranes, in which dynamical connectivity
may be modeled by dynamical triangulations, is a problem of limitless
challenges which ties together common techniques in lattice gravity and the
burgeoning field of soft condensed matter physics \cite{Lub96}.
\section{Future Challenges}
There are many challenges facing the program of simplicial lattice gravity.
I will give here a few outstanding problems and possible future
directions.
\vspace{0.4cm}
\noindent $\bullet$ {\em Topology Change}
\vspace{0.4cm}
Most of the numerical simulations in the field
have been on spacetimes with fixed topology. From the functional integral
point of view it is more natural (and perhaps essential) to allow the
topology of spacetime to fluctuate. In $2d$ it is even possible
to perform analytically the sum over all topologies in the double scaling
limit. A preliminary investigation of a $4d$ dynamical triangulation model
of Euclidean quantum gravity with fluctuating topology was made some
time ago by de Bakker \cite{BdeB95}.
\vspace{0.4cm}
\noindent $\bullet$ {\em Supersymmetry}
\vspace{0.3cm}
Whether or not it has physical relevance it is clear that
one of the most powerful ideas in particle physics/quantum field
theory/string theory at present is supersymmetry. Supersymmetry
severely constrains the analytic structure of any model.
Almost no progress has been made in formulating or simulating
supersymmetric simplicial gravity or supersymmetric random
surfaces. If we are ever to make contact with critical or non-critical
superstrings and recent developments like duality relations
this will be essential.
\vspace{0.3cm}
\noindent $\bullet$ {\em Classical Limit}
\vspace{0.3cm}
Although simplicial quantum gravity does provide a non-perturbative
definition of quantum gravity in four dimensions there is no understanding
of how classical gravity emerges in the long-wavelength limit and indeed
of how perturbative scattering amplitudes are reproduced in this framework.
\vspace{0.3cm}
\noindent $\bullet$ {\em Triangulation Class}
\vspace{0.3cm}
At present there is no classification of the precise class of graphs
that result in KPZ, rather than Onsager, critical exponents
when matter is coupled to a dynamical lattice ($QG_2$).
For example it has been shown \cite{BCT97} that MDT
({\em minimal dynamical triangulation}) models in which the local
coordination number is restricted to be 5, 6 or 7 still result
in KPZ exponents.
\vspace{0.3cm}
\noindent $\bullet$ {\em Fractal Dimension}
\vspace{0.3cm}
Is the fractal dimension of all $0<c<1$ matter coupled to $QG_2$ really four?
\vspace{0.3cm}
\noindent $\bullet$ {\em Interpretation of $QG_4$}
\vspace{0.3cm}
What is the correct interpretation of $4d$ simplicial quantum gravity
given the first order transition from the strong to weak coupling phases?
What is the proper mathematical framework for $QG_4$ and how much analytic
progress can be made? Recent progress in this direction is reviewed in
\cite{ACM97}.
\vspace{0.3cm}
\noindent $\bullet$ {\em Renormalization Group}
\vspace{0.3cm}
Recently renormalization group (RG) methods have been developed for
pure simplicial gravity and for simple matter coupled to gravity
\cite{RG}. There is no rigorous understanding of the principles
behind the success of these methods and further
improvement of the technique is highly desirable.
Further progress in this direction to enable, for example,
the direct computation of the beta function for random surfaces with
extrinsic curvature would be very nice.
I would like to acknowledge Kostas Anagnostopoulos, Simon Catterall,
Marco Falcioni and Gudmar Thorleifsson for extensive discussions
of many issues treated in this talk.